The Plausibility of Future Scenarios: Conceptualising an Unexplored Criterion in Scenario Planning (ISBN 9783839453193)

What does plausibility mean in relation to scenario planning and how do users of scenarios assess it?


English · 264 pages · 2020





Table of contents:
Cover
Contents
List of Figures
List of Tables
Summary of the book
1 Introduction
2 Scenario planning: characteristics and current issues
2.1 Scenarios and scenario development
2.1.1 Core characteristics of scenarios
2.1.2 Methodological choices in scenario planning
2.2 Evaluative research: scenario objectives and evidence
2.2.1 Objectives of scenarios
2.2.2 Effects and effectiveness of scenarios
2.2.3 The ‘users’ of scenarios
2.3 Two exemplary scenario planning methods
2.3.1 Intuitive Logics
2.3.2 Cross-Impact Balance Analysis
2.3.3 A comparison of both methods
3 Scenario plausibility: emerging debates in research and practice
3.1 The value of plausibility for scenario planning
3.2 Excursus: probability and judgment under uncertainty
3.3 Operationalising and assessing scenario plausibility
3.4 Critical reflections on scenario plausibility
4 Conceptual explorations: plausibility across disciplines
4.1 Framework for exploration: the life path of scenarios
4.2 Plausible reasoning in informal logic and argumentation theory
4.2.1 A theory of plausible reasoning
4.2.2 Plausibility in argumentative discourse analysis
4.2.3 Relevance for scenario plausibility
4.3 Plausibility in narrative theory
4.3.1 Structural and cultural theory approaches to plausibility
4.3.2 Reader‐oriented approaches to plausibility
4.3.3 Relevance for scenario plausibility
4.4 Plausibility judgments in cognitive and educational psychology
4.4.1 Models‐of-data theory
4.4.2 Plausibility Analysis Model (PAM)
4.4.3 Plausibility Judgment in Conceptual Change Framework (PJCC)
4.4.4 Relevance for scenario plausibility
4.5 Directions and propositions for empirical research
5 Empirical research: Methodology to study scenario plausibility
5.1 Experimental structure
5.2 Experimental material
5.3 Procedures and data collection
5.4 Considerations of the study’s validity
6 Experimental study: quantitative research findings
6.1 Statistical tests for data analysis
6.2 Experimental sample and treatment groups
6.3 Differences in scenario plausibility judgments
6.4 Plausibility, credibility and trustworthiness
6.5 Plausibility, participants’ own beliefs and perceptions of data
6.6 Plausibility, participants’ cognitive styles and heuristics
7 Experimental study: qualitative research findings
7.1 Procedure of qualitative data analysis
7.2 Internal structure of scenarios
7.3 Scenario’s relation to other forms of knowledge and data
7.4 Scenario methodology
7.5 Discussion and data triangulation
8 Synthesis: A conceptual map of scenario plausibility
8.1 Units and contexts of the map
8.2 Unit of analysis A: scenario development sources and methods
8.2.1 Indicator 1: credibility
8.2.2 Indicator 2: scenario methods
8.3 Unit of analysis B: scenario(s) and scenario report
8.3.1 Indicator 3: internal structures of scenarios
8.4 Unit of analysis C: scenario user‐recipient
8.4.1 Indicator 4: conceptual coherence
8.4.2 Indicator 5: cognitive heuristics and dispositions
9 Conclusions and outlook
9.1 Implications for research and practice
9.2 Critical review of the research process
9.3 Suggestions for further research
Abbreviations
Acknowledgments
References


Ricarda Schmidt-Scheele The Plausibility of Future Scenarios

Science Studies

Ricarda Schmidt-Scheele is a postdoctoral research associate at the Center for Interdisciplinary Risk and Innovation Studies (ZIRIUS), University of Stuttgart. Her research areas are scenarios and foresight methods in the context of energy transformation and sustainability processes. She also works as a facilitator at the Oxford Scenarios Programme at Saïd Business School, University of Oxford.

Ricarda Schmidt-Scheele

The Plausibility of Future Scenarios Conceptualising an Unexplored Criterion in Scenario Planning

Dissertation, Universität Stuttgart, D 93

The research presented in this book was funded by the Deutsche Forschungsgemeinschaft (DFG) as part of the Cluster of Excellence »Simulation Technology« (EXC 310/2) at the University of Stuttgart.

Bibliographic information published by the Deutsche Nationalbibliothek: The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de

© 2020 transcript Verlag, Bielefeld

All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publisher.

Cover layout: Maria Arndt, Bielefeld
Cover illustration: Pixabay
Printed by Majuskel Medienproduktion GmbH, Wetzlar
Print-ISBN 978-3-8376-5319-9
PDF-ISBN 978-3-8394-5319-3
https://doi.org/10.14361/9783839453193
Printed on permanent acid-free text paper.


List of Figures

Figure 1: Overview of the structure of the book .......... 25
Figure 2: Scenario objectives on intra- and inter-actor levels .......... 36
Figure 3: The two-dimensional scenario axis in Intuitive Logics .......... 48
Figure 4: Cross-impact matrix illustrating impact balance calculations for the SRES scenarios by the IPCC .......... 51
Figure 5: Plausible scenarios within Knight’s epistemic modes of knowledge .......... 68
Figure 6: Trajectories of plausibility across a scenario’s life path .......... 77
Figure 7: Components of policy analysis in argumentative discourse analysis .......... 87
Figure 8: Dynamic relationship between narrative authors and readers .......... 96
Figure 9: Plausibility Judgments in Conceptual Change .......... 106
Figure 10: Experimental procedures .......... 127
Figure 11: Relative frequency distributions of scenario plausibility judgments (T1) .......... 143
Figure 12: Boxplot of scenario plausibility judgments .......... 144
Figure 13: Differences in scenario plausibility judgments between T1 and T2 .......... 148
Figure 14: Boxplot showing scenario plausibility (CIB1) by disciplinary backgrounds .......... 151
Figure 15: Jitter plots for trustworthiness and scenario plausibility judgments in T1 .......... 153
Figure 16: Plausibility model curve with ‘trustworthiness’ as predictor .......... 155
Figure 17: Jitter plots for items NCC15/NCC2 and plausibility judgments in T1 .......... 166
Figure 18: Plausibility model curve with ‘cognitive closure’ as predictor .......... 169
Figure 19: Categories for reasons of scenario [im]plausibility identified from written protocols .......... 172
Figure 20: Developing a map on scenario plausibility judgments .......... 194
Figure 21: Conceptual map explaining scenario plausibility judgments .......... 203

List of Tables

Table 1: Overview of explored theoretical concepts of plausibility .......... 22
Table 2: Three modes of orientation provided by Futures Studies .......... 38
Table 3: Contrast of guiding principles for IL and CIB .......... 53
Table 4: Cognitive heuristics in human judgment under uncertainty .......... 66
Table 5: Related concepts to plausibility .......... 104
Table 6: Differences in formats of scenario reports .......... 123
Table 7: Structure and content of scenario reports .......... 123
Table 8: Experimental data collection .......... 129
Table 9: Cross-tabulation of academic discipline and gender .......... 141
Table 10: Cross-tabulation of academic discipline and treatment groups .......... 141
Table 11: Plausibility judgments across scenario formats (asymptotic Wilcoxon-Test) .......... 144
Table 12: Effects of sequence of treatment on plausibility judgments .......... 146
Table 13: Bivariate correlations of tested factors and scenario plausibility in T1 (H4) .......... 154
Table 14: Bivariate correlations of tested factors and scenario plausibility in T1 (re: H5) .......... 159
Table 15: Bivariate correlations of success factors and scenario plausibility in T1 (H5) .......... 160
Table 16: Bivariate correlations of tested factors and scenario plausibility in T1 (H5 & H8) .......... 161
Table 17: Bivariate correlations of NCC-items and scenario plausibility in T1 (H6) .......... 165
Table 18: Reasons for plausibility and implausibility: internal structure of scenario .......... 177
Table 19: Reasons for plausibility and implausibility: scenario’s relation to other data .......... 181
Table 20: Reasons for plausibility and implausibility: scenario methodology .......... 185
Table 21: Specific characteristics of IL- and CIB-based scenario evaluations .......... 187

Summary of the book

Plausibility as a concept is omnipresent in the scenario planning literature. Practitioners and researchers regularly conclude that their planning processes have revealed ‘plausible scenarios’. The common position is that for scenario planning exercises to create alternative future pathways, their selection cannot simply be limited to the most probable ones; neither does mere possibility allow for a meaningful collection of relevant and challenging scenarios. Methodological reviews, therefore, name plausibility a key effectiveness criterion for both scenario construction and utilisation. This has practical consequences: Plausibility guides what kind of scenarios are generated and presented and prescribes how to assess and consider scenarios for decision-making. Yet, what scenario plausibility really means, and how it is established and assessed by different actors, including scenario users, remains largely unexplored. The book addresses this conceptual and empirical gap and analyses the concept from the perspective of prospective scenario users.

The small group of scholars who have more recently engaged with the concept has predominantly looked at plausibility from the angle of scenario construction: Here, plausibility is thought to be established either by method-driven processes, e.g. different techniques and procedures deem scenarios plausible only when they are internally consistent, or through actor-driven processes, meaning that involved stakeholders interactively co-produce a common understanding of the scenarios. Both positions neglect that important scenario user groups i) are often not involved in the actual construction process, ii) are confronted with multiple, contradicting scenarios in different formats, and iii) may consequently follow different mechanisms when assessing the plausibility of a scenario. Therefore, in this book, the following research questions are pursued: How do scenario users assess the plausibility of a scenario? What factors influence an individual’s plausibility judgment? Do judgments differ across scenario formats?


The research literature on scenario planning and the community of Futures Studies do not provide theoretical frameworks or distinct directions for research designs to investigate these questions. However, a systematic review of extant debates in scenario research reveals helpful starting points: Scenario plausibility is associated with informal logic and inferences, with narrative storytelling, and with the cognitive capabilities of scenario users. To guide an empirical research agenda and to foster more nuanced conceptions of scenario plausibility, the book follows an explorative research design: It analyses theoretical models and concepts from corresponding academic disciplines and discusses their applicability to the context of scenario planning. Five core research propositions are synthesised that hypothesise what makes a scenario plausible from a user’s perspective. A semi-experimental study is adopted to test the propositions. Master-level students are confronted with two scenario reports on the future of Germany’s energy transformation and are asked to make several assessments regarding the scenarios’ content and contexts. The reports were based on two scenario methods (Intuitive Logics, Cross-Impact Balance Analysis) that seek to convey plausibility in fundamentally different ways: through narrative storylines and matrix-based systems maps.

Quantitative and qualitative findings are presented in the form of a conceptual map of scenario plausibility. Contrary to key assumptions from the theoretical discussions, matrix-based scenarios were judged significantly more plausible than narrative ones. Qualitative data suggests that individuals deploy different mechanisms when judging the two formats. Such flexibility in plausibility assessments has not been accounted for in theory. Findings also show that high credibility of a scenario, in the sense of its trustworthiness, likely leads to high plausibility perceptions. Yet, the map also identifies source credibility as an important factor – an interesting finding given that scenarios are not regularly attributed to specific sources or authors. A critical indicator for scenario plausibility is whether a scenario corresponds with a user’s own beliefs and expectations. Compared to all other indicators, such effects are most strongly and robustly found across both scenario formats and are in line with notions from the theoretically explored concepts. Scenarios that are perceived as too far-fetched in terms of their likelihood are unsurprisingly judged less plausible. Such dynamics also relate to the internal structure of scenarios. (Dis)agreements with scenarios’ causal links were often brought forward as reasons for (im)plausibility. While causality has frequently been named as a key driver of plausibility in theoretical models, it is particularly the sense of causality, i.e. ‘implicit causalities’, that is exploited


in scenario contexts. This also demonstrates the power of several heuristic patterns that are at play when scenario plausibility assessments are made. In sum, while theoretical and empirical findings portray scenarios as rather vulnerable to users’ assessments, plausibility judgments do not appear arbitrary. On the contrary, the conceptual map points to distinct patterns that cannot simply be represented by other concepts, such as credibility or believability.

The findings bear direct implications for scenario research and practice. They demonstrate that scholarly-derived plausibility factors, e.g. internal consistency or narrative richness, play a role in users’ judgments, but fail to account for the different mechanisms scenario users apply when they have not been involved in the scenario construction processes. Key directions for further empirical and conceptual research are outlined at the end of the book. Empirically, the identified plausibility indicators should be studied further in different contexts, e.g. high- or low-stake topics, and with different scenario user groups. The findings also reveal interesting parallels to probability judgments. The almost compulsive isolation of the scenario planning community from questions of probability has not been worthwhile, and future research should open up to investigating the relationships between plausibility and probability. Conceptually, the presented research prompts more critical reflection as to whether plausibility as a normative effectiveness criterion of scenarios is still tenable. Rather, plausibility should be discussed as a descriptive means to better understand intended and unintended effects of scenarios on decision-makers. Such insights can ultimately enable more targeted research on scenario construction and communication.


1 Introduction

During an airport security check, the scenario researcher Selin was asked to remove her Christmas gift – a snow globe – from her suitcase. In a subsequent article, she reflected: “In an odd fit of anger, amusement and astonishment, I probed the security risk with the officer to learn that they have a policy prohibiting snow globes. [...] Since they cannot gain access to the magical juice, no snow globe is safe even if they contain only three ounces of glittering liquid. [...] By what mechanism does an innocent pleasure-giver, a rare treat of slowness and sparkle, become transformed to a security threat? […] First, we have the climate of fear and a thwarted attack in 2006 thought to involve liquid explosives. The thing that didn’t happen – the liquid bombs killing people – became the justification for much to do, the basis for a whole host of interventions. From one particular ‘almost event’ made plausible through intention, many other events have been imagined plausible, right down to my forsaken snow globe as a vehicle of mass destruction.” (Selin 2011b:240)

This bizarre example illustrates how the emergence and assessment of scenarios about the future are guided by means of plausibility (there is not only the mere possibility that parts of a snow globe could be misused, but events in the past make it imaginable), rather than primarily by principles of probability (the instrumentalisation of a snow globe for an assassination does not seem very likely). This book explores the concept of plausibility, its meaning and consequences in contexts of scenario planning.

To consider the future and its possible developments has become an urgency, if not a necessity, in contemporary Western societies. According to Beckert (2016:58), today “[t]he future matters just as much as history matters”, and so scenarios are considered universal remedies in almost all social spheres. As descriptions of multiple possible events in the future, scenarios are concerned with the changes or differentiations of the future from the present or the status quo (Fuller & Loogma 2009). Scenario planning thereby means the systematic assessment of strategies or plans and their performance across a number of identified scenarios (Amer et al 2013; Bradfield et al 2005; Schoemaker 1995). For organisations, it presents “a key survival skill” (Ramírez et al 2010) and is imperative for their viability and competitive edge (Noss 2013). For reasons of legitimacy and accountability, there is also an increasing political demand for scenarios. In times of anticipatory governance structures and responsible innovation, foresight is regarded as a necessary capacity to assess and manage possible consequences of today’s policy decisions or emerging technologies (Guston 2014; Nordmann 2014; Volkery & Ribeiro 2009). Scholars also acknowledge the relevance of scenario-based thinking in our daily life “[w]hen we think about changing jobs, getting married, buying a home, making an investment […]” (Tetlock & Gardner 2015:2).

In recent years, the interest of social scientists in ‘the future’ has rapidly increased. Scholars pay attention to whether and how future projections can impact actors’ perspectives and behaviours in the present (Beckert 2013, 2016; Jasanoff & Kim 2009, 2013; Verschraegen & Vandermoere 2017). Future projections are thereby examined from different conceptual perspectives. While research in Science and Technology Studies (STS) focuses on rather implicit and collective future imaginaries or visions (Patomäki & Steger 2010; Verschraegen & Vandermoere 2017), this book addresses the more explicit and purposeful construction of scenarios as it is pursued in the distinct communities of Scenario Planning, Futures Studies and Foresight (Fuller & Loogma 2009; Masini 2006; Sardar 2010).
Over the past decades, this deliberate construction of futures has significantly increased in the socio-political sphere – so much so that in areas of environmental planning and sustainable energy transformation, scenarios have become “ubiquitous knowledge products” (Pulver & VanDeveer 2009:2).1 In this field, scenarios bear a specified definition as “internally consistent and plausible picture[s] of a possible future reality” (EEA 2009:6). It is for this reason that practitioners in scenario reports and scholars in research papers regularly conclude that their work has produced “plausible scenarios” (Agnolucci 2007; Moss et al 2010; Sala et al 2000; Sheppard et al 2011; Wiebe et al 2015). Plausibility is assumed to be a key indicator for the construction and utilisation of scenarios. In the methodological development process of scenarios, plausibility ought to be a guiding benchmark to ensure that the depicted developments challenge actors’ expectations about the future, while at the same time still ‘speaking’ to actors so that they are willing to ‘suspend their disbeliefs’ (McClanahan 2009; van der Heijden 2005). Two of the founding fathers of scenario planning articulated the relationship between plausibility and scenarios during Cold War times:

“Any particular scenario may in fact contain paranoid ideas, but this must be judged on the basis of the plausibility of the particular scenario - often a difficult judgment in a world of many surprises - and care must be taken to allow for a possibly realistic inclusion of a not-implausible degree of paranoia […].” (Kahn & Wiener 1972:161)

In this quote, plausibility presents a continuum that is meant to push individuals towards the edge of their own imagination of the future. It also links plausibility directly to the uptake of scenarios by their audiences and, hence, to the very core objectives of Futures Studies: Through plausibility, scenarios ought to challenge actors’ mental maps, confront them with surprises and future shocks that had not been imaginable prior to considering the scenarios, and guide them towards improved decision-making (Dufva & Ahlqvist 2015:252). A screening of the scenario literature and surveys of scenario planning methodologies demonstrate an overwhelming dominance of plausibility as an ‘effectiveness criterion’ for scenario work (Amer et al 2013; Wilkinson & Ramírez 2009). Yet, paradoxically, the omnipresence of plausibility in scenario research and practice for a long time did not trigger closer investigations of the concept.

1 It should be noted that the construction of sustainable energy scenarios, indeed, has a long tradition. The well-documented construction and application of scenarios by Royal Dutch Shell, but also the renowned ‘Limits to Growth’ study by the Club of Rome, are often considered prominent starting points for scenarios in the energy field (Ringland 2008).
Scenario plausibility has simply been contrasted with possibility (a simple collection of options), probability (quantified assessment of likelihood) and desirability (options preferred by certain actors) (Selin & Pereira 2013:3). Only in recent years have several scholars begun to inquire more closely into the scope of the concept for scenario planning. Research workshops, roundtables and a Special Issue2 dedicated to plausibility are indicators of this increased interest. The emerging debate unanimously notes the lack of theoretical underpinnings and empirical studies. Selin (2015), for instance, criticises an insufficient understanding of the complexity of the concept and the diverse, implicit expectations associated with it: “[…] [W]hat plausibility actually (and symbolically) means, how it matters for practice, and why it is important for the contemporary coping with uncertainty is unclear”.

Extant scholarly discussions reveal that with plausible knowledge, scenario planning enters uncharted epistemological territory. Two diametrically opposed camps seek to define and operationalise plausibility: One camp links it to the subject matter of scenarios itself; from this perspective, a scenario’s plausibility can be independently established using clear indicators (Lloyd & Schweizer 2013; Wiek et al 2013). Examples include whether the depicted scenario is ‘theoretically occurrable’ or has occurred in the past under different circumstances. For a second camp, in contrast, plausibility is not intrinsic to the scenario itself but is linked to the social contexts in which scenarios are considered (Strand 2013; Wilkinson & Ramírez 2009). The contextualised conception of plausibility, in particular, has received attention in recent research. Plausibility is discussed as the fruitful result of co-production, negotiation and collective sense-making processes (Ramírez & Selin 2014; Selin 2011a; Wilkinson & Ramírez 2009). To view plausibility as an overarching framework for contextualising values, ideas and imaginations about the future is indeed viable for appreciating how plausibility can be jointly produced by scenario experts and stakeholders in participatory processes.

2 The scholarly activities around plausibility entail: the ‘Plausibility Project’, a joint workshop between Oxford University and Arizona State University (2009); a roundtable during the SNet Conference in 2011 (Selin 2011b); and a Special Issue of the International Journal of Foresight and Innovation Policy (vol. 9).
More recent scholarly contributions, for instance, have added to a better understanding of the heuristic value of plausibility in scenario building processes (Uruena 2019) and have developed approaches for the methodological construction of highly plausible scenarios (Walton et al 2019). This research, however, does not account for the mechanisms that are potentially at play when scenario users assess the plausibility of given scenarios that do not specifically target their own values and perspectives about the future. Formalised notions of plausibility, moreover, focus on establishing plausibility as a quality criterion from scenario producers’ perspectives rather than on providing conceptual or empirical explanations for users’ assessments of it. Yet, the latter perspective is particularly relevant for cases in which scenario users, i.e. those actors whose assessments supposedly decide the uptake of a scenario in decision-making, are not involved in the construction process. This is the case in many energy and environmental policy scenario projects (Schubert et al 2015; Dieckhoff 2015). Here, proponents of plausibility as a collective sense-making vehicle acknowledge that “[i]f the scenarios are not used exclusively by those that produced them, and/or need to be shared or disseminated, altogether different dynamics around plausibility and probability erupt.” (Ramírez & Selin 2014:65). The scenario literature hints at some scenario-specific contexts that may evoke different dynamics of plausibility:

• Scenarios often leave their construction contexts and ‘travel’ (Selin 2011b) to other discourses or societal spheres in which they develop a life of their own. As such, the plausibility of a scenario is assessed detached from its original context. Analyses of foresight products, therefore, assume differences in the communication and reception processes of scenarios by citizens, the media, organisations or public bodies (Lösch et al 2016:9).
• In cases of institutional divides between scenario developers and users, different ‘cultures of plausibility’ may clash. In particular, discrepancies between scientific and non-scientific understandings of plausibility may exist. Wilkinson & Ramírez (2009:6) raise the concern that “plausible/implausible outputs from science become implausible/plausible inputs for policy-making.”
• Different scenario formats – narratives, models, systems maps – may trigger different assessments. Analyses have shown how models versus narratives affect the construction of meaning by different stakeholders (Chabay 2015; Lord et al 2016), so that scenarios may be more easily conveyed by some overarching storylines than by models (Strand 2013).





On this basis, the book investigates the following research question: How do scenario users that were not involved in the scenario development process assess the plausibility of a scenario? This involves two sub-questions:

• What factors influence an individual’s scenario plausibility judgment?
• How do plausibility judgments differ across scenario formats?

A systematic review of the extant scenario plausibility debate provides some helpful starting points for approaching the research questions. It shows that plausibility has been associated with informal logic and inferences (Walton 2008), storytelling (Bowman et al 2013; Schwartz 1991; van der Heijden 2005), the grounding of scenarios in cultural narratives (Boenink 2013; Strand 2013) as well as with individuals’ cognitive capabilities (Morgan & Keith 2008). Plausibility does not have a natural disciplinary home base; neither do scenarios. In fact, scenario research is continuously enriched by other academic disciplines and theory-building (Ahlqvist & Rhisiart 2015). Therefore, to initiate conceptual and empirical research on scenario plausibility, theoretical concepts from the corresponding disciplines are explored (table 1). While this theoretical discussion of plausibility is not meant to be exhaustive, it is thought to enrich understandings in the context of scenario planning.

Table 1: Overview of explored theoretical concepts of plausibility

The theoretical concepts are synthesised and research propositions are derived that revolve around three core dimensions: the assumed relation between plausibility judgments and i) context-related assessments of scenarios (credibility, trustworthiness, form of scenario presentation), ii) content-related assessments of scenarios (concept coherence of scenarios with individuals’ beliefs, background knowledge, their educational background), and iii) personality-related assessments (individuals’ need for cognitive closure, their interest in engaging with scenarios).

Due to the novelty of this approach, the study cannot draw on previous studies and research designs for an operationalisation and testing of the theory-derived propositions. In the explored academic disciplines, empirical research on plausibility and how it is perceived has predominantly been conducted using experimental study designs (Canter et al 2003; Lombardi et al 2016b; Lombardi et al 2015; Lombardi et al 2014; Lombardi & Sinatra 2013; Nahari et al 2010). This book adopts this tradition for two reasons. First, scenario planning is a context-sensitive field that is dependent on the content of the scenario exercise, the methods applied (Wright et al 2013a, b) and the actors involved (Franco et al 2013). An experiment enables the control and manipulation of the contexts under which plausibility assessments are made and allows for planned observations of the theoretically derived variables (Sarris & Reiss 2005). Second, the presented study can build on the few previous studies that empirically investigated plausibility in other areas, so that findings can be compared and reflected upon. The experiment uses a classroom-based setting with master-level students. Participants are presented with two different scenario formats (narratives and systems maps) and are asked for plausibility and related assessments at different points in time. Quantitative and qualitative data are collected.

An analysis of scenario users’ plausibility assessments is by no means an intellectual ‘nit-picking’ exercise but has practical relevance with respect to fundamental issues in scenario planning.
For a long time, Futures Studies has engaged in more ‘utilitarian approaches’ (Ahlqvist & Rhisiart 2015) with the ultimate purpose of arriving at a set of scenarios about the future. The scenario community has produced more and more methods to construct scenarios – “sometimes to a point of excess” (Mermet et al 2009:67). This has caused a recent upsurge of critical perspectives on this machinery of ‘futures making’. Scholars criticise the manifestation of a ‘forecasting industry’ (Verschraegen & Vandermoere 2017) in which futures have simply become ‘commodities’ (Urry 2016). According to Colonomos (2016), futures products such as scenarios are ‘sold’ to their consumers, and an increasing estrangement between ‘sellers’ and ‘consumers’ is evident: the power to claim and populate plausible futures clearly resides with those actors who develop the scenarios, typically scenario experts or researchers (Ferry 2016). Hulme & Dessai (2008:56) point to the imbalance of supply and demand for energy scenarios. The authors criticise the disproportionate supply of scenarios that is paired with a lack of investigation into the effects scenarios can have on users and on society as a whole. Over the past years, scholars have increasingly urged for more empirical research to address this gap in ‘evaluative scenario work’ (EEA 2009; Lempert et al 2008; Parker et al 2015; Volkery & Ribeiro 2009) and called for comparative case studies, ethnographic observations or laboratory studies on the cognitive impact of scenarios (Bryant & Lempert 2010:35). The research presented in this book, therefore, is situated in these current debates around intended and unintended effects of scenarios. It sheds light on user perspectives and questions the applicability of scenario developers’ sets of quality criteria for users’ needs and judgment patterns.

Structure of the book

Chapter 2 provides a broad introduction to scenario planning research and practice that is essential for the exploration of plausibility. The chapter does not enlarge on all the different scenario development methods that are currently ‘on the market’ but focuses on the core characteristics of scenario methodologies as well as on fundamental and still unresolved questions about scenario objectives and usage. For the purpose of this book, the scope of scenario planning as a “very fuzzy multi-field” (Marien 2002) is narrowed down to a workable basis: the construction of sustainable energy futures presents an area of application in which the methodological debates about how scenarios should be developed, for what purpose and with what effects are particularly vivid. Within these debates, two qualitative scenario methods – Intuitive Logics and Cross-Impact Balance Analysis – receive increased attention and are therefore introduced and discussed in chapter 2.3 as application examples for the subsequent empirical analysis.
Chapter 3 reviews previous contributions towards understanding plausibility that exist within the Scenario Planning and Futures Studies literature. The ‘life path of scenarios’ (Grunwald 2011) is adopted and enhanced in chapter 4 as a framework to identify relevant contexts and actor constellations in which plausibility assessments are to be explored. The remainder of the chapter discusses theoretical concepts of plausibility from three disciplinary perspectives. The chapter closes with five core research propositions. Chapter 5 outlines the semi-experimental study design; quantitative and qualitative findings are presented in chapters 6 and 7 respectively. Chapter 8 constitutes the core outcome of the study: a conceptual map presents units of analysis and indicators to understand plausibility assessments of scenario users. Based on the map, chapter 9 discusses implications for scenario research and practice and proposes further research agendas. Figure 1 provides an overview of the book’s structure.

Figure 1: Overview of the structure of the book


2 Scenario planning: characteristics and current issues

Over the past century, scenario planning has emerged as a very popular foresight technique. Originally developed for strategic war planning, its ideas have been mainstreamed for business and non-business environments, for instance through implementations by Royal Dutch Shell, British Petroleum or British Airways (Ringland 2008). Today it covers a variety of methods for organisations, governmental and non-governmental institutions and individuals to systematically explore the future. The present proliferation of scenarios in sustainable energy transformation and climate change contexts (Moss et al 2010; Volkery & Ribeiro 2009) is further proof of the enormous flexibility of the method.

From a research perspective, the term ‘scenario’ covers a broad range of definitions and methodologies that are populated by different research communities. For instance, the Futures Studies literature mostly proposes qualitatively driven scenario development – with what some would call ‘softer’ understandings of what a scenario is and ought to fulfil (Amer et al 2013; Börjeson et al 2006; Bradfield et al 2005; Godet 2000). In contrast, model-based scenario approaches put forth more specific understandings; here, scenarios result from different model runs and are supported by sensitivity analyses (Bishop et al 2006; Rogelij et al 2012; Sala et al 2000). Scenario planning attracts more and more scholarly interest. The different research communities actively develop, apply, advance and criticise scenario planning methodologies from the perspectives of management studies, economics, sociology or political science. More recently, increased cooperation and merging activities between different scenario approaches are evident – notably between qualitative scenario and quantitative model developments (Alcamo 2008; O’Mahony 2014; Weimer-Jehle et al 2016).

While this diversity is noteworthy, it is not the purpose of this book to discuss all the different scenario development methods and approaches. Instead, it is important to point out the core characteristics and issues in the practice of scenario planning. Method descriptions often depict scenario planning as a straightforward procedure; however, it is underpinned by very different understandings of what the future is and how it should be studied. This chapter sets the stage for deeper discussions of plausibility in chapter 3.

The first part of the chapter briefly introduces the concept of scenarios and provides a broad overview of the different understandings and methodological approaches for scenario development. The key message is that methodological choices are underpinned by the different rationalities of their developers, meaning their ontological and epistemological assumptions about the future. These rationalities inform which scenarios are produced and presented as plausible or implausible. The second part of the chapter addresses the difficulty of defining the ‘effects’ or even ‘effectiveness’ of scenarios. It shows that there is no common basis for scenario objectives, that is, how scenarios ought to perform and with what effects. This circumstance is further aggravated by the lack of a classification or typology of ‘scenario users’ and even of an understanding of how they process scenarios. Discussions remain unstructured and resemble “a Swiss pocket knife of multiple users” (Masini & Vasquez 2000:49). The final section narrows down the scenario planning landscape for this book. Since explorations of plausibility cannot be extended to all diverse scenario approaches, one qualitative and one semi-qualitative scenario method – Intuitive Logics (IL) and Cross-Impact Balance Analysis (CIB) – are presented and later used as the basis for the empirical study. Proponents of both methods explicitly dedicate themselves to plausibility as the guiding principle for scenario development and assessment.
As stand-alone methods or in combination with quantitative modelling (O’Mahony 2014; Weimer-Jehle et al 2016), the potential of IL and CIB to better reach scenario users and their sense of plausibility is discussed.
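To give a flavour of the formalised core of CIB, which chapter 2.3 introduces in detail, its consistency logic can be sketched in a few lines of code. The following is a minimal illustrative sketch, not an implementation of any published CIB tool; the descriptors, states and impact scores are invented for demonstration and do not stem from any actual study.

```python
# Toy sketch of the Cross-Impact Balance (CIB) consistency check.
# All descriptors, states and scores below are invented for illustration;
# real CIB matrices are elicited from experts.

# cross_impact[(src_descriptor, src_state)][(tgt_descriptor, tgt_state)] = score
# Positive scores promote a target state, negative scores hinder it.
cross_impact = {
    ("economy", "growth"):     {("energy_demand", "high"): 2, ("energy_demand", "low"): -2},
    ("economy", "stagnation"): {("energy_demand", "high"): -1, ("energy_demand", "low"): 1},
    ("energy_demand", "high"): {("economy", "growth"): 1, ("economy", "stagnation"): -1},
    ("energy_demand", "low"):  {("economy", "growth"): 0, ("economy", "stagnation"): 0},
}

states = {"economy": ["growth", "stagnation"], "energy_demand": ["high", "low"]}

def impact_balance(scenario, descriptor, state):
    """Sum of impacts on (descriptor, state) from all other descriptors' chosen states."""
    return sum(
        cross_impact.get((d, s), {}).get((descriptor, state), 0)
        for d, s in scenario.items() if d != descriptor
    )

def is_consistent(scenario):
    """A scenario is CIB-consistent if, for every descriptor, no alternative state
    receives a strictly higher impact balance than the chosen state."""
    for d, chosen in scenario.items():
        chosen_balance = impact_balance(scenario, d, chosen)
        if any(impact_balance(scenario, d, alt) > chosen_balance
               for alt in states[d] if alt != chosen):
            return False
    return True

print(is_consistent({"economy": "growth", "energy_demand": "high"}))  # True
print(is_consistent({"economy": "growth", "energy_demand": "low"}))   # False
```

In CIB terms, a scenario qualifies as consistent when no descriptor ‘wants’ to switch to a state with a higher impact balance; real applications use expert-elicited matrices with many descriptors and search the full combination space for such self-supporting scenarios.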

2.1 Scenarios and scenario development

2.1.1 Core characteristics of scenarios

According to the Cambridge Dictionary (2020), a scenario is “a description of possible actions or events in the future”. In the scholarly scenario literature, it is more specifically understood as “a consistent and plausible picture of a possible future reality” (EEA 2009:6) and traditionally comes in sets of two or more scenarios. Despite the diversity of scenario methods, core characteristics are evident across practices.

On the one hand, scenarios are a powerful conceptual framework (Stirling 2005) for systematically exploring the future. During a phase of ‘opening up’ discussions, the uncertainty, complexity and ambiguity of alternative futures can be comprehended and discussed (Renn et al 2011). Subsequently, a ‘closing down’ phase helps to structure explorations, condense discussions and communicate insights in a systematic and structured way (Bryant & Lempert 2010). Independent of the method, the scenario logic is emphasised. For Fahey & Randall (1998:10-11), this is the rationale that underpins the narratives of scenarios, i.e. “the ‘why’ underlying the ‘what’ and ‘how’ of a plot”.

On the other hand, scenarios present a normative framework (Stirling 2005) in the sense of a more ‘responsible’ approach towards future uncertainty. Scenarios move away from controlling or reducing uncertainty and appreciate the unpredictability and irreducibility of uncertainty (Slaughter 2002a, b). They are seen as the natural successors of past, reductionist attempts to predict the future, which substantially failed, among others, in the fields of energy (Parson 2008) and financial systems (Flowers et al 2009). In their work on ‘feral futures’, Ramírez & Ravetz (2011) point towards the danger of treating uncertainties as risks and argue for scenarios as a ‘better alternative’ for dealing with unpredictability. Taken together, the raison d’être of scenarios lies in their special approach to future uncertainty: laying out multiple plausible pathways of the future.
The relationship between scenarios and uncertainty is particularly emphasised by the more qualitatively-oriented Futures Studies literature (Amer et al 2013; Bishop et al 2006; Börjeson et al 2006; Bradfield et al 2005), but also by the more quantitatively-oriented literature on sustainability science and energy systems analysis (Kowalski et al 2009). For van der Heijden (2008), scenarios’ approach to uncertainty affects both the ‘systems managed’ and the ‘managing system’. In a complex system, uncertainty does not only relate to how identified factors may develop in the future, but includes the indeterminacy of further, still unknown factors. Uncertainty is naturally accompanied by hidden complexity and ambiguity, since relationships between uncertain and not-yet-known factors are incomprehensible and unforeseeable in the present (Renn 2017; Renn et al 2011). At the same time, the management of complex systems in times of deep uncertainty requires the integration of different viewpoints and values of relevant actors (IRGC 2005; Renn & Schweizer 2009; Webler 1995). Scenarios are, therefore, seen as an operationalisation of post-normal science (Funtowicz & Ravetz 1990), because a recourse to expert-based knowledge neither captures all relevant knowledge of the system, nor does it help to resolve conflicts between actors.

2.1.2 Methodological choices in scenario planning

The proliferation of scenarios in business and public policy spheres has paved the way for the introduction of ever new scenario methods. It contributes to the diversity of the scenario methods landscape, which some have called “methodological chaos” (Martelli 2001). Most descriptions of scenario development focus on the level of methods and attend to operational ‘how-to’ instructions. The overview of van der Heijden (2005) shows the basic steps involved in scenario development:

• The scenario team conducts a data analysis. Depending on the method, this involves consultations of historical studies, quantitative databases, interviews, literature reviews or results of workshops. The level of detail and the inclusion of actors need to be decided.
• The collected data is to be structured and put into context. The scenario logic is achieved through different techniques, such as influence diagrams, cross-impact matrices, scenario axes or computer-assisted models.
• The scenarios are to be communicated using strategies that are suitable for the intended scenario users, e.g. storytelling, network diagrams or the quantification of results.

While these general procedures are reflected in most scenario methods (with iterations of single or all steps), they differ greatly in the means used to perform these steps. Generally, four different paths to scenario development can be distinguished:

1. Scenarios can be developed using expert-based knowledge. The core scenario team applies elicitation techniques that are common for risk assessment, for instance through structured interviews (Ratcliffe 2003; Weimer-Jehle et al 2016) or Delphi processes (Rowe & Bolger 2016). Rationales brought forward for expert-based scenario development include an improved quality of insights due to reduced cognitive and social biases of experts as compared to non-experts, or simply an unbearable complexity of development methods or the subject matters (Bolger & Wright 2017).


2. As a counter principle to the reliance on experts, scenarios can be developed in participatory processes. Participative techniques have been developed that range from unstructured, creativity-driven processes (van Vliet et al 2012) to formalised approaches of stakeholder inclusion (Schmidt-Scheele et al 2020) and participatory modelling (Seidl 2015). Outputs may be qualitative and/or quantitative. Proponents of participatory processes maintain that dynamic interaction between participants is a necessary condition for the quality of futures knowledge (Dufva & Ahlqvist 2015; Ramírez & Wilkinson 2016) and leads to improved identification with and usage of scenarios (Soste et al 2015).

3. Scenario development can be guided by theoretical insights. For the exploration of grand societal challenges, transition theories (Bergman et al 2008; Haxeltine et al 2008), social theory (Sondeijker et al 2006) or cultural theory (Rotmans et al 2000) are popular resources to inform scenario development. Theoretical concepts and empirical cases that account for complex interactions between technology and society are thought to be a powerful reference for developing ‘socio-technical’ (Elzen et al 2002) or ‘socio-ethical’ scenarios (Boenink 2013). These scenarios focus on an improved understanding of the changing dynamics instead of future end-states.

4. Scenarios can be developed using computer-based modelling approaches. Particularly in the fields of energy and environmental development, systems complexity is the main reason for using model-based approaches. Systems modelling is a popular approach for developing energy scenarios for a variety of private and public actors (Grunwald 2011); yet, models naturally limit the focus to technical and energy-economic aspects (Pregger et al 2020; Weimer-Jehle et al 2016). For this reason, combinations of quantitative modelling approaches and qualitative scenario methods have been proposed (Alcamo 2008; Kemp-Benedict 2012).

In practice, hybrid forms of scenario approaches often exist and are explicitly called for (Alcamo 2008; Scheele et al 2017). In this entanglement, scenario typologies are popular means to structure and compare different development approaches (Amer et al 2013; Bradfield et al 2005; Börjeson et al 2006). Bradfield et al (2005) distinguish between ‘scenario schools’: the ‘Probabilistic/Cross-Trends’ school originated in the US (through the Rand Corporation) and promotes quantitative scenarios in the form of systems analyses or probability distributions; the France-based ‘La Prospective’, in contrast, is known for the development of normative scenarios. The ‘Intuitive Logics’ school is mainly attributed to the work of Wack (1985a, b) in The Royal Dutch Shell with qualitative, non-probabilistic and explorative scenarios. More recent developments in South-East Asia and the US propose qualitative approaches that emphasise the revelation of underlying myths, metaphors and cultures in scenario planning processes. Börjeson et al (2006) and van Notten et al (2003) structure scenario approaches based on the overall project objectives. The authors maintain that the overall goal of a scenario project drives the design of the development processes, e.g. what methods to use and what scope to allow. In a similar way, Ramírez & Wilkinson (2016) argue that the ultimate purpose of a scenario intervention needs to serve as the starting point for deciding on the appropriate methodology.

The typologies illustrate that differences exist not only in terms of how scenario development is done operationally; the approaches also differ in terms of who is included in the development process, what knowledge counts as ‘legitimate’ and how knowledge is assembled, processed and communicated. A number of scholars criticise that these fundamental, epistemological choices are often concealed when scenario development is depicted on the operational, method level only (van Asselt et al 2010; Wright et al 2013a). Wright et al (2013b) argue that the pressing question for scenario researchers as to whether one method works better than another cannot be answered by comparing operational procedures. To interpret differences in the heterogeneous scenario landscape and to understand recent trends in methodological advancements, scenario researchers have suggested moving away from viewing scenarios simply as a planning method or management technique. They propose to engage with scenarios on the level of methodologies, in which methods are integral parts.
Ramírez & Wilkinson (2016:219) define scenario methodology as “a process or procedural manifestation of a given epistemology of a particular set of methods, rules, techniques, tools, procedures, and/or ideas”. Thus, the authors impose a meta-perspective that puts emphasis on the underlying assumptions that lead to the implementation of certain methods. In a similar vein, Scheele et al (2018) distinguish between two levels of analysis in scenario research: the first level entails activities to look into the future, i.e. the application and development of methods with the ultimate purpose of arriving at a set of scenarios. The second level of analysis involves looking at how the future is created by scenario teams and stakeholders. For Brown et al (2000:4), the objective of this level is “to shift the discussion [...] to looking at how the future as a temporal abstraction is constructed and managed, by whom and under what conditions”. This level does not target ‘the future’ as the subject of analysis, but the methodological choices – and with them the implementation of methods – that are shaped by the underlying ontological and epistemological assumptions about the future.

These underlying ontological and epistemological assumptions in Scenario Planning or Futures Studies have concerned a number of researchers (Hejazi 2012; Masini 2006; Walton 2008). Gaßner & Kosow (2008:11 ff.) and similarly Dreborg (2004) illustrate how ‘the future’ is perceived by scenario developers as either calculable, evolutionary or shapeable. Slaughter (2002b) demonstrates how the very process of developing scenarios is underpinned by some kind of belief in our agency towards the future. Wilkinson (2009) notes a clash in the epistemological foundations of scenario planners: one camp seeks to conquer uncertainty and approximate prediction, while a second camp believes in the existence of intrinsically uncertain and unforecastable futures. For Inayatullah (1990), this is connected to respective methodologies that can be pursued in predictive-empirical, cultural-interpretative or critical post-structuralist manners. The main argument here is that scenario methodologies are performative; their underlying rationales determine what kind of scenarios are presented as ‘plausible’ and can influence how decision-makers perceive the future (Postma & Liebl 2005:165). Scenario methods put these rationalities in place in that they embody different methodological choices. Three important rationalities are briefly presented as potentially having a significant impact on how plausible scenarios are created, with what actors and what kind of knowledge.

Methodological choices on the inclusion of actors

Scenario processes are often framed as ‘participative discussions’ (Cuhls 2003) or ‘strategic conversations’ (van der Heijden 2005).
A critical question is how inclusive or exclusive scenario processes should be. Inclusive scenario development processes are associated with improved quality, because a diverse group of actors brings in diverse viewpoints and perspectives and contributes to an improved end product (Metzger et al 2010) and user satisfaction. Meissner & Wulf (2013) show that full – as opposed to partial – participation in scenario development creates more value for decision-makers. However, methodological choices about participation are also based on questions of competence. Researchers assume that the competence and expertise of participants are key for the quality of scenarios (Molitor 2009). Finding the ‘right’ people is then pursued by selecting the ‘most knowledgeable persons’. Hodgkinson & Healey (2008) propose to select participants by studying their information processing and problem-solving capacities. Also, practical constraints limit the inclusion of participants, for instance the inclusion of non-modellers in complex modelling activities. Quality criteria such as ‘transparency’ and ‘traceability’ are discussed as means to substitute for participation (Kosow 2015). To sum up, the inclusion or exclusion of participants is a conscious and inevitable choice in any scenario activity and shapes not only the methods applied, but also the scenario content.

Methodological choices on the inclusion of knowledge

The question of what kind of ‘knowledge’ to include in scenario development is closely related to aspects of participation. Both issues have been subject to heated debates since the emergence of the Futures Studies community (Dufva & Ahlqvist 2015; Inayatullah 1990). With the broadening of scenario applications to large-scale systems, there is the desire to advance scenario methodologies to cope with this level of complexity. A move towards the integration of different knowledge components involves the combination of quantitative computer models and qualitative narratives (Alcamo 2008). With it has come the realisation that qualitative methods are more adequate for accounting for insights from behavioural and social transition research (Weimer-Jehle et al 2013). At the same time, moves towards improving the ‘scientific nature’ of the knowledge to be included in scenario processes and products are noteworthy (Kuusi et al 2015). Thus, a decision for certain knowledge components, as manifested in a scenario method, can also lead to the exclusion of other insights. Here, Inayatullah (1998:817) emphasises how ‘epistemes’, i.e. “the knowledge boundaries that frame our knowing”, define what is knowable and what is not in the course of scenario activities.
Methodological choices on the processing and presentation of scenarios

Methodological choices also involve how scenarios are processed and communicated. Bowman et al (2013) suggest viewing scenarios through the lens of storytelling to enable temporal sequencing, plotting, novelty and dramaturgy. Other authors, however, warn that storylines can create undue authority of scenarios and encourage overconfidence (Morgan & Keith 2008). In the light of ‘true uncertainty’, another aspect pertains to how scenarios can represent this uncertainty. While some scenario theorists propose a smaller set of plausible scenarios to demonstrate the very inability to depict all possible futures and to prevent perceptions of comprehensibility (Ramírez & Wilkinson 2016), others call for a large(r) number of scenarios: a handful of scenarios, so the argument goes, only provides an incomplete picture and leaves many uncertainties unacknowledged for scenario developers and users (Guivarch et al 2013).

2.2 Evaluative research: scenario objectives and evidence

The previous section has demonstrated that the comparatively young scenario literature is application-oriented. In other words, research and practice are dominated by the actual production of scenarios in multiple fields (Meissner & Wulf 2013; Varho & Tapio 2013). In contrast to the rich repertory of scenario methods and case studies, surprisingly few insights into the effects of scenarios, their different methods and formats are available. This gap in ‘evaluative scenario work’ has been criticised. What is missing are:

“[…] [S]tudies that systematically attempt to evaluate scenario approaches with methods such as comparative case studies, ethnographic or other detailed observations of the impact of scenarios within organizations, laboratory studies of the cognitive impacts of scenarios, and structured comparisons of the performance of organizations that do and do not use scenarios.” (Bryant & Lempert 2010:35)

Scholars agree on the urgency of and the practical need for evaluative scenario research. Trutnevyte et al (2016:376) call for evidence-based and systematic insights into users’ perspectives, because scenario developers cannot know “(1) whether what they intend to say with their scenarios is in fact what is being heard and (2) whether these scenarios are what the audience wants and needs to know”. Parker et al (2015:13) consider it a necessary basis for contemplating whether scenarios can be unambiguous or whether their effects on users are arbitrary. Trutnevyte et al (2016) even question the entire reasonableness of scenario projects when scenarios are based on developers’ own rationalities without proper reflection on the consequences. This necessitates critical scrutiny of the enormous efforts and resources put into the methodological development of scenarios. Yet, the body of evaluative scenario research remains remarkably small.
Certainly, the attractiveness of the actual production of scenarios is one reason (Ahlqvist & Rhisiart 2015); the methodological difficulty of attributing effects to the performance of scenarios is another (Ramírez & Selin 2014). However, research on the performance of scenarios is also impeded by two underlying problems: the lack of clarity about scenario objectives and about scenario users.

2.2.1 Objectives of scenarios

The purported objectives of scenarios vary considerably across research studies and practitioners’ reports. Often, objectives are directly dependent on the respective scenario projects; a systematic review or even classification of objectives is missing. As the least common denominator, scenario activities are conceptualised as ‘interventions’ (Pulver & VanDeveer 2009; Ramírez & Wilkinson 2016) and as means to interfere in individuals’ thinking about or acting upon the future. Scholarly contributions agree that scenarios are not an encouragement of ‘business as usual’, but instead induce some kind of change (Selin 2006). For this book, a classification of scenario objectives is synthesised from the literature and targets two levels: Scenarios can exert effects on individual actors (intra-actor level) or influence the relations between actors or stakeholders (inter-actor level). The levels can be structured along a continuum from low to high requirements (figure 2).

Figure 2: Scenario objectives on intra- and inter-actor levels

Intra-actor scenario objectives

Debates about the ‘effects’ of scenarios often revolve around scenarios’ potential to impact the cognitive processes of those actors involved in scenario processes, for instance by reducing cognitive heuristics, clarifying uncertainties and complexities of system factors and by improving individuals’ preparedness to consider a wider range of plausible futures (van Notten et al 2003; Volkery & Ribeiro 2009; Wright et al 2013a). Particularly the idea of widening the plausibility perceptions of scenario users has been recognised as a central requirement for scenario effectiveness. Boenink (2013:155) notes that plausible scenarios shall “enhance the reflexivity of users”. Scenarios should help users to better understand how their agency may impact the future as well as how future developments may determine their own future. Other authors even associate successful scenario interventions with the change of mental models (Chermack 2005; Glick et al 2012) and with learning in the sense of reperceiving the present and reframing the future (Ramírez & Wilkinson 2016). While most studies refrain from seeing scenarios as direct decision guidelines, some connect the use of scenarios to improved decision-making (Chermack 2005; Wulf & Meissner 2013).

The purported objectives of scenarios can be contextualised by three different modes of orientation from Futures Studies (table 2). The first mode maintains the existence of a ‘single best’ future, which needs to be detected by accumulating data using sophisticated methods. ‘Optimal solutions’ can be derived; yet, Grunwald (2013) points towards the inadequacy of mode 1 for scenario purposes: Because the future holds intrinsic uncertainties, deriving decisions about the future inevitably depends on the values, ideas and experiences of individuals. In the same vein, von Wirth et al (2014:125) caution against viewing scenarios as a “tool for direct decisions support” and suggest the function of scenarios to be “heuristic only”. The second mode suggests working constructively with the diversity of scenarios (Grunwald 2013:5).
When distinguishing between possibilities, scenarios help to find robust strategies: Detecting pathways that are feasible with respect to more than one future presented by a set of scenarios (Bishop et al 2006; Lindgren & Bandhold 2003). The analysis of multiple possibilities reminds users of the openness of the future and its formability. The third mode of orientation takes effect if it is impossible to determine sufficiently developed cones of plausibility in which scenarios may be diverse but not divergent, i.e. when the futures contradict one another. In Grunwald’s opinion, such futures are often rightly suspected of being arbitrary. Nevertheless, they provide value in the sense of “a semantic and hermeneutic structuring” of the future space (Grunwald 2013:7). Mode 3 thereby corresponds to the cognitive benefits of scenarios for individuals’ thinking about the future.


Table 2: Three modes of orientation provided by Futures Studies

Mode 1: Direct decision support
  Approach to the future: Predictive: one future
  Spectrum of futures: Convergence
  Preferred methodology: Quantitative, model-based
  Orientation provided: Decision support

Mode 2: Wind-tunnelling
  Approach to the future: Look for a corridor of sensible futures
  Spectrum of futures: Bounded diversity
  Preferred methodology: Quantitative/qualitative; participatory
  Orientation provided: Robust action strategies

Mode 3: Deliberative choice
  Approach to the future: Open space of futures
  Spectrum of futures: Unbounded divergence
  Preferred methodology: Narrative
  Orientation provided: Self-reflection and contemporary diagnostics

Source: Shortened version of Grunwald (2013:2)

Inter-actor scenario objectives

Next to effects on individual scenario users, the literature also assumes that scenarios, as a nexus between scenario developers and scenario users, induce improved understanding and cooperation between actors. For this purpose, authors have been interested in how scenarios get transferred across actors and whether they are processed and ultimately used according to the intentions of their developers (Parker et al 2015). In this vein, scenarios are often discussed as ‘boundary objects’. Several studies describe scenarios as being able to act as a bridge between different worlds or communities (Dieckhoff 2015; Hulme & Dessai 2008; Pulver & VanDeveer 2009; Selin 2006). The concept of boundary objects is referenced to describe not primarily what kind of objects scenarios are, but rather what scenarios can do: They mediate between communities or actor worlds by incorporating actor-specific meanings without deviating too much from a common reference that makes mutual understanding and cooperation possible.¹ This means there exists a common plausible ground of scenarios that holds for the diverse actors. They can successfully cooperate without the need to agree on one interpretation of a scenario. In the context of scenarios’ institutional value, Selin (2006) argues that the integrity and success of scenario work rest in its bridging of boundaries, for instance between scenario developers and users. Dieckhoff (2015) notes how, in cases of full integration of scenario users in the development process, there is no need for boundary objects. In turn, this implies the attractiveness of boundary objects for cases of separation between development and usage of scenarios. Pulver & VanDeveer (2009) stress the role of scenarios as boundary objects to account for the diverse users and functions scenarios are intended to address. Through interviews with scenario developers, Dieckhoff (2015) specifies how storylines in model-based energy scenarios serve as the common ‘philosophy’ of the scenarios.

¹ See the definition of boundary objects by the originators Star & Griesemer (1989:393): “Boundary objects are objects which are both plastic enough to adapt to local needs and the constraints of the several parties employing them, yet robust enough to maintain a common identity across sites. They are weakly structured in common use and become strongly structured in individual-site use.”

2.2.2 Effects and effectiveness of scenarios

The previous section has outlined different objectives that are associated with scenarios. Only a small number of research studies empirically evaluate the effects of scenarios and go beyond anecdotal evidence. For the purpose of this book’s research question, this section focuses on empirical studies that evaluated the intra-individual scenario objectives.

In business environments, researchers have called for empirically based evaluations of the relation between scenarios and decision-making (Harries 2003). Several case studies conclude that scenario interventions improve organisations’ performance; Rohrbeck & Schwarz (2013) argue for a value contribution of scenarios, while Glick et al (2012) demonstrate scenarios’ effects on mental models. Yet, for scenario activities outside of the business context, such analyses have been mostly absent. Few authors have addressed the importance of context sensitivity, institutional settings or the incompatibility of long-term scenarios and short-term policymaking (EEA 2009; Hughes et al 2013; Nilsson et al 2011). Volkery & Ribeiro (2009) argue that scenario interventions can only inform indirect policy advice, for instance in terms of agenda-setting and issue-framing.

Several authors have pointed to challenges that arise with ‘using’ scenarios (Braunreiter et al 2016; Enserink et al 2013; Pulver & VanDeveer 2009). Grunwald (2011) maintains that users are confronted with an enormous number of scenarios that seem to have followed almost identical development processes, and yet often produce very different results. This gives rise to arbitrariness from the viewpoint of users. In the same vein, Dieckhoff (2015:15) notes that, for users, scenarios combine apparently opposing elements: The precision of scenario models and their alleged clarity versus the ambiguity, or even arbitrariness, of only a few depicted futures and the related question of who chose them and why. For Dieckhoff, this results in users’ irritation with regard to the way uncertain knowledge about the future is generated and communicated, but also in the ambivalence of locating scenarios between certain and uncertain knowledge and between political beliefs and scientific assessments. Empirical analyses have shown that scenarios are oftentimes judged by whether they come true – with relevant repercussions: ‘False predictions’ then contribute to the decreasing credibility of both the scientists and the scenarios, which in turn causes decision-makers to call for ‘better’ methods (Enserink et al 2013; Maxim & van der Sluijs 2011). In this context, Dieckhoff (2015) notes how modellers often wish to create more than ‘just possible’ scenarios. This pressure on scenario developers can be observed to have a paradoxical effect: Current method enhancements in scenario work show how an increased complexity and uncertainty of the subject matter (e.g. energy transformation) also triggers an increase in the complexity and dimensionality of scenario work itself.

What this means for the audience and users of scenarios is demonstrated by studies that show individuals’ discomfort with uncertainty. When confronted with the selective uncertainties presented in scenarios, users often look for certainty rather than embracing the uncertainty (Enserink et al 2013). Bryant & Lempert (2010:34) argue that scenarios “fall short of their potential especially when used in broad public debates among participants with diverse interests and values.” Some empirical studies even question whether the use of scenarios actually improves decision-making quality (Hulme & Dessai 2008:55).
Grunwald (2013:4) argues that in contexts in which scenarios are needed the most – that is, when not even overall directions for change are available – scenarios are often unable to provide the help that is expected by actors. In this sense, for scenario users this may be a key contradiction in itself: Scenarios promise orientation in conflictual and ambitious social debates while simply delivering divergent perspectives of the future.²

² The discussions on scenarios and their orientation for decision-makers thereby resemble earlier contributions on the relation between science and decision-makers. Funtowicz & Ravetz (1990:1) describe a dilemma: “Previously it was assumed that Science provided ‘hard facts’ in numerical form, in contrast to the ‘soft’, interest-driven, value-laden determinants of politics. Now, policy makers increasingly need to make ‘hard’ decisions, choosing between conflicting options, using scientific information that is irremediably ‘soft’.”

A small body of experimental research presents an interesting angle for evaluating how scenarios are perceived by individuals. The research designs of most studies are informed by experimental psychology and deal with the cognitive effects of scenarios as means for assessing scenario effectiveness (Chermack 2004; Meissner & Wulf 2013; Wright & Goodwin 2002). Noteworthy are the efforts of Schoemaker (1993) and Bradfield (2008) to empirically analyse the effects of scenarios on reducing individuals’ cognitive biases. Schoemaker (1993) found that scenarios can stretch people’s confidence ranges, particularly if an individual developed the scenarios personally and was not just presented with existing ones. Bradfield (2008) observed groups of individuals and studied the learning effects resulting from scenario development. In contrast to Schoemaker (1993), he questioned the learning potential arising from scenario planning by analysing language content, behavioural patterns and subjective reflections of study participants.

While Bradfield (2008) and Schoemaker (1993) study the effects of scenarios more generally, other studies compare the effects of different scenario techniques. Here, the experimental research suggests that scenario methods matter for the way scenarios and their development processes are perceived and interpreted by scenario users. van Vliet et al (2012) studied the negative effects structured scenario approaches can have on the creativity of involved stakeholders. Parker et al (2015) as well as Gong et al (2017) looked at how more complex scenario representations can reduce the interpretability of scenario results. In the study of Han & Diekmann (2001), the guiding principle is to compare the effects of cross-impact analyses on the one hand, and bare intuition on the other, when facing uncertain business decisions.

The experimental studies show two different perspectives for studying the effects of scenarios. One group maintains that a distinction between ‘right’ and ‘wrong’ usage of scenarios is possible. Consequently, they study the ‘effectiveness’ of scenarios in the sense of the decision quality of users. Han & Diekmann (2001), Meissner & Wulf (2013) and Parker et al (2015) do so by presenting participants with an intellective task to be solved. The studies measure decision quality after scenario engagement in different ways: In Han & Diekmann (2001), participants must make a “Go/No-Go” investment decision, whereby their decisions are measured against initial assessments of


experts. Meissner & Wulf (2013) measure decision quality by comparing the performance of individuals who engaged in scenario planning with that of individuals who engaged with other, traditional planning tools. The main difficulty stems from establishing clear and unequivocal concepts to measure ‘quality’ in scenario planning contexts. Some authors, therefore, work towards constructing such criteria for scenario effectiveness. van Vliet et al (2012), for instance, investigated creativity and measured the creativity of scenario products by comparing the length of developed storylines and the number of new aspects mentioned in the scenarios. Yet, other studies consider ‘cognitive benefits’ and perceived usefulness as overarching objectives of scenarios. As such, these studies focus on the ‘effect’ rather than the ‘effectiveness’ of scenarios (Bradfield 2008; van Vliet et al 2012).

2.2.3 The ‘users’ of scenarios

The previous sections have demonstrated that there is little agreement on scenario objectives and on what is really meant by ‘using’ scenarios. In this context, the target audience of scenarios is also often not specified. Scenario users are often taken as natural givens in scenario activities and, hence, remain empty concepts. An array of scholars has demanded that more attention be paid to those actors (Parker et al 2015; Pulver & VanDeveer 2009; Trutnevyte et al 2016). Kunkel et al (2016:55), for instance, mention the need for more “engagement and collaboration of end-users in scenario development”.

Few authors have attempted to define scenario users more closely. Selin (2007:213) refers to insiders and outsiders to distinguish two user groups that are either inside or outside of scenario development processes. Pulver & VanDeveer (2009) speak about user-producers and user-recipients. In business settings, the decision-makers who shall make use of the scenarios are often involved in the scenario exercise. Here, the user group can be clearly defined. Ramírez & Wilkinson (2016) emphasise the centrality of the users in scenario planning and frame all actors – including the users and developers – as learners. Their characteristics, fears, hopes and interests can be directly taken up in the scenarios. In practices outside of the business arena, particularly in public policy settings, scenario users are often not involved in the development process and, hence, are more difficult to grasp. Pulver & VanDeveer (2009:2) take issue with the fact that fundamental questions about scenario use and scenario users remain unanswered, even though renowned scenario studies, e.g. from the IPCC, “simultaneously seem to be widely used by actors and for analyses for which they were not explicitly designed and seem to not be significantly shaping global policymaking to date”. In a similar vein, Beck & Mahony (2017) call for closer attention to both intended and unintended users of scenarios.

Dieckhoff (2015:17) describes a dominant practice for energy scenario projects in Germany: Public or private research institutes are commissioned by private or public-sector clients (e.g. ministries, companies, non-governmental organisations) with the development of scenarios. With respect to the interaction between commissioners and developers, he distinguishes three process types: An interactive entanglement of commissioners and developers, an iterative separation and a sequential separation of both parties. Most of the studies analysed by Dieckhoff (2015) represent sequential separations; commissioners are involved in discussions about scope and scenario directions, yet are not involved in the actual development processes. Schubert et al (2015:49) present an overview of the involved parties in the German landscape of energy scenario studies and identify a small, clear group of developers (contractors/editors) and commissioners (clients/publishers). It seems that, oftentimes, a distinct and well-known group of developers stands in contrast to a broad and undefined array of users.

For this book, preliminary categories of scenario ‘users’ are distinguished based on extant scholarly debates:

1. As first-order users, commissioners and a distinct group of decision-makers can be assumed. This is the only well-defined target group that is known to the scenario developers prior to the construction process. Depending on the process style, they are involved in certain steps of the development. Appelrath et al (2016) hold that this group potentially has the power to determine and guide the direction of the scenario project.
Schubert et al (2015:48) note that, across different federal ministries, diverse intentions and responsibilities often take effect in scenario negotiations.³

³ According to the authors, the Federal Ministry of Economics (BMWi), for instance, is interested in issues of energy supply security, while the Federal Environment Ministry (BMU) naturally looks for the sustainable use of raw materials or human health issues in scenarios.

2. Second-order users are wider, non-specified audiences of decision-makers in ministries, agencies, companies and political parties. These ‘implementing organisations’ (Appelrath et al 2016:19) may use scenarios as a knowledge basis to decide on investments and overall strategies, or as a means for internal and external communication. The difference to first-order users is their non-involvement in scenario development and the fact that mostly no direct relationships exist with developers. Based on an analysis of scenarios within the European Patent Office, Lang (2012:389) differentiates between inner, middle and outer networks in which the influence of and contact between stakeholders and the core scenario team decrease in the middle and outer circles. The scientific community, i.e. fellows of the scenario developers, is another second-order group. Here, scenarios are primarily used as input for future research and are subject to more thorough methodological scrutiny (Braunreiter et al 2016).

3. The least defined user group, the third-order audience, is formed by stakeholders in the wider, democratic public. This group, e.g. in the form of political parties, NGOs, mass media or interested citizens (Teske 2011), needs to be considered as a rightful addressee of scenario studies (Appelrath et al 2016:19). Scenarios are often made possible through public funds; once published, they enter public debates through medial reception and dissemination. Kunkel et al (2016:57) call for a more nuanced distinction between intermediate users, i.e. scholars, and this group of end-users.

Questions about scenario use also pertain to the different characteristics, expectations and processing styles of users. These are particularly relevant for the second- and third-order groups of users that are separated from the developing teams and their operations. Selin (2007:213) proposes the concepts of ‘actor worlds’ (Callon 1986) and ‘communities of practice’ (Brown & Duguid 1991) to pay attention to actors’ social contexts when establishing, supporting or refuting assumptions about the future. According to her, “actor worlds are unified by a specific way of acting and performing in a particular context.
[…] Scientists use the distant future and its promises to gain funding and legitimacy with the politicians but continue to reject the vision when seeking legitimacy within their own communities […]”. As such, the concept illustrates that different rationalities are at play when users of scenarios are envisioned. The reach across multiple user groups can also result in tensions (Kunkel et al 2016:56): While intermediate, scholarly users of scenarios require scientifically sound insights, end-users expect a focus on ‘interesting’ scenarios, e.g. those with low probabilities but high consequences. Lösch et al (2016:11) demand that ‘society’ as a scenario user group cannot be reduced to a single characteristic and instead has to be investigated as “complex arrangements and dynamics that are produced by actions, interests, power relations, negotiation and communication processes of social actors” [own translation by RSS].


Pulver & VanDeveer (2009:10) argue that the proximity of actors to the scenario development process can impact their perceptions of scenarios as they may develop different views about the credibility, relevance and legitimacy of the scenarios.

2.3 Two exemplary scenario planning methods

So far, this chapter has emphasised scenario research and practice as a broad field with diverse approaches towards the construction of scenarios, as well as a field that is struggling with fundamental questions about the objectives of scenario work and its effects on diverse user groups. This section narrows the field down towards a workable basis for this book: The construction of sustainable energy futures presents an application area in which the methodological debates about how scenarios should be developed, for what purpose and with what effects are particularly vivid.

A proliferation of scenario methods in the field of energy has been visible for some decades (Karlsen & Karlsen 2007; Midttun & Baumgartner 1986; Nielsen & Karlsson 2007), and scenario methodologies are expected to keep up with the conceptualisation of energy system transitions as socio-technical transformations (Grunwald & Schippl 2013; Pfenninger et al 2014; Stirling 2014). Particularly, the social sciences aim at an equivalent position next to the still dominant systems modelling communities in scenario development. While model-based energy scenarios still dominate the field, they are increasingly viewed critically. They present energy systems in a highly structured way with relevant technical detail (Braunreiter et al 2016:9), but neither lay out fundamentally different views of how the world and its contexts could change (Hulme & Dessai 2008:67) nor pay attention to the contextual environment in which scenarios may play out (Weimer-Jehle et al 2013). Furthermore, model-based scenario approaches often fail to address demand-side energy dynamics in future energy outlooks (Alcamo 2008; Elzen et al 2002; Midttun & Baumgartner 1986; Nielsen & Karlsson 2007; O’Mahony 2014; Weimer-Jehle et al 2016). The limitations of quantitative models have given rise to an upsurge of qualitative scenario techniques.
Qualitative scenarios are assumed to fulfil what quantitative models fail to accomplish: Content-wise, they are expected to better contextualise energy futures within wider societal, political and cultural developments (Weimer-Jehle et al 2016). Yet, important for this book, qualitative scenarios are also discussed for their effective communication with scenario users. The IPCC famously included qualitative scenarios in its Special Report on Emissions Scenarios (SRES) and legitimised this by making it “easier to explain the scenario to the various user communities by providing a narrative description of alternative futures that goes beyond quantitative scenario features” (Nakicenovic et al 2000:170-171). Some basic assumptions about the potential of qualitative scenarios include (Kunkel et al 2016:10,57; O’Mahony 2014:41,54; Trutnevyte et al 2016; Trutnevyte et al 2012):

• to present scenarios in an accessible fashion for diverse users;
• to provide a basis for considering wider notions of uncertainty;
• to include more surprises than quantitative scenarios due to their less constrained character and ‘freer rein’ of imagination.

Within these debates, two qualitative scenario methods receive increased attention. While Intuitive Logics (IL) presents the more ‘traditional’ approach that has been adopted by the IPCC but also in business-related contexts (e.g. at Royal Dutch Shell), Cross-Impact Balance Analysis (CIB) is positioned as the ‘antithesis’ to IL (Weimer-Jehle 2006): a more analytical, semi-qualitative scenario approach. CIB is propagated as a methodological enhancement over traditional qualitative approaches, as it formalises scenario development processes and more strongly adheres to ‘scientific standards’. Importantly, both methods are also discussed in direct comparison with regard to their effects on scenario users. In the following, IL and CIB are introduced and compared as suitable cases for the empirical study in this book.

2.3.1 Intuitive Logics

The method is often considered the “mainstream scenario approach” (Postma & Liebl 2005:163). In a review, Ringland (2008) presents multiple organisations that are engaged in scenario planning through the use of Intuitive Logics as pioneered by Pierre Wack. In his work at Royal Dutch Shell, Wack (1985a, b) developed Intuitive Logics, which was based on “creating a coherent and credible set of stories of the future as a ‘wind tunnel’ for testing business plans or projects, prompting public debate or increasing coherence” (Ringland 2008:183). This approach has been followed by generations of researchers and practitioners (although sometimes not explicitly declared as Intuitive Logics) and its heritage is evident in key scenario books (Ramírez & Wilkinson 2016; Schwartz 1991; van der Heijden 2005, 2008; van der Heijden et al 2002). The method has been constantly developed further (Wright et al 2013a) and new versions of IL have emerged. Advocates see it as a strength that the method allows flexibility in how and what procedures are applied, in what combination and with what methodological rigour (Ramírez & Wilkinson 2016).

Despite the new branches of IL and the absence of a standardised procedure, a basic structure is evident and summarised by Postma & Liebl (2005:163): A first step in the scenario process identifies factors that are expected to influence future developments. Distinctions are made between ‘predetermined elements’ and ‘key uncertainties’. While predetermined elements denote developments that already seem evident in the present or constitute slowly emerging phenomena, key uncertainties are the focus of the scenario analysis and are factors with unknown behaviour, i.e. factors whose individual developments as well as interactions with other factors are uncertain. A step of clustering the factors follows, in order to pay attention to the most relevant driving forces. This phase may be used to reduce the number of factors to a smaller, manageable set.

The actual construction of scenarios can then be pursued deductively or inductively (van der Heijden 2005). In the deductive manner, two critical uncertainties are identified, i.e. factors that are expected to have both a high level of uncertainty and a potentially high impact on the system under investigation.⁴ A two-dimensional scenario axis with the two key uncertainties as the x- and y-axes builds the underlying structure of the scenarios (figure 3). Sequentially, additional uncertainty factors can be added to the basic structure. Thereby, the four fields structured by the axes build the scaffold for the scenario development. The inductive approach starts by developing systems maps of the identified factors, from which patterns and focal questions of the scenarios evolve incrementally.
Essentially, both procedures differ in terms of the point in time at which some structure is placed onto the scenarios. Regardless of the procedure, IL is considered an iterative process in which scenario planners move back and forth between the phases of identifying factors and constructing the scenarios’ logics. A key characteristic of IL is that scenario users can be easily engaged in the development process.

⁴ Ramírez & Wilkinson (2014) note that the literature is vague about the arrangement of the two uncertainty factors. They see two possibilities: The factors can be arranged as ‘either-or’ factors, e.g. a factor ‘International Cooperation’ can be defined as developing towards either globalisation or protectionism. The factors can also be arranged in a ‘more-or-less’ fashion, e.g. the factor globalisation can develop to be more or less pronounced in the future.

Figure 3: The two-dimensional scenario axis in Intuitive Logics

Source: Based on van der Heijden (2005)
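The deductive procedure described above can be sketched in a few lines of code. The following Python snippet uses two hypothetical key uncertainties (invented purely for illustration; they are not taken from the scenario literature) and enumerates the four quadrants of the two-dimensional scenario axis:

```python
from itertools import product

# Illustrative sketch of the deductive IL step: two hypothetical key
# uncertainties (invented for illustration) span the scenario axis.
key_uncertainties = {
    "International cooperation": ("globalisation", "protectionism"),  # x-axis
    "Energy demand": ("rising", "falling"),                           # y-axis
}

# Each of the four quadrants of the axis becomes the skeleton of one
# scenario; additional uncertainty factors would be layered on later.
quadrants = [dict(zip(key_uncertainties, combo))
             for combo in product(*key_uncertainties.values())]

for i, quadrant in enumerate(quadrants, start=1):
    print(f"Scenario skeleton {i}: {quadrant}")
```

The sketch only produces the structural scaffold; the substantive work of IL, i.e. fleshing each quadrant out into a coherent narrative, remains a qualitative and iterative exercise.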

2.3.2 Cross-Impact Balance Analysis

The scenario method Cross-Impact Balance Analysis (CIB) emerged in 2006 and has been developed as an 'antithesis' to the method Intuitive Logics (Lloyd & Schweizer 2013; Weimer-Jehle 2006).[5] In 2012, the reconstruction by Schweizer & Kriegler (2012), using CIB, of the Special Report on Emissions Scenarios (SRES) commissioned by the Intergovernmental Panel on Climate Change (IPCC) aroused more widespread, international interest in the method (Carlsen et al 2017; Kemp-Benedict 2012; Schweizer & O'Neill 2014). CIB is based on earlier scenario development methods characterised by recombination, e.g. morphological analysis (Zwicky 1969) and cross-impact analysis (Helmer 1981).[6] In CIB, factors that describe a scenario are called 'descriptors' and may be qualitative or quantitative in nature. The procedure to arrive at a set of 'descriptor states' for each scenario is informed by clear procedural steps that contribute to the method's formalised character. First of all, a clear, formal definition of scenarios is prescribed: One scenario is a combination of outcomes, or states, for each descriptor. Within the method, effort is put into identifying relationships between descriptor states and determining combinations that are mutually enforcing, or 'internally consistent'. The method is based on three procedural steps: First, specify the descriptors and their possible states; second, make judgments (or assumptions) about how the descriptors and their possible states are interrelated; and third, evaluate the internal consistency of scenarios as descriptor-state combinations (Schweizer & Kriegler 2012; Weimer-Jehle 2006). As with IL, a variety of procedures can be used to complete the first two steps, for instance surveys of experts and stakeholders, workshops, Delphi methods or literature reviews. For the second step, assessments of the interrelations between descriptor states are gathered through judgments on an ordinal scale (usually from [-3] strongly inhibiting influence to [+3] strongly enforcing influence). These judgments are reported in a cross-impact matrix. Compared to IL, the matrix makes it possible to include a larger number of 'key uncertainties' as well as more than two possible end-states per descriptor. Based on the cross-impact matrix, CIB can produce many possible scenarios.

[5] Both methods are relevant given their use in what is called 'Storyline and Simulation' approaches (Alcamo 2008). The methods have been used in the coupling of qualitative scenarios with quantitative, model-based simulations (Weimer-Jehle et al 2016, 2020; Schweizer & O'Neill 2014; O'Mahony 2014).
In the reconstruction of the IPCC's SRES scenarios (Schweizer & Kriegler 2012), for instance, the matrix yielded 1728 possible scenarios, because this many combinations of descriptor states are possible (1728 = 2² · 3³ · 4², as specified by the number of descriptors and their respective states). Based on this basic quantity, CIB calculations can then determine the scenarios with strong internal consistency using dedicated computer software.[7] The procedures performed by the software, however, can also be retraced manually. For this purpose, the interrelations amongst descriptors reported in the cross-impact matrix are used to identify 'self-reinforcing' effects that make combinations of descriptor states more coherent. The assessment of internally consistent scenarios is carried out by calculating the 'impact balances' of selected descriptor states in the matrix. The impact balance scores are derived by analysing a scenario configuration extracted from the cross-impact matrix. When all the impact balance scores for each state of each descriptor have been calculated in this fashion, CIB can analyse whether the selected scenario configuration is internally consistent. In figure 4, the highlighted descriptor states A1, B4, C2, D3, E2, F2 and G2 are selected. The impact balance scores for each of the selected states (highlighted with downward-facing arrows) can then be checked as to whether the other states predominantly support the outcome in the selected scenario (see 'internal consistency check'). As illustrated in figure 4, this is the case for all selected descriptor states except E2. In this way, each possible scenario is checked for internally (in)consistent relationships. In the example of the reconstruction of the SRES scenarios, eleven of the 1728 possible scenarios were perfectly internally consistent. The internally consistent scenarios identified from the matrix can also be displayed as systems diagrams that show how the scenario descriptors support and inhibit each other.

[6] The following paragraphs on the procedure of CIB are based on an earlier publication of the author (Scheele et al 2018).
[7] The software Scenario Wizard is publicly available and can be downloaded using the following link: https://www.cross-impact.de/english/CIB_e_ScW.htm
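The manual retracing of impact balances can be sketched in a few lines of Python. The following toy example (with invented descriptors and judgment values; not the SRES matrix and not the Scenario Wizard implementation) computes the impact balance of every descriptor state under a scenario configuration and keeps only the internally consistent combinations:

```python
from itertools import product

# Toy cross-impact matrix on the ordinal CIB scale (-3 ... +3). All
# descriptors, states and judgment values are invented for illustration.
states = {
    "Economy": ["growth", "stagnation"],
    "Policy": ["ambitious", "lax"],
    "Emissions": ["low", "high"],
}

# impacts[(source, source_state)][target][target_state]: judged influence
# of the source descriptor's state on each state of the target descriptor.
impacts = {
    ("Economy", "growth"):     {"Policy": {"ambitious": 1, "lax": -1},
                                "Emissions": {"low": -2, "high": 2}},
    ("Economy", "stagnation"): {"Policy": {"ambitious": -1, "lax": 1},
                                "Emissions": {"low": 1, "high": -1}},
    ("Policy", "ambitious"):   {"Economy": {"growth": 0, "stagnation": 0},
                                "Emissions": {"low": 3, "high": -3}},
    ("Policy", "lax"):         {"Economy": {"growth": 0, "stagnation": 0},
                                "Emissions": {"low": -3, "high": 3}},
    ("Emissions", "low"):      {"Economy": {"growth": 0, "stagnation": 0},
                                "Policy": {"ambitious": 0, "lax": 0}},
    ("Emissions", "high"):     {"Economy": {"growth": 0, "stagnation": 0},
                                "Policy": {"ambitious": 0, "lax": 0}},
}

def balances(scenario, descriptor):
    """Impact balance of every state of `descriptor`, i.e. the summed
    influence exerted on it by the states chosen for all other descriptors."""
    return {s: sum(impacts[(d, scenario[d])][descriptor][s]
                   for d in scenario if d != descriptor)
            for s in states[descriptor]}

def is_consistent(scenario):
    """Internal consistency check: for every descriptor, the chosen state's
    impact balance must not be exceeded by any alternative state's balance."""
    for d in scenario:
        b = balances(scenario, d)
        if b[scenario[d]] < max(b.values()):
            return False
    return True

# Enumerate all 2*2*2 = 8 descriptor-state combinations and keep the
# internally consistent (self-reinforcing) configurations.
all_scenarios = [dict(zip(states, combo)) for combo in product(*states.values())]
consistent = [sc for sc in all_scenarios if is_consistent(sc)]
```

Under this toy matrix, two of the eight combinations turn out to be self-reinforcing (growth with ambitious policy and low emissions, and stagnation with lax policy and high emissions); the SRES reconstruction applies the same balance logic to its 1728 combinations.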

2.3.3 A comparison of both methods

IL and CIB stand for two scenario traditions that follow, respectively, discursive/creative and formalised/analytical approaches to scenario development. Chapter 2.1.2 emphasised that methodological choices are shaped by underlying ontological and epistemological assumptions, i.e. about what the future is like and how it can/should be studied. The main argument was that these choices can determine what kind of scenarios emerge as 'plausible' and 'implausible' from scenario development. This can be applied to the two methods. Both methods are based on comparable ontologies of the future: They view the future as unpredictable because not all uncertainty factors about the future are or can be known. Thus, both methods explicitly focus on only a limited number of relevant factors. For both methods, the future is also unpredictable because the interrelations between known and unknown uncertainty factors are complex and ambiguous. So, both methods attempt to understand and develop scenarios based on these complex relations. In other words, IL and CIB pursue similar questions: What system factors are uncertain, complex and ambiguous? How may they develop in the future? How do they interact and change the system? The main differences between


Figure 4: Cross-impact matrix illustrating impact balance calculations for the SRES scenarios by the IPCC

Source: Adapted from Schweizer & Kriegler (2012)


IL and CIB stem from their epistemological and methodological variations; that is, from what kind of knowledge is considered relevant for scenario construction and how this knowledge can be gathered and processed.

Epistemological Differences

Different guiding principles are evident for the two methods. Advocates of creative approaches like IL argue for a trustworthy process with higher perceived plausibility of processes and products. IL focuses on the learning of those people involved in the scenario development. At the same time, the small number of 'maximally different' scenarios is also expected to have an effect on external stakeholders (Ramírez & Wilkinson 2016). A higher sense of ownership and higher perceived plausibility are central benefits often mentioned for intuitive, creative scenario techniques. Unlike with structured, computer-assisted techniques like CIB, participants perceive control over the entire scenario process (Kosow & Leon 2015:236). Wilson (1998:81) argues that stakeholders accept the scenarios as their own, identify with them and, hence, are more likely to use the scenarios later. Proponents maintain that the point of scenarios is not to present a 'full picture' of the future but to demonstrate the irreducibility of the future's uncertainty by presenting a number of enlightening, relevant and contrasting scenarios (Ramírez & Wilkinson 2016; Wilkinson et al 2013). In contrast, CIB as a formalised and structured approach promises credibility by presenting a broader spectrum of consistent and plausible futures. CIB's attention to a larger number of scenarios, as well as its ability to limit attention to internally consistent (= plausible) scenarios, lends itself to the aspiration of achieving a more comprehensive representation of future developments. While IL focuses on an interactive group process to explore both uncertainty factors and their interrelations, CIB fosters separate assessments of factor relations and relies on software to process the system rules.
For advocates, this constitutes a significant improvement over IL, which, according to Carlsen et al (2017:613), "is not very transparent and has problems of reproducibility". In fact, epistemological discussions about CIB often centre on its scientific nature and objectivity compared to the narrative character of IL (Lloyd & Schweizer 2013). In the literature, scenario methods are discussed as sitting on a continuum between structuralist vs. creative approaches (van Vliet et al 2012) or simplicity vs. accuracy (Parker et al 2015). Both contrasts are applicable to the two methods (table 3).


Table 3: Contrast of guiding principles for IL and CIB

Intuitive Logics (IL) vs. Cross-Impact Balance Analysis (CIB)

Creativity vs. Structure: trust, understanding, identification, sense of ownership, perceived learning effect, satisfaction (IL) vs. credibility, consistency, system connectedness, relevance, method transferability (CIB)

Simplicity vs. Accuracy: mutual learning processes, interpretability, smaller number of detailed scenarios (IL) vs. more density, coverage, higher complexity, larger number of scenarios (CIB)

Source: Elaboration based on van Vliet et al (2012) and Parker et al (2015)

Methodological Differences

The methodological procedures of IL and CIB give rise to the different ways in which both methods 'manufacture' scenario plausibility. Hinkel (2008:46-47) defines a methodology as consisting of data, methods and actors, which in different combinations perform activities (individual steps in the methodology) towards a final output or product.[8] Activities can be more method-driven or actor-driven. The CIB methodology is characterised by both actor-driven and method-driven activities; it is propagated as a "more promising division of labour between man and method", in which "[e]verybody should contribute that at which they are best, namely the […] [actors] at recognising the impact within a complex system and the mathematical method at analysing how this impact pattern works" (Weimer-Jehle 2006:337-338, highlights added by RSS). As a soft systems analysis, CIB emphasises the value of human judgments in providing both the components and the 'rules' of the system, i.e. the themes and interrelations of the end-scenarios are fundamentally dependent on the actors involved. In other words, the gathering of data and system rules depends significantly on the actors' ability to think in causal relations, their knowledge about the field, and their personal opinions, values and expectations. At the same time, this process is also strongly method-driven, in that the method prescribes how and what kind of input the actors can contribute. Especially the selection of scenarios from the factor combinations can

[8] Hinkel acknowledges this framework as simplified; yet, it reduces the complexity of a methodology to its core characteristics and allows for communication and comparison of methodologies.


be interpreted as a purely method-based activity. According to Weimer-Jehle (2006:336), the main reason why a mathematical model is used to intervene is that "the human mind is limited in its capability of mentally processing multifactor-interdependencies". The proponents emphasise the method merely as a necessary guiding mechanism, helping developers as a "super brain" in processing factor combinations, without limiting the sovereignty of the human actors. Their core argument in support of human agency is the transparency and verifiability of the model by non-modellers as well as by those not involved in the development process. As opposed to more complex 'black-box' methods, CIB does not expect its developers and users to simply trust the method, but in fact allows them to trace why a scenario has been developed as it is. The internal structure executes a strict selection logic, e.g. by presenting only internally consistent scenarios. Individual human agents involved in the scenario development cannot directly influence the form and content of each scenario. In contrast to CIB, IL-based methodologies consist of mostly actor-based activities. A group of developers, together with their knowledge, worldviews, values and expectations, forms the centre of the process; the tools, e.g. the coordinates of the axes, are thought to merely stimulate, but not restrict, the actors in any way (Ramírez & Wilkinson 2016). According to Bradfield (2008:2), the steps of the scenario development process serve primarily to structure the considerations of stakeholders without restraining their creativity. These methodological handlings of the future can determine which futures are looked at and which are left out – or what scenarios are presented as plausible or implausible. Bruun et al (2002:108) refer to 'epistemic closures' to mean that any scenario method inevitably limits the construction to certain procedures and thereby excludes the collection of certain insights.
Epistemic closures are not meant here as a critique of (the presented) scenario methods, but as a rationale for critically reflecting on how and to what extent future possibilities are limited by actor- and/or method-driven activities. Methods, although not understood as rigid 'recipes', prescribe some structure that guides plausibility development. With reference to the CIB method, Liebl (2001, 2002) and Postma & Liebl (2005:162) argue that scenarios may systematically exclude discontinuities and paradoxes as logically impossible or inconsistent, and thus potentially fail to provide decision-makers with the information they need when facing unanticipated situations. Bruun et al (2002:108-109) likewise argue that the consolidation or even institutionalisation of a specific kind of epistemic closure can lead to 'conventional scenarios'. According to the authors, the problem is that scenarios can naturally only pay attention to those developments that the respective method allows to be considered. While formalised methodologies may be more obviously associated with epistemic closure (through imposed standards and quality criteria for scenario development), it is also evident in discursive scenario activities. Actors are vital components and substantially influence how the scenarios eventually look: Metzger et al (2010), for instance, have shown how individual judgments can significantly influence scenario outcomes. At the same time, the collective process of developing storylines can also foster patterns of content convergence. The persuasive and unifying power of narratives makes scenario methods like IL also prone to epistemic closures. According to Bowman et al (2013:737), narratives serve as 'sensemaking currency', and Bruun et al (2002:111) further argue that interactions of heterogeneous actors may result in the formation of discursive coalitions, which in turn can exclude scenarios that do not fit within the actors' discourse.


3 Scenario plausibility: emerging debates in research and practice

Scenarios and plausibility seem to be closely interconnected. Plausibility plays a central role in the methodological development of scenarios, as demonstrated in the discussion of the methods IL and CIB (chapter 2). Also, for the uptake and usage of scenarios, plausibility is assumed to be a key prerequisite. In scenario reviews, the concept is named as one of the most widely used criteria for scenario effectiveness (Amer et al 2013) and is considered a soft 'metric of evaluation' (Selin 2007:237). Yet, paradoxically, the omnipresence of plausibility in scenario research and practice stands in contrast to the very few closer investigations of the concept. Scholarly attempts have simply contrasted plausibility with possibility, probability and desirability on a rather broad level (Selin & Pereira 2013:3). Only in recent years have some scholars inquired more closely into what plausibility means in the context of scenarios and Futures Studies. Research workshops, e.g. by the University of Oxford and Arizona State University (2009), a roundtable during the S.Net - Society for the Studies of New and Emerging Technology (Selin 2011b), and a Special Issue of the International Journal of Foresight and Innovation Policy (2013) are indicators of this increased scholarly interest. Within the existing scholarly discussions, three types of contributions can be distinguished: First, scholars have emphasised certain qualities of the concept that add value to scenario processes. For instance, plausibility is defined as a "non-technical concept" (Strand 2013:111) which, in light of intrinsic uncertainties, opens up rather than closes down discourses about the future. In this context, a clear demarcation from probability assessments is made. The latter have not been thoroughly considered in the scenario literature, although longstanding research offers interesting insights into human judgments under uncertainty; probability is, therefore, discussed here as an excursus.
Second, acknowledging that plausibility lacks conceptual and methodological foundations, researchers have proposed ways to make it more tangible and operationalisable for scenario development and evaluation. Third, the debates point towards some practical consequences, which include a potential discrepancy between theoretical understandings and practical applications of plausibility. These strands of research are discussed in this chapter.

3.1 The value of plausibility for scenario planning

There is general agreement in the literature on the purpose of explorative scenarios: They present future developments and illustrate how these futures might occur, including developments that may seem improbable. Thus, the scenario community largely agrees that assigning probabilities to individual scenarios contradicts the core principles of scenario work. Yet, as several authors have pointed out, the intrinsic uncertainty of scenario subjects leaves an infinite number of possible futures to be presented (Trutnevyte et al 2016:374). Scenarios, however, are always only a limited representation of all theoretically possible futures. Whether a scenario exercise presents three or three hundred scenarios, the final set will always be a deliberate selection. To achieve such a purposeful and targeted collection of scenarios, plausibility is used as a vehicle for selecting scenarios and for presenting them as 'justified' assumptions about the future (Walton 2008:161). Some scholars have therefore proposed guidelines for scenario-developing teams on how to construct scenarios with high plausibility (Walton et al 2019). At first sight, plausibility as a concept may seem self-explanatory; scenarios need to be imaginable and feasible – simply plausible. Yet, scenarios also ought to challenge actors' mental maps and confront them with surprises and future shocks that were not imaginable prior to considering the scenarios. Dufva & Ahlqvist (2015:252) see the aim of foresight processes in testing "the limits of the futures horizon, that is, the scope of what is thought to be plausible (…)". In this notion, plausibility presents a continuum along which involved actors are pushed towards the edge of their imagination. Plausibility is thus also linked to learning and conceptual change. Yet, reviews merely specify plausibility as referring to scenarios that are capable of happening (Amer et al 2013:36).
One reason for the lack of research on scenario plausibility may be the diverging perspectives about the value of the concept for scenario planning.


An analysis of the scenario literature reveals fundamentally different opinions about plausibility. More specifically, the literature seems divided between those who…

• reject plausibility and argue for its impracticability and shortcomings;
• accept it as a concept without adequate alternatives;
• embrace it as a deliberate choice and distinct approach towards uncertainty.

Rejection

There are scholars who deny any value to be gained from plausibility. Two arguments are present: Millett (2003, 2009) points towards a discrepancy between theory and practice. According to him, the 'ideal' that multiple scenarios are considered equally plausible is a theoretical myth. While plausibility constitutes a theoretically sufficient idea, in practice scenario developers and users eventually focus on the most interesting, i.e. the most likely, scenarios. Morgan & Keith (2008:196) argue for the futility and ineffectiveness of the concept per se: "We cannot find any sensible interpretation of these terms other than as synonyms for relative subjective probability. Absent a supernatural ability to foresee the future, what could be meant by a statement that one scenario is feasible and another infeasible but that the first is (subjectively) more probable than the second?". They maintain that efforts to work with plausibility lead to biases and systematic overconfidence, and leave users with no effective guidance on how to interpret scenarios.

Acceptance

The raison d'être of plausibility in scenario activities is often simply attributed to the inadequacy of probability as an assessment criterion for scenarios. While most scholars agree that using probability would blur the lines between forecasting and scenario work, confidence in plausibility as a concept is often restrained, because it runs counter to more traditional aspirations for tangibility and measurability. For many, the concept is inevitable but at the same time lacks scientific character. There are recent attempts to upgrade it: by linking it with other evaluation criteria such as transparency (Alcamo 2008), or by developing indicators for plausibility. Wiek et al (2013), for instance, require internal consistency as a necessary but not sufficient criterion for plausibility. A related idea is to establish plausibility through formal methods instead of having stakeholders determine plausible scenarios themselves (Lloyd & Schweizer 2013).

Embracement

Uruena (2019) proposes two distinct angles for theorising the potential of plausibility in scenario planning processes: For the construction process of scenarios, plausibility is viewed as a criterion to limit the space of scenarios towards a meaningful selection of pathways. For the assessment phase, plausibility serves as an epistemic device that helps individuals open up to more diverse future perspectives. Ramírez & Selin (2014:64) describe the decision for plausibility as an active, deliberate choice based on ontological and epistemological preferences. The argument is as follows: "Probability-focused scenarios claim to offer enhanced views of the likelihood of a given future over alternatives. For the plausibility-focused, that purpose is impossible: for them, if one could establish probability about a given future it would be unnecessary to do scenarios". Acknowledging the presence of plausibility in every step of the scenario process is another way of embracing the concept: Preparatory steps – be it for qualitative or quantitative scenarios – require plausibility judgments on behalf of the scenario team regarding the selection of techniques, system boundaries and variables, and the choice of how to present scenario outputs.

In sum, not all scenario scholars value plausibility as an essential concept for the development and assessment of scenarios. What the arguments have in common is that the scholars who articulate these different viewpoints all develop and apply their own understanding of plausibility in their research. As a result, scenario techniques embody different underlying notions of what plausibility is and how it can be determined (the methods IL and CIB are only two examples here).
Across the different perspectives in the scenario literature, different sources of plausibility are also identified: Plausibility has been associated with logic and inferences (Walton 2008), with narratives and storylines (Bowman et al 2013; Eidinow & Ramírez 2016), as well as with individuals' cognitive capacities (Morgan & Keith 2008). The diverging perspectives on scenario plausibility also result from epistemological confusions about the concept. The starting point for further inquiries has, therefore, often been the differentiation of plausibility from other epistemological constructs. By contrasting plausibility with probability, the qualitative character of the former is emphasised. Yet, Ramírez & Selin (2014) maintain that there


are inherent problems in separating the two terms and explain this confusion with an etymological excursus. In its early definition, plausibility was defined as 'likelihood', but was also used to denote 'false appearance'. The subjective character that many authors have criticised may also stem from its connotation with receiving 'applause'. This has continued to the present, in which plausibility is used to mean "the appearance of believability and credibility" (Ramírez & Selin 2014:57). Following from the etymological diversity of plausibility, the scenario scholars presented in this chapter also stress an epistemological difference between plausibility and probability. This perspective, however, is less explored. Betz (2010:87), for instance, argues for the need to investigate more carefully 'possibilistic' reasoning about the future as something that cannot simply be neglected given the dominance of probability. As this book seeks to investigate what kind of knowledge plausibility refers to and how it is assessed, it is helpful to start with an excursus that briefly discusses probability as a mode of knowledge and knowing under risk and uncertainty.

3.2 Excursus: probability and judgment under uncertainty

A juxtaposition of two perspectives on probability assessment is relevant for this book: First, over the past centuries, human-made systems have been developed to represent knowledge in the presence of uncertainty. Probability distributions are the most commonly known systems for representing risk and uncertainty. Their underlying principles are formulated in a normative manner, i.e. they prescribe how one should legitimately reason in the face of uncertainty. Second, a great number of scholars across different disciplines – economics, psychology, sociology – have provided compelling empirical evidence revealing distinct patterns in how humans actually make judgments under uncertainty. These patterns follow considerably different rules from those suggested by epistemologists. Both perspectives are briefly discussed, with a focus on the patterns of human judgment, given the sociological, empirical interest of this research.

Epistemological perspectives on risk and uncertainty

Knight (1921) distinguished three different epistemic modes of relating to knowledge – certainty, risk and uncertainty. States of risk refer to situations in which all uncertain factors relevant for a judgment are known and, hence, clear probability distributions can be established. In states of true uncertainty, however, uncertainty factors are insufficiently known, and their complexity and ambiguity make a quantification or distribution impossible. While for Knight (1921) this state implies greater uncertainty and requires a move towards mere possibility, other scholars link it to decisions under ignorance or 'the things we don't know we don't know' (Tversky & Fox 1995; Wynne 1992). Indeed, there have been disputes for decades as to whether a clear distinction between risk and uncertainty, as proclaimed by Knight, is appropriate (see Runde [1998] for an elaborate discussion). While the two concepts do not fully coincide, contemporary scholars argue that risk and uncertainty are systematically related. Uncertainty relates to a lack of knowledge about outcomes, and this uncertainty also prevails when the probabilities of a risk are precisely known. Uncertainty is considered a ubiquitous "experience of modernity" (Zinn 2006:281) and is linked to a (perceived) lack of information, or too much and/or conflicting information (Zimmermann 2000).[1] Consequently, several theories exist for dealing with uncertainty. The most common is based on the Kolmogorov axioms of probability (Jaynes 2003). For discrete probability functions,[2] i.e. functions defined over a set of countable and mutually exclusive events (e.g. when rolling a die), the probability distribution is based on the additivity rule. It holds that the probability P of each of two mutually exclusive events X1, X2 may individually range between 0 and 1, and that the probability of their union is the sum of the individual probabilities: P(X1 ∪ X2) = P(X1) + P(X2). At the same time, the probabilities of an event and of its negation must always add up to 1.
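As a minimal illustration of these axioms (a generic sketch, not tied to any particular source in this chapter), the following Python snippet checks the additivity and normalisation rules for a fair die:

```python
from fractions import Fraction

# A fair die: six mutually exclusive, equally likely outcomes.
P = {face: Fraction(1, 6) for face in range(1, 7)}

def prob(event):
    """Probability of an event, i.e. a set of mutually exclusive outcomes."""
    return sum(P[face] for face in event)

even, odd = {2, 4, 6}, {1, 3, 5}

# Additivity: P(X1 ∪ X2) = P(X1) + P(X2) for disjoint events.
assert prob(even | {1}) == prob(even) + prob({1})

# Normalisation: an event and its negation always add up to 1.
assert prob(even) + prob(odd) == 1
```

Exact fractions are used here so that the additivity check holds without floating-point rounding.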
Following from these foundations, three different 'probability situations' can be distinguished (Runde 1998:540-541): First, numerical probabilities can be assigned to events a priori on the basis of solid, available knowledge, for example when assigning a probability of 1/6 to each side of a die. Second, probabilities can be assigned to events based on frequency distributions, for instance when statistical results show that one in 300 photovoltaic installations shows signs of glass breakage. Third, when there is no valid knowledge basis for assigning probabilities, only estimates can be given. For such cases, Bayesian probability theory is a proposed alternative to classical probability distributions (Kangas & Kangas 2004:170).[3] It refers to a subjective, personalised view of probability: An individual assigns a probability p to the occurrence of an event X following the principle p(X|e), whereby e denotes all the relevant evidence the individual has (Morgan & Keith 2008:197). Hence, even subjective assignments of probability still need to follow the core principles of probability theory. In essence, the difficulty of clearly conceptualising risk and uncertainty and the lack of an unequivocal framework have led to a great number of different epistemological approaches that prescribe how uncertainties are to be represented; these range from more 'objective probabilities' to rather 'subjective possibilities'. Alternatives to probability approaches are, for example, evidence theory (an approach that follows subjective probabilities but does not prescribe probability rules [Schafer 1978]) and fuzzy set theory (an approach that determines whether an event is more or less true by establishing more precise definitions of the event, which can then be assessed using membership functions [Zimmermann 1985]).[4] Despite their nuanced elaboration and discussion in the philosophy of science, these approaches have not been widely implemented in empirical research, because they tend to be rather complicated, abstract and too far removed from empirical reality (Kangas & Kangas 2004:183).

[1] This discussion of uncertainty is by far not exhaustive but is deemed sufficient for the purpose of this book. Rowe (1994), for instance, distinguishes between uncertainty as metrical (relating to the difficulty of measuring it), structural (the complexity of a system), temporal (the future-orientation of uncertainty) or translational (the difficulty of explaining uncertainty). For a more elaborate discussion of epistemological uncertainty, see Visschers (2018); for a discussion of uncertainty in relation to risk, see van Asselt & Rotmans (2002).
[2] For this book, discrete probability functions are sufficient to explain the basic principles of probability theory in relation to empirical studies of human judgment under uncertainty. Continuous probability distributions and probability density functions are, therefore, left out.
This may also be because all these approaches rest on a core assumption about the rationality of individuals: The rational actor carefully assesses the probability or serious possibility of each possible outcome as well as the expected utility and ultimately decides for the optimal combination of likelihood and utility (Gilovich & Griffin 2002:1). These conceptual assumptions have been seriously challenged by longstanding empirical research.

Human judgments under risk and uncertainty

How both laypeople and experts assess the probability of uncertain events has been of interest to economists, social scientists and psychologists for more

3 Here, subjective probabilities represent prior beliefs about the values of events. The term Bayesian should not be confused with Bayes’ Rule, in which prior distributions are updated; see Oaksford & Chater (2007).
4 For discussions of evidence theory and fuzzy set theory, see Kangas & Kangas (2004).


than half a century. Since then, an enormous number of empirical studies have supported what Simon (1957) famously termed ‘bounded rationality’: It holds that people do reason rationally, yet not in the sense of following the rules epistemologists have set out for them, for instance through the laws of probability. How individuals process uncertain information has been discussed using many different approaches. The framework by Petty & Cacioppo (1986), which distinguishes between a central and a peripheral route of processing, remains among the most influential. Given the enormous amount of information individuals are confronted with on a daily basis, they are likely to decide what effort is necessary to make a judgment. In the central route, individuals are assumed to first assess the probability of each statement and then assign weights based on the personal salience of each statement. Perceived credibility, plausibility, personal experience and the perceived motives of the statement’s author play a role (Renn & Rohrmann 2000:24). The peripheral route, in contrast, holds that individuals perform judgments holistically by looking for assessable clues, such as the statement’s length and complexity or its form of presentation.

Upon receiving information, individuals apply mechanisms, particularly with regard to assessing the probability of statements, known as cognitive or intuitive heuristics. Here, the influential research by Tversky & Kahneman (1974, 1981) was in its early stages framed as evidence that people’s judgments of uncertain statements do not conform to the basic principles of probability theory. Over time, empirical studies have moved away from viewing individuals’ assessments as ‘simplistic’ or even ‘wrong’, towards acknowledging that judgments follow very different rules and heuristics to deal with uncertainty.
These are guided by people’s natural constraints, including their limited time for investigating a statement, the often overwhelming amount of available information as well as their own capacity and effort (Gilovich & Griffin 2002:2-3). While cognitive heuristics can be interpreted as obvious violations of logical, probabilistic rules which also do not exactly correspond with subjective probabilities in the Bayesian sense, more contemporary discussions demonstrate that the systematic and consistent patterns may actually be more appropriate and targeted responses than those proposed by classical utility theory (Gigerenzer 2000; Renn 2008). Over the years, the list of heuristics has been constantly enlarged. A detailed discussion of all of them is beyond the scope of this excursus; table 4, therefore, summarises the main heuristics. Insights on cognitive heuristics are also relevant for the psychological research on risk perception. The core message is that individuals’ assessments of


risk often do not correspond with the findings of expert-based risk assessments. Individuals’ judgments are the product of multidimensional judgment processes that go beyond assessing the probability of an anticipated harm and evaluating the personal consequences (IRGC 2005; Sjöberg 2000a; Slovic et al 1982). Based on the work of Slovic (1992) and Renn (2008), qualitative risk characteristics have been established that can impact individuals’ risk perception and consequently increase or decrease risk tolerance, for instance the voluntariness of exposure to a risk, the control over the risk or the familiarity of the risk. Studies have also demonstrated that individuals’ personality and character, but also their perceptions of their environment as well as their worldviews on aspects such as nature, technology and society, can accentuate the impact of risk characteristics (Sjöberg 2000b).

The extensive research on risk perception furthermore demonstrates two important aspects: First, risk perception has been discussed as a multidimensional product that is influenced not only by individuals’ cognitions and affects, but also by social and cultural drivers (Sjöberg et al 2004; Zinn 2006). Sociological and cultural perspectives on risk perception form a large body of research that does not put forward distinct underlying causes for certain judgment patterns. Rather, it has been very influential in revealing relevant context factors that need to be understood in relation to risk perception (Renn 2008; Rippl 2002). While an elaborate discussion of these factors is not possible in this excursus, the impact of social values (Sjöberg 2000b) and the perceived credibility of and trust in the institutions or people that issue information (Siegrist et al 2000) are worth mentioning. Trust, in particular, is constituted by several further components and value judgments on the side of the evaluator. This demonstrates the complexity and intricacy of the dynamics relating to risk.
Second, risk perception research naturally leads to questions of risk communication and risk governance. In this context, a large body of research has investigated how individuals’ uncertainty and risk perceptions vary across different formats of presentation. In a nutshell, if uncertainties are presented as numbers, percentages are often misinterpreted; risks communicated as rates are often better understood than when presented as proportions; and information on relative risk reduction is often misinterpreted as absolute risk reduction (Visschers et al 2009). Research on the use of verbal representations shows that when uncertainties are communicated by phrases such as ‘weak indication – indication – strong indication – evidence’, the scales are regularly interpreted differently by recipients (Thalmann 2005). Visschers et al (2009) emphasise that not only the format matters, but also the


context under which the information is provided. Also interesting in this respect are newer studies focusing on scientific uncertainty (Visschers 2018) and on uncertainty resulting from incomplete scientific knowledge (ambiguity) or contradictory scientific knowledge (conflict) (Visschers 2015). While research on human judgment under uncertainty was long concerned with uncertainty about the probability of a risk, these empirical studies show that individuals tend to be particularly averse to uncertainty resulting from conflict.
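The relative versus absolute risk-reduction confusion mentioned above can be illustrated with hypothetical numbers; this is a sketch, not data from the cited studies:

```python
# Hypothetical numbers: the same treatment effect expressed in two formats.
baseline_risk = 0.02  # 2 in 100 untreated people experience the harm
treated_risk = 0.01   # 1 in 100 treated people do

absolute_reduction = baseline_risk - treated_risk        # about 1 percentage point
relative_reduction = absolute_reduction / baseline_risk  # about 0.5, i.e. "risk halved"

# A "50% lower risk" headline is easily misread as 50 percentage points,
# although the absolute change here is only about 1 point.
print(f"absolute: {absolute_reduction:.2%}, relative: {relative_reduction:.0%}")
```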

Table 4: Cognitive heuristics in human judgment under uncertainty

Availability: Events that are more readily recalled by individuals are judged as more probable than events that are less mentally available.

Representativeness: Events that individuals have personally experienced are perceived as more significant for assessment than information based on frequencies.

    Conjunction fallacy: Individuals assign a higher probability to the conjunction of two events than to one of its component events alone.

    Disjunction fallacy: Individuals assess a disjunctive statement to be less probable than at least one of its component statements.

    Base-rate fallacy: When given general background information (base-rate information) and information about a single case, individuals base their judgment on the latter.

    Insensitivity to sample size: Individuals judge samples of different sizes as having similar properties, disregarding that smaller samples show higher variation.

Anchoring: Individuals adjust probabilities to the information that is available and perceived as directly relevant.

    Overconfidence: Individuals’ confidence in their own judgment is greater than the more ‘objective’ accuracy of the probability.

Avoidance of cognitive dissonance: Information that challenges individuals’ existing beliefs tends to be downplayed or ignored.

Affect heuristic: In their judgments, individuals’ emotions, e.g. fear, pleasure, surprise or comfort, play an important role.

Source: Based on Renn (2008:103) and Gilovich et al (2002)
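The conjunction fallacy in table 4 can be illustrated with a toy simulation using hypothetical attribute frequencies, loosely echoing Tversky & Kahneman's 'Linda' problem: in any population, the share of people with both attributes can never exceed the share with either attribute alone.

```python
import random

random.seed(42)

# Hypothetical population: each person either is or is not a bank teller,
# and either is or is not active in the feminist movement.
population = [(random.random() < 0.1, random.random() < 0.6) for _ in range(10_000)]

p_teller = sum(1 for teller, _ in population if teller) / len(population)
p_both = sum(1 for teller, feminist in population if teller and feminist) / len(population)

# Probability theory demands p(A and B) <= p(A); judging the conjunction
# as MORE probable, as many respondents do, is the conjunction fallacy.
assert p_both <= p_teller
print(f"p(teller) = {p_teller:.3f}, p(teller and feminist) = {p_both:.3f}")
```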


Coming back to plausibility, this excursus raises the question of how plausibility relates to concepts of probability and possibility. Indeed, some attempts are visible in the scenario literature to situate plausibility among these existing concepts of knowledge (figure 5). Specifically, the literature on plausibility has proposed that:

• Plausibility is nothing more than (subjective) probability (arrow 1).
• Plausibility is an intensification, or a special form, of the possibilistic mode. The idea is to limit the space of possibilities to a workable sample without falling into the trap of probability (arrow 3).
• By emphasising plausible scenarios as more specific and meaningful outputs as opposed to mere possibilities, plausibility is located between probabilistic and possibilistic modes (arrow 2). Populating the interspaces between risk and uncertainty has in fact been a more general scientific debate (see Betz [2010] and his proposition of an ‘intermediate mode’ between probability and possibility). Concepts like Bayesian probabilities have been brought up as propositions to fill the void conceptually but are empirically questioned.
• Incomplete knowledge about the future prevents complete accounts of possibilities. Instead of pretending to have achieved a full account of all possibilities, a few plausible scenarios may even do better justice to the deep uncertainty and the remaining blank space about the future. Therefore, plausibilistic scenarios are also sometimes located in the vacuum between what is possible and what is not even known about the future (arrow 4).

In sum, with plausibilistic knowledge, scenario planning enters new epistemological territory. Figure 5 exposes the difficulty in defining what kind of ‘knowledge’ plausibility-based scenarios refer to in contrast to other, more specified epistemological concepts. Yet, while the purpose of this book is not primarily to establish an epistemological notion of plausibility, it seeks to analyse how humans make judgments about scenarios’ plausibility. Here, the extensive literature on probability assessment can serve as a relevant reference point for the subsequent analysis of plausibility judgments.


Figure 5: Plausible scenarios within Knight’s epistemic modes of knowledge

Source: Illustration informed by Knight (1921)

3.3 Operationalising and assessing scenario plausibility

Research about what plausibility is and how it can be established is tied to epistemological debates on futures knowledge. Previous contributions to plausibility thereby reflect opposing positions in Futures Studies. On the one side, positivists hold that by collecting and processing all available data in the present, future pathways can be ‘discovered’ using appropriate methods. On the other side, constructivists maintain that the future, as presented in scenarios, is the product of actors’ beliefs, experiences and interactions with one another (Bell 1997; Inayatullah 1990; Slaughter 2002b). Social constructivists focus on the idea that scenarios, as ‘social constructs’, take shape through the decisions, choices and understandings of actors.5 The debates on plausibility likewise reveal two camps: One proposes clear, operationalisable indicators to define plausibility, while the other emphasises plausibility as a product of social processes that evade attempts towards ‘methodological rigor’ (Selin 2011b:238). Both pursuits understand plausibility as a peculiar category of relating to the future. Yet, disagreements dominate the research landscape. Operationalised approaches see plausibility as linked to the subject matter itself; from

5 For a detailed discussion of epistemological debates in Futures Studies, see Fuller & Loogma (2009).


this perspective, the plausibility of statements can be independently established using clear indicators. The contextualised path, in contrast, connects plausibility more strongly with the actors involved as well as their contexts. Here, plausibility is not intrinsic to the scenario itself but is bound up with ‘contextual properties’ (Selin & Pereira 2013:2). Plausibility is often presented as a ‘soft measure’ for the quality of scenarios and as an effort to tackle critical questions about scenarios’ validity (Selin 2011b:237). Both approaches seek to give substance to plausibility as a measure for establishing and assessing scenarios and are discussed in turn.

More rigorous, operationalised stances interpret the concept as “a moderate extension of probability” (Wiek et al 2013:136). In this sense, a rather pragmatic stance prevails: Plausibility is not considered a matter of likelihood but is still defined as a measurable, scientific criterion. Wiek et al (2013) developed a framework for appraising the plausibility of scenarios through six indicators: The scenarios (1) are ‘theoretically occurrable’; (2) have occurred in the past under different circumstances, or (3) under similar circumstances; (4) are occurring at present elsewhere under different, or (5) under similar circumstances; or (6) are occurring currently at the same location as in the scenario. The indicators account for the fact that “plausibility scenarios are composed of elements that are to a sufficient degree grounded in what we consider ‘real’” (Wiek et al 2013:138). In other words, plausibility is about the evidence a scenario holds that relates it to past or present developments. O’Mahony (2014:46) also contends that plausibility is established by referring to past and present situations as well as driving forces and comparisons with existing forecasts. As a limitation, he mentions that wildcards, catastrophes or contradictions are consequently not subject to the analysis.
A similar stance is taken by scholars who refer to internal consistency as a necessary condition for scenario plausibility. Taking consistency as an indicator of a scenario’s plausibility enables an objective measurement that can formally assess plausibility. In contrast to this appreciation of plausibility, other scholars warn against defining plausibility in a narrow way, because “a razor-sharp gaze would destroy the richness encapsulated with the non-deterministic concept” (Selin 2011b:239). Bosch (2010:387) holds that plausibility means being “in accordance with […] empirical findings; subjective/intersubjective ideas, thoughts, and feelings; and the opinions of and cultural categories used by others”. The author embraces the subjective nature of plausibility as fruitful and valid. This corresponds to views in the scenario literature where plausibility is considered a medium for creativity and indispensable for the exploration of the unknown.


In the opinion of some scenario scholars, however, this adherence to creativity and intersubjective knowledge may lead to more detailed scenarios, which in turn produces downsides: The more detailed scenarios are, the more plausible they are considered by potential users, so that their likelihood may be overemphasised (Morgan & Keith 2008). Also, the ambition to incorporate values and informal knowledge may cause potential users to interpret the scenarios as more plausible and credible; simultaneously, this can reduce the capacity to include surprises or future shocks and “undermine[s] the sense of mere plausibility that enables groups of individuals with differing world-views to engage with scenarios” (Lempert 2012:632). Kahn (1984) pointed towards this problem when he wrote that “the most likely scenario isn’t”. Particularly in terms of exploratory designs, a set of plausible scenarios does not automatically make for a ‘good’ scenario set.

In the contextualised perspective, plausibility is in itself a subjective criterion and, in turn, triggers subjective assessments and measures. The concept’s purpose can thus be interpreted as gaining acceptance and eventual use by actors. Other scholars contend that justifications of the nature of scenario knowledge depend on the argumentation associated with it. In other words, whether one scenario (e.g. a longer-term power failure in Germany) is merely possible, plausible or even probable can only be substantiated through lines of argumentation (Grunwald 2013:6). According to participants of a S.Net Roundtable on plausibility, too fine-grained modes of assessment run the risk of conflating plausibility and subjective probability and, in turn, would neglect any other means of interpreting uncertain knowledge about the future. Scholars see the value of plausibility in the social processes that deeper discussions of scenarios can bring about.
According to Ramírez & Wilkinson (2016:169), plausibility needs to be co-produced, and hence, only those involved in this process will be able to understand justifications on plausibility. Plausibility is here explicitly linked to learning with scenarios, where learning occurs at the interspace between ‘too plausible’ and ‘too implausible’ scenarios.

3.4 Critical reflections on scenario plausibility

Scenario plausibility research is still in an early phase of development. The previous review demonstrates that the most elaborate investigations of the concept’s nature and its operationalisation have been pursued from more constructivist-driven perspectives. For plausibility to unfold its value, these perspectives on scenario planning, however, require ‘default’ actor constellations. In other words, a co-production of scenario plausibility demands a close (physical) interaction between scenario team members, including those stakeholders who are expected to make use of the scenarios. Wilkinson & Ramírez (2009:6) maintain: “From our perspective, it appears that the most effective basis for plausibility in scenario work at the science-policy interface must be co-produced.” Yet, this can run counter to some practical observations. The extant literature as well as practical scenario reports demonstrate that scenario projects often need to deliver a small number of scenarios that are usable for a wide range of actors (Trutnevyte et al 2016). This implies a focus on scenario products instead of scenario processes. In these contexts, institutional divides between scenario developers and users can also become an obstacle. Wilkinson & Ramírez (2009), therefore, warn against different ‘cultures of plausibility’ that may clash, especially between scientists’ and policymakers’ understandings of what constitutes a plausible future. Few scholarly contributions have pointed towards the imperative to pay closer attention to the different understandings and workings of plausibility within different actor worlds.

A proliferation of ever new scenario development methods and techniques has meant that an enormous number of different scenarios ‘float around’ within society among a wide range of actors. Selin (2011b:237) summarises this as a “trafficking of futures” and maintains regarding plausibility: “The work of a futuristic claim occurs not just at the point of origin (i.e. a scenario-building workshop) but continues as the claim travels in new settings.” This directs attention to the complex dynamics of establishing and assessing plausibilistic scenario claims.
Participants of the S.Net Roundtable, for instance, point to the need to investigate the relationship between plausibility and credibility. They contemplate whether scenarios can become instrumentalised as a rationale for already decided actions (Selin 2011b:238-239). Furthermore, they suggest investigating plausible scenarios’ potential to be exploited as ‘tools for persuasion’. This also raises questions about the role of power residing in plausible scenarios. Topic experts may enforce more formal modes of plausibility to the detriment of actors with other ways of relating to the future. If plausibility is about evidence of the past and the present, attention needs to be paid to how different actors respond to different forms of evidence.

The potential influence and power of scenarios in steering public discourses also invoke concerns about the ethics of scenarios, more specifically the ethical responsibility surrounding their creation: What is presented as plausible knowledge about the future? Whose plausibility is presented? The question of ethical responsibility also applies to the level of methods. Wachs (1985:xiv) argues that any method is underpinned by some moral implications. Following key futurists Bell (1997:113) and Kicker (2009:167), ethical responsibility lies with the futurists, i.e. the scenario developers. They not only have to try to foresee the future but also to consider how their own view of the future may impact society. Nordmann (2007) and Nordmann & Rip (2009) also point to the danger of producing scenarios, or in their words ‘speculations’, that are illegitimately used in the present. In a similar vein, Cunha et al (2006:950) criticise plausibility when it merely represents “expectancy confirming evidence”. Other authors have emphasised the performative effects of scenarios: Adam (2004), for instance, assumes that scenarios, once developed, are inevitably granted an “ontological status”, i.e. they become a social reality for actors in the present. Applied to the study of plausibility, the depiction of one scenario as plausible may be used as justification for action in the present.

4 Conceptual explorations: plausibility across disciplines

The previous chapters have shed light on scenario planning research and the challenges in establishing and evaluating both scenario objectives and scenario uses. Attempts to conceptualise plausibility from a methodological perspective, as either an operationalised, independent assessment criterion or a contextualised social process, have also been discussed. Taken together, theoretical concepts that help to better understand and empirically study scenario plausibility from the perspective of scenario users are still missing. The book, therefore, explores theoretical and empirical approaches to plausibility residing in academic disciplines outside of the pertinent scenario literature. Here, it draws upon the assumed connection of scenario plausibility with informal logic and inferences (Walton 2008), storytelling (Bowman et al 2013; Schwartz 1991; van der Heijden 2005) and the grounding of scenarios in cultural narratives (Boenink 2013; Strand 2013), as well as with individuals’ cognitive capacities (Morgan & Keith 2008). To structure and target this theoretical exploration, a framework is used that synthesises findings from the state of scenario research (chapter 2) and extant plausibility debates (chapter 3). More specifically, it targets the exploration to those forms and conditions of interaction between scenario producers and users that are most relevant for individuals’ plausibility assessments. After introducing the framework, the remainder of this chapter discusses concepts of plausibility from three different disciplinary perspectives and highlights their relevance for scenario contexts. The final sections discuss the insights and derive key research propositions to be applied and tested in the empirical study.


4.1 Framework for exploration: the life path of scenarios

As a framework, the ‘life path of scenarios’, originally proposed by Grunwald (2011), is introduced and enhanced. It marks three main phases in the life of scenarios: the development of scenarios, some evaluation of the produced insights, and a phase of scenario uptake or usage. At first glance, the life path may appear as a simple, linear process: Scenarios are brought to life and are then evaluated before they can be put to some action by individuals or organisations. Yet, the life path can be populated and shaped by scenario producers and scenario users in different ways. In fact, the relationship and communication between both actor groups implies different trajectories and meanings of plausibility along the life path. To illustrate these different forms and conditions of producer-user interaction and their consequences for scenario plausibility, Habermas’ (1976:120ff) three modes of communication between science and politics are applied: the pragmatic, technocratic and decisionist modes. The two sub-systems ‘science’ and ‘politics’ are expanded to mean scenario producers (oftentimes scientists or consultants) and scenario users (i.e. politicians, stakeholders, citizens).

Pragmatic mode of scenario practice

The relation between science and politics has been critically discussed in the social sciences for decades. Given the increased demand for scientific advice on socially relevant issues, Habermas (1976) proposed a dynamic interaction between the social systems ‘science’ and ‘politics’. A ‘true dialogue’ between advisors and advisees allows not only for an exchange of scientific findings but also for an exchange of practical requirements, mutual values and ideas. In this non-hierarchical relationship, common definitions and understandings can be established between all parties (Sager 2007). Many participatory, interactive scenario activities resemble characteristics of Habermas’ pragmatic mode.
Knowledge about the future is established in a mutual interaction process involving a variety of actors (scenario researchers, users, stakeholders, lay persons). In this context, plausibility assessments are viewed as social accreditation processes, providing guidance and legitimisation. While actors may also individually assess the plausibility of scenarios, in this mode, actors rather establish the plausibility of those scenarios they have developed themselves and in which their values, ideas and models about the future have been directly incorporated. For this mode of communication to be successful, scenario users, their demands, expectations and values need to be clearly defined so that scenarios can be specifically designed for a clear set of actors for a very specific purpose (Ramírez & Wilkinson 2016). As such, plausibility is not established about scenario products; rather, the mutual interaction processes are emphasised. Advocates of these notions of plausibility criticise that a separation of users and producers may not only run counter to learning processes with and through plausibility-based scenarios but may also deny the concept of plausibility any value. For them, plausibility can only be helpful for potential users when these users are included in the key phases of scenario development.

Technocratic mode of scenario practice

In the scientific-technocratic mode of communication, a separation of science and politics is vital because it enables an effective coordination of the production, distribution and exploitation of scientific knowledge (Andersen & Woyke 2013). In a scientific-technical civilisation, political norms and ideologies are replaced by inherent necessities (‘Sachgesetzlichkeiten’) (Schelsky 1961) and, thus, politics, or knowledge users, become the executive agency of science. As Kropp (2013:10) notes, all that is left to planners or stakeholders is the implementation of decisions that have already been made by scientists without their addressees. In scenario planning, this technocratic separation is reflected in dominant practices. Dieckhoff (2015) distinguishes three process types (interactive entanglement, iterative separation and sequential separation of scenario producers and users) and notes that most studies he analysed represent sequential separations. This implies that commissioners are involved in discussions about scope and scenario directions, yet are not involved in the actual development processes. While this separation may be an intentional choice, e.g.
when scenario developers wish to construct scenarios without the influence of third parties, it may also be a necessary decision, e.g. when complex matters need to be studied and processed for a very heterogeneous group of scenario users and it is impossible to attend to all those users’ needs and preferences (Wilkinson & Eidinow 2008). In the technocratic mode, considerably different trajectories of plausibility take effect than in the pragmatic mode. Plausibility is not jointly negotiated but rather prescribed by scenario developers and their methods. Scenario developers make critical decisions regarding what scenarios to display and what methods and quality criteria to choose in order to arrive at a final set of scenarios. Plausibility is then independently created by scenario user-recipients. Critics of Habermas’ technocratic communication mode have noted that the technical feasibilities produced by science cannot fully replace political feasibilities and desirability (Lompe 1972). In a similar vein, a number of scenario researchers have raised the concern that scientific and political notions of plausibility may differ greatly (Ramírez & Wilkinson 2009), so the desired uptake of scenarios by intended users may fail (Trutnevyte et al 2016).

Decisionist mode of scenario practice

In Habermas’ third mode of communication, political decisions form the main unit of analysis. It is assumed that political actors use scientific insights selectively, depending on their own interests. The significance of scientific actors, as opposed to the technocratic mode, is marginalised as they can merely offer scientific discoveries (Sager 2007:431). Decisions follow the rationalities of politicians and are formed independently of an interaction or communication with the science producers. This mode is most often associated with the tactical and strategic (mis)use of scientific findings (Konzendorf 2013). Applied to the study of scenario plausibility, this notion explains how plausibility assessments and the ultimate decision of whether to consider a set of scenarios remain with the scenario users and are beyond the direct influence of the scenario developers. In his elaborations of the life-path research agenda, Grunwald (2011:13) notes that any decision-maker who chooses to consult scenarios faces a critical judgment even before the actual decision; that is, to form an opinion as to what scenarios to consult and how to assess them. Given the multitude, heterogeneity and often contradictory nature of scenarios, a decision-maker must ultimately make a judgment call.
While this need for evaluating information applies to many decision problems, it is particularly pressing and insufficiently researched in the context of scenarios, because the nature of the knowledge presented in scenarios as well as the development processes are less clear to the decision-maker. According to Grunwald (2011), a variety of evaluation processes are at play: Evaluating scenarios involves inquiring backwards, because the decision-maker’s judgment will be based on the development of the scenarios and their normative and/or assumptive substance. It also requires decision-makers to inquire forwards with regard to how and what kind of scenarios are even relevant for the decision they are confronting.

4 Conceptual explorations: plausibility across disciplines

Figure 6: Trajectories of plausibility across a scenario’s life path

Source: First line informed by Grunwald (2011)

Figure 6 shows an overview of the three modes and their application to the trajectories of plausibility across scenarios’ life paths. Taken together, within pragmatic scenario practice, the role of plausibility is clearer: It serves as a ‘glue’ between the involved actors and represents a consensus, or at least a common understanding, of scenarios. Plausibility is studied from a social constructivist perspective and constitutes a substitute for ‘meaning’. Scenario practices that resemble Habermas’ notions of the technocratic and decisionist modes, however, bring about forms and conditions of communication that leave more open questions regarding scenario plausibility assessments. These practices are characterised by:

• a clear divide and limited interaction between scenario producers and users;
• the construction of plausibility by scenario developers without the involvement of scenario users;
• often no clear understanding of the scenario users, their values and rationalities;
• scenario assessments by scenario users that are made based on the physical products, i.e. the written scenarios.

Under these circumstances, individual users’ assessments of scenario plausibility become central: To what extent do users’ assessments follow the logic of the developers (technocratic mode), or are they guided by users’ own rationalities and agendas (decisionist mode)? The life-path framework demonstrates that extant debates on plausibility have not been able to fill this conceptual and empirical vacuum. New and further concepts of scenario plausibility are needed. The following sections, therefore, explore theoretical conceptions of plausibility. All approaches address plausibility within different contexts – in the production and assessment of formal knowledge and argumentation, in the communication and persuasion of narratives, and in learning and conceptual change. While this analysis is not meant to be exhaustive, it provides a first systematic and targeted exploration of plausibility under these conditions of communication between scenario producers and users.

4.2 Plausible reasoning in informal logic and argumentation theory

We are regularly confronted with pieces of information from which we need to draw conclusions about how to think or act on a matter. Often, this information lacks details that we need for our conclusion, or the information given is inconsistent. These mundane assessments, for which we have no ready answers, are often made on the basis of plausibility (Collins & Michalski 1989;


Renn 2017). Scholars across traditional disciplinary boundaries contend that the reasoning involved in making plausibility assessments follows some common rules of inference. Within the broad traditions of informal logic¹, i.e. the analysis of practical arguments and argumentation theory (Scriven 1976), it has been established that, next to deductive and inductive inference, plausible reasoning presents a third, distinct mode of logic that is more tentative and defeasible (Walton 1992b). Plausible reasoning includes inference patterns that present a radical departure from formal logic (Collins & Michalski 1989:2): Deductive inference cannot resolve the cognitive dissonance, meaning our discomfort when confronted with contradictory and/or incomplete data. It would merely emphasise that given propositions cannot be upheld; it does not give any practical direction on how to deal with the propositions altogether. Nor can inductive reasoning and probability assessments resolve these issues. To illustrate the peculiar logic of plausible reasoning, two different strands of research are now discussed. First, in the philosophy of science, Rescher’s (1976) Theory of Plausible Reasoning is among the most widely known, reviewed, but also criticised accounts of the logic to be applied to a set of given incomplete and/or inconsistent propositions². The concept of plausibility has a long history in the philosophy of science: contemporary philosophical discussions refer back as far as Aristotle and his notion of plausibility as a systematic reasoning process (Rescher 1976; Walton 1992b). It is also thought to be an important form of justification for scholars within academic disciplines such as physics or mathematics (Polya 1954; Shapere 1966).

¹ The concept ‘informal logic’ has been controversial for decades; see for instance Johnson (1999), Johnson & Blair (1987). The analysis in this book follows the notion of informal logic as ‘practical argument analysis’ (Scriven 1976), which involves a type of reasoning that is closer to the practice of argumentation and the use of ordinary language and assessments than to formal logic. A number of concepts can be subsumed under informal logic, including defeasible reasoning, presumptive reasoning and warrants for reasoning; see Blair (2007) for a discussion.

² This account of plausibility in the philosophy of science is not exhaustive and only presents accounts of plausibility that have been prominently discussed in the literature and/or seem relevant for the study of scenario users’ plausibility assessments. Plausibility debates within the philosophical strand of confirmation theory are acknowledged but left out of the analysis; these primarily pay attention to how scientific conclusions can be drawn from non-deductive reasoning. For a comprehensive analysis, see Achinstein (2005), Crupi (2016) and Crupi & Tentori (2016).


In this context, Rescher’s attempt to systemise plausibility assessments for isolated propositions has been applied at the interface between philosophy and rhetoric (Walton 1992a) to cover plausibility in arguments and conversations between proponents and respondents. Second, while plausibility assessments of arguments are central in many fields, they are deemed especially relevant in policy-making contexts. Under approaches to argumentative discourse analysis (ADA), Majone (1989) has developed a widely known and cited ‘quasi-judicial’ approach to argumentation and proposes yet another logic of plausibility.

4.2.1 A theory of plausible reasoning

Within the philosophy of science community, Rescher’s approach presents a clear distinction from probabilistic programmes. It provides a clear set of rules to guide plausibility analysis and relies on qualitative, comparative judgments of propositions. It refrains from quantitative calculations and an overly complex apparatus of symbolic logic (Rescher 1976:xi-xii). The practicality and flexibility of the plausibility model make it applicable to various contexts, ranging from scientific deliberation and argumentation in critical discussions to everyday reasoning, in that it proposes mechanisms for “reasonable people” (Rescher 1976:5). The theory seeks to bridge philosophical formalism and logic with the research interests of other disciplines, particularly cognitive theory. Rescher’s intention is to substitute formal but simple rules for the often intuitive and ad-hoc human judgment, rules that clearly and unequivocally specify how to proceed in such circumstances. Two key premises underlie his formal rules³:

1. Plausibility assessments are dependent on source reliability. The plausibility of a given proposition is based on the reliability of the source of the proposition. Rescher (1976:6-7) labels this an “authority-oriented approach to plausibilistic inference” and includes a broad understanding of what a source can be: (1) individuals who put forth a proposition; (2) depersonalised sources, e.g. common knowledge or the media; (3) an individual’s observation or sensory perception; (4) individuals’ intellectual resources, e.g. their own reasoning or hypothesising; or (5) general, societal principles with recognised authority. Each ‘source’ that brings up a proposition needs to be graded with regard to its trustworthiness or credibility. While for personalised sources this constitutes a fairly straightforward task, it is subtler for generalised principles or one’s own intellectual resources. All sources are to be ranked in relation to one another: 1 denotes maximal trust in a source’s credibility, while n/n, n-1/n, n-2/n express comparatively decreasing degrees for other sources.

2. Plausibility assessments are made for sets of propositions (p-Set). Plausibility assessments are to be pursued in a comparative manner for a number of propositions (P1, P2, P3 … Pk) from different sources (X1, X2, X3 … Xk) (Rescher 1976:8). For a proposition to enter a p-Set (S = P1, P2, P3 … Pk), the ‘evaluator’ has to grant all sources at least some degree of credibility. At the outset of the assessment, all propositions in a p-Set constitute potentially legitimate ‘truth-candidates’; the outcome of the assessment suggests which of the propositions can be accepted as true. Rescher here lays out explicitly how plausibility relates to acceptance, a relation that is often implicitly assumed in scenario research. According to Rescher (1976:9), acceptance following from plausible reasoning is only tentative, because “[t]he ‘acceptance’ of a proposition as a potential truth is not actual acceptance of it at all but a highly provisional and conditional epistemic inclination towards it, an inclination that falls far short of outright commitment.” (highlights in original). What becomes clear from Rescher’s explanation is that the propositions in a p-Set are potentially conflicting, and because they cannot all be accepted as true, plausible reasoning always expresses the comparative degree of their acceptability. The actual assessment of a proposition within a p-Set is then performed by weighing the credibility of its source: The higher the credibility, the higher the plausibility of the corresponding proposition.

³ The review of premises is structured along Rescher (1976:6-20).
As with the grading of source credibility, a plausibility index is used (1 as maximal plausibility; n/n, n-1/n, n-2/n for decreasing degrees). The ultimate purpose is not only to grade several propositions comparatively but to guide conclusions and inferences, following two general principles: First, for two different propositions (P1, P2) with different degrees of plausibility (0.3, 0.8), the plausibility of a conclusion drawn from them needs to reflect the level of the least plausible proposition (in this case 0.3). Second, for two sources (X1, X2) with different degrees of credibility (0.5, 0.75) that bring forth the same proposition (P1), the plausibility of P1 needs to reflect the more credible source (in this case 0.75).
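The two principles can be condensed into a minimal computational sketch. The following Python fragment is our own illustration, not part of Rescher’s formalism; the function names are invented for this example, and the numbers reproduce the values used in the text.

```python
# Illustrative sketch of Rescher's two general inference principles,
# using his 0-to-1 plausibility/credibility indices.

def conclusion_plausibility(premise_values):
    """'Weakest link': a conclusion drawn from mutually consistent
    propositions inherits the plausibility of the least plausible one."""
    return min(premise_values)

def proposition_plausibility(source_credibilities):
    """If several sources put forth the same proposition, its
    plausibility follows the most credible of those sources."""
    return max(source_credibilities)

# Propositions graded 0.3 and 0.8 yield a conclusion at 0.3;
# sources graded 0.5 and 0.75 asserting the same proposition yield 0.75.
print(conclusion_plausibility([0.3, 0.8]))    # 0.3
print(proposition_plausibility([0.5, 0.75]))  # 0.75
```

Note how the two principles pull in opposite directions: conclusions are capped by the weakest premise, while repeated assertion is lifted by the strongest source.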


Because plausibility is determined not on the basis of the actual content of propositions but on the reliability of their sources, the theory of plausible reasoning is then operationalised through six formal rules (Rescher 1976:15-6)⁴:

1. For a p-Set, every individual proposition is assigned a plausibility value |P| that lies between 0 < |P| ≤ 1, based on the credibility of its sources.
2. Propositions that are logically true (i.e. whose truthfulness can be derived from deductive reasoning) are always maximally plausible (|P| = 1).
3. All propositions in a p-Set that are graded as maximally plausible need to be logically compatible and consistent with one another.
4. When mutually consistent propositions in a p-Set suggest some conclusion (i.e. a new, resulting proposition), this conclusion cannot be less plausible than the least plausible of the consistent propositions (‘weakest link’ idea).
5. While all propositions with a plausibility index |P| = 1 need to be mutually consistent, two contradictory and inconsistent propositions can still coexist in a p-Set with varying degrees of plausibility.
6. If two propositions have different plausibility values, the more plausible should always be chosen for the final determination of acceptability. The operationalisation of plausibility assessments is hierarchical: “[I]n case of conflict, never make the more plausible give way to what is less so […]” (Rescher 1976:14).
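To see how the rules interlock, a p-Set can be sketched as a mapping from propositions to plausibility values together with a list of declared contradictions. This is our own simplified illustration; `check_p_set` and `prefer` are invented names, and Rescher’s actual apparatus of symbolic logic is far richer than this sketch attempts to be.

```python
# Sketch of checking a p-Set against some of Rescher's formal rules
# (own illustration, not Rescher's formalism).

def check_p_set(plausibility, contradictions):
    """plausibility: dict mapping proposition name -> value in (0, 1];
    contradictions: pairs of mutually inconsistent propositions.
    Returns a list of rule violations (empty if the grading is admissible)."""
    problems = []
    for p, v in plausibility.items():
        # Rule 1: every value lies in the interval 0 < |P| <= 1.
        if not (0 < v <= 1):
            problems.append(f"rule 1: {p} has value {v} outside (0, 1]")
    for p, q in contradictions:
        # Rules 3 and 5: maximally plausible propositions must be mutually
        # consistent, so contradictory propositions may coexist only if
        # they are not both graded 1.
        if plausibility[p] == 1 and plausibility[q] == 1:
            problems.append(f"rule 3: contradictory {p}, {q} both maximal")
    return problems

def prefer(p, q, plausibility):
    """Rule 6: in a conflict, never make the more plausible proposition
    give way to the less plausible one."""
    return p if plausibility[p] >= plausibility[q] else q

pset = {"P1": 1.0, "P2": 0.6, "P3": 0.6}
print(check_p_set(pset, [("P2", "P3")]))  # [] - grading is admissible
print(prefer("P1", "P2", pset))           # P1
```

The contradictory pair P2/P3 is admissible here because neither is graded maximal (rule 5); raising both to 1.0 would trigger a rule 3 violation.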

In the formal rules, a distinction between consistent and inconsistent propositions is notable. The difference has practical consequences for the conclusions drawn from the p-Set. In a consistent case, conclusions that naturally result from the given propositions can be integrated into the p-Set as new propositions (Rescher 1976:20). The original set is expanded, which consequently requires a reassessment of all original plausibility values. The source of the new proposition needs to be evaluated as well; its credibility is determined by tracing back the sources of the original propositions. Subsequently, the six formal rules above can be applied to the new p-Set. In an inconsistent case, this is not possible. According to Rescher (1976:40), inconsistencies most often arise when propositions lay out fundamentally different explanations for a problem. Even for inconsistent p-Sets, plausibility values can be assigned following the formal rules; however, any conclusion that is derived from the propositions poses problems and would inevitably be ‘arbitrary’ (Rescher 1976:17, 20, 47). For this case, the theory proposes two options: One, any ‘mere consequences’ that follow from the p-Set propositions need to be explicitly excluded from further plausibility considerations; this simply means that no conclusion is possible from an inconsistent p-Set. Two, if a p-Set leads to an ‘unwanted’ (because ‘arbitrary’) conclusion, then this conclusion can be neutralised by including its very negation in the original p-Set. The latter option in particular illustrates that plausibility assessment is no straightforward process, despite the presentation of explicit rules. Any addition to the p-Set requires a re-adjustment of the initial plausibility values. As clarified for inconsistent cases, even when one has arrived at a conclusion, a revisiting of the propositions may be necessary to ‘neutralise’ unwanted results by inserting or defeating propositions. Hence, plausibility analysis happens in circles or feedback loops.

⁴ Rescher backs up the rules with theorems using symbolic logic. These are left out here, because the purpose is to explain the basic ideas of how plausibility judgments are made.

A modification of Rescher’s theory by Walton (1992a, b) and Walton et al (2008) is noteworthy. The authors are interested in patterns for plausibility evaluation during critical discussions between proponents and respondents. For them, the plausibility of an argument is established through the two parties’ direct interaction. During the plausibility assessment of an argument, the ‘burden of proof’ shifts back and forth between proponent and respondent: The proponent presents an argument which the respondent can assess or even defeat by asking critical questions (Walton 1992a). Only if the proponent can answer these questions appropriately, i.e. by pointing to already given or new premises that enhance the argument’s plausibility, can the respondent tentatively accept it.
Thus, whether the respondent accepts the argument as plausible depends on the ability of the proponent to persuade by means of different defence and attack strategies. It also depends on the respondent’s ability to criticise. This prominence of individuals’ abilities and resources in determining plausibility has been criticised by more formal, epistemological theorists, who maintain that plausibility is to be based on well-constructed arguments, as in the case of Rescher’s theory (Lumer 2011). This dissent illustrates how debates in the scenario literature about the nature of plausibility and its legitimate determination recur in philosophical conversations.


4.2.2 Plausibility in argumentative discourse analysis

The philosophical notions of informal reasoning about inconclusive data have also been adopted and made fruitful in policy studies. Beginning in the early 1990s, a number of scholars proposed that policy-making processes should not be understood – as in the enduring rational-scientific model of policy – as a set of decisions made by politicians upon receiving scientists’ input (Fischer & Forester 1993; Hajer 1995, 2002; Stone 1997). Rather, they contended that the assessment and presentation of evidence in policy processes, and the struggles of bringing ideas to the policy-making table, should be conceptualised and empirically studied as processes of argumentation (Greenhalgh & Russell 2006:34). With his book Evidence, Argument and Persuasion in the Policy Process, Majone (1989) contributed significantly to this paradigm shift by placing the production and assessment of arguments at the centre of scholarly policy analysis, and therein explicitly discussed the role of plausibility. According to Majone (1989:44), arguments – whether written or verbal – are involved in the key processes by which political actors, but also citizens, arrive at conclusive judgments and ultimately promote certain policy choices. For him, a policy argument is a complex entanglement of data, information and pieces of evidence involving “factual proposition, logical deductions, evaluations, and recommendations [a]long with mathematical and logical arguments, […], statistical inferences, references to previous studies and to expert opinion, value judgments and caveats and provisos of different kinds”. For any conclusive assessment based on this complex mixture, the judgment of an argument’s plausibility constitutes the central and admissible criterion for moving arguments into action.
So, in contrast to the clear and often ‘artificial’ propositions for which plausibility has been discussed in the philosophy of science (Rescher 1976), here more complex arguments are subject to plausibility assessments.

The methodology proposed by Majone (1989) places particular emphasis on policy analysts, who possess the ability to ‘craft’ sound and persuasive policy arguments and are capable of evaluating other political actors’ arguments. In placing emphasis on arguments, the methodology features aspects of the use of language and rhetoric in the production and assessment of policy arguments⁵. Majone’s methodology combines two elements: first, the element of empirical data, i.e. the extent to which data and information are effectively used as evidence from the perspective of the argument’s audience; and second, the rhetorical power, i.e. the extent to which data, information and evidence are presented in a persuasive manner. Hence, while it does not follow a process of formal logic that weighs evidence and reaches one single best decision (Greenhalgh & Russell 2006:38), it also does not imply that assessments of arguments by outside parties are arbitrary and only follow their own ideologies or agendas. According to Majone (1989:10), just like the assessment of scientific or legal arguments, the assessment of policy arguments requires some formalities and explicit rules. He draws an analogy to court decisions, in which assessments of arguments also need to be based on varying degrees of proof provided by testimony and other sources. A good policy argument is essentially a good legal argument – “if the judge and jury buy it, it is at least acceptable” (Anton 1990:179). An argumentation assessment thereby follows a reasoning that combines logical reasoning with the rhetorical representation of information. At the same time, it is clearly not without rationality. For Majone’s (1989:23) notion of rationality, plausibility assessments are central; he sees rationality as “the ability to provide acceptable reasons for one’s choices and actions”, whereby plausibility is meant as a form of acceptance of the conclusion of an argument. It results directly from the evidence structure and persuasiveness. Plausibility as a final judgment of a policy argument is based on logical reasoning on the one hand, and rhetorical persuasion on the other. Different analytic components are relevant in developing and establishing a policy analysis (Majone 1989:57ff.): data and information, tools and methods, evidence and arguments, and conclusions.

⁵ It goes beyond pure linguistic approaches and does not only consider what political actors say and how they say it, but includes the contexts of the discourse, i.e. why, how and what arguments are criticised or accepted and with what modes of justification; see Billig (1987:91). For a concise comparison of what have been termed the ‘linguistic turn’ and the ‘argumentative turn’ in policy studies, see Hajer (2002). For a more elaborate discussion of the argumentative tradition, see Fischer & Forester (1993).

For each component, Majone proposes to draw attention to its microstructures. He argues that while formal logic may not be applicable to analysing these components, different forms of reasoning and the application of certain assessment criteria are still feasible. Figure 7 presents an overview of the analytic components and criteria. It suggests that logical deduction and scientific quality criteria should, indeed, be applied to the use of scholarly data in an argument. Because most often analysts must rely on data that they


did not produce themselves, analysts need to be sensitive towards how research questions are framed, how data was collected and what methods or statistical coefficients were used to interpret the data (Majone 1989:58). In this context, Majone spends considerable time illustrating possible pitfalls – conceptual errors in assessing analytic components – and demands better awareness or even training of analysts to avoid them⁶. Some important pitfalls in the assessment of analytic components involve (Majone 1989:58-64):

• Analysts are not critical enough towards what clients, e.g. politicians, say is the main problem.
• Analysts fail to recognise the sensitivity of scientific data, meaning the way the data was produced and interpreted, including common errors in socioeconomic statistics.
• The interpretation of scientific concepts or methods may vary significantly across disciplines; e.g. the notion of ‘costs’ is interpreted differently by economists and social scientists.
• The application of formal techniques and models that the analyst did not apply herself can make the assessment of reliable and relevant data even more difficult.
• Mathematical formalisations may obscure the assessment of issues behind the research and induce a tendency to simply accept the overly complex data.
• When existing data has been collected previously for a different, broader purpose and is now applied to a very specific aspect, the strength of the data often depends on the original mode of production, which the analyst cannot clearly assess.

Taken together, assessing the nature and the quality of data and its effectiveness as evidence presents an important preparatory step for plausibility assessments, because a faulty assessment could mean that pitfalls determine what is judged to be a plausible argument. Majone (1989:9) thus places particular emphasis on the analysis of empirical and scientific information to be used as evidence. At the same time, he also stresses the necessary interrelation between empirical analysis and persuasive argumentation.

⁶ The subject of pitfalls in argumentation is not peculiar to Majone’s approach; long lists of fallacies and pitfalls have also been noted in the work of Thouless (1974) and Toulmin et al (1979).

Figure 7: Components of policy analysis in argumentative discourse analysis

Source: Based on Majone (1989:57-67)

This way, it becomes clear that the approach clearly differentiates between evidence and argument⁷. This is also illustrated in the four objectives policy analysts should pursue when promoting their own policy argument: (i) providing and evaluating evidence that strongly supports a problem or perspective; (ii) developing the argument itself; (iii) adjusting the argument so that it persuades a specific audience; and (iv) effectively communicating the argument using rhetorical and dialectical craft. Hence, for establishing the plausibility of an argument, persuasion is essential. Majone (1989:8) admits that persuasion is often associated with manipulation or brainwashing, yet he sees it as a “two-way interchange” between the producer and the evaluator of an argument. While not diminishing the importance of thorough analysis, for a policy argument to be successful, “it is ultimately up to the audience to accept or reject such arguments on their own merits” (Dryzek 1993:216-217). So while persuasion does not follow from a logical demonstration, it is not irrationality (Majone 1989:8). Rather, similarly to judicial arguments, the reasons justifying a final ruling need to

be persuasive enough. Here, a shift in perspective becomes clear: In the assessment of data, information and evidence, the procedures for arriving at certain evidence and arguments were the centre of plausibility assessments. Now, the relation between persuasion and plausibility moves away from formal procedures towards individuals’ justifications and reasons for why an argument should be accepted (Majone 1989:29). This relationship is less clear and much more context-dependent than the plausibility assessment of evidence. Majone (1989:9) makes clear that the reason why the plausibility or reasonableness of a policy argument cannot be unequivocally determined is that, in policy, a multiplicity of reasonable standards for assessing the argument exists. The different actors confronted with these arguments, e.g. citizens, legislators, administrators, judges, experts or the media, all have their own specific criteria for assessment.

⁷ This is in fact a key difference between Majone’s notion of persuasion and the notions proposed in chapter 4.3 on narrative coherence: in the latter, argument and evidence are not clearly distinguished and are assumed to contribute to plausibility together.

4.2.3 Relevance for scenario plausibility

The two research strands on plausible reasoning as a mechanism in informal logic and argumentation offer relevant insights for better understanding and conceptualising scenario plausibility from a user’s perspective. Both approaches provide helpful reflections on the contexts of plausibility judgments, more specifically on the roles of producers and evaluators of plausibility statements and their relationship. Rescher’s theory of plausible reasoning provides an explicit account of the nature of plausibility and its direct relation to source credibility. With its clear rules for assessment, it does so more explicitly than any other disciplinary model of plausibility reviewed in this book. Normative in nature, the theory features clear rules for how to proceed in situations where conclusions need to be drawn from consistent or inconsistent propositions. This resembles the challenges scenario recipients face: Individual scenarios are thought to be internally consistent (at least from the developers’ perspective), yet a number of scenarios, potentially from different reports and sources, may well be inconsistent from their recipients’ point of view. In the theory, the direct relationship between plausibility and source credibility is remarkable: Rescher (1976:14) provides a practical reason why source credibility operates as an overarching, guiding principle for plausibility. Propositions, or ‘truth-candidates’, can more easily be accepted and integrated into individuals’ cognitive schemes when individuals consider the source of the propositions to be trustworthy and credible. Applied to the study of scenario plausibility, such a dominance of scenario context over content would imply significant consequences for the overall objectives of scenario planning, that is, to convince and challenge recipients with compelling content.

In contrast to what may be called Rescher’s more ‘conservative’ assessment (Walton 1992b), Majone’s methodology for developing and assessing the plausibility of policy arguments entails a much more complex and less straightforward assessment of arguments. Majone’s judicial approach to analysing arguments incorporates a dynamic interplay between a thorough analysis of the data, information and evidence in an argument and the persuasive presentation of the argument to an audience. While Majone acknowledges that the credibility of a data source as well as its scientific quality (validity, reliability) can play a significant role in assessing the overall plausibility of an argument, the extent to which the argument is persuasively presented and communicated to the audience ultimately determines its plausibility. Majone (1989:10-11) proposes a clear hierarchy when he argues that if information or data are placed “at a wrong point in the argument or choosing a style of presentation that is not suitable for the intended audience, it can destroy effectiveness of information used as evidence […].” Hence, whether data is accepted as evidence in a scenario is up to its users. This resembles scenario practices following Habermas’ decisionist mode of policy communication.

The two concepts of plausibility reveal another important difference. Rescher follows a logic that pertains to any person who is able and willing to follow his proposed rules. He discusses the applicability of his theory to arguments, to the processing of inconsistent information, and to hypothetical and scientific reasoning. Reviewing all applications in detail is beyond the scope of this chapter; yet it is relevant to note that in all of them, the individual ‘evaluator’ who performs the plausibility assessment is not further specified.
The individual is simply presented as a ‘given’; in fact, any characteristics of the individual are irrelevant. The theory equips any individual who wishes to assess the plausibility of propositions. Majone’s notion of plausibility, however, is directly linked to a specific audience: An argument that is plausible to one set of actors may not be plausible to another group. This leads to a second distinction. For Rescher, the credibility of a statement’s author is assessed by the evaluator; this is the only direct influence an evaluator has on the plausibility process. Majone grants the producer of the arguments – in his case the policy analyst – a much more active role. According to him, whether a policy argument is accepted as plausible depends on the craft and capabilities of the policy analyst. In conceiving the relationship between analysts and decision-makers as argumentation processes, his approach emphasises the dynamic production of plausibility on both sides of the table. Rescher’s theory prescribes independent criteria for how plausibility judgments should be performed. This leads to no concrete research agendas or designs for empirical research on plausibility judgments, and it does not leave much room for the messier reality that is often found in scenario practice. At the same time, even Rescher (1976:111) acknowledges: “There is no single, monolithic basis for plausibility – the situation is one of a context-dependent pluralism. Not universalizable logical considerations, but situation-specific, extra-logical (material) principles will govern the process of plausibility-evaluation”. In line with this, Majone’s methodology tries to understand how policy arguments are plausible or implausible to particular policy actors (descriptive approach), but also proposes ways to make policy argumentation more plausible and, thus, more effective (normative approach). For the study of scenario plausibility, this seems to present a more realistic and fitting frame. In fact, Majone’s approach is most applicable in policy-making contexts that involve a great deal of uncertainty, i.e. where arguments need to be formed and assessed around policy reports and different or contradictory pieces of information and evidence (deHaven-Smith 1990:673). Also, the various components that are involved in policy analysis and subject to plausibility assessment reflect the circumstances in scenario studies. These include proposed observations, empirical statements, methodological statements, images and metaphors, value judgments and recommendations (Gasper 1996:38).

4.3 Plausibility in narrative theory

A scenario’s plausibility is often linked to its narrative form. The terms ‘narrative’, ‘story’ and ‘storyline’ are regularly used interchangeably in connection with the expected value of scenarios. The narrative form, so the argument goes, is valuable for both scenario developers and users: Scenario developers can assemble their ideas and diverse knowledge components into plausible and coherent contexts (van der Heijden 2005:236). At the same time, the very characteristics of a story – the temporal ordering of events and its cause-and-effect linkage – render scenario content accessible and plausible for recipients (Bowman et al 2013:737). Qualitative scenario research in particular emphasises the benefits of narrative scenarios in conveying plausibility, yet mostly without explicit recourse to narrative theory. The interdisciplinary field of narrative research is, therefore, consulted to inquire how plausibility resides in narrative forms and how it is perceived by readers. The increasing popularity of narratives as subjects of social scientific research is known as the ‘narrative turn’, a considerable paradigm shift in what is thought to constitute relevant knowledge for researchers inquiring into human reasoning (Brockmeier & Harré 2001). As such, narrative theories are interested in how and why certain narratives are constructed and with what effects for individuals and society. In reviewing the literature on narratives, two research perspectives can be identified (Abbott 2002; Herman et al 2012; Phelan & Rabinowitz 2005, 2012): On the one hand, traditional (or classical) narrative research focuses on the nature of narratives and their foundational, unchanging principles. This research strand has produced a variety of assumptions on the workings of narratives as opposed to other forms of discourse. On the other hand, narrative research is increasingly interested in testing and re-examining its inherent assumptions. As part of this reflective agenda, ‘reader-oriented theories’ (Phelan & Rabinowitz 2012) investigate narrative effects from readers’ perspectives. Both strands present different angles on the study of plausibility and have been populated by various communities. Linguistic approaches maintain that the internal structure of narratives conveys plausibility, while sociological and cultural theory approaches are more concerned with how the content of narratives reflects what is culturally plausible. Psychological approaches have turned to the second research strand to investigate the contexts in which narratives are perceived and assessed by individuals. They empirically inquire how effective narratives are as a means of persuasion and belief change. The different approaches are discussed below.

4.3.1 Structural and cultural theory approaches to plausibility

To begin with, the difference between story and narrative needs to be established. While many authors point out the difficulty of distinguishing the concepts (see Brockmeier & Harré (2001) for a useful discussion), the definitions of Abbott (2002:13) are used here: A story comprises the actual events and their temporal sequence, while the narrative constitutes the way in which the story is presented (orally, in writing, pictorially) and in which context (who is the teller, who is the receiver, what are the circumstances of production and reception). It is, therefore, usually the narrative that is the subject of scholarly investigation. The focus on narratives is also more relevant for this book: It seeks to understand how a scenario’s storyline is perceived and with what effect, rather than being primarily interested in the depicted pathways of each scenario. In cultural theory, the teller of a narrative and the cultural context in which the narrative is constructed are central to the purpose of narratives. All narratives, so the argument goes, rest on a “cultural-historical fabric” (Brockmeier & Harré 2001:42): The way an individual tells a story depends not only on their own emotions, experiences and abilities; the narrative is also constructed in accordance with certain “cultural conventions”. Narratives are thereby seen as more than descriptions of individuals’ perceived realities; rather, they constitute instructions and implicit orders for humans to make meaning of their social lives and their environment. These instructions incorporate what is plausible within a cultural context. Brockmeier & Harré (2001:54), for instance, view narratives as “flexible models” that make sense of some given phenomenon or event by connecting it to recognised and accepted social rules. Thus, the plausibility of a narrative depends on whether it provides an adequate analogue to the overarching cultural norms and rules of a given society. At the same time, the narrative needs to be accessible (or plausible) within the individual’s specific context and reality. On this reading of cultural approaches, plausibility resides in narratives only when there is a successful balance between references to cultural norms and individuals’ own reading of social life (Abbott 2002). This, in turn, is vital for narratives to function as cultural resources. In sum, cultural theory ascribes key functions to plausibility and links it to the content represented in the narrative.
Cultural theory notions are often used to illustrate narratives’ power in political agenda-setting or decision-making (Eder 2006) and to explain how the production and reproduction of narratives can shape social discourse (Hajer 1995). With their focus on narrative content, cultural theories do not go into detail on the formal characteristics of narratives that may establish plausibility. Here, structural, linguistic approaches to narratives offer more insights.8 A broad body of literature agrees that the central features of narratives are the

8 Note that cultural approaches pay some attention to narrative structure as well. According to Brockmeier & Harré (2001:45), the internal structure of narratives represents what we today perceive as plausible – that is, a sequential ordering of events paired with action-based storytelling. This plausible structure has been culturally established over the past centuries.
sequential ordering of events and the connection of events with actions or behaviours of actors (Abbott 2002; Carbonell et al 2017; Dahlstrom 2010; Herman et al 2012). This internal structure of narratives and the resulting formal logic conveys plausibility. Structuralists and linguists emphasise the canonical sequence of events as the ‘story grammar’ (Canter et al 2003) that organises time in narratives (Pennington & Hastie 1986). Different ‘story grammars’ have been proposed that involve more or less standardised components. Canter et al (2003) refer to several empirical studies to argue that the implementation of these internal structures directly leads to improved understandability and recall by readers. Many of these structures build on the propositions of Rumelhart (1975): Story – Setting – Theme – Plot – Resolution. Other linguistic approaches have put forth structures for more complex narratives, such as Labov (1972) (Abstract – Orientation – Complicating Action – Evaluation – Resolution), which have again been refined by Stein & Glenn (1979). What all propositions have in common is that they hold their particular structures and sequences to be powerful drivers of plausibility, because they reflect “our knowledge of the normal order of things in the world in which we live” (Caron 1992:162). Both cultural and structural approaches to narratives build on the inherent assumption that the cultural manifestations or internal structures of narratives respectively convey plausibility to readers. Abbott (2002:36ff.) provides more detailed reasons for why the ‘rhetorical power’ of narratives produces plausibility, acceptance or simply appeal towards the narratives. First, for many narrative researchers, the causal connection between events presents a central condition for narrative success. Causation is thought to be “the glue that hold narratives together” (Magliano 1999) and is, thus, a condition for narrative plausibility (Pargman et al 2017).
This is held to result in greater acceptance of the messages of narratives as compared to information that is non-causally connected (Dahlstrom 2010:857). In fact, causality is the feature most often cited in the scenario literature as bringing forth plausibility. In the Millennium Project, for instance, a scenario is defined as “a story with plausible cause and effect links that connects a future condition with the present […]” (Glenn 2000:52). Abbott (2002:37) argues that the reason why causality gives rise to plausibility lies in humans’ inevitable desire to find cause everywhere. Even if a story is narrated in a non-causal manner, it often still carries a feeling as if its events were causally linked. Barthes (1982) sees this as the ultimate power of narratives over readers, namely “[…] the confusion of consecution and consequence, what comes after being read in narratives as what is caused by.” (highlights in original). The second narrative characteristic to convey plausibility, as proposed by Abbott (2002:40-42), relates to the assumption that the very act of narrating or reading a story generates a normalisation of the story’s events and actions. The process of rendering events and actions as culturally ‘natural’ or ‘normal’ hinges upon the ability of narratives to convey the events as ‘real’ and as potentially ‘true’. Plausibility only emerges when it is sufficiently clear to the reader how the story and the ‘real world’ hang together. Thirdly, and related to this, Abbott (2002:42-44) points to the integration of ‘masterplots’, which lend narratives credibility and plausibility. According to the author, cultures inhabit certain masterplots or overarching narratives that speak to individuals emotionally and reflect their moral beliefs about what is desirable or simply ‘good’. For example, a narrative that includes ideas about the ‘American Dream’ may be well received by individuals who feel they belong to American culture. Thus, while some argue for the inevitable impact of masterplots on humans’ interpretation of narratives, others warn that they may easily turn into cultural stereotypes that cause readers to turn away from narratives (Kashima 2000; Kashima et al 2013; Lyons & Kashima 2001). Lastly, Abbott (2002:53-54) points to narrative closure as a characteristic that has relevance for both cultural and structural perspectives. It relates to whether narratives arrive at a closure – in content or structure – that satisfies readers’ expectations and resolves open questions about the narrative. Because readers, so the assumption goes, have a general desire for closure, satisfying this need can lead to narrative plausibility. At the same time, the integration of suspense and surprise may contribute to narrative attractiveness and may, thus, result in plausibility, too.
In sum, while cultural theory approaches to narratives assume that the plausibility of narrated stories depends on the articulation of “culturally powerful cohesion strategies” (Bennett 1997:99), structural approaches see the internal structure of narratives as the essential condition for plausibility. In both approaches, plausibility is understood as a necessary intermediate state for narratives to become meaningful to individuals, as summarised by Bennett (1997:99-100): “The plausibility of a story in itself is a function of its hearer’s readiness to make sense of its organisation at multiple levels: the plausibility of narrative relies on the symbiotic relation of text organisation (schemas) and cultural assumptions about the way the world works”.


4.3.2 Reader-oriented approaches to plausibility

Most of the insights presented above are dominated by the assumption that those who construct or narrate a story hold considerable power in establishing narrative plausibility. Several authors, however, argue that while narratives exert influence on the way individuals see, read and interpret them, the power of the readers should not be underestimated either. Abbott (2002:79) warns that we need to be attentive to “how vulnerable texts are to their audiences”. In this context, psychological approaches have become an important voice within the study of narratives. By empirically investigating many inherent assumptions of narrative research, they have positioned narrative readers as valid counterparts to narrative authors and narrators. Abbott’s discussions of narratives provide a useful framework through which to understand the dynamics of narrative construction, reception and interpretation. It thereby helps to put into context the empirical psychological research on narrative interpretation and plausibility judgment. Abbott (2002:77) maintains that authors of narratives – who are not to be confused with the narrator – have ‘authorial intention’, that is, they have some intended meanings in mind when constructing narratives. In the same vein, authors also have an ‘implied reader’, or ‘authorial audience’ (Herman 2007:275), that the author imagines as the intended recipient of the narrative. Authors and readers only interact via the narrative, and so a great deal of inference and many underlying assumptions about each other are at work. Readers, too, are assumed to construct an image of the author when reading and interpreting narratives (figure 8). To make their judgment and interpretation of the narrative, readers have the desire to find out how and with which intention the narrative was constructed: “After all, the real author is a complex, continually changing individual of whom we never may have any secure knowledge. So, we posit an implied author.” (Abbott 2002:77). As a consequence, Abbott (2002:95-101) distinguishes three ways in which the reader may relate to an author’s intention: (1) Intentional reading: The reader interprets the narrative according to the intended meaning of its implied author; (2) symptomatic reading: The reader interprets the narrative as an author’s expression of his/her own values, ideas and beliefs; and (3) adaptive reading: The reader interprets the narrative by making their own version of it and adapting it to other contexts. This trisection provides some reference points for how to approach readers’ interpretations from a scholarly perspective. Yet, while intentional readings may be viewed as a desired outcome for those who construct narratives, it is difficult for researchers or even the authors themselves to determine what ‘real intentions’, let alone ‘intentional readings’, of narratives are. While this does not deny that any narrative is constructed with some intention for effect, empirical investigations pose challenges to the proposed forms of interpretation and, in turn, show a much richer, yet also more complicated picture.

Figure 8: Dynamic relationship between narrative authors and readers

Source: Illustration based on Abbott (2002)

Whether or not narratives achieve their expected effects is often investigated using psychological persuasion theories. Empirical persuasion studies have emerged as a major research strand in narrative research, as they investigate inherent assumptions revolving around the persuasive power of narratives. As an overarching objective of narratives, persuasion is thought to be achieved through different means, e.g. internal narrative structure, cultural identity or credibility of sources. Many scholars view persuasiveness as the major quality of narratives that distinguishes them from mere discourse (Brockmeier & Harré 2001; Cin et al 2004). In comparison to other forms of information presentation, narratives are thought to actively impact individuals’ beliefs (Dahlstrom 2010:857) and have, therefore, long been recognised as media for science communication (McComas & Shanahan 1999). Most narrative persuasion scholars contend that narratives are persuasive when they are plausible. While only some draw explicit linkages and analyse the relationship in more detail (Boswell et al 2011; Canter et al 2003; Nahari et al 2010), others imply it more indirectly and use ‘plausibility’ and ‘acceptance’ interchangeably (Jones & McBeth 2010). Several studies argue that a narrative is plausible and persuasive if it corresponds with readers’ belief systems and attitudes (Boswell et al 2011; Hajer 1995:63; Jones & McBeth 2010). Narratives that are closer to one’s own ideas more easily ‘ring true’. Plausibility is connected here to the perceived truthfulness of narrative content. Green & Brock (2002) argue that plausibility constitutes the major criterion for measuring truth, independent of whether fictional or non-fictional presentations are assessed. Likewise, Connelly & Clandinin (1990:8) see the roots of plausibility not so much in readers’ imagination and openness to fantasy but in the empirical evidence readers find in narratives. In line with this argument, for Nahari et al (2010), plausibility is connected to whether ‘ordinary’ as opposed to ‘anomalous’ information is presented to readers (acknowledging that the ordinariness of information is itself dependent on the individual reader). The relevance of readers’ own beliefs for plausibility, however, does not answer how narratives may change attitudes, i.e. how readers’ initial plausibility perceptions change and open up towards new plausibilities. Cin et al (2004:177-178) argue that narratives reduce individuals’ tendencies to defend their own attitudes and beliefs against change. According to the authors, narratives overcome this resistance because they limit readers’ search for counterarguments and logical inconsistencies. This may be because narratives often present their points in a subtler, indirect manner and leave readers with almost no reference points for criticism. Also, individuals are very sensitive to persuasion attempts; in other words, they ‘smell a rat’ when someone tries to persuade them. Narratives, however, according to the authors, are less readily perceived as persuasion attempts. This means that narratives may actively keep readers from directing critical questions at their plausibility. Other authors are concerned with the impact of personality traits on plausibility judgments.
They hold that the emotional and cognitive resources involved in narrative reading limit readers’ ability and motivation to question the plausibility of a narrative (Green & Brock 2000). This has been discussed under the concept of readers’ ‘absorption’. Nahari et al (2010) provide empirical indications that individuals with high levels of narrative absorption rate the plausibility of narratives higher than readers with low-level absorption. As a similar concept, Green & Brock (2002) discuss ‘narrative transportation’, which has the following dimensions: cognitive attention to the story, emotional involvement, feelings of suspense, neglect of the surroundings during reading and mental imagery. In an experimental study, Green & Brock (2000) found higher ‘logical inattention’ towards narratives among highly transported readers. Taken together, the arguments of Nahari et al (2010) and Green & Brock (2000, 2002) can mean that high levels of narrative absorption and transportation may even suppress considerations of implausibility. The content of narratives is also discussed as influential for narrative plausibility and persuasiveness. Researchers emphasise the fine line between satisfying individuals’ natural desire for closure and creating suspense by breaking with individuals’ expectations. Several authors argue that a narrative’s breach of expectations can foster its persuasiveness (Abbott 2002; Jones & McBeth 2010:343). This contradicts the assertion that acceptance of narratives largely depends on their congruence with individuals’ beliefs and attitudes. Indeed, a narrative’s closure of expectations is also thought to reduce cognitive complexity for readers (Hajer 1995:63). Within this contradiction, plausibility is conceptualised as balancing the dynamic, for “[…] plausibility has always been determined in a dialectical fashion, both by our anticipation of narrative closure and completion and by literature’s capacity to surprise us and to disrupt that closure […]” (McClanahan 2009:59).

4.3.3 Relevance for scenario plausibility

The added value of narrative theory for the study of scenario plausibility is twofold: First, it clearly situates plausibility judgments in an author-reader relationship that is applicable to the contexts in which scenario developers and scenario users interact. As in technocratic and decisionist scenario practices, the only medium through which authors and readers interact is the scenarios themselves. Both authors and readers, therefore, need to construct their own images of one another to make sense of and interpret the scenarios. The ‘implied author’ (recall figure 8) is thereby a particularly powerful concept, suggesting that scenario users, too, inevitably create their own picture of scenario authors and try to identify the authors’ intentions behind the narrative. The author-reader framework presented in this chapter actively includes the (scenario) readers in the equation: It accounts not only for the power residing in the authors and their understanding of a narrative’s plausibility; with the ‘reader-oriented theories’ (Phelan & Rabinowitz 2012) and the empirical psychological studies, the power of the readers in determining plausibility is also explicitly discussed. This supports the key hypothesis of this book that formalised notions of how plausibility unfolds (e.g. structural approaches) may not necessarily be in line with the plausibility judgments of individual recipients and users. As a second contribution to the study of scenario plausibility, narrative theories reveal findings on how plausibility is determined by readers and recipients. Plausibility is explicitly linked to persuasion. Persuasiveness is also present in arguments about scenario planning, for instance when newer scholarly contributions offer guidance on constructing powerful scenarios through “compelling storytelling” (Carbonell et al 2017). Yet, this relationship has often only been implicitly assumed or even been avoided due to the scientific ambitions of many scenario planning projects. In narrative research, by contrast, most scholars contend that narratives are persuasive when they are plausible. Assumptions on narrative persuasion rest on the fact that, by telling a story, narratives display information in a lively and approachable manner. At the same time, the effects of narratives also depend on what the narrative does not explicitly disclose. Brockmeier & Harré (2001:46) illustrate this by arguing that readers often have no information about the narrative’s author. Thus, readers inevitably need to draw inferences on the aspects they do not know but that are relevant for their interpretation. In this context, the relationship between narrative evaluation, plausibility judgments and credibility/trustworthiness becomes relevant. Several authors argue that plausibility is a cognitive condition of the narrative reader and, along with credibility and source trustworthiness, contributes separately to narrative persuasion (Boswell et al 2011; Hajer 1995). Yet, the dynamics with which the concepts interact remain less clear. Jones & McBeth (2010:344) and McClanahan (2009:42) argue that narrative plausibility depends on individuals’ trust in the narrative source. Because Nahari et al (2010:320) view plausibility as an indication of potential truthfulness, it is for them a necessary but not sufficient condition for narrative credibility. Ochs & Capps (1997:83) maintain that narrative authors all actively work towards the credibility of their work and do so by ‘tweaking’ its plausibility.
Coming back to Abbott’s proposal of the different forms of narrative interpretation (intentional, symptomatic and adaptive reading), it is difficult to justify that there can be one ‘correct’ or unequivocal interpretation, let alone an ‘intentional reading’, of narratives. The term ‘multi-interpretability’ of narratives is often used to articulate this. Narratives provide a rich basis for readers to draw their conclusions and judgments. In doing so, readers rely on a variety of resources: the internal structure of narratives, the content explicitly mentioned in the narrative, the context conditions readers need to construct, and their own beliefs, attitudes and cognitive styles. From all these sources, plausibility emerges implicitly and explicitly. Following the research on narrative theory, plausibility relates to more than the internal consistency of narratives, as is often suggested by scenario planning research. Plausibility is rather presented as an amalgamation of text structures, representations of cultural identities and psychological accounts. Narrative research thereby also provides interesting pathways for empirical research on scenario plausibility. In particular, experimental designs have been used in narrative research, for instance to study the different effects of narrative versus non-narrative information (Dahlstrom 2010).

4.4 Plausibility judgments in cognitive and educational psychology

The scenario literature is rich in statements about the cognitive benefits of scenarios as compared to forecasts or other deterministic prognoses. Scenarios limit the power of common cognitive heuristics in decision-making (e.g. availability or representativeness heuristics) by emphasising the uncertainty, complexity and ambiguity of possible future developments, and thus by enhancing people’s preparedness to consider more than one future (van Notten et al 2003; Volkery & Ribeiro 2009:1206ff.; Wright et al 2013a). According to scenario theorists, the pursuit of plausibility – instead of probability – enables these benefits. The concept of plausibility has long been anchored in debates about how individuals construct their own mental representations of given information, and how they subsequently evaluate new incoming data (Chinn & Brewer 2001). Amidst these processes, plausibility has a somewhat decisive function: It determines whether individuals open up to new data, accept it or even reject it. Despite the presumed relevance of plausibility, empirical studies that pay special attention to the concept have been rare. Rather, plausibility has typically been operationalised in terms of ratings without any theoretical account (Connell & Keane 2004, 2006). Johnson-Laird (1983:375) already investigated the relationship between plausibility, memorability and comprehensibility, and concluded that the extant literature lacks “a good theory of plausibility”. Three models of plausibility are discussed in this chapter. These models are key attempts not simply to operationalise plausibility, but to focus on its meaning and functions as the main unit of analysis.


4.4.1 Models-of-data theory

Cognitive theories are generally interested in how individuals make sense of new information, how they assess information, and how and when information processing leads to a change in previously held conceptions (Chinn & Brewer 2001; Kunda 1990; Posner et al 1982). Cognitive researchers, albeit in different ways, assume that individuals construct mental representations of incoming data and integrate the data with some theoretical explanation. A large body of cognitive research has, therefore, proposed ways of revealing these mental representations in verbal or visual forms. Examples are the Explanatory Coherence Network (Thagard 1989, 1992), which connects knowledge components using different forms of arrows, or the schemas of Dansereau (1985), which use slots for categories of evidence and theory. Chinn & Brewer (2001:323) follow this tradition by investigating (a) the nature of the representations that individuals develop, and (b) the processes of matching these theoretical representations with new data. While many cognitive studies have mostly been interested in how individuals modify their theories in reaction to new data, Chinn and Brewer argue that a better understanding is needed of how the data itself is represented by individuals. Their theory holds that data is represented through events connected by several types of links (Chinn & Brewer 2001:330), among them causal links, impossible causal links, inductive links and contrastive links. The plausibility of these links can then be assessed against individuals’ own theories and/or given scientific theories. The reasons for discounting or accepting data are enriched in the approach of Chinn & Brewer (2001), because the presented data is not seen as ‘facts’ but rather as ‘complex evidence structures’ that individuals engage with. As such, for the assessment of the data’s plausibility, information about the data’s underlying methodology is also consulted.
This approach captures a broad range of strategies that individuals may adopt when confronted with ‘anomalous data’: Individuals may (1) ignore data (discard it without explanation), (2) reject data (dismiss links in the data), (3) exclude data (deny its relevance for their existing theoretical explanations), (4) hold data in abeyance (put it on hold to deal with later), (5) reinterpret data (incorporate it into their existing explanation/theory), (6) modify peripherally (make a superficial change to existing conceptions), or (7) reconstruct theory (undergo conceptual change so as to make existing conceptions consistent with the data). Which of the seven strategies individuals adopt depends on the “availability of a plausible alternative theory” (Chinn 1993:15). This means that the plausibility of an individual’s existing model or predisposition competes with the plausibility of the data presented. In sum, the theory of Chinn & Brewer (2001) presents a qualitative opportunity – as opposed to more quantitatively driven approaches – to investigate how plausibility judgments of presented data are made.
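The building blocks of the theory can be illustrated as simple data structures. The following sketch is purely expository: the names `Link`, `EvidenceStructure` and `RESPONSE_STRATEGIES` are invented here, not taken from Chinn & Brewer, and the link enumeration covers only the types named in the text above.

```python
from dataclasses import dataclass, field
from enum import Enum

class Link(Enum):
    """Typed links connecting events in an evidence structure (as named in the text)."""
    CAUSAL = "causal"
    IMPOSSIBLE_CAUSAL = "impossible causal"
    INDUCTIVE = "inductive"
    CONTRASTIVE = "contrastive"

# Chinn & Brewer's seven strategies for responding to anomalous data:
RESPONSE_STRATEGIES = (
    "ignore", "reject", "exclude", "hold in abeyance",
    "reinterpret", "modify peripherally", "reconstruct theory",
)

@dataclass
class EvidenceStructure:
    """Data as a 'complex evidence structure': events plus typed links between them.

    Methodological information is carried along, since it too enters the
    plausibility assessment according to Chinn & Brewer (2001).
    """
    events: list                       # e.g. ["bottle falls off shelf", "bottle smashes"]
    links: list = field(default_factory=list)  # e.g. [(0, Link.CAUSAL, 1)]
    methodology: str = ""

# Example (borrowing the bottle scenario discussed in the next section):
data = EvidenceStructure(
    events=["bottle falls off shelf", "bottle smashes"],
    links=[(0, Link.CAUSAL, 1)],
    methodology="direct observation",
)
```

The point of the encoding is that the reader’s assessment targets the links (here, the causal link between the two events) and the methodology, not isolated ‘facts’.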

4.4.2 Plausibility Analysis Model (PAM)

Connell & Keane (2004, 2006) developed and empirically tested a model that is informed by both linguistic and cognitive accounts of information processing. In the model, the plausibility of incoming data is linked to ‘concept coherence’. This means that plausibility is determined by the coherence of the data with an individual’s prior knowledge. The authors provide a simple example: “[...] [I]f someone is asked to assess the plausibility of the statement The bottle rolled off the shelf and smashed on the floor, he or she might make the inference that the bottle rolling off the shelf caused it to smash on the floor. Then he or she might consider this elaborated description to be highly plausible because past experience has suggested that falling fragile things often end up breaking when they hit floors.” (Connell & Keane 2004:186, highlights added by RSS). Thus, the plausibility of data depends on whether individuals find enough bridges between their own knowledge and the presented data. While this realisation has been made before, for instance by Black et al (1986), the model seeks more nuanced and empirical insights into the presumably many inferences that an individual draws. For this purpose, ‘concept coherence’ is approached and operationalised from two perspectives: computational linguistics and cognitive science. Within computational linguistics, the concept of ‘word coherence’ is used to explain how individual words, in a sentence or in groups of sentences, relate to each other (Lapata et al 1999). Through an experimental study, Connell & Keane (2004) demonstrate that certain word combinations convey stronger plausibility to individuals than others. Specifically, they found that sentences inviting causal inferences were judged more plausible than attributional relations within a sentence. Temporal inferences were judged even less plausible, while completely unrelated sentence components presented the least plausible option.
Next to the nature of the inference made, Connell & Keane (2004:187) also note that the length and complexity of the sentences, as well as the presence of alternative possible inferences, influence individuals’ plausibility judgments. As an advancement, their ‘Plausibility Analysis Model’ (PAM) approaches the process of knowledge fitting between given sentences, or scenarios, and an individual’s previous experience from the perspective of cognitive psychology. In line with Chinn & Brewer (2001), the model assumes that plausibility judgments are based on a comprehension stage and an assessment stage. In the comprehension stage, a given scenario is dissected and prior knowledge is used to draw inferences from it. In the example presented by Connell & Keane (2006:98), “The bottle fell off the shelf. The bottle smashed.”, an individual may infer that bottles are often fragile and shelves are often high. This mental representation forms the input for the assessment stage. Here, plausibility is determined by three components: the degree to which (a) the scenario features clear and not overly complex explanations, (b) the scenario holds multiple sources of corroboration from the perspective of the individual, and (c) the scenario includes minimal hypothetical or vague explanations. In short, their key hypothesis holds that as the complexity of a scenario increases, plausibility decreases; an increasing number of corroborations causes plausibility to go up, while more explanations involving conjectures cause it to decrease. The two approaches to ‘concept coherence’ as the decisive indicator for plausibility judgments have been developed on the basis of experimental research studies. The ultimate objective of both has been a computational simulation of plausibility for sentences or even more complex data structures. Hence, according to the authors, the presented angles on plausibility have some predictive qualities that may be helpful in situations in which individual judgments about data’s plausibility are needed.
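The postulated direction of PAM’s three effects can be illustrated with a small toy scoring sketch. This is an illustration only, not the authors’ implementation: the linear form, the weights and all names are assumptions introduced here purely to show plausibility rising with corroboration and falling with complexity and conjecture.

```python
# Toy sketch of the direction of effects postulated by PAM
# (hypothetical weights and linear form; not the actual model).
from dataclasses import dataclass

@dataclass
class ScenarioAssessment:
    complexity: float     # 0..1, how convoluted the inferred explanation is
    corroboration: float  # 0..1, share of inferences backed by prior knowledge
    conjecture: float     # 0..1, share of hypothetical or vague inferences

def toy_plausibility(a: ScenarioAssessment) -> float:
    """Weighted sum clamped to [0, 1]; weights are illustrative only."""
    score = 0.5 + 0.6 * a.corroboration - 0.4 * a.complexity - 0.4 * a.conjecture
    return max(0.0, min(1.0, score))

# A well-corroborated, simple inference ("fragile bottle falls -> smashes")
familiar = ScenarioAssessment(complexity=0.2, corroboration=0.9, conjecture=0.1)
# A convoluted explanation resting largely on guesswork
speculative = ScenarioAssessment(complexity=0.8, corroboration=0.2, conjecture=0.7)

assert toy_plausibility(familiar) > toy_plausibility(speculative)
```

The point of the sketch is only the ordering: under any weights of this sign pattern, the familiar, well-corroborated scenario scores higher than the speculative one, which is the qualitative prediction of the model.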

4.4.3 Plausibility Judgment in Conceptual Change Framework (PJCC)

The plausibility model of Lombardi et al (2015) builds on the two previously discussed models. It is also informed by the definition of plausibility as ‘tentative acceptance’ of statements, as proposed by Rescher (1976). Despite its recency, the PJCC has been applied in a comparatively large number of empirical studies (Lombardi 2012; Lombardi et al 2016a; Lombardi et al 2016b; Lombardi et al 2015; Lombardi et al 2014; Lombardi & Sinatra 2010, 2013; Lombardi et al 2013; Sinatra et al 2011) and constitutes an elaborate model of plausibility judgments. From the perspective of educational psychology, the purpose of the model is to empirically investigate how plausibility contributes to learning and
conceptual change. The latter is understood as a restructuring of previously held knowledge by students (Dole & Sinatra 1998), which may even lead to an epistemic change in the sense of a modification of the way students view and assess the nature of new data (Sinatra & Chinn 2011). Lombardi et al (2013) view plausibility as a mechanism used by students to assess given scientific concepts. Basically, plausibility is thought to function as a threshold for both conceptual and epistemic change: new data must be judged more plausible than students’ previously held conceptions in order for new knowledge interpretations and learning to occur. While the authors acknowledge some similarity of plausibility with other concepts, such as probability, coherence or believability (see table 5 for a comparison), it is distinct in its qualitative nature, which richly incorporates content-, context- and learner-related interpretations of data.

Table 5: Related concepts to plausibility

Probability: The probabilities of alternative explanations need to sum to 1; plausibility is an ordinal and qualitative evaluation of explanations.

Coherence: Coherence is related to corroborative alignment (degree of fit) but does not account for other source validity factors (e.g. information complexity).

Comprehensibility: Comprehensibility is related to plausibility as both are needed in evaluations; high comprehensibility may or may not result in greater plausibility.

Credibility: Credibility is generally conceptualised via characteristics of an information messenger (e.g. trustworthiness) but does not account for other source factors (e.g. information complexity).

Believability: Believability is related to Bayesian principles in the sense of degrees of belief. Believability as degree of belief contains an association with willingness to assert an idea as a source; plausibility does not require such an association.

Source: Simplified and slightly adjusted version of Lombardi et al (2015:37)

The model offers a descriptive account of factors and cues that influence plausibility judgments; figure 9 shows a detailed overview. In the pre-processing stage of a given novel explanation, the model draws on Connell & Keane’s
(2006) model that relates plausibility to the degree of complexity, corroboration and conjecture. The influence of source credibility is informed by a working definition of Rescher. In an experimental study, Lombardi et al (2014) operationalise source credibility as perceived ‘trustworthiness’ and ‘expertise’; the results show significant impacts on students’ plausibility judgments of stories about human-induced climate change. The empirical model of plausibility judgments incorporates a variety of cognitive processes and mechanisms. The model thereby moves beyond ‘cold processing’ (Pintrich et al 1993; Sinatra 2005) and integrates affect, motivation, and motivated reasoning. Lombardi et al (2015:431) criticise prior models of plausibility as ‘strongly rational’ because of the dominance of ‘cold cognitive processes’ in them. Their main argument is that conceptualising plausibility judgments as purely rational choices does not resemble a realistic environment: it confines plausibility to a function that is simply based on the storage of knowledge, its processing in memory and the attention to specific information components. Lombardi et al (2015) refer to several empirical studies that demonstrate the relevance and richness of the ‘warm constructs’ involved in plausibility judgment. Johnson & Sinatra (2013) identify individuals’ perceptions of the usefulness and goal-orientation of tasks as important variables for plausibility. An empirical study also reveals a relationship between teachers’ plausibility perceptions of climate change explanations and their personal emotions towards teaching climate change (Sinatra et al 2011); feelings of anger correlated with lower plausibility perceptions. As indicated in figure 9, the PJCC also distinguishes between plausibility judgments that happen more quickly and impromptu, and those that are made as more explicit and conscious assessments.
Following the plausibility model, these judgments differ and refer to what scholars have described as ‘system 1 mode’ and ‘system 2 mode’ processes (Evans & Stanovich 2013; Kahneman & Klein 2009; Lombardi et al 2015). The rationale for Lombardi et al (2015) to introduce this dual-process theory is that in educational contexts, mode 2 judgments are the key benchmark for researchers and practitioners. Longstanding research by Kahneman & Tversky (1984) and Tversky & Kahneman (1974, 1981) pictures system 1 as a very powerful mode in which individuals often apply cognitive heuristics. Following these insights, Lombardi et al (2015:41) assume plausibility judgments to occur implicitly and “with little expenditure of cognitive resources”. As such, individuals may draw on their own background knowledge and experience when judging the source of a statement, with the result that “individuals may employ heuristics when gauging
plausibility, similar to those used when assessing probabilities and quantifying uncertainty.” The model also draws on findings suggesting that plausibility judgments sometimes serve to reduce cognitive effort, e.g. when reading statements (Maier & Richter 2012; Richter & Schmid 2010; Schroeder et al 2008). Applied to plausibility, Lombardi et al (2016b) assume that individuals are often not critical enough about their initial plausibility judgments. Empirical applications of the model demonstrate how critical evaluations of statements changed plausibility perceptions in favour of the more scientific explanation (Lombardi et al 2015).

Figure 9: Plausibility Judgments in Conceptual Change

Source: Lombardi et al (2015:45) with minor adjustments


4.4.4 Relevance for scenario plausibility

Despite the widely acknowledged power of plausibility judgments in data evaluation and learning processes, cognitive theory has only in the recent past begun to explicitly address the concept. The three models of plausibility discussed in this chapter present an enhancement for the study of scenario plausibility in three different ways: First, they provide detailed empirical insights into the dynamics that determine plausibility. Second, clear directions for empirical research on scenario plausibility can be derived. Lastly, the role of plausibility as a precondition for learning and conceptual change is explicitly debated and reveals parallels to the expected objectives of scenarios. More explicitly than previous theoretical concepts from philosophy and narrative research, the models relate plausibility to ‘concept coherence’, i.e. the degree to which a given scenario resonates with an individual’s prior knowledge and experience. Furthermore, plausibility is explicitly associated with people’s dispositional characteristics and their emotions. In fact, scenarios operate not only on the cognitive level, but also on the emotional level. As narrative constructs that lay out pictures and ideas about the future, scenarios also touch on people’s concerns, fears and hopes. To achieve a more comprehensive understanding of scenario plausibility judgments, the integration of both cold and warm cognitive processes in one framework is fruitful. Furthermore, the PJCC accounts for a variety of cognitive heuristics that play a role in plausibility judgments. According to the ‘softer’ approaches to scenario planning, explicit discussions of the scenarios are key in developing, understanding and ‘making sense’ of the scenarios. Ramírez & Selin (2014) point out that implausibility can be the beginning of a conversation, not the end. This implies that in scenario evaluation, ‘system 1’ and ‘system 2’ judgments could make a difference.
While all models emphasise the richness and complexity of plausibility judgments, they differ in the general impression they convey about plausibility: Connell & Keane (2004, 2006) assume that plausibility can be somewhat predicted by their model, whereas the main purpose of the PJCC is to better understand or even intervene in individuals’ plausibility judgments to foster learning and conceptual change. The research designs of both the PJCC and models-of-data theory present interesting pathways to studying scenario plausibility of individuals. Cognitive theories are traditionally interested in how humans process data, and how they integrate new data within existing theories. As Chinn & Brewer (2001:325) argue, the relation between data and theory has been addressed
with one-sided bias in the extant literature: most research designs do not provide room for objecting to or modifying the data, be it because no methodology behind the data is presented, so that participants simply have to accept the data as given facts, or because test persons were not given the opportunity to object to the data (Chinn & Brewer 2001:326). This involves an oversimplification of individuals: it assumes that, in cognitive processes, humans are basically preoccupied with coordinating their theories and preconceptions and lack the ability or will to engage critically with the data. Models-of-data theory, hence, accounts more adequately for the epistemic nature of scenarios. When scenario users engage with scientific or scenario reports, they face multiple positions from different sources that are often even contradictory. This requires a careful evaluation of the methodology and source structures of the scenario reports. By accounting for the complex internal structures of the data (i.e. the scenarios), including details about the procedures, methods and context of the data, the theory allows individuals to express all kinds of reasons for the plausibility or implausibility of scenarios. Lastly, the cognitive models suggest interesting linkages between plausibility judgment and learning. Lombardi et al (2015:50) state that “plausibility judgments may be an important way in which students evaluate scientific concepts to facilitate reconstruction of alternative knowledge structures”. Also, Chinn & Brewer (2001) consider plausibility an important mechanism by which individuals evaluate new incoming data and ultimately engage with it. The connection between plausibility and learning is also implied in the scenario literature. Ramírez & Wilkinson (2016) even anchor their understanding of plausibility in learning processes when they maintain that plausibility means an iterative process of reframing and re-perception of futures.
For plausibility in a learning atmosphere, Lombardi et al (2015:36-37) note that conceptual change might not happen even if one considers the new data plausible, as long as it is less plausible than one’s existing models. Instances in which there is no contradiction between the two usually do not receive much attention, as simple acceptance of the data is assumed. In scenario planning, however, these instances can be important. Scenarios do not necessarily compete for higher plausibility; the fundamental idea of scenarios, indeed, is that they can co-exist. Thus, the striving for (high) plausibility in the models also bears a limitation for their application to scenario work: scenario theorists have maintained that it is not only plausibility that is helpful in scenario learning; implausibility, too, can help open up discussions about the scenarios’ logics (Ramírez & Selin 2014). This challenges not only the assumption that achieving a single high plausibility should be an individual’s objective, it also suggests that learning with scenario plausibility cannot be limited to reaching a very high plausibility of only one scenario.

4.5 Directions and propositions for empirical research

The discussions in this chapter have demonstrated that plausibility judgments are thought to be linked to several different factors related to the content, but also the context, of scenarios. Furthermore, the theoretical concepts have illuminated the circumstances and values in the use of plausibility: in the assessment of given statements following rules of informal logic (philosophy of science) and in the formation and analysis of strong argumentation (argumentative discourse analysis), in the construction and reception of narratives (linguistics and narratology), in the reception and interpretation of verbal and non-verbal messages (cognitive psychology), and in conditions for individuals’ learning and conceptual change (educational psychology). In some cases, it is operationalised as an essential and purposeful assessment criterion (philosophy of science, educational psychology), while all disciplines see it as a natural human mechanism for making judgments under conditions where no ready answers are available. The approaches have in common that they all view plausibility as a means to achieve some higher-level purpose: the furthering of scientific knowledge (‘Erkenntnis’), the communication and acceptance of some narrative, or the conceptual and epistemic change of students or individuals in order to adopt new ideas and insights or even to take (policy) actions. This also implies that plausibility is connected to some ‘intention’. For some approaches discussed in this chapter, achieving these intentions is simply a matter of following guidelines and rules. According to Rescher (1976), following the guidelines of plausible reasoning will ultimately lead to intended or justified results. This is also echoed in newer philosophical debates that point to the need for established principles and guidelines for possibilistic inferences from scenarios (Betz 2010).
Also within educational psychology, concepts and classroom instructions evidently help students to judge scientific statements as more plausible and to enhance their epistemic cognition (Lombardi et al 2015). Empirical studies also suggest that through a higher plausibility of messages, the attitudes of individuals can be changed. Yet here the contribution of plausibility judgments already becomes less clear and ‘measurable’. Narrative theory, along with some cognitive psychological approaches, contends that plausibility judgments are far from being predictable. They emphasise the multi-interpretability of content and maintain that “every single thing signifies”, meaning that even “minor details, parts that are quite unnecessary to the story – like supplementary events and the setting – can exert considerable rhetorical leverage on the way we read” (Abbott 2002:48). This realisation suggests that plausibility is not a ‘set screw’ that can be manipulated in desired directions. Different academic perspectives, most strongly narrative research, acknowledge that the plausibility constructed by (narrative) authors or scenario developers often does not correspond with the judgments of readers and scenario recipients. Next to an intentional reading, Abbott (2002), for instance, has argued that how exactly a reader interprets a narrative, and which components of the narrative ‘speak’ to the reader, is not always in the power of the authors. This can give way to symptomatic or adaptive interpretations. In their compositions, the discussed academic disciplines highlight different aspects of plausibility judgments that are relevant for scenario plausibility. Approaches from structuralist narratology maintain that narratives feature a generalised internal structure, e.g. a certain sequence of events, that establishes plausibility. Hence, if authors follow clear guidelines and are mindful in constructing their work, it conveys plausibility to their readers. Clear principles are also put forward in philosophical accounts, albeit for those who assess the plausibility of given statements (see Rescher 1976). Such normative connotations of the establishment and assessment of plausibility are also evident in educational psychology. The work of Lombardi et al (2015) implies that for students to achieve some learning about new phenomena, their plausibility judgments of new data ought to rely on explicit evaluation.
Cultural theory-infused approaches to the study of narratives emphasise that plausibility is necessarily dependent on a story’s resonance with cultural identities and well-established rules of social interaction. Here, plausibility resides in, and is established by, a given society rather than by the individual. In contrast to these cultural notions, the psychological accounts described in this chapter shift attention to the processes and resources that are activated when individuals encounter a statement or a scenario and make plausibility judgments. Theoretical and empirical insights maintain a relation between plausibility and the congruence of information with individuals’ own beliefs and mental models. This explanation is also evident in strands of narrative theory in the sense of a familiarity with certain narrative components. Along with narrative research, cognitive and educational psychologists also emphasise that the format
in which a statement is presented (narrative vs. non-narrative) can make a difference in information interpretation, and more specifically in plausibility assessments. Except for structuralist, linguistic understandings, authors across the discussed academic disciplines make a strong case for the role of credibility and trustworthiness in understanding plausibility. Rescher (1976), for instance, is not primarily interested in the content or structure of an argument, but relates to the ‘relative credibility’ of the source (Humphreys 1978). What becomes clear from Abbott’s (2002) framework on narratives is that the reader’s interpretation does not stop at the narrative. The ‘implied author’ that readers necessarily construct to interpret and find meaning in the narrative is particularly powerful in this regard. Plausibility is associated not only with the material itself – the text, the narrative or the model – but also with the source of the material. All theoretical concepts view those who make plausibility judgments, here the scenario recipients and users, as active collaborators in scenario practice. Independently of whether they apply structural guidelines to their judgments or not, it is they who decide what meaning, and ultimately what value, will derive from the scenarios. In sum, the presented theoretical concepts populate the life path of scenarios in greater detail than the discussions of plausibility within the scenario literature. While the extant scenario literature mostly focuses on how plausibility is constructed by scenario developers, this chapter sheds more light on the relation between scenarios and their recipients during the evaluation and assessment process. While these theoretical discussions from other disciplines offer more detailed indications of individuals’ plausibility judgments, the underlying assumptions have not been tested empirically in a scenario context.
Following the exploratory character of this book, five research propositions and corresponding hypotheses are formulated that are a) relevant for scenario planning contexts (chapter 4.1), b) an advancement on previous scenario plausibility debates (chapter 3), and c) derived from the theoretical concepts (chapters 4.2-4.4).

Proposition 1: A scenario user’s plausibility judgment of a scenario is linked to the format in which a scenario is presented (narrative vs. non-narrative)

Scholars from narrative research and argumentative discourse analysis, but also cognitive psychology, explicitly discuss the merits of narrative formats when it comes to assessing the plausibility of a statement. In the former, scholars highlight the persuasiveness messages convey when presented as stories (Dahlstrom 2010). Majone’s (1989) approach of viewing any verbal or written interaction between policy analysts and their audiences as argumentation also emphasises how the overall plausibility assessment rests on sharing, criticising and engaging with specific formulations. At the same time, Majone (1989:51) maintains that complex models and presentation styles can seriously bolster the overall plausibility judgment by audiences, because models are often perceived as the ‘ultimate authority’ and give the impression of high credibility. The assumed link between narratives and plausibility, however, is not without reservations. The discussions by Abbott (2002), for instance, show that how exactly storylines are interpreted by readers is not very clear, and every aspect of them – be it individual components, the protagonists, or the overall dramaturgy – may be subject to plausibility judgments. This proposition is also very relevant from the scenario literature perspective. Here, different scenario methods – notably Intuitive Logics (IL) and Cross-Impact Balance Analysis (CIB) – are controversially discussed for their accessibility to scenario users; their qualities are attributed to their qualitative storyline character (Bowman et al 2013; Ramírez & Wilkinson 2016) or to formal analysis and traceability in the scenario matrix (Lloyd & Schweizer 2013). Following the tendencies towards narrative plausibility suggested by the theoretical concepts above, it is hypothesised that narrative-based scenarios are perceived as more plausible than matrix-based scenarios (H1; an overview of all hypotheses and operationalisations can be found in chapter 5.1), also across different times and contexts (H2-H3). Since this has not been investigated empirically in scenario literature or narrative research, this proposition presents a significant advancement.
Proposition 2: A scenario user’s plausibility judgment of a scenario is linked to the perceived credibility of the scenario source and to the trust in the scenario itself

The discussion of theoretical concepts in this chapter has presented ‘credibility’ as a key factor for plausibility assessments. Interestingly, this concept features both in normative notions of plausibility (e.g. in Rescher’s (1976) guidelines on how plausibility assessments should be performed) and in elaborations on how individuals actually make judgments (Lombardi et al 2015). Overall, two different notions of credibility are used and need to be clarified. On the one hand, the credibility of the source of narratives or statements is emphasised by several scholars across disciplines. For Rescher (1976:6), the plausibility of a proposition is based on the credibility of its source; he labels this an “authority-oriented approach to plausibilistic inference” and acknowledges that ‘source’ may mean the author of a statement, but also the type of knowledge used in the statement. The credibility or ‘trustworthiness’ of the source is furthermore emphasised within notions of educational and cognitive psychology, e.g. when individuals assess scientific versus media-based statements about climate change (Lombardi et al 2015), or in argumentative discourse analysis with regard to policy analysts’ perceived authority in a field (Majone 1989). On the other hand, in conceptual discussions of plausibility, the credibility or trustworthiness of the statement or narrative itself is assumed to play a major role. In their models-of-data theory, Chinn & Brewer (2001) assume that individuals assess the presented data as well as the connections between the data. With this level of detail, this explanation can also be found in Majone’s (1989) analysis of argumentation, and as an omnipresent rationale in narrative research more generally. Of both forms of credibility – of the source and of the scenario itself – the latter may play a more significant role in scenario plausibility, since often the source of a scenario is not exactly known to its users (see chapter 4.1). However, the credibility of the source is also integrated in this proposition, for two reasons: First, discussions in narrative research maintain that even if the author is unknown, readers still posit an ‘implied author’ to make assessments (Abbott 2002). Second, in the scenario literature, the relevance of trust and credibility has been conceptually discussed by Selin (2006), who distinguishes between trust in scenario sources, in scenario methods, in the content and in the narrative.
Taking these still marginalised debates in scenario research forward, this proposition and the related hypothesis (H4) constitute an advancement for the field.

Proposition 3: A scenario user’s plausibility judgment of a scenario is linked to whether the scenario corresponds with the user’s own worldviews

Models from cognitive and educational psychology see great relevance in ‘concept coherence’ for plausibility judgments. Connell & Keane (2004:96) maintain that “some concept, scenario, or discourse is plausible if it is conceptually consistent with what is known to have occurred in the past”. The authors operationalise this notion as degrees of complexity, corroboration and conjecture between a statement and what an individual ‘knows’. This is also evident in Chinn & Brewer (2001), who assume that individuals contrast given statements with different forms of ‘knowledge’, ranging from individuals’ fact-based background knowledge to underlying theories or intuitive gut feelings about how the world works. Researchers from narrative theory distinguish between, on the one hand, what we believe in and what we know about the world, and on the other, what corresponds best with accepted guidelines and norms in a given society (Brockmeier & Harré 2001). In this respect, the unruliness of individuals’ own worldviews in contrast to given plausibilities is often emphasised. Majone (1989), for instance, notes that arguments must be very convincing for people to give up their own plausibilities and seriously consider other explanations and actions. This is also evident in the plausibility model PJCC; here, Lombardi et al (2015) maintain that conceptual or epistemic change may only happen if a given statement’s plausibility is assessed to be significantly higher than one’s own. For this reason, the hypothesis is put forward that a scenario is judged plausible if it features many corroborations and only few conjectures with respect to individuals’ own worldviews and expectations about the future (H5). This is also a particularly relevant aspect from the scenario research perspective. Here, scenarios are often assumed to challenge individuals’ previously held opinions about the future and even to challenge existing mental models (Chermack 2005; Glick et al 2012). Yet this widely held assumption is also increasingly questioned (Bradfield 2008).

Proposition 4: A scenario user’s plausibility judgment of a scenario is linked to her/his cognitive styles

Models of plausibility by cognitive psychologists hold that when individuals are confronted with decision tasks, their cognitive styles, i.e. the way they approach and process data, play an important role.
Lombardi & Sinatra (2013), for instance, found that decisiveness, meaning an urgent need for a person to reach closure on an open matter, was significantly related to the plausibility judgments of science teachers. This notion is particularly relevant with respect to the distinction Lombardi et al (2015) draw between plausibility judgments made implicitly and quickly (‘mode 1’) versus more consciously and explicitly (‘mode 2’), and hence between the cognitive efforts individuals put into assessing a statement. In this respect, the authors also note that several cognitive heuristics (e.g. availability or representativeness) play a role in plausibility judgments, meaning that what is most readily available in individuals’ minds will drive assessments. The relevance of cognitive heuristics for plausibility judgments is also acknowledged by research that sits at the interface between
cognitive psychology and narrative theory (Canter et al 2003). The effect of cognitive styles is investigated in this book for two reasons: First, longstanding empirical research on human judgments of probability has demonstrated the prevalence of certain cognitive patterns (see chapter 3.2). The research in this book therefore seeks to explore whether these powerful heuristics may also be reflected in plausibility assessments. Second, scenario researchers have been discussing the cognitive styles of scenario actors for some years now. For instance, they assume that the competence and expertise of participants are key for the quality of scenarios (Molitor 2009:86) and even propose to select scenario builders based on their information-processing and problem-solving styles (Hodgkinson & Healey 2008). In assuming that scenario users’ cognitive styles also make a difference for scenario plausibility, the hypotheses (H6, H7) constitute a furthering of scenario research.

Proposition 5: A scenario user’s plausibility judgment of a scenario is linked to the internal structure of the scenario, its argumentation strength and persuasiveness

Besides the assumed effects of context factors (e.g. credibility) or psychometric factors (e.g. cognitive styles), the relevance of a scenario’s internal data structure presents a common thread in the discussed theoretical concepts of plausibility. Linguistic, structuralist approaches to narratives maintain that the sequence in which sentence components are structured directly influences plausibility judgments (Labov 1972). According to this research strand, typical structures such as Abstract – Orientation – Complicating Action – Evaluation – Resolution positively affect plausibility. Next to narrative approaches, cognitive psychology also holds that the internal structure of a scenario matters for its plausibility judgment.
Chinn & Brewer (2001) assume that the linkages between arguments (causal, attributional, temporal) are carefully reconstructed and subsequently assessed by individuals. Following Connell & Keane (2006), if this structure is too complicated for individuals to deconstruct and understand, plausibility decreases. Majone’s (1989) ‘quasi-judicial’ approach to argumentation likewise assumes that people assess the data and information as well as the methods and tools to decide whether an argument offers admissible evidence. He argues most strongly for the importance of persuasiveness if a scenario is to be accepted as plausible. In a sense, this is also evident in more formal notions of plausibility assessment. Rescher (1976), for instance, notes that as a preparatory step, an evaluator looks out for logical inconsistencies between
two or more statements. Plausibility becomes a matter of formal consistency. Given the importance of a scenario’s internal structure, as acknowledged across different academic disciplines, hypothesis (H8) assumes that the formal consistency and logical connections between scenario components, as perceived by the scenario user, are positively linked to plausibility assessments.

5 Empirical research: Methodology to study scenario plausibility

Studying scenario users’ plausibility judgments of a set of given scenarios is at the centre of this research. Chapter 4 reviewed theoretical concepts of plausibility across different academic disciplines and related them to the context of scenario planning. This led to five research propositions with nine hypotheses, which are now operationalised and tested in an empirical study. Still, even in the explored academic disciplines that offer more nuanced conceptions of plausibility, empirical research on what plausibility means and how it is perceived is underrepresented. Experimental study designs form a small exception in narrative and psychological research (Canter et al 2003; Lombardi et al 2016b; Lombardi et al 2015; Lombardi et al 2014; Lombardi & Sinatra 2013; Nahari et al 2010). In particular, the approaches from cognitive and educational psychology offer detailed experimental classroom sessions to analyse plausibility judgments under controlled circumstances. The theoretical concepts have suggested that plausibility is highly context-sensitive and may vary across settings. To operationalise the propositions for scenario contexts, an experimental study design therefore constitutes a promising approach: in an experiment, the contexts under which plausibility judgments are made can be controlled and manipulated. This allows for ‘planned observations’ of a number of variables that the theory suggests are relevant (Fuchs-Heinritz et al 1994:190). Furthermore, the present study can build on previous attempts to observe plausibility, so that findings can be compared and reflected against insights from empirical research in narrative theory and psychology. For the experiment, the propositions and hypotheses are operationalised as summarised below:


Proposition 1: Plausibility judgments are linked to the format of scenario presentation (IL and CIB)

H1: Individuals judge the plausibility of the narrative-based IL scenarios higher than the matrix-based CIB scenarios.
H2: Individuals change their plausibility judgments of IL- and CIB-based scenarios over time after conducting critical evaluations of the scenarios.
H3: Individuals with a disciplinary background in engineering judge the plausibility of CIB-based scenarios higher than that of IL-based scenarios.

Operationalisation:
• Scenarios based on the methods IL and CIB are operationalised as stimuli.
• Plausibility judgments of the two IL-based and the two CIB-based scenarios and the overall sets (YIL1, YIL2, YILSet, YCIB1, YCIB2, YCIBSet) are gathered at T1 and T2 from students with a background in engineering (eng) and social sciences (soc).

Proposition 2: Plausibility judgments are linked to perceived credibility and trustworthiness

H4: Scenario plausibility judgments are positively related to the perceived credibility of the scenario itself and to source credibility.

Operationalisation:
• Credibility related to the scenario itself is collected as trustworthiness (X1) of a scenario.
• Source credibility is collected as perceived expertise (X2) of the scenario authors.

Proposition 3: Plausibility judgments are linked to users’ worldviews and prior beliefs

H5: A scenario is judged plausible if it corresponds with the individual’s own beliefs and expectations about the subject matter.

Operationalisation:
• Individuals perceive a scenario to match their own ideas (X5) or consider the scenario pure speculation (X4).
• Factors are ranked as important success factors for the subject matter from the individuals’ perspectives, e.g. the oil price (X6).
• The desirability of the scenario (X11) is collected.


Proposition 4: Plausibility judgments are linked to users’ cognitive styles

H6: Scenario plausibility judgments are negatively related to an individual’s need to immediately reach closure on an undecided matter.
H7: Scenario plausibility judgments are positively related to an individual’s interest in reading the scenario.

Operationalisation:
• An individual’s need for making decisions is collected via a 16-item research scale on the ‘Need for Cognitive Closure’ (X8), as derived from Lombardi & Sinatra (2013).
• Individuals are asked how interesting the scenario was to read (X7).
• Eight questions test participants’ background knowledge (X9) of the subject matter.

Proposition 5: Scenario plausibility judgments are linked to the internal structure of the scenario and its argumentation strength

H8: When judging a scenario’s plausibility, individuals pick up on the scenario components, the (causal) relations and complexity of components as well as the persuasive presentation of the argumentation.
H9: When judging a scenario’s plausibility, individuals refer to the scenario authors and the methods/procedures from which the scenario emerged.

Operationalisation:
• The degree of the scenario’s complexity (X3) as perceived by the individual (derived from Keane & Connell 2005).
• The extent to which individuals perceive the scenario to be internally consistent (X10).
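To make the within-subject logic behind H1 concrete, the paired comparison of the plausibility ratings YILSet and YCIBSet can be sketched in a few lines. The ratings below are invented for illustration and are not data from this study:

```python
from statistics import mean

# Hypothetical 5-point Likert plausibility ratings from the same
# participants (within-subject): each person rates both scenario sets.
y_il_set = [4, 5, 3, 4, 4, 2, 5, 4]   # invented ratings of the IL set
y_cib_set = [3, 4, 3, 3, 4, 2, 4, 3]  # invented ratings of the CIB set

# Paired differences: positive values point in the direction of H1
# (the same participant judges the IL set more plausible than the CIB set).
diffs = [il - cib for il, cib in zip(y_il_set, y_cib_set)]
print(mean(diffs))  # 0.625 for these invented ratings
```

Because every participant rates both sets, each difference controls for that person's overall rating tendency; the study's actual statistical analysis may of course use other, more formal paired tests.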

5.1 Experimental structure

To study plausibility judgments of scenarios, treatment conditions are created and controlled in the experimental setting, so as to manipulate independent variables and randomly assign participants to different conditions (Druckman et al 2011:23; Holbrook 2011:264). This allows for a stringent form of testing the theoretically derived propositions (Atteslander 2008). The design also requires control over the study material, i.e. the scenario reports that form the basis for participants’ judgments. Judgmental tasks likely depend not only on the content of the reports, but also on the form of presentation, the report layout and design, and the (assumed) sources of the reports. Therefore, the scenario reports need to be standardised with regard to layout and the information given about the scenario developers, while differing in the actual scenario presentation (narrative vs. impact diagrams and matrix). For this reason, the scenarios and scenario reports were developed in a controlled workshop setting as a preparatory step for the experiment.

The actual experimental part does not constitute an experiment in a strict methodological sense but follows a semi-experimental set-up (Sarris & Reiss 2005). On the one hand, the design adheres to clear methodological guidelines for laboratory research, with a high degree of standardisation and manipulation of the experimental material. On the other hand, in a judgmental task not all relevant variables can be controlled or manipulated, for instance study participants’ previous knowledge and experience, their beliefs and mental models. Particularly from the perspective of psychological experimental research, these ‘organism variables’ are potentially confounding variables in the strict methodological sense (Sarris & Reiss 2005:32) but can still be considered in semi-experimental settings¹. One possibility to do so is the transformation of potentially confounding variables into experimental control factors, as is done in this study (see chapter 5.4).

The development of the experimental material and the actual experimental design were tested and refined during two periods of pilot testing. The first pilot test served to check the feasibility of the planned design and the timing of the experiment, and to explore the feedback participants had when confronted with scenario reports. At the start of both pilot tests, participants were informed about the early stage of the experimental design and were invited to note questions, comments and feedback throughout the study and share them at the end of the pilot run.
A key lesson from pilot test 1 was that the development of the experimental material (Part A) and the plausibility judgments (Part B) need to be kept separate in terms of participants: participants who developed the scenarios should not also assess their plausibility. Additionally, the pilot study revealed that scenario plausibility should be captured both quantitatively and qualitatively. Following the insights of pilot test 1, the second pilot was twofold: for Part A, scenarios were developed by participants using either the IL or the CIB method, while all participants were well informed about both the subject matter and the scenario techniques through dossiers, glossaries, information sheets and guidance steps.

¹ In methodological discussions, studies that take into account organism variables may also be called correlation studies (Sarris & Reiß 2005).


For the final implementation of the study, two scenario reports (one per method) were selected by the researcher (RSS) and consistently developed and formatted to serve as material for the experimental Part B. As part of the pilot study, the tools and items for Part B were further refined and tested with test respondents. The focus here was on the understandability and clarity of the scenario reports and questionnaires.

5.2 Experimental material

Two scenario reports, each containing two individual scenarios, represent the core experimental material. They form the basis for study participants’ plausibility judgments. The scenarios were specifically designed for two reasons: First, high standardisation and control of treatment conditions require the two reports to differ mainly in the way the scenarios were developed and presented; other variables (the subject matter of the scenarios, length of the reports, information about the developers, layout of the reports, etc.) were standardised². Second, the existing ‘market of scenarios’ does not offer suitable scenarios for this experimental study, because original scenario reports are often very long and differ in their attention to specific topics³. During pilot test 2, a group of master-level students developed the scenario reports in the course of a guided workshop session. As the overall topic of both reports, the German energy system transformation with a specific focus on societal and political transformation was chosen. Energy transformation processes not only reflected vivid societal debates at the time of the experiment; they also take up scholarly discussions on the applicability of both the IL and the CIB method. In the box below, the construction process is briefly described.

² This design is in line with existing narrative research in which the same information for experimental participants was prepared in causal or non-causal formats (Dahlstrom 2010).
³ Indeed, the scenario reports could also simply have been made up by the researcher (RSS) herself. Since she already possessed some expertise in the CIB method, both scenario reports could have been biased towards her own knowledge and her ideas about the hypotheses.


In a workshop setting, participants were first given a detailed introduction to the scenario methods IL and CIB. Subsequently, participants were provided with a predefined list of uncertainty factors with clear definitions of their assumed relevance for energy transformation processes in Germany. In line with the guidelines of the scenario methods (van der Heijden 2005; Weimer-Jehle 2006), participants were asked to choose at least seven factors for the scenario construction with IL, while the CIB matrix included all factors. The list of factors was derived from the research project ENERGY TRANS (Weimer-Jehle et al 2016, 2020); furthermore, an extensive review by Gallego Carrera et al (2013) on societal context factors of energy was consulted to ensure that relevant and currently debated factors were represented. Although this standardisation of input factors might have compromised participants’ creativity and left out an important step in the scenario development process, it was considered necessary. By providing participants with a list of possible descriptors, the analysis is not centred on the question of how scenario developers may influence the selection of descriptors based on their perceived importance, which is well documented, for instance, by Bradfield (2008) and Metzger et al (2010). The determination of input factors also ensured that the scenario reports featured the same topics and remained comparable. Participants were given three hours to complete the scenarios for IL, while CIB developers were granted an additional hour, since experience shows that completing the CIB matrix is more time-consuming. After the workshop, the researcher (RSS) reviewed the developed scenarios and refined their language to improve readability.
As a last step, the four scenarios (two from each method) were put together into scenario reports featuring the same introduction, a short method description, and the presentation of the scenarios. The ‘professional’ processing and presentation of the scenario reports was done to ensure equal conditions of appearance.

Some key characteristics of both scenario reports are summarised in table 6. Note that the characteristics are derived from the methods’ descriptions but are not necessary conditions of the scenario methods, i.e. CIB scenarios could, in principle, also be given names. They are used here to differentiate the two scenario formats more clearly.


Table 6: Differences in formats of scenario reports

IL Scenario Report:
• 2 scenarios
• Narrative structure: stories about how the future could evolve
• Scenarios have names: “We can do this”, “A matter of conscience”
• Narratives make use of emotions and metaphors: “on whose back are we building our own living standards?”

CIB Scenario Report:
• 2 scenarios
• System-based structure: matrix and diagram networks about how uncertainty factors interrelate
• Scenarios are not named: Scenario 1 and Scenario 2
• Focus on factor relations: “In a world of global confrontation, high oil prices are more likely, because cooperation/trade agreements between states are limited.”
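The factor-relation logic that distinguishes the CIB format can be made concrete with a minimal sketch of CIB’s consistency rule (after Weimer-Jehle 2006): a scenario counts as internally consistent if, for every factor, the chosen state receives an impact balance at least as high as any of its alternative states. The two descriptors, their states and the impact values below are invented for illustration and are not taken from the study material:

```python
# Minimal sketch of the CIB consistency rule (after Weimer-Jehle 2006).
# Descriptors, states and impact values are invented for illustration.

# Each descriptor has alternative future states.
descriptors = {
    "world_order": ["cooperation", "confrontation"],
    "oil_price": ["low", "high"],
}

# Cross-impact judgments on an ordinal scale (-2 .. +2):
# impact[(d1, s1)][(d2, s2)] says how strongly state s1 of descriptor d1
# promotes (+) or restricts (-) state s2 of descriptor d2.
impact = {
    ("world_order", "cooperation"): {("oil_price", "low"): 2, ("oil_price", "high"): -2},
    ("world_order", "confrontation"): {("oil_price", "low"): -2, ("oil_price", "high"): 2},
    ("oil_price", "low"): {("world_order", "cooperation"): 1, ("world_order", "confrontation"): -1},
    ("oil_price", "high"): {("world_order", "cooperation"): -1, ("world_order", "confrontation"): 1},
}

def is_consistent(scenario):
    """A scenario (one state per descriptor) is internally consistent if,
    for every descriptor, the chosen state's impact sum from all other
    chosen states is at least as high as that of any alternative state."""
    for d, chosen in scenario.items():
        def balance(state):
            return sum(
                impact[(d2, s2)].get((d, state), 0)
                for d2, s2 in scenario.items() if d2 != d
            )
        if any(balance(alt) > balance(chosen) for alt in descriptors[d]):
            return False
    return True

print(is_consistent({"world_order": "confrontation", "oil_price": "high"}))  # True
print(is_consistent({"world_order": "confrontation", "oil_price": "low"}))   # False
```

In the real study material the CIB matrix covered many more factors, so only a few of all state combinations survive this consistency check; those survivors become the ‘Scenario 1’ and ‘Scenario 2’ of the CIB report.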

Table 7 presents the overall structure of both scenario reports; both reports feature identical introduction texts regarding the purpose of the scenarios and the authorship. The introduction is followed by a paragraph on the respective methodology used for developing the scenarios. The final reports each contain two different scenarios; the combination of the two scenarios within one report is referred to as the ‘scenario set’⁴.

Table 7: Structure and content of scenario reports

Identical introduction on the purpose of the reports. The scenario reports seek to:
• Explore societal contexts of the energy transformation
• Elaborate and analyse multiple pathways to demonstrate the uncertainty of future developments
• Authorship: a group of researchers developed the scenarios

Scenario Report based on the method Intuitive Logics (IL):

Description of the IL methodology: discursive-analytical process
• Step 1: Data analysis: identifying and characterising potential impact factors
• Step 2: Scenario structuring: using scenario axes to form the structure and order impact factors accordingly
• Step 3: Transferring into storylines

Scenario title: “We can do this”
• Religious and political conflicts nationally and globally give rise to the ‘Energiewende’ as a common threat that unifies citizens
• The international community fails to cope with global confrontation; oil prices increase, resources are scarce
• The German government aims for a secure and sustainable energy supply through the expansion of renewables
• Bottom-up development: founding of cooperative societies, investments in decentralised energy production
• Political support schemes boost effects and promote the identity building of communities
• Decentralised systems do not necessitate expensive infrastructure development; the previous increase in energy prices can be stopped
• New concepts (smart grids) become popular; the overall positive mentality leads to constant economic growth

Scenario title: “A matter of conscience”
• Objectives to reduce climate change have been unsuccessful; the international community is behind its emission reduction goals; there are no incentives to move away from fossil fuels
• After years of falling short of the expectations of the Paris 2015 summit, the German government takes a different stance: it wants to take a determining role
• Moral-ethical debate: what is the price of the high German living standard?
• Top-down approach on behalf of the government: regulatory mechanisms to influence economic actors’ and citizens’ behaviour also give rise to dissatisfaction and protests
• Increased participation in infrastructure planning to ensure public acceptance

Scenario Report based on the method Cross-Impact Balance Analysis (CIB):

Description of the CIB methodology: systematic cross-impact approach
• Step 1: Defining system factors and their future states, composing a matrix
• Step 2: Defining the relations between the future states in the matrix using an ordinal scale
• Step 3: Determining internally consistent systems

Scenario title: “Scenario 1”
• Global confrontation
• High oil prices
• Low economic growth
• ‘Secure supply’ as the dominating energy priority of the German government
• Citizens tend to be rather sceptical of the energy transition in general
• Infrastructure planning is executed based on the principle of speed rather than participation
• Renewable energy technologies and the necessary infrastructure are developed slowly
• Impacts of climate change are increasingly visible/tangible for citizens
• The mutual influence of the factors is explained using an ordinal scale in the matrix and impact networks

Scenario title: “Scenario 2”
• Global confrontation
• High oil prices
• Low economic growth
• ‘Sustainable supply’ and ‘environmental friendliness’ as the dominating energy priorities of the German government
• Bottom-up process: citizens tend to approve of the energy transition in general
• Infrastructure planning is executed based on the principle of participation rather than speed
• Renewable energy technologies and the necessary infrastructure are developed fast
• Impacts of climate change are increasingly visible/tangible for citizens
• The mutual influence of the factors is explained using an ordinal scale in the matrix and impact networks

⁴ Both scenario reports can be found in the supplementary material of this book (Schmidt-Scheele 2020). Since the experiment was conducted with German students, both reports are in German.

5.3 Procedures and data collection

This experimental study follows a within-subject design to analyse participants’ judgments of the plausibility of scenarios. Data were collected in a pre-questionnaire and a classroom session using two treatment groups. The procedures are explained below; critical aspects are discussed and reflected upon.

Pre-questionnaire
Several empirical studies and theoretical concepts assume that plausibility judgments are influenced by recipients’ cognitive heuristics as well as their own beliefs and expectations (Canter et al 2003; Caron 1992; Lombardi et al 2015). Such items were collected prior to the study’s classroom-based session. To ensure a time delay between the explicit collection of participants’ own expectations and the plausibility judgments, the online questionnaire had to be completed at least two weeks prior to the classroom-based session. The questionnaire asked for participants’ overall agreement with the study’s conditions, their pre-conceptions of energy-related topics and 16 items related to the need for cognitive closure (NCC).

Classroom setting
For logistical reasons, five classroom-based sessions were conducted, with participants (n = 55) distributed across these dates, i.e. every participant attended only one session. All sessions took place in the same seminar room at the University of Stuttgart and were conducted by the same instructor (RSS). During the implementation of the study, the instructor adhered to careful wording to avoid any evaluative or judgmental statements that could point participants towards favouring one method over the other. At the beginning of each session, the guideline and schedule were presented; after that, no content-related questions were allowed from participants to ensure the same amount of information in each of the five sessions.

Experimental treatment groups
The two scenario reports were used as the experimental treatment. To allow for a direct comparison of both scenario formats in the within-subject design, participants received both treatments, i.e. they were presented with both scenario reports and asked to make judgments at several points during the experiment. A control group was not established, since the treatment itself constituted the basis for all judgmental tasks; eliminating the treatment (the scenario reports) for a control group would not have yielded helpful insights into scenario plausibility judgments.
This design choice is based on previous experimental research on the effects of narrative persuasion, mostly in the field of health risk perception, where participants were presented with traditional information formats (e.g. statistics) and narrative formats (de Wit et al 2008; Durkin & Wakefield 2008; Wise et al 2008). A within-subject design, therefore, theoretically does not need a randomisation of groups (in contrast to a between-subject design in which individuals receive only one of several treatments). Yet, in within-subject designs where individuals receive


both treatments, sequence effects need to be accounted for. This means that the first treatment could receive higher attention, and therefore different reactions, than the second, simply because of the order (‘carry-over effect’, see Sarris & Reiss 2005:89-90). To account for this effect, the experiment entails a cross-over feature: participants are randomly assigned to Treatment Group 1 or 2, so that they statistically have the same chance of receiving scenario report IL or CIB first (figure 10).

Figure 10: Experimental procedures
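The randomised cross-over assignment shown in the figure can be sketched in a few lines; the function name, group labels and participant identifiers below are illustrative, not taken from the study:

```python
import random

def assign_crossover(participants, seed=0):
    """Randomly split participants into two cross-over treatment groups:
    Group 1 reads the IL report first, Group 2 the CIB report first.
    Every participant receives both reports; only the order differs."""
    rng = random.Random(seed)  # fixed seed here only for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "group_1": [(p, ("IL", "CIB")) for p in shuffled[:half]],
        "group_2": [(p, ("CIB", "IL")) for p in shuffled[half:]],
    }

# Hypothetical participant labels for n = 55, matching the study's sample size.
groups = assign_crossover([f"P{i:02d}" for i in range(1, 56)])
```

With 55 participants the split is 27/28; since assignment is random, order (IL-first vs. CIB-first) is statistically independent of any participant characteristic, which is what neutralises the carry-over effect in the aggregate.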

Instructions
Instructions for participants were the same across the treatment groups: participants were invited to behave as potential decision-makers who need to make decisions about future changes to energy systems in Germany. During their “work”, they are confronted with different scenario studies and are invited to read and assess the scenarios. Participants were not given the information that the main purpose of this study was to empirically define the concept of plausibility. The study started with participants being presented with the first scenario report and the instruction to read it carefully for 30 minutes. This was followed by the first questionnaire on the plausibility of the overall report and the individual scenarios. The second scenario report then followed with the same procedure. With both scenario reports in front of them, participants received a longer questionnaire targeting both reports, asking them to assess the scenarios on different items with closed and open questions.

Critical evaluation task
In addition to these judgments, participants were asked to evaluate the given scenarios more critically after their first round of plausibility judgments. This assignment was included because plausibility models from cognitive psychology maintain that quick, less reflective assessments (judgments under ‘system 1’) differ from more conscious and critical assessments (‘system 2’) (Lombardi et al 2015). To trigger a more thorough engagement with given material, educational psychologists suggest providing participants with critical questions for re-evaluating the material. This task was adjusted to the context of scenario planning: Grunwald (2015) has offered normativity, traceability and consistency as three categories for scrutinising scenarios. These dimensions were provided to participants with the task of evaluating the scenarios. The categories were not introduced as dimensions of plausibility but served as an intervention in the experiment to trigger critical engagement with the scenarios. In line with the models-of-data theory (Chinn & Brewer 2001), participants were then asked to write down their reasons for the plausibility or implausibility of the scenarios.
At the end of the experiment, participants received a final questionnaire again entailing plausibility judgments of all scenarios and reports (T2) as well as questions concerning their prior knowledge of the subject matter and socio-demographic questions.

Nature of data collected
In the experiment, quantitative and qualitative data were collected (table 8) with the purpose of data triangulation. Triangulation has been popular but also strongly criticised in the social sciences, particularly when it pertains to different forms of knowledge collection (Olsen 2004). Since both the qualitative and the quantitative data come from the same set of participants and were collected


at the same time on the same subject matter, the qualitative data can be used to offer more nuanced perspectives on human judgment in contexts for which the quantitative data were less suitable (e.g. proposition 5).

Table 8: Experimental data collection

Collecting plausibility judgments
The core objective of this research has been to better understand what goes into an individual’s plausibility assessment of a given scenario. One of the major difficulties of this experiment has therefore been the need to collect participants’ plausibility judgments on an operationalised scale without being able to provide them with a clear-cut definition of what plausibility is. First, no clear-cut definition is available in scenario planning (hence the relevance of this study). Second, had participants been provided with all the factors assumed to be involved in plausibility (i.e. the research propositions derived in this study), they would likely have followed that definition due to effects of social desirability or simply the availability of convenient angles for their judgments. At the same time, some instruction was deemed necessary. Given this challenge, the study followed previous experimental research


by Lombardi et al (2015), who provided a working definition of plausibility. This working definition was adjusted to the context of scenarios: “We understand scenarios not as forecasts, but as possible alternative futures. Scenarios thereby also cover futures that may not be very probable, but still plausible. In this context, we define ‘plausibility’ as the potential truthfulness of the scenario. The question about a scenario’s plausibility is: ‘Can I imagine this scenario coming true based on its descriptions?’” Plausibility judgments were collected on a 5-point Likert scale⁵, in line with previous experiments on plausibility judgments (Canter et al 2003; Nahari et al 2010). The decision to provide a working definition of plausibility makes it possible to investigate whether the factors identified in the theoretical concepts of plausibility can also be found in an artificial, yet empirical, context.

⁵ A 5-point scale was chosen for the following reasons: Given the novelty of being asked for plausibility judgments, participants were given the opportunity to opt for a “middle category” (rank 3). Additionally, a scale larger than 5 (e.g. 7, 9, 11) was not chosen because the small number of participants would likely have caused a very widespread distribution of ranks.

Sequence of items
A particularly sensitive topic in the design of this study was the sequencing of questionnaire items to avoid sequence effects in participants’ answers. However, given the assumed mutual relationships between certain items, the final design inevitably entails potential trade-offs. The rationale for the chosen sequence is briefly outlined here: Items about pre-conceptions of energy-related aspects as well as items relating to participants’ need for cognitive closure (NCC) were collected no less than two weeks prior to the collection of plausibility judgments. By ensuring an interval of at least two weeks, the aim was that participants would not have their answers on energy-related attitudes fresh in mind when confronted with the scenarios. Likewise, the interval was intended to prevent participants from drawing inferences from the 16 NCC items. Items about participants’ prior knowledge and demographics were purposefully left to the very end of the last questionnaire. This was primarily to prevent assessments of prior knowledge from influencing plausibility judgments and to keep prior knowledge and pre-conceptions apart in the questionnaires. Demographic questions, as studies have shown,


can have an influence on people’s subsequent answering habits and were, therefore, also left until the end. Assessment items that, according to the hypotheses, are assumed to determine plausibility judgments (credibility, interest, complexity, etc.) were asked after the first plausibility judgments had been recorded.

Participants
The study is interested in the scenario plausibility judgments of scenario users who are not involved in the development process, so-called ‘user-recipients’ (Pulver & VanDeveer 2009). However, as discussed in chapter 2, the scenario literature does not provide clear definitions or characterisations of this user group. Given the exploratory nature of this study, potential users are characterised as having general knowledge about the subject matter as well as a general interest in engaging with scenarios. These requirements can be met by using students as study participants. Thus, master-level students from social sciences and engineering programmes at the University of Stuttgart were recruited; the requirement for participation was that the study programme had a dedicated focus on energy. The sample aims to reflect the diverse backgrounds of scenario users, who often come from different disciplines. Participants were recruited through different channels: the researcher (RSS) personally visited lectures and seminars of the relevant study programmes to raise attention for the study. Furthermore, the call for participation was disseminated through posters on bulletin boards, email newsletters and social media. The students therefore do not represent a random sample, but an ad-hoc sample. Students were, however, randomly assigned to the two experimental groups. Random assignment is a step towards overcoming the problem of causal inference in that participants cannot choose the treatment they prefer but are randomly assigned to a condition.
Hence, every participant has an equal chance to be in a particular treatment group (Druckman et al 2011:20). Potential participants were given almost complete information about the study, including that a) the study will be about scenarios on German energy systems transformations until 2050, b) they will be confronted with different scenarios during the study and c) they will get compensated for a total of four hours of work (9€/ hour).
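The random, discipline-balanced allocation described here can be sketched in a few lines of Python. This is a simplified illustration under assumptions of my own: the participant tuples, the stratification key and the fixed seed are invented, not taken from the study’s materials.

```python
import random

# Stratified random assignment: shuffle within each discipline stratum and
# alternate allocation, so both treatment groups receive a balanced mix of
# disciplines while each individual's group remains a random draw.
def assign_groups(participants, strata_key, seed=42):
    rng = random.Random(seed)            # fixed seed for reproducibility
    groups = {1: [], 2: []}
    strata = {}
    for p in participants:
        strata.setdefault(strata_key(p), []).append(p)
    for members in strata.values():
        rng.shuffle(members)
        for i, p in enumerate(members):
            groups[1 if i % 2 == 0 else 2].append(p)
    return groups

# Invented participant list: (id, discipline)
people = [(i, "social" if i < 26 else "engineering") for i in range(55)]
groups = assign_groups(people, strata_key=lambda p: p[1])
```

Alternating after a within-stratum shuffle keeps the disciplinary composition of the two groups within one participant of each other, which mirrors the balanced allocation reported later in table 10.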


The Plausibility of Future Scenarios

5.4

Considerations of the study’s validity

Experimental designs need to be carefully analysed regarding possible threats to their validity; the four types of validity proposed by Cook & Campbell (1976) serve as a guide.

Inferential statistical validity

Given the exploratory nature of this study, it does not aspire to draw conclusions about a total population or to put forward generalisations. While the findings of this study are based on a relatively small sample size, the study draws on rich data, including qualitative data from open questions and quantitative data from five questionnaires. The statistical models on plausibility are therefore a starting point for better understanding and conceptualising plausibility. However, replications and cross-validation studies are needed so that the findings can be supported or adjusted.

Construct validity

To put forth valid propositions on the dynamics of scenario plausibility judgments, independent and dependent variables needed to be carefully operationalised so that both reflect the real nature of the construct and the current state of scholarly research (Sarris & Reiss 2005:40). In this study, the independent variables, i.e. the scenario reports, resemble the outputs that prospective scenario users receive in practice. The study has demonstrated that both the scenario methods IL and CIB and the subject matter, energy systems transformations, correspond with current dynamics in energy policy debates. The variables pertaining to plausibility are well anchored in the present academic literature. Yet, in keeping with the exploratory nature of this study, the concept of plausibility may need to be conceptually and empirically expanded in future experimental studies. This also pertains to variables that could not be tested in this study but may influence plausibility judgments, for example cultural and/or demographic differences in the perception of the future.
It can also be assumed that different age groups view the future differently: older individuals, for instance, possess more life experience than younger people and may therefore have developed stronger (positive and negative) associations. Such general attitudes towards the future could impact participants’ plausibility judgments.


Internal validity

In experimental studies, it is critical to determine whether the dependent variable results from the independent variables (in this case the scenario reports). In this quasi-experimental study, independent variables are used as treatment variables. Emphasis is placed on their effects in terms of correlations between plausibility judgments and other perceptions of participants regarding the presented scenarios. In a ‘perfect’ experiment, variations in the dependent variables can be fully attributed to the experimental manipulation. Yet, in psychological experiments on cognitive-social processes, studies often show a degree of data fluctuation that in part has to be attributed to not yet known variables that unsystematically impact the dependent variable (Sarris & Reiss 2005:36). Several guidelines exist to increase the internal validity of experimental studies; the max-con-min principle (Kerlinger & Pedhazur 1973) constitutes a prominent one. For this study, it guided the maximisation of primary variance (max) by selecting distinguishable groups of participants (participants were classified by different disciplinary backgrounds). Also, the scenario reports (independent variable) were designed to be clearly distinguishable: The IL-based scenario report is strictly narrative, while the CIB-based report features network diagrams and the matrix, although in practice both scenario techniques sometimes use mixed versions. Following the max-con-min principle, systematic errors were sought to be controlled (con). This was done not only through randomising experimental conditions (sequences), but also by integrating potentially confounding variables into the study. All scientifically hypothesised arguments for the manifestation of plausibility judgments were considered and, when possible, integrated. This included personality-related factors such as need for cognitive closure, expectations, beliefs and interests, but also source-related factors such as perceived trustworthiness and expertise of scenario developers. Lastly, error variance was sought to be minimised (min) through a high standardisation of experimental procedures and a tested reliability of measuring instruments. Certainly, not all confounding factors can be accounted for. Two assumptions are discussed below as examples of potential threats to validity.

•	Problems of comprehension: Plausibility judgments of scenarios may be biased by problems of comprehension. As opposed to many experimental studies in which single statements are tested, scenarios are more multifaceted and complex. Wong-Parodi et al (2014), for instance, demonstrated how individuals, including experts, did not fully understand the traditional climate chart presented in the IPCC scenarios. Hence, there is also the danger that participants do not fully understand the scenarios presented in this experiment. While incomprehension cannot be ruled out completely in the quantitative judgments, the qualitative data can give indications of whether participants’ judgments may have been influenced by a lack of understanding. Additionally, given that the reports reflect real situations in scenario practice, this insight is in itself informative for the study, because plausibility is often also related to comprehensibility.
•	Duration of study: At four hours, the study’s duration could be considered (too) long and may induce fatigue effects or a loss of interest over time. While this cannot be ruled out completely, the change in treatment sequence (i.e. changing the order of scenario reports) controls for a lack of interest in or attention paid to the second report. As the pilot test indicated, most participants felt rather positive about the scenario methods and were interested in getting to know both of them and in learning more about different perspectives on the future of energy in Germany.

External validity

As has been implied throughout this study, the stimuli presented to participants in the experimental study can be considered representative of real, practical situations. In practice, decision-makers are usually confronted with more than one set of scenarios that may have been developed using different approaches and techniques (as represented by the two exemplary methods in this study). Furthermore, decision-makers often need to make a ‘quick scan’ of scenarios or other scientific policy advice outputs as input for decision-making. At the same time, decision-makers also often ‘dig deeper’ and engage with the material at hand. Both types of engagement are represented in the study (questionnaires 1A, 1B, 3 and 4). Two aspects are regularly raised in the context of external validity and require a brief discussion.

•	Students as test persons: Experimental designs using students are often criticised when students do not represent an externally valid sample. Indeed, students lack the experience and background knowledge of the experts and stakeholders usually involved in scenario exercises. Their knowledge of potential energy futures is acquired over a short period of time, whereas in real-world situations, stakeholders, decision-makers or experts are often familiar with the topic for years. At the same time, the scenario literature does not give clear classifications of the user group. Indeed, master students can be assumed to possess analytical skills comparable to those of stakeholders or even experts and are therefore not disadvantaged. Research shows that many experimental findings replicate when conducted again with more representative samples (Holbrook 2011). Also, studies have found that students, although less experienced, act in a similar manner as managers (Bateman & Zeithaml 1989). Regarding this study’s design, one potential bias results from the fact that participants were all master-level students: The questionnaires entailed items about the scientific credibility of the scenario reports. During their university education, students typically come to appreciate an evidence-based, objective understanding of what constitutes ‘good science’. This may have an effect, although a standardised one, on their responses and attitudes towards the credibility of scenarios. In fact, this effect can arguably show in real-life cases as well. Still, the study’s findings, particularly the ones based on credibility, need to be considered in this light.
•	Monetary compensation: Each participant was paid 9 €/hour to compensate for their efforts. Being enrolled in selective master programmes on energy-related topics, students can be assumed not only to have a basic background in the field but also to have some degree of intrinsic interest in the topic. In a discussion of different experimental settings, Gräfe (2009:92) found no effects of monetary incentives on experimental responses. Performance-based incentive mechanisms were not used, although they constitute a popular means in forecasting and decision-making experiments to increase participants’ motivation (Gräfe 2009; Rowe & Wright 1999). Since the experiment does not measure right/wrong answers, such incentives are difficult to implement and could only be awarded based on rather subjective measurements.


6 Experimental study: quantitative research findings

This chapter presents the quantitative findings of the experimental study that tested the theoretically derived propositions and hypotheses. Key findings are summarised at the end of each sub-chapter. A more thorough discussion and reflection of the findings against the literature is provided in the conceptual map (chapter 8). The chapter starts with a brief overview of the statistical measures applied.

6.1

Statistical tests for data analysis

The presented data does not stem from a random sample in the strict methodological sense, because the lengthy study design required participants who were willing and eligible to participate (see chapter 5). Many sociological statisticians consider this a reason to refrain from any inferential statistical analysis, including hypothesis testing (Sahner 2005). Yet, so-called ‘ad-hoc samples’ along with smaller sample sizes are common circumstances in semi-experimental studies in psychology and many clinical disciplines. Heller (2011) considers psychological experiments as sequences of random choices: In addition to the choice of participants, the random allocation of participants to treatments (fulfilled in this study) constitutes a small random choice in itself. Following this interpretation, calculated p-values in hypothesis testing are not primarily interpreted as attempts to draw inferences about an (unknown) population, but as indications of whether observed tendencies in the sample are extreme, i.e. occur more frequently than random chance would predict. The focus, however, remains on analysing the strength of observed relationships.


The study only performs non-parametric tests. Parametric tests, for instance the analysis of variance (ANOVA), which compares the means of distributions, or linear regression analysis (Rasch et al 2006), are not used because their mathematical requirements are not fulfilled, i.e. normally distributed variables, metrical scales and a larger sample. Most variables in this data were collected on 5-point Likert scales and are treated as ordinal scales in the subsequent analysis [1]. Relevant response variables also do not follow a normal distribution; the Kolmogorov-Smirnov test and the Shapiro-Wilk test confirm this assumption [2]. To analyse whether differences in the data regarding plausibility judgments of IL-based and CIB-based scenarios can be interpreted as more-than-chance differences (Proposition 1), frequency distributions, measures of central tendency and analyses of variance help to explore hypotheses H1-H3.

•	Statistical implementation: The Mann-Whitney U test (for unpaired data) and the Wilcoxon test (for paired data) were conducted. The Mann-Whitney U test is an analysis of variance by ranks; the measured values of two variables are ranked from lowest to highest (equal measured values are given a mean rank). H0 assumes that all ranks are randomly distributed among the two variables, while the alternative hypothesis holds that the distributions are not equal. The Wilcoxon test calculates the relation between the rank sums.
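The rank logic of the Mann-Whitney U test can be made concrete with a short sketch. The ratings below are invented for illustration (they are not the study data), ties receive mean ranks as described above, and a real analysis would use a statistics package that also returns p-values.

```python
# Sketch of the Mann-Whitney U statistic: pool both groups, assign ranks
# (tied values share the mean of the ranks they occupy), then derive U
# from the rank sum of one group.

def mean_ranks(values):
    """Rank all values from lowest to highest; ties get a mean rank."""
    sorted_vals = sorted(values)
    rank_of = {}
    for v in set(values):
        positions = [i + 1 for i, s in enumerate(sorted_vals) if s == v]
        rank_of[v] = sum(positions) / len(positions)
    return [rank_of[v] for v in values]

def mann_whitney_u(group_a, group_b):
    """Return the U statistic (smaller U = stronger group separation)."""
    pooled = list(group_a) + list(group_b)
    ranks = mean_ranks(pooled)
    rank_sum_a = sum(ranks[: len(group_a)])
    n_a, n_b = len(group_a), len(group_b)
    u_a = rank_sum_a - n_a * (n_a + 1) / 2
    return min(u_a, n_a * n_b - u_a)

# Invented 5-point plausibility ratings for two unpaired groups:
print(mann_whitney_u([2, 3, 3, 2, 1], [4, 4, 5, 3, 4]))  # -> 1.0
```

The near-minimal U in this toy example reflects that almost every rating in the second group outranks every rating in the first.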

Propositions 2 to 5 (with the hypotheses H4-H8) pertain to the research objective of exploring whether there exists any relation between several factors collected as an individual’s assessment (trustworthiness, expertise etc.) and the individual’s scenario plausibility judgment. Bivariate correlation coefficients are used to analyse whether a relation exists, how strong the correlation is and which direction it points to.

[1] The wide controversies about how to interpret scales are acknowledged here. The 5-point Likert scale (1 ‘very implausible’ to 5 ‘very plausible’) could also be treated as a metrical scale. The decision to treat it as an ordinal scale was made because individuals’ subjective judgments shall not be interpreted as an even metrical continuum. In addition, the 5-point scale was verbalised (‘very implausible’ to ‘very plausible’) in the experimental questionnaires, which speaks for an ordinal rather than a metrical scale.

[2] Given that the study was completed as a classroom-based session, missing values in the data were very limited and need not be discussed as a potential problem for data evaluation.




•	Statistical implementation: For two variables with ordinal scales, Goodman and Kruskal’s gamma (γ) was used. Gamma ranges from -1 (perfect negative correlation) to +1 (perfect positive correlation); for its calculation, pairs of data are compared on whether they rank in the same order (concordant pairs) or in opposite order (discordant pairs). The reasons for choosing gamma are its applicability to different sizes of cross tabulations, its interpretation in terms of the ‘proportional reduction in error’ (PRE), and the symmetry of the tested correlation, meaning that the two tested variables do not need to be classified as independent/dependent (Benninghaus 2007). For relations between a categorical and an ordinal-scaled variable, Cramér’s V, based on chi-square tests, is applied. While the values of chi-square (χ²)-based measures increase with n, Cramér’s V can be interpreted irrespective of the size of the contingency table. All χ²-based coefficients require the expected frequency in each cell to be > 5. In cases where more than 20 percent of cells have lower expected frequencies, Fisher’s exact test is given as an alternative to Cramér’s V. This test is particularly recommended for psychological experiments with small sample sizes (Sarris & Reiss 2005).
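Both coefficients can be sketched directly from their definitions. The implementations below are illustrative only (the test data is invented), and the gamma version ignores tied pairs, as the definition above implies.

```python
import math

# Goodman and Kruskal's gamma from raw ordinal data: compare every pair of
# observations and count concordant vs. discordant orderings (ties ignored).
def goodman_kruskal_gamma(x, y):
    concordant = discordant = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            product = (x[i] - x[j]) * (y[i] - y[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Cramér's V from a contingency table: chi-square normalised by n and the
# smaller table dimension, so it stays in [0, 1] regardless of table size.
def cramers_v(table):
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    chi2 = sum(
        (table[r][c] - row_sums[r] * col_sums[c] / n) ** 2
        / (row_sums[r] * col_sums[c] / n)
        for r in range(len(table))
        for c in range(len(col_sums))
    )
    return math.sqrt(chi2 / (n * (min(len(table), len(col_sums)) - 1)))
```

For perfectly concordant ordinal data gamma returns 1.0, and for a 2×2 table with all mass on the diagonal Cramér’s V returns 1.0, matching the interpretation ranges given above.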

Bivariate correlation analyses do not provide any indication of the causal relationship between variables, i.e. whether a change in related judgments of the scenario may cause the scenario plausibility judgment to change as well. When such relationships are explored in social science research and cannot be derived from indications in previous studies, linear regression analyses are often the best guess. Yet, not only is linear regression ruled out due to data requirements [3], it also assumes that the relationship between one dependent and one or more independent variables is constantly linear (as suggested by a straight line fitted to the data). Human judgment, however, particularly when expressed on a scale, is seldom linear, as longstanding research on human judgment under uncertainty has demonstrated (see chapter 3.2). To investigate hypotheses H4-H8, methods are needed that account for differently distributed slopes in the relationship between two or more variables.

[3] Linear regression analyses require rigid statistical assumptions on the linearity, normality and continuity of data that is drawn from multivariate normal distributions with equal distributions and covariances for all tested variables (Peng & So 2002:32).




•	Statistical implementation: Logistic regression analysis presents an option to explore such dynamics that are assumed to be not entirely linear (but rather shaped as an S-curve); logistic regressions are increasingly used in empirical social science and educational psychology research (Peng & So 2002) and are a popular means of working with ordinally scaled variables. ‘Plausibility models’ are developed to explore relations between the categorical outcome variable [4] (plausibility judgments of scenarios) and one or more categorical or continuous predictor variables (trustworthiness, expertise, need for cognitive closure, etc.). A logistic regression analysis is based on the concept of maximum-likelihood estimation. In contrast to linear regression, the data of the independent variables is not fitted to the dependent one by a straight line. Rather, the relationship between variables is assumed to be more complex and is expressed in odds ratios that explain the odds of an event (a scenario is judged plausible) given certain changes in the independent variables (University of Zurich 2018).
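The odds-ratio logic of such a model can be illustrated with a minimal maximum-likelihood fit in pure Python. The binary predictor and outcome data below are invented (not the study data), the variable names are assumptions of my own, and a real analysis would use a statistics package with proper standard errors.

```python
import math

# Minimal logistic regression with one binary predictor, fitted by gradient
# ascent on the average log-likelihood. The fitted slope b1 is a log-odds
# ratio: exp(b1) gives the multiplicative change in the odds of 'judged
# plausible' when the predictor switches from 0 to 1.

def fit_logistic(x, y, lr=0.5, steps=5000):
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        grad0 = grad1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            grad0 += (yi - p) / n          # gradient w.r.t. intercept
            grad1 += (yi - p) * xi / n     # gradient w.r.t. slope
        b0 += lr * grad0
        b1 += lr * grad1
    return b0, b1

# Invented data: x = 'source judged trustworthy' (0/1),
#                y = 'scenario judged plausible' (0/1).
x = [0, 0, 0, 0, 1, 1, 1, 1]
y = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(x, y)
odds_ratio = math.exp(b1)   # converges towards (3/1)/(1/3) = 9
```

In this toy data the odds of a plausible judgment are 3:1 with a trusted source and 1:3 without, so the fitted odds ratio approaches 9: trusted sources multiply the odds of a plausible judgment ninefold.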

6.2 Experimental sample and treatment groups

The experimental findings are based on a sample of n= 55 participants: 38 men and 17 women between 20 and 37 years of age (mean= 25.38, SD= 3.02). All participants were master-level students of the University of Stuttgart; 52.7 percent studied engineering sciences and 47.3 percent took social sciences programmes. A chi-square test demonstrates a correlation between gender and academic discipline of participants (table 9): Male students are markedly more often enrolled in engineering sciences (Cramér’s V= .549). Participants from both disciplines were enrolled in master programmes with a broad link to energy issues. In the online pre-questionnaire, participants were asked about previous engagement with the subject matter; 69.1 percent (n= 38) of participants reported having engaged with questions of energy transformations in Germany before. A chi-square test shows no statistical association between academic discipline and previous topic engagement. Descriptive statistical tests further show that participants had rather diverse opinions on processes of energy transformations in Germany. When asked how realistic they perceived a realisation of the energy transition goals of the German government, 41.8

[4] Indeed, some versions of logistic regression can be applied to ordinal outcome variables; however, the data sample does not suffice in quantity for such an analysis.
percent of participants thought it to be ‘rather unrealistic’, while 47.3 percent answered ‘rather realistic’; only a few participants made more extreme judgments (‘very [un]realistic’). The question was taken from a previous study (Scheer et al 2014), in which it sparked similarly diverse opinions. Hence, while the sample does not constitute a random sample, participants appeared to have rather diverse experiences, backgrounds and expectations regarding the subject matter. All participants were randomly allocated to two treatment groups, in which the sequence of scenario report presentation and plausibility judgments was reversed: Group 1 (n= 30) first received the scenario report developed with the method Intuitive Logics (IL) and then the scenario report based on scenarios from the Cross-Impact Balance Analysis (CIB). Group 2 (n= 25) received them in reversed order. Participants with different academic backgrounds were randomly allocated to the two groups in a way that ensured an equal distribution of academic backgrounds in both treatment groups (table 10).

Table 9: Cross-tabulation of academic discipline and gender

Academic Discipline      male   female   Total
social sciences            11       15      26
engineering sciences       27        2      29
Total                      38       17      55

Table 10: Cross-tabulation of academic discipline and treatment groups

Academic Discipline      Group 1 (IL then CIB)   Group 2 (CIB then IL)   Total
social sciences                             13                      13      26
engineering sciences                        17                      12      29
Total                                       30                      25      55


6.3

Differences in scenario plausibility judgments

Analyses of differences are based on judgments of the two individual scenarios in each scenario report as well as an overall plausibility judgment of the two reports as ‘scenario sets’. Plausibility was collected on a five-point Likert scale, ranging from (1) ‘very implausible’ to (5) ‘very plausible’. Proponents of both the IL and the CIB method assume that their respective method performs better in producing ‘plausible scenarios’. Among the theoretical concepts discussed, the assumption that plausibility is conveyed by narratives and storylines stands out. For this reason, hypothesis H1 states that individuals judge the plausibility of the narrative-based IL scenarios higher than that of the matrix-based CIB scenarios. Figure 11 illustrates the relative frequency distributions of all six plausibility judgment variables for each scenario (IL1, IL2, CIB1, CIB2) and the scenario sets (ILSet, CIBSet) in T1. Judgments for all individual scenarios feature a left-skewed distribution with a flat curve (skewness ranges from -.337 for IL2 to -.486 for CIB1) and relatively high plausibility judgments for all scenarios. Except for ILSet, rank 4 (‘rather plausible’) constitutes the mode of the distributions. Differences are evident with respect to the extreme rankings: compared to CIB scenarios, IL scenarios are less often ranked ‘very plausible’ and more often judged ‘very implausible’. Regarding judgments of the overall scenario reports (ILSet, CIBSet), ILSet shows a curve with most participants judging the overall scenario report as ‘neither/nor’, while CIBSet demonstrates a clearly steep curve with ‘rather plausible’ as the mode of the distribution. These tendencies are also reflected in the distribution quartiles. For the individual scenario judgments, the highest 25 percent of IL-based judgments cover plausibility ranks 4 or 5; in contrast, for CIB-related judgments, the highest 25 percent of collected data lie entirely within the highest rank.
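The shape measures used here (mode, sample skewness) can be computed directly from raw Likert ratings. The sketch below uses invented ratings, not the study data; it only illustrates how a negative skewness value signals mass piled on the high, ‘plausible’ end of the scale.

```python
# Moment-based sample skewness g1 = m3 / m2^(3/2); negative values indicate
# a left-skewed distribution. Ratings below are invented for illustration.

def skewness(values):
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n   # second central moment
    m3 = sum((v - mean) ** 3 for v in values) / n   # third central moment
    return m3 / m2 ** 1.5

def mode(values):
    return max(set(values), key=values.count)

ratings = [5, 4, 4, 4, 3, 2]   # invented 5-point plausibility ratings
print(mode(ratings))           # 4 ('rather plausible')
print(skewness(ratings) < 0)   # True: left-skewed
```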
Figure 11: Relative frequency distributions of scenario plausibility judgments (T1)

The boxplot (figure 12) further underlines these tendencies. It visualises that what denotes the median (rank 4) for the individual CIB-related judgments already represents the upper quartile for IL-based judgments. It also shows that judgments of scenarios IL2, CIB1 and CIB2 are rather widespread; the respective whiskers indicate that data exists above and below the middle 50 percent. Very few outliers at the lower end denote judgments more than 1.5 interquartile ranges below the lower quartile. The descriptive differences are also supported by a Wilcoxon test that directly compares plausibility judgments of all individual scenarios and scenario sets (table 11). It confirms that plausibility judgments for all CIB-related scenarios are significantly higher than for all IL-related scenarios. For instance, for judgments of scenario CIB2 as compared to IL2, the asymptotic Wilcoxon test (z= -3.106, p= .002, n= 55) shows that of the 55 ranked data pairs, 29 ranks are [CIB2 > IL2] and only 10 ranks are [CIB2 < IL2]. The effect size according to Cohen (1992) is r= .42 and constitutes a strong effect. Strong effects are also evident for comparisons of the scenario sets and a cross-comparison of scenarios IL2 and CIB1.
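The effect sizes reported here follow the common conversion r = |z| / √n for rank-based tests. A quick check with the statistics reported above (z = -3.106, n = 55) reproduces the stated value:

```python
import math

# Effect size r for a Wilcoxon/Mann-Whitney z-statistic: r = |z| / sqrt(n).
def effect_size_r(z, n):
    return abs(z) / math.sqrt(n)

r = effect_size_r(-3.106, 55)
print(round(r, 2))  # 0.42, matching the reported strong effect
```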


Figure 12: Boxplot of scenario plausibility judgments

Table 11: Plausibility judgments across scenario formats (asymptotic Wilcoxon-Test)

Analyses of differences must also consider possible sequence effects. Descriptive comparisons of the treatment groups reveal that participants who received the CIB report after having already read the IL scenarios (treatment group 1) more often rated the overall CIBSet as ‘very plausible’ (36.6 %), while participants who encountered the CIB report first (treatment group 2) seemed more reluctant to assign the scenario set the highest rank (8 %). Individual CIB scenarios display the same tendencies. Mann-Whitney U tests confirm these observations and expose some significant differences between the two treatment groups. The relatively better performance of CIB scenarios when judged after the IL-based scenarios is indicated by weak effects, e.g. for plausibility judgments of CIBSet (asymptotic Mann-Whitney U test: z= -1.713; p= .087; r= .23). Corresponding tendencies are evident for IL-related scenarios, which were ranked higher when they were encountered as the first scenario report (treatment group 1). While this observation shows no meaningful or only weak effects for T1 judgments, the effects become evident and statistically significant for T2 plausibility judgments (table 12). For scenario IL1, for instance, T2 plausibility judgments are higher when the IL report was read first than when individuals first judged the CIB report; the asymptotic Mann-Whitney U test (z= -2.003; p= .045) shows a moderate effect (r= .27). The statistical tests also reveal effects of the treatment sequence on other variables tested in the study. The desirability of scenarios is assessed differently by participants from treatment groups 1 and 2. For the scenario CIB1, for instance, its desirability is judged significantly higher when participants had encountered the IL-based scenario report before (group 1). With r= .57, this represents a strong effect. One explanation for this observation is that at the time of the assessment of CIB1, participants had already read other scenarios depicting considerably different worlds, against which the given scenario may have appeared more desirable. The observed sequence effects across different variables show that for their respective second judgments, participants had a direct reference at hand, i.e.
the first material and their assessment of it could be used as benchmarks for the second judgment. Overall, in the direct comparison between IL and CIB, most participants were more convinced by the CIB-related scenarios. The use of previous judgments as benchmarks for subsequent tasks is a common observation in experimental studies and has been discussed extensively from different angles. While some authors see it simply as a result of individuals’ attention span, others attribute it to participants’ automatic desire to avoid cognitive dissonance (Egner et al 2010). Experimental psychologists researching individuals’ reactions to perceptual stimuli, in contrast, argue that when experimental participants review only a few stimuli, they have difficulties putting the stimuli into context against an absolute magnitude (Stewart et al 2002). Consequently, first judgments tend to serve as anchors


for further ones. Similar explanations have been found specifically for experimental reading material (Maier & Richter 2012; McNamara 2001) and seem applicable to the effects found in this study. Although the observed sequence effects must be attributed at least in part to the experimental design, they have practical relevance for the context of scenario plausibility. In practice, scenario users are often confronted with multiple scenarios that may be very different from and contradictory to each other. Scenario recipients need to make multiple comparative judgments, and thus comparable effects may occur.

Table 12: Effects of sequence of treatment on plausibility judgments

Summary H1: The data presents evidence to reject the underlying null hypothesis that there are no significant differences in the plausibility judgments of IL-related and CIB-related scenarios. Supported by descriptive measures of central tendency in the data, asymptotic Wilcoxon tests suggest that there are significant differences for all comparisons of plausibility judgments across scenario formats.


Yet, the data does not support hypothesis H1, which holds that narrative-based scenarios better convey plausibility. In fact, the statistical analyses point towards the contrary: Not the narrative-based IL scenarios, but the matrix-based CIB scenarios are judged significantly more plausible. For this reason, H1 is not accepted, but an alternative H1_Alt is derived from the data. While this runs counter to concepts from narrative theory, it supports key arguments expressed by CIB advocates (Lloyd & Schweizer 2013). Certainly, every scenario depicts a unique future world, and hence the content of the scenarios could also have contributed to the observed differences. However, as described in chapter 5.2, both scenario reports are based on the same uncertainty factors and differ in their interpretation and processing of the data.

Plausibility judgments over time (T1-T2)

Are participants’ plausibility judgments stable, or do they change during closer engagement with the scenarios? Propositions from the theoretical concepts hold that plausibility judgments can change after participants have engaged more critically with the scenarios. Specifically, Lombardi et al (2013) argue that in instant judgments, individuals often are not sufficiently critical or reflective in their plausibility assessments. Also, Grunwald (2015) suggests that analysing scenarios with regard to their argumentative testability (‘Argumentative Prüfbarkeit’) is vital for a thorough assessment of scenarios. Hypothesis H2 therefore states that participants change their plausibility judgments of IL- and CIB-based scenarios after conducting critical evaluations of the scenarios. For this hypothesis, plausibility judgments from T1 (immediately after reading the individual scenarios) and T2 (at the end of the experiment) are considered.

Figure 13: Differences in scenario plausibility judgments between T1 and T2

Figure 13 visualises the discrepancies between T1 and T2 judgments. It shows that most participants did not change their initial plausibility judgments. The direction in which the plausibility judgments change is worth noting. Lombardi et al (2013) suggest a clear direction: Critical evaluation would trigger participants to rank stories about climate change higher than in the first judgment. This tendency is not found for scenario contexts. On the contrary, further engagement with the scenarios rather prompted participants to reconsider and downgrade what they had previously considered plausible. Participants who changed their judgments mostly downgraded the plausibility of IL scenarios; only 14.8, 18.2 and 16.4 percent (for IL1, IL2 and ILSet) upgraded the scenarios by one or two ranks. The figure demonstrates similar tendencies for CIB-related scenarios. Also, the results of asymptotic Wilcoxon tests support a significant downgrade of plausibility across the scenario formats. Except for scenario IL2, weak to moderate effects are found in the change of plausibility judgments over time. For instance, the plausibility judgment for scenario IL1 in T1 is significantly higher than in T2; the asymptotic Wilcoxon test (z= -2.251, p= .024, n= 55) shows a moderate effect (r= .30). Definite reasons for these tendencies cannot be derived from the statistical analysis. Yet, since such engagement

6 Experimental study: quantitative research findings

was structured in the form of a critical evaluation of scenarios – as suggested by several scenario theorists – this can explain the downgrading. At the same time, longstanding experimental psychology holds that adjustments of judgments in experiments often result from the artificial, experimental situation (Hogarth & Einhorn 1992). Several well-known effects call for a cautious interpretation: First, the relatively little time difference between the two judgments; second, the fact that participants knew they were observed; and third, the at-length discussed assumption that study participants tend to be reluctant to change their original assessment, for instance because they wish to ensure consistency between their judgments (Festinger 1957; Jermias 2001; Kahneman & Klein 2009; Mayer & Hanson 1995). What can be concluded is that plausibility judgments seem to be sensitive towards the time and context of retrieval. Summary H2: The data presents evidence to reject the underlying null hypothesis that there exist no significant differences in plausibility judgments over time (T1-T2). While most participants did not change their initial judgments, descriptive statistics and the Wilcoxon-Test reveal that overall initial plausibility judgments (T1) are significantly higher than judgments in T2. The analysis presents a mixed picture of plausibility changes for the two scenario formats. While the tendency to downgrade plausibility is evident for all scenarios, judgments for ILrelated scenarios differ in the strength of the effect. For instance, the asymptotic Wilcoxon-Test for changes in scenario IL1 (z=-2.251; p= .024; n= 55) shows a moderate effect (r= .30), the same test reveals no effect for scenario IL2 (z=-.541; p= .589; n= 55; r= .07). For the CIB-related scenarios, the Wilcoxon-Test shows a constant weak effect in downgrading T1 plausibility judgments. 
For this reason, hypothesis H2 is accepted only with caution, not least because longstanding cognitive research identifies very different reasons for why study participants do or do not change initial judgments. Further research therefore needs to re-evaluate the hypothesis with a longer time interval between judgments and larger sample sizes, to see whether the observed effects increase or disappear.
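The effect sizes reported for the Wilcoxon tests follow the common convention r = |z| / √n (with roughly .1/.3/.5 as benchmarks for weak, moderate and strong effects). A minimal sketch reproducing the reported values:

```python
import math

def wilcoxon_effect_size(z: float, n: int) -> float:
    """Effect size r = |z| / sqrt(n) for an asymptotic Wilcoxon signed-rank test."""
    return abs(z) / math.sqrt(n)

# Test statistics reported for the T1-T2 comparison (n = 55):
r_il1 = wilcoxon_effect_size(-2.251, 55)  # scenario IL1: ~0.30, a moderate effect
r_il2 = wilcoxon_effect_size(-0.541, 55)  # scenario IL2: ~0.07, no meaningful effect
```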

Plausibility judgments and disciplinary backgrounds

With their different scenario development methods, the two reports represent different ways of approaching uncertainty and of structuring and presenting insights. In the scenario literature, individuals’ background knowledge is regularly mentioned as a relevant factor for the perception of scenarios (Bolger & Wright 2017). For instance, due to the resemblance of CIB to model-based scenario development, it can be assumed that those familiar with the logic of models are more receptive to those scenarios. Consequently, hypothesis H3 holds that individuals with a disciplinary background in engineering judge the plausibility of CIB-based scenarios higher than that of IL-based scenarios. The participants with backgrounds in the social sciences and in the engineering sciences present an interesting comparison, because the two disciplines familiarise students with different approaches towards (scientific) knowledge (Hachmeister et al 2016).

Analyses of the effects of participants’ disciplinary backgrounds on the scenario plausibility judgments, using the Mann-Whitney U statistic, show a rather mixed picture. The group of participants with an academic background in the social sciences overall shows higher plausibility judgments across the scenario formats than participants with a background in engineering. These differences, however, yield merely weak effects (r ≤ .24). Descriptive measures support the tendencies, as documented in the boxplot (figure 14). Contrary to initial assumptions, participants with a background in the social sciences seem to be slightly more attracted to CIB scenarios than participants with an engineering background. At the same time, for T2 judgments of scenario IL2, participants with an engineering background judge the plausibility somewhat higher (asymptotic Mann-Whitney U test: z = -1.635; p = .102; r = .22) than participants with a background in the social sciences. The exact same tendency is observable for CIB2, and is further supported by looking at the change in the plausibility judgment of CIB2 (T1-T2): after the critical evaluation, social science participants were more critical towards the narrative scenario.
The observed differences are only minor indications and do not suffice to confidently support the hypothesis. Furthermore, as explained at the beginning of this chapter, the variables academic discipline and gender are correlated, with significantly more male students enrolled in the engineering sciences. Hence, it cannot be ruled out that the observed effects are the result of gender differences.


Figure 14: Boxplot showing scenario plausibility (CIB1) by disciplinary backgrounds

Summary H3: The data presents only mixed evidence for the effects of participants’ academic discipline on scenario plausibility judgments. The group of participants with a background in the social sciences exhibits higher judgments across the two scenario formats compared to participants with a background in engineering. Yet these differences yield merely weak effects, and reverse effects are observable. Overall, the data cautiously suggests that the format participants are less familiar with from their academic background tends to receive comparatively higher plausibility judgments. One explanation can be that participants are more aware of the flaws and limitations of the more familiar presentation format and are thus more critical of it. Yet the data does not present enough evidence to fully reject the underlying null hypothesis and to accept H3. Further research needs to investigate possible underlying dynamics carefully, i.e. whether and how exactly engineering and social science students differ in their judgments.

6.4 Plausibility, credibility and trustworthiness

The explored theoretical concepts denote ‘credibility’ as a key factor influencing individuals’ plausibility assessments. Following research proposition 2, two different notions of credibility are explored in the context of scenario plausibility: the credibility, as trustworthiness, of the scenario itself, and the credibility of the source of the scenarios. Both concepts are judged on a 5-point Likert scale for all individual scenarios and the two sets. The corresponding hypothesis H4 holds that scenario plausibility judgments are positively related to the perceived trustworthiness of the scenario itself and to source credibility.

Descriptive analyses of the present data show that the trustworthiness of the scenarios is positively correlated with the plausibility judgments of several scenarios: the more trustworthy a scenario is perceived to be, the more plausible the participant judges it, as visualised by the jitter plot (figure 15). These observations are supported by bivariate analyses using Goodman and Kruskal’s Gamma (table 13). The tests allow the conclusion that, by knowing the relationship between the two variables, the errors in predicting plausibility judgments can be proportionally reduced by as much as 64 percent (for ILSet) and 77 percent (for CIBSet). For the individual scenario judgments, bivariate analyses show mixed results: while moderate to strong positive relationships exist between trustworthiness and the respective first scenarios of the reports (IL1, CIB1), the statistical tests reveal no meaningful relationship with the second scenarios. One explanation for this interesting phenomenon could be that, for the first scenario of the respective report, trustworthiness served as a helpful anchor for participants in making their judgments. For the second scenarios, this anchor was not as present, because the first scenario could now be used as a benchmark.
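Goodman and Kruskal’s Gamma, the association measure used here, is computed from concordant and discordant pairs of observations and has a direct proportional-reduction-in-error reading. A minimal sketch with hypothetical 5-point ratings (illustrative values, not the study’s data):

```python
from itertools import combinations

def goodman_kruskal_gamma(xs, ys):
    """Gamma = (C - D) / (C + D), where C and D count concordant and
    discordant pairs of two ordinal variables; tied pairs are ignored."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical trustworthiness and plausibility ratings for eight participants:
trust = [1, 2, 2, 3, 4, 5, 5, 3]
plaus = [1, 1, 3, 2, 4, 5, 4, 3]
g = goodman_kruskal_gamma(trust, plaus)  # close to +1: strong positive association
```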
This explanation is backed up by psychological research on order and sequence effects (Stewart et al 2002). For the variable expertise (X2), the data also hints at a positive correlation with plausibility judgments, albeit less conclusively. Participants’ perceptions of expertise show a strong relationship with judgments of the CIBSet, to the extent that errors in predicting the plausibility judgments can be proportionally reduced by 52 percent. For the other variables, however, the relationships

are merely weak or even non-observable. The reasons for this inconsistent picture can be manifold; one could be that participants did not know the scenario developers in person but only read a general description of the ‘involved experts’, and so may have had problems assessing this factor. Another interesting explanation can be found in the social psychology literature: in their ‘Salient Values Similarity Theory’, Earle & Cvetkovich (1995) have demonstrated that the more a statement corresponds with an individual’s own beliefs, the higher the individual assesses the expertise of the statement’s authors. Applied to this study, this means that expertise is very much a mental construction of participants that can change from scenario to scenario. As is evident from further bivariate analyses, the variables expertise (X2) and match of a scenario with own ideas (X5) are, in fact, positively correlated for several of the tested scenarios.

Figure 15: Jitter plots for trustworthiness and scenario plausibility judgments in T1

Table 13: Bivariate correlations of tested factors and scenario plausibility in T1 (H4)

Logistic regression analyses further support hypothesis H4: according to the fitted data, the likelihood that an individual judges a scenario as plausible is related to the individual’s perception of the trustworthiness of the scenario, but also to the credibility of the scenario’s source. The outcome variables (plausibility judgments YIL1, YIL2, YILSet, YCIB1, YCIB2, YCIBSet) were individuals judging a scenario as plausible (1 = yes, 0 = no) [5], and the two predictors were individuals’ perceptions of trustworthiness (X1) and expertise (X2) [6]. One-predictor logistic models show that the variable trustworthiness constitutes a powerful predictor of plausibility judgments for both scenario formats. For the outcome variable plausibility judgment of scenario set IL (YILSet), the resulting

[5] The original scales had to be recoded into dummy variables: the original plausibility ranks 1-3 were recoded into 0 (= not plausible) and the ranks 4-5 into 1 (= plausible).

[6] The two variables could have been combined using a factor analysis. While this can be an option for future empirical research, it was not done here because combining variables reduces the complexity and detail of the given data. Following the exploratory nature of the research, the variables are kept in their original form to explore which specific variables may induce what influence.

Figure 16: Plausibility model curve with ‘trustworthiness’ as predictor

model [7] suggests that the log of the odds of an individual judging the IL scenario set as plausible is positively related to perceptions of trustworthiness (p = .002). Put differently, the odds of the scenario set being judged plausible are 3.887 times higher when an individual perceives the scenario set as trustworthy than when it is not perceived as trustworthy. Figure 16 shows that the relationship is almost linear. The predictive power of the variable and the validity of the model itself are supported by several statistical back tests [8], which show, among other things, high sensitivities and specificities of the model, so that

[7] Predicted logit = -4.2671 + (1.3577) × TRUSTWORTHINESS.

[8] Four groups of statistical tests were applied to evaluate the effectiveness and soundness of the models: statistical tests of individual predictors, overall model evaluation, goodness-of-fit statistics, and validation of predicted probabilities; see Peng et al (2010:5-8).

with the model, 71.1 percent fewer errors can be made in predicting which of two random individuals judged the scenario set IL as plausible, compared to estimating it by chance. Next to this model, three other plausibility models with the predictor trustworthiness are highly statistically significant and point in a similar direction. One significant plausibility model with the predictor expertise could be discerned. It provides an effective prediction of plausibility (CIBSet) in that the higher an individual perceives the scenario set’s source expertise to be, the more likely it is that the scenario set is judged as plausible. In fact, with this model, 77.9 percent fewer errors can be made in predicting which of two random individuals judged the scenario set CIB as plausible, compared to estimating it by chance. Certainly, further research is needed to investigate the steadiness and robustness of the relationship between plausibility and trustworthiness as well as perceived author expertise. Repeated studies with larger sample sizes should investigate whether the observed effects reinforce or diminish. Yet, overall, the conclusion can be drawn that contextual interpretations of a scenario – and thereby the trustworthiness of the scenario itself, rather than the credibility of the scenario’s source – appear to be an informative factor for understanding scenario plausibility judgments.

Summary H4: The null hypothesis that the variables trustworthiness and expertise are not related to scenario plausibility judgments can be rejected with reference to bivariate correlations and logistic regression analyses. Both the perceived trustworthiness of the scenario itself and the perceived expertise of the scenario’s source appear to be positively related to plausibility, however to a different extent.
For the individual scenarios, bivariate analyses between plausibility judgments and trustworthiness range from rather strong relations for the respective first scenarios of the reports (IL1: γ = .46 / CIB1: γ = .35) to no observable relations for the partner scenarios (IL2: γ = .05 / CIB2: γ = .07). Because these tendencies are observable for both scenario types, the robustness of the relationship between plausibility and trustworthiness (X1) needs to be questioned and investigated in further research. The assumption that scenario trustworthiness serves as an anchor for plausibility judgments is also substantiated by the judgments of the overall scenario sets: for both scenario sets (ILSet, CIBSet), the correlation coefficients show very strong effects. This is further supported by the performed logistic regressions, which reveal powerful prediction models across


the two scenario formats, the most powerful being those with the plausibility judgments of the scenario sets as outcome variable. All models constitute a significant improvement over the intercept-only model and are good fits when assessed against actual outcomes. This can be interpreted as sound evidence to accept hypothesis H4. Overall, contextual interpretations may play a particularly important role, presumably because judging the very detailed and complex scenarios can be difficult and time-consuming. For the perceived expertise of the scenario source (X2), the data also cautiously hints at a positive relation with plausibility judgments. Participants’ perceptions of expertise show a strong association with judgments of the CIB set (YCIBSet), and a corresponding logistic model indicates that 30.6 percent of the variance in the outcome variable can be explained by expertise. However, for the other scenario plausibility judgments, merely weak or even non-observable associations can be demonstrated. This inconsistent pattern is worth noting, also with respect to the theoretical notions of plausibility. Both normative and descriptive models of plausibility have maintained that, when making assessments, individuals draw their own conclusions about a statement’s source or author. Majone (1989), for instance, maintains that in argumentative discourse analysis, a policy analyst’s perceived authority plays a major role. At the same time, Abbott (2002) argues that in cases where the reader of a narrative does not know the author, s/he still posits an ‘implied author’. This notion appears tenable for the observed patterns.
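The reported one-predictor model (footnote 7: predicted logit = -4.2671 + 1.3577 × trustworthiness) can be turned into odds and probabilities directly. A small sketch over the reported coefficients; the probability values it produces are derived here for illustration, not figures from the study:

```python
import math

INTERCEPT, SLOPE = -4.2671, 1.3577  # coefficients reported for YILSet

def predicted_probability(trustworthiness: float) -> float:
    """P(scenario set IL is judged plausible) from the fitted logit."""
    logit = INTERCEPT + SLOPE * trustworthiness
    return 1.0 / (1.0 + math.exp(-logit))

# A one-unit increase in trustworthiness multiplies the odds by exp(slope):
odds_ratio = math.exp(SLOPE)  # ~3.89, the factor reported in the text
```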

6.5 Plausibility, participants’ own beliefs and perceptions of data

Conceptual plausibility models from cognitive and educational psychology see great relevance in ‘concept coherence’. Following this notion, “some concept, scenario, or discourse is plausible if it is conceptually consistent with what is known to have occurred in the past” (Connell & Keane 2004:96). These assumptions have been established in research proposition 3 and are operationalised in the variables speculation (X4), match with own ideas (X5), oil price as factor (X6) and desirability of the scenario (X11). The corresponding hypothesis H5 holds that a scenario is judged plausible if it corresponds with the individual’s own beliefs and expectations about the subject matter.

In the experiment, participants were asked to what extent the scenarios reflect their own ideas and expectations about energy transformation processes (data collected on a 5-point Likert scale). The variable match with own ideas (X5) shows strong positive relationships with the plausibility judgments of all scenarios. The respective gamma values indicate that, by knowing the relationship between the two variables, errors in predicting plausibility can be reduced significantly – for scenario IL1 by as much as 68 percent. For IL-related scenarios, the bivariate relations are slightly stronger than for CIB-related scenarios (table 14).

A second, related variable was derived from the Plausibility Analysis Model (PAM) of Connell & Keane (2004, 2006). According to PAM, plausibility is determined by whether individuals perceive that the scenario entails “sources of conjectures” or speculations. For all IL-based scenario judgments, the variable speculations (X4) is negatively correlated with plausibility; i.e. the more speculative or untenable the participants believed the scenario to be, the less plausible it was judged. Notably, the statistical measures show a relevant strength of the relationships only for IL-based scenarios. One reason why this relationship is less conclusive for the CIB scenarios could be that participants found it more difficult to clearly discern sources of speculation in the more complex CIB scenarios – participants did, indeed, assess those scenarios as significantly more complex (see later in this chapter). For the IL-based scenarios, in contrast, it could be assumed that the storylines allowed participants to pick up more easily on individual sentences that they perceived as speculative.

As a further variable, scenario desirability (X11) was tested. Scenario theorists have conceptually connected plausibility judgments with what an individual perceives as desirable. The results of the statistical measures show both weak positive and weak negative relations with plausibility judgments (table 14). This inconsistent picture is underlined by participants’ qualitative statements (see chapter 7.3).
Here, some participants explicitly mention the desirability of a scenario as a reason for plausibility, while others deem a scenario implausible despite its desirability. In fact, both arguments are understandable. The former case underscores the assumptions of H5, because it indicates that a scenario is judged plausible if it illustrates future developments that the individual perceives as important or desirable. In the latter case, the scenario’s desirability may still play a role, but may be overruled by other considerations.

Table 14: Bivariate correlations of tested factors and scenario plausibility in T1 (re: H5)

The variables discussed above were collected with only a short time difference to the plausibility judgments. In the pre-questionnaire (two weeks before the plausibility judgments), participants were asked about their own ideas and expectations regarding the energy system transformation in Germany. From a list of eight factors, they were asked to select the three that they perceived as critical for a success of the transformation [9]. Statistical measures indicate strong, positive bivariate relationships between scenario plausibility judgments and participants’ choice of success factors (table 15). For instance, those participants who declared the development of the oil price a success factor judged the plausibility of CIB1 higher than those who did not name this factor (F.E. = 8.44; p = .033). In fact, when looking at the content of this scenario, the relevance of macro-economic developments, along with the development of the oil price, becomes apparent [10]. The same tendency is evident for the positive relationship between the choice of ‘acceptance by citizens’ as a success factor and the plausibility judgments of scenario ILSet (F.E. = 9.00; p = .037), whereby this scenario report emphasises more strongly the role of citizens in shaping the energy transformations. Despite the statistical evidence of several bivariate relations between plausibility judgments of scenarios and participants’ choice of success factors, the small sample size and the resulting low expected frequencies in the cells call

[9] The eight factors correspond to the eight key uncertainty factors that were used to develop the two scenario reports as the experiment’s treatment material.

[10] For an overview of the scenarios’ content, including CIB1, consult table 7.


Table 15: Bivariate correlations of success factors and scenario plausibility in T1 (H5)

for a cautious interpretation. Also, the fragmented pattern of relationships across success factors and scenario judgments urges further empirical research with a larger sample size. Nonetheless, the observations at least partly back up hypothesis H5, i.e. that plausibility judgments are related to what participants know and expect about the future. The fact that the success factors were collected with a significant time difference to the plausibility judgments lends the argument additional weight: answers driven by social desirability or by the avoidance of cognitive dissonance become less likely.

The notion that plausibility assessment is associated with individuals’ conceptual coherence is often discussed rather broadly in theory. For instance, conceptual coherence is argued to involve the extent to which a scenario is a very “complex explanation” (Connell & Keane 2004:99): the more complex a scenario, the less plausible it is. In this study, the variable complexity (X3) is consulted to test hypothesis H8, which holds that when judging a scenario, individuals pick up on the scenario’s components, their (causal) relations and their complexity. Interestingly, bivariate analyses point towards a counterintuitive relationship between plausibility and complexity (table 16). Gamma exhibits a weak positive relation for three of the four tested individual scenarios, and cautiously suggests that the more complex an individual


Table 16: Bivariate correlations of tested factors and scenario plausibility in T1 (H5 & H8)

perceives a scenario, the more plausible it is judged. Several theoretical models also assume that the internal consistency of scenarios, i.e. the absence of any contradictory assumptions within one scenario, plays a relevant role for scenario plausibility. This argument is most prominently put forth by CIB advocates (Lloyd & Schweizer 2013; Weimer-Jehle et al 2016, 2020). Indeed, the tested variable internal consistency (X10) shows positive relations with plausibility judgments and exhibits moderate to very strong effects. Note that from a philosophy of science and informal logic perspective (for instance Rescher [1976] or Walton [1992a]), internal consistency constitutes a formal testing procedure. With the focus of this study on participants’ perceptions, however, the variable expresses whether an individual believes a scenario is internally consistent or not; this does not necessarily correspond with formal testing. In fact, the results indicate that although the CIB method features formal measures to ensure internal consistency and the IL method does not, participants did not reflect this in their subjective assessments: the bivariate correlations are even stronger for IL than for CIB scenarios.

The variables discussed above were scrutinised as predictor variables for plausibility judgments using logistic regressions. The models show that the likelihood that an individual judges a scenario as plausible is strongly related to whether the individual sees her/his own ideas reflected in the scenarios. Particularly, the variable match with own ideas (X5) constitutes a powerful predictor of plausibility judgments for both scenario formats. A two-predictor logistic model was fitted to the data to illustrate the resulting dynamics [11]. The model incorporates the predictors oil price as success factor (X6) and match with own ideas (X5). Both variables are powerful predictors of the plausibility judgments of scenario CIB1 (YCIB1) [12]. The proposed model suggests that the log of the odds of an individual judging scenario CIB1 as plausible is positively related to match with own ideas (p = .019) and to oil price as success factor (p = .019). In fact, the odds of scenario CIB1 being judged plausible are 7.38 times higher when an individual’s beliefs correspond with the scenario, holding the selection of the oil price as success factor constant. The Likelihood Ratio Test indicates that the model fits the data better than the intercept-only model. These tendencies are also reflected in the change in deviance between the null deviance and the residual deviance of the model: of the 15.88 reduction in deviance, almost equal amounts are contributed by the predictors match with own ideas (X5) (8.87) and oil price as success factor (X6) (7.02). While this regression model needs to be interpreted carefully, it points towards the potential meaningfulness of individuals’ own worldviews and ideas in understanding plausibility judgments. Further research needs to scrutinise and potentially adjust these models. The empirical findings thereby not only support notions of the discussed theoretical concepts of plausibility but are also in line with longstanding cognitive psychology research, most notably dissonance theory.
Theories of cognitive dissonance hold that individuals tend to believe and favour information that corresponds with their own views, while they discredit or ignore statements that contradict them (Cooper 2007; Koehler 1993; Mahoney 1977; Sanbonmatsu et al 1993; Swann & Read 1981).
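Two quick arithmetic checks on the reported two-predictor model (a sketch over the reported figures, not a re-analysis): the odds factor of 7.38 corresponds to a logit coefficient of about 2.0 via the exponential, and the deviance reductions attributed to the two predictors should add up, within rounding, to the total reduction:

```python
import math

# Odds factor <-> logit coefficient correspondence for logistic models:
coefficient = math.log(7.38)   # ~2.0 (cf. the 1.9989 coefficient in footnote 12)
odds_factor = math.exp(1.9989)  # ~7.38, the factor reported in the text

# Deviance reductions reported for the two predictors of YCIB1:
reduction_match = 8.87   # match with own ideas (X5)
reduction_oil = 7.02     # oil price as success factor (X6)
total_reduction = 15.88  # null deviance minus residual deviance
```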

[11] For binary models, not the entire sample size but the limiting sample size is relevant for determining the maximum appropriate number of predictors in one model to avoid over-fitting. For the present models, at most two predictors can be used for the outcome variables IL1 and CIB2.

[12] Predicted logit = -3.8220 + (-0.1.18277218) × MATCH WITH OWN IDEAS + (1.9989) × OIL PRICE AS FACTOR.


Summary H5 and H8: The data provides clear evidence to reject the null hypothesis that scenario plausibility judgments are not associated with conceptual coherence. Strong to very strong positive relationships are found between plausibility and the degree to which participants see their own ideas and knowledge supported by the scenario. For the plausibility of scenario IL1, for instance, knowing the relationship with match with own ideas (X5) allows the errors in predicting plausibility to be proportionally reduced by 68 percent. In this context, the degree of speculation (X4), derived from the cognitive PAM model, was tested as a related variable for this hypothesis. For all scenario judgments, this variable is negatively correlated with plausibility; i.e. the more speculative or untenable the participants believed the scenario to be, the less plausible it was judged. Notably, the statistical measures show a relevant strength of the relationships only for IL-based scenarios. One reason why this relationship pertains to IL-based scenarios could be that the storylines let participants pick up more easily on individual sentences that they perceive as speculative. The statistical results also indicate some strong, positive bivariate relationships between scenario plausibility judgments and participants’ choice of success factors for the energy transformation in Germany. Logistic regressions further specify such variables as very powerful predictors of plausibility judgments. For example, the odds of scenario CIB1 being judged plausible are 7.38 times higher when an individual believed the development of the oil price to be a success factor and when the scenario matched her/his own ideas. Although the sample sizes are small and the statistical effects need to be interpreted cautiously, the results allow hypothesis H5 to be accepted.
Some of the discussed theoretical concepts associate the degree of complexity with individuals’ conceptual coherence (for instance the PAM model): the more complex a scenario, the less plausible it is, so the assumed relationship goes. Interestingly, the bivariate analyses here point towards a counterintuitive relationship between plausibility and complexity for both scenario formats. Weak positive relations for three of the four tested individual scenarios cautiously suggest that the more complex an individual perceives a scenario to be, the more plausible it is judged. In this context, the linkage between the internal consistency of scenarios, i.e. the absence of any contradictory assumptions within one scenario, and scenario plausibility was confirmed. Indeed, the tested variable internal consistency (X10) shows positive relations with plausibility judgments, exhibiting moderate to very strong effects. While H8 is predominantly investigated using qualitative findings, the data here presents some indications in support of this hypothesis (see further discussion in chapter 7.2).
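The formal internal-consistency check that the CIB method features can be illustrated with a toy sketch: a scenario is a choice of one state per factor, and it counts as internally consistent when no factor’s chosen state is outperformed, in terms of the impact-score sum it receives from the other chosen states, by an alternative state of the same factor. The matrix, factors and states below are invented for illustration and simplified relative to the actual method:

```python
from typing import Dict, Tuple

# Hypothetical cross-impact matrix:
# CIM[(source_factor, source_state)][(target_factor, target_state)] = impact score.
CIM: Dict[Tuple[str, str], Dict[Tuple[str, str], int]] = {
    ("oil_price", "high"): {("economy", "growth"): -2, ("economy", "stagnation"): 2},
    ("oil_price", "low"):  {("economy", "growth"): 2,  ("economy", "stagnation"): -2},
    ("economy", "growth"): {("oil_price", "high"): -1, ("oil_price", "low"): 1},
    ("economy", "stagnation"): {("oil_price", "high"): 1, ("oil_price", "low"): -1},
}

STATES = {"oil_price": ["high", "low"], "economy": ["growth", "stagnation"]}

def is_consistent(scenario: Dict[str, str]) -> bool:
    """CIB-style check: every chosen state must receive an impact-score sum
    from the other chosen states that no alternative state of its factor exceeds."""
    for factor, chosen in scenario.items():
        def impact_sum(state: str) -> int:
            return sum(
                CIM[(other, scenario[other])].get((factor, state), 0)
                for other in scenario if other != factor
            )
        if any(impact_sum(alt) > impact_sum(chosen) for alt in STATES[factor]):
            return False
    return True

# Mutually reinforcing states pass the check; contradictory ones do not:
consistent = is_consistent({"oil_price": "low", "economy": "growth"})
inconsistent = is_consistent({"oil_price": "high", "economy": "growth"})
```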

6.6 Plausibility, participants’ cognitive styles and heuristics

Longstanding research in psychology and behavioural economics suggests that when individuals are confronted with decision-making tasks under uncertainty, their cognitive styles, i.e. the way they approach and process data, play an important role (Kahneman & Klein 2009; Kahneman & Tversky 1984; Tversky & Kahneman 1974). While many empirical studies have focused on the distinction between intuitive and analytical thinking, the concept of ‘need for cognitive closure’ (NCC) is increasingly established and operationalised to reveal how individuals approach decisions under uncertainty (Webster & Kruglanski 1994). In the pre-questionnaire of this study, 16 items related to NCC were collected, because they capture highly relevant aspects of scenario judgments, e.g. the handling of uncertainty or the comfort with discussing inconclusive, unresolvable issues. Participants were asked to rate statements from (1) ‘don’t agree’ to (5) ‘fully agree’.

Statistical analyses reveal that four items of the variable cognitive closure (X8) are negatively related to scenario plausibility judgments (table 17). Interestingly, meaningful relationships are observed only between NCC items and CIB-related scenarios. In particular, the relationship between item NCC15, on avoiding participation in inconclusive and controversial topics, and the plausibility judgments for CIB-related scenarios stands out: participants who disagreed with the statement ranked the CIB scenarios as rather plausible. For CIBSet in particular, knowing the relationship with NCC15 allows errors in prediction to be proportionally reduced by 54 percent. The three other items show significant relationships with the plausibility judgments of the CIB scenario set. All three observations show similar tendencies to item NCC15: low ratings, i.e. disagreement with the statements, suggest a degree of cognitive openness.
Participants are more open to new perspectives and do not feel the need to rush towards conclusions; these participants tend to rank the plausibility of the scenarios higher. In turn, participants with a stronger need for cognitive closure (higher agreement with the statements) tend to judge CIBSet as rather implausible (figure 17). An explanation for this finding can be that higher plausibility judgments

6 Experimental study: quantitative research findings

Table 17: Bivariate correlations of NCC-items and scenario plausibility in T1 (H6)

of scenarios mean that participants seriously consider one or multiple scenarios for further discussion; they do not feel the need to discredit or even ignore them as implausible. This is an interesting parallel to the scenario literature, where low plausibility of scenarios is often embraced as the beginning of an open conversation about scenarios, whereas high plausibility is thought to close down discussions about the future too early (Ramírez & Selin 2014). Relating to the discussion of cognitive styles, educational psychologists (Lombardi et al 2016a) assume that individuals’ background knowledge of the subject matter is a relevant factor for plausibility. The underlying assumption holds that when individuals do not possess relevant background knowledge, they tend to resort to mental short-cuts and cognitive heuristics. At the end of the experiment, eight items tested participants’ level of knowledge regarding energy systems and technologies. Participants were presented with statements about energy technologies that they needed to mark as ‘incorrect’ (0) or ‘correct’ (1)13 . Most participants answered the questions correctly; 52.7 percent marked six or more statements correctly, 20 percent answered four

13

Examples of the statements include “Energy from biogas is – just as wind energy – an inflexible power source”, or “The overall power consumption in Germany has decreased by 15 percent in the last 10 years”.


The Plausibility of Future Scenarios

or fewer items correctly. Two items – one about biogas as a flexible or inflexible power source and one about pumped-storage plants – reveal moderate to strong relationships with scenario plausibility judgments: participants who did not know the answers to these knowledge items tended to judge the plausibility of the CIB scenarios as rather high.

Figure 17: Jitter plots for items NCC15/ NCC2 and plausibility judgments in T1

Indeed, previous studies have noted the relevance of individuals’ prior knowledge for plausibility. In a study with undergraduate students, Lombardi & Sinatra (2013) showed that the more knowledge the students had about


the distinction between weather and climate, the less plausible they ranked stories in which climate change was conceptualised as man-made. Such straightforward parallels between the kind of background knowledge and the scenario content cannot be drawn here, since the subject matters of the knowledge items were rather broad. Rather, the observations can be interpreted as another indication of the use of mental short-cuts or heuristics: individuals with less background knowledge tend to make judgments about scenarios’ plausibility in the extremes. Due to the small sample sizes and low expected frequencies in the cross-tabulations, however, these results are interpreted as hints towards some potentially interesting patterns.

Some scenario researchers argue that the purpose of scenario activities is to produce ‘interesting research’ (Ramírez et al 2015). Interesting scenarios, so the argument goes, can lead to better productivity and resonance with involved stakeholders. In this context, cognitive and educational psychology concepts of plausibility also emphasise the relevance of individuals’ motivation to engage with new information (Lombardi et al 2015). In the experiment, participants were asked how interesting it was for them to read the individual scenarios (collected on a 5-point Likert scale). Bivariate correlation analyses of the variable interesting read (X7) show only weak to moderate relations with the plausibility judgments of all three CIB-related scenarios. The positive gamma values suggest that participants who find the general format of the CIB-based scenario reporting interesting also tend to rate the plausibility of the scenarios higher (for instance, for CIBSet: γ = .36). Several explanations are possible for this effect. As a frequency distribution of the variable shows, participants were either very attracted to the scenario formats or not at all; this is most strongly reflected for the CIB scenario set.
Even though interestingness and readability are typically attributed to narrative-based formats, no noteworthy relationships are evident for the IL-based scenarios. The variables discussed above were also scrutinised as predictor variables for plausibility judgments using logistic regressions. With the respective outcome variables for plausibility judgments (YIL1, YIL2, YILSet, YCIB1, YCIB2, YCIBSet), the models suggest that the variable cognitive closure (X8) constitutes a moderate predictor of plausibility judgments for CIB-related scenarios only. Hence, the respective models maintain that the likelihood that an individual judges a CIB scenario as plausible is related to the individual’s preparedness to participate in discussions over inconclusive and controversial topics. Specifically, for the outcome variable plausibility judgment scenario CIB Set (YCIBSet),
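The bivariate correlations reported for ordinal variables in this chapter (e.g. γ = .36) are Goodman-Kruskal gamma coefficients. A minimal sketch of how gamma is computed from paired ordinal ratings; the ratings below are invented for illustration, not the study’s data:

```python
def goodman_kruskal_gamma(x, y):
    """Gamma = (C - D) / (C + D), where C and D are the numbers of
    concordant and discordant pairs; tied pairs are ignored."""
    concordant = discordant = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# hypothetical ratings: 'interesting read' (1-5) vs. plausibility (1-5)
interest = [1, 2, 3, 4, 5]
plausible = [1, 3, 2, 5, 4]
print(goodman_kruskal_gamma(interest, plausible))  # 0.6
```

Because ties are discarded, gamma is well suited to short Likert scales, where ties are frequent; it ranges from −1 (perfect discordance) to +1 (perfect concordance).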


the resulting model suggests that the log of the odds of an individual judging the set as plausible is negatively related to cognitive closure (p = .002)14 . In other words, the less cognitive closure an individual indicates, the more likely it is that scenario Set CIB is judged as plausible. Figure 18 shows an almost linear effect. Similar tendencies are evident in the models for scenario CIB1 (YCIB1) and scenario CIB2 (YCIB2). Although cognitive closure is a significant predictor variable in the presented model, the magnitude of this effect is only small to moderate. Compared to the logistic models presented in the previous chapters, these models perform less well when assessed against actual outcomes.

To conclude, an interesting observation is that for all three variables tested for research proposition 4, relationships with plausibility judgments are evident only for the CIB-related scenarios, not for the IL-based scenarios. Furthermore, effects were strongest for judgments of the CIB set, not for the individual CIB scenarios. This observation can be read as further evidence for the use of cognitive heuristics. CIB-related scenarios were perceived as more complex by participants and, from an analytical point of view, included more information in different forms, such as a matrix, numbers, network diagrams and explanatory text. The IL scenario report, in contrast, featured narratives and the two-dimensional scenario axes. This comparatively higher level of complexity and amount of information could explain why cognitive anchors, such as patterns in dealing with controversial material or one’s own interest in the scenario, were more strongly at play for the CIB scenarios.

14

Predicted logit = 3.4920 + (−1.0266) × COGNITIVE CLOSURE
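The fitted equation above maps onto predicted probabilities via the standard logistic transformation. A short illustrative sketch, using the coefficients reported in footnote 14 (the function name and the 1–5 scoring range are our assumptions, matching the NCC rating scale):

```python
import math

# coefficients from the reported model (footnote 14)
INTERCEPT = 3.4920
SLOPE = -1.0266

def p_plausible(cognitive_closure):
    """Predicted probability that scenario Set CIB is judged plausible,
    given a cognitive-closure score (1 = low, 5 = high need for closure)."""
    logit = INTERCEPT + SLOPE * cognitive_closure
    return 1.0 / (1.0 + math.exp(-logit))

# probability falls from about .92 at the lowest closure score
# to about .16 at the highest
for score in range(1, 6):
    print(score, round(p_plausible(score), 2))
```

The steady decline across the scoring range corresponds to the almost linear effect shown in figure 18.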


Figure 18: Plausibility model curve with ‘cognitive closure’ as predictor

Summary H6 and H7: Bivariate analyses reveal several moderate to strong negative relationships between scenario plausibility judgments and participants’ need for cognitive closure. Specifically, participants who disagreed with the statement ‘In general, I avoid participating in discussions over inconclusive and controversial topics’ (NCC-item 15) ranked the CIB-related scenarios as rather plausible. Similar tendencies are evident for some of the other NCC-related items: for instance, participants who reported being more open to new perspectives and feeling no need to rush towards conclusions tended to rank the plausibility of scenarios higher than those with a stronger need for cognitive closure. The fact that the NCC items were collected two weeks prior to the plausibility judgments reduces the risk of social desirability effects. Logistic regression analyses reveal that the variable cognitive closure (X8) (specifically item 15) also serves as a suitable predictor for CIB-related


scenario plausibility judgments. Although these models predict the probability that the scenarios are judged plausible significantly better than the mean of the outcome variable, their diagnostics show comparatively weaker results than the predictive models in the previous chapters. Nevertheless, the fact that some degree of predictive power of cognitive closure is observed for all CIB-related judgments is interpreted as a hint towards the relevance of cognitive closure for conceptualising scenario plausibility. The underlying null hypothesis, holding that there is no relation between plausibility and cognitive styles, is therefore rejected. The analyses point in the direction of H6.

Related to cognitive styles, an analysis of participants’ background knowledge of the subject matter of energy transformation reveals that participants with less background knowledge tended to judge the plausibility of scenarios at the extreme (very plausible). In line with longstanding research on cognitive heuristics, this allows for the interpretation that participants with less background knowledge applied more short-cuts when asked to make judgments. Caution, however, must be exercised with this result due to the small expected frequencies in each cell. Future research is encouraged to test participants’ background knowledge with other items, and to analyse relations between plausibility judgments and background knowledge in other contexts and with larger sample sizes.

Lastly, the analysis tested hypothesis H7, holding that scenario plausibility judgments are positively related to an individual’s interest in reading the scenario. Both bivariate correlation analyses of the variable interesting read (X7) and logistic regression analyses show moderately positive relationships with the plausibility judgments of all three CIB-related scenarios. The underlying null hypothesis is therefore rejected and H7 is accepted.

7 Experimental study: qualitative research findings

The qualitative data presented in this chapter is drawn from open questions within the experiments. Participants were asked to i) write down the reasons why they judged the respective scenarios (IL1, IL2, CIB1, CIB2) as plausible or implausible, ii) note to what extent they believed the scenarios to be influenced by the developers’ own ideas and worldviews, and iii) indicate which of the scenario reports (IL, CIB) they would rather use in a presentation to political decision-makers or as input for scientific research.

7.1

Procedure of qualitative data analysis

The qualitative data was coded deductively and inductively using the software programme MAXQDA. The scheme of the models-of-data theory, as reviewed in the theoretical exploration, served as the deductive starting point. The theory presents a useful framework for this research because it seeks to i) understand the way in which individuals cognitively represent given scenarios, and ii) analyse the processes that operate on these representations, including plausibility judgments. The theory proposes a classification scheme for coding written protocols as reasons for believing or disbelieving data. During the coding process, the codes were adapted for the purpose of studying scenario plausibility judgments and inductively extended by further categories so as to include any reason brought forward for plausibility or implausibility. Most participants stated more than one reason for each of the scenarios; some wrote down their reasons as bullet points, others as full sentences. All given reasons were coded separately. In total, 431 reasons were coded into 13 main codes and 42 sub-codes. Figure 19 presents an overview


of the identified reasons (i.e. categories) for scenario [im]plausibility. They mostly correspond with the models-of-data theory but also go beyond it. The collection of codes shows three overarching themes (boxes in white) along with some second-order categories. The second-order categories are multi-faceted, as most of them were used as reasons for both the plausibility and the implausibility of scenarios. The remainder of this chapter illustrates the three themes and closes with a discussion of their relation to the research propositions.

Figure 19: Categories for reasons of scenario [im]plausibility identified from written protocols


7.2

Internal structure of scenarios

The first theme pertains to the overall observation that individuals comment explicitly on how data in a scenario is arranged – more specifically, on the individual scenario factors and the linkages and inferences drawn between them. These are also prominent categories in the models-of-data theory, which pays special attention to the complex evidence structure of data. Applied to the coding of the written protocols, the theory holds two relevant premises. First, when evaluating the presented scenario, individuals construct a model as a cognitive representation of the data based on reading the scenario. Data may represent any detail of the scenario, e.g. individual factors, linkages or inferences (Chinn & Brewer 2001:340). Within the cognitive representation, individuals do not treat the data in the scenarios as facts but construct a model of the scenario’s internal structure (how the data in the scenario itself is linked, but also how this bundle of assumptions is related to theoretical interpretations or other forms of data). The written protocols reveal how participants recapitulate the scenario by sharing their own representation of the data, like this participant: “The scenario 1 from report CIB is plausible to the extent that it incorporates and considers different aspects. Because of the speed of infrastructure planning activities, it is assumed that projects will get going, which in turn pertains citizens. This slows down the expansion of renewable energy and citizens rather tend to reject it. Therefore, scepticism arises as well as potential conflicts to invest in the expansion of the infrastructure.” [TN22/SOC/m/1/CIB1]1

The second premise of the models-of-data theory holds that these cognitive models build the basis for an evaluation of the scenario, which happens through an assessment of the plausibility of the links within a scenario (Chinn & Brewer 2001:330).
In the present data, four different kinds of links – causal links, impossible causal links, inductive links and analogical links – are identified as reasons for the plausibility or implausibility of a scenario. In line with

1

Text passages are fully anonymised. Passages are labelled with the participant number (TN#), followed by the disciplinary background of the participant (social sciences [SOC], engineering sciences [ENG]), the gender (female [f], male [m]), the treatment group (IL report read first [1], CIB report read first [2]), and the scenario the statement refers to (IL1, IL2, IL report, CIB1, CIB2, CIB report).


theoretical discussions, especially causal links were often expressed as reasons for scenario plausibility, for example: “I find scenario IL2 very plausible. On the one hand, because here a relaxed global situation is assumed because of the state’s focus on internal affairs politics.” [TN08/SOC/m/2/IL2]. Table 18 shows an overview of the reasons given for the plausibility or implausibility of data, along with some indications of how often the respective codes appeared in the data analysis2 . Causal links were expressed 21 times as reasons for the plausibility of a scenario. At the same time, causal links were also heavily criticised by participants, and in turn presented as a source of implausibility: “Even less plausible is that on the one hand, the visibility of climate change impacts increases while on the other hand, particularly pricing strategies shall ensure acceptance for the Energiewende.” [TN11/SOC/f/1/CIB1]. In the written protocols, participants also raised questions regarding the causal links presented in the scenarios. An analysis of the codes does not clearly reveal whether such questions are brought forward merely as contemplation or as severe criticism that contributes to the scenario’s implausibility in the view of the recipient. Some references to causal links in the scenarios also seem to result from the peculiarities of the scenario development methods Intuitive Logics (IL) and Cross-Impact Balance Analysis (CIB). In comparison to IL-related scenarios, the matrix structure of CIB-related scenarios enabled more targeted references to the causal links because of its presentation of pair-wise factor relations: “The argumentation for the factors A – C are plausible, here I find both the supporting and the inhibiting factors appropriate and also their weighting.” [TN15/SOC/f/2/CIB2].
At the same time, analogical and inductive links – two further second-order categories proposed by the models-of-data theory – appear to be especially pertinent for IL-based scenarios. Put forth as reasons for implausibility only, inductive links were denied per se on two levels. On a meta-level, participants criticised that “[…] interdependencies of the past and the present are extrapolated into the future, which I perceive as very risky.” [T16/SOC/f/2/IL1]. On a more specific level, too, inductive inferences were viewed as illegitimate: “The sense of community already exists in small villages today. But I think that this is not transferable

2

The analysis is not meant to provide quantitative results, yet the number of codings can provide some hints as to how dominant the respective reasons for plausibility or implausibility appear to be.


to bigger cities. Here, different perceptions are too strong. This sense will not influence Germany.” [TN61/ENG/m/2/IL1]. Analogical links are present in IL-based scenarios only. This could result from the fact that the CIB method does not usually offer room for transferring insights from one event or phenomenon to another. When evaluating IL-based scenarios, analogical links provided a rather controversial reference point. Both IL scenarios refer to the increasing number of refugees in Germany in 2015-16 to argue for the possibility of similar social movements in the context of the energy system transformation. On the one hand, these analogies were criticised as “out of place [because] these two developments have nothing to do with each other” [TN52/ENG/m/1/IL2]; on the other hand, they were viewed by others as helpful for the scenario’s imaginability: “The developments described were targeted to the current events in the migration crisis and these arguments mostly made sense. I could certainly imagine, that this scenario exactly comes true […].” [TN13/SOC/f/1/IL1]

Next to comments on the data linkages, a relevant aspect is that participants suggested further possible analogies that could support a scenario’s plausibility. Hence, the category D Alternative Causal Pathways (table 18) presents a similarly multifaceted source for evaluating scenario plausibility. The prominence of this category is evident across both scenario formats. The models-of-data theory assumes that when individuals find alternative causal paths, the plausibility of the scenario decreases. This is also evident in the present data, where participants proposed alternative pathways as counterfactuals: “I think the explanation for the weak economic growth are implausible. I would say that an increase in climate change rather adds to an economic growth.” [TN15/SOC/f/2/CIB1].
However, beyond this theoretical assumption, other tendencies can also be interpreted from the written protocols: individuals built on the links presented in the scenario and added alternative pathways that were – from their perspectives – not necessarily in conflict with the existing links. In this sense, participants also put forth completely new alternative pathways that gave the scenario a different turn: “To increase the acceptance of citizens, citizen participation is a suitable measure. However, this also bears problems. Participation without limits leads to a slow-down of the [infrastructure] processes. Furthermore, it is difficult to expect well-informed participation of citizens for complex issues.” [TN11/SOC/f/1/IL2]. It seems that the scenarios triggered participants to think about what else could be influenced by the presented future. This is interpreted not as a sign of implausibility, but as generating new food for thought.

While narrative theory suggests that plausibility is generally established through (causal) linkages between factors or events (Abbott 2002; Magliano 1999; Pargman et al 2017), the data also suggests that scenario recipients made an evaluation simply on the basis of whether they found individual factors of the scenario to be plausible or implausible. The category E Individual Components was, therefore, inductively added (figure 19). Such simple assertions constitute dominant reasons in the written protocols that go in both directions: i) acceptance of individual components, as exemplified by the quote “plausible: Plan of the government to become independent from fossil energy sources” [TN10/ENG/m/2/IL2], but also ii) rejection, as evident from the quote “Less plausible I think is the scepticism with regards to the Energiewende, for example, and also the low extensions of grids and renewable energy.” [TN58/ENG/m/2/CIB1]. Furthermore, the absence of individual components was mentioned as a reason for judging the scenarios as implausible: “The role of the EU is completely ignored (e.g. Lobbying of the EU Commission).” [TN34/ENG/m/2/CIB report]. There are several possible reasons for the prominence of this category. Certainly, the brevity of answers could be attributed to the research design: after completing several closed questions, participants might no longer have been willing or able to invest this cognitive effort towards the end of the experiment. At the same time, Chinn & Brewer (2001:351-356) noted the presence of simple assertions at several levels of their empirical research.
They hypothesise that such ‘reasons’ could be given because the factors, or the scenario in general, are consistent with the individual’s underlying theory or own beliefs.

Table 18: Reasons for plausibility and implausibility: internal structure of scenario

A. CAUSAL LINKS
A1. Acceptance of causal links (reason for plausibility, 21 codes): “I find scenario IL2 very plausible. On the one hand, because here a relaxed global situation is assumed in combination with the state’s focus on internal affairs politics.” [TN08/SOC/m/2/IL2]
A2. Rejection of causal links (reason for implausibility)
A3. Questioning causal links (reason for implausibility, 5 codes): “It is not clear to me why the acceleration of the infrastructure planning (F2) leads to scepticism with regards to the Energiewende.” [TN31/ENG/m/2/CIB1]

B. IMPOSSIBLE CAUSAL LINKS
B1. Denying impossible causal links (reason for implausibility, 4 codes): “Furthermore, it is not comprehensible why the German government would jump on and participate in this ‘bottom-up transformation’.” [TN19/SOC/m/1/IL1]

C. ANALOGICAL LINKS
C1. Acceptance of analogical links (reason for plausibility, 1 code): “I think this scenario is plausible and comprehensible. The developments that are described in there, were adjusted to current developments like the refugee crisis and the arguments sounded mostly reasonable. Thus, I could imagine that this scenario comes true and that the population, in cooperatives, will take things in their own hands.” [TN13/SOC/f/1/IL1]
C2. Rejection of analogical links (reason for implausibility, 1 code): “Also, the attempt to integrate the refugee theme is out of place. These two developments have nothing to do with each other! It gives the impression that the topic is used as an on-ramp to create attention!” [TN52/ENG/m/1/IL2]
C3. Proposition of alternative analogies (reason for implausibility, 3 codes): “Also, it seems questionable why after circa 35 years the ‘refugee crisis’ should still have a significant influence on the discussed topic. More relevant would be past or in this area possible incidents/ risks (flooding, breaches in dams, nuclear incidents etc.).” [TN50/ENG/m/1/IL1]

D. INDUCTIVE LINKS
D1. Rejection of inductive links (meta-level) (reason for implausibility, 3 codes): “Less plausible, because relations from the past and the present are transferred into the future; I think this assumption is very risky. The strength of the relationships is not expressed.” [T16/SOC/f/2/IL1]
D2. Rejection of inductive links (specific) (reason for implausibility, 16 codes): “This sense of community is also present in small villages. But I think that it will not be transferrable to big cities. There are too strong differences in viewpoints. This sense will not shape Germany.” [TN61/ENG/m/2/IL1]

E. INDIVIDUAL COMPONENTS
E1. Acceptance of individual components (reason for plausibility, 9 codes): “plausible: Plan of the government to become independent from fossil energy sources.” [TN10/ENG/m/2/IL2]
E2. Rejection of individual components (reason for implausibility, 15 codes): “Less plausible I think is the scepticism with regards to the Energiewende, for example, and also the low extensions of grids and renewable energy.” [TN58/ENG/m/2/CIB1]

Source: Illustration inspired by Chinn & Brewer (2001)


7.3

Scenario’s relation to other forms of knowledge and data

Following the theoretical assumptions of Chinn & Brewer (2001), individuals incorporate other forms of knowledge and data, along with the given data in the scenario, into their cognitive representations (models). This fit between the scenario and other data is assessed when evaluating the scenario. Three second-order categories are identified in this analysis. Table 19 shows an elaborate overview of all sub-categories.

The first category constitutes an individual’s reference to the underlying theory3 of the scenario. Here, ‘underlying theory’ refers to more general assumptions about how the dynamics of energy transformations can be understood and empowered. An example would be the theory that only citizen participation and a democratisation of the energy system will ultimately drive a change towards sustainable energy production and consumption. The data analysis reveals this category as one of the more frequent ones in the coding scheme. Participants often rejected the underlying theory of the scenario: “Scenario 1 from report CIB links the implementation of the Energiewende to economic and rational factors. I believe this not very plausible.” [TN11/SOC/f/1/CIB1]. At the same time, participants also supported the underlying theory and mentioned this as a source of scenario plausibility: “[...] Scenario 1 from report IL is rather plausible […] If ecologic pioneers of the past decades had imagined the implementation of the Energiewende, it would have looked similar to this.” [TN22/SOC/m/1/IL1].

A further second-order category was inductively generated, inspired by what Chinn & Brewer (2001) summarised as ‘relation to other data’. In the context of scenario evaluation, this ‘other data’ pertains to an individual’s own observations of current and past developments: “The developments of the past 30 years show that environmentally-related measures can be implemented rather quickly (e.g. CO2 reductions) and are accepted rather smoothly by the population.” [TN34/ENG/m/2/CIB1]. As much as current and past observations were used in support of a scenario’s plausibility, they were almost equally used as a source of implausibility:

3

The notion of ‘underlying theory’ is used by Chinn & Brewer (2001) in a somewhat different context: in their empirical research, participants additionally receive information about a scientific theory (e.g. a theory about the causes of the mass extinction at the end of the Cretaceous period) and are then expected to test the presented data against this theory.


“It is not sufficiently explained why the acceptance by citizens is expected to increase and how a [change in] awareness shall be implemented. The climate change conference in 2015 has changed nothing, therefore, this argument is not plausible.” [TN07/SOC/m/1/IL2].

As a third category, ‘relation to other knowledge’ is taken from the models-of-data theory. As one aspect of this category, participants stated their own assertions of what will or will not happen. Such reasons thus refer not only to participants’ past and present observations, but also to what they anticipated about the future. This can stand in direct competition with the presented scenarios: “Scenario 1 is plausible to me. I am pretty sure that the price for oil will increase. The same applies for the forecasted developments with regards to climate change.” [TN32/ENG/m/2/CIB1]. Also interesting in the written protocols is the notion of desirability. Several participants openly underpinned the plausibility or implausibility of a scenario with their own desires for the future: “Personally, I believe the factors D1, E1 and G1 to be plausible, because I wish that they will become reality and also I believe they are important for the future.” [TN36/ENG/m/2/CIB2]. Other participants, in turn, rated the scenarios as implausible regardless of their own desires towards this future: “The idea of a participative infrastructure planning through new regulations is admirable, but I doubt that the population will gain influence through this and can actively participate.” [TN21/SOC/f/1/IL2]

Beyond the three core categories that capture individuals’ relation to other data when evaluating scenarios, the data analysis reveals further responses to the data (summarised at the bottom of table 19). Noteworthy in the context of scenarios is that probabilities are occasionally used to reinforce the plausibility or implausibility of a scenario. Scenarios that are too far-fetched from the participants’ perspective, in the sense of their likelihood, are also perceived as less plausible or imaginable. The sub-categories J4 and J5 also imply that many participants had difficulties in judging plausibility, and that rather than allowing for clear assessments, the scenarios triggered further questions and uncertainties for the scenario recipients.

Table 19: Reasons for plausibility and implausibility: scenario’s relation to other data

G. UNDERLYING THEORY
- G1. Acceptance of underlying theory (reason for plausibility): “[Scenario is plausible, because, RS] private households tend to invest more money into a sustainable energy supply than big companies.” [TN47/ENG/m/1/IL1]
- G2. Rejection of underlying theory (reason for implausibility): “It is not sufficiently explained, why the acceptance by citizens should increase and how this awareness is implemented. The climate change conference in 2015 has changed nothing, therefore this argument is not plausible.” [TN07/SOC/m/1/IL2]; “Scenario 1 from report CIB connects the implementation of the Energiewende with economic and rational factors. I think this is less plausible.” [TN11/SOC/f/1/CIB1]

H. RELATION TO OTHER DATA/OBSERVATION
- H1. Parallels to own observations (plausibility): “Scenario 1 from report IL is rather plausible. (…) If ecological pioneers from the past decades had laid out the Energiewende, it would have looked like this or similar.” [TN22/SOC/m/1/IL1]
- H2. No parallels to own observations (implausibility): “The developments of the past 30 years show that environmental measures can be very quickly implemented (e.g. CO2 reductions) and that they are adopted quickly by the population.” [TN34/ENG/m/2/CIB1]

I. RELATION TO OTHER KNOWLEDGE/INTUITION
- I1. Own ideas of what will happen (plausibility): “Scenario 1 is plausible for me. I am fairly certain that the price of oil will go up. The same applies for the forecasted futures with regards to climate change.” [TN32/ENG/m/2/CIB1]
- I2. Own desirability of scenario (plausibility): “I personally think the factors D1, E1 and G1 are plausible, because I myself wish that these come true and I also perceive them as important for the future.” [TN36/ENG/m/2/CIB2]
- I3. Own ideas of what will not happen (implausibility): “The high grid expansion is impeded by numerous public petitions, despite the high acceptance in society. This has always been a problem and will also present a problem for construction projects in the future.” [TN61/ENG/m/2/CIB2]; “very improbable that the government achieves a rethink based on moral reasons. The government advocates national interests. If the population can be brought on to a sustainable pathway only by regulatory measures (without own motivation), this would be political suicide for the politics of the government.” [TN03/SOC/m/2/IL2]
- I4. Implausibility regardless of desirability: “The idea of the shaping of participative infrastructure planning through new laws is laudable, yet I doubt that this way the population will have an influence and can participate actively.” [TN21/SOC/f/1/IL2]

J. OTHER CATEGORIES
- J1. Simple assertions: “sounds plausible and traceable” [TN47/ENG/m/1/CIB2]
- J2. Probability: “I also think the impetus towards the Energiewende through the state and not the citizen is probable and therefore plausible.” [TN08/SOC/m/2/IL2]; “As long as lobbyism has a strong influence on the politicians, the fossil energy sources will not disappear. An environmentally friendly system is therefore not probable.” [TN61/ENG/m/2/CIB2]
- J3. Insecurity in judgment: “The scenario 1 is both plausible and also implausible.” [TN21/SOC/f/1/IL1]; “Plausibility of scenario 2 is assessed higher by myself than for scenario 1 → own feeling.” [TN24/SOC/m/2/CIB2]; “I would go for scenario report CIB. Although the scenarios sound somewhat implausible, they sound much more reputable and professional.” [TN22/SOC/m/1/CIB report]
- J4. Further arising questions: “In fact, imaginable, that the pressure of high energy prices will cause citizens to become active, but presumably rather protests on a large scale than many small projects/ cooperatives (rather isolated?)” [TN26/SOC/f/2/IL report]; “Furthermore, it is the question whether (...) citizens can actively participate in decision-making processes and whether they (can) take the time to engage with the Energiewende more thoroughly or if the normal citizen will simply follow the general opinions and no real participative state will be achieved.” [TN54/ENG/m/2/IL2]

Source: Illustration inspired by Chinn & Brewer (2001)

7 Experimental study: qualitative research findings

7.4 Scenario methodology

The third category relates to participants’ discussions of the scenario methodology, credibility and trust, and has been developed inductively. It covers a broader range of aspects of [im]plausibility assessments and is structured by seven sub-categories (overview in figure 19). One major assumption of cognitive and educational psychological studies is that individuals rarely take new information as given facts; rather, the data evaluation is expected to be informed by assessments of the methodology through which the data has been generated (Chinn & Brewer 2001; Lombardi et al 2015). This is also evident in the present study. While the former two overarching themes were found across the two scenario formats, albeit with different foci, reasons regarding the scenario methodology are more clearly distinguishable between IL- and CIB-based scenarios (table 20). Many participants explicitly referred to the traceability, i.e. the understandability and logic, of the scenarios. The traceability of scenarios was mentioned as a reason for plausibility with respect to both scenario formats; for IL, one participant wrote: “I could almost entirely follow the sequence of the content of the scenario.” [T02/SOC/f/1/IL1]. With respect to CIB-based scenarios, reasons for plausibility almost always targeted the specific form of scenario presentation: “The traceability was better, because it was described by the graphic and the arrows in the Cross-Impact Balance Analysis.” [TN48/ENG/m/1/CIB1]. While traceability as a reason for the plausibility of scenarios is almost equally distributed among the two scenario formats, the absence of traceability as a reason for implausibility is only found with regard to IL-based scenarios. Many participants argue that the scenarios do not sufficiently explain the assumed relationships between factors. A reversed picture is evident for the sub-category ‘complexity/ level of detail’.
For both scenario formats, participants mention some degree of complexity of the scenarios as a reason for plausibility. For CIB-related scenarios, again the format of presentation contributes to the complexity and multifaceted nature of the scenarios. For IL-based scenarios, one participant wrote, for example: “From my perspective the most plausible, despite the model’s low degree of complexity.” [TN53/ENG/m/2/IL1]. While complexity seems to contribute to plausibility, there seems to be a threshold; for CIB-related scenarios only, participants criticised the complexity of presentation as a source of implausibility (table 20). The question of subjectivity versus objectivity of the scenarios has been a popular one among participants. The scientific nature of CIB scenarios was
evident in the accounts of participants and almost reiterated discussions in the scholarly scenario literature: “I think that the opinions of the developers don’t become clear, because the development of the scenarios via the matrix don’t leave room for it. Because no storyline is developed, interests and worldviews cannot be included as readily, or at least this is not readily noticeable, because the statements for blocks [rows and columns in the matrix, RSS] are approached separately.” [TN30/SOC/f/1/CIB report]. At the same time, many participants qualified the superiority of CIB and noticed that the objectivity of CIB scenarios is mostly conveyed by the form of scenario presentation. In this context, alleged subjectivity is also regularly attributed to CIB scenarios and used as a reason for scenario implausibility. IL-based scenarios are more often criticised for their lack of objectivity, yet both criticism and support for the scenarios’ plausibility are evident. A further sub-category is identified that was taken up as a specific question in the experimental questionnaire (“Do you feel own ideas and worldviews of the scenario developers are evident? If so, where in the scenarios?”). Participants’ perspectives for this category are highly diverse across the two scenario formats. While arguments for both the plausibility and implausibility of IL and CIB scenarios are evident, the content of the statements often refers to the form of presentation of the scenarios, i.e. the matrix structure and the storylines. The sub-category ‘trust in method’ shows similarly interesting tendencies regarding the scenario formats. Overall, trust in the method appeared to be a rather important determinant for the evaluation of the scenarios, as exemplified below: “I would choose the CIB scenario report. Although the scenarios are somewhat implausible, it [the report, RSS] appears much more serious and professional.
This is probably due to the cool data matrix and the graphics and arrows and the separate description of the individual factors.” [TN22/SOC/m/1/CIB report]. While the CIB structure was regularly mentioned as creating trust in the method, there are also counterexamples: “Algorithms are nowhere explained, this leads to a ‘lack of trust’ (why are which variants assessed like this?), implausible!” [TN10/ENG/m/2/CIB2]. Interestingly, none of the participants explicitly raised trust in the IL method; written statements with regard to IL-based scenarios were only used as reasons for implausibility.

Table 20: Reasons for plausibility and implausibility: scenario methodology

K. TRACEABILITY/ LOGIC
- IL, plausibility: “I can trace the developments in the scenarios almost entirely.” [T02/SOC/f/1/IL1]
- IL, implausibility: “It’s not sufficiently explained, why acceptance by citizens should increase and how the awareness can be implemented.” [TN07/SOC/m/1/IL2]
- CIB, plausibility: “Influencing factors and the combination[s] are well comprehensible; the mutual influences (…), the structure and the individual factors in their impact make the scenario tangible.” [TN28/SOC/f/2/CIB1]

L. COMPLEXITY/ LEVEL OF DETAIL
- IL, plausibility: “In my opinion the most plausible, despite the low degree of complexity of the model.” [TN53/ENG/m/2/IL1]
- CIB, plausibility: “The scenario 1 from report CIB is plausible in the sense that it integrates and considers different aspects.” [TN22/SOC/m/1/CIB1]
- CIB, implausibility: “Phrases are unnecessarily difficult to understand due to double negativity (e.g. inhibiting scepticism; supporting a weak growth); this inhibits the reading and can cause confusion.” [TN21/SOC/f/1/CIB1]

M. SUBJECTIVITY/ OBJECTIVITY
- IL, plausibility: “factual neutral” [TN47/ENG/m/1/IL2]
- IL, implausibility: “Like in scenario 1 many assumptions are made. The moral or ethical obligation is no decisive argument for me.” [TN61/ENG/m/2/IL2]
- CIB, plausibility: “Relationships are objectively assessed and subsequently evaluated, which decreases ‘fallacies’ in limited cognitions.” [TN35/ENG/m/2/CIB1]
- CIB, implausibility: “The other factors present speculations for me, which are, according to individual opinions, more or less plausible.” [TN58/ENG/m/2/CIB1]

N. ROLE OF SCENARIO DEVELOPER(S)
- IL: “I think both scenarios in report IL are not considerably normative. Yet, I cannot conclusively answer why exactly the two scenarios from the upper right corner in the scenario axis are presented; this could be traced to the views of the authors.” [TN08/SOC/m/2/IL report]; “Developer indicates that s/he perceives the previously implemented measures (2016) as insufficient (political priorities for climate and Energiewende). Emphasis on the significance of participation.” [TN19/SOC/m/1/IL report]
- CIB: “I think that the opinion of the developer does not become clear, because the inference of the scenarios via the matrix does not leave room for this. Because no storyline is developed, interests or worldviews cannot be included so quickly or at least this cannot be detected so quickly, because the statements in the blocks are considered separately.” [TN30/SOC/f/1/CIB report]; “The assessments of the influences (weak, moderate, strong) are not always transparent for me. Here, interests and values of the involved actors could be hidden.” [TN23/SOC/m/2/CIB report]

O. TRUST IN METHOD
- IL, implausibility: “From the methodological explanations it is not clear to me, how the driving factors that are not in the coordinates of axes are [included] in the scenarios.” [TN19/SOC/m/1/IL1]
- CIB, plausibility: “I would decide for the report CIB. Although the scenarios are somewhat implausible, it [the scenario] seems more reputable and professional. This probably is because of the cool data matrix and the graphs and arrows and the separate description of the individual factors.” [TN22/SOC/m/1/CIB report]
- CIB, implausibility: “The algorithm is nowhere explained, this leads to a loss of trust (why were the variables assessed like this?). Implausible.” [TN10/ENG/m/2/CIB2]

P. DRAMATURGY
- IL, plausibility: “A plus is that the timely dynamic is included and that trends and driving forces were distinguished.” [TN09/SOC/f/2/IL1]
- IL, implausibility: “It reads like a novel from Andreas Eschbach; turn after paragraph 4 questions plausibility; first critical whether the Energiewende ‘succeeds’ and then everything can be solved by citizens in the rural areas.” [TN35/ENG/m/2/IL1]

Q1. EVIDENCE STRUCTURE: Lack of Information
- IL, implausibility: “From the current standpoint I doubt this scenario, because politics will not act only due to moral convictions; I miss complex relationships like the cooperation between economic actors and citizens.” [TN28/SOC/f/2/IL2]
- CIB, implausibility: “Influence of new trends inestimable as uncertainty” [TN46/ENG/m/2/CIB1]

Q2. EVIDENCE STRUCTURE: Reductionist/ Deterministic
- IL, implausibility: “I think this scenario is hardly plausible. – Political decision-making sounds single-sided to me” [TN59/SOC/f/2/IL2]
- CIB, implausibility: “Sounded hardly plausible to me, because no distinction is made between factors that are less influenced by model-internal factors (something like ‘predetermined factors’ in report IL) and those that strongly depend on other model quantities.” [TN09/SOC/f/1/CIB2]

Source: Illustration inspired by Chinn & Brewer (2001)

Overall, the categories for the thematic cluster ‘methodology’ reveal more criticism, i.e. reasons for implausibility rather than plausibility of the scenarios. This is also supported by the last sub-category, ‘evidence structures’. Many participants mentioned that more information would be needed to fully evaluate the scenarios. At the same time, both scenario formats, particularly CIB, were criticised for being too reductionist and too deterministic. Specifically, participants noted that possible developments were not portrayed or analysed from multiple perspectives, were single-sided or even biased, and did not sufficiently consider model uncertainties. In sum, two findings are particularly noteworthy. First, previous theoretical concepts on data evaluation (see Chinn & Brewer (2001) for an extensive review) assume that individuals mostly pick up on methodological flaws in the data. While methodological considerations are identified as relevant reasons for [im]plausibility in the present study as well, the coded passages suggest that individuals’ comments are often directed at the method itself – the perceived objectivity, traceability or the attractiveness of data presentation – without any recourse to the content of the scenarios. Put simply, a scenario was judged plausible if its development method was perceived as legitimate. Second, participants picked up on the very peculiarities of the two presented scenario formats IL and CIB. For instance, the persuasive character of IL-based scenarios – narrative research assumes storylines to be highly persuasive (see chapter 4.3) – was coded 13 times as a reason for plausibility, while the same code showed up only once for a CIB-related scenario. Vice versa, the perceived scientific nature of CIB scenarios, due to the display of numbers and matrices, was coded 30 times versus just two times for IL scenarios (table 21).
Table 21: Specific characteristics of IL- and CIB-based scenario evaluations

Intuitive Logics
- R. PERSUASIVENESS (IL: 13; CIB: 1), reason for plausibility: “Furthermore, it [the report] is positive and is a positive view into the future. The argument that the Energiewende is only possible if everyone pulls together is a thrilling argument.” [TN17/SOC/f/1/IL report]
- S. IMAGINABILITY (IL: 7; CIB: 1): “This concerns the morally anchored argument which goes beyond the material values and the global problems of climate change (…) and presents the consequences of our own living standards in a bigger picture.” [TN25/SOC/m/1/IL report]

Cross-Impact Balance Analysis
- T. SCIENTIFIC NATURE/ OBJECTIVITY (CIB: 30; IL: 2): “The whole report, in contrast to the second report, makes a more independent and reputable impression” [TN61/ENG/m/2/CIB report]; “Due to the determination of premises (strength of the influencing factors) hardly any hints towards normative values of the developer; rather clinical-scientific, neutral presentation.” [TN01/SOC/f/2/CIB report]
- U. COMPARABILITY/ COMPREHENSIVENESS (CIB: 16; IL: 5): “Due to the scenario method and the diagrams it is very easy to see the scenario in relation to the other scenario.” [TN30/SOC/f/1/CIB report]

Source: Illustration inspired by Chinn & Brewer (2001)
7.5 Discussion and data triangulation

The qualitative findings enrich the quantitative conclusions discussed in the previous chapter and are relevant across the research propositions of this study. The written protocols reveal that individuals form opinions on scenario plausibility or implausibility by evaluating the internal data structure of the scenarios. This finding is overall consistent with theoretical notions from argumentative discourse analysis (Majone 1989) as well as cognitive and educational psychology (Chinn & Brewer 2001; Lombardi et al 2015). It presents relevant empirical evidence for the theoretically derived proposition 5: A user’s plausibility judgment of a scenario is linked to the internal structure of the scenario, the argumentation strength and persuasiveness. Individuals explicitly discuss diverse linkages of data in a scenario when contemplating its plausibility. As hypothesised by narrative theorists (Abbott 2002; Pargman et al 2017), causal links are particularly often raised by participants as reasons for plausibility AND
implausibility. While scenario methods do not always work with causality, connections between factors in the scenario are regularly interpreted by participants as causalities. This observation can be attributed to the concept of ‘implicit causalities’ (Kutscher 2009): the scenario presentation tempted participants to judge certain arrangements of subjects and objects in a sentence (for IL-based scenarios) or in impact diagrams (for CIB-based scenarios) as resulting from causality. Here, the data analysis shows that causal, analogical or inductive links in the scenarios – implicitly assumed or explicitly stated in the data – offer room for attack, but also for support, of the scenarios. The qualitative findings offer insights that would have remained undetected with quantitative analyses only. Indeed, the statistical tests have only rudimentarily captured how individuals assess the scenario data, using the quantitative variables complexity and internal consistency. The qualitative analysis can be interpreted as generally supporting the positive correlation between a scenario’s internal consistency and its plausibility. Participants’ conceptions of the degree of complexity of a scenario, however, disclose tendencies that are less straightforward and, quite literally, more complex. Participants put forth that complexity is generally considered a reason for plausibility; the CIB scenarios, for example, were considered plausible because they included so many factors, and some participants found IL scenarios to be plausible despite their low degree of complexity. At the same time, there seems to be a threshold for complexity, because participants criticised the complex presentation of CIB scenarios as hindering the plausibility of scenarios.
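The bivariate associations between perceived complexity and plausibility discussed here can be illustrated with a small computation. The sketch below is purely hypothetical: it uses invented 1-5 ratings (not the study’s data) and a plain-Python Spearman rank correlation to show how the same variable can correlate positively for one scenario and negatively for another.

```python
# Hypothetical sketch: per-scenario bivariate (Spearman) correlation between
# complexity ratings and plausibility ratings. All ratings below are invented.

def spearman(xs, ys):
    """Spearman rank correlation with average ranks for ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # extend j over a run of tied values
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# invented (complexity, plausibility) ratings for two fictitious scenarios
il1 = ([2, 3, 3, 4, 5, 2, 4], [2, 3, 4, 4, 5, 3, 4])    # positive association
cib2 = ([4, 5, 5, 3, 4, 5, 4], [4, 2, 3, 4, 3, 2, 4])   # negative association
print(round(spearman(*il1), 2), round(spearman(*cib2), 2))
```

The point of the sketch is only that a single variable, computed identically per scenario, can yield opposite signs, which is the pattern the quantitative analysis reported for complexity.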
Hence, the qualitative data regarding the relation between plausibility and complexity could explain the very different dynamics in the quantitative analysis of the variable complexity (the variable showed positive, negative as well as non-existent bivariate associations with scenario plausibility judgments). Similar tendencies are also reflected in participants’ references to a scenario’s methodology – another overarching theme in the qualitative data analysis. Written passages coded as relating to the scenario methodology thereby stand out as relevant for the study’s proposition 2: A user’s plausibility judgment of a scenario is linked to the perceived credibility of the scenario source and to trust in the scenario itself. In this study, credibility is assumed to relate either to the credibility of the scenario source or to the credibility and trustworthiness of the scenario itself. For both notions, the empirical data provides evidence of a positive association with plausibility. While the latter is evident in participants’ explanations of the scenarios’ traceability, logic or value neutrality, particularly the credibility associated with the scenario source
is remarkable. Participants extensively commented on the trustworthiness of the method or contemplated whether they felt the scenario developers’ own worldviews are manifested in the scenarios. The perceived objectivity and the attractiveness of the method seem to be very dominant – to the extent that plausibility assessments are justified based on the scenario method alone, without any recourse to the actual content of the scenario.

The third overarching theme disclosed by the coding procedure relates to scenarios’ relation to other forms of data and knowledge, and enriches the quantitative data regarding research proposition 3: A user’s plausibility judgment of a scenario depends on whether the scenario corresponds with the recipient’s own worldviews and prior knowledge. The qualitative data analysis reveals that participants judge a scenario to be more plausible when it supports their underlying theory and when the scenario is consistent with their current and past observations. Additionally, other forms of knowledge play a role in the assessment of scenario plausibility, i.e. participants’ own ideas of what will or will not happen in the future. These results are particularly interesting in juxtaposition to the findings of the quantitative analysis. The dominance of other data that is consulted when evaluating the scenario reflects the high statistical relevance of the variable match with own ideas and provides deeper insights into the mechanisms that may have been captured by the variable. For the category relating to the fit between individuals’ own knowledge and the scenario, no differences between scenario formats can be located. This allows for the hypothesis that a match between the given scenario and individuals’ own mental models is a robust concept across formats of data presentation (narrative versus matrix).
Finally, insights across the three coding categories show interesting twists regarding research proposition 1: A user’s plausibility judgment of a scenario is linked to the format in which a scenario is presented. The quantitative, statistical analysis allowed only for the careful observation that plausibility judgments for CIB-related scenarios were overall higher than for IL-related scenarios (particularly when participants rated the IL scenarios first). The qualitative data suggests that participants picked up on the respective particularities of the two presented scenario formats. The persuasive character of IL-based scenarios was primarily mentioned as a reason for plausibility, while for CIB-related scenarios the perceived scientific nature was emphasised. While this observation may not be very surprising, it suggests that individuals deploy different mechanisms and look out for different aspects when making plausibility judgments of two very different scenario formats.


8 Synthesis: A conceptual map of scenario plausibility

The overall objective of this book is to contribute to the still young literature on scenario plausibility. Previously, scholars have proposed ideas to grasp and establish plausibility from the perspective of scenario developers (Wiek et al 2013), for instance through the use of certain scenario construction techniques (Boenink 2013; Lloyd & Schweizer 2013), or have argued that plausibility is the result of a co-constructive interaction between developers and users (Ramírez & Selin 2014; Ramírez & Wilkinson 2016). The research presented in this book is the first systematic and empirical exploration of scenario users’ plausibility judgments. It has taken a broad, sociological perspective to explore a wider range of indicators for plausibility. Conclusive statements are therefore not the key outcome of this study. Rather, this chapter presents a conceptual map of indicators for plausibility judgments. Based on the examined theoretical concepts and the empirical observations, it discusses the significance of the identified indicators for scenario practices and puts forth proposals for conceptualising scenario plausibility. The exploratory nature of the study indeed calls for a cautious interpretation of findings. The map is therefore not understood as a full-fledged theory but as an approach to synthesise insights and outlooks that are worth pursuing in future research agendas.

8.1 Units and contexts of the map

The map development resembles ‘theory-research circles’ in which conceptual frameworks inform empirical research and, in turn, lead to more refined theoretical concepts in iterative processes (Dubin 1978). The process presented in figure 20 can thereby be viewed as a starting point for more in-depth theoretical and empirical research on scenario plausibility. The contexts and
boundaries of the map, i.e. the specific contextual conditions under which plausibility is viewed, are represented as relevant units in this map.¹ Three units of analysis and their interactions are synthesised from the theoretical concepts and empirical observations: scenario development sources and methods (unit A), scenario(s) and their internal structure as presented in a scenario report (unit B), and a scenario user-recipient (unit C) all contribute to scenario plausibility judgments. The map connects units by categoric laws of interaction, implying that a change in one unit potentially triggers a change in plausibility judgments. The categoric laws do not assume causal relationships between the units.² Figure 21 provides an overview of the unit factors and indicators. The units are briefly defined and subsequently discussed in more detail.

Figure 20: Developing a map on scenario plausibility judgments

Source: Presentation inspired by Lynham (2002:243) and Dubin (1978)

¹ Following the ‘theory-research circle’ approach of Dubin (1978:144), units are concepts of factors that are central to the understanding of the general workings of the map.
² For this map, not enough is known about the dynamics of plausibility, so that relationships are not specified as determinant. Following Dubin (1976) and Chermack (2005:63), categoric laws are therefore used to provide careful indications for directions of relationships.


Unit A: Scenario(s) development sources and methods
Scenarios result from development processes in which scenario users were not involved. The development team (e.g. experts and/or stakeholders) can apply different scenario methods. For this map, the methods Intuitive Logics (IL) and Cross-Impact Balance Analysis (CIB) are considered. While in IL narrative storylines are systematically developed, in CIB internally consistent factor combinations are determined based on pairwise assessments of uncertain factors.

Unit B: Scenario(s) and their internal structure in a scenario report
Scenarios are defined as possible pictures of a future world and come in sets of two or more scenarios in a written report. Depending on the method, they take on different formats: for this map, storylines (IL) and network diagrams (CIB) are considered. In both formats, scenarios consist of several uncertainty factors that interact with one another. Methodological descriptions of the development processes are included as part of a scenario report. Often several scenarios exist regarding a subject matter (for instance, the future of energy system change) and users are presented with more than one report.

Unit C: Scenario user-recipient
Different stakeholders may receive scenario reports. This map only addresses scenario users who were not involved in the development process. No direct interaction or exchange of ideas happens between scenario developers and the ‘user-recipients’. Scenario users have different expectations and beliefs about the subject matter, different disciplinary backgrounds and diverse motivations to engage with a given set of scenarios.

8.2 Unit of analysis A: scenario development sources and methods

8.2.1 Indicator 1: credibility

Empirical findings of this study yield credibility as one of the key indicators for scenario plausibility judgments. A positive relationship between plausibility and some sort of ‘credibility’ is also established as a common thread running through theoretical concepts and models across disciplines. This map
thereby underpins those theoretical directions. At the same time, however, it differentiates more clearly between different notions of credibility and accentuates its significance for scenario assessment. Indeed, credibility has been subject to scholarly analyses for decades, and its definition and scope are often subject-specific, for instance regarding its role within scientific communities (Allchin 1999; Shapin 1994), in communication research (Chung et al 2008; Hovland et al 1953) or in information processing (Chaiken & Maheswaran 1994; Hilligoss & Rieh 2008). Consequently, credibility has been associated with epistemic structure and quality of reasoning, with persuasion, trust, believability, objectivity, cognitive strategies and many other concepts. For the composition of a scenario plausibility map, two perspectives of credibility are determined as relevant: source credibility, on the one hand, and credibility in the sense of trustworthiness of the scenario itself, on the other. Both conceptions are interrelated; if the scenario source is perceived as very credible, the trustworthiness of the scenario itself is also likely to be high. However, both notions also involve fundamentally different dynamics when it comes to assessment mechanisms and are, therefore, discussed in turn. The credibility of the scenario itself refers to what is often termed ‘message credibility’ (Metzger et al 2003) or ‘information credibility’ (Hilligoss & Rieh 2008) and relates to the believability and trustworthiness of the described pathways in the scenarios. This notion of credibility is highly pertinent to scenario practice. Scenarios embrace uncertainty and ambiguity to the extent that conventional criteria for information validation are inapplicable; instead, assessments of scenarios are bound to trust and perceptions of credibility (Selin 2006).
This is reinforced by the proliferation of scenarios, particularly in fields like sustainable energy transformation, where scenario users are confronted with an overload of different and often contradictory scenarios. Conceptual maps of scenario plausibility can profit here from well-established risk perception research and draw useful analogies. Longstanding research has suggested trust as a mediator and a necessary condition for taking third-party risk assessments seriously (Beck 1992; Löfstedt 2005; Slovic 2000). For plausibility judgments of scenarios, credibility is clearly linked to ‘trustworthiness’, implying that the scenario recipient thinks that the assertions of the scenario can be believed and trusted (Hovland et al 1953). Quantitative findings from the experimental study evidence that plausibility judgments are positively correlated with recipients’ perceptions of trustworthiness. Bivariate correlations are evident across the two different scenario formats (CIB and IL) and across the different scenario recipients (students with social science and engineering backgrounds). Logistic regressions show very powerful models that predict plausibility judgments based on trustworthiness, with the predictor trustworthiness explaining up to 31.3 percent of the variance in plausibility judgments. Qualitative findings show that trustworthiness involves the perceived persuasiveness of a scenario’s content and contributes to plausibility judgments, for instance when participants argue that “although it is not explained why such ‘ethical’ debate is held, I still believe it is plausible” [TN07/SOC/m/1], or “the general structure of arguments is plausible, I just do not believe it” [TN24/SOC/f/2]. As such, the qualitative findings directly speak to the propositions of Majone’s (1989) ‘quasi-judicial’ methodology. Majone argues that while policy analysts – or in this case scenario developers – have to compile arguments using credible evidence, it is ultimately up to the audience to assess whether it is trustworthy enough to hold as a conclusion. Although the map maintains a positive correlation between scenario credibility and plausibility judgments, this relation should not be interpreted as too straightforward. As the quantitative findings have revealed, associations vary from strongly positive for the respective first scenarios of the reports (IL1/CIB1) to no meaningful relations for the partner scenarios (IL2/CIB2). The fact that this tendency is observable for both scenario types points to systematic patterns that should not be overlooked in a conceptual plausibility map. One explanation is that trustworthiness can serve as a helpful anchor for scenario users in making their judgments.
Following psychological research on order and sequence effects (Stewart et al 2002), for the first scenario of a report, trustworthiness could have served as a helpful anchor because other references were still missing; for the second scenario, however, this anchor was not as present, because the first scenario could now be used as a benchmark. Similar explanations are also evident from other contexts. Chaiken & Maheswaran (1994), for instance, demonstrated that when individuals perceived a task to be of low as opposed to high importance, credibility played a very important role in making judgments, regardless of the argument’s strength. Furthermore, the more ambiguous a message was, the more credibility- and context-related thoughts were involved in judgments. In sum, this map takes up indications that when judging a scenario, contextual interpretations may play a particularly important role. This relationship is also dynamically impacted by cognitive short-cuts or heuristics. Interesting parallels can be drawn here between empirical insights on scenario plausibility judgments and studies on subjective probability assessment. In the latter field, longstanding research has shown that heuristic ‘rules of thumb’ are actually quite consistent when it comes to individuals’ judgments of risks (Gilovich et al 2002). Explorations of such patterns should be part of conceptual plausibility maps in the future. Yet, already at this juncture, the map constitutes an important refinement of previous considerations of plausibility and credibility, particularly compared to findings in narrative theory. Indeed, the relationship between narrative evaluation, plausibility judgments and credibility/trustworthiness has been acknowledged there, albeit on a rather broad, conceptual level. Hajer (1995) and Boswell et al (2011), for instance, noted that plausibility is a cognitive condition of the narrative reader and, along with credibility and source trustworthiness, contributes to narrative persuasion. Potential interactions between those factors, however, were not considered. Others have simply asserted that plausibility itself is a “verbal cue for trustfulness” (Vrij 2008).

Next to the trustworthiness of the scenario itself, the credibility of the scenario source is also taken up as a relevant factor in the plausibility map. Although scenarios are often not directly associated with particular authors and may rather be linked to the credibility of the respective text itself, source credibility is taken up for two reasons: first, propositions in extant plausibility models and concepts across disciplines, and second, empirical findings from the experimental study. Except for linguistic approaches to plausibility, all explored notions argue for a relationship between plausibility and credibility; most authors thereby in fact refer to source credibility. Rescher (1976), for example, maintains that plausibility assessments are directly dependent on the reliability of the source.
Also, conceptual plausibility models from educational psychology, most notably the PJCC (Lombardi et al 2015), are based on empirical evidence that source credibility influences students’ plausibility judgments of scientific and non-scientific stories about climate change. In this map, source credibility is operationalised as ‘expertise’ and refers to the perceived capacity of scenario developers to produce valuable scenarios (Hovland et al 1953). The quantitative findings hint at a positive relation with plausibility judgments, albeit less conclusively. Participants’ perceptions of expertise show strong associations with judgments of the overall scenario CIB Set (about 30.6 percent of the variance in plausibility judgments is explained by expertise), while for the other scenario plausibility judgments, merely weak or even non-observable associations can be demonstrated. At the same time, the qualitative findings exhibit many references to the scenario developers, as exemplified in this statement: “[The] developer indicates that s/he perceives the previously implemented measures (2016) as insufficient (political priorities for climate and Energiewende)” [TN19/SOC/m/1]. Together with the inconsistent patterns in the quantitative results, these tendencies are worth considering for conceptual plausibility maps. Participants did not know the scenario developers in person but only read a general description of the ‘involved experts’ and their methodologies – and still tried to relate to the developers. Interestingly, this speaks to the framework on narrative communication presented earlier. Abbott (2002) holds that readers’ evaluations do not stop at the narrative; even if no direct interaction between reader and author takes place, readers still construct their own image of the author (the ‘implied author’) to interpret and find meaning in the narrative. The assumption that ‘perceived expertise’ is subject to individuals’ own mental constructs is also supported by social psychology research. Earle & Cvetkovich (1995), for instance, have demonstrated that the more a statement corresponds with an individual’s own beliefs and values, the higher individuals assess the expertise of the statement’s authors.

Proposing source credibility as an indicator of plausibility thereby presents a new perspective not only on scenario plausibility, but also on scenario planning more generally. Scenario assessments are associated not only with the material – the text, the narrative or the model of scenarios – but with the source of the material. This presents a fundamental counterargument to assumptions in the scenario literature which hold that scenario plausibility is ensured through the internal consistency and argumentative strength of scenarios (Lloyd & Schweizer 2013; Weimer-Jehle et al 2016). Thereby, the findings on scenario plausibility and source credibility also gain social and political relevance.
In the absence of first-hand validation, Shapin (1994) has famously been concerned with the criteria for assessing the credibility of scientific reports. He maintained that any scientific output is ultimately produced by a person, and so the credibility of the scientist or the institution behind the science becomes a ‘vicarious selector’ – in the sense of an available shortcut or cue – for making assessments of any scientific product (Allchin 1999). For Shapin (1994), then, the critical question is not what to trust, but whom to trust. This perspective is highly pertinent to the current practice of scenario planning. In fact, some scholars have recently warned that scenarios of renowned developing institutions – most notably the IPCC – exercise a certain power, or performativity, that does not stem from the scenarios’ technical feasibility itself, but from the perceived authority of such credible institutions (Beck & Mahony 2018). Thus, future conceptual maps of scenario plausibility need to look carefully at the relationship with political power.
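The regression figures quoted in this section – for instance, up to 31.3 percent of the variance in plausibility judgments explained by trustworthiness – are of the kind produced by pseudo-R² measures for logistic models with a binary outcome. As a purely illustrative sketch (the study’s actual data, software and choice of pseudo-R² variant are not reproduced here; all numbers below are invented), a one-predictor logistic regression and McFadden’s pseudo-R² can be computed as follows:

```python
# Illustrative sketch only: fit P(plausible=1 | trustworthiness) by gradient
# descent and report McFadden's pseudo-R^2 against an intercept-only null
# model. Data are invented; this is not the study's analysis code.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(x, y, lr=0.5, epochs=3000):
    """Gradient-descent fit of P(y=1 | x) = sigmoid(b0 + b1*x)."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            err = sigmoid(b0 + b1 * xi) - yi
            g0 += err
            g1 += err * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

def log_likelihood(y, probs):
    eps = 1e-12
    return sum(yi * math.log(max(p, eps)) + (1 - yi) * math.log(max(1 - p, eps))
               for yi, p in zip(y, probs))

def mcfadden_r2(x, y):
    b0, b1 = fit_logistic(x, y)
    ll_full = log_likelihood(y, [sigmoid(b0 + b1 * xi) for xi in x])
    base = sum(y) / len(y)                      # intercept-only null model
    ll_null = log_likelihood(y, [base] * len(y))
    return 1.0 - ll_full / ll_null

# Invented data: trustworthiness ratings (1-5 scale) loosely driving a
# binary "plausible?" judgment.
random.seed(42)
trust = [random.uniform(1, 5) for _ in range(200)]
plausible = [1 if random.random() < sigmoid(t - 3) else 0 for t in trust]

r2 = mcfadden_r2(trust, plausible)
print(f"McFadden pseudo-R2: {r2:.3f}")
```

McFadden’s measure compares the log-likelihood of the fitted model against the intercept-only null model; other pseudo-R² variants (e.g. Nagelkerke) would yield different percentages for the same data, which is one reason such figures should be read as rough effect indicators rather than exact proportions.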

8.2.2 Indicator 2: scenario methods

Connected to the first indicator ‘credibility’, a second indicator is proposed that relates to individuals’ assessment of the scenario methods when making plausibility judgments. While source credibility is often conceptualised to encompass considerations about methodologies (Whitehead 1968), the prominence of such references in the qualitative findings merits a dedicated discussion. Figure 21 displays the different factors that were synthesised from the empirical analysis. The spectrum of factors underlines a core argument of the models-of-data theory discussed in this book. Chinn & Brewer (2001) argue that when individuals are asked to evaluate incoming, anomalous data, they do not treat these data as given facts – as had been assumed in past cognitive research – but critically assess the underlying methodology. The qualitative data support this theoretical proposition; study participants carefully investigated the methodology of scenarios when asked to make plausibility judgments. The two methods presented in the experimental study (CIB and IL) thereby evoked different reactions. Most participants perceived a presentation of scenarios in the form of a model (CIB) to be more plausible than the narrative. Differences in the judgments are particularly strong with regard to recipients’ judgments of the overall scenario reports. Here, the map presents an interesting counterpoint to narrative theory, in which plausibility is associated with the telling of a story (Dahlstrom 2010) and the canonical sequence of events – the ‘story grammar’ (Canter et al 2003). Propositions from narrative theory commonly maintain that the narrative structure, i.e. the sequence of events, is a powerful driver of plausibility, because it reflects humans’ perceptions about the ‘order of things’ (Caron 1992:162).
In contrast, the empirical findings of this study support assumptions within the Futures Studies literature that the transparent and traceable CIB method produces more plausible scenarios than Intuitive Logics (Lloyd & Schweizer 2013). These findings are interesting against the backdrop of the vivid methods debates in Futures Studies (as discussed in chapter 2.3). The quantitative findings are underlined by qualitative insights in that the perceived subjectivity or objectivity of scenarios plays a major role in plausibility judgments and is, therefore, taken up as a map factor. This reflects longstanding research on source credibility, in which debates about objectivity are included as important dynamics (Whitehead 1968). Participants argued for the objectivity of CIB scenarios because of their “more scientific character” [TN22/SOC/m/1], as exemplified in multiple statements: “The whole report, in contrast to the second report, makes a more independent and reputable impression” [TN61/ENG/m/2]. Indeed, the CIB method was regularly mentioned as creating trust, while at the same time many participants acknowledged that this does not imply a ‘superiority’ of CIB and presented counterexamples: “Algorithms are nowhere explained, this leads to a ‘lack of trust’ (why are which variants assessed like this?), implausible!” [TN10/ENG/m/2]. For IL-related judgments, different principles seemed to be applied to assess plausibility. For instance, in contrast to CIB, the persuasiveness and induced imaginability of the storyline was mentioned particularly often by participants. The conceptual map, therefore, accounts for the circumstance that individuals seem to pick up on the peculiarities of the scenario methods. Beyond the respective focus of the two methods (the scientific appearance or the persuasive narrative is, indeed, induced by the methods), this suggests that individuals deploy fundamentally different mechanisms and look out for different aspects when making plausibility judgments of two very different scenario presentations. Such ‘flexibility’ in plausibility judgments has not been accounted for in the previously explored theoretical models of plausibility. One reason for this may be the respective disciplinary foci of previous plausibility concepts; narrative theory, for example, primarily focuses on how storylines convey plausibility. In this respect, the cross-disciplinary perspective of the map is a key strength when looking at scenario assessments.

Despite the variations in scenario format assessments, evaluations of the scenarios’ methodology also exhibit a common thread running across formats. This is illustrated by the map factor ‘evidence structure’.
Individuals across social science and engineering backgrounds criticised both scenario formats for being too reductionist and deterministic. For instance, participants noted that possible developments were not portrayed and analysed from multiple perspectives, were one-sided or even biased, and did not sufficiently consider model uncertainties. Such observations are not reflected in any of the previously discussed concepts. In this map, they are acknowledged as a potential jeopardy to scenario plausibility. The danger of determinism and tendencies towards forecasting the future resonate implicitly in scenario planning research. Yet, those concerns are often brushed aside by arguments holding that scenarios only suggest possible sketches of future worlds that are not supposed to become reality. This map maintains that scenario users may not be satisfied with such premises of scenario theorists. Future maps should strive for an even better understanding of individuals’ criticism of scenario structures and methods. Previous studies, for instance, have suggested that people search harder for flaws in methodologies when the data contradict their own ideas and assumptions (Chinn & Brewer 2001; Klaczynski 2000; Kunda 1990). Such possible interplays between the indicators ‘credibility’ and ‘scenario methods’ on the one hand, and scenario recipients’ mental models and conceptual coherence (indicator 4) on the other, are noteworthy.

In sum, the map of scenario plausibility judgment is characterised by a key role of contextual assessments. The prominence of individuals’ judgments regarding the ‘scientific’ or ‘scholarly’ appearance of scenarios is remarkable. It may well attract scenario theorists’ attention, since entrenched debates about scenarios being a ‘science’ or an ‘art’ have been going on in the community for years. Possible explanations for the observed tendencies can be drawn from more recent research on credibility. Hilligoss & Rieh (2008) have found that individuals use very different criteria for assessing the credibility of given information. This ‘construct credibility’ holds that perceived credibility depends on the individuals themselves but also on the format and appearance of the information. Applied to scenario judgments, the tendency to focus on the scholarly appearance of scenarios has surely been induced by the format of presentation (the reports, the description of methodologies). While this has been part of the experimental stimuli, it directly reflects recent pursuits of scientific principles in scenario planning research and practice (Gerhold 2015). Rieh (2002), furthermore, holds that individuals often carry over credibility criteria from one type of information to another.
Hence, the emphasis on ‘what looks scholarly’ may also result from participants’ university education backgrounds. In other words, if scenario theorists and developers strive towards scientific measures in scenarios, they should be prepared that recipients, in turn, feel prompted to apply such criteria in their assessments.


Figure 21: Conceptual map explaining scenario plausibility judgments  


8.3 Unit of analysis B: scenario(s) and scenario report

8.3.1 Indicator 3: internal structures of scenarios

The map accounts for the empirical observation that scenario recipients evaluate the internal structure of a scenario and pay attention to different kinds of links between scenario components. The qualitative findings thereby add to the quantitative analysis and illustrate in more detail what was broadly captured by differences in scenario formats and treatment groups. The internal structure of a scenario, i.e. its individual uncertainty factors and the kinds of linkages between them, presents an important source for the evaluation of scenario plausibility. The map rests on the models-of-data theory by Chinn & Brewer (2001), which presents five different kinds of links (causal links, impossible causal links, inductive links, analogical links and contrastive links) as factors in data evaluation. The first four links are clearly identified in the written protocols. Particularly causal links within the scenarios are mentioned as a basis for judgments. Here, the plausibility map links directly to narrative theory approaches. The majority of scholars argue that causal information is recalled more often in cognitive exercises than non-causal information and is considered more important by readers (Bower & Morrow 1990; Kintsch 1998; Pargman et al 2017), because it leads to improved comprehension and memory retention (Dahlstrom 2010; Sundermeier et al 2005). At the same time, the map departs from narrative theory assumptions in that it incorporates scenario peculiarities regarding the judgment of inductive and analogical links. As supported by the qualitative findings, inductive links within scenarios are categorically rejected: individuals perceived the extrapolation of past and present developments into the future as per se illegitimate and explicitly dismissed several specific inductive references in the scenarios. Furthermore, analogical links present controversial sources for scenario plausibility.
While analogies are often criticised as being “out of place”, because the respective developments described in the scenario “have nothing to do with each other” [TN52/ENG/m/1], they were sometimes also viewed as helpful for the imaginability of scenarios.

The described tendencies are interesting as they emphasise common discrepancies between formal (causal) models on the one hand, and the human interpretation and perception of causes and effects on the other. In the development and evaluation of causal models, plausibility plays an important role on two levels of analysis, as illustrated by Siegrist (1999): first, the relationships between the variables in the model need to be ‘plausibly’ established (e.g. using structural equation modelling); second, causal models are often subject to criticism because alternative models are equally plausible and cannot be ruled out as explanations for certain phenomena. Indeed, the use of causality is controversially discussed in the scenario planning literature. While for some scenario methods causality is not strictly needed – CIB, for instance, also works when links between uncertainty factors are interpreted as correlations – other methods, including IL, work explicitly with causality as a preferred mode of thinking for scenario users (van der Heijden 2005). Some scholars have argued that the use of causality and internal consistency prevents scenarios from displaying surprising and unexpected futures (Postma & Liebl 2005). In one of the few experimental studies on scenario planning, Bradfield (2008:9) also warns that causalities in scenarios may limit their interpretation to existing schemas and readily available knowledge – to the extent that creative thinking with scenarios is impaired. The circumstance that individuals tend to overestimate causalities when making judgments is thereby also taken up by this map. In the literature, this is discussed as ‘implicit causality’ (Kutscher 2009), in the sense that certain arrangements of subjects and objects in sentences trigger individuals to attribute causality. The implicit causality literature is still divided as to whether such effects result from the linguistic structure or whether some non-linguistic cognition is involved (Hartshorne 2014). Both explanations are possible in the scenario context. In comparison to IL-related scenarios, the matrix structure of CIB-related scenarios enabled more targeted references to causal links.
Following the linguistic tradition, this could be due to the presentation of pair-wise factor relations in CIB, while references to causal links within IL-based scenarios rather remained on a ‘bird’s-eye’ perspective. Whether such effects may also account for the quantitatively observed differences in plausibility judgments needs to be confirmed in future empirical studies. At the same time, cognitive explanations for the references to causality also hold for the observed qualitative insights. As research by Sanbonmatsu et al (1993) suggests, references to causalities are often mediated by confirmatory processes, i.e. individuals emphasise causalities with reference to their own beliefs or disbeliefs. This could also explain why individual components of the scenarios were often referenced as the sole justification for a scenario’s [im]plausibility. Following research on processes of causal attribution, individual factors are immediately interpreted as plausible or implausible causes of some outcome (Kelley 1973). The map thereby constitutes a relevant expansion of previous models. Although narrative theory scholars have generally grounded the link between causality and plausibility in readers’ inclination to find causes even in non-causal contexts (Abbott 2002:37), the scenario plausibility map demonstrates in a more nuanced manner that the relationship between causality and plausibility is relevant on different levels: i) the level of individual scenario factors, ii) the interaction between two factors, and iii) the causal model underlying a scenario as a whole in comparison to another scenario. In this way, it also presents a helpful adaptation and refinement of argumentation theory. According to Majone (1989), plausibility judgments are made about the conclusions drawn from an argument that itself may consist of information, data and evidence. While Majone suggests that plausibility judgments happen at the end of the engagement with the data, this map maintains that multiple plausibility judgments happen when assessing complex and multi-layered information such as scenarios. Here, the model also shows interesting parallels to research on probability assessments. Kahneman & Tversky (1984), for instance, have found that when provided with the same quality of information, causal data are generally given more weight in probability judgments than diagnostic data.

In close relation to the findings on plausibility and causality, scenario users’ references to existing alternative pathways are incorporated in the map as reasons for scenario plausibility or implausibility. The prominence of this unit factor is evident across both tested scenario formats (IL and CIB). The empirical findings point to two different dynamics.
On the one hand, plausibility perceptions of scenarios were impaired when individuals identified alternative causal pathways that were counterfactual to the scenario. Participants disagreed with a proposed causality and put forward alternatives. This corresponds to theoretical concepts from cognitive psychology (Chinn & Brewer 2001; Chinn 1993) assuming that any alternative pathway limits plausibility perceptions. Support also comes from longstanding cognitive research; empirical studies have consistently found that when individuals are confronted with information contradicting their own beliefs, they tend to come up with other causal explanations in order to keep their own story alive (Swann & Read 1981). On the other hand, the empirical findings reveal tendencies that scenario recipients also build on presented links and supplement them with alternative pathways that are consistent with the already existing links. Study participants put forth alternative explanations that lent the scenarios a different turn or more nuanced impulses without necessarily diminishing their plausibility. Hence, scenarios can trigger user-recipients to think about what else could influence the presented future and, thus, support the scenario. This can equally be explained by confirmatory processes, i.e. scenarios that basically correspond with individuals’ beliefs are further supported through alternative explanations. Yet, it has not been raised by previous plausibility models. What is more, it also points towards conceptual enhancements of plausibility judgments in scenario planning contexts. Specifically, the role of, and the relation between, proposed scenarios and alternative explanations of the same phenomenon could be exploited to stimulate more detailed and nuanced engagement with scenarios. Scenario theorists who generally emphasise the value of plausibility have also proposed that perceptions of plausibility/implausibility could be seen as the start, not the end, of a scenario exercise (Ramírez & Selin 2014). In this respect, cognitive psychology research has indicated that asking individuals to explicitly state reasons for their judgments can limit the power of mental short-cuts, including the overestimation of causality (Sanbonmatsu et al 1993).

Finally, with the degree of complexity and perceptions of scenarios’ internal consistency, two further unit factors are incorporated that present interesting countertendencies to theoretical assumptions. Against propositions from cognitive models of plausibility – most notably the model by Connell & Keane (2006) – high perceptions of scenario complexity correlate with high plausibility judgments. The written protocols reveal that some participants explicitly argued that a scenario was plausible “despite its low degree of complexity” [TN53/ENG/m/2].
This tendency is weakly to moderately supported by the quantitative data and is clearly at odds with key premises of narrative theory, where plausibility is linked to the accessibility and understandability of storylines. At the same time, there seems to be a threshold for complexity, since participants criticised the complex presentation of CIB scenarios as hindering plausibility. In a similar context, ‘internal consistency’ is incorporated in the map. It accounts for assumptions in the scenario literature that the absence of contradictions between factors within one scenario is central to its plausibility (Wiek et al 2013). Statistical measures support this assumption: the variable shows positive relations with plausibility judgments, exhibiting moderate to very strong effects. As was pertinent for causality, differences between notions of formal internal consistency and perceptions of consistency are important in this respect. Many scenario scholars who argue for internal consistency as an important factor for scenario plausibility put forth formal testing procedures, such as those comprised in the CIB method. Such notions of consistency are also evident in plausibility models from the perspective of informal logic (Walton 1992b). With its perspective on individuals’ perceptions, this map accounts for whether an individual judges a scenario as internally consistent or not – regardless of whether this can be formally confirmed. Explanations for this can be found in cognitive psychology research on how individuals deal with uncertainty. Jensen & Hurley (2010), for instance, maintain that for humans, inconsistency or conflict between data is a manifestation of uncertainty that they seek to reduce depending on their own needs or beliefs. In fact, the empirical results indicate that although the CIB method features formal measures to ensure internal consistency and the IL method does not, participants did not reflect this in their subjective assessments; bivariate correlations are even stronger for IL than for CIB scenarios.

In sum, two aspects appear particularly important for the further conceptual development of the scenario plausibility map. First, while factors like causal linkages, complexity and internal consistency may appear formally testable through analytical procedures of scenario developers or external scenario theorists, individuals’ perceptions do not necessarily follow those prescribed notions. Comparisons between the notions and operationalisations of scenario developers on the one hand, and the understanding and actual judgments of scenario user-recipients on the other, need to be investigated.
Second, the variety of angles from which the internal structure of scenarios can be analysed (individual factors; causal, inductive and analogical links; consistency; etc.) makes any prediction of scenario plausibility very difficult. In fact, the multi-faceted nature and density of scenarios, i.e. the large number of factors and linkages between them, appear to make a scenario rather vulnerable to its users. Similar observations have, in fact, been made by narrative theorists about the sensitivity of texts (Abbott 2002:79). Conceptual maps need to investigate how influential recipients’ assessments of small fractions of a scenario are for the overall plausibility judgment. The analysis of Chinn & Brewer (2001:342) offers an interesting cue: they argue that there exist many different levels of accepting data. Similarly detailed nuances of plausibility could further enhance the present map.
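The formal consistency test repeatedly invoked in this section is the one built into CIB. As a toy sketch of the principle (after Weimer-Jehle) – with invented descriptors, variants and impact scores, not values from the study, and a simplified consistency rule – a scenario is internally consistent when no descriptor’s chosen variant is outscored by an alternative variant’s impact balance:

```python
# Toy sketch of the CIB internal-consistency check. All descriptors,
# variants and impact scores are invented; real CIB matrices are elicited
# from experts and are far larger.

# impacts[(source_descriptor, source_variant)][(target_descriptor, target_variant)]
# = promoting (+) or restricting (-) influence.
descriptors = {
    "policy": ["ambitious", "weak"],
    "renewables": ["boom", "stagnation"],
}
impacts = {
    ("policy", "ambitious"): {("renewables", "boom"): 2, ("renewables", "stagnation"): -2},
    ("policy", "weak"):      {("renewables", "boom"): -2, ("renewables", "stagnation"): 2},
    ("renewables", "boom"):       {("policy", "ambitious"): 1, ("policy", "weak"): -1},
    ("renewables", "stagnation"): {("policy", "ambitious"): -1, ("policy", "weak"): 1},
}

def impact_balance(scenario, desc, variant):
    """Summed influence of all other chosen variants on (desc, variant)."""
    return sum(
        impacts.get((src_desc, src_var), {}).get((desc, variant), 0)
        for src_desc, src_var in scenario.items() if src_desc != desc
    )

def is_consistent(scenario):
    """Consistent iff no chosen variant is outscored by an alternative variant."""
    for desc, variants in descriptors.items():
        chosen = impact_balance(scenario, desc, scenario[desc])
        if chosen < max(impact_balance(scenario, desc, v) for v in variants):
            return False
    return True

print(is_consistent({"policy": "ambitious", "renewables": "boom"}))        # True
print(is_consistent({"policy": "ambitious", "renewables": "stagnation"}))  # False
```

The point relevant to this chapter is that such a test is a property of the scenario’s formal structure; as the empirical findings above show, user-recipients’ *perceptions* of consistency need not track it.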


8.4 Unit of analysis C: scenario user-recipient

8.4.1 Indicator 4: conceptual coherence

One of the key premises of this plausibility map is that the judgments of scenario user-recipients are related to whether a presented scenario corresponds with their pre-existing knowledge and beliefs. The map endorses the applicability of this theoretical assumption from other disciplinary contexts to scenario planning. Except for the formal assessment procedures put forth by the philosophy of science (Rescher 1976), all theoretical models see a form of ‘conceptual coherence’ as directly connected with plausibility judgments. Qualitative and quantitative findings affirm that the more sources of corroboration a scenario recipient finds, the more plausible a scenario is judged. This is confirmed on different levels. Strong to very strong positive bivariate relationships exist between plausibility judgments and study participants’ own assessments of the match between the scenarios and their ideas about energy system transformations. Furthermore, participants’ appraisals of energy transformation success factors, collected with a significant time lag after the plausibility judgments, point in the same direction and add even more validity to the observed relationship. While the quantitative observations cannot capture the differentiation between forms of pre-existing knowledge, the written protocols of study participants are rich in illuminating statements. In particular, the written accounts manifest individuals’ own ideas of what will happen in the future as a central factor (see figure 21). Also, the references of study participants to their underlying theories, and their matching of these with the given scenarios, are interesting in the light of some narrative theory premises. For instance, it is notable that participants often referred to economic, incentive-driven behaviours in society when arguing for a scenario’s plausibility. This finding supplements one of the few previous experimental studies on scenario planning.
Bradfield (2008) found that scenario developers are strongly guided by what is currently discussed in society or in the media. In this way, assumptions from narrative research are supported. At the same time, research has also maintained that plausibility is conveyed if a narrative corresponds with accepted guidelines and norms in a given society (Brockmeier & Harré 2001). Future research designs drawing on cultural sociology perspectives need to investigate this assumption further.


The Plausibility of Future Scenarios

Overall, logistic regressions based on the experimental data mark the models with conceptual coherence as predictor as the most powerful in determining scenario plausibility. While the small sample sizes rule out analyses of the interaction between different predictors, a high significance of conceptual coherence can already be asserted at this juncture. It explains the variance in plausibility judgments better than all other models tested for alternative indicators (e.g. trustworthiness or need for cognitive closure). Furthermore, the effects are consistent across the two tested scenario formats, IL and CIB. The assumed weight of conceptual coherence as an indicator for scenario plausibility can also be reflected against findings from longstanding cognitive psychology research, most notably dissonance theory. Its key argument is that individuals’ assessment of given information is never fully detached from their own beliefs. Specifically, the ‘confirmatory bias’ denotes that individuals tend to believe and favour information that corresponds with their own view, while they discredit or ignore statements contradicting it (Jermias 2001; Mahoney 1977). Most conceptual and empirical studies have explained this behaviour with individuals’ need to reduce cognitive dissonance (Cooper 2007; Koehler 1993; Sanbonmatsu et al 1993; Swann & Read 1981). Given the multitude of empirical and conceptual research that has pointed to the robustness of confirmatory processes across different contexts, their appearance within plausibility judgments is not necessarily surprising. At the same time, it also points towards their influence on other plausibility indicators. As research studies have demonstrated, pre-existing beliefs can guide individuals’ assessment of credibility or trustworthiness.
Koehler (1993), for instance, found that even scientists’ perceptions of their colleagues’ credibility are influenced by whether their colleagues’ findings correspond with their own conceptions. In a similar vein, Pitz (1969) found that pre-existing beliefs and confirmatory processes were the main reason why individuals tend to stick with their initial judgments and refuse to reconsider contradictory information. Based on this previous research, future conceptualisations of scenario plausibility not only have to acknowledge conceptual coherence as a stand-alone indicator but also need to emphasise its indirect effect on plausibility through contextual factors of scenarios. In this context, it should be mentioned that a great number of social psychology scholars have laid out models to understand and overcome individuals’ persistence in their own beliefs. The Elaboration Likelihood Model (ELM) by Petty & Cacioppo (1986) ranks among the most well-known. The ELM determines two key factors with the potential to induce attitude change: first, the argumentative and persuasive strength of a message, and second, the availability of peripheral cues regarding the source or the content of the message. In a way, this is also reflected by plausibility notions from educational psychology. Lombardi et al (2015) acknowledge the ‘stickiness’ of individuals’ own plausibility conceptions and maintain that conceptual change, i.e. learning, can only happen if a new storyline is significantly more plausible than the individual’s existing conception. A similar direction can be found in the notions of Majone (1989), who argues that an argument’s plausibility is directly dependent on the persuasiveness of the message, e.g. through powerful evidence and/or trustworthy proponents. While both accounts acknowledge the challenge of ‘implanting’ new plausibilities, the conditions under which this is possible need to be further investigated in conceptual scenario plausibility maps. This is particularly relevant considering the expectations towards scenarios as tools for learning and conceptual change.
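As an aside, the model-comparison logic referred to above – fitting logistic regressions with single indicators as predictors and comparing how well each explains binary plausibility judgments – can be sketched in a few lines. All data, variable names and effect sizes below are invented for illustration (the study’s actual dataset is not reproduced here); the sketch only shows how such models can be compared via McFadden’s pseudo-R², with conceptual coherence simulated as the stronger driver.

```python
import math
import random

def fit_logistic(x, y, lr=0.1, steps=5000):
    """Fit y ~ intercept + b*x by gradient ascent on the Bernoulli log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def log_likelihood(x, y, b0, b1):
    ll = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

def mcfadden_r2(x, y):
    """McFadden's pseudo-R2 of a one-predictor model against the intercept-only model."""
    b0, b1 = fit_logistic(x, y)
    ll_model = log_likelihood(x, y, b0, b1)
    p_bar = sum(y) / len(y)
    ll_null = sum(yi * math.log(p_bar) + (1 - yi) * math.log(1 - p_bar) for yi in y)
    return 1.0 - ll_model / ll_null

# Invented data: binary 'scenario judged plausible' outcomes, plus standardised
# scores for two hypothetical candidate predictors.
random.seed(42)
coherence = [random.gauss(0, 1) for _ in range(80)]
trust = [random.gauss(0, 1) for _ in range(80)]
# Simulate judgments driven mainly by conceptual coherence (weights are arbitrary).
plausible = [1 if random.random() < 1 / (1 + math.exp(-(2.0 * c + 0.3 * t))) else 0
             for c, t in zip(coherence, trust)]

print(f"pseudo-R2, coherence model: {mcfadden_r2(coherence, plausible):.2f}")
print(f"pseudo-R2, trust model:     {mcfadden_r2(trust, plausible):.2f}")
```

On this simulated data, the coherence-based model yields the clearly higher pseudo-R², mirroring the kind of model comparison described for the experimental findings – without implying that the study used exactly this estimator.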

8.4.2 Indicator 5: cognitive heuristics and dispositions

Overall, the theoretically-derived propositions have put forward two kinds of indicators for scenario plausibility judgments: On the one hand, internal factors hold that judgments are related to the internal logic and structure as well as to the presentation format of the scenarios. On the other hand, external factors account for the fact that judgments are related to the contexts in which scenarios are assessed and involve an individual’s own beliefs, their contextual interpretations, as well as the cognitive styles or dispositions of scenario users. The latter have been defined by Canter et al (2003) as ‘anchors’ in plausibility judgments. The assumed relevance of such anchors is nuanced in models of plausibility by cognitive psychologists. Here, they are discussed under the more general frame of cognitive heuristics, in the sense of mental shortcuts individuals apply to complex and ambiguous tasks. In the conceptual map, the relevance of such judgmental heuristics is reflected in two ways. First, the empirical findings have produced interesting insights into the relation between plausibility judgments and some specific heuristic constructs, such as the need for cognitive closure. This legitimises their inclusion as an indicator. Second, in the previously discussed indicators, several dynamics resonate with the workings of well-known cognitive heuristics and underline their importance for scenario plausibility. Both perspectives are discussed in turn.

While the relevance of cognitive shortcuts for scenario plausibility shows through at several points in the theoretical exploration, three aspects stand out. The first is the relevance of cognitive styles, meaning the way individuals approach data, how thoroughly they process it, and with what urgency they seek a definite solution. Previous studies have found that such decisiveness is significantly related to the plausibility judgments of students (Lombardi & Sinatra 2013). Observations in this study support these theoretical assumptions and second the empirical research from educational studies. Bivariate analyses and logistic regressions reveal several moderate to strong, negative relationships between scenario plausibility judgments and participants’ need for cognitive closure. Specifically, participants who disagreed with the statement ‘In general, I avoid participating in discussions over inconclusive and controversial topics’ (NCC item 15) ranked the CIB-related scenarios as rather plausible. An explanation for this finding can be that judging scenarios as more plausible means for participants to seriously consider one or multiple scenarios for further discussion; they do not feel the need to discredit or even ignore them as implausible. This is also in line with previous empirical studies. Kruglanski et al (1993), for instance, found individuals with a higher need for cognitive closure to be more resistant to persuasive messages than their lower-need counterparts. This is particularly interesting in the context of scenarios, as it may mean that prospective scenario users make different assessments of scenarios when under pressure to finalise a decision. A second relevant cognitive mechanism has been discussed in the context of reading new information.
While it is generally accepted among cognitive and educational psychologists that plausibility judgments are made in the course of reading processes (Black et al 1986; Richter & Schmid 2010), different mechanisms are assumed to be at play: Narrative theory holds that plausibility is positively influenced if the reading material follows a canonical sequence of events (Canter et al 2003; Labov 1972). Others relate plausibility to the interest in and motivation for reading a given material. Such an effect is also incorporated in this map. Findings show moderately positive relationships with the plausibility judgments of all three CIB-related scenarios. This is not only in line with scenario research; Ramírez et al (2015) have argued that the purpose of scenario activities is to produce ‘interesting research’, because it leads to higher productivity and resonance with involved stakeholders. It also supports research on motivated reasoning (Kunda 1990), which holds that individuals investigate material more thoroughly when they are driven by an intrinsic motivation. In a similar vein, Nahari et al (2010) have demonstrated that individuals with high levels of absorption, i.e. with the capacity to fully engage their cognitive and perceptual resources while reading, also tend to judge statements as more plausible than individuals with lower dispositions of absorption. As a third, explicitly tested cognitive shortcut, individuals’ background knowledge is incorporated. An analysis of participants’ background knowledge on energy issues reveals that those participants with less background knowledge tended to judge the plausibility of scenarios in extremes (very plausible). In line with research on cognitive heuristics (Lee 2001), this allows for the interpretation that participants with less background knowledge applied more shortcuts when asked to make judgments. Interestingly, the workings of such heuristics are evident only for the CIB-related scenarios, not for the IL-based scenarios. Furthermore, the effects were strongest for judgments of the CIB set, not for the individual CIB scenarios. Cognitive heuristics thus seem to be particularly strong in complex scenarios. CIB-related scenarios were perceived as more complex by participants and, from an analysis point of view, included more information in different forms, such as a matrix, numbers, network diagrams and explanatory text. This complexity and amount of information may explain why cognitive anchors, such as patterns in dealing with controversial material or one’s own interest in the scenario, were more strongly at play for the CIB scenarios. Finally, as mentioned earlier, the relevance of cognitive heuristics is supported by other indicators as well. The avoidance of cognitive dissonance, as reflected in indicator 4, but also anchoring effects in the way components of scenarios are interpreted as causalities, underline the potential power of cognitive heuristics in plausibility judgments.
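The kind of bivariate relationship reported for this indicator – e.g. the negative association between need-for-cognitive-closure scores and plausibility ratings – can be illustrated with a simple rank correlation. The numbers below are invented purely for illustration and do not reproduce the study’s data.

```python
def rank(values):
    """Assign 1-based ranks; tied values receive their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho as the Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: higher need-for-closure scores going along with lower
# plausibility ratings of a scenario (a negative monotone relationship).
ncc_scores = [2, 3, 3, 4, 5, 5, 6, 6]    # hypothetical NCC scores
plausibility = [5, 5, 4, 4, 3, 2, 2, 1]  # hypothetical plausibility ratings
print(f"Spearman's rho: {spearman(ncc_scores, plausibility):.2f}")
```

On these invented numbers the coefficient comes out strongly negative, which is the pattern such a bivariate analysis would report; the study’s actual estimator and effect sizes are not claimed here.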
The indicators point to important repercussions for the contexts of scenario planning. On the one hand, current scenario practice has the potential to even reinforce the power of cognitive heuristics. As has been demonstrated in this book, scenario users are often not involved in, and therefore not familiar with, scenario development processes and their methods. This increases the chances that scenarios are perceived as rather complex and detached from users’ own perspectives, which in turn can increase the power of cognitive heuristics in plausibility judgments. On the other hand, with cognitive heuristics at play, the indicator demonstrates that parallels between plausibility and probability judgments do exist. Such parallels have been more openly discussed in previous plausibility models. While the map has demonstrated that plausibility is not solely based on heuristics and may encompass more explanatory factors, connections to probability nevertheless constitute a key finding of this research.

9 Conclusions and outlook

9.1 Implications for research and practice

The key objective of this research has been to enhance conceptual understandings of scenario plausibility judgments from the perspective of scenario users. Despite the focus on one construct within scenario practice, the findings hold some practical implications for how scenario processes are currently conducted and offer critical reflections on prevalent scenario research directions. When considering the life path of scenarios (the phases of development, assessment and usage), the findings illustrate why and how scenarios are accepted or rejected at the very end of this life path by prospective scenario users. To put it more bluntly, one could say the research sheds light on whether resource-intensive and often publicly funded scenarios are simply discarded or seriously considered by their target audience. Certainly, jumping to hasty conclusions about scenarios’ overall effectiveness is disproportionate given the scope of this study. However, the findings do depict scenarios as anything but an unequivocal, ‘sure-fire success’ planning tool. There is a lot going on at the later stages of the life path that determines the effectiveness of scenario exercises and that has not been systematically explored in the extant scholarly literature. While thorough and scientifically sound development processes rightfully present the backbone of many scenario studies, they do not per se guarantee an uptake of the scenarios. Interests and resources in scenario planning are currently allocated mostly towards a constant methodological improvement of scenario methods, with only limited effort put into critical, evaluative research. This also includes the few efforts taken towards determining plausibility indicators from scholars’ perspectives. The findings of this book demonstrate that such scholarly-derived factors, e.g. internal consistency or narrative richness, play a role in users’ plausibility judgments but utterly fail in accounting for the different rationales that are at play when individuals assess given scenarios. This leads to two practical implications. First, as has been demonstrated at the outset of this book, scenario users are mostly unexplored territory in scenario planning. This is often due to the preoccupation of both practitioners and researchers with the actual development process of scenarios; only those users who are directly involved in this process are considered. The book makes clear that more systematic inquiries need to be directed towards scenario users who are not directly involved but are still targeted by the scenarios. Investigating their expectations, beliefs and cognitive assessment styles is worthwhile for any successful scenario activity. Critics who may consider this an impossible task are reminded of the small but growing number of studies that investigate the same for scenario developers (Franco et al 2013; Hodgkinson & Healey 2008). Second, when talking about ‘effectiveness’, scenario studies and research analyses are often rather vague regarding the ultimate purposes of scenario activities. As has been mapped out, at the individual scenario user’s level, objectives are most often associated with some sort of learning, conceptual change, or at least with exposure to new and challenging assumptions. The findings severely question this potential of scenarios and thereby challenge some widely held assumptions of scenario planners. Bradfield (2008), too, expressed doubts based on his analysis of scenario developers’ learning potential. From all that can be derived from this research, ‘learning’ in the form of stretching an individual’s mental model beyond what s/he believes about the world is very difficult, because challenging scenarios tend to be disregarded and only have a chance of consideration if the scenario source and method are considered highly credible.
Certainly, there exists potential for scenario developers and scholars to exploit such indicators and actively work towards high plausibility judgments, for instance, by promoting scenarios from very trustworthy sources and methods, by integrating certain cognitive ‘hooks’, or by carefully combining narrative storylines with some credible numerical evidence. Inspiration from cognitive psychology is abundantly available here; the stimulation of ‘peripheral cues’ such as message credibility and persuasiveness in the Elaboration Likelihood Model of Persuasion (ELM) (Petty & Cacioppo 1986) is just one example. CIB scholars, for instance, can build on the method’s perceived credibility to push towards high plausibility judgments. At the same time, they need to clarify what it is that their scenarios should achieve: Is it to convince users with the appearance of scenarios, or to seriously question users’ conceptions of the future? Scholars have also made clear that contextual trust in scenarios is not only difficult to obtain or maintain (Selin 2006); it also brings about sensitive questions about the power structures between those who tell the scenarios and those who are supposed to simply trust them (Nordmann 2014; Scheele et al 2018).

On a much more fundamental level, the research presented in this book also raises the question of whether plausibility constitutes an appropriate assessment criterion for scenarios. Findings imply that a differentiation is needed between plausibility as an ‘effectiveness criterion’ – as has been propagated by so many scenario reviews – and plausibility as a construct to understand scenario users’ reactions to given sets of scenarios. As for the former, the findings question the appropriateness of plausibility as a function of scenario effectiveness. According to the proposed conceptual map, a scenario is plausible if high credibility perceptions of the scenario itself and/or its methods exist and if the scenario matches users’ own ideas about the subject matter; plausibility is furthermore dependent on the activation of several commonly known cognitive heuristics. To consider such a compound as a condition for a scenario’s success could run counter to the enormous efforts put into the development of scenarios. At the same time, such an amalgamation of plausibility indicators may also exist precisely because the scenario planning literature has provided very little guidance on how to interpret scenarios (which has been the starting point of this book). One consequence can be to differentiate and flesh out quality criteria for scenarios that build on the empirical findings of this and other studies and are more user-oriented. This would correspond with the research demands of several scholars in recent years (Betz 2010; Kunkel et al 2016; Trutnevyte et al 2016).
The scenario research community is therefore well advised to continue researching plausibility judgments in order to understand individuals’ assessments of scenarios. Theoretical and empirical findings have pictured scenarios as rather vulnerable to users’ plausibility judgments. At the same time, judgments do not appear arbitrary or random. On the contrary, the conceptual map reveals clear judgment patterns that are distinct and cannot simply be represented by other concepts such as credibility, believability or the internal consistency of the scenario. Selin (2011b:241), for instance, has aptly argued that instead of dealing with plausibility as a fixed quality criterion, it can function as “a worthwhile sparring partner that brings up interesting […] questions around evidence, trust, science and culture and decision-making”. Hence, much can be learnt from further conceptual and empirical evaluations of scenario user judgments, but also from other systematic bodies of research into human judgment mechanisms, e.g. research on risk perception and probability assessment. One key finding in this respect is that the almost compulsive isolation of the scenario planning community from questions of probability assessment has not been worthwhile. On the one hand, the scenario community can learn from the elaborate research designs used to analyse human judgments; on the other, the findings of this book point towards interesting parallels between probability and plausibility judgments that have to date been categorically dismissed.¹ The finding that scenarios are judged implausible and too far-fetched when too far from users’ own beliefs and experiences, but at the same time are also judged implausible when perceived as too simplistic or deterministic, allows for the hypothesis that plausibility judgments may be represented by the medium density function of probability. Scenario research and practice should, therefore, look more openly towards other disciplinary perspectives when engaging in micro-perspective analyses of certain phases of the scenario planning process.

In conclusion, scenarios are currently very much in vogue, both in organisational and public policy contexts. Simply developing a set of scenarios is often considered a major step in the direction of change (whatever ‘change’ means in particular circumstances). The book has pointed out that both researchers and practitioners need to be more aware of the potential limitations, particularly towards the later stages of the scenarios’ life paths. Such difficulties, like the plausibility judgment patterns of scenario users, need to come to the forefront of discussions and cannot be swept away by more research that seeks only to enhance scenario development methods.
Scenario planning scholars could look towards risk perception research for some inspiration. Here, Slovic et al (1982) have argued that the main purpose of their research has been to i) explore means to study risk perception, ii) understand its mechanisms, and iii) improve the communication and governance of risk. Such prioritisation of research agendas can be worthwhile for scenario planning.

¹ The research by Ramírez & Selin (2014) is the noteworthy exception in the scenario literature. However, their work focusses on the etymological differences and parallels between plausibility and probability and does not address human perception mechanisms.


9.2 Critical review of the research process

With the exploration of scenario users’ plausibility judgments, this book has entered new territory; one core intention, therefore, is to initiate further research in this field. A critical review of the study’s limitations is essential for other scholars to contextualise the findings, be aware of shortcomings and further develop research agendas.

The explorative nature of the research

The extant literature on scenario plausibility has offered limited starting points for conceptual and empirical investigations. No previous conceptual framework for scenario plausibility existed. Therefore, premises found in the scenario literature served as explorative starting points, namely references to informal logic, narrative presentation and cognitive capabilities. Although such conceptual links were explored systematically by consulting pertinent literatures in the respective fields, the book is not meant to constitute an exhaustive discussion of all relevant concepts of plausibility, whether within or beyond those fields. This does not rule out that other academic sub-disciplines, e.g. computational linguistics or notions from artificial intelligence research, can offer further interesting entry points.

The experimental design

As has been discussed at the outset of this book, the scenario literature lacks typologies or even thorough investigations of who scenario users are. Thus, the literature did not predefine a clear group of participants for the experimental study. The study therefore established a preliminary classification of users and focused on ‘third-order users’, i.e. informed citizens or stakeholders who were not involved in the scenario development process. In the experiment, users were represented by master-level students. Although the students had a general background in the overall subject matter and came from different academic disciplines, the sample still presents a rather homogeneous group in terms of age, overall level of education and cultural background.
Measures were taken to ensure validity and the replicability of real-world settings (the scenario methods used, the format of scenario presentation). Still, like the various psychological studies on human judgment under uncertainty or on risk perception, this study cannot fully refute the criticism that participants would have reacted differently under different circumstances.


The conundrum of exploring the unexplored

One of the main issues of this research has been to better understand what goes into an individual’s plausibility assessment of a given scenario. This data was collected without being able to provide study participants with a clear-cut definition of what plausibility means, because no clear definition is available in scenario planning (hence the relevance of this research). Given this challenge, the study followed previous experimental research by Lombardi et al (2015) and provided a working definition of plausibility, adjusted to the context of scenario planning. The effects the definition may have had on participants’ responses therefore need to be critically reflected in future research.

The relevance of plausibility as judgment criterion

Like any research investigating particular constructs, this study also needs to face the potential criticism that ‘plausibility’ as a basis for scenario judgment is merely a scholarly construct, i.e. that in reality, scenario users actually refer to different principles. Certainly, it cannot be ruled out that scenario users may also use other concepts for making assessments, perhaps even probability. Plausibility has been widely popularised as the assessment criterion – not only in scenario research but also in practice. Its omnipresence in human judgment is further supported by other disciplines, from philosophy to narrative theory and cognitive psychology. The study is no evidence for the real-world presence of the concept per se. Rather, it presents a critical reflection in that it analyses what the plausibility judgments of individual scenario users entail, and what it would mean for scenario planning if plausibility were used as an assessment criterion.

9.3 Suggestions for further research

This book has provided a first extensive approach to conceptualise and empirically study scenario plausibility. Research into scenario plausibility is still in its infancy and is further complicated by unresolved questions about scenario users, scenario studies’ purposes and criteria for scenario effectiveness. In consideration of its limitations and strengths, the study points to some concrete pathways to make empirical and conceptual progress in scenario planning research and practice.


Suggestions for empirical research

The key outcomes of this study have been synthesised in a conceptual map featuring five indicators for scenario plausibility. Although all indicators have been derived from qualitative and quantitative data analyses and have been contextualised using extant theoretical concepts, further research should investigate the proposed indicators more deeply and develop new research propositions to enhance or modify the map. One option is to rerun the experimental study with the variables in different orders and scales, using larger sample sizes. This way, the strength and nature of the relationships between plausibility and the proposed indicators can be further scrutinised. For example, most of the explored theoretical concepts, but also the most powerful regression models within the empirical findings, maintain that a scenario’s perceived trustworthiness positively contributes to scenario plausibility. However, a reverse relationship (plausibility as a precondition for perceived trustworthiness), as suggested by Nahari et al (2010), cannot be ruled out. Another interesting angle is to examine whether any of the proposed indicators exercise an overriding power over other factors. For instance, qualitative findings show that individuals’ high affinity with the scenario format, e.g. its scholarly and professional appearance, can be decisive for a scenario’s plausibility, even if the scenario runs counter to a participant’s own beliefs. Further research into such dynamic relationships could work towards a plausibility index that provides very practical guidance on the relative importance of the different indicators. Further empirical analyses could also target in more detail different scenario formats and their effects on plausibility. The book exemplified scenario studies using two methods (IL and CIB) that are quite distinct from one another.
Future research could investigate other scenario methods that feature narrative and non-narrative aspects in a more balanced way, to expand the applicability of the conceptual map. A replication of the experimental design with different sets of participants is also worthwhile. When using decision-makers or stakeholders who are regularly involved in the scenario studies’ subject matter (e.g. energy transformations), it will be interesting to see whether those participants’ judgments are even more strongly based on previous beliefs and conceptions, because they have more at stake in the presented subjects compared to students. It is certainly imaginable that energy experts or stakeholders with a focus on centralised energy production and distribution may have strong reservations against scenarios that picture a future of city- or even neighbourhood-based energy solutions. In this respect, a diversification of the subject matter of scenarios can also be expedient, to see whether scenarios in high-stake versus low-stake contexts are subject to different kinds of plausibility judgment mechanisms. Finally, the presented conceptual map is limited to the plausibility judgments of individual scenario users and does not capture group interaction or possible dynamics of groupthink. Further empirical studies could design experimental set-ups that allow for discussions among participants and account for their effects on plausibility judgments. This way, the present research on plausibility can reach the attention of scenario scholars and practitioners who advocate scenario planning mostly as a means for organisational communication and networking (Lang & Ramírez 2015).

Suggestions for conceptual research

The conceptual map is based on theoretical and laboratory explorations. Although the internal validity of the study has been ensured and its external validity thoroughly reflected upon, it cannot be taken for granted that the findings will be taken at face value by scenario practitioners and scholars. Thus, for this scenario plausibility map to achieve practical relevance, the map and its indicators need to be discussed and reflected upon with stakeholders and practitioners who work with scenarios on a regular basis. Here, the different scenario user groups, i.e. the preliminary typology of first-, second- and third-order users developed in this book, can be tested and empirically enhanced. In-depth interviews with different users, as well as participatory observation studies, have the potential to enhance the legitimacy of the present findings in the community and to triangulate findings from other experimental studies. Within the scholarly community, the findings can help to initiate critical debates about plausibility as a judgment criterion in scenario processes.
Workshop formats could inquire whether plausibility is tenable as a normative principle for scenario ‘effectiveness’, and how the close entanglement between plausibility and other concepts such as credibility and probability paves the way for plausibility as a descriptive inquiry into the ‘effect’ – not effectiveness – of scenarios. In this context, the role of implausibility could be further investigated. During scenario development processes, a clear distinction is made between plausible scenarios (those that end up in the final set) and implausible ones (those that do not). From a human judgment perspective, however, plausibility presents a continuum. Research should further explore how moving between nuances of plausibility may be a fruitful way of engaging with scenarios. Indeed, qualitative findings revealed that less plausible scenarios were not always disregarded per se, but were considered as ‘food for thought’ to develop alternative pathways into the future. Lastly, the presented conceptual map addresses plausibility judgments by scenario users in the later phases of a scenario’s life path. However, a central premise of this study has been that plausibility also plays an important role in scenario development. Plausibility notions from the perspective of scenario development, its involved stakeholders and methods have been derived from the pertinent scenario literature, but here, too, there is room for more theoretical exploration and conceptual enhancement. Future research, for instance, could explore how plausibility is understood and operationalised in modelling disciplines or economics. This can provide insights into whether the manufactured plausibility notions of development processes may or may not contradict the perceptions and judgment patterns of scenario users. In sum, the interlinkage between conceptual questions of scenario plausibility and fundamental questions of scenario usage makes clear that the former should not, and in fact cannot, remain a niche endeavour. Instead, plausibility researchers can join forces with other scenario scholars who have – on a more general level and often without empirical evidence – critically reflected on the potential of scenario planning as a foresight and management tool.


Abbreviations

CIB: Cross-Impact Balance Analysis
EEA: European Environment Agency
ELM: Elaboration Likelihood Model
IL: Intuitive Logics
IRGC: International Risk Governance Council
IPCC: Intergovernmental Panel on Climate Change
NCC: Need for Cognitive Closure
PAM: Plausibility Analysis Model
PJCC: Plausibility Judgment and Conceptual Change Framework
PRE: Proportional reduction in error (bivariate correlation analysis)
RSS: Ricarda Schmidt-Scheele
SRES: Special Report on Emission Scenarios by IPCC
STS: Science and Technology Studies

Acknowledgments

The research presented in this book was funded by the Cluster of Excellence Simulation Technology (German Research Foundation, EXC310/2) at the University of Stuttgart. A number of people greatly supported me during the research process. I would like to thank my advisor Ortwin Renn for his immense knowledge and helpful comments throughout the research and writing process. I also thank Wolfgang Weimer-Jehle who continuously accompanied me along this journey with his knowledge, patience and motivation. My sincere thanks go to Cynthia Selin and Rafael Ramírez for hosting me at Arizona State University and the University of Oxford, and for getting me involved in the Oxford Scenarios Programme. My research greatly benefitted from their guidance and insights on scenario planning. At the University of Oxford, I am thankful to John Fresen for getting me acquainted with statistical computing using R. Finally, there are three people who supported me beyond conducting this research. My parents Ulrich and Ulrike and my husband Alexej have provided me with confidence, motivation and inspiration. I dedicate this book to them.

References

Abbott, H. P. (2002): The Cambridge Introduction to Narrative. Cambridge: Cambridge University Press.
Achinstein, P. (2005): Scientific Evidence: Philosophical Theories and Applications. Baltimore: Johns Hopkins University Press.
Adam, B. (2004): "Memory of Futures", KronoScope, 4(2): 297-315. https://doi.org/10.1163/1568524042801392
Agnolucci, P. (2007): "Hydrogen infrastructure for the transport sector", International Journal of Hydrogen Energy, 32(15): 3526-3544. https://doi.org/10.1016/j.ijhydene.2007.02.016
Ahlqvist, T., Rhisiart, M. (2015): "Emerging pathways for critical futures research: Changing contexts and impacts of social theory", Futures, 71: 91-104. https://doi.org/10.1016/j.futures.2015.07.012
Alcamo, J. (2008): "The SAS Approach: Combining Qualitative and Quantitative Knowledge in Environmental Scenarios", in: Developments in Integrated Environmental Assessment: Environmental Futures - The Practice of Environmental Scenario Analysis, edited by Alcamo, J., Elsevier, pp. 123-150. https://doi.org/10.1016/S1574-101X(08)00406-7
Allchin, D. (1999): "Do we see through the Microscope? Credibility as a vicarious selector", Philosophy of Science, 66: 287-298.
Amer, M., Daim, T., Jetter, A. (2013): "A review of scenario planning", Futures, 46: 23-40.
Andersen, U., Woyke, W. (2013): Handwörterbuch des politischen Systems der Bundesrepublik Deutschland. Vol. 7. Heidelberg: Springer VS. https://doi.org/10.1007/978-3-531-19072-3
Anton, J. (1990): "Book Review: Majone, G. Evidence, Argument & Persuasion in the Policy Process", Policy Sciences, 23: 177-182.


Appelrath, H.-J., Dieckhoff, C., Fischedick, M., Grunwald, A., Höffler, F., Mayer, C., Weimer-Jehle, W. (2016): "Consulting with energy scenarios. Requirements for scientific policy advice", Position paper, March 2016, München: Acatech.
Arizona State University (2009): "Plausibility Project Workshop", Consortium for Science, Policy & Outcomes, Arizona State University, available at https://cspo.org/research/plausibility-project-workshop/ (last accessed 24/02/2020).
Atteslander, P. (2008): Methoden der empirischen Sozialforschung. Berlin: Erich Schmidt.
Barthes, R. (1982): "Introduction to the Structural Analysis of Narrative", in: A Barthes Reader, edited by Sontag, S., New York: Hill & Wang, pp. 251-295.
Bateman, T. S., Zeithaml, C. P. (1989): "The psychological context of strategic decisions: a test of relevance to practitioners", Strategic Management Journal, 10: 587-592. https://doi.org/10.1002/smj.4250100606
Beck, S., Mahony, M. (2017): "The IPCC and the politics of anticipation", Nature Climate Change, 7(5): 311-313. https://doi.org/10.1038/nclimate3264
Beck, S., Mahony, M. (2018): "The politics of anticipation: the IPCC and the negative emissions technologies experience", Global Sustainability, 1: e8. https://doi.org/10.1017/sus.2018.7
Beck, U. (1992): The Risk Society: Toward a new Modernity. London: Sage.
Beckert, J. (2013): "Capitalism as a System of Expectations", Politics & Society, 41(3): 323-350. https://doi.org/10.1177/0032329213493750
Beckert, J. (2016): Imagined Futures: Expectations and capitalist dynamics. Cambridge: Harvard University Press. https://doi.org/10.4159/9780674545878
Bell, W. (1997): Foundations of Futures Studies: Human science for a new era. Volume 2: Values, objectivity, and the good society. New Brunswick, NJ: Transaction. https://doi.org/10.1016/S0024-6301(97)84590-6
Bennett, J. F. (1997): "Credibility, plausibility and autobiographical oral narrative: some suggestions from the analysis of a rape survivor's testimony", in: Culture Power and Difference: Discourse analysis in South Africa, edited by Levett, A., Kottler, A., Burman, E., Parker, I., London and New Jersey: Zed Books Ltd, pp. 96-108.
Benninghaus, H. (2007): Deskriptive Statistik: Eine Einführung für Sozialwissenschaftler. 11th ed. Wiesbaden: VS Verlag für Sozialwissenschaften.
Bergman, N., Haxeltine, A., Whitmarsh, L., Köhler, J., Schilperoord, M., Rotmans, J. (2008): "Modelling Socio-Technical Transition Patterns and Pathways", Journal of Artificial Societies and Social Simulation, 11(3): 1-32.
Betz, G. (2010): "What's the Worst Case? The Methodology of possibilistic Prediction", Analyse & Kritik, 01: 87-106. https://doi.org/10.1515/auk-2010-0105
Billig, M. (1987): Arguing and Thinking: A rhetorical approach to social psychology. Cambridge: Cambridge University Press.
Bishop, P., Hines, A., Collins, A. (2006): Thinking about the Future: Guidelines for Strategic Foresight. Washington DC: Social Technologies.
Black, A., Freeman, P., Johnson-Laird, P. N. (1986): "Plausibility and the comprehension of text", British Journal of Psychology, 77: 51-60. https://doi.org/10.1111/j.2044-8295.1986.tb01980.x
Blair, J. A. (2007): "The 'Logic' of Informal Logic", in: Dissensus and the Search for Common Ground, Proceedings of the seventh OSSA Conference, edited by Hansen, H. V., Tindale, C. W., Blair, J. A., Johnson, R. H., Godden, D. M., University of Windsor.
Boenink, M. (2013): "Anticipating the future of technology and society by way of (plausible) scenarios: fruitful, futile or fraught with danger?", International Journal for Foresight and Innovation Policy, 9(2/3/4): 148-161. https://doi.org/10.1504/IJFIP.2013.058608
Bolger, F., Wright, G. (2017): "Use of expert knowledge to anticipate the future: Issues, analysis and directions", International Journal of Forecasting, 33(1): 230-243. https://doi.org/10.1016/j.ijforecast.2016.11.001
Börjeson, L., Höjer, M., Dreborg, K.-H., Ekvall, T., Finnveden, G. (2006): "Scenario types and techniques: Towards a user's guide", Futures, 38: 723-739. https://doi.org/10.1016/j.futures.2005.12.002
Bosch, R. (2010): "Objectivity and Plausibility in the Study of Organizations", Journal of Management Inquiry, 19(4): 383-391. https://doi.org/10.1177/1056492610369936
Boswell, C., Geddes, A., Scholten, P. (2011): "The Role of Narratives in Migration Policy-Making: A Research Framework", The British Journal of Politics and International Relations, 13(1): 1-11. https://doi.org/10.1111/j.1467-856X.2010.00435.x
Bower, G. H., Morrow, D. G. (1990): "Mental models in narrative comprehension", Science, 247(4938): 44-48. https://doi.org/10.1126/science.2403694
Bowman, G., MacKay, R. B., Masrani, S., McKiernan, P. (2013): "Storytelling and the scenario process: Understanding success and failure", Technological Forecasting and Social Change, 80(4): 735-748. https://doi.org/10.1016/j.techfore.2012.04.009
Bradfield, R., Wright, G., Burt, G., Cairns, G., van der Heijden, K. (2005): "The origins and evolution of scenario techniques in long range business planning", Futures, 37: 795-812. https://doi.org/10.1016/j.futures.2005.01.003
Bradfield, R. M. (2008): "Cognitive Barriers in the Scenario Development Process", Advances in Developing Human Resources, 10(2): 198-215. https://doi.org/10.1177/1523422307313320
Braunreiter, L., Wemyss, D., Kobe, C., Müller, A., Krause, T., Blumer, Y. (2016): "Understanding the Role of Scenarios in Swiss Energy Research", SML Working Paper No. 13, Zurich: School of Management and Law, Zurich University of Applied Sciences.
Brockmeier, J., Harré, R. (2001): "Narrative: Problems and promises of an alternative paradigm", in: Narrative and Identity: Studies in Autobiography, Self and Culture, edited by Brockmeier, J., Carbaugh, D., Amsterdam/Philadelphia: John Benjamins Publishing Company, pp. 39-58. https://doi.org/10.1075/sin.1.04bro
Brown, J. S., Duguid, P. (1991): "Organizational learning and communities of practice", Organization Science, 2(1): 40-57. https://doi.org/10.1287/orsc.2.1.40
Brown, N., Rappert, B., Webster, A. (2000): Contested Futures. A sociology of prospective techno-science. Burlington: Ashgate.
Bruun, H., Hukkinen, J., Eklund, E. (2002): "Scenarios for coping with contingency: The case of aquaculture in the Finnish Archipelago Sea", Technological Forecasting and Social Change, 69: 107-127. https://doi.org/10.1016/S0040-1625(01)00134-2
Bryant, B. P., Lempert, R. J. (2010): "Thinking inside the box: A participatory, computer-assisted approach to scenario discovery", Technological Forecasting and Social Change, 77(1): 34-49.
Callon, M. (1986): "Some elements of a sociology of translation: Domestication of the scallops and the fisherman of St. Brieuc Bay", in: Power, action and belief: A new sociology of knowledge?, edited by Law, J., London: Routledge, pp. 196-223.
Cambridge Dictionary (2020): "Scenario in English", Cambridge University Press, available at https://dictionary.cambridge.org/dictionary/english/scenario (last accessed 24/02/2020).


Canter, D. V., Grieve, N., Nicol, C., Benneworth, K. (2003): "Narrative plausibility: the impact of sequence and anchoring", Behavioral Sciences and the Law, 21(2): 251-267. https://doi.org/10.1002/bsl.528
Carbonell, J., Sánchez-Esguevillas, A., Carro, B. (2017): "From data analysis to storytelling in scenario building. A semiotic approach to purpose-dependent writing of stories", Futures, 88: 15-29.
Carlsen, H., Klein, R. J. T., Wikman-Svahn, P. (2017): "Transparent scenario development", Nature Climate Change, 7(9): 613. https://doi.org/10.1038/nclimate3379
Caron, J. (1992): An introduction to psycholinguistics. Exeter: BPCC Wheatons.
Chabay, I. (2015): "Narratives for a Sustainable Future: Vision and Motivation for Collective Action", in: Global Sustainability. Cultural Perspectives and Challenges for Transdisciplinary Integrated Research, edited by Werlen, B., Cham: Springer International Publishing, pp. 51-61. https://doi.org/10.1007/978-3-319-16477-9_3
Chaiken, S., Maheswaran, D. (1994): "Heuristic Processing can bias Systematic Processing: Effects of Source Credibility, Argument Ambiguity, and Task Importance on Attitude Judgment", Journal of Personality and Social Psychology, 66(3): 460-473. https://doi.org/10.1037/0022-3514.66.3.460
Chermack, T. J. (2004): "Improving decision making with scenario planning", Futures, 36: 295-309. https://doi.org/10.1016/S0016-3287(03)00156-3
Chermack, T. J. (2005): "Studying scenario planning: Theory, research suggestions, and hypotheses", Technological Forecasting and Social Change, 72(1): 59-73. https://doi.org/10.1016/S0040-1625(03)00137-9
Chinn, C. A., Brewer, W. F. (2001): "Models of Data: A Theory of How People Evaluate Data", Cognition and Instruction, 19(3): 323-393. https://doi.org/10.1207/S1532690XCI1903_3
Chinn, C. A., Brewer, W. F. (1993): "The Role of Anomalous Data in Knowledge Acquisition: A Theoretical Framework and Implications for Science Instruction", Review of Educational Research, 63(1): 1-49. https://doi.org/10.3102/00346543063001001
Chung, S., Fink, E., Kaplowitz, S. (2008): "The Comparative Statics and Dynamics of Beliefs: The Effect of Message Discrepancy and Source Credibility", Communication Monographs, 75(2): 158-189. https://doi.org/10.1080/03637750802082060
Cin, S. D., Zanna, M. P., Fong, G. T. (2004): "Narrative Persuasion and Overcoming Resistance", in: Resistance and Persuasion, edited by Knowles, E., Linn, J., Mahwah/New Jersey/London: Lawrence Erlbaum Associates, pp. 175-192.
Collins, A., Michalski, R. (1989): "The Logic of plausible reasoning: A core theory", Cognitive Science, 13: 1-49. https://doi.org/10.1207/s15516709cog1301_1
Colonomos, A. (2016): Selling the Future. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190603649.001.0001
Connell, L., Keane, M. T. (2004): "What plausibly affects plausibility? Concept coherence and distributional word coherence as factors influencing plausibility judgments", Memory & Cognition, 32(2): 185-197. https://doi.org/10.3758/BF03196851
Connell, L., Keane, M. T. (2006): "A model of plausibility", Cognitive Science, 30: 95-120. https://doi.org/10.1207/s15516709cog0000_53
Connelly, M. F., Clandinin, J. D. (1990): "Stories of Experience and Narrative Inquiry", Educational Researcher, 19(5): 2-14. https://doi.org/10.3102/0013189X019005002
Cook, T. D., Campbell, D. T. (1976): "The design and conduct of true experiments and quasi-experiments in field settings", in: Handbook of Industrial and Organizational Psychology, edited by Dunnette, M. D., Rand McNally & Co, pp. 223-326.
Cooper, J. (2007): Cognitive Dissonance: 50 years of a classic theory. Los Angeles/London/New Delhi/Singapore: Sage.
Crupi, V. (2016): "Confirmation", Stanford Encyclopedia of Philosophy Archive, Winter 2016 Edition, available at: https://plato.stanford.edu/archives/win2016/entries/confirmation/ (last accessed 24/02/2020).
Crupi, V., Tentori, K. (2016): "Confirmation Theory", in: Oxford Handbook of Philosophy and Probability, edited by Hájek, A., Hitchcock, C., Oxford: Oxford University Press, pp. 50-665. https://doi.org/10.1093/oxfordhb/9780199607617.013.33
Cuhls, K. (2003): "From Forecasting to Foresight Processes - New Participative Foresight Activities in Germany", Journal of Forecasting, 22: 93-111. https://doi.org/10.1002/for.848
Cunha, M., Palma, P., da Costa, N. (2006): "Fear of foresight: Knowledge and ignorance in organizational foresight", Futures, 38(3): 942-955. https://doi.org/10.1016/j.futures.2005.12.015
Dahlstrom, M. F. (2010): "The Role of Causality in Information Acceptance in Narratives: An Example From Science Communication", Communication Research, 37(6): 857-875.


Dansereau, D. F. (1985): "Learning strategy research", in: Thinking and learning skills. Vol. 1: Relating instruction to research, edited by Segal, J. W., Chipman, S. F., Glaser, R., Hillsdale, NJ: Lawrence Erlbaum Associates Inc., pp. 209-239.
de Wit, J. B. F., Das, E., Vet, R. (2008): "What works best: Objective statistics or a personal testimonial? An assessment of the persuasive effects of different types of message evidence on risk perception", Health Psychology, 27(1): 110-115. https://doi.org/10.1037/0278-6133.27.1.110
deHaven-Smith, L. (1990): "Review of Evidence, Argument and Persuasion in the Policy Process by G. Majone", Journal of Politics, August 1990: 672-674. https://doi.org/10.2307/2131923
Dieckhoff, C. (2015): Modellierte Zukunft – Energieszenarien in der wissenschaftlichen Politikberatung. Bielefeld: Transcript Verlag. https://doi.org/10.14361/9783839430972
Dole, J. A., Sinatra, G. M. (1998): "Reconceptualising Change in the Cognitive Construction of Knowledge", Educational Psychologist, 33(2/3): 109-128. https://doi.org/10.1080/00461520.1998.9653294
Dreborg, K. H. (2004): "Scenarios and Structural Uncertainty", Stockholm: Department of Infrastructure, Royal Institute of Technology.
Druckman, J. N., Green, D. P., Kuklinski, J. H., Lupia, A. (2011): Cambridge Handbook of Experimental Political Science. Cambridge: Cambridge University Press.
Dryzek, J. (1993): "Policy Analysis and Planning: From Science to Argument", in: The Argumentative Turn in Policy Analysis and Planning, edited by Fischer, F., Forester, J., London: Duke University Press, pp. 213-232. https://doi.org/10.1215/9780822381815-010
Dubin, R. (1978): Theory Building. revised ed. New York: Free Press/Macmillan.
Dufva, M., Ahlqvist, T. (2015): "Knowledge creation dynamics in foresight: A knowledge typology and exploratory method to analyse foresight workshops", Technological Forecasting and Social Change, 94: 251-268. https://doi.org/10.1016/j.techfore.2014.10.007
Durkin, S., Wakefield, M. (2008): "Interrupting a narrative transportation experience: Program placement effects on responses to antismoking advertising", Journal of Health Communication, 13: 667-680. https://doi.org/10.1080/10810730802412248
Earle, T., Cvetkovich, G. (1995): Social Trust: Toward a Cosmopolitan Society. Westport, Connecticut: Praeger.


Eder, K. (2006): "Europe's Borders: The Narrative Construction of the Boundaries of Europe", European Journal of Social Theory, 9(2): 255-271. https://doi.org/10.1177/1368431006063345
EEA (2009): "Looking back on looking forward: a review of evaluative scenario literature", EEA Technical Report No 3/2009, Copenhagen: European Environment Agency.
Egner, T., Ely, S., Grinband, J. (2010): "Going, going, gone: characterizing the time-course of congruency sequence effects", Frontiers in Psychology, 1(154): 1-8. https://doi.org/10.3389/fpsyg.2010.00154
Eidinow, E., Ramírez, R. (2016): "The aesthetics of story-telling as a technology of the plausible", Futures, 84: 43-49. https://doi.org/10.1016/j.futures.2016.09.005
Elzen, B., Geels, F. W., Hofman, P. S. (2002): "Sociotechnical Scenarios (STSc): Development and evaluation of a new methodology to explore transitions towards a sustainable energy supply", Report for NWO/NOVEM, Enschede: University of Twente.
Enserink, B., Kwakkel, J., Veenmann, S. (2013): "Coping with uncertainty in climate policy making: (Mis)understanding scenario studies", Futures, 53: 1-12. https://doi.org/10.1016/j.futures.2013.09.006
Evans, J. S. B., Stanovich, K. E. (2013): "Dual-process theories of higher cognition: Advancing the debate", Perspectives on Psychological Science, 8: 223-241. https://doi.org/10.1177/1745691612460685
Fahey, L., Randall, R. M. (1998): Learning from the Future: Competitive Foresight Scenarios. New York: Wiley.
Ferry, E. (2016): "Claiming Futures", Journal of the Royal Anthropological Institute, 22(S1): 181-188. https://doi.org/10.1111/1467-9655.12400
Festinger, L. (1957): The theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Fischer, F., Forester, J. (1993): The Argumentative Turn in Policy Analysis and Planning. London: Duke University Press. https://doi.org/10.1215/9780822381815
Flowers, B., Kupers, R., Mangalagiu, D., Ravetz, J., Ramírez, R., Selsky, J., Wasden, C., Wilkinson, A. (2009): "The Oxford Scenarios: Beyond the financial crisis", Oxford: Institute for Science, Innovation and Society, University of Oxford.
Franco, L. A., Meadows, M., Armstrong, S. J. (2013): "Exploring individual differences in scenario planning workshops: A cognitive style framework", Technological Forecasting and Social Change, 80(4): 723-734. https://doi.org/10.1016/j.techfore.2012.02.008
Fuchs-Heinritz, W., Lautmann, R., Rammstedt, O., Wienold, H. (1994): Lexikon zur Soziologie. 3rd ed. Opladen: Westdeutscher Verlag. https://doi.org/10.1007/978-3-322-91545-0
Fuller, T., Loogma, K. (2009): "Constructing futures: A social constructionist perspective on foresight methodology", Futures, 41(2): 71-79. https://doi.org/10.1016/j.futures.2008.07.039
Funtowicz, S., Ravetz, J. (1990): Uncertainty and Quality in Science for Policy. Dordrecht: Kluwer. https://doi.org/10.1007/978-94-009-0621-1
Gallego Carrera, D., Ruddat, M., Rothmund, S. (2013): "Gesellschaftliche Einflussfaktoren im Energiesektor - Empirische Befunde aus 45 Szenarioanalysen", Stuttgarter Beiträge zur Risiko- und Nachhaltigkeitsforschung Nr. 27, Stuttgart: Universität Stuttgart.
Gasper, D. (1996): "Analysing policy arguments", The European Journal of Development Research, 8(1): 36-62. https://doi.org/10.1080/09578819608426652
Gaßner, R., Kosow, H. (2008): "Methoden der Zukunfts- und Szenarioanalyse. Überblick, Bewertung und Auswahlkriterien", Werkstattbericht, Berlin: Institut für Zukunftsstudien und Technologiebewertung (IZT).
Gerhold, L., Holtmannsspötter, D., Neuhaus, C., Schüll, E., Schulz-Montag, B., Steinmüller, K., Zweck, A. (2015): Standards und Gütekriterien der Zukunftsforschung. Wiesbaden: Springer VS. https://doi.org/10.1007/978-3-658-07363-3
Gigerenzer, G. (2000): Adaptive thinking: Rationality in the real world. Oxford/New York: Oxford University Press.
Gilovich, T., Griffin, D. (2002): "Introduction – Heuristics and Biases: Then and Now", in: Heuristics and Biases: The Psychology of Intuitive Judgment, edited by Gilovich, T., Griffin, D., Kahneman, D., Cambridge: Cambridge University Press, pp. 1-18. https://doi.org/10.1017/CBO9780511808098.002
Gilovich, T., Griffin, D., Kahneman, D. (2002): Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511808098
Glenn, J. (2000): "The Futures Group International. Scenarios", in: Futures Research Methodology. Version 3. The Millennium Project, edited by Glenn, J., Gordon, T.


Glick, M. B., Chermack, T. J., Luckel, H., Gauck, B. Q. (2012): "Effects of scenario planning on participant mental models", European Journal of Training and Development, 36(5): 488-507. https://doi.org/10.1108/03090591211232066
Godet, M. (2000): "The Art of Scenarios and Strategic Planning: Tools and Pitfalls", Technological Forecasting and Social Change, 65: 3-22. https://doi.org/10.1016/S0040-1625(99)00120-1
Gong, M., Lempert, R., Parker, A., Mayer, L. A., Fischbach, J., Sisco, M., Mao, Z., Krantz, D. H., Kunreuther, H. (2017): "Testing the scenario hypothesis: An experimental comparison of scenarios and forecasts for decision support in a complex decision environment", Environmental Modelling & Software, 91: 135-155. https://doi.org/10.1016/j.envsoft.2017.02.002
Gräfe, A. (2009): "Prediction Markets versus Alternative Methods: Empirical Tests of Accuracy and Acceptability", Dissertation, Fakultät für Wirtschaftswissenschaften, Universität Karlsruhe (TH).
Green, M. C., Brock, T. C. (2000): "The role of transportation in the persuasiveness of public narratives", Journal of Personality and Social Psychology, 79(5): 701-721. https://doi.org/10.1037/0022-3514.79.5.701
Green, M. C., Brock, T. C. (2002): "In the mind's eye: Transportation-imagery model of narrative persuasion", in: Narrative impact: Social and cognitive foundations, edited by Green, M. C., Strange, J. J., Brock, T. C., Mahwah, NJ: Erlbaum, pp. 315-341.
Greenhalgh, T., Russell, J. (2006): "Reframing Evidence Synthesis As Rhetorical Action in the Policy Making Drama", Healthcare Policy, 1(2): 34-42. https://doi.org/10.12927/hcpol.2006.17873
Grunwald, A. (2011): "Der Lebensweg von Energieszenarien - Umrisse eines Forschungsprogramms", in: Energieszenarien: Konstruktion, Bewertung und Wirkung – "Anbieter" und "Nachfrager" im Dialog, edited by Dieckhoff, C., Fichtner, W., Grunwald, A., Meyer, S., Nast, N., Nierling, N., Renn, O., Voß, A., Wietschel, A., Karlsruhe: KIT Scientific Publishing, pp. 11-24.
Grunwald, A. (2015): "Argumentative Prüfbarkeit", in: Standards und Gütekriterien der Zukunftsforschung, edited by Gerhold, L., Holtmannsspötter, D., Neuhaus, C., Schüll, E., Schulz-Montag, B., Steinmüller, K., Zweck, A., Wiesbaden: Springer VS, pp. 40-51.
Grunwald, A., Schippl, J. (2013): "Forschung für die Energiewende 2.0: integrativ und transformativ", TaTuP, 22(2): 56-62. https://doi.org/10.14512/tatup.22.2.56


Guivarch, C., Schweizer, V. J., Rozenberg, J. (2013): "Enhancing the policy relevance of scenario studies through a dynamic analytical approach using a large number of scenarios", International Energy Workshop, 19-21.
Guston, D. H. (2014): "Understanding 'anticipatory governance'", Social Studies of Science, 44(2): 218-242. https://doi.org/10.1177/0306312713508669
Habermas, J. (1976): "Verwissenschaftlichte Politik und öffentliche Meinung", in: Technik und Wissenschaft als 'Ideologie', edited by Habermas, J., Frankfurt: Suhrkamp, pp. 120-145.
Hachmeister, C. D., Müller, U., Ziegele, F. (2016): "Zu viel Vielfalt? Warum die Ausdifferenzierung der Studiengänge kein Drama ist", Gütersloh: CHE gemeinnütziges Centrum für Hochschulentwicklung.
Hajer, M. A. (1995): The Politics of Environmental Discourse. Oxford: Clarendon Press.
Hajer, M. A. (2002): "Discourse Analysis and the Study of Policy Making", European Political Science, Autumn: 61-65. https://doi.org/10.1057/eps.2002.49
Han, S. H., Diekmann, J. E. (2001): "Making a risk-based bid decision for overseas construction projects", Construction Management and Economics, 19(8): 765-776. https://doi.org/10.1080/01446190110072860
Harries, C. (2003): "Correspondence to what? Coherence to what? What is good scenario-based decision making?", Technological Forecasting and Social Change, 70(8): 797-817. https://doi.org/10.1016/S0040-1625(03)00023-4
Hartshorne, J. (2014): "What is implicit causality?", Language, Cognition and Neuroscience, 29(7): 804-824. https://doi.org/10.1080/01690965.2013.796396
Haxeltine, A., Whitmarsh, L., Bergman, N., Rotmans, J., Schilperoord, M., Köhler, J. (2008): "A Conceptual Framework for transition modelling", International Journal of Innovation and Sustainable Development, 3(1-2): 93-114. https://doi.org/10.1504/IJISD.2008.018195
Hejazi, A. (2012): "Futures Metacognition: A Progressive Understanding of Futures Thinking", World Future Review, 4(2): 18-27. https://doi.org/10.1177/194675671200400205
Heller, J. (2011): Experimentelle Psychologie: Eine Einführung. München: Oldenbourg Verlag. https://doi.org/10.1524/9783486714258
Helmer, O. (1981): "Reassessment of Cross-Impact Analysis", Futures, 13: 389-400. https://doi.org/10.1016/0016-3287(81)90124-5


Herman, D. (2007): The Cambridge Companion to Narrative. Cambridge: Cambridge University Press. https://doi.org/10.1017/CCOL0521856965
Herman, D., Phelan, J., Rabinowitz, P., Richardson, B., Warhol, R. (2012): Narrative Theory: Core Concepts and Critical Debates. Columbus: Ohio State University Press.
Hilligoss, B., Rieh, S. Y. (2008): "Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction of context", Information Processing and Management, 44: 1467-1484. https://doi.org/10.1016/j.ipm.2007.10.001
Hinkel, J. (2008): "Transdisciplinary Knowledge Integration. Cases from Integrated Assessment and Vulnerability Assessment", Dissertation, Wageningen University.
Hodgkinson, G. P., Healey, M. P. (2008): "Toward a (Pragmatic) Science of Strategic Intervention: Design Propositions for Scenario Planning", Organization Studies, 29(3): 435-457. https://doi.org/10.1177/0170840607088022
Hogarth, R., Einhorn, H. (1992): "Order Effects in Belief Updating: The Belief-Adjustment Model", Cognitive Psychology, 24(1): 1-55. https://doi.org/10.1016/0010-0285(92)90002-J
Holbrook, A. (2011): "Attitude Change Experiments in Political Science", in: Cambridge Handbook of Experimental Political Science, edited by Druckman, J. N., Green, D. P., Kuklinski, J. H., Lupia, A., Cambridge: Cambridge University Press, pp. 252-279.
Hovland, C., Janis, I., Kelley, H. (1953): Communication and persuasion. New Haven, CT: Yale University Press.
Hughes, N., Strachan, N., Gross, R. (2013): "The structure of uncertainty in future low carbon pathways", Energy Policy, 52: 45-54. https://doi.org/10.1016/j.enpol.2012.04.028
Hulme, M., Dessai, S. (2008): "Negotiating future climates for public policy: a critical assessment of the development of climate scenarios for the UK", Environmental Science & Policy, 11: 54-70. https://doi.org/10.1016/j.envsci.2007.09.003
Humphreys, P. (1978): "Reviewed Work(s): Plausible Reasoning: An Introduction to the Theory and Practice of Plausibilistic Inference by Nicholas Rescher", The Journal of Symbolic Logic, 43(1): 159-160. https://doi.org/10.2307/2271978
Inayatullah, S. (1990): "Deconstructing and reconstructing the future: Predictive, cultural and critical epistemology", Futures, 22(2): 116-141. https://doi.org/10.1016/0016-3287(90)90077-U


Inayatullah, S. (1998): "Causal Layered Analysis: Poststructuralism as method", Futures, 30(8): 815-829. https://doi.org/10.1016/S0016-3287(98)00086-X
IRGC (2005): "Risk Governance: Towards an Integrative Approach", White Paper, Geneva: International Risk Governance Council.
Jasanoff, S., Kim, S. H. (2009): "Containing the Atom: Sociotechnical Imaginaries and Nuclear Power in the United States and South Korea", Minerva, 47(2): 119-146. https://doi.org/10.1007/s11024-009-9124-4
Jasanoff, S., Kim, S. H. (2013): "Sociotechnical Imaginaries and National Energy Policies", Science as Culture, 22(2): 189-196. https://doi.org/10.1080/09505431.2013.786990
Jaynes, E. T. (2003): Probability Theory: the Logic of Science. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511790423
Jensen, J., Hurley, R. (2010): "Conflicting stories about public scientific controversies: Effects of news convergence and divergence on scientists' credibility", Public Understanding of Science, 21(6): 689-704. https://doi.org/10.1177/0963662510387759
Jermias, J. (2001): "Cognitive dissonance and resistance to change: the influence of commitment confirmation and feedback on judgment usefulness of accounting systems", Accounting, Organizations and Society, 26: 141-160. https://doi.org/10.1016/S0361-3682(00)00008-8
Johnson, M. L., Sinatra, G. M. (2013): "Use of task-value instructional inductions for facilitating engagement and conceptual change", Contemporary Educational Psychology, 38: 51-63. https://doi.org/10.1016/j.cedpsych.2012.09.003
Johnson, R. H. (1999): "The relation between formal and informal logic", Argumentation, 13(3): 265-274. https://doi.org/10.1023/A:1007789101256
Johnson, R. H., Blair, J. A. (1987): "The current state of informal logic", Informal Logic, 9: 147-151. https://doi.org/10.22329/il.v9i2.2671
Johnson-Laird, P. N. (1983): Mental models. Cambridge: Cambridge University Press.
Jones, M. D., McBeth, M. K. (2010): "A Narrative Policy Framework: Clear Enough to be Wrong?", The Policy Studies Journal, 38(2): 329-353. https://doi.org/10.1111/j.1541-0072.2010.00364.x
Kahn, H., Wiener, A. J. (1972): "The Use of Scenarios", in: The Futurists, edited by Toffler, A., New York: Random House, pp. 160-163.
Kahn, H. (1984): Thinking about the Unthinkable in the 1980s. New York: Simon and Schuster.



Kahneman, D., Klein, G. (2009): “Conditions for intuitive expertise: a failure to disagree”, American Psychologist, 64(6): 515-526. https://doi.org/10.1037/a0016755
Kahneman, D., Tversky, A. (1984): “Choices, values, and frames”, American Psychologist, 39(4): 341-350. https://doi.org/10.1037/0003-066X.39.4.341
Kangas, A. S., Kangas, J. (2004): “Probability, possibility and evidence: approaches to consider risk and uncertainty in forestry decision analysis”, Forest Policy and Economics, 6(2): 169-188. https://doi.org/10.1016/S1389-9341(02)00083-7
Karlsen, J. E., Karlsen, H. (2007): “Expert groups as production units for shared knowledge in energy foresights”, Foresight, 9(1): 37-49. https://doi.org/10.1108/14636680710727534
Kashima, Y. (2000): “Maintaining Cultural Stereotypes in the Serial Reproduction of Narratives”, Personality and Social Psychology Bulletin, 26(5): 594-604. https://doi.org/10.1177/0146167200267007
Kashima, Y., Lyons, A., Clark, A. (2013): “The maintenance of cultural stereotypes in the conversational retelling of narratives”, Asian Journal of Social Psychology, 16(1): 60-70. https://doi.org/10.1111/ajsp.12004
Kelley, H. (1973): “The Process of Causal Attribution”, American Psychologist, February: 107-128. https://doi.org/10.1037/h0034225
Kemp-Benedict, E. (2012): “Telling better stories: strengthening the story in story and simulation”, Environmental Research Letters, 7(4): 041004. https://doi.org/10.1088/1748-9326/7/4/041004
Kerlinger, F., Pedhazur, E. J. (1973): Multiple Regression in Behavioural Research. New York: Holt, Rinehart & Winston.
Kicker, D. (2009): “Wendell Bell and Oliver W. Markley: Two Futurists’ View on the Preferable, the Possible and the Probable”, Journal of Futures Studies, 13(3): 161-178.
Kintsch, W. (1998): Comprehension: A Paradigm for Cognition. Cambridge: Cambridge University Press.
Klaczynski, P. A. (2000): “Motivated scientific reasoning biases, epistemological beliefs, and theory polarization: A two-process approach to adolescent cognition”, Child Development, 71: 1347-1366. https://doi.org/10.1111/1467-8624.00232
Knight, F. (1921): Risk, Uncertainty and Profit. Boston/New York: Houghton Mifflin Company.


Koehler, J. J. (1993): “The influence of prior beliefs on scientific judgments of evidence quality”, Organizational Behavior and Human Decision Processes, 56: 25-55. https://doi.org/10.1006/obhd.1993.1044
Konzendorf, G. (2013): “Zum Einfluss von Evaluationen auf die politische Entscheidungsfindung”, Verwaltung und Management, 19(4): 171-178. https://doi.org/10.5771/0947-9856-2013-4-171
Kosow, H. (2015): “New outlooks in traceability and consistency of integrated scenarios”, European Journal of Futures Research, 3(16): 1-12. https://doi.org/10.1007/s40309-015-0077-6
Kosow, H., Leon, C. (2015): “Die Szenariotechnik als Methode der Experten- und Stakeholdereinbindung”, in: Methoden der Experten- und Stakeholdereinbindung in der sozialwissenschaftlichen Forschung, edited by Niederberger, M., Wassermann, S., Wiesbaden: Springer VS, pp. 217-242. https://doi.org/10.1007/978-3-658-01687-6_11
Kowalski, K., Stagl, S., Madlener, R., Omann, I. (2009): “Sustainable energy futures: Methodological challenges in combining scenarios and participatory multi-criteria analysis”, European Journal of Operational Research, 197: 1063-1074. https://doi.org/10.1016/j.ejor.2007.12.049
Kropp, C. (2013): “Demokratische Planung der Klimaanpassung? Über die Fallstricke partizipativer Verfahren im expertokratischen Staat”, in: Partizipation und Klimawandel - Ansprüche, Konzepte und Umsetzung, edited by Baasch, S., Knierim, A., Gottschick, M., München: oekom Verlag, pp. 55-74.
Kruglanski, A. W., Webster, D. M., Klem, A. (1993): “Motivated resistance and openness to persuasion in the presence or absence of prior information”, Journal of Personality and Social Psychology, 65(5): 861-876. https://doi.org/10.1037/0022-3514.65.5.861
Kunda, Z. (1990): “The case for motivated reasoning”, Psychological Bulletin, 108: 480-498. https://doi.org/10.1037/0033-2909.108.3.480
Kunkel, K., Moss, R., Parris, A. (2016): “Innovations in science and scenarios for assessment”, Climatic Change, 135(1): 55-68. https://doi.org/10.1007/s10584-015-1494-z
Kutscher, S. (2009): Kausalität und Argumentationsrealisierung: Zur Konstruktionsvarianz bei Psychverben am Beispiel europäischer Sprachen. Tübingen: Max Niemeyer Verlag.
Kuusi, O., Cuhls, K., Steinmüller, K. (2015): “Quality Criteria for Scientific Futures Research”, Futura, 1: 60-77.



Kwakkel, J., Haasnoot, M., Walker, W. E. (2015): “Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world”, Climatic Change, 132(3): 373-386. https://doi.org/10.1007/s10584-014-1210-4
Labov, W. (1972): Language in the Inner City: Studies in the Black English Vernacular. Philadelphia, PA: Blackwell.
Lang, T. (2012): “Essays on how scenario planning and the building of new social capital are related”, Dissertation, Green Templeton College, Said Business School, University of Oxford.
Lang, T., Ramírez, R. (2015): “Scenario cranes new cognitive social capital”, Said Business School Research Papers, Oxford: Oxford Said Business School, University of Oxford.
Lapata, M., McDonald, S., Keller, F. (1999): “Determinants of adjective-noun plausibility”, in: Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics, San Mateo, CA: Morgan Kaufmann, pp. 30-36. https://doi.org/10.3115/977035.977041
Lee, B. (2001): “Mutual knowledge, background knowledge and shared beliefs: Their roles in establishing common ground”, Journal of Pragmatics, 33(1): 21-44. https://doi.org/10.1016/S0378-2166(99)00128-9
Lempert, R. (2012): “Scenarios that illuminate vulnerabilities and robust responses”, Climatic Change, 117(4): 627-646. https://doi.org/10.1007/s10584-012-0574-6
Lempert, R., Hallsworth, M., Hoorens, S., Ling, T. (2008): “Looking back on looking forward: A review of evaluative scenario literature”, WR-564-EEA, Cambridge: RAND Corporation Europe.
Liebl, F. (2001): “Rethinking trends - and how to link them to scenarios”, in: 21st Annual SMS Conference, San Francisco.
Liebl, F. (2002): “The anatomy of complex societal problems and its implications for OR”, Journal of the Operational Research Society, 53: 161-184. https://doi.org/10.1057/palgrave.jors.2601293
Lindgren, M., Bandhold, H. (2003): Scenario Planning - The Link between Future and Strategy. Hampshire/New York: Palgrave Macmillan. https://doi.org/10.1057/9780230511620
Lloyd, E., Schweizer, V. J. (2013): “Objectivity and a comparison of methodological scenario approaches for climate change research”, Synthese, 191(10): 2049-2088. https://doi.org/10.1007/s11229-013-0353-6
Löfstedt, R. E. (2005): Risk Management in Post-Truth Societies. New York: Palgrave Macmillan. https://doi.org/10.1057/9780230503946


Lombardi, D., Sinatra, G. M. (2010): “College Students’ Perceptions About the Plausibility of Human-Induced Climate Change”, Research in Science Education, 42(2): 201-217. https://doi.org/10.1007/s11165-010-9196-z
Lombardi, D. (2012): “Students’ Conceptions about Climate Change: Using Critical Evaluation to Influence Plausibility Reappraisals and Knowledge Reconstruction”, Dissertation, Department of Educational Research, Cognition and Development, University of Nevada.
Lombardi, D., Sinatra, G. M. (2013): “Emotions about Teaching about Human-Induced Climate Change”, International Journal of Science Education, 35(1): 167-191. https://doi.org/10.1080/09500693.2012.738372
Lombardi, D., Sinatra, G. M., Nussbaum, E. M. (2013): “Plausibility reappraisals and shifts in middle school students’ climate change conceptions”, Learning and Instruction, 27: 50-62. https://doi.org/10.1016/j.learninstruc.2013.03.001
Lombardi, D., Seyranian, V., Sinatra, G. M. (2014): “Source Effects and Plausibility Judgments When Reading About Climate Change”, Discourse Processes, 51(1-2): 75-92. https://doi.org/10.1080/0163853X.2013.855049
Lombardi, D., Nussbaum, E. M., Sinatra, G. M. (2015): “Plausibility Judgments in Conceptual Change and Epistemic Cognition”, Educational Psychologist, 51(1): 35-56. https://doi.org/10.1080/00461520.2015.1113134
Lombardi, D., Brandt, C. B., Bickel, E. S., Burg, C. (2016a): “Students’ evaluations about climate change”, International Journal of Science Education, 38(8): 1392-1414. https://doi.org/10.1080/09500693.2016.1193912
Lombardi, D., Danielson, R. W., Young, N. (2016b): “A plausible connection: Models examining the relations between evaluation, plausibility, and the refutation text effect”, Learning and Instruction, 44: 74-86. https://doi.org/10.1016/j.learninstruc.2016.03.003
Lompe, K. (1972): Wissenschaftliche Beratung der Politik. Ein Beitrag zur Theorie anwendender Sozialwissenschaften. Göttingen: Verlag Otto Schwartz & Co.
Lord, S., Helfgott, A., Vervoort, J. M. (2016): “Choosing diverse sets of plausible scenarios in multidimensional exploratory futures techniques”, Futures, 77: 11-27. https://doi.org/10.1016/j.futures.2015.12.003
Lösch, A., et al. (2016): “Technikfolgenabschätzung von soziotechnischen Zukünften”, Diskussionspapier Nr. 03/Dezember 2016, Karlsruhe: Institut für Technikzukünfte, Karlsruher Institut für Technologie.



Lumer, C. (2011): “Argument schemes - an epistemological approach”, in: Proceedings of the 9th International Conference of the Ontario Society for the Study of Argumentation (OSSA), Windsor, ON.
Lynham, S. (2002): “Quantitative Research and Theory Building: Dubin’s Method”, Advances in Developing Human Resources, 4(3): 242-276. https://doi.org/10.1177/15222302004003003
Lyons, A., Kashima, Y. (2001): “The Reproduction of culture: Communication processes tend to maintain cultural stereotypes”, Social Cognition, 19(3): 372-394. https://doi.org/10.1521/soco.19.3.372.21470
Magliano, J. P. (1999): “Revealing inference processes during text comprehension”, in: Narrative Comprehension, Causality, and Coherence: Essays in Honor of Tom Trabasso, edited by Goldman, S. R., Graesser, A. C., Broek, P. v. d., Mahwah, NJ: Lawrence Erlbaum, pp. 55-75.
Mahoney, M. J. (1977): “Publication prejudices: an experimental study of confirmatory bias in the peer review system”, Cognitive Therapy and Research, 1(2): 161-175. https://doi.org/10.1007/BF01173636
Maier, J., Richter, T. (2012): “Plausibility effects in the comprehension of controversial science texts”, in: Proceedings of the American Educational Research Association conference, Vancouver, British Columbia.
Majone, G. (1989): Evidence, Argument, and Persuasion in the Policy Process. New Haven: Yale University Press.
Marien, M. (2002): “Futures studies in the 21st Century: a reality based view”, Futures, 34: 261-281. https://doi.org/10.1016/S0016-3287(01)00043-X
Martelli, A. (2001): “Scenario building and scenario planning: state of the art and prospects of evolution”, Futures Research Quarterly, Summer: 57-70.
Masini, E. B. (2006): “Rethinking Futures Studies”, Futures, 38: 1158-1168. https://doi.org/10.1016/j.futures.2006.02.004
Masini, E. B., Vasquez, J. M. (2000): “Scenarios as seen from a human and social perspective”, Technological Forecasting and Social Change, 65: 49-66. https://doi.org/10.1016/S0040-1625(99)00127-4
Maxim, L., van der Sluijs, J. P. (2011): “Quality in environmental science for policy: assessing uncertainty as a component of policy analysis”, Environmental Science and Policy, 14: 482-492. https://doi.org/10.1016/j.envsci.2011.01.003
Mayer, J., Hanson, E. (1995): “Mood-congruent Judgment over time”, Personality and Social Psychology Bulletin, 21(3): 237-244. https://doi.org/10.1177/0146167295213005


McClanahan, A. (2009): “Future’s Shock: Plausibility, Preemption, and the Fiction of 9/11”, Symploke, 17(1-2): 41-62. https://doi.org/10.1353/sym.2009.0011
McComas, K., Shanahan, J. (1999): “Telling stories about global climate change: Measuring the impact of narratives on issue cycles”, Communication Research, 26(1): 30-57. https://doi.org/10.1177/009365099026001003
McNamara, D. S. (2001): “Reading both high-coherence and low-coherence texts: Effects of text sequence and prior knowledge”, Canadian Journal of Experimental Psychology, 55(1): 51-62. https://doi.org/10.1037/h0087352
Meissner, P., Wulf, T. (2013): “Cognitive benefits of scenario planning: Its impact on biases and decision quality”, Technological Forecasting and Social Change, 80(4): 801-814. https://doi.org/10.1016/j.techfore.2012.09.011
Mermet, L., Fuller, T., van der Helm, R. (2009): “Re-examining and renewing theoretical underpinnings of the Futures field: A pressing and long-term challenge”, Futures, 41(2): 67-70. https://doi.org/10.1016/j.futures.2008.07.040
Metzger, M., Flanagin, A., Eyal, K., Lemus, D., McCann, R. (2003): “Credibility for the 21st Century: Integrating Perspectives on source, message, and media credibility in the contemporary media environment”, in: Communication Yearbook, edited by Kalbfleisch, P., Mahwah: Lawrence Erlbaum, pp. 293-335. https://doi.org/10.1207/s15567419cy2701_10
Metzger, M., Rounsevell, M., Van den Heiligenberg, H., Perez-Soba, M., Soto Hardiman, P. (2010): “How personal judgments influence scenario development: an Example for Future Rural Development in Europe”, Ecology and Society, 15(2): 5. https://doi.org/10.5751/ES-03305-150205
Midttun, A., Baumgartner, T. (1986): “Negotiating energy futures: The Politics of energy forecasting”, Energy Policy, 14(3): 219-241. https://doi.org/10.1016/0301-4215(86)90145-X
Millett, S. M. (2003): “The future of scenarios: challenges and opportunities”, Strategy & Leadership, 31(2): 16-24. https://doi.org/10.1108/10878570310698089
Millett, S. M. (2009): “Should probabilities be used with Scenarios?”, Journal of Futures Studies, 13(4): 61-68.
Molitor, G. (2009): “Scenarios: Worth the Effort?”, Journal of Futures Studies, 13(3): 81-92.
Morgan, M. G., Keith, D. W. (2008): “Improving the way we think about projecting future energy use and emissions of carbon dioxide”, Climatic Change, 90(3): 189-215. https://doi.org/10.1007/s10584-008-9458-1



Moss, R. H., et al. (2010): “The next generation of scenarios for climate change research and assessment”, Nature, 463(7282): 747-756. https://doi.org/10.1038/nature08823
Nahari, G., Glicksohn, J., Nachson, I. (2010): “Credibility judgments of narratives: Language, plausibility, and absorption”, American Journal of Psychology, 123(3): 319-335. https://doi.org/10.5406/amerjpsyc.123.3.0319
Nakicenovic, N., et al. (2000): Special Report on Emissions Scenarios. New York: Cambridge University Press.
Nielsen, S. K., Karlsson, K. (2007): “Energy Scenarios: a review of methods, uses and suggestions for improvement”, International Journal of Global Energy Issues, 27(3): 302-322. https://doi.org/10.1504/IJGEI.2007.014350
Nilsson, M., Nilsson, L. J., Hildingsson, R., Stripple, J., Eikeland, P. O. (2011): “The missing link: Bringing institutions and politics into energy future studies”, Futures, 43(10): 1117-1128. https://doi.org/10.1016/j.futures.2011.07.010
Nordmann, A. (2007): “If and Then: A Critique of Speculative NanoEthics”, Nanoethics, 1: 31-46. https://doi.org/10.1007/s11569-007-0007-6
Nordmann, A. (2014): “Responsible innovation, the art and craft of anticipation”, Journal of Responsible Innovation, 1(1): 87-98. https://doi.org/10.1080/23299460.2014.882064
Nordmann, A., Rip, A. (2009): “Mind the gap revisited”, Nature Nanotechnology, 4(5): 273-274. https://doi.org/10.1038/nnano.2009.26
Noss, C. (2013): “Strategisches Management und Zeit: Auf dem Weg zu einem integrativen Konzept zeitinduzierter Wettbewerbsvorteile”, Managementforschung, 23: 83-126. https://doi.org/10.1007/978-3-658-02998-2_3
O’Mahony, T. (2014): “Integrated scenarios for energy: a methodology for the short term”, Futures, 55: 41-57. https://doi.org/10.1016/j.futures.2013.11.002
O’Mahony, T., Zhou, P., Sweeney, J. (2013): “Integrated scenarios of energy-related CO2 emissions in Ireland: A multi-sectoral analysis to 2020”, Ecological Economics, 93: 385-397. https://doi.org/10.1016/j.ecolecon.2013.06.016
Oaksford, M., Chater, N. (2007): Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198524496.001.0001
Ochs, E., Capps, L. (1997): “Narrative Authenticity”, Journal of Narrative and Life History, 7(1-4): 83-89. https://doi.org/10.1075/jnlh.7.09nar


Olsen, W. (2004): “Triangulation in Social Research: Qualitative and Quantitative Methods can really be mixed”, in: Developments in Sociology, edited by Holborn, M., Ormskirk: Causeway Press, pp. 1-30.
Pargman, D., Eriksson, E., Höök, M., Tanenbaum, J., Pufal, M., Wangel, J. (2017): “What if there had only been half the oil? Rewriting history to envision the consequences of peak oil”, Energy Research & Social Science, 31: 170-178. https://doi.org/10.1016/j.erss.2017.06.007
Parker, A. M., Srinivasan, S. V., Lempert, R. J., Berry, S. H. (2015): “Evaluating simulation-derived scenarios for effective decision support”, Technological Forecasting and Social Change, 91: 64-77. https://doi.org/10.1016/j.techfore.2014.01.010
Parson, E. A. (2008): “Useful global-change scenarios: current issues and challenges”, Environmental Research Letters, 3(4): 045016. https://doi.org/10.1088/1748-9326/3/4/045016
Patomäki, H., Steger, M. (2010): “Social imaginaries and Big History: Towards a new planetary consciousness?”, Futures, 42: 1056-1063. https://doi.org/10.1016/j.futures.2010.08.004
Peng, C.-Y. J., Lee, K. L., Ingersoll, G. M. (2010): “An Introduction to Logistic Regression Analysis and Reporting”, The Journal of Educational Research, 96(1): 3-14. https://doi.org/10.1080/00220670209598786
Peng, C.-Y. J., So, T.-S. H. (2002): “Logistic Regression Analysis and Reporting: A Primer”, Understanding Statistics, 1(1): 31-70. https://doi.org/10.1207/S15328031US0101_04
Pennington, N., Hastie, R. (1986): “Evidence evaluation in complex decision making”, Journal of Personality and Social Psychology, 51: 242-258. https://doi.org/10.1037/0022-3514.51.2.242
Petty, R. E., Cacioppo, J. T. (1986): Communication and Persuasion: Central and Peripheral Routes to Attitude Change. New York: Springer.
Pfenninger, S., Hawkes, A., Keirstead, J. (2014): “Energy systems modelling for twenty-first century energy challenges”, Renewable and Sustainable Energy Reviews, 33: 74-86. https://doi.org/10.1016/j.rser.2014.02.003
Phelan, J., Rabinowitz, P. (2005): “Introduction: Tradition and Innovation in Contemporary Narrative Theory”, in: A Companion to Narrative Theory, edited by Phelan, J., Rabinowitz, P., Malden/Oxford/Victoria: Blackwell Publishing, pp. 1-18. https://doi.org/10.1002/9780470996935.ch1
Phelan, J., Rabinowitz, P. (2012): “Reception and the Reader”, in: Narrative Theory: Core Concepts and Critical Debates, edited by Herman, D., Phelan, J., Rabinowitz, P., Richardson, B., Warhol, R., Columbus: Ohio State University Press, pp. 139-143.
Pintrich, P. R., Marx, R. W., Boyle, R. B. (1993): “Beyond cold conceptual change: the role of motivational beliefs and classroom contextual factors in the process of conceptual change”, Review of Educational Research, 63: 167-199. https://doi.org/10.3102/00346543063002167
Pitz, G. F. (1969): “An inertia effect (resistance to change) in the revision of opinion”, Canadian Journal of Psychology, 23(1): 24-33. https://doi.org/10.1037/h0082790
Polya, G. (1954): Mathematics and Plausible Reasoning. Princeton: Princeton University Press.
Posner, G. J., Strike, K. A., Hewson, P. W., Gertzog, W. A. (1982): “Accommodation of a scientific conception: Towards a theory of conceptual change”, Science Education, 66: 211-227. https://doi.org/10.1002/sce.3730660207
Postma, T. J. B. M., Liebl, F. (2005): “How to improve scenario analysis as a strategic management tool?”, Technological Forecasting and Social Change, 72(2): 161-173. https://doi.org/10.1016/S0040-1625(03)00152-5
Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., Hauser, W. (2019): “Moving towards socio-technical scenarios of the German energy transition - lessons learned from integrated energy scenario building”, Climatic Change. https://doi.org/10.1007/s10584-019-02598-0
Pulver, S., VanDeveer, S. (2009): “‘Thinking about Tomorrows’: Scenarios, Global Environmental Politics, and Social Science Scholarship”, Global Environmental Politics, 9(2): 1-14. https://doi.org/10.1162/glep.2009.9.2.1
Ramírez, R. (2008): “Scenarios providing clarity to address turbulence”, in: Business Planning in Turbulent Times: New Methods for Applying Scenarios, edited by Ramírez, R., Selsky, J., van der Heijden, K., London: Earthscan, pp. 187-206.
Ramírez, R., Mukherjee, M., Vezzoli, S., Kramer, A. M. (2015): “Scenarios as a scholarly methodology to produce ‘interesting research’”, Futures, 71: 70-87. https://doi.org/10.1016/j.futures.2015.06.006
Ramírez, R., Ravetz, J. (2011): “Feral futures: Zen and aesthetics”, Futures, 43: 478-487. https://doi.org/10.1016/j.futures.2010.12.005
Ramírez, R., Selin, C. (2014): “Plausibility and probability in scenario planning”, Foresight, 16(1): 54-74. https://doi.org/10.1108/FS-08-2012-0061
Ramírez, R., Selsky, J., van der Heijden, K. (2010): Business Planning in Turbulent Times. London: Earthscan.


Ramírez, R., Wilkinson, A. (2016): Strategic Reframing: The Oxford Scenario Planning Approach. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198745693.001.0001
Rasch, B., Friese, M., Hofmann, W., Naumann, E. (2006): Quantitative Methoden - Einführung in die Statistik. 2nd ed. Heidelberg: Springer Medizin Verlag.
Ratcliffe, J. (2003): “Scenario planning: an evaluation of practice”, Futures Research Quarterly, 19(4): 5-25.
Renn, O. (2008): Risk Governance: Coping with Uncertainty in a Complex World. London: Routledge.
Renn, O. (2017): Zeit der Verunsicherung: Was treibt Menschen in den Populismus? Reinbek: Rowohlt.
Renn, O., Klinke, A., van Asselt, M. (2011): “Coping with complexity, uncertainty and ambiguity in risk governance: a synthesis”, Ambio, 40(2): 231-246. https://doi.org/10.1007/s13280-010-0134-0
Renn, O., Rohrmann, B. (2000): “Risk perception research - An introduction”, in: Cross-Cultural Risk Perception: A Survey of Empirical Studies, edited by Renn, O., Rohrmann, B., Dordrecht/Boston/London: Kluwer Academic Publishers, pp. 11-54. https://doi.org/10.1007/978-1-4757-4891-8_1
Renn, O., Schweizer, P.-J. (2009): “Inclusive risk governance: concepts and application to environmental policy making”, Environmental Policy and Governance, 19(3): 174-185. https://doi.org/10.1002/eet.507
Rescher, N. (1976): Plausible Reasoning: An Introduction to the Theory and Practice of Plausibilistic Inference. Amsterdam: Van Gorcum.
Richter, T., Schmid, S. (2010): “Epistemological beliefs and epistemic strategies in self-regulated learning”, Metacognition and Learning, 5: 47-65. https://doi.org/10.1007/s11409-009-9038-4
Rieh, S. Y. (2002): “Judgment of information quality and cognitive authority in the Web”, Journal of the American Society of Information Science and Technology, 53(2): 145-161. https://doi.org/10.1002/asi.10017
Ringland, G. (2008): “Innovation: scenarios of alternative futures can discover new opportunities for creativity”, Strategy & Leadership, 36(5): 22-27. https://doi.org/10.1108/10878570810902086
Rippl, S. (2002): “Cultural Theory and Risk Perception: A Proposal for a better measurement”, Journal of Risk Research, 5(2): 147-165. https://doi.org/10.1080/13669870110042598



Rogelj, J., Meinshausen, M., Knutti, R. (2012): “Global warming under old and new scenarios using IPCC climate sensitivity range estimates”, Nature Climate Change, 2: 248-253. https://doi.org/10.1038/nclimate1385
Rohrbeck, R., Schwarz, J. O. (2013): “The value contribution of strategic foresight: Insights from an empirical study of large European companies”, Technological Forecasting and Social Change, 80(8): 1593-1606. https://doi.org/10.1016/j.techfore.2013.01.004
Rotmans, J., van Asselt, M., Anastasi, C., Greeuw, S., Mellors, J., Peters, S., Rothman, D., Rijkens, N. (2000): “Visions for a sustainable Europe”, Futures, 32: 809-831. https://doi.org/10.1016/S0016-3287(00)00033-1
Rowe, G., Bolger, F. (2016): “Final report on ‘The identification of food safety priorities using the Delphi technique’”, EFSA Supporting Publications 2016, EN-1007, Parma: European Food Safety Authority. https://doi.org/10.2903/sp.efsa.2016.EN-1007
Rowe, G., Wright, G. (1999): “The Delphi technique as a forecasting tool: issues and analysis”, International Journal of Forecasting, 15(4): 353-375. https://doi.org/10.1016/S0169-2070(99)00018-7
Rowe, W. D. (1994): “Understanding uncertainty”, Risk Analysis, 14: 743-750. https://doi.org/10.1111/j.1539-6924.1994.tb00284.x
Rumelhart, D. (1975): “Notes on a schema for stories”, in: Representation and Understanding: Studies in Cognitive Science, edited by Bobrow, D., Collins, A., New York: Academic Press, pp. 211-236. https://doi.org/10.1016/B978-0-12-108550-6.50013-6
Runde, J. (1998): “Clarifying Frank Knight’s discussion of the meaning of risk and uncertainty”, Cambridge Journal of Economics, 22: 539-546. https://doi.org/10.1093/cje/22.5.539
Sager, F. (2007): “Habermas’ Models of Decisionism, Technocracy and Pragmatism in Times of Governance”, Public Administration, 85(2): 429-447. https://doi.org/10.1111/j.1467-9299.2007.00646.x
Sahner, H. (2005): Schließende Statistik: Eine Einführung für Sozialwissenschaftler. Wiesbaden: Springer VS. https://doi.org/10.1007/978-3-322-95695-8
Sala, O. E., et al. (2000): “Global Biodiversity Scenarios for the Year 2100”, Science, 287: 1770-1774. https://doi.org/10.1126/science.287.5459.1770
Sanbonmatsu, D. M., Akimoto, S. A., Biggs, E. (1993): “Overestimating causality: attributional effects of confirmatory processing”, Journal of Personality and Social Psychology, 65(5): 892-903. https://doi.org/10.1037/0022-3514.65.5.892


Sardar, Z. (2010): “The Namesake: Futures; futures studies; futurology; futuristic; foresight - What’s in a name?”, Futures, 42(3): 177-184. https://doi.org/10.1016/j.futures.2009.11.001
Sarris, V., Reiss, S. (2005): Kurzer Leitfaden der Experimentalpsychologie. München/Boston/San Francisco: Pearson Studium.
Schafer, G. (1978): “Non-additive probabilities in the work of Bernoulli and Lambert”, Archive for History of Exact Sciences, 19: 309-370. https://doi.org/10.1007/BF00330065
Scheele, R., Kosow, H., Prehofer, S. (2017): “Kontextszenarien als Ergänzung modellgestützter Szenarioanalysen - Grundlagen und aktuelle Fragestellungen”, in: Digitale Welten: Neue Ansätze in der Wirtschafts- und Sozialkybernetik, edited by Tilebein, M., et al., Wirtschaftskybernetik und Systemanalyse Band 30, Berlin: Duncker & Humblot, pp. 107-121.
Scheele, R., Kearney, N. M., Kurniawan, J. H., Schweizer, V. J. (2018): “What Scenarios Are You Missing? Poststructuralism for Deconstructing and Reconstructing Organizational Futures”, in: How Organizations Manage the Future: Theoretical Perspectives and Empirical Insights, edited by Krämer, H., Wenzel, M., Palgrave Macmillan, pp. 153-172. https://doi.org/10.1007/978-3-319-74506-0_8
Scheer, D., Konrad, W., Renn, O., Scheel, O. (2014): Energiepolitik unter Strom: Alternativen der Stromerzeugung im Akzeptanztest. München: oekom Verlag.
Schelsky, H. (1961): Der Mensch in der technischen Zivilisation. Köln: Springer. https://doi.org/10.1007/978-3-663-02159-9
Schmidt-Scheele, R., Bauknecht, D., Poganietz, R.-W., Seebach, D., Timpe, C., Weimer-Jehle, W., Weiß, A. (2019): “Guiding motives and storylines of the German energy transition: How to systematically integrate stakeholder positions into energy transformation pathways”, TaTuP, 28(3): 27-33. https://doi.org/10.14512/tatup.28.3.27
Schmidt-Scheele, R. (2020): “Supplementary Material to ‘The Plausibility of Future Scenarios: Conceptualising an Unexplored Criterion in Scenario Planning’ (transcript Verlag)”, available at https://www.researchgate.net/profile/Ricarda_Schmidt_Scheele/research (last accessed 03/04/2020).
Schoemaker, P. (1993): “Multiple Scenario Development: Its Conceptual and Behavioural Foundation”, Strategic Management Journal, 14: 193-213. https://doi.org/10.1002/smj.4250140304
Schoemaker, P. (1995): “Scenario planning: a tool for strategic thinking”, Sloan Management Review, Winter: 25-40.



Schroeder, S., Richter, T., Hoever, I. (2008): “Getting a picture that is both accurate and stable: Situation models and epistemic validation”, Journal of Memory and Language, 59: 237-259. https://doi.org/10.1016/j.jml.2008.05.001
Schubert, D. K. J., Thuß, S., Möst, D. (2015): “Does political and social feasibility matter in energy scenarios?”, Energy Research & Social Science, 7: 43-54. https://doi.org/10.1016/j.erss.2015.03.003
Schwartz, P. (1991): The Art of the Long View: Planning for the Future in an Uncertain World. New York: Currency Doubleday.
Schweizer, V. J., O’Neill, B. (2014): “Systematic construction of global socioeconomic pathways using internally consistent element combinations”, Climatic Change, 122: 431-445. https://doi.org/10.1007/s10584-013-0908-z
Schweizer, V. J., Kriegler, E. (2012): “Improving environmental change research with systematic techniques for qualitative scenarios”, Environmental Research Letters, 7(4): 044011. https://doi.org/10.1088/1748-9326/7/4/044011
Scriven, M. (1976): Reasoning. New York: McGraw-Hill.
Seidl, R. (2015): “A functional-dynamic reflection on participatory processes in modeling projects”, Ambio, 44(8): 750-765. https://doi.org/10.1007/s13280-015-0670-8
Selin, C. (2006): “Trust and the Illusive Force of Scenarios”, Futures, 38(1): 1-14. https://doi.org/10.1016/j.futures.2005.04.001
Selin, C. (2007): “Expectations and the Emergence of Nanotechnology”, Science, Technology & Human Values, 32(2): 196-220. https://doi.org/10.1177/0162243906296918
Selin, C. (2011a): “Negotiating plausibility: intervening in the future of nanotechnology”, Science and Engineering Ethics, 17(4): 723-737. https://doi.org/10.1007/s11948-011-9315-x
Selin, C. (2011b): “Travails, Travels and Trials: Report from the S.NET Roundtable on Plausibility”, in: Quantum Engagements: Social Reflections of Nanoscience and Emerging Technologies, edited by Zülsdorf, T., et al., Heidelberg: AKA GmbH, pp. 237-242.
Selin, C. (2015): “The Plausibility Project”, Cynthia Selin, available at https://www.cynthiaselin.com/plausibility-project.html (last accessed 24/02/2020).
Selin, C., Pereira, A. G. (2013): “Pursuing Plausibility”, International Journal of Foresight and Innovation Policy, 9(2/3/4): 93-109. https://doi.org/10.1504/IJFIP.2013.058616


Shapere, D. (1966): “Plausibility and Justification in the development of science”, The Journal of Philosophy, 63(20): 611-621. https://doi.org/10.2307/2024256
Shapin, S. (1994): A Social History of Truth: Civility and Science in Seventeenth-Century England. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226148847.001.0001
Sheppard, S. R. J., Shaw, A., Flanders, D., Burch, S., Wiek, A., Carmichael, J., Robinson, J., Cohen, S. (2011): “Future visioning of local climate change: A framework for community engagement and planning with scenarios and visualisation”, Futures, 43(4): 400-412. https://doi.org/10.1016/j.futures.2011.01.009
Siegrist, M. (1999): “A Causal Model explaining the Perception and Acceptance of Gene Technology”, Journal of Applied Social Psychology, 29(10): 2093-2106. https://doi.org/10.1111/j.1559-1816.1999.tb02297.x
Siegrist, M., Cvetkovich, G., Roth, C. (2000): “Salient value similarity, social trust, and risk/benefit perception”, Risk Analysis, 20: 713-719. https://doi.org/10.1111/0272-4332.205064
Simon, H. A. (1957): Models of Man. New York: Wiley. https://doi.org/10.2307/2550441
Sinatra, G. M. (2005): “The ‘warming trend’ in conceptual change research: The legacy of Paul R. Pintrich”, Educational Psychologist, 40(2): 107-115. https://doi.org/10.1207/s15326985ep4002_5
Sinatra, G. M., Chinn, C. (2011): “Thinking and reasoning in science: Promoting epistemic conceptual change”, in: Critical Theories and Models of Learning and Development Relevant to Learning and Teaching, edited by Harris, K., McCormick, C. B., Sinatra, G. M., Sweller, J., Washington, DC: APA Publications, pp. 257-282. https://doi.org/10.1037/13275-011
Sinatra, G. M., Kardash, C. M., Taasoobshirazi, G., Lombardi, D. (2011): “Promoting attitude change and expressed willingness to take action toward climate change in college students”, Instructional Science, 40(1): 1-17. https://doi.org/10.1007/s11251-011-9166-5
Sjöberg, L. (2000a): “Consequences matter, ‘risk’ is marginal”, Journal of Risk Research, 3(3): 287-295. https://doi.org/10.1080/13669870050043189
Sjöberg, L. (2000b): “The Methodology of Risk Perception Research”, Quality & Quantity, 34: 407-418. https://doi.org/10.1023/A:1004838806793
Sjöberg, L., Moen, B., Rundmo, T. (2004): Explaining Risk Perception: An Evaluation of the Psychometric Paradigm in Risk Perception Research. Trondheim: C. Rotunde.


The Plausibility of Future Scenarios

Slaughter, R. (2002a): “Beyond the mundane: Reconciling breadth and depth in futures enquiry”, Futures, 34: 493-507. https://doi.org/10.1016/S0016-3287(01)00076-3
Slaughter, R. (2002b): “From forecasting and scenarios to social construction: Changing methodological paradigms in future studies”, Foresight, 4(3): 26-31. https://doi.org/10.1108/14636680210697731
Slovic, P. (1992): “Perception of Risk: Reflections on the psychometric paradigm”, in: Social Theories of Risk, edited by Krimsky, S., Golding, D., Westport CT: Praeger, pp. 117-152.
Slovic, P. (2000): The Perception of Risk. London/New York: Earthscan.
Slovic, P., Fischhoff, B., Lichtenstein, S. (1982): “Why study risk perception?”, Risk Analysis, 2(1): 83-93. https://doi.org/10.1111/j.1539-6924.1982.tb01369.x
Sondeijker, S., Geurts, J., Rotmans, J., Tukker, A. (2006): “Imagining sustainability: the added value of transition scenarios in transition management”, Foresight, 8(5): 15-30. https://doi.org/10.1108/14636680610703063
Soste, L., Wang, Q. J., Robertson, D., Chaffe, R., Handley, S., Wei, Y. (2015): “Engendering stakeholder ownership in scenario planning”, Technological Forecasting and Social Change, 91: 250-263. https://doi.org/10.1016/j.techfore.2014.03.002
Star, S. L., Griesemer, J. R. (1989): “Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39”, Social Studies of Science, 19: 387-420. https://doi.org/10.1177/030631289019003001
Stein, N. L., Glenn, C. G. (1979): “An analysis of story comprehension in elementary school children”, in: Discourse processing: Multidisciplinary perspectives, edited by Freedle, R. O., New York: Ablex, pp. 53-120.
Stewart, N., Brown, G. D. A., Chater, N. (2002): “Sequence effects in categorization of simple perceptual stimuli”, Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(1): 3-11. https://doi.org/10.1037/0278-7393.28.1.3
Stirling, A. (2005): “‘Opening Up’ and ‘Closing Down’: Power, Participation, and Pluralism in the Social Appraisal of Technology”, Science, Technology & Human Values, 33(2): 262-294. https://doi.org/10.1177/0162243907311265
Stirling, A. (2014): “Transforming power: Social science and the politics of energy choices”, Energy Research & Social Science, 1: 83-95. https://doi.org/10.1016/j.erss.2014.02.001
Stone, D. (1997): Policy Paradox: the art of political decision making. New York: W.W. Norton.


Strand, R. (2013): “Science, Utopia and the human condition”, International Journal for Foresight and Innovation Policy, 9(2/3/4): 110-124. https://doi.org/10.1504/IJFIP.2013.058614
Sundermeier, B. A., van den Broek, P., Zwaan, R. A. (2005): “Causal coherence and the availability of locations and objects during narrative comprehension”, Memory & Cognition, 33: 462-470. https://doi.org/10.3758/BF03193063
Swann, W. B. J., Read, S. J. (1981): “Acquiring self-knowledge: the search for feedback that fits”, Journal of Personality and Social Psychology, 41(6): 1119-1128. https://doi.org/10.1037/0022-3514.41.6.1119
Teske, S. (2011): “Energy [R]evolution Scenarios: Development, Experiences and Suggestions”, in: Energieszenarien: Konstruktion, Bewertung und Wirkung – ‘Anbieter‘ und ‘Nachfrager‘ im Dialog, edited by Dieckhoff, C., Fichtner, W., Grunwald, A., Meyer, S., Nast, N., Nierling, N., Renn, O., Voß, A., Wietschel, A., Karlsruhe: KIT Scientific Publishing, pp. 121-140.
Tetlock, P. E., Gardner, D. (2015): Superforecasting: The Art and Science of Prediction. New York: Broadway Books.
Thagard, P. (1989): “Explanatory coherence”, Behavioral and Brain Sciences, 12: 435-467. https://doi.org/10.1017/S0140525X00057046
Thagard, P. (1992): Conceptual revolutions. Princeton, NJ: Princeton University Press. https://doi.org/10.1515/9780691186672
Thalmann, A. (2005): “Risiko Elektrosmog – Wie ist das Wissen in der Grauzone zu kommunizieren?”, Psychologie Forschung aktuell, Band 19.
Thouless, R. (1974): Straight and Crooked Thinking. London: Pan Books.
Toulmin, S., Rieke, R., Janik, A. (1979): An Introduction to Reasoning. New York: Macmillan.
Trutnevyte, E., Guivarch, C., Lempert, R., Strachan, N. (2016): “Reinvigorating the scenario technique to expand uncertainty consideration”, Climatic Change, 135(3-4): 373-379. https://doi.org/10.1007/s10584-015-1585-x
Trutnevyte, E., Stauffacher, M., Schlegel, M., Scholz, R. W. (2012): “Context-specific energy strategies: coupling energy system visions with feasible implementation scenarios”, Environmental Science & Technology, 46(17): 9240-9248. https://doi.org/10.1021/es301249p
Tversky, A., Fox, C. (1995): “Weighting risk and uncertainty”, Psychological Review, 102(2): 269-283. https://doi.org/10.1037/0033-295X.102.2.269


Tversky, A., Kahneman, D. (1974): “Judgment under uncertainty: Heuristics and biases”, Science, 185: 1124-1131. https://doi.org/10.1126/science.185.4157.1124
Tversky, A., Kahneman, D. (1981): “The Framing of Decisions and the Psychology of Choice”, Science, 211: 453-458. https://doi.org/10.1126/science.7455683
University of Zurich (2018): “Methodenberatung”, University of Zurich, available at www.methodenberatung.uzh.ch/de/datenanalyse_spss.html (last accessed 24/02/2020).
Urry, J. (2016): What is the Future? Cambridge: John Wiley & Sons.
Uruena, S. (2019): “Understanding ‘plausibility’: A relational approach to the anticipatory heuristics of future scenarios”, Futures, 111: 15-25. https://doi.org/10.1016/j.futures.2019.05.002
van Asselt, M., van’t Klooster, S., van Notten, P. W. F., Smits, L. A. (2010): Foresight in Action: Developing Policy Oriented Scenarios. London: Earthscan.
van Asselt, M., Rotmans, J. (2002): “Uncertainty in Integrated Assessment Modelling”, Climatic Change, 54(1-2): 75-105. https://doi.org/10.1023/A:1015783803445
van der Heijden, K. (2005): Scenarios: The Art of Strategic Conversation. 2nd ed. Chichester: John Wiley & Sons.
van der Heijden, K. (2008): “Turbulence in the Indian Agriculture Sector: A Scenario Analysis”, in: Business Planning for Turbulent Times: New Methods for Applying Scenarios, edited by Ramirez, R., Selsky, J., van der Heijden, K., London: Earthscan, pp. 87-102.
van der Heijden, K., Bradfield, R., Burt, G., Cairns, G., Wright, G. (2002): The Sixth Sense: Accelerating Organizational Learning with Scenarios. Chichester: Wiley.
van Notten, P., Rotmans, J., van Asselt, M., Rothman, D. (2003): “An updated scenario typology”, Futures, 35(5): 423-443. https://doi.org/10.1016/S0016-3287(02)00090-3
van Vliet, M., Kok, K., Veldkamp, A., Sarkki, S. (2012): “Structure in creativity: An exploratory study to analyse the effects of structuring tools on scenario workshop results”, Futures, 44(8): 746-760. https://doi.org/10.1016/j.futures.2012.05.002
Varho, V., Tapio, P. (2013): “Combining the qualitative and quantitative with the Q2 scenario technique - The case of transport and climate”, Technological Forecasting and Social Change, 80(4): 611-630. https://doi.org/10.1016/j.techfore.2012.09.004


Verschraegen, G., Vandermoere, F. (2017): “Introduction: Shaping the future through imaginaries of science, technology and society”, in: Imagined Futures in Science, Technology and Society, edited by Verschraegen, G., Vandermoere, F., Braeckmans, L., Segaert, S., Abington/New York: Routledge Studies in Science, Technology and Society, pp. 1-11. https://doi.org/10.4324/9781315440842-1
Visschers, V. H., Meertens, R. M., Passchier, W. W., de Vries, N. N. (2009): “Probability information in risk communication: a review of the research literature”, Risk Analysis, 29(2): 267-287. https://doi.org/10.1111/j.1539-6924.2008.01137.x
Visschers, V. H. (2015): “Judgments under uncertainty: evaluations of univocal, ambiguous and conflicting probability information”, Journal of Risk Research, 20(2): 237-255. https://doi.org/10.1080/13669877.2015.1043569
Visschers, V. H. (2018): “Public Perception of Uncertainties Within Climate Change Science”, Risk Analysis, 38(1): 43-55. https://doi.org/10.1111/risa.12818
Volkery, A., Ribeiro, T. (2009): “Scenario Planning in public policy: Understanding use, impact and the role of institutional context factors”, Technological Forecasting and Social Change, 76: 1198-1207. https://doi.org/10.1016/j.techfore.2009.07.009
von Wirth, T., Wissen Hayek, U., Kunze, A., Neuenschwander, N., Stauffacher, M., Scholz, R. W. (2014): “Identifying urban transformation dynamics: Functional use of scenario techniques to integrate knowledge from science and practice”, Technological Forecasting and Social Change, 89: 115-130. https://doi.org/10.1016/j.techfore.2013.08.030
Vrij, A. (2008): Detecting lies and deceit: Pitfalls and opportunities. Chichester: Wiley.
Wachs, M. (1985): Ethics in Planning. New Brunswick: Rutgers, The State University of New Jersey.
Wack, P. (1985a): “Scenarios: Uncharted Waters Ahead”, Harvard Business Review, September/October 1985: 73-89.
Wack, P. (1985b): “Scenarios: Shooting the rapids”, Harvard Business Review, November/December 1985: 139-150.
Walton, D. N. (1992a): Plausible Argument in Everyday Conversation. Albany: State University of New York Press.
Walton, D. N. (1992b): “Rules for plausible reasoning”, Informal Logic, XIV(1): 33-51. https://doi.org/10.22329/il.v14i1.2524


Walton, D. N., Reed, C., Macagno, F. (2008): Argumentation Schemes. Cambridge: Cambridge University Press.
Walton, J. S. (2008): “Scanning Beyond the Horizon: Exploring the Ontological and Epistemological Basis for Scenario Planning”, Advances in Developing Human Resources, 10(2): 147-165. https://doi.org/10.1017/CBO9780511802034
Walton, S., O’Kane, P., Ruwhiu, D. (2019): “Developing a theory of plausibility in scenario building: Designing plausible scenarios”, Futures, 111: 42-56. https://doi.org/10.1016/j.futures.2019.03.002
Webler, T. (1995): “‘Right’ discourse in citizen participation: an evaluative yardstick”, in: Fairness and Competence in Citizen Participation. Evaluating New Models for Environmental Discourse, edited by Renn, O., Webler, T., Wiedemann, P., Dordrecht: Kluwer, pp. 35-86. https://doi.org/10.1007/978-94-011-0131-8_3
Webster, D. M., Kruglanski, A. W. (1994): “Individual differences in need for cognitive closure”, Journal of Personality and Social Psychology, 67: 1049-1062. https://doi.org/10.1037/0022-3514.67.6.1049
Weimer-Jehle, W. (2006): “Cross-impact balances: A system-theoretical approach to cross-impact analysis”, Technological Forecasting and Social Change, 73(4): 334-361. https://doi.org/10.1016/j.techfore.2005.06.005
Weimer-Jehle, W., Prehofer, S., Vögele, S. (2013): “Kontextszenarien - Ein Konzept zur Behandlung von Kontextunsicherheit und Kontextkomplexität bei der Entwicklung von Energieszenarien”, TaTuP, 22(2): 27-36. https://doi.org/10.14512/tatup.22.2.27
Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T., Prehofer, S., von Recklinghausen, A., Schippl, J., Vögele, S. (2016): “Context scenarios and their usage for the construction of socio-technical energy scenarios”, Energy, 111: 956-970. https://doi.org/10.1016/j.energy.2016.05.073
Weimer-Jehle, W., Vögele, S., Hauser, W., Kosow, H., Poganietz, W.-R., Prehofer, S. (2020): “Socio-technical energy scenarios: state-of-the-art and CIB-based approaches”, Climatic Change. https://doi.org/10.1007/s10584-020-02680-y
Whitehead, J. L. J. (1968): “Factors of source credibility”, Quarterly Journal of Speech, 54: 59-63. https://doi.org/10.1080/00335636809382870
Wiebe, K., et al. (2015): “Climate change impacts on agriculture in 2050 under a range of plausible socioeconomic and emissions scenarios”, Environmental Research Letters, 10(8): 085010.


Wiek, A., Withycombe Keeler, L., Schweizer, V. J., Lang, D. J. (2013): “Plausibility indications in future scenarios”, International Journal for Foresight and Innovation Policy, 9(2/3/4): 133-147. https://doi.org/10.1504/IJFIP.2013.058611
Wilkinson, A. (2009): “Scenarios Practices: In search of theory”, Journal of Futures Studies, 13(3): 107-114.
Wilkinson, A., Eidinow, E. (2008): “Evolving practices in environmental scenarios: a new scenario typology”, Environmental Research Letters, 3(4): 1-11. https://doi.org/10.1088/1748-9326/3/4/045017
Wilkinson, A., Kupers, R., Mangalagiu, D. (2013): “How plausibility-based scenario practices are grappling with complexity to appreciate and address 21st century challenges”, Technological Forecasting and Social Change, 80: 699-710. https://doi.org/10.1016/j.techfore.2012.10.031
Wilkinson, A., Ramírez, R. (2009): “How plausible is plausibility as a scenario effectiveness criterion?”, InSiS Working Paper, Joint ASU-Oxford Plausibility Project, Oxford: InSiS, University of Oxford.
Wilson, I. (1998): “Mental Maps of the future: An Intuitive Logics Approach to Scenario Planning”, in: Learning from the Future: Competitive Foresight Scenarios, edited by Fahey, L., Randall, R. M., New York: John Wiley and Sons, pp. 81-108.
Wise, M., Han, J. Y., Shaw, B., McTavish, F., Gustafson, D. H. (2008): “Effects of using online narrative and didactic information on healthcare participation for breast cancer patients”, Patient Education and Counseling, 70: 348-356. https://doi.org/10.1016/j.pec.2007.11.009
Wong-Parodi, G., Fischhoff, B., Strauss, B. (2014): “A method to evaluate the usability of interactive climate change impact decision aids”, Climatic Change, 126(3-4): 485-493. https://doi.org/10.1007/s10584-014-1226-9
Wright, G., Bradfield, R., Cairns, G. (2013a): “Does the intuitive logics method – and its recent enhancements – produce ‘effective’ scenarios?”, Technological Forecasting and Social Change, 80(4): 631-642. https://doi.org/10.1016/j.techfore.2012.09.003
Wright, G., Cairns, G., Bradfield, R. (2013b): “Scenario methodology: New developments in theory and practice”, Technological Forecasting and Social Change, 80(4): 561-565. https://doi.org/10.1016/j.techfore.2012.11.011
Wright, G., Goodwin, P. (2002): “Eliminating a framing bias by using simple instructions to ‘think harder’ and respondents with managerial experience: comment on ‘breaking the frame’”, Strategic Management Journal, 23(11): 1059-1067. https://doi.org/10.1002/smj.265


Wynne, B. (1992): “Uncertainty and Environmental Learning: Reconceiving Science and Policy in the Preventive Paradigm”, Global Environmental Change, 2(2): 111-127. https://doi.org/10.1016/0959-3780(92)90017-2
Zimmermann, H.-J. (1985): Fuzzy Set Theory - and its Applications. Dordrecht: Kluwer Academic Publishers. https://doi.org/10.1007/978-94-015-7153-1
Zimmermann, H.-J. (2000): “An application-oriented view of modeling uncertainty”, European Journal of Operational Research, 122: 190-198. https://doi.org/10.1016/S0377-2217(99)00228-3
Zinn, J. (2006): “Recent Developments in Sociology of Risk and Uncertainty”, Historical Social Research, 31(2): 275-286.
Zwicky, F. (1969): Discovery, Invention, Research Through the Morphological Approach. New York: Macmillan.
