Fact-Checking Journalism and Political Argumentation: A British Perspective

Table of contents:
Acknowledgements
Contents
List of Tables
Chapter 1: Introduction
Post-Truth Politics?
The Fact-Checking Movement
Partisanship and Trust in UK News Media
The British Political Context and the 2017 Snap Election
References
Chapter 2: Objectivity and Interpretation in Fact-Checking Journalism
Verification and Objectivity—Professional Norms of Journalistic ‘Truth-telling’
Empirical Critiques of Fact-Checking
Ethical Criticisms of Fact-Checking—Bias
Criticisms of Fact-Checking as Ineffective
Research Questions and Method
Content Analysis
Argumentative and Political Discourse Analysis
References
Chapter 3: Fact-Checking Claims, Policies and Parties
Form: Verdicts, Explainers and Information
Empirical Concerns
Explainers
Verdicts on Measures, Causal Relationships, Predictions and Definitions
Interpretation
Ethical Concerns: Impartiality and Audience Perception of Bias
‘Balance’ of Fact-Checking on Different Parties’ Claims
‘Balance’ of Fact-Checking on Different Parties’ Policies and Performance
Perception of Bias on Twitter
Negativity Bias
Empirical Issues: Corroboration
Ethical Issues: Impartiality of Sources
Summary and Conclusion
FactChecks
Tweets
References
Chapter 4: The Role of Fact-Checking in Political Argumentation
Conflicting Truth Claims on Empirical Circumstances: Schools Funding
Contesting the Empirical Circumstances: Schools Funding at a Record High or Deepest Cuts in Two Decades
Additional Contextual Empirical Circumstances: Evidence of Impact of Schools Funding on Educational Standards
Attribution of Blame: Security and Policing
A New Perspective on Empirical Circumstances: More Contentious Claims About the Impact of Cuts to Police
Summary
Predicting the Impact of Proposed Policy: Negotiating Brexit
Clarifying the Parties’ Actual Brexit Positions
Counter-Arguments—Potential Negative Outcomes of the Claim for Action
Leadership Qualities and Being an Effective Negotiator
Conclusion
FactChecks
Tweets
References
Chapter 5: Conclusion
Fact-Checking as Explanatory Journalism
Evaluating Political Arguments
Fact-Checking Social Facts: Descriptive and Persuasive Definitions
Fact-Checking the Future: The Problems with Predictions
Fact-Checking Personality Politics: Measures of Credibility and the Heuristic of Trust
Effectiveness of Fact-Checking: Motivated Reasoning and Reasonable Disagreement
References
Index


Fact-Checking Journalism and Political Argumentation: A British Perspective

Jen Birks
Department of Cultural, Media & Visual Studies, University of Nottingham, University Park, Nottingham, Nottinghamshire, UK

ISBN 978-3-030-30572-7    ISBN 978-3-030-30573-4 (eBook)
https://doi.org/10.1007/978-3-030-30573-4

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2019

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover image: Pattern © John Rawsterne/patternhead.com

This Palgrave Pivot imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

To Martyn

Acknowledgements

This project started with an interest in the debate around the European Union (EU) referendum, so I suppose I should thank David Cameron for ensuring we live in ‘interesting times,’ but I won’t. My initial thoughts on fact-checking journalism were exercised at Stuart Price’s Disruptive Events symposium at De Montfort University in December 2016, and thereafter at various other excellent events, including Mark Wheeler and Petros Iosifidis’ Trump, Brexit and Corbyn: Social Media, Authenticity and Post-Truth (City University, May 2018) and Cumberland Lodge’s conference, The Politics of (Post) Truth (December 2018). I extend my thanks to the organisers for their kind invitations and for taking on the onerous task of organising academics. At those and other conferences (notably PSA 2017 and MeCCSA 2019), I benefitted from erudite comments from, and fruitful discussion with, many friends and colleagues too numerous to list, but special mention goes to James Morrison for insisting that I should write a book on this without delay, Darren Lilleker for services to robust debate at Cumberland Lodge, Karin Wahl-Jorgensen for sage advice on very many occasions, and Einar Thorsen, Dan Jackson and Dominic Wring for giving me a platform for the first blush analysis in their splendid Election Analysis report. I’m also grateful to the marvellous peer reviewers and blurb writers for taking time out of their summer, especially Lucas Graves in mid-Atlantic transition. Mala Sanghera-Warren at Palgrave has also made the whole process as painless as possible, with great patience and professionalism.

I am indebted to Cathy Johnson for getting my sabbatical application considered in complicated and frustrating circumstances. More than anyone, perhaps, this book would not exist without her prompt and merciful intervention.

I would also like to recognise the contribution of the climbing crew for timely distractions and good catches, especially the Wednesday regulars—in no particular order: Alice Chapman, Anthony McCourt, Oli Grice-Jackson, Danny Krall, Nat Johnson and Sam and Jono Hatton, plus Jenny Mansfield, Alice Rutter and Joe Strellis for good measure. Thanks also to Daisuke Sherwood Birks, aka The Fuzzball, aka Thunderpaws, aka Lord Fluffington of Mapperley, The First of His Name, Slayer of Elastic Bands, for adorable obstructions that were at least as calming as they were infuriating.

Finally, but most importantly, thanks to Martyn Wells, who has already enjoyed an acknowledgement for his witticisms from a properly famous writer (props, Mhairi McFarlane), but just as importantly, contributed to this book by being a brilliant sounding board and never letting me forget the value of clear writing, direct syntax and plain old Anglo-Saxon words. I hope you will forgive my many lapses—I am, after all, still an academic. Amongst other services to the production of this book, Martyn also provided loving support, the finest curries outside of the kitchen of Anjum herself, and general sanity maintenance. And thanks too for climbing all those stairs to the study in search of kisses or proffering tea—if nothing else, this book has given you a quad workout. You’re welcome.

Contents

1 Introduction
2 Objectivity and Interpretation in Fact-Checking Journalism
3 Fact-Checking Claims, Policies and Parties
4 The Role of Fact-Checking in Political Argumentation
5 Conclusion

Index

List of Tables

Table 2.1  Number of articles in the fact-check sample by fact-checker
Table 2.2  Number of tweets in the Twitter sample by fact-checker
Table 3.1  The distribution of verdicts in full fact-check articles on claims by source of claim
Table 3.2  The distribution of verdicts in full fact-checks on party policies and government performance
Table 3.3  Average engagement with tweets by party claim and valence of verdict

CHAPTER 1

Introduction

Abstract  This chapter sets out the social and political context within which fact-checking has developed. Recent controversies have raised concerns about propagandists and partisans spreading misinformation and disinformation online, and some journalists have blamed politicians for generating a culture of post-truth politics and publics for embracing it. In contrast, the fact-checking movement recognises problems within journalism itself. This chapter locates concerns about political debate within wider shifts in news resourcing, journalistic ideology and convention, and public trust in mainstream media, situating the UK in a transnational comparative context. It introduces British fact-checking in comparison with the US organisations that inspired them and the wider international movement, and sets out the political context of the 2017 snap general election that forms the case study for this book.

Keywords  Post-truth • Misinformation • Trust • Journalism • Debate • Fact-checking

In recent years, long-standing concerns over the standard of public political debate and public engagement have escalated. Commentators in the US and the UK have perceived a cultural shift, popularly labelled ‘post-truth politics,’ whereby the truth matters less than ‘truthiness’ (a term attributed to US satirist Stephen Colbert)—the things you already believe
or wish to be true. However, it remains to be seen whether the Trump presidency represents a new era in American politics or a temporary anomaly, and we should certainly be cautious about generalising this trend across the Atlantic. Several British journalists have written books on the topic, with the notorious falsehood from the EU referendum Leave campaign that the UK sends £350m to the European Union (EU) every week prominent in their case for a British post-truth politics. However, this misrepresentation of statistics has a long history in the UK, as elsewhere, and is not in the same realm as Trump’s ‘bullshitting’ and ‘alternative facts’—that is, having a complete disregard for the truth, and outright making things up. Nonetheless, reservations about the novelty of problems with the facticity of political communication, and especially news reporting, should not lead us to be complacent about those problems.

As a result of several decades of increasingly sophisticated professionalised political communication, alongside cuts to news budgets, journalists have found themselves more and more dependent on politicians’ public relations (PR) professionals, or ‘spin doctors,’ and have resorted to hyperadversarial ‘gotcha’ journalism to compensate for this loss of control by leaping on gaffes, splits and u-turns (Blumler and Gurevitch 1995). But as valid as this form of account-holding might be (Schudson 2008), it does little to inform the public on substantive policy issues. Perhaps unsurprisingly, journalists’ own attention is focused on the threat of democratised communication via social media and other digital technologies (such as deepfakes), and they see professional journalism as part of the solution, rather than part of the problem. There is certainly a case for arguing against the techno-optimism of the previous decade, in which scholars such as Clay Shirky (2009) argued that we need not worry about the decline of journalism because everyone is now a journalist, but that simply reinforces the importance of strengthening journalism in the face of budget cuts that produce churnalism—a light rewriting of press releases and other puffery. Of course, editorial distortion is also a key factor: research has shown that misinformation about British politics circulated on social media originates primarily in the tabloid press (Chadwick et al. 2018).

Not unrelatedly, there has been a consistent decline in public trust in both established politics and mainstream media (Edelman 2019; Newman et al. 2018). However, whilst journalists worry that audiences are being wilfully stupid in trusting social media, transnational surveys indicate that social media is trusted even less as a source of news than the mainstream media. Indeed, it could well be that rising media literacy has left citizens rationally sceptical and distrustful, but with no rational means of distinguishing fact from fiction and distortion other than instinct, partisan allegiances and other heuristics. Furthermore, in an age of personalised politics, we are encouraged to rely on heuristics such as trust to select political leaders. Politicians and journalists alike assume that voters are not interested in policy and bored by details, but they cannot then blame voters for voting on a less rational basis.

A key heuristic for audiences is consistency with existing beliefs. Again, this is not a recent development—the psychological notion of reducing cognitive dissonance (Festinger 1957) is over 60 years old, and the first media reception research found that media coverage of election campaigns “reinforces more than it converts” (Berelson et al. 1954). At a time when there was concern about assumed strong media effects, it was reassuring to find that the audience (or at least, an engaged minority that was paying attention at all) were not passive recipients of media messages, but in recent years political polarisation and the resurgence of populism across Europe and North America, as well as Latin America, have generated panic about that same motivated reasoning. However, reason, and especially political conviction, cannot be separated in any meaningful way from values (Fairclough and Fairclough 2012), and there is no reason why strong political convictions should impede rational engagement with well-evidenced facts.

In this context, fact-checking journalism is a timely and optimistic development, and an idea that has caught on around the world (Graves 2016; Graves and Cherubini 2016; Singer 2018). It is not a panacea, however, and its contribution to reasoned debate, and its effectiveness, will vary considerably from country to country. Nonetheless, detailed analysis of the practice is overwhelmingly focused on the US, where the practice originated (Graves 2016; Graves et al. 2016; Uscinski and Butler 2013; Amazeen 2015, 2016; Amazeen et al. 2015; Gottfried et al. 2013; Fridkin et al. 2015; Lowrey 2017; Nyhan et al. 2013; Nyhan et al. 2019), although studies have been conducted on France (Barrera et al. 2017) and Africa (Cheruiyot and Ferrer-Conill 2018), but not on the UK, even though Channel 4’s FactCheck was among the first imitators of FactCheck.org. This study therefore aims to supplement that valuable existing literature with a British perspective, by closely examining the election output of the three main national fact-checkers: Channel 4 FactCheck, BBC Reality Check and independent fact-checker Full Fact. It takes as a case study the 2017 snap election, called after a change of leader at the helm of the governing Conservative Party following the EU referendum vote, and a perceived opportunity for new leader Theresa May to increase her majority. This election is particularly interesting in terms of fact-checking, because May sought to contest it, and expected to win it handsomely, on personal credibility, but lost her majority, in no small part on policy. The rest of the current chapter will set out the concerns about the quality of political debate in the UK since the Brexit vote, before elaborating on the emergence of British fact-checking journalism and the political context for this research.

Post-Truth Politics?

The claim that the UK sends £350m per week to the EU has gained notoriety, alongside leave-supporting MP Michael Gove’s controversial assertion that the British public had “had enough of [unreliable economic] experts”. This figure was the gross UK contribution, before the rebate (after which it would be £276m, according to Reality Check) and subsidies received by the UK (so the effective figure is around £161m, net). However, the authors of the British post-truth books acknowledge that even the more representative figure was damaging, and that the Remain campaign chose to contest the vote on other grounds rather than contest the claim (Ball 2017; Davis 2017; d’Ancona 2017). Since the correction was not advanced by a prominent public figure, most utterances of the claim went unchallenged.

These authors largely pin blame, however, not on the campaigns or the media reporting, but on the public. BBC journalist Evan Davis (2017: loc 4152) cites opinion polls showing that more people thought the claim true than false, even though “the media had challenged the claim on numerous occasions.” He doesn’t substantiate that latter claim, or give examples. A Nexis search of UK national newspapers during the campaign period indicates that the pro-Remain newspapers, the Guardian and Independent, debunked it, and other broadsheets at least reported disagreement over the claim, but it was reported uncritically by the pro-Leave Daily Mail and Express. However, it went unchallenged most often in the broadcast media, because journalists accepted the campaign’s photo opportunities, agreeing to interview Boris Johnson and others in front of the bus emblazoned with the claim. It is hardly surprising if people interpreted this complicity with campaign misinformation as validation.

However, Davis concludes, rather unfairly, that “it will always be hard to fight those who peddle untruths if the public has a predisposition to believe them and makes no effort at all to verify the reliability of information fed to them” (2017: loc 4166). This responsibilises the audience to fact-check the news bulletins by actively seeking out niche content such as BBC Radio 4’s statistics programme More or Less, or indeed the online fact-checking services, rather than accepting journalists’ own responsibility to integrate fact-checking findings into mainstream reporting. Davis argues that politicians are merely giving people what they want, and their “bullshit” would not be successful if we didn’t leap to the heuristic judgements they invite us to make. In contrast, James Ball was critical of fact-checkers for taking the “bait” (2017: loc 662) because the counter-argument was “terrible” (loc 670). Certainly the correct number was not strategically helpful for campaigners, but the notion that a correction based on rebates and subsidies is “hard to follow” for voters, and that the counter-argument that slower economic growth would reduce tax receipts by more than would be saved in EU contributions is “wonkish” (Ball 2017: loc 670–95), is rather patronising.

Journalists, as much as political PR professionals, have assumed an audience more interested in politics as soap opera than as a route to the resolution of social problems, an assumption that becomes a self-fulfilling prophecy. Both sides have also been preoccupied with a struggle for the upper hand, with communication professionals stage-managing every encounter, leaving overworked and desk-bound journalists both dependent on the page-ready copy supplied and vigilant for any chink in the PR armour. In the mid-1990s, Blumler and Gurevitch (1995) gave a clear-sighted account of this “crisis of public communication,” but expressed hope that the increasing involvement of the audience in news reporting, especially participation in televised political debate, would bring public concerns back to the fore. However, in the age of social media the tsunami of claims and counter-claims—including anti-vaxxer, flat-earther and other conspiracy theories—gives the impression that all facts are contested (Waisbord 2018). Fact-checking journalism represents a valiant attempt to take voters seriously, and to tell them which claims are better substantiated and which are misleading, rather than berate them for their credulity or their cynical disengagement.
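To make the scale of the contested figures concrete, the following minimal Python sketch works through the rebate-and-subsidies arithmetic using only the approximate weekly amounts quoted above; the rebate and subsidy values are back-derived from those amounts for illustration and are not official statistics.

# Illustrative arithmetic for the contested £350m-a-week claim, using the
# approximate figures quoted in this chapter (£ millions per week).
gross_contribution = 350  # headline Leave-campaign figure (pre-rebate)
rebate = 74               # back-derived from Reality Check's post-rebate £276m
post_rebate = gross_contribution - rebate             # 276
subsidies_received = 115  # back-derived from the ~£161m net figure cited above
net_contribution = post_rebate - subsidies_received   # 161

print(f"post-rebate: £{post_rebate}m/week; net: £{net_contribution}m/week")

As the output shows, the defensible net figure is less than half the headline claim, which is the substance of the correction discussed above.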

The Fact-Checking Movement

What has become a global fact-checking movement originated in the US, with FactCheck.org in 2003, and then the Washington Post’s Fact Checker and Tampa Bay Times’ widely syndicated Politifact in 2007. They were motivated by disenchantment with the dominant mode of ‘objective’ reporting, in which two or more sources were quoted making conflicting truth claims, without any responsibility to help the audience identify which was the better supported claim. This innovation was not without controversy, however, with critics questioning whether journalists are qualified to make such judgements, or even understand where the line lies between fact and informed opinion (Uscinski and Butler 2013; Uscinski 2015), and whether it constituted partisan editorialising to favour one side in a debate (Iannucci and Adair 2017). Nonetheless, this remit caught the imagination of journalists and NGOs around the world; in 2014, the Duke Reporters’ Lab counted 44 fact-checkers globally, and by 2018 that had more than tripled to 149 (Stencel and Griffin 2018). The densest concentrations were in Europe (52) and North America (53), but there were also 22 in Asia and 15 in South America, though only 4 in Africa.

However, the UK’s first fact-checker, Channel 4’s FactCheck, was launched as early as 2005, just two years after FactCheck.org got the ball rolling. For the following election, in 2010, FactCheck started to host its fact-check items on a dedicated blog page. That year, FactCheck was also joined by the BBC’s Reality Check and independent fact-checker Full Fact. Although fact-checking is now an ongoing activity, elections are still a particular focus, with Full Fact having raised £100,263 from almost 2000 individual supporters to fund fact-checking for the 2017 snap election, allowing the organisation to almost treble its staff from 11 to 30.

Initially, FactCheck was incorporated into the broadcast news output, especially when relaunched in 2010 to associate it more closely with one of the presenters, Cathy Newman. However, editors found this challenging because “what makes a good FactCheck may not necessarily make a good package on TV” (FactCheck journalist Alice Tartleton, in Chadwick 2017: 201). Preparing material for broadcast therefore required extra resourcing to re-write items and create graphics to convey data and sources (Mantzarlis 2016). Consequently, news reports no longer regularly reference or even plug the FactCheck blog. Similarly, BBC Reality Check correspondent Chris Morris occasionally appears on news programmes, in the same way that other specialist reporters are asked to provide analysis, but in the sample for this study he made just one appearance, to discuss the growth in food bank use.

In common with much of the world, fact-checking in the UK is, then, primarily an online venture, despite its association with legacy media. In the US, this digital sidelining is argued to mean that politicians “get a relatively free pass on television” (Mantzarlis 2016). Andrew Chadwick (2017: 200) describes fact-checking as part of the hybrid media system, whereby digital output is tied into traditional formats in various ways, but the decoupling described above—and also noted in US fact-checking by Lowrey (2017: 385)—suggests that digital platforms allow legacy media to offer supplementary material aimed at a niche audience, rather than a more radical challenge to dominant assumptions about the general audience’s interest in, and capacity for understanding, policy detail. The audience response to legacy media-related fact-checkers also depends on audiences’ trust in those media outlets, which in turn relates to the outlets’ positioning in a partisan media landscape.

Partisanship and Trust in UK News Media

In the US, Democrat supporters have far more faith in the mainstream news media, and relatedly in fact-checking, than Republicans, and that gap has been widening since Trump’s election (Newman et al. 2018). This left/right divide is also typical, to a lesser degree, in Northern Europe, but in the UK there are some indications that it is the left that may be more distrustful, according to research by the Pew Research Center (2018), though the 3% gap is not statistically significant. This reflects the editorial leaning of the UK’s partisan press, with all the largest-circulating newspapers typically backing the Conservative Party—though some briefly switched to ‘New’ Labour when it shifted to the centre-left in the late 1990s, they have been particularly strong opponents of Jeremy Corbyn’s leadership of the party—and perhaps also the tendency of broadcast news to follow the press agenda (Cushion et al. 2018).

Although the UK is similar to other Northern European countries (Scandinavia, plus Germany and the Netherlands) in having much of the population share the same main source of news—almost half, among both left and right, depend mainly on the BBC, according to the Pew Research Center study (2018: 14)—overall levels of trust in media are far closer to those of Southern European countries, just above those of Spain and Italy. Despite the dominance of the BBC, overall media consumption is widely polarised, to the point that the “magnitude of those differences in the UK looks similar to what occurs in the more ideologically divided southern countries studied” (Pew Research Center 2018: 14).

However, the Reuters Institute found that people broadly trust the news media that they use more than news media in general, but “a natural consequence of seeing more sources when in aggregated environments,” such as social media and search engine results, is that the presence of a partisan press may lower trust because of exposure to divergent views, especially those with which readers disagree (Newman et al. 2018: 16). Accordingly, breaking trust down by news brand (Newman et al. 2018: 42), people who position themselves on the right trust the right-wing newspapers (the Daily Telegraph, The Times, Daily Mail and Sun) more than people who identify as on the left, whilst the reverse is true of the left-leaning newspapers, the Guardian, Independent and Daily Mirror. Even conservatives are fairly distrustful of the divisive ‘red-top’ tabloid The Sun, however, trusting it less than they do the left-leaning Guardian. Among the broadcast media, right-wingers slightly favour Sky News, whilst both broadcasters with dedicated fact-checking operations, Channel 4 News and BBC News, are significantly more trusted by those on the left, though not actively distrusted on the right (both just under 6 on a 10-point scale of trustworthiness). We might therefore expect to see some differences between Labour and Conservative partisans in their engagement with these news organisations’ fact-checkers on social media, especially in the context of a divisive and hotly contested election campaign.

The British Political Context and the 2017 Snap Election

British politics have been in turmoil since the vote to leave the EU, now universally known by the ugly portmanteau ‘Brexit.’ At the time of writing, many politicians are torn between a conviction to honour the result of the vote, though it was not legally binding, and a belief that to do so would be damaging to the country. Even those who agree that the UK must leave the EU do not agree on the terms that should be sought. However, these divisions do not cleave to party lines, and splits have been exposed within the two main parties that were superficially papered over for the election.

The Conservative Party encompasses both free-market and free-movement neo-liberals on the one hand, and anti-immigration social conservatives on the other, whose differences have been reflected in long-standing and intermittently exercised divisions on EU membership, which have inevitably been exacerbated by Brexit. The Labour Party is also a broad church, with a significant Blairite faction that supported the shift to the political centre ground under Tony Blair’s leadership as ideologically sound, or at least pragmatic after a bruisingly long period in opposition. However, after defeat in 2015, the party grassroots voted for a more traditionally left-wing social democrat (and long-standing backbench irritant to the party leadership), Jeremy Corbyn, to lead the party. As a past critic of the neo-liberal economic policies of the EU, Corbyn was criticised for lukewarm support for the Remain campaign, and, like May, has struggled to unite his party on the issue since the result.

Despite divisions on both sides, Conservative leader Theresa May sought to contest the election on that ground, framing it as a political context in which it was important to have a safe pair of hands, and not to risk a prime minister with no ministerial experience, and, more broadly, as a contest of political credibility, since Corbyn had received a drubbing from the press (Cammaerts et al. 2017) and was assumed to be unpopular with the electorate. Unlike the US, Britain has a parliamentary democracy, where elections are technically focused on parties rather than prospective prime ministers, but in practice leaders do tend to dominate election campaigns, and this was particularly pronounced in 2017. The Labour and Conservative leaders together made up over half of all politicians’ media appearances over the course of the campaign (Deacon et al. 2019: 31), compared to less than a third in 2015. In part this was because a brief period of multiparty democracy—following the surprise 2010 result, when a hung parliament produced a coalition government (Conservative and Liberal Democrat), and expectations of another hung parliament in 2015—had passed, with a landslide victory predicted for the Conservatives. With a loosening of Ofcom regulations about the extent of coverage each party merited, smaller parties therefore received less coverage, although the Liberal Democrat and Scottish National Party leaders were still, respectively, the third and fourth most prominent politicians.

In common with many countries in western Europe, the UK has experienced a rise in populism, mostly on the right, with the inequities of neo-liberal capitalism blamed principally on immigration, which was thought to be one driving factor behind the Brexit vote. However, on the left there was a groundswell of popular opposition to austerity, and supporters at Labour rallies appeared enthused by the anti-austerity policies set out in the manifesto, whilst the Conservative campaign was left exposed for its lack of policy specifics when the personal credibility strategy faltered (Cook 2019: 154). This suggested return to policy, one of the things that the media got wrong, alongside their steadfast assumption that Corbyn was unelectable, makes this an interesting election in which to examine the role played by fact-checkers in the media debate.

Outline of Chapters

Chapter 2 will explore the literature on fact-checking in the US and elsewhere, with a focus on the epistemological concerns around what can be considered a checkable fact, ethical concerns about partisan bias in the act of adjudication, and concerns about the effectiveness of fact-checking in the face of partisans’ motivated reasoning. Chapter 3 will then analyse the fact-checks published during the election campaign, and their promotion and the engagement they received on Twitter, through the three lenses of epistemology, ethics and effectiveness. It will illuminate how British fact-checkers deal with ambiguity and uncertainty when adjudicating on truth claims that aren’t strictly empirical, such as predictions, the basis on which the active audience on Twitter challenge fact-checking verdicts, and the role of partisanship. In Chapter 4, three key policy issues are examined in detail to analyse the role that fact-checks play in mediated policy debates, and the extent to which conflicts in wider arguments are disambiguated through verification of specific claims. Chapter 5 summarises the current position of fact-checking within political argumentation and suggests how it could be more effective in the future.
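Because Chapter 3 rests on cross-tabulating fact-check verdicts with Twitter engagement (compare Table 3.3 in the list of tables), a minimal Python sketch of that kind of tabulation is given below; the records and field names are invented placeholders, not the study’s data or coding scheme.

# Sketch of the cross-tabulation behind a table such as "average engagement
# with tweets by party claim and valence of verdict" (cf. Table 3.3).
# All records below are invented placeholders.
import pandas as pd

tweets = pd.DataFrame([
    # party whose claim was checked, valence of the verdict, retweets + likes
    {"party": "Conservative", "valence": "negative", "engagement": 310},
    {"party": "Conservative", "valence": "positive", "engagement": 120},
    {"party": "Labour",       "valence": "negative", "engagement": 250},
    {"party": "Labour",       "valence": "positive", "engagement": 180},
    {"party": "Labour",       "valence": "negative", "engagement": 90},
])

# Mean engagement for each party/valence combination
table = tweets.groupby(["party", "valence"])["engagement"].mean().unstack()
print(table)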

References

Amazeen, Michelle A. 2015. ‘Revisiting the Epistemology of Fact-checking’, Critical Review, 27: 1–22.
Amazeen, Michelle A. 2016. ‘Checking the Fact-checkers in 2008: Predicting Political Ad Scrutiny and Assessing Consistency’, Journal of Political Marketing, 15: 433–64.
Amazeen, Michelle A., Emily Thorson, Ashley Muddiman, and Lucas Graves. 2015. ‘A Comparison of Correction Formats: The Effectiveness and Effects of Rating Scale Versus Contextual Corrections on Misinformation’, American Press Institute, Accessed 21 December 2018. http://www.americanpressinstitute.org/wp-content/uploads/2015/04/The-Effectiveness-of-Rating-Scales.pdf.
Ball, James. 2017. Post-Truth: How Bullshit Conquered the World (Biteback Publishing: London).
Barrera, Oscar, Sergei Guriev, Emeric Henry, and Ekaterina V. Zhuravskaya. 2017. Facts, Alternative Facts and Fact Checking in Times of Post-truth Politics (Centre for Economic Policy Research).
Berelson, Bernard R., Paul F. Lazarsfeld, and William N. McPhee. 1954. Voting: A Study of Opinion Formation in a Presidential Campaign (University of Chicago Press: Chicago, IL).
Blumler, Jay, and Michael Gurevitch. 1995. The Crisis of Public Communication (Psychology Press: London).
Cammaerts, Bart, Brooks DeCillia, and César Jimenez-Martínez. 2017. ‘Journalistic Transgressions in the Representation of Jeremy Corbyn: From Watchdog to Attackdog’, Journalism. https://doi.org/10.1177/1464884917734055.
Chadwick, Andrew. 2017. The Hybrid Media System: Politics and Power (Oxford University Press: Oxford).
Chadwick, Andrew, Cristian Vaccari, and Ben O’Loughlin. 2018. ‘Do Tabloids Poison the Well of Social Media? Explaining Democratically Dysfunctional News Sharing’, New Media & Society, 20: 4255–74.
Cheruiyot, David, and Raul Ferrer-Conill. 2018. ‘“Fact-checking Africa”: Epistemologies, Data and the Expansion of Journalistic Discourse’, Digital Journalism, 6: 964–75.
Cook, Greg. 2019. ‘The Labour Campaign.’ In Dominic Wring, Roger Mortimore and Simon Atkinson (eds.), Political Communication in Britain (Palgrave Macmillan: London).
Cushion, Stephen, Allaina Kilby, Richard Thomas, Marina Morani, and Richard Sambrook. 2018. ‘Newspapers, Impartiality and Television News’, Journalism Studies, 19: 162–81.
d’Ancona, Matthew. 2017. Post-Truth: The New War on Truth and How to Fight Back (Random House: New York, NY).
Davis, Evan. 2017. Post-Truth: Why We Have Reached Peak Bullshit and What We Can Do About It (Little, Brown: London).
Deacon, David, John Downey, David Smith, James Stanyer, and Dominic Wring. 2019. ‘A Tale of Two Parties: Press and Television Coverage of the Campaign.’ In Dominic Wring, Roger Mortimore and Simon Atkinson (eds.), Political Communication in Britain (Palgrave Macmillan: London).
Edelman. 2019. ‘Edelman Trust Barometer Report’, Accessed 30 May 2019. https://www.edelman.com/trust-barometer.
Fairclough, Isabela, and Norman Fairclough. 2012. Political Discourse Analysis (Routledge: Abingdon).
Festinger, Leon. 1957. A Theory of Cognitive Dissonance (Stanford University Press: Stanford, CA).
Fridkin, Kim, Patrick J. Kenney, and Amanda Wintersieck. 2015. ‘Liar, Liar, Pants on Fire: How Fact-checking Influences Citizens’ Reactions to Negative Advertising’, Political Communication, 32: 127–51.
Gottfried, Jeffrey A., Bruce W. Hardy, Kenneth M. Winneg, and Kathleen Hall Jamieson. 2013. ‘Did Fact Checking Matter in the 2012 Presidential Campaign?’, American Behavioral Scientist, 57: 1558–67.
Graves, Lucas. 2016. Deciding What’s True: The Rise of Political Fact-checking in American Journalism (Columbia University Press: New York, NY).
Graves, Lucas, and Federica Cherubini. 2016. The Rise of Fact-checking Sites in Europe (Reuters Institute for the Study of Journalism: Oxford).
Graves, Lucas, Brendan Nyhan, and Jason Reifler. 2016. ‘Understanding Innovations in Journalistic Practice: A Field Experiment Examining Motivations for Fact-checking’, Journal of Communication, 66: 102–38.
Iannucci, Rebecca, and Bill Adair. 2017. Heroes or Hacks: The Partisan Divide over Fact-checking (Duke Reporters’ Lab, Sanford School of Public Policy: Durham, NC).
Lowrey, Wilson. 2017. ‘The Emergence and Development of News Fact-checking Sites’, Journalism Studies, 18: 376–94.
Mantzarlis, Alexios. 2016. ‘Can the Worldwide Boom in Digital Fact-checking Make the Leap to TV?’, Poynter, Accessed 30 May 2019. https://www.poynter.org/fact-checking/2016/can-the-worldwide-boom-in-digital-fact-checking-make-the-leap-to-tv/.
Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos, David A. L. Levy, and Rasmus Kleis Nielsen. 2018. ‘Reuters Institute Digital News Report 2018’, Reuters Institute for the Study of Journalism, Accessed 21 December 2018. http://www.digitalnewsreport.org/.
Nyhan, Brendan, Ethan Porter, Jason Reifler, and Thomas J. Wood. 2019. ‘Taking Fact-checks Literally But Not Seriously? The Effects of Journalistic Fact-checking on Factual Beliefs and Candidate Favorability’, Political Behavior: 1–22.
Nyhan, Brendan, Jason Reifler, and Peter A. Ubel. 2013. ‘The Hazards of Correcting Myths About Health Care Reform’, Medical Care, 51: 127–32.
Pew Research Center. 2018. In Western Europe, Public Attitudes Toward News Media More Divided by Populist Views Than Left-Right Ideology (Pew Research Center: Washington, DC).
Schudson, Michael. 2008. Why Democracies Need an Unlovable Press (Polity: London).
Shirky, Clay. 2009. Here Comes Everybody: How Change Happens When People Come Together (Penguin UK: London).
Singer, Jane B. 2018. ‘Fact-checkers as Entrepreneurs: Scalability and Sustainability for a New Form of Watchdog Journalism’, Journalism Practice, 12: 1070–80.
Stencel, Mark, and Riley Griffin. 2018. ‘Fact-checking Triples over Four Years’, Duke Reporters’ Lab, Sanford School of Public Policy. https://reporterslab.org/tag/fact-checking-census/.
Uscinski, Joseph E. 2015. ‘The Epistemology of Fact Checking (Is Still Naïve): Rejoinder to Amazeen’, Critical Review, 27: 243–52.
Uscinski, Joseph E., and Ryden W. Butler. 2013. ‘The Epistemology of Fact Checking’, Critical Review, 25: 162–80.
Waisbord, Silvio. 2018. ‘Truth is What Happens to News: On Journalism, Fake News, and Post-Truth’, Journalism Studies, 19: 1866–78.

CHAPTER 2

Objectivity and Interpretation in Fact-Checking Journalism

Abstract  This chapter outlines the tensions between fact-checking journalism and the dominant epistemological norms and practices of ‘objective’ journalism to explain why elements of fact-checking practice can be controversial. Whilst scientific objectivity is associated with facticity or truth, as a journalistic norm it is understood only as avoiding subjective bias, and operationalised through direct quotation, distancing the journalistic voice. Verification is therefore typically limited to names, dates, places and accurate transcription. Fact-checking, in contrast, involves verifying the substance of sources’ claims, and can therefore be criticised as too interpretive, subjective and biased, criticisms that aim to draw narrow bounds around the legitimate ground for fact-checking.

Keywords  Fact-checking • Journalism • Objectivity • Interpretation • Verification • Epistemology

On the face of it, facts would seem to be journalists’ stock in trade. A Pew Research Center survey found that “83 percent of [American] voters believe fact-checking is a responsibility of the news media” (PEN America 2017), and journalists, too, see their job as centrally a “discipline of verification” (Shapiro et al. 2013). However, fact-checking tends to be very limited in practice in mainstream reporting. Indeed, fact-checking is a challenge to the dominant practices of journalistic objectivity, which are
focused more on the avoidance of subjectivity than the pursuit of truth. This chapter will therefore start by reviewing those professional norms, and their relationship to verification and adjudication between competing claims, before going on to examine the specific critiques of fact-checking journalism as a separate genre. Fact-checking is contested as legitimate journalistic practice in three broad ways that will be explored in turn: empirical challenges, ethical challenges and dismissals of fact-checking as an ineffective way of tackling ‘post-truth’ politics. Finally, the research questions and methods used in this study will be outlined.

Verification and Objectivity—Professional Norms of Journalistic ‘Truth-telling’

Comparative transnational research has found that codes of journalism ethics across the world cite ‘objectivity,’ ‘accuracy’ and ‘truth’ as core professional norms (Hafez 2002: 228; Hanitzsch et al. 2011). Despite near-universal consensus on the abstract concepts, however, examination of journalists’ interpretation of these norms across 18 countries shows variation, especially in terms of whether a commitment to facticity excludes or involves judgement and interpretation (Hanitzsch et al. 2011: 282–3). This was particularly the case in relation to adjudication between opposing claims:

Making clear which side in a dispute has the better position tends to be disapproved by journalists in the west, but Turkish journalists are even more averse to this aspect of journalism culture. (Hanitzsch et al. 2011: 283)

This form of adjudication scored a mean of 2.61 on a 5-point scale, the lowest of eight measures of epistemologies and empiricism, though with the widest standard deviation, indicating a broad spread of answers. The highest agreement (4.42) was with the statement “I make claims only if they are substantiated by hard evidence and reliable sources,” but even claims supported by evidence can be contested by interested and ideologically motivated parties, so reticence to distinguish the better position could mean very rarely making definitive statements. Interestingly, in the CoPlot visualisation of this data (Hanitzsch et al. 2011: 284), the vector for adjudication was closely aligned to that for “I always remain strictly impartial,” indicating an affinity (the same respondents frequently agreeing to both). The vector for “I think that journalists
can depict reality as it really is” is pointed in the diametrically opposite direction, indicating greatest disagreement between those that favour adjudication and those that elide the political contestation of reality altogether (or assume that the ‘reality’ is that there is no agreed truth). Most tellingly, unlike the other dimensions of journalism ethics explored in Hanitzsch et al.’s study, epistemological and empirical beliefs did not map on to national cultural differences, but appeared to be personal differences between individual journalists. Of course, it is likely that journalists in non-democratic countries will understand ‘reliable sources’ and ‘verified information’ rather differently from those in liberal democracies with freedom of information (FOI) rights and reliable, quasi-autonomous national statistics agencies, but this variation is also consistent with newsroom observations that found that journalists do not have a clear and consistent philosophical epistemology, but rather a set of practices that they believe to operationalise ‘objectivity’ (Tuchman 1978). If anything, western journalists have interpreted objectivity as the removal of the journalist’s subjective voice, and therefore reducible to attribution to sources (Tuchman 1978; Gans 2004; Schlesinger and Tumber 1994), whilst verification has been limited to checking the small facts like names, dates and places (Shapiro et al. 2013: 668). Doug Underwood (1988: 169) exemplifies this response, arguing against academic critiques on the basis that “scholars always seem to get such satisfaction in exploding myths, such as the belief in objectivity, that most journalists long ago discarded in favour of more realistic standards like ‘fairness’ and ‘impartiality.’” In as far as journalists are aware of the critique of objectivity, then, it is understood as having lost faith in an empiricist understanding of objectivity as ‘value-free facticity,’ but Juan Ramón Muñoz-Torres (2012: 569) argues that journalism scholars and practitioners have swung too far in the other direction by embracing a relativist ‘ethical’ interpretation that emphasises balance and fairness between equally valid truth claims. Muñoz-Torres (2012: 577) argues that truth cannot be reduced to either empirical observation or subjective belief. This opens the way, philosophically at least, for an element of interpretation and judgement in adjudicating on facts and their role in political arguments about truth. One of the pioneers of what has become the fact-checking movement, Michael Dobbs (2012: 3), founder of the Washington Post Fact Checker, has specifically positioned fact-checking as a correction to the interpretation of objectivity as balancing both sides’ truth claims—often derided as
‘he said, she said’ journalism—and as a form of reporting that conversely “sees the reporter as a ‘truth seeker,’” without resorting to a reductive notion of ‘reality’ as self-evident.

While the journalist has the duty to quote all sides accurately, he is not required to lend equal credibility to all the competing voices. He uses his judgment, and experience, to sift through the hubbub of different opinions, discarding some and highlighting others. (Dobbs 2012: 3)

This is not unique to fact-checking journalism, of course, and can be seen as part of the same tradition as investigative journalism, drawing on well-established methods for cross-checking, triangulating and otherwise corroborating sources (Ettema and Glasser 1998: 147), but on a smaller scale (Birks 2019). Both practices seek to hold power to account, but investigative journalism typically highlights problems encountered by the powerless and draws heavily on witness accounts to make a moral case for a call for action, whilst fact-checking follows the mainstream political agenda more closely, and aims to hold politicians to account more narrowly for what they say. In some ways, then, fact-checking can be interpreted as an expression of journalism’s abstract ideals, but it jars with conventional practice and draws criticisms from both the empiricist and ethical standpoints. Positivists make an empirical critique that journalists are not qualified to make categorical judgements and often do so erroneously. On the other hand, political critics, largely right-wingers, make an ethical critique, equating objectivity with balance and perceiving fact-checking as biased towards their opponents.

Empirical Critiques of Fact-Checking

Empirical critiques hinge on the definition of what constitutes a checkable fact. Graves’ (2016: 91) detailed study observed that US fact-checkers aimed to select factually verifiable claims and avoided subjective opinion, but he identified a grey area that includes expert judgement and scientific paradigms, especially in economics. Uscinski and Butler (2013) take a far narrower view of checkable facts, criticising fact-checkers for erroneously selecting causal relationships, forecasts and definitions as facts that could be designated true or false. They take the view that fact-checking necessarily implies this binary true/false judgement and are withering about mixed
judgements such as ‘half-true’ as muddled and reductive. In doing so, they focus on rating scales as the defining characteristic of fact-checking verdicts.

20 

J. BIRKS

s­ ignificant in the future, as Lowrey (2017: 389) found that fact-checking mobile apps were being launched that “were less likely to offer narrative information.” It is also possible that fact-checks shared and promoted on social media could communicate the rating at the expense of the reasoning and context. That reasoning and context is particularly important where claims examined are not straightforward true or false empirical facts. There were three specific kinds of claim that Uscinski and Butler (2013) argued should not be selected as facts: causal relationships, future predictions and definitions. They argue that fact-checkers conflate correlation with causality because “causal relationships cannot be verified by ‘looking up the answer,’ which is the typical methodology used by fact checkers” (Uscinski and Butler 2013: 169). They apply scientific reasoning to conclude that causal questions can only be answered through complex statistical analysis, but this is too exacting a standard to apply to journalism and political communication. A better standard would be practical argumentation, which pertains to making decisions about the best course of action. The argument scheme for correlation to causation in practical argumentation merely requires that the alternate explanations have been examined, such as coincidence or a third explanatory factor (Walton 2006: 103). Journalistic fact-checkers can also draw on research by experts in the field who can challenge or substantiate claims of cause and effect based on their more rigorous research (Graves 2016: 128–9). Future predictions based on statistical trends or modelling are theoretical rather than empirical, in that the outcome is not an observable fact until it has come about (Uscinski and Butler 2013). Uscinski and Butler accept that these claims are generally based on the informed opinions of experts, but argue that even if experts disagree with the prediction, it doesn’t make it false. However, again, in practical argumentation theory, an argument to expertise can be made subject to critical questioning about the expert’s credibility, relevance of their field of expertise, consistency with others in the field and basis in evidence (Walton 2006: 88–90). Finally, Uscinski and Butler (2013) claim that definitions are not facts— more accurately they are not empirical facts, but we could see them as ‘social facts’ (Fairclough and Fairclough 2012) in that they are broadly recognised. It is reasonable to say that everyone understands, for instance, what a ‘u-turn’ is, as distinct from a ‘clarification,’ and it is possible to say which of those terms most people would apply. Conversely, where a politician puts forward a definition that is not broadly recognised but a

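As a rough illustration of the two-condition comparison implied by experiments such as Amazeen et al.’s, the following minimal Python sketch contrasts correct-interpretation rates with and without a rating-scale icon; the counts are invented for illustration only and are not the study’s data.

# Two-condition comparison of the kind implied by rating-scale experiments.
# All counts below are invented placeholders, not Amazeen et al.'s (2015) data.
from scipy.stats import chi2_contingency

#                 [correct, incorrect] interpretations of the fact-check
contextual_only = [72, 48]   # contextual correction presented as text alone
with_rating     = [75, 45]   # the same correction plus a rating-scale icon

chi2, p, dof, expected = chi2_contingency([contextual_only, with_rating])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A large p-value here would mirror the reported finding that adding the icon
# makes little difference to whether readers interpret the verdict correctly.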
2  OBJECTIVITY AND INTERPRETATION IN FACT-CHECKING JOURNALISM 
However, although simplification is an established news value, and sensible critiques have been made of scientific risk reporting (Allan 2002), a fact-checking verdict can be nuanced and acknowledge ambiguity, and yet still give a reasoned argument for favouring one interpretation over another (though that’s not to say, of course, that it necessarily always does). It is important to recognise that interpretation is not simply the same as opinion, because that effectively reduces politics to an expression of taste or preference. As Graves points out, “Fact-checkers agree that matters of opinion can’t be checked, but they object to partisans who use that label as a license for deceptive claims” (2016: 140–1). This is not a theoretical risk, but a common rhetorical trick—for instance, Donald Trump attempted to circumvent negative fact-checking by semantically expressing a claim as an opinion—“I better say ‘think,’ otherwise they’ll give you a Pinocchio,” he said. “And I don’t like those—I don’t like Pinocchios” (Kessler et al. 2017).


Contra Uscinski and Butler, I argue that selection should be based on what is pragmatically useful for people in making informed decisions about policies and candidates, rather than on what fits a particular epistemological standpoint on what counts as a 'fact.' Uscinski and Butler are not suggesting that there is no way to rationally critique these assertions, nor are they denying that some claims are more persuasive than others, yet the implication of their critique is that journalism should operate as if neither were the case. This book will therefore take a perspective consistent with Graves' (2017) view that fact-checking is concerned with the usefulness of political debate, which does not require idealistic Habermasian standards of disinterested rationalism and consensus, but a more modest standard of engaging reasonably with evidence that Schudson (2008: 101–5) calls 'public reasonableness.' To help assess whether fact-checking journalism improves debate by this measure, the research is informed by argumentation analysis and political discourse analysis (PDA) and, in particular, the critical questions that should be raised for the different kinds of argumentative claim. However, because fact-checking involves interpretive judgement, and because it jars with the conventional operationalisation of the objectivity norm as a relativist balance of claims-as-opinions, the research will also address the more common critique expressed in ethical terms as political and ideological bias.

Ethical Criticisms of Fact-Checking—Bias

Bias can be read into fact-checking in several ways, in terms of both the selection of facts for checking and the interpretation of them. Graves (2016: 88) concluded from his extensive observation of fact-checking in practice that the main US fact-checkers principally use newsworthiness judgements, selecting claims about which they wonder, and therefore expect the audience to wonder, whether they are true. This could favour the most significant claims that underpin a central or contentious policy argument, but it is also likely to produce a disproportionate focus on false and misleading claims, though if a claim sounds unlikely yet is true, then it is also potentially newsworthy.

Uscinski and Butler (2013: 165–6) argue that selection of claims on the basis of newsworthiness represents a flawed 'sampling' strategy. Seeing selection as sampling assumes that the claims are collectively taken to be representative of all political claims made and could be used to compare different parties or candidates. In this sense, there is a reasonable case against Politifact's database of judgements, and their 'scorecard' aggregation of an individual politician's honesty,2 but this is not common practice among fact-checkers and not adopted by any of the main British fact-checkers. A more impressionistic sense of bias can nonetheless prevail.

There are two aspects of claim selection that are criticised for bias—disproportionately checking one party's claims over another's, and giving more negative judgements to one party than another. There may be a case for the former, since there should clearly be a diversity of claims and counter-claims subject to scrutiny, and no party should be exempt from this form of accountability. At the same time, there are reasons other than unconscious ideological bias for selecting slightly more claims from one party or individual politician, such as their being particularly prone to making outlandish claims or unrepentantly repeating a claim already found false or misleading, though of course that shouldn't be allowed to drown out other parts of the debate, including well-supported claims. The notion that fact-checkers should artificially ensure that each party or individual receives an equal proportion of confirmatory and critical verdicts is clearly more problematic and would constitute 'false balance' (Birks 2019). Although the relative proportion of fact-checks can't be reliably used as a basis to compare the honesty or credibility of politicians, there are nonetheless genuine differences between politicians in their relationship to the truth. To 'balance' out the judgements in such cases would involve looking for claims to criticise from a generally honest politician, which could place unwarranted emphasis on minor infractions that do not have much significance to the overall political argument.

However, there is also a non-partisan concern about a bias towards negativity in general. Bad news is more newsworthy than good news, and catching a politician out in a lie is far more of a scoop than confirming their claim to be correct. In recent decades, the media appetite for scandals and gaffes has increased in response to increasing control from the political spin machine, resulting in a condition of what Blumler and Gurevitch (1995) call "hyperadversarialism." This amplification of every misstep and chink in the public relations (PR) armour can lead an already-sceptical public to become cynical about politics in general.

2  For instance, Donald Trump's scorecard: https://www.politifact.com/personalities/donald-trump/.


Critics of fact-checking interpret a negative verdict as a personal attack because of the implication that the subject is being accused of lying, which makes assumptions about motivation (Uscinski and Butler 2013: 171), and therefore about character traits of honesty and sincerity. Amazeen (2015: 7) acknowledges that this is implied in the Washington Post Fact Checker's 'Pinocchio' scale and Politifact's 'Pants on Fire' rating (as in the child's rhyme, 'liar, liar, pants on fire') but points out that others such as FactCheck.org avoid this. Although fact-checking journalists say that they are frustrated with journalistic conventions that generate more heat than light (Graves 2016; Dobbs 2012), a strong bias towards problematic claims, and especially towards active misrepresentation (rather than honest error), could potentially exacerbate distrust in politicians; on the other hand, fact-checking could defuse the personal element of attack politics by reorienting criticism to policy.

It is not only politicians that are suffering public distrust, however—this is a problem afflicting the mainstream media as well. This is most obvious in the US, where President Trump has encouraged his supporters to regard legacy media as 'fake news,' to discredit unfavourable reporting, but research indicates that distrust in news media is widespread across the Anglophone world, Europe and beyond (Newman et al. 2018; Edelman 2019). Critics on both the left and the right accuse journalism of having inherent biases in its professional norms, but for different reasons. The left is more suspicious of the conventions of 'objective' journalism (along with the class interests of media owners) for privileging official sources (Hall et al. 1978; Herman and Chomsky 2002), and for 'balancing' truth claims to generate controversy (Merritt 1995). On the right, meanwhile, critics argue that there are cultural biases in media organisations, and that journalists typically have liberal opinions that unconsciously seep into their selection and framing choices, a critique encapsulated in the complaints about a 'liberal elite' common in both the US and the UK (Domke et al. 2006). Overall, however, recent studies have found that the right were more distrustful than the left:

   Trust was already unevenly distributed in 2016, but post-election we find that those who identify on the left (49%) have almost three times as much trust in the news as those on the right (17%). The left gave their support to newspapers like the Washington Post and New York Times while the right's alienation from mainstream media has become ever more entrenched. (Newman et al. 2018: 18)


They found not only that the right-leaning American public were less trusting of mainstream media (increasingly so, with trust falling from 23% in 2016 to 17% in 2018), but that Trump's presidential victory had led the left to "re-invest trust in liberal media" (Newman et al. 2018: 18). Since the majority of fact-checkers in the liberal democratic west are based within established media organisations (what Graves and Cherubini 2016 call the 'newsroom model,' and Singer 2018 calls 'intrapreneurs'), fact-checking may suffer by association. Consistent with orientations to mainstream media in general, studies in the US have found that right-wingers are more suspicious of fact-checking. A Duke Reporters' Lab report found that conservative partisan sites were markedly more likely to evaluate fact-checkers negatively and accuse them of liberal bias (PEN America 2017: 66). Some of these were organisations whose raison d'être is to hold the media to account for perceived bias, such as the conservative Media Research Center, whose assessment was that "These fact-checking organizations have been founded and funded by mainstream media, with all the same blind spots and biases" (PEN America 2017: 67).

In the UK, there is a more partisan press than in the US, which is perhaps why experiments with fact-checking columns in the British press have not taken off, and the two main mainstream media fact-checkers are attached to public service broadcasters, the BBC and Channel 4.3 It is possible that the tighter rules around impartiality in broadcast regulation could lend these media fact-checkers greater credibility or, conversely, that the use of adjudications could be more jarring for the audience.

The other institutional form that fact-checkers take is as independent bodies in the third sector and media accountability campaigners (the 'NGO model' for Graves and Cherubini 2016; or 'entrepreneurs' in Singer's 2018 terms), a form that dominates in Southern and Eastern Europe, Africa and the Middle East (Graves and Cherubini 2016: 8; Singer 2018: 1073). These bodies may enjoy greater credibility among the public as fair-minded arbitrators, but the public is not necessarily their primary intended audience. The British independent fact-checker Full Fact was founded by a researcher for a cross-bench peer4 frustrated by the ill-informed briefings from charities and trade body lobbyists that would inform legislative debate (Brecknell 2016). Full Fact's remit is still informed by this desire to influence parliamentary law-making directly, rather than to inform the electorate on the evidential support for party policy proposals, though charitable status was only granted with the remit of 'public education' since 'civic engagement' was considered too 'political' (James 2014). Graves (2018: 625) found that "Full Fact values effecting change over publishing stories," with corrections occasionally sought through negotiation rather than publicity. Accordingly, it uses a "terse style that does away with explanatory, multi-perspectival context," offering more decisive verdicts than more journalistic fact-checkers, with less information and rarely quoting experts. Regardless of their intended audience, however, another question arises about how effective fact-checkers are at informing that audience.

3  Channel 4 has a PSB remit, although it is part-funded through advertising.
4  Peers sit in the House of Lords, the second chamber of the Houses of Parliament.

Criticisms of Fact-Checking as Ineffective

In the many books on post-truth politics published in the wake of the European Union (EU) referendum and the US presidential election, fact-checking was dismissed as marginal and ineffective (Ball 2017; Davis 2017), largely because the arguments it relies on are too dry and wonkish, and so not picked up by the mainstream media, other than to "make it a fight" in which the original claim gets repeated (Ball 2017: loc 662), but also because people are culturally resistant to facts (Davis 2017: loc 1833). If evaluated as a corrective to 'fake news' (or 'fraudulent news,' as PEN America now terms it in distinction from Trump's usage), as reflected in Facebook's much-heralded but troubled partnership with fact-checkers (Lee 2019), its effectiveness probably is quite limited. PEN America (2017: 66) points out that fact-checking is too slow to correct misinformation (or disinformation) that has spread virally through the internet—it inevitably takes longer to disprove a lie than to make it up or to share it. However, fake news, in the sense of disinformation circulated for profit or propaganda, appears to be a far greater problem in the US than in the UK, where falsehoods circulated on social media principally originate in tabloid newspapers (Chadwick et al. 2018).

The original purpose of fact-checking journalism, as distinguished from internet myth-busters such as Snopes.com, was to inform political debate by checking the assertions made by politicians and other prominent individuals. A more relevant criticism in this context is that fact-checking has little influence on mainstream journalism, where political debate is most commonly and prominently conducted. Research on US fact-checking suggests that verdicts are sometimes picked up and quoted elsewhere in the media (Amazeen 2015: 17), but there is little evidence that this happens regularly. Additionally, fact-checkers that are attached to legacy media organisations tend to be less prominently promoted than might be expected, given that fact-checking is generally positioned as a prestige aspect of the news (Graves et al. 2016). One study found that less than half of websites identified as offering fact-checking linked to it from their home page, and that they tended to become "decoupled" over time and "less likely to offer high profile definitive judgments" (Lowrey 2017: 385). It is also not clear that verdicts are used specifically to challenge politicians live, during interviews and so on, where this would be most effective, though experiments with broadcasters live-checking debates in the US have been influential elsewhere (Golshan 2016).

A second way in which fact-checkers could improve political debate is by influencing politicians' behaviour—dissuading them from making misleading statements or from repeating claims found to be problematic. Dobbs (2012: 10–11) argues that there is some evidence of the latter in the US, and the Washington Post leapt on an acknowledgement from Trump that he did not like being given Pinocchios, though that does not appear to have influenced his behaviour beyond the presentational terms mentioned above (Kessler et al. 2017). The success of fact-checking in this sense depends on the extent to which politicians believe in its influence on the audience or perceive it as causing reputational damage. However, fact-checking has to compete with other heuristics in people's evaluation of politicians' credibility, including personal affinity and existing political values and commitments.

Research broadly indicates that fact-checking does indeed have some effect on audience belief and understanding, in comparison with exposure to the claim alone. Surveys have established a correlation between visiting a fact-checking website and knowledgeability on issues where fact-checking had occurred, even when controlling for covariates such as consumption of news media, education, and being less conservative and younger (Gottfried et al. 2013: 1564). Evidence from experiments in social psychology also suggests that fact-checks have a significant impact on people's perception of the source of the claims as accurate or inaccurate (Fridkin et al. 2015: 135).

There were more mixed results, however, in terms of people accepting the specific factual claims. Some studies have found a greater corrective effect in negative than positive verdicts (Fridkin et al. 2015: 138), and among people who were initially unsure rather than those with an active misperception (Garrett et al. 2013: 627). In Fridkin and colleagues' study, fact-checks had a significant impact where they contradicted the claims made in an attack advertisement (especially among those who disliked negative campaigning), but not where they reinforced the claims (2015: 138). They interpreted this as evidence of negativity bias—greater attention to and processing of negative information—but it could also reflect the extent to which people are predisposed to believe negative claims about politicians (mean 8.16 and 7.69 on a 12-point scale for negative claims about a Republican and a Democrat candidate, respectively, before they had even seen the advert); even after the corrective effect of the critical fact-check, they were more inclined to believe the claims than not (7.97 and 7.15).

Garrett et al. (2013: 619) found that even where people accepted that an initial claim had been fabricated, belief in it persisted because of a more subconscious process of the "spontaneous creation of naïve theories." This accords with other social psychological studies that have detected a 'boomerang' or 'backfire' effect. Early research on corrections (not specifically 'fact-checks,' which were not yet well established at the time) suggested that attempts to correct a misperception could have the opposite effect (Nyhan and Reifler 2010). However, in more recent research, Nyhan et al. (2013) found that there was only a backfire effect among those who supported the politician being fact-checked (in that instance, Sarah Palin) and were politically knowledgeable. This is an instance of 'intended construct activation,' when people object to being told what to think and double down on their original belief as a form of rebellion, whereas naïve theories are a result of 'unintended construct activation,' when people focus on an aspect of the message that is not intended to be the point (Garrett et al. 2013: 623). In the latter case, if the fact-check restates the original assertion in the headline in the form of a question ('is x right to say y?'), then there is a danger that people remember the original claim more than the correction, especially if they don't read any further than the headline (PEN America 2017: 68–9). On the other hand, if the verdict is included in the headline ('x was wrong to say y'), then people may be less likely to read the full article and could miss important nuance about the nature of the error. This dilemma is particularly pertinent to the promotion of fact-checking on social media, where the space to give a nuanced verdict is limited, requiring the audience to be motivated to click through to the article.

Political partisanship is the main motivation identified by social psychological research for audiences intentionally rejecting corrections (Amazeen et al. 2015; Nyhan et al. 2013), though Fridkin et al. (2015: 142) found that political sophistication and a low tolerance of negative information had more explanatory force than partisanship. Interestingly, this study also found that Democrats were more inclined to motivated reasoning than Republicans, despite research indicating that they are more likely to consider fact-checking legitimate and unbiased (PEN America 2017: 66–7), though differentials in motivated reasoning are also affected by which party is in power (Lebo and Cassino 2007).

Most social psychology experiments use constructed examples and fictional candidates (though attaching them to real parties), but Flynn et al. (2017) examined real-world claims made by a real-life candidate (though the articles were constructed for the experiment). This drew a more complex picture, indicating that Trump supporters didn't reject factual corrections to his claims, but that debunking did not alter their perception of, or support for, Trump. In other words, we cannot assume that if people accept that a candidate has said something factually inaccurate, they will necessarily think less of that candidate; they may still regard them as speaking wider truths (in one instance, that jobs are being lost to Mexico, even if employment had risen in the two cities Trump gave as examples). Whilst Fridkin et al. (2015) concluded that political sophisticates were more able to correctly interpret fact-checkers' verdicts, Nyhan et al.'s (2013) research indicated that such knowledgeability enabled partisans to challenge the fact-checks.

Because the research on the effects and effectiveness of fact-checking has been dominated by social psychological experimentation, we do not know the specific reasons that people had for disbelieving the fact-check verdict. One profitable avenue for further research would be focus group-based audience reception research, but this also carries some of the issues of artificial context that are acknowledged limitations of experimental surveys, with inevitable observation effects and the time lag introducing a different political context. This research therefore opts to examine the direct contemporary engagement of the audience via Twitter.

Research Questions and Method

It is clear from the existing literature that there is currently a bias towards research on US fact-checking. More recent comparative research is beginning to explore transnational similarities and differences in institutional form and value propositions (Graves and Cherubini 2016; Singer 2018), but this work is necessarily fairly broad-brush to cover that extensive ground. Since this research indicates divergence from the founding US practice (also highlighted by Graves 2016 via observation at international fact-checker conferences), our understanding of fact-checking can be significantly enriched by in-depth research on non-US contexts, and this study of the UK aims to contribute to that end, focusing on the three main British fact-checkers: Channel 4's FactCheck, the BBC's Reality Check, and the independent Full Fact.

British fact-checking is well established, with Channel 4's FactCheck service having been launched in 2005, just two years after FactCheck.org, the US service that inspired it, with Full Fact following in 2009, and the BBC's Reality Check in 2010. This means that fact-checking in the UK has settled into a consistent set of practices. It is no coincidence that the two broadcaster-based fact-checkers were founded in election years, and so an election is an ideal context for examining the practice, as well as offering a coherent political context in which to observe the operation of fact-checking in relation to mediatised debate.

This study also seeks to diverge from the existing body of research in two ways that have a broader relevance for the study of fact-checking outside the UK. Firstly, it systematically examines a sample of fact-checks in the context of the mediatised debate, both on broadcast media and on Twitter. Secondly, it seeks to understand the impact of fact-checking more qualitatively in relation to the quality of political debate, an aim that was ranked first or second in importance by almost half of the fact-checkers surveyed by Graves and Cherubini (2016: 15). To do this, it draws on argumentation theory, and Fairclough and Fairclough's (2012) political discourse analysis, to locate the fact-checked premises within a persuasive political argument and examine the extent to which the audience is enabled to assess those arguments, in distinction from the 'he said, she said' attribution-based practice of 'objectivity.'

Drawing on the concerns explored above, this study addresses five research questions:

RQ1: What kind of claims do fact-checkers select, and how do they deal with ambiguity?
RQ2: Do fact-checkers attempt to 'balance' their attention to different parties and/or verdicts on each?
RQ3: On what basis do engaged audiences on Twitter perceive bias and/or challenge verdicts?
RQ4: Do broadcasters with fact-checking operations pick up on verdicts and/or factual information from their fact-checkers in their flagship broadcast news programming?
RQ5: How does fact-checking inform political argumentation?

To address RQ1 and RQ2, all election-related fact-checks were collected from the fact-checkers' home pages5 (excluding, for instance, checks on statements by leaders of other countries), from 28 April 2017, the date the election was called, to 7 June 2017, the day before the polls opened. Fact-checks on the electoral process and checks on minor parties' policies were collected for broad comparative context, but the focus of the analysis was on policy-related fact-checks on the Conservative and Labour parties. This provided a more comparable set of claims, given variation in attention to and treatment of smaller parties.

Content Analysis

The most established fact-checker, Channel 4's FactCheck, produced the lowest number of full articles, a little over a third as many as its public service broadcasting rival; BBC's Reality Check was better resourced and more prolific, but over a fifth of its items were brief posts on the blogroll that did not click through to a full article, or that linked to a previous article. In keeping with the overall focus of the media coverage on the two main parties, approaching three-quarters of the fact-checks checked Labour or Conservative claims and policies, with the Liberal Democrats, Scottish National Party and UK Independence Party (UKIP) making up most of the 36 checks on minor party policies (the remaining 13 fact-checks being on process or horserace issues such as opinion polls) (Table 2.1).

To answer RQ3, the Twitter accounts of the two broadcaster-based fact-checkers were also analysed.6 Tweets specifically badged as election related (or linking to articles that were) were collected manually by scrolling back through each account page, and copying and pasting the key data—the tweet text and any hyperlink, and the three key statistics of likes, retweets and replies—into a spreadsheet for analysis.

5  C4 FactCheck: https://www.channel4.com/news/factcheck; BBC Reality Check: https://www.bbc.co.uk/news/topics/cp7r8vgl2rgt/reality-check#main-content; Full Fact: https://fullfact.org/.
6  Full Fact also tweeted prolifically, but received much lower engagement.


Table 2.1  Number of articles in the fact-check sample by fact-checker

                           Total election fact-checks     Fact-checks on Conservative or
                           (including minor party         Labour policy, or government
                           policy and process)            performance
Channel 4's FactCheck               31                              23
BBC's Reality Check                 86                              68
Full Fact                           59                              36
Total                              176                             127

Table 2.2  Number of tweets in the Twitter sample by fact-checker

                           Total election-related tweets  Tweets on Conservative or
                           (including minor party         Labour policy, or government
                           policy and process)            performance
@FactCheck                          52                              40
@BBCRealityCheck                   180                              73
Total                              232                             113

Again, Reality Check tweeted more often, mostly because they live fact-checked more broadcast debates, including election editions of the audience debate programme Question Time, and a BBC Radio 1 debate on issues affecting young people (Table 2.2).

Each fact-checker item was coded by policy issue, the form the item took (full fact-check, explainer or another form), the person and party or organisation to whom the claim was attributed, whether the claim related to a party policy, whether it related to government performance, the sources drawn on, and the presence or absence of particular source types. A summary of the narrative conclusion was noted, and a verdict valence attributed: where clearly affirmative phrasing was present (such as 'correct,' 'about right' or 'a reasonable estimate'), it was recorded as positive; where clearly critical phrasing was used (including 'incorrect,' 'misleading' and 'needs clarification'), it was recorded as negative; inclusion of both was recorded as 'mixed'; and where the narrative was given with no clear valence, it was noted as 'unclear.' Tweets were coded by form (verdict, question, topic), issue, party and the valence of the verdict where present, and by whether a claim appeared in the tweet text without the verdict.
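This valence coding reduces to a simple decision rule. The following is a minimal sketch in Python of how it might be operationalised; the phrase lists and function names are illustrative assumptions rather than the study's actual coding instrument, which was applied manually.

    import re

    # A minimal sketch of the verdict-valence coding rule described above.
    # The phrase lists are illustrative, not the study's full codebook.
    AFFIRMATIVE = ("correct", "about right", "a reasonable estimate")
    CRITICAL = ("incorrect", "misleading", "needs clarification")

    def _matches(text: str, phrases: tuple) -> bool:
        # Word-boundary matching, so that 'incorrect' does not match 'correct'.
        return any(re.search(r"\b" + re.escape(p) + r"\b", text) for p in phrases)

    def code_valence(narrative_verdict: str) -> str:
        """Return 'positive', 'negative', 'mixed' or 'unclear' for a verdict."""
        text = narrative_verdict.lower()
        affirmative = _matches(text, AFFIRMATIVE)
        critical = _matches(text, CRITICAL)
        if affirmative and critical:
            return "mixed"
        if affirmative:
            return "positive"
        if critical:
            return "negative"
        return "unclear"

    # e.g. code_valence("About right, but misleading without context") -> "mixed"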


Chapter 3 will draw on this analysis to examine how British fact-checkers navigate different kinds of claim and accommodate complexity and ambiguity, as well as ethical concerns for impartiality. It will also explore how Twitter replies, as a measure of an engaged audience, receive and especially accept or challenge the fact-checkers' verdicts and reasoning.

Argumentative and Political Discourse Analysis

To answer RQ4 and RQ5, flagship news programming was collected from the two broadcasters with fact-check operations: Channel 4 News' main evening bulletin (7–8 pm Monday–Friday, 6.30–7 pm weekends) and BBC Radio 4's agenda-setting Today programme (6–9 am Monday–Friday, 7–9 am Saturday). Again, only election-related items were collected, and only those related to the policy proposals of the two main parties, or to the government's record on policy matters. A total of 191 items were collected, but the analysis focuses on three issue debates: 8 items on education (schools funding), 33 items on security (in particular the 16 directly mentioning police numbers), and 8 items on Brexit. In addition, the main set-piece televised debate from each broadcaster was analysed: the BBC Election Debate (31/05/2017), in which representatives of seven parties took questions from an invited audience (selected to represent the country by how they voted at the previous election and in the referendum), moderated by Mishal Husain, and the Channel 4 and Sky News joint programme The Battle for Number 10 (29/05/2017), in which the Conservative and Labour leaders were interviewed by Jeremy Paxman and responded to questions from the audience, moderated by Faisal Islam.

All items were transcribed and analysed using tools from political discourse analysis (Fairclough and Fairclough 2012), informed by pragma-dialectic argumentation theory (Van Eemeren and Houtlosser 2003; Walton 2006). This approach facilitates a critique of the argumentation within which the factual claims are located, allowing the analysis to assess the role played by fact-checking in presenting a reasonable challenge to the argument as a whole. In the pragma-dialectic model, "argumentative discourse is conceived as aimed at resolving a difference in opinion by putting the acceptability of the 'standpoints' at issue to the test by applying criteria that are both problem-valid as well as intersubjectively-valid" (Van Eemeren and Houtlosser 2003: 387). That is to say that it applies the rules of reasonable argumentation that allow the dispute to be resolved, if not in the minds of those arguing (which is unlikely in a competitive context such as an election campaign), then in the minds of the audience. In other words, the relevant measure is whether the audience are enabled to make a more informed choice between the parties on the basis of their policy arguments.

Argumentation theory offers specific argumentative schemes, and the ways in which an argument can be defeated through critical questioning in dialectic exchange (Walton 2006), though Fairclough and Fairclough (2012) argue that in political discourse it is also important to recognise the more powerful approach of defeating the claim through a counter-argument, since an argument can be fallacious yet the claim still be true for different reasons. To aid this analysis, it is helpful to distinguish the various premises of the argument by the role they play, as sketched below. Premises are assertions that underpin a specific claim for action (in this case, an electoral policy pledge), including 'circumstances' (claims about the current state of affairs, identifying problems to address or successes to build on), the goals for political action (normative claims about the good society) and the means of achieving them (theoretical claims about the effectiveness of the proposed policies), as well as the (moral, ethical, ideological) values that inform the goal. Chapter 4 will deconstruct three policy arguments as case studies to examine how the fact-checked premises helped to disambiguate the arguments and counter-arguments from the incumbent party (Conservatives) and main opposition party (Labour), and, indeed, which went unchallenged.
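For readers who find a structured representation helpful, the premise roles just described can be rendered as a simple record type. This is only an illustrative sketch; the field names are my own shorthand for Fairclough and Fairclough's (2012) categories, not notation used in the analysis itself.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PracticalArgument:
        """Premise roles in a practical argument, per the summary above.

        Field names are illustrative shorthand, not Fairclough and
        Fairclough's own notation.
        """
        claim_for_action: str                                   # e.g. an electoral policy pledge
        circumstances: List[str] = field(default_factory=list)  # claims about the current state of affairs
        goals: List[str] = field(default_factory=list)          # normative claims about the good society
        means: List[str] = field(default_factory=list)          # claims that the policy will achieve the goals
        values: List[str] = field(default_factory=list)         # moral/ethical/ideological commitments

    # A fact-check typically adjudicates a circumstantial or means premise
    # (a measure, causal relationship or prediction) rather than the claim
    # for action itself.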

References

Allan, Stuart. 2002. Media, Risk and Science (Open University Press: Buckingham).
Amazeen, Michelle A. 2015. 'Revisiting the Epistemology of Fact-checking', Critical Review, 27: 1–22.
Amazeen, Michelle A. 2016. 'Checking the Fact-checkers in 2008: Predicting Political Ad Scrutiny and Assessing Consistency', Journal of Political Marketing, 15: 433–64.
Amazeen, Michelle A., Emily Thorson, Ashley Muddiman, and Lucas Graves. 2015. 'A Comparison of Correction Formats: The Effectiveness and Effects of Rating Scale Versus Contextual Corrections on Misinformation', American Press Institute, Accessed 21 December 2018. http://www.americanpressinstitute.org/wp-content/uploads/2015/04/The-Effectiveness-of-Rating-Scales.pdf.
Ball, James. 2017. Post-Truth: How Bullshit Conquered the World (Biteback Publishing: London).
BBC Election Debate (31/05/2017) - BBC, available from: https://www.youtube.com/watch?v=fz4KAdrwSE0
Birks, Jen. 2019. 'Fact-checking, False Balance, and "Fake News": The Discourse and Practice of Verification in Political Communication.' In Stuart Price (ed.), Journalism, Power and Investigation: Global and Activist Perspectives (Routledge: London).
Blumler, Jay, and Michael Gurevitch. 1995. The Crisis of Public Communication (Psychology Press: London).
Brecknell, Suzannah. 2016. 'Interview: Full Fact's Will Moy on Lobbyist "Nonsense", Official Corrections and Why We Know More About Golf than Crime Stats', Civil Service World, Accessed 21 December 2018. https://www.civilserviceworld.com/articles/interview/interview-full-fact%E2%80%99s-will-moy-lobbyist-%E2%80%9Cnonsense%E2%80%9D-official-corrections-and-why.
Chadwick, Andrew, Cristian Vaccari, and Ben O'Loughlin. 2018. 'Do Tabloids Poison the Well of Social Media? Explaining Democratically Dysfunctional News Sharing', New Media & Society, 20: 4255–74.
Cushion, Stephen, Justin Lewis, and Robert Callaghan. 2017. 'Data Journalism, Impartiality and Statistical Claims', Journalism Practice, 11: 1198–215.
Davis, Evan. 2017. Post-Truth: Why We Have Reached Peak Bullshit and What We Can Do About It (Little, Brown: London).
Dobbs, Michael. 2012. 'The Rise of Political Fact-checking, How Reagan Inspired a Journalistic Movement', New America Foundation, Accessed 21 December 2018. https://www.issuelab.org/resources/15318/15318.pdf.
Domke, David, Mark D. Watts, Dhavan V. Shah, and David P. Fan. 2006. 'The Politics of Conservative Elites and the "Liberal Media" Argument', Journal of Communication, 49: 35–58.
Edelman. 2019. 'Edelman Trust Barometer Report', Accessed 30 May 2019. https://www.edelman.com/trust-barometer.
Ettema, James S., and Theodore Glasser. 1998. Custodians of Conscience: Investigative Journalism and Public Virtue (Columbia University Press: New York, NY).
Fairclough, Isabela, and Norman Fairclough. 2012. Political Discourse Analysis (Routledge: Abingdon).
Flynn, D. J., Brendan Nyhan, and Jason Reifler. 2017. 'The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics', Political Psychology, 38: 127–50.
Fridkin, Kim, Patrick J. Kenney, and Amanda Wintersieck. 2015. 'Liar, Liar, Pants on Fire: How Fact-checking Influences Citizens' Reactions to Negative Advertising', Political Communication, 32: 127–51.
Gans, Herbert J. 2004. Deciding What's News: A Study of CBS Evening News, NBC Nightly News, Newsweek, and Time (Northwestern University Press: Evanston, IL).
Garrett, R. Kelly, Erik C. Nisbet, and Emily K. Lynch. 2013. 'Undermining the Corrective Effects of Media-Based Political Fact Checking? The Role of Contextual Cues and Naïve Theory', Journal of Communication, 63: 617–37.
Golshan, Tara. 2016. 'The Importance of Fact-checking the Debate in Real Time, According to an Expert', Vox, Accessed 21 December 2018. https://www.vox.com/2016/9/26/13063004/real-time-fact-checking-debate-trump-clinton.
Gottfried, Jeffrey A., Bruce W. Hardy, Kenneth M. Winneg, and Kathleen Hall Jamieson. 2013. 'Did Fact Checking Matter in the 2012 Presidential Campaign?', American Behavioral Scientist, 57: 1558–67.
Graves, Lucas. 2016. Deciding What's True: The Rise of Political Fact-checking in American Journalism (Columbia University Press: New York, NY).
Graves, Lucas. 2017. 'The Monitorial Citizen in the "Democratic Recession"', Journalism Studies, 18: 1239–50.
Graves, Lucas. 2018. 'Boundaries Not Drawn', Journalism Studies, 19: 613–31.
Graves, Lucas, and Federica Cherubini. 2016. The Rise of Fact-checking Sites in Europe (Reuters Institute for the Study of Journalism: Oxford).
Graves, Lucas, Brendan Nyhan, and Jason Reifler. 2016. 'Understanding Innovations in Journalistic Practice: A Field Experiment Examining Motivations for Fact-checking', Journal of Communication, 66: 102–38.
Hafez, Kai. 2002. 'Journalism Ethics Revisited: A Comparison of Ethics Codes in Europe, North Africa, the Middle East, and Muslim Asia', Political Communication, 19: 225–50.
Hall, Stuart, Chas Critcher, Tony Jefferson, John Clarke, and Brian Roberts. 1978. Policing the Crisis: Mugging, the State and Law and Order (Macmillan: London).
Hanitzsch, Thomas, Folker Hanusch, Claudia Mellado, Maria Anikina, Rosa Berganza, Incilay Cangoz, Mihai Coman, Basyouni Hamada, María Elena Hernández, Christopher D. Karadjov, Sonia Virginia Moreira, Peter G. Mwesige, Patrick Lee Plaisance, Zvi Reich, Josef Seethaler, Elizabeth A. Skewes, Dani Vardiansyah Noor, and Edgar Kee Wang Yuen. 2011. 'Mapping Journalism Cultures Across Nations', Journalism Studies, 12: 273–93.
Herman, Edward S., and Noam Chomsky. 2002. Manufacturing Consent (Pantheon Books: New York, NY).
James, Sam Burne. 2014. 'Full Fact Gains Charitable Status Five Years After First Application', Third Sector, Accessed 21 December 2018. https://www.thirdsector.co.uk/full-fact-gains-charitable-status-five-years-first-application/governance/article/1314532.
Kessler, Glenn, Michelle Ye Hee Lee, and Meg Kelly. 2017. 'President Trump's First Six Months: The Fact-check Tally', Washington Post Fact Checker, Accessed 29 August 2017. https://www.washingtonpost.com/news/fact-checker/wp/2017/07/20/president-trumps-first-six-months-the-fact-check-tally/?noredirect=on&utm-term=.74ce46a170ab&utm_term=.08303617e028.
Lebo, Matthew J., and Daniel Cassino. 2007. 'The Aggregated Consequences of Motivated Reasoning and the Dynamics of Partisan Presidential Approval', Political Psychology, 28: 719–46.
Lee, Dave. 2019. 'Key Fact-checkers Stop Working with Facebook', BBC News, Accessed 30 May 2019. https://www.bbc.co.uk/news/technology-47098021.
Lowrey, Wilson. 2017. 'The Emergence and Development of News Fact-checking Sites', Journalism Studies, 18: 376–94.
Merritt, Davis. 1995. 'Public Journalism and Public Life', National Civic Review, 84: 262–6.
Muñoz-Torres, Juan Ramón. 2012. 'Truth and Objectivity in Journalism', Journalism Studies, 13: 566–82.
Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos, David A. L. Levy, and Rasmus Kleis Nielsen. 2018. 'Reuters Institute Digital News Report 2018', Reuters Institute for the Study of Journalism, Accessed 21 December 2018. http://www.digitalnewsreport.org/.
Nyhan, Brendan, and Jason Reifler. 2010. 'When Corrections Fail: The Persistence of Political Misperceptions', Political Behavior, 32: 303–30.
Nyhan, Brendan, Jason Reifler, and Peter A. Ubel. 2013. 'The Hazards of Correcting Myths About Health Care Reform', Medical Care, 51: 127–32.
PEN America. 2017. 'Faking News: Fraudulent News and the Fight for Truth', PEN, Accessed 21 December 2018. https://pen.org/wp-content/uploads/2017/11/2017-Faking-News-11.2.pdf.
Schlesinger, Philip, and Howard Tumber. 1994. Reporting Crime: The Media Politics of Criminal Justice (Clarendon Press: Oxford).
Schudson, Michael. 2008. Why Democracies Need an Unlovable Press (Polity: London).
Shapiro, Ivor, Colette Brin, Isabelle Bédard-Brûlé, and Kasia Mychajlowycz. 2013. 'Verification as a Strategic Ritual', Journalism Practice, 7: 657–73.
Singer, Jane B. 2018. 'Fact-checkers as Entrepreneurs: Scalability and Sustainability for a New Form of Watchdog Journalism', Journalism Practice, 12: 1070–80.
The Battle for Number 10 (29/05/2017) - Sky News / Channel 4 News, available from: https://www.youtube.com/watch?v=1mN_zZqlQts
Tuchman, Gaye. 1978. Making News: A Study in the Construction of Reality (Free Press: New York, NY).
Underwood, Doug. 1988. 'When MBAs Rule the Newsroom', Columbia Journalism Review, 26: 23.
Uscinski, Joseph E. 2015. 'The Epistemology of Fact Checking (Is Still Naïve): Rejoinder to Amazeen', Critical Review, 27: 243–52.
Uscinski, Joseph E., and Ryden W. Butler. 2013. 'The Epistemology of Fact Checking', Critical Review, 25: 162–80.
Van Eemeren, Frans H., and Peter Houtlosser. 2003. 'The Development of the Pragma-dialectical Approach to Argumentation', Argumentation, 17: 387–403.
Walton, Douglas. 2006. Fundamentals of Critical Argumentation (Cambridge University Press: Cambridge).

CHAPTER 3

Fact-Checking Claims, Policies and Parties

Abstract  This chapter examines the characteristics of British fact-checking in relation to the theoretical concerns outlined in Chap. 2. It assesses, from both an empirical and an ethical perspective, how the three most established fact-checkers negotiate the boundaries of checkable facts, draw on sources and present their verdicts. Drawing on a content analysis of 176 articles and 232 tweets from Channel 4 FactCheck, BBC Reality Check and the independent charity Full Fact, it finds that empirically factual claims (measures of current circumstances) were most commonly and unproblematically checked, but theoretical claims (predictions) and social facts (definitions) were also tackled, with mixed results. Interpretation was open to reasonable challenge, so rejection of the verdict in Twitter replies did not necessarily indicate irrationality, even where potentially motivated by partisanship.

Keywords  Fact-checking • Empirical • Truth claims • Interpretation • Reasonable • Balance

Having established in the previous chapter key areas by which we might interrogate fact-checking journalism, this chapter will draw on the content analysis of fact-check articles and tweets, and engagement with those tweets, to do exactly that. It will examine how fact-checkers deal with different kinds of empirical and theoretical claim, and with issues of impartiality and balance. The first section gives an overview of the form that the articles and tweets took, in comparison with fact-checking elsewhere in Europe and the US. The following two sections then examine first the claims and then the sources chosen, in relation to the empirical and ethical concerns outlined in the previous chapter, as well as the response of the audience on Twitter.

Form: Verdicts, Explainers and Information

Uscinski and Butler (2013) criticised fact-checks that addressed several discrete factual claims within an argument, but gave a single verdict on the argument as a whole. Here, where there were multiple, unrelated claims, such as in a manifesto, these were almost always addressed individually within the article (and have been separated into discrete fact-checks for the purpose of this analysis). Even where multiple claims were closely related, forming part of the same argument, they were generally taken separately, such as Full Fact's analysis of ten separate claims in an activist video on the underfunding of the National Health Service (NHS) (FF29, 30/05/2017), which did not give an overall conclusion even though all claims were determined to be correct. However, there was one instance where FactCheck did indicate an overall verdict on a speech on social care with the headline, "Theresa May's misleading claims on the 'Dementia Tax' U-turn" (FC13, 22/05/2017). The wording also suggests that they looked specifically for problematic claims rather than fact-checking every claim: "We gave her Dementia Tax speech the FactCheck treatment. We found six key claims that Theresa May made which were seriously misleading."

The most immediately obvious distinction between the presentational form of British fact-checks and common convention is that none of the top three fact-checkers uses a rating scale, all favouring narrative verdicts. Transnational surveys indicate that this is unusual—just 17% of European fact-checkers and a fifth worldwide use no rating system (Graves and Cherubini 2016: 19). Channel 4 FactCheck had initially experimented with a visual expression of its verdict, with 'fact' and 'fiction' on stacked blocks that would be rotated face-on or away to the side, depending on the amount of 'fact' or 'fiction' detected. Unlike Politifact's Truth-O-Meter and the Washington Post Fact Checker's Pinocchio scale, there is no implied motive in this graphic, with 'fiction' encompassing honest error and rhetorical misrepresentation as well as outright lies. Nonetheless, this was later abandoned, as was Full Fact's short-lived magnifying glass rating scale, as founder Will Moy found such scales "needlessly reductive" (Graves and Cherubini 2016: 19).

Furthermore, in much of the US-focused literature, particularly that on psychological effects, a fact-check is understood to be an investigation and verdict on a clear claim or set of claims, whether on a scale or as a narrative verdict. However, in this sample of British fact-checking, only 85 of 127 items clearly fitted that description. Of the remaining third, most (28.3%) would more accurately be described as explainers, similar to news analysis articles in mainstream journalism. This is more typical of European fact-checking, for example, Le Monde's Les Décodeurs, which combines fact-checking with explainers and data journalism (Graves and Cherubini 2016: 31). The remaining six items simply reported new statistics without adding any analysis, which I have designated 'information,' as there was no original visualisation of the kind associated with data journalism. All three fact-checkers produced explainers, but they made up a far higher proportion of Reality Check's output (42.6%, compared to a quarter of the others'). Coupled with Reality Check's greater output, this meant that two-thirds of the explainers and all but one information item in the sample were from Reality Check. The BBC was particularly focused on explaining the substance of policy and more attached to explanatory journalism as part of its remit (Blumler 2019: 59).

There was also some variation in the promotion of fact-checking on Twitter. FactCheck tweeted out all of its items when first posted on its blog, whereas over a fifth of the BBC Reality Check articles were not promoted on Twitter at all. When live fact-checking televised debates and other media events, @FactCheck tended to use stand-alone tweets with the verdict in the tweet text, sometimes linking to online sources, whilst @BBCRealityCheck more often linked to previous fact-checks and explainers. As a result, over a third of Reality Check items were tweeted out multiple times (compared to just three of FactCheck's) and 12 were tweeted three or more times, whilst a slightly greater proportion of @FactCheck's tweets were stand-alone fact-checks (44% to 30% of @BBCRealityCheck tweets). Reality Check's explainers and information items were half as likely to be promoted on Twitter, but those that were, were more likely to be linked to multiple times (45% compared to 28% of full fact-checks). These were on topics that arose in TV debates, so three of seven explainers on fiscal issues were tweeted out three or more times, whilst both explainers on nationalisation were tweeted out just once.


Of tweets promoting full fact-checks on initial publication, 72.7% carried the verdict in the tweet text or in an attached graphic. @BBCRealityCheck favoured a graphic with the claim quoted over a picture on the left half and the verdict text to the right, whilst @FactCheck reported the verdict in the text of the tweet, which was often also the headline of the article. However, other tweets did not include the verdict and instead either took the form of a question, to encourage followers to click through to the full article, or simply specified a topic that they had 'examined' or 'investigated.' This was favoured by @FactCheck—of the 17 tweets linking to full fact-checks, only 4 gave the verdict in the tweet. Instead, most of the 14 verdict tweets were stand-alone tweeted fact-checks. In comparison, @BBCRealityCheck more often included the verdict in their initial promotion of full fact-checks on Twitter (53.8%, though slightly less—42.2%—if excluding those on minor parties and process fact-checks).

In contrast, later tweets linking to fact-check articles, when live-checking broadcast debates, rarely related to the same precise claim, but were presented as background information to the discussion, so the verdict was omitted as not directly relevant to the claim being checked. On the other hand, two tweets gave a verdict but linked to an explainer (rather than the original source) to confirm the same specific factual claim made in two separate debates.1

Despite tweeting less frequently, @FactCheck attracted twice the engagement on average as @BBCRealityCheck (72 to 30 likes and 92 to 41 retweets), but general engagement was low and skewed by a handful of particularly popular tweets. The highest recorded number of 'likes' was 1082, and retweets topped out at 1315 (both @FactCheck—479 and 909, respectively, for @BBCRealityCheck), but 94.4% of all tweets received under 100 likes and 90.9% under 100 retweets. @FactCheck also received the highest number of replies for a single tweet (98 compared to 72), but the average number of replies for both fact-checkers was just nine. In fact, 90.4% of @FactCheck tweets received fewer than 20 replies, compared to 72.2% of @BBCRealityCheck's. This reflects the greater contestation of @BBCRealityCheck's tweets via replies from a broader audience—in contrast, @FactCheck did not attempt to engage wider attention via popular election-related hashtags such as #GE2017, principally addressing its own followers.

1  RCT064 (31/05/2017) linked to RC43 (28/05/2017), outlining existing powers to stop terror suspects from travelling, and RCT122 (04/06/2017) linked to RC64 (04/06/2017), summarising wider security service powers over suspected terrorists, to support Tim Farron's assertion that the government had only issued one temporary exclusion order in the previous two years.


[Chart 3.1  Average engagement by form of tweet: differences from the average number of replies, retweets and likes for verdict, question, topic and other tweets]
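The quantity plotted in Chart 3.1 is each tweet form's mean engagement minus the sample-wide mean. The following is a minimal sketch of how it could be reproduced from the spreadsheet described above; the file name and column names ('form', 'likes', 'retweets', 'replies') are assumptions for illustration, not the study's actual workbook.

    import csv
    from collections import defaultdict

    def engagement_by_form(path: str) -> dict:
        """Average likes/retweets/replies per tweet form, expressed as
        differences from the overall sample averages (as in Chart 3.1).
        Column names are assumed: form, likes, retweets, replies."""
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        metrics = ("likes", "retweets", "replies")
        overall = {m: sum(int(r[m]) for r in rows) / len(rows) for m in metrics}
        groups = defaultdict(list)
        for r in rows:
            groups[r["form"]].append(r)  # forms: verdict, question, topic, other
        return {
            form: {m: sum(int(r[m]) for r in g) / len(g) - overall[m] for m in metrics}
            for form, g in groups.items()
        }

    # e.g. engagement_by_form("tweets.csv")["verdict"]["retweets"] would give
    # verdict tweets' average retweets relative to the whole sample.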

The tweets that attracted the most engagement were those that included a verdict in the tweet text or in an embedded picture file. For both fact-checkers, verdicts attracted twice as many replies as other kinds of tweet, but the strongest difference was in retweets and likes—around eight times as many as other forms of tweet (see Chart 3.1). Both of the @FactCheck tweets with the highest engagement were stand-alone fact-checks, with no further analysis on the blog (FCT20, 26/05/2017, and FCT50, 06/06/2017), but both offered additional information through screengrabs, one of which gave detailed reasoning in a series of bullet points. The high number of retweets could be because the additional information and reasoning embedded in the tweet offers more credibility for the claim without relying on people clicking through to the full article. These tweets are therefore more useful as a form of advocacy for a preferred party or against an opponent.

Tweets that only mentioned a topic received the lowest engagement (along with information, clarification and promotional tweets), whilst questions came out a little over average in terms of replies only.


Some responses to question tweets expressed irritation that the verdict wasn't given in the tweet (presumably when it favoured their preferred party), and some even stated it in their reply to amplify a favoured verdict, whilst others gave declarative answers ('yes,' 'no') and other responses to the question, often from personal experience, but rarely with any indication of having read the full article or blog post. If these tweets are meant as a teaser to encourage click-through, then they are evidently not entirely successful. One potential problem with question headlines and tweets is the risk that repeating a claim prominently whilst debunking it less prominently, within the text or an attached article, could reinforce that claim in the casual reader's memory (Dobbs 2012). However, only two @FactCheck and three @BBCRealityCheck tweets included a claim in the tweet text that was given a negative verdict only in the linked article, and replies assumed or gave a negative answer even where they didn't appear to have read the verdict.

Claims Chosen

As we saw in the previous chapter, much of the criticism aimed at fact-checking relates to the choice of claims that are checked. The academic critique, by scholars such as Uscinski and Butler (2013), is twofold. Firstly, there is an empirical critique that fact-checkers draw verdicts on claims that are not factual, or go beyond the facts when interpreting claims as true but politically misleading. Secondly, there is an ethical critique that the claims selected are not representative of all the political claims made in a given campaign, say, or political career, because the sampling is unscientific and subject to bias. More popularly, especially among right-wing critics in the US, the ethical critique focuses on the 'balance' of claims and verdicts on left- and right-wing politicians and candidates.

Empirical Concerns

The empirical critique is that fact-checkers often select claims that, though they are truth claims, have no empirical content and are therefore neither provable nor disprovable via systematic observation. In this British sample, however, all three fact-checkers adopted different formats to accommodate a variety of political claims without giving an unequivocal verdict on claims subject to reasonable disagreement. Unlike standard 'objective' journalism, however, they did mostly give the premises and reasoning that would allow readers to draw their own conclusions.


Explainers Wider, more complex arguments were usually conveyed as explainers, outlining what is known and what questions remain to be answered before a full fact-check is possible. Nonetheless, many explainers do make categorical claims about the premises underpinning the overall argument, even though they don’t give a verdict. For instance, two explainers on Labour’s policy of renationalising the railways (FC05, 02/05/2017, and RC19, 08/05/2017) contained evidence of the policy’s popularity with the public (social circumstances), context of current structures of ownership and control (institutional circumstances), and compared the performance of the temporarily nationalised East Coast line against selected private franchises (support for the means-goal). Six of Reality Check’s explainers included aspects of the common visual language and format of a full fact-check, such as titles in the form of a question and including a quoted claim at the top, as if initially planning to check the claim before switching tack. One of these claims was a prediction of the ‘Brexit bill’ or ‘divorce bill’ the UK would be liable to pay to back out of its obligations to the European Union (EU), which would not be certain until it had been negotiated (RC10, 03/05/2017), whilst another was not a claim but a speech act (an entreaty from Bill Gates for the UK to keep its promise on foreign aid; RC03, 20/04/2017). Two others, however, simply put the opposing standpoint against the claim, but without comparing the evidence or reasoning for each, but implicitly privileging the rejoinder. For instance, on the question “who would be affected under Labour’s higher taxes?” (RC26, 16/05/2017) in response to Labour leader, Jeremy Corbyn’s claim that 95% of people would pay no more tax, the explainer points out that “the top 5% of earners already account for 47% of income tax.” However, the relevance of that claim involves an unstated premise to explain why they should not pay more than 47%—indeed, a similar counter-claim was criticised on Twitter by those who regarded it as irrelevant context (see ‘interpretation,’ below). Finally, two of these explainers seem to avoid the most contentious part of the claim. One starts with a quote from Home Secretary Amber Rudd claiming that “there’s really strong evidence of Prevent2 initiatives helping families, saving children’s lives and stopping radicalisation,” but this evidence is not found or interrogated. Instead, the item is headlined, “Why  ‘Prevent’ is an anti-terrorism strategy intended to prevent radicalisation, and requires public institutions, including schools and universities, to take measures to minimise risk and to identify individuals deemed at risk of radicalisation. 2


Instead, the item is headlined, "Why does the Prevent strategy divide opinion?" and, after explaining the policy, ends with a 'he said, she said' claim and counter-claim about its effectiveness. Explainers can be an appropriate way to deal with ambiguity, then, but they can also be an excuse to fall back on unilluminating journalistic attribution.

2 'Prevent' is an anti-terrorism strategy intended to prevent radicalisation, and requires public institutions, including schools and universities, to take measures to minimise risk and to identify individuals deemed at risk of radicalisation.

Verdicts on Measures, Causal Relationships, Predictions and Definitions

Among the 85 full fact-checks, over two-thirds (68.2%) stuck to the relatively safe ground of claims about the current circumstances or past record, rather than cause and effect relationships. This type of claim was more likely to be positively fact-checked (44.0%, compared to 24.0% of cause and effect claims and 8.3% of predictions), but even so, around a quarter were deemed misleading (if not actually erroneous) and a further quarter gave mixed verdicts. A handful were unclear in their interpretation, for instance not stating whether a claim that was a little out, or had been rounded up, was deemed acceptable.

Many of the claims on measures of current circumstances implied credit or (more often) blame for that state of affairs, such as the series of claims in the Labour manifesto analysed by Full Fact (FF09–16) that started, "Just look at the last seven [years of Conservative government]," but whilst some of these were contextualised (e.g. fewer firefighters, but also fewer incidents), only one addressed the attribution of blame. The Full Fact fact-check on increasing food bank use (FF15, 16/05/2017) offered the Conservative counter-explanation that increasing use was driven by an increase in supply rather than in demand, but did so uncritically, without reviewing the evidence. By contrast, Reality Check's Chris Morris, for the Today programme (01/06/2017), argued that the claim of increased demand was backed up by statistics showing living standards below those of a decade ago and a disproportionate impact on the "lowest paid and poorest," as well as the high proportion of food bank users reporting their reason as changes to and delays in payment of benefits, connecting it to the introduction of Universal Credit.

Far fewer claims were selected for fact-checking that explicitly attributed credit or blame for having brought about the current circumstances through past policy actions (11.8%), and most of these only checked correlation, such as the decrease in tribunal cases since the introduction of fees (RC42, 28/05/2017). In fact, some simply checked the circumstantial claim, such as RC06 (25/04/2017), checking May's claim that "because of our strong leadership, what we've seen is… record numbers of jobs," which confirmed the record high on three measures.


The existence of the claimed correlation is one of the critical questions for an argument from correlation to causation, but another key question is whether there is an alternative explanation for that pattern. Only three fact-checks (all from Full Fact) explicitly examined a causal relationship, and a fourth (from FactCheck) at least problematised a straightforward attribution of credit or blame. In the first three fact-checks, there was no evidence of the epistemological naivete alleged by Uscinski and Butler (2013); they all drew on government reports and academic research, and included their caveats. For instance, Full Fact concluded that it was impossible to say conclusively whether immigration "puts pressure on public services" (FF20, 18/05/2017) because there was inadequate longitudinal data, and, indeed, it may not even be possible to identify and quantify all the relevant impacts. On public finances more broadly, they cited research by the Office for Budget Responsibility (OBR) concluding that migrants are probably net contributors, since they are disproportionately of working age. In the fourth case, FactCheck compared the Labour and Conservative record on the economy via six surprisingly wide-ranging measures (FC14, 26/05/2017), and offered other potential explanations for variation over time, such as the effect of the global financial crisis on GDP, and the influence of the Living Wage Foundation on wage levels.

The most complex types of claim, predictions of the impact of a proposed course of action, were similarly infrequent (14.1%). These claims are theoretical projections based on past observations and interpretation of cause and effect, so cannot be conclusively proven or disproven, but the reasoning behind them can still be revealed as more or less well grounded. Fact-checkers looked at the assumptions made in calculations, especially on costing policy pledges, and highlighted reasons why they might be inaccurate. For example, Reality Check (RC16, 18/05/2017) criticised Labour for poor statistical practice in having extrapolated from an unrepresentative sample of NHS Trusts that responded to a Freedom of Information (FOI) request to estimate the cost of providing free hospital car parking, although they recognised that it was the best available data. However, at times it did seem that the lack of empirical certainty in predictions meant that they would always attract a negative verdict from fact-checkers, whatever figure they ventured.

Furthermore, in analyses of parties' fiscal projections, they seemed to be damned if they produced a balance sheet and damned if they didn't. Labour were criticised for putting a "suspiciously precise number" on how much they hoped to raise from tackling tax avoidance and evasion (FC03, 16/05/2017), and the Conservatives were criticised for not putting a figure on their revenue-raising plans, which included the same measure (FC06, 18/05/2017). Most fiscal projections are imprecise estimates because of the unpredictable nature of human behaviour and the economy, but it is unclear whether any figure would therefore be accepted as reasonable. Fact-checkers also struggled with the weight to accord to criticisms; FactCheck judged Labour's projections of revenue-raising from reversing the cut in corporation tax optimistic because in the long term people tend to change behaviour to avoid higher tax, such as "retire earlier, shift more of their income into pensions, or even leave the country" (FC03, 16/05/2017), but Reality Check judged the risk of behaviour change to be a minor caveat on a positive verdict on charging Value Added Tax (VAT) on private school fees (RC12, 07/05/2017).

Another type of claim that Uscinski and Butler (2013) argued was not suitable for fact-checking was that which centres on a definition. There were only three fact-checked claims (two in articles and one tweet) that fell clearly into this category, whilst a fourth item more effectively disambiguated a contested definition without fact-checking it as such. FactCheck tackled the media claim that the Labour manifesto was "more radical" than the last (FC04, 16/05/2017), but concluded only that it was similar but went further, leaving it to the audience to judge whether that constituted being more radical. In the second, Full Fact examined, inconclusively but with reasons for each interpretation, whether Brexit really meant, by definition, leaving the single market (FF25, 18/05/2017). A third appeared as a stand-alone tweet whilst live fact-checking the BBC Debate, in response to a Labour MP on Twitter saying, "we are not having a garden tax. It is nonsense" (RCT161, 06/06/2017). 'Garden tax' is a term that was invented to characterise Labour's proposal to reform or replace council tax (which is based on property valuations that were then 25 years old), with one option under consideration being land value tax. @BBCRealityCheck highlighted a screengrab of the relevant section in the manifesto, which seemed to imply that it did indeed constitute a garden tax, but then clarified in replies to its own tweet that there were good arguments for this proposal. The author of the claim was challenging the persuasive definition of the policy, which implies a negative valence by suggesting that everyone with a garden would pay more, but Reality Check's initial response treated her as denying the existence of the policy. The problem with this fact-check is not that it addresses a definition, but that it doesn't acknowledge that it is the definition that is being contested.


It could have fact-checked the definition, but it would first have to articulate the unstated inferences of the term. FactCheck (FC13, 22/05/2017) took this approach when checking Theresa May's claim that 'dementia tax' was "a term spread by the Labour Party to scare people." Whilst the fact-check mostly addressed who had coined the term and how it was used by Labour, it also explained that the term had been used to indicate an inconsistency in "the way care is categorised," in that dementia is an illness treated at home, where treatment is means-tested, whilst other illnesses are treated in hospital free of charge. The key underlying point being conveyed is that dementia is an illness, not just an inevitable part of old age. This definition is therefore more defensible when disambiguated, even though it is persuasive in form and purpose.

However, there are some statements that are not suitable for fact-checking. Not all political statements are claims; some are speech acts, most commonly promises in the form of policy pledges. For instance, FactCheck attempted to examine whether the Conservatives would really reduce immigration to the tens of thousands (FC08, 18/05/2017), or really increase NHS funding by £8bn (FC07, 18/05/2017). The first of these was not a promise to put a specific measure in place, but to achieve a certain outcome, so the fact-check can only cast doubt by pointing out that it is difficult to do and that the manifesto did not say how they would achieve it. The second was more explicitly about the party's credibility on spending promises, citing critics who argued that similar previous promises hadn't been delivered and accused them of "fiddling the figures." These are reasonable observations to inform a heuristic of trust, but in this case the verdict rests subjectively with the audience. The fact-check could instead, however, interrogate the evidence for the critics' argument from inconsistent commitment (Walton 2006: 125), that is, past broken promises and concealment thereof.

Interpretation

In many of these cases, including the most straightforward empirical measures, fact-checkers' interpretative judgement is open to challenge, and was indeed disputed by Twitter users on a number of different grounds:

• Statistical significance: In response to a fact-check on Corbyn's claim about child poverty in Scotland (RCT006, 24/04/2017), one Twitter user complained, "Your article says Corbyn 'may have been right' but your headline 'verdict' says 'he is right'. So which is it? You even show error margin," and another, "Article explains no statistically significant change but conclude change probably significant anyway? Pytahgoras [sic] wept! #MarchForScience."

• Missing context: To take one example of several, a positive verdict on Theresa May's claim of "record numbers of jobs" (RCT007, 25/04/2017) was challenged by Twitter users on the basis of the quality of jobs, arguing that many were part-time, precarious (with zero-hour contracts) and poorly paid.

• Interpretation of the original claim: Reality Check challenged John McDonnell's claim that "the burden in terms of tax is falling on middle and low earners" by pointing out that 25% of income tax is paid by the top 1% (RCT002, 19/04/2017). Replies on Twitter interpreted 'burden' as proportion of wealth rather than proportion of tax take, countering that the top 1% own a higher proportion of wealth than 25% and pay a lower percentage of their income as tax in comparison with ordinary people who can't avoid tax. This also implies different unspoken value premises, such as that income tax should be progressive, rather than that we should expect a maximum contribution from the richest.

• Unprovable versus false: FactCheck judged Corbyn "wrong on working-class students" being deterred from university by tuition fees (FC22, 07/06/2017, and FCT052, 07/06/2017), though the analysis acknowledged that social class was recorded using unreliable and inconsistent measures. One reply objected: "Not saying Corbyn is correct, but your analysis is flawed and doesn't address his point," to which @FactCheck replied, "It's for him to prove his point, as well, isn't it? What evidence does he have?" The Twitter user agrees, but maintains that it was "unfair of you to claim he is outright wrong based on your stats which don't address his actual claim."

• Challenging an appeal to expert opinion: In response to a live fact-checking verdict that increasing corporation tax would hurt consumers, two replies contested the issue with logic, pointing out that prices didn't go down when corporation tax went down.

All of these objections to fact-check verdicts are perfectly reasonable, and reflect the critical questions that should be posed to interrogate political argumentation.


This demonstrates that rejection of fact-checkers' conclusions cannot be assumed (as it is in much of the social psychology research) to demonstrate irrationality, even if there is some motivated reasoning behind these challenges.

However, interpretation in verdicts was also contested on more obviously politically motivated grounds. For instance, on cuts to police numbers (RC09, 02/05/2017, and RCT011, 02/05/2017), there was a slight discrepancy between Reality Check's figures and Labour's (due to a difference in the dates chosen for comparison, rather than rounding up—see Chap. 4 for further discussion of this example), but it was judged "about right," which was leapt on by critics in the Twitter replies: "How is this a reality check? If police officer numbers have dropped by 19k and Labour claim it's 20k then they have got the numbers wrong?", "So 5% out then? That's not really about right!" On the other hand, a Labour supporter was critical of the opposite potential interpretation: "so @BBCNews is that you actually supporting Labour for telling the truth or being petulant because they are slightly off with figures?"

This ambiguity in the acceptable margin of error, or in rhetorical rounding up or down, does not invalidate the exercise of fact-checking, however. It has long been a part of investigative journalism, where corroboration is rarely perfect (Ettema and Glasser 1998; Birks 2019). Such disagreement is also inevitable where facts are not free-floating but operationalised as premises in political argumentation, including by people on Twitter. Accordingly, there were replies that inferred intentional lying, and used this as ammunition in arguments with other Twitter users: "@_____ this is what I was telling you about. Fallon misleading/lying about military numbers! #VoteLabour," "Here is an example @_____ promised one thing never below 82,000 then back peddling. Same MP who was told he was talking B∗∗∗∗∗KS on TV." Motivated reasoning was also apparent in the Twitter audience's perception of bias.

Ethical Concerns: Impartiality and Audience Perception of Bias

There are two key areas of concern about impartiality raised in the literature. The most obvious is bias in the attention to and judgements on different parties, of which fact-checkers are highly conscious whilst also keen to avoid false balance (Graves 2016). Secondly, there could be an overall bias to negativity if fact-checkers favour claims that sound dubious, raising the concern that this could give a misleading impression of an individual politician's truthfulness or give voters the impression that politicians in general are habitually deceitful.


In the Twitter data, it appeared that people were very attuned to imbalance of attention, for example, "Could #bbc make a #realitycheck on the number of checks on Corbyn in comparison to those on T May and/or #Tories? How much is BBC skewed?" (RCT062, 31/05/2017), and from the opposite selective perception, "The BBC have fact-checked the Labour Party?? ∗loud thud as I faint∗" (RCT005, 21/04/2017). However, there also appeared to be an assumption that fact-checking implied a negative perception of a politician, even when the verdict was positive: "Why do you feel the need to confirm what [SNP leader] Nicola Sturgeon says as correct. Do you fact check Theresa Mays [sic] comments?" (RCT128, 04/06/2017).

Since public service broadcasters are often regarded as an ideal, though in practice rare, partner for fact-checkers (Graves and Cherubini 2016: 27), we might expect the BBC to be a trusted source for this service. However, the vast majority of accusations of bias were aimed at @BBCRealityCheck, perhaps because they actively sought an audience beyond their Twitter followers by using Twitter's mark-up culture, such as hashtags and @mentions, and were shared by other BBC journalists on their own accounts, but also due to a widespread perception of mainstream media, and especially BBC, bias against Labour (Cammaerts et al. 2017).

'Balance' of Fact-Checking on Different Parties' Claims

Contrary to this perception of bias, fact-checkers actually paid fairly even attention overall to both main parties' claims, with only very slightly more of Labour's claims checked (38 to 31) in full articles, and the same on Twitter (47 to 44). There was some variation between fact-checkers, however: Reality Check did have a slightly stronger skew towards checking Labour claims, whilst FactCheck was more evenly balanced, and Full Fact sat somewhere between the two. Even so, this did not translate into greater criticism of claims by Labour politicians, since verdicts on Labour claims were disproportionately positive, and on Conservative claims skewed conversely to the negative (see Table 3.1). Furthermore, the positive skew in verdicts on Labour claims was driven by Reality Check, the fact-checker most accused of anti-Labour bias (64.3% positive compared to 20.0% from FactCheck and 25.0% from Full Fact). The negative skew on Conservative claims was most marked in Full Fact's checks (54.5%), whilst FactCheck gave mostly mixed judgements.


Table 3.1  The distribution of verdicts in full fact-check articles on claims by source of claim

                      Positive        Negative        Mixed/unclear    Total
Labour                13 (41.9%)       9 (29.0%)       9 (29.0%)        31
Conservative           7 (24.1%)      12 (41.4%)       9 (31.0%)        29
Other parties          2 (33.3%)       0 (0.0%)        4 (66.7%)         6
Multiple parties       3 (50.0%)       1 (16.7%)       2 (33.3%)         6
Civil society          6 (50.0%)       4 (33.3%)       2 (16.7%)        12
Total                 31 (36.5%)      26 (30.6%)      28 (32.9%)        85
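Cross-tabulations like Table 3.1 are produced by tallying coded items against their verdicts. As a purely illustrative sketch of that tallying step (the items and codes below are hypothetical, not the study's actual coding sheet), in Python:

    from collections import Counter

    # Hypothetical coded items: one (source of claim, verdict) pair per
    # full fact-check article; the study itself coded 85 such articles.
    coded_items = [
        ("Labour", "positive"), ("Conservative", "negative"),
        ("Civil society", "positive"), ("Labour", "mixed/unclear"),
        # ... one tuple per article in the sample
    ]

    tally = Counter(coded_items)

    for source in sorted({s for s, _ in coded_items}):
        row_total = sum(n for (s, _), n in tally.items() if s == source)
        for verdict in ("positive", "negative", "mixed/unclear"):
            n = tally[(source, verdict)]
            # Percentages are within-source (row) proportions, as in Table 3.1.
            print(f"{source:16} {verdict:14} {n:3} ({n / row_total:.1%})")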

On Twitter, since only around a third of fact-checkers' tweets gave a verdict in the tweet text, there were only 31 verdict tweets on Labour or Conservative claims (and a further 37 on other parties, mostly from @BBCRealityCheck), so we should be cautious of reading too much into any pattern, but both media-based fact-checkers were precisely even-handed in their verdicts on Labour in the tweet text (five each positive and negative for @BBCRealityCheck and four each for @FactCheck). On the Conservatives, meanwhile, they were skewed in opposite directions: @BBCRealityCheck gave twice as many positive verdicts (four to two) on Conservative claims, whilst @FactCheck did not check a single one positively (three negative and one mixed).

However, parties did not always make claims about their own policies—over three-quarters (77.4%) of Labour claims adjudicated on in full fact-checks were about the Conservatives' record in government (compared to 39.2% of Conservative claims). Another two checked Labour claims criticising Conservative policy pledges (RC13, 07/05/2017, contextualising the proposed increase in mental health nurses against previous cuts, and FF27, 26/05/2017, criticising the social care policy for not specifying a cap). Furthermore, criticisms of policy proposals also came from elsewhere, such as think tanks (FC19, 31/05/2017: Nuffield Trust on the impact of hard Brexit on the NHS; RC59, 02/06/2017: Policy Exchange on the effectiveness of grammar schools), the media (FF31, 31/05/2017: Daily Express on Labour's 'garden tax') and even social media (FC09, 18/05/2017: complaints on Twitter that the Conservatives' removal of the pension triple lock meant pensions would be cut).


These claims broadened the scope of fact-checking to some degree, but sources from civil society made up only 15% of fact-check items with claims specified, against the remaining 85% that checked or explained politicians' claims.

'Balance' of Fact-Checking on Different Parties' Policies and Performance

Looking instead at the balance of policies, therefore, there were rather more items on the Conservatives (34 to 20) and a slightly higher proportion were fully fact-checked than Labour's (55.8% to 45.0%), so there were almost twice as many verdicts on claims about the Conservatives' policies as on claims about Labour's (19 to 9; see Table 3.2). Both parties' claims about their own policies were more negatively than positively fact-checked, but whilst both criticisms of Labour policy were debunked, two-thirds of criticisms of Conservative policy were upheld. Critics' claims about the Conservative government's performance in power were positively skewed, but so too were Conservative claims about their own record. Whilst there doesn't appear to be any artificial 'balance' in verdicts between the two parties, neither is there a strong steer that one party is more dishonest than the other.

Table 3.2  The distribution of verdicts in full fact-checks on party policies and government performance

                                  Positive        Negative        Mixed/unclear    Total
Conservative policy                5 (25.0%)       8 (40.0%)       6 (30.0%)        19
 – Tory policy claims              1 (7.7%)        7 (53.8%)       5 (38.5%)        13
 – Criticisms of policy            4 (66.7%)       1 (16.7%)       1 (16.7%)         6
Labour policy                      1 (11.1%)       6 (66.7%)       2 (22.2%)         9
 – Labour policy claims            2 (33.3%)       4 (66.7%)       0 (0.0%)          6
 – Criticisms of policy            0 (0.0%)        2 (100%)        0 (0.0%)          2
Govt performance                  24 (48.0%)      11 (22.0%)      15 (30.0%)        50
 – Govt performance claims         5 (45.5%)       4 (36.4%)       2 (18.2%)        11
 – Criticisms of performance      13 (44.9%)       7 (24.1%)       9 (31.0%)        29
Total full fact-checks            31 (36.5%)      26 (30.6%)      27 (31.8%)        85

Note: Policy and performance are not mutually exclusive


There were certainly specific agendas in terms of which policies the two parties were interrogated on, however. Over a quarter of Conservative policy fact-checks were on the controversial social care proposals and the fudged u-turn, followed by the two issues—immigration (18%) and Brexit (15%)—on which they were challenged by the UK Independence Party (UKIP), and on which they hoped to win back UKIP supporters following the EU referendum result. Seven in ten checks on Labour policies were on their costings, and these made up almost two-thirds (64%) of all fiscal fact-checks, compared to just 9% on Conservative pledges, not least because the Conservatives did not feel that they needed to specify their costings.

On Twitter, this caused some irritation among Labour supporters, the more reasonable of whom thought this missed the point of the anti-austerity argument:

    Wow. Is this what ALL of your reports are on? How about focusing on their policies? They already explained how everything is funded. (RCT030, 16/05/2017)

Others resented the (assumed) lack of scrutiny of the Conservatives' sums, despite the absence of any costings to check, with around a third of the 72 replies to Reality Check's assessment of Labour's costings (RCT153, 06/06/2017) claiming bias, 17 of which demanded that they perform the same check on the Conservative manifesto. This drew four responses pointing out that they had indeed performed an equivalent (though obviously inconclusive) check on the Conservatives, with one even mocking the knee-jerk response: "What a lot of parrots. Have you checked the conservatives Have you checked the conservatives Have you checked the conservative [sic]. Go read it."

Although this is indicative of motivated reasoning, it is also a rational response to the inequities of the communication environment, where the news media assume that Conservative Party claims on fiscal and economic issues are credible—to the point where the party doesn't even feel the need to give specifics—whilst Labour claims require interrogation. In a spirit of impartiality, fact-checkers did note that the Conservatives had not provided any costings for their policies—at one point even confirming a tweet about an audience heckle to that end in Channel 4 and Sky News' televised event, The Battle For Number 10 (FCT38, 29/05/2017)—but one limitation of fact-checking for holding politicians to account is that they can only check what politicians do say, not what they omit. However, some negative responses to fact-checkers (mostly Reality Check) on Twitter appear to punish them for the perceived sins of the wider news agenda and framing.


Perception of Bias on Twitter

Most of the accusations of bias aimed at @BBCRealityCheck on Twitter were actually against the BBC as a whole. Those that specified their reasoning largely pointed to the perceived valence of the main BBC News Twitter account (e.g. "Everything the @BBCNews tweets is either Anti Labour or Pro Tory. #Agendamuch"; RCT153, 06/06/2017) or to its status as a public service broadcaster that is publicly funded, or "government funded" (conflating public service broadcasting with state broadcasting), and an "establishment apologist and supporter" (RCT005, 21/04/2017). Some scepticism of media impartiality is of course a healthy part of media literacy, but these remarks use attacks on the credibility of the organisation to dismiss the assertion without engaging with it, in some cases clearly without having read the full article.

Many of these replies are to @bbclaurak as well as @BBCRealityCheck, and are in fact replying to BBC Political Editor Laura Kuenssberg retweeting the fact-checker. Since there was strong hostility to Kuenssberg for perceived bias against Corbyn, amplified by alt-left websites such as The Canary and Evolve Politics (Moore and Ramsay 2017), there were indications of particularly strong motivated reasoning in responses to her tweets. For example, one Twitter user replied, "Played Laura. Digging deep to smear Labour. Any chance you could ask the Tories about their record of the past 7 years," in response to Kuenssberg's bland tweet "Water, water. Reality Check examines Labour's water bill claim," retweeting the fact-checker's positive verdict on Labour's claim that water bills had risen disproportionately since privatisation (RCT029, 16/05/2017). When another Twitter user pointed out, "Err… to be fair that is a link to a fact check which supports Labour's claim, that's the reverse of a smear," the critic styles it out: "So he's stating fact then?"; "@bbclaurak So his numbers were completely accurate but don't worry about it because we were MAINLY ripped off shortly after privatisation?"

However, for those coming across @BBCRealityCheck tweets indirectly, it is not just assumptions about the wider organisational politics that prime Corbyn supporters to look for evidence of bias. The name 'Reality Check' in itself seems to have been met with some resistance. Whilst most fact-checkers in Anglophone countries mention 'facts' somewhere in their title (FactCheck, Fact Checker, Full Fact, Politifact and so on), the BBC has been more original, but when encountered on Twitter via a trending hashtag or a retweet, the perception appears to be coloured by the popular connotations of the idiom.


Whilst the Cambridge Dictionary defines 'reality check' in the way the BBC intends, as "an occasion that causes you to consider the facts about a situation and not your opinions, ideas, or beliefs,"3 the controversial but popular website Urban Dictionary defines it as, "A word or phrase used to bring a person back into the life of those around them, sometimes used to smash hopes and dreams."4 In other words, there is a perception that to give someone a reality check is to give them a 'wake-up call,' to disabuse them of a cherished belief, and so the endeavour itself can be seen as intrinsically critical and even confrontational. This would seem to explain responses such as "What the hell is 'BBC Reality Check' when did you take it on yourselves to spew propaganda so blatantly, oh hang on 2010, wasn't it" (RCT148, 06/06/2017—an explainer on existing police powers to stop terrorists). This is perhaps also reflected in the demand for other, non-factual assertions to be given a 'reality check,' such as Theresa May's demeanour on television or her slogan: "Do your job hold her to account she sounded possessed?!? Reality check Strong and Stable' #GE17" (RCT007, 25/04/2017).

Negativity Bias

Although responses on Twitter sometimes assumed that fact-checking was an endeavour aimed at calling out mistakes, overall there was a slight bias to positivity: 36.5% of full fact-checks issued positive verdicts and 31.8% negative, with the remainder either mixed (21.2%) or unclear (for instance, on whether a discrepancy between the figure cited and the precise number was interpreted as acceptable rounding up or down, or as incorrect). Reality Check was the most inclined to issue fact-checks with a positive verdict (48.7% to 30.8% negative), whilst FactCheck was slightly more inclined to negativity (33.3% negative to 22.2% positive, with a higher proportion mixed or unclear, and a similar pattern in tweets), and Full Fact was fairly balanced between the two. FactCheck's bias to negativity is actually stronger when taking into account the fact that all of the positive verdicts on claims about government performance were given to criticisms from opposition parties and civil society. It is possible, then, that FactCheck could reinforce an impression of politicians as broadly dishonest or incompetent.

3 https://dictionary.cambridge.org/dictionary/english/reality-check.
4 https://www.urbandictionary.com/define.php?term=reality%20check.


Reality Check’s positivity bias, meanwhile, was even stronger on Twitter, as articles with positive verdicts were more likely to be tweeted out at all (90.5% to 72.7% of negative verdicts), and to be tweeted out more than once, though the verdict was less likely to be in the tweet text in subsequent iterations. Overall, @BBCRealityCheck ran twice as many positive as negative verdicts in the tweet text, almost two-thirds of which were related to parties other than the two main contenders—some, but not all (58.8%) were criticisms of the government record. This could indicate that there is a different approach to choosing claims to check, with FactCheck more inclined to select the most dubious-­ sounding claims, whilst Reality Check as associated with the main a public service broadcaster is less susceptible to hyperadversarial definitions of newsworthiness, but it is more likely that the difference is explained by the greater number and therefore range of fact-checks carried out by Reality Check. Another significant explanatory factor, however, is the type of claims chosen, in terms of their place in the wider argumentation and their empirical or theoretical basis; 87% of Reality Check full fact-checks (with verdicts) were on the less ambiguous statistical measures of current circumstances, compared to only half of FactCheck’s.5 For the same reason, checks on the government record—generally via national statistics—were more often given positive verdicts (62.5%) than checks on policy proposals (19.4%), since the latter were typically predictions of the impact of a policy change and relied on several expert opinions. This connection between the type of claim, and in particular its empirical qualities, and the valance of the verdict raises the possibility that fact-checking—if a serious concern for politicians—could inadvertently incentivise attacks on government performance as a campaigning tactic for opposition parties, and even appeals by incumbent parties to their record in government, and discourage claims about policy. This could reinforce the focus on trust and competence associated with personality politics and the marginalisation of policy substance. Nonetheless, the fact-checks that highlight uncertainty in predictions do not frame this in the hyperadversarial terms of ‘gotcha’ journalism, so we should not assume that these verdicts would undermine politicians’ reputations for honesty, though of course they may be selectively used and interpreted to reinforce existing negative political beliefs about a politician 5  In contrast, almost two in five of FactCheck’s full fact-checks were on predictions, compared to one in ten of Reality Check’s.


Table 3.3  Average engagement with tweets by party claim and valence of verdict

                                    @BBCRealityCheck                     @FactCheck
                                    No. of  Avg      Avg   Avg     No. of  Avg      Avg   Avg
                                    tweets  replies  RTs   likes   tweets  replies  RTs   likes
All positive verdicts                 24     17      153   105       6      12      359   212
All negative verdicts                 12     19      115    71       8      22      303   271
All mixed/unclear verdicts            17     10       40    28       4       5       50    33
Favourable to Labour and
  critical of Tories                   7     31      363   191       4      39      719   610
Favourable to Tories and
  critical of Labour                   9     22       95    73       4       9       26    14
All verdict tweets                    53     15      108    72      15      15      247   196
All other forms of tweet             127      7       14    12      37       6       29    22
Overall averages                     180      9       41    30      52       9       92    72
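The 'Overall averages' rows in Table 3.3 are consistent, to rounding, with averages of the verdict and non-verdict rows weighted by the number of tweets in each category. A minimal sketch of that arithmetic, using @BBCRealityCheck's reply figures from the table:

    # Rows from Table 3.3 for @BBCRealityCheck: (number of tweets, average replies).
    verdict_tweets = (53, 15)   # all verdict tweets
    other_tweets = (127, 7)     # all other forms of tweet

    def weighted_average(*rows):
        """Combine per-category averages, weighting each by its tweet count."""
        total = sum(n for n, _ in rows)
        return sum(n * avg for n, avg in rows) / total

    # (53 * 15 + 127 * 7) / 180 is roughly 9.4, matching the rounded overall figure of 9.
    print(round(weighted_average(verdict_tweets, other_tweets)))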

More often, though, Twitter users brought their own heuristics of distrust in politicians to challenge positive verdicts, especially on Diane Abbott's claim about police numbers (RCT011, 02/05/2017).

Although the evidence from the experimental psychology literature suggests that people are, broadly speaking, more attentive to negativity, this is not the full story. Engagement on Twitter is an imperfect approximation of attention, but it is clear that there is no consistent bias to negativity by that measure, and in fact positive verdicts appear to have been retweeted slightly more than negative ones (see Table 3.3). Differentiating the verdicts by party, however, revealed a strong pattern across both fact-checkers of greater engagement with tweets with verdicts that found against Conservative claims, and in favour of Labour claims, especially their criticisms of the Conservatives.6 This was particularly true of retweets, the strongest indicator of agreement. It's not clear whether this reflects a pro-Labour or more broadly anti-Tory bias in fact-checkers' Twitter followers—though there is some evidence of a left-wing bias in Twitter users overall, mostly related to its younger, more educated demographic (Mellon and Prosser 2017)—or that opponents of the incumbent party tend to be more engaged in advocacy and campaigning, in this case potentially energised by frustration at the dominant assumption of a Conservative landslide victory.

6 Engagement with tweets with positive verdicts on Labour criticisms of the government record was far higher for those on public service funding and staffing (RCT015 on mental health staff and both RCT011 and FCT49 on cuts to policing) than for those on growing inequality (RCT091 on falling wages, and RCT006 and RCT038 on child and pensioner poverty, respectively), perhaps reflecting Twitter users' own priorities, or the arguments they thought most effectively persuasive for others.

Some tweets attracted relatively low overall engagement but disproportionately high replies, where those responding disagreed with the fact-check verdict, and especially when they had grounds to dispute it. For example, the check on Theresa May's claim of "record numbers of jobs" (RCT007, 25/04/2017) attracted almost as many replies as likes (whereas on average there were around four times as many likes as replies), most of which challenged the verdict on an empirical basis.

Sources Chosen

As with the choice of claims, there are empirical and ethical concerns with the choice of sources drawn on. In comparison with standard journalistic practice, where claims from two or more sources are presented without resolving or explaining differences, in fact-checking journalism we would expect to see some triangulation or corroboration of multiple empirical sources, akin to investigative journalism, albeit on a smaller scale (Birks 2019). In ethical terms, the concern is whether to draw on partial or interested sources, or only to use those that are independent or impartial (Graves 2016: 124–5).

Empirical Issues: Corroboration

It is difficult to know how much work has gone on behind the scenes to corroborate facts and figures, especially since Full Fact admits to speaking to official sources off the record for background (Graves and Cherubini 2016), and where different sources are cited to support different parts of the question they may yet have influenced the rest of the analysis. There is, however, evidence of different sources being used to triangulate a single point, such as reports from the NAO (National Audit Office) and the IFS (Institute for Fiscal Studies) as well as Department for Education statistics reinforcing conclusions on schools funding (FC02, 10/05/2017).


We also see checks on sources cited in the political claim against other sources, such as querying the Nuffield Trust's use of a parliamentary committee answer on the number of pensioners living in EU countries and covered by the reciprocal health scheme, by asking the Department of Health (FC19, 31/05/2017).

That corroboration was not the norm, however. One in ten items cited no sources other than manifestos (including four explainers comparing policies on particular issues, and a fact-check on the social care u-turn comparing the Conservative manifesto to subsequent speeches) or other political or parliamentary records (such as on the no-confidence vote in Jeremy Corbyn). One drew only on political knowledge and logic to fact-check Theresa May's assertion that a decisive election result could strengthen the UK's negotiating position on Brexit (RC02, 19/04/2017). Over half (55.1%) drew on two or more sources, and three fact-checks used as many as seven sources, but the most common number of sources was just one (34.6%). In over half (56.8%) of those cases, the article is a straightforward check on the selection and presentation of official statistics, where the Office for National Statistics (ONS) is the only source required, especially on claims about government performance (13 items), and largely on immigration (6) and employment (5), where very narrow statistical claims were selected, avoiding the argumentative context. However, in over a fifth of single-sourced fact-checks the analysis depended entirely on a single expert source. All four from Reality Check were fiscal explainers, but they covered similar ground to FactCheck's three verdicts on predictions, all relying solely on the IFS, raising ethical questions about how complete their explanations are.

Ethical Issues: Impartiality of Sources

The dominance of official sources in single-sourced fact-checks, and indeed overall (appearing in two-thirds of articles), reflects an assumption that they are the most extensive and reliable statistics available, though multiple measures were drawn on where official figures were known to be flawed, such as crime statistics. Unsurprisingly, claims about measures were more likely to draw on official statistics (74.6%), as compared to predictions (25.0%). In contrast, experts were more commonly drawn on in relation to predictions (58.3%), and overall half as frequently as official statistics. Interest organisations such as trade unions and professional associations were drawn on less frequently (10.2%), but it's interesting that they are used at all, since they don't claim to be impartial.


They do, however, offer specialist knowledge and often (for instance, in the case of trade associations and trade unions) original analysis of official statistics. Even so, only one item cited an interest group exclusively—an explainer on the objections of the campaign group Women Against State Pension Inequality to the implementation of changes to women's retirement age (RC44, 30/05/2017).

Expert sources were relatively rarely cited without other kinds of source also being present, but of those 14 instances, 10 relied on one single expert source, eight of which relied solely on the IFS. Indeed, the IFS was the only expert organisation other than the Migration Observatory (two instances, both Full Fact) to be used as the sole source for a fact-check. IFS predictions were also reported with more certainty than others, concluding, for instance, that Labour business taxes would hurt pensions, workers and consumers, and that Conservative cuts to immigration would hurt the economy (FC16, 26/05/2017), without offering the reasons and analysis that support those conclusions, or any counter-arguments from their own analysis or from other sources. This follows the high credibility accorded to the IFS in mainstream reporting—regarded by some journalists as the 'word of god'—but it does take a position within a particular economic paradigm that makes it more sceptical of Keynesian growth through public investment (Anstead and Chadwick 2018). This orientation to the IFS drew attention on Twitter when used in the tweet text in a live fact-check on Corbyn's claim that 95% of people would pay no more tax under Labour's plans—"IFS point out if you tax business it affects workers, customers, pensioners" (FCT30, 29/05/2017). In addition to a question about who funds the IFS, one reply queried this single-source expert judgement: "The IFS analysis is very interesting, but is not your own Fact Check research. Bit misleading to link from that account."7

7 Interestingly, the most forceful criticism came from the freelance journalist and former Channel 4 News economics editor Paul Mason, who retorted that "That's an opinion And not a fact. The IFS theory is discredited in academia—pls withdraw," though others pointed out that he had not offered a source to substantiate his counter-claim.

Argumentatively, these fact-checks are an appeal to expertise and seem to consider the IFS itself as a sort of fact-checker, perhaps in what Graves and Cherubini (2016) call the policy expert model. Indeed, Full Fact published an article authored by two analysts from the IFS (FF06, 10/05/2017, on different ways of defining 'the rich').


The most contested sources on Twitter, however, were public opinion polls. The explicit purpose of voting intention polls is to provide evidence for predicting the result, but they can also be interpreted as an appeal to popular opinion, implying that the majority view is necessarily the right one (an argumentative fallacy). Some explicitly argued that polls are often politicised to marginalise opposing opinion, and can therefore be a self-fulfilling prophecy, for example, "Polls are the new way of bending peoples [sic] minds & swaying their voting intentions" (FCT48, 02/06/2017); another shared a meme of a quote attributed to Peter Hitchens: "Opinion polls are a device for influencing public opinion, not a device for measuring it. Crack that, and it all makes sense" (FCT03, 24/04/2017). There were also accusations that YouGov, cited as an expert source to explain the problems with viral polls, were themselves biased ("a Tory-owned website") or had a flawed methodology that generated a biased result ("full of Alt right trolls with multiple duplicate accounts").

However, one poll finding was used to challenge an appeal to popular opinion: when Home Secretary Amber Rudd characterised as "outrageous" Jeremy Corbyn's view that British foreign policy favouring military intervention made the country more of a target for terrorists, @FactCheck showed a screengrab of a YouGov poll result to demonstrate that "66% of voters disagree with her + back Corbyn's view" (FCT20, 26/05/2017). Only 1 in 8 of the 91 visible replies reacted to the poll, though most that did were critical of the methodology or pointed out that @FactCheck had used the unweighted figure; the majority responded with their own view on Corbyn's or Rudd's assertion, or even tried to address the evidence for correlation.

Summary and Conclusion

The fact-checking journalism examined in this sample does not naively over-simplify the status of truth claims that cannot be empirically adjudicated on, but uses a mix of explanatory journalism and fact-checking verdicts to demystify claims, misleading interpretations and well-intentioned but flawed statistical practice. This flexibility allows a wider range of claims to be examined than just the use and misuse of statistics or the record of what someone had previously said, which is ultimately more useful for informing political debate. If anything, fact-checkers were too timid on issues such as immigration, sticking to net migration statistics and shying away from the more contentious aspects of the argument.


BBC Reality Check was particularly guilty of this, and it may in part be due to the greater pressure the broadcaster is under to be scrupulously fair to both or all sides, and even to increase the number of positive verdicts, in the face of vocal criticism on Twitter and elsewhere.

There was significant evidence of motivated reasoning and of heuristics of distrust directed at the BBC in particular. Some responses, particularly among those who appeared unfamiliar with fact-checking journalism, and who did not appear to have clicked through to the whole article, were dismissive of verdicts that did not favour the party they supported. However, just as frequently, responses on Twitter made a reasoned challenge to a fact-check verdict, which may have been motivated by political partisanship, but which raised or answered appropriate critical questions of the argumentative appeal being made. In contrast to the assumptions in social psychology research on the impact of fact-checking on the audience, then, this research finds that people have reasons to challenge fact-checkers' verdicts. These exchanges may not be empirically conclusive, but they do constitute 'public reasonableness' (Schudson 2008) and enrich, rather than invalidate, the usefulness of fact-checking.

FactChecks

FC02, 10/05/2017, FactCheck, 'The school funding crisis (Q&A)' https://www.channel4.com/news/factcheck/factcheck-qa-the-school-funding-crisis
FC03, 16/05/2017, FactCheck, 'The Labour manifesto: Is it really fully costed?' https://www.channel4.com/news/factcheck/factcheck-the-labour-manifesto
FC04, 16/05/2017, FactCheck, 'The Labour manifesto: Is it more radical than the last one?' https://www.channel4.com/news/factcheck/factcheck-the-labour-manifesto
FC05, 18/05/2017, FactCheck, 'Should we nationalise the railways? (Q&A)' https://www.channel4.com/news/factcheck/factcheck-qa-should-we-nationalise-the-railways
FC06, 18/05/2017, FactCheck, 'The Conservative manifesto—The big questions: Do the numbers add up?' https://www.channel4.com/news/factcheck/factcheck-the-conservative-manifesto-the-big-questions
FC07, 18/05/2017, FactCheck, 'The Conservative manifesto—The big questions: Will there really be £8bn extra for the NHS?' https://www.channel4.com/news/factcheck/factcheck-the-conservative-manifesto-the-big-questions
FC08, 18/05/2017, FactCheck, 'The Conservative manifesto—The big questions: Will they really cut immigration?' https://www.channel4.com/news/factcheck/factcheck-the-conservative-manifesto-the-big-questions
FC09, 18/05/2017, FactCheck, 'The Conservative manifesto—The big questions: Will pension spending fall?' https://www.channel4.com/news/factcheck/factcheck-the-conservative-manifesto-the-big-questions


FC13, 22/05/2017, FactCheck, 'Theresa May's misleading claims on the 'Dementia Tax' U-turn' https://www.channel4.com/news/factcheck/theresa-mays-misleading-claims-on-the-dementia-tax-u-turn
FC14, 26/05/2017, FactCheck, 'Which party has a better track record on the economy?' https://www.channel4.com/news/factcheck/factcheck-qa-which-party-has-a-better-track-record-on-the-economy
FC16, 26/05/2017, FactCheck, 'The IFS verdict on Labour, Tory and Lib Dem spending plans' https://www.channel4.com/news/factcheck/factcheck-the-ifs-verdict-on-labour-tory-and-lib-dem-spending-plans
FC19, 31/05/2017, FactCheck, 'Will expat pensioners really cost the NHS £1billion' https://www.channel4.com/news/factcheck/will-expat-pensioners-really-cost-the-nhs-1-billion
FC22, 07/06/2017, FactCheck, 'Corbyn wrong on working-class students' https://www.channel4.com/news/factcheck/factcheck-corbyn-wrong-on-working-class-students
FF06, 10/05/2017, Full Fact, 'If politicians talk about the rich, always ask who they mean' https://fullfact.org/economy/if-politicians-talk-about-rich-always-ask-who-they-mean/
FF15, 16/05/2017, Full Fact, 'Labour manifesto: Food banks' https://fullfact.org/economy/labour-manifesto-2017-food-banks/
FF20, 18/05/2017, Full Fact, 'Conservative Manifesto: Immigration and public services' https://fullfact.org/event/2017/May/18
FF25, 18/05/2017, Full Fact, 'Conservative Manifesto: Leaving the single market' https://fullfact.org/europe/conservative-manifesto-2017-leaving-single-market/
FF27, 26/05/2017, Full Fact, 'Will there be a cap on social care costs?' https://fullfact.org/health/will-there-be-cap-social-care-costs/
FF29, 30/05/2017, Full Fact, 'NHS video, factchecked' https://fullfact.org/health/nhs-video-factchecked/
FF31, 31/05/2017, Full Fact, 'Labour's Land Value Tax: will you have to sell your garden?' https://fullfact.org/economy/labours-land-value-tax-will-you-have-sell-your-garden/
RC02, 19/04/2017, Reality Check, 'Could election improve UK's Brexit position?' https://www.bbc.co.uk/news/uk-politics-39641230
RC03, 20/04/2017 (20/04/2017), Reality Check, [on blogroll, linking to:] 'How much does the UK spend on aid?' https://www.bbc.co.uk/news/topics/cp7r8vgl2rgt/reality-check#main-content
RC06, 25/04/2017, Reality Check, 'Have we seen record numbers of jobs?' https://www.bbc.co.uk/news/uk-politics-39710052
RC09, 02/05/2017, Reality Check, 'Are there 20,000 fewer police?' https://www.bbc.co.uk/news/uk-politics-39779288
RC10, 03/05/2017, Reality Check, 'What do we know about the 100bn euro Brexit bill?' https://www.bbc.co.uk/news/uk-politics-39795629


RC12, 07/05/2017 (06/04/2017), Reality Check, 'Extending VAT to private school fees' [on blogroll, linking to:] 'Do Labour's sums add up on free school meals?' https://www.bbc.co.uk/news/education-39518380
RC13, 07/05/2017, Reality Check, 'Are there 6,700 fewer mental health staff?' https://www.bbc.co.uk/news/uk-politics-39836506
RC16, 08/05/2017, Reality Check, 'Will free hospital parking cost £162m?' https://www.bbc.co.uk/news/health-39847463
RC19, 11/05/2017, Reality Check, 'What does nationalising the railways mean?' https://www.bbc.co.uk/news/uk-politics-39886998
RC26, 16/05/2017, Reality Check, 'Who would be affected by Labour's higher taxes?' https://www.bbc.co.uk/news/election-2017-39923783
RC42, 28/05/2017, Reality Check, 'What impact have employment tribunal fees had?' https://www.bbc.co.uk/news/election-2017-40014115
RC43, 28/05/2017, Reality Check, 'What laws stop terror suspects travelling?' https://www.bbc.co.uk/news/election-2017-40046562
RC44, 30/05/2017, Reality Check, 'The fight over women's state pensions' https://www.bbc.co.uk/news/election-2017-40014575
RC59, 02/06/2017 (08/09/2016), Reality Check, [on blogroll, linking to:] 'Do grammar schools work?' https://www.bbc.co.uk/news/education-37310541
RC64, 04/06/2017, Reality Check, 'London Bridge attack: What powers do the police have?' https://www.bbc.co.uk/news/election-2017-40152190

Tweets

FCT03, 24/04/2017, https://twitter.com/FactCheck/status/856562538154209281
FCT20, 26/05/2017, https://twitter.com/FactCheck/status/868123660590211072
FCT30, 29/05/2017, https://twitter.com/FactCheck/status/869277470616432642
FCT38, 29/05/2017, https://twitter.com/FactCheck/status/869290411583557634
FCT48, 02/06/2017, https://twitter.com/FactCheck/status/870690344253276163
FCT50, 06/06/2017, https://twitter.com/FactCheck/status/872099662890774528
RCT002, 19/04/2017, https://twitter.com/BBCRealityCheck/status/854689007283863552
RCT005, 21/04/2017, https://twitter.com/BBCRealityCheck/status/855447522210402305


RCT006, 24/04/2017, https://twitter.com/BBCRealityCheck/status/856542997802098688
RCT007, 25/04/2017, https://twitter.com/BBCRealityCheck/status/856907265038843904
RCT011, 02/05/2017, https://twitter.com/BBCRealityCheck/status/859394476078497794
RCT015, 07/05/2017, The Conservatives pledge an extra 10,000 mental health staff. Labour say they have cut 6,700 since 2010. #GE2017, https://twitter.com/bbcrealitycheck/status/861226494126952448
RCT029, 16/05/2017, https://twitter.com/BBCRealityCheck/status/864449495832121344
RCT030, 16/05/2017, https://twitter.com/BBCRealityCheck/status/864521664578691072
RCT038, 19/05/2017, John McDonnell cites poverty figures while discussing Winter Fuel Payment. #GE2017, https://twitter.com/bbcrealitycheck/status/865606468032225280
RCT062, 31/05/2017, https://twitter.com/BBCRealityCheck/status/870006716540125185
RCT064, 31/05/2017, https://twitter.com/BBCRealityCheck/status/870010256482603010
RCT091, 02/06/2017, Corbyn says wages are falling. This is what's happened to pay over the past 10 years, inflation-adjusted #bbcqt
RCT122, 04/06/2017, https://twitter.com/BBCRealityCheck/status/871821037184438279
RCT128, 04/06/2017, https://twitter.com/BBCRealityCheck/status/871827243542212608
RCT148, 06/06/2017, https://twitter.com/BBCRealityCheck/status/872177671186329601
RCT153, 06/06/2017, https://twitter.com/BBCRealityCheck/status/872183038519767041
RCT161, 06/06/2017, https://twitter.com/BBCRealityCheck/status/872190691803377665

References

Anstead, Nick, and Andrew Chadwick. 2018. 'A Primary Definer Online: The Construction and Propagation of a Think Tank's Authority on Social Media', Media, Culture & Society, 40: 246–66.
Birks, Jen. 2019. 'Fact-checking, False Balance, and 'Fake News': The Discourse and Practice of Verification in Political Communication.' In Stuart Price (ed.), Journalism, Power and Investigation: Global and Activist Perspectives (Routledge: London).


Blumler, Jay G. 2019. 'BBC Campaign Coverage Policy.' In Dominic Wring, Roger Mortimore and Simon Atkinson (eds.), Political Communication in Britain (Palgrave Macmillan: London).
Cammaerts, Bart, Brooks DeCillia, and César Jimenez-Martínez. 2017. 'Journalistic Transgressions in the Representation of Jeremy Corbyn: From Watchdog to Attackdog', Journalism. https://doi.org/10.1177/1464884917734055.
Dobbs, Michael. 2012. 'The Rise of Political Fact-checking: How Reagan Inspired a Journalistic Movement', New America Foundation, Accessed 21 December 2018. https://www.issuelab.org/resources/15318/15318.pdf.
Ettema, James S., and Theodore Glasser. 1998. Custodians of Conscience: Investigative Journalism and Public Virtue (Columbia University Press: New York, NY).
Graves, Lucas. 2016. Deciding What's True: The Rise of Political Fact-checking in American Journalism (Columbia University Press: New York, NY).
Graves, Lucas, and Federica Cherubini. 2016. The Rise of Fact-Checking Sites in Europe (Reuters Institute for the Study of Journalism: Oxford).
Mellon, Jonathan, and Christopher Prosser. 2017. 'Twitter and Facebook Are Not Representative of the General Population: Political Attitudes and Demographics of British Social Media Users', Research & Politics, 4. https://doi.org/10.1177/2053168017720008.
Moore, Martin, and Gordon Ramsay. 2017. 'Caught in the Middle: The BBC's Impossible Impartiality Dilemma.' In Einar Thorsen, Dan Jackson and Darren Lilleker (eds.), UK Election Analysis 2017 (Bournemouth University: Bournemouth).
Schudson, Michael. 2008. Why Democracies Need an Unlovable Press (Polity: London).
Uscinski, Joseph E., and Ryden W. Butler. 2013. 'The Epistemology of Fact Checking', Critical Review, 25: 162–80.
Walton, Douglas. 2006. Fundamentals of Critical Argumentation (Cambridge University Press: Cambridge).

CHAPTER 4

The Role of Fact-Checking in Political Argumentation

Abstract  This chapter focuses on three specific policy issue disputes in the media coverage of the election campaign to examine the role that fact-checking plays in informing debate. Applying methodological tools from political discourse analysis and argumentation theory, it finds that fact-checking is effective at resolving disputes over empirical claims, even where both are statistically defensible, by interpreting which is most contextually relevant. However, checking the empirical circumstances did not resolve political arguments over conflicting policy proposals, as claims for action to reach an agreed goal, which hinge on claims about causal relationships. On the most complex and controversial issues, fact-checkers often chose to check banal and uncontested statements, rather than assess the difficult but important evidence behind theoretical, predictive claims.

Keywords  Fact-checking • Policy debate • Political argumentation • Argumentation theory • Political discourse analysis

Having established the overall patterns of British fact-checkers' selection and treatment of claims in the previous chapter, here I will focus on three issue-specific policy debates to examine the role that various truth claims and fact-checks on those claims play in political argumentation. This analysis draws on Fairclough and Fairclough's (2012) political discourse analysis (PDA), as well as the pragma-dialectic argumentation theory on which their method is based, which was outlined in Chap. 2.

By breaking down the arguments into component premises, we can see what work is being done by ‘facts’ in political debate, and whether fact-checking—in the choice of facts and scope of judgement—actually improves the quality of that argument.

The role of facts and fact-checking within political argumentation will be examined in this chapter through three sets of claims and counter-claims that were chosen to represent different kinds of dispute. The first is the most straightforward—a debate over conflicting claims about empirical circumstances. Since politicians did not tend to invent statistics, but rather cite them selectively, disputes over empirical circumstances in the mediated debate were about interpretation of the relevant context. Significantly, challenges to politicians’ use of statistics were rarely made by opposition politicians or other news sources, so fact-checkers were performing a valuable service in providing this scrutiny. Equally, however, there was little evidence of this fact-checking filtering into the media debate without being picked up by a prominent MP. Apart from a fairly straightforward (and positively fact-checked) challenge from Luciana Berger (Lab) to a Conservative pledge to increase mental health nurses (by less than they had already cut them), the only clear instance of a widely fact-checked dispute over empirical circumstances was over schools funding. Both the Labour claim that funding had been cut and the Conservative counter-claim that it was at a record high were defensible, but based on a different reading of the same statistics.

The second is a more complex claim about the cause of current circumstances, involving an argument from correlation to causation that is used to allocate credit or blame. The most prominent attribution of blame was related to policing and security. This was an area on which Labour was thought to be vulnerable due to leader Jeremy Corbyn’s alleged links to Sinn Fein or even the IRA, but the party went on the offensive early, having identified policing as a broadly unpopular area of cuts. The party then found themselves well positioned to make political capital out of this argument when the political circumstances changed with two terrorist attacks taking place during the campaign. The Conservative counter-argument largely focused, especially at the outset, on ad hominem attacks, that is, criticising the personal credibility of Corbyn and Shadow Home Secretary Diane Abbott on the issue rather than their argument. However, they were increasingly forced to counter the substantive argument when the political circumstances changed and a more specific blame was inferred.

The third is the most complex and contested kind of assertion—one that Uscinski and Butler (2013) argue is not empirically factual and therefore not fact-checkable—arguments over means-goals that imply prediction of how effectively those means will achieve the stated goals. This kind of claim was prominent in arguments about policy proposals, especially those that constituted a significant change. Several policies would have been suitable case studies in this category, including social care and fiscal issues, but given the centrality of Brexit in Prime Minister Theresa May’s reasoning for calling a snap election, as well as past concerns about the quality of debate on the issue, this seemed the most significant choice. In the 2016 referendum campaign, projections mostly centred on economic forecasts based on modelling that used an array of assumptions as placeholders for decisions that had not been made and would remain undecided after Article 50 had been triggered. The resulting forecasts of economic slowdown—perceived as an influential factor in the 2014 Scottish independence referendum—were derided as ‘project fear’ by the Leave campaign, whose own predictions—of easily securing an advantageous trade deal with the EU and other countries at the same time as being able to restrict migration from those countries—were based on little more than wishful thinking and nationalistic rhetoric. Michael Gove’s challenge to the credibility of economic forecasters—that the public had “had enough of experts from organisations […] who got consistently wrong what was happening”—became notorious as a distillation of everything ‘remainers’ thought was wrong with the Leave campaign and with Brexiteers’ reasons for voting to leave.

The election was billed as the Brexit election, and Loughborough University research found that across the media it was indeed the most prominent issue (Deacon et al. 2019: 36), largely thanks to a fiercely pro-Brexit press, but in the broadcast media debate on the issue was sparse until the final week. Much like the debate on security, political argumentation focused more on personal credibility than on the credibility of the proposed course of action and its predicted outcome, but fact-checkers did find some angles on which they could intervene.

Conflicting Truth Claims on Empirical Circumstances: Schools Funding

An unusual aspect of the snap election was that parties did not finalise and launch their manifestos until the third week of the official campaign.1 The first news coverage was therefore largely process related, or a continuation of earlier policy stories.

1 Labour launched their manifesto on 16 May 2017, after a draft was leaked the previous week, and the Conservatives on 18 May 2017.

Schools funding was one of these issues, as controversy had been slowly building around proposed changes that sought to make funding fairer and more consistent across the country, but which would mean certain schools, especially in the south-east of England, taking a real-terms cut, and even beneficiaries struggling to meet increasing costs (National Audit Office 2016; Belfield and Sibieta 2016). A network of campaigns emerged, some led by national teaching unions and professional associations, but also more local campaigns, especially in the south-east region, which were relatively successful at gaining publicity, especially in the left-leaning press.2

2 According to a Nexis search, the term ‘school funding crisis’ appeared in The Guardian’s education supplement in January 2017, in the Independent the following month, and the Mirror in March. Even the highly conservative Express used the term in a headline in April 2017, shortly before the election was called.

The media coverage of this issue was dominated by Channel 4 News, which ran pre-recorded items followed by studio debates on two programmes fairly early on in the campaign, on 8th and 10th May, the second of which coincided with a fact-check on the issue (FC02, ‘The schools funding crisis’). The first of these two programmes centred on a survey of 700 head teachers in south-east England, conducted by a local campaign of schools and supporters, as evidence of a widespread perception of crisis on the ground. This was reported by Jane Deith from a school in the region, including an interview with the head teacher. Following this item, Conservative MP and ex-Education Secretary Michael Gove and Shadow Education Secretary Angela Raynor were interviewed together in the studio. The second programme was driven by Labour and the Liberal Democrats launching education policy pledges, reported by Paul McNamara from a North London school with similar illustrative quotes, and followed by a debate with representatives of the three main parties.

The fact-check drew largely on the National Audit Office (NAO) with some additional analysis from the Institute for Fiscal Studies (IFS), building on a previous fact-check on the proposed National Funding Formula, headlined ‘Conservatives are cutting school funding in real terms’ (23 March 2017), which followed an earlier news item (“‘No winners’ in school funding shake-up,” 17 March 2017).

BBC Radio 4’s Today programme also picked up the issue on 10th May, responding to the opposition policy announcements by separately interviewing Angela Raynor and Liberal Democrat leader Tim Farron, though both were interrogated on other issues for at least a third of their interviews.3

3 Angela Raynor was asked about Brexit and Tim Farron was pressed on horserace issues related to a hypothetical ‘progressive coalition,’ assumed then to be the only chance of defeating the Conservatives.

Schools funding was also mentioned in a debate with a live audience when the programme was broadcast from the University of Bath (15th May), and in an interview with the head teacher of a high school that hosted the programme on 26th May, followed by a debate on grammar schools. BBC Reality Check touched on a related claim on the number of children in classes of more than 36 (RC04, 21/04/2017), but the specific claim about funding was only addressed on Twitter, as a live fact-check on an election-focused programme on the BBC Breakfast news magazine show (RCT137, 05/06/2017), linking to an old fact-check from 17th March.

The debate was framed partly by the school campaigners, who sought to establish the empirical circumstances as a ‘funding crisis,’ against Conservative government denials, and also by opposition parties whose manifestos put forward a policy (or ‘claim for action’ in PDA terms) of increasing funding. To some extent, then, the argument was that cuts were inherently problematic, and the means-goal of increasing funding to end the funding crisis therefore self-evident. However, other circumstances were also raised, claiming negative consequences from the funding cuts.

Contesting the Empirical Circumstances: Schools Funding at a Record High or Deepest Cuts in Two Decades

The core empirical dispute was over two conflicting claims about the level of school funding at the time, and in the coming years for which it had been set, as compared to levels before the Conservatives came to power. Conservatives denied the characterisation of a funding crisis, or even that funding was being cut, on the basis that the overall amount spent on schools in cash terms was higher than ever before. More specifically, Michael Gove (Cons) said that “the overall schools budget is protected in real terms and the cash amount for every school is also protected as well” (in a context where other public services have not been so ‘protected’ from outright cuts), whilst Angela Raynor (Lab) pointed out that the amount of funding per pupil, or “the money that follows a child into school,” had been cut.

In other words, there was a spending increase, but it was not enough to keep pace with both rising prices (inflation) and increasing pupil numbers. The question as to whether schools’ budgets were being ‘cut’ therefore involved interpretation of the most appropriate context, and whether it constituted a crisis was a more subjective matter of definition.
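
To see how both claims could be true at once, consider a purely illustrative calculation: the figures below are invented for exposition, and are not the IFS’s or the fact-checkers’. If the cash schools budget rose by 5% over a period in which prices rose by 7% and pupil numbers by 4%, then

\[
\text{real per-pupil change} = \frac{1.05}{1.07 \times 1.04} - 1 = \frac{1.05}{1.1128} - 1 \approx -5.6\%
\]

so a government could accurately report record cash spending while schools simultaneously faced a real-terms, per-pupil cut of roughly the order the fact-checkers reported.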

Both broadcasters’ fact-checks, following sources such as the IFS, confirmed both sides’ figures but implied that the per-pupil figure was the most contextually relevant, because it related to schools’ costs and/or was a departure from recent decades.4 FactCheck attributed its interpretative conclusion that “this equates to a real terms cut of around 6.5 per cent” to the IFS, but in their verdict called it a “significant funding challenge,” rather than a crisis. Live fact-checking the set-piece interview with Theresa May (The Battle for Number 10, 29/05/2017), @FactCheck linked to the IFS report, rather than their own article, and deferred judgement to the think tank: “School cuts are complicated but IFS is clear that there are real terms cuts per pupil planned, as @faisalislam said” (FCT36, 29/05/2017).

4 Both fact-checkers also detailed further cost pressures specified in the source reports that weren’t discussed in the media content, such as increases to National Insurance (a form of income tax to fund and determine eligibility for state benefits), the minimum wage, and a new ‘apprentice levy’ on businesses, on which they can also draw to fund apprentices, but of which schools were not in a position to make use.

Nonetheless, in this instance, FactCheck appears to have had a direct impact on the broadcaster’s news reporting. Whilst the Channel 4 News items on 8th May included claims that funding was being cut, they were all articulated by or attributed to sources (including an expert source from the Educational Policy Institute). Two days later, when the fact-check was published online, reporter Paul McNamara stated as fact that “under current plans, schools … are facing their budgets being slashed by 6.5% in real terms by 2021,” and in a question to the Conservative interviewee Neil Carmichael, Cathy Newman stated that these were “the first real terms cuts in school budgets since the mid-1990s.”

In contrast, BBC R4 Today (15/05/2017) went as far as to prevent a Labour MP from challenging the empirical basis of her Conservative opponent’s claim in a debate.
Mark Harper (Cons): “[…] first of all, just responding to what Paddy and Karen said, of course we haven’t cut education funding, we’ve increased education funding [shouts from the audience] no, we’ve increased education funding, and we’ve increased the number”
Karen Smith (Lab): “Not per pupil”
Mark Harper: “of young people going to university”
Justin Webb (presenter): “Let’s park that there, they say you have, you say you haven’t”

This demonstrates the dominance of the ‘he said, she said’ logic of impartial political reporting, with a lack of curiosity about the basis of claims, or a sense of responsibility for enabling the audience to determine which is the most convincing.

Additional Contextual Empirical Circumstances: Evidence of Impact of Schools Funding on Educational Standards

The Conservative Party, whilst not keen to acknowledge the per-pupil cut in funding, did not accept that it was in itself a bad thing, focusing instead on indications of educational standards. Michael Gove cited increasing (but unstated) numbers of schools rated ‘good’ or ‘outstanding’ by educational standards body Ofsted, and (as a strategically higher number) 1.8 million more children in schools with those ratings. Theresa May also gave this answer to Faisal Islam’s question on The Battle for Number 10, prompting @FactCheck to link to a fact-check from the previous election in 2015, checking the same claim by David Cameron. The tweet gave a verdict in its text: “More ‘good’ and ‘outstanding’ schools? Yes—but the system is rigged to make sure that happens,” pointing out in the linked article that Ofsted had changed their method of inspection, so that good and outstanding schools were inspected less frequently, meaning fewer downgraded schools in comparison with those that had improved from lower ratings. This more complex caveat did not make it through to the sampled media debate.

An alternative account of the impact of cuts on educational standards was provided by witness testimony in the Channel 4 News coverage, with head teachers reporting that they needed to cut staff, including teachers, to meet next year’s budget, therefore increasing class sizes and offering fewer GCSE-level (ages 14–16) subject choices. These claims, prominent in the campaigning from schools since before the election, were also mediated by presenters, as well as Labour interviewees—Today programme presenter John Humphrys cited “endless stories,” probably drawn from press coverage, “of head teachers sending out begging letters to parents because they don’t have enough money to meet all the bills.”

Angela Raynor was also shown raising the issue of class sizes in an interview on LBC radio, but was criticised for not knowing the precise number of 5- to 7-year-olds currently in classes over 30 (her leader, Jeremy Corbyn, had already been pulled up by BBC Reality Check [RC04, 21/04/2017] for citing the wrong figure for ‘super-size’ classes over 36). However, a source from the Educational Policy Institute argued that parents were not yet seeing the “impact of reduced funding,” and so was not convinced that schools funding would be a significant election issue.

Even so, later that year, after the election result in which the Conservatives lost their majority and were forced to form a minority government with the support of the DUP, Education Secretary Justine Greening announced that funding would be increased to maintain real-terms, per-pupil funding for the next two years.

School funding is at a record high because of the choices we made to protect and increase school funding even as we faced difficult decisions elsewhere to restore our country’s finances, but we recognise that at the election people were concerned about the overall level of funding for schools as well as distribution. (Department for Education 2017)

This doesn’t concede the argument about educational standards, but acknowledges public ‘concern,’ whether about the funding per se or about the impact on school provision. It’s not clear that the decisive reporting that funding had indeed been effectively cut was influential, compared to local awareness raising by schools themselves, but the debate was at least relatively well informed on that specific account of schools’ circumstances.

One final aspect of the argument was less prominent, however, which related to how these circumstances had come about. Other than differences over spending priorities (and a related argument around the cost-effectiveness of grammar and ‘free’ schools), the main cause of the ‘funding crisis’ circumstances was an increase in pupil numbers, which was largely glossed over. The only explanation advanced in the media debate was from Michael Gove, a prominent ‘Brexiteer,’ who blamed Labour for having failed to control immigration and therefore “presided over the most significant increase in our population.” Raynor’s challenge to this was sceptical (“it’s not just about migration, that’s rubbish”), and she also shot back on the Conservatives’ failure to meet their own immigration target.

However, she didn’t offer an alternative explanation, rather suggesting that it was not an unexpected shift, that they knew pupil numbers were increasing “and they’ve done nothing to address that.” In fact, Reality Check offered an explanation back in March, that “there was a baby boom in the early 2000s, which has been hitting primary schools for several years and is now moving up through the secondary system,” and that further increases were anticipated by the Department for Education through to 2025.

Summary

Fact-checkers did a good job of unpicking the selective use of official statistics and reports and adjudicating between the claims and counter-claims on increases or cuts to schools’ funding, and, to some extent, the impact on educational standards. However, this filtered through to general media reporting and interviewing unevenly, with greater use on Channel 4 News (perhaps partly because Cathy Newman was previously closely associated with FactCheck herself). A key element of the explanation as to why the same funding in real terms was not sufficient, however, was not addressed directly, though it did appear in an old fact-check promoted on Twitter in relation to a claim on overall funding. It is possible that it was deemed too difficult to disaggregate the effects of immigration and the birth rate to be able to adjudicate on Gove’s assertion that the former was to blame, but this could be a highly misleading claim that positions immigration control as a necessary and even sufficient solution to relieving strain on funding of public services.

Attribution of Blame: Security and Policing

Although security, policing and defence are generally regarded as issues ‘owned’ by the Conservative Party as the ‘law and order party,’ and an area of particular weakness for Corbyn personally, Labour went on the offensive on policing early in the campaign. The party sought to capitalise on a potential area of Tory vulnerability—unpopular cuts to police budgets—that tied into their own overall campaign theme of reversing austerity. On the first day of official campaigning, then, Labour’s Shadow Home Secretary, Diane Abbott, announced a pledge to fund 10,000 extra police officers. Strategically, this appeared to be relatively successful at gaining media attention, despite presentational missteps that distracted from the message, but the policy really gained most ground when the political circumstances shifted dramatically on 22nd May, when a terrorist bomb at the Manchester Arena killed 22 mostly young people attending a pop concert.

Less than two weeks later, a second attack occurred on London Bridge, where a van intentionally drove into pedestrians, killing three, before a further five were stabbed by the same assailants on foot. These events reframed policing as a security issue as much as a law and order issue. The same facts were therefore put to different purposes, but in both cases the argument formed part of a wider counter-argument against the Conservative (and right-wing press) attack on Corbyn’s credibility on security and law and order. This meant that the rational argument had to compete with a more instinctive and value-based heuristic of trust.

The political argument centred on the empirical circumstance that there were 20,000 fewer police officers in England and Wales than in 2010. However, the significance of this rested with wider circumstances: initially, a perceived ‘social fact’ that ‘bobbies on the beat’ were popular with voters across the political spectrum, and a political circumstance of growing disquiet about the cuts. An increase in police recruitment could therefore, like more schools funding, be interpreted as an end in itself, as well as a means to a largely implied end (or goal) of tackling rising crime. The latter was most explicitly claimed by Emily Thornberry (Channel 4 News, 02/05/2017), but she appealed (fallaciously) to popular opinion (or, more charitably, unspecified witness testimony) rather than empirical measurement: “I think that speaks the truth to people, because they can see that crime is rising and they think there should be more police officers on the beat.”

During the initial political circumstances, BBC Reality Check was alone among the big three in fact-checking the claim about cuts (confirming it was ‘about right’; RC09, 02/05/2017). Interestingly, it also addressed the causal implication, noting that “no comprehensive research has been done into whether there is a link between falling police numbers and rising crime levels,” but that recorded crime was falling, which was also raised or picked up in some Twitter replies (RCT011, 02/05/2017), as well as discussion of changes in crime recording. Another separate fact-check addressed crime statistics and their limitations in more detail (the problems with crime statistics meant that both claims could be justified, but crime was more likely falling overall; RC11, 05/05/2017).

The counter-argument in political debate, however, was unrelated to these circumstances, or indeed the Labour argument at all, but tried to pull discussion back to Labour credibility on the issue.

Indeed, the Conservatives were not called upon to engage with Abbott’s argument, given that media coverage, including questioning from interviewers and even contributions from one Labour figure,5 was focused on the damage to Abbott’s personal credibility from having stumbled over the cost of the policy in a radio interview,6 allowing Michael Gove to present the episode as evidence that Labour threatened a ‘coalition of chaos’7 (Channel 4 News, 02/05/2017).

5 Alastair Campbell, Downing Street Press Secretary and Director of Communications under Labour Prime Minister Tony Blair, 1997–2003.
6 The interview, with Nick Ferrari on London’s LBC Radio, 02/05/2017, was described in much of the news reporting as a ‘car crash,’ in other words, as compellingly awful.
7 A soundbite line used by the Conservatives, assuming that Labour stood no chance of forming a government on their own.

The issue of policing then receded until the political circumstances changed, when the two terror attacks shifted security and policing back up the agenda, and an increased sense of threat loomed large over the second half of the election campaign. Here, what I’m calling ‘political circumstance’ is a form of social fact in that it is based in public feeling, priorities and concerns, but is a more strategic interpretation of the opportunities and limitations presented by that social fact.

A New Perspective on Empirical Circumstances: More Contentious Claims About the Impact of Cuts to Police

After the Manchester bombing, Labour raised police cuts again, but did not explicitly claim that the cuts had made the UK more vulnerable to attack, or that police would have been able to foil the attack if there had not been cuts to their numbers. Instead, they raised the suggestion (attributed to hypothetical others through passive syntax and nominalisation) that the deployment of military personnel to assist armed police indicated that there were too few police to cope in a crisis: “But there will be concern that it’s cuts in policing which has meant you need the troops now” (Diane Abbott, Channel 4 News, 24/05/2017, my emphasis). Jeremy Corbyn made a similarly vague connection, whilst also anticipating and deflecting a counter-argument that he was suggesting that first responders did not do a good job, thereby undermining the coherence of his argument.

Labour will reverse the cuts to our emergency services and police. Once again in Manchester, they have proved to be the best of us. Austerity has to stop at the A&E ward and at the police station door. We cannot be protected and cared for on the cheap. (Jeremy Corbyn speech, clip shown on Channel 4 News, 26/05/2017)

If there is an unexpressed premise that links these assertions, it is perhaps that we shouldn’t fund these services “on the cheap” because we value them or because they are virtuous (“the best of us”), rather than because of any negative consequences to the quality of those services. The likely rhetorical intention of simply juxtaposing police cuts and terrorist incidents, however, was to prompt the audience to make the link themselves.

Indeed, media interviewers explicitly made that link when interviewing Conservative politicians. On the Today programme (26/05/2017), for instance, Sarah Montague asked Security Minister Ben Wallace, “Have those cuts gone too far and are they now compromising safety?” The connection was made most clearly, however, by Police Federation sources that were drawn on heavily in Simon Israel’s report for Channel 4 News (05/06/2017). The chair of the organisation argued that “by withdrawing neighbourhood policing you’re withdrawing the eyes and ears” within the local communities. A clip was also shown in the item—perhaps at the suggestion of the same source—of Theresa May denying the link two years previously at a Police Federation conference. She tells the audience of police officers “this crying wolf [over cuts] has got to stop,” since crime had fallen at the same time as police budgets and “our country is safer than it has ever been.” The unstated premise here is that May had been proven wrong by the successive terrorist attacks, and the police officers proven right.

However, when directly asked, “Is it Labour’s case, as we go into this election, that if there had not been cuts in police numbers and in funding, that Manchester and London wouldn’t have happened?” (Today, 07/06/2017), Shadow Justice Secretary Richard Burgon distanced himself from the claim, acknowledging that most blame lay with the terrorists themselves, and attributing the causal claim to the police, as those in a position to know, or as the relevant experts8: “what doesn’t help, and we’ve got listen to what the Police Federation, what the police service say, what doesn’t help to make our communities safer is trying to do security on the cheap, cutting 20,000 extra [sic] police.”

8 He specifically mentions the clip shown on Channel 4 News, but is vague about how he came to see it: “when I had a look back at the footage …”

Pressed on his own view, he deflects the argument onto the interviewer: “I think that the chain of causation in any such atrocity is … isn’t as simple as you may be inviting me to suggest,” though he later says that he thinks that the Police Federation “are right,” and that a link is “common sense” (at best, an appeal to popular opinion, but effectively signalling closure to opposed argument—see Walton 2006: 261).9 The interviewer did not question the credibility of the Police Federation, but made a contrary appeal to expert opinion, that “the former independent reviewer of terrorism legislation, Lord Carlile, said that the assertion that cuts to beat police officers have diminished the ability to fight terrorism is untrue,” and invited Burgon to say that Carlile “is wrong.” Burgon dismissed this as a “cheap game,” and the interview moved on, with the basis of the claims on either side remaining oblique.

Regardless of the extent to which Labour were asserting a causal link between cuts to police numbers and the incidence of terrorist attacks, in these new political circumstances the Conservatives did have to respond to the argument, and not only the person delivering it. The claim that fewer police made the country more vulnerable to terrorism is an argument from correlation to cause, and in this argumentation scheme there are three critical questions that should be asked, according to Walton (2006: 103): firstly, whether there really is a correlation; secondly, if that correlation could be coincidental; and thirdly, if there is a third factor that could be causing both circumstances. Defence Secretary Michael Fallon (Channel 4 News, 26/05/2017) raised the first question by suggesting that there wasn’t a consistent pattern over time, therefore implying that the correlation between reduced police numbers and the cluster of terrorist attacks at that time was coincidental (the second critical question, making the third question redundant). However, the only supporting evidence he offered for this was a single other incident: “the number of police officers on the streets increased slightly under a Labour government and we had the London bombings of 2005,” which does not in itself disprove correlation, though it could cast doubt.

9  Another Labour MP, Yvette Cooper, similarly said that “you can’t ever provide precise links […] and it would be inappropriate and wrong to do so,” but also asserted that “we do know” that community policing helps to gather intelligence and prevent radicalisation (Today, 05/06/2017).

Other Conservatives argued more broadly against the causal connection implied in the policy of increasing police recruitment to reduce the risk of future attacks. Fallon denied that it was a necessary means, citing other means that he claimed would be more effective: “it’s not the actual number of officers, it’s the resources you dedicate to anti-terrorism” (Channel 4 News, 26/05/2017). Ben Wallace, security minister, argued that more police officers were not a sufficient condition: “When you fight counter-terrorism it’s not just on the ground,” citing technologies, intelligence and security services, which have all benefitted from increased investment (BBC R4 Today, 26/05/2017), and Culture and Media Secretary Karen Bradley raised legal powers as the necessary means, implying that uniformed police officers would be insufficient even if effective (Today, 05/06/2017). The Conservative counter-argument challenges Labour’s argument that an increase in police numbers is necessary, but does not establish whether their alternative means are themselves necessary or sufficient. The latter would suggest that the country is as safe from terrorism as it is possible to be, which may be true, but politically unwise to express as it would most likely be interpreted as complacency.

Following the change in political circumstance, all three fact-checkers addressed the empirical claim about the number of police officers that had been cut, though all three came to a slightly different figure, between 19,000 and 21,000, depending on the dates they chose to represent the period since the Conservatives gained power.10 FactCheck and Full Fact also looked at armed officers in particular, but again disagreed on the exact figure.11 Those figures were picked up by interviewers too, this time on the BBC—not only by Sarah Montague interviewing Security Minister Ben Wallace on BBC R4 Today (26/05/2017), as mentioned above (citing BBC Reality Check’s 19,000 figure), but also by Mishal Husain citing Full Fact’s 700 fewer armed officers “nationally” (actually England and Wales) in response to Boris Johnson’s selective reference to a report on armed officers in London (Today, 06/06/2017).

10 Reality Check (RC39, 26/05/2017) put it at 18,991 (rounded to 19,000), using the most recent data, from September 2010 to September 2016. FactCheck put it at 19,668 (close to Labour’s 20,000) in FC15 (26/05/2017) and FC21 (05/06/2017), using slightly older March 2010 to March 2016 figures. Full Fact’s (FF33, 05/06/2017) figure was 21,000, mixing the two, from March 2010 to September 2016.
11 Full Fact (FF33, 05/06/2017) put the drop at over 1300 (March 2010 to March 2016) and FactCheck (FC15, 26/05/2017) at 1014 (‘when the Tories took office’ to March 2016), but they reported the same figure of 640 officers who had recently completed their training and begun active duty from the National Police Chiefs’ Council, though they were not yet reflected in the official statistics. Subtracting those 640 from their respective drops, the two fact-checkers therefore put the net change from 2010 at around 700 and 374, respectively.

On Channel 4 News, two days before the FactCheck was published (24/05/2017), Jon Snow cited figures to Diane Abbott, Lord Paddick and Michael Fallon, the source of which is unspecified and doesn’t correspond with any of the fact-checks, and reporter Simon Israel (Channel 4 News, 05/06/2017) also cited different figures for both the overall drop in police numbers (21,494 since the record high in 2009) and armed police (517 for 2010–2017, perhaps not using FTE numbers).

As well as informing interviewers’ questions, fact-checking’s disambiguation of the various statistics also helped interviewers tackle politicians’ selective use of them, such as Mishal Husain (Today, 06/06/2017) challenging Boris Johnson’s focus on rising armed officers in the capital when nationally they had fallen. However, on the causal relationship, fact-checkers did not substantially contribute to the discussion. Even though FactCheck headlined one fact-check “Are Tory cuts to police budgets the reason we need troops on the streets?”, its verdict only adjudicated on the empirical circumstance of whether there were fewer armed officers.

Summary

The aspects of the fact-checks that were picked up by media interviewers were again the narrow empirical, statistical facts, but in this instance, those figures were not being contested. Far from playing down the cuts, Theresa May had, as Home Secretary, made a virtue of requiring forces to “do more with less.” The claim that was being debated was between competing means-goals for keeping the country safe from terrorism, but to the extent that these claims were interrogated at all, it was limited to the circumstantial premise of police numbers, whilst the means-goal arguments were reduced to competing appeals to expertise, backed up by basic claims about their ‘position to know’ (Walton 2006). Fact-checkers could have explained the reasoning behind these claims, even if they were unable to come to a firm judgement, though of course in this instance there may well have been limitations to available evidence on grounds of national security. Nonetheless, this demonstrates that fact-checking that restricts itself to narrow empirical premises can contribute little to the rational quality of political debate.

This narrow focus also, however, reflects another limitation—the elite indexing inherent in an exercise of checking politicians’ statements means that fact-checking can only illuminate the ground on which the parties choose to contend the election, and the Conservative counter-argument to Labour’s claims on security was dominated by ad hominem attacks on Corbyn’s and Abbott’s credibility on the issue (which was reasonably well fact-checked: RC57, 01/06/2017, on Corbyn and May’s voting record on anti-terror legislation, including context and reasoning; FC18, 30/05/2017, correcting Twitter followers’ assertion that Corbyn did not vote for the Good Friday Agreement). If fact-checking is intended to depart from ‘he said, she said’ exchanges of appeals to authority, then it fails to do so on politically pivotal claims about the reasons for the current circumstances.

Predicting the Impact of Proposed Policy: Negotiating Brexit

The 2017 election was initially expected to be ‘the Brexit election,’ since it was Theresa May’s stated reason for calling the election, and had dominated the media agenda when that call was made. Certainly, it did rank highly in overall coverage (Deacon et al. 2019: 35–6), and especially in much of the right-wing press that followed the Conservative agenda, but in the selected broadcast sample it was notable by its absence until the final week, when John Humphrys (Today, 30/05/2017) called it ‘the spectre at the feast.’ This is partly because, even in a Conservative campaign that was “relatively policy-light” (Blumler 2019: 55), there was little substance to the party’s position on Brexit. Instead, their campaign focused on personality politics, asking voters to decide which leader, between May and Corbyn, could be trusted to secure the most advantageous Brexit deal. Their strategy was therefore based heavily in the received wisdom that Corbyn was unpopular and personally unelectable. Initially, then, Theresa May dominated the promotional aesthetics of the Conservative campaign, with her name dwarfing the party name and logo on podiums, banners and backdrops, until the campaign was relaunched in light of a faltering performance. The Labour Party, in contrast, focused on policy, whether because of the “social-movement-led model” that assumed a more substantively engaged electorate (Blumler 2019: 55–6) or because Corbyn was seen as an electoral liability even within the party.

Even so, Labour were just as divided on Brexit as the Conservative Party, and indeed the country as a whole, so had just as much reason to steer away from specifics on this issue. They were also substantially let off the hook in the televised leader debates, because questions about Brexit were actually about immigration, broadly whether it was damaging or beneficial, but not what could be achieved by controls on EU migration and the likely cost.

Brexit also brought further epistemological complications, including some of the problems that had become apparent in the EU referendum campaign itself, where dire predictions of economic slowdown from economists were dismissed as ‘project fear’ by those conversely predicting a wealth of opportunities, but fact-checkers struggled to helpfully adjudicate, focusing on the problems with economic modelling without challenging the empty rhetoric of wishful thinking.12

12 Because Remain campaigners omitted caveats and overstated the certainty of predictions, especially specific figures rather than the broad direction, fact-checks conversely focused on the limitations of economic forecasting. Although this answers, in part, the ‘evidence’ critical question for appeals to expertise, knowing that economic models are only as good as the data and assumptions input is only helpful if those details are provided for the specific studies cited. Another critical question relates to credibility, but the record for accuracy of economic forecasting was not set out (Birks 2019: 256).

However, political argumentation on Brexit in the context of a general election, as opposed to a referendum, also brought new challenges, since the electioneering discourse on Brexit had another audience—the EU27, that is, the other countries with whom the party that formed the next government would soon be negotiating. There was therefore a fundamental disconnect between the logic of an election, where you are expected to say what you will aim to deliver to get a mandate for your term in office—with the Brexit negotiation and transition a key part of that term—and the logic of negotiation itself, where it is better not to reveal your (entire) hand.

Clarifying the Parties’ Actual Brexit Positions

Both main parties set out a broad position on Brexit in their manifestos, but avoided saying much about it in the media campaign. Labour explicitly argued that voters did not want to be bogged down in the detail; for instance, Liam Byrne (Channel 4 News, 30/05/2017) claimed that “what people want at this election is not just a story about the process, people don’t want to hear about that, they just want to know that Brexit’s gonna happen,” and reassured voters that Labour would ensure that it did (also Angela Raynor, Today, 30/05/2017, and Keir Starmer, Channel 4 News, 01/06/2017).

The Conservatives were also strategically vague—of one speech by Theresa May, Channel 4 News Political Correspondent Michael Crick said, “we didn’t learn much new, it was short on detail, but full of comforting phrases” (Channel 4 News, 01/06/2017). The most explicit statements of the party positions were on the BBC’s Today programme (30/05/2017), when Labour’s Angela Raynor and Conservative David Davis gave similar-sounding accounts (tariff-free access to—but not staying in—the single market, and a tariff-free free-trade agreement with an associated customs agreement, respectively), and indeed Liberal Democrat Nick Clegg depicted them as “almost identical,” since both wanted to leave the single market and end freedom of movement (Channel 4 News, 01/06/2017); yet almost two years later, talks between the two parties to break a parliamentary deadlock on the issue broke down without resolution.

Given the vagueness of the rhetoric, fact-checkers took their cue from the manifestos to elucidate the actual party positions. FactCheck produced an explainer to point out the differences between Conservative, Labour and Liberal Democrat policies (FC10, 19/05/2017), concluding that “though they wouldn’t admit it, Labour and the Conservatives appear to agree on the main elements of ‘hard’ Brexit,” and pointing out that Labour’s manifesto states that “freedom of movement will end when we leave the European Union,” as if that was a given rather than a highly contested position. Full Fact (FF25, 18/05/2017) queried a similar claim in the Conservative manifesto, that the country cannot stay in the single market without effectively staying in the EU, which essentially fact-checks a contested definition of what Brexit means. It concludes that Britain could technically stay in the single market via EEA membership, like Norway, but would have to follow a lot of EU rules, including free movement of people.

However, the fact-check does not adjudicate on what it was that people thought that they were voting on at the time, or what they wanted it to mean, which is a significant missing ‘social fact’ in much of the argument over the definition of Brexit, and was strategically sidestepped in May’s much-maligned tautological soundbite “Brexit means Brexit.” As discussed in Chap. 3, ‘persuasive definitions’ are an obstruction to argument if not agreed by all parties in the debate (Walton 2006: 248–50), but can at least be contested, unlike empty soundbites.

In this light, the fact-check begins to fill in the unstated premises—such as that when people voted they understood Brexit to mean leaving the single market—that could have been interrogated, with reference to opinion polls, for example (albeit with the caveats about opinion poll data also touched on in the previous chapter). For instance, in January 2017, a YouGov poll indicated 57% agreement with the negotiating target of trying “to negotiate a new free trade deal with the EU giving us ‘the greatest possible’ access to the single market” without remaining in it (What UK Thinks: EU 2017), and in August 2016 a Kantar poll suggested that restrictions on immigration were a priority for 34% of British people, whilst access to the single market was chosen by 32%, though as the options were not mutually exclusive it’s not clear how many wanted, unrealistically, to have both at the same time (What UK Thinks: EU 2016).

Reality Check (RC18, 11/05/2017) ran an explainer that was a very partial answer to the question “What does ‘hard Brexit’ mean for UK?” In addition to an explanation of what the single market and customs union entail, they gave the Conservatives’ reasoning that “being part of the customs union also means you cannot negotiate your own trade deals with other countries around the world,” but without interrogating the unstated premise that this would be a means to the goal of getting better deals than were in place as part of the EU. Whilst some Remainers such as Ken Clarke (Cons) have made the point that logically the UK has less negotiating power as a small nation than as part of a large trading bloc, this was oddly absent from both the election debate and the fact-checking. This is at least in part because fact-checking only interrogates the stated, not the implicit, premises of an argument and focuses on the most prominent claims.

However, despite the efforts of the Liberal Democrats, Greens and others, surprisingly little was said about the impact that ‘harder’ or ‘softer’ Brexit deals would have on the economy and jobs, which was addressed far less prominently than in the referendum campaign. Labour suggested that they would protect jobs by securing access to the single market, whilst Conservatives claimed that “a successful Brexit will give you a successful economic strategy” to pay for public services (Angela Raynor and David Davis, respectively, both on BBC R4 Today, 30/05/2017). Neither of these claims was fact-checked (or perhaps, given their vagueness, explained), and the government’s more pessimistic economic analysis was not examined. There was, however, some limited attention to political counter-arguments.

Counter-Arguments—Potential Negative Outcomes of the Claim for Action

Liberal Democrat Brexit spokesman Nick Clegg (Channel 4 News, 01/06/2017) made an appeal to authority (the dreaded ‘experts’) to argue that there were likely negative outcomes to a no-deal exit:

It would be interesting to hear from James Cleverly [Cons] exactly which deal he thinks is worse than the chaos that would ensue from no deal. I mean, authoritative independent analysts have suggested we would lose about a third of the trade we do with the rest of the world if we had no deal at all, it would pitch the United Kingdom into an absolute legal and commercial tailspin, money would be sucked out of the country, I think the pound would come under huge pressure, there would be intense market instability and that would of course mean people’s jobs and livelihoods would be affected—precisely the ‘just about managing’ that the Conservatives claim they’re so concerned about.

Reality Check addressed one similar claim from Nick Clegg, that “Households would on average be £500 worse off in 2017 than they were in 2016, due to Brexit.” It concluded that this was not an accurate account of the source report, which did not attribute all of the predicted inflation figure to the weaker pound following the EU referendum result, and pointed to inconsistent forecasts from other sources.

There was one other prominent counter-claim on the potential negative outcomes of a hard Brexit, which was raised in a report by the Nuffield Trust, a health think tank, reported by Channel 4 News (31/05/2017) and checked by FactCheck on the same day (FC19 and FCT46). The news item focused largely on the trust’s argument that uncertainty about Brexit was deeply unsettling for NHS staff from the EU and for hospitals concerned about their future ability to recruit staff, but also reported their calculation of the impact on the NHS of ‘expat’ (British migrant) pensioners returning to the UK. In the item, Health and Social Care Correspondent Victoria MacDonald stated both that these costs would increase by £1bn and that they would double to £1bn (the latter implying an increase of only £500m). In the fact-check, Martin Williams noted that the Nuffield Trust made both claims, in their report and press release, respectively, and that they had confirmed to him that the latter was the correct figure. In the verdict, he concluded that even then the £1bn cost, “a figure repeated across the media today,” was inflated, due to the assumption that all pensioners would return, rather than seek citizenship in their adopted country or take out private healthcare, and furthermore that those pensioners weren’t already returning to the UK for treatment.

It’s curious that these findings were not reflected in the Channel 4 News piece, presumably because there was not enough time to edit the pre-recorded item once the findings came in, but given that an interviewee from the Nuffield Trust contributed to the piece, there would appear to have been scope to query the contradictory figures.

Live fact-checking also addressed some of the concerns about NHS staffing (RCT115, 04/06/2017, and RCT156, 06/06/2017) and whether EU citizens would be allowed to remain in the UK and UK citizens in other EU countries (RCT055, 31/05/2017), but these were largely banal and not probative to the political argument. For instance, the latter remarked that “Amber Rudd correctly points out that there are around 3m EU citizens living in the UK and 1m UK citizens living elsewhere in EU,” which was met with nonplussed and mocking responses on Twitter, such as “Amber Rudd also correctly points out that she has 10 fingers and she looks through things called spectacles, of which she has 2 pairs,” “I don’t think anyone was disputing that. About all she has got right tonight” and, most frustratedly, “And????”

Overall, these fact-check articles would help voters to make some sense of the parties’ positions on Brexit, and at least one of the criticisms, but the debate was little more concrete than during the referendum campaign, with both main parties unrealistically promising the benefits of the single market whilst leaving it, and not being entirely honest about their position. Most obviously, after the election it became apparent that Theresa May did not truly believe that “no deal was better than a bad deal,” but—unlike the Brexit ‘ultras’ in her party—saw this solely as a negotiating ploy,13 and therefore sought an extension to Article 50 rather than allow the country to crash out. No wonder the Conservatives chose instead to conduct the debate on Brexit on the ground of personality politics.

13 Conservative MP James Cleverly (Channel 4 News, 01/06/2017) argued that threatening ‘no deal’ was a pragmatic negotiating tactic, but seemed to pull back from actually suggesting that she would walk away from a bad deal, saying instead that she would try to renegotiate, yet still portrayed the effectively similar Labour argument that it was important to seek a deal as having “admit[ted] defeat before the negotiations have started.”

Leadership Qualities and Being an Effective Negotiator

To some extent, personality politics is anathema to fact-checking, which generally seeks to improve debate on substantive issues of policy, rather than heuristics of trust and likeability. However, leadership qualities and negotiation skills are clearly important factors when negotiating a Brexit deal is the main order of business for the next parliamentary term. All the same, these are rather subjective judgements that don’t lend themselves easily to fact-checking. For instance, the kind of evidence drawn on by Labour to suggest that Theresa May’s self-styled character (on the basis of a remark by colleague Ken Clarke) as a ‘bloody difficult woman’ did not make her a tough negotiator, but “belligerent [and] extreme,” “alienating everyone” (Keir Starmer, Channel 4 News, 01/06/2017), was photographs of May looking friendless and isolated in Brussels: “You see the pictures, Theresa May’s at the back of the queue whenever she’s talking to the leaders in Europe, you know, we’re a laughing stock across Europe” (Angela Raynor, BBC R4 Today, 30/05/2017; also Emily Thornberry, Channel 4 News, 02/05/2017). This might be a rhetorically effective strategy, but whether people accept it depends on their interpretation of those images and belief that the photographs themselves are a fair representation.

The Conservatives countered that Labour would ‘roll over’ (James Cleverly, Channel 4 News, 01/06/2017, twice; also picked up by Today presenter John Humphrys, 30/05/2017, interviewing Angela Raynor), and that the EU would think they could “take these Labour chaps to the cleaners” (Boris Johnson, Channel 4 News, 01/06/2017), which similarly defies any verification. The other prominent rhetorical soundbite was “strong and stable,” meant to evoke a safe pair of hands at a time of crisis and division, which was equally rhetorically challenged by opposition supporters (or Conservative opponents at least), who ridiculed it with the opposing soundbite “weak and wobbly” (audience debate, Channel 4 News, 30/05/2017) that had been gaining currency on social media (Ridge-Newman 2019: 139). Although u-turns on social care were raised as evidence of inconstancy, that interpretation is subjective, as is whether forgetting a costing on live radio is evidence of incompetence, as suggested by Channel 4 News presenter Krishnan Guru-Murthy about Corbyn, but disputed by an audience member—“is he a school child who has to remember numbers by rote?”

More specifically, they referred to Theresa May’s prior experience of negotiating in Europe (Karen Bradley, Channel 4 News, 30/05/2017, and James Cleverly, Channel 4 News, 01/06/2017), but the substance of that claim wasn’t stated or fact-checked. Corbyn was attacked over division within the party under his leadership (Amber Rudd, BBC Debate, 31/05/2017, and James Cleverly, Channel 4 News, 01/06/2017), which was fact-checked, since Rudd raised as evidence the parliamentary party’s motion of no confidence in Corbyn’s leadership (RC53, 31/05/2017, and RCT061, 31/05/2017) and Corbyn came back with the strong show of support from the grassroots party membership in his re-election as leader (RC54, 31/05/2017, and RCT062, 31/05/2017). Though on Twitter the former was contested, insofar as the vote was given as a proportion of Labour MPs who voted rather than of all Labour MPs, both fact-checks straightforwardly confirmed readily available recorded facts.

Summary

There were inherent problems with fact-checking the campaign on Brexit: not only was it a future event, and predictions are intrinsically challenging, but it was also a policy area not entirely in the gift of whoever formed the government after the election, given that much of the outcome had to be agreed with 27 other nation states (though there was also the question as to what a newly sovereign government would do with those powers). However, in addition to the inherent nature of the issue, it was such an unknown quantity because neither party wanted to discuss the specifics that were so divisive, both across the electorate and within the parties. Fact-checkers made a valiant attempt to clarify the manifesto positions and what was meant by terms such as ‘hard Brexit’ with explainers, but fact-checks that gave clear verdicts were, more often than not, quite banal. To some extent, this is a result of elite indexing and checking only the stated premises, and not the implicit premises that connect the claim to the stated goal, but it also seems that fact-checkers steered away from the more contentious arguments, especially predictions.

Conclusion

Fact-checks on empirical circumstances inform media debate more often (though not always) than those on more theoretical claims. They are more likely to be reflected in studio interviews, where a journalist uses statistics confirmed in fact-checks (though without crediting them) to challenge politicians. In broadcast debates between politicians of different parties, where an opponent challenges a statistic or other factual claim, moderators are more likely to consider balance achieved and seek to move on. These statistics are also only wielded by broadcast journalists when the issue has become controversial due to a conflict or disagreement, meaning that criticisms of a dubious claim are unlikely to reach a wider audience unless picked up by opposition politicians.

This also means that unstated premises that imply causal connections—such as between policing cuts and the incidence of terrorism—can go unchecked, even though journalists interviewing politicians about the same statement may infer the connection, albeit in a context where the politician can respond. However, even where causal connections were checked, as Reality Check did on policing cuts and crime, the inconclusive nature of the findings and the dominant framing around Labour’s competence at that stage of the campaign meant that the check was not picked up in the same way.

The problems with this elite indexing were most obvious in the contribution of fact-checking to the debate on Brexit, where fact-checkers struggled to identify checkable claims among the baseless rhetoric and personality politics that dominated the mainstream media debate. Elite indexing is inherent in an exercise of checking what people in the public eye say, but it is exacerbated by the focus on the most prominent politicians, and therefore—especially during election time—on a narrow set of often centrally determined messages. In some instances, it was difficult for fact-checkers to focus on policy against the media focus on gaffes (such as Abbott’s) and ad hominem attacks, though the more relevant of these were effectively fact-checked. At the same time, fact-checkers did seem to avoid the more controversial policy debates, especially around what the British public really want out of Brexit—a prominent focus of rhetorical claims—and the basis of the more optimistic predictions.

FactChecks

FC02, 10/05/2017, FactCheck, ‘The school funding crisis (Q&A)’ https://www.channel4.com/news/factcheck/factcheck-qa-the-school-funding-crisis
FC10, 19/05/2017, FactCheck, ‘What’s the difference between the Tories, Labour and Lib Dems?: Brexit’ https://www.channel4.com/news/factcheck/factcheck-whats-the-difference-between-the-tories-labour-and-lib-dems


FC15, 26/05/2017, FactCheck, ‘Are Tory cuts to police budgets the reason we need troops on the streets?’ https://www.channel4.com/news/factcheck/are-tory-cuts-to-police-budgets-the-reason-we-need-troops-on-the-streets
FC18, 30/05/2017, FactCheck, ‘Corbyn on Northern Ireland’ https://www.channel4.com/news/factcheck/factcheck-corbyn-on-northern-ireland
FC21, 05/06/2017, FactCheck, ‘Jeremy Corbyn is right: the Tories have cut 20,000 police officers since 2010’ https://www.channel4.com/news/factcheck/jeremy-corbyn-is-right-the-tories-have-cut-20000-police-officers-since-2010
FF25, 18/05/2017, Full Fact, ‘Conservative Manifesto: Leaving the single market’ https://fullfact.org/europe/conservative-manifesto-2017-leaving-single-market/
FF33, 05/06/2017, Full Fact, ‘Have armed police numbers been cut?’ https://fullfact.org/crime/have-armed-police-numbers-been-cut/
RC04, 21/04/2017, Reality Check, ‘How many children are in classes of more than 30?’ https://www.bbc.co.uk/news/education-39666686
RC09, 02/05/2017, Reality Check, ‘Are there 20,000 fewer police?’ https://www.bbc.co.uk/news/uk-politics-39779288
RC11, 05/05/2017, Reality Check, ‘Is crime up or down under the Tories?’ https://www.bbc.co.uk/news/uk-politics-39817100
RC18, 11/05/2017, Reality Check, ‘What does ‘hard Brexit’ mean for UK?’ https://www.bbc.co.uk/news/uk-politics-39858788
RC39, 26/05/2017, Reality Check, ‘Why does the Prevent strategy divide opinion?’ https://www.bbc.co.uk/news/election-2017-40060325
RC53, 31/05/2017, Reality Check, ‘Confidence in Corbyn’ [on blogroll] https://www.bbc.co.uk/news/topics/cp7r8vgl2rgt/reality-check#main-content
RC54, 31/05/2017, Reality Check, ‘Corbyn: “300,000 voted for me to lead party”’ [on blogroll] https://www.bbc.co.uk/news/topics/cp7r8vgl2rgt/reality-check#main-content
RC57, 01/06/2017, Reality Check, ‘May and Corbyn’s record on anti-terror legislation’ https://www.bbc.co.uk/news/election-2017-40111329

Tweets

FCT36, 29/05/2017, https://twitter.com/FactCheck/status/869289624950231040
FCT46, 31/05/2017, https://twitter.com/FactCheck/status/869979122273062912
RCT011, 02/05/2017, https://twitter.com/BBCRealityCheck/status/859394476078497794


RCT055, 31/05/2017, https://twitter.com/BBCRealityCheck/status/869998129810096128
RCT061, 31/05/2017, https://twitter.com/BBCRealityCheck/status/870006425195606016
RCT062, 31/05/2017, https://twitter.com/BBCRealityCheck/status/870006716540125185
RCT115, 04/06/2017, https://twitter.com/BBCRealityCheck/status/871504547465900033
RCT137, 05/06/2017, https://twitter.com/BBCRealityCheck/status/871975321582678016
RCT156, 06/06/2017, https://twitter.com/BBCRealityCheck/status/872186707835334659

References

Belfield, Chris, and Luke Sibieta. 2016. ‘Long-Run Trends in School Spending in England: IFS Report R115’. https://www.ifs.org.uk/publications/8236.
Birks, Jen. 2019. ‘Fact-checking, False Balance, and “Fake News”: The Discourse and Practice of Verification in Political Communication.’ In Stuart Price (ed.), Journalism, Power and Investigation: Global and Activist Perspectives (Routledge: London).
Blumler, Jay G. 2019. ‘BBC Campaign Coverage Policy.’ In Dominic Wring, Roger Mortimore and Simon Atkinson (eds.), Political Communication in Britain (Palgrave Macmillan: London).
Deacon, David, John Downey, David Smith, James Stanyer, and Dominic Wring. 2019. ‘A Tale of Two Parties: Press and Television Coverage of the Campaign.’ In Dominic Wring, Roger Mortimore and Simon Atkinson (eds.), Political Communication in Britain (Palgrave Macmillan: London).
Department for Education. 2017. ‘Justine Greening Statement to Parliament on School Funding’, House of Commons, Accessed 21 December 2018. https://www.gov.uk/government/speeches/justine-greening-statement-to-parliament-on-school-funding.
Fairclough, Isabela, and Norman Fairclough. 2012. Political Discourse Analysis (Routledge: Abingdon).
National Audit Office. 2016. ‘Financial Sustainability of Schools’, Accessed 21 December 2018. https://www.nao.org.uk/report/financial-sustainability-in-schools/#.
Ridge-Newman, Anthony. 2019. ‘“Strong and Stable” to “Weak and Wobbly”: The Conservative Election Campaign.’ In Dominic Wring, Roger Mortimore and Simon Atkinson (eds.), Political Communication in Britain (Palgrave Macmillan: London).


Uscinski, Joseph E., and Ryden W. Butler. 2013. ‘The Epistemology of Fact Checking’, Critical Review, 25: 162–80.
Walton, Douglas. 2006. Fundamentals of Critical Argumentation (Cambridge University Press: Cambridge).
What UK Thinks: EU. 2016. ‘Which Do You Think are the Most Important Issues for the Government to Focus on in Negotiations with the EU over the Terms of Britain’s Exit?’, Accessed 30 May 2019. https://whatukthinks.org/eu/questions/which-do-you-think-are-the-most-important-issues-for-the-government-to-focus-on-in-negotiations-with-the-eu-over-the-terms-of-britains-exit/.
What UK Thinks: EU. 2017. ‘Is It Right or Wrong for the Government to Set Out a Negotiating Target that Britain will not Try to Remain Inside the European Single Market, but will Instead Try to Negotiate a New Free Trade Deal with the EU Giving Us “The Greatest Possible” Access to the Single Market?’, National Centre for Social Research, Accessed 30 May 2019. https://whatukthinks.org/eu/questions/is-it-right-or-wrong-for-the-government-to-set-out-a-negotiating-target-that-britain-will-not-try-to-remain-inside-the-european-single-market-but-will-instead-try-to-negotiate-a-new-free-trade-deal-w/.

CHAPTER 5

Conclusion

Abstract  This chapter draws on the findings from Chaps. 3 and 4 to argue that fact-checking is at its most effective when it assesses truth claims in the context of the wider political argument, rather than being limited to narrow empirical facts, which are often uncontested or defensibly accurate even when used misleadingly. The most contested political arguments typically hinge on less easily checkable truth claims, such as causal relationships and predictions, but fact-checking need not conclusively prove or disprove these claims; it need only ask the appropriate critical questions to guide a ‘reasonable enough’ judgement. The audience may—contrary to the assumptions of social psychological effects research—have reasonable disagreement with the verdicts, but this does not invalidate their usefulness.

Keywords  Fact-checking • Political argumentation • Empirical truth • Reasonable • Interpretation • Analytical journalism

This concluding chapter will draw together the key findings from the analysis of the previous two chapters, but it will also make a case for what the role and purpose of fact-checking journalism is and could be. I’ll start with that bigger picture, and then explore some of the challenges highlighted in the 2017 British snap general election. Finally, the chapter will consider what the engagement on Twitter indicates about the effectiveness of fact-checking in the British context.

The Role of Fact-Checking Journalism

The most obvious role for fact-checkers is correcting an erroneous, or indeed confirming an accurate, empirical claim. But what is the purpose of correcting or confirming that fact? Certainly, it is useful for the voter to know that there are 20,000 (or thereabouts) fewer police officers than there were when the governing party took power, because if considerably fewer had been cut, that would undermine the opposition party’s argument based on that premise. However, the opposite does not hold—the accuracy of that statistic does not prove that the reduction in police numbers has had negative consequences, or that the opposition call for action to increase police numbers by 10,000 would have had a positive impact on crime or its successful prosecution. In other words, empirical facts are not always probative, and have limited political significance on their own. These facts are put to work in the service of an argument—that is, they are not just facts, but argumentative premises. Even if the premises are true, the argument may be fallacious.

If we take the purpose of fact-checking to be to enable voters to judge politicians’ policy proposals on their merits and, more broadly, to enable citizens to engage meaningfully in policy debates, rather than being invited to judge popularity contests, then it is clear that fact-checking journalism must have a wider role than simply checking narrow empirical facts without commenting on their use within an argument. In this study of British fact-checking, where fact-checks on empirical claims added value to the political debate, as they frequently did, it was through a more interpretive judgement on the appropriateness of that selection from the available data, and on whether it proved or at least supported the claimed relationship between policy means and social outcomes.

Fact-Checking as Explanatory Journalism

Although statistical measures and other recorded observations do dominate the claims chosen, British fact-checkers do not constrain themselves to empirical facts. This does mean that some of what fact-checking journalists do does not seem to be entirely captured by the term ‘fact-checking,’ and might be more accurately described as explanatory journalism, but that does not invalidate this wider practice, which is also common to fact-checking in other European countries (Graves and Cherubini 2016). Even in full fact-checks, the narrative verdict format (eschewing the more common truth-o-meter-style scale verdict) allows greater flexibility: claims can be found reasonable based on the available evidence, even if not necessarily ‘true,’ or dubious because key variables have not been considered, even if not necessarily ‘false.’ It can also emphasise caveats, such as the limitations of available data or methodological inconsistencies. Explanatory journalism, with or without a verdict, can therefore be an effective way of dealing with claims that defy definitive adjudication, either because necessary information is unavailable or because they are complex claims on which there is no expert consensus.

However, fact-checking journalists need to be clear and consistent about how explainers differ from conventional ‘objective’ reporting. It is necessary to do more than simply state a counter-claim to the claim being checked or explained. It may be appropriate, for instance, to explore why a policy is controversial, as with the Reality Check explainer on Prevent discussed in Chap. 3, but to have any explanatory power, the answer must do more than simply describe the controversy. Even if a topic is subject to reasonable disagreement and neither side of the debate can be proven true, the argument and counter-argument contain premises that can be interrogated, as can be seen in FactCheck’s item on railway nationalisation.

Fact-checking distinguishes itself from mainstream news reporting by going beyond the passive attribution of ‘he-said, she-said’ reporting, and involves journalists assessing the evidence and finding their own interpretive voice. This is not, of course, unique to fact-checking journalists, being shared with other forms of interpretive, analytical and explanatory journalism, but fact-checkers are more exposed to criticism because they arbitrate on politicians’ claims, and do so on the basis of their own, reasonable but not infallible, interpretation of the evidence. This is a valuable corrective to an information environment full of so many conflicting claims that truth seems precarious or inaccessible (if not something entirely from a previous era).

However, to tackle the more complex political claims, fact-checkers will inevitably need to depend on more in-depth research from other sources—think tanks and other policy experts, in particular. Some policy fields are more crowded than others, however, and whilst the Nuffield Trust, the King’s Fund and the Health Foundation can be relied on for corroborative analysis on health issues, the Institute for Fiscal Studies (IFS) has very much cornered the market in fiscal economics. This presents a challenge for fact-checkers, who find themselves reduced to simply making an appeal to expertise, relying on the understanding that the IFS is a credible and impartial arbiter (Anstead and Chadwick 2018). In contrast to their own analysis, however, they feel less obliged to convey the IFS’s reasoning or to consider the limitations of its methodology. They do not, in other words, always subject expert sources to the appropriate critical questions.

For fact-checking to really offer a form of journalism that is distinct from the equivocal stenography of ‘objective’ reporting, it needs also to interrogate the role of those facts within the wider political argument. To do this, fact-checking journalism could expand its toolkit to include some of the analytical tools of critical argumentation. Fact-checkers already use some of the critical questions proposed in practical argumentation theory (Van Eemeren and Houtlosser 2003; Walton 2006): examining the existence of a correlation in an argument from correlation to causation, whether an expert opinion is consistent with others in the field in an argument to expert opinion, or whether there is really a common belief in an appeal to popular opinion. However, there are other critical questions that should also be asked—whether there is an alternative explanation for the correlation, whether the expert opinion is based on credible evidence, or whether there is any reason to doubt a view commonly held to be true.

Evaluating Political Arguments

Depending on the claim being checked, evaluating its place in the wider argument might involve interrogating not only whether the premises are true or reasonable, but also whether they are relevant to the call for action (in an electoral context, generally a policy proposal), or used misleadingly. Where the opposition parties argue that public services are underfunded, for instance, it is necessary not only to check the scale of the cuts—though that is useful in itself in the few cases where it is contested, as we saw with schools funding in Chap. 4—but also to examine whether the cut has led to the problem claimed, as with cuts to police and patterns of crime and terrorism.

Often this will mean tackling the more difficult truth claims, such as arguments from correlation to causation. Fact-checkers often select a statement that claims credit for bringing about a positive state of affairs with a successful policy, or blames the opposing party for causing a problematic circumstance with a harmful policy, but they only ask the most obvious and straightforward critical question—whether there is indeed a correlation between the introduction of the policy and the stated change. One way to go further is simply to ask whether there are explanations for the shift other than the change in policy, as FactCheck did with various measures of economic performance; another is to draw on research by relevant issue experts that controls for other variables, as Full Fact did with research on the impact of immigration on public services.

To take the evaluation still further, it could involve asking whether the proposed means are necessary and/or sufficient to achieve the stated goal, whether they might have unintended negative consequences that damage another goal, and whether others have proposed more effective alternative means. For example, in relation to the Brexit debate, if the key goal is to bring down immigration, and a ‘hard’ Brexit is proposed as the means to achieve it, we must ask whether a ‘hard’ Brexit is sufficient to achieve that goal, or whether additional means would be required; we must ask whether it is necessary to achieve that goal, or whether there are other ways to do it; and whether there are negative consequences, such as skilled labour shortages, higher prices or economic slowdown, that might conflict with other goals. More fundamentally, we can interrogate the circumstantial premise that the EU referendum result gave a mandate for that goal—in other words, that ‘the people’ (or a slim majority of those who participated) voted ‘leave’ because they wished to reduce immigration. This assertion has hardened into an established ‘social fact’ to the point that it has become almost invisible as a contestable claim and an argumentative premise.

Another challenge for fact-checkers, therefore, is that the argument is often not fully elaborated—there are implied but unstated premises. In fact, politicians might leave premises unstated specifically and strategically to avoid being held to account for them. To some extent, mainstream media interviewing is adept at pressing politicians on these inferences, such as the interrogation of Labour MPs on the relationship they inferred between police numbers and terrorist attacks, but the results are not always illuminating. One of the most baffling omissions from the Brexit debate during the general election campaign was any challenge to the implied but unstated premise that the UK is in a stronger negotiating position with non-EU countries alone than as part of the EU trading bloc. Although this is a fundamental premise of the argument in favour of leaving the single market, since it was not explicitly raised by a prominent politician it went uninterrogated by fact-checkers. It might be contentious for fact-checkers to interrogate logical connections between juxtaposed claims, but it could be a deterrent to politicians using this rhetorical trick.

Fact-Checking Social Facts: Descriptive and Persuasive Definitions

Contrary to Uscinski and Butler’s (2013) sweeping judgement that definitions are not facts, where definitions have a settled meaning that is not widely contested, we can regard them, as Fairclough and Fairclough (2012) do, as social facts against which actions can be tested. We can ask, for instance, whether there is a substantive difference between one statement and another that constitutes a u-turn. We might object to the term ‘u-turn’ as a negative spin on politicians changing their minds—lexical choices are significant, the distinction between neutral and loaded terms is blurred, and even the most descriptive terms can convey some valence—but it is a workable definition against which to assess a politician’s denial.

Where a definition is subjective or used inconsistently, such as when experts disagree on the meaning of a technical term (Uscinski and Butler’s (2013: 174) example of fact-checking a definition relates to a ‘filibuster’), fact-checkers using an ordinal or categorical verdict must pick one against which to test the claim or action (in this instance, the allegedly filibustering senator’s), but they do so transparently, leaving scope for the audience to disagree using the reasoning explored in the fact-check. Since British fact-checkers use narrative verdicts, they are able to leave that interpretation open more explicitly, assessing, for instance, the differences between the 2015 and 2017 Labour manifestos without determining whether or not those differences (proposing broadly the same policies but being more ambitious in their scope) constitute being more radical, or indeed whether ‘radical’ has a positive or negative valence. Importantly, however, the audience do have the information to make their own judgement, distinguishing the practice of fact-checking journalism from the mainstream convention of simply presenting two or more perspectives.

Where definitions are used rhetorically, on the other hand, they do not have the status even of contested social facts, because persuasive definitions are actually arguments disguised as factual propositions in the hope that they will be accepted without challenge (Walton 2006: 222–3). Terms such as ‘garden tax’ and ‘dementia tax’ are used rhetorically by opponents of the policies, partly perhaps to evoke the negative valence associated with paying taxes, but also to imply that the policies tax inappropriate things—respectively, simple middle-class pleasures and illnesses we can’t avoid. They are also an obstacle to reasonable argumentation because they cannot be accepted as common grounds for debate—when asked “why are you introducing a garden tax?”, a Labour MP could not answer without accepting the (inaccurate) inference that they were proposing to increase the amount of tax that all people with gardens would pay.

Such terms can be rhetorically effective in the cut and thrust of debate, but they are nonetheless amenable to analysis that unpacks the message being conveyed and assesses how well grounded that implicit argument is. In explaining the term ‘dementia tax’ as conveying the argument that dementia (as the main reason for needing social care) is an illness whose treatment is not included in the principle of universal healthcare, free at the point of delivery, but is instead means-tested, the fact-checker opens it up to being accepted or contested in a reasonable rather than an instinctive, heuristic way. Indeed, fact-checkers could go further, to interrogate the argument from analogy, such as whether social care is essentially the same as treatment for other illnesses. If we view fact-checking in pragma-dialectical rather than epistemological terms—as critically unpacking political argumentation to allow voters to engage meaningfully with debates on policy, rather than conveying political truth—then definitions are legitimate targets.

Fact-Checking the Future: The Problems with Predictions

Whilst there is value in recognising a broader remit for fact-checking beyond discrete empirical facts, we also have to recognise that some premises are particularly difficult to address. The most difficult are often the most significant in political argumentation—the prediction that a specific policy measure would bring about a desired future state of affairs. These means-goal premises are crucial to the object of practical argumentation in an election campaign—to help voters decide who to vote for—so to avoid checking these claims would undermine the purpose of fact-checking journalism. Although these claims are not always effectively interrogated, that does not invalidate the attempt to do so.

The problem common to the fact-checks on policy predictions in this election campaign sample was not that caveats were missing or alternative accounts ignored—the explanations were generally good, if not exhaustive—but rather how to interpret that uncertainty in the valence of the verdict. In many, though not all, of these cases, it seemed that the availability of any potential alternative predictive argument, based on a different set of assumptions or past data, was taken to fatally undermine the prediction.

It is difficult to see how any prediction could pass muster when held to this standard. However, this is not the only standard available—fact-checkers could instead compare the evidence and assumptions underpinning the competing predictions to evaluate how reasonable (though uncertain and contested) the prediction is, nevertheless allowing the audience scope for reasonable disagreement.

The selection of predictive claims for fact-checking also ploughed a rather narrow furrow by focusing almost exclusively on the costing of policy proposals rather than their intended outcomes. Since only Labour provided detailed costings, this selection disproportionately affected them. It could be that fact-checkers assumed an increase in funding for public services to be an inherently good thing, and therefore that they need only check for negative outcomes. However, this is also driven by dominant news framing conventions, which not only assume the Conservatives to be fiscally responsible and Labour ‘profligate,’ but also assume that funding spending increases through increased borrowing, rather than through tax increases or reductions in spending elsewhere, is inherently a bad thing (though, of course, Labour implicitly accepted, or chose not to challenge, the latter by going to unusual lengths to cost every policy other than capital investment). It may also be that fact-checkers perceived fiscal questions to be easier to address, because they felt that they could simply defer to the IFS, as discussed above.

Fact-Checking Personality Politics: Measures of Credibility and the Heuristic of Trust

Whilst ad hominem arguments are often regarded as a fallacy because they respond not to the argument but to the person advancing it, they are a major part of political argumentation, and Walton (2006: 122) argues that they can be reasonable and can therefore be rationally interrogated. Since personal attacks are often very prominent claims to which voters may be attentive, they are worth addressing despite the distraction from policy; indeed, on issues where there were, strategically, few concrete policy claims (notably Brexit), they were difficult to avoid.

Fact-checkers did help to answer relevant critical questions about the evidence in support of pertinent character attacks (the direct ad hominem variant), such as voting records and the recorded reasoning given for those votes, and indirect (circumstantial) attacks on politicians’ commitment, as evidenced by consistency or u-turns.

They quite sensibly, for the most part (with one exception—a fact-check on whether the Conservatives could be trusted to increase NHS funding), steered away from the implications of those observations for the subjective perception of credibility and trust in the eyes of voters, unlike broadcast journalists and interviewers.

Effectiveness of Fact-Checking: Motivated Reasoning and Reasonable Disagreement

Fact-checking engages the audience in more complex ways than are captured in the findings of social psychology research. Experimental research from social and behavioural psychology gives us a fascinating insight into how people respond to fact-checks, but there is necessarily a limit to what it can tell us, because such experimental interventions are artificial. The surveys mostly focus on variables related to the person responding to the claims and fact-checks, but hold the claim being checked constant. Most use a single claim as ‘typical’ of a fact-check (Garrett et al. 2013), or two similar claims (Flynn et al. 2017), whilst one or two address several claims contained in an attack ad but still roll them up together (Fridkin et al. 2015). They also tend to assume that fact-checks should always be accepted at face value, and neglect the good reasons people might have for being sceptical of the conclusions drawn.

Interactions with fact-checking on social media offer an opportunity to examine people’s more natural responses and, importantly, their reasoning. Of course, Twitter users are not representative of the whole population—younger and more well-educated people are over-represented, and therefore so are more liberal viewpoints, and those following fact-checkers are likely to be more politically engaged—but this small-sample qualitative research is meant to contribute depth of understanding rather than observations that can be extrapolated. Nonetheless, variations in response within this sample can indicate a range of reasons why people might resist fact-checking, and the extent to which heuristics, including motivated reasoning, militate against systematic reasoning.

On the one hand, engagement with fact-checkers’ tweets does indicate that the audience use heuristic judgements and motivated reasoning—Twitter users appear to ‘like’ and retweet verdicts that accord with their political perspective, and contest via replies those that contradict it, including through heuristics of distrust.

Heuristics are cognitive shortcuts based on generalisations and experience, cutting out the process of reasoning. On the other hand, however, heuristics are also pragmatic and can be, to some extent, reasonable. For instance, the perception of mainstream media bias against Corbyn’s left-wing politics is not entirely driven by motivated reasoning, as indicated in media research (Cammaerts et al. 2017), even if the BBC is far from the worst offender (though perhaps held to a higher standard, as a trusted public service broadcaster). Furthermore, even if people are motivated to interrogate a claim by their political allegiances, that does not preclude them from seeking and giving good reasons to challenge verdicts that are not consonant with their existing beliefs. Responses offered alternative explanations for correlations in data, asked for missing contextual data, and contested interpretation in ways that engaged with the substance of the argument and even enriched understanding of the issue for others.

It is significant, therefore, that there was no evidence of party bias in the selection of claims, and that any imbalances in verdicts can be explained by differences in the kind of claims selected for analysis. Nonetheless, perception of bias is damaging in that it precludes the engagement of the most partisan or sceptical. Those who distrust the BBC (Channel 4 did not seem vulnerable to this) appear to reject verdicts shared on Twitter without clicking through to the analysis, often assuming that the very exercise of fact-checking a claim implies criticism or at least scepticism, even when the article itself is confirmatory. It may be beneficial, therefore, for fact-checkers to include the verdict and a brief explanation in the tweet text and accept that only a minority will click through to the full analysis.

Since fact-checking does remain a niche form of journalism, and reaches a relatively small audience, however, its potential benefits for political argumentation and public engagement will only be realised if mainstream reporting picks up more consistently on fact-checkers’ analysis and findings. There were promising examples in Chap. 4 of interviewers using figures from fact-checking to challenge politicians—citing them as facts rather than as someone else’s counter-claim. However, these were limited to statistical measures, and did not credit fact-checkers or publicise their work. Overall, fact-checking journalism provides a valuable service to an engaged minority and informs wider media debate to a limited extent. Like all worthwhile endeavours, fact-checking has flaws and limitations, but these are not inherent to the practice if it is understood in a pragma-dialectical sense, and where we understand audience contestation as a legitimate and productive form of engagement.

References

Anstead, Nick, and Andrew Chadwick. 2018. ‘A Primary Definer Online: The Construction and Propagation of a Think Tank’s Authority on Social Media’, Media, Culture & Society, 40: 246–66.
Cammaerts, Bart, Brooks DeCillia, and César Jimenez-Martínez. 2017. ‘Journalistic Transgressions in the Representation of Jeremy Corbyn: From Watchdog to Attackdog’, Journalism. https://doi.org/10.1177/1464884917734055.
Fairclough, Isabela, and Norman Fairclough. 2012. Political Discourse Analysis (Routledge: Abingdon).
Flynn, D. J., Brendan Nyhan, and Jason Reifler. 2017. ‘The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics’, Political Psychology, 38: 127–50.
Fridkin, Kim, Patrick J. Kenney, and Amanda Wintersieck. 2015. ‘Liar, Liar, Pants on Fire: How Fact-checking Influences Citizens’ Reactions to Negative Advertising’, Political Communication, 32: 127–51.
Garrett, R. Kelly, Erik C. Nisbet, and Emily K. Lynch. 2013. ‘Undermining the Corrective Effects of Media-Based Political Fact Checking? The Role of Contextual Cues and Naïve Theory’, Journal of Communication, 63: 617–37.
Graves, Lucas, and Federica Cherubini. 2016. The Rise of Fact-Checking Sites in Europe (Reuters Institute for the Study of Journalism: Oxford).
Uscinski, Joseph E., and Ryden W. Butler. 2013. ‘The Epistemology of Fact Checking’, Critical Review, 25: 162–80.
Van Eemeren, Frans H., and Peter Houtlosser. 2003. ‘The Development of the Pragma-dialectical Approach to Argumentation’, Argumentation, 17: 387–403.
Walton, Douglas. 2006. Fundamentals of Critical Argumentation (Cambridge University Press: Cambridge).

Index

Note: Page numbers followed by ‘n’ refer to notes.

A
Ambiguity, 10, 21, 33, 46, 51
Attribution, 17, 21, 46, 47, 70, 77–79, 99
Audience, 2, 3, 5–7, 10, 19, 22, 25–30, 32–34, 40, 42, 48, 49, 51–52, 55, 64, 73, 75, 80, 85, 90, 92, 102, 104–107. See also Engagement
Austerity, 9, 55, 77, 79

B
Balance, 17, 18, 22, 23, 40, 44, 47, 51–55, 92. See also Impartiality
Bias, 10, 22–26, 28, 29, 44, 51–52, 55–60, 106. See also Impartiality
Brexit, 2, 4, 5, 8, 9, 26, 33, 45, 48, 53–55, 61, 71, 73n3, 84–92, 101, 104

C
Causation, 18, 20, 46–49, 70, 78, 81–83, 92, 100
Corroboration, 60–61
Credibility, 4, 9, 10, 18, 20, 23, 25, 27, 43, 49, 55, 56, 62, 70, 71, 78, 79, 81, 84, 85n12, 100, 104–105

D
Definitions, 18, 20, 21, 46–49, 58, 74, 86, 102–103

E
Elite indexing, 84, 91, 92
Empirical, 10, 16–22, 39, 40, 44, 47, 49, 58, 60–61, 70–83, 91, 98, 103


Engagement, 1, 3, 8, 10, 29, 31n6, 39, 42, 43, 59, 59n6, 60, 98, 105–107. See also Audience
Epistemology, 10, 16, 17
Ethical critique of fact-checking, 10, 16–18, 22–26, 33, 34, 40, 44, 51–52, 60–63
Ethics, journalism, 10, 16, 17
Expert, 4, 18, 20, 26, 50, 58, 61–63, 71, 74, 80, 81, 88, 99–102
Explainers, 32, 40–46, 57, 61, 62, 86, 87, 91, 99

F
Fiscal policy, 5, 41, 45, 47, 48, 50, 53, 55, 61, 62, 71, 74n4, 100, 102–104

H
Heuristics, 3, 5, 27, 49, 59, 64, 78, 90, 103–106. See also Motivated reasoning

I
Immigration, 9, 47, 49, 54, 61–63, 76, 77, 85, 87, 101
Impartiality, 17, 25, 33, 39, 51–52, 55, 56, 60–63, 75, 100. See also Balance
Interpretation, 15–34, 45–51, 63, 70, 74, 79, 90, 99, 102, 106
Investigative journalism, 18, 51, 60

M
Misinformation, 2, 5, 26
Motivated reasoning, 3, 10, 29, 51, 55, 56, 64, 105–107. See also Heuristics

O
Objective journalism, 6, 15–34, 44, 99, 100
Opinion polls, 4, 31, 62, 63, 87

P
Partisanship, 7–8, 10, 28, 29, 64
Personalisation of politics, 3
Polarised politics, 3, 7
Political argumentation, 10, 50, 51, 69–92, 103, 104, 106. See also Pragma-dialectical
Post-truth, 4–5, 16, 26
Pragma-dialectical, 103, 107. See also Political argumentation
Predictions, 9, 10, 20, 45–49, 58, 58n5, 61–63, 71, 84–85, 88, 91, 92, 103–104
Premises
  circumstances, 34
  goals, 103
  means, 92
  unstated premise, 45, 80, 87, 92, 101
Public spending, see Austerity

R
Rating system, 19, 40, 41

S
Security, 33, 42n1, 70, 71, 77–80, 82–84
Social facts, 20, 78, 79, 86, 101–103
Social media, 2–3, 5, 8, 20, 26, 28, 53, 90, 105
Statistics, 2, 5, 17, 21, 31, 41, 46, 58, 60–63, 70, 77, 78, 82n11, 83, 91, 92, 98


T
Trust, 2, 3, 7–8, 19, 24, 49, 58, 59, 64, 78, 88, 90, 104–106
Truth claims, 6, 10, 17, 24, 44, 63, 69, 71–73, 100


V
Verdicts, 10, 19, 21, 23, 24, 26–29, 32, 33, 40–54, 56–61, 59n6, 63, 64, 74, 75, 83, 88, 91, 99, 102, 103, 105, 106. See also Rating system
Verification, 5, 10, 15–18, 20, 90