Defining Moments: Dramatic Archaeologies of the Twentieth-Century 9781407305813, 9781407335308

The shape of this collection of essays has emerged over time from an original session at the Theoretical Archaeology Group conference (Cardiff, 1999).


English. 178 pages. 2009.





Table of contents:
Front Cover
Title Page
Studies in Contemporary and Historical Archaeology
Table of Contents
List of figures
List of tables
List of contributors
Chapter 1. 1115 hrs, 24 June 2008 Drama and the moment
Chapter 2. 1230 hrs, 12 December 1901 Marconi’s first transatlantic wireless transmission
Chapter 3. 1140 hrs, 14 April 1912 The case of the RMS TITANIC
Chapter 4. 1 July 1916 The Battle of the Somme and the machine gun myth
Chapter 5. 11 August 1921? The discovery of insulin
Chapter 6. 2 October 1925 From Ally Pally to Big Brother: television makes viewers of us all
Chapter 7. 1 June 1935 The introduction of compulsory testing of drivers in the United Kingdom: the neglected role of the state in motoring
Chapter 8. Commentary: Visions of the twentieth century
Chapter 9. 16/17 May 1943 Operation Chastise: The raid on the German dams
Chapter 10. 1130 hrs, 29 May 1953 Because it’s there: The ascent of Everest
Chapter 11. 2228:34 hrs (Moscow Time), 4 October 1957 The Space Age begins: The launch of Sputnik I, Earth’s first artificial satellite
Chapter 12. 11 February 1966 Proclamation 43
Chapter 13. March 1993 The Library of Babel: Origins of the World Wide Web
Chapter 14. 0053 hrs, 12 October 1998 The Murder of Matthew Wayne Shepard: An archaeologist’s personal defining moment
Chapter 15. 0000:00 hrs, 1 January 2000 ‘Three, two, one …?’ The material legacy of global Millennium celebrations
Chapter 16. n.d. Conservation and the British


BAR S2005 2009

Studies in Contemporary and Historical Archaeology 5

Defining Moments: Dramatic Archaeologies of the Twentieth-Century


Edited by

John Schofield



BAR International Series 2005 2009


ISBN 9781407305813 (paperback)
ISBN 9781407335308 (e-format)
DOI
A catalogue record for this book is available from the British Library



Studies in Contemporary and Historical Archaeology

Studies in Contemporary and Historical Archaeology is a new series of edited and single-authored volumes intended to make available current work on the archaeology of the recent and contemporary past. The series brings together contributions from academic historical archaeologists, professional archaeologists and practitioners from cognate disciplines who are engaged with archaeological material and practices. The series will include work from traditions of historical and contemporary archaeology, and material culture studies, from Europe, North America, Australia and elsewhere around the world. It will promote innovative and creative approaches to later historical archaeology, showcasing this increasingly vibrant and global field through extended and theoretically engaged case studies. Proposals are invited from emerging and established scholars interested in publishing in or editing for the series. Further details are available from the series editors: Email [email protected] or [email protected]

This, the fifth volume in the series, brings together a highly innovative series of contributions that explore the material, social and institutional legacies of ‘defining moments’ of the 20th century. The ‘headline’ significance of these events is varied: some were of global impact (e.g. the creation of television and the World Wide Web, and the discovery of insulin), others more personal (e.g. the murder of Matthew Wayne Shepard); but all are telling of how the conditions of modernity and postmodernity that shape the networks and contours of contemporary life were brought into being. Innovation here derives from a distinctly ‘archaeological perspective’ that is taken on critical historical moments, one which solidly foregrounds the materiality (and, in instances such as that of transatlantic wireless, the immateriality) of event.

Dan Hicks (University of Oxford) and Joshua Pollard (University of Bristol) Series Editors



Contents

Foreword by Dan Hicks and Josh Pollard, i
List of Figures, iv
List of Tables, vi
List of Contributors, vii
Preface, ix
1. 1115 hrs, 24 June 2008. Drama and the moment (John Schofield), 1
2. 1230 hrs, 12 December 1901. Marconi’s first transatlantic wireless message (Cassie Newland), 9
3. 1140 hrs, 14 April 1912. The case of the RMS Titanic (David Miles), 19
4. 1 July 1916. The Battle of the Somme and the machine gun myth (Paul Cornish), 29
5. 11 August 1921? The discovery of insulin (E M Tansey), 39
6. 2 October 1925. From Ally Pally to Big Brother: Television makes viewers of us all (Martin Brown), 47
7. 1 June 1935. The introduction of compulsory driving tests in the United Kingdom: The neglected role of the state in motoring (John Beech), 55
8. Commentary: Visions of the twentieth century (Cornelius Holtorf), 65
9. 16/17 May 1943. Operation Chastise: The raid on the German dams (Richard Morris), 83
10. 1130 hrs, 29 May 1953. Because it’s there: The ascent of Everest (Paul Graves-Brown), 95
11. 2228:34 hrs (Moscow Time), 4 October 1957. The Space Age begins: The launch of Sputnik I, Earth’s first artificial satellite (Greg Fewer), 105
12. 11 February 1966. Proclamation 43 (Martin Hall), 115
13. March 1993. The Library of Babel: Origins of the World Wide Web (Paul Graves-Brown), 123
14. 0053 hrs, 12 October 1998. The Murder of Matthew Wayne Shepard: An archaeologist’s personal defining moment (Thomas Dowson), 135
15. 0000:00 hrs, 1 January 2000. ‘Three, two, one …?’: The material legacy of global millennium celebrations (Rodney Harrison), 147
16. n.d. Conservation and the British (Graham Fairclough), 157


List of figures

1.1 The Mary Rose sinking (left) and during recovery (right).
1.2 (Left) Not Skara Brae, but a ‘discarded’ necklace on a teenager’s bedroom floor, somewhere in the south of England, c.2008. (Right) A cashmere cardigan, left in a London office block at the time of its abandonment.
1.3 The day someone made a biface, some half a million years ago.
2.1 George Kemp with the kite.
2.2 Ring of masts at Poldhu.
2.3 Damaged masts at Poldhu.
2.4 Temporary aerial at Poldhu.
2.5 Raising the kite at Signal Hill.
3.1 Poster for the Women’s Titanic Memorial Fund.
3.2 A White Star Line advertisement for the first sailing to New York from Southampton via Queenstown.
3.3 A White Star Line advertisement for the return sailing from New York.
3.4 A London paper-seller announces the disaster.
3.5 Titanic Baby Found Alive! A 1993 tabloid story in the Weekly World News.
4.1 The reality: British machine-gunners on the Somme.
4.2 The myth: Stanley Wood’s vision of the fighting on the Somme, from The War Illustrated, 15 July 1916.
5.1 Charles Best and Frederick Banting.
5.2 Leonard Thompson, in a photograph given to the Research Defence Society by Dr Charles Best in 1938 when he delivered the Stephen Paget Memorial Lecture.
5.3 Advertisement for Burroughs Wellcome & Co.’s insulin, taken from the British Medical Journal, 26 October 1929.
5.4 Nobel Prize certificate of John Macleod, jointly awarded to Frederick Banting, 1923.
5.5 Note on the outside of an envelope of documents entitled ‘Insulin Controversy’ deposited by Sir Henry Dale in the Wellcome Historical Library, 1959.
5.6 Young diabetic girl before and four months after starting insulin treatment.
6.1 21 Linton Crescent, Bexhill-on-Sea, where John Logie Baird worked on his experimental television system.
6.2 Popularly known as ‘Ally Pally’, this nineteenth-century palace for the people renegotiated its role as a site of mass, public entertainment by becoming the site for BBC commercial broadcasts.
6.3 Television Centre in London has become more than a production and administration centre and has appeared in its own dramas and as a backdrop to media events from telethons to record-breaking tap-dance events.
6.4 The set of the BBC TV children’s programme Teletubbies manifests the fantastical in the English countryside.
6.5 The nineteenth- and early twentieth-century roofs of Poble Sec (Barcelona) cluster with television aerials.
7.1 The major elements that contribute to the phenomenon of motoring.
7.2 Thirties garage representation at the Museum of Irish Transport, Killarney.
7.3 Detail from original 1931 edition of The Highway Code.
7.4 Road Fund Licence from 1936.
7.5 Road sign from the Coventry area.
7.6 Traditional advertising head from a petrol pump.
8.1 Archaeology makes the past concrete.
8.2 The medium is the message.
8.3 Domesticating shipwrecks.
8.4 Commemorating national heroes.
8.5 Discoveries below the surface.
8.6 The hyper-real living room.
8.7 My other car is also a Porsche!
8.8 Reflections on reflexivity.
8.9 Comedy is tragedy plus time.
8.10 It’s Rubbish!
8.11 The dreams of science and technology.
8.12 Innocence lost.
8.13 Archaeology in the age of outside play.

8.14 Penguin normalities.
8.15 Nothing ages faster than the future.
8.16 Forget conservation!
9.1 Remains of buildings at Gunne, just below the Möhne Dam.
9.2 Road and rail bridges are swamped at Frondenberg, fifteen miles from the Möhne Dam.
9.3 Evanescent deposit formation processes: a roof structure has been bodily swept onto the ground.
9.4 Townspeople salvage belongings. Most flood debris was systematically cleared.
9.5 House on the main street of Wickede after the Möhnekatastrophe.
9.6 Neheim, where many buildings were rebuilt.
10.1 Everest from the South West, showing Camps I–VIII of the 1953 British Expedition.
10.2 Open and closed oxygen sets.
10.3 ‘At last!!’
11.1 World Space Museum’s model of Sputnik 1.
12.1 Nostalgia for District Six: poster for a musical recalling the culture of the segregated township.
12.2 Marking the ground. An installation that formed part of the District Six Sculpture Festival.
12.3 Installation by Roderick Sauls. Part of the District Six Sculpture Festival, this installation recalls both slavery and the carnival traditions of the city.
13.1 The NeXT workstation used by Tim Berners-Lee as the first Web server on the World Wide Web.
13.2 Evolution of the web browser.
13.3 A VPL Research DataSuit, a full-body outfit with sensors for measuring the movement of arms, legs, and trunk.
13.4 Stonehenges for sale. The author’s avatar explores one of several Stonehenges to be found in Second Life.
14.1 The Mesolithic Family Group.
14.2 The Neolithic Family Group.
14.3 The Bronze Age Family Group.
14.4 The Iron Age Family Group.
14.5 The Anglo-Saxon Family Group.
15.1 The Millennium Dome, with the Canary Wharf complex in the background.
15.2 Looking east across Hilly Fields stone circle, October 2007.
15.3 Sketch plan of the Hilly Fields stone circle, October 2007.
15.4 Detail of stone slab sundial calendar, Hilly Fields stone circle, October 2007.
16.1 Aldermanbury (City of London), after bombing in the Second World War.
16.2 The Euston Arch (London) in 1960, three years before its demolition, a cause célèbre in the growth of conservation.
16.3 Marsham Street Towers (Department of the Environment) (Westminster) during demolition in 2002, a removal more or less welcomed by the fully-fledged conservation movement.
16.4 South Crofty pit in 1998 at the moment of closure, then the last working tin mine in Cornwall.


List of tables

15.1 List of major projects funded by the Millennium Commission which involved new constructions, and their association with heritage sites or attractions.


List of contributors

John Beech is the Head of Sport and Tourism Applied Research at Coventry University’s Applied Research Centre for Sustainable Regeneration. His research interests in tourism and heritage are diverse, and include all forms of transport. He has published on motoring, railway and aviation heritage. John has practitioner experience in heritage visitor attractions: he was a member of the small group that bought, restored and opened Errol Station Heritage Centre, which won the Ian Allan Award for Best Preserved Station in Britain in 1991.

Martin Brown is Archaeological Adviser for southern Britain with Defence Estates, an agency of the UK Ministry of Defence, and has previously worked within local government, for English Heritage and as a contract archaeologist. He is also a founder member of No Man’s Land, the European Group for Great War Archaeology, and has directed a number of projects with the group. Martin’s involvement with television includes appearances on a number of programmes addressing archaeological or cultural heritage issues, including Trench Detectives, Ancestors, Time Team, Country Tracks and Celebrity Big Brother. His interest in Baird and his legacy was sparked during his time working in East Sussex, where Baird lived and conducted early experiments. His previous published considerations of archaeology and popular culture explored storytelling and writing; this paper shifts focus to the modern medium of myth-making.

Paul Cornish is a Senior Curator in the Department of Exhibits & Firearms at the Imperial War Museum, where he has worked since 1989. Since 2000 he has become involved in the cross-disciplinary study of the material culture of conflict, having co-hosted four conferences and co-edited a book on the subject (with two others in preparation, as of August 2009). His book Machine Guns and the Great War, which unites his interests in history, firearms technology and material culture, was published in July 2009. He feels that 1 July 1916 might validly be claimed as a defining moment for many aspects of British society and culture, but has focussed on an area which has been the subject of his recent research: our perception of the machine gun as an artefact of material culture.

Thomas A Dowson is an independent archaeologist. He has held posts at the University of the Witwatersrand (South Africa), and the Universities of Southampton and Manchester (England). His research includes shamanism and the interpretation of rock art, theory and methodology of archaeological approaches to art, the popular representation of prehistoric and ancient artistic traditions, as well as the sexual politics of archaeology. His publications include Rock Engravings of Southern Africa (1992, Witwatersrand University Press) and, with David Lewis-Williams, Images of Power: Understanding San Rock Art (1989; second edition 2000, Struik). He also edited the Queer Archaeologies volume of World Archaeology (32:2, 2000).

Graham Fairclough, an archaeologist, currently Head of Characterisation at English Heritage, has written widely on the evolving practice of heritage management and has experienced many of the twists and turns of the relationship between archaeology and conservation in the later twentieth century.

Born in Canada of mixed French-Canadian and Irish parentage, Greg Fewer moved with his family to Ireland as a small child. He attended University College Cork, graduating with a BA in archaeology and history (1989) and an MA in history (1993). A lecturer for many years at the Waterford Institute of Technology, in Ireland, he is now its equality officer. He was also an occasional lecturer and tutor in late medieval/early modern Irish history at the National University of Ireland, Maynooth. He has written various articles and book chapters on archaeology and history, edited or co-edited local journals, and was a volunteer abstractor for what is now the British and Irish Archaeological Bibliography from 1992 to 2003. A child of the Space Age ushered in by Sputnik I, he has been a lifelong science-fiction fan and has for many years been interested in the heritage of space exploration and in the role archaeology can play in recording and conserving that heritage.

Before becoming an archaeologist, Paul Graves-Brown worked as an engineer, a fact whose significance has only recently dawned on him. Gaining his PhD in Archaeology at Southampton University in 1990, he first encountered the World Wide Web when the ‘Mosaic’ browser reached British academic institutions, c.1994. Interested in IT since the early 1980s, he builds and repairs PCs and occasionally designs websites for paying clients. His wide variety of publications includes the edited volume Matter, Materiality and Modern Culture (Routledge 2000) and a recent article on the Kalashnikov AK47 assault rifle (Journal of Material Culture, December 2007). Whilst not actively afraid of heights (he has, from time to time, worked on the roofs of buildings), they do tend to make him very cautious.

Martin Hall is Vice-Chancellor of the University of Salford, Greater Manchester. He has written extensively on pre-colonial history in Southern Africa, on the historical archaeology of colonialism and on contemporary public culture. He currently teaches and carries out research on the intersection of the public and private sectors, entrepreneurship, and the role of ‘knowledge organizations’ in advancing development in highly unequal societies. Recent publications include ‘Identity, memory and countermemory: the archaeology of an urban landscape’ (Journal of Material Culture 11(1-2): 189-209, 2006), Historical Archaeology (edited with Stephen Silliman; Oxford, Blackwell, 2006) and Desire Lines: Space, Memory and Identity in the Post-Apartheid City (edited with Noeleen Murray and Nick Shepherd; London, Routledge, 2007). He was previously Professor of Historical Archaeology at the University of Cape Town, and is a past President of the World Archaeological Congress. A full list of publications, as well as current work, is available at

Rodney Harrison is a lecturer in Heritage Studies at the Open University (UK). He has previously held research and teaching positions at the Australian National University, the University of Western Australia, and the NSW National Parks and Wildlife Service, and has been an honorary visiting research fellow in the Department of Anthropology, University College London. His books include Shared Landscapes (UNSW Press, 2004) and the co-edited volumes The Heritage Reader (Routledge, 2008) and After Captain Cook (AltaMira Press, 2004). At the stroke of midnight on the eve of the new Millennium, Rodney was amongst the crowds who had gathered to watch the fireworks in Perth, Western Australia, and had already started wondering about what would be left behind.

Cornelius Holtorf teaches Archaeology at Linnaeus University in Kalmar. In his research he investigates contemporary zoos, archaeology in popular culture, and the connections between placemaking, storytelling and heritage. Recent books include From Stonehenge to Las Vegas (2005), Archaeology is a Brand! (2007) and Contemporary Archaeologies (co-edited with Angela Piccini, 2008). Just occasionally he enjoys provoking his colleagues.

David Miles was until recently Chief Archaeologist at English Heritage. Previously the Director of the Oxford Archaeological Unit, he was an Associate Professor of Stanford University and is a Research Fellow of the Institute of Archaeology, Oxford and a Fellow of Kellogg College, Oxford. His particular interests are the history of the English landscape and the archaeology of the first millennium AD. He has worked in many areas of England and also in France, Greece, Israel, Africa and the Americas. He is the author and co-author of many books and articles on archaeology, including An Introduction to Archaeology, An Atlas of Archaeology and The Countryside of Roman Britain. He was a columnist for the Oxford Mail and Times for ten years and frequently broadcasts on radio and TV.

Professor Richard Morris combines writing with direction of the Institute for Medieval Studies at the University of Leeds, and with music. The biographer of Guy Gibson (1994) and Leonard Cheshire (2000), he was Director of the Council for British Archaeology from 1991 to 1999, having earlier worked as a university teacher and as Research Officer for the Council for British Archaeology. His interests in places of worship, settlement, historical topography, cultural history and aviation are reflected in essays, chapters, articles and books. At the time of writing he chairs the Expert Panel of the Heritage Lottery Fund and the Blackden Trust. He is the official biographer of Barnes Wallis.

Cassie Newland is a PhD student and part-time lecturer at the University of Bristol. She also works as a freelance consultant. Her recent projects include the archaeology of mobile phones, the Imperial Wireless Scheme and ‘In Transit’, the excavation of a 1991 Ford Transit van. Cassie is obsessed with radio, something she puts down to childhood trauma.

John Schofield works for English Heritage and teaches at the universities of Bristol and Southampton. Following research and a PhD in prehistoric archaeology, John has gradually been drawn into more contemporary matters. Organising the ‘Defining Moments’ session for TAG 1999 was one of his first forays into this emerging field, and since then he has published widely on the subject. In 2006 he collaborated with others in excavating an old Ford Transit van.

Tilli Tansey is Professor of the History of Modern Medical Sciences at the Wellcome Trust Centre for the History of Medicine, University College London. She studied the neurochemistry of the octopus brain for her PhD and spent many years working as a research neuroscientist in Sheffield, Edinburgh, Naples and London before taking a second doctorate, in medical history, on the career of Sir Henry Dale (1875-1968). She specialises in twentieth-century medical sciences, especially physiology and pharmacology, and has published extensively in historical, medical and scientific journals. She is a Fellow of the Academy of Medical Sciences and an Honorary Fellow of the Royal College of Physicians of London.


Preface

The shape of this collection of essays has emerged over much of the first decade of the twenty-first century. But the path has been very far from smooth. The original session at the Theoretical Archaeology Group conference (Cardiff, 1999), from which many of these papers derive, was followed by one or two publication proposals. At the time (2000-2001) these were not greeted by potential publishers with any enthusiasm. Some questioned the selection of ‘moments’ (in the way Cornelius Holtorf does, in Chapter 8, p 73); others questioned the whole concept.

A few years later I re-launched the publication proposal through the then fledgling CHAT (Contemporary and Historical Archaeology in Theory) organisation, with its own series of books and conferences. This seemed an obvious home for Defining Moments, and so it proved. For their support and encouragement I am therefore indebted to the series editors Josh Pollard and Dan Hicks (one of whom, incidentally, was in the audience of the original conference session, and the other a conference organiser).

I also owe a debt to the contributors at TAG 1999, most of whom remained enthusiastic, willing and committed some eight years later. Indeed, with three exceptions, all of the speakers have remained for the published proceedings. One, sadly, has died (Sara Champion, who prepared the original Internet contribution, read at the conference session by Duncan H. Brown), and the other two withdrew, with the loss of ‘punk’ and ‘the solar eclipse’. On the plus side are several new contributions, from Cassie Newland, Paul Cornish and Rodney Harrison. The book also benefits from a photo-essay by Cornelius Holtorf. I am grateful to all of these contributors for their support and hard graft. If even some of these contributions change the way some archaeologists think about their subject, Defining Moments will for me have been worthwhile.


Chapter 1

1115 hrs, 24 June 2008
Drama and the moment

John Schofield

Significant things happen so slowly that it’s seldom you can say: it was then – or then. It’s only after the change is fully formed that you can see what’s happened. (Sebastian Faulks, Engleby, p. 77)

There is a lesson we learn early as archaeologists – that there is a longue durée, a prolonged period over which change occurs, sometimes more through structures than events (Braudel and Matthews 1982), and that the material traces of specific, precise moments are only rarely encountered. These traces do exist; it is just that, as Faulks implies, they are generally invisible, often to all but the closest of scrutineers, like brush strokes on a canvas, or a lone instrument in the orchestra. But occasionally a brush stroke does stand out from the canvas – or a slash, in the case of Lucio Fontana, whose ‘slash series’, ‘art for the space age’, began in 1958. Here a single stroke can define the picture, giving it particular relevance and meaning. It is the thing people remember – a single action of a great artist, a moment of genius. The archaeological exceptions are well known and often headline-grabbing: the moment of biface manufacture at the Lower Palaeolithic site at Boxgrove (West Sussex), for instance; or the discarded necklace left in a Neolithic house at Skara Brae (Orkney); the sinking and subsequent recovery of the Henrician warship Mary Rose; and, as David Miles describes in his chapter, the volcanic submersion of Pompeii.

‘Defining Moments’ is about these specific moments of drama: it explores ways in which points of detail can emerge from the broader and generally anonymous processes of cultural change. The archaeological dramas represented here each recall Binford’s (1981) ‘Pompeii Premise’, the reading of events directly from remains as if they had been left yesterday (Lucas 2001: 148). And the drama of the moment is important – it is these events, things that happen at a particular place and time, often with significant social and political ramifications, which provide the stories that punctuate our lives, and enthuse and excite us about the world. For archaeologists this is why Mary Rose grips us, and Skara Brae, and Pompeii, and the episodes of early human activity at Boxgrove and Laetoli, with its Pleistocene footprints strangely akin to those left by the moon-landers, a comparison recently highlighted by Greg Bailey (2008). The archaeological traces of these moments of drama exist also for the twentieth century. The only difference is that they happened more recently and often within living memory; but that doesn’t put them beyond the scope of archaeological scrutiny. Far from it, as this collection illustrates.

Geological time

But let us begin on another temporal scale altogether: the geological scale of things, and the physical processes by which the earth continues to be shaped and reconfigured, by which continents move, merging and separating not merely over millennia but over millions of years, introducing to some of us the concept of unimaginable time – time that quite simply exceeds our ken. Yet even at this vast scale there can be defining moments: an earthquake caused by the frictions of plate tectonics, perhaps with movements of centimetres at once, not the micro-movements that are perceptible only to modern equipment attuned to measuring with increasingly close precision. What we now know as Africa and America once nestled (or spooned) together, parts of Pangaea, the super-continent of 250 million years ago, though they are now separated by a vast ocean. When did this happen? At what point did these two continents become separate? We know the separation began in the Early to Middle Jurassic period, between 200 and 161 million years ago, but what was the defining moment? Or were there many moments, each unique to a specific set of spatial co-ordinates? We will probably never know, and at this geological scale of enquiry one might ask whether it is even appropriate or necessary to talk about such things in this way. Do geologists think in such terms? Should archaeologists?

A very different resolution exists in Günter Grass’s Mein Jahrhundert (My Century) (1999). In this, 100 short stories create a personal, artistically portrayed view of the twentieth century, one per year. It is history from below, ‘history from the point of view of those … hardly ever referred to in history books: the victims of history, the little people – not state rulers, generals and business tycoons’ (Weber 2000). ‘Although only fictional’, Weber argued, ‘Grass’s stories often contained more truth than authenticated historical documents’ (ibid.).
For these reasons, Grass’s book reads also as a kind of fictional archaeology of the twentieth century: what was it like, and what were the people like who lived at this time? It also follows Faulks (op. cit.) in noting how significant things happen more slowly, one thing creating the situation for other situations to arise, another change to occur. It is incremental, in other words – the gradual creep of progress, or change.

Weber's (2000) review of My Century also criticises the work: Grass's literary method 'revolves around the how and not the why. As a result, unfortunately, precisely those questions are omitted which are of the greatest interest when dealing with history: questions concerning the relationship between people's ideas and their actions or non-actions, questions that take as their theme the influence of conscious action on external social relations and vice versa.' Archaeologists ask many questions of the material culture they investigate, but 'what', 'when' and 'where' are generally precursory to the more interesting and challenging 'hows' and 'whys': the higher-level, bigger questions that require greater depth of understanding to provide an informed view of one's subject.

1115 hrs on 24 June 2008 is a moment of questionable significance. It barely matters when I first clicked the mouse to begin creating this introduction, attempting, finally, to draw this fascinating and diverse collection of chapters and perspectives together. But maybe it will have significance in time? Perhaps in time this collection of essays will have a place in archaeology's historiography? Maybe the publication will be a turning point of sorts? People may one day ask whether the 'defining moment' was the timing of the original TAG session in which the book has its origin, the drawing together of the published chapters, or the publication date. I hope not, and trust that people will have better things to concern themselves with. Yet the date and time are included here as a matter of record: this is when the process of compilation and accumulation began.

That too is what Defining Moments is largely about: material culture as evidence of change, and how the micro-scale (the single event, the action) feeds the bigger picture, the overview, providing compelling and immediate life stories of individual events or dramas, through the people, places and things involved. The defining moment is the punctum, in other words (that which pricks us, to use Barthes' terminology [2000]), for those who remember and for those who do (and will) not, centuries or millennia into the future. The defining moments included here comprise a particular twentieth-century narrative, but the principle can easily translate to earlier periods, understanding what ancient traces of events contribute to the bigger pictures of, say, prehistory or the medieval period. How do they contribute to what Lewis Binford described as the 'Big Questions of Archaeology' (1983: 26)? By taking a supposedly familiar period of history, and assessing it in this way, through its traces, its material remains, we can assess what these defining moments contribute to our understanding of the modern period. It also helps us to better understand our curatorial and conservation responsibilities, and to consider where there is legitimacy in studying archaeology at this micro-scale.

But while my moment is almost certainly insignificant and arbitrary, others here have obvious resonance. Some important things do happen at once and not over a prolonged period, and there was no shortage of these in the twentieth century. Some of these moments are included in this collection and others not, the selection being based entirely on what authors offered rather than on attempting to arrange coverage for a preconceived set of stories and accounts. One often hears the question: 'Can you remember where you were/what you were doing when Kennedy was assassinated?' or 'Where did you watch the moon landing?' We make similar connections for more personal and intimate moments: when a child was born, or when one hears of a parent's death, for example. As time passes, our cognitive maps become more elaborate, more complex, as we add places with particular associations and resonances, often combining the global and the local. Rodney Harrison (2004) describes mapping landscape biographies: identifying, mapping and describing the places that matter to local communities, often because of particular, time-specific events that happened there. That is partly what we are doing here: mapping some very particular landscape biographies, at a personal level (through the authors' own accounts of events) and culturally.

In this short introduction I will briefly contextualise the chapters that follow under four headings – moments, things, places and people – which together comprise the staple diet of archaeologists. In spite of recent significant interest in archaeologies of the contemporary past (e.g. Graves-Brown 2000; Buchli and Lucas 2001; English Heritage 2007; McAtackney et al. 2007; Penrose 2007), a summary of these four areas of interest in relation to contemporary archaeology, and the defining moments included in this collection, seems appropriate as a precursor to the particular subjects that follow.

Moments

Time of course is continuous, and what we are seeing in this collection are punctuation marks – events that effectively introduce an element of discontinuity: events that can create, if you will, seismic shifts in the cultural process and serve as markers along the time-line. It can be said of many of the moments included here that ‘things were never the same again’.



DRAMAS 1 & 2

Figure 1.1: The Mary Rose sinking (left) and during recovery (right), two defining moments some 450 years apart. Both images are copyright The Mary Rose Trust.

The National Historic Ships Register contains information on Mary Rose, the Henrician warship that sank in the Solent on 19 July 1545 and was recovered on 11 October 1982 (Figure 1.1). Two defining moments, both arguably of global significance but for very different reasons. The Register tells us more:

MARY ROSE is of outstanding significance not only because of her historic, cultural and technical significance, but also because of the contribution to the development of maritime archaeology and to the science of conservation. She was built in Portsmouth in 1509 on the orders of King Henry VIII and launched in 1511. She took part in all three of Henry VIII's wars with France and was one of the ships in the fleet covering his journey to France to meet Francis I at the Field of the Cloth of Gold. She sank on 19 July 1545 in full view of Henry VIII during an engagement with a French invasion fleet some two kilometres from Portsmouth Harbour. Such was her importance that immediate attempts for her recovery were considered and initiated. MARY ROSE dates from a vital period in the development of ship construction and in particular the design of a warship. She has often been called a 'revolutionary' warship both because of her construction and because she was one of the first to incorporate water-tight lidded gun ports. These allowed heavy guns to be placed low down in the hull in addition to the guns on higher decks. She was finally recovered on 11 October 1982. Dr Margaret Rule switched on a new spray system at the Mary Rose Ship Hall in Portsmouth Historic Dockyard on 27 July 2006, marking the start of the next phase of the ship's conservation. The new spray contains a thicker, more concentrated, polyethylene glycol. The Trust will be working closely with the Heritage Lottery Fund over the plans for the third and final air-drying phase and is in the process of applying for funding to support its vision of a highly innovative new museum. ( - accessed 5 September 2008).



Figure 1.2: (Left) Not Skara Brae, but a 'discarded' necklace on a teenager's bedroom floor, somewhere in the south of England, c.2008 (Photo: Armorel Schofield). (Right) A cashmere cardigan, left in a London office block at the time of its abandonment (Photo: Author).

This book is not short on such dramas. There are great moments which enthralled and excited us (Everest, the Millennium), and great advances in medicine and technology, advances that genuinely changed lives in dramatic and previously unforeseen ways (transatlantic wireless, television, the World Wide Web and insulin). It was a century characterised also by conflict and trauma (the Somme, Titanic, the Second World War and the dams raids), and by intolerance, racism and bigotry (Proclamation 43 and the murder of Matthew Wayne Shepard). It was a century also of exploration into ever further reaches (Everest again, and space), of state control (driving tests), and of the rise and rise of conservation. It was like no other century before, and the traces of this diverse legacy can be a fascination to all who study them.

Things

As archaeologists our concern is with the literal 'stuff' of archaeology, and one aspect of that is the material culture that accompanies or represents change. What physical evidence exists, for example, for the moment of change, and its repercussions? I recently made a comparison between two finds, separated in space, time and social context, but united in their status as lost objects, lost perhaps as a direct result of the abandonment of the buildings in which they were found. Some 5000 years separates the loss of a necklace in Neolithic Orkney and a cashmere cardigan left on the back of a chair in a London office block. Each clearly represents a moment in time: one (arguably) the hurried desertion of a Neolithic village at Skara Brae, and one the final abandonment of English Heritage's HQ in Savile Row, London W1 (Schofield 2008). Both were presumably not left deliberately, and the loss of both objects may have had traumatic consequences. Both may have been loved objects, perhaps heirlooms and of some cultural value. In both cases the context matters. Why was Skara Brae abandoned, and was it sudden or longer and more drawn-out? Some have interpreted the abandonment as due to a sudden, cataclysmic event such as a storm and high tides. Others (more recently) say it was a more gradual process, and an early example of the climate change and coastal erosion we now witness with increased regularity. One can easily visualise the scene: increasing storm force eventually causing the household to decide enough was enough, with a particular event the final straw leading ultimately to a rushed departure, the family clutching their heirlooms and essentials, including the necklace. In the rush the necklace gets dropped, only to be gradually covered in silts and rediscovered millennia later by archaeologists. Or perhaps it was lost, or broken and left therefore as a worthless thing. Perhaps the necklace was a standing joke – like a teenager's precious jewellery just left on the floor in or just outside her (one presumes it is a 'her') bedroom. She cannot be bothered to pick it up, and her parents wouldn't dare! But it is something for them all to laugh about years later, perhaps (Figure 1.2).

The same people, the same things … but very different stories depending on the interpretation placed upon them. Yet this degree of speculation is not such an issue for contemporary archaeology, is it? We know the stories for this familiar past, do we not? Some sceptics and detractors may think that, but I disagree. And that is another reason why this collection matters. Take the cashmere cardigan, left on the back of a chair in Savile Row and rediscovered by an archaeologist after the building's abandonment (Figure 1.2) (Schofield op. cit.). The same range of theories and ideas can be advanced here. Was this something left on the back of the chair as its owner frantically packed crates as the deadline for departure approached? Running up to the deadline, and perhaps beyond it, she (one presumes it was a 'she') had to hurry out. Was the cardigan left as she made a dash for the exit? Or perhaps it was simply 'lost', having been left on a chair in a remote part of the building where the owner had attended a meeting. If so, it could have been on that chair for months before English Heritage left. Or perhaps it had become worthless – too small, or too threadbare (in the owner's opinion). Maybe this too became a standing joke: the precious cardigan that no-one came back for. Maybe it belonged to an 'outsider', someone who came to Savile Row for a meeting and inadvertently left her mark. Questions about things appear just as important, and just as hard to answer, for the recent as for the deeper past.

The things in this book are many and varied, and many of the illustrations represent this range of uniquely twentieth-century material culture that, as archaeologists, we increasingly seek out and investigate. We are all closely familiar with some of these objects and categories: artefacts recovered from Titanic or filmed in situ and seen on Imax screens around the world; machine guns and other war matériel in museum displays; televisions of various ages in design and science museums and in people's lofts and garages; road signs; the detritus of early camps on Everest, and other detritus in space; people's artefacts of resistance in the District Six Museum (Cape Town); and the question of the immaterial in the case of the Internet, combined with early examples of computers, hard drives, manuals and so on. This is just a small sample, and perhaps an unrepresentative one, but it conveys an impression of the range of materials, and the challenge to archaeologists responsible for their investigation and curation.

All of these moments, then, involve physicality in terms of the construction of place, or the artefacts that represent the moment, either literally or as a replication or representation of reality. The papers that feature here are a further example. From what I recall of the conference session from which this collection derives, most papers were read, and therefore existed in material form – as a file on a computer, and as hard copy. But given the time lapse, computers and word-processing packages have moved on (as Graves-Brown's Internet chapter tells us), making it near impossible to open and edit old files, while the physical paper was probably left in a bin in the Cardiff lecture room in which the session took place (Physiology Lecture Theatre A). The physical form has therefore been recreated in this case, from a combination of memory, old notes and snippets, and the original paper abstract which remains online. In some cases the recreated paper closely reflects what existed before; in some cases it is a reinvention – something new, up to date and reflective also of developments in contemporary archaeology in the intervening years.

Other traces exist in the places at which these events occurred, places which often have physical remains that warrant the attention of conservationists, heritage practitioners, tourists and – increasingly – archaeologists. Often though, and rather surprisingly, much of the place has gone (in fabric at least), if it was ever really there in the first place!

Places

It is not necessary that moments in time happen at particular places, but they often and typically do. And it is through place that we construct our own cognitive maps of the world: we mentally map our experiences and lives in terms of the places where things happened. If I make a journey to Suffolk, where I grew up, I recall events and people as I travel the last few miles down lanes and by-ways. And I notice even the most subtle changes that have happened to those places – a hedgerow grubbed out, or a building or garden transformed. For the most part these everyday places recall moments. I cannot place the event that occurred there precisely to a date, or even a year … but the geo-reference is clear; the spatial co-ordinates are accurately given on my mental map, and the associated event is clearly recalled as I pass by. These, again, are the mapped biographies that Rodney Harrison (op. cit.) has spoken of.

When asked, as I often am, about the similarities that exist between ancient and recent archaeologies, place is often prominent in my response. One only needs to read a city biography to see this. Brian Ladd's (1997) The Ghosts of Berlin, for example, subtitled 'Confronting German history in the urban landscape', examined the various layers of Berlin's troubled history, which come together at specific sites where Nazi and Stasi pasts are superimposed, and where different, unrelated events appear to be focused. One wonders if there are places that simply attract trauma? One can imagine this being so for geo-political reasons, just as some places lend themselves to other types of history: explorers are bound to target the remotest and most challenging places on earth, like Everest; significant scientific advances will take place in university laboratories. I was struck, having selected the particular 'dramas' and images for this introduction, to note that the majority are tied to a very particular area of central southern England, yet cover events half a million years apart. I did once live in the area, so maybe this is not random so much as some psycho-geographical coincidence.

Places resonate through this book. Some are well known and tourist hotspots, but the majority in fact are not, either because of their remote location or because, while the event, moment or product may be well known, the place most significant in its production or use may not be. Visitors to Cape Town often visit the District Six Museum, but the District itself is best known by Capetonians and the community evicted from there in the late 1960s. Many know of the dams raids, but few could place the dams on a map of Germany, excepting again local communities and dams raid historians, of course. Scampton is on many people's radar as the base from which the aeroplanes flew, but how many can locate it precisely, despite its close proximity to the historic city of Lincoln? Everyone knows the Titanic sank in the north Atlantic, and most will tell you it was built in Belfast and sailed from Southampton. But how many can cite its present co-ordinates, and is it better in this case that people do not know? Where is Sputnik now, and from where was it launched? Where was Matthew Wayne Shepard murdered? And who knows where the key research and development behind insulin and the Web took place? Research laboratories and university campuses are places too, where many of the authors in this collection work. These places can attain cultural significance, but perhaps they have less resonance or public interest than Wembley Stadium, say, where England famously won the football World Cup in 1966 and where Bob Geldof later staged Live Aid, precisely because they are unfamiliar places to many, and – to some people – are mundane, uninteresting places, places lacking in character, places that are essentially functional. But, as I have shown recently, even these functional, mundane places can matter and can benefit from an archaeological gaze (Schofield 2008).

People

A BBC series recently asked some of the world's most influential people about the defining moments in their lives. Desmond Tutu chose the moment when, aged about nine, he saw Trevor Huddleston:

I didn't know it was Trevor Huddleston, but I saw this tall, white priest in a black cassock doff his hat to my mother who was a domestic worker. I didn't know then that it would have affected me so much, but it was something that was really – it blew your mind that a white man would doff his hat. And subsequently I discovered, of course, that this was quite consistent with his theology that every person is of significance, of infinite value, because they are created in the image of God. And the passion with which he opposed apartheid and any other injustice is something that I sought then to emulate.

There are many such tales and accounts in this collection. Many are of explorers, soldiers, sailors, airmen and scientists. But there are also tales of ordinary people: people watching television, taking their driving tests on the instruction of the State, suffering the injustice of the Group Areas Act, and celebrating the Millennium. The collection has a further dimension: the diverse group of authors, from a range of cultural and academic backgrounds, who all take an interest in the material traces of the twentieth century's defining moments. These people form an important part of the narrative as well. Archaeology, as Mortimer Wheeler famously said, is about people. We have also seen how Günter Grass represented an account of the twentieth century through 'the victims of history, the little people'. Above all, perhaps, this collection is about people, from a diversity of backgrounds and situations, some of whom sought fame or infamy, while for others it just happened. The chapters contain a roll-call of the great and the good, the notorious, the boffins and brains, the celebrities and characters that together create a uniquely twentieth-century narrative. Some are people readers will have heard of; some remain largely anonymous despite significant contributions to society.

DRAMA #3

Some half a million years ago, on a beach in what is now West Sussex, someone made a biface. We know this from the scatter of flint debitage found over a triangular area of less than a quarter of a square metre, with clearly defined edges. By comparison with experimental flint working, it can be said that this person was seated. Louise Austin describes how:

A small number of larger than average flakes were located approximately 20 cm to the west of the main scatter, probably to the right of the knapper's leg. These pieces may have been deliberately placed here during knapping for the later selection of usable flakes. There was a build-up of flakes along the inside of the knapper's right thigh, suggesting that the knapper held the rough-out on his right leg and knapped therefore on the right side of the body.

These actions took place half a million years ago, yet they can be reconstructed now, through forensic investigation. The discovery may have been a defining moment for those involved in the recovery, and for the person who wrote about it (Austin 1994). But for the knapper it was presumably a private moment, a moment for concentration and focus, and – importantly – an everyday moment: something ordinary which now seems extraordinary in its recovery, and in the contradiction between spatial resolution and time depth, one of the many remarkable things about archaeology and why it attracts public interest.



Figure 1.3: The day someone made a biface, some half a million years ago (after Austin 1994, 124) – not far from the location of the necklace 'find' in Figure 1.2, and very close to the location of events depicted in Figure 1.1.

There were many other such moments at Boxgrove, though perhaps none quite like this one, in the Lower Level Assemblage of Quarry 1. In the later Upper Level Assemblage, further in situ knapping events or episodes were encountered. Groups D-H, for example (Figure 1.3), are believed to derive from the reduction of a single flint tool, while Group A represents part of the thinning stage of an already partly reduced flint nodule, and Group B seems to represent reduction of the same biface as A, but probably of the opposite face.

Heritage etc.

This book has several objectives, some of which correspond closely to the things that motivate me as an archaeologist, and as someone involved with heritage management. One is the very nature and scope of archaeology, and the symmetrical observation of a past not necessarily past: that archaeology is the study of material culture in the pursuit of understanding. A cashmere cardigan falls within this definition just as easily as a Neolithic necklace. I am also interested in how we construct narrative, and from what. As an archaeologist I am used to identifying specific events through material culture. One might cite the example of an in situ knapping floor in which a particular episode of tool manufacture can be identified, to the extent that we can identify the posture of the knapper, and his handedness. This is interesting, and is in large part the attraction of archaeology – witnessing remote events and activities and creating minor dramas out of them. What really matters, we constantly tell ourselves, are the bigger questions (after Binford 1983): the social context of lithic production strategies; seasonality; the process of settlement. Yet it is often the events and actions that capture the imagination; that make the headlines. By examining these questions for a more familiar past (well, supposedly more familiar – this is debatable) one might establish renewed justification for investigating moments, and certainly for constructing value judgements for the artefacts and places that remain. That is the final point here: that beyond the purely archaeological, my own interests incorporate the management and curatorial decisions that accompany these things. Having identified a defining moment, and recognised its significance, whether for local, national or international reasons, by what further processes can we establish value judgements and decide an appropriate mechanism for management and public interpretation? Can we help people to understand and value the past through its defining moments and the monuments and objects that remain? Can these help people to understand the public benefits of archaeology, why it matters and to whom; and can they help to create an enthusiasm for the past?

Back then, finally, to this chapter's defining moment: 1115 hrs, 24 June 2008 – the moment when I began to write, and finally to draw this collection together into something tangible and (I hope) lasting. Is it a significant moment? As I said earlier, no – or at least, not for me, and not in the context of this book. But maybe something significant did happen, somewhere – something significant in global terms. Maybe someone was born at that moment who will perform great deeds in the future: a leader, an icon, a great archaeologist? Or maybe it was something else – an event, the legacy of which will only be known later. What is beyond doubt is that, as a moment, it will have significance for someone, somewhere, and that the material traces may one day form the subject of archaeological enquiry.

In this chapter I have identified several types of defining moment: those that are telegraphed, that we know are coming (the moon landing); those that take us all by surprise but whose significance is obvious (the assassination of Kennedy and, from the early twenty-first century, the terror attacks of 9/11 on the twin towers); those that are planned by some, but come as a surprise or shock to others; and those whose significance gradually accumulates (medical advances, for example). Examples of all types are included here, together providing alternative views of the twentieth century, from the perspectives of archaeology, cultural heritage and material culture. As I have said, these are the punctuation marks which help the sentences make sense, and perhaps at the same time bring pleasure to the reader. That is my intention here: a book that is fun to read, but contains serious messages about the nature of archaeology, how we do it, what we do and why. There is a drama to archaeology, especially where defining moments are concerned. For me that makes it all the more enjoyable, engaging and – sometimes – relevant.

References

Austin, L. 1994. The life and death of a Boxgrove biface. In Ashton, N. and David, A. (eds), Stories in Stone, 119-126. Lithic Studies Society Occasional Paper No. 4.
Bailey, G. 2008. Thinking outside the box. British Archaeology 102, 52-3.
Barthes, R. 2000 [1980]. Camera Lucida. London: Vintage Classics.
Binford, L.R. 1981. Behavioural archaeology and the 'Pompeii Premise'. Journal of Anthropological Research 37(3), 195-208.
Binford, L.R. 1983. In Pursuit of the Past: Decoding the Archaeological Record. London: Thames and Hudson.
Braudel, F. and Matthews, S. 1982. On History. Chicago: University of Chicago Press.
Buchli, V. and Lucas, G. 2001. Archaeologies of the Contemporary Past. London: Routledge.
English Heritage, 2007. Modern Times. Conservation Bulletin 56.
Graves-Brown, P. (ed.) 2000. Matter, Materiality and Modern Culture. London: Routledge.
Ladd, B. 1997. The Ghosts of Berlin: Confronting German History in the Urban Landscape. Chicago: University of Chicago Press.
Lucas, G. 2001. Critical Approaches to Fieldwork: Contemporary and Historical Archaeological Practice. London and New York: Routledge.
McAtackney, L., Palus, M. and Piccini, A. (eds), 2007. Contemporary and Historical Archaeology in Theory: Papers from the 2003 and 2004 CHAT conferences. BAR International Series 1677.
Penrose, S. (with contributors) 2007. Images of Change: An Archaeology of England's Contemporary Landscape. London: English Heritage.
Schofield, J. 2008. The Office: Heritage and Archaeology at Fortress House. British Archaeology 100, 58-64.
Weber, W. 2000. My Century? A review of Günter Grass's latest novel, Mein Jahrhundert (My Century). (Accessed 23 June 2000.)


Chapter 2

1230 hrs, 12 December 1901
Marconi's first transatlantic wireless transmission

Cassie Newland


Sigs. At 12.30, 1.10 and 2.20.

This brief diary entry for Thursday 12 December 1901 is how Signor Guglielmo Marconi chose to commemorate what is arguably the single most pivotal moment in telecommunications history: the first transatlantic wireless transmission. Pivotal in that it flew in the face of all accepted scientific knowledge: conventional wisdom held that radio waves – being electromagnetic radiation – should behave like light and would, therefore, be unable to travel beyond the horizon, let alone across the Atlantic. The signals – three dots, an 'S' in Morse code – were sent from Poldhu, an experimental radio station near The Lizard, Cornwall, designed expressly for the attempt. They were received at a temporary station at Signal Hill in Newfoundland, Canada, a distance of around 2100 miles. In the century which has followed, radio technologies have flourished beyond even Marconi's prescient imaginings. Television, radio, GPS and mobile phones have become hugely important, some would argue essential, artefacts in the construction of our modern globalised world. Recently voted the greatest patented invention of all time,1 radio has given us telescopes to explore the heavens above us and geophysics to explore the earth beneath our feet. Marconi's experimental transmission has come to be regarded by many (Hong 2005, 9) as the birth of modern information technology: the first pioneering foray into the unexplored universe of electromagnetic radiation.

Figure 2.1: George Kemp with the kite. (Photo from George Kemp's diary, held at the Marconi Archive, Oxford.)

Marconi's achievement was announced by The New York Times on 15 December 1901, three days after the first transmission, as 'the most wonderful scientific development in modern times'. Reaction to the results was mixed, however, as the signals were entirely unverified, having been witnessed only by Marconi and his assistant George Kemp (Figure 2.1). Despite being hailed by the general public as an astounding technological breakthrough (Hong 2005, 7), some prominent scholars (notably Thomas Edison and Oliver Lodge) were justifiably sceptical. From the press too, Marconi received mixed reactions. The Telegraph questioned publicly whether signals had been received in Newfoundland at all, suggesting that Marconi might have been misled by static interference, lightning storms or stray signals from one of his own nearby ship-to-shore stations. This argument continues to this day, rumbling on in public houses, Internet chat-rooms and anywhere else radio engineers, historians and enthusiasts congregate. This raises the question: how did an event which was – and continues to be – so hotly contested come to be accepted as one of the key events of the twentieth century?

To answer this question – and indeed to understand the event itself – we first need to appreciate the great variety of materials available for study. The remains of the station sites at Poldhu and Signal Hill have been under the protection of the National Trust (UK) and Parks Canada respectively for around 70 years. The Poldhu station buildings were largely demolished in 1935 and exist as a collection of concrete or tiled floors which imply little of their original function. Cable trenches are clearly visible as earthworks, however, and these, along with the mast bases and large concrete anchor stones, are sufficient to establish the function and chronology of the site. The building at Signal Hill was destroyed by fire in 1920, although, given the temporary nature of its role, it was unlikely in any case to have retained archaeological traces of the event.

1 Voted the greatest patented invention of all time by the New Scientist, 2 June 2007.

The Marconi archive at the Bodleian Library, Oxford holds a wealth of archival materials, including documents, photographs, sketches, engineering drawings, diaries and eyewitness accounts. Moreland (2001) has argued that a successful archaeological engagement with these kinds of documentary materials must treat them not as sources per se but critically, reflexively and as objects in their own right. This essay hopes to continue in this tradition, bringing an archaeological understanding of materials to more traditional historical accounts of the event. Alongside documentary artefacts there are a number of technical and mechanical components housed and displayed at the Museum of the History of Science, Oxford. A detailed reappraisal of the function, significance and limitations of the objects involved will prove crucial to the exploration of the event, as will an understanding of the objects’ place and function in the public imagination.

Most importantly for this essay, perhaps, is the work of sociologists and historians of technology (for example MacKenzie and Wajcman 1985; Bijker et al. 1989) which, since the 1980s, has highlighted the fact that behind the more obviously ‘archaeological’ objects lies an entanglement of things which archaeologists do not traditionally engage with. As diverse and heterogeneous as stock prices or an Italian’s penchant for a sea-view, these social, technical, or economic things play significant roles in the construction of the material world and are therefore suitable for archaeological engagement. This essay hopes to build on archaeology’s expertise in investigating and interpreting material culture to pick apart these webs of things, places and people to create new, fuller and unpacked understandings of events in the world.

A defining moment

The object, above all others, which can lay claim to being centre stage on 12 December 1901 is Marconi’s Morse inker. The Morse inker is a device which can be used to record the arrival of wireless signals. When a signal is detected, the Morse inker automatically marks the arrival with a dot or dash on a paper tape. The Morse inker doesn’t just mark paper, however; it is also an object which performs a social role: it tells the truth. Whereas people are mistrusted, regarded as unreliable, fantasists or liars, the inker is seen as independent and incorruptible. And so, the Morse inker that Marconi brought with him should have played the role of independent observer. The marks it produced should have been proof that signals were received. Marconi did not, however, use the Morse inker. He couldn’t. The inker needed a minimum electrical input to function and Marconi was unable to maintain a sufficiently strong and consistent signal from his aerial to provide this. The reason: the 152 m (500 ft) bare copper aerial wire was held aloft with the aid of a kite. The reason it was held aloft with a kite was because the balloon had blown away the previous day. Balloons and kites have never been the methods of choice for supporting aerial wires, not even in the early days of wireless. So why would Marconi, a pioneer in the field, be using them for this, his first – and vital – transatlantic transmission?

The transatlantic experiment was designed initially to be conducted between two purpose-built stations either side of the Atlantic. In October 1900 a site was chosen on a headland overlooking Poldhu Cove near the Lizard, Cornwall. Work began on the American side in March 1901 on a site on Cape Cod, Massachusetts. The stations were of unprecedented size and power, essentially scaled-up versions of Marconi’s successful laboratory equipment with higher aerials and more powerful transmitters. Marconi personally designed the aerial system (Figure 2.2), which consisted of 400 aerial wires held aloft by a circle of 20 masts, each 61 m (200 ft) high. The wires terminated in a station building set in the centre of the circle, creating a wire ‘cone’. The term ‘masts’ was not an incidental one. Aerial designers drew on the only similarly engineered structure existing at the time: traditional wooden ships’ masts. The transatlantic masts were constructed in four wooden sections referred to as lower-mast, top-mast, top-gallant and royal, held aloft by rope stays. The masts, though electrically innovative, were not structurally particularly robust.

Fig 2.2 Ring of masts at Poldhu. (Photo from George Kemp's diary, held at the Marconi Archive, Oxford.)

Both the Poldhu and Cape Cod stations were built on rugged stretches of coastline at the mercy of the unpredictable Atlantic weather. It has been assumed that this was in an effort to minimize transmission distances (though when dealing with distances of several thousand miles it is difficult to understand how an extra half-mile might have helped). It could perhaps stem from an unspoken assumption that wireless stations somehow should be built on the coast, a hang-over from Marconi’s earlier ship-to-shore work for the Navy. In any event, on 17 September 1901, the exposed Poldhu headland was subject to a vicious gale which toppled the entire delicate aerial array (Figure 2.3). Within ten days, Marconi’s assistant Kemp managed to erect a more robust 48 m (157 ft) high temporary array from the wreckage of the first (Figure 2.4). Transmission ranges from this jury-rigged aerial system appeared promising and it was therefore decided to go ahead and attempt the experimental transatlantic transmission between Cape Cod and Poldhu as planned. The Poldhu signal might not have been everything Marconi had envisaged but the fully functioning aerial array at Cape Cod would at least provide the best chance of transmission in the opposite direction. On the eve of the planned sailing to America – 26 November – a gale wrecked the Cape Cod array. Marconi’s experiment was again thwarted by the weather.

Fig 2.3 Damaged masts at Poldhu. (Photo from George Kemp's diary, held at the Marconi Archive, Oxford.)

Fig 2.4 Temporary aerial at Poldhu. (Photo from George Kemp's diary, held at the Marconi Archive, Oxford.)

The experiment was always going to be a bit of a gamble. No one had ever attempted to build a giant aerial structure before. Marconi’s lack of structural engineering skills had left him out of time, out of money and without either of his planned transatlantic stations. It was, however, imperative that the trial went ahead. £50,000 had been invested by the Board of the Marconi Company and Marconi was under great pressure to provide some kind of justification for the continued funding of the project. It was therefore decided to transmit from the temporary aerial at Poldhu. In order to maximize the chances of receiving signals in America, the receiving site was moved from Cape Cod to the closest point of landfall: St John’s, Newfoundland. There was, of course, no wireless station in Newfoundland, so Marconi, Kemp and his other assistant Paget hurriedly assembled a makeshift receiving kit of kites, balloons, great rolls of antenna wire, earth-plates, coherers, ear-pieces and the Morse inker. Then, under great secrecy, the three men loaded their odd cargo and boarded the ship.

On arrival in St. John’s on 6 December, the group was shown by the Governor of St John’s to a bleak ex-military fever hospital on an aptly named local promontory, Signal Hill. Newfoundland may not have been Marconi’s first choice but it certainly lent his endeavour an air of historical inevitability. Signal Hill is the location of the tower commemorating John Cabot’s discovery of Newfoundland in 1497 and a stone’s throw from Heart’s Content, the landing place for the first transatlantic cable in 1858. The equipment was readied in one of the abandoned hospital rooms, wires were run outside and the earth-plates were buried in preparation for the first transmissions from Poldhu. Marconi cabled that transmission should start on 11 December and that three Morse dots – the letter S – should be sent continuously between 3 and 7 pm every day (UK time) until further notice.2
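The inker’s dependence on a minimum electrical input, described earlier, can be caricatured in a few lines of code. The thresholds and units below are invented purely for illustration; the point is only that a received signal can sit above a listener’s threshold of hearing yet below the level at which the inker will mark the tape.

```python
# Toy model of the recording problem: the Morse inker needs a minimum input
# to mark paper, while a headphone remains audible at much weaker levels.
# All numbers here are invented for the example, not Marconi's figures.

INKER_THRESHOLD = 1.0   # arbitrary units: weakest signal the inker can mark
EAR_THRESHOLD = 0.05    # arbitrary units: faintest signal a listener can hear

def detections(signal_levels):
    """Classify each received signal level by which device registers it."""
    results = []
    for level in signal_levels:
        if level >= INKER_THRESHOLD:
            results.append("inked")        # recorded on paper: verifiable
        elif level >= EAR_THRESHOLD:
            results.append("heard only")   # audible but leaves no trace
        else:
            results.append("nothing")
    return results

# A fluctuating, kite-borne aerial delivers weak, unstable signals:
print(detections([0.08, 0.02, 0.6, 1.3]))
```

On these made-up numbers, three of the four signals never reach the tape: two are ‘heard only’ and one is lost entirely, which is precisely the evidential gap the chapter describes.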


2 It should be noted that the Poldhu transmitter could not, in fact, transmit dashes. This was due to the low spark rate of Marconi’s transatlantic transmitter, estimated at no more than 10 per second. The sound has been recreated by John Belrose (1994) and is available to listen to online at



Fig 2.5 Raising the kite at Signal Hill. (Photo from George Kemp's diary, held at the Marconi Archive, Oxford.)

On the day of the experiment the weather at Signal Hill was taking a turn for the worse. The first attempt to receive signals was made using a balloon to hold approximately 500 m of copper antenna wire aloft. Marconi was using what he called a syntonic receiver - a new and sensitive receiver which could be tuned to the specific capacity of the aerial wire. At the time there was no way of amplifying weak signals so the tuned receiver was by far his best shot at receiving the feeble transmission from Poldhu. It could also be used in conjunction with the Morse inker to record the event. As the force of the storm grew, the balloon reared and bobbed, changing direction and elevation at random. This made the capacity of the aerial fluctuate wildly, which in turn made it impossible to keep the syntonic receiver in tune and rendered it useless. The weather ensured that Marconi had to abandon the syntonic receiver and employ instead an older, untuned receiver attached directly to a headphone. Any results received in this way would therefore be completely unverifiable.
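The detuning problem can be sketched numerically. A tuned receiving circuit resonates at f0 = 1/(2π√(LC)), so a swinging aerial that changes the effective capacitance C drags the resonant frequency with it. The component values below are invented for illustration only and are not Marconi’s actual figures.

```python
import math

# Illustrative only: how a swinging aerial shifts the resonant frequency of a
# tuned (resonant LC) receiving circuit. Component values are assumptions
# chosen for the example, not measurements of Marconi's syntonic receiver.

def resonant_khz(inductance_h, capacitance_f):
    """Resonant frequency in kHz of an LC circuit: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f)) / 1000

L = 200e-6  # henries: assumed fixed coil
# A bucking balloon swings the aerial's effective capacitance widely:
for c in (200e-12, 400e-12, 600e-12):
    print(f"C = {c * 1e12:.0f} pF -> f0 = {resonant_khz(L, c):.0f} kHz")
```

With these assumed values a ±50% swing in capacitance moves the resonance across several hundred kilohertz, which is why a receiver tuned to one fixed frequency becomes useless the moment the aerial starts to move.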

The next day, 12 December, the storm was worse. A Baden-Powell kite was first sent up carrying twin 155 m (510 ft) aerial wires, which blew away in a matter of minutes. A second kite, carrying a single 152 m (500 ft) wire, was launched (Figure 2.5). The wind ripped at the kite, which plunged and strained but remained in the air. Inside the makeshift receiving room in the hospital Marconi suddenly handed the telephone earpiece to his assistant Kemp and asked the now famous question, ‘Can you hear anything, Mr. Kemp?’ Mr. Kemp replied that he could: a repeated series of three faint clicks, fading in and out of the static. The time was 12.20 pm. Marconi and Kemp recorded more signals at 1.10 and again at 2.20. The Morse inker, however, recorded nothing.

Defining a moment

Events are places where the material and the social are caught up together. As Appadurai (1986, 3) suggested, objects have social lives. The Morse inker is an object bound up with explicit social concepts, including ideas of scientific proof, integrity and trust. The physical properties of the inker do certain things; for example, the resistivity of the inker’s circuits determines whether or not the event could reliably be said to have happened. Latour (2005) would argue that materials provide the physical stability around which ‘the social’ can be constructed. Without the stability provided by materials – such as a dot on a piece of paper – understandings of the event were reduced to a far shakier, social scaffolding: in this case, Marconi’s professional reputation. The Museum of the History of Science does not list Marconi’s inker among its collection of artefacts from that first transatlantic transmission. The absence of this central object from our current story of the event is evidence that defining moments do not arrive fully formed but must undergo subsequent and continual processes of creation and negotiation. It is to a discussion of this narrative-forming afterlife that we now turn.

Defining the moment of the first transatlantic wireless transmission is not perhaps as straightforward as it might appear with hindsight. Neither of the diaries of Marconi and Kemp records the event with a view to posterity. The press release, published several days after the transmission, was markedly low key, giving the time and location and stating only that very faint signals had been received. Among the Marconi supporters, including the well-known engineer W.S. Franklin, the implications were, however, clear: that the events of 12 December 1901 would go down in history.

Thursday, December 12, 1901, may prove, therefore, to be a date to be remembered in the history of wireless telegraphy. Within this apparently feeble result – three very faint clicks repeated at intervals of five minutes – there is to be seen the germ of ocean wireless telegraphy, and, perhaps, telephony. (Franklin 1902, 112)

Many, as we have seen, remained to be convinced (Hong 2005, 7). Marconi set out to provide his skeptics with the incontrovertible proof they desired. He equipped a ship, the Philadelphia, with a mast-extension which would suspend four wire aerials 150 m (494 ft) above the deck. He set up his receiving apparatus in the ship’s wireless room. For this second set of trials, Marconi used only the tuned, syntonic receiver, synchronized to the capacity of the new – and fixed – aerial. More importantly, to this receiving apparatus he attached the Morse inker. On 22 February 1902 the Philadelphia steamed out of port. Signals were transmitted from the temporary aerial at Poldhu on the same 366 m wavelength but with a higher spark rate that allowed it to transmit real messages in place of dots. Onboard, received messages were not only logged through the Morse inker but were also witnessed by the Captain. Whole sentences were transmitted at night to a distance of 2415 km (1550 miles) and ‘S’s to a distance of 3380 km (2100 miles), the same distance as Marconi claimed for the first transatlantic transmission. The results were convincing and for a time 25 February 1902 looked set to usurp 12 December 1901 as the defining moment in the history of radio. For example:

The month of February and particularly the 23rd and 25th of February, 1902, will undoubtedly become historically recorded as the beginning of what may be known as the Marconian era. It was on the first of these dates that a message was transmitted more than a thousand miles… and it was at the second of these dates that distinct signals were repeatedly transmitted over a distance exceeding two thousand miles… and permanently recorded on the tape of the receiving instrument. (Thurston 1902, 474)

There was a protracted period during which publications refer to either date as the first transmission: for example, over a decade later, Love (1915) and Pupin (1915) are still giving the date as February 1902. The legitimacy that the second set of transmissions lent to the first was, however, sufficient to ensure that in time 12 December 1901 won out. By 1931, even Oliver Lodge – one of Marconi’s most vocal detractors – was giving the December 1901 date (Lodge 1931, 519).

The greatest threat to Marconi’s legend was in fact his subsequent work. Marconi began construction on the permanent Canadian station at Glace Bay in 1902. In May that year the Poldhu aerial array was rebuilt, the power substantially increased and the wavelength augmented to 1000 m. The twin giant stations were operational by November. Transmissions were not just disappointing, however; they were non-existent. After six weeks of tuning and tweaking, the first messages were transmitted on 14 December. Transatlantic working remained highly unreliable. The fact that these huge mega-powered stations were unable to recreate the successes of the earlier kite-driven technology led people to question whether the first transmissions had taken place at all. The transatlantic legend was in jeopardy.

Rescue came from well-respected quarters. Oliver Heaviside had mused in 1901 on the existence of layers of ‘attenuated gasses’ in the atmosphere which could theoretically act as electromagnetically conductive surfaces. It was suggested by physicist Arthur Kennelly that the signals Signor Marconi claimed to have heard in Newfoundland the previous year might somehow have ‘bounced off’ one of Heaviside’s proposed gas layers. An invisible and, at the time, entirely theoretical artefact, the Kennelly-Heaviside layer provided a most popular and enduring explanation. So popular, indeed, that it cocooned the evolving transatlantic legend until 1924, when two important things happened: Edward Appleton put forward his model of the ionosphere, which refined and replaced the concept of the Kennelly-Heaviside layer; and Marconi began to build the first commercial short wave (SW) stations. These events were not unconnected. During the intervening decades there had been unprecedented probing into the landscape of radio and the way in which this landscape affected the materials of radio. Appleton’s ionosphere is a layer in the atmosphere above 85 km which can either reflect or absorb radio waves. During the daytime the particles in the ionosphere become ionized, or charged, causing them to absorb radio waves. At night, the particles lose this charge and will instead reflect radio waves. It is this reflective property that can be exploited – just as Kennelly suggested – to bounce radio signals back down to earth and send signals over vast distances. Many factors, aside from day and night, contribute to how attenuated the radio signal becomes. Space weather, for example sun spots, solar flares and solar variation, can charge huge areas of the ionosphere (a well-known example of this is the aurora borealis). Another factor – and one with overwhelming importance to the transatlantic legend – is the wavelength of the radio signal.
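The day-versus-night behaviour of the ionosphere described in this section can be condensed into a tiny decision function. This is a caricature of the chapter’s simplified account (medium wave absorbed by day, reflected at night; short wave reflected well), not a propagation model; the band labels and rules are assumptions made for illustration.

```python
# A minimal encoding of the chapter's simplified account of ionospheric
# skywave propagation. This is a caricature for illustration, not physics:
# real propagation depends on frequency, layer height, solar activity, etc.

def skywave_possible(band, daylight):
    """Does the chapter's account allow long-distance skywave reception?

    band: 'MW' (medium wave) or 'HF' (high frequency / short wave).
    daylight: True if the whole signal path is in daylight.
    """
    if band == "MW":
        return not daylight   # absorbed by the ionized daytime layer
    if band == "HF":
        return True           # short waves reflected remarkably well
    return False              # other bands left unmodelled here

# Marconi's 12 December 1901 attempt: medium wave, early afternoon (daylight).
print(skywave_possible("MW", daylight=True))
```

On this toy rule, a daytime medium-wave path across the Atlantic fails, which is exactly the difficulty the later re-examinations of the 1901 transmission seized upon.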

In 1923, all wavelengths in commercial use were long or very long, up to 14 000 m. In the pursuit of ever greater distances, Marconi and his contemporaries followed the maxim ‘more power and longer wavelengths equals greater distances’.

Longer wavelengths meant correspondingly taller aerials. In 1923, the state-of-the-art Marconi super-station in Australia broadcast on ultra long wavelength, from 20 steel masts, each 244 m (800 ft) high, and 1000 kW power output. A year later, Marconi sent transatlantic transmissions from Poldhu to New York using a short wave of 37 m and just 17 kW of power. Short waves, it turns out, are reflected remarkably well by the ionosphere. Understanding the physical reaction between radio waves and the ionosphere also had other implications. When Marconi carried out his first transatlantic experiment in 1901 there was no method in existence which could accurately measure the frequency and therefore the wavelength.3 Fleming in 1903 quotes the wavelength as 1000 ft (304 m), giving a frequency of around 820 kHz. In a lecture in 1908, Marconi himself gives the wavelength as 1200 ft (366 m), a frequency of 850 kHz (Bondyopadhyay 1993). Even taking account of this disparity, one thing became clear: Marconi’s transmission was made in the medium wave (MW) band somewhere between 820 and 850 kHz, the frequencies subject to the maximum amount of ionospheric absorption.

Moreover, Marconi’s first transatlantic transmission took place in the early afternoon, when the whole of the signal path across the Atlantic was in daylight. He could not have picked a worse frequency or time of day to undertake his experiments. Worse still, the comforting confirmation provided by the experiments on the Philadelphia – undertaken with superior tuned receivers and a fixed aerial – matched the Newfoundland distances only at night. Daytime reception had not been achieved at anything like the magic 2000 mile mark. In fact, radio physicist John Ratcliffe later calculated that Marconi could only have detected 850 kHz signals in Newfoundland at 12.20 on 12 December 1901 if his receiver had been between 10 and 100 times more sensitive than the receiver later used on the Philadelphia (Ratcliffe 1974). An examination of the artefact in question finds no evidence to suggest it was. As understandings of the wider telecommunications landscape grew, the event that was held to be the first foray into that landscape looked less and less likely to have happened. A re-examination of the materials from the transatlantic broadcast had opened the event up to criticisms of unimagined proportions. One artefact from this transmission, it seemed, could be called upon to save it: the spark transmitter.

When Marconi undertook his first transatlantic transmission he used a spark transmitter. The signal from Poldhu was generated by firing a powerful spark across a 5 cm gap. The spark generated a disturbance in the background electromagnetic radiation which was then broadcast from the thin aerial wire.4 The spark transmitter can be imagined as a rather crude instrument, where the spark produced created a disturbance across a broad range of frequencies. Although it now seems clear that the Poldhu aerial was radiating in the medium wave band, it was proposed that the broadband nature of the spark meant that it was also radiating at other unintended, or ‘parasitic’, wavelengths. If the Poldhu transmitter was indeed also radiating at an unknown frequency in the high frequency, short wave band then it was entirely possible that signals were bouncing off the ionosphere and being received by the untuned receiver in Newfoundland. This would also explain why signals could not be received when Marconi was using his syntonic receiver tuned only to receive medium wave frequencies. When the idea of a parasitic signal in the high frequency range was suggested in 1924, short-wave, high frequency working was the hot new idea and the legend was lent promising (and, helpfully at the time, untestable) intellectual rigour.

Redefining a moment?

As archaeologists we put a lot of stock in materials. It might logically be assumed that the revisiting of, and interaction with, the material record which took place in the 1920s would feed into a renegotiation of the transatlantic legend itself. This, however, does not appear to be the case. The intellectual debate going on in science and engineering seemed to bypass popular versions of the event almost entirely. Accounts from the 1930s do not mention the doubts cast on MW signals bouncing off the ionosphere. Neither do they discuss parasitic HF transmissions as an alternative. The stories, if anything, appear to benefit from a newfound legitimacy, evolving a far more exciting tone than the first perfunctory diary entries. The example below was recounted by Marconi’s other assistant Percy Paget (who, interestingly, was not actually present for the reception of the signals, being off sick on the day in question). Paget conjures up the dramatic Atlantic weather and the heightened expectations of all involved:

The wind howled around the building where in a small dark room furnished with a table, one chair and some packing cases, Mr. Kemp sat at the simple receiving desk while Mr. Marconi drank a cup of cocoa before taking his turn at listening for the signals which were being transmitted from Poldhu – at least we hoped so. (Percy Paget recorded for the BBC, 12 June 1935)

Instead of modification to accommodate the advances in radio engineering, the original story appears to have rolled on regardless, acquiring certain themes in its constant retelling. The narrative is one of exploration. The lone pioneer strikes out with certain faith for unknown shores, finally triumphing over the forces of nature. With a spin reminiscent of the Columbus and the Flat Earth story, Marconi is drawn as the single-minded visionary who alone knew the truth about the world. Like the Flat Earth story, this too proved to be a myth with a certain momentum.

3 Frequency has an inverse relationship with wavelength. Higher frequencies therefore have shorter wavelengths.

4 You can recreate the actions of a spark transmitter by playing your radio in the kitchen and pressing the piezoelectric (clicky) button which lights your gas stove. The sound broadcast from the radio is very similar to the recreation by Belrose (1994).
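The inverse relationship between frequency and wavelength noted in footnote 3 can be checked with a generic conversion, f = c/λ. The short function below is a plain illustration of that relationship and is not tied to Marconi’s apparatus.

```python
# f = c / wavelength: the inverse relationship noted in footnote 3.
C = 299_792_458  # speed of light in m/s

def frequency_khz(wavelength_m):
    """Frequency in kHz of a radio wave with the given wavelength in metres."""
    return C / wavelength_m / 1000

# The longer the wave, the lower the frequency:
print(round(frequency_khz(14_000)))  # 1923 commercial long wave: ~21 kHz
print(round(frequency_khz(37)))      # 1924 short wave: roughly 8.1 MHz
```

Applied to the figures in the text, the 14 000 m long waves of 1923 sit around 21 kHz while the 37 m short wave of 1924 sits around 8.1 MHz, three orders of magnitude apart.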

The popular version of events propagated by historians, educators and biographers remained unchanged from the 1935 account for several decades. This is amply demonstrated by a passage in William Baker’s 1970 book, History of the Marconi Company:

On the 12th the gale was still vicious… Another kite was flown… Frantically it reared and plunged, threatening every instant to break loose like its predecessor. At the receiver Marconi sat listening intently as the precious minutes of the scheduled transmission slipped by. Suddenly, at 12.20 p.m. Newfoundland time, he handed the earpiece to George Kemp with a quiet ‘Can you hear anything, Mr. Kemp?’ Kemp took the headphone. Through the crash of static he could hear, faintly, the unmistakable rhythm of three clicks followed by a pause… until, all too soon, the signals were lost once more in static. (Baker 1970, 68-69)

Baker’s version shares all the essential themes of the earlier account. The transmission is still medium wave, the wind still howls and Marconi is still envisioned with the literary equivalent of the thousand-yard stare. Importantly, no mention is ever made of that tricky business with the ionosphere. Outside of lay circles the parasitic HF transmission theory remains the explanation of choice for radio physicists and engineers. Ratcliffe, one of the world’s most respected and sophisticated radio physicists, still cites HF emissions as the most likely explanation of Marconi’s transmission when writing in 1974. Indeed, the HF emission theory still carries a great deal of weight at the time of writing. It remains standard curriculum at engineering colleges and to disagree with the received explanation is regarded as tantamount to ‘technological heresy’ (Kimberlin 2003, 4). Heretics are, however, out there. At the beginning of the 1990s, interest in Marconi’s first transmission resurfaced, mostly, it must be said, in response to a proposed Institute of Electrical and Electronics Engineers (IEEE) conference entitled 100 Years of Radio. From 1993 engineering writers and historians of technology began to re-examine the 1901 transmission in the light of current knowledge, publishing several papers either for (for example, MacKeand and Cross 1995) or against the HF theory (for example, Belrose 1995).

To mark the 100th anniversary of Marconi’s first transatlantic transmission, Belrose published another paper in which he examined hitherto unexplored aspects of the Poldhu antenna and transmitter design (Belrose 2001). The aim was to establish whether it would indeed have radiated at the proposed parasitic high frequencies. Belrose, though a professor of engineering, implemented an approach which would not be unfamiliar to archaeologists. He examined the curated museum artefacts, documentary and photographic evidence5 and created a 1:75 scale experimental reconstruction of the aerial and transmitter. Through the experimental results obtained from his maquette and theoretical projections based on the science of radio wave generation and propagation, Belrose was able to confirm that although the spark transmitter did indeed broadcast HF signals, the fan antenna at Poldhu ‘radiated efficiently only on the fundamental oscillation frequency of the tuned antenna system’ (Belrose 2001, 25). In other words, the parasitic HF signals were being generated by the spark transmitter at Poldhu but would not have been radiated with anywhere near sufficient power to reach Signal Hill. According to Belrose, Marconi’s first transatlantic broadcast never happened. The big question here is: does it matter?

5 Interestingly, Belrose discovered that every single existing photo of the Poldhu antenna array had been doctored. In one, 32 of the 54 aerial wires in the fan had been whited-out. In another, ceramic insulators at the junction between the aerial wires and the triatic had been added with a pen!

Discussion

The stock answer from historical archaeology is yes, of course it matters. To question, to deconstruct, to reframe and to reassemble is the purpose behind a great deal of archaeology. When studying the recent past, the archaeologist examines the people, places and things in order to defamiliarize the familiar and to confront people with an alternative (and – many would argue – more valuable) version of what really happened (for example West 1999, 1). Alternative versions allow us to examine our unarticulated assumptions and open the mind to new ideas. These versions also allow us to give form to current ideas and to voice the things which need to be said in our disciplines at that moment in time. Defamiliarizing is a popular pastime and the trend is visible in many other disciplines. Alternative truths are, however, only incidentally useful, and then only to a point. This chapter fits easily into the alternative truths tradition and, like many others, it does not really get anywhere near the heart of the matter: in this case, why is Marconi’s first transmission a defining moment? Why is the legend the shape it is and why is it so enduring?

The legend can be seen as a social discourse. Like academic discourses, social discourses are created for reasons; they fulfill a function: they say what needs to be said at the time. The transatlantic legend can be seen as having more to do with meeting a social need than with constantly updating past events for accuracy. That the popular version of Marconi’s transatlantic legend is so pervasive and widely held – despite the material evidence to the contrary and regardless of the efforts made in academia to derail it – says something about that need. In the endless reiterations in text books and on web pages, the shrine-like museum displays and the anniversary celebrations, Signal Hill remains stormy and Marconi still hears the clicks because it is important that he does. The reasons as to why the legend is important and which social needs are being addressed are not necessarily accessible directly from a rational analysis of the material record.

This chapter argues that the seeming irrelevance of the material evidence in this case stems from the nature of the defining moment itself. The moment defined by the legend seems not to be the broadcast itself but rather the moment when the sphere of human existence was abruptly and publicly opened up to a vast and unknowable beyond. Until that moment, exploration had meant traversing harsh terrain: physical effort, dog-sleds and so on. Marconi’s transmission meant the collapsing of the familiar relationship between time and space: the death of physical geography. As Franklin presciently appreciated at the time:

It is not in the interlinking of continents divided by an ocean, but rather in the overspreading of the ocean itself with telegraphic facilities that the power and fruitfulness of this latest achievement of Mr. Marconi is to be perceived. (Franklin 1902, 112)

Whether or not the actual transmission happened has no bearing upon the unveiling of this new world. The possibilities and uncertainties laid bare could not be somehow un-revealed. The moment, therefore, abides and the legend reflects this. For the last 100 years what has needed to be said has not been about the realities of that day but rather about exploration and the need for vision, open-mindedness and – appropriately – blue-sky thinking. The stories created have needed to be inspirational and aspirational. Looking back, Marconi’s transmission stood at the threshold of a world of profound discoveries which were not immediately – or are even currently – understandable to lay audiences, for example the discovery of the electron and the theory of the Big Bang.6 Marconi’s transmission also presaged some distinctly disconcerting ideas, such as the pervasive and perverse character of chaotic patterning7 and the uncertainty principle,8 ideas which were hard for even the most scientifically literate minds to grasp. The legend lends a comforting human dimension and a notion of control to an otherwise illimitable and daunting new universe. It is, after all, unsettling to turn on the radio and discover that God does indeed play dice.

References

Appadurai, A. 1986. Introduction: commodities and the politics of value. In A. Appadurai (ed.), The Social Life of Things: Commodities in Cultural Perspective, 3-63. Cambridge: Cambridge University Press.

Belrose, J.S. 1994. Sounds of a Spark Transmitter. Multimedia article available from URSI Radioscientist.

Belrose, J.S. 1995. Fessenden and Marconi: their differing technologies and transatlantic experiments during the first decade of this century. IEEE International Conference on 100 Years of Radio: 32-43.

Belrose, J.S. 2001. A Radioscientist’s Reaction to Marconi’s First Transatlantic Wireless Experiment – Revisited. IEEE Antennas and Propagation Society International Symposium 1: 22-25.

Bijker, W., Hughes, T.P. and Pinch, T. 1989. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, Massachusetts: MIT Press.

Bondyopadhyay, P.B. 1993. Investigations on the Correct Wavelength of Transmission of Marconi’s December 1901 Transatlantic Wireless Signal. IEEE Antennas and Propagation Society, International Symposium Digest 1: 72-75.

Franklin, W.S. 1902. Wireless Telegraphy. Science, New Series 15, 368: 112-113.

Hong, S. 2005. Marconi’s error: the first transatlantic wireless telegraphy in 1901. Social Research 72, 1: 107-124.

Kimberlin, D.E. 2003. The world’s most heralded Radio failure. Radio Guide 11, 10: 4-6.

Latour, B. 2005. Reassembling the Social: An Introduction to Actor-Network Theory. New York: Oxford University Press.

Lodge, O. 1931. A Retrospect of Wireless Communication. The Scientific Monthly 33, 6: 512-521.

Love, A.E.H. 1915. On the Diurnal Variation of the Electric Waves Occurring in Nature, and on the Propagation of Electric Waves Round the Bend of the Earth. Philosophical Transactions of the Royal Society of London 215: 105-131.

MacKeand, J.C.B. and Cross, M.A. 1995. Wide-band high frequency signals from Poldhu? The propagating spectrum and terminal equipment revisited. International Conference on 100 Years of Radio, 5-7 September 1995 (IEE Conference Publication): 26-31.
That the popular narrative of the event continues to cast Marconi as the recognizable ‘pioneer’ figure in a familiar tale of exploration should not surprise us. As a proverb, the story lends a 6

Radio telescopes tuned to receive radiation in the microwave band allowed physicists to look back towards the very start of the universe. 7 The ‘Cantor Dust’ problem, which set the seed of chaos theory, was originally a model of interference in radio transmissions. 8 The ‘is it a wave or is it a particle’ question.


DEFINING MOMENTS MacKenzie, D. and Wacjman, J. 1985. The Social Shaping of Technology: How the refrigerator got its hum. Milton Keynes and Philadelphia: Open University Press. Moreland, J. 2001. Archaeology and text. London: Duckworth. Pupin, M.I. 1915. The Aerial Transmission Problems. Science, New Series 42, 1093: 809-815. Ratcliffe, J. A. 1974. Scientists' Reactions to Marconi's Transatlantic Radio Experiment. Proceedings IEEE 121, 9: 1033-1038. West, S. 1999. Introduction. In S. Tarlow and S. West (eds), The Familiar Past: Archaeologies of Late-Historical Britain Past, 1-16. London: Routledge. Thurston, R.H. 1902. Engineering notes. Science, New Series 15, 377: 473-475.


Chapter 3

1140 hrs, 14 April 1912
The case of the RMS TITANIC

David Miles

ACTION!

Sunday 14 April 1912, 11.00 pm.
'Say, old man, we are stopped and surrounded by ice MWL' (wireless message from the Californian)
'Shut up. I am busy. I am working Cape Race MCE' (reply from the Titanic)

Sunday 14 April 1912, 11.40 pm. The Royal Mail Steamer Titanic is on the fifth day of its maiden voyage across the Atlantic from Southampton to New York. A floating town lit like Piccadilly Circus, the world's largest mobile artefact, the Titanic steams sleekly westwards over the benign immensity at about 20.5 knots, until lookout Frederick Fleet sights an iceberg dead ahead. In spite of taking avoiding action the Titanic strikes the iceberg below the water-line along the starboard side. The ship's steel plates rupture, popping open along about 200 feet and allowing water to pour into six of the sixteen so-called 'water-tight' compartments. As the water rises in these compartments it spills over into the next. The Titanic will inevitably founder.

Monday 15 April 1912, 12.10 am. The ship's Marconi officers begin to send out a wireless distress call. Fifteen minutes later the Cunard liner Carpathia, 58 miles to the south-east, picks up the message and alters course to come to the aid of the Titanic.

12.30 am. Captain Edward Smith gives the order to begin placing women and children in the lifeboats. There are 2,201 people on board. Most of the passengers are migrants from many countries, bound for a new life in America. In first class there are American multi-millionaires and other celebrities of the day. The lifeboats have a total capacity of only 1,178, which is compliant with the out-of-date regulations, which have not responded to the rapidly increasing size of ocean liners.

2.05 am. The last of the twenty lifeboats leaves the Titanic. In the confusion the boats are not filled to capacity. The stern of the Titanic rises in the air. Most of the fifteen hundred people left on board clamber upwards, desperate to distance themselves from the approaching water. There is a tremendous noise as the ship's internal fittings crash forward into the submerged bow. The ship's lights go out.

2.20 am. The now almost vertical Titanic slides slowly beneath the icy, flat-calm waters of the Atlantic.

4.10 am. The Carpathia arrives and begins to pick up the survivors. According to British Board of Trade figures, 703 people survive and 1,503 are lost.

Friday 19 April, 9.35 am. The survivors arrive in New York to a journalistic feeding frenzy.

Incidents: 96 years later

Sunday 3 February 2008, 3.00 pm. Greenwich, UK. A group of children, about eight years old, are playing on a climbing frame shaped like a ship outside the National Maritime Museum. One of them cries out, 'It's the Titanic. It's sinking. We're all sliding down!'

Monday 11 February 2008, 8.30 am. London. The author walks across Smith Square in Westminster. Near the corner with Lord North Street he sees a green plaque mounted on the wall of a house. It reads:

W.T. Stead
1849 – 1912
Journalist
And reformer
Of Great Renown
Lived here
1904 – 1912

There is no reference to why Stead died in 1912. In fact, he was one of the best-known and most controversial journalists of his day, a social reformer and a spiritualist. Stead was a passenger on the Titanic, on his way to New York to address a convention of muscular Christians, the Men and Religion Forward Movement. He did not survive. At least not in the flesh: his communication with a medium from beyond his watery grave was published as The Blue Island: Experiences of a New Arrival beyond the Veil, with a preface by Sir Arthur Conan Doyle. It is not strong on details of the disaster.

Sunday 24 February 2008, 10.00 am. London. The author opens the Observer newspaper. On page 34 there is a beautifully drawn cartoon by Riddell. It shows the submerged wreck of a ship named 'Special Relationship' which has foundered on a huge iceberg labelled 'Extraordinary Rendition'. This refers to the CIA landing prisoners on British territory without authorisation, and the impact of the affair on US–British relations. In new circumstances readers automatically understand the Titanic iconography: since 1912 many ships of state have collided with metaphorical icebergs.

Wednesday 27 February 2008, 10.30 am. London. In Somerset House, the Director's outer office.
The author: 'I must get on, I have an article to write about the Titanic.'
Ros: 'Oh, Maria here has a Titanic connection.'
Maria: 'Yes, my great-great-uncle came from Southampton and worked in the engine room of the Titanic. He managed to get out on deck and found a life-jacket. He was saved. His name was Walter Hurst.'
At the next desk Jelena says, 'My great-great-uncle was on the Carpathia and helped pick up the survivors. He was Croatian – well, from the Austro-Hungarian Empire in those days. Maria and I only recently realised that we both had a Titanic connection.'

Later that morning I check the on-line Titanic biographies. Maria's great-great-uncle Walter was lucky. Aged 27 he was one of the engineering crew, in fact a fireman, who managed to scramble onto Collapsible Lifeboat B after it had washed off the Titanic as the bridge went under. Although the lifeboat floated upside down in the water, about thirty people dragged themselves onto it and perched there precariously until about 6.30 am, when they were rescued. Other survivors on the Collapsible were Second Officer Charles Lightoller and second wireless operator Harold Bride. Both had starring roles in the Titanic drama. And great-great-uncle Walter himself would also be given a significant part. In Walter Lord's book A Night to Remember (1978 edition) Walter Hurst, said to be a greaser, is an actor who makes several notable appearances: when his father-in-law tosses a fragment of the iceberg into his bed; when he observes No. 1 Boat launched with only 12 occupants and comments 'If they are sending the boats away, they might just as well put some people in them'; and finally when he extends an oar to Captain Smith who, in the icy water, has been selflessly exhorting the crew in the lifeboats, only to find the noble seadog has finally given up the ghost.

These personal intersections with the Titanic all occurred in the couple of weeks before I sat down to write this chapter. They simply reflect the prevalence of the great ship in our culture, our built environment and in many people's minds ninety-six years after the event. Yet in strictly logical or objective terms this disaster was not an event of any great historical significance. As Steven Biel wrote, 'In my opinion the disaster changed nothing except shipping regulations' (Biel 1996: 7). Even at the time a hard-headed Wall Street report (printed in the San Francisco Examiner, 17 April 1912) declared '... as a market influence the Titanic may be dismissed.' And within two years the world was embroiled in the war to end all wars.

Making myths

Yet even as the ship slipped beneath the icy Atlantic its story began to be 'embarnacled with metaphor and myth' (Jack 1999: 36). Historians cannot dictate or even predict which events and people will be resurrected by myth-makers, or be bathed in the limelight of popular culture. King Arthur, Robin Hood, Jesse James, General Custer at the Little Big Horn, Princess Diana: none is particularly important in the great scheme of history. But all are nevertheless significant because they reflect back constantly shifting images of the societies that choose to manipulate their stories. Only the more extreme Titanic buffs would deny Richard Howells's statement that 'The sinking of the Titanic is an event whose mythic significance eclipses its historical importance' (Howells 1999: 1). And French historians such as Robert Darnton (1984) have emphasised the importance of mentalités, or attitudes, which can far outweigh the importance of the events (real or imagined) which they reflect.

Many sociologists, literary theorists and anthropologists have attempted to provide a more or less comprehensible definition of myths: 'a cultural device in which abstract values are encoded in concrete form' (Howells 1999: 10); or 'A myth shows something. Myths define enemies and aliens as in conjuring them up they say who we are and what we want; they tell stories to impose structure and order' (Warner 1994: 19). Myths may be wrong, or they may be used to bad ends, 'but they cannot be dispensed with … a myth that works well is as real as food, tools and shelter' (Simons and Melia 1989: 267). Myths are not trivial; they are not intellectual errors or sloppy history: 'The weakness of myth is its strength. Its disclaimer to absolute truth is its claim to partial truth – the only kind we as finite interpreters can ever presume to possess' (Kearney 1991: 184). Myths are created in a historical context which allows them to spread contagiously through our culture. They are multi-dimensional, reflecting what different people in different places and times wish to make of them. Myths shift and change their shape and are charged by bolts of renewing energy. Inherited myths generate new myths.


Figure 3.1 Poster for the Women's Titanic Memorial Fund. The eventual memorial, unveiled in 1931, now stands at Fourth and P Streets SW in Washington DC.

Figure 3.2 A White Star Line advertisement for the first sailing to New York from Southampton via Queenstown, aimed at third class passengers.

Figure 3.3 A White Star Line advertisement for the return sailing from New York, which never happened.

Figure 3.4 A London paper-seller announces the disaster.

Figure 3.5 Titanic Baby Found Alive! A 1993 tabloid 'story' in the Weekly World News. The baby and the legend live on through a time warp.

Synchronicity and the Titanic

The Titanic sank in a specific time and place, a thousand miles east of Boston, Massachusetts, in a specific historical context. The three decades before 1912 arguably witnessed the most dramatic technological advances in world history: the emergence of powered flight, the combustion engine, radio signals, pneumatic tyres, a massive surge in industrial productivity, the increasing use of telephones, electricity, the phonograph, film, newspapers and massive steel ocean liners. As early as 1829 Thomas Carlyle wrote, 'it is the Age of Machinery. We remove mountains and make seas our smooth highway; nothing can resist us. We war with rude Nature; and by our restless engines, come off always victorious' (quoted in Foster 1999: 3). In the nineteenth century railways crossed continents and rescued communities from 'glutinous immobility' (Blanning 2007: 3). In mid-century the German historian Karl Biedermann estimated that two generations earlier overland travel had been fourteen times more expensive. On the verge of the railway age a stagecoach from Paris to Bordeaux cost the equivalent of a clerk's monthly wage (Blanning 2007: 6). The railways democratised travel. And the new generation of steam-powered liners (so called because they travelled directly from one place to another) allowed poor people to change their lives. The United States was the magnetic attraction. In 1840 Samuel Cunard launched the Britannia and with it the first transatlantic steamship service. In the next fifty years trade between the U.S. and Britain increased 700% and the population of the U.S. quadrupled. By 1880 British companies, notably Cunard and White Star, German shipowners and arriviste Americans (delayed by the Civil War) led by J.P. Morgan were battling for supremacy on the Atlantic. Morgan stormed in with a blast of American can-do energy and bought up the Inman Line of Liverpool, thus acquiring British shipbuilding expertise. He then joined up with the mighty German fleet and launched a price war. Third class passengers could now cross the Atlantic for £2. In 1902 he acquired the White Star Line. The White Star ships looked British – built in Belfast, registered in Liverpool, staffed by British officers and a mainly British crew and flying a British flag. But they were American owned and sailing to American seagoing rules.

Speed and comfort represented the future. There were reactionaries, like William Morris, who shunned this new world. Others revelled in it. The artistic movement known as the Futurists launched their Manifesto in 1909:
• 'We say that the world's magnificence has been enriched by a new beauty of speed.
• We will glorify war – the world's only hygiene – militarism, patriotism, the destructive gesture of freedom … beautiful ideas worth dying for and scorn for women.
• We will destroy the museums, libraries, academies of every kind, will fight moralism, feminism, every opportunistic or utilitarian cowardice.'

This manifesto sounds ominously like a clarion call for fascists. It makes all too clear some of the fault-lines that ran through early twentieth-century western societies as they struggled to come to terms with unprecedented change.

For H.G. Wells, French intellectuals and even Communists the decks of the Titanic represented the rigid strata of European class obsession. At the same time these ships provided a cheap and usually safe passage to new opportunities.

Communication was also transformed in the later nineteenth century by the growth of newspaper readership. In the 1850s the educated middle class made up the bulk of newspaper readers, but in the following decades factory workers entered 'the newspaper culture in droves' (Curtis 2001: 55). In Britain literacy expanded thanks to Sunday Schools and increasing numbers of state elementary schools (from 70 percent in 1850 to 85 percent in 1900). Newspaper sales were boosted, however, mainly by falling prices; and mass circulation generated rapid advances in printing presses. The paper joined the glass of beer and tobacco as the working man's daily self-indulgence. By the 1870s mass newspaper culture was firmly embedded in British life, and in 1914 some 2,504 different newspaper titles were published across Britain.

The circulation of the Daily Telegraph rose nearly ten times between 1855 and 1880, when it reached a quarter of a million a day, helped by its pricing policy – one penny compared with three pence for the Times. The Daily Mail, the precursor of the tabloids, made its first appearance in 1896, price a halfpenny. On 24 May 1844 Samuel F.B. Morse in Washington DC sent the first message by telegraph to Baltimore. That afternoon the Baltimore Patriot carried the first telegraphic message to be published in a newspaper. A new era had arrived. The invention of the telegraph promoted syndicated news from Reuters in Britain (1851), Associated Press in New York (1848) and Havas in France (1835). The medium, the wire services, may have provided the 'facts' globally, but the message could be infinitely manipulated by the huge range of newspapers and magazines which received them, as the Titanic disaster would show. Myths could spread rapidly and globally.

Like the tabloids today the mass newspapers thrived on predictable fodder: murder most foul, royalty, the doings of the upper class – especially divorce. New York newspapers led the way in the so-called 'New Journalism' – monopolising and sensationalising the news for their mass readership. The New Journalists loved a good disaster: fires, train crashes, heroism and derring-do – and, of course, shipwrecks. Following the collision of two steamships off Newfoundland on 14 August 1888, when 150 passengers drowned, the newspapers described their death struggles in vivid detail. On 20 August the Times ran a leader: 'Every fatal collision of two great steamships seems the worst till another occurs to take its place in horrifying the imagination …. Never were men and women more irretrievably abandoned to their doom.'

The strategy of Victorian and Edwardian editors is still familiar today (the Soham child murders; the Maddie case): hype up a good story, involve the readers, and string it out. Then lace the sensation with regular doses of moralising, often written by a new generation of women journalists. However, the best newspapermen could be staggeringly professional. The New York Times lowered its price to a penny in 1898 and by 1901 its circulation had quadrupled to over 100,000. Legendary editor Carr Van Anda was on top of the Titanic story faster and more accurately than anyone. By 3.30 am on the morning of the disaster Van Anda and his team had organised a background story on the passengers, acquired photographs and led with the headline 'TITANIC SINKS FOUR HOURS AFTER HITTING ICEBERG'. The New York Times then out-manoeuvred the competition to grab the best stories when the Carpathia arrived with the survivors. Its Titanic edition was out three hours after the Carpathia docked, with fifteen of its twenty-four pages devoted to the story (Emery et al. 2000).

Titanic: myth and metaphor

For newspapermen the Titanic disaster was a godsend. It had everything; it played into virtually every major concern of the day: class, celebrity, gender, race, fear of change, technology, religion and morals, immigration. From the Titanic, stories rose like bubbles, ready for the myth-makers. This is a subject which has been studied in detail in Britain (Howells 1999) and in the U.S. (Biel 1996) and does not need to be elaborated here. Nevertheless, it is the revelation of twentieth-century attitudes or mentalités which is the Titanic's major contribution to cultural history. The facts were that a large – but not exceptionally large – floating hotel carrying both rich celebrities and poor immigrants steamed too fast, but not exceptionally fast, into an ice-field and collided with a berg. There were insufficient lifeboats for the passengers and crew, and the ship sank so rapidly that, in spite of the Marconi radio and busy sea-lanes, no rescue ship could reach the Titanic in time to save those left on board before she went down.

Out of this material the Edwardian myths emerged: of Anglo-Saxon fortitude, of upper-class heroism, of women and children first, male chivalry, manly self-sacrifice, stoicism and devotion to duty, and female dependency. Italians and Orientals in steerage panic and are 'shot down like dogs'. They glare 'like wild beasts'. The negro stoker (though there were no black people on board) emerges from the depths of the ship to attack the noble radio operator. Naturally he is shot down too.

The Titanic sank at a time of major social conflict. Sixty-one African Americans were lynched in the U.S. in 1912. The suffragettes were on the march (the term 'feminist' was used for the first time) and divorce rates were rising. Women were increasingly entering the workplace. There were fears about immigration in the U.S., and Christian Jeremiahs feared everything from materialism to 'short-haired' women. Out of this senseless maritime disaster morals and metaphors could be created for every purpose. People needed to make sense of the senseless.

Immediately after the disaster there was a surge of interest carried along by newspapers, post-cards, plays, music and a tsunami of bad poetry. Different people and places had different attitudes: in Belfast, where the Titanic was built, there was shock and even shame; in Southampton, where most of the crew came from, there was a sense of loss and sadness, but an unsentimental determination to get on with work (Barczewski 2004: 205, 247). From Southampton to Liverpool, Colne to Belfast and across the Atlantic in Halifax, memorials and gravestones were erected to the dead. And to the heroes – especially those who had done their duty, such as Wallace Henry Hartley, violinist and bandmaster, who may or may not have played hymns to the last. His body was retrieved from the Atlantic and returned to his home town of Colne in Lancashire, where 40,000 people attended his funeral.

Interest in the Titanic died along with the millions in the Flanders trenches. Yet remarkably it has been resurrected in the later twentieth century – by books such as Walter Lord's dramatic A Night to Remember (1956) and by films, notably that based on Lord's account, also entitled A Night to Remember (1958), an essentially British story of duty and of men with stiff upper lips. It fitted the zeitgeist of post-war fifties Britain and America and did not essentially differ from the predominant traditional Edwardian ethos of 1912 (I extend the Edwardian period, culturally, to 1914). But for cultural impact it nowhere near matched that of James Cameron's 1997 film Titanic: a romance of youthful freedom, of female liberation from the shackles of elderly, male Anglo-American stuffiness; Leonardo DiCaprio as a renegade St George who delivers the maiden, and her maidenhood, from the dragon of stifling convention. All set against the backdrop of a superbly re-created Titanic: the best special effects Hollywood's $200 million could buy (Delgado 2007). Cameron successfully shifted the cultural boundaries, metamorphosing a story of adult male duty and responsibility into one of youthful female rebellion and independence.

The leading man in the triumphal epic of exploration was Robert Ballard.

Titanic resurrected

Cameron's film was inspired by the discovery of the real Titanic (Ballard 1987). The ship sank in a sea of hubris, of pride, man and his machines defying nature. But in the 1985 of Ronald Reagan's America technological miracles were back in fashion. In that year a team from the Woods Hole Oceanographic Institute led by Robert Ballard joined with the Institut Français de Recherche pour l'Exploitation de la Mer (IFREMER) and located the wreck of the Titanic – though the French were largely overlooked in the blaze of publicity (Biel 1996: 208-16).

Ballard's image was that of the westerner, exploring the new frontier. But he was hardly the rugged individualist – not Daniel Boone, Shane, nor even Bat Masterson, to whom he was related. His pioneering technology depended upon the staggering military budgets of the Cold War. Finding the Titanic was a sideline to locating nuclear submarines (and, to be fair, to such important scientific discoveries as, in 1982, the bacterial colonies living in deep-sea hot vents or 'black smokers' – 'hot living inhabitants of hell!', Thomas 1983). Nevertheless locating the wreck was a remarkable achievement; the hero had descended into the underworld, into the abyss, to Cocytus itself, to recover the lost one. Out of primeval chaos came order; a beam of light was shone into the heart of darkness, from which came a kind of rebirth, a resurrection, an apocalypse. Technology revived the myth-making.

The mundane Titanic

Ballard and his team had also sailed into a heritage issue: an extremely large wreck on the bed of the Atlantic, two and a half miles deep. It is not hyperbole to say that the deep sea is the new frontier for archaeologists. Thanks to the aqualung, the tremendous potential of historic wrecks to provide new evidence about the past is clear. Discoveries such as the eleventh-century Skuldelev ships, scuppered in Denmark's Roskilde Fjord, the sixteenth-century English Princes Channel wreck, the Spanish Playa Damas shipwreck off Panama, the iron-clad USS Monitor or the early submarine H.L. Hunley provide us with new information about the technology of shipbuilding at key periods in world history. Usually archaeologists deal with terrestrial sites of lengthy occupation which have been cleared, cleaned and remodelled by successive generations of occupants. These sites are palimpsests of fragmentary evidence. Occasionally on land a catastrophe grabs a place at a particular moment, as with the volcanic eruptions at Pompeii and Herculaneum or at Thera in the Aegean. Buildings, objects, even people may then be found in situ. The more rapid and unexpected the catastrophe, the more likely that the evidence will be trapped in place, subject to the taphonomic processes which time will impose. The well-preserved shipwreck is the time-capsule par excellence. The Mary Rose provides evidence of Tudor life of such startling intimacy that even the most hard-bitten archaeologist is impressed. The Bronze Age wrecks at Uluburun and Cape Gelidonya, with their intact cargoes, transform our understanding of prehistoric trade in the Mediterranean. The historic value of such ships is obvious (Bass 2005; Grenier et al. 2006).

But what of the Titanic? Historians and archaeologists usually want to answer the Who? What? How? and Why? questions. In the case of the Titanic we know most of the answers. Some important documentation was destroyed by bombing during World War II and when Cunard took over the White Star Line. Nevertheless a vast amount is available to us. For example, The Shipbuilder of 1911 explains clearly in technical language why the sleek quadruple-screw express Cunarders Lusitania and Mauretania held all the Atlantic speed records. In contrast, The Shipbuilder explains, Messrs Harland and Wolff aim to build more economical, slower but more comfortable vessels. Their engineers have learnt lessons from the propelling technology fitted in 1909 to the Megantic and the Laurentic. To provide the ideal balance between speed and comfort, liners need to become bigger, so '[t]he maximum possible dimensions of a new vessel depend upon the dock and harbour accommodation available when the ship is completed'. Hence the rise of Southampton. With regard to speed, 'it has been the custom of the White Star Line to strive for pre-eminence in passenger accommodation in conjunction with a speed which can be obtained without too great a sacrifice of cargo capacity … The Olympic and Titanic have been designed in accordance with the policy … a passenger on one of these ships will not have the honour of crossing in the fastest ship on the Atlantic. But he (sic) will be comfortable and have the greatest choice of accommodation' (ibid.).

The Shipbuilder and other technical journals, such as Engineering, provide a detailed description, including drawings and photographs, of the Olympic and Titanic: the accommodation for 2,440 passengers, the fixtures and fittings including carpets, furniture, panelling, stained glass windows, Turkish baths, swimming pool and gymnasium. We know (almost) every object and person taken on board the Titanic as well as its technical specifications. The Titanic was 882 feet 9 inches long, 92 feet wide at the maximum and 104 feet high. Its gross tonnage of 46,328 tons was almost 100 tons more than the Olympic and it had an anticipated sea speed of 21 knots. The two official enquiries into the disaster, held in New York and London, were different in tone and character, but nevertheless provide remarkably detailed accounts, from eye-witnesses subjected to a barrage of questions, of what happened on the night of 14-15 April. We know who was lost and who was saved (see, for example, Wikipedia), and why the Titanic foundered.

So in terms of historical details the wreck of the Titanic has relatively little to tell us. As Lt Jeremy Weirich said, '[t]here's not a lot of historical or archaeological knowledge we could get (from Titanic), but there's a heck of a lot to learn about science … Titanic is a great testbed for analyzing shipwrecks' (quoted in Ballard and Sweeney 2004: 123).

However, the Titanic became a twentieth-century icon by sinking on its maiden voyage. This is what attracted the French–US team to search for it, using state-of-the-art late twentieth-century technology. And thanks to the disaster the Titanic is relatively well preserved on the dark, cold bed of the Atlantic. The Titanic's almost identical sister ship, the Olympic, had a very different career. Built to the same design in the adjoining Belfast shipway and launched before the Titanic to a much greater fanfare, the Olympic reliably plied its relatively uneventful trade until 1937, when it was taken to Scotland and quietly scrapped (Bonsall 1989: 7; Howells 1999: 149-152). Ironically, after a valuable life the Olympic became virtually worthless. In contrast, the Titanic has accumulated cultural value because of the manner of its death (Thompson 1979).

Raising issues

The discovery of the Titanic wreck raises many contemporary issues: of conservation, ownership, the ethics of salvage, exploitation, the sale of artefacts, the treatment of a grave site and the control of international waters. The controversy began soon after the initial discovery on 1 September 1985. Robert Ballard appeared before the US Congress appealing for the protection of the wreck as a maritime memorial. The RMS Titanic Memorial Act of 1986 (Titanic Act) was enacted to protect the wreck from uncontrolled salvage. This did not prevent agents from other countries exploiting the site, and in 1987 a French group from IFREMER, working with a US company, returned to salvage artefacts from the debris field – though they did not then penetrate the two separated sections of the hull.

Congress began to negotiate an international agreement to protect the wreck site with Canada, France, the UK and any other interested nation. Following widespread consultation the US National Oceanic and Atmospheric Administration (NOAA) issued guidelines, and the Final Minutes of the International Agreement Concerning the Shipwrecked Vessel RMS Titanic were signed in 1999. The salvage company RMS Titanic, Inc. sued NOAA and the Department of State in order to stop the signing of the Agreement. They lost. The UK signed the Agreement in 2003 and the US in 2004. However, the US has not yet passed this into law. In October 2007 the US informed the District Court, Eastern District of Virginia that the proposed legislation is currently being reviewed by the Senate Committee on Commerce, Science and Transportation. Once a sponsor is selected, both the US Department of State and the NOAA are ready to proceed.

In proclaiming RMST the salvor-in-possession of the submerged wreck of the RMS Titanic, the Virginia court declared that 'RMST is not the owner of the artefacts which it recovers from the wreck site. Rather, under the law of salvage, RMST is entitled to a salvage award for its salvage efforts.' RMST's request to change its role from salvor-in-possession to that of 'finder' was denied. As a result, the case, which was remanded to the court in 2005 to proceed under salvage law and award RMST a salvage award, has not been settled. The court rapped RMST over the knuckles for its obfuscating tactics.

In its motion for a salvage award (30 November 2007) RMST declares that it first undertook salvage on the Titanic, a wreck ‘in a condition of marine peril’ at a depth of approximately 12,500 feet beneath the surface, in 1987. Around 2,000 artefacts were recovered. Since then further expeditions have taken place in 1993, 1994, 1996, 1998, 2000 and 2004. RMST alleges that the fair market value of the collection of objects is US $110,859,200, and claims that ‘its efforts in the conservation and preservation of the artefacts has vastly augmented the fair market value of the collection by many orders of magnitude’. RMST declares that it is entitled to a salvage award of between 90% and 100% of the market value of the artefacts. The Titanic has resurfaced into a legal fog worthy of Jarndyce & Jarndyce.

It is clear, though, that there is tremendous public interest in the Titanic, witnessed by the numbers who attended the exhibitions of the artefacts around the world. In spite of the circumstances, curators, including those of Britain’s National Maritime Museum in Greenwich, believed that it was ethical to display the salvaged material (McCaughan 1996). Robert Ballard’s achievement has opened a Pandora’s box. The technology to explore deep-sea wrecks is now privately available. In recent years there have been many visits to the Titanic itself and Ballard has documented the damage and erosion to the site, possibly caused by these uncontrolled visits (Ballard and Sweeney 2004). There are potentially a million wrecks around the world, mostly unprotected by heritage law and now accessible with remote-controlled vehicles and robotic equipment. Some of them contain financially valuable cargoes – wrecks like the Spanish galleon Nuestra Senora de Atocha, which sank off the Florida Keys in 1622. A treasure hunter located it in 1985 and recovered £200 million worth of coins. More recently treasure hunters have located historic wrecks with valuable cargoes off Cornwall (the so-called Black Swan, said to contain 17 tons of gold and silver coins) and off Gibraltar (a wreck claimed to be the warship Sussex, which sank in 1694 carrying, possibly, nine tons of gold).

In the Guardian (17 May 2007), in an article entitled ‘Treacherous seas, a mystery wreck – and $500 million haul of treasure’, Maev Kennedy wrote:

‘In a phrase which will send a shiver up the spine of many maritime archaeologists who fear that unnecessary damage is done during salvage operations, Mr Stemm (of Odyssey Marine Explorations, the finders of several important wrecks) said: the outside world now understands that what we do is a real business … not just a lucky one shot deal. I don’t know of anybody else who has hit more than one economically significant shipwreck.’

Armed with some of the most advanced technology the world has ever seen, there is a danger that we will return to the underworld of nineteenth-century treasure hunting, when Egyptian tombs were dynamited and stripped out for the sake of economic gain. Indiana Jones and the wild frontiersmen will be back in the saddle, unless the world’s governments can co-ordinate an appropriate and effective response.

Only a small proportion of known historic wrecks are currently protected around the world. In Britain, for example, some 32,777 identified wrecks and casualties are recorded in territorial waters (see www.heritagegateway. However, a mere forty-five wrecks are designated under the Protection of Wrecks Act 1973 in England. The problem facing English Heritage, which took on responsibility for maritime heritage and wrecks in England’s territorial waters under the National Heritage Act 2002, is how to select the most important sites for protection and management from the vast number of potential candidates (Roberts and Trow 2002; English Heritage 2007). English Heritage’s methodology includes risk assessment, to calculate the vulnerability of the wreck, and field evaluation. Shipwrecks are also assessed according to five factors which represent all phases of a ship’s career: build, use, loss, survival and investigation (Wessex Archaeology 2006; Dunkley forthcoming). This aims to examine the main attributes of a site in order to establish its significance – the sum of the cultural and natural heritage values of a place.

People may ascribe subjective values to a wreck site for many reasons. By utilising a transparent, well-researched method of assessing significance, English Heritage aims to establish a range of values about a site: its distinctive design or construction, the story it can tell about the past, its association with notable people or events, or its flora and fauna. The English Heritage approach also identifies four high-level values – evidential, historical, aesthetic and communal. These move in a spectrum from objective to subjective: the potential of a wreck site to yield primary evidence about past human activities; the wreck’s ability to illustrate aspects of history, of technology and innovation, or its associations with specific people; the sensory or intellectual stimulation of the wreck site; and finally the meaning of the wreck site to people who relate to it.

Although I have emphasised the mythic rather than the historical importance of the Titanic, there is no doubt that under the English system of designation the Titanic wreck deserves our care and protection, as do hundreds of other wrecks which currently lie in shark-infested waters.

A new myth

Perhaps a new myth may emerge in the twenty-first century – one where the Titanic becomes the mothership of a convoy of vessels in need of international protection. Not hulks to be flensed, stripped and desecrated by unregulated, greedy scavengers, but vessels with cultural and communal value, worthy of our care. In an age of optimism the Titanic and its passengers were the victims of thoughtlessness and recklessness. In an age of anxiety it is a lesson we forget at our peril. Titanic R.I.P.

Acknowledgements

Thanks to Ian Oxley, Mark Dunkley and Catherine Stocker of English Heritage and, in the USA, Ole Varmer and Jeremy Weirich.

DAVID MILES: 1140 HRS, 14 APRIL 1912. THE CASE OF THE RMS TITANIC

References

Ballard, R.D. 1987. The Discovery of the Titanic. New York: Madison Press.
Ballard, R.D. with Sweeney, M.S. 2004. Return to the Titanic: A New Look at the World’s Most Famous Ship. Washington DC: National Geographic.
Barczewski, S. 2004. Titanic: A Night to Remember. London: Hambledon Continuum (2006 paperback edition).
Bass, G.F. 2005. Beneath the Seven Seas: Adventures with the Institute of Nautical Archaeology. London: Thames & Hudson.
Biel, S. 1996. Down with the Old Canoe: A Cultural History of the Titanic Disaster. New York: W.W. Norton & Co.
Blanning, T. 2007. The Pursuit of Glory: Europe 1648 – 1815. London: Penguin Books (2008 paperback edition).
Coupe, L. 1997. Myth. London: Routledge.
Curtis, L.P. 2001. Jack the Ripper and the London Press. New Haven: Yale University Press.
Darnton, R. 1984. The Great Cat Massacre and Other Episodes in French Cultural History. London: Vintage.
Delgado, J.P. 2007. Titanic, in J.M. Schablitsky (ed.), Box Office Archaeology: Refining Hollywood’s Portrayals of the Past, 70-87. Walnut Creek: Left Coast Press.
Dunkley, M. forthcoming. The Value of Historic Shipwrecks, a paper delivered to the EAA Conference, Zadar, Croatia, 2007.
English Heritage 2007. Protected Wreck Sites at Risk: A Risk Management Handbook. London: English Heritage.
Foster, J.W. 1999. Titanic. London: Penguin Books.
Grenier, R., Nutley, D. and Cochran, I. (eds) 2006. Underwater Cultural Heritage at Risk: Managing Natural and Human Impacts (Heritage at Risk Special Edition). Paris: ICOMOS.
Howells, R. 1999. The Myth of the Titanic. London: MacMillan Press.
Kearney, R. 1991. Poetics of Imagining: From Husserl to Lyotard. London: Harper Collins.
Lord, W. 1956. A Night to Remember. London: Longmans Green & Co.
McCaughan, M. 1994. National Maritime Museum, Reading the Relics: Titanic Culture and The Wreck of the Titanic Exhibit. Followed by Knight, R. 1996. Curatorial Statement. Material History Review 43, 68-73.
Roberts, P. and Trow, S. 2002. Taking to the Water: English Heritage’s Initial Policy for the Management of Maritime Archaeology in England. London: English Heritage.
Simons, H. and Melia, T. (eds) 1989. The Legacy of Kenneth Burke. Madison: University of Wisconsin Press.
Thomas, L. 1983. Seven Wonders, from Late Night Thoughts. London: Viking.
Thompson, M. 1979. Rubbish Theory: The Creation and Destruction of Value. Oxford: Oxford University Press.
Wessex Archaeology 2006. On the Importance of Shipwrecks, ALSF Project ref: 58591d.02a, unpublished report for English Heritage.


Chapter 4

1 July 1916 The Battle of the Somme and the machine gun myth Paul Cornish

Of all days in British history, few are imbued with such resonance as 1 July 1916: the first day of the Battle of the Somme (officially the first day of the Battle of Albert). ‘The most notorious battle in British history’ (Sheffield 2003: xii) according to one prominent modern historian and, in the words of another: ‘At the time of writing almost a century has passed since the Battle of the Somme. That interval of time is still probably not long enough to enable us to gauge its full impact on British life’ (Duffy 2006: 165).

The reason for this is simple: 1 July 1916 witnessed the highest casualties ever sustained in a single day by the British armed forces, totalling 57,470 – including 19,240 killed (Holmes 1992: 127). These stark statistics do not, however, do justice to the significance that this day has assumed over the intervening nine decades (Liddle 1992). In popular perception, the ‘First Day of the Somme’ has come to represent the whole 141 days of the battle. In fact it might be taken to encapsulate the popular view of the British experience of the First World War in toto. A.J.P. Taylor, in The First World War: An Illustrated History, devotes at least as much space to the disastrous First of July as to the subsequent events of the battle during the period up to 18 November (Taylor 1966: 134-140). It is, of course, one of the ironies of historiography that this waspish and highly political little book should have become perhaps the most widely read text on the Great War (Danchev 1991: 263). Nevertheless, the author certainly touched upon a deep-seated truth when he stated that ‘The Somme set the picture by which future generations saw the First World War: brave helpless soldiers; blundering obstinate generals; nothing achieved’ (Taylor 1966: 140).

Firmly emplaced at the heart of this picture is the machine gun, arguably one of the most deadly – certainly the most mythologized – materialities of modern conflict. Overwhelmingly the popular image of the Somme pits slowly moving lines of British infantry against German machine-gunners who proceed to cut them down ‘like ripe corn before the scythe’ (Sheldon 2007: 160). Such scenes were indeed to be seen on 1 July 1916. On several sectors of the 27,000-yard front British troops lost the ‘race to the parapet’ against German machine-gunners bringing up their guns from the deep dugouts which had protected them from six days of British artillery bombardment. Recent research has made clear that this was far from a general state of affairs, with a variety of tactical approaches applied to the problem of traversing no man’s land (Sheffield 2002: 167). Nevertheless, even in the sectors where the British managed to break into the German positions, further progress was frequently halted by machine gun fire brought to bear on them from flanking positions (Travers 1987: 152-160; Holmes 1992: 124, 126). In fact the first day of the Somme probably witnessed the zenith of the machine gun’s effectiveness as a direct-fire weapon. At no point in this, or any other, war was the direct fire of un-subdued machine guns brought to bear in such a concentrated manner on such a scale (Cornish 2009b).

An Artillery War

It is completely wrong, however, to fashion, from the events of this one day, a prism through which to view the whole of the First World War. If we do so, we inevitably distort the true picture. First and foremost, we should remember that it was artillery, and not the machine gun, which was the main ‘killer’ of the First World War (Bidwell and Graham 1982). Despite the unique events of 1 July 1916, this rule held true on the Somme too. British casualty returns indicate that explosive munitions, rather than small arms fire, caused almost 60% of deaths and wounds (Noon 1996: 101). Moreover, it was estimated that a man struck in the chest by shrapnel or a shell fragment was three times more likely to perish than a man similarly wounded by small arms fire (Noon 1996: 102). It was shellfire that drove the armies underground. ‘Artillery was the killer; artillery was the terrifier. Artillery followed the soldier to the rear, sought him out in his billet, found him on the march’ (Terraine 1981: 132). This was echoed in the sentiments of front line soldiers. Machine guns did not generally feature among the perils which they most feared. The threat of burial alive, gas, flamethrowers, aerial bombing and, above all, artillery bombardment appear to have been feared and hated most (Winter 1979: Ch.7; Holmes 1994: Ch.6). Bullets, by contrast, were regarded as a relatively ‘clean’ way to get wounded or killed. A French infantryman summed it up thus: ‘To die from a bullet seems to be nothing; parts of our being remain intact; but to be dismembered, torn to pieces, reduced to pulp, this is a fear that flesh cannot support…’ (Holmes 1994: 233).

In tactical terms, artillery dominated the Western Front. All sides sought to break the trench deadlock by the increasingly sophisticated use of heavy concentrations of artillery. This did indeed frequently permit attacking troops to ‘break in’ to enemy positions. The failure of the British attacks on 1 July 1916 may well be viewed as a result of the faulty use of an inadequate quantity of artillery (Travers 1987: 161-164), especially when compared with the (little-remembered) success of French attacks on the southern sector of the Somme front. The French were well supported by concentrated artillery fire, and enjoyed a victorious 1 July, as did the British divisions immediately to their north, which benefited from some of the same support (Falls 1960: 170). Throughout the war it was, ironically, the subsequent difficulty of moving artillery forward that proved one of the main obstacles to achieving the elusive ‘breakout’ (although the relatively primitive state of battlefield command, control and communications was also a major problem in this context).

The machine gun lacked both the range and the killing power of artillery. Men who kept their heads below ground level were impervious to its bullets. Nevertheless, even as the popular vision of combat on the Western Front exaggerates the dominance of the machine gun on the battlefield, it remains simultaneously ignorant of its true capabilities. This is hardly surprising, as such matters have rarely been considered suitable for discussion in works of history. To quote Stéphane Audoin-Rouzeau and Annette Becker: ‘It is striking how much historians, though they profess to be discussing the war, are cut off from areas of relevant knowledge. Weapons for example – how they are used, how they work, and what effect they have – are outside the competence of most of them’ (Audoin-Rouzeau and Becker 2002: 19). I will now outline the true nature of the machine gun and its lethal potential, as an aid to its comprehension as an item of material culture.

A Revolution in Firepower

The machine gun itself is of course just one part of a ‘weapons system’ – the real weapons are the bullets that it discharges. The bullets fired by the machine guns of the Western Front were the product of a revolution in military technology: namely, the invention of smokeless propellant for cartridges in 1885. Prior to this date, small arms had been chambered for ammunition filled with gunpowder; of calibres ranging from 9.5 mm to 12 mm, they fired solid lead bullets that followed markedly curved trajectories and were potentially lethal out to around 1500 m. Military experts sought to improve this level of performance by creating high-velocity ammunition, which would have a greater range and a flatter trajectory to simplify the process of taking accurate aim. However, this would require either more propellant or a smaller bullet. Increasing the gunpowder charge was not feasible without increasing the recoil of the weapon to unacceptable levels. Conversely, a decrease in calibre was also impossible, as the residues left by gunpowder quickly fouled the barrels of small calibre weapons (Boudriot 1981). The answer to this conundrum was found by the French, who invented a nitrocellulose-based propellant which combusted completely upon firing, releasing a great amount of energy without creating significant residue or smoke. They immediately developed a rifle to chamber a new 8 mm cartridge filled with the miraculous propellant. Its appearance effectively rendered obsolete all other rifles (not to mention machine guns) then in service (Huon 1995: 4-6). The military establishments of Europe soon found themselves embroiled in an infantry weapons race, as they vied with each other to introduce small calibre rifles into service. These weapons were of 6.5 mm to 8 mm in calibre, and fired bullets that were jacketed with soft metal to prevent their bores from becoming fouled with lead. They were typically sighted out to 2000 m, although their bullets were potentially lethal at even greater ranges.

In the wake of smokeless powder came another important change in ammunition technology: the general introduction of bullets with pointed noses. It was (correctly) anticipated that the streamlining of the bullet would further enhance both range and accuracy. However, these so-called spitzer bullets have another characteristic. Because the centre of gravity is at the rear of such projectiles, they have an innate tendency to travel tail-first through the air. This tendency is held in check by the spin imparted to them by the rifling of the weapon from which they are fired. However, it is frequently the case that, on entering a human body, such a bullet begins to turn around its lateral axis, causing far more damage to tissue and organs than would be achieved by a straight-travelling bullet (Cornish 2009). Thus the cartridges fired from most machine guns during the First World War combined potentially ferocious terminal ballistics with a lethal range in excess of 3000 m. They were also capable of remarkable feats of penetration. The British .303-inch bullet, when travelling at its maximum velocity, could penetrate 1.5 m of clay, almost 1 m of hardwood, or almost 0.5 m of sandbags (Coppard 1969: 27). No helmet or body armour could withstand such concentrated kinetic energy. A single bullet could easily kill or wound more than one man. These terrifying qualities, literally and symbolically, embodied the intensified destructiveness of this new era of modern industrialised war.

By far the most effective way of delivering these deadly projectiles was the machine gun. It could do so in a number of ways. These did not include the standard Hollywood method of swinging the gun rapidly around on its tripod to hose the enemy ranks with fire. Such shooting would waste not only ammunition, but also the machine gun’s greatest asset – its precision. This precision is founded on a combination of the firm but adjustable mount from which the gun is fired and the fact that the effects of fear and confusion on its firers could not readily disturb its aim. As a British Army training pamphlet dryly observed: ‘The accuracy of fire is increased by a reduction of the personal factor’ (SS192 1919: 1).

Machine guns are fired in bursts, the lengths of which vary according to the range or type of fire being conducted. Ten- to twenty-round bursts might suffice at ranges under 1000 m, while fifty-round bursts might be used at extreme ranges. In the British Army, ‘rapid fire’ was conducted at a rate of 250 rounds per minute. The tactic closest to the popular perception of a machine gun’s use was ‘traversing fire’, which the French, showing a flair for finding the mot juste, called feu fauchant. To achieve this, the gun’s elevation would be set to sweep an enemy parapet or area of advance, and the gun would be traversed in the horizontal plane in between each burst of fire. Traversing fire would be conducted within a carefully defined arc – typically around 20 degrees.

With the deflection and elevation mechanisms of the mount clamped tight, machine guns could fire on a fixed point or line, with minimal deviation, for as long as required. This type of fire was especially useful at night, with aim having been taken in daylight. The effectiveness of machine gun fire was greatly enhanced by ensuring that the guns took the target in enfilade (i.e. from the flank). Even on a straight frontage machine guns were most properly positioned on the flanks of the position, so as to afford them long diagonal lanes of fire across the front of the position, preferably with the opportunity of creating crossfire with other machine guns. When fired at longer ranges, the machine gun develops different capabilities. Not all bullets leaving a machine gun follow the same flight path. The variation in their trajectories creates what is known as a ‘cone of fire’. The area where the cone of fire intersects with the ground is known as the ‘beaten zone’ – an area typically elliptical in shape (Cornish 2009b). A British Vickers machine gun, firing at a range of 2000 m onto flat terrain, produced a beaten zone 64 m long by 6 m wide (Hutchinson 2004: 199). This gave machine guns the ability to saturate areas of ground with fire and to deny the enemy safe access to chosen areas of the battlefield – even those out of view. Interlocking fire from a number of guns could be particularly devastating where their beaten zones overlapped. These characteristics made the machine gun particularly useful in the relatively static warfare that prevailed on the Western Front between late 1914 and early 1918.

As the war progressed, further levels of sophistication were added to the tactics used by machine guns. Contrary to common perceptions, the leader in this field was the British Army. Uniquely among the belligerents, Britain created a separate corps, the Machine Gun Corps, which from the end of 1915 became the sole user of the Vickers machine gun within the army (see McCarthy 1993). The resulting concentration of specialist knowledge engendered a revolution in machine gun tactics. Machine guns were progressively removed from vulnerable front-line positions and increasingly devoted to fire from long range – frequently conducted indirectly (i.e. at targets not visible to the firer) and/or over the heads of intervening friendly troops. These techniques, which necessitated a good deal of mathematical calculation and careful use of map, compass and clinometer, were made possible by a combination of the precision noted above and the curved trajectory followed by the bullets (Cornish in prep. a; SS192).

These tactics began to see regular use during the Somme battles of 1916 (Figure 4.1). This period also witnessed the first organized use of the ‘machine gun barrage’ – officially defined as ‘centrally controlled fire by a large number of guns on to definite lines or areas, in which each gun engages approx 40 yards of frontage’ (SS192 1917: 7). The effectiveness of such fire on the Somme was enhanced by the tactics employed by the Germans, whose invariable practice it was to attempt the recapture of lost positions by immediate counterattack. This policy inevitably exposed large bodies of German infantry to the fire of the increasingly effective Royal Artillery and to machine gun barrage fire. This is what gave that bloody struggle what John Terraine described as its ‘true texture’: ‘The picture of the British infantry rising from their trenches to be mown down is only a true picture of the battle of the Somme when set beside that of German infantry rising from their trenches to be mown down’ (Terraine 1981: 122). Barrage fire was equally valuable for supporting attacks or deterring counterattacks, and for the rest of the war it formed the basis of British machine gun tactics. The French and Germans only belatedly copied it.

The Germans, in fact, had less reason than the British to adopt these new techniques. With the exception of their ill-starred offensive at Verdun, they maintained a defensive posture on the Western Front from the summer of 1915 until the spring of 1918. Central to this defence, as revealed so terribly on 1 July 1916, was the interlocking direct fire of carefully sited machine guns. This meant situating the guns behind the front line, preferably in positions from which they could enfilade any attackers (SS 487), with deep dugouts or concrete pillboxes to protect their crews. Being on occupied territory, the Germans were able to select the best positions to occupy – making local withdrawals where necessary. The Allies, who were bound, for reasons of politics and civilian morale, to hold on to every inch of French and Belgian territory, did not have this luxury.

The Machine Gun as Trophy

One consequence of this was that the capture of machine guns came to be seen as a measure of the success of any offensive operation carried out by the Allies. In former wars, such significance had adhered only to artillery pieces. Now the number of captured machine guns was recorded after each attack. Concomitant with this development, machine guns very naturally became much sought after as trophies of war. In a real sense, therefore, the machine gun became fetishized. After a successful action in April 1917, Captain Graham Greenwell wrote that ‘Our captured (machine) guns are fine trophies, and I have already had them stamped “Captured by ‘B’ Company 1/4 Oxford and Bucks Lt. Infty.” They will go to Oxford at the end of the war’ (Greenwell 1972: 175). In the following year, The Times extolled the use of such trophies in raising war-funds, suggesting that:

A captured gun, grimed with the mud of France or Flanders, a heavy minenwerfer that perhaps a month ago was shelling a British first line trench, a machine-gun that may have “held-up” half a battalion till its team were bayonetted (sic) by the vanguard of our advancing infantry, make an even more effectual appeal (The Times, 6 June 1918).

Fig 4.1 The reality: British machine gunners on the Somme. The gunners are wearing gas helmets and the gun is firmly emplaced for the conduct of long-range fire. The setting of the rear sight indicates that the target is 2,200 yards distant.

Competition for such trophies was evidently fierce. A 1918 request from the Irish Recruiting Council for ‘trophies, not too bulky in size, such as rifles, bomb throwers, flame throwers, helmets, damaged machine guns etc.’ was refused on the grounds that ‘no more could be spared & so much of what they had being claimed by units & consequently could not be sent out of the country’ (NA NATS 1/258 Memo 5/9/18 Maj. Garnett). Furthermore, trophy-hungry recruiters, fundraisers or regimental officers had to take second place to the Army itself, which re-employed a number of captured German machine guns, after having them converted to fire British ammunition and to fit the tripod used with the Vickers gun (Goldsmith 1989: 129).

After the Armistice, a more ordered system of trophy distribution was put in place by the Army Council, which set up a Departmental Committee to deal with all the questions relating thereto, and to watch the interests of the Imperial War Museum in consultation with the museum committee. The normal procedure is for guns and other trophies sent to this country from France and other theatres of war, to be claimed as having been captured by certain units. If the claims of these units are substantiated the commanding officers are given the opportunity of determining whether the trophies should be presented to the Regimental Depot, the Imperial War Museum, or some City, Borough, etc (NA T1/12438).

The greatest concentration of these trophy guns was, of course, to be found at the Imperial War Museum. Coincidentally, the museum’s origins could be said to lie in the events of 1 July 1916 – for it was the great loss of life on the Somme which persuaded the government that steps must be taken to unite the public behind the war effort. The creation of a national war museum was integral to this policy (Cornish 2004: 36). Thus, the ‘social life’ of the machine gun as curated material culture began, ironically, and in part, with its ability to destroy and damage so many actual human lives.

Not all of these guns have survived the vicissitudes of the Imperial War Museum’s early history (Cornish 2004). However, of the twenty-seven machine guns captured on the Western Front currently held by the Imperial War Museum, seven bear painted inscriptions denoting the unit that captured them. Three of them carry painted serial numbers relating to the Trophy Committee (IWM FIR 9156, FIR 9162, FIR 9343). They played their part, therefore, in the museum’s role as what Saunders calls ‘a national focus for the commemorative materiality of war-related objects’ (Saunders 2007: 59).

Other captured German machine guns embarked on a new post-war life as emblems of victory in regimental headquarters, or proudly displayed by municipal authorities at town halls or other locations. No fewer than 4,000 captured machine guns were sent to Canada (Vance 1995: 50). So many were to be found in Australia in 1942 that they were gathered up and re-chambered for .303-inch ammunition, in expectation of an imminent Japanese invasion. The numbers involved are indicated by the fact that 1,500 of these modified guns were created – even after many were ‘cannibalized’ to provide parts for others (Skennerton 1989: 54-57).

It should be noted, however, that the active life of these machine guns as valued trophies was relatively short – dwindling during the 1930s and coming to an abrupt end in 1939. Jonathan Vance has charted this tendency with regard to trophies in Canada (Vance 1995: 53-55) and it seems clear that there was a general public retreat at this time from the elevation of trophies into ‘icons of remembrance’ (Cornish 2004: 47). This aspect of a wartime object’s ‘cultural biography’ – where the Second World War influenced and re-shaped the material culture of the First – is a significant but so far under-acknowledged and under-investigated issue.

the author’s conclusion that the importance of machine guns in colonial campaigns was played-down in order to emphasise traditional martial virtues and the personal, rather than technological, superiority of the white man (Ellis 1975: 106-107). This thesis is not borne out by contemporary reports of these colonial actions, which frequently give prominence to the role of the Maxim gun and clearly accept it as a symbol of Western technological superiority over the ‘uncivilized races’. To quote but one example, the Daily Telegraph, reporting on the Battle of Omdurman in 1898, was moved to assert: ‘In most of our wars it has been the dash, the skill, and the bravery of our officers and men that have won the day, but in this case the battle was won by a quiet scientific gentleman living down in Kent’ (Hawkey 2001: 82).

The reference was of course to Sir Hiram Maxim – inventor of the first true machine gun. Maxim (not quite a ‘gentleman’ by the standards of his day, and not ‘quiet’ by any standards) was himself not unaware of the status of his weapon as an Imperialist icon. In 1888 he earned a great deal of publicity by presenting the explorer Henry Morton Stanley with a special Maxim gun fitted with an ‘arrow-proof’ shield (Goldsmith 1989: 46).

By the end of the nineteenth century, the machine gun had established itself not only as a universally recognisable artefact, but one with generally positive associations. It was featured firing blanks in public entertainments (Cornish 2009b) and even had a beer named for it – Vaux Maxim Ale. Maxim himself recorded finding images of his gun adorning postcards in a Swiss village shop. Furthermore,

There was a wooden Maxim gun in the little village, and it was considered the thing for visitors to be photographed seated on the gun, with the mountains in the distance. If the mountains were covered with clouds, an excellent imitation painted on a large screen was used which did just as well, perhaps better (Maxim 1915: 197).

The Machine Gun as Icon

The Maxim gun was certainly not the only machine gun in existence, but it is interesting to note that, by 1900, its distinctive silhouette, comprising a box-like body and a cylindrical water-jacket had established itself as an image that would remain the predominant generic representation of the machine gun for decades to follow.

The post war life of German machine guns as trophies was of course a direct corollary of their use and capture by soldiers. However, even before the First World War, the machine gun had enjoyed a distinct celebrity as an item of material culture in its own right. In fact they maintained a generally favourable public image at that period. Their role in colonial conflict was thoroughly discussed by Ellis, in his well-known book The Social History of the Machine Gun (1975). In title and conception this book was a good twenty years ahead of its time, when first published. However, I would disagree with

Perhaps surprisingly, this favourable iconography did not disappear with the coming of war. In 1916, even as the war was plumbing new depths of frightfulness, Arcadian China of Stoke on Trent produced a crested-china souvenir figurine depicting a Tommy hunched over a machine gun. Even as late as 1917, new designs for British machine guns reproduced in the form of souvenir china were being registered (Southall1982: 16, 144, 147). A contemporary advert for Beecham’s Powders used an image of a Tommy firing a Maxim gun, with the slogan: ‘A Good “Maxim” To


DEFINING MOMENTS Remember. Beecham’s Pills will keep you up to the mark.’ The picture had, in fact, been drawn at the Front by Bruce Bairnsfather (the celebrated creator of ‘Old Bill’), while serving as a machine gunner (Cornish 2009b).

representation, and mimesis – and how machines that have killed have been ‘civilized’ with bronze; thereby changing from weapon to memorial. The reaction of artists to the machine gun was, therefore, equivocal. But what about society in general? How did the machine gun gain its later reputation? How did it become a cipher for the horror and supposed futility of the First World War? Banal as it may seem, at least some of the roots of the machine gun’s modern day image lie in the haphazard development of the historiography of the Great War (for an excellent survey of this, see Bond 1991). Personal reading of numerous memoirs and first hand accounts has failed to find the machine gun elevated to a dominant position among other aspects or, indeed, weapon systems of the war (unless, of course they were written by machine-gunners).

Machine guns also inspired artistic endeavours of a loftier nature during the First World War – and this on an international level. In particular, those artists connected with the Futurist movement were entranced by this mechanized means of killing and the ascendancy that it appeared to gain over both its users and its victims. The ‘Futurist’ artists Henri Gaudier-Brzeska and Christopher Nevinson created images of machine gunners (the former’s La mitrailleuse en action and the latter’s La Mitrailleuse) which conflated the gun with the man . A critic viewing Nevinson’s image of French machine-gunners wrote: ‘Are they men? No! They have become machines. They are as rigid and as implacable as their terrible gun. The machine has retaliated by making man in its own image’ (Walsh 2002: 146). Even artists of a more conventional bent appear to have been influenced by the Futurist mindset. Edward Handley-Read, who served as a machine gun officer, entitled a picture of British machinegunners firing their gun: ‘Killing Germans: The Machine at Work’ (IWM Art 179).

The Machine Gun Myth Nevertheless those at some remove from actual combat were quick to seize upon the machine gun as a symbol of the carnage of the Western Front. I would suggest that this was due to fact that the ostensibly simple ‘point and shoot’ cause and effect of machine gun fire, as popularly conceived, offered a straightforward way of explaining it. Thus, as early as 1927 we find none other than Winston Churchill lamenting: ‘If only the generals had not been content to fight machine-gun bullets with the breasts of gallant men, and think that was waging war’ (Churchill 1927: 348).

A machine gunner artist from the other side of the line, Otto Dix, produced a wartime drawing known as Falling Ranks (1916), depicting men being cut down by machine gun fire. A visible, geometric, arc of fire is depicted. The path of its traverse through an advancing rank of men can be traced by their attitudes – standing, stricken, falling or prostrate (Eberle 1985: 35). Incidentally, this impression of the machine gun was not limited to the artistically inclined. A contemporary survey of the machine gun, states that ‘its gust of destructive fire has a particularly nerve-shaking quality. Those who have to face it and witness its devastating effect on their comrades have the uncanny feeling that they are up against a machine, not merely fighting with other men’ (Longstaff and Atteridge 1917: 182).

Another who was swift to elevate the machine gun to a position of undue prominence was the military theorist and commentator Basil Liddell Hart. As a champion of the armoured warfare, Hart was evidently eager to boost the importance of the weapon which the tank had been invented to neutralize. By 1933 he felt able to state with confidence in a lecture that: ‘The one man who bestrode the World War like a Colossus was Hiram Maxim. Generals and statesmen became helpless puppets in the grip of his machine gun. The machine gun was, and still is, the dominant fact in land warfare’ (Liddell Hart 1933: 213).

Even in retrospect, the machine gun retained some attraction for artists. As late as 1924 Otto Dix chose to represent himself as a machine gunner clutching a Maxim gun in the self portrait (Eberle 1985: 43) which prefaced his series of etchings: Der Krieg. He plainly retained a sort of pride in his grim trade, which was not entirely destroyed by the ghastliness of some of his front-line experiences, so clearly evident in the etchings (Eberle 1985: 23, 39). In Britain at about the same time, the machine gun reached what was, perhaps, its iconic apotheosis when two Vickers guns, draped in laurel wreaths, were incorporated in the memorial created by Francis Derwent Wood RA for the fallen of the Machine Gun Corps. These guns are not sculptures, but are in fact real Vickers guns, coated in bronze (Goldsmith 1994: 110-111). Here we plainly see how aspects of material culture can be transformed – playing with ideas of authenticity,

Liddell Hart lurks spider-like at the centre of much of the early historiography of the Great War. One man with whom he maintained contact was the wartime Prime Minister, David Lloyd George. He acted as military adviser for the latter’s War Memoirs (Bond 2001: 58), wherein the machine gun was characterized as ‘The most lethal weapon of the war’ (Lloyd George 1933: 2. 614). This dramatic statement would have recommended itself to Lloyd George, wishing as he did to puff his own achievements in increasing machine gun production whilst Minister of Munitions, while simultaneously defaming his old enemies in the wartime High Command (Cornish in prep. a). Despite, or more probably, because of its rancorous quality, the former premier’s book was hugely successful and influential (Beckett


PAUL CORNISH: 1 JULY 1916. THE BATTLE OF THE SOMME AND THE MACHINE GUN MYTH 1991: 94-95). The deliberately misleading section (Cornish 2009b: 47-50) dealing with the procurement of machine guns has continued to be quoted, without qualification, in books right up to modern times (eg. Goldsmith 1994: 48-9; Smith 2002: 203-4).

153). This repetition of an easily grasped theme eventually established a new orthodoxy. In 1986 the Daily Telegraph – a newspaper hardly known for its denigration of victorious British war efforts – made reference to ‘ranks of brave men ordered forward through barbed wire and quagmire to throw themselves fruitlessly at fortified machine-gun positions’ (Todman 2005: 38). In this quote we see all the elements of the modern popular conception of the First World War – mud, incompetent (usually British) generals, wire, and – as the dynamic element – machine guns.

Its influence was certainly apparent twenty years later, when a new wave of popular histories of the Great War began to appear. Following a period when the immediacy of the Second World War eclipsed interest in the First; these books simultaneously fed upon and fostered a renewed public interest in the conflict (Danchev 1991). Due to the fifty-year rule on the release of official papers then in place, the authors of these works had necessarily to rely upon the secondary sources already available. Thus the erroneous placement of the machine gun at the nexus of the tactical puzzle of the Western Front was, to some extent, simply a function of repetition. Widely credited with being the first of this new wave of Great War histories, In Flander’s Fields, by Leon Wolff has the hapless Tommy pitted against ‘thousands of armoured machine-guns (that new and utterly frustrating “concentrated essence of infantry”)’ (Wolff 1959: 5).

Since 1990 this convention has benefited from the endorsement of the Imperial War Museum. The entrance to the ‘Western Front’ section of that institution’s galleries (which opened in that year) is dominated by a Maxim gun. The associated information panel states that the machine gun was: ‘largely responsible for the trench deadlock on the Western Front. Firing over 600 rounds per-minute, a single machine-gun was capable of halting the attack of hundreds of troops, forcing them to take refuge in trenches.’ Of course this vision exists entirely apart from historiological developments – remaining serenely impervious to any innovation or revisionism on the part of historians (infuriatingly for them). Dan Todman provides an excellent illustration of this dichotomy in his description of the furore ensuing upon the broadcast of the BBC Timewatch programme: Douglas Haig. The Unknown Soldier (Todman 2005: 117-118). Another historian - Stephen Badsey - has perceptively summed-up this phenomenon by suggesting that, for many people, the war represents:

This conception was undoubtedly reinforced by popular representations of the war. The wartime popular press abounded with illustrations of British medal winners gamely taking on emplaced German machine guns and their shavenheaded crews (Figure 4.2). A thoroughgoing investigation of some of the popular fiction of the period (Todman 2005: 15, 24) might yield further evidence. Pride of place however, should surely be accorded to the officially sanctioned film The Battle of the Somme (1916). It is estimated that this film was viewed by an astonishing 20 million people when first released (Todman 2005: 15). It contains a scene purporting to show troops going ‘over the top’ on 1 July 1916. Four of them fall as they advance – apparent victims of machine gun fire (Smither 1988: 4). It is now considered that this iconic sequence – ‘a classic part of the imagery of the First World War’ – was faked at a trench mortar school, well behind the lines (Smither 1988: 4-6). I would tentatively suggest that subsequent film and television productions might also have played a role in reinforcing this picture, as it is certainly easier and cheaper to recreate machine gun fire for such purposes than it is to replicate an artillery barrage.

such a uniquely terrible experience that it cannot be understood as part of any historical process or analysis. Instead, it can only be understood through the emotional response of individuals, and in particular the works of literature and art produced by participants … a modern cultural meaning of the war … is first derived from a close reading of such texts, and then this meaning is used retroactively to colour and interpret the events of the war itself (Sheffield 2002: xx-xxi). And so it goes on. Lyn Macdonald, in a foreword to a recent Folio Society reprint of Leon Wolff’s famous book, talks of ‘forests of barbed wire and the lethal machine-guns, which thwarted repeated attempts to breach them. Not for nothing was the German Maxim dubbed “The Widow Maker”’ (Wolff 2003: xi).

By the time the new thirty-year rule on the release of papers was introduced in the Public Records Act of 1967, the image was evidently firmly established outside the historical community. Paul Fussell, in his celebrated literary study The Great War and Modern Memory, lambasts poet David Jones’ attempt to place the war in a military-historical continuum with the words: ‘The war will not be understood in traditional terms: the machine gun alone makes it so special and unexampled that it simply can’t be talked about as if it were one of the conventional wars of history’ (Fussell 1975:

This last quotation brings us to one final significant aspect of the machine gun’s life as an item of material culture: its assumption of a dark and sinister mantle. In fact it is frequently anthropomorphised as the ‘Widow Maker’, the ‘Devil’s Watering Pot’ (Hutchinson 2004: frontispiece) or the ‘Grim Reaper’ (Ford 1996). In these guises it presides



Fig 4.2 The myth: Stanley Wood's vision of the fighting on the Somme, from The War Illustrated , 15 July 1916.

over a hellish vision of the First World War as a barbed wirebound morass, lit by the eerie glare of parachute-flares, seeded with corpses and populated by rats ‘as big as cats’. It ‘scythes’ (Smith 2002: viii) down men in ‘swathes’ (Lloyd George 1933: 2.606, Smith 2002: 183) – presenting the generals with bloated ‘butcher’s bills’ (this last linguistic reduction of men to meat appearing even in some serious works of history (Holmes 1992: 130). It is curious to note that seldom does

either the Second World War, or indeed any other conflict, receive this Grand Guignol treatment. Nicholas Saunders has written that ‘The passage of time and generations creates different interpretations of and responses to, the materialities of war as they journey through social, geographical and symbolic space.’ (Saunders 2004: 6). I would suggest that the machine gun abundantly reflects this process, 36

PAUL CORNISH: 1 JULY 1916. THE BATTLE OF THE SOMME AND THE MACHINE GUN MYTH and that it is an outstanding example of an object with a ‘social life’: the most significant day in that life being 1 July 1916.

Acknowledgements

My thanks go to Nick Saunders and Chris McCarthy.

References

National Archive Documents
NATS 1/258
T1/12438

British Army Training Manuals
SS192 The Employment of Machine Guns. Pt 1: Tactical. (Various editions)
SS487 Order of the 6th Bavarian Division regarding machine guns, 3 September 1916. 1917.

Secondary Sources
Audoin-Rouzeau, S. and Becker, A. 2002. 1914-1918: Understanding the Great War. London: Profile Books.
Beckett, I. 1991. Frocks and Brasshats. In Bond, B. (ed.), The First World War and British Military History, 89-112. Oxford: Clarendon Press.
Bidwell, S. and Graham, D. 1982. Fire-Power. London: Allen & Unwin.
Bond, B. 1977. Liddell Hart: A study of his military thought. London: Cassell.
Bond, B. (ed.) 1991. The First World War and British Military History. Oxford: Clarendon Press.
Bond, B. 2001. The Unquiet Western Front. Cambridge: Cambridge University Press.
Boudriot, J., Lorain, P. and Marquiset, R. 1981. Armes à Feu Françaises Modèles Réglementaires 1833-1918. Paris.
Churchill, W. 1927. The World Crisis 1916-1918, Pt 2. London: Thornton & Butterworth.
Coppard, G. 1969. With a Machine Gun to Cambrai. London: HMSO.
Cornish, P. 2004. 'Sacred Relics'. In Saunders, N. (ed.), Matters of Conflict, 35-50. London: Routledge.
Cornish, P. 2009a. 'Just a boyish habit'…? British and Commonwealth War Trophies in the First World War. In Saunders, N. and Cornish, P. (eds), Contested Objects, 11-26. London: Routledge.
Cornish, P. 2009b. Machine Guns and the Great War. Barnsley: Pen & Sword.
Cornish, P. In prep. Unlawful Wounding: Attempts to codify the interaction of bullets with bodies. In Cornish, P. and Saunders, N. (eds), Bodies in Conflict. London: Routledge.
Danchev, A. 1991. 'Bunking and Debunking': The Controversies of the 1960s. In Bond, B. (ed.), The First World War and British Military History, 263-288. Oxford: Clarendon Press.
Danchev, A. 1998. Alchemist of War: The Life of Sir Basil Liddell Hart. London: Weidenfeld and Nicolson.
Duffy, C. 2006. Through German Eyes: The British and the Somme 1916. London: Weidenfeld & Nicolson.
Eberle, M. 1985. World War I and the Weimar Artists. New Haven: Yale University Press.
Ellis, J. 1987. The Social History of the Machine Gun. London: Cresset.
Falls, C. 1960. The First World War. London: Longmans.
Ford, R. 1996. The Grim Reaper. London: Sidgwick & Jackson.
Fussell, P. 1975. The Great War and Modern Memory. Oxford: Oxford University Press.
Goldsmith, D. 1986. The Devil's Paintbrush. Cobourg: Collector Grade Publications.
Goldsmith, D. 1994. The Grand Old Lady of No Man's Land. Cobourg: Collector Grade Publications.
Hawkey, A. 2001. The Amazing Hiram Maxim. Staplehurst: Spellmount.
Holmes, R. 1992. Fatal Avenue. London: Pimlico.
Holmes, R. 1994. Firing Line. London: Pimlico.
Huon, J. 1995. Proud Promise: French Autoloading Rifles 1898-1979. Cobourg: Collector Grade Publications.
Hutchinson, G. 2004. Machine Guns: Their History and Tactical Employment. Uckfield: Naval & Military Press.
Liddell Hart, B. H. 1933. An International Force. International Affairs XII.2: 205-223.
Liddle, P. 1992. 1916 Battle of the Somme: A Reappraisal. London: Leo Cooper.
Lloyd George, D. 1933. War Memoirs, Vol 2. London: Ivor Nicholson & Watson.
Longstaff, F. and Atteridge, A. 1917. The Book of the Machine Gun. London: Hugh Rees.
Maxim, H. 1915. My Life. London: Methuen.
McCarthy, C. 1993. Nobody's Child: a brief history of the tactical use of Vickers machine-guns in the British Army 1914-1918. Imperial War Museum Review No 8, 63-71. London: Imperial War Museum.
Noon, G. 1996. Treatment of Casualties. In Griffith, P. (ed.), British Fighting Methods in the Great War, 87-112. London: Cass.
Saunders, N. 2004. Material Culture and Conflict. In Saunders, N. (ed.), Matters of Conflict, 5-25. London: Routledge.
Saunders, N. 2007. Killing Time: Archaeology and the First World War. Stroud: Sutton.
Sheffield, G. 2002. Forgotten Victory. The First World War: Myths and Realities. London: Review.
Sheffield, G. 2003. The Somme. London: Cassell.
Sheldon, J. 2007. The German Army on the Somme 1914-1916. Barnsley: Pen & Sword.
Smith, A. 2002. Machine Gun. London: Piatkus.
Smither, R. 1988. 'A wonderful idea of the fighting': the question of fakes in The Battle of the Somme. Imperial War Museum Review No 3, 4-16. London: Imperial War Museum.
Southall, R. 1982. Take Me Back To Dear Old Blighty. Horndean: Milestone Publications.
Taylor, A. 1978. The First World War: An Illustrated History. London: Penguin.
Terraine, J. 1981. The Smoke and the Fire. London: Book Club Associates.
Todman, D. 2005. The Great War: Myth and Memory. London: Hambledon.
Travers, T. 1987. The Killing Ground. London: Allen & Unwin.
Vance, J. 1995. Tangible Demonstrations of a Great Victory: War Trophies in Canada. Material History Review 42: 47-56.
Walsh, M. 2002. C. R. W. Nevinson: This Cult of Violence. New Haven: Yale University Press.
War Office. 1929. Textbook of Small Arms. London: HMSO.
Winter, D. 1979. Death's Men. London: Penguin.
Wolff, L. 1959. In Flanders Fields. London: Longmans, Green & Co.
Wolff, L. 2003. In Flanders Fields. London: Folio Society.


Chapter 5

11 August 1921? The discovery of insulin

E M Tansey

In late 1921 at the University of Toronto two young Canadian researchers, Frederick Banting and Charles Best, isolated from the pancreas gland of an experimental dog a substance they initially called isletin, soon renamed insulin. This, they believed, was the internal secretion of the pancreas responsible for the body’s ability to metabolise carbohydrates, the absence of which caused the progressively debilitating, almost invariably fatal, disease diabetes mellitus. The only available therapy that orthodox practitioners could then offer diabetics, or the desperate parents of children diagnosed with the condition, was a severely calorie restricted, low carbohydrate, diet. Many diabetics died miserably of starvation. The discovery of insulin offered, for the first time, an effective treatment. It remains so at the beginning of the twenty–first century.

and a world renowned expert on carbohydrate metabolism. Somewhat sceptical, Macleod nevertheless gave the young man lab space and experimental animals over the summer whilst he himself was on holiday in Aberdeen, and assigned one of his students, Best, to assist.

Toronto 1921-1922 By 1920 it was generally agreed that the substance missing in diabetics was produced by specialised cells, the Islets of Langerhans, in the pancreas gland. Efforts by scientists around the world to extract it had been tantalisingly close, and there is strong evidence that the Romanian physiologist Nicolas Paulesco actually beat Banting and Best to the discovery of what he called pancréine by a matter of months. In the flurry of interest that followed their announcement, however, his work was overlooked. Banting and Best were unlikely candidates to make a great medical discovery. Neither was a trained scientist. Banting was a young surgeon then struggling to establish a medical practice in London, Ontario. He had read of recent work on the pancreas and diabetes, which principally involved removing the pancreas from an experimental animal, grinding up the gland and preparing various chemical extracts, such as aqueous, saline or alcoholic solutions, and injecting them back into the animal. Despite some false starts, none of these had worked consistently, and Banting suspected, as had others, that some additional secretion in the pancreas was destroying the antidiabetic factor. He devised a surgical experiment that would cause the acid secreting cells of the pancreas to atrophy, whilst protecting the Islets of Langerhans, which might enable the secretion to survive and be extracted, and to then be injected into an animal in which diabetes had been induced by the removal of the pancreas. In Spring 1921 he discussed his ideas with John Macleod, professor of physiology at Toronto

Figure 5.1 Charles Best (L) and Frederick Banting (R) reproduced courtesy of the Thomas Fisher Rare Books Library, University of Toronto. Over the Summer months Banting and Best achieved mixed results, in both inducing diabetes in experimental dogs, and in preparing extracts that might overcome that diabetes, and initially what results they did achieve were both inconsistent 39

DEFINING MOMENTS and transient. On 30 July 1921 an extract reduced the blood sugar of a diabetic dog that lived for several hours, and over the next week two more dogs were de-pancreatised and given the extract, with encouraging but still short-lasting results. On 11 August 1921 a further diabetic dog was injected with what they now called ‘isletin’, and with constant treatment and monitoring lived for nearly three weeks. This was a major turning point, but it was still just a start. Extracting the substance from the pancreas in a usable form was a major problem. Macleod, on his return to Toronto in midSeptember, suggested a number of experimental modifications, and invested further resources from his department. James Collip, a biochemist from Alberta then on sabbatical leave in Toronto, was recruited to devise extraction procedures necessary to turn the crude extract into a clinically usable drug. In the Toronto General Hospital on 11 January 1922 an extract made by Banting and Best was finally administered to Leonard Thompson, a 14-year old diabetic patient close to death. The extract failed. There was only a very slight improvement in his condition, and treatment was abandoned. As the boy lay dying, there was enormous pressure on the team, mainly Collip, to come up with a better extract. Simultaneously, relations between the main investigators, already tense, deteriorated badly. Banting was increasingly resentful of what he saw as Macleod ‘stealing’ his work and he began to say so publicly; Macleod seems to have been hastened by Banting into supporting the clinical trial before Collip was satisfied with the purity of the extract, and felt the humiliation of failure deeply; and Banting was making the further purification into a competition between himself and Best on one hand, and Collip on the other, at a time when they desperately needed to collaborate and pool their resources. 
On 23 January a further extract, purified by Collip, was ready for injection, and this time Thompson responded well. He was to live for a further 14 years.

Fig 5.2 Leonard Thompson, in a photograph given to the Research Defence Society by Dr Charles Best in 1938 when he delivered the Stephen Paget Memorial Lecture. Reproduced courtesy of the Research Defence Society.

accidental contamination, and large scale deterioration because of inadequate storage and distribution. Furthermore, the method of drug delivery, by injection (because if taken orally insulin was degraded in the gastrointestinal system and was thus rendered ineffective) was little used until the advent of insulin and therefore manufacturers, physicians and patients were unfamiliar with it and had to develop equipment and techniques, and overcome their own and patients’ resistance to new methods.

This was the first successful clinical demonstration of insulin as a specific therapy for diabetes. Macleod turned over his entire department to insulin research with assistance from the Connaught Anti-Toxin Laboratories, a local serum producing company and he approached the American pharmaceutical firm Eli Lilly and Co. for further help in translating the laboratory processes into commercial procedures. The University of Toronto set up an Insulin Committee to patent the extraction procedures and regulate the award of production licences to companies and institutions around the world who hurriedly tried to manufacture the wonder drug. That manufacture was however fraught with problems. There were few pharmaceutical companies at the time, and very few of those had any experience of dealing with biological, as opposed to chemical, preparations. Supplies of the necessary raw material, animal pancreas glands, were sometimes erratic, especially in the UK that did not have the resources of the great cattle yards of the American mid-west to rely on. Inexperience led to large scale industrial wastage and

Insulin and twentieth-century medical research Insulin, its discovery, development and clinical use, provides a powerful symbol of medical research and therapeutic advance in the twentieth century. But what is its defining moment? Was it the day on which Banting and Best first made a pancreatic extract that worked, however briefly, in reducing a diabetic dog’s blood sugar? Or was it, as somewhat arbitrarily adopted for the purposes of this chapter, when they first observed the successful and repeatable treatment of an experimental diabetic dog? Or when they published their first tentative results in a peer-reviewed journal for fellow scientists and clinicians to assess and repeat? Was it when Collip improved the extraction process to produce a sample


E M TANSEY: 11 AUGUST 1921 ? THE DISCOVERY OF INSULIN There are numerous ways in which insulin epitomises modern medical research. It resulted from rational, laboratory based investigations that characterise modern scientific medicine, rather than empirical observation that had accounted for what little reliable therapeutic discoveries there had been until that date. Commercial pharmaceutical companies, nowadays so ubiquitous and important in medical research but then in comparative infancy, immediately became involved in its manufacture, testing and marketing, and indeed several companies were either created, or diverted from more traditional food or chemical manufacture, to produce insulin. The patenting of insulin raised for the first time serious questions of the ethics and professional propriety of profiting from medical research, questions that were to reverberate and be repeated throughout the twentieth and into the twenty-first century. There was the widespread production and utilisation of insulin that stimulated recognition of the need for international agreements about biological standardisation of medicines. Additionally, there were public, often bitter, disputes about credit for the work that came to the fore when the 1923 Nobel Prize for Physiology or Medicine was jointly awarded to Banting and Macleod. The disagreements were between, on the one side Banting, the provincial surgeon who had apparently struck lucky with his idea and to a lesser extent his young student assistant Best, and on the other side, the professional scientists MacLeod, and to a lesser extent Collip. Simultaneously, the medical staff involved in designing, organising and conducting the clinical trials of insulin, and the several manufacturers trying to devise and optimise production processes, were also muscling in on the discovery and claiming credit. 
Banting was particularly resentful against Macleod, and after an initial impulse to refuse the Prize, declared he would share his half of it with Best. A few days later Macleod announced he would do the same with Collip.

Was it the first preparation of an extract that could be used clinically, or the first successful demonstration of its therapeutic efficacy in a hospital patient? Was it the first successful commercial production, or was it the development of consistent, reliable, large-scale production and supply? Or was it an entirely different, much later, event, such as the work of the British chemists Frederick Sanger (Nobel Laureate in Chemistry, 1958), who sequenced the amino acid structure of insulin, or Dorothy Hodgkin (Nobel Laureate in Chemistry, 1964), who elucidated its 3-D structure? These latter discoveries opened up the possibility of producing synthetic insulin, which removed some of the problems inherent in its extraction from animal glands (although synthetic insulin has not been without problems of its own). For the purposes of this chapter the initial experimental work and first clinical usage in Toronto have been highlighted, although with the explicit acknowledgement that it was subsequent events that translated that discovery from a local laboratory finding into a major international scientific achievement and therapeutic advance.

Histories of insulin

The diversity and richness of the history of insulin, summarised so briefly above, are largely drawn from the definitive account of the subject, which appeared in 1982. Written by the Canadian historian Michael Bliss, it drew on previously unknown archival sources and was revised and updated for its 25th anniversary (Bliss 1982; 2007). Bliss's account differs radically from previous versions, largely because of the material sources he unearthed from which to construct his history. The previous history had been largely promoted by Banting himself, and perpetuated by colleagues in Toronto after his death. Bliss located a number of fresh archives, some only just available after the death of Best, the last surviving member of the insulin quartet, including the personal, contemporary testimonies of the four main participants in the discovery, which challenged the existing historiography. He has written engagingly of his archival adventures and how he was able to challenge what he called the 'Banting and Best myth' (Bliss 1993; 1998).

Fig 5.3 Advertisement for Burroughs Wellcome & Co.’s insulin, taken from Chemist and Druggist, 26 October 1929, from original artwork in the archives of the Wellcome Foundation, the Wellcome Library, produced courtesy of the Wellcome Photographic Library.


DEFINING MOMENTS

The present account can give only a flavour of the numerous, and growing, sources and resources that are available to study, reconstruct and understand the history of insulin and its discovery; the bibliographies in Bliss's volumes should be consulted in the first instance for more details. This is largely because this is a historical record that, although rooted in one place and one time, continues to grow and permeate modern medicine internationally.

There are in fact several different interwoven histories of insulin, including: the technical analysis of the experimental discovery and the translation of that laboratory discovery into a medicinal preparation; the scientific history, which includes the role of institutions and personalities, the internal squabbles about credit amongst the Toronto team, and the later external priority claims of other scientists; the social medical history of the introduction of a new therapy for an untreatable, fatal disease and its impact on the practice and expectations of doctors and patients; and economic and business histories that encompass the technological, administrative, legal and financial aspects of producing, patenting, marketing and distributing the drug at local, national and international levels. And unlike other more punctate events in the history of the twentieth century, insulin has a progressing history, as it continues to be used by patients, investigated by scientists, developed by manufacturers and regulated by governments and market forces. This chapter argues that all these histories depend on the original discovery and are therefore contingent on it, but conversely that the laboratory work would be meaningless, possibly forgotten, without the later developments outlined in the brief summary above.
All these aspects reflect in diverse ways the discovery of insulin, and can be studied and reconstructed using both published and unpublished documents from individuals, institutes, commercial companies and governmental organisations; audiovisual material such as photographs, films, interviews and oral histories; and objects including production equipment, buildings, and medical delivery systems such as needles and syringes.

The records of insulin: published accounts

Some of the earliest published accounts of the discovery appeared almost immediately in the Toronto press, especially the Toronto Star, which was actively recruited by Banting, and these stories and reports were soon taken up by major Canadian and then international papers after the award of the Nobel Prize. Technical medical books and papers on insulin therapy and its advantages and disadvantages appeared rapidly, as scientists around the world began to investigate the new substance, and that record, including scientific research work and clinical care, continues to the present day (eg Cammidge 1924; Begg 1924; Rosenfeld 2002). The extensive literature provides an enormous resource for the study of the internal scientific and medical history of insulin, and can be readily accessed by consulting the catalogue of a medical library, by using an internet search engine, or more specifically by searching a medical internet database like the freely available PubMedCentral at (accessed 28 January 2008). The subject of diabetes has an even larger literature and can be found using the same techniques. There is also a tangential group of 'how to' books and pamphlets, written by medical, paramedical and nursing professionals, patients and commercial organisations, all providing information for physicians, diabetics and their families, ranging from guidance on managing the disease and insulin therapy to nutritional advice and recipe books. Such books were produced before insulin was discovered, when a reduced carbohydrate intake was the only possible treatment, but increased greatly once a viable therapy was available (see eg Lawrence 1933; Anon. 1933). In later years films, videos and web-sites also appeared to provide information to the many groups affected by insulin. As with personal patient testimonies, such material is in growing demand to understand the medical and social experiences of diabetics and insulin-users.

Articles and books specifically on the history of the discovery started to appear towards the end of the 1920s, one of the earliest contributors to the genre being Banting himself (Banting 1929), while Best, the youngest of the original Toronto quartet, also contributed several recollections (see Young and Hales 1982). Macleod and Collip added little directly to this genre, although the obituaries of all four dwelt to varying extents on the insulin story (eg Barr and Rossiter 1973; Best 1942; Cathcart 1935; Young and Hales 1982). Similarly, several people who had been involved directly or indirectly with the discovery contributed their own reminiscences (e.g. Feasby 1958; Wrenshall et al. 1962). Many later historians and biographers followed or used these accounts extensively and contributed to a largely untroubled triumphalist historiography of insulin (Stevenson 1947).

The records of insulin: unpublished accounts

Personal papers of the discoverers

The unpublished records of the discovery are extensive. Each of the original major players left papers including correspondence, diaries and laboratory notebooks. The smallest collection is that of John Macleod (1876-1935), who left Toronto in 1928 to return to his native Aberdeen as Regius Professor of Physiology. By then he was an embittered man, his latter years in Toronto made miserable by Banting's unrelenting hatred of him (Bliss 1982). Dying before his 60th birthday, Macleod rarely spoke of his time in Toronto, and only a few letters and papers are held in the Library of the University of Aberdeen, including his Nobel Prize certificate, on loan from the Department of Physiology. Amongst this collection is a typescript of an unpublished document entitled 'A history of the researches leading to the discovery of insulin', dated September 1922. Although known


amongst a small group of scholars, it was not finally published until more than half a century later, after Best's death (Stevenson 1978). It was used, with similar previously unknown accounts written by Banting, Best and Collip, by Bliss (1982; 2007).

Banting (1891-1941), as the first Canadian to win a Nobel Prize, was much lauded and honoured during his lifetime; he was knighted in 1934. Almost from the very beginning of the insulin story, when he first contacted the Toronto Star, Banting was involved in creating an account of the discovery that pointed to his own achievements and success, and minimised the contribution of others, especially Macleod. Banting was killed in 1941 in an aircraft accident whilst serving during the Second World War, and immediately explicit attempts were made to create a long-lasting Canadian hero, with a 'Committee concerned with the Banting Memorabilia' being established. Correspondence, notes and photographs were amassed by this Committee, and supplemented by his widow's endowment after her death in 1976. These documents are the core of the Banting collection held in the Thomas Fisher Rare Book Library at the University of Toronto. It was here that Bliss found Banting's own account of the 'discovery of insulin', also written in September 1922, with additional accounts dated the following month, and then a much more substantial account dated 1940 (see University of Toronto MS Coll 76).

After his sabbatical in Toronto, James Bertram Collip (1892-1965) returned to Alberta, moving back east in 1926 when he was appointed Professor of Biochemistry at McGill. His papers are also in the University of Toronto (MS Coll 269), and include scrapbooks, correspondence and laboratory notebooks, and his own 1922 account of 'the discovery of insulin'. The University of Western Ontario, where he finished his career, also holds some of Collip's papers. Like Banting and Best he received civilian honours and numerous honorary degrees, and like Best he was nominated for a Nobel Prize on a number of later occasions for other work.

The youngest and longest lived of the four men, Charles Herbert Best (1899-1978), had a glittering career in Canadian medical research. An extensive collection of his papers is also in the University of Toronto (MS Coll 241). They include correspondence and scientific records from his entire career. There are also other recordings and transcripts of interviews with Best, films and film scripts about insulin, and a set of diaries kept by his wife for many years, which naturally record much of his life and work. These diaries were much used by their son Henry, himself a professional historian, who wrote a joint biography of his parents, a personal and family perspective on one of the key scientific players that is largely absent from others' accounts (Best, H. 2003). Critically for anyone attempting to reconstruct the insulin story, Best collaborated on the writing of a biography with Dr William R Feasby. Although the biography was never published, the Best papers contain Best's 1922 account of the discovery, several drafts of all its chapters, audio recordings with Best and their transcripts, and Feasby's notes, made in preparation for the biography.

Fig 5.4 Nobel Prize certificate of John Macleod, jointly awarded with Frederick Banting, 1923. Reproduced courtesy of Aberdeen University Library.

Forgotten/hidden accounts

Bliss's key discovery, when he began to take an interest in the insulin story, was of the separate 1922 accounts written by all the key participants. Although known to some physiologists, these documents were not widely recognised or available until after the death of the surviving member, Charles Best. They had been recorded at the request of a member of the governing body of the University of Toronto, although when they began to be circulated privately amongst a small group in Canada in the 1950s, their publication was explicitly banned by the then president of the University, on the advice of Charles Best. Bliss, fortunately based in Toronto, was told of the existence of the manuscripts by his physiologist brother, and became seriously intrigued by the story shortly after Best's death. Other part players such as Feasby, and (Sir) Henry Dale in the UK, also compiled copies of these original accounts, and recorded later views and commentaries, and


these are in the University of Toronto (Feasby) and the Wellcome Library, London (Dale). Useful reviews of some of this material and its history have been written by Bliss (1993; 1998).

Fig 5.5 Note on the outside of an envelope of documents entitled 'Insulin Controversy' deposited by Sir Henry Dale in the Wellcome Historical Library, 1959. Reproduced courtesy of the Wellcome Photographic Library.

Although little apparently survives of the debates associated with the award of the Nobel Prize to Banting and Macleod, the records of the Nobel Foundation do have some material associated with the award. They also contain the records for the six later occasions on which Best received a nomination, and the nine further nominations for Collip, which contribute to a broader understanding of the relative scientific standing of some of the main players in the insulin story. Brief details of these records are available on the Nobel Foundation's web-site ( /medicine/database.html, accessed 14 February 2008).

Institutions and organisations

Other unpublished archival material includes records of the institutions associated with the discovery and its consequences. The main administrative records of the University of Toronto and the Insulin Committee, the eponymous Banting Research Institute, the Best Research Institute, now the Banting and Best Department of Medical Research, and the Collip Medical Research Laboratory at the University of Western Ontario (see http://www., accessed 2 February 2008) all contain material relating to insulin. Initially there were severe difficulties in producing consistent batches of insulin, and therefore uncertainty as to whether it could become a reliable therapeutic product. Dr (later Sir) Henry Dale was sent to Toronto on behalf of the British Medical Research Council (MRC) to investigate the claims being made for insulin. On his return to London he enthusiastically advised the MRC to acquire the patents for British production, and to authorise selected pharmaceutical companies to manufacture insulin. The personal papers of Dale at the Royal Society (93HD), the archives of the MRC at the National Archives, Kew (FD/1), and the papers of individual pharmaceutical companies, e.g. Burroughs, Wellcome & Co in the Wellcome Library, London, all contain relevant contemporary papers about insulin's production, marketing, distribution and use; secondary accounts summarising aspects of commercial collaboration include Swann (1986), Liebenau (1989) and Church and Tansey (2007).

An immediate problem when commercial manufacture of insulin got under way in different companies and countries was that of regularising production, so that a 'unit' of insulin produced by one manufacturer in one particular batch was equivalent to other batches by the same or other manufacturers. Dale, then of the National Institute for Medical Research in London, was one of the first to recognise this difficulty, and in collaboration with colleagues at the League of Nations and academic and commercial scientists around the world, he succeeded in determining both a UK national standard and an international standard. Insulin was the first medicine to achieve such international standardisation, a procedure that later became routine for a wide range of pharmaceutical preparations. Records of that achievement can be found in the archives of government and international organisations, including the MRC and the World Health Organisation in Geneva; in the professional papers of scientists such as Sir Henry Dale; and in secondary accounts by Sinding (2002) and Church and Tansey (2007). Documents of this kind all extend the historical analysis and assessment of the importance and impact of the discovery of insulin, the first medicinal substance to be so regulated.

Other sources: places and people

There is also a physically more substantial material record of the history of insulin, including early production facilities and equipment, originally from pharmaceutical companies and now preserved in science museums. The buildings in which Banting, Best, Macleod and Collip worked in the University of Toronto are still extant, and Banting's home in London, Ontario, where he lived for the year before he moved to Toronto to undertake the insulin work, is preserved as a Canadian National Historic Site and proudly proclaims itself 'the birthplace of insulin' (see Banting House National Historic Site, accessed 28 February 2008). More poignant, significant and perhaps unique is the living material record of insulin's history, recorded by diabetics and the descendants of diabetics who owe their lives to insulin. One of the first recipients of insulin in Toronto in 1922 was 15-year-old Elizabeth Hughes; she died in 1981, after almost 45,000 injections and 60 years of insulin therapy. Increasingly, such testimonies are being sought by professional historians, patient groups, family historians and private individuals, all attempting to understand the human dimension of insulin's history. An excellent example of such a project is the 'Diabetes stories' website (accessed 24 January 2008), which has oral histories of 50 diabetics, some of whose memories go back to the 1920s, and all of which attest to the impact and importance of insulin in their lives.

Fig 5.6 Young diabetic girl before and four months after starting insulin treatment, taken from Allen, F M, 'Clinical Observations with insulin', J Metabolic Research, 1922. Reproduced courtesy of the Thomas Fisher Rare Books Library, University of Toronto.

In 1947 Lloyd Stevenson produced a biography of Banting. It is a work very much in keeping with the then prevailing view, largely successfully propagated by Banting himself, that the discovery of insulin was the work of a lone genius inspired by humanitarian ideals and hampered by small-minded colleagues. The dust-jacket of the book emphasises that it is 'the biography of the man who gave INSULIN to the world, one of the greatest achievements of the twentieth century', and whilst the surviving records have now allowed that individual focus to be challenged and rectified, it is undoubtedly true that the discovery and production of insulin was, against considerable odds, one of the triumphs of twentieth-century medicine.

References

Anon. (A science graduate and certified dietician), 1933. What the diabetic needs to know about diet for easy use in all households. London: John Bale, Sons & Danielsson, Ltd.
Banting, F.G. 1929. The discovery of insulin. Edinburgh Medical Journal 36: 1-18.
Barr, M.L. and R.J. Rossiter, 1973. James Bertram Collip. Biographical Memoirs of Fellows of the Royal Society of London 19: 235-267.
Begg, A.C. 1924. Insulin in general practice: a concise clinical guide for practitioners. London: William Heinemann.
Best, C.H. 1942. Frederick Grant Banting. Obituary Notices of Fellows of the Royal Society 4: 21-26.
Best, H.B.M. 2003. Margaret and Charley: the personal story of Dr Charles Best, the co-discoverer of insulin. Toronto: Dundurn Group.
Bliss, M. 1982. The discovery of insulin. Toronto: McClelland & Stewart Ltd.
Bliss, M. 1993. Rewriting medical history: Charles Best and the Banting and Best myth. Journal of the History of Medicine and Allied Sciences 48: 253-274.
Bliss, M. 1998. Discovering the insulin documents: an archival adventure. The Watermark 21: 60-66.
Bliss, M. 2007. The discovery of insulin. 25th anniversary edition. Chicago: University of Chicago Press.
Cammidge, P.J. 1924. The insulin treatment of diabetes mellitus. Edinburgh: E & S Livingstone.
Cathcart, E.P. 1935. John James Rickard Macleod. Obituary Notices of Fellows of the Royal Society 1: 584-589.
Church, R. and E.M. Tansey, 2007. Burroughs Wellcome & Co.: Knowledge, trust, profit and the transformation of the British pharmaceutical industry 1880-1940. Lancaster: Crucible Books.
Feasby, W.R. 1958. The discovery of insulin. Journal of the History of Medicine 13: 68-84.
Lawrence, R.D. 1933. The diabetic life: its control by diet and insulin. London: J.A. Churchill.
Liebenau, J. 1989. The MRC and the pharmaceutical industry: the model of insulin. In Austoker, J. and L. Bryder (eds), Historical perspectives on the role of the MRC, 163-180. Oxford: Oxford University Press.
Rosenfeld, L. 2002. Insulin: discovery and controversy. Clinical Chemistry 48: 2270-2288.
Sinding, C. 2002. Making the unit of insulin: standards, clinical work, and industry, 1920-1925. Bulletin of the History of Medicine 76: 231-270.
Stevenson, L. 1947. Sir Frederick Banting. London: William Heinemann.
Stevenson, L. 1978. Introduction. In Macleod, J.J.R., History of the researches leading to the discovery of insulin. Bulletin of the History of Medicine 52: 295-312.
Swann, J.P. 1986. Insulin: a case study in the emergence of collaborative pharmacomedical research. Pharmacy in History 28: 3-13.
Wrenshall, G.A., G. Hetenyi and W.R. Feasby, 1962. The story of insulin: forty years of success against diabetes. London: The Bodley Head.
Young, F. and C.N. Hales, 1982. Charles Herbert Best. Biographical Memoirs of Fellows of the Royal Society 28: 125.


Chapter 6

2 October 1925 From Ally Pally to Big Brother: television makes viewers of us all Martin Brown

Turn off the television! Rather than consuming the images that flow into your room, engage with the debate about the machinery that delivers them, and think about the world they have created. Little did the Scottish inventor working in the Sussex seaside town of Bexhill know what he was doing. Would he have cried out that he had created a monster? What we do know is that the inventor is little remembered, despite his contribution to the modern world. On 2 October 1925, John Logie Baird carried out the first successful transmission of an image with halftones. His endeavours pioneered the development of television, and although Baird's system was not ultimately the one used to broadcast to the world, he is the true father of mass broadcasting ( Baird#cite_note-1) (Figure 6.1).

Meanwhile, the nineteenth-century recreation and exhibition space called Alexandra Palace, on Muswell Hill (North London), was also about to stage a televisual first (Figure 6.2). As the Blue Plaque on the building records, Alexandra Palace, or 'Ally Pally' as it was colloquially known, was the location of the world's first high definition television service, which began on 2 November 1936 (http://en.wikipedia.org/wiki/BBC_One). Initial experiments had been carried out at the Crystal Palace in Bromley, but the destruction by fire of the Victorian structure on 30 November 1936 left Alexandra Palace as the sole transmitter, and although the footprint of the Paxton-designed Crystal Palace may still be seen, there is little to mark its place in the cultural history of the twentieth century. On Muswell Hill the Blue Plaque marking the first broadcast is not the only reminder of this world first: the aerials used to transmit the pictures may still be seen atop the building, forming part of its historic interest.

Figure 6.1 21 Linton Crescent, Bexhill-on-Sea, where John Logie Baird worked on his experimental television system. (Photo: Jane Stephen.)

Unfortunately, the further development of television was stalled by the coming of the Second World War, but transmission resumed following the end of the conflict, and by 1953 there were sufficient television sets around the United Kingdom for many people, including the author's own parents, to gather around small black-and-white screens in order to watch the coronation of Queen Elizabeth II. As such the television was a cohesive tool, bringing ancient symbols of monarchy into the home, projecting, broadcasting even, the status quo and reinforcing the bonds of community through shared activity. Yet this act of community may be seen as the beginning of the end for a particular trope of the twentieth-century western world: family and friends engaged together in leisure. The epitaph of the family viewing together is probably the BBC TV comedy show The Royle Family, which until its demise in 2006 presented an affectionate, if not entirely appealing, nostalgic image of the whole family together in front of the television. The same may be said of the US cartoons The Simpsons and Family Guy, both of which depict a family in front of the TV. The reality may be rather more confused, as families have several sets, often with one in each of the children's rooms, which encourages separate, individual and disparate viewing. This fragmentation has, in turn, perhaps been encouraged by the explosion of channels, often with specialist foci, available via satellite and digital providers.

Figure 6.2 Popularly known as 'Ally Pally', this nineteenth-century palace for the people renegotiated its role as a site of mass public entertainment by becoming the site for BBC broadcasts. It is the earliest surviving site of broadcasting history and retains the transmitter mast and elements of studios. (Photo: English Heritage.)

Reality and Rockets

Today we may be blasé about the access to the world we have via the screen, whether of television or computer, but to the generations who had grown up with visits to the cinema the expansion of viewing truly was a revolution. While entertainment formed the majority of the output, daily news broadcasts were also fundamental to the schedules and these, along with the documentary, brought the world to the living room. It is often said that Walter Cronkite's reporting of the 1968 Tet offensive in Vietnam (Cronkite 1968) caused a sea-change in American public opinion and made the war unwinnable at home ( 2006/12/tet-cronkite-opinion-journalism-and.html).

Meanwhile a later war (Gulf I, 1990) caused Jean Baudrillard to question our whole notion of authenticity, reality and representation. Baudrillard famously declared that the Gulf War did not take place (Baudrillard 2002). Behind the headline-grabbing title the philosopher argued cogently that the media saturation, in turn carefully controlled by the military, meant that a plethora of embedded reporters, analysts and experts created a hyper-reality in which the actual conflict was so reduced that it was possible to argue that there was no actual event to criticise or with which to engage (Appignanesi and Garratt 1995: 133-135). In such a scenario the events described become hyper-real and ultimate truth is impossible. Meanwhile, a survey of coverage of the war in the USA showed that the majority of respondents felt that misinformation in the mass media was justified if it was in the interests of national security (Cashmore 1994: 56). This serves only to reinforce Baudrillard's core argument that one can no longer be sure what is actually going on. It was perhaps during Gulf War II, however, that Baudrillard's point was brought home to friends of the author, who sat taking cover in a bomb shelter under Saddam's palace in Baghdad while CNN showed live footage of the insurgents' rockets bursting on the compound above their heads (Jon Sterenberg, pers. comm.). Here the hyper-reality of the screen image and the ultimate truth of death came dangerously close in an authentic but strangely post-modern situation. This event may, however, be the ultimate conclusion of TV war reporting, in which the combatants, notably in Vietnam, became players in a nightly drama beamed into millions of homes during the news broadcast, a drama that was, to those without experience of the military, indistinguishable from fiction; a situation neatly satirised in Francis Ford Coppola's 1979 film Apocalypse Now, when the troops on the beach are exhorted by the war correspondent to be as realistic as possible as bullets fly. That this film is itself a work of fiction further heightens the irony.

Digging for Truth

On a much more trivial level, the notions of authenticity and hyper-reality have also permeated the relationship between archaeology and television. This is apparent not only in the way programmes are edited to create narratives, or in the way in which objects are found and then re-found for the cameras, but even down to the reality TV show Celebrity Big Brother. In 2002 the author was invited onto the show to examine bags of rubbish submitted by contestants in the house. He was asked to use his archaeological skills to determine which bag belonged to which housemate and to say something about them from the remains. Each bag presented not only a glimpse into the lives of the celebrities involved but also a subtext about celebrity: particularly the rubbish of Melinda Messenger, who was at that time repositioning herself, broadly speaking, from glamour model to celebrity mum, and whose bin bag was, as I recall, full of wholesome baby food packets and organic nappy bags. Contrast that with eventual winner Mark Owen, who sent in a bag of rubbish including till receipts and unopened vegetarian sausages. It appeared not to have been filtered for celebrity effect and presented a rounded and normal person, even if he had fronted the boy band Take That! On the other hand, it could have been a careful construct designed to give that impression. The final bag opened did present an image of a single man opening tins and drinking a little too much. Unfortunately the subject was about to undergo a traumatic split from his wife, so maybe he had other things on his mind when he apparently failed to think about what was to be on display.

What the Celebrity Big Brother experience suggests is that the reality of archaeological method was effectively subverted by television and made into a hyper-reality where truth and presentation collided. Was the rubbish really sent in by celebrities? Was the author even a real archaeologist? It may not have been the Gulf War, but the principles are essentially the same. Even the 'expert' examining the bags couldn't believe much of what he saw. What chance for the poor viewer? Can we believe what we see? Does television conjure realities from nothing? When one considers archaeological programmes shown on British television it is clear that the same questions are ever-present, however hard programme makers and the archaeologists engaged by them seek to stress the authenticity of method, process and discussion. For the archaeological documentary, at least, one is fairly certain that the landscape in which the site sits is authentic. This is not the case when dealing with TV drama.

On a less frivolous level, the success of the Channel 4 show Time Team has spawned a number of other archaeologically-themed programmes that present a rather more authentic view of archaeology. As Piccini has noted elsewhere, the makers of Time Team are always keen to stress academic rigour and adherence to the material evidence (Piccini 2007: 228). The same was true of the BBC's Meet the Ancestors, which followed existing projects, unlike Time Team, which commissions its own evaluation-style fieldwork. These two programmes set the format for numerous imitators, with a fieldwork component, strong characters and a journey of discovery that might draw in overtly scientific specialists, such as conservators, in contrast to the muddy field practitioners. Ancestors also promoted the use of human remains as the focus of a story around which other discoveries and discourses could be explored. Series including Trench Detectives and Tales from the Grave (both shown on Five), both of which involved the author, followed this particular path.

Whether bodies are present or not, TV archaeology essentially seeks to mirror the traditional archaeological process as it moves from background and reconnaissance, to the performance of fieldwork, to the post-excavation phase and the denouement of interpretation and presentation. As such the TV director becomes the traditional site director, revealing the overarching narrative at the end. What is not often shown is the real process of discussion, argument and constant, collective contestation with the evidence as it presents itself. This is, however, one area in which Time Team excels, where important theoretical discussions and interpretive blind alleys can be presented and then mediated for the viewer by the interventions of Tony Robinson's everyman. Despite the attention to detail afforded by many directors, archaeologists will always say 'It's not like that' and the public will ask 'What's it really like?', underlining the inherent distrust of neatly packaged information; all of which is true for mainstream programmes, let alone the shows that present the archaeological fringe and its proponents! Maybe Baudrillard could also have suggested that there is no archaeology. Had he done so, his case would have been strengthened in summer 2008 by the broadcast of Bonekickers. Although this prime-time TV drama had archaeological advisers and paid attention to detail in both the appearance of the sets and the vocabulary, the over-sensational plot-lines may have undermined the value of seeing archaeologists moving further into popular consciousness. The series may yet have a positive effect (the CSI franchise has promoted forensic science even though the audience knows that real crime scene examiners don't actually have such exciting lives), but like CSI, Bonekickers further and explicitly fictionalised archaeology, to the point that one must ask where the boundaries lie.

Location, Location, Location

Television has created further simulacra at many real and peaceful locations. The town of Holmfirth is the setting for the popular Sunday evening BBC TV comedy Last of the Summer Wine. The quiet town on the edge of the Peak District of northern England has become the archetypical hill town and is now the destination for numerous ‘Summer Wine’ coach tours, as the location draws in the coach market, predominantly retired people, a demographic that also provides the main audience for the show. The net effect has been to bring new trade and boost the economy, but the tourism has also caused increased traffic and brought significant numbers of incomers, who can be seen as a positive force because of their spending but who also clog the town during the tourist season.

Figure 6.3 Television Centre in London has become more than a production and administration centre and has appeared in its own dramas and as a backdrop to media events from telethons to record-breaking tap-dance events. It is also a contested site, having seen picket lines and political demonstrations, as well as currently being the focus of discussions over heritage designation, preservation and replacement by more modern facilities. (Photo: English Heritage.)

The same is true in the small fishing village of Port Isaac in Cornwall, where Doc Martin (ITV) is filmed. Here the combination of tourists drawn by the picturesque location, film crews and fans of the show creates jams in the small streets, but also supports a village that would otherwise have little to sustain it in an area that is economically depressed following declines in upland farming and fishing, its traditional industries. At Lacock in Wiltshire the beautiful stone-built village, clustered around the medieval abbey, and its apparently unchanged English setting have provided the backdrop to numerous British costume dramas featuring the cream of British acting talent, such as Dame Judi Dench. Lacock has been Austen’s Hampshire, Mrs Gaskell’s Cranford and Flora Thompson’s Lark Rise to Candleford in numerous adaptations of the English classics. Like Port Isaac, Lacock exists on tourism and is happy to play the ‘as seen on...’ card, but this is not without its downside. During some productions the tarmac metalling of the modern streets is covered in mud, while the herds of cows and coach and cart horses bring their own natural hazards, which are not always welcomed by residents, who prefer their vision of the past to be less muddy and smelly than the original and do not welcome trails of mud into their houses, shops and pubs (Jane Hallett - local resident - pers. comm.). Nevertheless the income generated for the National Trust, which is the landlord of much of the village, has seen a move to preserve the antique charm: telephone lines, television aerials and especially satellite dishes, all of which would spoil the authenticity of the films, are discouraged, if not banned outright.

MARTIN BROWN: 2 OCTOBER 1925. FROM ALLY PALLY TO BIG BROTHER: TELEVISION MAKES VIEWERS OF US ALL

The double-edged sword of popularity was actually the subject of a parish meeting in Pluckley (Kent) to debate whether the location should be advertised. This village was the setting for an adaptation of H.E. Bates’s popular books centring on the Larkin family, and the 1950s-set TV series The Darling Buds of May, with its nostalgic view of a past England, proved a huge hit (as well as launching the career of Catherine Zeta-Jones). The Parish decided the village was too small to cope with coach tours and so anonymity was largely retained (Andrew Woodcock, pers. comm.). Pluckley took its decision after talking to residents of Holmfirth and several North York Moors villages, particularly Goathland, which achieved fame as the setting for the gentle 1960s-set police drama Heartbeat and its spin-off series, The Royal (both ITV). Visitors to Goathland can see fake shop fronts familiar from the shows on buildings which have not been commercial enterprises for decades. This creates something of a conundrum for the visitor seeking other facets of the area, including hiking, the steam railway and the landscape. However, the debate centres on the definition of cultural heritage, and one may ponder whether the visitor seeking association with the Booker Prize-winning novel Possession (Byatt 1991) has any more claim on the same landscape and its equally fictional events simply because it is a manifestation of high culture, rather than mass media. Perhaps it is the nature of cultural heritage that the term is sufficiently broad to encompass authentic prehistoric and nineteenth-century monuments and buildings, as well as the manifestations of the modern age and the mass media.

Meanwhile the drive for screen authenticity can have an immediate and adverse impact on the historic environment: when the BBC made its series The Trench in 2001, some 500m of reconstructed Great War trench system was created as the set of the reality TV show. In an attempt to give added authenticity and power to the simulacrum it was located in northern France on the old Front Line. During construction by mechanical excavator Great War features and artefacts were unearthed, and a few were collected and displayed during an Imperial War Museum exhibition connected to the show, but no archaeological recording took place (Andrew Robertshaw pers. comm.). As such the simulacrum quite literally replaced the authentic; any authenticity was born of a sense that the location itself was authentic.

Conversely, the requirements of television have created real places from the world of the imaginary. Brookside (UK Channel 4) created a stir when it commenced broadcast in 1982. It was the first show to use real houses, originally built as a residential development and then converted into permanent sets ( ~brookside.soapbox/). Since the demise of the original show the houses have remained in use as sets for other Mersey TV productions, including Hollyoaks and Grange Hill, but a number have now been sold ( /england/merseyside/7074483.stm). Essentially the houses have come full circle, returning to the use for which they were built, but their history means that for some time at least they will be associated with a moment in British cultural life. From an archaeological perspective this means the record of construction, conversion for TV use in order to accommodate the requirements of production, and the refurbishment for sale will all have left significant traces within the fabric. Meanwhile, at least one garden - that of the infamous ‘Jordache’ house - will retain the archaeological features and deposits that were created by the burial and forensic exhumation of a murdered character (ibid). On Brookside Close the boundaries between the actual and the imagined are deliciously blurred, even to the extent that visitors do go to see the house, rather as they might the scene of a real tragedy.

Albert Square, setting for BBC TV’s EastEnders, is a more traditional film set with facades before which action takes place. The same is true of Coronation Street. Here Granada TV built a set based on areas of Salford (Manchester). The Street, as it is known, has become an iconic representation of a northern English working class neighbourhood and embodies nostalgia for community and working class decency. As times have changed the cast has altered to reflect social movement in the areas on which The Street is based – more characters from ethnic minorities, for example – and the same is true of the architecture. Across northern England urban regeneration has seen the alteration and demolition of parts of the townscape evoked by Coronation Street, and in reflection the set now includes newer houses and shops that sit alongside the nineteenth-century terraced houses and railway arch familiar from the earliest days of the soap opera in the 1960s. Even the world of children’s TV can have its impact: the strange world of BBC TV’s Teletubbies once existed in the British countryside (Figure 6.4). The location offered an example of the physical manifesting of the imaginary, akin to, but of more practical use than, a folly! Seen from the air the site was a place of surprise and enjoyment to those familiar with the programmes, but it could appear sinister and redolent of structures of the Cold War to the uninitiated. In this way the site realised the imaginary but then generated its own imaginary responses.

Figure 6.4 The set of the BBC TV children's programme Teletubbies manifests the fantastical in the English countryside. However, seen from the air the set of this toddler-friendly show echoes more sinister sites from the Cold War, rather than the ‘hollow hills’ of British folklore or the innocence of the laughing baby who appears as the face of the sun in the series. (Photo: English Heritage.)

Arcades, Obsolescence and Objects of Desire

It may, therefore, be argued that the medium may remake the physical location in the mind, creating a hyper-real place that exists as authentic settlement and fictional stage at one and the same time. However, the influence of television on place may also be seen in most inhabited places of the world. Viewing television requires technology to receive the signal and to show it in the home. The television aerial has become such a ubiquitous part of the architectural scene that it is no longer remarkable (Figure 6.5). In the author’s own village numerous buildings protected by law for their historic character all have aerials protruding from their thatched or tiled roofs. However, the satellite dish is more obvious and less welcome, and most, if not all, local authorities in Britain responsible for planning in respect of historic buildings have policies designed to ensure that the impact of satellite dishes on the historic fabric and external appearance of buildings is minimised. Meanwhile, this equipment and its ancillary elements, including television sets, video and DVD recorders and players, remote controls and set-top receivers for digital TV, are all consumer commodities to be bought, used, disposed of and replaced. An examination of landfill sites might reveal stratigraphies datable by the technology of television included within them. Would it be possible to detect such things as the move from monochrome to colour in the 1970s, or the realisation that Betamax would never be the dominant system for video recorders, a technology that is now being replaced by DVD, perhaps leading to another depositional event horizon? In Britain this sequence might then be seen to include evidence of the disposal of older analogue receivers that cannot be converted to receive digital TV signals. This fast-moving technological change and its attendant instant redundancy may parallel the situation in Silicon Valley documented by Christine Finn (Finn 2002).

If we remain longer at the landfill site, picking over the detritus of our contemporary culture, one thing will strike us: it is a culture of consumption and disposal. Since the Garbage Project in the USA showed us that we over-consume (Rathje and Murphy 2001), the landfill has remained the indicator of our obsession with getting and spending in the western world. The modern world is one where the consumer is constantly enticed, as noted in the 1930s by Walter Benjamin (Benjamin 1999 [1935]: 805). Perhaps the television epitomises the idea expressed by Benjamin that the commodity is fetishised, being presented as desirable but remote to the viewer (ibid); at least in a shop window the object is physically present, but on the screen the dream can be presented and packaged as the advertiser wishes. Although television began as public service broadcasting, commercial television - taking its lead from commercial radio - soon followed, with channels and their products funded by the commercial breaks between and during shows. The first commercial on British television was played on 22 September 1955 and was for a brand of toothpaste. It was a mundane product with a simple message about freshness, but it signalled the ingress of mass advertising into the home. With the production values of the TV industry and the commercial backing of the sponsor, advertisers were able to create ever-more sophisticated short films that moved from placing the product in the consumers’ minds to creating glamour, allure and mystique. The arrival of commercial television in the United Kingdom also coincided with full employment in the post-war world, which increased wages, creating disposable income and feeding the so-called affluent society. The Ad Men were poised to help the manufacturer relieve the people of their money, and commercial TV gave them a new opportunity by taking the message to the whole family as they clustered around the new television!

Figure 6.5 The nineteenth- and early twentieth-century roofs of Poble Sec (Barcelona) cluster with television aerials. While they are a definite alteration to the line and design of the buildings, such aerials are so common that most people no longer notice them. They add a new layer to the accretion of structures but also demonstrate the ubiquity and physicality of television in the built environment. (Photo: author.)

Traditionally the commercials sat within the schedule, designated by short title sequences that ended the programme segment, or appearing between programmes. As the interplay of commerce and art developed, the boundaries were blurred. Today it is not only the designated areas for the commercials that enhance consumer awareness of the products available. For example, how many people knew of the fashion footwear designer Manolo Blahnik until the appearance of HBO’s Sex and the City in the late 1990s? Looking further back to the early 1980s, US cop show Miami Vice actually credited the designer labels used (Cashmore 1994: 76). On a more mundane level product placement is everywhere, whether on the shirts of Premier League footballers or in the products visible on the sets (ibid.). Yet TV is about more than selling specific products. Some shows sell unattainable dreams of conspicuous consumption that feed the specific desire for individual products. Cashmore argues that the prime example of this phenomenon is Dallas (Cashmore op. cit.: 124). The saga of the Texan oil family was enormously popular in the 1980s in the US and Britain but was also syndicated across large parts of the globe, where it continues to run. It has been described as ‘…an epic-sized advertisement for a commoditised good life’ (ibid). With its constantly changing range of cars, costumes and consumer goods, Dallas cast a spell, suggesting to viewers that, like the characters, they needed to be defined by possessions and glamour. Dallas became the meta-Arcade, spanning the globe and creating the ultimate fetishisation of commodity. Is it simplistic to suggest a link between the consumerist ethos of the later twentieth century and this show and its imitators?

DEFINING MOMENTS

While one TV show cannot be blamed for everything, it may be regarded as a significant cultural influence. And while one must also account for post-World War Two affluence and the ability of industry to mass-produce luxury (non-utilitarian) goods at affordable prices in the rise of consumerism, the impetus to consume comes from the advertisers in direct and indirect form. While the traditional print and billboard media had existed from the nineteenth century, the television could disseminate messages about product, from the informative to the spell-binding. As such, television not only reflects the world that produces it, it also shapes or recreates social attitudes and trends, including the trend to buy (Cashmore op. cit.: 2). As identity is increasingly defined by possessions, the pressure is always there to work harder, earn more and to spend more on ‘lifestyle’. Television has made us more acquisitive than ever before, but underlying this is a world of toil. We may no longer face starvation if we fail to labour, but we will not look good, eat right or be as society suggests we should be! Initially we are told we need a car, then we are told we need a better car, and finally we are told we need a car that will allegedly make us smarter, sexier and more sophisticated; but until the television extolled the joys of motoring how many people actually saw the need for a vehicle? However, the process of ad breaks both in and between programmes has not been enough, and a more subversive approach to sales, even than Dallas or Miami Vice, has emerged: music channels such as MTV, Kerrang! and The Hits exist to sell product, whether explicitly via advertisements or through the exposure of the viewer to artists and their music videos. In addition, the process of luring the viewer through the display of goods described above has reached a new level in the rise of the shopping channels, where goods are sold direct from the television screen by demonstrators to consumers who, from the comfort of their armchairs, telephone orders to a number displayed on the screen. The customer barely needs to move as the succession of sumptuously described and glamorously lit commodities is paraded before them, visible but out of reach, just as they were in the Arcades (above).

Essentially, this promotion of consumerism via the television means that the archaeology of later twentieth-century western capitalist society, and hence of the Asian economies that feed it, is the archaeology of television. Without television would we know that we need all the things that we acquire, seemingly to legitimate our existences? Landfill, scrapyards, car boot sales and junk shops are filled with the disposable consumer not-so-durables with which modern man surrounds himself and which are disposed of once fashion or technology changes. That desire to consume is supported and fed by the television. Why is the first television broadcast significant? The world of 1925 is utterly remote to us, despite being within living memory. Within those relatively few years the world has altered enormously, and for many people in the west TV has been the driver. The changes wrought by the technology are everywhere – on the roof tops, in the living room and in the workplace. Television impacts on almost every aspect of our lives, shaping the way we think and look at the world, directing our consumption habits and altering both landscape and built environment. We have become defined by consumption, and TV helps us decide what to consume. Big Brother is no longer the face of a dictator; it is the brand of a dictator. How appropriate for our own Brave New World!

References

– accessed 01 June 2008
– accessed 01 June 2008
– accessed 30 May 2008
perty/2008/06/28/ptvvillages128.xml&page=2 – accessed 5 July 2008
– accessed 05 July 2008
– accessed 17 June 2008
Appignanesi, R. and Garratt, C. 1995. Postmodernism for Beginners. Cambridge: Icon.
Baudrillard, J. 2002. The Gulf War Did Not Take Place. Bloomington: Indiana University Press.
Benjamin, W. 1999 [1935]. The Arcades Project, Eiland, H. and McLaughlin, K. (trans.). Cambridge, MA: Harvard University Press.
Byatt, A.S. 1991. Possession. London: Vintage.
Cashmore, E. 1994. ...And There Was Television. London: Routledge.
Cronkite, W. 1968. – accessed 02 May 2008.
Finn, C. 2002. Artifacts: An Archaeologist’s Year in Silicon Valley. Cambridge, MA: MIT Press.
Piccini, A. 2007. Faking It: Why Truth Is So Important for TV Archaeology. In Clack, T. and Brittain, M. (eds), Archaeology and the Media, 221-36. Walnut Creek: Left Coast Press.
Rathje, W. and Murphy, C. 2001. Rubbish! The Archaeology of Garbage. Tucson: University of Arizona Press.


Chapter 7

1 June 1935 The introduction of compulsory testing of drivers in the United Kingdom: the neglected role of the state in motoring John Beech

date for this; in Austria-Hungary, France and Germany testing had been introduced before the First World War; in Belgium compulsory testing was not introduced until the early 1970s). The registration of vehicles in the United Kingdom had, as we have seen, been made mandatory very early in the century. For the first time the user, the motorist, was drawn directly and compulsorily into a state-run regulation system, a system which placed emphasis both on his or her skills in controlling a vehicle and on his or her familiarity with the growing body of motor-related legislation.

Introduction It would be hard to imagine a book on the twentieth century which did not pay due acknowledgement to the motor vehicle. As the century dawned, it was a play thing of the very rich; cars were so rare in the United Kingdom that they were yet to be required to register and to carry registration plates (this did not happen until December 1903). As the century came to a close, our way of life had undergone a revolution. Car ownership was the norm; the railways had been replaced by road services for the national distribution of vital foodstuffs, raw materials, newspapers and mail. ‘White van man’ was a familiar sight throughout the suburbs of the country. Daily travel by bus and coach was the habit of millions of commuters. What is less clear is a particular defining moment. The radical change had been incremental, almost insidious.

The moment The introduction of compulsory testing for learner drivers and the concomitant first use of the now familiar red ‘L plates’ was not an event that attracted major attention at the time. In Coventry, for example, the centre of the motor industry in the United Kingdom, the local press did little in the way of reporting the event, let alone commenting on it.

A contender might be the introduction of the production line to car manufacturing (this first occurred in 1913 at the Ford car plant at Highland Park in Detroit, Michigan). The introduction was undoubtedly significant in that it reduced the time required to produce a Model T Ford from the previous 728 minutes to a most impressive 93 minutes (NPS, undated). There are however two problems with making this a defining moment. Firstly, it would represent a means to an end rather than an end in itself – the production line was a facilitator of mass motoring but was certainly not a unique factor in that phenomenon. Secondly, the production line was a means to an end for an enormous range of products, and the Highland Park production line was not the first production line by any means – a serious contender for that title is Portsmouth Dockyard, where, it is claimed, a production line for making wooden pulley blocks was introduced as early as 1802, a candidate perhaps for a defining moment of the nineteenth century.

The Coventry Standard of 7 and 8 June 1935, the issue following the introduction, had the following as its third of four items under the heading Motor Notes: DRIVING TESTS NOW COMPULSORY On Monday driving tests became compulsory. This meant that anyone who took out a first licence after April 1, 1934, must undergo a test before the licence could be renewed, and that the provisional licence scheme came into force. Under this scheme a new applicant for a driving licence obtains first of all a provisional licence, which costs [illegible], and is valid for three months. The conditions under which he or she may use this licence are that their [sic] car must carry two plates of certain dimensions bearing the letter “L” and that they must always be accompanied by a driver with at least two years’ experience. After this they undergo a driving test (fee 7s. 6d.), and, if successful, can then obtain a full driving licence for 6s.

This chapter takes as its premise that the defining moment for the car and other road vehicles, and hence motoring and the motorist, was an event that symbolised their coming of age, a point at which it was recognised that they had become popular to the extent that their use needed to be regulated. This moment was the introduction of compulsory driving tests on 1 June 1935 (clearly other countries have a different

This item followed others, presumably considered more important, on ‘Police and Motorists’ and ‘Questions at Road 55

DEFINING MOMENTS Junctions’. The former item hints that the police might find a better use for their resources than enforcing the [1934] Road Traffic Act. This Act was to have far-reaching effects, and its provisions included the introduction of: •

Local figures were quoted:

Warwick County Coventry Leamington

compulsory testing of drivers and the introduction of three-month provisional licences for learner drivers;

controlled pedestrian crossings;

30 m.p.h. speed limit in built-up areas;

a tightening of the law with respect to the compulsory insuring of motor vehicles.

Offences dealt with



Total of fines (£)





1,370 311

1,017 239

915 215

1,390 173

While it is difficult to make valid comparisons with today, because of the different numbers of vehicles and drivers, different construction standards for vehicles, and the different value of the pound, it is clear that the growth of motoring was resulting in a frightening rise in the number of accidents and injuries. For the record, in 1934 over 7,000 people were killed on the roads although there were only 1.5 million cars registered in Britain (DSA, 1992). In 1997 the number of people killed on the road was, at 3,599, roughly half the 1934 figure (DETR, 1999); the number of private and light goods vehicles at the end of 1997 is reported (DVLA, 1999) as 24.2 million.

Another obvious manifestation of this legislation, in addition to ‘L plates’, was the appearance of the Belisha Beacon, named not as might be expected after the Minister of Transport who introduced the appropriate legislation - Oliver Stanley. Rather, the ‘beacons’ were named after his successor, Leslie Hall-Belisha, who had replaced Stanley as Minister by the time the first ‘beacons’ appeared.

The same issue of The Coventry Herald also reports on road casualties thus:

To assess the significance of the operationalisation of this piece of legislation in heritage terms is important firstly to evaluate the significance of the legislation in its contemporary context and secondly to identify the place of compulsory testing in our notions of transport heritage.

ROAD CASUALTIES In Coventry, last week, five persons were injured. Figures for local districts for the corresponding week last year, for last week, and for the year to date, are appended:

The contemporary context

As already noted, the introduction of mandatory testing was not seen at the time as a particularly significant event. The Coventry Standard of 7 and 8 June 1935 carries a fourth item in its Motoring Notes, which refers to the increase in the number of cars on the road – an increase of forty per cent on the 1933 figures. This figure, it concedes, might be artificially high because of registrations by the manufacturers. But, leaping to the defence of ‘The City of the Three Spires’, as Coventry liked to call itself, it points out that Oxford, Chester, Manchester, Lincoln, York, London and Newcastleupon-Tyne could show increases of over thirty per cent. The mid-thirties was thus the era which saw the arrival of middleclass motoring (Richardson 1977), a phenomenon enabled by mass production, but driven by mass demand.

Warwick County Coventry Leamington

Week ended June 2, 1934 Died Injured 2 52

Week ended June 1, 1935 Died Injured 2 42

Year to date



11 1

3 1

5 1

Died 36

Injured 903 104 38

News reports feature two particular road traffic accidents. The first was one which resulted in a manslaughter charge being dismissed at trial through lack of evidence. The second records in graphic detail a head-on crash, in which the drivers and passengers had ‘a remarkable escape from serious injury’. The third newspaper local to Coventry, The Midland Evening Telegraph, also featured weekly Motoring Notes. On 31 May 1935 it made no mention of the impending tests, instead devoting most of its column to the effects of a reduction in the horse-power tax on cars, which had come into effect at the beginning of the year.

This phenomenal growth in the number of vehicles on the road was achieved at considerable cost. The Coventry Herald of 7 and 8 June 1935, which does not record the introduction of compulsory testing, does, however, report on the number of road traffic offences committed during the previous year and the fines paid. The Home Office had issued figures showing that in 1934 433,060 motoring offences had been committed, and £321,000 had been collected in fines.

The general picture, confirmed by a wider reading of the three papers, is of the ever-rising use of motor vehicles and the ever-rising carnage on the roads. There is no attempt to establish any causal connection, and no sense in the reporting that there is any cause for concern. As Richardson (1977) points out, the 1934 Road Traffic Act was presented by a concerned government to a population that resisted the ‘impositions’ it generated, particularly the introduction of a speed limit in built-up areas.

The driving licence and the driving test

Driving licences had been introduced in 1903 under the Motor Car Act. Their purpose was purely to provide a means of identification. They were issued by the Traffic Commissioners of County or County Borough Councils, and renewal resulted in an identical form being added to the hardback folder. No test was undertaken by the applicant. This system was to remain unchanged for twenty-seven years.

In 1930 a test for disabled drivers was introduced. This included a fitness test, and age restrictions were also introduced. Able-bodied drivers were required to sign a declaration of fitness. A system of endorsements to licences for certain traffic offences was also introduced.

The following year the Traffic Commissioners were given powers to test applicants seeking a licence to drive a public service vehicle for hire and reward, although there was no provision for mandatory testing of all applicants. The reason for this legislation was concern over the dangerous practices that had developed during the so-called ‘Bus Races’ of the previous decades. In 1934 this scheme was extended to the drivers of Heavy Goods Vehicles, the licensing scheme coming into effect on 16 February 1934.

The introduction of the essentially advisory Highway Code in 1933 did little to slow the rising death rate on Britain’s roads. The government introduced the 1934 Road Traffic Act, outlined above, as the first major legislation to regulate driving and drivers since driving had become a widespread activity.

It was anticipated that large numbers of applicants would have to be tested once any compulsory scheme was introduced, so, in order to avoid a rush of test candidates, the 1934 Act first introduced testing on a voluntary basis, with effect from 16 March 1934. The following June testing became compulsory for all drivers who had not held licences before 1 April 1934. The direct correlation between the issuing of a driving licence and the passing of the driving test has remained to this day, with two temporary exceptions when driving tests were suspended:
• from 2 September 1939 to 1 November 1946, as a result of the war;
• from 24 November 1956 to 15 April 1957, because of the Suez Crisis.

The introduction of compulsory testing was made not as a means of state control of motorists per se, but as a safety measure, to ensure that only competent drivers roamed the public highways. Even this measure was opposed by the motoring organisations, although not, it would seem, by the general public. The Pedestrians’ Association, however, was one of the few bodies which actually welcomed it. It appears that the car was the object of a love affair for the public, and the public did not want to see an end to ‘free love’. This preoccupation with the car, rather than with the whole process of motor transport, is reflected today in the modern approach to motor heritage.

The perfect example of this defining moment, the coming of age of the relationship between the state and the motorist, would be an examination result certificate dated 1 July 1935 – sadly, none have been found extant. Other motor and motoring material culture does of course exist from the period, most obviously a wide range of cars. UKMotorSport (2008), for example, lists over 40 motor museums in the UK alone. Often, indeed usually, however, the focus of these collections is so strongly on the motor vehicle itself, rather than on the act of driving, that it is necessary to establish a framework within which one can identify the types of material culture that might be expected to be found.

A restricted model of transport heritage

A visit to any of the major motor museums, such as the National Motor Museum at Beaulieu (Trust administered; not-for-profit sector) or the Museum of British Road Transport at Coventry (local government operated; not-for-profit sector), immediately makes clear that the car is seen as the overwhelming item to be remembered and revered. This is consistent with a model of Motor (meaning Motor Vehicle) Heritage, which can be generalised into a model of Vehicle-centred Transport Heritage.

At the core of the model is the product or artefact itself. It is in terms of the varied perceptions of this product that the dimensions of the model are developed. As the driving force of the model is the product, it is logical to develop these dimensions in the same order as the chain of supply, production and distribution. Doing so gives no indication, however, of the relative significance or importance of each dimension, this being essentially a function of the core product in each case.

The following uses the motor industry as an example, but the model is applicable to any area of transport heritage, and indeed to most forms of industrial heritage:

1. The supplier of raw materials: the steel manufacturer; the component manufacturer; the body builder
2. The builder: the car company
3. The distributor: the agent for the car company
4. The purchaser: the car owner
5. The user: the driver; the chauffeur
6. The servicer: the garage; the mechanic
7. The enthusiast: the model collector – and hence the children of the participants in the other ‘dimensions’

1 The supplier of raw materials
The significance of the heritage component in this area is a function of the extent to which output was committed to the specific type of builder. In the case of the motor industry, the steel industry’s output went to an enormous range of other industries, and so this is a relatively weak dimension of motor heritage. In contrast, component manufacturers and body builders might well have committed their entire output to the motor industry, and so their heritage dimension is strong.

2 The builder
As the producer of the core product, the heritage dimension of the builder is stronger than any of the other dimensions, and we might expect any heritage collection to focus on the builders. However, to deal exclusively with builders would be to ignore the other six dimensions of heritage.

3 The distributor
The distributor is likely to be committed exclusively to the products of one industry, even if not exclusively committed to one builder. We thus find retail garages distributing only cars, but cars from a range of manufacturers. Like the builders, this exclusive commitment results in a very strong dimension of heritage. Experience of UK motor heritage centres would suggest that this dimension is very definitely under-represented.

4/5 The purchaser/The user
The purchaser may or may not exhibit brand loyalty, and so this dimension is likely to be weaker than that of the distributor, for example. Consideration of this dimension reveals another significant variable in how heritage is presented – the extent to which artefacts were produced. In this dimension, very few artefacts were actually produced: products were generally produced for cars rather than drivers, and an attempt to display this dimension of motor heritage would struggle to find sufficient examples of driving gloves, maps, etc. In the case of the motor industry it might be necessary to distinguish between the purchaser and the user with respect to the early days of motoring, when the purchaser would often employ a user (chauffeur) rather than use (drive) the car himself. It might also be helpful to distinguish two classes of user – drivers (active users) and passengers (passive users). This latter distinction may not be great in the case of motor heritage when considering cars, but is obviously much greater if we are considering buses. An even clearer distinction between active and passive users is apparent in the case of railway heritage, where proactive use of the artefact – operating on preserved railway lines – is the norm, unlike the case of motor heritage, where passive display – the display, rather than the use, of cars – is the norm.

6 The servicer
This dimension is arguably the most under-exposed in industrial heritage attractions. Where there is active servicing, it is seldom visitable, on the genuine or otherwise grounds of Health and Safety. Certainly there are few examples where servicing takes place in the heritage as opposed to the contemporary context. As with the purchaser and user dimensions, the number of original artefacts may well have been small, and another variable becomes obvious – the number (low, in this case) of artefacts which have survived for contemporary presentation.

7 The enthusiast
The enthusiast presents a paradox in this analysis. On the one hand, it is because of the enthusiast that a heritage attraction becomes viable, from the perspective of the volunteer worker and, to a lesser extent, that of the visitor. Yet the corresponding artefacts that were produced for the enthusiasts of yesterday may well seem far removed from contemporary perceptions of heritage. Indeed, the majority were produced for children rather than enthusiasts, and displays of old Dinky cars, for example, seem to correspond to our perception of ‘childhood heritage’ rather than of ‘motor heritage’.

Although the strength of this model may be seen in how effectively it characterises the way that heritage centres and motor museums have developed by featuring the core product – just as railway and aviation museums have done by featuring locomotives and aircraft heavily – it is in this preoccupation with the core product that a weakness lies.

With the arguable exception of preserved steam railways, transport heritage developed along the lines suggested by this model results in a sterile and static presentation of vehicles. By diminishing the role of the other two key ingredients which, together with the core product, make up the form of transport, a uni-dimensional heritage is presented.

A holistic model of transport heritage

The other two key ingredients are:
• the human who controls the movement of the vehicle – the ‘operator’;
• the infrastructure which enables the vehicle to operate.

It is only in the last few years that the provider of the infrastructure has been anyone other than the state. At the core of this model is the triple overlap of vehicle, operator and state (Figure 7.1). The clearest example of this is the state control of the operator of the vehicle – the process of licensing driving by the state – and the enactment of the previous year’s legislation on 1 June 1935 was the ultimate move by the state to introduce this control. All three elements fall within a broader element – Society.

Figure 7.1 The major elements that contribute to the phenomenon of motoring.

The introduction of compulsory testing is therefore highly symbolic in the development of motoring in the United Kingdom, in that it uniquely symbolises the formalisation of a fundamental relationship between the vehicle, its operator, and the state.

As well as the symbolism of this emergent three-way relationship, there are other metaphoric watersheds that the introduction of compulsory testing represents:
• The coming of age of motoring – it is no longer on a scale where it is the activity of a minority; it has become such a mass activity that it requires regulation.
• The recognition of motoring as an activity of the people – the mass nature it has achieved indicates that there are no longer class barriers between drivers and non-drivers. The driving licence becomes proof of the ability to pass a test of skill; no longer is it an indicator of the owner’s class and wealth.
• A pivotal point in the development of transport has been reached. From this point forward motoring will be in the ascendancy and the railways in decline; individualised private transport will be the nation’s preferred option because of the degree of choice it offers, rather than the constrained transport of the railway system.

A comparison of the two models provides an indication of what is under-represented in mainstream motor heritage, or, to put it another way, what leaves it short of being motoring heritage – a comparison that requires the introduction of compulsory testing to be set in the broader context of licensing and testing, as outlined above.

Today’s motoring heritage practitioners and the material culture available

While the two models given above differ in the approach they take – the traditional view, here exemplified by motor heritage, being product-based, and the broader view, here exemplified by motoring heritage, being process-based – there is a measure of commonality. Both recognise the key role of the vehicle, but the broader view recognises other dimensions to the particular transport sector. Transport infrastructure has had little attention from official bodies with a concern for heritage, although there are exceptions. In August 1998 the Department for Culture, Media and Sport issued a consultation paper (DCMS, 1998) which included a tentative list of future nominations for the status of UNESCO World Heritage Site. These included two transport infrastructure items – the Forth Railway Bridge


and the Paddington to Bristol railway. Jones (1998) reports that ‘petrol stations are now being considered for listing by English Heritage as part of the post-war building listing programme’. She notes also the unique Grade II listed status of Park Langley Garage in Beckenham, Kent. Organisations which openly claim to be in motor heritage have started to broaden into the conservation of motoring heritage. Both the Museum of British Road Transport and, more recently, the National Motor Museum have reconstructions of early garages. The preservation of automobilia is arguably better served in the Republic of Ireland than in the United Kingdom – both the Museum of Irish Transport at Killarney (Figure 7.2) and the Kilgarvan Motor Museum having comprehensive ranges, the former especially so. The emphasis in such broadenings of scope tends to be towards the motorist rather than towards the state. There are few examples of material related to road design and road construction, for example, and little on the twin subjects of licensing and testing of drivers. In motor museums you are far more likely to find old editions of the voluntary Highway Code, first issued in 1931 (DSA, 2007) (see Figure 7.3), than examples of licences or test result forms. However, a market among collectors must exist, as Gardiner and Morris (1998) quote a suggested price of £10 for an early driving licence.

Figure 7.2 Thirties garage representation at the Museum of Irish Transport, Killarney. Photograph taken courtesy of the Museum of Irish Transport, Killarney. (Photo: author.)

Figure 7.3 Detail from original 1931 edition of The Highway Code. Photograph taken courtesy of the Coventry Transport Museum. (Photo: author.)

Licences which can be found in motor heritage centre archives fall into several categories:
• Owner’s Driving Certificates issued by the Automobile Club. These were issued before 1903 and specified the make(s) of car(s) owned.
• Standard post-1903 licences issued by local authorities, e.g. the City of Coventry and the County of London. An extra sheet was added at every renewal. These are in a semi-standardised format, although some variants, such as those issued by the City of Liverpool, are in a different format.
• Post-compulsory-testing forties-type licences, which refer to the Road Traffic Acts of 1930 to 1936 and various regulations relating to licences (1937 and 1947).
• Hackney Carriage Licences issued by the Urban Sanitary Authority.

Licensing of motor vehicles through the Road Fund Licence, or ‘tax disc’ as it is familiarly known, is well represented, with a number of continuous runs of such discs having been preserved. An example is illustrated as Figure 7.4, coming from a series dating continuously from 1935 to 1959 for a car owned by a director of the Humber manufacturing company.

Figure 7.4 Road Fund Licence from 1936. Photograph taken courtesy of the Coventry Transport Museum. (Photo: author.)

As well as these examples of the bureaucratic relationship which developed between the state and the motorist, there exists a wide range of material culture which links the two with respect to the practice of motoring. This includes, for example, a range of street furniture to help the motorist navigate (Figure 7.5) and to alert him/her to various hazards.

Figure 7.5 Road sign from the Coventry area. Photograph taken courtesy of the Coventry Transport Museum. (Photo: author.)

Although the range is wide, the quantity is small when compared with the number of vehicles preserved. Also relatively under-represented in terms of preservation is material culture which links other stakeholders, such as petrol companies. For example, the promotional heads of petrol pumps, a familiar sight until at least the early seventies, are rare in heritage centres, although those which have survived (see Figure 7.6 for an example) are worthy of preservation in their own right. Few garages extant in the 1930s have survived, but Jones (2008) provides a guide to some notable examples.

Figure 7.6 Traditional advertising head from a petrol pump. Photograph taken courtesy of the Coventry Transport Museum. (Photo: author.)

The rate of survival of large examples of material culture which cannot be preserved other than in situ has not been good. Where there are numerous examples of preserved railway lines, no parallel mechanism has developed for the preservation of roads – roads are not closed and sold off to not-for-profit volunteers to reopen. It would, however, be feasible to list sections of roads for conservation. In the Coventry area there are several examples of unreconstituted thirties-specification roads, including a section of the A45 bypass, measuring only a matter of a few hundred yards, that still retains the following cross-section:

Pavement – Cycle track – Grass – Carriageway – Central reservation – Carriageway – Grass – Cycle track – Pavement

This particular section is particularly important as it predates the more formalised specifications issued by central government shortly after the Second World War (HMSO 1946). There are far more numerous examples of heavily modified thirties-specification roads.

Coventry has a clear claim to be the historical centre of the motor industry in the United Kingdom. As late as 1982 Mallier and Rosser were able to write: ‘To the public mind Coventry has been and still is a “car town”.’ (1982: 20) The industry was certainly in decline, and Thoms and Donnelly (1985) refer to the ‘degree of contraction that has occurred over recent decades’ in the city’s motor industry. Although four manufacturers survived at the turn of the twenty-first century – Peugeot (a direct descendant of Humber and Hillman production, via Rootes and then Chrysler), Jaguar (now owned by Ford), London Taxis International (manufacturers of the famous black cab) and Massey Ferguson – only London Taxis International survives at the time of writing.

Not surprisingly, Coventry is the home of the Coventry Transport Museum, formerly the Museum of British Road Transport. The vast majority of material displayed consists of cars. There are artefacts from six of the seven categories identified in the traditional model above, with particular strength in model cars, the Museum having been the beneficiary of the gift of a very extensive collection.

This self-image of Coventry the Car Town has resulted in some unusual material surviving. The A45 bypass section mentioned above is not alone in betraying very clearly its 1930s’ origins. The radial road leading eastward from the city centre, for example, has traces of high-specification service roads running alongside, in a section called (then somewhat futuristically) Moumas Boulevard. There are also lengths of a middle ring road, named Hipswell Highway and Sewall Highway. Here rows of trees protect the pavements from road traffic. In many places, the grass setting of the trees has either been invaded by the parking of cars or has even been lost to tarmacadam, in recognition that cars will park there whether the Council sanctions it or not. The traces of state-provided infrastructure which survive intact from the thirties are thus few. Certainly the major damage of the Blitz and the subsequent rebuilding have not helped the conservation of state-sector motoring heritage. In the private sector there has been much the same scale of change and consequent loss of heritage. Not a single city centre filling station survives, and only one car showroom survives on its original site; a new building has been built around the shell of the old one, and it is hard to see any continuity of structure. Of the 137 car factories that were in operation at one time in Coventry, a few buildings survive, not through conservation as motoring heritage, but as buildings which have found a new use.

In Coventry, then, there is a very strong form of motor heritage, exemplified by the Coventry Transport Museum. The survival of motoring, rather than motor, heritage material has been haphazard and patchy. In spite of the City’s plans to exploit its industrial heritage – the Phoenix Initiative, a project to redevelop the centre of the city, has seen the Coventry Transport Museum become a natural focal point for visitors to the city – there are large gaps in those material items that reflect the state’s involvement in the development of road infrastructure. This is at least in part due to the difficulty of finding funds to catalogue the large collections of items other than vehicles. This cataloguing has now started, and the prospect of more ‘motoring’ items on display is encouraging.

Without the state development of the road system, the car industry could never have grown in the way that it did, bringing prosperity to the City for many years. Equally, without the state imposing compulsory driving tests, motoring would only have developed at enormous human cost to the peoples of the United Kingdom. Motoring heritage must be nurtured as strongly as motor heritage has been. The impact of motoring on our daily lives – both through the personal freedom it gives us to pursue work and leisure, and through its impact on distribution systems which provide goods and services that were simply unavailable a century ago – warrants the inclusion of motoring in a collection of defining moments of the twentieth century.

The first of June 1935 was indeed a defining moment for the motorist. No longer was motoring such a care-free (and to some extent care-less) activity – the state had intervened in the cosy relationship between motorist and car, and from then on the relationship was to be a three-way one involving monitoring, regulation and, since licences could be withdrawn for breach of motoring regulations, restriction.

References

DCMS, 1998. UNESCO World Heritage Sites. London: Department for Culture, Media and Sport.
DETR, 1999. Online at st/1.htm. Consulted 6 November 2000.
DSA, 1992. The History of the British Driving Test. London: Driving Standards Agency.
DSA, 2007. Online. Consulted 17 June 2008.
DVLA, 1999. Online at tm. Consulted 6 November 2006.
Gardiner, G. and A. Morris, 1998. Automobilia: International 20th Century Reference with Price Guide (3rd edition). Woodbridge: Antique Collectors’ Club Ltd.
HMSO, 1946. Design and Layout of Roads in Built-Up Areas: Report of the Departmental Committee set up by the Minister of War Transport. London: Her Majesty’s Stationery Office.
Jones, H. 1998. Buildings designed to advertise fuel. British Archaeology 38: 10.
Jones, R. 2008. Surviving old garages, dealerships and petrol filling stations. Online. Consulted 17 June 2008.
Mallier, A.T. and Rosser, M.J. 1982. The Decline and Fall of the Coventry Motor Industry. The Business Economist 13, 3: 12–28.
NPS, n.d. Highland Park Ford Plant. Online. Consulted 17 June 2008.
Richardson, K. 1977. The British Motor Industry 1896–1939. London: Macmillan.
Thoms, D. and T. Donnelly, 1985. The Motor Car Industry in Coventry since the 1890s. London: Croom Helm.
UKMotorSport, 2008. Online. Consulted 17 June 2008.


Chapter 8

Commentary: Visions of the twentieth century

Cornelius Holtorf

Preface

This picture essay is a commentary on the archaeology of the twentieth century as portrayed in this book. In taking up some of the same issues as the other chapters and putting my own visual spin on them, I hope to make further contributions to their subject matters and, by implication, also to the archaeology of the twentieth century itself.

The twentieth century might be described as a century in which images carried a particular significance. It has been the century of film and TV, of press and fashion photography, and of glossy magazines. But it has also been the century of snapshot photography and home video. Images like the earth seen from space or the Afghan girl on the cover of National Geographic Magazine have become icons of the century. They are omnipresent and widely known to an extent that they do not need to be reproduced here: my words suffice to recall them in readers’ minds.

The images I have selected are all images that I find original, intriguing and insightful in what they express about the twentieth century and indeed our own time. They are no great works of art, but tokens of how I read the chapters in this book. They reflect how I see both the archaeology of the twentieth century and archaeology in the twentieth century, with a natural bias towards my own lifetime and my own personal experiences in archaeology and beyond. I deliberately tried to give this contribution a less Anglo-centric feel than characterises the rest of the book, but I am aware that my own perspective is tinted by a northern and central European bias.

In the captions I provide some hints at what I was thinking in selecting the images, or – more accurately – what I have been thinking after having selected them. No doubt readers would have chosen other motifs, and when they see the images I selected they may be reminded of other things; but that only corresponds to the fact that the twentieth century itself has been depicted and interpreted in many different ways. It is also increasingly remembered in different ways. So this, then, is my archaeological account of the twentieth century – a century captured in images and imagined through their captions.



Archaeology makes the past concrete

The image shows the material trace of a specific, precise moment of the twentieth century. What looks like the fossilised imprint of an extinct plant species is in fact a very special archaeological artefact of global significance. Once part of a Soviet missile ramp in Cuba, this fragment had its moment of fame when it played a (very minor) role in bringing the world to the brink of a nuclear war in 1962. Unfortunately, as the archaeologists Mats Burström, Håkan Karlsson and Anders Gustafsson, who recovered this artefact, noted, none of its historical significance could have been inferred from the piece of concrete and its immediate context itself.

So why do we need an archaeology of the twentieth century? What can an archaeology of the recent past and of the contemporary world teach us that we do not already know from other sources? Is contemporary archaeology a contradiction in terms? Does the study of twentieth-century material have implications for the study of other ages? In one sense, all the past, all remains of the past and all archaeology are always contemporary – or we would not have them. But contemporary archaeologies are also distinct from other archaeologies. By focussing on specific sites and their material dimensions in the present, the archaeology of the contemporary world can make the familiar past concrete: it happened right here, involving this very object. Contemporary archaeologies are also able to put new twists on well-known histories. How did the Cubans themselves actually live through the crisis that centred on missile bases erected in their neighbourhood? What do they recall today when the remaining concrete rubbish reminds them of that time?

Contemporary archaeology is thus not only a new field of study investigating and appreciating broadly twentieth-century remains, but also a new approach more generally, emphasising the contemporary world and in particular any affected local communities. Its questions and approaches can best be developed regarding twentieth-century material, but they are subsequently also applicable to other archaeological subjects. Archaeologies of the recent past and the contemporary world thus have the potential not only to make us see past and present in a different light but also to affect contemporary people in new ways. This will eventually benefit all archaeologies.

Photograph by Christer Åhlin, reproduced with permission.


The medium is the message

This image was designed by Jon Lomberg and photographed by Simon Bell in order to be sent into outer space on board the NASA space orbiter Cassini. Cassini left earth in 1997 and is destined eventually to remain on Saturn’s moon Titan. The image was to communicate human life on earth to non-human creatures, who may eventually find it on Titan. As Lomberg explained, among the people shown you find all the major groups of our species. Their postures indicate the range of human movement, their ages the human lifespan, their clothes a sample of how we dress on earth (when it is warm, at least). A baby nurses at its mother’s breast, whereas other children and adults are clustered around the eldest, who is telling a story – the story of the small object in her hand, which is the diamond wafer on which the photo is inscribed. In the background of the Hawaiian scenery a couple walks hand in hand and a sailor prepares to launch a canoe on a voyage, as so many mariners have done throughout human history. Now humans are reaching out for the stars, which is often considered the hallmark of only the most advanced civilisation(s).

It must have been irresistible for NASA to contemplate making contact with extraterrestrial beings. Ever since the Pioneer 10 and 11 spacecraft were sent into space in 1972/3, the second half of the twentieth century has seen several technically advanced messages about humanity being shot into space. But the real purpose of these initiatives was perhaps never sending messages to other creatures of and about whom we know nothing. Instead, by using spacecraft messages as their medium, these projects have really been transmitting ideas about being human – and being American – on late twentieth-century earth to other human beings on the same planet… As it turned out, the Cassini rocket eventually left without the disc containing the image. Disagreements concerning copyright and corporate sponsorship made NASA drop the project in the final hour. In fact, personal, social and ethical conflicts may reveal more about human civilisation on earth than the image ever could have done. In that sense the project succeeded after all.

Photograph by Simon Bell, reproduced with permission (Contact in Context: v2i1).



Domesticating shipwrecks

Few scenes are as appealing to the human eye as the underwater remains of sunken ships, surrounded by ruins of a lost city and inhabited by corals, sharks, octopuses and other exotic species of the sea. They can be found in decorative aquariums in people’s homes and offices, in the dentist’s waiting room, or as educational facilities and attractions in their own right, for example at the commercial Sea Life centres or in zoos. The example shown in my image is from the Ocean exhibit in Burgers’ Zoo near Arnhem in the Netherlands. Aquaria and zoos domesticate the experience not only of wild animals but also of the material remains of other ages. They bring us face to face with other worlds, and other ways – and forms – of life.

The vision presented here is full of metaphor and myth. It reminds us that the human project is fragile and may not be forever, that nature will eventually win over even the most advanced civilisations and reclaim what we took. At the same time, underwater shipwrecks are tempting human explorers to investigate further those spooky sunken worlds below the surface of the sea by bringing to bear on them the most advanced technological miracles available. These sites are time-capsules par excellence. They may have treasures of the past waiting for us, irrespective of whether that treasure is entirely literal, such as jewellery, or mostly metaphorical and comes in the form of emotional stories about human desires, hardship, heroism or sacrifice. The seeming victory of nature over civilisation is thus rendered into nothing but a temporary episode. Its concrete manifestations we can conquer and domesticate in the same way as we conquered and domesticated the most remote regions of the planet on dry land. During the twentieth century, underwater exploration has become not only a celebration of the Romantic notion of lost dreams and sunken triumphs but also an almost magical invitation to dream again and work in a dramatic fashion towards yet more triumphs of modern technology. Both stories express different kinds of fascination people have with archaeology, touching them emotionally through ‘archaeo-appeal’.

Photograph by Cornelius Holtorf.



Commemorating National Heroes

This is the communal war memorial for the dead of World War I in the southern Danish town of Haderslev, which also carries the German name of Hadersleben. It recalls the names of the local individuals who were slaughtered in a brutal war. The monument is also a reminder of the significance of archaeology’s engagement with the past in recent centuries. The choice of a prehistoric megalithic tomb as decoration for a monument commemorating historic events of the early twentieth century was certainly no coincidence. The solemn simplicity of the dolmen expressed what contemporary politics seemingly shared with the prehistoric past. Both war memorials and prehistoric tombs commemorated heroes who had lost their lives while making history for their communities. Both were made of solid stone, keeping the past present for an eternity while also evoking the solid determination of the bereaved community to remember and move on. Both promised a collective future rooted in a primordial past. In addition to these commonalities, the remote past could symbolise the soldiers’ idealised sacrifice for the nation in a way that fresh memories of the atrocities of war could not. The flesh of the Neolithic forefathers had long rotted away and their reputations had become splendid as Our National Ancestors. In our minds, men who fought in the Great War are first and foremost associated with memories of their own and others’ deaths, whereas people who settled and built megaliths in prehistory are first and foremost associated with what they accomplished during their lives. Suspending the differences between the two goes some way towards transforming war memories into historical appreciations and even enthusiastic celebrations. The war dead were thus elevated to national heroes. As the archaeology of twentieth-century conflict and commemoration re-evaluates the cold realities of nationhood and war, even megalithic tombs might profitably be studied as contemporary battlegrounds. They are among the places where important battles for the hearts and minds of people have been won or lost. Photograph by Cornelius Holtorf.



Discoveries below the surface

Conventional wisdom has it that archaeologists appear only long after medicine has failed and a body has been interred. But according to another perspective doctors are really archaeologists of sorts, too, and vice versa. Both work according to the general insight that truths are to be found underground. In modern medicine, as in archaeology, there is no final diagnosis unless the bodily symptoms have been carefully examined and the secrets that lie below the surface have been fully revealed. The earth is for the archaeologist what the body is for the medical doctor. Archaeological consultants apply their skills by cleanly executing cuts, carefully removing the ulcers, and fixing any fractures. All activities and discoveries are meticulously recorded for later checks. Various special tools and high-tech devices are used – from toothbrushes and dentistry instruments for the more delicate operations to non-intrusive geophysical measuring instruments and ground-penetrating radar. During the operation the site must be kept clean, and when the work is done all must be sterile. Often there is a sense of urgency. Just as doctors will save human bodies from death, archaeologists will rescue ancient remains in the ground from imminent destruction and permanent oblivion. When the archaeologist is satisfied with the work done, the wound in the earth is filled and left to heal, but scars will remain visible for some years if not forever. The next generation of specialists learns from the experience of the previous one, benefiting from model representations like the torso of a girl from around 1930 shown in the picture. Dig where you stand, they said, not dreaming that they might be excavated as they stand. Future examinations will benefit from new research and systematic trials that will get to the bottom of what is apparent on the surface. And often enough a fuller story only emerges after all the unpublished archival records have been meticulously studied and analysed. During the twentieth century more discoveries were made below the surface than ever before. Photograph: Reproduced with permission, Deutsches Hygiene Museum Dresden.



The hyper-real living room

Over the past few decades television has become the single most important medium in present-day human social life. What is continuously being broadcast about the world directly into all our living rooms (as well as into many kitchens, bedrooms, studies and children’s rooms) not only shapes to a large extent our knowledge and perspective of the world, it also informs how we behave and what we communicate in – and about – that world. Many archaeology students choose their subjects after being inspired by globally significant movie heroes like Indiana Jones or widely appreciated archaeologists like TV celebrity Mick Aston of Time Team fame. The small screen transports these characters’ archaeological adventure and detective stories from anywhere on the globe directly into our lives, as in this image which I took at the beginning of the twenty-first century in the most ordinary bleak English Bed & Breakfast room. On that evening, the medium transported me effortlessly to another time and another place. This sensation became most ordinary for very many people during the twentieth century – so much so that the fictitious past elsewhere is also part of the actual here and now, which has been rendered hyper-real. The impact of broadcast television programmes is subsequently exploited by various kinds of businesses, including the mass media themselves and the advertising agencies that support most of them. Professional archaeologists, too, allow Hollywood and the Discovery Channel, among others, into their living rooms. Few discussion topics among archaeologists tend to attract greater interest than the subject’s latest representation on television: was it accurate? Does it matter? Largely thanks to television, broad sections of the population consider archaeologists as being occupied with Indy-like adventures, criminological clue-hunting and spectacular discoveries. Recognising this currency, archaeologists, television producers and journalists perpetuate these very clichés in their own story-telling. Local perceptions thus reflect and inform global representations at the same time. Mediated fictitious realities can no longer be clearly separated from unmediated actual realities. That too is a classic twentieth-century film topic – my favourite example being Total Recall (1990), where Arnold Schwarzenegger inserts bought memories into his brain and it becomes increasingly unclear which part of the fiction is a fictive reality and which is a fictive fiction. Photograph: Cornelius Holtorf.


My other car is also a Porsche!

The car is a household item and icon of the twentieth century. As material culture it makes for a great object of study by contemporary archaeologists. Not everybody owns a car, but many have a driving licence and practically all use cars, e.g. taxis and rental cars. Coming of age in the twentieth-century West has for many been marked less by the right to vote and more by the right to drive! Cars are about transporting people and belongings from one place to another. But they are also about collective identities, shared values and social status. Owning brands like Volvo, Porsche and Rolls-Royce, or driving sports cars, cabriolets or urban off-road vehicles (SUVs), sends strong messages to other people about who you are – or at least who you think you are or would like to be. Driving a car is sometimes considered an inalienable human right, and minor motoring offences in relation to parking or speeding are accepted by many as the price to pay for their freedom. On the national level, brands like Renault, Rover, Volkswagen, or Volvo symbolize much more than their economic value (or lack thereof). They have become icons of national economies in their respective countries of origin, and it creates huge outcries if they are sold abroad or threatened by closure. Recently cars have been significant in environmental debates, both in the context of reducing the use of non-renewable resources like oil (petrol) and in the context of minimising various emissions into the air. Twentieth-century children generally play with cars from very early on. Today, they often do so to such a great extent and with such enthusiasm that I have been wondering what children actually played with before there were cars. We are so used to cars that one of the earliest vivid memories of my own childhood in the early 1970s is of dancing on the road on a car-free Sunday during the energy crisis. During many long car journeys throughout my teens, roads became an extended exhibition and I learned to identify not only car makers and models but also the geographical clues contained in the (West-)German number plates. Who needs ancient pyramids if there are motorways for discovering wonderful things? Photograph: Cornelius Holtorf.



Reflections on reflexivity

Twentieth-century intellectual debate has gone through what has been described as a reflexive turn. This turn was caused by the realisation (rightly or wrongly) that whatever insights about the world we may want to gain or express, our engagements with that world and representations of that world will always first and foremost tell us something about ourselves. Whether in the philosophy of science, anthropology, or the visual arts, among other fields, attention was therefore given to the question of precisely what our work reveals about ourselves and to what extent both that work and these revelations tell us something important about the world around us, too. The image shows me making final changes to this very page. It reminds us that in order to appreciate any (published) account we need to appreciate the context from which that account originated. Who is writing? Under what circumstances? With what intentions and desires? These are questions that need to be asked especially with regard to seemingly authoritative publications like the present volume. Is this book more than a pretty worthless ‘archaeology’ of a group of authors who are neither representative nor particularly qualified to assess the twentieth century but somehow felt that they had something valuable to say? Why are there not more female authors? Why are there so few authors from outside the UK? Why did nobody write about topics as significant as the Russian and Chinese Revolutions, the world economic crisis of the late 1920s, the Holocaust, Hollywood, or modern literature and art? For which of these outcomes should the editor be blamed? Why did I choose to ask precisely these questions? Which questions might be asked about my own contribution? Although it has never been resolved whether a reflexive turn was really necessary (or at least beneficial) in order to improve our understanding of the world, or whether it amounted to unnecessary navel-gazing, as some onlookers have suspected all along, in recent years the issue has no longer attracted the same attention. Might this tell us something about academic fashions and the commodification of ideas, which are superseded by others at possibly ever shorter intervals? Or does precisely the currency of such questions indicate a lasting impact of the reflexive turn after all? Photograph: Cornelius Holtorf.



Comedy is tragedy plus time

This bon mot, which crops up in Woody Allen’s Crimes and Misdemeanours (1989), is wonderfully illustrated by the Danish bunker graffiti from about half a century after the construction of Hitler’s Atlantic Wall in occupied northern Jutland. Even more tragic and traumatic, however, were the experiences of the Germans between 1933 and 1945. They had voted into power a political leader, Hitler, who not only started a World War but also committed genocide against millions of people and eventually left behind a country in physical and moral ruins. It took just over half a century until it became possible to make light of Hitler, World War II, and the Holocaust. In 1998, when I saw a copy of Walter Moers’ Adolf comic in a bookshop in Germany, I bought it straight away and proceeded to read it that very day with equal amounts of fascination and sheer enjoyment. At about the same time I also saw Roberto Benigni’s concentration camp comedy Life is Beautiful (1997). Having grown up in Germany during the 1970s and 80s, this was the first time my generation laughed out loud about the national trauma of Hitler and all that. It felt enormously liberating. Adolf, which had two sequels, became a cult phenomenon in Germany.2 Maybe Hitler as an evolving cultural phenomenon in Germany is as unexpected as the continuing interest in Allied acts of war heroism in the former Allied countries. These are instances of ‘history culture’ which signify the roles of history and indeed archaeology in contemporary culture. Ultimately it is this kind of constructed history and remembrance that shapes the way contemporary audiences view the past. What the past was like when it happened not only holds little significance for the present but is also impossible to tell. For does not this volume show that we are not necessarily any wiser regarding the essence of what happened when dealing with an age during which many of us were alive? Although we have only recently left behind the twentieth century, it has already become a cult topic and a cultural phenomenon whose meaning evolves fast. Photograph: Cornelius Holtorf.



don’t miss:


It’s Rubbish! highest point in Lund – offers a great view as far as Malmö and even all the way along the Öresund bridge to Copenhagen and Denmark which you can see along the horizon. When snow falls in the winter, the hills are in fact the best place for sledging in a wide area and become crowded very quickly. What many people do not know is that underneath the snow and their own feet lie archaeological treasures that so far have escaped looting. The finds to be made here will tell future archaeologists about consumption, production, refuse and construction patterns of mid twentieth-century Lund. Contrary to what one might expect, not even all organic material will have decayed. In American landfills investigated by archaeologist William Rathje and his team, even banana skins had survived for decades. The twentieth century was among other characterisations also the century of unparalleled quantities of rubbish. In retrospect it may not be surprising that it saw the emergence of applied archaeology as social science, studying rubbish. Photograph: Cornelius Holtorf

Monte Composti in the Swedish city of Lund has been climbed from many different routes and reached in almost every conceivable way. It has been climbed by the blind, the very young and the very old. There may be a fair amount of material remains of twentieth-century activities on Everest, but Monte Composti actually consists of the material remains of twentieth-century activities. The hills are entirely artificial and emerged between 1947 and 1967. There is a rubbish tip at the bottom followed by systematically constructed layers of household and industrial rubbish alternating with soil from construction work in the area. The entire area, officially known as St Hans Hills, is around 40 hectares and was designed so that after the closure of the landfill it would serve best the recreation purposes of local inhabitants. This not exactly being the world’s tallest mountain, the area is today popular among dog-owners, walkers and joggers throughout the year. On clear days, Monte Composti’s peak – with 86 m over sea level the



The dreams of science and technology

Since 1968, Legoland at Billund in Denmark has presented Denmark and the world in miniature – or at least those places in Denmark and the world that have been selected as most interesting when built in miniature with Lego building blocks. Among them is a rocket launch site inspired by Cape Canaveral in Florida. The children love it when the broadcast countdown ends and steam is released from just below the large rocket boosters. Ten – nine – eight – seven – six – five – four – three – two – one – lift-off. For many people of the twentieth century, this sequence epitomized the Space Age as much as the first man on the moon. I suspect that it will not be a few launch sites in different parts of the world or numerous bits of satellite debris in various orbits around Earth but cultural representations like those in Legoland that will come to be seen as the foremost examples of space-related heritage in the future. Do theme parks, toys and other items of popular culture capitalize on commodified trivialisations of the world? Undeniably to some extent they do, thereby contributing to an important part of the economy. But you can also look at it in another way. Popular culture offers visitors the opportunity to participate in a playful way in some of the same twentieth-century dreams as the very scientists and engineers who built real space rockets. For even science and technology, like archaeology, are to some extent about dreams and play. The dreams may be about pushing the frontiers of knowledge ever further and being able to use that knowledge for the good of humanity. The play may be about trying out new ideas and creatively exploring their implications and applications. At the same time, celebrations of science and technology in popular culture implicitly express some of the underlying values and ideologies of the politicians who supported and sometimes directed this work for the sake of national pride or other priorities of the day. This is why a critical attitude is an important asset, prompting people to ask about the context and implications of the latest advances of science and technology (and archaeology). Let us hope that critical thinking will become so widely known in the present millennium that it can be commodified as yet another dream entertaining people in theme parks … . Photograph: Cornelius Holtorf.


Innocence lost

Having lost their political innocence during the twentieth century, archaeologists are sensitive towards the social and ethical consequences of human behaviour, sympathising with those who are disadvantaged. George Orwell’s Animal Farm (1945) presents a dire warning of what can go wrong in human societies, but whereas his novel is fictitious, modern zoos present a similar warning using living animals. Zoo animals are anthropomorphised in many ways, both by visitors and by the zoo management, so that their displays of animals often function as parables for human family and communal life. In fact, many zoo visitors are entirely preoccupied with interpreting animals in human terms. This is worrying not only because it misrepresents animal life but also because it misleads regarding human life. Contemporary zoos, for example, take great care that animals do not breed beyond their own kind. But it does happen after all, and at one enclosure in Virginia Zoo, Norfolk (USA) I noticed an apologetic sign stating that, ‘The zoo did not intentionally acquire this animal. We purchased a pregnant bison that was supposed to have a bison calf. It instead gave birth to the beefalo. Mistakes happen!’ Another disturbing case concerns the wild grey and red squirrels in Europe. Having been introduced from North America (the image shows a Canadian grey squirrel) during the nineteenth century, the better adapted and more daring grey squirrels have slowly been invading the habitats of the native red squirrels, which they threaten to make extinct. In some cases, conservationists have set up a system of separate development to stop this process of natural selection. According to its own publicity material,3 the Dundee Red Squirrel Project, for example, reasons that, ‘not long ago these animals were common throughout Dundee but the rising population of competing American grey squirrels has put their survival in danger…. The Project is helping the red squirrel by taking practical action in and around the remaining strongholds.’ Actions include the construction of feeders ‘that can only be entered by red squirrels … to help them survive lean periods and be fit to breed in places where greys are competing for food.’ In addition, ‘the Grey squirrel population is controlled by trapping and shooting in designated areas in and around the places where reds live.’ This is not only animal apartheid and sectarian violence against people’s most hated animals but also a manifestation of latent anti-Americanism – all these being phenomena of the twentieth century. Photograph: Cornelius Holtorf.




Archaeology in the age of outside play

Towards the end of the twentieth century, the internet was occasionally considered a place where nerds and loners meet each other in a fictitious world that is divorced from the ‘real world’ and effectively an alternative to social reality. Although it is certainly a meeting place for all kinds of people, the net and the web have in recent years emerged as a very social reality indeed. Email, discussion groups, bulletin boards, personal messages, virtual learning environments, internet television and telephone, video conferences, Second Life, Facebook, YouTube, Flickr, Ebay, etc. allow a wide range of interactions that suffice to create, develop and maintain complex social relations. It no longer seems to matter much whether good personal friends or acquaintances are physically close or far away. Even colleagues working in neighbouring offices find it increasingly convenient to communicate electronically, in the same way they communicate with colleagues on other continents. The internet has become so central to the society in which we live that many find it inconceivable to be without it for too long. Today many of us do go outside and play, but we do

this while still being online, thanks to new electronic gadgets. To me, the archaeological dimension in all this lies not only in the possibility of investigating the internet’s various ‘intangible artefacts’ as material culture but also in the fact that internet socialities are fundamentally transforming the discipline of archaeology itself. Archaeological scholarship is now more global than ever before, and it is not unusual to work daily in close cooperation with people on different continents – much as global companies have been operating for some time, but without their financial resources. There is much additional potential for further developing e-meetings, e-learning and e-publications in order to co-operate ever more widely and reach ever larger audiences on a global scale. That potential is matched by financial pressures and rewards: to attract international students to academic courses, to submit competitive research proposals, to win large-scale contracts from trans-national funding bodies, and to display international excellence in research outcomes such as publications. Sometimes I do indeed want to turn off my computer and go outside and play.



Penguin normalities

A few years ago, after Bremerhaven Zoo in Germany had found that three of their Humboldt penguin pairs were actually each formed by two males, they brought in four additional female penguins from Sweden in order to increase reproduction among their penguins. The episode attracted wide media attention when gay and lesbian associations protested loudly that the penguins should be allowed to be gay. The zoo director eventually abandoned the original strategy and stated that, ‘All can live here as they please.’ Her change of mind was, however, almost certainly not due to the queer lesson learned that it is OK for zoos and their animals to be at odds with the dominant norms. More likely she felt pressured by all the public attention caused by the media, which had championed a legitimate cause that would help them sell their product. However, do you have to be gay in order to be queer? Certainly not. Having read the paper by Thomas Dowson, I felt that a different kind of commentary was required than what I had originally planned, one that disrupted my own normality, which is already as far from ‘true normative archaeological fashion’ as I manage. I am convinced that the significance of studying the past (or indeed the present) as an archaeologist lies in the valuable contributions it can make to contemporary society. Archaeology is thus inherently concerned with the present, not the past. Archaeological epistemologies are weak, so that in practice the discipline of archaeology is largely not about knowledge but about conventional rules of engagement (discourse) that govern the authority to speak for the past in the present. Queering archaeology means to challenge these rules and to reveal the arbitrary base of archaeological authority by toying with its conventions. The aim is to create spaces in which it is possible to construct other pasts and presents which are ultimately more valuable to society because they speak more to all of us. What I manage is limited, of course. Challenging rules and authority is one thing, but constructing other pasts or presents is quite another. The difficulty lies in the fact that the latter requires the additional space that the former first needs to create in dialogue with others. One specific problem that arises for a queer archaeology is how to make sense of the depicted penguins at Copenhagen Zoo: how do people stage their lives? What do their lives tell people? Photograph: Cornelius Holtorf.


Nothing ages faster than the future

As the Millennium celebrations in the UK illustrate, memory-work is usually backward-looking and focuses on heritage. A major exception is time capsules, which were also deposited in connection with the Millennium (the phrase ‘millennium time capsule’ gives 3,820 hits in Google). They contain commemorative messages or items of the present chosen for a presumed benefit of future generations. Oglethorpe University in Atlanta, USA, houses the Crypt of Civilization built by Thornwell Jacobs (1877-1956).4 He intended to preserve a thorough record of our civilization, illustrating not only life and customs in 1936 but also the accumulated knowledge of humanity up until that time. The crypt (see image) was closed on 28 May 1940, is still maintained to the present day, and is due to be opened again after six further Millennium celebrations (Y3K-Y8K) on 28 May 8113 AD. It contains over 640,000 pages of microfilmed material, hundreds of newsreels and recordings, a set of Lincoln Logs, a Donald Duck doll and thousands of other items, many from ordinary daily life. There is also a device designed to teach the English language to the Crypt's finders, who incidentally may include direct descendants of at least

some of us. It remains to be seen whether future archaeologists will consider this designed ‘closed find’ a welcome source of additional knowledge, an attempt to manipulate future histories, or an insult to their own abilities. But they might all agree that any incidental traces originating from the design, closure or subsequent maintenance of the crypt are far more interesting, and also more credible, evidence of life in twentieth-century North America. Oglethorpe University is also the seat of the International Time Capsule Society (ITCS), founded in 1990, which runs a registry of all time capsules on earth. The ITCS estimates that there are approximately 10,000 capsules worldwide. Ironically, most are lost only a few years after being deposited, usually due to thievery, secrecy or poor planning. In 1991, a list of the ‘10 Most Wanted Time Capsules’ was released by the ITCS. To date, only three of them have been found. Many Millennium time capsules have probably already been lost as well. Photograph: Oglethorpe University, reproduced with permission.



Forget conservation!

A boom of heritage and conservation has been one of the most important characteristics of the modern twentieth century. There seems to be little that in some way ought not to be ‘rescued’ and ‘saved’ for the future. Even a collection of car wrecks in the forest can become a minor heritage attraction – as the example of Åke Danielsson’s collection of metal in Kyrkö Mosse in Småland, Sweden, demonstrates (see image). Just about anything associated with natural and cultural heritage attracts considerable interest, goodwill and support if linked to notions of endangered existence and a believable threat of imminent loss. Some commentators have observed ‘an official cult of historic heritage’ and have diagnosed a ‘Noah complex’ of excessive conservation in our age. Both citizens and experts are increasingly unable to come to terms with the inevitable, unstoppable, and to some extent desirable processes of extinction and destruction. Yet based on our knowledge of the past, the only thing we can say for sure about the future is that not very much, and not even our heritage, will remain the way it was. Will future generations at prehistoric monuments such as Stonehenge remember

little else than the heritage industry, planning policies and conservation techniques of the twentieth century – and thus at best remember remembering prehistory? It might not be long before the concepts of rescue and conservation as such have become a valuable – and by then perhaps endangered – part of our cultural heritage. But will there be calls to conserve conservation? Or will English Heritage by then have been replaced by English Oblivion? As everything is likely to acquire new values in the future, English Oblivion will have realised that its most significant social and political responsibility does not lie in selecting and preserving assets that despite all change will remain as precious as they are considered to be now. Its difficult task will instead consist in identifying and getting rid of rubbish that it deems will never again be sufficiently appreciated. English Oblivion will see change as an attribute of heritage, not only as an impact on it, and heritage as something inherited, not necessarily something to keep. Photograph: Cornelius Holtorf.


DEFINING MOMENTS

Acknowledgments

I would like to thank John Schofield for the invitation to contribute to this book, as well as Sarah May and especially Angela Piccini for thoughtful comments on the penultimate version of this contribution. For their help with the images I am grateful to Simon Bell, Mats Burström, Elizabeth Pittman, and Christoph Wingender.


Chapter 9

16/17 May 1943
Operation Chastise: The raid on the German dams

Richard Morris

Sunday 16 May 1943 was hot, with little wind and a clear sky. Towards dusk, as the day cooled, the first of a force of nineteen Lancaster bombers of the Royal Air Force took off from the aerodrome of Scampton in Lincolnshire. Led by Wing Commander Guy Gibson, flown by crews who had been rehearsing for six weeks, and armed with an unusual and operationally untried weapon, they were bound for the Sauerland, a hilly region on the northern edge of the Rhenish Slate Mountains, above the industrial districts of Rhineland-Westphalia. In recent decades many of the Sauerland’s streams had been stopped to create reservoirs. On this night, the RAF was aiming to release the waters of at least three of them.

The Sauerland, in Ernst Mattern’s words, was ‘the cradle of modern German dam-building’ (1921: 6; cited in Blackbourn 2006: 190). From the 1890s dams had been built across the region’s valleys to store drinking water and, initially, to regulate supply to small, traditional water-powered mills. But their main role was to supply industries in towns of the Wupper and Ruhr, where coal and steel production guzzled vast quantities of water (c.3000 litres were needed to wash one ton of coal, 12,000 litres for a ton of pig iron (Blackbourn 2006: 216)) and the cumulative effects of overextraction, lowering of water tables and discharge of effluents had created a crisis. When stored by reservoirs, the abundant rainfall on these uplands could also be used to drive turbines to generate electricity.

Dams and people

Dam-building required consensus between conflicting interests, and a way to apportion costs and distribute returns (Blackbourn 2006: 210-18). The drive for both was led by Otto Adolf Ludwig Intze (1843-1904), who from 1874 was Professor of Hydraulics and Building Construction at Aachen Technical University, and whose proselytising helped to bring the Ruhr Valley Reservoirs Association (Ruhrtalsperrenverein – RTV) into being in 1899. Historically, dams had evoked pangs of concern; in counterpoint with their promised utility ran worries about what would happen if they failed. Such nervousness was heightened by actual cases when dams had given way. France remembered the troubled history of the Bouzey dam, which bulged and then collapsed in 1895. Reports noted that the dam had been made of different kinds of stone, ‘some friable and others much affected by frost’, and that water had long poured through the fissures in jets up to thirty feet long (New York Times, 4 May 1895). In Britain the waters of the Bradfield dam near Sheffield had taken over 250 lives when the dam burst in 1864, destroying c.800 houses and swamping 4,357 more (Gheoghegan 1864; Heaton 1864). The Bradfield disaster made such an impression that it even became a subject for sermons (Brewin 1864). Sixty-one years later the hamlet of Porthlywyd in the Conway Valley disappeared when the dam of Llyn Eigiau collapsed. Hundreds died following the failure of the Gleno dam in Italy in 1923. Older cases lingered in public memory (Letter 1718).

In Germany, Intze set out to allay long-standing fears by applying science and technical accomplishment to the design of gravity dams that resist the force of water by their shape and mass. Intze’s dams were typically of triangular section (to ensure that stresses on the water side and air side of the dam walls were equilibrated), footed on bedrock, built of locally-sourced materials, and bound by mortar mixed to his own recipe. Examples are the Eschbach, built 1889-91, the Ronsdorf at Wuppertal (1898-99), and the Glöre (1903-04). Six dams were on 617 Squadron’s list. The Sorpe, completed in 1935, was not a gravity dam but an embankment of earth that held back an artificial lake with a capacity of 70 million cubic metres. Ten miles away was the Möhne. Compared by Hermann Schönhoff to the work of giants (1913: 685-6), the Möhne was designed by Intze’s pupil Ernst Link, and when new could hold more water than all nearby dams combined. To the south was the Lister, holding 22 million cubic metres. Close to Wuppertal was the Ennepe, built 1904-06.

Looking east, the river Eder flows away from the Ruhr to merge successively with the Fulda and the Werra to become the north-flowing Weser that reaches the North Sea at Bremerhaven. A dam on the Eder was begun in 1908, to control flooding and assist inland navigation by regulating levels in the Mittelland Canal that was intended to link the Elbe and Rhine. When the Eder was finished on the eve of the Great War, the 200 million cubic metres of water that it held extended further than was held behind any other European dam. Nearby and functionally allied with the Eder was another dam, the Diemel.


Targeting dams

Dams had been imagined and actual military targets at least since the Great War, when rumours circulated about poisoned reservoirs and sabotage (Blackbourn 2006: 237). British military planners had noted the exposure of Ruhr industry to water shortage since 1937 (Sweetman 2002). The principle, as Albert Speer said later, ‘was to paralyse a cross-section, as it were – just as a motor car can be made useless by the removal of the ignition’ (Speer 1970: 280).

The attractions of dams as targets had to be weighed against the problems of breaking them. The Nationalists’ failure to blow up the Ordunte and Burguillo dams during the Spanish Civil War illustrated the difficulty of attacking such monumental structures even when it was possible to lay the explosive charges by hand; placing them from the air looked impossible. Early in the war there was no bomb of sufficient explosive power to break a dam, and even if there had been there was no aeroplane with the capacity to lift it or aiming technique to drop it in the right place. Other methods, such as sacrificial commando attack or assault using torpedoes, were considered and rejected. For an aggressor, the Ruhr dams posed the further challenge of how to neutralise a system of pumping stations that enabled the cross-feeding of water from one reservoir to another. To cause a long-term industrial crisis, a number of dams would have to be broken simultaneously to bring about a systemic failure, and then re-attacked at intervals to prevent their repair.

In 1942 a new idea emerged: a device that would skip across the surface of a reservoir until it arrived at the waterside face of the dam, hug the dam wall as it sank, and detonate at sufficient depth for hydraulic pressure to focus the energy of the explosion into the fabric. Later nicknamed ‘the bouncing bomb’ (although strictly speaking it was a type of depth charge), this was the brainchild of Barnes Wallis, Vickers-Armstrong’s Assistant Chief Designer (Structures). The weapon’s codename was Upkeep.

Upkeep’s demands

Upkeep’s working depended on the fulfilment of a number of conditions. First, the reservoir had to be full, to provide a sufficient height of water above the explosion to concentrate its energy, to give the maximum moment of force to assist the explosion, and to cause utmost mayhem following a breach. Second, to stand any chance of evading German defences the attack would have to be made at night, which in turn limited it to the period when the moon was between eighty and a hundred per cent full. In 1943 these two conditions of water depth and moon coincided once only, for a few days in May. The attack would thus have to be made during this period, postponed for a year, or not at all.

Third, Upkeep’s release parameters were challenging. The weapon had to be dropped from a height of 60 feet, at a range of 425-475 yards (389-434m), at a speed of some 220-230 mph. Upkeep also had to be back-spun to some 500 rpm before release. (Wallis’s experiments led him to suppose that back-spin increased the number of bounces, and thus the distance that Upkeep could travel between release and the target (Wallis 1963; for a different view see Podesta 2007).) Spin also caused the weapon to adhere to the dam wall as it sank after arrival.

Even by day and unopposed, meeting these conditions of speed, height and range in a heavy bomber called for exceptional airmanship. At night, against a defended target, or simply amid difficult terrain, the requirements became daunting. Further, to outwit anti-aircraft defences in order to reach the dams the attackers would have to fly in and out of Germany at low level along routes threaded between known flak positions. Such flying and navigation would require sustained concentration and professionalism; the raid was accordingly preceded by detailed planning and intensive training (Owen 2008a, 2008b). The perils that awaited even slight departures from track or height are illustrated by the fact that of the eight aircraft that failed to return from Chastise, seven were lost on their way to or from the targets. At least four were shot down having strayed from their itineraries. Two flew into power lines. Another aircraft returned early, following damage caused when it touched the sea.

Already we have an inkling as to why Chastise has attracted so much attention: in every respect, it was an operation of extremes. But in what ways, if any, was it ‘defining’? And what, if anything, might physical remains or material culture contribute either to further assessment or to understanding of its place in popular perception? To approach such questions we must first look at the effects of the raid, and at some of the ways in which historians have treated them.

Aims, costs, effects

Chastise’s main targets were the Möhne, Eder and Sorpe dams, in that order. Carrying one Upkeep apiece, the aircraft were sent forth at intervals, in three waves, each wave subdivided into smaller groups. Nine aircraft made up the first wave. Their first objective was the Möhne. If they breached it, aircraft with unused Upkeeps were to proceed to the Eder. If that, too, was broken, then any remaining Upkeeps were to be used against the Sorpe. In the event, five aircraft of the first wave attacked the Möhne and three the Eder, breaching both.

The Sorpe was the target of the second wave, made up of five Lancasters. If all had gone to plan their attack would have coincided with the first wave’s attack on the Möhne. In the event, early returns and losses meant that only one of the Sorpe wave aircraft actually reached it. This solitary attack caused some damage, but no breach.

The third wave similarly consisted of five aircraft. Directed by radio in the light of events reported from the other dams, their job was either to reinforce the efforts of the earlier waves, or to attack dams judged to be of lesser priority: the Lister, Ennepe, or Diemel. In the event, three of the reserve aircraft were sent to the Sorpe, but only one reached and attacked it, again without success. Of the two remaining aircraft, one was downed on its way to the Lister, while the other made an unsuccessful attack on a dam that its crew believed to be the Ennepe.

The floods released by the breaching of the Möhne and Eder were on a scale greater than any that Germany had previously seen. They took at least 1,341 lives, and for every man, woman or child who died, six farm animals perished. Alongside the carnage, damage to roads, railways, bridges, homes, factories and farms spread for sixty miles. For the RAF, the cost was eight out of the nineteen crews who had inflicted the damage: 56 men, all under 35, of whom all but three died.

Despite this ruin, Chastise did not bring about the longer-term crisis for which its planners had hoped. For this there were two main reasons: the attack had focused on the wrong combination of dams, and it was planned as a stand-alone operation rather than as part of a sustained campaign (Interrogations of Albert Speer 1945: 375).

By 1943 the increased scale of mining and manufacturing had put the industrial water supply to the Ruhr under stress; a long interruption would very likely have disrupted war production. To do this, the Möhne and Sorpe would have had to be broken together and re-attacked at intervals to keep their reservoirs empty through the rains of the following autumn and winter. If this had been done, great harm might have been inflicted on Germany’s war economy. However, by breaking only the Möhne on the Rhine-Ruhr side of the Sauerland watershed, the attack’s impact fell short (interrogations of Albert Speer, 1945: 375; Speer 1970: 281).

The Sorpe escaped partly because only two Chastise crews reached it, but also because of the way in which it had been built. Upkeep was designed to break gravity dams of the Intze tradition, whereas we have seen that the Sorpe was an embankment against which Upkeep was not expected to be effective. (The method of attack also differed, Upkeep being aimed unrotated into the water along the Sorpe’s length as a conventional bomb rather than being launched towards it at right angles over water.) The best that could be hoped for here was either to cause enough damage to force the emptying of the reservoir by the Germans, or to gash the crest in such a way as to start a trickle that would erode a channel that in turn would lead to a flood. So it was that the Sorpe’s design influenced Chastise priorities: the Möhne and Eder became the operation’s main concern because it was believed – rightly – that they were more vulnerable to Wallis’s weapon, even though the Sorpe was of greater economic worth.

Later on the day of the attack, the Oberkommando der Wehrmacht (OKW) hurriedly estimated that of the rest of Germany’s reservoirs, perhaps 25 were vulnerable to attacks of similar type (‘Angriff auf Talsperre’, Greiner & Schramm 1963: 494). OKW was not to know that only 20 production Lancasters modified to carry Upkeep had been built, eight of which had just been lost. Bomber Command subsequently toyed with an order for more Upkeep-carrying Lancasters, but in the event they were not built. Hence, even had Upkeep been effective against embankment dams like the Sorpe, the specialised force of sufficient size needed to mount a sustained drive against Germany’s water industry did not exist.

A connected factor was the zeal of German reaction. Spares for RTV’s clogged pumping stations were rushed in within days from wherever they could be found, often by denuding less vital elements of German infrastructure. The two breached dams were repaired with a kind of vigour that seems to have surprised even those who did the work. At lunchtime on Monday 17 May 1943, Albert Speer’s best estimate for the time needed to repair the Möhne was a year. In fact, the repair effort was so well coordinated that the Möhne was ready to catch the winter rains by late September.

This is history from hindsight. Seven days after the attack, the UK Foreign Office’s weekly political intelligence summary concluded: ‘It is clear that the most significant damage has been effected by the Royal Air Force attacks on three important dams in Western Germany . . .’ The assessment forecast that the most serious effects would appear months later, when water shortages became acute (Weekly Political Intelligence Summary No 190: 1).

For the Allied public, the raid brought a ‘sense of thrill’ (Probert 2006: 190). Media reports of Bomber Command’s attacks were usually soon forgotten, whereas coverage of the Dams Raid went on for days and traced the damage as it spread. The story returned to the headlines when members of 617 Squadron were decorated, when the king and queen visited Scampton, and when aircrew who had taken part toured factories or appeared at local events.

The raid made Guy Gibson a star. Awarded the VC for his valour in leading the attack, he became recognisable on sight from newspaper photographs and newsreels. In following months Gibson accompanied Churchill to Canada, toured America, collaborated with Roald Dahl on a proposed film about the raid, appeared on Desert Island Discs, and mixed with stars and politicians. Gibson also wrote an autobiographical account of his wartime career called Enemy



Fig. 9.1 Remains of buildings at Gunne, just below the Möhne Dam. On the dam crest (to right of breach) can be seen dummy trees that were introduced in an attempt to make the dam look like a continuation of the reservoir’s wooded shoreline. (Royal Air Force Museum)

Fig. 9.2 Road and rail bridges are swamped at Frondenberg, fifteen miles from the Möhne Dam. Much flooding was of agricultural land and has left scant long-term trace. (Royal Air Force Museum)


Fig. 9.3 Evanescent deposit formation processes: a roof structure has been bodily swept onto the ground. Beside it are household items – a tin bath, a child’s scooter, a bicycle – most of which would have been salvaged, and so disappear from the archaeological record. (Royal Air Force Museum)

Fig. 9.4 Townspeople salvage belongings. Most flood debris was systematically cleared. (Royal Air Force Museum)



Fig. 9.5 House on the main street of Wickede after the Möhnekatastrophe. Damage to timber-framed buildings was often repairable, while the contemporary photograph gives a fuller record of house contents than could be expected from subsequent archaeological study. (Royal Air Force Museum)

Fig. 9.6 Neheim, where many buildings were rebuilt. Archaeological traces of the pre-flood town survive, but contribute little to understanding of the event in comparison with photographic and written evidence. (Royal Air Force Museum)

Coast Ahead. Completed in 1944 just before his death, but not published until 1946, the book contained vivid chapters on the Dams Raid that were in turn mined by Paul Brickhill for his book The Dam Busters (1951). R C Sherriff drew from both for his screenplay for the film The Dam Busters (1955). More on this later; suffice it to say here that the film has dominated public opinion ever since, and that popular perception of Chastise as a historically momentous action derives in large part from the witness of the man who led it.

Taking a longer view

Since the 1960s, a consensus has grown that while the raid was bravely led and undertaken, its effects were not of fundamental economic importance (Gilligan 2006). Opinion about what it actually did achieve has fluctuated. In 1961, the official historians, Sir Charles Webster and Dr Noble Frankland, put a much fuller record of the operation into the public domain than had been available hitherto. Re-read today, their careful source-based account seems to reflect a tension between muted assessment of Chastise’s economic effects and the still-prevailing view that the operation had been a ‘brilliant victory’ (Webster & Frankland 1961: 2, 178). In the later 1960s and 1970s, hippy subculture, opposition to the Vietnam War, the end of National Service, subversive satire and environmentalism all combined to favour the kind of full-on revisionism that was epitomised by the journalist who described the raid as ‘a conjuring trick, virtually devoid of military significance’ (Page 1972; Verrier 1968: 225). The 1980s saw a trend towards deeper and more contextualised assessment. At the root of this was John Sweetman’s monograph The Dams Raid: Epic or Myth (1982). Based both on interviews and a fuller consideration of primary records than any hitherto, Sweetman’s study was at once fine-grained, tracing the micro-history of days and hours, but also strong on master narrative. The result, since re-titled, thrice revised and expanded, has become the essential starting point for anyone’s consideration of Chastise.

One influence of this fuller approach was to widen consideration of what might count as Chastise’s effects, which can now be seen as more diverse and in some respects more far-reaching than either its planners supposed or its critics have since granted. An example is the effect of the raid on German effort to fortify the north European coast against Allied invasion. It has long been known that personnel from the Organization Todt (OT) with special engineering and technical skills were redeployed to repair the broken dams. Less attention has been given to the repercussions of the inept handling of this transfer, which caused a crisis of trust among French workers and a mass exodus from the OT. Under examination at Nuremberg in June 1946, Speer described this as ‘a catastrophe’. Beyond it, around 10,000 troops and a substantial quantum of matériel were tied down for the rest of the war to defend German dams against further attack. An equal and opposite effect was the commitment of similar resources to counter the risk of reprisals against dams in the UK (Owen 2008: 57-9).

A second result was to foster cohesion between the United States, the Soviet Union and the UK at a time when US public opinion was divided in its priorities between the European and Pacific theatres, and Stalin was looking for new effort to complement Soviet sacrifice in the east. On the night of Chastise, Churchill was in Washington for the Trident conference. Two days later he addressed both houses of Congress, and used Chastise to demonstrate British resolve and effectiveness. Linked with this was a shift in public mood, a sense that the war had passed a turning point. ‘Mr President’ said Churchill in his speech on 19 May, ‘the African war is over . . . One continent at least has been cleansed and purged for ever from Fascist or Nazi tyranny’ (in James 1974: 6781). On the heels of this came Operation Husky, the invasion of Sicily. Hence, while Chastise was not a war-winning operation, by its public impact and timing for the Allies it epitomised a general certainty that the war would be won.

Chastise redefined the possibilities of airpower. For three years Bomber Command had been working as a kind of remote machine, wound up – as Leonard Cheshire put it – like an alarm clock, ‘and sent forth by commanders who . . . had less influence over its subsequent progress than did the chateau generals over their infantry in the First World War’ (Morris 1994: 186). In one of the operation’s numerous innovations, all the Chastise Lancasters were fitted with VHF radio-telephone sets, enabling communication over the target using direct speech. This enabled Gibson to direct the attacks, varying their pace or sequence according to circumstance. Later in the war, most raids by Bomber Command would be coordinated by a Controller. Chastise was the operation that introduced the judgement of a commander on the spot, and thereby ‘changed the whole prospect of Bomber Command’ (Frankland in Wood 1993: 55).

Then there was the smallness of the force in relation to the damage done: just eleven Upkeeps were released in the course of the raid; possibly as few as two actually broke the dams. The fact that so much destructive energy could be released by such a tiny force was not lost on some in the German leadership, who saw that well-directed airpower could potentially destroy all capacity in a particular target system, and thereby sap a nation’s ability to fight. The fact that for much of the rest of the war Sir Arthur Harris did not apply this lesson to the extent to which he might did not alter its reality, which would be demonstrated again in the paralysis of the transport system of northern France before D-Day, and in the isolation of the Ruhr in 1945.


Memories, fears and perceptions

For the German public, too, the small force seemed ominous. Citizens were able to compare statements by the German News Agency that few British bombers had been involved with photographs of the floods on leaflets dropped by Allied aircraft. For citizens in the Ruhr itself the effects were self-evident. Talk of up to 30,000 dead swept through Germany. Such rumours had long roots. As David Blackbourn has shown, just as from 1750 the project to ‘rectify’ the Rhine to ensure even flow within a single channel had come to be seen as ‘the supreme symbol of German identity over the 150 years that followed’, so after German unification did other hydrological projects take on the character of a struggle in which natural forces were subjugated to serve human purposes. The rhetoric of damming rivers had used words like ‘shackle’ and ‘tame’, which along with ‘force’ and ‘compel’, writes Blackbourn, ‘are the kinds of terms you apply to a dangerous enemy’ (Blackbourn 2006: 180). Such adversarial language was not restricted to Germany. After the Conway Valley dam disaster in 1925, a leader writer for the Manchester Guardian urged: ‘let us at least be careful to gauge the full strength of the wrath of which the waters are capable in their fight for escape before we complacently clap manacles on them’ (4 November 1925). Throughout the twentieth century dams were appropriated as symbols of modernity (Kaika 2006); in Germany this idea had turned ‘each new dam into another episode in the long-running struggle of man against nature’ (Blackbourn 2006: 180).

Just as the struggles against the Oder and the Rhine had been cast in military terms, so the dammed valleys became ‘great battlefields’. That was the term used by Ernst Mattern in his 1902 book on dams. Schiller, he reminded readers, had written about fire: beneficent when under human control, terrible when left to run free. The same was true of water. Writing a few years later, Jakob Zinssmeister was impatient with critics of the dam-builders’ incursions into nature: they seemed to have forgotten that ‘in the end, mankind is after all there to dominate nature and not to serve it’ (Blackbourn 2006: 180-1).

By setting Nature free from those who had tried to enslave her, Operation Chastise turned this discourse inside out. The resulting glimpse of the primeval overwhelming the modern exposed the fragile basis of things that western European society had come to take for granted – electrical power, easy communication, clean drinking water. In retrospect, this links with the wider unease that dams engender through ecological disturbance and community displacement, prefiguring the growing environmental predicament with which we live (Cummings 1990; Drèze et al 1997; Khagram 2004; Trussell 1992).

Chastise and popular culture

The Dams Raid’s reputation as ‘the most romanticised of all Bomber Command’s operations’ (Wood 1993: 55) began the day after it took place. The phrase ‘dam busting’ was being used within days, and some now-legendary details appeared in the first news reports. One legend, repeated by Gibson (1943: 46; 1946: 239-40), was that all the pilots had done a full tour of operations and that the crews were hand-picked. In fact, while a nucleus of captains was so selected, both the operational experience of Chastise aircrew and the means by which they were recruited were far more varied. Further fables that arose after the war included the belief that the original proposal to attack the dams had come from Wallis, and that Wallis had pursued the development of Upkeep in the teeth of bureaucratic hostility. All of these were repeated in the film The Dam Busters (1955). Directed by Michael Anderson (Anderson 1955), the film’s lighting cameraman was the German-born Erwin Hillier, who had earlier worked with Michael Powell and Emeric Pressburger on A Canterbury Tale (1944) and had outstanding affinity for cloudscapes and landscapes. The Dam Busters has influenced public memory ever since. Shot at Scampton and in places where the crews had trained, premiered on the twelfth anniversary of the raid, the film not only looked and felt convincing but also gained sympathetic resonance from the time in which it was shown (Hennessy 2006). Two royal premieres (the first before Princess Margaret on 16 May 1955, the second before the Duke and Duchess of Gloucester on the following evening) reflected the optimism of the new Elizabethan age, not yet soured by Suez. Churchill’s second premiership had just reached its end; fourteen years of food rationing had ended ten months before. A new cathedral was about to rise in Coventry. Richard Todd’s Gibson – martial and professional, yet sensitive and caring – was the quintessential British leader. As for Upkeep, whilst security still prevented revelation of the mechanics of the weapon, the unique behaviour of Wallis’s bomb shown by the film was a marvel.

There was also a generational aspect. The Dam Busters stirred both those who had just lived through the war, and those born since. On the eve of decolonisation, Eric Coates’s title music – by turns bustling, pensive, exultant – linked a sanguine present with Imperial Britain half a century before. Of more than a hundred British war films made between 1945 and 1960, The Dam Busters was the most successful (Sandbrook 2006: 202-3; James 2001: 722). Robert Rosenstone has reflected that ‘Today the chief source of historical knowledge for the bulk of the population must surely be the visual media’ (Rosenstone 1995: 23). In the case of the Dams Raid we can go further and suggest that the movie has transformed an historical event into a kind of tradition.


Historians have strategies for dealing with such construction. More challenging is how to protect sources and sustain source criticism in an age when the internet dissolves boundaries between primary and secondary sources, and infinitely multiplies the pathways along which text and images move.

To illustrate, the BBC website’s On this day account for 17 May 1943 reports the raid as contemporary news ( newsid_3623000/3623223.stm, accessed 3 November 2008). Under the headline ‘RAF attack smashes German dams’, the reader is told that ‘An audacious RAF bombing raid into the industrial heartland of Germany last night has wrecked three dams serving the Ruhr valley.’ Already one fact is wrong – the Eder is not in the Ruhr valley – and other blunders follow. More disconcerting, however, is the inclusion of information that was withheld at the time. Upkeep’s working principle, Wallis’s authorship of the weapon, and the operation’s codename are all revealed, despite the fact that these were secrets guarded until long after the war. This gives an ominous twist to the idea of a ‘contemporary past’, for while seasoned historians will see straight past it, others – the majority, including most students – might reasonably suppose that the world’s most respected news organisation has helpfully placed an archive source on-line.

On this day compilers could reply that their reports stem from an eminent genre of ‘you-are-there’ re-enactment, in the tradition of, say, Peter Watkins’s Culloden (1964), which was made and edited ‘as though it was happening in front of news cameras’ (Peter Watkins website: ~pwatkins/culloden.htm, accessed 4 November 2008). If so, the answer must be that no-one watching Culloden could be unaware that it is entirely constructed, whereas the On this day report is disguised as a primary source, this impression being reinforced by the inclusion of a companion contextualising article.

If this seems hair-splitting, an analogy should give pause. In 1996 David Irving sued Professor Deborah Lipstadt and her publisher for stating that he was a mouthpiece for Holocaust denial who distorted historical evidence. Irving lost the action not because of his views, but because the judge considered that his treatment of historical evidence fell short of what was to be expected of a conscientious historian. As Professor Richard Evans, the main expert witness against Irving, later put it, ‘the judgment had had nothing to do with the interpretation of a body of knowledge at all. What it dealt with, on the contrary, was the creation of a body of ‘knowledge’ that was not really knowledge but invention, manipulation and falsification of source material.’ (Evans 2002: 250). Though clearly well-intended, the BBC’s ‘news’ report for 17 May 1943 also invents, manipulates and falsifies.

Excavating . . . what?

On the eve of the sixty-fifth anniversary of Operation Chastise, Les Munro, the last surviving pilot to have taken part, reflected: ‘I continue to be surprised by the interest shown in the Dams Raid by people of all ages, not least the young’ (Munro 2008: 5). This is true: one might expect curiosity about Chastise to have dwindled with the passage of time, but the reality is the opposite. The sixty-fifth anniversary was met by a volley of new publications with ‘dambusters’ in their titles (e.g. Arthur 2008; Foster 2008; Ward et al 2007; Thorning 2008). A new film is in prospect. The raid gathers attention from directions as diverse as oral history, film criticism, mathematics, prosopography, and biography. One line of approach not so far represented has been the material record. This may be partly, as we are about to see, because there is little of it. Even so, we can begin by asking the question: can study or stewardship of the physical legacy of Operation Chastise add to historical or public understanding?

Chastise was a highly technical operation, reliant not only on a dedicated weapon, but also on the availability of specially-converted aircraft and different kinds of equipment needed to spin and deliver Upkeep and to handle it on the ground (Owen 2008c, 2008d). Barely eight weeks were available for developing these different elements and bringing them together. These in turn had to be coordinated with the simultaneous development of a smaller weapon known as Highball, to be used in a parallel operation (Operation Servant) against the warship Tirpitz. If the final explosives trial is taken into account, then Upkeep itself was not finalised until three days before the raid. The assortment of engineering needs was tackled by different teams in different places, each needing to be kept abreast of what the others were doing. Some of the engineering solutions had to be further customised, or discarded in favour of others, and all of these co-varying efforts in their turn affected the availability of aircraft and equipment with which to train. The full story of how these extraordinary preparations were coordinated (or in some respects, even what they were) has yet to be teased out.

Exceptionally, therefore, this is an area where the survival of original matériel and associated equipment could add to knowledge. ('Exceptionally', for despite the claims of aviation archaeology (e.g. English Heritage 2002: 2), the evidential value of the remains of mass-produced, stereotyped and well-documented mid-20th-century aircraft is arguably small.) But little survives. The last three Upkeep-carrying Lancasters were scrapped in July 1947. Examples of inert and prototype Upkeeps can be seen at around a dozen places, but they tell us little. Of 120 production Upkeeps, 58 were explosive-filled; the 39 live weapons remaining after Chastise were consigned to store before being dumped in the North Sea and the Atlantic during 1945-46 (Owen 2008f).

DEFINING MOMENTS

What of Scampton, the airfield in Lincolnshire from which Chastise was launched? Lincolnshire, like East Anglia, is 20th-century bomber country: land that looks eastward, where aerodromes were built in the later 1930s in anticipation of the war to come. While the core of the built layout is redolent of that era (the four hangars are listed buildings because of it), the flying field is much changed. Today, the long principal runway cuts through the line of Ermine Street, the former Roman road that runs north from Lincoln and originally formed the airfield's eastern boundary (Francis 2004; Talbot and Bradley 2006). This runway was extended in 1956 to accommodate aircraft of the V-Force. At the time of the Dams Raid the airfield was thus much more compact, and its surface was grass. Tailwheel aircraft of the 1930s and early 1940s were designed to fly from grass, which enabled them to take off and land into the wind from any quarter. The friction provided by grass also helped to slow the aircraft after landing. One reason why the Dams Raid was flown from Scampton was that the airfield was about to be vacated to enable the laying of concrete runways. In March 1943 one squadron had just moved out, so there was space for a newly-formed unit.

Scampton is a place of atmosphere and memories. After the Cold War's end the airfield was temporarily mothballed. Scampton has since returned to limited use, but remains melancholic, the associative values of its original buildings offset by decline. The Officers' Mess and accommodation, for instance, are now twelve miles away at Kirton-in-Lindsey, the 1930s Mess having been closed on grounds of health and maintenance. In the shadow of final closure, measured conservation-speak about the need to balance different interests does not take us far.

What of the dams themselves? The Möhne and Eder attract large numbers of visitors, both from within Germany for recreational reasons, and from the UK, Canada and Australia (21 per cent of Chastise aircrew were Canadian, ten per cent were Australians or New Zealanders). Evidence of the attacks is readily visible in the repairs, although virtually all trace of the temporary work camps, railway spurs, compounds and machinery that accompanied the rebuildings has vanished. Again, the archival record is our main resource.

The silts and muds deposited by the Möhnekatastrophe harbour poignant relics. A child's toy, carried by the black torrent for thirty miles, its dents and scuffs witness to the battering it received along the way, could stand for bodies so disfigured that those who afterwards looked for their children or relatives could not tell who they were. Even now, tide marks from the flood are still to be seen on some buildings. No single centre does justice to the story of Chastise. Rather, places of explanation and recollection are dispersed, each location connected with a different part of the story. An Historical Museum exists at RAF Scampton. Near Reculver, where Upkeep was tested, materials connected with Barnes Wallis and Chastise are exhibited in the Herne Bay Museum and Gallery. In mid-Wales, the remains of the small masonry dam at Nant-y-Gro are still to be seen, split open by a trial detonation in 1942. In the Peak District, where Chastise crews trained, Severn-Trent Water administers a Dambusters' museum in the west tower of the Derwent Dam. There are other centres like this. Typically, they are run by volunteers and veterans, under-resourced, a little shabby, and amateurish. However, to suggest that better displays might result from the concentration of resources into fewer centres would be to miss the point, for they are also labours of love – attempts to communicate personal experience to following generations in the places where the experience was gained. What will happen to them when the veterans are gone? There is a guided tour of RAF Scampton, a Cultural Trail for Herne Bay, a tour of sites in Holland and Germany organised by the War Research Society, and an aerial tour that retraces the paths of the attackers by helicopter. The continental tours take in cemeteries. In Germany, the war cemeteries at Rheinberg and Reichswald are quiet places, where most of the airmen who perished in the course of the Dams Raid are now buried. Alongside them are others who did fly home on the morning of 17 May 1943, only to die later in the war. Of the 77 men who survived Chastise, indeed, only 32 survived the war.

Nearly ten per cent of the 55,500 bomber aircrew who died during the Second World War lie at Reichswald. It gives pause to find that the figure of 55,500 equates with the average number of slain from just nine days of the First World War.

RICHARD MORRIS: 16/17 MAY 1943. OPERATION CHASTISE: THE RAID ON THE GERMAN DAMS

Cult and culture

Memorialisation shades into commodification, vicarious association and deception – all three reflected in the vogue for what medievalists might recognise as contact relics: things touched by those who took part, the touch resulting in a transfer of power. Thus it is that items like letters, menus, signed photographs and signed paintings change hands for substantial sums. This extends to secondary relics, such as photographs of Richard Todd playing Guy Gibson, signed by Todd as a proxy for Gibson. Many paintings of scenes from the operation have been issued as prints, which in turn become fields for more signatures. Sixty-five years on, the relatives of those involved sometimes now appear at events or do the signing. Like Crécy and Agincourt, Chastise lives on in the repetition of names. Beyond the names lies merchandise – posters, tea towels, mugs, Royal Worcester plates, stamps, tankards, a Dambuster ale, Dambuster whisky, even a Dambuster cheese. The parallel with medieval relics runs further: the unscrupulous try to make money by the sale of forgeries – signatures, documents – and by false claims of personal involvement.

The Dams Raid was not only a defining moment to which historians repeatedly return, but has become an evolving cultural phenomenon. For reasons given, archaeological study of flooded areas or crash sites is unlikely to contribute to historical judgment. In the longer term, however, archaeology might well inform study of the cult. Whether that cult is primarily hagiographical, commercially driven or commercially fed is for discussion, but it is noticeable how many books and websites indiscriminately recycle material regardless of concern for accuracy. Like the medieval vita, such works form a literary genre, distinct from history, wherein certain things are appropriate. The cult centres on a yearning to connect with the flow between then and now, in which 'then' is substantially constructed.

References

Anderson, M. 1955. How I Directed The Dam Busters. Achievement, May, 19-21.
Arthur, M. 2008. Dambusters: A Landmark Oral History. London: Random House.
Blackbourn, D. 2006. The Conquest of Nature. Water, Landscape and the Making of Modern Germany. London: Jonathan Cape.
Cull, N. J. 2003. Peter Watkins' 'Culloden' and the alternative form in historical filmmaking. Film International 1, 48-53.
Cummings, B. J. 1990. Dam the rivers, damn the people: development and resistance in Amazonian Brazil. London: Earthscan.
Drèze, J., Samson, M. & Singh, S. (eds) 1997. The dam and the nation: displacement and resettlement in the Narmada Valley. Delhi, New York: Oxford University Press.
English Heritage. 2002. Military Aircraft Crash Sites: archaeological guidance on their significance and future management. Swindon: English Heritage.
Euler, H. 2001. The Dams Raid through the Lens. London: After the Battle.
Evans, R. J. 2002. Telling Lies About Hitler. The Holocaust, History and the David Irving Trial. London: Verso.
Foster, C. 2008. Breaking the Dams. The Story of Dambuster David Maltby and his Crew. Barnsley: Pen & Sword.
Francis, P. 2004. RAF Scampton. Internal consultation document for English Heritage.
Gheoghegan, J. B. 1864. Lines on the Great Flood which occurred at Sheffield, through the bursting of Bradfield Dam, betwixt the hours of 12 and 1 o'clock, on the morning of Saturday, March the 12th, 1864. Sheffield: G. Burgin and Son.
Gibson, G. 1943. Cracking the German Dams. Atlantic Monthly 172.6, 45-50.
Gibson, G. 1946. Enemy Coast Ahead. London: Michael Joseph.
Gilligan, M. 2006. Does the Dambusters Raid deserve its growing reputation as operationally daring but strategically futile? Royal Air Force Air Power Review 9.1, 27-48.
Grant, B. 1864. The Sheffield flood and its lessons. London: J.H. Tresidder; Sheffield: Pawson & Brailsford.
Greiner, H. & Schramm, E. P. (eds) 1963. Kriegstagebuch des Oberkommandos der Wehrmacht, Band 3.1, 1 January 1943–31 December 1943. Frankfurt am Main: Bernard & Graefe.
Heaton, W. 1864. Lines on the flood occasioned by the bursting of the Bradfield Dam, March 12th, 1864. Sheffield.
Hennessy, P. 2006. Having it so good: Britain in the fifties. London: Allen Lane.
Interrogations of Albert Speer, former Reich Minister of Armaments and War Production, 1945, 30 May 1945. In Sir Charles Webster and Noble Frankland, The Strategic Air Offensive Against Germany 1939-1945, Vol. 4, Annexes & Appendices, London: 1961, 371-8.
James, L. 2001. Warrior Race. A history of the British at war. London: Abacus.
James, R. R. (ed) 1972. Winston S. Churchill: his complete speeches 1897-1963. Volume VII: 1943-1949. New York: Chelsea House.
Kaika, M. 2006. Dams as Symbols of Modernization: the urbanization of nature between geographical imagination and materiality. Annals of the Association of American Geographers 96.2, 276-301.
Khagram, S. 2004. Dams and development: transnational struggles for water and power. Ithaca, NY and London: Cornell University Press.
Letter. 1718. A Letter to a Member of Parliament Concerning Dagenham-Breach: Occasion'd by the late Ruin of the Works there. London: Joseph Gillmore.
Mattern, E. 1902. Der Thalsperrenbau und die deutsche Wasserwirtschaft. Berlin.
Mattern, E. 1921. Die Ausnutzung der Wasserkräfte. Leipzig.
Morris, R. 2009. Nothing but names? Prosopography and the Dams Raid. Après Moi. The 617 Squadron Aircrew Association Newsletter, Spring 2009.
Morris, R. with Dobinson, C. 1995. Guy Gibson. Paperback edition, London: Penguin.
Morris, R. and Owen, R. (eds) 2008. Breaching the German Dams. London: Newsdesk & RAF Museum.
Munro, L. 2008. Foreword. In Morris, R. and Owen, R. (eds) Breaching the German Dams, 5. London: Newsdesk & RAF Museum.
Owen, R. 2008a. Planning the route. In Morris, R. and Owen, R. (eds) Breaching the German Dams, 28-31. London: Newsdesk & RAF Museum.
Owen, R. 2008b. Tactics. In Morris, R. and Owen, R. (eds) Breaching the German Dams, 51-3. London: Newsdesk & RAF Museum.
Owen, R. 2008c. Modifying the Lancaster. In Morris, R. and Owen, R. (eds) Breaching the German Dams, 39-43. London: Newsdesk & RAF Museum.
Owen, R. 2008d. Loading Upkeep. In Morris, R. and Owen, R. (eds) Breaching the German Dams, 44-45. London: Newsdesk & RAF Museum.
Owen, R. 2008e. Finding range. In Morris, R. and Owen, R. (eds) Breaching the German Dams, 49-50. London: Newsdesk & RAF Museum.
Owen, R. 2008f. Operation Guzzle. In Morris, R. and Owen, R. (eds) Breaching the German Dams, 73-75. London: Newsdesk & RAF Museum.
Page, B. 1972. How the Dambusters' courage was wasted. Sunday Times Magazine, 24 May 1972, 4-10.
Podesta, M. B. 2007. Bouncing steel balls on water. Physics Education 42.5, 466-477.
Postlethwaite, M. with Shortland, J. 2007. Dambusters in Focus. Surrey: Red Kite.
Probert, H. 2006. Bomber Harris. His Life and Times. London: Greenhill Books.
Ramsden, J. 2003. The Dam Busters. A British Film Guide. London: I B Tauris.
Rosenstone, R. A. (ed) 1995. Introduction to Revisioning History: Film and the Construction of a New Past. Princeton, NJ: Princeton University Press.
Sandbrook, D. 2006. Never Had It So Good. A history of Britain from Suez to the Beatles. London: Abacus.
Schönhoff, H. 1913. Die Möhnetalsperre bei Soest. Die Gartenlaube, 684-686.
Speer, A. 1970. Inside the Third Reich. Memoirs by Albert Speer. Translated by Richard and Clara Winston. London: Weidenfeld & Nicolson.
Sweetman, J. 1982. The Dams Raid: Epic or Myth. London: Jane's. Revised edition The Dambusters Raid, London: Arms & Armour Press, 1990; second revised edition London: Orion, 1999; third revised edition London: Cassell Military Paperbacks, 2002.
Talbot, G. and Bradley, A. 2006. Characterising Scampton. In Schofield, J., Klausmeier, A. and Purbrick, L. (eds), Remapping the Field: New Approaches in Conflict Archaeology, 43-48. Berlin: Westkreuz-Verlag.
Thorning, A. G. 2008. The Dambuster Who Cracked the Dam. The Story of Melvin 'Dinghy' Young. Barnsley: Pen & Sword.
Trussell, D. (ed) 1992. The social and environmental effects of large dams. Vol. 3, A review of the literature. Camelford: Wadebridge Ecological Centre.
Verrier, A. 1968. The Bomber Offensive. London: Batsford.
Ward, C., Lee, E. and Wachtel, A. 2007. Dambuster Crash Sites. 617 Squadron in Holland and Germany. Barnsley: Pen & Sword.
Webster, Sir Charles, and Frankland, N. 1961. The Dams Raid and the development of precision bombing at night in 1943. In The Strategic Air Offensive Against Germany 1939-1945, Vol. 2: Endeavour, Part 4, 168-89. London: HMSO.
Webster, T. M. 2004. The Dambusters Raid. RAF Air Power Review 7.1, Spring.
Weekly Political Intelligence Summary No 190, 1: Foreign Office Weekly Political Intelligence Summaries January–June 1943, Nos 170-195.
Wood, D. (ed) 1993. Reaping the Whirlwind. Bracknell Paper Number 4. A Symposium on the Strategic Bomber Offensive 1939-45. Bracknell: Royal Air Force Historical Society & Royal Air Force Staff College.

Chapter 10

1130 hrs, 29 May 1953
Because it's there: The ascent of Everest

Paul Graves-Brown

His achievement in climbing Everest was one of the Twentieth Century's defining moments. ... (Helen Clark, New Zealand Prime Minister, at the funeral of Edmund Hillary, 21 January 2008)

At 1130 hrs on 29 May 1953, two men, one born in Tibet, the other a New Zealand beekeeper, climbed to the highest point on the surface of the Earth. Some 7,500 miles away, and four days later, during the coronation of Queen Elizabeth II, their achievement was celebrated as a triumph for Great Britain. As of 2007, some 3,500 people from at least 71 nations had followed Tenzing Norgay and Edmund Hillary to the summit of Chomolungma/Sagarmatha/Mount Everest. Around 200 people have died in the attempt.

Peak XV

The recognition of Everest as the World's highest peak derived from the Great Trigonometrical Survey of India, begun in 1802 by William Lambton and continued between 1823 and 1843 by George Everest (who incidentally pronounced his name Eve-rest). The survey, which cost a vast amount in lives and money, was completed by Everest's successor Andrew Scott Waugh, who with the help of the Bengali 'computer' Radhanath Sikdar eventually established that the Himalayan peak designated XV was the highest, at 29,002 feet (a 1999 GPS survey gave 29,035 feet; a 2005 Chinese measurement, 29,017 feet). Although Everest had established the convention of using local names for mountains, Waugh insisted that, since there were so many competing local names, it should be christened after his predecessor (Gillman 2001).

These facts alone suggest a contradictory and conflicted topic, and I must confess my own ambivalence regarding the ascent of Everest, and of mountaineering in general. Whilst I can appreciate the determination and courage, not to say heroism of those who set out to climb the World’s tallest mountain, I still feel that there is something essentially pointless in the endeavour, especially when one considers the number of people who have either died or been seriously injured in this unforgiving environment. Indeed, once the mountain had been ‘conquered’, it is even harder to comprehend why so many others have sought to repeat the exploit.

In this chapter I want to explore the paradoxes of Everest. I will examine the social and technological factors which go some way to explaining why the mountain was finally climbed in 1953, and evaluate the potential for archaeological evidence of this specific event remaining on the mountain. I then want to examine the ideas, theories and beliefs that surround mountains and mountaineering, in an attempt to appreciate the underlying motivations; to understand whether indeed people do go to Everest simply because it is there.

There are aspects of the topic which, for the sake of brevity, I have had to omit. In particular, the Sherpa people of the Solu Khumbu region of Nepal have played a key role in Himalayan mountaineering, to which I really cannot do justice here (but see Ortner 2001). To begin with, however, I will briefly outline the background and events up to and including the 1953 ascent.

No westerners approached Everest until the British reconnaissance expedition of 1921. The location, on the border between Nepal and Tibet, was at the centre of numerous conflicts throughout the eighteenth and nineteenth centuries. The Nepalese and Tibetans fought two border wars. The British fought the Nepalese in 1815 and in 1905 invaded Tibet, fearing the influence of the Russians. The Chinese continually interfered in the internal affairs of Tibet, eventually invading and briefly occupying the country in 1910 (Shakabpa 1967). Perhaps this last event made the Dalai Lamas, who ruled Tibet from the sixteenth century until 1950, more sympathetic to British exploration of the Himalaya.

All seven British expeditions between 1921 and 1938 approached the mountain from the north, trekking from Darjeeling across western Tibet. The first expedition reached the North Col of Everest and established that a route along the North East Ridge appeared possible. But the expedition doctor, Alexander Kellas, died of a heart attack on the journey in. The 1922 attempt met with disaster when seven Sherpas were killed by an avalanche whilst accompanying British climbers above the North Col. The 1924 attempt is,


of course, famous for the death of George Mallory and his climbing companion Andrew Irvine. What is perhaps less well known is that prior to Mallory's attempt, expedition leader Edward Norton, accompanied by Howard Somervell, had got within 800 feet of the summit (Gillman 2001; Gillman and Gillman 2001; Unsworth 2000).

The debate as to whether Mallory reached the summit continues. Controversy centres on the 'second step' on the North East Ridge, a difficult climb at that altitude. In 2007, Conrad Anker, who found Mallory's body in 1999, free climbed the step and concluded that Mallory could also have done so (Douglas 2007; Herbert 2007). But the prevailing opinion is that this would have been beyond his capabilities. Neither of the cameras carried by Mallory and Irvine has ever been recovered, and hence the question remains open (Gillman and Gillman 2001). None of the later pre-war expeditions got as close to the summit as Norton and Somervell, most being driven back by bad weather or illness. The Second World War then intervened and no one returned to the mountain until the late 1940s.

The post-war political situation presented new problems. Although the Canadian climber Earl Denman attempted a solo ascent from the north in 1947, the Chinese invasion of Tibet in 1950 effectively closed the northern route (Gillman 2001; Unsworth 2000). However, with the withdrawal of Britain from India in 1947, the Nepalese reopened their borders to foreign travellers. This presented climbers with the prospect of an entirely unknown route via the Khumbu Glacier and the Western Cwm, first observed by Mallory in the 1920s. In 1951, a British reconnaissance expedition headed by Eric Shipton, who had participated in the failed 1930s expeditions, climbed the Khumbu Glacier to what is now the southern Base Camp, and managed to cross the treacherous Khumbu Ice Fall, a maze of crevasses and huge ice blocks (seracs) formed as the glacier crosses a step in the underlying rock valley. They entered the Western Cwm but were confronted by an impassable 300 ft wide crevasse (Gillman 2001; Unsworth 2000).

Despite British attempts to keep others off 'their' mountain, a Swiss team gained permission to attempt the mountain in 1952. Whilst there certainly were political machinations, their effects can be overstated. Shipton briefed the Swiss on what had been learned in 1951 and the Swiss reciprocated after their eventual failure in 1952 (Unsworth 2000). Having successfully climbed the Western Cwm, the Swiss team attempted to reach the South Col by climbing the 'Geneva Spur'. The difficulty of this, combined with inadequate oxygen equipment, led to initial failure, but Raymond Lambert and Tenzing Norgay reached the Col by climbing the Lhotse Face and eventually climbed as far as the South Summit. Meanwhile the British had made an abortive attempt on the nearby peak of Cho Oyu, and this, combined with the intelligence gained from the Swiss attempt, was to prove crucial (Pugh 1954; Unsworth 2000).

Planning for the 1953 assault on Everest began in September 1952. Eric Shipton was passed over for the leadership in favour of John Hunt, a British Army colonel who had won the DSO fighting on the Sangro river in Italy in the winter of 1943-44. Shipton was apparently regarded as too cautious (Unsworth 2000), and a safe pair of hands was required, especially given that the British had one shot at Everest, the French having obtained a permit for 1954 and the Swiss again for 1955. Whilst Hunt was not initially 'in' with the elite climbers, he had been climbing in the Alps since the age of 10, and had been turned down for one of the attempts on Everest in the 1930s. At all events his diffident and decidedly unmilitary manner seems to have rapidly won over the team (Gillman 2001; Unsworth 2000).

The British team, consisting of ten climbers, plus Michael Ward (expedition doctor), Griffith Pugh (physiologist), Tom Stobart (cameraman) and James (later Jan) Morris (Times correspondent), assembled in Kathmandu on 8 March and departed in two sections on 10 and 11 March (Hunt 1993[1953]). This was, as Ortner (2001) says, one of the mega-expeditions. Using some 350 porters, the Expedition ferried around 15 imperial tons of equipment and supplies to Thangboche, of which 13 tons were carried to the Base Camp on the Khumbu Glacier (Clore 1953). There is no doubt that the whole was organised like a military operation, well planned in advance and taking full advantage of Hunt's experience in military logistics (Figure 10.1). Base Camp (17,900 ft) was reached on 12 April. Unlike later climbers, the expedition had to walk from Kathmandu, yet this was an advantage – the opening of the Lukla airstrip and the use of helicopters have encouraged later expeditions to hurry their acclimatisation to the altitude, with all the attendant risks (Boukreev and DeWalt 2002; Krakauer 1998).

Camp II (19,400 ft), on the Khumbu Ice Fall, was established on 15 April, a further Camp III (20,200 ft) above the Ice Fall on 22 April, and the Advance Base Camp (Camp IV, 21,200 ft) at the foot of the Lhotse Face on 1 May. Climbers and Sherpas moved up and down all the while, ferrying three tons of supplies to Camp IV and at the same time gradually acclimatising to greater altitude.

From here the expedition established a series of interim camps as they explored the glacier on the Lhotse Face: Camp V (22,000 ft), Camp VI (23,000 ft) and Camp VII (24,000 ft). This took some time, and the expedition did not reach the South Col until 21 May. On 26 May, the first assault party, consisting of Tom Bourdillon and Charles Evans, set off for the summit, but only reached the South Summit. On 28 May a further camp (Camp IX, 27,900 ft) was



Fig. 10.1 Everest from the South West, showing Camps I–VIII of the 1953 British Expedition.

established on the South East Ridge. Here Hillary and Tenzing stayed, making their final climb to the summit the following day. In the process they had to overcome the Hillary Step, a 40 ft near-vertical face which has remained a key obstacle to this day.

Although it is sometimes claimed that the news of the success was deliberately delayed in order to coincide with the Coronation, this is quite absurd. Morris (1993) originally estimated that it would take eight days to get the news to London. In fact the news of success did not reach him at Camp IV until 30 May, after which he had to descend to send his dispatch through a series of radio relays from Namche Bazar to Kathmandu and thence to London. Today there is a cyber cafe at Everest Base Camp and cellular phone calls can be made from the summit; communications in 1953 were somewhat more primitive.

Technological and social factors

Given the extreme nature of high altitude climbing, it seems likely that the 1953 expedition would have failed without the technological edge it had over the expeditions of the 1920s and 30s. Above about 26,000 feet, in the 'death zone', the atmospheric density of O2 is about one third of that at sea level. The body adapts to this by increased production of haemoglobin, but there are significant dangers, such as heart attack, which killed Kellas in 1921, or thromboses. The most common danger at altitude is oedema: leakage of fluid across the body's membranes. In pulmonary oedema the lungs begin to fill up; in cerebral oedema, fluid leakage causes the brain to swell. In both cases oedema is fatal unless the person rapidly descends at least 1000 ft. In addition to these pathologies, altitude also clouds thought processes and makes any physical activity extremely arduous.

A key factor in the 1953 expedition was the work of the physiologist Griffith Pugh (1954). Pugh's studies of earlier oxygen equipment, and of the 1952 Cho Oyu climb, gave clear guidance in the design of the equipment for the 1953 attempt, and allowed him to stipulate some essential dietary requirements. One key factor here was the need to drink around eight pints of water per day, as high altitude leads to rapid dehydration, which in turn exacerbates both the effects

of oxygen starvation and of cold. Whilst perhaps mundane, it would not be unreasonable to attribute major significance to the design of the Primus stoves taken on the mountain, since these were essential in melting sufficient ice and snow.

I want to highlight three material factors – aluminium, nylon and radio – as focal to the 1953 expedition. Pugh calculated that the O2 sets used by Mallory et al. were so heavy that their benefits did little more than compensate for their weight. By contrast, the open and closed oxygen sets used in the 1953 summit assaults relied on O2 bottles made of the aluminium alloy Duralumin or Dural, a combination of aluminium, magnesium and copper which has the lightness of aluminium and the strength of mild steel. Aluminium had first been produced in any quantity by a chemical process invented in the 1850s by Henri Sainte-Claire Deville. However, this process, used among other things to make London's statue of Eros and the cap on the Washington Monument, was extremely expensive – until the late nineteenth century aluminium was more costly than gold. In 1886, Charles Martin Hall invented an electrolytic process of production and with the Pittsburgh Reduction Company (formed 1888 and renamed Alcoa in 1907) went on to produce aluminium in quantity (Smith 1988). The duralumin alloy was invented by the German metallurgist Alfred Wilm in 1908. Although aluminium and its alloys found extensive use in the First World War, it was, as with many technologies, the Second World War that brought it to full prominence. Indeed, Reynolds Metals, the manufacturer of the Dural oxygen bottles used by Hillary and Tenzing, had entered the aluminium business specifically because they recognised the implications of expanded German aluminium production in the late 1930s. During the war Reynolds gained a reputation for imaginative uses of aluminium alloys which they transferred to the civilian market in the early post-war period (Smith 1988).

Aluminium played another key role, apart from oxygen supply, in the form of alloy ladders. These were essential in bridging the crevasses of the Khumbu Ice Fall, just as they were later used by the Chinese to overcome the second step on the North East Ridge, the feature that had probably defeated Mallory. Today there is a successful ladder hire company in Namche Bazar which supplies alloy ladders in large quantities to Everest expeditions (Boukreev and DeWalt 2002).

Nylon has, in many ways, a similar history to aluminium. Developed as an artificial alternative to silk, polyamide 66 (pronounced 'six six') was created by Wallace Hume Carothers at du Pont in 1935 and went into industrial production in 1938. At this time the main source of silk was Japan, and jokers of the day suggested that nylon was an acronym for 'Now You Lose Old Nippon' (in fact it wasn't an acronym). For obvious reasons, then, nylon became a key material in the Second World War, for making tents, ropes, clothing and parachutes (Trossarelli nd). As with the substitution of vinyl for shellac in the music industry, an artificial alternative born of necessity proved to have practical advantages. The importance of nylon in the clothing, ropes and tents used on Everest in 1953, many derived from military specifications, should not be underestimated. Apart from the weight considerations, nylon rope was a step change in quality over earlier fibre ropes, even the best of which were prone to break when subjected to severe stress. Indeed, the broken rope attached to Mallory's body when found in 1999 has suggested to some that a failure of the rope was the proximate cause of his death.

The final technological element to stress is the use of radio. The expedition had eight modified Pye PTC 122 Walkiephone VHF transceivers. These were bulky by modern standards: approximately 200 x 100 x 100 mm and weighing 5 lb, the sets were powered by external battery packs worn inside the climbers' clothing to keep them warm (Briscoe and Hicks 2007; Hunt 1993[1953]). Although transistors were just beginning to be manufactured at this time, the PTC 122 used six valves, but could nonetheless operate for around 40 hours at −10°C. They were used as far as Camp VII, but the unit taken to the South Col was damaged in transit, hence the considerable time it took for news of success to reach Camp IV. Mountaineering purists object to the use of radio, yet Anatoli Boukreev, who was perhaps one of the best Himalayan mountaineers, remarks: 'a critical item in an expedition inventory, a radio creates a link between base camp and climbers as they wend their way to the summit and provides a conduit for information on developing problems, emergencies, equipment needs, the weather and medical matters' (Boukreev and DeWalt 2002: 61). Indeed, it seems that one of the key factors in the disastrous 1996 Everest expedition, of which Boukreev was a part, was inadequate communications due to the lack of radios (Boukreev and DeWalt 2002; Krakauer 1998).

PAUL GRAVES-BROWN: 1130 HRS, 29 MAY 1953. BECAUSE IT'S THERE: THE ASCENT OF EVEREST

Turning to social factors, as noted above, the 1953 Expedition was organised on quasi-military lines. Some commentators see this as another 'inauthentic' approach and criticise Hunt's military background, painting him as a typical British military type. In fact this does not seem to be true (Unsworth 2000). As a colonel and a staff officer, Hunt clearly understood the importance of logistics; the appendices of The Ascent of Everest (1993[1953]) set out the meticulous advance planning and organisation of the Expedition. This logistic approach parallels the other wartime technology mentioned above. Logistic organisation originated in the military in the 1860s and 70s as a consequence of the changing social organisation of war. By the time of the Second World War, with its vast scale and highly mobile warfare, logistics had been developed to a fine art (Van Creveld 1991).

Aircraft Establishment at Farnborough (Clore 1953; Hunt 1993[1953]). In 1924 the British expedition had 24,000 litres of O2; the French 1952 expedition had 20,000 litres. In 1953 the British expedition took 193,000 litres of which they used all but 20,000 litres (Pugh 1954).

This being said, one should not imagine the 1953 Expedition as some sort of extremely hierarchical military unit. Hunt seems to have been a very modest, diffident man who stressed team building and cohesion. Although he used the word ‘assault’ to describe the summit attempts he was at pains to avoid talking of the ‘conquest’ of Everest. Equally, although this was a British expedition, he made his decisions on who should make the summit bids in terms of their performance on the mountain (Gillman 2001; Hunt 1993[1953]; Unsworth 2000). Hillary had been an idiosyncratic inclusion by Shipton in the 1951 reconnaissance, while Tenzing was the climbing Sirdar of the Expedition (essentially in charge of the other Sherpas), had experience on the mountain dating back to the 1930s and had participated in the Swiss summit attempt of the previous autumn. Whilst Bourdillon and Evans were chosen for the first summit attempt, even this may not have been an entirely nationalistic decision. Throughout the history of Everest mountaineering, successive expeditions and attempts had gained further knowledge of the terrain; Bourdillon and Evans in their turn pushed further up the mountain, hence furnishing Hillary and Tenzing with essential knowledge for their successful summit bid. In other words, the second assault team was more likely to succeed than the first (Unsworth 2000). In his book, apparently written in an astonishing 30 days, and in his introduction to the 40th anniversary edition thereof (Hunt 1993[1953]), Hunt continually stresses the importance of community and humanity over either nationalism or individualism, and perhaps in this at least he was atypical of mountaineers.

In its quasi military approach, the 1953 ascent of Everest was atypical of mountaineering but this almost certainly guaranteed its success. The growing individualism in more recent years has, ironically, depended on ever improving technologies of clothing, high altitude medicine, oxygen apparatus and aviation. Since the 1960s, helicopters have been used extensively both to ferry people and supplies to the mountain and to evacuate the injured. Archaeological Potential What, if any, archaeological evidence is likely to survive of the 1953 Expedition? Would it be possible to prove that the event ever took place? Here we need to consider both the kinds of material that would have been left on the mountain and the large number of site formation processes which would have altered or destroyed that record in the last 50 years. There are, in fact, some very specific pieces of evidence relating to 29 May 1953. According to Hillary (in Hunt 1993[1953]: 187): Tenzing had made a little hole in the snow and in it he placed various small articles of food – a bar of chocolate, a packet of biscuits and a handful of lollies. Small offerings, indeed, but at least a token gift to the Gods that all devout Buddhists believe have their home on this lofty summit. While we were together on the South Col two days before, Hunt had given me a small crucifix which he asked me to take to the top. I, too, made a hole in the snow and placed the crucifix beside Tenzing’s gifts.

The elite stance in mountaineering is to be contemptuous of technology, a notion of authenticity that resembles the desire of musicians to perform ‘unplugged’. Mallory and his colleagues had long debates as to whether the use of supplementary oxygen was ‘sporting,’ even though eventually he conceded the necessity and used an oxygen set in his ill fated attempt on the summit in 1924 (Gillman and Gillman 2001). Some would no doubt argue that the mountain was not truly ‘conquered’ until Reinholt Messner summited without the use of supplementary oxygen in 1978.

Other accounts of Tenzing’s offerings differ slightly. For example: ... he buried some sweets, a little red and blue pencil his daughter Nima had given him, and a small cloth black cat which Hillary had given him and which came from John Hunt (Unsworth 2000: 743 note 28). Would such poignant artefacts remain at the summit? Perhaps unlikely, yet artefacts from the 1924 expedition were found on the North East Ridge in the 1970s (Gillman 2001; Gillman and Gillman 2001; Unsworth 2000).

Technically, the success of the 1953 expedition can be attributed to several factors. Knowledge of the route gained from the 1951 reconnaissance and the 1952 French expedition was essential. But technologies developed largely in the context of mid 20th century warfare played a crucial part. Even the ‘double vapour barrier’ boots used at high altitude were based on those developed for the Korean war. The approach was both systematic and scientific. Pugh’s physiological work refined diet, acclimatisation, clothing and crucially, the extensive use of oxygen. Tents, boots, clothing and oxygen equipment were all extensively tested at the Royal

Other deposits would be more likely to survive, in particular the Reynolds Dural oxygen cylinders, some of which were almost certainly left on the higher reaches of the mountain. Both these, and the RAF Mark 5d cylinders could be identified from their form and from information stamped into their metal (see Figure 10.2). Tents at the South Col and



Fig. 10.2 Open and closed oxygen sets.

the South East Ridge (Camp VIII) might also leave evidence although they would have been heavily damaged by the weather. The 1953 team found the frame of Tenzing and Lambert’s tent from the summit attempt of September 1952, and used oxygen cylinders left on the South Col by the Swiss team.

in the 1950s (Watts 2007). The only areas of more stable conditions of preservation would be (to some extent) the South East ridge and particularly the South Col, a large relatively flat area between the peaks of Lhotze and Everest. To natural processes we add homogenic factors. Many estimate that there is something in the region of 50 tonnes of rubbish on Mount Everest, most of it, presumably, on the South side (although it has to be said that these estimates are probably little more than guess work). This ‘rubbish’ includes well in excess of 100 human bodies. Conditions on the South Col in 1996 are described thus by Krakauer (1998: 161): ‘The tents of camp four squatted on a patch of barren ground surrounded by more than a thousand discarded oxygen canisters.’ Although one assumes this is a guestimate. This situation prevailed in spite of the fact that the leaders of the two fatal 1996 expeditions had, together with others, mounted several ‘cleanup expeditions’ leading to: ‘the removal of more than eight hundred oxygen canisters from the upper mountain from 1994 through 1996’ (ibid.).

Other items to be considered are faeces. Although altitude affects the appetite, the 1953 team continued to eat regular meals throughout the summit attempt, and the freezing conditions on Everest would preserve their waste. I am reliably informed (Gibbons pers. comm.) that skin cells extracted from faeces can be used for DNA profiling. However, in all this we must consider factors both natural and homogenic which would destroy evidence. Unlike the North East route from Tibet, most of the route from the south is on active glacial surfaces, on the Khumbu Glacier, Western Cwm and the glacier on the Lhotze face. In addition to the more ‘natural’ movement of the glaciers, these are also melting; the Khumbu glacier is now c. 40 m lower than it was 100

PAUL GRAVES-BROWN: 1130 HRS, 29 MAY 1953. BECAUSE IT’S THERE: THE ASCENT OF EVEREST Recently there have been further attempts to clean up Everest. In 2006 an expedition led by Korean Han WangYong set out to remove around five tonnes of ‘tents, oxygen tanks and plastic wrappings’ from the South Col; they also hoped to remove some of the bodies (Gurubacharya 2006).

reconcile the emerging science of geology with scripture, suggesting that mountains are the ruinous debris of the Flood. Contemplating the classical ruins of Italy, he imagined the pristine earth as a ‘Mundane Egg’ of perfect smoothness, the post-Dilluvian world as one ‘lying in its Rubbish’. Yet Burnet also introduced a sense of temporal change that was to become ever more important in geological theories (Macfarlane 2004).

Clearly then, there is only a random chance that the oxygen cylinders from 1953 could be found on the South Col. Material from the lower mountain seems even more unlikely, especially given that, unlike later expeditions, the 1953 team did clean up after themselves. ‘Anxious to salvage as much serviceable equipment as possible, I asked Charles Wylie to stay behind in the Cwm with a rear party to carry loads down to Camp III’ (Hunt 1993[1953]: 196). In the end, Hunt (1993[1953]: 197) observed, ‘It was as if the mountain was bent on showing us, before our departure, how ephemeral was our intrusion into its territory.’

Following literally and metaphorically in Burnet’s footprints, Joseph Addison wrote that ‘a spacious horizon is an image of liberty’ (The Spectator 412), presaging the more celebrated Philosophical Inquiry Into The Origin Of Our Ideas Of The Sublime And Beautiful of Edmund Burke (1764). The latter explored the idea that there could, in the vast spaces of the mountains, be a sense of wonder, of the Sublime, which whilst related to the concept of beauty was profoundly different in its aspect of terror or fear. Whilst earlier travellers in the Alps had had themselves blindfolded, Grand Tourists began to embrace the Sublime fear of altitude and the precipitous.

Gods, pimples and the third Pole The Sherpa treat Everest as a God or sacred place, the Tibetan name, Chomolungma, meaning ‘Goddess mother of the World’. They chant and leave offering while on the mountain and often regard the antics of westerners as sacreligious and unlucky (see Krakauer 1996). Generally attributed to Buddhism, these beliefs are more plausibly aspects of Sherpa ‘popular religion’ (Ortner 2001). This resembles Tibetan Bön religion which, like Taoism, had a tradition of reverence for mountains. The lamas of Rongbuk and Thangboche disapproved of mountain climbing, probably because they regard it as pointless rather than sacreligious. The Lama of Rongbuk, speaking of the death of seven Sherpa on Everest in 1922 remarked: ‘I was filled with compassion for their lot who underwent such suffering on unnecessary work’ (Gilman 2001: 35).

For the romantics, Wordsworth, Byron, Coleridge and Shelley, mountains acquired a kind of secular religious status, beyond the Sublime into something mystical: Thou hast a voice, great Mountain, to repeal Large codes of fraud and woe; not understood By all, but which the wise, and great, and good interpret, or make felt, or deeply feel. (P.B. Shelley Mont Blanc 1817) What then changes as the nineteenth century progresses is a growing desire to get into and onto the mountains, rather than simply contemplate their Sublimity.

The connection between mountains and the sacred is very common. Olympus, the home of the Greek gods, being one of many examples (see Nicolson 1959). However, the Old and New Testaments offer interesting contrasts: whilst in the former, mountains such as Sinai or Ararat are sacred places, the Christian Gospels distain high places – ‘Every valley will be filled. Every mountain and hill will be levelled. The crooked ways will be made straight. The rough roads will be made smooth’ (Luke 3.5).

Modern mountaineering began with the ascent of Mont Blanc in 1786 by Jacques Balmat and Michel Paccard. By the end of the nineteenth century virtually all of the Alpine peaks had been climbed. Mountaineers then turned their attention to other ranges, British pioneer Edward Whymper, for example, climbing in the Andes in the 1880s and the Canadian Rockies in the early 1900s. Although the death toll on Everest seems extreme, it is worth noting that to this day, more people die on the Alps every year than have ever died on Everest.

Before the seventeenth century, Christian Europeans regarded mountains with fear and disgust; they were regarded as warts, boils or pimples that tainted the surface of the earth. Indeed the claim that medieval society believed the earth to be ‘flat’ may relate more to a desire for a smooth earth, than to a rejection of the earth as a globe (Nicolson 1959).

Naturally the focus eventually shifted to the still inaccessible Himalaya. In the 1890s Whymper is said to have described them, and Everest in particular, as the ‘Third Pole’. This is not entirely hyperbolic. After the North and South Poles, the Himalaya is the third largest repository of ice on the planet, and the extremes of high altitude are in many ways equivalent to the poles. In his famous ‘Because it’s there’ interview (Anon 1923), Mallory compared the assault on Everest with

Although the resurgence in mountain worship is attributed to romanticism, its beginnings can be seen in The Sacred Theory of the Earth (Burnet 1681). Burnet attempted to 101

DEFINING MOMENTS efforts of Shackleton to reach the South Pole. But perhaps the quest for Everest, whose summit, after all, is at the cruising height of modern jets, is more extreme than that of the poles. Krakauer (1996: 19) remarks that the 1953 ascent of Everest was, ‘an event that an older friend says was comparable, in its visceral impact, to the first manned landing on the moon’, whilst Hunt (1953: 211), in his conclusion remarks that, ‘there is always the moon to reach’. Mallory, in the same New York Times interview, says: ‘Everest is the highest mountain in the World, and no man has reached its summit. Its existence is a challenge. The answer is instinctive, a part, I suppose, of man’s desire to conquer the universe.’

and actually in control of their own lives’ (Yates 2002: 215216). It is here that the equally contradictory attitudes to technology come into play. Messner (1989: 40-41) says of his solo ascent of Everest: In us all the longing remains for the primitive condition in which we can match ourselves against Nature, have the chance to have it out with her and thereby discover ourselves. ... I refuse to ruin this challenge through the use of technological aids. In order to be able to survive this epoch of depersonalisation, concrete deserts, and the alienation brought about by being harnessed into the crazy machinery of manufacturing and administration, I need the mountains as an alternative world.

Mountaineering, like space travel, offers a Sublime sense of one’s own insignificance: ‘I had realised simultaneously both how lucky I was to be looking down at such beauty, but also how small and insignificant I was. It was a liberating feeling’ (Yates 2002: 201). Yet the changing perception of humanity’s place in the universe, this sense of the infinite has to be contrasted with darker motives. As Macfarlane (2004) points out, Alpinism began in the era of Herbert Spencer, Samuel Smiles and Adam Smith. Mount Everest tantalises the vaulting ambition to ‘triumph over nature,’ born in the individualism and industrialisation of the nineteenth century.

Yet he flew to Tibet and used a custom Goretex tent, titanium ice axe and high altitude clothing. Climbing Mount Everest represents, in reality, an illusory alternative world in which the individual is freed from the bonds of technology and society. What is striking about the 1953 expedition, at least as described by Hunt, is that it does not fit any of these archaetypal tropes of mountaineering. This perhaps explains both its success and the fact that no one died in the process. In Hunt’s position, a contemporary expedition leader would almost certainly have included him or herself in one of the summit parties, yet he seems to have been content with a supporting role, only going as far as the South Col to assist the ‘assault’ teams. The approach was scientific, cautious, meticulously planned and systematically executed; a stark contrast to the essentially chaotic events of May 1996 where the highly experienced expedition leaders seem to have totally abandoned their rules and plans once they got high on the mountain.

Although, since the 1970s, there have been many women climbers in the Himalaya and on Everest, the whole undertaking has been coloured by macho attitudes and sexual banter (Ortner 2001). Despite Hunt’s communitarian ethos, others on the 1953 expedition were more geared to individual competition. Hillary, in particular, was highly competitive throughout his involvement with Everest. Confronted with Shipton’s more cautious attitude in 1951 he remarks, ‘The competitive standards of Alpine mountaineering were coming to the Himalayas and we might as well compete or pull out’ (quoted in Ortner 2001). This attitude persists; indeed it has become more intense over the years, as for example, in the relationship between the Mountain Madness and Adventure Consultants commercial expeditions during the disasterous 1996 season (Boukreev and DeWalt 2002; Krakauer 1996).

Ironically, since the disasterous 1996 season, if not indeed before that, the whole Everest ‘scene’ has become something of a circus. Krakauer (1998) remarks that, by the 1990s, elite mountaineers were becoming contemptuous of the almost routine ascents of Everest. In 2007, in excess of 600 people climbed the mountain and the Chinese are in the process of building a new road to the northern Base Camp so that the 2008 Olympic torch can be carried to the Summit on its way to the Beijing games. There is speculation that they will also build a hotel at Everest Base Camp, and at least one commentator sees, in the growth of Everest tourism, the emergence of a kind of ‘Everestland’ theme park (McDougall 2007). In an era when the new ‘high’ is to go into space, perhaps even to take a trip around the moon, Everest no longer represents the challenge it did in 1953. It has been climbed from 15 different routes, skiid down and had a helicopter land on the summit. It has been climbed by the blind, the very young and the very old; a Nepalese recently

Moreover, climbers’ competitive attitude has been compounded by the countercultural ethos that has grown in mountaineering since the 1960s (Ortner 2001). ‘Compared to more formal sports, with their extensive rules and regulations, along with referees and judges to enforce them, mountaineering seems free and anarchic’ (Yates 2002: 21), and again, ‘above all else I had tasted freedom ... I had cast aside some of our society’s petty rules and behaviour’ (Yates 2002: 39). In confronting the infinite, climbers see themselves as set apart, and it is in this dislocation from society that they find their sense of control: ‘Freed from the hold imposed on us by the state, employers, community and family, people involved in an adventure can feel empowered


PAUL GRAVES-BROWN: 1130 HRS, 29 MAY 1953. BECAUSE IT’S THERE: THE ASCENT OF EVEREST Douglas, E. 2007. Did Mallory make it? Researcher believes he has the answer. The Guardian. Saturday 29 September 2007. Gillman, P, (ed.) 2001. Everest : eighty years of triumph and tragedy. London: Little, Brown. Gillman, P. and Gillman. L. 2001. The Wildest Dream : Mallory : his life and conflicting passions. London: Headline. Gurubacharya, B. 2006. Mountaineers prepare for clean-up mission on Everest. The Guardian. Monday 6 March 2006. Herbert, I. 2007. To the top of Everest in the footsteps of Mallory. The Independent. Friday 15 June 2007. Hunt, John. 1993[1953]. The Ascent of Everest. (40th Anniversary Edition). London: Hodder & Stroughton. Krakauer, J. 1998. Into thin air: A personal account of the Mount Everest disaster. London: Pan. Macfarlane, R. 2004. Mountains of the Mind: A history of a fascination. London: Granta. McDougall, D. 2007. Everest at risk as new road conquers roof of the world. The Observer. Sunday 8 July 2007. Messner R. 1989. Crystal Horizon: Everest – the first solo ascent. Marlborough: The Crowood Press. Morris, J. 1993. Coronation Everest. London: Boxtree. Nicolson, M. H. 1959. Mountain Gloom and Mountain Glory: the development of the aesthetics of the infinite. Ithaca, New York: Cornell University Press. Ortner, S. B. 2001 Life and Death on Mt. Everest : Sherpas and Himalayan mountaineering. Princeton, New Jersey: Princeton University Press. Pugh, L.G.C.E. 1954. Scientific aspects of the expedition to Mount Everest 1953. The Geographical Journal, 120, 2: 183-192. Shakabpa, W. D. 1967. Tibet : A political history. London: Yale University Press. Smith, G. D. 1988. From Monopoly to Competition: The transformations of Alcoa 1888-1986. Cambridge: Cambridge University Press. Trossarelli, L. nd. The History of Nylon. Online at n.html. Consulted 7 January 2008. Unsworth, W. 2000. Everest : the mountaineering history. London: Bâton Wicks. Watts, J. 2007. 
Everest ice forest melting due to global warming, says Greenpeace. The Guardian. Wednesday 30 May 2007. Yates, S. 2002. The Flame of Adventure. London: Vintage.

Fig. 10.3 ‘At last!!’

took off his clothes and stood (albeit briefly) naked on the summit. The highest point in the planet’s surface has been reached in every conceivable way, except perhaps by someone dressed as a pantomime horse (Figure 10.3), although no doubt this will come. Ultimately, this symbol of ‘Man’s triumph over nature’ has become somewhat bathetic, especially when we consider that our triumphant activities have melted the Rongbuk and Khumbu glaciers. Here is the world’s highest mountain, and on it we find large quantities of material and the remains of human beings, male and female, of a variety of ages from all around the world. To future archaeologists it may well appear to be a secular/religious place – a place of pilgrimage and a place of sacrifice which has drawn people from around the world to worship and to die on its slopes. A monument to Globalisation – the centre of a world religion and a place of pilgrimage for all races. A kind of modern tower of Babel where people have used technology to climb as close to their god as they can without leaving terra firma. References Anon. 1923. Climbing Mount Everest is work for Supermen. New York Times. 18 March 1923. Boukreev, A. and G. W. DeWalt. 2002. The Climb: Tragic ambitions on Everest. London: Pan. Briscoe, M. and Hicks D. 2007. G8EPR Pye Museum. Online at Consulted 7 January 2008. Clore, L. 1953. The Conquest of Everest. Countryman Films (DVD 2007 Optimum Home Entertainment) Van Creveld, M. 1991. The Transformation of War. New York: Free Press.


Chapter 11

2228:34 hrs (Moscow Time), 4 October 1957
The Space Age begins: The launch of Sputnik I, Earth’s first artificial satellite

Greg Fewer

Were it not for the fiftieth anniversary commemorations of the launch of Sputnik 1 in 2007, one could be forgiven for speaking of the ‘Space Age’ in the first decade of the twenty-first century without thinking of how it began (as opposed to the status quo in the present or, perhaps, its future course and development). Indeed, the term ‘space age’ is possibly now more often used to describe something technologically advanced or futuristic, sometimes in a sarcastic way, than the spacefaring era of the last fifty years. Although the launch of Sputnik 1 on 4 October 1957 defines the beginning of the Space Age, as many writers point out, often without saying much about the Earth’s first artificial satellite itself (e.g. Anon. 2007a; Moore 2005), it is an event that tends to take a back seat to subsequent events in space exploration. In particular, the first Moon landing of 20 July 1969 is widely seen as probably the crowning achievement of the Space Age, its ‘defining moment’ (Roland 1998) or, at the very least, ‘a significant moment’ (Launius 2007a). The lunar site of the first Moon landing – Tranquillity Base – has also been described as the symbolic ‘“ground zero” of the space age’ with respect to space science tourism (Bell 2006: 100, n7). Since the 1990s, archaeologists have taken an increasing interest in the archaeology or heritage conservation of Tranquillity Base because of its historic significance (e.g. Campbell 2003; Capelotti 2004; Fewer 1998, 2002, 2007; Gorman 2005a; O’Leary 2006; O’Leary et al. 2003; Spennemann 2004, 2006; Vescio 2002). However, had Sputnik 1 not been launched in 1957, would people have first walked on the Moon as early as 1969?

In the course of this chapter, the historical background to Sputnik 1’s launch and its Cold War impact will be outlined before the material culture of the Earth’s first artificial satellite is discussed.

Sputnik 1’s Cold War impact

Jerry Grey (1983: 10) comments that the launch of Sputnik 1 was ‘the single most significant event that catapulted the Soviet Union into worldwide recognition as a major modern power and stimulated the massive American reaction’. Indeed, many writers emphasise what Gilbert (1999: 162) calls the ‘acute embarrassment’ felt by the United States that Russia had been the first nation to launch a satellite into orbit. According to Patrick Moore (2005), it ‘took a lot of people by surprise and caused considerable alarm in America’, while other writers describe America as having been traumatised, ‘blindsided’ or ‘reeling’ from the shock (Rice 1992: 225; Anon. 2003, 2007b; Howard 2004: 291). Americans had come to realise that the Russians were not as technologically backward as they had smugly assumed (Anon. 2007c; Launius 2007a: 141; Zak 2007: Part 8). Even so, Matt Bille has suggested that the extent of the shock has been exaggerated:

It was a shock, but not the earth-shaking panic that seems to be conventional belief these days. There was no public panic, aside from a few famous anecdotes. If you read the media reports of the day, they gauge the public’s feeling as ‘general uneasiness’. Most people were surprised, I think, but not terrified…. There was a lot more concern in Congress … and the media. The major newspapers and newsmagazines did a lot to create a ‘Sputnik panic’ that didn’t really exist. (Quoted in McDade 2003)

Matt Bille, a consultant space and defence analyst who has also written about space history and technology, thinks that a lunar programme might not have developed until the 1970s or 1980s had the United States been the first country to launch a satellite (McDade 2003). The likelihood of such a delay has been suggested elsewhere (e.g. Anon. 2007c), with some writers claiming that Sputnik 1 invigorated the American space programme in general (e.g. Ahlstrom 1989; Zak 2007: Part 8). However, Sputnik 1’s legacy is not limited to being a catalyst for lunar exploration or for encouraging the launching of other satellites; it has also inadvertently spurred on advances in technology in other areas of human endeavour, such as robotics and telecommunications (including television, mobile phones and even the Internet).

Harold J. Noah also comments that ‘government and mass media combined to send the message that America’s security was at stake’, that America was going to be bombed from space. In short, ‘it was crazy and got out of hand’ (quoted in Steiner-Khamsi 2006: 10). While one senior American naval officer attempted to belittle Russia’s achievement by calling Sputnik 1 ‘a hunk of iron almost anybody could launch’ (quoted in Gilbert 1999: 162 and Launius 2007a: 141), E. C. Krupp (1997) states that the official response ‘was outfitted in good sportsmanship’, partly because the period from July 1957 to December 1958 was the International Geophysical Year, ‘a season of scientific fraternity that called for professional acknowledgement’ of Russia’s success (cf. Launius 2007a: 141). In July 1955, the United States had announced its plan to launch satellites during the International Geophysical Year and finally commenced with Explorer 1 on 31 January 1958 (Anon. 2003; Downey 2006: 139; Krupp 1997; Zak 2007: Part 1).

If America’s initial shock has been exaggerated, Sputnik 1’s launch is nevertheless widely credited with bringing about fundamental changes in American government policy in what Krupp (1997) calls ‘a scramble to catch up’ with Russia’s technological lead. Firstly, America sought to change its educational system to encourage more secondary school children to consider a career in science. This led to the passing of the National Defence Education Act in 1958 (Gilbert 1999: 162; Downey 2006: 139; Krupp 1997).1 An October 2007 editorial in The New York Times, however, comments that the resulting ‘advanced science and math curricula developed for school fell out of use [and that] we are again bemoaning a paucity of science and engineering graduates’ (Anon. 2007c). The editorial adds that, currently, ‘there are wistful calls for another Sputnik-like event to goad a re-invigoration of American education and technology’, though it concludes that ‘future space exploration … will likely require close cooperation with other nations, not fearful reaction against their achievements’.

According to Harold Noah, government support was also made available to improve the teaching of modern languages as well as ‘technical education, area studies, geography, English as a second language, counseling and guidance, school libraries and librarianship, and educational media centers’ (quoted in Steiner-Khamsi 2006: 10). In addition, funding was supplied for Soviet studies courses at American colleges and to support American educators in making comparative education tours in the Soviet Union, as well as to underwrite low-interest loans to students to encourage more people to enter university (Steiner-Khamsi 2006: 10, 13-14). As Noah puts it, the National Defence Education Act’s ‘avowed purpose was to keep the United States ahead of the Soviet Union through education, viewed now as a vital tool to help the country win the cold war’ (quoted in Steiner-Khamsi 2006: 10). He adds that not only was there a desire to close the perceived gap between the Russian and American education systems, but also to ‘inoculate’ people in Africa, Latin America and East Asia from the ‘socialist virus’ (Steiner-Khamsi 2006: 15). Even so, Russian studies specialists like Noah were:

… anxious to restrain the political hype. We sought to provide some counterweight to those in the United States who asserted that we had much to learn from the Soviet educational system. We knew that published Soviet statistics were far from trustworthy and that the truth about Soviet schools, higher educational institutions, technical training, and adult education was not necessarily to be found in Soviet publications. (Quoted in Steiner-Khamsi 2006: 13)

Secondly, new institutions were established to assist America’s space programme, including the Advanced Research Projects Agency (ARPA) and the National Aeronautics and Space Administration (NASA) (Downey 2006: 140; Grey 1983: 11; Steiner-Khamsi 2006: 10). The computer network that ARPA developed during the 1960s (called ARPANET) would become the backbone of the nascent Internet in 1969 (Kleiner 1994). Now called the Defence Advanced Research Projects Agency (DARPA), it has in recent years run competitions to encourage groups to develop robotic, self-driving cars (Ahlstrom 2007). NASA’s Apollo programme, which led to the Moon landing in 1969, emerged in the early 1960s as the ‘space race’ between Russia and America picked up speed (Gilbert 1999: 264; Krupp 1997).2 Since the 1980s, it has managed America’s Space Shuttle System.

Meanwhile in Russia, there was only a small news item about the successful launch of Sputnik 1 on the front page of the 5 October edition of Pravda.3 While the newspaper devoted the front page to the event the day after, the public response to Sputnik 1 was, according to Sergei Khrushchev (son of the Cold War Russian premier, Nikita Khrushchev), one of pride, not astonishment, because it had followed earlier Russian technological achievements, including the world’s first nuclear power plant, and Russia’s new prototype fighter, the MiG, which had been setting new world records (Khrushchev 2007; cf. Siddiqi 2007).

Sputnik 1’s Cold War origins American concerns over Russia’s lead in the space race were not merely based on national pride but on a desire to protect the United States from any possible nuclear missile attack launched by Russia. Impressed by Nazi Germany’s V-2 rocket programme, both America and Russia had ‘actively engaged and recruited German scientists and collected rocket information after World War II as the seeds of the Cold War space race were planted’ (Downey 2006: 139). Zak (2007a: Part I) states, however, that the German scientists acquired by Russia played only ‘a minor role in the Soviet quest for 2

Bille would prefer to categorise the competition to launch the first satellite and that to put the first person on the Moon as two distinct space races (McDade 2003), but it seems reasonable to regard them as separate phases of the same race. 3 The report is reproduced as an appendix to Zak (2007: Part 8).


The word ‘defence’ was added to the name of the bill so that it would pass quickly through Congress but money allocated under the Act would be administered solely by the US Office of Education rather than by the military or the secret service (Steiner-Khamsi 2006:16).


GREG FEWER: 2228:34 HRS (MOSCOW TIME), 4 OCTOBER 1957. THE SPACE AGE BEGINS: THE LAUNCH OF SPUTNIK I space’ and returned to Germany in the 1950s. In the West, popular support for rockets was more muted in the 1940s and 1950s because of their association with the V-2 weapons programme (Moore 2005). The involvement of Wernher von Braun, a German SS officer, in masterminding this programme, which not only killed many people when the rockets were deployed, but also involved the deaths of many more slave labourers who were forced to make them, and his subsequent involvement in the Apollo programme, continues to provoke anger among some people today (Gorman 2005a: 89-93; Lytton 2008).

Meanwhile, in Britain, the large newly built Mk1 (later renamed Lovell) radio telescope at Jodrell Bank, Manchester, which had commenced operation the same month that Sputnik 1 was launched, had been ‘widely regarded by the general public as being a waste of money’ (Anon. 2007d; Moore 2005). However, as it was ‘the only radio telescope in the world able to track Sputnik-1’s carrier rocket’ (Anon. 2007e; cf. Anon. 2006), it probably seemed like money well spent after all!

Russia’s interest in developing inter-continental ballistic missiles (ICBMs) had arisen because, as Grey (1983: 10) explains, the Russians had ‘no strategic air force’ at the end of the Second World War, while the Americans’ B-29s ‘were the only aircraft capable of carrying the massive new nuclear bombs’. Rather than try to achieve parity with the Americans in aircraft technology, the Russians turned instead to ‘developing the rocket launcher capability needed to carry their heavy nuclear bombs over intercontinental ranges’ (ibid.). This was because a technological gap between the two nations was not Russia’s only problem. American military bases around the world:

Sergei Khruschev (2007) describes Sputnik 1 succinctly as ‘an 84-kilogram (184-pound) sphere with whip-like antennas’. While Chandler (2007: 10) and Krupp (1997) agree with Khruschev about the satellite’s weight, other writers have offered different figures, such as Ahlstrom (1990), who gives the weight as 80 kg, Launius (2007a: 141), who says it weighed 183 lb, and Man (1999: 39, 42), who also stated it was 183 lb (83 kg). Turnill (1974: 170) gives a more exact figure of 184.3 lb (83.6 kg), which is the metric weight mentioned by Zak (2007: Part 3). There is similar disagreement over Sputnik 1’s size, which has been likened to that of a basketball (Steiner-Khamsi 2006: 10; Velocci 2007; Launius 2007b), a sphere two and a half times larger than a basketball (Launius 2007a: 141), or as a beachball (Chandler 2007: 10; Morring 2003). Zak (2007: Part 3) gives the diameter of the sphere as 580 mm. While one might expect greater agreement in the description of an artefact, the significance of the weight is partly that it was far below the initial aspiration of Sergei Korolev, the chief designer of Sputnik 1, who had proposed to the Russian Academy of Sciences in 1954 that ‘a massive one tonne satellite [observatory] be built and carried aloft by an equally massive but as yet non-existent rocket of his design’ (Ahlstrom 1990). In September 1955, Korolev’s initial specifications for a 1.1 metric tonne satellite were sent to his main industrial subcontractors and to leading politicians (Zak 2007: Part 1). In January 1956, the Russian government formally authorised the development of a satellite initially dubbed ‘Object D’ that weighed between 1.0 and 1.4 metric tonnes (the upper limit of the R-7 rocket’s capability), which was to be launched in 1957 (Zak 2007: Part 2).

The material culture of Sputnik 1

… were capable of delivering nuclear weapons to much of the Soviet Union [while] the US itself remained out of reach because of distance. The answer was heavy lift rockets that could carry payloads to the opposite side of the world. (Ahlstrom 1990; cf. Zak 2007: Part 8) Zak (2007: Part 1) comments that ‘the Soviet ability to strike the US at will, would become the cornerstone of Khruschev’s peaceful co-existence policy’, hence his desire to continue and speed up the ICBM development programme ‘to overcome the US air supremacy’. He adds that ‘paradoxically, Khruschev felt he needed this ability in order to negotiate, compete and cooperate with the West as [an] equal’ (ibid.). Over the 1950s, then, both countries strove to make ICBMs – an arms race concurrent to that of getting the first satellite into orbit. The successful launch of Sputnik 1 (and then Sputnik 2 on 3 November 1957) was consequently seen to underline the military threat that Russia’s technological advances showed. As Krupp (1997) puts it: ‘The Russians were interested in space and could get there. They had big missiles, and they alone knew how many.’

Progress on the satellite observatory was slower ‘because no one had ever designed instrumentation capable of withstanding the vibration of launch and the temperature and pressure changes of space’ (Ahlstrom 1990; Zak 2007: Part 3). Fearing that the Americans would reach space first, Korolev proposed making a satellite that was much smaller and functionally simpler so that it could be launched quickly (Ahlstrom 1990; Khruschev 2007). It was named Prosteyshy Sputnik, which Khruschev (2007) and Zak (200: Part 3) translate as ‘Simplest Satellite’, although Man (1999: 38-9) translates the phrase as ‘Preliminary Satellite’. Zak (2007: Part 3) points out that the ‘proposal to “bypass” Object D

Underlining the link between the nuclear missile threat and Sputnik 1 and 2 was the fact that the satellites had been launched into orbit on ICBMs, and Russia’s powerful thrusters were ahead of American capability at that time (McDade 2003; Anon. 2007c; Grey 1983: 10; Zak 2007: Part 8).


DEFINING MOMENTS with a simple satellite’ emerged as early as November 1956 in which month Nikolai Kutyrkin was assigned the task of designing it. In February 1957, the Russian government formally adopted the plan to launch the ‘simplest satellite’ but only once the R-7 rocket had been successfully launched twice (ibid.).4 Learning of America’s July 1957 successful test of the Jupiter ballistic missile, Korolev was anxious to test Russia’s first ICBM, the R-7, and did so successfully in August and again in September of that year (Khruschev 2007; Zak 2007: Part 6).

Sputnik 1’s weight is also significant because it demonstrated to Americans how powerful the R-7 boosters were compared to their own Vanguard and Jupiter launchers. Americans became more alarmed by the payload weight (1,120 lb [508-9 kg]) of Sputnik 2 because of what this suggested in terms of Russia’s capability in launching nuclear warheads (Man 1999: 43; Turnill 1974: 170). In contrast to Sputnik 1 and 2, America’s first satellite, Explorer 1, weighed only 30.8 lb (14 kg) (Turnill 1974: 47).

The simplicity of Sputnik 1 meant that its most memorable feature was the intermittent beep it transmitted on shortwave radio for three weeks while it was in orbit, a signal which could be picked up by amateur radio operators across the world (Ahlstrom 1990; Khrushchev 2007; Anon. 2007c; Steiner-Khamsi 2006: 10; Zak 2007: Part 7). However, the science side of Sputnik 1’s mission has always been overshadowed by the political and technological achievement of the launch itself. Officially, Russia reported to the United Nations’ Committee on the Peaceful Uses of Outer Space that Sputnik 1’s purpose was the ‘launching of [the] first ever artificial satellite of the Earth [and the] physical study of the atmosphere’ (Morozov 1962: 4). This physical study was possible because the satellite carried instruments that measured the density and temperature of the atmosphere as well as concentrations of electrons in the ionosphere (Turnill 1974: 170; Man 1999: 39; McDade 2003; Zak 2007: Part 3). The density of the upper atmosphere could also be measured by studying the rate of decay of Sputnik 1’s orbit, as could the effect of the ionosphere on the propagation of radio waves (Zak 2007: Part 3).

Yet, despite the historic significance of Earth’s first orbital spacecraft, and notwithstanding claims to the contrary (Poletti 2007), Sputnik 1 officially burned up on re-entry on 4 January 1958 after 92 days and 1,440 orbits (Zak 2007: Part 7) (or 96 days and 1,400 orbits, according to Turnill [1974: 170]). The R-7’s core stage, which was permitted to continue transmitting telemetry after separation from the satellite, orbited the earth 882 times and re-entered the atmosphere on 2 December 1957 (Zak 2007: Part 7). In other words, apart from an arming key that ‘prevented contact between the batteries and the transmitter prior to launch’ (Anon. 2007f), the central artefacts of the material culture of the satellite no longer exist. As both the satellite and the core rocket stage re-entered the atmosphere, they do not form part of the c. 10,000 items of ‘space junk’ now in orbit around earth. These items include ‘fifteen hundred upper stage rockets and myriad explosive bolts and clamp bands, along, of course, with urine and “other” bags’ (Shanks et al. 2004: 66; cf. Rathje 2004). While such junk forms a hazard to future space travel, Rathje has suggested that it ‘is the natural study area of archaeologists’ in space (ibid.; Rathje 1999), a view shared by Gorman (2005b), who refers to it as ‘the cultural heritage of orbital space’.

However, other elements of the satellite’s material culture do still survive. These include the satellite’s launch pad, assembly building and associated structures at Tyuratam (subsequently renamed Baikonur Cosmodrome) in what is now the independent republic of Kazakhstan, as well as the various ground control stations across the former Soviet Union. The team of engineers working on Sputnik 1 had been moved there by September 1957 (Ahlstrom 1990). According to Man (1999: 10), Tyuratam was ‘the Soviets’ biggest launch site’, but only a few years after Kazakhstan’s independence in 1991, ‘the great cosmodrome from which Sputnik and Gagarin were launched was a sorry scene of decay and neglect’. Furthermore, Tyuratam:

… was the scene of a major riot in 1992, when the arrest of a military worker inspired several hundred soldiers to burn three barracks. Equipment was plundered, launch complexes abandoned. As one of the employees remarked: ‘I came here whole, healthy and unharmed and now after serving for two years I am going home sick, a cripple’. (Man 1999: 143)

The problem had been accentuated by the collapse of the Russian economy in the early 1990s, resulting in a drop from over 100 rocket launches a year to just 23 in 1996 and the slashing of spending on the military space programme by 90 percent (Man 1999: 142).

Anatoly Zak offers much historical detail about the various structures at Baikonur Cosmodrome on his Russian Space Web site, and this is summarised here with an emphasis on the material culture of the Sputnik programme (Zak nd. a, nd. b, 2007: Part 5). The launch facilities include a single launch pad at Site 1, which was built between 1955 and 1956 and comprises a massive concrete flame duct; concrete foundations; hollow, steel-framed, concrete base pillars; a three-storey 40 m by 40 m steel launch tower; and a railroad running between the pad and Site 2. The first R-7 rocket took off from Site 1 on 15 May 1957. The launch pad was used for most of the early satellite and all of the early personnel missions, beginning with that of Yuri Gagarin (the world’s first astronaut/cosmonaut) in 1961. As a result, it gained the nickname of ‘Gagarin’s pad’. Until 1966, the pad was kept in ‘battlefield readiness’ as an ICBM launch site, and this resulted in the delay of a Mars probe while a nuclear missile was readied for possible deployment during the Cuban Missile Crisis in 1962. Site 1 was refurbished in 1958, 1962 (following an accidental explosion), 1970, 1979, 1983-4 (following another explosion), and again in 1992. These refurbishments presumably have implications for the survival of original features associated with the launch of Sputnik 1. The pad remains in service, its 400th launch taking place in August 2000.

Built in two phases between 1955 and 1957, Site 2 comprises an assembly building (MIK 2-1) for the R-7 rocket and a command post. Near the assembly building, a military barracks was built, followed later by a hotel for civilian engineers and ‘three country-style houses … to accommodate top managers and officials’ (Zak nd. b). One of the three houses had been assigned to the chief designer, Sergei Korolev, and Yuri Gagarin spent a night there before his flight in April 1961. Consequently, the house is now an official landmark of Baikonur Cosmodrome. A second assembly building (MIK 2A) was constructed in 1957-8 to the south of MIK 2-1, at Site 2A, and contained ‘a water-treatment and boiler complex and a special storage building to house nuclear warheads for the R-7 and follow-on missiles’ (Zak nd. b). While the R-7 rocket and most of its payloads, including the Sputnik series of satellites and later Soyuz and Progress craft, were assembled in MIK 2-1, nuclear warheads were processed in MIK 2A.

The original assembly building (MIK 2-1) was extended in the early 1970s, the extension (called 1A) being used by cosmonauts to suit up and go through final checks before launching. Unlike Site 2A, which ‘remained operational at the turn of the twenty-first century, housing dummy warheads for the latest generation of liquid-propellant ballistic missiles in the Russian arsenal’, the assembly building at Site 2 (MIK 2-1) was abandoned in the mid-1990s (Zak nd. b). Thereafter, the processing of Soyuz and Progress spacecraft was carried out at Site 254 at Baikonur Cosmodrome. As the first (R-7) ICBMs were fitted with radio control, an ‘extensive network of “measurement stations” had to be deployed under the rocket path in order to determine the vehicle’s speed and direction and send flight correction commands onboard’ (Zak 2007: Part 5). Two groups of tracking stations (one of which was in charge of the missile’s flight control while the other tracked test warhead re-entries over the Kamchatka Peninsula) were respectively called ‘Taiga region’ and ‘Kama region’ – vague names intended to confuse the enemy over the precise locations of the stations. An additional series of four pairs of tracking stations (individually numbered from IP-2 to IP-9), ranging in distance between 25 km and 800 km downrange from the launchpad, was set up to triangulate the position of the moving rocket/missile with the command and control station (IP-1) at Site 18, about 1.5 km from the launchpad at Site 1. There were also observation stations on the Kamchatka Peninsula for visual tracking of the re-entering warheads. Each tracking and observation station required the housing of operational and military personnel. An additional 13 scientific measurement stations were set up in 1957 to control and communicate with orbital spacecraft. These stations were collectively referred to as the Command and Measurement Complex (KIK), which had its headquarters on the campus of NII-4, the main research institute of the Russian Ministry of Defence, in the town of Bolshevo, near the town of Podlipki, where the R-7 developer, OKB-1, was located.

With respect to Sputnik 1, the lack of an onboard responder in its modified R-7 rocket meant that the launch vehicle could only be tracked passively by radar and visual observation, whose ranges were limited to 500 km and 200 km respectively. By the time the satellite reached orbit, it would be 1,700 km from the nearest tracking station. Flight controllers therefore had to rely on the rocket’s telemetry data to confirm that the satellite had indeed reached orbit, while Sputnik 1’s beeping radio signal would confirm that the satellite had separated from the core stage of the rocket. As noted earlier, since the Lovell telescope at Jodrell Bank, in Manchester, was the only radio telescope at the time able to track Sputnik 1’s carrier rocket, it could consequently be said to form yet another part of the material culture of Sputnik 1. In October 2007, the telescope’s dish was used as a large outdoor screen onto which a cinematic presentation was projected to celebrate the first fifty years of the Space Age (Anon. 2007g; Anon. 2008a). Structures at two US military research facilities in New Jersey – at Wall Township and the Deal test site – which also detected and recorded Sputnik 1’s signals survive, and so may be worthy of continued conservation. The centre at Wall Township used the signals to study the propagation of radio waves through the atmosphere, and one of the surviving structures there is an antenna of the Diana complex (Zak 2007: Part 8).

[4] Detailed specifications of the components of Sputnik 1 and of the modified R-7 rocket used to launch it are given in Zak (2007: Part 3) and Zak (2007: Part 4), respectively. Object D would subsequently be launched as Sputnik 3 (Zak 2007: Part 2).

Sputnik 1’s impact on popular culture

Downey (2006: 139) makes the point that many people may have heard of Sputnik 1 but not of its American counterpart, Explorer 1, which was launched nearly four months afterwards. This is despite the fact that Explorer 1 was responsible for the discovery of the Van Allen radiation belts with its onboard cosmic ray detector (Anon. 2003; Launius 2007a: 142). Perhaps Sputnik 1 benefits from its unusual name (to English-speakers), thereby making it more memorable. That Explorer 1 should be forgotten may also be due to the fact that it was only the third satellite ever to be launched successfully, but there might also be a general lack of interest today in the race to put the first artificial satellite into orbit, as suggested by the ease with which the Launch Complex 36 towers at Cape Canaveral, Florida, were recently demolished without any apparent public opposition (see below). In 1957, however, amateur radio operators across the globe tuned into the beeps Sputnik 1 emitted, while the satellite itself – together with the third stage of its rocket – came to be tracked visually by ‘volunteer Moonwatch teams in the U.S. and their counterparts in the Soviet Union and the rest of the world’ (Krupp 1997; cf. Gorman 2005b: 345). Many children and college students were excited by being able to see the satellite, or more likely the brighter third stage of its rocket, in the night sky (Ahlstrom 1990; Launius 2007a: 141; Bracher 1997). According to Zak (2007: Part 8), the core stage of the rocket had a magnitude of 1 compared to the satellite’s magnitude of 6, which would explain why most people probably saw the former rather than the latter. During the 1960s, Sputnik 1’s impact on popular culture was diverse:

Television, toys, and games all turned on a dime toward outer space. Long before personal computers and video games would claim the souls of those seeking harmless diversion, manufacturers introduced rockets and space travel into conventional formats of entertainment. ‘Space Race’, a card game devised in 1969 for aspiring space cadets, permitted players to negotiate the hazards of the solar system long before anyone except the Apollo crews went boldly where no one had gone before. (Krupp 1997)

Reflecting the Cold War thinking of the day, the ‘Friendly Satellite’ card allowed a player to draw two cards while ‘the much-reviled “Sputnik” card spells the loss of two turns’ (Krupp 1997).

The image of Sputnik 1 has also made its way onto various forms of ephemera as well as memorabilia, including biscuit tins, stamps, books (e.g. Collins 2007) and documentary/propaganda films (such as The First Soviet Earth Sputniks, directed by Nikolai Chigorin and Maria Slavinskaya and released in the United States in 1958, and David L. Wolper’s 1959 film The Race for Space) (Gorman 2007; Zak 2007: Part 8; Anon. 2007f, 2007i, 2007j).

Toy replicas of Sputnik 1 continue to be made, among a range of other spacecraft, such as the series of miniature models produced by World Space Museum and originally released in Japan in 2003 (Figure 11.1). Each toy in the series retails at $9.95 (Anon. 2007h). World Space Museum also manufactures collectible cards on the same theme (ibid.). For someone with more cash to spend, scale models of various spacecraft can be purchased from a specialist manufacturer, Nick Proach Space Models, that of Sputnik 1’s launch vehicle retailing at no less than $1,495 (ibid.).

Figure 11.1 World Space Museum’s model of Sputnik 1 (courtesy of World Space Museum).

Even the word sputnik has entered the lexicons of many languages across the world, not least that of English, where the Oxford English Dictionary Online defines it as an ‘unmanned artificial earth satellite, esp. a Russian one; spec. (usu. with capital initial) the proper name of a series of such satellites launched by the Soviet Union between 1957 and 1961’ (Anon. 2008b).

Replicas of Sputnik 1 also exist in various museums, the example displayed in the Cosmonautics Memorial Museum, in Moscow, being considered by Zak (2007) to be possibly the most accurate. Other examples can be found in, among other locations, the Air and Space Museum in Le Bourget, France; the Science Museum, London; the National Air and Space Museum at the Smithsonian, Washington, DC; and the aviation museums of Prague and Budapest; while a supposed backup of Sputnik 1 is displayed at The Museum of Flight in Seattle (Zak 2007: Part 3; 2007: Part 8; Anon. 2007k; Mola 2007). Monuments commemorating Sputnik 1 stand in Moscow and in a small park near Site 1 at Baikonur Cosmodrome (Zak 2007: Part 8).

Conclusion

Conserving ground-based space heritage or sites of the Cold War period is not new. During the 1980s, the US National Park Service conducted surveys of the terrestrial ‘historic resources associated with the early American space program (with emphasis on the first moon landing)’ (quoted in Fewer 2007: 6). This involved some 300 sites, out of which 25 were selected as especially worthy of conservation. Similarly, in the 1990s, the Royal Commission on the Historical Monuments of England (RCHME), later to merge with English Heritage, conducted its Cold War Project to record Cold War military installations in Britain, work that has been complemented by other organisations such as the Council for British Archaeology and its Defence of Britain Project (English Heritage 1998; Schofield 2004). British Cold War sites such as Orford Ness have undergone some conservation work, while archaeological survey has also taken place at American sites, such as the Nevada Test Site (Beck 2002).

Yet, despite the growing interest in Cold War and space science tourism (Bell 2006: 95-8), sites of the early space programme in the United States remain at risk of demolition due to health and safety concerns and the demands of commercial development. For example, the gradual deterioration of America’s launch facilities at Cape Canaveral, Florida (Kennedy Space Centre), and the lack of interest in conserving these historic structures has been lamented (McDade 2003). In June 2007, the two service towers at Launch Complex 36, which had been built for NASA’s Atlas Centaur programme in the early 1960s, were demolished to prevent them from becoming safety hazards, due to the corrosion of their steel beams by the sea air, and to make the complex ‘more attractive to prospective tenants who may wish to launch rockets’ there in the future (Powell 2007).

Given the international (let alone Russian) importance of Sputnik 1’s launch, the abandonment of the original Site 2 assembly building (MIK 2-1) and the burning of barracks buildings at Baikonur Cosmodrome in the 1990s did not bode well for the future preservation of the architectural dimension of the satellite’s material culture. However, there are signs of hope that at least some of the heritage of Sputnik 1 will be preserved. Firstly, in January 2004, Kazakhstan and Russia signed a bilateral agreement whereby Baikonur Cosmodrome will be leased to Russia at US$115 million per annum until the year 2050 (Anon. 2005). This will help to secure the cosmodrome’s financial future. Secondly, the city of Baikonur celebrated the fiftieth anniversary of the cosmodrome in 2005, which demonstrates an interest in commemorating its history at both urban and national level in Kazakhstan (ibid.). Presumably, this will encourage the conservation of the material culture of Russia’s early space programme there (and Kazakhstan’s role in it). This is already indicated by the status given to Sergei Korolev’s house as an official landmark of Baikonur Cosmodrome (noted above), while some of the equipment used in tracking early launches, including the Kama relay antenna and a massive theodolite, stands as monuments at Site 18 (Zak 2007: Part 5).

Hopefully, the international community will recognise the value of maintaining the collective cultural heritage of early space exploration both in situ and shared between museums so as to make it increasingly accessible to the public. The alternative is to allow it to be destroyed through neglect or to make way for development, or even to become completely commodified and thus available only to collectors of space memorabilia. The recent fiftieth anniversary commemorations of the Space Age and/or Sputnik 1 may act as a catalyst to preservation and conservation, but they may also encourage greater commodification – two competing interests that will no doubt come into conflict in the years ahead.

References

Ahlstrom, D. 1989. A giant step for mankind, or a Cold War crusade? The Irish Times, 20 July 1989: 11.
Ahlstrom, D. 1990. Soviet scientist recalls launching of Sputnik 1. The Irish Times, 19 November 1990: 12.
Ahlstrom, D. 2007. Developing the ‘social robot’. The Irish Times, 12 November 2007: 64.
Anonymous. 2003. Space ‘gem’ marks sapphire anniversary. CollectSPACE – news, 31 January 2003. Online; consulted 7 July 2007.
Anonymous. 2005. Baikonur Cosmodrome Celebrates 50th Anniversary on June 2nd. PR Newswire, 23 May 2005.
Anonymous. 2006. Jodrell Bank – UK’s greatest unsung landmark. The University of Manchester, Jodrell Bank Observatory, press release 2006/12. Online; consulted 8 October 2007.
Anonymous. 2007a. The 50 years since Sputnik and the next 50. Aviation Week & Space Technology 166, 12: 122.
Anonymous. 2007b. A very cold war [editorial]. Chicago Tribune, 14 August 2007.
Anonymous. 2007c. The legacy of Sputnik. The New York Times, 4 October 2007.
Anonymous. 2007d. Lovell Telescope: 50 years on. BBC Manchester: Science and nature. Online; consulted 8 October 2007.
Anonymous. 2007e. UK Celebrates 50 Years of Spaceflight. ASDNews – Aerospace & Defence News, 4 October 2007. Online at press_detail_B.asp?ID=13654&NID=77484. Consulted 4 October 2007.
DEFINING MOMENTS Anonymous. 2007f. Sputnik program. In Wikipedia, the free encyclopedia. Online at Sputnik. Consulted 13 November 2007. Anonymous. 2007g. Space50: the BIG screen. BBC Manchester: Film, TV and animation, last updated on 5 October 2007. Online at manchester/content/articles/2007/09/19/051007_jodre ll_space50_event_feature.shtml. Consulted 8 October 2007. Anonymous. 2007h. Models & toys. – buySPACE. Online at buyspace/models-toys.html#wsm. Consulted 7 July 2007. Anonymous. 2007i. The first Soviet Earth Sputniks (1958). In Internet Movie Database. Online at Consulted 20 July 2007. Anonymous. 2007j. The race for space (1959) (tv). In Internet Movie Database. Online at com/title/tt0053205/. Consulted 20 July 2007. Anonymous. 2007k. Sputnik 1. In Wikipedia, the free encyclopaedia. Online at wiki/Sputnik_1. Consulted 13 November 2007. Anonymous. 2008a. Space50: in pictures. BBC Manchester: Science & Technology. Online at ies/061007_space50_gallery.shtml. Consulted 29 February 2008. Anonymous. 2008b. Sputnik. Oxford English Dictionary Online. Online (via subscription) at http://dictionary. Consulted 13 February 2008. Beck, C.M. 2002. The archaeology of scientific experiments at a nuclear testing ground. In Schofield, J., Johnson, W.G. and Beck, C.M. (eds), Matériel culture: the archaeology of twentieth century conflict, 65-79. Routledge: London and New York. Bell, D. 2006. Science, technology and culture. Maidenhead and New York: Open University Press. Bracher, K. 1997. The beep heard ’round the world. Mercury 26, 6. Campbell, J. B. 2003. Assessing and managing human space heritage in the solar system: the current state of play and some proposals. Paper presented at The heavens above: archaeoastronomy, space heritage and SETI session of the fifth World Archaeological Congress, Washington, DC, United States of America, 22-26 June 2003. Abstract available online at Consulted 7 May 2006. Capelotti, J. 2004. Space: The Final [Archaeological] Frontier. 


Chapter 12

11 February 1966
Proclamation 43

Martin Hall

Proclamation 43, issued by the South African government on 11 February 1966, marked for destruction a community of more than 60,000 people living close to the centre of Cape Town (Hall 2001). Mostly of mixed descent, ‘Coloured’ in apartheid typology, their suburb of District Six had been declared a white ‘group area’. The destruction of District Six would come to stand for the many similar episodes in other parts of South Africa, and would serve as a rallying point for the internal opposition to state repression that would lead to the collapse of the apartheid state some fifteen years later. More widely, systematic discrimination on the grounds of race in South Africa became the mark against which universal principles of justice and human rights came to be set in the second half of the twentieth century, building on the momentum of the US Civil Rights movement and crystallizing in the iconography of Nelson Mandela. As with other ‘defining moments’, the destruction of District Six acquired its meanings through political action, representation and ‘memory work’ in the years that followed. This chapter traces the strands of these meanings through to the recent past, in which some of the issues set in motion by Proclamation 43 remained unresolved.

The destruction of District Six was not the first episode of what would today be called ethnic cleansing. Three years after the Group Areas Act was passed into law, the government announced its intention of moving communities from the Johannesburg suburb of Sophiatown away from the city centre to what was to become the massive black township of Soweto. The destruction of Sophiatown’s houses began in February 1955 and continued, in the face of extensive peaceful protests, until 1963, by which time all that remained was a number of churches. The suburb was rebuilt as Triomf (‘Triumph’) and was reserved for white ownership and residence. Hence by the time the apartheid government turned its attention to District Six – Cape Town’s equivalent of Sophiatown – the issue of group areas removals was already politically attenuated. While many other communities were destroyed in terms of the Group Areas Act, and these sites of destruction remain largely unmarked and increasingly forgotten, District Six was to become and remain a widely recognized signifier for apartheid and its consequences. The destruction of District Six gained notoriety because the South African government’s project to entrench and extend white privilege through legislated racial discrimination was seen as the exception in an era that embraced universal values. White South Africa was a dinosaur in a time of decolonization and civil rights. But in another sense, apartheid was the apogee of modernism – of the belief in the role and authority of the state to undertake massive exercises in social engineering in disregard of the wishes and rights of individuals. The set of apartheid laws that sought to dictate who could live and work where, to restrict rights of marriage and to punish sexual transgressions would have served well as a case study in James Scott’s Seeing Like a State (1998), although Scott himself did not use it as an example.
Scott’s subtitle, ‘how certain schemes to improve the human condition have failed’, is an ironic epigraph for the final collapse of apartheid modernism in 1990, when Nelson Mandela, who had been among those opposing the bulldozing of Sophiatown in 1955, walked free from Victor Verster Prison.

Legislated segregation

Proclamation 43 was issued in terms of the Group Areas Act of 1950, a key plank in a raft of apartheid legislation that included the Prohibition of Mixed Marriages Act (1949), the Immorality Act (1950), the Population Registration Act (1950) and the Reservation of Separate Amenities Act (1953). Promulgated in the years after the National Party gained power in 1948, this legislation formalized discrimination based on race that had long shaped South African society and access to economic advantage. All South Africans were to be classified and registered according to the racial categories ‘White’, ‘Indian’, ‘Coloured’ and ‘Bantu’ (later ‘African’, with ethnic subdivisions such as Zulu and Xhosa). In turn, racial classification determined rights in terms of where people could live, work or visit, and with whom they could have sexual relations. While presented as ‘separate development’, this system of racial organization sought to ensure that economic advantage remained concentrated in white hands.

High modernism, as Scott shows, depends on a logic that drives forward grand projects for improvement; however dysfunctional such logic may seem from outside its system of argument, the perception of internal consistency is important to those driving the programme forward. Apartheid was an


archetypal modernist bureaucracy – a massive administrative system that redistributed economic benefits to whites by employing them in substantial numbers. The underlying logic was the concept of ability pre-determined by race, a set of assumptions widely accepted as givens in the nineteenth century and the earlier part of the twentieth. The execution of Proclamation 43 was driven forward by an unrelenting administrative bureaucracy of clerks, police officers, surveyors and contractors, who managed the relocation of families and the demolition of their homes over the following decade. They were supported by politicians, clergy and media who provided an ideological apparatus of justification within white South Africa and who, in conjunction with sympathetic governments in the United States and Europe, cast white South Africa as a bulwark against the ‘red tide’ of Soviet and Chinese interests in sub-Saharan Africa.

The way in which race was defined in apartheid ideology is a study in itself. Here, given that the focus is on Proclamation 43 and District Six, are some of the ways in which the category ‘Coloured’ was described and defined. Apartheid racial theory saw personal ability, culture and physical features as ineluctably linked. Thus, for example, ‘Cape Malays’ (a Muslim sub-set of ‘Coloureds’) were defined as comprising ‘racial elements’ drawn from ‘Javanese, Arabs, Indians, Ceylonese, Chinese and Europeans’, and with the following characteristics: ‘small in stature … with an olive skin which is sometimes yellowish, light brown or cinnamon-coloured … flattish face, high cheek-bones, black (slightly slanting) eyes, a small nose, wide nostrils, a large mouth … introspective, polite, kind towards women, children and animals … inclined to speak slowly, to be passive and indolent’ (Du Plessis 1944: 3).

These ‘Cape Coloured’ communities were, then, seen as originating in miscegenation. This presented a problem for apartheid theorists, since such mixing was the very thing that was supposed to be atypical, justifying separate development as natural in human history. As a result, ‘Coloured’ was best understood in the negative: a person who was ‘neither a White, nor an Asiatic, nor a native’, in the words of the official government yearbook. Given that such people were of mixed race, the official yearbook felt it necessary to warn that the racial category included an ‘undesirable class … “skollies”, the habitual convicts and ex-convicts, the drunkards, the daga-smokers, and the habitual loafers’ (Union of South Africa 1953: 1096).

Such a writing of race led inexorably to the view that ‘Coloured’ communities were a social pathology that needed to be eliminated by the state in the interests of improved racial hygiene. It was inevitable that District Six would be seen as a slum that should be cleared. In the words of a contemporary academic apologist, there was an urgent need to ‘shock the public into a realization of the conditions prevailing in these areas’, to make the white voter aware that ‘sub-economic housing still leaves the worst slums untouched’ (Du Plessis 1944: 83).

What was this suburb that attracted such opprobrium? District Six had been so named in 1867. By the early twentieth century the area had become the first destination for many immigrants to South Africa, serving as a dormitory for inner-city industries and dock workers. Residents were employed in clothing, leather working, tobacco, furniture and processed food production, and in a sizable service sector within the suburb: retailing, shop workers, building and transport trades, self-employed tailors, carpenters, dressmakers, seamstresses, shoemakers and cabinet makers (Bickford-Smith 1990; Nasson 1990). General municipal surveys and contemporary photographs show streets, lanes and the block plans of houses. Archaeological work has provided more detail: small houses with front and back rooms, narrow corridors and back yards, and frequent modifications as tenants sought to make the best of crowded circumstances (Hall 1994). Overcrowding was rife and municipal services were poor or non-existent. District Six’s cosmopolitanism resulted in a sense of distinction, defined by a rough, communal character, ‘an environment marked strongly by mutual needs and sharing between families and neighbours, whatever the divisions of income, occupation or religion’ (Nasson 1990: 64). Poverty and hardship were prevalent. In the words of Richard Rive, a writer born and brought up there, ‘it was a ripe, raw and rotten slum. It was drab, dingy, squalid and overcrowded’ (Rive 1990: 111).

Proclamation 43 initiated the process of destruction by setting aside the area for exclusively white ownership and occupation. The state estimated that it would need to move about 62 000 people, and planned to do so within five years. A decade later, however, removals were still incomplete and the costs of compensation, demolition and resettlement were six times the original estimates (Hart 1990). The job was finally done in early 1984, leaving a jagged scar across the foot of Devil’s Peak, described by Richard Rive as ‘South Africa’s Hiroshima’.1

Politics and nostalgia

A first way in which the destruction of District Six gained the attenuated meaning that is characteristic of a ‘defining moment’ was as a rallying point for intensified opposition to apartheid. Media coverage of bulldozers at work and families being moved out to remote, wind-swept suburbs on the Cape Flats gave tangible substance to the dry administration of racial management. Early objections were given momentum by the wave of protests that started in Soweto in June 1976 and spread across the country. Civil society organizations attacked the state behemoth at its vulnerable points: negative



1 Cape Times, 9 January 1986.

publicity forced the state to abandon a plan to house 15 000 whites in a high-rise development; there was widely supported opposition to the destruction of a crèche and a church; an attempt by a multinational company (BP) to redevelop the area was thwarted by an alliance of more than twenty civic organizations and former residents (Hart 1990; Soudien 1990).

Reflective nostalgia for District Six has a lineage that draws on nineteenth-century representations of Cape Town and its people in the work of artists such as Thomas Bowler (Hall 1991). Emile Maurice, writing in the catalogue for an exhibition of the art of District Six, captures this in the concept of a particular sort of outsider 'who stares, who, from the safety of distance, gapes, perhaps with curiosity and intrigue, as he captures objects not subjects – caricatures of people, not people themselves – in his snare, his magical, dexterous and seductive weave of broken lines and subtle textures that so cajoles us to waft on the wings of nostalgia' (Maurice 1995: 20, original emphasis). And Bill Nasson has captured this as the stereotype of the ‘Cape Malay’:

This political work continued after the collapse of apartheid hegemony in 1990 and the first democratic elections of 1994. After initially seeking a common cause, different sets of interests moved to polarized positions. On the one side was the Cape Town City Council, seeking to assert the authority of municipal government through a land trust which would control the redevelopment of the area. Opposed were former residents, organized as the District Six Restitution Front, who were seeking direct restitution or financial compensation.2 This immediate dispute was resolved by a land commission in 1997, which ruled for the former residents; however, discontent and conflict were to continue for more than a decade (Hall 2001).

… exclusively a merry community, with a rich, vigorous and rowdy popular life; a higgledy-piggledy riot of buildings and architectural styles, thronged with characters with an insatiable appetite for conviviality and an insatiable thirst for alcohol; a District Six of January Coon Carnivals, of cackling flower sellers like the durable and celebrated Maria Maggies, of blaring horns from hawkers’ carts during the snoek season … a colourful, legendary place, characterised by the perpetually open front door and cuddly youth from the Globe Gang, helping frail old women across Hanover Street with their weekend shopping from Spracklens or the Parade. (Nasson 1990: 48).

The political momentum that started with Proclamation 43 in 1966 and continued with effective force for thirty years and more was fuelled by memory, continually provoked by the visible evidence of destruction: an open swathe of land close to the heart of the city, framing still-standing mosques and churches. However, there were contesting claims on memory – the happy-go-lucky caricature of the ‘Cape Coloured’ of apartheid race-construction, and the commodification of history for the purposes of a surging heritage industry in the post-apartheid era. Differentiating between these forms of appropriation and the claims to the recognition of rights by former residents and their descendants requires a finer-grained understanding of how memory works within the dynamics of power. In this respect, Svetlana Boym’s distinction between ‘restorative’ and ‘reflective’ nostalgia is useful. Drawing on a range of claims on the past, particularly in Eastern Europe, Boym shows how restorative nostalgia seeks a reconstruction of the lost home in a quest for truth and present rights. In contrast, reflective nostalgia thrives on the feelings of longing and loss in themselves, drawing on ‘the imperfect process of remembrance’ (Boym 2001: 41). Both restorative and reflective nostalgia will invariably call on heritage – on material remnants of the past that can be reimbued and saturated with associations and interpretations (Stewart 1993; Samuel 1994). Both will tend to collapse history into a mythology of the past – ‘the edenic unity of time and space before entry into history’ (Boym 2001: 8). However, and as the case of District Six well illustrates, the political manifestations of these differing forms of nostalgia can be very different.


Figure 12.1 Nostalgia for District Six: poster for a musical recalling the culture of the segregated township

More recently, this ‘imperfect process of remembrance’ has found new vigour in the ‘New South Africa’ that has taken form since the early 1990s. The re-regulation of the casino industry in 1996 has enabled massive investments in ‘entertainment destinations’ by multinational corporations (Hall 2005; Hall and Bombardella 2005; see also Hannigan

2 Mail and Guardian, 8 August 1997.


1998; Ritzer and Stillman 2001; Sagalyn 2001). These complexes are ‘decorated sheds’ – large, steel-framed hangars in which are assembled the technologies of theatre and illusion (Venturi et al. 1977). Cape Town’s GrandWest Casino and Entertainment World is one such complex, equipped with characteristic design features: a central casino and gaming area surrounded by a range of entertainment options that include cinemas, restaurants and shops selling designer goods. Visiting GrandWest – in common with similar destinations the world over – is essentially a theme park experience: a day out for the family with the dangerous edge of gaming, or a night out in a simulated town.

GrandWest’s pitch is heritage and nostalgia for District Six (Hall and Bombardella 2007). The design of its restaurants and specialist retail outlets seeks to evoke the narrow streets, washing-lines and vernacular facades that were destroyed by Group Areas removals – a reincarnation of the caricature mapped out by Maurice, Nasson and others. This appropriation of nostalgia is consistent with Boym’s more general observation: ‘reflection suggests new flexibility, not the reestablishment of stasis. The focus here is not on recovery of what is perceived to be an absolute truth but on the mediation on history and passage of time’ (Boym 2001: 49). It is also consistent with the precepts of the ‘experience economy’ – the appeal to individual experience, and the sale of individualized entertainment (Pine and Gilmore 1999). Reflective nostalgia is not bound by the constraints of ‘truth’ and ‘evidence’, but rather seeks to evoke the spirit of the past in the interests of the individual. Such nostalgia ‘inverts the temporal logic of fantasy (which tutors the subject to imagine what could or might happen) and creates much deeper wants than simple envy, imitation, or greed could by themselves invite’ (Appadurai 1996: 77). Such evocations connect directly with the requirements of the contemporary consumer economy and the management of desire in the interests of profit.

Restorative nostalgia – the drive to reconstruct the lost home as part of a quest for truth and present rights – may use strands of memory, images and objects in common with appropriative evocations. However, the welding of claims for restoration to the politics of opposition to discriminatory and establishment orders makes for a very different set of consequences (Hall 2001, 2006). For example, memoirs such as Linda Fortune’s may appear at face value to be at one with the images evoked by GrandWest: ‘people who grew up and lived in District Six knew everyone who belonged in the area. So did the gangsters, who grew up there and lived there. They recognised strangers immediately, and some of them would linger about, waiting to rob an unsuspecting victim. They never bothered any of us living in District Six’ (Fortune 1996: 58). But such ‘insider’ recollections are part of a genre of activist-directed remembrance that was initiated by Proclamation 43 and has continued unbroken through the years.

This continuing tradition of restorative nostalgia is best shown through the work of the District Six Museum (Rassool and Prosalendis 2001). Opened at the end of 1994, the museum became immediately popular with people who had been dispossessed by apartheid removals and, through its trustees, connected the earlier opposition to apartheid removals with post-1994 campaigns for land restitution. For example, one of the District Six Museum’s most popular exhibits is a display of street signs. Suspended as long banners from the high ceiling of the one-time church, these evoke rich memories of the District’s complex physical and social geography, and of the violence of dispossession. Former residents are immediately drawn to them. In its own particular history, this display also represents the ambiguities and contradictions in the violence of apartheid. Long assumed to have been destroyed with the rest of the District’s architectural fabric, the street signs had in fact been secretly collected and stored by one of the white demolition workers employed by the state. Seeking relief from the burden of his history, the man presented himself and his collection to the Museum shortly after it was opened, as an act of personal reparation.

‘Marking the ground’ in ways such as these has been important in the continuing construction of the memory of District Six. Visitors to the Museum are confronted with a large map of the District spread across the floor, and are encouraged to mark the places where they lived. Bolts of calico are draped over chairs, and former residents are asked to sign their names and recall their memories; many metres of cloth have been marked in this way since the Museum opened in late 1994. In turn, these acts of marking encourage people to talk about their lives. The same concept of marking has been extended into the landscape itself. This was vividly demonstrated in the September 1997 Sculpture Festival (Soudien and Meyer 1997). Most of the Sculpture Festival’s installations made use of the debris of destruction: plastic, ceramic sherds, broken glass, stone, building foundations. Installations included a skeletal tree spray-painted luminous red and orange, with a small cairn of gold foil sand bags nearby – the treasure of memory – and branches touched by the sunset, or by blood. Next to this was a ship fashioned from paper and shredded plastic, with stick-figure goblins swinging in its rigging: a parody of colonial history. Cairns of Hanover Street kerbstones were taped off as a development site (or a crime scene), while further up the slope a ‘garden of remembrance’ had been fashioned from stones, broken glass, ceramic sherds and the other debris of daily life, dug out from just beneath the surface; ordinary artefacts rearranged as a shrine.

District Six, then, has persisted as much more than an idea. Words, music and images are rooted in the scar across the



Figure 12.2 Marking the ground. An installation that formed part of the District Six Sculpture Festival

slopes of Devil’s Peak – a mark of shame and dispossession that serves as a monument – a mnemonic system that makes history tangible (Hall 2001, 2006). For Lefebvre, such 'nonverbal' signs are not merely reducible to words – they have additional qualities, and in particular an ambiguity. This allows a unity of otherwise-disparate meanings, in which repression can be 'metamorphosed into exaltation'. The material thus has a complexity that is more than words alone – a 'horizon of meaning', 'a specific or indefinite multiplicity of meanings, a shifting hierarchy in which now one, now another meaning comes momentarily to the fore, by means of - and for the sake of - a particular action' (Lefebvre 1991: 222). The material world of District Six, then, signals a radically unstable space. Objects are continually reinterpreted and reclaimed, the ground is marked and paced out, and mosques and churches used in defiance of the wasteland. In consequence, the space that is District Six after the years of apartheid’s bulldozers has remained 'lived': active, defiant, contradictory and contested.

Boym, writing from the perspective of Eastern Europe, is worried by nationalistic obsessions with ‘original stasis’ and the ‘prelapsarian moment’. In the political economy of heritage in South Africa, this preference is reversed. Restorative projects, such as those of the District Six Museum, contribute to social justice by mobilizing memory and memorabilia in the interests of contemporary communities. ‘Reflective nostalgia’, in contrast, has been appropriated by investment interests as part of a global trend in individualized entertainment that promotes consumption through desire for a state of life seen as better than the present, but ever just out of reach. The two forms of nostalgia may use the same symbolic sets – photographs, street signs, recollections, household treasures – but the implications will be very different. ‘Restorative nostalgia evokes national past and future; reflective nostalgia is more about individual and cultural memory. The two might overlap in their frames of reference, but they do not coincide in their narratives and plots of identity. In other words, they can use the same triggers of memory and symbols, the same Proustian madeleine pastry, but tell different stories about it’ (Boym 2001: 49; Hall and Bombardella 2007).

Continuing dissent

The instability of District Six has continued to the present (see Beyers 2005). By the end of 1998 – a year after the Land Commission ruled – some 2,500 claims for restitution had been submitted and a trust had been established to represent the interests of former land owners, tenants and traders who had the right to claim in terms of the 1994 Restitution of Land Rights Act. In 1998 the District Six Beneficiary Trust, the City of Cape Town and the Department of Land Affairs signed a Record of Understanding and then, in 2000, a formal agreement intended to enable land restitution for those claimants who had elected not to take financial compensation.3 However, Cape Town’s unstable political environment saw a coalition municipal administration

3 ‘District Six Task Team to fast track housing and commercial development in District Six’. Statement by the Chief Land Claims Commissioner. Argus, Cape Town, 6 September 2007.


DEFINING MOMENTS opposed to the provincial and national ANC governments and the earlier consensus began to unravel.

have been forcibly removed and chased to the wastelands of the Cape Flats’.9

Following a small but successful pilot housing project in District Six, 2007 opened optimistically with the announcement by the District Six Beneficiary Trust that 4000 new homes were planned – more than sufficient for the 2 400 or so remaining tenants and land-owners with a claim to restitution – and that one hundred families would be However, the Trust’s announcement granted houses.4 prompted an immediate protest from former landowners, who claimed that they were being treated unfairly in comparison with former tenants.5 By August, there was open conflict between the District Six Beneficiary Trust and the City of Cape Town – reminiscent of the dispute heard by the Land Commission a decade earlier – over who has jurisdiction for planning the development of the District, leading the Executive Director of Housing for the City to publicly attack the bona fides of the Trust, alleging ‘obstinate self interest’ and attempts to control the allocation of There was the semblance of lucrative contracts.6 reconciliation as the District Six Beneficiary Trust and the warring city, provincial and national government departments agreed to work together as a task team, but this arrangement soon showed its fragility as the 360 former landowners of the District Six Advocacy Committee pressed ahead with their legal challenge and the City of Cape Town announced that it would not defend the action, since it shared the landowners’ concern about the legality of the restitution agreements and the standing of the District Six Beneficiary Trust.7

By the end of 2007, then, the politics of land restitution in District Six had regressed to the conflicts of more than a decade earlier. Former residents and the City of Cape Town were at loggerheads, and were waiting on a decision by the Land Claims Court as to the legality of the restitution process as a whole. There was conflict between former landlords and former tenants – both victims of apartheid removals, but clearly with differing interests. Large business interests, reminiscent of the ill-fated attempt by BP to privatize the reconstruction of District Six in the 1980s, were pushing for lucrative development around the margins of land seen as ‘sacred’ for its associations and memories.

Tension was further heightened when a number of large scale private developments were announced on land that, it was claimed, is not subject to restitution.8 Most controversial of these was the Red Brick Building, with 84 up-market residential apartments for sale off plan. This project was described by Anwar Nagia, founder of the District Six Beneficiary Trust as ‘bloody unfair and insensitive … This is very, very sacred land. There is history on that land. People


‘District Six prepares for new life’. Argus, 13 January 2007. ‘Former District Six land owners claim bids being sidelines for those of tenants’. Argus 26 February 2007. 6 ‘City under fire from District Six trust as it calls for tenders’, Argus 14 August 2007; ‘District Six task team scraps city’s tender for business plan’, Argus 20 August 2007; Hans Smit (Executive Director of Housing), ‘Development of District Six is being put on hold by obstinate self-interest’, Argus 24 August 2007. 7 ‘District Six body contests claims. Call for process to be stopped’. Argus 23 August 2007; ‘Flaws found in District Six restitution plan’, Argus 4 October 2007; ‘Bid to halt building on District Six land. Land claims court considers matter urgent’, Argus 28 January 2008’ 8 ‘Row over District 6 lofts. Angry trust officials call for probe into upmarket development on disputed land’, Argus 20 September 2007; ‘Private developers muscle in on more District Six sites’, Argus 2 October 2007; ‘Land Commission to probe use of District Six land for private developments’, Cape Times, 3 October 2007. 5

Figure 12.3 Installation by Roderick Sauls. Part of the District Six Sculpture Festival, this installation recalls both slavery and the carnival traditions of the city.

Indeed, such issues over land and identity had widened to include contested sites in other parts of the city, with the District Six Museum playing an active role in mobilizing a broader ‘memory community’ (see Malan 2005). Here is Yazir Henry of the Direct Action Centre for Peace and

9 ‘Faircape building steams ahead while others stall. Controversy bubbles over development in District Six’, Argus, 10 October 2007.


MARTIN HALL: 11 FEBRUARY 1966. PROCLAMATION 43

Memory, writing about the controversial excavation of a large burial ground a little way from District Six:

After several years of holding their breath the rich can now move into their luxury apartments and pretend this was never a burial ground; the developers can make good on their investments and breathe a sigh of relief as if it was just another business deal; and the relevant political authorities such as the South African Heritage and Resource Agency, the City of Cape Town and the Ministry of Arts and Culture can pray that their inability to protect ordinary Capetonians from the rapacious greed of an apartheid beneficiary class which continues to benefit in the name of trickle down development will attract only academic interest. (Henri 2008: 9)

Henri marks out a direct link between the uncovering of the burial ground, the history of forced removals in Cape Town initiated by Proclamation 43, and the rights of dispossessed communities and their descendants to restitution. Rather than the residents of District Six pitted against the apartheid government, this is now their marginalized descendants pitted against the City of Cape Town, state heritage agencies and the national government: ‘since a few thousand people have been given symbolic reparation in acknowledgement of such claims and millions have been denied their right to compensation the process of historical excision has been rendered both more complex and seamless’ (Henri 2008: 9).

The persistent landscape of District Six’s destruction, along with burial grounds and other material remnants of the previous form and structure of the city, demonstrates the potency of what Alfredo Gonzalez-Ruibal (2008) has called ‘manifestation’. Gonzalez-Ruibal argues that narrative alone can ‘saturate memory’ and result in trivialization. This directs us to the power of materiality – of the ‘recognition effect’ through which repression can be ‘metamorphosed into exaltation’ (Lefebvre 1991: 220): ‘social space contains a great diversity of objects, both natural and social, including the networks and pathways which facilitate the exchange of material things and information. Such ‘objects’ are thus not only things but also relations. As objects, they possess discernible peculiarities, contour and form. Social labour transforms them, rearranging their positions within spatiotemporal configurations without necessarily affecting their materiality, their natural state…’ (Lefebvre 1991: 220, 77). By building on the work of Lefebvre, Foucault and others, Gonzalez-Ruibal opens up the possibilities for an ‘engaged archaeology’ through allowing the non-verbal to be remembered as ‘places of abjection’ (Gonzalez-Ruibal 2008).

Given the lack of resolution of claims to District Six, and the continuing and prominent existence of the scar that Richard Rive described as ‘Cape Town’s Hiroshima’, it might well be apt to describe this as such a place of abjection, denying closure, and reminding us that the legacy of the ‘defining moment’ of 1966 continues today, and into the future.

References

Appadurai, A. 1996. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota Press.
Beyers, C. 2005. Land Restitution in District Six, Cape Town: Community, Citizenship and Social Exclusion. PhD dissertation: University of Sussex.
Bickford-Smith, V. 1990. The origins and early history of District Six to 1910. In S. Jeppie and C. Soudien (eds), The Struggle for District Six: Past and Present, 35-43. Cape Town: Buchu Books.
Boym, S. 2001. The Future of Nostalgia. New York: Basic Books.
Du Plessis, I. D. 1944. The Cape Malays. Cape Town: Maskew Miller.
Fortune, L. 1996. The House in Tyne Street: Childhood Memories of District Six. Cape Town: Kwela.
Gonzalez-Ruibal, A. 2008. Time to destroy: an archaeology of supermodernity. Current Anthropology.
Hall, M. 1991. Fish and the fisherman, archaeology and art: Cape Town seen by Bowler, D’Oyly and De Meillon. South African Journal of Art and Architectural History 2, 3&4: 78-88.
Hall, M. 1994. Horstley Street, District Six. Cape Town: Research Unit for the Archaeology of Cape Town.
Hall, M. 2001. Cape Town’s District Six and the archaeology of memory. In R. Layton, P. Stone and J. Thomas (eds), The Destruction and Conservation of Cultural Property, 298-311. London: Routledge.
Hall, M. 2005. The industrial archaeology of entertainment. In E. Casella and J. Symonds (eds), Industrial Archaeology: Future Directions, 261-278. New York: Kluwer/Plenum.
Hall, M. 2006. Identity, memory and countermemory: the archaeology of an urban landscape. Journal of Material Culture 11, 1-2: 189-209.
Hall, M. and P. Bombardella 2005. Las Vegas in Africa. Journal of Social Archaeology 5, 1: 5-24.
Hall, M. and P. Bombardella 2007. Paths of nostalgia and desire through heritage destinations at the Cape of Good Hope. In N. Murray, N. Shepherd and M. Hall (eds), Desire Lines: Space, Memory and Identity in the Post-Apartheid City, 245-58. London: Routledge.
Hannigan, J. 1998. Fantasy City: Pleasure and Profit in the Postmodern Metropolis. London: Routledge.
Hart, D. 1990. Political manipulation of urban space: the razing of District Six, Cape Town. In S. Jeppie and C. Soudien (eds), The Struggle for District Six: Past and Present, 117-142. Cape Town: Buchu Books.


Henri, Y. 2008. Building on the foundation of violence and pain. Cape Times, 17 January 2008: 9.
Lefebvre, H. 1991. The Production of Space. Oxford: Blackwell.
Malan, A. 2005. Contested sites: negotiating new heritage practice in Cape Town. Journal for Islamic Studies 24 & 25 (2004-2005): 17-52.
Maurice, E. 1995. The sore on the queen’s forehead. In E. Maurice, District Six: Image and Representation, 14-24. Cape Town: South African National Gallery.
Nasson, B. 1990. Oral history and the reconstruction of District Six. In S. Jeppie and C. Soudien (eds), The Struggle for District Six: Past and Present, 44-66. Cape Town: Buchu Books.
Pine, J. B. and J. H. Gilmore 1999. The Experience Economy: Work is Theatre and Every Business a Stage. Boston: Harvard Business School Press.
Rassool, C. and S. Prosalendis (eds) 2001. Recalling Community in Cape Town: Creating and Curating the District Six Museum. Cape Town: District Six Museum Foundation.
Ritzer, G. and T. Stillman 2001. The modern Las Vegas casino-hotel: the paradigmatic new means of consumption. M@n@gement 4, 3: 83-89.
Rive, R. 1990. District Six: fact and fiction. In S. Jeppie and C. Soudien (eds), The Struggle for District Six: Past and Present, 110-116. Cape Town: Buchu Books.
Sagalyn, L. 2001. Times Square Roulette: Remaking the City Icon. Cambridge: MIT Press.
Samuel, R. 1994. Theatres of Memory. London: Verso.
Scott, J. C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition have Failed. New Haven: Yale University Press.
Soudien, C. 1990. District Six: from protest to protest. In S. Jeppie and C. Soudien (eds), The Struggle for District Six: Past and Present, 143-183. Cape Town: Buchu Books.
Soudien, C. and R. Meyer 1997. The District Six Public Sculpture Project. Cape Town: District Six Museum Foundation.
Stewart, S. 1993. On Longing: Narratives of the Miniature, the Gigantic, the Souvenir, the Collection. Durham: Duke University Press.
Union of South Africa 1953. Official Yearbook of the Union and of Basutoland, Bechuanaland Protectorate and Swaziland. Pretoria: Bureau of Census and Statistics.
Venturi, R., D. Scott Brown and S. Izenour 1977. Learning from Las Vegas: The Forgotten Symbolism of Architectural Form. Cambridge: MIT Press.


Chapter 13

March 1993
The Library of Babel: Origins of the World Wide Web

Paul Graves-Brown

Walking is controlled falling. Without the pull of the Earth, and without the resistance of the ground, we would remain still. That’s why, in our dreams, we can fly, and that’s why, in our dreams, we find it hard to run. (Guest 2007: 360)

Introduction

In his Archaeologies of the Future (2005), Fredric Jameson points out that early versions of Utopia sequestered the ideal realm in space rather than time (see also Jameson 1977). Thomas More’s eponymous country was an island, and only in later centuries were the utopias and dystopias displaced into the future. In this chapter I shall argue that, in the early 1990s, this principle was again reversed: Utopia now became a contemporary enclave, spatially distinct in that it was virtual. This can be seen as a consequence of the then vogue for post-modernity:

The announcement of ‘the end of history’ and the rejection of all future orientated speculation merely displaces utopia from time to space. To this extent postmodernists are returning to the older, pre-18th century, spatial form of utopia, the kind inaugurated by More. (Kumar 1993: 76)

The ‘invention’ of the World Wide Web, its exact date debatable as we shall see below, came about within the context of this Arcadian enthusiasm. This chapter takes as its starting point the paper given by the late Sara Champion at the 1999 TAG conference from which this book originates. However, the nature of the web and the virtual has changed a great deal in the intervening years. Effectively the World Wide Web is now twice as old as it was in 1999. Even if the ‘end of history’ had stifled speculation about the future, the 1990s saw a great deal of academic and media discussion of cyberspace and cyberculture (e.g. Benedikt 1991; Escobar 1994; Featherstone and Burrows 1995; Laurel 1991). What is fascinating is the disjunction between the imaginings of the period 1990-1997 and what has actually transpired: a clear example of just how unpredictable unfolding events really are.

In addition to giving an account of the origins of the Web and the zeitgeist that surrounded it, I will conclude with a consideration of how archaeology fits into the study of the virtual. In her 1999 abstract, Sara Champion pointed out the ephemerality of Web content. But can a discipline primarily concerned with the material world deal with that which has, ostensibly, no physical, tangible existence?

Inventing the World Wide Web

Web 0.5

The World Wide Web was ‘invented’ by Tim Berners-Lee, a British computer scientist working at CERN (Conseil Européen pour la Recherche Nucléaire) in the 1980s and 90s. But as with all inventions, this did not happen in an intellectual or practical vacuum. There existed both prior competences and a wide variety of prototypes (sensu Winston 1986). The Internet (often these days conflated with the Web) had its origins with the launch of Sputnik 1 in 1957 (see Chapter 11). Fearing Soviet domination of space, President Eisenhower set up the Advanced Research Projects Agency (ARPA) which, by the late 1960s, had established the ARPANET – a proto-Internet involving US military and academic centres. ARPANET is often said to have been intended as a means of protecting communications in the event of a nuclear strike, yet many believe that, had such a strike occurred, this would not have worked in practice.

Multiple factors converge to form the context of the Web, too numerous to discuss here in any detail. The Web requires the existence of personal computers with a Graphical User Interface (GUI). These were pioneered by Douglas Engelbart in the 1960s (he invented the mouse), and developed at Xerox’s Palo Alto research centre as part of their quest for a ‘paperless office’. The Palo Alto researchers, particularly Alan Kay, refined the desktop computer concept, its mode of programming and interface, but also developed other key elements that were essential, particularly the Local Area Network (Kay 1977). Many of the innovations of Palo Alto were then sold to Steve Jobs in exchange for stock in his Apple Computer company, and incorporated into the Lisa and later Macintosh computers.

Communications technologies such as e-mail have a long history, since such systems existed in proto form to serve users

of mainframe computers. Based on this and the growing Internet, a key precursor to the Web was USENET, conceived by Duke University graduate students Tom Truscott and Jim Ellis in 1979, which used an email-like, distributed bulletin board system. By the late 1980s other, commercial network systems such as CompuServe and America Online also existed, and electronic communications were widely used by academics. For example, the UK JANET system was established in 1984 and, by 1991, was beginning to implement the Internet Protocol. Equally important, perhaps, was the United States High Performance Computing and Communication Act of 1991, promoted and passed as a result of the efforts of Al Gore. Whilst Gore may not have ‘invented’ the Internet, as he later claimed, nor even coined the term ‘information superhighway’, his Bill had practical effects, not least in the increased funding for the National Centre for Supercomputing Applications (NCSA – see below). Several Internet-based data sharing systems were developed prior to the Web, most notably Gopher, developed by Mark McCahill, Farhad Anklesaria, Paul Lindner, Dan Torrey, and Bob Alberti of the University of Minnesota. The Gopher protocol used a hierarchical link system, visually similar to Microsoft’s later folder-based file structures.
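The structural contrast between Gopher’s hierarchy and the hypertext links of the Web can be sketched in a few lines. This is an illustrative model only – the menu items and page names below are invented, not drawn from either protocol’s actual wire format:

```python
# Hypothetical sketch: a Gopher-style menu modelled as a tree. Every item
# has exactly one parent, so each document sits at a single fixed path
# from the root menu.
gopher_menu = {
    "/": ["/about", "/research"],
    "/about": ["/about/staff.txt"],
    "/research": ["/research/papers.txt"],
}

def gopher_path(tree, target, node="/", trail=None):
    """Return the unique root-to-target path through a menu tree."""
    trail = (trail or []) + [node]
    if node == target:
        return trail
    for child in tree.get(node, []):
        found = gopher_path(tree, target, child, trail)
        if found:
            return found
    return None

# Hypertext, by contrast, is an arbitrary graph: any page may link to any
# other, so the same document can be reached by many different routes.
web_links = {
    "home.html": ["about.html", "papers.html"],
    "about.html": ["papers.html", "home.html"],
    "papers.html": ["home.html"],
}

print(gopher_path(gopher_menu, "/about/staff.txt"))
# prints ['/', '/about', '/about/staff.txt']
```

The design difference matters: in a tree, everything hangs off a single route from the root, whereas a hypertext graph imposes no such order on how documents relate to one another.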

Yet all of these elements remained fragmented in the early 1990s. The Internet was by no means ubiquitous. As Berners-Lee (1999) recollects, CERN and other European science institutions were subscribed to an entirely separate, ISO-derived networking system. USENET and the commercial providers were largely self-contained, and the average computer user outside academia was unlikely to have a modem or other communications hardware.

Web 1.0

Berners-Lee’s ideas for a hypertext-based data sharing system began in 1980, when he created his simple Enquire programme during his first period at CERN. Returning there in the late 80s, he finally put together a proposal for more formal development at CERN in March 1989 (Berners-Lee 1999). Although Berners-Lee is credited with the ‘invention’ of the Web, one should note at this point that he was assisted by a number of people, most notably his CERN colleague Robert Cailliau. By December 1990, the CERN team had a working system on a NeXT computer (a machine developed by Steve Jobs during a period when he had left Apple). This system included the basic HTTP and HTML protocols upon which all of the Web is now based.

The Web standard drew on a variety of pre-existing ideas and technologies. The concept of a Web-like hypertext is often attributed to Vannevar Bush, who described his ‘memex’ thus:

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, ‘memex’ will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory. It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk. (Bush 1945)

In the 1960s Ted Nelson (of whom more below) conceived of a similar system, which he christened Xanadu. By the late 1980s hypertext was widely used in CD-ROMs and help systems, such as that of Windows 3.0 (introduced in 1990). Berners-Lee specifically drew on and simplified the already existing SGML hypertext protocol.

Like many inventions, the Web concept took some time to take off. CERN had not been entirely enthusiastic about the scheme, but other enthusiasts began to take part, most notably staff at the Stanford Linear Accelerator Centre (SLAC), who set up the second ever Web server in 1991. By April 1992 a group of Finnish students had created the Erwise browser for the UNIX X-Windows system. Two other X-Windows browsers, ViolaWWW, created by Pei-Yuan Wei at the University of California, Berkeley, and Midas, created by Tony Johnson at SLAC, also appeared in 1992. However, the turning point, perhaps even the defining moment for the WWW, was when students Marc Andreessen and Eric Bina, at NCSA, University of Illinois, produced the Mosaic browser (initially also for X-Windows) in March 1993. Andreessen seems to have been more practically and commercially minded, and a more serious developer, than others, many of whom were creating software solely as student projects. Working with other NCSA students he developed the Mosaic browser to be more user-friendly and easy to install and, perhaps crucially, to allow images to be incorporated on the same page as HTML text. This allowed Web pages to look ‘cool’ but was at odds with the more research-orientated CERN team; Berners-Lee apparently reacted vehemently against Mosaic’s inclusion of the <img> tag (Reid 1997). By December 1993, versions of Mosaic had been created for both Microsoft Windows and the Mac computer. Freely distributed, the Web concept began to take off; between July 1993 and July 1994 the number of Websites increased from 150 to 3000.

Fig. 13.1 The NeXT workstation used by Tim Berners-Lee as the first Web server on the World Wide Web.

Fig. 13.2 Evolution of the web browser.

The success of the Web can, in part, be attributed to Berners-Lee’s initial enthusiasm in promoting the idea. What is probably more significant is that he intended HTTP and HTML to be simple and freely available. CERN had early on eschewed any claim to copyright. On the same principle virtually all browser software has been made available free and, whilst Netscape and others have sold server software, this too is widely available free, as in the widely used Apache server. The simplicity of the protocols made them relatively easy to implement on all types of computer operating system, whilst the easy installation of Mosaic inspired the rapid spread of the concept. From a commercial point of view, most people realised early on that Web provision itself would not make money, which is probably why Netscape was eventually a commercial failure. Rather, the Web constitutes a platform through which goods and services can be sold. Indeed, it seems probable that any attempt to commercialise the Web would have crippled it. In 1993, just as the WWW was starting, the University of Minnesota decided to charge a licence fee for the software and protocols of Gopher; this effectively killed the concept, since users and developers were wary of incurring such costs.

The later stages of Web development can be reviewed briefly. In December 1994 Microsoft announced that its new Windows 95 operating system would include a Web browser (a decision that led to a number of legal actions). Internet Explorer was based upon Spyglass Mosaic, a spin-off of the Mosaic browser developed by NCSA after Andreessen and his colleagues had left to set up Netscape. The latter had been founded by Andreessen and Jim Clark of Silicon Graphics, releasing the prototype Mozilla browser in October 1994. Crucially, the first commercial version, Mozilla/Netscape 1.0, included the Secure Sockets Layer (SSL), which enabled encrypted financial transactions across the Web; the foundation for all later e-commerce. The period 1995-1996 saw the so-called ‘browser wars’ between Netscape and Internet Explorer; Netscape, among others, objected to the fact that Microsoft ‘bundled’ Internet Explorer with the Windows operating system. However, by 1998 IE had largely prevailed and, despite having made the largest ever Initial Public Offering (IPO) on the NASDAQ stock exchange in 1995, the Netscape company was obliged to sell out to AOL in 1998. Nevertheless, Netscape/Mozilla may have had the last laugh. An open source version of Netscape, again named Mozilla, was offered free in mid-1998. This has been gradually developed as a collaborative project, to the extent that as of 2007 its descendant, Mozilla Firefox, was as widely used as Internet Explorer. One should perhaps note here that other browsers have been developed and continue to be used; Opera, developed by the Telenor company in Oslo, continues to share the browser market, whilst Apple have recently released their Safari as a cross-platform browser.

Consensual Hallucination

It is hard to be sure why the late 1980s/early 1990s were a period of technological utopianism. It may well be that a conjunction of technical, philosophical, scientific and social developments created a climate of enthusiasm and a new world view at this time. At a practical level, the development and spread of personal computers since the 1970s had made the technology both ubiquitous and increasingly powerful. This was underlined by the GUI-based Apple Lisa (1983) and Macintosh (1984) computers, and by Microsoft Windows (1990), by then widely used. There were also, of course, supercomputers, such as those being researched at NCSA where Mosaic was developed, although according to Reid (1997) this field was already considered a bit passé in 1993. In several areas, particularly in artificial intelligence and artificial life research, the virtual was already becoming a ‘reality’ by the early 1990s (de Landa 1998).

The growth in IT and of the Internet interacted with other developments. Since the early twentieth century the work of Cantor, Boltzmann, Gödel and later Turing had undermined the certainties of classical mathematics: the incompleteness of mathematical systems implied a complex relationship between the observer/mathematician and the observed (Hofstadter 1979). This was accompanied by the development of quantum physics, which created further uncertainties about the nature of reality. Not only is the quantum world counterintuitive, but it also implies ‘observer effects’. The nature and the boundaries of the human body had been made ambiguous by medical technologies, both in terms of repair and organ replacement and in the rapid development of prosthetics. Critical theories in the social sciences had led to deconstruction and post-modernity.

All these factors fostered the questioning of the nature of reality, from the cyborgs proposed in Donna Haraway’s seminal article of 1991, to the hyper-reality which Baudrillard

(1994; 1995) lamented as a consequence of post-modernity. To quote Malcolm Bradbury (1987: 63), post-structuralism and deconstruction implied that ‘there is no about about for any thinking to be about’, whilst, conversely, it appeared that an artificial reality or virtual reality was a real (!) possibility. At the extreme, these ideas led to the positions of the post-humans or Transhumanists, and the Extropians, who saw prosthetic and computer technologies as either recreating us as cyborgs or uploading consciousness into computers. The Extropians intend their bodies to be frozen until such technologies are available. Such views might be seen as a consequence of the sense of general disembodiment that characterises ‘late modernity’ (Giddens 1991), a sense of dislocation between the self and the physical body.

At a perhaps less extreme level, advocates of artificial or virtual reality were proposing an immersive experiential domain which could be shared by computer networking. The latter utopian vision was in a sense the convergence of several technologies. On the one hand, a number of virtual communities existed by the beginning of the 1990s, some of which consisted of simple text-based virtual domains that had developed from the game Dungeons and Dragons – the first MUD (Multi-User Dungeon) had been created by Roy Trubshaw at Essex University in 1979 (Guest 2007), to be followed by the more sophisticated MOOs (MUD, Object Orientated). At the same time, simulation technologies, which had been developing since the first post-World War One flight simulators, were becoming increasingly advanced. Indeed, a great deal of this research had been conducted by ARPA, the same agency that originated the Internet. A key figure here was Scott Fisher who, at NASA’s Ames research centre, developed the video goggles and the controlling dataglove which were the foundation of immersive virtual reality. Significantly, it is said that Fisher had been inspired by the film Brainstorm (Trumbull 1983; see below). In the early 1990s, prophetic figures such as Jaron Lanier of the VPL company and John Walker of Autodesk saw head-mounted displays and data gloves as the future of communications. Indeed, Lanier was at pains to emphasise that Virtual Reality was not a replacement for TV but for the telephone; a new form of co-presence (see below).

Fig. 13.3 A VPL Research DataSuit, a full-body outfit with sensors for measuring the movement of arms, legs, and trunk.

One concept that united most of these disparate utopians was that of Cyberspace, the term/idea coined in William Gibson’s novel Neuromancer (1984: 69):

A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts. … A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data.

Cyberspace, it seemed, could be virtual reality, could be an alternative place to exist, could be a place of virtual community. Here it may be noted that Gibson was far from the first fiction writer to propose such ideas; see, for example, Forster’s The Machine Stops (1928) or Brunner’s Shockwave Rider (1975), which includes both the invention of the computer virus and identity theft. In both technological and academic circles, Gibson’s ideas became hot topics of discussion. Many saw the virtual community as the incarnation of the Global Village which Marshall McLuhan (1964) had predicted as the outcome of ‘electric technologies’ (e.g. Benedikt 1991). That the utopianism of the time was derivative of the ideals of the 1960s can be seen from the speakers who often shared conference platforms with Lanier. These included Timothy Leary and John Perry Barlow, one-time lyricist of the Grateful Dead. Another key figure in the development of ideas about virtual reality and cyberspace was Stewart Brand. A researcher at the MIT Media Lab, Brand had set up the WELL, an early virtual community, in 1985. Earlier still, in the 1960s, he had been a member of Ken Kesey’s ‘Merry Pranksters’ and one of the instigators of the (in)famous ‘acid tests’ (Wolfe 1968). Finally, we should not omit Ted Nelson. Nelson had been a key advocate of the social power of the personal computer; he appeared in adverts for the Altair 8800, said to be the world’s first PC. He was also the originator of the aforementioned Project Xanadu: the concept of an inclusive linked system of hypertext, a term that Nelson himself invented. Xanadu is seen by many, including Berners-Lee, as the prototype for the WWW, although Nelson is apparently not an enthusiast for the latter. The utopian view of the virtual, then, owed a lot to the ideals and enthusiasms of the 1960s; the virtual world of cyberspace as an alternative to the altered states of hallucinogenic drugs.
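The core hypertext idea running from the memex through Xanadu to the Web – a document that carries machine-readable pointers to other documents – can be illustrated in miniature. The sketch below uses Python’s standard html.parser in place of a browser; the page content and filenames are invented for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the targets of <a href="..."> anchors in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# An early-1990s-style page: plain structural markup plus hyperlinks.
page = """<html><body>
<h1>World Wide Web</h1>
<p>See the <a href="Project.html">project summary</a> and the
<a href="Technical.html">technical notes</a>.</p>
</body></html>"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # prints ['Project.html', 'Technical.html']
```

It is this embedding of links within the text itself, rather than in a separate index or menu, that distinguishes hypertext from the hierarchical systems that preceded the Web.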

What is interesting is that most discussions in the period 1990-95 lack any awareness of the real ‘virtuality’ that was emerging from the work of Berners-Lee. When it did become successful, it is perhaps no surprise that the WWW acquired the cyberspace epithet. However, in a sense the whole point about the Internet is that it is not a space: it is in effect a set of topological relations that exist independently of time and space (cf. Giddens’ 1991 point about the separation of space and time in modernity). Indeed, by 1993 William Gibson himself had realised that the Internet was not as he had first imagined: ‘The offices that the girl rode between were electronically coterminous – in effect, a single desktop, the map of distances obliterated by the seamless and instantaneous nature of communication.’ (Gibson 1993: 85)

In the late 1990s there were those who believed that the spatial model could be realised through the Web, in particular Marc Pesce who (with a whole range of collaborators) developed VRML (Virtual Reality Markup/Modeling Language), an HTML-like code that could create 3D spaces on the Web. Pesce (nd) has a mystical view of the Web:

If we are to found a new world, admit its discovery and plant our flags here, it is necessary to accept that a manifestation of the sacred (mythos) can exist within the heart of an entirely technical edifice (teknos). Although cyberspace is constructed entirely by human hands, it is sacred, if we are at all sacred.

Heavily influenced by Gibson’s novels, Pesce’s first company, Ono Sendai, was named after the makers of Gibson’s ‘cyberspace decks’. Yet in 1997 it seemed feasible that the Web would evolve into Gibson’s consensual hallucination; even Microsoft incorporated a VRML plug-in into Internet Explorer. Somehow, though, the concept never took off, a fact that surprised many, including Berners-Lee (1999: 180): ‘I expected 3D to really take off, and still don’t quite understand why it hasn’t.’ In many respects the history of cyberspace bears a strong resemblance to that of Ronald Reagan’s SDI ‘Star Wars’ programme of the 1980s. Here too was a situation in which technologists attempted to take the ideas of sci-fi writers and film makers and turn them into reality, cf. Brainstorm (Trumbull 1983) and also TRON (Lisberger 1982 – see Turkle 1997). In the case of ‘Star Wars’ the entire notion was conceived by Dr Jerry Pournelle, in conjunction with a group of other sci-fi authors including Larry Niven, who met at the home of Robert Heinlein. Pournelle then ‘sold’ the idea to Reagan through the medium of Richard Allen, one of his National Security advisors. As with cyberspace, Star Wars never quite became what the writers had imagined, although missile defence is still an active programme, and Pournelle and his colleagues advised the George W. Bush administration on Homeland security (Spinrad 1999).

Web 2.0: Utopia 2.0 or Dystopia 1.1?

It seems uncertain whether the Internet will ever become an immersive ‘place’ (although see below), for in a real sense the Web as it has developed is a much more mundane phenomenon than the utopians (and dystopians) were predicting. It extends what Giddens (1991) terms the ‘collage effect’ that had emerged in the print media of the nineteenth century. Whilst narrative is perhaps fragmented, in exchange we get both immediacy of information and a kind of hypernarrative. Nevertheless, it does exhibit some properties which reflect the hype.

Particularly since the collapse of the ‘dot-com bubble’ in 2001, new aspects of Web use have manifested in parallel with the exponential growth of e-commerce. These are often referred to as Web 2.0 (a term coined by Dale Dougherty of O’Reilly Media in 2003). If we consider the actual ‘constellations of data’ extant on the Web, it is clearly approaching Gibson’s imaginary, albeit not in 3D graphical form. With sites such as Wikipedia, the notion that the Web is becoming a kind of Library of Babel (Borges 1970) is not too far-fetched. As of September 2007 there were around 136 million Websites and an estimated 1.2 billion Internet users. As Web browsing is increasingly incorporated into mobile phone technology, this figure is likely to grow by another order of magnitude. Equally, the growth in social networking, from chat rooms to blogging to MySpace and Facebook, underlines the spread of some kind of virtual community, even if it is more dispersed and fluid in nature than a global ‘village’.



Fig. 13.4 Stonehenges for sale. The author's avatar explores one of several Stonehenges to be found in Second Life. In virtual worlds, as both Slouka and Robins (1995) point out, there is no real basis for morality. Rheingold (1994) had been a particularly strong advocate of the virtual community as a way to reconnect and reconstitute an alienated society. Cyperspace is, ‘one of the informal public spaces where people can rebuild the aspects of community that were lost when the malt shop became the mall’ (Rheingold 1994: 256). Yet it can be argued that the Web/Internet is not an alternate society but an alternative to society. In this it is not alone. The net shares with Disneyland and other focii the characteristic that it has innumerable citizens but no residents, it becomes a simulation of place and being. In this sense the Web/Internet closely resembles More’s Utopia – the name having the double meaning of both Eu-topia, a perfect place, and Ou-topia, a no place. The sense of dislocation is also observable in virtual worlds such as Second Life, founded by Linden Labs in 2002:

What we haven’t got is the Tower of Babel imagined by cyberspace pioneers. Despite the renewed utopianism of Web 2.0, there is and always has been a darker side to the Web. As early as 1996 the W3 Consortium was forced to consider whether anything could be done about the huge amount of pornography already on the net (they decided it could not). According to one industry insider, ‘Mosaic was the pornographer’s wet dream’ (Channel 5: 2007). Indeed the same commentator went on to argue, perhaps with some justification, that the use of the Web to display pornographic images was one of the principle reasons for its initial growth. Whilst governments attempted to legislate pornography off the Web (see Levinson 1997) they failed. But even before the Web was widely known, warnings of the dystopian aspects of cyberspace had been voiced. The net can be seen as a challenge to identity that parallels the death of the author and the post-modern turn (c.f. the point about disembodiment above). As we are all well aware the Internet allows people, for a variety of reasons, to present themselves as someone they are not. Yet the reality of resultant events is questionable. Slouka (1995: 47) recounts an incident in 1993 where a number of ‘residents’ of the text based LambdaMOO were ‘raped’ by a rogue participant, but as he points out: ‘The woman in Seattle hadn’t been raped. … She’d been the victim of a sort of High-tech obscene phone call – disturbing, even upsetting, but certainly not rape.’

We wanted each other, and we didn’t want each other. Virtual worlds were a kind of solution to that tension, between self and other; a way to be together when we felt so alone. In virtual worlds, we could come together, but also keep each other at a safe distance (Guest 2007: 162). Second Life, and other virtual worlds that have sprung up as a product of the Web and the Internet, began with similar 129

DEFINING MOMENTS utopian ideals to those of Lanier or Pesce in the mid 1990s. Yet they have already become the venue for abuse, theft, blackmail and a form of cyber terrorism. Indeed, Linden Labs supposedly contacted the FBI concerning the latter, but were unable to pursue the matter given the ambiguity concerning ‘property’ held within a virtual world (Guest 2007 – see below).

Yet this has led to the creation of ‘virtual sweatshops’, often based in China or Eastern Europe, where workers create value within the virtual world for the sole benefit of the owner of the computers and Internet connection (Guest 2007; Krotoski 2005; Thompson 2005). Early critics of cyberspace were concerned that it was too much of a fantasy, that the aspiration was for a situation in which ‘we had isolated our perfect machine world from the irrational, hideous world of trees, birds, animals’ (Zamyatin 1921: 89). The practical situation, as of 2008, is generally much more mundane: mostly high-tech catalogue shopping. Perhaps if the Concorde version of cyberspace envisioned by Jaron Lanier or John Walker had been created, rather than the 747 version of the Web, the fantasy aspects of Utopia would be more of an issue. Yet the net as it exists is both mundane and very real in its consequences; after a slow start, commerce and capitalism moved into cyberspace in a big way. There have also been innumerable cases of child pornography rings using the Internet.

In the late twentieth and early twenty-first centuries we have what Giddens (1991) calls ‘reality inversion’. Although we still exist in an embodied place, so much of our experience is at a distance; it is, in the currently fashionable term, a function of globalisation. Interestingly, Giddens (1991: 169) foresees the experience described by Guest: ‘Reality inversion, indeed, may often be a functional psychological reaction ... an unconscious neutralising device’. In a curious way this experience at a distance is mirrored at a practical level in the technology of the net. When Java was introduced as a programming language that could create online programmes, Netscape realised that the browser could become an alternative to the computer desktop, and that (in the context of the browser wars) they could hijack applications away from operating systems to run entirely over the net via the browser. Although this was not to be at the time, such functionality is one aspect of Web 2.0, which on one level may be an advantage (breaking the stranglehold of Microsoft and Apple) and on another a danger (making users yet more dependent on their ISP). What it certainly means is that even the software we use becomes dis-placed.

It is easy to forget some of the dire warnings concerning the computer and the Internet. In the 1990s it was common to see icon-based GUI technologies as the harbinger of a post-literate society (Porush 1998). Indeed Porush argues that the Internet is the greatest change in human history since the invention of the alphabet. Again this might be hype, but it is interesting to note the similarities between the phonetic alphabet and the protocols of the Web, specifically that both have the advantage of being able to represent different ‘languages’ on a variety of different ‘platforms’. Be this as it may, post-literacy has not materialised yet, except in the form of a new literacy based on the ‘txt msg’ (interestingly, text messages revert to the earliest form of the alphabet by omitting vowels).
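The point about vowel omission can be made concrete. A throwaway sketch in Python (the function name is my own, purely illustrative) shows the ‘reversion’ at work: stripping the vowels from English text leaves it largely legible, much as the consonantal scripts from which the alphabet descends were:

```python
def disemvowel(text: str) -> str:
    """Drop vowels, txt-msg style, keeping spaces and consonants."""
    return "".join(ch for ch in text if ch.lower() not in "aeiou")

print(disemvowel("defining moments"))  # dfnng mmnts
```

The output remains readable precisely because, as in the earliest alphabets, most of the information in written English is carried by the consonants.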

Critics of Web 2.0 take up some of the earlier concerns and express new ones. Early Internet activity, including USENET and to some extent the Web, relied solely on peer-to-peer (P2P) interactions. However, the DSL Internet connections that most people now use are asymmetric: unlike earlier dial-up or ISDN connections, their download speeds are significantly faster than their upload speeds; the user is increasingly the consumer and may be prevented from equal participation in the net. Some ISPs, at least, forbid users to use their connection to run their own Web server. The counter to this would, presumably, be that many of the new aspects of the Internet are interactive. Social networking, wikis, customer feedback, blogs, etc. are able, in the words of Tim O’Reilly (2005: 2), to ‘harness collective intelligence’. Moreover, most do not require users to have either the software or the know-how needed to create their own integrated Web site. This is genuinely closer to what Berners-Lee had originally intended: a system that would let users seamlessly both browse and create content. But again, the problem is that whilst the content of YouTube, MySpace or Facebook is created by its users, these sites are corporately owned and become valuable pieces of real estate as a consequence of the unpaid labour of this new proletariat (Kleiner and Wyrick 2006). In Second Life, users do own what they create, unlike Sony Online Entertainment’s EverQuest, where world content remains the property of the company.

PAUL GRAVES-BROWN: MARCH 1993. THE LIBRARY OF BABEL: ORIGINS OF THE WORLD WIDE WEB

Since the invention of writing, if not of language itself, there has been an increasing division in human affairs between transportation and communication (Levinson 1997). When the only means of communication was speech, co-presence was inescapable. Written texts allowed communication at a distance. The telephone inaugurated a ‘phoney’ co-presence. For both the utopian and dystopian visionaries of the early 1990s, it seemed that what was in prospect was a collapse of this division: cyberspace and virtuality were to become both communication and travel, with VR, as Lanier saw it, replacing the telephone. In some trivial sense this might be true: the online virtual world Second Life has its own economy where real fortunes are to be made. But in RL (Real Life) the effects of the net for most, if not all, of us are still matters of communication, and of the ways in which new media of communication create new configurations of the message. An acquaintance recently attempted suicide, partly because of things that had been said about him/her on MySpace. Whatever the fantasy elements of the virtual, it is still on the reality side of the one-way mirror of the screen where the real action takes place. We may ‘go to’ a Web site, but this is not yet teleportation (Levinson 1997).

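Indeed, at the level of the protocols, ‘going to’ a Web site is nothing more than a brief exchange of structured text. A minimal sketch in Python (standard library only; the host name and the canned response below are illustrative assumptions, not a record of any real exchange) shows roughly what happens when a browser fetches a page:

```python
# A Web 'place' reduced to its protocol: compose the kind of HTTP request a
# browser would send, and parse the kind of response a server would return.

def build_request(host: str, path: str = "/") -> str:
    """Compose a minimal HTTP/1.0 GET request."""
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

def parse_response(raw: str):
    """Split a raw HTTP response into (status_code, headers, body)."""
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    status_code = int(lines[0].split()[1])  # status line, e.g. "HTTP/1.0 200 OK"
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return status_code, headers, body

request = build_request("example.org", "/index.html")

# A canned response, of the kind a server might send back:
response = ("HTTP/1.0 200 OK\r\n"
            "Content-Type: text/html\r\n"
            "\r\n"
            "<html><body>Hello</body></html>")

status, headers, body = parse_response(response)
```

Everything else a real browser adds (cookies, caching, encryption) elaborates this same request/response pattern; there is no space to traverse, only messages passed between machines.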
Reality Testing

To what extent, then, is an archaeology of the Web or the Internet possible? From the start, it seems to me, there may be a problem analogous to Gilbert Ryle’s ‘category mistake’ (1949). In Ryle’s classic formulation a visitor is shown around a university. Having seen all the buildings and facilities he (sic) asks, ‘but where is the university?’ In terms of material culture, we could look at all the computers, client and server, and the communication systems that connect them together. Yet none of this actually constitutes the World Wide Web, and in any case big chunks of this infrastructure serve entirely other purposes and have existed since the implementation of telegraphy in the nineteenth century. The physical technology is a necessary condition for the existence of the Web, but it is not sufficient. The Web only exists as a set of protocols (TCP/IP, HTTP, HTML, PHP etc.) and as pieces of software, essentially the server software and the wide variety of browsers. Plus, of course, Web content.

But can we consider software a legitimate concern for students of material culture? In a sense it has no physical existence, or rather it falls within the world of uncertainty that is quantum physics, mentioned above. Bits of computer data are instantiated in quantum states within the transistors of computers. This, among other things, is why every instance of a particular version of a Web browser is identical to another: its components are defined at a quantum level which is universally identical. Indeed computers would not work if this were not the case; failures of ‘parity’ result in data corruption. Nor is software quite what is meant by the UNESCO convention on intangible heritage. Article 2 paragraph 2 defines this as comprising:

(a) oral traditions and expressions, including language as a vehicle of the intangible cultural heritage;
(b) performing arts;
(c) social practices, rituals and festive events;
(d) knowledge and practices concerning nature and the universe;
(e) traditional craftsmanship.

In other words, knowledge that exists in people’s heads, but not, it seems, knowledge that exists in the ‘minds’ of computers. Nevertheless, I would argue, pieces of software are by definition artefacts. They may not be objects, in that their physicality is ambiguous, yet they might be ‘things’ in the sense that Brown (2004) defines in his ‘thing theory’, in which the term ‘thing’ need not refer to an artefact or an object. Clearly software and Web content have been made by people and, unlike straightforward documents, programmes and applications are tools; they actually do something. Here we would have to allow that most Web pages are more than documents; indeed one might go so far as to say that hypertext is in principle a tool as well as a document, in that it contains links, video, sounds and pieces of software (Java applets etc.). In this sense Web content is quite different from the traditional materials of the historian, but is equally not the traditional matter of archaeology.

We have, then, a category of what we might call ‘intangible artefacts’, and it might not be unreasonable to trace this back to the earliest forms of data encoding, such as the Jacquard loom or the phonograph. For these too are not documents but tools or components of tools. To take what might seem trivial examples, consider the routine features of the desktop and GUI, all of which feature on the Web. Being mouse-driven, there is a proliferation of buttons and sliders which, in many cases, skeuomorphically imitate components of real-world technologies. We don’t actually need these, and indeed one can now find many Web sites that have dispensed with buttons, since in practice one is just ‘clicking on’ a defined area of the screen. The graphical button eases the user between the physical and virtual worlds. This tells us much about how new technologies are accepted and how they often encompass the properties of other ‘media’ (see McLuhan 1964). The ‘mode d’emploi’ of the Web is not one of didactic explanation, but opts for analogy with the ‘real’ world, thereby eliminating any need for lengthy instructions and making it immediately accessible and comprehensible (Akrich and Boullier 1991). Such an approach has its origins with technologies such as subscription cable TV and, in France, with the Minitel system (Akrich and Boullier 1991), which, incidentally, is still used in parallel with the Web.

Ultimately, the design of the Web, the nature of the tool, is by no means trivial. As Akrich and Boullier (1991) point out, the actions we take in such virtual systems involve us in what it is fashionable to call ‘actor networks’ and, crucially, can have serious consequences. The most common example is where one is asked to repeat some action to confirm a request. Like signing a document, this can constitute a legal commitment, an important consideration when increasing numbers of financial and administrative actions are carried out online. In a similar sense, as noted above, there are also questions as to the status of online ‘property’. In this light, I suggest, the design of the tools that we use in the virtual world, and the nature of their relationship to real-world analogues, can have important implications for our understanding of the nature of twentieth- and twenty-first-century society. This suggests that archaeology, with its depth of understanding of material culture, can have an effective role in the exploration of the virtual.

Finally, we might also consider the way people relate to the Internet world, bearing in mind that the Internet is more than just the Web. For instance, it is interesting that in the aforementioned Second Life the self is represented by an avatar, a (generally) human-like figure which is manipulated as a kind of proxy body. This is some way from the utopian version of Virtual Reality, which would call for us to experience the virtual from a first-person point of view, ‘as if’ one were actually ‘there’, and perhaps pay the consequences of this (cf. the films Brainstorm and TRON mentioned above). Second Life could have been created this way, but we might speculate that users prefer a sense of distance in their relationship to their avatar; that the one-way mirror of the screen lends them a certain sense of security: ‘In virtual worlds, we could come together, but also keep each other at a safe distance’ (Guest 2007: 162). It may be significant here that the largest element of the economy of Second Life is the purchase of clothing, often from RL fashion designers, to dress one’s avatar. The ‘Flatland’ (cf. Abbott 1884) of the desktop and most of the Web offers a sense of privacy which is actually preferred to the immersive Utopia envisaged by its early enthusiasts. It is an aspect of the privatisation of experience (see Graves-Brown 2009) precisely because it allows us a degree of separation from cyberspace.

Acknowledgements

Firstly I should thank the late Sara Champion, whose paper for the 1999 TAG Conference formed the basis and inspiration for this chapter. Tom Cullis made many useful suggestions in the course of discussing the topic. I should also thank Madeleine Akrich for help in replacing a lost article.

References

Abbott, E. A. 1884. Flatland: A Romance of Many Dimensions.
Akrich, M. and D. Boullier, 1991. Le mode d’emploi: genèse, forme et usage. In D. Chevallier (ed.), Savoir Faire et Pouvoir Transmettre, 113-132. Paris: Éditions de la Maison des sciences de l’homme.
Baudrillard, J. 1994. Simulacra and Simulation. Ann Arbor: University of Michigan Press.
Baudrillard, J. 1995. The Gulf War Did Not Take Place. Sydney: Power Publications.
Benedikt, M. 1991. Cyberspace: First Steps. Cambridge, Massachusetts: MIT Press.
Berners-Lee, T. 1999. Weaving the Web: The Past, Present and Future of the World Wide Web by its Inventor. London: Orion Business.
Borges, J. L. 1970. Labyrinths. Harmondsworth: Penguin.
Bradbury, M. 1987. Mensonge. Harmondsworth: Penguin.
Brown, B. 2004. Thing Theory. In B. Brown (ed.), Things, 122. Chicago: University of Chicago Press.
Brunner, J. 1975. Shockwave Rider. London: Dent.
Bush, V. 1945. Consulted online, 16 April 2008.
Channel 5, 2007. How Porn Conquered the World. TV documentary broadcast Wednesday 26 September 2007, 11:05 pm-12:05 am.
de Landa, M. 1998. Virtual environments and the emergence of synthetic reason. In J. Broadhurst Dixon and E. Cassidy (eds.), Virtual Futures, 65-78. London: Routledge.
Escobar, A. 1994. Welcome to Cyberia: Notes on the anthropology of cyberculture. Current Anthropology 35, 3: 211-230.
Featherstone, M. and R. Burrows, 1995. Cultures of technological embodiment: An introduction. Body and Society 3-4: 1-20.
Forster, E. M. 1997 [1928]. The Machine Stops. In The Machine Stops and Other Stories. London: Andre Deutsch.
Gibson, W. 1984. Neuromancer. London: Gollancz.
Gibson, W. 1993. Virtual Light. London: Viking.
Graves-Brown, P. 2009. The privatisation of experience. In C. Holtorf and A. Piccini (eds.), Contemporary Archaeologies: Excavating Now, 203-215. Berlin: Peter Lang.
Guest, T. 2007. Second Lives: A Journey through Virtual Worlds. London: Hutchinson.
Haraway, D. 1991. A Cyborg Manifesto: Science, technology and Socialist-Feminism in the late 20th century. In D. Haraway, Simians, Cyborgs and Women: The Reinvention of Nature, 149-182. London: Free Association Books.
Hofstadter, D. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.
Jameson, F. 1977. Of Islands and Trenches: Neutralization and the production of utopian discourse. Diacritics 7, 2: 2-21.
Jameson, F. 2005. Archaeologies of the Future: The Desire Called Utopia and Other Science Fictions. London: Verso.
Kay, A. 1977. Microelectronics and the personal computer. Scientific American 237, 3: 231-244.
Kleiner, D. and B. Wyrick, 2006. InfoEnclosure 2.0. Mute 4, 2.
Krotoski, A. 2005. Virtual trade gets real. The Guardian, Thursday 16 June 2005.
Kumar, K. 1993. The end of socialism. The end of utopia. The end of history. In K. Kumar and S. Bann (eds.), Utopia and the Millennium, 63-80. London: Reaktion.
Laurel, B. 1991. Computers as Theatre. New York: Addison Wesley.
Lefebvre, H. 1971. Everyday Life in the Modern World. Harmondsworth: Allen Lane.
Levinson, P. 1997. The Soft Edge: A Natural History and Future of the Information Revolution. London: Routledge.
Lisberger, S. 1982. TRON.
McLuhan, M. 1964. Understanding Media: The Extensions of Man. New York: McGraw Hill.
O’Reilly, T. 2005. What is Web 2.0: Design patterns and business models for the next generation of software. Online at news/2005/09/30/what-is-web-20.html?page=2. Consulted 26 April 2008.
Pesce, M. nd. CyberSamhain Ritual. Online. Consulted 4 March 2008.
Porush, D. 1998. Telepathy: Alphabetic consciousness and the age of cyborg illiteracy. In J. Broadhurst Dixon and E. Cassidy (eds.), Virtual Futures, 45-64. London: Routledge.
Reid, R. 1997. Architects of the Web: 1,000 Days that Built the Future of Business. Chichester: Wiley.
Rheingold, H. 1994. The Virtual Community: Finding Connection in a Computerised World. New York: Secker and Warburg.
Robins, K. 1995. Cyberspace and the world we live in. Body and Society 3-4: 135-156.
Ryle, G. 1949. The Concept of Mind.
Slouka, M. 1995. War of the Worlds: Cyberspace and the High-Tech Assault on Reality. New York: Basic Books.
Spinrad, N. 1999. From Jules Verne to Star Wars: Too high the moon. Le Monde. Online. Consulted 4 March 2008.
Thompson, T. 2005. They play games for 10 hours - and earn £2.80 in a virtual sweatshop. The Observer, Sunday 13 March 2005.
Trumbull, D. 1983. Brainstorm.
Turkle, S. 1997. Life on the Screen: Identity in the Age of the Internet. New York: Simon and Schuster.
Winston, B. 1986. Misunderstanding Media. London: Routledge.
Wolfe, T. 1968. The Electric Kool-Aid Acid Test. New York: Bantam Books.
Woolley, B. 1993. Virtual Worlds: A Journey in Hype and Hyperreality. Harmondsworth: Penguin.
Zamyatin, Y. 1993 [1921]. We. Harmondsworth: Penguin.


Chapter 14

0053 hrs, 12 October 1998
The Murder of Matthew Wayne Shepard: An archaeologist’s personal defining moment

Thomas A Dowson

To be nobody but yourself in a world that's doing its best to make you somebody else, is to fight the hardest battle you are ever going to fight; and never stop fighting.
E. E. Cummings

On the evening of Thursday 8 October 1998 Matthew Shepard, a young gay student from the University of Wyoming in Laramie, USA, was lured out of a student bar. Two men, who led him to believe they were also gay, took him to a remote spot on a country road on the outskirts of town. As they drove him away they hit him on the head with a pistol. Matthew was then dragged from their truck, his hands tied together behind his back and then tied to a fence post four inches above the ground. He was beaten over the head with a baseball bat, burned and robbed, and finally left lying on the ground to die in near-freezing temperatures.

Some twenty hours later Matthew was found by two passing cyclists, who at first mistook his body for a scarecrow flopped on the ground. In describing Matthew’s brutally disfigured face, one of the cyclists said it was covered in blood except for streaks that had been washed clean by his tears. He arrived at hospital in a critical condition, having sustained severe head injuries; his skull was so badly fractured that doctors were unable to operate. He lay on a life-support system in a coma while news of this brutal attack spread around the world. On Monday 12 October at 12:53 am Matthew died.

This brutal and senseless murder of a young man with his whole life ahead of him not only had an enormous impact on sexual politics at the time (for an account of the politics in the aftermath, see Loffreda 2000); it was also a deeply personal defining moment for me. Not because I am queer, but because I am an archaeologist. The sheer brutality of Matthew’s death prompted me to think very carefully about archaeology and homophobia, and in 1998 it forcibly brought home to me the necessity for the queering of archaeology. Rather than analysing here the senseless murder of a young man in true normative archaeological fashion, I take this opportunity to continue my queering of archaeology (Dowson 2000; 2006) by exploring how the murder of a gay young man forced me to examine my own queer standpoint and its relation to the work I do as an archaeologist, and how I have come to use that standpoint to challenge archaeology’s complicity in Western society’s institutionalised homophobia.

A personal defining moment

When I heard this awful news various thoughts tumbled about in my mind: gay-bashing — fear — Laramie — archaeology. I suppose initially I thought, perhaps along with other gay men and women, ‘there but for the grace of god go I.’ Despite the rhetoric of liberal politicians, gay and straight, homophobia is as rife and as violent now as it has ever been; perhaps more so. And yes, there are times when I feel threatened and scared. I remember thinking of two archaeology students who had ‘come out’ to me in confidence. I was reminded of their fears and anxieties, and the burden of ignorance they carried for their peers and family. I relived my attempts to alleviate their fears. But in the face of such a gruesome tragedy my words could never be reassuring; they rang hollow and meaningless in my own ears. My heart went out to them, and to the many other people negotiating the minefield of coming out. I knew only too well how trapped and even desperate some of them must have been feeling.

I thought also that it was inconceivable that such a savage assault could have taken place in that town, a town that, up until then, had held such pleasant memories for me. In 1992 and 1993 I visited Laramie a few times while on research leave from my post in the Rock Art Research Unit, University of the Witwatersrand. I visited the Department of Anthropology at the University of Wyoming, and in 1993 I gave the banquet address at the Wyoming Archaeological Society’s annual conference. I met a number of interesting archaeologists in Wyoming, and I was made to feel very welcome there. In October 1998 I wondered what they were thinking. To them and the world, I was sure, archaeology seemed so far removed from the events surrounding Matthew’s death that no one could possibly imagine a connection. Slowly but surely I began to realize there was indeed a connection.

heterosexuality and homosexuality that began to be the primary defining characteristic of men and women from the end of the nineteenth century onwards. Sedgwick argues that the distinction resulted from a homophobic desire to devalue one of those oppositions. Consequently, homosexuality is not symmetrically related to heterosexuality — it is subordinate and marginal, but necessary to construct meaning and value in heterosexuality (Sedgwick 1990: 9–10). In her groundbreaking study Sedgwick argues that this asymmetrical relationship between homosexuality and heterosexuality has been at the heart of every form of representation since the start of the twentieth century; and also, I argue, those produced within the disciplinary culture of archaeology.

During October 1998 I was writing an article for a Catalan archaeological journal, Cota Zero, in which I was exploring the relationship between archaeology and homosexuality (Dowson 1998). Specifically though, I was reading David Halperin’s Saint Foucault: Towards a Gay Hagiography (1995) when I heard the news of Matthew’s death. It was, I believe, simultaneously hearing the news from the United States and reading Halperin’s book that caused two aspects of my life to collide. At once, I was fearful of and angry at the homophobia that resulted specifically in Matthew’s death and the homophobia gay men and women confront regularly. But also I was reading of Halperin’s account of his experiences as a gay academic. It was then that I started to question my responsibilities as an archaeologist in the light of my sexuality, something I had not allowed myself to do in any way previously.

To challenge this assymetry in archaeology I draw on that diverse body of critical thinking we know of as queer theory. I adopt, like scholars such as Sedgwick and Halperin, a Foucauldian analysis and analyse the homophobic/ heteronormative discourse of archaeology in terms of its overall strategies. Instead of trying to play the game of archaeology harder, better, more intelligently, or more truthfully (the way in which the discipline has developed until now), I look at the rules of archaeology to find out ‘how the game is set up, on what terms most favourable to whom, with what consequences for which of its players’ (Halperin 1995: 38).

In fact, for some time after ‘coming out’ I emphatically believed my sexuality had absolutely nothing to do with my being an archaeologist. And with the growing interest in gender studies in archaeology then, I was determined not to get too actively involved. I know a number of lesbian, gay, and bisexual colleagues, in various disciplines, who have had, and still have, similar reactions. This sort of reaction does not result from being unsympathetic toward issues of gender. Rather, it derives from an unspoken social rule whereby academic men and women are forced to maintain an authority to act by denying or downplaying their sexuality. Although writing specifically of himself and his intellectual and political relationship to Foucault, but acknowledging a much wider social relevance, Halperin explicitly expresses the status quo many if not all of us gay, lesbian, and bisexual academics find ourselves in. We all share the problem of how to

My queerying of archaeology then aims to disrupt the way in which epistemological privilege is produced and recursively re-produced, and therefore constantly maintained in archaeology. I recognise its control in three crucial aspects of the disciplinary culture (Dowson 2009b). First, the game of archaeological discourse, to develop Halperin’s metaphor, is set up by determining who has the authority to act and speak. Secondly, those authoritative voices require their own favourable terms and methods, the rules of the game, by which to act in an authoritative manner. And finally, those authoritative actions produce restricted constructions of the past.

Acquire and maintain the authority to speak, to be heard, and to be taken seriously without denying or bracketing [our] gayness. It’s not just a matter of being publicly or visibly out; it’s a matter of being able to devise and to preserve a positive, undemonised connection between [our] gayness and [our] scholarly or critical authority. That problem of authorization, to be sure, dramatizes the more general social and discursive predicament of lesbians and gay men in a world where a claimed homosexual identity operates as an instant disqualification, exposes you to accusations of pathology and partisanship ... and grants everyone else an absolute epistemological privilege over you. (Halperin 1995: 8)

A number of archaeologists are indeed committed to sociopolitical critique of the disciplinary culture of archaeology; what Wylie (1999: 555) has identified as ‘equity’ and ‘content’ critiques. Studies of equity issues include analyses of the status of women within the discipline, but also analyses of the impact of class, race, and nationalism on archaeological practice. Content critiques, on the other hand, expose bias founded on sex, race, class, and nationality in our constructions of the past; how men’s activities such as hunting are privileged over those of women, for example. But, as Wylie (1999: 555) points out, ‘sociopolitical critics in archaeology have tended to side-step explanatory questions about how the silences and stereotypes [that these critiques] delineate are produced or why they persist.’ Equity critiques have rarely been deployed to show how the content of archaeological knowledge is produced, and, vice versa, content critiques rarely make a connection between the content of archaeological knowledge and specific equity issues. By attending to the overall strategies of archaeological discourse in the manner outlined above we are able to reveal why and how at least some of those silences and stereotypes persist. I examine each of the three aspects of disciplinary authority in turn.

Here Halperin is drawing on Eve Kosofsky Sedgwick’s analysis of the Epistemology of the Closet (1990). Sedgwick explores the consequences of the binary distinction between

THOMAS A DOWSON: 0053 HRS, 12 OCTOBER 1998. THE MURDER OF MATTHEW WAYNE SHEPARD

Authoritative actors

One moving and powerful autobiographical testimony that demonstrates how a homosexual identity has impacted on the life of one professional female archaeologist is provided in a recent anonymous contribution (She 2000). No one can dismiss this account: She gives us an immediate and contemporary account of how being an ‘out’ lesbian has affected her education and training as an archaeologist, and her subsequent career in archaeology. Being open about her sexuality did not stop the homophobia, from both men and women, and it has denied her access to certain opportunities, such as candidacy for professional office. She’s account is not an isolated one; there are many more – all either anecdotal or anonymous.

In my own experience as an ‘out’ queer academic I identify with much of what She has experienced. When applying for promotion, for example, I have been denied the support required from senior male colleagues because these men were more concerned with how openly supporting a gay colleague would appear to the overtly masculinist culture of the Faculty. These men were not merely judgemental about any impact my use of queer theory may or may not have had in archaeology; they simply denied any knowledge of my publications. And of course this was not thought unusual – why should it be? After all, they were heterosexual men. For these men, my research on queer theory had no bearing on my abilities as an archaeologist. Likewise, when I suffered from depression it was rumoured to be, and dismissed as, a result of my being HIV+ (I am in fact HIV-, but as a gay man rumours about my HIV status went unquestioned – it is after all not that surprising for a gay man to be HIV+; indeed, a lot of bigoted individuals expect it), and to have had nothing to do with the fact (acknowledged in writing by the Dean) that I was teaching significantly more and undertaking more administrative duties than any of my colleagues. That this workload had a negative effect on my personal life was dismissed because, as the Head of Department exclaimed, ‘I can not be expected to help you there!’ And yet allowances are always being made for married and unmarried heterosexual staff when their workloads affect their domestic situations.

In a recent book that offers a ‘new’ interpretation of European Palaeolithic cave art, Guthrie (2005) produces what must surely be the most obvious example in archaeology of the way in which someone’s intellectual authority is judged on the basis of their sexuality. Guthrie draws to our attention the anatomical representations of female genitalia on carved statuettes of the female body from the European Stone Age. The artists of these statuettes ‘incorrectly’ represented the vulva pointing forward. We are asked to compare these images with Leonardo da Vinci's well-known sketch that depicts a cross-section through the bodies of a male and female couple engaged in sexual intercourse. Despite the fact that Leonardo’s woman is reclining backwards, Guthrie maintains that Leonardo da Vinci’s drawn female body is anatomically incorrect in the same way as the statuettes of the Stone Age artists. He suggests that Stone Age artists got the anatomy of the female form wrong because they were inexperienced, adolescent boys. We are told, ‘Leonardo da Vinci made the same mistake in adulthood, but perhaps for the same reason of inexperience’ (Guthrie 2005: 358). There is surely no need to explain that what Guthrie means by Leonardo's ‘inexperience’ is his homosexuality. Because da Vinci is generally thought to have been homosexual, it is assumed he would not have had accurate knowledge of female anatomy. Leonardo’s ‘inexperience’ is then used to challenge his claims to knowledge. That the artist conducted numerous dissections of the human body is not important; the fact that he had what most think of as a deviant sexuality is. As obvious and blatant as this example may be, Guthrie can only really get away with it because Leonardo da Vinci is far removed and somewhat distant from the day-to-day practice of archaeology.

Dismissing claims to knowledge on the basis of a person’s sexuality, as overtly as in Guthrie’s example, is only one way in which epistemological privilege is negotiated in archaeology. Epistemological privilege is more frequently exercised in much more subtle, but no less harmful, ways: homosexual men and women are treated less favourably in comparison to their heterosexual colleagues.

What is also painfully clear in She’s account is that ‘out’ academics are not the only ones to be affected by the openness of their sexuality: ‘The knowledge of my sexuality has charted who constitutes my professional network, and what professionals my students have immediate access to’ (She 2000: 172). Homophobia, directed at both homosexual men and women, is as obvious in archaeology as it is in any other aspect of society, and it continues to play a significant role in shaping the character of archaeology by influencing who gets to practice and succeed in archaeology (see also Claassen 2000).

Authoritative methods

Feminist theorists have convincingly demonstrated that the normative and objectivist nature of scientific methods is the result of masculinist practice (see, for example, Harding 1986, 1987; Longino 1987, 1990). A number of feminist archaeologists have similarly related normative practice in archaeology with masculinist practice within the discipline


DEFINING MOMENTS

(see, for example, Wylie 1999, 2002). It is this masculinist practice that dictates how the rules of the archaeological game are set up and maintained. But, I argue, it is not only masculinist; it is heterosexist.

The study of prehistoric rock art, those images painted and/or engraved on cave walls, rock shelters or on rocks in boulder fields the world over, has always occupied a somewhat marginal role in archaeology (see Dowson 1993, 2001; Lewis-Williams 1993; Whitley 1997). In Palaeolithic archaeologies, for instance, the excavation and analysis of stone tool assemblages and the bones of animals is the dominant focus of attention, while the images painted and engraved on the rock shelters from which these remains are recovered receive little or no attention. Rock art is simply not something mainstream archaeologists study. Part of the reason may lie in the way in which artists and art are stereotypically perceived in Western society today. Artists are thought to be eccentric individuals, whose work is inspired by some form of divine inspiration and has no significant impact on our lives (see Wolff 1981). Such misconceptions of artists and their work have strongly influenced archaeological constructions of prehistoric and ancient artistic practices and traditions. And the study of rock art is no different. The images painted onto and engraved into rock surfaces are perceived to be snapshots of prehistoric artists spiriting away idle hours. These images, then, can have little value. They may offer insights into how artefacts might have been used in the past, such is the techno-centric nature of much archaeological research, Palaeolithic archaeology in particular, but nothing more.

This misconception of artists and their work is, however, not sufficient to explain why archaeologists have ignored rock art. Despite a growing body of theoretically and methodologically sophisticated research on rock art traditions around the world (for three exceptional introductions, see Chippindale and Taçon 1998; Whitley 2001; Helskog 2001), prehistoric imagery still remains irrelevant for most mainstream archaeologists. Most, if not all, archaeologists and rock art researchers accept there are numerous methodological problems in any attempt at incorporating rock art imagery in our efforts to develop our understandings of prehistoric communities. But there is one specific problem that is cited again and again as the final blow:

Numerous difficulties beset viewing rock art as the key to constructing San history, but the one which particularly concerns my research programme is the lack of a firm chronological context for the majority of the Natal Drakensberg paintings. Nonetheless, I would like to make it clear that I believe that the paintings form part of the San historical process, and that if, or more positively when, we are able to date them, they will be an important component in constructing these historical processes (Mazel 1993: 890).

Rock art, as data for constructing the past, is not afforded the same status as other excavated materials, because it 'lacks a firm chronological framework'. (This statement does beg the question: where in archaeology is there a firm chronological framework?) The chronocentricism that underwrites this dismissive view derives, I argue, from the very core of masculinist practice in archaeology. Dating is a key component of most archaeological narratives, particularly those narratives associated with origins – which are decidedly masculinist in character (see Conkey & Williams 1991).

In Western society, the male body is the desirable norm, and women's lack of a phallus is the key factor determining their intellectual and moral differences from men. The patriarchal order is structured around the primacy of the phallus as the signifier of difference. This allows men to misidentify their status and position in terms of women. Lack of phallus is equated with a lack of power and control in our phallocentric world. Similarly, in archaeology, chronology is of paramount importance. But crucially for this discussion about the marginalisation of rock art, chronology is the primary signifier of difference between mainstream archaeology and rock art. In no way am I suggesting that dating is unimportant. Rather, I challenge the normative view, widely held in archaeology, that without a ‘firm chronological framework’, constructions of the past are at best, and only for a few scholars, inferior and meaningless, or, worse still (and this applies to most archaeologists), entirely impossible. Artefact assemblages, stone tools and animal bones for example, are excavated from different archaeological deposits one on top of the other. As a result, it is possible to construct from these stratified relationships a chronological framework: artefact Y is older than artefact X because Y was found in a deposit underneath X, and with highly sophisticated (but not free of controversy) scientific techniques these deposits and artefacts can be dated with varying degrees of precision. Unfortunately, the situation I have just very simply, but not inaccurately, outlined does not pertain to rock art imagery. And where scholars have attempted to construct a chronological framework, such attempts have been highly contentious, and often easily dismissed. So, without a chronology rock art research is rendered irrelevant and powerless (Dowson 2001).

Chronocentricism is, I argue, as one of the rules that determine how the past is constructed, indicative of the phallocentric nature of heterosexist archaeology (for a discussion of phallocentricism in other aspects of archaeological practice see Baker 1997). Archaeologists have power and control over the past because it is they who decide what constitutes acceptable methodologies.

Authoritative constructions

I introduce this third aspect of disciplinary control by briefly returning to Guthrie’s reference to Leonardo da Vinci’s sexuality, discussed above, if only to demonstrate the obviousness of the point I wish to make here. Remember, Guthrie argues that images of women in the cave art of prehistoric Europe are anatomically incorrect – showing, he claims, the artist’s lack of experience with the female body. He draws an analogy with Leonardo da Vinci’s anatomical drawing of a male and female engaged in sexual intercourse, in which the female body is anatomically inaccurate. Guthrie implies this inexperience is a result of da Vinci’s homosexuality and lack of first-hand experience with women. Palaeolithic artists’ lack of experience of the female body (and of animals, etc.), on the other hand, is a result of their immaturity: cave artists were adolescent boys fantasising about women and the hunt. Nowhere is it even considered that perhaps these artists were, like da Vinci, homosexual – not even to dismiss the idea. Although such a suggestion is almost certainly wildly incorrect, the argument would in fact be more logical than Guthrie’s. What this example shows is that the past is always already heterosexual. And my concern is that archaeology underwrites an entirely heterosexual history of humanity. But there are more serious, less absurd, examples than this.

A series of dioramas, created for the Festival of Britain in 1951 and now in the Jewry Wall Museum (Leicester, England), demonstrates my point. Five of these dioramas were produced in total, each one a representation of one of five ages of British prehistory (see Hawkes 1951): the Mesolithic (Figure 14.1), the Neolithic (Figure 14.2), the Bronze Age (Figure 14.3), the Iron Age (Figure 14.4) and the Anglo-Saxon (Figure 14.5). The captions for these models concentrate on artefacts associated with each period. For example, the caption for the Bronze Age group (Figure 14.3) reads:

The clothes worn by the figures are based on examples found in Danish bogs, even the mini skirt. Special conditions in bogs allow cloth to survive. The figures are richly equipped with a jet necklace, jet buttons, a bronze dagger and a bronze spearhead. Flint is still used for arrowheads.

Post-structuralist critiques of representations like these challenge the way in which a single image can be representative of an entire period. In British prehistory discussions of the representation of the past began with critiques of the Iron Age, or the Celts. It is understandable that the following caption now appears for the Iron Age group:

These figures from the Festival of Britain in 1951 give an impression of an Iron Age family, although the artefacts are of too diverse an age range ever to have been used at the same time.

The same could in fact be said of all the dioramas. The artefacts used for each of the period-specific exhibits are brought together in one ‘family’ when in reality these artefacts were spread too widely in time and space ever to have appeared as they do in the dioramas.

Feminist critiques of archaeological constructions challenge the androcentric bias inherent in those constructions of the past. The caption for the ‘Mesolithic Family’ (Figure 14.1) provides a good example from which to begin; it reads:

People at this time lived by fishing, hunting and gathering food. From this distant past few things survive. Those that do tend to be made of stone and flint but are only parts of complete tools. The wooden shaft of the harpoon and the fishing net would rot away.

Communities archaeologists and anthropologists study, like the Mesolithic family, are very often characterised by the way in which food is obtained, and they become known as ‘hunter-gatherers’ or ‘hunting societies’. The dichotomous relationship between hunting and gathering is not a symmetrical one. It is asymmetrical: men and their activities are seen as superior to women and their activities. The caption, like most constructions of hunter-gatherers past and present, highlights male tools and male activities. And it is the representational prominence given to the artefact that has been the subject of the feminist critiques of these dioramas: ‘each male clutches his symbol of power or authority; each female watches anxiously over a small child’ (Jones and Pay 1990: 162, see also Jones 1991). Consequently, in attempts to understand the development of humanity more generally, it is ‘man the hunter’ models of human evolution that have cultural and social capital.

Detailed analyses of ethnographic studies, however, show that women’s activities can account for as much as 70 per cent of their community’s dietary intake – women clearly did not only look after the children. Feminist anthropologists and archaeologists rightly point out that women’s activities should not be seen as inferior to hunting activities. Some feminist writers therefore prefer to label these people as ‘gatherer-hunters’, or, more neutrally, foragers. ‘Woman the gatherer’ models of hominid evolution were likewise proposed to challenge the androcentric models.

These feminist-informed studies expose the way in which anthropological evidence has been coloured by androcentric bias. And in that sense, it is irrelevant whether ‘gatherer-hunter’ is more appropriate, or whether ‘woman the gatherer’ models are right or wrong (Okruhlik 1998). These critiques of androcentric studies invert and revalue the categories of a dominant, masculinist sex/gender system (see Fedigan 1986), and in so doing they promulgate a reverse discourse. While many researchers are quick to point to androcentric biases, there is rarely an explicit recognition of heterosexist biases.



Figure 14.1: The Mesolithic Family Group

Figure 14.2: The Neolithic Family Group



Figure 14.3: The Bronze Age Family Group

Figure 14.4: The Iron Age Family Group


Figure 14.5: The Anglo-Saxon Family Group

Nowhere do we find sustained critiques of the heterosexual basis of the nuclear family unit in the past. In the Jewry Wall dioramas, for example, the heterosexual family is presented as such from the earliest times (Mesolithic, Figure 14.1) to more recent historical periods (Anglo-Saxon, Figure 14.5); the modern, conservative notion of the family appears as ancient as humanity itself. This notion of the family is itself based on a biblical vision of humanity and the original heterosexual couple (Adam and Eve), the visual representation of which extends back to at least the 1400s (Moser 1998). When reconstructing prehistoric communities archaeologists impose modern, Western notions of the nuclear family unit: father, mother and children – and in some constructions even family pets are included (see Figure 14.5, where the young Anglo-Saxon girl holds a cat; in the Iron Age family diorama it is the father who restrains the family dog, see Figure 14.4). The possibility that prehistoric communities were made up of units other than the Western ideal is seldom explored (but see papers in Schmidt and Voss 2000), despite considerable ethnographic evidence to the contrary.

Obviously same-sex relationships, labelled as homosexuality, have not always been ignored in the past. In some archaeological contexts same-sex encounters could not be ignored; an obvious example is Ancient Greece. In these situations, where it simply cannot be ignored, homosexuality was constructed within a heterosexist mindset. Homosexuality is often linked to paedophilia and other unsocial behaviour. In fact, for Ancient Greece, Dover (1978) and many others since have characterised same-sex relations between men as ‘culturally sanctioned pederasty/paedophilia’ (see also Andreu 1997 for an Ancient Egyptian example, cf. Dowson 2008: 27-28). But as Sparkes (1998: 257) points out, such a view is ‘too sanitized a version of reality that was altogether more complex’. Modern homophobic attitudes are to blame for the sanitized and clinical discussions that so quickly link same-sex relations with what is perceived to be unsocial behaviour. Pathologising homosexuality in this manner simply justifies the manner in which other evidence, which would allow for a more complex discussion, is ignored. For instance, in Athenian life same-sex unions did not only exist between men and younger boys; same-sex relations were also present in the army, where both partners were older (Sparkes 1998: 257). My aim is not merely to challenge but to disrupt the blinding and deafening homophobic discourse of archaeology.

Some archaeologists have gone so far as to suggest that the roots of homophobia, and other acts of sexual aggression, are to be found in prehistory. For instance, Taylor (1996: 164-166) argues that it was the development of permanent living structures in the Neolithic that allowed parents to monitor the reproductive lives of their sons and daughters. He explicitly states ‘the Neolithic period saw the true birth of homophobia’ (1996: 165). Acts of rape have also supposedly been identified in the archaeological record; we find an example, from Dolni Vestonice in the Palaeolithic period, in Taylor’s sensationalist, heteronormative account of the history of sex (1996: 113). Here, and elsewhere (see, for example, Gamble 1993: 109), rape is perceived to be a mating strategy for hominid and prehistoric peoples. Scott (2001) has rightly challenged this view of rape as a rational part of the human male’s evolved sexual repertoire to enhance evolutionary fitness and reproductive potential. This discourse, as Scott so powerfully demonstrates, merely excuses rape as an inevitable result of a genetic imperative. Rape, and I would argue homophobia, is a socially constituted form of sexual intimidation and violence that is particular to modern, Western human beings. In examples such as these archaeologists are (ab)using archaeology to excuse intolerable acts of aggression in the present, presenting these behaviours as necessary and/or inevitable. Archaeologists are using the past to justify some of the most distasteful characteristics of the present.

Challenging epistemological heteronormativity


The three aspects of archaeological practice I have discussed comprise the overall strategies of a normative archaeology that is not only masculinist in character but also heterosexist. These are the authoritative standards by which all archaeological research is measured, and they constitute the normative basis on which the practice of much archaeology continues to be conducted, and on which all archaeology is judged. In fact ‘normativity’ has had a long and entrenched position in archaeological thinking. The post-structuralist trends of the 1980s may have introduced a more critical and self-reflexive approach to archaeology, but none of those critiques managed to step away from the heteronormative (see also Dowson 2009a). An entirely new attitude in archaeology is required, for which I believe queer theory provides some direction.

Queer theory actively and explicitly disrupts the heteronormativity of scientific practice. ‘Queer’ began as a challenge to essentialist constructions of a ‘gay’ identity. In contrast to gay and lesbian identity, queer identity is not based on a notion of a stable truth or reality. As Halperin (1995: 62) explains, ‘"queer" does not name some natural kind or refer to some determinate object; it acquires its meaning from its oppositional relation to the norm. Queer is by definition whatever is at odds with the normal, the legitimate, the dominant. There is nothing in particular to which it necessarily refers’ (original emphases). Queer theory is not a theory in the scientific use of the word, in that it does not provide a system of ideas used to explain something, as in Marxist theory or Einstein’s theory of relativity. Queer theory does not provide a positivity, rather ‘a positionality vis-à-vis the normative’ (Halperin 1995: 62), a way of producing reflection, a way of taking a stand vis-à-vis an authoritative standard. To effect that positionality queer theory ‘takes on various shapes, risks, ambitions and ambivalences in various contexts’ (Berlant & Warner 1995: 343). In so doing, it allows for ‘reordering the relations among sexual behaviours, erotic identities, constructions of gender, forms of knowledge, regimes of enunciation, logics of representation, modes of self-constitution, and practices of community - for restructuring, that is, the relations among power, truth, and desire’ (Halperin 1995: 62; see also de Lauretis 1991). Queer theory is thus very definitely not restricted to homosexual men and women, but open to anyone who feels their position (sexual, intellectual or cultural) to be marginalized. The queer position then is no longer a marginal one considered deviant or pathological, but rather multiple positions within many more possible positions – all equally valid.

These brief comments inform my use of queer in the context of my attempts to examine constructions of the past. Queering archaeology does not involve looking for homosexuals, or any other supposed sexual deviant for that matter, in the past. Nor is it concerned with the origins of homosexuality. Queering archaeology is actively engaged in moving away from essentialist and normative constructions of presumed and compulsory heterosexuality (male:female deviant third sex), but also from the normative character of archaeological discourse. It necessarily has to confront and disrupt the presumption of heterosexuality as the norm inherent in archaeological interpretation. Queer archaeologies are not better, more intelligent or more truthful than what has gone before – they are different – neither judgmental nor heteronormative. They are queer.

As Sandra Harding (1993: 78, original emphasis) points out, experiences of ‘marginalized peoples are not the answers to questions arising either inside or outside those lives, though they are necessary to asking the best questions.’ While queer archaeologies adopt what the establishment might regard as deviant practice, these constructions are no less valid and can stand up to the same close scrutiny as any other construction of the past.

My strategy to preserve a positive connection between my scholarly authority and my sexuality was to distance my academic research from dealing with issues of gender. I was under the impression that my explicit researching of gender relations in rock art would only serve to draw attention to my sexuality, and reinforce my colleagues’ (misguided) sense of epistemological privilege. I was explicitly, and in some cases gently, told by some very liberal-minded archaeologists not to be too vocal about my sexuality. While I was prepared to ‘come out,’ it seemed obvious to me that I could only do so by downplaying my sexuality.

When homosexual men and women ‘come out’ of the ‘closet’ there is the widely held view that they are emerging into a world of unfettered liberty. Sadly, as Sedgwick, Halperin, and many others have shown, this is not the case; ‘the closet’ is not some personally perceived space, it is a product of complex power relations. And one cannot magically emerge from those power relations. ‘To come out is precisely to expose oneself to a different set of dangers and constraints’ (Halperin 1995: 30). In archaeology those dangers and constraints manifest themselves in a powerfully and universally masculinist disciplinary culture that is repeatedly negotiated in all aspects of professional archaeological practice, as well as in popular representations of the discipline. It is the same disciplinary culture that led me to believe one could not be an ‘out’ gay man and a practicing archaeologist at once. I now appreciate I did not need to specifically address issues of gender to be dismissed: whatever I wrote or said was easily dismissed (even by feminists) because of an authority based on the superiority of heterosexuality. And again from Halperin (1995: 13),

As I discovered to my cost ... if you are known to be lesbian or gay your very openness, far from pre-empting malicious gossip about your sexuality, simply exposes you to the possibility that, no matter what you actually do, people can say absolutely whatever they like about you in the well-grounded confidence that it will be credited. (And since there is very little you can do about it, you might as well not try and ingratiate yourself by means of ‘good behaviour’.)

So, just as I was once no longer prepared to deny my sexuality, by the end of 1998 I was no longer prepared to compromise it either. I am now comfortable with my sexuality, and clear about how it influences my lifestyle as well as the way in which I think about and construct the past. I am proud of who I am and what I produce. But more importantly, because epistemological privilege in archaeology is unequivocally related to homophobia (see also She 2000; Claassen 2000), I now actively challenge the manner in which epistemological privilege is negotiated in archaeology. Not just as it affects me, or other deviant archaeologists, but also the way in which the very practice of archaeology authorizes an entirely heterosexual history of humanity. It is that heterosexual history of humanity that, I argue, legitimizes the mindless acts of homophobia we continue to witness today.

… [T]here exists robust evidence that homosexual behaviour, and by extension, other nonreproductive sexual behaviours, are the products of a long evolutionary history that occurred independent of human culture. While homosexual behaviour is widespread among our primate relatives, aggression specifically directed toward individuals that engage in it appears to be a uniquely human invention (Vasey 1995: 197).

References

Andreu, G. 1997. Egypt in the Age of the Pyramids. London: John Murray.
Baker, M. 1997. Invisibility as a symptom of gender categories in archaeology. In J. Moore & E. Scott (eds), Invisible people and processes: writing gender and childhood into European archaeology, 183-191. London: Leicester University Press.
Berlant, L. & M. Warner. 1995. What does queer theory teach us about X? PMLA 110(3), 343-349.
Chippindale, C. & P.S.C. Taçon (eds). 1998. The Archaeology of Rock Art. Cambridge: Cambridge University Press.
Claassen, C. 2000. Homophobia and women archaeologists. World Archaeology 32(2), 173-179.
Conkey, M.W. & S.H. Williams. 1991. Original narratives: the political economy of gender in archaeology. In M. di Leonardo (ed.), Gender at the crossroads of knowledge: feminist anthropology in the postmodern era, 102-139. Berkeley: University of California Press.
de Lauretis, T. 1991. Queer theory: lesbian and gay sexualities, an introduction. Differences 3(2), iii-xviii.
Dover, K. 1978. Greek Homosexuality. Cambridge, MA: Harvard University Press.
Dowson, T.A. 1993. Changing fortunes of Southern African archaeology: comment on A. D. Mazel’s ‘history’. Antiquity 67, 641-644.
Dowson, T.A. 1998. Homosexualitat, teoria queer i arqueologia. Cota Zero 14, 81-87.
Dowson, T.A. (ed.) 2000. Queer Archaeologies. World Archaeology 32(2), 161-274.
Dowson, T.A. 2001. Queer theory & feminist theory: towards a sociology of sexual politics in rock art research. In K. Helskog (ed.), Theoretical Perspectives in Rock Art Research, 312-329. Oslo: Novus.
Dowson, T.A. 2006. Archaeologists, feminists and queers: sexual politics in the construction of the past. In P.L. Geller & M.K. Stockett (eds), Feminist anthropology: past, present, and future, 89-102. Philadelphia, PA: University of Pennsylvania Press.
Dowson, T.A. 2008. Queering sex and gender in ancient Egypt. In C. Graves-Brown (ed.), Sex and gender in ancient Egypt, 27-46. Swansea: The Classical Press of Wales.
Dowson, T.A. 2009a. Que(e)rying archaeology’s loss of innocence. In S.A. Terendy, N. Lyons & J. Kelley (eds), Que(e)rying Archaeology, in press. Calgary: Calgary University Press.
Dowson, T.A. 2009b. Queer theory meets archaeology: disrupting epistemological privilege and heteronormativity in constructing the past. In N. Giffney & M. O’Rourke (eds), The Ashgate Research Companion to Queer Theory, 457-484. London: Ashgate.
Fedigan, L.M. 1986. The changing role of women in models of human evolution. Annual Review of Anthropology 15, 25-66.
Gamble, C. 1993. Timewalkers: the prehistory of global colonization. Harmondsworth: Penguin.
Guthrie, R.D. 2005. The Nature of Paleolithic Art. Chicago: The University of Chicago Press.
Halperin, D.M. 1995. Saint Foucault: towards a gay hagiography. New York: Oxford University Press.
Harding, S. 1986. The Science Question in Feminism. Ithaca: Cornell University Press.
Harding, S. (ed.) 1987. Feminism and Methodology: social science issues. Bloomington: Indiana University Press.
Harding, S. 1993. Rethinking standpoint epistemology: what is “strong objectivity”? In L. Alcoff & E. Potter (eds), Feminist Epistemologies, 49-82. New York: Routledge.
Hawkes, J. 1951. The origin of the British people: archaeology and the Festival of Britain. Antiquity 25, 4-8.
Helskog, K. (ed.) 2001. Theoretical Perspectives in Rock Art Research. Oslo: Novus Forlag.
Jones, S. 1991. The female perspective. Museums Journal February, 24-27.
Jones, S. & S. Pay. 1990. The legacy of Eve. In P. Gathercole & D. Lowenthal (eds), The Politics of the Past, 160-171. London: Unwin Hyman.
Lewis-Williams, J.D. 1993. Southern African archaeology in the 1990s. South African Archaeological Bulletin 48, 45-50.
Loffreda, B. 2000. Losing Matt Shepard: life and politics in the aftermath of anti-gay murder. New York: Columbia University Press.
Longino, H.E. 1987. Can there be a feminist science? Hypatia 2, 51-64.
Longino, H.E. 1990. Science as Social Knowledge: values and objectivity in scientific enquiry. Princeton: Princeton University Press.
Mazel, A.D. 1993. Rock art and Natal Drakensberg hunter-gatherer history: a reply to Dowson. Antiquity 67, 889-892.
Moser, S. 1998. Ancestral Images: the iconography of human origins. Stroud: Sutton Publishing.
Okruhlik, K. 1998.
Sparkes, B. 1998. Sex in Classical Athens. In B. Sparkes (ed.), Greek Civilization: an introduction, 248-262. Oxford: Blackwell.
Taylor, T. 1996. The Prehistory of Sex: four million years of human sexual culture. New York: Bantam Books.
Vasey, P.L. 1995. Homosexual behavior in primates: a review of evidence and theory. International Journal of Primatology 16, 173-204.
Whitley, D.S. 1997. Rock art in the U.S.: the state of the States. International Newsletter on Rock Art 16, 21-27.
Whitley, D.S. (ed.) 2001. Handbook of Rock Art Research. Walnut Creek: AltaMira Press.
Wolff, J. 1981. The Social Production of Art. London: Macmillan.
Wylie, A. 1999. The engendering of archaeology: refiguring feminist science studies. In M. Biagioli (ed.), The Science Studies Reader, 553-567. New York: Routledge.
Wylie, A. 2002. Thinking from Things: essays in the philosophy of archaeology. Berkeley: University of California Press.
Gender and the biological sciences. In M. Curd and J.A. Cover (eds), Philosophy of science: the central issues, 192-208. New York: W.W. Norton & Company. Schmidt, R. and B. Voss. 2001. Archaeologies of sexuality. London: Routledge. Scott, E. 2001. The use and misuse of rape in prehistory. In L Bevan (ed.), Indecent Exposure: sexuality, society and the archaeological record, 1-18. Glasgow: The Cruthne Press. Sedgwick, E.K. 1990. Epistemology of the closet. Berkeley: University of California Press. She. 2000. Sex and a career. World Archaeology 32(2), 166-172.


Chapter 15

0000:00 hrs, 1 January 2000
‘Three, two, one …?’ The material legacy of global Millennium celebrations
Rodney Harrison

Vignette: South Bank, London, 2359:57, 31 December 1999
Trisha had organised to meet her friend Rashid earlier in the day, to join the crowds who had formed along the South Bank of the Thames under the London Millennium Eye to see in the New Year. She had seen the man on the morning television programme earlier that week, arguing that people were technically celebrating the beginning of the new Millennium one year early. She felt (and Rashid agreed) that the change from 1999 to 2000 was significant, and everyone else seemed to be making such a big fuss of it that she didn’t want to be left out. It was so exhilarating to be here with the hordes of people, speaking many different languages, some of whom she knew from newspaper reports had travelled to London just to be there for the Millennium celebrations. Throughout the day she thought of the reports of the ‘Millennium Bug’, a common term which had emerged to describe the potential problems that the rollover from 1999 to 2000 might bring for critical computer systems. She had thought the pale, conservatively dressed men who had appeared on the television for months now, speaking emphatically of the potential disasters of the rollover for computer systems, were simply doom merchants. But as they stood in the mass of people counting down to the beginning of 2000, with fireworks lighting the sky for as far as she could see, she wondered if anything would happen as the brightly lit observation wheel turned and the animated clock projected on the screen above her head counted down to zero and struck midnight …

Introduction
New Year’s Eve 1999 saw what was arguably the biggest collective international celebration ever to have occurred in human history. New Year’s Eve celebrations were held throughout the world, and although they varied from country to country and culture to culture, each country was involved in some formal celebration of the change from 1999 to 2000. But what remains of this world-wide celebration? An attempt to document the global archaeology of this event is outside the scope of this paper. What I want to do here is to focus particularly on the range of archaeological sites associated with the material legacy of Millennium celebrations and commemorations in the United Kingdom, and to consider what these material remains might tell us about the ways in which Britons felt about, and approached, the Millennium as an event. In fact, few obvious legible material remnants of Millennium commemorations remain at the time of writing, less than ten years after New Year’s Eve 1999; yet an analysis of the particular forms of commemorative monuments produced as part of these celebrations suggests a focus almost entirely on the past, revealing a widespread millenarianism at the close of the twentieth century in the UK. Concerns about global technological catastrophe and the end of the world centred on international fears surrounding the ‘Millennium Bug’, fears that were manifested in monumental building programmes which looked to the past to emphasise stability and the absence of change. In addition to these monuments, this chapter briefly considers the potential for the archaeology of a range of more ephemeral and enigmatic traces associated with the Millennium, including underground bunkers and digital artefacts of the ‘Year 2000’ problem, as well as the possibility of a global comparative archaeology of the Millennium.

The Millennium?
There was much debate in the years leading up to the end of 1999 AD as to whether New Year’s Day in the year 2000 AD should be celebrated as the beginning of the new Millennium, or indeed whether the ‘real’ new Millennium should be considered to begin on 1 January 2001. Those arguing for the latter date pointed to the fact that the Gregorian calendar has no ‘Year Zero’, hence by counting 2000 years from 1 AD one comes to the beginning of the year 2001 AD (Gould 1997). However, for a number of reasons, including the emerging problems associated with the ‘Millennium Bug’ (see below), the popular approach was to celebrate the beginning of the new Millennium on 1 January 2000. In this chapter, I consider the material remains associated with the Millennium as an ‘event’ primarily in relation to the celebration of this popular definition of the new Millennium. However, I also consider more permanent commemorative monuments erected as part of Millennium celebrations and festivals held throughout the following year, 2000 AD. It is important to note that many commemorative structures now named for the Millennium were not completed prior to the new Millennium, but in this chapter I have treated any construction named as a commemoration of the Millennium as a ‘Millennium monument’.

The Millennium ‘Bug’
The 1999-2000 calendar changeover became a focus for fears around what came to be known in popular parlance as the ‘Year 2000 problem’, the ‘Millennium Bug’, or simply ‘Y2K’: potential errors that might occur in critical computer systems due to the practice of storing year dates with two digits rather than four.

The Year 2000 computer problem is globally considered as one of this century’s most critical issues, so much so that the world community has joined forces to resolve the problem. … at the end of the twentieth century, many software applications will stop working or create erroneous results when the year switches from 1999 to 2000. … date sensitive embedded chips could (also) stop working … embedded business systems control traffic lights, air traffic control, security systems, time clocks and hospital operating systems (Reid 1999: 1-2).

Predictions regarding the effects of the Year 2000 problem ranged from the inconvenience of the failure of computer software programs to the collapse of critical services such as power and water. A very successful information campaign on the potential hazards of the Year 2000 problem led to most organisations and businesses upgrading computer software and hardware, so that very few problems were experienced when the clock struck midnight on 31 December 1999 (CNN 2000). However, by this point global preparedness for the Year 2000 problem was reported to have cost well over US$300 billion (BBC 2000).

Less than ten years after the end of 1999, at the time of writing, it is already hard to recall the scale of fear surrounding the Year 2000 problem, and the way in which it connected in the popular imagination with other millenarian uncertainties. The Y2K Personal Survival Guide (Hyatt 1999) recommended that all families stockpile water, food and basic groceries, and included suggestions for households developing their own alternative sources of power and heating. There were a number of newspaper and television reports of individuals building bunkers or shelters in which they intended to weather the impending apocalypse. This meant that, in addition to the excitement of the dawn of a new Millennium, New Year’s Eve 1999 was for many filled with a sense of apprehension and trepidation. I will argue later that this setting of widespread confusion regarding the impact of the Year 2000 problem has produced a lasting legacy in the form of monuments and material remains associated with the Millennium in the UK.

The material legacy of global Millennium celebrations
It is possible to delineate two broad categories of material remains associated with the Millennium, which I term here ‘designed’ and ‘incidental’ remains. ‘Designed’ remains consist of a whole range of intentional commemorative constructions, and can be further broken down into ‘functional’ commemorative buildings, bridges and infrastructure, and ‘non-functional’ remains, including monuments, statuary, signage and other forms of permanent or semi-permanent marking. ‘Incidental’ remains consist largely of physical remnants associated with Millennium celebrations themselves, or with the non-memorial remains of other human responses to the Millennium. I include in this category bunkers and shelters reputed to have been built by those who feared technological and social breakdown at the end of the Millennium, as well as archaeological remains associated with Millennium parties and celebrations themselves. Physical and virtual changes to computer hardware and software systems associated with Y2K compliance should be considered yet another type of incidental artefact of the Millennium. In an attempt to gain some insight into the range of both designed and incidental remains associated with global Millennium celebrations, I focus in this chapter on the UK as a study area, and particularly on the work of the Millennium Commission, which in 1993 embarked on one of the most expensive monumental building programmes of the late twentieth century.

Materialising memory: memory-work and the Millennium Commission
The Millennium Commission was created in 1993 by the National Lottery etc. Act 1993 as a short-term organisation for the distribution of National Lottery funding to projects ‘to mark the year 2000 and the beginning of the third millennium’ (National Lottery etc. Act 1993 (c. 39) Part II Section 22). The Commission was an independent body created and regulated by the Government, comprising nine Commissioners supported by a group of staff advisors. From the establishment of the Commission to the cessation of funding in August 2001, over £2 billion was allocated to Millennium projects under a competitive scheme in which community groups, individuals and organisations could apply for funding under various categories (The Millennium Commission n.d.). The Millennium Commission wound down its operations and was abolished in November 2006.

The work of the Millennium Commission represents the most ambitious exercise in publicly funded memory-work in the UK during the twentieth century. The Commission’s funding was divided into a number of categories, only some of which related to financial provision for capital projects or other types of ‘designed’ material remains. The award schemes relevant to celebrations and works occurring in the years leading up to the Millennium and in the year 2000 itself were:


• Millennium Projects: the major capital works funding programme, which awarded over £1.3 billion in funding to 222 projects involving buildings, building enhancements or environmental schemes;
• Millennium Awards for individuals: a programme of small Lottery grants to assist communities and individuals, often resulting in the production of pamphlets or the establishment of community groups around particular issues;
• Millennium Festival: a series of events, such as community pageants, exhibitions and other activities, sponsored during 1999 and 2000 and held across the UK;
• The Millennium Experience: a temporary exhibition space created on the Greenwich peninsula, which contained a series of exhibitions and live events open to the public between 1 January and 31 December 2000 (The Millennium Commission 2003).

Various schemes were developed to expend excess funds throughout the final years of the Commission (2001-2006), including additional funding provided to some of the capital works projects and the establishment of various funding schemes to promote the arts and environmental issues.

The Millennium Commission represented the major public funding body in the United Kingdom for commemorative Millennium projects, although many local governments and community organisations established their own local programmes of building and memorialising. On the Channel Island of Jersey, for example, the States of Jersey gave each of its parishes a memorial stone cross, while the Société Jersiaise presented a granite standing stone to each parish (La Société Jersiaise 2000, see further discussion below). The crosses were intended to be evocative of the wayside crosses recorded as having existed on the island during the Middle Ages. Although no comprehensive list of Millennium commemorative structures is currently available for the UK, the archives of the Millennium Commission provide a starting point for documenting the nature and scale of designed Millennium memorials produced as part of Millennium celebrations. While a large number of Millennium Commission-funded projects involved additions or renovations to existing infrastructure, I focus here on newly constructed memorials and buildings which were produced specifically to commemorate the new Millennium.

‘Designed’ material remains
A survey of Millennium Commission-funded projects reveals a number of interesting observations regarding the nature of State-sponsored public Millennium memorials in the UK. Table 15.1 lists the 43 major projects funded by the Millennium Commission which involved new constructions, be they buildings, parks, roads or squares, summarised from the Millennium Commission’s online project database (Millennium Commission 2008). I have not listed those projects which produced temporary exhibitions or provided grants for the redesign or regeneration of existing facilities, but have tried to capture only those that involved permanent building work which was not part of an existing project but designed specifically as a Millennium memorial. Although these projects are quite diverse, a number of themes emerge. Over 41% (18) of these projects related directly to the building or conservation of a heritage site such as a museum or existing heritage precinct, while almost 60% (25) were physically associated with an existing heritage object, building or precinct (e.g. a bridge providing access to a museum or heritage precinct). With a few notable exceptions, these were largely not the future-oriented monuments to the new Millennium which one might expect, but projects clearly linked to the past. I want to explore these links to the past through more detailed examinations of two Millennium monuments: the Millennium Experience and Hilly Fields Stone Circle.

The Millennium Experience and Dome
The Millennium ‘Experience’ and ‘Dome’ was one of the flagship projects of the Millennium Commission (Figure 15.1). The design critic Stephen Bayley has noted that the ‘Dome’ is not a dome as such, but rather,

the largest membrane structure in the world, a polymer tent supported by tensioned cables arranged radially around a dozen 100 m masts made of open steelwork. I never knew whether the result looked more like the segment of a vast globe sinking melancholically into the bog or one rising in an optimistic symbol from it (2007).

Providing 100,000 m² of enclosed space, the ‘Dome’ was 320 m in diameter, with a circumference of one kilometre and a height of 50 m. Its series of twelve steel masts was held in place by more than 70 km of high-strength steel cable supporting its Teflon-coated glass-fibre roof. Built on contaminated marshland on the Greenwich Peninsula, the Millennium Dome (as opposed to the Experience) was an impressive architectural monument to the Millennium, and rapidly became an iconic part of London’s panorama. Built on the Meridian Line, the Dome hosted the ‘Millennium Experience’, a series of temporary displays and events throughout the year 2000, opening on 1 January and closing on 31 December. As the architects of the Experience, Richard Rogers Partnership, note, it was

… intended as a celebratory, iconic, non-hierarchical structure offering a vast, flexible space … (however) the Dome attracted intense media coverage and generated more political and public debate than any other British building of the last 100 years (Richard Rogers Partnership 2008).


Table 15.1: List of major projects funded by the Millennium Commission involving new construction work, noting instances where work was undertaken at an existing heritage site or in close physical proximity to one or more heritage sites. The 43 projects are: Agnew Park Seafront Redevelopment; At-Bristol; B.U.G.S. (Biodiversity Underpinning Global Survival); Basildon Bell Tower; Bernie Grant Centre; Carlisle Gateway City Project; Ceramica - The Pottery Showcase; Durham Millennium City; Gateshead Millennium Bridge; Golden Jubilee Bridges, London; Leeds Millennium Square; Magna; Manchester Millennium Quarter; Millennium Bridge, Bankside; Millennium Forum; Millennium Point; Millennium Stadium; Moygashel Regeneration Project ‘The Linen Green’; National Botanic Garden of Wales; National Museums Liverpool; National Space Centre; Peterborough Millennium Green Wheel; Reviving Spa Culture; Rich Mix Centre; Sheffield - Remaking the Heart of the City; Slimbridge 2000; Southwark Cathedral; Tate Modern; The Blackie/Great George’s Community Cultural Centre; The British Museum Great Court; The Deep; The Eden Project; The Forum; The Glasgow Science Centre; The Henry Cort Millennium Project; The Lowry; The Millennium Link; The Millennium Memorial Gates; The Millennium Seed Bank; The Odyssey Project; The Suffolk Cathedral Millennium Project; Torrs Millennium Walkway; Wales Millennium Centre. Of the 43 projects, 18 were undertaken at an existing heritage site and 25 were physically associated with one or more heritage sites.

Figure 15.1: The Millennium Dome, with the Canary Wharf complex in the background. Photographed by Adrian Pingstone in June 2005.

While fewer people visited the Experience than anticipated, 6.5 million people visited the attraction during the year 2000, making it the most popular fee-paying attraction in the UK (The Millennium Commission n.d.). Its interior displays, although widely criticised in the media, were intended to chart the history and imagine the future of various themes: the body, faith, education, dreams, play and the environment. It included a live performance drawing on the imagery of William Blake’s epic poem The Marriage of Heaven and Hell, re-imagined as,

a war between nature and technology waged 150ft in the air … feature(ing) ... fire-breathing Mad Max-style contraptions doing battle with the ‘earth people’ and their huge dragonflies (Gibbons 1999).

This imagery - clear references to the fears surrounding climate change and the impact of human technology on the environment - invoked the fears embodied by the Millennium Bug and other uncertainties which plagued the turn of the Millennium. While the Millennium Experience was intended to look to the future, it could not do so except through the lens of the past. Nor could it imagine the future as a positive place, but only in terms of the post-apocalyptic melding of William Blake and Mad Max.

After the Millennium Experience closed at the end of 2000, the Millennium Dome remained empty for many years while consideration was given to its future. Over 500 artefacts associated with the Millennium festival celebration, including photographs, videos and works of art, were transferred for curation to the V&A Museum in London. After much debate, the Millennium Dome reopened in 2007 as ‘The O2’, a completely (internally) refurbished state-of-the-art concert and performance venue. The O2 includes a 20,000-capacity music or sports arena, a smaller 2,000-capacity concert venue, restaurants and bars, shops, cinemas, an ice skating rink and exhibition spaces. Where earlier comparisons with Las Vegas were made in derision, the reopened O2 seemed to have achieved what the Millennium Experience had not, successfully mixing entertainment, dining and retail spaces in a giant ‘mega-mall’. The O2 blurs the line between museum, gallery, heritage site and entertainment venue by hosting travelling exhibitions in its exhibition space, The O2 Bubble. In 2007-8, The O2 Bubble hosted Tutankhamun and the Golden Age of the Pharaohs, a major touring exhibition of artefacts associated with the tomb of the Ancient Egyptian king.

Named for its major sponsor, the telecommunications company O2, it was suggested that the new name might ‘erase memories of its millennial forerunner’ (Johnson 2007):

Under the tight white canopy of Britain’s most notorious building, there is one four-letter word that is strictly forbidden. While builders beaver away in the days running up to the opening of the O2, an enormous entertainment complex in a corner of Greenwich, southeast London, the suited executives ask only one thing: ‘Don’t mention the dome’ (Johnson 2007).

However, despite the attempts to remove what some see as the stain of a political white elephant, plaques on site make it clear that the O2 was constructed as a Millennium monument and formed an integral part of London’s Millennium celebrations. It is interesting that despite its expensive reimagining as a modern concert venue, its ongoing role as a museum/gallery space maintains its connection with heritage and the past and its original function as a temporary exhibition space. The post-apocalyptic imagining of the future from the original Millennium Experience display has been replaced by a return to the past as a source of pure marvel and entertainment, rather than a source of moral and environmental revelation. This might be read as a shift away from the apocalyptic fears of the Millennium to a more forward-looking engagement with both the past and the future as a source of inspiration.

Hilly Fields Stone Circle
Hilly Fields is a local park in the southeast London Borough of Lewisham. The park encloses a steep hill top with views north across the Thames towards the City of London, and includes a children’s playground, basketball courts, tennis courts and an enclosed picnic area. A memorial stone circle, with two tall stones which cast their shadows across a calendar stone to operate as a linear sundial calendar, was installed in the park as part of the local council’s Millennium celebrations in 1999. When I visited the park in 2007, the stone circle comprised a series of twelve unhewn granite standing stones approximately 1-1.4 m in height, with a diameter of approximately 19 m, in a flat clearing. In the centre of the circle was a flat rectangular slab of manufactured stone, oriented approximately north-south and approximately 6 x 1 m in size. On this stone were engraved a series of lines to mark the various months of the year, as well as the Summer and Winter Solstices. Approximately 6 m to the west of the stone circle were two tall shadow-casting stones, each approximately 3 m in height. A photograph and sketch plan of the Hilly Fields stone circle are shown in Figures 15.2 and 15.3.

Figure 15.2: Looking east across Hilly Fields stone circle, October 2007. (Photo: Author)

Figure 15.3: Sketch plan of the Hilly Fields stone circle, October 2007.

The stone circle was designed by a group of artists and adopted as a Millennium project by the local Brockley Society, which has run a midsummer fair in the park since the mid-1970s:

In 1997, a group of Brockley artists, inspired by an interest in the past, came up with the idea of creating a new stone circle to act as a giant sundial and a focal point for people to gather and celebrate. The idea infiltrated various groups such as churches, schools, and clubs and was eventually adopted by the Brockley Society as a millennium project. The circle was laid out on Hilly Fields, Brockley, a park dedicated to the public since 1896 and the site of an annual midsummer fair for just over 25 years. Boulders were transported from Scotland and set in place on the morning of the spring equinox, March 21, 2000. It was opened in May last year and the gateway dedicated to Brockley’s patron saint, Norbert. Michael Perry, who watched the solstice sunrise at 4:45 a.m. on June 21 last year, said: ‘A visit to the stones gives you an opportunity to reflect on your life and how this is integral to time. Myths and legends surround stone circles. It is up to us to continue this process’ (National Geographic News 2001).

The stone circle functions both as a large analemmatic sundial, where the shadow cast by a person standing in the centre of the circle can be used to tell the time of day, and as a calendar, by reading the shadow cast by the two largest shadow-casting standing stones on the calendar stone in the centre of the circle (see Figure 15.4). When I visited the site there was evidence of a recent fire within the circle in the form of a small burnt patch of grass, suggesting its possible contemporary use by neo-pagans (e.g. see Wallis 2003).

Figure 15.4: Detail of stone slab sundial calendar, Hilly Fields stone circle, October 2007. Note the burnt patch of grass within the circumference of the circle on the left of the photograph. (Photo: Author)

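The classic analemmatic layout places the hour points on an ellipse whose flattening depends on latitude, with the gnomon (here, the visitor) standing on the central date line. A minimal sketch of that geometry; the latitude figure and the use of the circle’s roughly 9.5 m radius as the dial’s semi-major axis are assumptions of the example, not measurements from the site:

```python
import math

def analemmatic_hour_points(latitude_deg, semi_major_m, hours=range(6, 19)):
    """Hour-point positions (x east, y north, in metres) for an
    analemmatic sundial: x = M*sin(H), y = M*sin(phi)*cos(H), where
    H is 15 degrees per hour from local noon and phi is the latitude."""
    phi = math.radians(latitude_deg)
    points = {}
    for h in hours:
        H = math.radians(15 * (h - 12))  # hour angle from noon
        points[h] = (semi_major_m * math.sin(H),
                     semi_major_m * math.sin(phi) * math.cos(H))
    return points

# Hilly Fields lies at roughly 51.46 degrees N; treating the ~19 m
# circle's radius as the semi-major axis (both assumed values):
pts = analemmatic_hour_points(51.46, 9.5)
# At noon the marker sits due north of centre, at M*sin(phi), about 7.4 m;
# at 6 a.m. and 6 p.m. the markers sit due west and east on the major axis.
```

On this layout the north-south flattening of the hour ellipse, not the circle of stones itself, encodes the latitude; the stones at Hilly Fields approximate rather than reproduce the strict geometry.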
Standing stones as millennial monuments
The stone circle at Hilly Fields is only one of a series of similar monuments constructed at the end of the Millennium in the UK, using official Millennium Commission project funding or local government funding, or built by individuals. Other comparable monuments include: Stave Hill, a 30 ft tall artificial mound at Rotherhithe, London; a stone circle constructed using Millennium project funding at Ham Hill in Somerset; another stone circle constructed as part of local Millennium celebrations at Holyport, near Maidenhead in Berkshire; a large granite monolith erected near Penrith as part of Eden’s Millennium celebrations; the Birchover Millennium stone in the Peak District; the standing stone known as the Millennium Stone, the Stanford Stone, in County Down, Northern Ireland; the New Aberdour Millennium Memorial and New Pitsligo Millennium Memorial in Aberdeenshire; the series of twelve standing stones erected in each of Jersey’s parishes to commemorate the Millennium; the Millennium stone near Bodley in Devon; and carved stones erected at Glastonbury and Wells in Somerset to mark the Millennium, amongst others. Indeed, I have been able to locate records of over 50 standing stones, stone circles or megalithic monuments erected throughout the UK to mark or commemorate the Millennium. Another form of monument built in many places throughout the UK to mark the Millennium was the stone Celtic or wayside cross; the 12 examples erected on Jersey in 2000 have already been mentioned.

The desire to root the experience of the Millennium in the past appears to have been an extremely strong one, manifested not only in these direct links to the past but also through the imagery of festivals and exhibitions such as the Millennium Experience. While these monumental, designed material remains can be seen as demonstrating uncertainty and fear of the future in an indirect way, a whole series of ‘incidental’ remains of the Millennium document these fears more directly.

‘Incidental’ material remains

Within this class of material remains associated with the Millennium, I would include a whole series of intangible aspects of media and literature, along with artefacts and sites associated with millenarian paranoia and fears of the Y2K bug. These include:

• bunkers, reported to have been built both by civilians for their personal protection and by businesses and government organisations to protect their critical computer systems in the years leading up to the Millennium;
• stockpiles of tinned food, reputed to have been kept by individuals in case of the breakdown of critical services following the Millennium;
• ‘digital’ artefacts of Y2K, including software and hardware.

Despite a wide search, I have not been able to locate any purpose-built Millennium bunkers or stockpiles, calling into question whether this practice, reported as widespread in the months leading up to 2000, was as common as the media suggested. The Symantec Security Centre near Winchester has been called a ‘Millennium bunker’ (Espiner 2005); however, it appears to have been occupied by Symantec only over the period 2002-2005, and is said to have originally been constructed during the Cold War as a nuclear shelter for Government Water Authority executives (Collins 2008). It is possible that purpose-built Millennium shelters and stockpiles of food and other resources have made their way into the archaeological record, but the sort of systematic archaeological survey which would reveal such remains is outside the scope of this chapter. Perhaps the most enduring of these ‘incidental’ artefacts are the changes in software and hardware which occurred as a result of global fears regarding the Year 2000 problem. By mid-1998, computer software and hardware systems were routinely being produced as ‘Y2K compliant’, and software was no longer programmed to store the year portion of dates as two digits rather than four. These ‘digital artefacts’ of the Millennium have had an ongoing impact on the design and use of computers.
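The two-digit date convention at the root of the Year 2000 problem, and the ‘windowing’ workaround commonly applied during remediation, can be sketched briefly. This is an illustrative sketch only: the function name and the pivot value of 69 are assumptions for demonstration (69/70 is a widely used convention), not details drawn from any system discussed in this chapter.

```python
def expand_two_digit_year(yy: int, pivot: int = 69) -> int:
    """Interpret a two-digit year using a sliding 'pivot window',
    the kind of fix widely applied during Y2K remediation.
    Years <= pivot are read as 20xx; years > pivot as 19xx."""
    return 2000 + yy if yy <= pivot else 1900 + yy

# A naive pre-Y2K program simply prefixed '19' to the stored digits,
# so the roll-over from '99' to '00' appeared to jump back a century:
naive = [1900 + yy for yy in (98, 99, 0, 1)]
windowed = [expand_two_digit_year(yy) for yy in (98, 99, 0, 1)]

print(naive)     # [1998, 1999, 1900, 1901]
print(windowed)  # [1998, 1999, 2000, 2001]
```

The windowed reading merely defers the ambiguity rather than removing it, which is why post-1998 ‘Y2K compliant’ systems moved to storing years as four digits outright.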

Given the future-looking rhetoric that accompanied the countdown to the end of 1999, it seems peculiar that so much of this monument-building was focussed on forms that resembled those associated with a romantic British past. Commenting on the Jersey standing stones and crosses, the president of the Société Jersiaise noted that:

They allow the visitor the opportunity to stretch the imagination and construct in the mind’s eye what might have been … Yet they also encourage the onlooker to leap 4,000 years into the past and link up with those who erected similar stones at Blanches Banques, St Ouen and at St Clement … They show how deeply we are rooted in our past and yet are fingers pointing to our future. The 12 crosses donated by the States of Jersey have fulfilled the desire to celebrate 2,000 years of Christianity. The 12 stones complement them by recording our debt to those who erected their symbols 2,000 years before as a sign of their appreciation of this lovely Island. Exact meanings are impossible to establish … (Johnson 2000).

RODNEY HARRISON: 0000:00, 1 JANUARY 2000. ‘THREE, TWO, ONE …?’

Discussion

Some themes emerge from this brief survey of designed and incidental archaeological remains of the Millennium in the UK. I have argued that, while at the time of writing it is less than ten years since it was seen to be the critical issue of the Millennium, the Year 2000 problem has largely been consigned to the dustbin of history. Nonetheless, all contemporary computer architecture conserves artefacts of the fear that critical infrastructure and computing systems would collapse at the end of the last Millennium, a fear fed not only by legitimate concerns about the computing problem, but also by broader uncertainties about the future which manifested at this time. These broader fears, which I have referred to as a form of millenarianism, can be read in the ways in which the designed memorials to the Millennium consistently evoked the past and indulged in familiar post-apocalyptic imagery when describing what was perceived to be an uncertain future. While individual memorials can be seen to evoke the past explicitly, I have also demonstrated the ways in which the work of the Millennium Commission, the most ambitious exercise in publicly funded memory-work in the UK during the twentieth century, was largely put towards projects associated in some way with heritage objects and sites. This backward-looking approach to the Millennium demonstrates a widespread fear of the future, and a nostalgia for a re-imagined heroic past in which humans were more ‘in tune’ with nature, deployed to counter the widespread fears of global technological breakdown which accompanied the celebration of the Millennium.

This leads to a discussion of the potential for an archaeology of global Millennium celebrations. This chapter has demonstrated the ways in which a study of the material remnants associated with Millennium celebrations might help us to understand how different countries approached the Millennium as an event, and the role of memorials in revealing a widespread national fear of the future. It remains to be discovered whether such fears were shared, and manifested in similar or different ways, in other countries, and whether they differed between western and non-western societies. Such a comparative study has the potential to uncover similarities and differences in the ways in which different cultures approached the future at the close of the twentieth century.

Conclusion

The British Prime Minister Tony Blair announced on New Year’s Day 2000 that ‘what struck me both last night and again today is this real sense of confidence and optimism’ (BBC News Online 2000a). However, an archaeological reading of the material remnants of Millennium celebrations in the UK reveals an overwhelming sense of anxiety about the new Millennium, and evidence that Britons were looking for assurance not in the future, but in the past. The widespread erection of standing stones at the end of the Millennium in the UK evoked a Neolithic past, re-imagined as a Golden Age in which fears of humanly generated environmental and technological disasters had no place. The Millennium Experience presented the future as post-apocalyptic, while overall the work of the Millennium Commission can be seen as looking to the past through its emphasis on projects associated with existing heritage sites and precincts. I have linked these widespread fears of the new Millennium to the Year 2000 problem and other manifestations of secular millenarianism within UK society at this time. This chapter has outlined the potential for a global archaeology of the Millennium to explore whether such millennial fears were shared, and manifest in similar or different ways, throughout the world at the close of the twentieth century.

References

Bayley, S. 2007. A decade on … the Dome finally works. The Observer, Sunday 24 June 2007. Online. Consulted 22 February 2008.

BBC News Online 2000a. Millennium celebrations ‘a huge success’. Sunday 2 January 2000. Online. Consulted 22 February 2008.

BBC News Online 2000b. Y2K: Overhyped and oversold? 6 January 2000. Online. Consulted 22 February 2008.

Collins, B. 2008. Symantec’s nuclear bunker: yours for £300,000. PC Pro News, 6 February 2008. Online. Consulted 21 April 2008.

CNN 2000. Preparation pays off; world reports only tiny Y2K glitches. 1 January 2000. Online at 1/y2k.weekend.wrap/index.html. Consulted 22 February 2008.

Espiner, T. 2005. Inside Symantec’s secure bunker. Security Strategies. Online at ,39024655,39154660-1,00.htm. Consulted 22 February 2008.

Gibbons, R. 1999. Dome offers ‘greatest show on earth’: aerial circus artistry will tell story of humanity, a war between nature and technology, through imagery of William Blake. The Guardian, 17 September 1999. Online at fiachragibbons. Consulted 22 February 2008.

Gould, S. J. 1997. Questioning the Millennium: A Rationalist’s Guide to a Precisely Arbitrary Countdown. New York: Harmony Books.

Johnson, B. 2007. In an unloved Greenwich tent, a £350m gamble takes shape. The Guardian, Tuesday 19 June 2007. Online at jun/19/dome.musicnews. Consulted 22 February 2008.

La Société Jersiaise 2000. Millennium Stones and Millennium Crosses. Online. Consulted 21 January 2008.

National Geographic News 2001. Ancient stones ring in summer solstice in Britain. Online at 0_Stonecircles_2.html. Consulted 20 December 2007.

Reid, E. O. F. 1999. Why 2K? A Chronological Study of the (Y2K) Millennium Bug: Why, When and How Did Y2K Become a Critical Issue for Businesses. Singapore: Universal Publishers.

Richard Rogers Partnership 2008. The New Millennium Experience. Online at render.aspx?siteID=1&navIDs=1,4,25,661. Consulted 22 February 2008.

The Millennium Commission n.d. The Millennium Commission: A Lasting Legacy. London: The Millennium Commission.

The Millennium Commission 2003. Out of Time: Changing the Landscape of the United Kingdom in the New Millennium. London: The Millennium Commission.

The Millennium Commission 2008. Project Search. Online at action=search&t=2. Consulted 21 January 2008.

Wallis, R. 2003. Shamans/Neo-Shamans: Ecstasy, Alternative Archaeologies and Contemporary Pagans. London and New York: Routledge.


Chapter 16

n.d. Conservation and the British

Graham Fairclough

influence) how ‘our’ century might be seen from the future. Those at the session were looking at the century in a proprietorial way, but were perhaps also trying to shake off the conviction that we automatically knew all about it because we had lived through much of it and were intimately acquainted with the rest through family memories and the period’s unique visual and aural documentation. But memories are not identical to what happened, and contemporary perceptions are partial. Oral and written histories are already starting to emerge about the period, including about the development of archaeological practice and policy such as PPG16 or the Valletta Convention, and not surprisingly they are revealing multiple pasts. Good oral history makes (hi)stories, not History; it should capture plurality not consensus.

A long moment … Like other chapters in this book, the pages that follow are the result of rewriting in 2008 a paper given at the Theoretical Archaeology Group conference (TAG) in 1999. The digital version of the paper written for TAG and slightly revised in January 2000 was lost, however (though it will exist somewhere on an old unreadable disk, like a postcard lost behind a drawer in the Post Office and never delivered, or that WWI letter in a bottle from a battlefield trench), and during re-typing (a distinctively late twentieth-century act, chasing after new technology) more recent ideas were added. My TAG paper’s original title was “n.d. good condition” (no date, good condition), a reference to how second-hand (‘antiquarian’) books were catalogued in the pre-digital and pre-Amazon world. At the time it seemed to summarise the issue of ‘conservation’ as a broader phenomenon - a defining moment with no specific date. I chose ‘conservation’ as my topic in 1999 even though it was not a moment or an event like the other topics of the ‘Defining Moments’ session. There were candidates for specific events that might be made to serve as microcosms of the whole (such as the Euston Arch or the Rose Theatre), but the idea of ‘conservation’, and all it encompasses (including the evolution of archaeological practice in the second half of the century) seemed too pervasive to be so tightly encapsulated.

Aspects of my TAG paper have been explored further elsewhere since 1999, notably the issues of the complexity and ‘hidden-ness’ of even very recent history in terms of Cold War and new landscapes (Fairclough 2007a, b). It is necessary to be careful about assuming that we understand the past simply because it is recent and because we were ‘there’; insiders’ views should be suspect; modern periods are called ‘contemporary’ because they are exceptionally open to interpretation and revised understanding. It is interesting that even in the balanced spread of this book’s chapters, three-quarters of the defining moments are situated within the first two-thirds of the century; the 1980s are unrepresented, and the earlier, safer, part of the century is (admittedly slightly) privileged.

Conservation was an appropriate subject for the session, as for this book, not least because several of the authors, explicitly or not, approach their subjects from a conservationminded standpoint. It might also be argued that whilst conservation is characteristic of the twentieth century, conservation is in turn defined by the century. It had philosophical origins in the late nineteenth century and an afterlife in the twenty first, but was essentially a mentality and behaviour of the twentieth. So, whither conservation (and therefore archaeology) in the twenty first century?

Now a further decade is closing, and the new century is getting old. We already have a greater distance from the twentieth century; we are separated from it psychologically and emotionally by apparent watersheds such as 9/11 and the ‘death’ of capitalism. As our memories fade and change, paradoxically perceptions and understanding of the past harden. Who of the right age has not watched an episode on TV of ‘Life on Mars’ (or ‘Ashes to Ashes’) and thought to themselves (but not voiced it in front of the children) ‘it wasn’t really like that!’ Yet for a younger generation, that is now what 1970s Manchester (or 1980s London) was like. History gets written, whether it is ‘what actually happened’ or not.

Knowing the past Whilst the session at TAG 1999 was described as being about the twentieth century ‘in retrospect’, it was also less explicitly about looking forwards, trying to guess (and perhaps


DEFINING MOMENTS that archaeologists are forced not to pretend that they are working with the past, but to accept that their resources all (and only) exist in the present - archaeologists are not historians. When looking back at the twentieth century, what often comes to mind is a phrase that my notes tell me was used several times in the TAG session: ‘the authority of the material world’. This might be paradoxical given that the twentieth century was without compare an age of documents and records, but it is also recent enough for its physicality to survive - the Holocaust, for example, ‘lives’ because it was documented and its remains are visible (indeed, with new monuments for it constantly being added to its material culture). Some may think that we do not need material culture in order to understand the past few decades, of whose events we are living witnesses and because, in any event, ‘everything’ was written down. But things tell their own, potentially different, stories, and future generations will need late twentieth-century material remains as evidence alongside the documents. One consequence of the importance of conservation in society is that we have become self-conscious about what to keep from our own period to pass on to the future. We expect future archaeologists (for example) to be interested in studying our period, and we are tempted to try to engineer the survival of a suitable archaeological resource for them. This is an opportunity (privilege or curse?) not available to previous generations of archaeologists, with their habit of thinking that archaeology was only about old things and sometimes even that archaeology was for prehistoric periods and unnecessary for historic times. There are responsibilities – what to choose, thereby what ‘picture’ their successors will be able to paint, and of course we do not know what future archaeologists will find interesting or significant. 
In this sense the later twentieth century was a defining moment in the development of an archaeology conscious of its own time – as we might say now, conscious that the past survives within the present. The practice of archaeology that came out of the twentieth century is a very different activity to the one that went into it in 1900.

Fig. 16.1 Aldermanbury (City of London), after bombing in the Second World War (Reproduced by permission of English Heritage. NMR; Reference Number: BL 5947.)

The obsession with physical conservation became so embedded in twentieth century mentalities that it is no longer easy to separate an attempt to understand the past and its meaning from agonising about which bits of it to protect and keep. It is almost as if one is not allowed to be interested in the past without wanting to keep or restore it. So enmeshed in the conservation idea are we that ‘significance’ and value’ seem to be the only legitimated way to describe the remains of the past, which seems to exist only to be preserved. The wide range of how the past is used by society has been reduced to the literal act of preserving its fabric. In that sense, history has been subsumed into heritage, scarcely having any independent existence.

Moments and movements It might be asked if something that happened slowly over the whole century - the growth of a ubiquitous conservation movement – can be called a defining moment? Moment has several meanings, however, as well as the one adopted for most of this book, of a short segment – a point - of time or an event. This chapter keeps its other meanings in mind: a mathematical meaning to do with small changes in quality; the archaic meaning where ‘of moment’ means ‘of importance’; a meaning to do with forces and changes; and of course it gives us momentum, a trend or direction.

One consequence is that those who work within the frame of conservation - archaeologists for example (the standpoint I use most in this chapter) – were long before the end of the twentieth century being asked to decide on the significance of very recent structures and on which examples best ‘represented’ our century. Earlier generations of archaeologists probably never had to consider such issues. The current association of archaeology with conservation, and with heritage, however, makes it necessary to see and experience the past only as part of the present. The benefit is


GRAHAM FAIRCLOUGH: N.D. CONSERVATION AND THE BRITISH In the TAG session, and in this book, the topics - ‘the defining moments which will no doubt dominate history books of the future’ - were chosen on the basis of single events that nevertheless symbolise something much larger. There is a risk in this approach that it falls into the same trap as the old ‘great men’ school of history, of letting particularities conceal more important processes or explanations. Conservation in particular is usually seen as a string of distinct cases or ‘battles’, but it is important to see it also as an embedded social phenomenon.

‘redevelopment’. It is, equally, why the term ‘preservation’ was replaced by ‘conservation’ during the 1980s, although the words are to all practical purposes synonyms at a literal level. It is commonplace now to say that conserving the historic environment is part of creating the present, almost that ‘yesterday begins tomorrow’, that the past must have a future, and yet such slogans mask a culturally-specific view of the relationship between past, present and future which is not identical in every country or culture. Defining moments of the conservation movement

In this sense, conservation is indeed characteristic of the century in social terms, and as implied earlier, the twentieth century may in retrospect come to be seen as conservation’s most characteristic century. At least in Britain, that is, because the situation is different elsewhere, even in near European neighbours, and one reason why it is interesting to consider conservation as a defining trait of the twentieth century is that it draws attention to the historical (and cultural) specificity of the whole activity, as many commentators, notably Lowenthal (1985), have identified.

It would be straightforward to tell the story of conservation through the frame of a defining event such as the demolition of the Euston Arch. Such turning points in opinion and practice were often the results of destruction, of ‘battles’ lost, less often of battles won, which itself perhaps tells us something about the conservation mentality, only happy when fighting a campaign, when opposing. The milestone cases on the journey are invariably the rare sites, revealing the common assumption that only when a few examples of something are left can the rest be considered important. How many times do we hear of the oldest or the last surviving example of something?

In 1960, two years after returning to power, Charles de Gaulle told the French people that France must ‘marry its epoch’ (‘marry the century’). In contrast, Britain, we might say, divorced itself from the century. Harold Wilson’s slogan ‘the white heat of a technological revolution’, whilst superficially progressive and modernist, contained an appeal to the past with its implicit reminder of the industrial revolution. Others, including archaeologists and conservationists, fell into the temporal equivalent of the mythical rural idyll - Jaquetta Hawkes, for example (but she was and is by no means alone), writing at mid-century, considered that the eighteenth century, ’was for all classes one of the best times to have been alive in this country’ (Hawkes 1951, 198). Such attitudes help to explain the British popular passion for old houses, old ways, and a resistance to change.

High profile losses always played a part, though the lessons are not always straightforward. High profile demolitions such as the Euston Arch and the Firestone Factory, however, created a new level of social value respectively for Victorian and 1930’s architecture, and prompted stronger protection of the built heritage. It seems unlikely, however, that they would have been so influential if the public mood on conservation had not already shifted so decisively. It is probably therefore more useful to glance behind the events at the underlying processes and attitudes that framed them. Four examples, in many cases distinctively twentieth century in their own right - urban growth, post-war rebuilding, social change and environmentalism - make the point.

Which prompts another question - if conservation was a distinctively twentieth-century phenomenon (at least in Britain), how will it fare in the twenty first? We already preserve the present and (almost) the future; people speak of designing tomorrow’s landscape; ‘place-making’ is a major government agenda (even if place - sense of place, genius loci – is surely first and foremost inherited or found, not made). People say that architects should strive to create tomorrow’s listed buildings (architectural quality viewed from a notional future was one of the benchmarks in the celebrated argument about redeveloping No 1 Poultry in London, a not uncommon justification for unpopular development proposals) as if life, like a Prime Minister’s term of office, is only to measured in terms of legacy.

o A deep-seated distrust of the city characterises many twentieth-century attitudes in Britain, despite or because of the UK being one of the most urbanised countries in the world at the start of the century. This took the form of longings for rural utopia and an anxiety that the countryside would vanish. From the 1920s onwards at least, the image of the ‘octopus’ – the outward stretching tentacles of cities - was invoked by a coalition of aesthetes, country-dwellers and landowners to deplore the expansion of towns, and thus implicitly to oppose a certain trajectory of social and demographic change. From this came the Green Belt and National Parks, but also the other side of the coin, the urban/rural dichotomy and the ‘solution’ of the high-density city/town/village which

The strength of conservation as a social and political force is one of the reasons why politicians and developers now speak of ‘regeneration’ and not (as in the 1960s and 1970s) of



Fig. 16.2 The Euston Arch (London) in 1960, three years before its demolition, a cause célèbre in the growth of conservation (© English Heritage. NMR; Reference Number: AA98/05420).

Fig. 16.3 Marsham Street Towers (Department of the Environment) (Westminster) during demolition in 2002, a removal more or less welcomed by the fully-fledged conservation movement. (© English Heritage. NMR; Reference Number: NMR 21758/19.) 160

GRAHAM FAIRCLOUGH: N.D. CONSERVATION AND THE BRITISH have inadvertently created a new settlement pattern so that neither conservation nor planning have had any coherent response to what is still misnamed as ‘urban sprawl’.

with bio-diversity, carbon offsets and ‘re-wilding’ as routine, barely-challengeable self-evident virtues; sustainable development is hijacked as a natural not a cultural issue.

o A sense of lost and vanishing inheritances underpins the conservation ethos; ‘threat’ is the oxygen of conservation. At its simplest this can be connected to the experience of the 1939-45 war, both in direct and indirect ways. Bombing of cities in the 1940s, some actually with an explicit cultural dimension, the socalled Baedeker raids, alerted people to the possibility of mass destruction of the building stock, leading to the invention of listed buildings. Less directly, but of greater impact, the war (and that of 1914-18 before it) destroyed certainties. ‘Post-war’ extensive clearance and re-planning of cites (from Plymouth onwards until the collapse of Ronan Point) caused as much change as the war itself, but though partly provoked by bomb damage, the tabloid view blamed (rarely credited) the planning and architectural professions and thus (unwittingly?) stoked public support for conservation. That was Britain’s failed flirtation with modernity; in the 1970s we dived back into the heated pool of a romanticised past.

Conservation finally, perhaps, came of age in the 1980s. Its basic principles became embedded into the very economy of the nation as house prices rose. No longer was conserving a building an expensive luxury; rather it was an essential investment. More remarkably, when compared to many other cultures, old buildings were everyone’s preference. If a house had to be newly built, better build it so it looks old (not a new idea, indeed, as the country’s inter-war suburbs testify). The same market principles led to proper funding for excavating archaeological sites as they were being destroyed. The moral arguments that had not fully impressed over the past 20 or 30 years became unnecessary when excavation was agreed to be merely a part of the normal, expected costs of developing land. Finally, the end of Coal (and of industry in general), and of the Cold War, were important milestones, connected because they both demonstrate the conservation idea beginning to overtake the process of change. Orgreave and Greenham Common became (amongst their other significances) defining places for conservation. When a ‘last factory’ is to close, the first stage of conservation is now ‘process recording’ (as at South Crofty or Ellington, or RAF Coltishall) before the workers leave for the final weekend. The line between use and disuse, between function and preservation, between being and remembering, is now an overlap.

o It is no coincidence that big leaps forward in conservation can be tied to the SAVE campaign to protect country houses. That the rescue (or, conservation for later use?) of the English aristocracy by the invention of country house conservation, following recognition that the 1940s war years and its Attlee aftermath had changed things forever, has always been a fundamental plank of the conservation cause. Whereas in 1919 there was anticipation that pre-war structures of life could somehow be resumed, 1945 with its Welfare State was different, a real turning point, and concerted efforts were made to preserve the aristocratic infrastructure. This simultaneously and fortuitously fed an emerging cult of mass tourism, and offered a social dimension to the rural idyll.

The examples above are just a few obvious ones, some of which came to mind when first writing the paper for TAG 1999 when the old century was not yet dead. There are many others, and everyone involved in conservation will have their own most significant moments. I could not decide on a single defining moment, however, and perhaps the real point is that conservation does not have one simply because it is so pervasive, so defining of the century in Britain. It is more accurate to see it rather as a defining movement, as one of the social processes which future historians will see as characteristic of the period in Britain. Future archaeologists might look at the conservation movement in a way similar to how we consider the enclosure movement, or industrialisation: as a socially-driven historic process that in turn shaped society and the environment, and which will therefore one day in the future come to an end, or transmute into something different.

o The green conscience - Rachel Carson’s book Silent Spring (1962) is credited with showing a mass audience for the first time how mankind has the potential for large scale ecological destruction. It and similar works in the 1960s and 1970s created a whole new conservation ethos, fuelled by many followers from Gaia to Gore. Even the pinnacle of twentiethcentury technological triumph was harnessed to its ‘back-to-nature’ message when the ‘Fragile Earth’ photos came back from the Apollo moon flights. Much later, more localised threats such as that to Halvergate Marshes, which started the slow turning of the CAP ‘oil-tanker’, were also pivotal; today we live

Context We can also, therefore, ask how conservation fits in with other defining aspects of the century. It is striking for example how many of those discussed in the rest of the book have conservation-related undercurrents. Here, three examples can stand for the whole picture.


DEFINING MOMENTS War is one theme of this book, as it was of the century itself. The changes it produced, whether to people, social structures or material, produced dislocations of place and of time, and new relations of past with future, a sense of fragility, perhaps a greater appreciation of memory and a sense of loss. This seemed to encourage attempts to keep cherished buildings, such as the origin of listing in wartime London to protect some buildings from post-bombing demolition. The knowledge of how easily things can be lost was a major starting point for the conservation movement, as the RESCUE or SAVE campaigns demonstrated. The effects of loss through war have been felt much more recently in other countries no further away than SE Europe, with their experience of the more instantaneous interplay between war and conservation. It is clear too that loss creates new significance: the Mostar Bridge, the Bamiyan statues of the Buddha, even the Twin Towers; these all have significance and meaning now that they did not possess before their destruction.

Too much remains …

When trying to understand the more distant past, archaeologists 'make do' with whatever has survived the wear and tear of time and human erosion; survival limits and shapes our understanding. The passage of time has already selected a sample assemblage. When thinking about the twentieth century, however, there is the opposite problem: too much material. Part of the response is to select, and often this is framed as an attempt to guess what future archaeologists might need in order to understand the twentieth century. The first step is to decide what we think was significant about the twentieth century; the second is to identify those things we think will survive in a sufficiently material form for future archaeologists to recognise them. This is in effect what the Monuments Protection Programme (MPP) did, for example with coal and a few other industries, when it tried to 'catch' a whole 'new' section of the historic environment at the moment it became redundant, making a choice as to which 'bits' to try to keep. It is also what is happening now at RAF Coltishall, where archaeologists and artists are trying to encapsulate some of the character (not necessarily, and certainly not only, the architectural fabric) of this distinctive 'working and living place' so that it might influence the creation of its probable successor, an 'eco-community' (Dunlop 2008; Schofield 2007).

The search for new frontiers (see Graves-Brown [Everest] and Fewer, this volume) is characteristic of the twentieth century, and even here conservation finds its reflections. If you cannot discover new things in the present world, you can instead rediscover new things in the past world; after all, if the past is another country (Lowenthal 1985), it must have a frontier. Conservation has constantly sought new frontiers – new types of heritage, new periods of history; every decade sees the discovery of a new 'neglected period', a new overlooked field needing its own special interest group. There are museums of everything now, and presumably (somewhere) a museum of museums.

The MPP's examination of twentieth-century 'heritage' began with military heritage. This is perhaps predictable: the sequence by which archaeology expanded into 'new' periods such as the medieval, the post-medieval or the twentieth century is almost standardised – first military things (castles, WWII defences, Cold War structures) were deemed interesting, then religious things (or, for later periods, industrial ones), then 'big' houses; 'mundane' ordinary things (industrial workers' housing, 1950s suburban shopping parades, leftover parts of bypassed ring-roads) follow on later (as in the post-MPP 'Change & Creation' project; Fairclough 2007b) or not at all.

Mass production – not just of objects, but of war and death, of social control, of surveillance, of entertainment, of health, of mobility, of information, of education – is the western twentieth-century way. Where does this leave us in relation to looking at heritage and at the past: mass access to history, or an escape into individual histories, or both? How does it affect the social urge to conserve (and the individual's urge to collect)? Do we try to preserve the particular as an antidote to the mass? The growth of interest in the 'local', an increasingly important aspect of conservation, is partly a reaction to mass production, of course, but we might also consider that mass-produced objects have interest and value. The basis of conservation, however, has to one degree or another been the idea of the rare and the special, the last example of something, and this fits very ill – because of problems of scale, practicality and 'ownership' – with holistic concepts such as 'place', and even with local distinctiveness.

One problem with the conservation of recent stuff (hence part of the original title of this chapter) is that these parts of the archaeological resource are often simply too well-preserved, in too good a condition, to make conservation easy. Take coal-mining, an industry iconic of the twentieth century far beyond the strictly technical issues of mining, from the creation of the nationalised industry as part of the establishment of the Welfare State to the pivotal years of the mid-period Thatcherite forge of late twentieth- and early twenty-first-century Britain. When faced with a couple of dozen decaying, disused and enormous 1960s collieries, where does the conservation process start? How many can we afford to keep? And why, and what for? What would they be used for, and how, or whether, would their longer-term 'safety' be guaranteed? Why, indeed, keep any in so self-conscious a way; why not allow them to fend for themselves and survive the 'natural' erosion of redundancy and time in whatever partial form they can, until what is left is easily manageable? Why not simply clear them away? Sometimes there is popular resistance to the idea of their preservation because they are not yet sanctified by age. But sometimes the opposite is true: mining communities that lost their way of working, and sometimes their way of life, often wished to keep some of the remains in commemoration, whereas after earlier closures the norm had been a desire to clear away the evidence of an uncomfortable past.

Fig. 16.4 South Crofty pit in 1998 at the moment of closure, then the last working tin mine in Cornwall. (© Crown copyright. NMR; Reference Number: NMR/15887/17.)

... and Past meets Future

It is worth noting – although it is an obvious point – that the 'time gap' is shrinking, in public consciousness just as in archaeological perspective. By and large, Victorian architecture only began to be widely valued in the 1960s (after the 'loss' of the Euston Arch, which in the early twenty-first century is threatened with an unfortunate zombie-like resurrection from its resting place on the bed of the River Lee). In contrast, architecture of the 1930s had to wait for widespread acceptance only until the 1970s; and whilst the 1950s is still undervalued, and modernist architecture of any type still very contentious, a 'revival' of 1960s fashion and architecture, and interest in the 1970s, came within thirty years or less. The 30-year rule played a part by discouraging early consideration, but that only affected the slightly arcane, legalistic practices of listing, not public perceptions. Rediscovery of the 1980s has so far only reached fashion and popular music. In professional or academic consciousness, however, the 'time gap' is often zero. For this chapter, the significant issue is that by the end of what might be called the century of conservation, conservation as a movement had reached the stage of trying to protect things while they were new, or at least before they passed into disuse. It had also reached the stage of claiming that everything, of whatever type, date or significance, comprised the 'historic environment'. That is the point at which the aims and objectives of disciplines such as archaeology began to separate from those of conservation.

TAG 2099

Finally, it is perhaps interesting to ponder the visibility of conservation in the future material record. Future archaeologists will find no shortage of evidence for conservation, which, like any major social process, will leave its trace. By definition, conservation leaves its own physical remains: re-pointed walls, repaired

earthworks, reconstructed buildings, new roofs and so on. Conservation work may be undated (out of time), but it is usually in good condition and is likely to survive where 1960s tower blocks are failing to do so. We are already re-conserving early conservation work, and Britain already looks rather different from many other European countries simply because of our propensity for conservation over the past hundred years. We have repaired and adapted buildings using obsolete techniques and materials; we have been freezing ruined buildings by careful consolidation since the nineteenth century (not to mention, in the middle period of conservation, sometimes decorating them with little signs bearing Latin words); we patch up earthworks; we have put new buildings on stilts to achieve preservation in situ, and filled warehouses with cardboard boxes full of excavated artefacts to achieve preservation by record (which the more imaginative archaeologists might link to the large holes that were dug in cities before new building work started).

In the 'countryside' (if the land not yet built on is still called that), future landscape archaeologists will find repaired and maintained hedges and walls which do not seem to fit their associated land-use. Red telephone boxes that seem to have survived long after their technology had gone might challenge their dating techniques. Spatial and functional analysis of the Poundburys and their derivatives will trace the development of retro-architecture, and speculate on why cul-de-sac layouts designed for the car and for privacy in a crowded country were accompanied by houses redolent of eighteenth-century village communities. There will be strangely managed artificial patches of woodland with no apparent function other than to exist. Odd zoo-like reserves of archaeological sites in Wiltshire will demand explanation, and explanations will be sought for why – uniquely in the whole of Britain – the Stonehenge area had scarcely changed since the 1960s.

Explaining (rather than simply recording) these things might be more difficult, and more fun. It will be an archaeologist's as well as an historian's task. In the more distant future, when twentieth-century motivation will seem even more mysterious than it does now (or when the time elapsed allows reappraisal and a search for deeper causes), how will the material remains of conservation be explained? Why did twentieth-century Britons expend so much effort on keeping old houses? Functional and economic causes might not convince everyone, and some of the more consciously radical schools of thought will look for social and contextual explanations. The braver theoreticians might look to religion, ritual or magic as an explanation; some might interpret conservation as an attempt to escape into the past from one of the more stressful centuries in human history (as well, of course, as being the century of counselling, psychoanalysis and personal indemnity insurance, not to mention shopping), or might see it as an attempt to create a healthy future environment by keeping what is best from the past. One or two archaeologists might even dig up the long-abandoned concept of sustainable development, and theorise a connection between that and conservation (if they can agree a definition of sustainability).

In conclusion, the conservation movement of the late twentieth century began to allow archaeologists to influence the twentieth-century archaeological record that their successors will inherit. The century also brought us new, very much wider, definitions of archaeology which contradict the 'archaeo' bit and refer to past material culture of any date. Conservation was an important vehicle for the progress of archaeology during the later twentieth century; but if conservation was such a defining aspect of the century, it is difficult not to wonder what it will evolve into in our young, barely ten-years-old century, and what archaeology's future relationship with it should be.

References

Carson, R. 1962. Silent Spring. Boston: Houghton Mifflin (reprint Mariner Books, 2002).

Dunlop, G. 2008. The War Office: Everyday Environments and War Logistics. Cultural Politics 4(2), 155-60.

Fairclough, G.J. 2007a. The Cold War in context: archaeological explorations of private, public and political complexities, in Schofield, J. and Cocroft, W. (eds), Fearsome Heritage: Diverse Legacies of the Cold War, pp. 19-32. Walnut Creek: Left Coast Press.

Fairclough, G.J. 2007b. The contemporary and future landscape: change & creation in the later 20th century, in McAtackney, L., Palus, M. and Piccini, A. (eds), Contemporary and Historical Archaeology in Theory: Papers from the 2003 and 2004 CHAT Conferences, pp. 83-88 (Studies in Contemporary and Historical Archaeology 4, BAR International Series 1677). Oxford: BAR Publishing.

Hawkes, J. 1951. A Land. London: The Cresset Press.

Lowenthal, D. 1985. The Past is a Foreign Country. Cambridge: Cambridge University Press.

Schofield, J. 2007. Artists and Airmen: documenting drawdown and closure at RAF Coltishall (Norfolk). Conservation Bulletin 56, 25-27.