Philosophy of the Information Society: Proceedings of the 30th International Ludwig Wittgenstein-Symposium in Kirchberg, 2007. ISBN 9783110328486, 9783110328080


English, 326 [330] pages, 2008



Herbert Hrachovec, Alois Pichler (Eds.) Philosophy of the Information Society

Publications of the Austrian Ludwig Wittgenstein Society. New Series Volume 7

Herbert Hrachovec • Alois Pichler (Eds.)

Philosophy of the Information Society

Proceedings of the 30th International Ludwig Wittgenstein Symposium, Kirchberg am Wechsel, Austria, 2007

Volume 2

Bibliographic information published by the Deutsche Nationalbibliothek. The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data is available on the Internet at http://dnb.ddb.de

Printed with the support of the Federal Ministry of Science and Research in Vienna and the Cultural Department of the Government of Lower Austria.

North and South America by Transaction Books, Rutgers University, Piscataway, NJ 08854-8042, [email protected]

United Kingdom, Ireland, Iceland, Turkey, Malta, Portugal by Gazelle Books Services Limited, White Cross Mills, Hightown, LANCASTER, LA1 4XS, [email protected]

Distribution for France and Belgium: Librairie Philosophique J. Vrin, 6, place de la Sorbonne, F-75005 PARIS; Tel. +33 (0)1 43 54 03 47; Fax +33 (0)1 43 54 48 18; www.vrin.fr

© 2008 ontos verlag
P.O. Box 15 41, D-63133 Heusenstamm
www.ontosverlag.com

ISBN 978-3-86838-002-6

No part of this book may be reproduced, stored in retrieval systems or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for the exclusive use of the purchaser of the work.

Printed on acid-free paper, ISO-Norm 9706
FSC-certified (Forest Stewardship Council)
This hardcover binding meets the International Library standard.

Printed in Germany by buch bücher dd ag

Table of Contents

Preface
HERBERT HRACHOVEC AND ALOIS PICHLER ..... 5

Section 1: Philosophy of Media – Medienphilosophie

Binding Time: Harold Innis and the balance of new media
CHRIS CHESHER, SYDNEY ..... 9

A view of the iconic turn from a semiotic perspective
AUGUST FENK, KLAGENFURT ..... 27

Medienphilosophie und Bildungsphilosophie – Ein Plädoyer für Schnittstellenerkundungen
THEO HUG, INNSBRUCK ..... 43

Medienwissenschaft, Medientheorie oder Medienphilosophie?
CLAUS PIAS, WIEN ..... 75

Media Philosophy—A Reasonable Programme?
SIEGFRIED J. SCHMIDT, MÜNSTER ..... 89

Section 2: Philosophy of the Internet – Philosophie des Internets

Science of Recording
MAURIZIO FERRARIS, TURIN ..... 109

Weltkommunikation und World Brain. Zur Archäologie der Informationsgesellschaft
FRANK HARTMANN, WIEN ..... 125

Avatars and Lebensform: Kirchberg 2007
MICHAEL HEIM, IRVINE ..... 141

Towards a Philosophy of the Mobile Information Society
KRISTÓF NYÍRI, BUDAPEST ..... 149

Section 3: Ethics and political economy of the information society – Ethik und politische Ökonomie der Informationsgesellschaft

On our Knowledge of Markets for Knowledge—A Survey
GERHARD CLEMENZ, WIEN ..... 167

East-West Perspectives on Privacy, Ethical Pluralism and Global Information Ethics
CHARLES ESS, DRURY ..... 185

Information Society: A Second “Great Transformation”?
PETER FLEISSNER, WIEN ..... 205

Internet and the flow of knowledge: Which ethical and political challenges will we face?
NIELS GOTTSCHALK-MAZOUZ, STUTTGART ..... 215

Will the Open Access Movement be successful?
MICHAEL NENTWICH, WIEN ..... 233

Globalisierte Produktion von (akademischem) Wissen – ein Wettbewerbsspiel
URSULA SCHNEIDER, GRAZ ..... 243

Section 4: Electronic philosophy resources and Open Source / Open Access – Elektronische Philosophie-Ressourcen und Open Source / Open Access

Philosophy in an Evolving Web: Necessary Conditions, Web Technologies, and the Discovery Project
THOMAS BARTSCHERER, PAOLO D’IORIO ..... 261

Some thoughts on the importance of Open Source and Open Access for an emerging digital scholarship
STEFAN GRADMANN, BERLIN ..... 275

The Necessary Multiplicity
CAMERON MCEWEN ..... 287

Abstracts and Biographies ..... 305

Preface

This is the second of two volumes of the proceedings from the 30th International Wittgenstein Symposium in Kirchberg, August 2007. It contains selected contributions from sections 4-6 and the workshop of the symposium:

• Medienphilosophie – Philosophy of media
• Philosophie des Internets – Philosophy of the Internet
• Ethik und politische Ökonomie der Informationsgesellschaft – Ethics and political economy of the information society
• Elektronische Philosophie-Ressourcen und Open Source/Open Access – Electronic philosophy resources and Open Source/Open Access

The editors want to express their gratitude to all contributors and to those who took part in the discussions in Kirchberg and beyond. Special thanks to David Wagner (Vienna), who was in charge of typesetting this volume.

In order to set the tone for this volume, and to hint at the interplay between the two distinctively different components which made up the symposium, we here reproduce the introductory remarks given by Herbert Hrachovec at the opening ceremony.

Dear colleagues,

I want to begin with a hypothetical scenario: if this were the Wittgenstein Heritage Society, we would be in serious trouble, because it is well known that Wittgenstein did not take a laptop to Norway, and he would have been very critical of those of you who mentioned that Kirchberg may be a nice place but that there is no WiFi available. So we do have a certain amount of tension here, because the information society is built upon the sorts of gadgets which Wittgenstein did not use at all.

To reach back a little bit into the history of this conference, I might point out that there is a motto to the Philosophical Investigations, about which David Stern gave a talk several years ago, a famous motto taken from Nestroy: “Und überhaupt hat der Fortschritt das an sich, dass er viel größer ausschaut, als er wirklich ist.”—Progress looks much bigger than in fact it is. The question then becomes: how do we deal with this Wittgensteinian scepticism about progress vis-à-vis the inescapable optimism that will be found in many of the contributions to this symposium?

I propose, just as a quick approach to this, that Wittgenstein, in quoting Nestroy, is actually making use of an analogy from visual perception, namely “looking bigger than it is”. This is a deception of the senses transferred to the world of concepts, and there are at least two types of sense deception. One is the fata morgana type, where you think you see something and there is nothing there, so it is just an error. The second is the stick-in-the-water type, where there is water and there is a stick which looks bent, and you know it could not be bent. There is a substratum to the perception; it is just that appearances are deceptive.

If you take these two types of sense deception, philosophers may be grouped—very roughly, I admit—according to their views about progress. There are those who think that there is just nothing to progress and pride themselves on rejecting this modern notion, including its manifestations like laptops: those who reach back to the ancient Greeks while at the same time making use of some of the amenities technological society provides. One could compare them to people wearing a traditional costume with synthetic Gore-Tex® fibres hidden inside its linings. These philosophers sometimes use Wittgenstein, and in particular his scepticism about progress, to argue their point, but I would submit that this is a misuse.
It is much more appropriate to look at the problem from the stick-in-the-water point of view, and the interesting thing is that this is a view of the world. Progress is a view of the world, even though from this perspective things might seem systematically distorted. A philosopher's job would be not to reject progress, but to find out about the distortions and their dangers and to put things into place. This seems to me to be a genuine Wittgensteinian move. So I can come back to the points Alois has just made about the development and the prospects of the concepts of information in a Wittgensteinian setting: getting things right, putting them in the right way, even though there is this deceptive surface.

Herbert Hrachovec and Alois Pichler
Vienna and Bergen, May 2008

Section 1: Philosophy of Media – Medienphilosophie

Binding Time: Harold Innis and the balance of new media

Chris Chesher, Sydney

Introduction

Much has been made of the impacts of digital media on the experience of space: new modes of perception and action at a distance; accelerating globalisation; shifting boundaries between work and home life; and so on. It is less common to read about the impacts of digital media on the experience of time. Yet the digitisation of cultural practices and artefacts has significant implications for structuring our relationships with both the future and the past.

In the theoretical traditions concerned with technology and time, the work of Harold Innis, a Canadian economist and communications theorist, offers an approach to understanding the social significance of all kinds of media. He analysed how different media relate to space and time: space-binding media extend influence and meanings over distances, helping to build empires and develop cohesion across space, while time-binding media influence cultural patterns in duration. For Innis, civilisations can be measured by their balance between managing time and controlling space. If this remains the case today, how has the computer changed this balance in our own culture?

This paper examines the extent to which Innis's concepts about media still apply today. Since his death in 1952, computers have transformed the mediascape, so his claims and approach must be re-evaluated in the light of these changes. Innis's methodology cannot be applied as easily to computers as to earlier media. His analysis of communications places great emphasis on the material features of particular media, from ancient tablets to modern broadcasting. The material status of computers is more complex, and might escape Innis's methods. For instance, digitised artefacts can seem to be largely virtualised. In fact, their existence is split between several sites of inscription in computer storage and memory, and events of expression through outputs: a hypermateriality, not a virtualisation.
I argue, therefore, that Innis's emphasis on the materiality of media remains a key question for theorists of new media. In this paper, I will revisit Innis's approach to the relation of media to space and time and his intellectual transition from economic geographer to media theorist, and, in the process, will note some resonances that his work has with contemporary media studies, philosophy, and science and technology studies. I will also evaluate his arguments about the significance of media to cultural change, in relation to the history of computers and networks.

Innis on media, space and time

Innis was among the first scholars to argue that social change is closely associated with media change. His later works in particular, Empire and Communications (1972b) and Bias of Communication (1991), cast modes of communication as central agents in the transformation of civilisations. He analysed civilisations in the longue durée by tracing changes in the materials and techniques used to organise social activities, communicate over space, and pass knowledge down through time. He compared and assessed these civilisations according to how well the dominant media of each balanced command across space and dominion over time.

We must appraise civilization in relation to its territory and in relation to its duration. The character of the medium of communication tends to create a bias in civilization favourable to an over-emphasis on the time-concept or on the space concept and only at rare intervals are the biases offset by the influence of another medium and a balance achieved (Innis, 1991, p. 64).

Innis saw a dramatic and destabilising imbalance in the dominant media of the mid-twentieth century in their bias towards space, and their neglect of time. He saw European imperialism, United States expansionism, and the centralisation of commerce and politics as strongly related to the dominance of centralised space-binding media such as newspapers, commercial printing, the telegraph, and radio. By comparison, he saw a neglect of objects and practices with enduring value, such as archives, universities and books.

Innis is likely to have considered the introduction and rapidly changing roles of computers as highly disturbing to the balance of media. Computer design tended to prioritise control over territory and populations, supporting centralised monopolies of knowledge, as evident in the first users of computers: the Census Bureau, Air Force, Atomic Energy Commission, Army and Remington Rand Sales Office (Ceruzzi, 1998). Computer databases are also time-binding media, but only within limits, which will be explored later in this paper. Innis's assessment that 'sudden extensions of communication are reflected in cultural disturbances' (Innis, 1991, p. 31) can certainly be applied to digital media, and he had great suspicions about modern technology, expressed in a polemical 1948 paper, 'The mechanisation of knowledge':

The conditions of freedom of thought are in danger of being destroyed by science, technology and the mechanisation of knowledge (Innis & Drache, 1995).

Innis’s analysis of the relationship between communication technologies and cultural change only extended until the age of newsprint and radio. The first commercial computers were becoming operational towards the end of Innis’s life but there is nothing published to suggest he knew about these developments. Innis’s legacy provides an alluring gap in the understanding of media in its lack of explicit attention to electronic computing. Innis’s attribution of agency to material objects and their properties remain contentious in media studies today. He has often been critiqued, or dismissed, as a technological determinist (Burnett & Marshall, 2003). However, recent work in philosophy and social studies of science has challenged the fixed boundary between human and non-human action. Innis’s lack of distinction between objects and human ideas and actions can be read alongside Guattari’s concept of the asignifying semiotic: signs operating below the threshold of meaning (Guattari, 1995). Innis’s approach also anticipates posthumanism and actor-network theory and their analyses of the imbrication of non-humans in human affairs. The work of Friedrich Kittler on discourse networks and contemporary media also clearly draws from Innis’s legacy (Kittler, 1990, 1999; Kittler & Johnston, 1997). Derrida’s conception of cultural inscriptions emphasises the cultural / material dimensions of archives: ‘…the technical structure of the archiving archive also determines the structure of the archivable content even in its coming into existence and in its relationship to the future’ (Derrida, 1996, p. 17 italics in original). Innis’s influence has always been strongest in Canada. Library and Archives Canada recently put up a website on the legacy of Innis and McLuhan that includes short essays from several contemporary scholars on the theme ‘Archives as media’. 
The site raises the question: To what extent is historical knowledge not merely preserved, but shaped by the archive and its means of selecting, storing, and presenting information? (Libraries and Archives Canada, 2007, pp. index-e.html)

Archives necessarily accumulate through ongoing interplays between human and non-human components. What is archived is conditioned by the material constraints of space, the informational architectures of indexing systems, and the personal and collective choices of archivists. For Innis, the various time-binding media available to any community are qualitatively different, as each retains impressions over time in different ways. Government records, personal letters, and published poems are preserved in different ways. Each civilisation has a different orientation towards time.

To understand Innis's distinctive reading of time-binding media, it is relevant to briefly follow his transition from social scientist of space to self-styled humanities scholar of space, time and media. Innis began with an interest in the technologies that build empires, conducting detailed research into the development of the economy and infrastructure of Canada. He wrote histories of the Canadian Pacific Railway and of the fur, cod, and dairy industries (Innis, 1940, 1962, 1972a). Each of these industries was structured very differently. For example, the fisheries were established in scattered independent settlements along the eastern coast of Canada; by contrast, workers in the fur trade were dependent on the Hudson's Bay Company, which ran a command economy from London (Di Norcia, 1990). The differences between the fish and fur industries emerged from the material properties, behaviours, and environments of the commodified animals themselves. The beaver's sedentary nature and its inadequate defences against humans contributed to the rapid growth of the trade in its fur. The dispersed distribution of fish stocks allowed fishing boats to operate more independently, and therefore the communities to adopt a more democratic ethos (Watson, 2006). Innis read cultures from the ground up, seeing spatial and economic patterns emerging from the properties of agents, environments and lines of movement at various speeds.
In his later work, Innis's attention increasingly turned towards the problem of how civilisations relate to time through different media. Time is harder than space to study empirically, so he switched methods, from his notoriously detailed fieldwork and statistical analyses to the accumulation of facts from a huge array of secondary sources. Collegial relationships with classics scholars such as Charles Cochrane, Edward Thomas Owen and Eric Havelock at the University of Toronto (Heyer, 2003) helped him take an increasingly interdisciplinary approach. He came to consider himself as much a philosopher, historian and sociologist as an economist (Innis, 1991).

Innis combined his understanding of the economy and materialities of building empires with his new interest in time to propose a theory of history based on the extent to which the dominant materials and modes of communication of a civilisation are biased towards space or time. When Innis refers to 'media bias', he is not talking about the political slant of journalists, but about biases emerging from the very properties of the materials used in communication. He theorises that media bias in a particular civilisation emerges from the interactions between three interdependent layers: properties of media substrates; encoding conventions; and social and political arrangements using media for particular purposes. The computer presents complications at each of these three levels.

1. Media materiality

At the first level of media materiality, Innis argues that different substances have distinctive properties that support different styles of communicating and, most importantly, that each tends to have a bias towards either space or time. For example, papyrus is light and portable in scrolls, can be made cheaply from water plants, and can be written on with rapid strokes of a brush. However, papyrus deteriorates quickly and is, therefore, biased towards extending communication across space: building, extending and maintaining empires. By contrast, carvings in stone last for centuries but are expensive and time-consuming to produce, and cannot be transported easily. So, as papyrus is biased towards space, stone is biased towards time.

The dominant media of a civilisation reflect the materials available to that community, either from local sources or through trade routes: stone in Egypt; papyrus in the Nile Delta; clay in Babylonia and Assyria; parchment in the Carolingian dynasty; and paper, from China to Europe via the Middle East. The materials established the limits of communication in weight, durability, malleability, reflectivity, and other technical limitations and capacities. The expense, complexity of manufacture, and all other features of a medium affect how it can be taken up, and by whom.

The most lightweight medium of communication is speech. This is the dominant medium in oral cultures, where face-to-face conversations, and the cultivated memories of citizens, are the main means of transmitting culture over space and time. Innis favours the Greek oral civilisation as a model of a balanced regime of communication, because he believed it did not constrain thought in the same way that writing did. As writing is introduced, thought becomes limited:


Writing with a simplified alphabet checked the power of custom of an oral tradition but implied a decline in the power of expression and the creation of grooves which determined the channels of thought for writers and readers (Innis, 1991, p. 11).

Innis was not the only scholar of his time writing about media change and, particularly, the significant historical transition from oral to literate cultures. The classicist Eric Havelock (1963) emphasised the transformations in thought that came with the emergence of writing in Ancient Greece. Walter Ong (1988) analysed the changes in consciousness and personality that accompanied the expansion of print. Most famously, Marshall McLuhan (1994) saw media as extensions of the human nervous system. McLuhan was a strong advocate of Innis's work, undoubtedly helping to sustain Innis's influence. However, McLuhan's reading of media was quite different from Innis's, largely because of his disciplinary origin in literary theory, in contrast to Innis's training in social science. Where McLuhan's reference points were surrealism and New Criticism, Innis's were economic staples, trade routes and modes of transportation (Watson, 2006).

For McLuhan, mediated memory devices such as writing, printing, videotape, or computer disks are prostheses of human faculties: extensions of individual human memory. In contrast, Innis sees these devices in terms that are more materialist and historical, as 'time-binding media'. Whole societies carry knowledge into the future in distinctive ways, forming time-binding systems that are always bound up with centralised or decentralised social power and traded against immediate goals. Innis regards thought not so much as a property of active individual subjects, but as a pattern that emerges across an entire civilisation, grounded in changes in the materiality of media. Innis's work is distinctive in developing a materialist approach across entire civilisations.

Computers complicate media materiality

Alan Turing's (1936-7) famous thought experiment demonstrated that any universal computing machine could simulate any other Turing machine, with time and memory being the only limits. In principle, computing components could be built from any suitable substances, and in any valid form, making media materiality more complex than with other media. Where print and radio were relatively bound to their embodiments in pulp and the radio spectrum, computers would always be manifest as interconnected components comprised of many different materials: metals, paper cards, magnetic surfaces, semiconductors, radio, and optical wavelengths. Turing's mathematical approach to media was the opposite of Innis's because, for him, the material expression was irrelevant. Turing's proposed machine, which would displace paper as the dominant media material, included an imaginary infinite 'paper tape' as an immaterial prop in his mathematical proof. While paper could never do this particular job in actual machines (even if cards and paper tapes did feature in some designs), some kind of time-binding component was central to Turing's abstract design for an automated computing machine. This design concept called for entirely new materials for automatic memory, which could be written, read, erased, and rewritten automatically.

Maintaining Innis's emphasis on materiality highlights the unprecedented integration of different interconnected material components required in the engineering of computers. The time-binding components of the earliest commercially available computers were notoriously difficult, unreliable and limited. For example, one early memory device, used in the UNIVAC-1 in 1951, was the acoustic delay line (Gray, 2001). It worked by sending carefully tuned vibrations through the substrate of long hot columns of mercury, using techniques developed for radar. Information in the delay line was held in the precarious form of vibrations in the liquid, cycling between opposing transducers at either end of the tube, which would amplify and retransmit the same information. The toxic mixture inside the delay line had to be kept at a stable temperature to maintain a uniform speed of sound that would keep the memory traces in synch with the machine cycles. Even with such a complex apparatus of seven units, each with 18 delay columns, the system could store only 1000 computer words. Fast memory would remain a prohibitively expensive computing component for several decades.
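Turing's indifference to substrate can be made concrete with a toy sketch. The following minimal Turing-style machine (a hypothetical illustration in Python; the function and rule names are mine, not Turing's or the author's) keeps the rule table as pure structure, while the "tape" happens to be a Python dictionary rather than mercury columns or magnetic tape:

```python
# Minimal Turing-style machine: the rules are pure structure; the "tape"
# substrate here is just a Python dict mapping positions to symbols.
def run(rules, tape, state="start", pos=0, blank="_", halt="halt", max_steps=1000):
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(pos, blank)
        write, move, state = rules[(state, symbol)]   # look up the transition
        cells[pos] = write                            # write to the "tape"
        pos += 1 if move == "R" else -1               # move the head
    return "".join(cells[i] for i in sorted(cells))

# Example rules: increment a binary number written least-significant bit first.
rules = {
    ("start", "1"): ("0", "R", "start"),  # flip 1 -> 0 and carry rightwards
    ("start", "0"): ("1", "R", "halt"),   # absorb the carry and stop
    ("start", "_"): ("1", "R", "halt"),   # extend the tape if needed
}
print(run(rules, "111"))  # 7 + 1 = 8: prints "0001" (least-significant bit first)
```

The same rule table would run unchanged over any storage medium that can be written, read, erased and rewritten, which is precisely the demand Turing's abstract design placed on memory materials.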
The engineering of the computer, and the design attributed to von Neumann, manifested a new form of 'present-mindedness' (Innis, 1991) in its pyramidal memory structure. Changes occurred from the top of the pyramid, in the central processor, and in the register flip-flops that store the information states for immediate use. The processor steps from one instruction to the next at high speed in a digital 'stream of consciousness', making decisions that trickle up and down, and to and from, lower levels of memory and storage. The slower main memory is connected through a memory bus and supplies data and instructions as required. Larger amounts of data tend to be stored in other slower and less expensive components lower in the pyramid, or even off-line. This design, along with the extremely high cost of the fast memory components operating close to the processor, required a distinctive separation between main memory and storage, which persists in today's computers.

UNIVAC's long-term storage was as cumbersome as its random access memory was volatile. UNISERVO was a magnetic tape system that used extremely heavy reels of half-inch wide strips of nickel-plated phosphor bronze to input, output and store information. The metal tapes zipped past the heads at over 2.5 metres per second, delivering 7,200 characters per second (Gray, 2001). However, tape drives gave only linear access to archival data, which meant that records were accessible at different speeds depending upon their position on the tape. Many tapes were stored off-line, making much of the data relatively difficult to access.

Since the 1950s, there has been a proliferation of different materials and designs for computer data storage, with a trend towards cheaper and faster systems. These have included: paper tapes; paper cards; magnetic tapes of several widths (three-quarter inch, half inch, quarter inch, eighth inch, eight millimetre); compact audio cassettes; floppy disks (eight inch, five and a quarter inch, three and a half inch); IBM's hypertapes; stringy floppies; holographic systems; laser discs; and 'millipede' probe storage. Few of these storage solutions have been engineered for the long term. Most of them are now superseded by 'solutions' with far superior capacities and performance. However, replacement systems are rarely compatible with previous standards, and data is often lost in the move to new standards.

In the 1990s, a number of commentators began to warn of the short life span and vulnerability of the media holding much of today's cultural information. They argued that this trend risked making ours a 'digital dark age' in which most of the data stored on computers would be lost: future historians will not be able to find the records to piece together accounts of our times, lives and experiences (Brand, 1999; Hillis, 1998; Kuny, 1997).
These critics pointed out that physical storage media deteriorate quite quickly, making data unreadable within only a few years. Floppy disks are unreliable after five years, hard disks after twenty or thirty years, and optical media such as CD-Rs and data DVDs not much longer than that. Meanwhile, computer equipment becomes obsolete within eight years, often to be replaced with improvements that are incompatible with older standards: think zip drives, SCSI and so on. Many documents prepared with the older standards become unreadable without that lost software version or hardware platform. Through this combination of entropic forces, the apparently immutable promise of perfect digital copies is broken by the inherent instability of the digital medium. As Rothenberg wrote in 1995:

Digital information lasts forever—or five years, whichever comes first. (Rothenberg, 1995)

A recent study of the computer records of Bronze Age excavations in North East London from the mid-1990s found that the computer records had deteriorated more in one decade than the relics had in thousands of years (BBC News, 2000). The cultural effects of these changeable storage standards are compounded by the perceived immateriality of data storage devices. A sense of magical distantiation from information is epitomised in the hard disk, which has gradually become the dominant solution for storing almost all data. Matthew Kirschenbaum (2004) argues that the hard disk has driven a 'sea-change in the production and recording of knowledge' (107) and, unlike previous media, is a black-boxed inscriptive technology in which writing is displaced to a hidden, invisible magnetic substrate, distanced from the user's hand and eye. Further, its platters are in constant motion, allowing it to access data at random, and its constant error checking effaces imperfections during copying. The capacity to store and make unlimited perfect copies of images, sounds and programs, as well as writing, has unleashed an insatiable drive to capture and make available the cultural archive. Many of the changes to computer culture over the past half century can be attributed to changes in the materials and economics of production. The complexity of the cultural changes associated with the proliferation of digital media is a manifestation of the material complexity of computers as physical devices, as much as of their complexity as information.

2. Languages and genres
Innis examines a second level in the patterning of media in the languages, scripts, and genres of content. For example, he observed that hieroglyphics carved into stone monuments and pyramids of Old Kingdom Egypt tended to be square, upright, decorative, and pictographic. This style emerged partly from working with chisel and stone, but was also connected with the religious and political environment in which this medium was being used, the third level of Innis’s analysis. The rigid styles and the use of durable media were closely associated with a centralised society that venerated religious authority and in which knowledge was monopolised (Innis, 1972b). After 2000 BC, hieroglyphics came to be written by brush onto papyrus

more often than chisel on stone. This writing was simplified, less like pictures and more like a flowing cursive script. These changes in materials and style were associated with new social arrangements and modes of thinking. By escaping the heavy medium of stone, thought gained lightness… A marked increase in writing by hand was accompanied by secularisation of writing, thought, and activity. The social evolution between the Old and New Kingdom was marked by a flow of eloquence and a displacement of religious by secular literature (Innis, 1972b, pp. 16–17).

Alphabetical and phonetic scripts are more efficient than pictographic systems. With fewer characters, such scripts are quicker to learn and to use, and so tend to favour traders rather than centralised groups protecting religious texts. Because simplified and democratised language allows the production of texts to become decentralised, vernacular texts become more common and the dominant forms of knowledge and belief in a culture tend to change.


ii. Divergent languages and temporalities
Developments in computer operating environments and languages helped digital electronics shift from being the exclusive domain of specialists in the 1950s and 60s to becoming everyday artefacts of popular culture by the late 1980s. Early programming languages generated exclusive social groups based on mastery of FORTRAN, COBOL, and other specialised languages. Programmers could make full use of the computer’s power, while end users were bound to work within the capabilities of programs provided by programmers. Languages and applications of computers established what Innis (1991, p. 11) referred to as ‘grooves’, which channel thought and action, leading towards what Deleuze (1992) refers to as the ‘control society’. The complexity of computing standards and the high initial costs of hardware over three decades helped centralise power over these forms, much as monopolies of knowledge once gave power to the priesthoods that commanded arcane scripts and languages such as hieroglyphics and cuneiform. Like the ancient masters of writing, programmers’ power was moderated by their relationships to powerful actors such as government, military, commerce, and education. After microcomputers made hardware more accessible in the 1970s, control over standards became an even more influential force. Institutions such as Microsoft, Apple, and Adobe acquired power by controlling the standards millions of people use to encode and interact with media content. Control over proprietary standards has major implications for the future accessibility of cultural records and means not only monopolising knowledge but also, effectively, owning the environments in which people actually generate new knowledge. There have already been struggles between institutional wills over the control of such standards and of communities of users.
At the same time, cheap equipment and networks have made open source models of software development viable, if the social dynamics of a distributed radical meritocracy can be resolved (West, 2003). Today, the computer’s capacity to bind time is conditioned by the dominant encoding standards for capturing symbolic and sensory data: ASCII text; GIF and JPEG images; MP3 and WAV sounds; MP4, WMV and MOV video; VRML spatial information and so on. The proliferation of standards within the same devices is commonly called convergence, and yet these standards are also divergent, as they support different sensory modalities, textual forms and regimes of access. For example, encryption and security schemes limit access to information, while networking standards such as Ethernet and TCP/IP open up channels of connection. There are also unintended consequences from this range of standards and the frequency of upgrades, which are major culprits in the ‘digital dark ages’ scenarios. Software opens up communication to a diversity of dynamic environments such as gaming engines, e-commerce and social software, establishing and framing spatial and transactional spaces for cultural practices. These tools and standards, and the rituals associated with them, condition what will be remembered and how. For instance, as Manovich (2001) observed, there has been a conflict between narrative and database forms. Whereas the narrative drive is to put sequences into order, the database gathers entities but refuses to give them any final order. Another key computer form, the bitstream, reimposes the narrative’s logic in streaming media and podcasts. Online applications such as YouTube offer huge databases of bitstreams, finding their own resolution of the narrative/database clash. Just as computers are comprised of a wide range of materials in complex arrangements, so the huge variety of data standards and computer languages makes media-determinist accounts such as Innis’s more difficult to apply. The implications of every command, file type and application are quite specific and often contradictory. Each device, mode of connection and software application has its own signature temporalities and spatialities. There clearly are cultural consequences of this unprecedented diversity of media platforms.

3. Media and civilisations
Innis argued that the predominant media of a civilisation both cause, and so provide evidence of, the distinctive character of that society. Each medium is selected and developed because it suits particular interests within that society. These choices of media reinforce, and sometimes transform, that society. Some civilisations become tied to one medium, while others are subject to constant change. For example, Innis attributed the limited capacity of Egypt to build empire in part to ‘the inflexibility of religious institutions supported by a monopoly over a complex system of writing’ (25). This contrasted with the Roman Empire, where a ‘written tradition dependent on papyrus and the roll supported an emphasis on centralised bureaucratic administration’ (107). Changes in materials and techniques of communication contribute to, if not bring on, crises that produce wider transformations in cultures. When new trade routes or inventions bring new techniques for communicating, social changes invariably follow. Innis reads conflicts between or within cultures as struggles over media. He sees the First World War as a conflict between British newspapers and German books, and the Second World War as a confrontation between German radio and British newspapers (Innis, 2004, p. 89). Innis perceived a significant and growing imbalance in the world media space, in favour of space over time, since the modern European empires emerged in the 1700s. These empires gained power from their use of paper, print and, later, the telegraph and radio. These media afforded centralisation of national authority, with printed documents helping to establish uniform laws, education, and administrative infrastructures. At the same time, portable and durable communication allowed the administration to decentralise and accelerate throughout the nation. Market and time pressures that tended to favour the most recent content largely drove the output of this printing industry.

Constant depreciation – new books drive out old books – publishers concerned with depreciation in publishing new books but also concerned with monopolies in building up their lines. How far printing essentially based on controversy perhaps centring around price system and philosophical books became by-product of excess capacity in quiet periods – Descartes in Holland centre of printing industry for Holland? Printing meant mechanical reproduction of images – consequent deterioration in value and closer adjustment to goods – advertising. (Innis & Christian, 1980, p. 130)

This entry from 1947–48 in the Idea file, a collection of transcribed research notes kept by Innis, illustrates how Innis’s method was to connect and interrelate phenomena: economics (declining value of book titles), technology (printing of images), cultural forms (advertising), and ideas (excess production allowing production of philosophical books). For Innis, levels of communication are infinitely interwoven, not a hierarchy in which lower levels determine those above. Innis’s writing is notoriously dense, packed with detailed historical and economic facts, lacking discursive flourish or argumentation, betraying his earlier training in economic geography. An Innis biographer observes that his collections of writing on index cards anticipate computer databases in their non-linear structure (Watson, 2006). Innis’s work did not follow the conventions of analysis, dialectic or narrative so much as perform pattern-matching. Innis identified an accelerating tendency of knowledge to lose value almost immediately as it is displaced by something newer. The inflated value of information from the present is evident in publishing, newspapers, and the centralised live broadcasts on radio and television. These media have extended control across space at the expense of considered reflection on the

past and care for the future. They also establish and sustain monopolies of knowledge by regulating access to it, imposing selective delays on its release, and developing arcane technical systems that control it. The trend towards present-mindedness, centralisation, and proliferation of media technologies accelerated in the twentieth century. Innis directly experienced it as a signalman in the First World War, as a witness to the industrialisation of war in the Second World War, and in the gathering clouds of the Cold War (Watson, 2006). He saw that the West privileged only immediate goals at the expense of past and future. This was apparent not only in leaders’ statements, the increased cultural dominance of advertising, and the decline of the university, but more fundamentally in the bias of the dominant material modes of communication. The book trade, newspaper journalism, and radio were conditioned by the material properties of pulp, paper, and the radio spectrum. Consequently, they supported cultural forms that increasingly prioritised the present.

Projecting Innis into the digital society
There is little doubt that computers have accelerated many of the trends that Innis identified, particularly in supporting corporate and government command over space and populations. In the early years of computing, these systems operated exclusively to sustain and enhance highly centralised monopolies of knowledge. More recently, though, the cultural impacts of computers have become more ambivalent and contradictory. PCs, networks, and other digital devices have become broadly accessible, and the contexts in which they operate have become increasingly diverse. Today computers operate according to a multiplicity of temporalities, usually slanted towards the present. Just as radio news bulletins or newspapers lead with a top story, news websites typically place the top story in the prime position at the top of a web page, privileging newsworthy stories. Some news sites, however, take an alternative view of ‘breaking news’, in which the most recent story appears at the top. A further alternative to this ordering is a search-driven or customised news listing, where another system of value operates and only the stories that are most relevant for that search, or that user, appear. This multiplicity of possible orderings by computers makes it difficult to equate media features with specific cultural practices or mentalities. Although search engines and archival databases make distant events and historical texts present in everyday life, web search databases still tend to privilege the present by over-writing records of earlier versions of indexed

websites (Hellsten, Leydesdorff, & Wouters, 2006). Web pages tend to have a limited lifetime, and site redesigns, closures, corruptions, broken links and crashes can degrade the contents of the web over time. Other websites and internet applications privilege the present in their own ways. Facebook and the web application ‘Twitter’ ask users to write what they are doing right now, and make this short text available to others who have registered with the site. They archive these entries, forming a kind of narrative record of past presents. Internet chat clients such as MSN Chat, AOL Chat and ICQ also ask users to enter their current status to signal their availability for chatting. Users conduct conversations in text (or as an audio or video bitstream), and transcripts are generated automatically. The longer form of the weblog, or blog, orders all posts reverse-chronologically, so that the most recent post is at the top. In each case, while there is present-mindedness, there is also a time-binding record of the present being created. Perhaps even more important than archiving features are the changes to cultural practices associated with adopting particular computer applications. Some of the earliest evidence of this trend came with word processors running on personal computers. Writers began to change their everyday habits, as the electronic environment changed their capacities to compose and organise their texts (Heim, 1987, 1993). The general fluidity of composition and the organisational devices afforded by word processors encouraged changes in habits of textual production. This has contributed to a general growth in the texts and versions of texts being produced and distributed.

Conclusions
The dominant time-binding media of our ‘civilisation’ operate paradoxically to both diversify and homogenise cultural patterns over time. Since the mid-twentieth century, the complexity of computer-based communications has complicated Innis’s reading of media materiality as the key driving force of history. Computers and computer networks are comprised of complex interconnected material components, and cheaper and more efficient materials and manufacturing methods have allowed these components to be substituted regularly. Alongside improving hardware and software designs, the trajectory towards mass customisation has allowed digital technologies to reach wider and different communities and increasingly diverse application domains. Cultural practices such as calculation, writing, photography, play, and moving image were gradually appropriated by digital media.

These changes emerged alongside, and formed relations with, an already complex analogue mediascape. The digitisation of many cultural records has made many archives ubiquitously accessible. All these translations, however, are subject to the limits and thresholds of digitisation: bit size; sampling rates; encoding schemas and so on. They are all subject to a threat of deterioration, peculiar to digital media, which makes artefacts readable only through machines. This diversity of digitised ‘assets’ is matched by the ubiquitous relative homogeneity of computer devices and the commodification of databases, whether by subscription or advertising. While the approach to media history taken by Innis becomes much more complicated with computer media, many of his arguments are still valid. The monopolies of communication maintained by search engines, software standards, and silos of copyrighted content are different from those created in other media but have generated their own sites of struggle. In many ways, the invention of computers has been a response to concerns about the neglect of time, as expressed by Innis and others, and the outcome has been a heterogenising of temporalities within a diverse range of digital media, including many different bitstreams, databases and software environments. Conversely, digitisation has increased the risk of data loss. Innis’s key contribution to communications theory, linking cultural patterns to the materials of dominant media, remains surprisingly important on re-examination. The proliferation of computers has been sustained by the globalisation of production and the mass consumption of microelectronic components and programming. The diversity of cultural forms associated with digitisation draws on this pattern of trade as much as on the material and informational complexity of the devices themselves.

References
BBC News. (2000). Old computers lose history record [Electronic Version]. BBC News. Retrieved 16 July 2007 from http://news.bbc.co.uk/2/hi/science/nature/654116.stm
Brand, S. (1999). Escaping the digital dark age. Library Journal, 124(2), 46–48.
Burnett, R., & Marshall, P. D. (2003). Web theory: an introduction. London; New York: Routledge.
Ceruzzi, P. E. (1998). A history of modern computing. Cambridge, Mass.:

MIT Press.
Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3–7.
Derrida, J. (1996). Archive fever: a Freudian impression. Chicago: University of Chicago Press.
Di Norcia, V. (1990). Communications, time and power: an Innisian view. Canadian Journal of Political Science, 23(2), 335–357.
Grey, G. (2001). UNIVAC I: the first mass-produced computer. Unisys History Newsletter, 5(1).
Guattari, F. (1995). Chaosmosis: an ethico-aesthetic paradigm. Sydney: Power Publications.
Havelock, E. A. (1963). Preface to Plato. Cambridge: Belknap Press, Harvard University Press.
Heim, M. (1987). Electric language: a philosophical study of word processing. New Haven: Yale University Press.
Heim, M. (1993). The metaphysics of virtual reality. New York: Oxford University Press.
Hellsten, I., Leydesdorff, L., & Wouters, P. (2006). Multiple presents: how search engines rewrite the past. New Media & Society, 8(6), 901–924.
Heyer, P. (2003). Harold Innis. Lanham, Md.: Rowman & Littlefield.
Hillis, D. (1998, February). Public session: panel discussion. Paper presented at Time & Bits: Managing Digital Continuity, Getty Center, Los Angeles.
Innis, H. A. (1940). The cod fisheries: the history of an international economy. New Haven, CT: Yale University Press.
Innis, H. A. (1962). Fur trade in Canada: an introduction to Canadian economic history. Toronto: University of Toronto Press.
Innis, H. A. (1972a). A history of the Canadian Pacific Railway (1st ed.). Newton Abbot: David and Charles.
Innis, H. A. (1972b). Empire and communications. Toronto: University of Toronto Press.
Innis, H. A. (1991). The bias of communication. Toronto: University of Toronto Press.
Innis, H. A. (2004). Changing concepts of time. Lanham, Md.: Rowman & Littlefield Publishers.
Innis, H. A., & Christian, W. (1980). The idea file of Harold Adams Innis. Toronto: University of Toronto Press.
Innis, H. A., & Drache, D. (1995).
Staples, markets, and cultural change: selected essays (Centenary ed.). Montreal; Buffalo: McGill-Queen’s University Press.

Kirschenbaum, M. G. (2004). Extreme inscription: towards a grammatology of the hard drive. TEXT Technology. Retrieved February 13, 2007 from http://texttechnology.mcmaster.ca/
Kittler, F. A. (1990). Discourse networks 1800/1900. Stanford, Calif.: Stanford University Press.
Kittler, F. A. (1999). Gramophone, film, typewriter. Stanford, Calif.: Stanford University Press.
Kittler, F. A., & Johnston, J. (1997). Literature, media, information systems: essays. Amsterdam: GB Arts International.
Kuny, T. (1997). A digital dark ages? Challenges in the preservation of electronic information. Paper presented at the Workshop on Audiovisual and Multimedia, joint with Preservation and Conservation, Information Technology, Libraries and Equipment, and the PAC Core Programme, 63rd IFLA Council and General Conference. Retrieved July 21, 2007 from http://www.ifla.org.sg/IV/ifla63/63kuny1.pdf
Libraries and Archives Canada. (2007). Old messengers, new media: the legacy of Innis and McLuhan. Retrieved July 21, 2007 from http://www.collectionscanada.ca/innis-mcluhan/index-e.html
Manovich, L. (2001). The language of new media. Cambridge, Mass.; London: MIT Press.
McLuhan, M. (1994). Understanding media: the extensions of man (1st MIT Press ed.). Cambridge, Mass.: MIT Press.
Ong, W. J. (1988). Orality and literacy: the technologising of the word. London; New York: Routledge.
Rothenberg, J. (1995, January). Ensuring the longevity of digital documents. Scientific American, 272, 42–48.
Turing, A. M. (1936–7). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, 230–265.
Watson, A. J. (2006). Marginal man: the dark vision of Harold Innis. Toronto: University of Toronto Press.
West, J. (2003). How open is open enough? Melding proprietary and open source platform strategies. Research Policy, 32(7), 1259–1285.

A view on the iconic turn from a semiotic perspective
August Fenk, Klagenfurt

Jedes Zeichen scheint allein tot. Was gibt ihm Leben?— Im Gebrauch lebt es. Hat es da den lebenden Atem in sich?— Oder ist der Gebrauch sein Atem?
Every sign by itself seems dead. What gives it life? – In use it is alive. Is life breathed into it there? – Or is the use its life?
Wittgenstein (2006 [1933]: 432)

1. What is turning and what is iconic in the iconic turn?
Media are not only a means of communication. From a cognitive perspective, they may be viewed as components of an external, auxiliary memory system (Schönpflug 1997), and contemporary cognitive science “construes cognition as a complex system in which cognitive processes are ‘embodied, situated’ in environments, and ‘distributed’ across people and artifacts” (Nersessian 2007: 2). In man-machine communication, in man–man communication via digital machinery, and especially in the World Wide Web (Heintz 2006, Steels 2006), the “external” components of this system have taken on more and more of the characteristics of our individual, “internal”, living and active memory with its richness of sensual and symbolic formats. The intellectual challenge in the drafts of the “masterminds” of hypertext (Eisenstein) and multimedia (Lintsbakh) was the detection of temporal/spatial, mathematical and linguistic correspondences between such different sensual and symbolic representations (Bulgakova 2007, Tsivian 2007). The so-called “iconic” or “pictorial turn” was pulled along by the digital turn, and it may in turn have stimulated and accelerated the digital turn. And the increasing supply of pictorial materials—in TV, computer games, and the World Wide Web—may well have changed our visual literacy and our routines in the use of different kinds of pictures: The so-called Flynn effect—the significant improvement in test intelligence observed during the last decades—is assumed to reflect such tendencies (Neisser 1998), because this gain mainly affects subtests designed to measure non-verbal reasoning. But what is turning and what is iconic in the iconic turn? The term iconic turn would imply a shift, i.e. an increase of iconic representations at the expense of non-iconic representations. But the World Wide Web is not a new single medium. Its users can upload or select texts, pictures and music files, e.g. in order to “associate tags with these materials” (Steels 2006: 287). It has brought an increase of all types of “representation” and of course also an increasing supply of productions uniting such representations, as in ordinary movies. The supply of pictorial material, let alone icons in a narrow sense of the word, by no means grew at the expense of e.g. texts or music, and perhaps not even any faster than the supply of such other materials. Thus the “iconic turn” is not really a turn. But what is iconic in this so-called “turn”? Inspired by the idea that the computer might overcome the book or even writing in general, the focus of the media debate has shifted from the relation between speech and written text to the relation between language and picture (Koch & Krämer 1997: 19). What are the differences between the linguistic and the pictorial format? And what are their common features, especially in those pictures most commonly called “icons”? In the following section I shall reconstruct relevant semiotic arguments and boil them down to a point concerning our quest (in Section 3) for iconicity in the “iconic turn”. These arguments of course concern the nature of iconicity but cannot be discussed separately from the even more general question of how to define the sign. To be “in use” (see the above motto by Wittgenstein), i.e.
to be produced and/or interpreted—is this a characterization of the sign in function, or, more fundamentally, a precondition for the concept of the sign? Can we think of sign and reference disconnected from actual interpretation, or even from the existence of a cognitive subject in principle capable of such an interpretation? Or even disconnected from communication systems? And is it really consistent to extend the concept of sign beyond communication systems (cf. the notion of the “natural sign” and the “symptom”) and to restrict, at the same time, similarity relations to a particular sign called the “icon”?

29

2. Similarity in the index and in the icon
2.1 Problems with “natural signs” and “symptoms”
The index has always played a very special role in semiotic classification. Nöth (1997: 208 f) compares a “dogmatically dyadic” tradition differentiating only between iconic and symbolic representations and extending from the Epicureans via Saussure “to the Radical Constructivists of today”1 with a triadic tradition “extending from the Stoics to Peirce”, who defined the indexical sign, i.e. a third form of representation, “according to criteria such as causality and spatial or temporal contiguity”. These indexical signs, says Nöth, “are the most ignored ones in the theory of computational representation.” I suspect, however, that the concept of the index was from the very beginning rather polyvalent and too wide for substantial applications. In Peirce, long the most frequently quoted semiotician, the concept of the index is so wide that it is an idle question whether, or on which levels, indexicality plays a role in the man-machine dialogue and in “computational representation”. From an almost all-inclusive concept of the index follows an almost ubiquitous occurrence of indexical properties—inside and outside of information-processing and symbol-manipulating systems, irrespective of whether such a system belongs to artificial intelligence, as in the case of the computer, or to natural intelligence, as in the case of our “computer in the head”, and irrespective of whether it functions on the basis of electrical signals in a binary code or of electro-chemical signals in a mixed, digital/analogue code. According to Nöth’s comparison, the present article inclines, though it is free of constructivist and Saussurean ideas, to the “dogmatically dyadic” tradition. Let us start with a rather undogmatic view of Peirce’s semiotic approach. This view will focus on the trichotomy “most studied” in Peirce, i.e.
the division of the sign into icons, indices, and symbols, and particularly on the definition of the index and its distinction from the icon. I shall precede it with a selection of the most informative passages I could find in Peirce’s numerous writings and identify these paragraphs as “P1”, “P2”, “P3”, and “P4” in the subsequent analysis. “An icon can only be a fragment of a completer sign. The other form of degenerate sign is to be termed an index. It is defined as a sign which is fit to serve as such by virtue of being in a real reaction with its object. For example, a weather-cock is such a sign. It is fit to be taken as an index of the wind for the reason

that it is physically connected with the wind. A weather-cock conveys information; but this it does because in facing the very quarter from which the wind blows, it resembles the wind in this respect, and thus has an icon connected with it. In this respect it is not a pure index. A pure index simply forces attention to the object with which it reacts and puts the interpreter into mediate reaction with that object, but conveys no information. As an example, take an exclamation “Oh!” The letters attached to a geometrical figure are another case. Absolutely unexceptionable examples of degenerate forms must not be expected. /…/ It is remarkable that while neither a pure icon nor a pure index can assert anything, an index which forces something to be an icon, as a weather-cock does, or which forces us to regard it as an icon, as the legend under a portrait does, does make an assertion, and forms a proposition.” (1976 [1904]: IV, 242) “But a symbol, if sufficiently complete always involves an index, just as an index sufficiently complete involves an icon. There is an infallible criterion for distinguishing between an index and an icon. Namely, although an index, like any other sign, only functions as a sign when it is interpreted, yet though it never happened to be interpreted, it remains equally fitted to be the very sign that would be if interpreted. /…/ The only way in which an index can be a proposition is by involving an icon.” (1976 [1904]: IV, 256) “... every sign is determined by its object, either first, by partaking in the characters of the object, when I call the sign an Icon; secondly, by being really and in its individual existence connected with the individual object, when I call the sign an Index; thirdly, by more or less approximate certainty that it will be interpreted as denoting the object, in consequence of a habit [which term I use as including a natural disposition] when I call the sign a Symbol. 
/.../ Indices, on the other hand, furnish positive assurance of the reality and the nearness of their Objects. But with the assurance there goes no insight into the nature of those Objects. The same Perceptible may, however, function doubly as a Sign. That footprint that Robinson Crusoe found in the sand, and which has been stamped in the granite of fame, was an Index to him that some creature was on his island, and at the same time, as a Symbol, called up the idea of a man. Each Icon partakes of some more or less overt character of its object.” (1906:495 f) “(And since this is the trichotomy I have most studied, and consequently most frequently mention, I will here say that the division is into,1st, Icons, which represent their Objects by virtue of resembling them, as a geometrical figure in a geometry-book, or as any Diagram, or Array of letters in algebra, where the resemblance is not sensual but intellectual; 2nd into Indices , which represent their Objects by virtue of being in fact modified by them, as a clinical thermometer may represent fever, or a letter attached to

a figure of a triangle may from its position represent an angle of the triangle; and 3rd into Symbols, which represent their Objects by virtue merely of the certainty (or probability) that they will be so interpreted; as any noun represents the thing for which it stands.)” (1976 [1908] III/2: 887)

We may note the essence of the index in Peirce: The index “represents” an object, draws attention to it and furnishes assurance of the reality of this object, by nearness and position (caption under a portrait; letter attached to a figure), “by being really and in its individual existence connected” with its individual object, and by virtue of being “in real reaction” with it and “in fact modified” by it. Thus the index not only includes conventional signs, or, in terms of Langer (1942), “artificial” indices such as the letter attached to a figure, but also “natural signs” and “physical symptoms”.2 But even “natural” indices—such as the weathercock’s actual direction or footprints in the sand—may “involve an icon”. Peirce’s “infallible” criterion for distinguishing between an index and an icon (P2) would imply that the icon, unlike the index, does not remain “equally fitted to be the very sign” as if interpreted.3 But why not? Are we really obliged to accept the actual direction of the weathercock (P1) as an icon instead of an index? And only because in Peirce’s terminology the index is not allowed to be similar to its object and to convey information about this object? Peirce’s “infallible criterion” (in P2) is, as far as I can see, not as clear as the criterion in an 1885 article where Peirce states that in the index there is “a direct dual relation of the sign to its object independent of the mind using the sign . . . . Of this nature are all natural signs and physical symptoms (3.361)” (Sebeok 1986: 49). But this older criterion contradicts the only criterion offered by Peirce for a general distinction between signs and non-signs: Signs are signs or become signs only by virtue of an interpretational act. Keller (1995: 118) also copes with this problem of an all-inclusive sign by allowing the symptom a very special position within the signs. 
But he puts it the other way round: The symptom alone is a sign only when and as long as it is the object of someone’s actual interpretation, while e.g. a symbol in a book remains this very same symbol irrespective of whether or not it is interpreted by somebody. This is, I think, a plausible attempt to prevent the concept of the sign from becoming empty. But both Peirce’s and Keller’s approaches must contend with the co-existence of more or less autonomous signs on the one hand and objects switching between existences as sign and non-sign on the other. Peirce characterizes the natural index as that sign which exists without any “utterer” and, as one would say today, beyond any code.4 And what is it that

makes something an icon? “No pure Icons represent anything but Forms; no pure Forms are represented by anything but Icons.” (Peirce 1906: 513) The consequence when this criterion is applied to language: “The arrangement of the words in the sentence, for instance, must serve as Icons, in order that the sentence may be understood. The chief need for the Icons is in order to show the Forms of the synthesis of the elements of thought.” (Peirce 1906: 513). On the other hand, the “arrangement of the words” in the spoken or written sentence is, without any doubt, a matter of “position” and of temporal or local “nearness” to neighbouring words, so that one should expect it to be a case of indexicality. Needless to say, every sign in the narrow sense of the word, i.e. a component of a denotational system, necessarily functions at the same time as an index for other reasons as well, e.g. because it points to an “utterer” or author with this or that capability, intention, etc. According to Peirce (2.306) “it would be difficult, if not impossible, to instance an absolutely pure index, or to find any sign absolutely devoid of the indexical quality” (Spinks 1991: 64). Peirce’s explanations regarding the footprints (in P3) can hardly be understood without having in mind the whole set of such premises: the notion of a hardly existent “pure index” that does not convey any information; the pure icon representing nothing but a pure form; the implicational hierarchy in the first sentence of P2; and Peirce’s notion (in 2.306, quoted from Spinks 1991: 65) that indices “have no significant resemblance to their objects”.
• Those footprints found in the sand by Robinson are, first of all, an index for him that some creature was on his island. But since a pure index does not resemble its object and does not convey any information, they don’t tell Robinson what sort of creature this might be.
• But they are also a symbol calling up the idea of man.
We remember: A symbol represents its object by virtue merely of the certainty or probability that it will be so interpreted (P3, P4). This probability might be estimated as higher for the footprints of another person than for the tracks of other creatures, including the imprints of the toes of the various beach birds all around in the sand. And the probability that this creature was still on the island might be rated higher for a man than for a seagull, for instance.
• But all that would not be sufficient for a really complete sign, since a symbol, “if sufficiently complete, involves an index, just as an index sufficiently complete involves an icon.” (P2) Thus the indices “force” the footprint “to be an icon” (P1) so that it partakes in some more or less overt character of its object (P2, P3).

Our rather restrictive conceptualizations suggest a much more parsimonious characterization of those footprints: They are simply a case of indexical similarity, i.e. a similarity indicating a certain causer! It is, of course, a similarity relation that indicates whether the imprints in the sand come from a man, or a dog, or a bird. The hollows in the sand resemble the feet of the particular causer in the way a coin resembles a particular matrix, and they also resemble, less specifically, the footprints of other (adult, male) members of our species. By virtue of these similarities they indicate another (adult, male) person as a causer that may be assumed to be still on the island, if one takes into account additional indications, such as the fresh look of the track. They are, like the tracks of beach birds, by no means a symbol, because they are not components of a code or of a denotational system. And they are not icons, not even icons in the broader sense of the word. Only if there is an agreement that e.g. a certain gait would mean a certain warning can the footprints take on a symbolic function and “attain semiotic status” in the sense of Sebeok (see Section 2.2). Cases of iconicity (at least in a broader sense, see Section 3.2) would be if somebody draws the outline of a foot in the sand using his fingers, or if he moulds the sand with his hands in a way that the hollows look like a footprint, or if a youngster walks around so that his track forms a heart or a boat.

2.2 Delimiting the concept of sign

All the above quotations and comparisons support, though not in a really systematic way, the view that Peirce’s trichotomy of the sign (icon, index, symbol) can—if at all (Eco 1979: 178)—only be understood as a distinction between “isolate dimensions” (Ransdell 1986: 56)5 or “different uses” (Pelc 1986: 14) of signs; or, following Peirce’s dictum (in P3) on the same “perceptible” that may “function doubly” as a sign, as different “functions” of signs (Fenk 1997)6. Corresponding terms (e.g. “the indexical function of a sign”) also avoid the misunderstanding of the symbol, index, and icon—in Peirce usually used as nouns—as distinct classes of signs. But even with such a conceptualization of the index or indexical function one risks, if these terms are subsumed under sign or sign-specific functions, making the concept of sign all-inclusive and hence empty. As far as I can see, a more tractable concept of sign can be achieved if we exclude the index from “external representations” and from sign-specific functions (Fenk 1997) and view indexicality as a fundamental cognitive principle pervading but also transcending referential systems and the world

of signs (Fenk & Fenk-Oczlon 2007: 889): The inferential or “indexical” interpretation of events is the essential function of our cognitive apparatus. Inference is much older and more fundamental than reference, i.e. the essential function of language. And it is a presupposition for reference. The exclusion of indexicality from the sign and from sign-specific functions also means an exclusion of “indexical similarity”, without any compromise. Neither indexicality, nor similarity, nor indexical similarity is sufficient for external representation and for sign. This offers a clear position in the cases of indexical similarity discussed e.g. in Sebeok and, eight decades before Sebeok, in Martinak. To begin with Sebeok:

In countless instances, images appear naturally, but copies of this sort are ordinarily devoid of semiotic value: a man’s shadow cast upon the ground, his shape reflected in the water, his foot imprinted in the sand. Such everyday spatial images are necessarily endowed with certain physical, viz., geometric properties, but they attain semiotic status only under special circumstances. (Sebeok 1979: 123)

In the subsequent sentences Sebeok considers the possibility of a semiotic status at least in the last example, because here the image is “permanent”; it does not disappear with (the luminous source or with) the model. This, however, is not really a valid argument: a word, for instance, is a symbol and a sign irrespective of whether it is spoken and thereby transitory, or written and thereby permanent or “frozen”. Martinak (1993 [1901]: 30) is more resolute in this respect. He points out that both the mirror-image and the photo—i.e. the permanent or “frozen” mirror-image in terms of Sebeok—are so near to reality that one can hardly ask (in German) what they “mean” (bedeuten) and what they are a “sign” (Zeichen) of. We use the term “indexical similarity” to characterize cases of similarity for which we cannot claim any semiotic status: the similarity between two animals indicating their relationship; the similarity between a tree and its shadow on the ground or between a bird’s toes and their imprints in the sand; between a face and its image in the water or in a mirror, or between this face and its (digital or classical) photo; between a leaping cougar and a video record of the scene. In the first examples similarity comes about by “natural” processes, in the latter by technical devices constructed, produced and released by persons. But the similarity we observe is not produced by these subjects! To illustrate the small but decisive step from “indexical similarity” to “iconic similarity”: If the man whose shadow we observe enacts a shadow play, positioning his arm, hand and fingers in a way that the shadow resembles the silhouette of a swan or of a snapping wolf, then this is

a case of iconicity, at least in a broader sense of the term:

2.3 Delimiting the concept of the icon

If similarity is considered to be necessary for those external representations or signs which one may call “icons” or “iconic”, then it has to be a specific sort of similarity, i.e. the similarity established by a subject’s simulating (imitating, modelling, …) activities (Fenk 1997). Figure 1 comprises what remains after the elimination of the index, including cases of “indexical similarity”: two basic types of representation and two possibilities to define the icon or iconicity.

[Figure 1: two overlapping fields labelled “simulation” and “symbol”, with their overlap labelled “iconic symbol”; ICON I covers the whole field of simulation, ICON II only the overlap.]

Fig. 1: A rather broad (I) and a rather restrictive (II) conceptualization of the icon (adapted from Fenk 1998: 305).

The essence of this conceptualization: Symbols are those signs that encode or denote, by virtue of their rule-based use (Keller 1995), concepts and propositions. Some of these symbols, such as pictograms and onomatopoetic words, show simulative properties. We may call them “iconic symbols”, and this term is compatible with both versions (I and II in Figure 1) of the definition of the icon and iconicity. The inverse expression, “symbolic icon”, however, would be tautological in view of the more restrictive Version II, because according to Version II only symbols can be iconic. I am sympathetic to the more radical and, I think, more promising Version II. It is in line with those arguments foregrounding the symbol (allusively in Ransdell; our Footnote 5) and the existence of a code (Eco 1979: 121) but, unlike Eco, preserves iconicity as a relevant function of specified signs. But I have to admit that these arguments shift some questions of definition to the terms code and denotation. For instance: Despite the fact that in iconic signs the similarity between the representation and the represented is produced by somebody, one need not agree with Eco (1979: 200) that every mode of producing similarity necessarily follows a code and that the detection of produced similarity necessarily “requires a trained eye” and “must be

learned”. Our highly developed abilities to detect patterns and similarities are much older than human culture, and iconicity is a means of using these abilities e.g. in order to establish a new code or new elements of an already established code.

3. Iconicity and indexicality in the New Media

In Section 1 we questioned the appropriateness of the term “iconic turn”, since the supply of picture material did not increase at the expense of the supply of e.g. texts and music files. One might reply that the digital technique used in the new media is in many cases per se a picturing technique—e.g. in the case of a digital camera or in the scanner that produces a picture of a text or a picture of a picture. But this argument would not hold in the light of our semiotic analysis in Section 2. When applying semiotic concepts to the media we have to distinguish between techniques, processes and products of creating something, and techniques, processes and products of storing, reproduction, duplication, transmission, distribution, and retrieval. The digital technique belongs, first of all, to the latter. When used in order to record something—this “something” may or may not be iconic—the result is “only” a case of indexical similarity. The similarity between the record and the object of the record does not come about by a subject’s simulating activities. The digital camera may produce a picture of a person’s face (i), or of the mirror-image or a photo of this face (ii), or of a freehand sketch of this face (iii). In the last of these instances the object is iconic, but in none of the instances is the digital record iconic. Example (i) illustrates the simple case of the production of indexical similarity, and in instance (ii) the record produces indexical similarity with something that has indexical similarity with the face. In (iii) the record of the iconic representation is no more iconic than the mirror image of the sketch would be. If, however, somebody uses his image in the mirror in order to draw a self-portrait, he produces something that is iconic at least in the broader sense of the term.
“Indexical similarity” indicates a similarity between model and reproduction, but especially in digital “photography” this is something we can never be sure about. While the classical film document is almost unforgeable (Matuszewski 1898), the digital technique offers outstanding possibilities for forgeries: It is no problem to extend the nose of a politician in a way that makes him look like Pinocchio, or to apply bullet wounds to the forehead of a terrorist leader who in reality is alive and in the best of health. Such forgeries are in fact iconic, and due to the potentials of digital communication it

is extremely difficult to identify “indexes” indicating a forgery and pointing to specific falsifiers and to those who launched the forgeries in the World Wide Web. Less devious are the digital reworkings of pictures of prominent actors to make them look like prisoners (www.worth1000.com, April 27, 2007). In such forgeries and reworkings the digital technique is not simply used for recording, storing and transmitting something, but for modifications in which the authors actually simulate something and thus really produce iconic representations. But the digital technique offers the potential not only for modifications of given originals, but also for creating “completely new” and “genuine” artefacts from the very beginning. On high artistic and technical levels this potential is used e.g. for the creation as well as the animation of figures in the “cyber arts” and in the hybrid arts, in “cartoon films” and computer games, or for the creation and animation of blueprints of e.g. airplanes in computer-aided design. At first glance one might deny the iconic character of forgeries, of fantastic images and of the drafts of an airplane with the argument that none of these creations simulates an object as (already) existing in our perceptual world. But, actually, our fantasy cannot invent and imagine “completely new” things. All our fantastic images go back to perceptually familiar materials and properties to form new combinations: as in the unicorn, which combines a horse with a narwhal horn, or in the soft watches in Salvador Dali’s “The Persistence of Memory”, which exhibit properties of a face cloth or the flow characteristics of heated plastic (watches). There is no medium that can go beyond this principle, irrespective of the tools available and of the ways they are used. On a lower level, graphic tools can be used for simple, non-animated drawings and drafts such as the logo in Figure 2.
I was quite optimistic that the company in question would take up this idea for an improved logo, but its agency in Germany did not want it for some reason or other. So I can use it here not only as a further example of iconicity as established by digital techniques, but, moreover, as a rather untypical instance concerning an assertion in Wittgenstein (2006 [1933]): “We do not realize that we calculate, operate with words, and in the course of time translate them sometimes into one picture, sometimes into another.” (449) One may add that usually we also do not realize that such mental images or “mental models” (Johnson-Laird 1983) evoked by verbal expressions often become manifest in real pictures such as diagrams and pictograms. Pictograms are the prototypical icon that can be localized in the overlap of simulation and symbol in Figure 1. And many logos are pictograms. A pictogram such as the well-known post horn logo is a really “complete” sign: As a symbol it represents the country’s

postal service. As an icon it represents a post horn. And it offers unlimited indexical properties: From the place of application we may infer that this car or building is used or was used by the postal service, or that the “Hotel Post” was in use a century ago, and that the colour of the logo must have been refreshed very recently, etc. Pictograms can be viewed as elements of written schemata (Fischer 1997) and are used and interpreted in ways similar to synecdochic or metonymic verbal expressions. Figural thinking is reflected both in the diagrams of the special type called “logical, non-representational, arbitrary pictures” and in the corresponding figures of speech, i.e. spatial metaphors. These diagrams capture, as it were, different figures of speech that can be assigned to different metaphor families, such as the path, the inclusion, and the subsumption metaphors (Fenk 1994, 1999).

Fig. 2: A draft of a new logo for an old trademark

The results of digital recordings and transmissions resemble the model or reproduce it in analogue form. But this is a case of what we have called “indexical similarity”, not of iconicity, if iconicity is reserved for cases of similarity established by a subject’s simulating activities. The mirror, says Pirenne (1970: 11), “does not represent reality, it presents to us reality”. This is exactly what digital records do as well. But the “reality” presented to us by the mirror, or by a photo or the digital record, may of course be an artefact such as a cubist portrait—an icon in the broader sense of the word—or a specified isotype by Neurath, i.e. an icon in the restricted sense of the word. The portrait by Picasso and the isotype by Neurath are as real as the real models of these representations.

Endnotes

1 The Radical Constructivist E. v. Glasersfeld (1982: 194), however, considers the Peircean approach to be rather promising, at least as compared with the approach of Ogden & Richards (1923).
2 This distinction between conventional and natural signs calls up a related distinction in German between Zeichen (~ sign) and Anzeichen (~ indication, indicator). This is not a distinction that is used consistently, and in many cases Anzeichen can be replaced by Zeichen: The dark clouds on the horizon are not only Anzeichen of a storm coming up; one may also say that they “mean” (bedeuten) or are a “sign” (Zeichen) of a storm coming up. But, conversely, Zeichen cannot in every case be replaced by Anzeichen, e.g. when there is direct talk about the utterer of a sign: “Zum Zeichen der Versöhnung gab er ihm die Hand.” (“As a gesture of good will he shook his hand.”)
3 In terms of Wittgenstein’s metaphor (see our motto under the title of the paper), a sign not interpreted by anyone still exists but is not alive. This is one of several points where one is tempted to ask—in analogy to Rellstab’s (2007) question “How much Grice is in Peirce?”—how much Peirce there is in Wittgenstein.
4 Taking up arguments by Grice (1957), Sperber & Wilson (1986) investigate relations between communication and cognition and differentiate between a “code theory” and an “inferential theory” of communication. From their point of view, Peirce’s semiotic approach, or also the Saussurean semiological approach, “is a generalisation of the code model /…/ to all forms of communication.” (Sperber & Wilson 1986: 6). But, actually, Peirce’s generalization extends even far beyond communication.
5 According to Ransdell (1986: 56f) Peirce’s “conception of the symbol, along with the coordinate conceptions of the icon and the index /…/ can function in application only to isolate dimensions of the significance in things, but not to classify or sort out things into distinct groups /…/.”
6 Similarly, Eco (1979) succeeds, despite his radical rejection of Peirce’s “untenable trichotomy” (p. 178) and especially of the notion of “iconic signs”, in isolating different “modes of producing sign-functions” (p. 217).

References

Bulgakova, Oksana 2007 “From stage to brain: montage as a new principle of a scientific narrative”, in: International Association for Semiotic Studies (ed.), 9th World Congress of Semiotics. Communication: Understanding, Misunderstanding, Helsinki: Hakapaino, 89—90.
Eco, Umberto 1979 [1976] A Theory of Semiotics, Bloomington: Indiana University Press.
Fenk, August 1994 “Spatial metaphors and logical pictures”, in: Wolfgang Schnotz and Raymond W. Kulhavy (eds.), Comprehension of Graphics (Advances in Psychology 108), Amsterdam: North-Holland, Elsevier Science B.V., 43—62.
Fenk, August 1997 “Representation and Iconicity”, Semiotica 115, 3/4, 215—234.
Fenk, August 1998 “Symbols and icons in diagrammatic representation”, Pragmatics and Cognition 6 (Special Issue on The Concept of Reference in the Cognitive Sciences), 1/2, 301—334.
Fenk, August 1999 “Ikonische Symbole und visuelle Metaphern”, Papiere zur Linguistik 61, 2, 119—138.
Fenk, August and Fenk-Oczlon, Gertraud 2007 “Inference and Reference in Language Evolution”, in: Stella Vosniadou, Daniel Kayser, and Athanassios Protopapas (eds.), EuroCogSci07, The European Cognitive Science Conference 2007, Hove: Lawrence Erlbaum Associates, 889.
Fischer, Martin 1997 “Schrift als Notation”, in: Peter Koch and Sybille Krämer (eds.), Schrift, Medien, Kognition: über die Exteriorität des Geistes, Tübingen: Stauffenburg Verlag, 83—101.
Grice, H. Paul 1957 “Meaning”, The Philosophical Review 66, 377—388.
Heintz, Christophe 2006 “Web Search engines and distributed assessment systems. Collaborative tagging as distributed cognition”, Pragmatics & Cognition 14 (Special Issue on Cognition and Technology: Distributed Cognition), 2, 387—409.
Johnson-Laird, Philip N. 1983 Mental models: towards a cognitive science of language, inference, and consciousness, Cambridge: Harvard University Press.
Keller, Rudi 1995 Zeichentheorie, Tübingen: Francke.
Koch, Peter and Krämer, Sybille 1997 “Einleitung”, in: Peter Koch and Sybille Krämer (eds.), Schrift, Medien, Kognition: über die Exteriorität des Geistes, Tübingen: Stauffenburg, 9—26.
Langer, Susanne 1942 Philosophy in a New Key, Cambridge (Mass.): Harvard University Press.
Martinak, Eduard 1993 [1901] Psychologische Untersuchungen zur Bedeutungslehre, Leipzig: J.A. Barth. [Reprinted in Kodikas/Code, Ars semeiotica 16 (1/2), 125—126.]
Matuszewski, Boleslas 1998 [1898] “Eine neue Quelle für die Geschichte. Die Einrichtung einer Aufbewahrungsstätte für die historische Kinematographie”, montage/av – Zeitschrift für Theorie & Geschichte audiovisueller Kommunikation 7, 2, 6—12.
Neisser, Ulric (ed.) 1998 The rising curve: Long-term gains in IQ and related measures, Washington, DC: American Psychological Association.
Nersessian, Nancy J. 2007 “Model-based Reasoning in Distributed Cognitive-Cultural Systems: Studies of interdisciplinary research labs”, in: Stella Vosniadou, Daniel Kayser, and Athanassios Protopapas (eds.), EuroCogSci07, The European Cognitive Science Conference 2007, Hove: Lawrence Erlbaum Associates, 2.
Nöth, Winfried 1997 “Representation in Semiotics and in Computer Science”, Semiotica 115, 3/4, 203—213.
Ogden, Charles K. and Richards, Ivor A. 1923 The meaning of meaning, London: Routledge & Kegan Paul.
Pelc, Jerzy 1986 “Iconicity: Iconic Signs or Iconic Uses of Signs?”, in: Paul Bouissac, Michael Herzfeld, and Roland Posner (eds.), Iconicity: Essays on the Nature of Culture; Festschrift for Thomas A. Sebeok, Tübingen: Stauffenburg-Verlag, 7—15.
Peirce, Charles S. 1906 “Prolegomena to an Apology for Pragmaticism”, The Monist XVI, 4, 492—546.
Peirce, Charles S. 1976 [1908] “[Letter to] P.E.B. Jourdain”, in: Carolyn Eisele (ed.), The New Elements of Mathematics, 3/2, Berlin: Mouton; Atlantic Highlands, NJ: Humanities Press, 879—888.
Peirce, Charles S. 1976 “Καινὰ στοιχεῖα”, in: Carolyn Eisele (ed.), The New Elements of Mathematics, 4, Berlin: Mouton; Atlantic Highlands, NJ: Humanities Press, 236—263.
Pirenne, Maurice H. 1970 Optics, Painting and Photography, Cambridge: Cambridge University Press.
Ransdell, Joseph 1986 “On Peirce’s conception of the iconic sign”, in: Paul Bouissac, Michael Herzfeld, and Roland Posner (eds.), Iconicity: Essays on the Nature of Culture; Festschrift for Thomas A. Sebeok, Tübingen: Stauffenburg-Verlag, 51—74.
Rellstab, Daniel H. 2007 “How much Grice is in Peirce?”, in: International Association for Semiotic Studies (ed.), 9th World Congress of Semiotics. Communication: Understanding, Misunderstanding, Helsinki: Hakapaino, 369.
Schönpflug, Wolfgang 1997 “Eigenes und fremdes Gedächtnis. Zur Rolle von Medien in Erweiterten Gedächtnissystemen”, in: Peter Koch and Sybille Krämer (eds.), Schrift, Medien, Kognition: über die Exteriorität des Geistes, Tübingen: Stauffenburg, 169—185.
Sebeok, Thomas A. 1979 The sign & its masters, Austin & London: University of Texas Press.
Sebeok, Thomas A. 1986 I think I am a verb. (More contributions to the doctrine of signs), New York & London: Plenum Press.
Sperber, Dan and Wilson, Deirdre 1986 Relevance: communication and cognition, Oxford: Blackwell.
Spinks, C.W. 1991 Peirce and Triadomania: a walk in the semiotic wilderness, Berlin/New York: de Gruyter.
Steels, Luc 2006 “Collaborative tagging as distributed cognition”, Pragmatics & Cognition 14 (Special Issue on Cognition and Technology: Distributed Cognition), 2, 287—292.
Tsivian, Yuri 2007 “Synesthesia to Multimedia: How Lintsbach’s Old Semiotic Fantasy Looks in the Digital Age”, in: International Association for Semiotic Studies (ed.), 9th World Congress of Semiotics. Communication: Understanding, Misunderstanding, Helsinki: Hakapaino, 14—15.
von Glasersfeld, Ernst 1982 “Subitizing: the role of figural patterns in the development of numerical concepts”, Archives de Psychologie 50, 191—218.
Wittgenstein, Ludwig 2006 [1953] [1933] Philosophical investigations, Malden (MA): Blackwell Publishing.

Medienphilosophie und Bildungsphilosophie – Ein Plädoyer für Schnittstellenerkundungen Theo Hug, Innsbruck

1. Einführung Zur Relation von Medien und Bildung ist in den letzten Jahren aus verschiedensten Perspektiven unterschiedlich Gehaltvolles gesagt und geschrieben worden. Ähnlich wie bei vielen pädagogischen oder psychologischen Themen gilt auch hier, dass sich die sprechenden und schreibenden Akteure ungeachtet ihrer Bildung, Aus-, Vor- oder Unbildung, und ungeachtet der Reichweiten ihrer Sprachhandlungen allemal kompetent fühlen und kein Mangel an vollmundigen Diagnosen, Ankündigungen und Vorschlägen für Massnahmen besteht. Bermerkenswert erscheint mir aber auch, dass das Verhältnis von Medien- und Bildungsphilosophie bislang weitgehend ein Desiderat geblieben ist. Das bedeutet nicht, dass es der Sache nach keine einschlägigen Hinweise in der Philosophiegeschichte oder bei den „Klassikern der Pädagogik“ (Scheuerl 1979, Tenorth 2003) zu finden gäbe. Das bedeutet auch nicht, dass keine entsprechenden aktuellen Diskurse in Gang gekommen wären. Ganz im Gegenteil, im aktuellen Tagungsgeschehen sind etliche thematisch relevante Akzentsetzungen auszumachen.1 Es spricht allerdings vieles dafür, dass im Mainstream der Bildungstheorie und -philosophie der „mediatic turn“ noch nicht angekommen ist. Medien kommen beispielsweise im Zusammenhang der jüngsten, in vieler Hinsicht trefflichen Bestandsaufnahme der Bildungstheorie und Bildungsphilosophie von Meyer-Wolters (2006) gerade mal unter dem Aspekt ihrer Zugänglichkeit vor (ebd.: 58). Der Modus der (Mit-)Erwähnung von Medien, nicht selten in Form von unverbindlich-generalistischen Verweisen auf die Relevanz technologischer Entwicklungen, ist sehr verbreitet. Aber selbst dort, wo Medien zur fachübergreifenden Querschnittsthematik oder zu einem „prioritären Thema“ (Keuffer/Oelkers 2001) erklärt werden, bedeutet dies im Allgemeinen keinen grundsätzlicheren Perspektivenwechsel. Grosso modo lässt sich

44 mit Blick auf Binnenperspektiven der Bildungswissenschaften insgesamt behaupten, dass – abgesehen von einigen, mehr oder weniger sichtbaren Teildiskursen – Fragen nach der Relevanz von Medien, Medialität und Medialisierung sehr oft halbherzig, zögerlich oder gar nicht aufgegriffen und noch öfters in ihrer Tragweite verkannt werden. Diese Inkonsequenz trifft sich in gewisser Weise mit Aussenperspektiven des impliziten oder expliziten Primats von einem Verbund technologisch-ökonomisch-bürokratischer Orientierungen für Bildungsfragen. Die Verwendung das Ausdrucks ‚Bildung’ hat dann weder mit „alteuropäischen“ Konzepten noch mit neueren wissenschaftlichen oder philosophischen Verständnissen von Bildung, Kompetenz und Wissen, sondern eher mit einem Reduktionismus von Teilaspekten des Lernens im Sinne von gut berechen- und verwaltbaren Grössen zu tun. Es scheint eine weitverbreitete Tendenz und eine Ironie der Zeit zu sein, dass gerade pädagogische und psychologische Begrifflichkeiten in alltagstheoretischer und technologisch verkürzter Weise gebraucht werden. Dies gilt nicht nur im Zusammenhang europaweit wirksamer Trancephänomene wie dem „Bologna-Prozess“. Für manche BildungspolitikerInnen sind technologieimprägnierte Denkformen schon dermaßen zur Selbstverständlichkeit geworden, dass sie sich gar nicht mehr vorstellen können, was es bedeutet zuerst danach zu fragen, welche zentralen gesellschaftlichen Probleme dringend bearbeitet werden müssen, was an den bestehenden Lernkulturen verbesserungswürdig ist und welche Maßnahmen dazu dienlich sein könnten, welche Bildungskonzepte als zukunftsweisend zu erachten sind, wie ein gedeihliches Lernklima geschaffen und wie den wachsenden Wissens- und Kommunikationsklüften entgegengearbeitet werden kann – bevor Fragen nach geeigenten Werkzeugen, Software-Produkten oder den technischen Vorlieben der zentralen Informatikdienste eine Rolle spielen. 
Während bei der Planung von Verkehrssystemen selbst jenen PolitikerInnen, die ihre Entscheidungen primär an wahltaktischen Überlegungen und Rufen aus „der Wirtschaft“ ausrichten, klar ist, dass ein Fachmann für H7-Fassungen und XENON-Licht zwar relevante Beiträge leisten kann, im Allgemeinen aber für die Bewältigung von derart komplexen Planungsaufgaben nicht ausreichend qualifiziert sein wird, haben viele HochschullehrerInnen und universitären EntscheidungsträgerInnen hingegen kein Problem damit, wenn ein Datenbankspezialist mit strategischen Entscheidungsvorbereitungen zur „eLearning-Entwicklung“ der Institution als Ganzer anstatt mit operativen Aufgaben befasst ist. Solche Alltagsbeobachtungen korrespondieren durchaus mit Vorstellungen, die auch in einzelnen wissenschaftlichen Teildisziplinen kursieren, und

45 mit Aktivitäten in der Politikberatung. So zeigen beispielsweise die Gewichtung und Positionierung von Lehr-, Lern- und Bildungsfragen in den „Empfehlungen zur Weiterentwicklung der Kommunikations- und Medienwissenschaften in Deutschland“ (Wissenschaftsrat 2007) unmissverständlich, dass den Bildungswissenschaften im Allgemeinen und der Medienpädagogik im Besonderen keine prominente Zuständigkeit für entsprechende Fragestellungen im Spannungsfeld von Medien und Kommunikation zugesprochen wird. Mehr noch: Mediendidaktische Aufgaben werden in der Medientechnologie (sic!), kurzum: der Informatik angesiedelt (ebd.: 23 ff). Wen wundert es da noch, dass mancherorts bereits das zehnjährige Jubiläum von Prozessen der „Blackboardisierung“2 gefeiert werden kann. Dem Fazit von Reinmann (2007: 4), dass die Darstellung bildungswissenschaftlicher Aspekte in den Empfehlungen des Wissenschaftsrats (2007) „dünn“ ausfällt und insgesamt „den neuen Phänomenen und Herausforderungen auf dem Gebiet von Medien und Kommunikation nicht gerecht“ wird, kann ich nur zustimmen.3 Entsprechende Herausforderungen bestehen nicht zuletzt in konzeptioneller Hinsicht. Wenn wir Fragen der Bildung, Medien und Kommunikation ernst nehmen, dann sind Klärungen angezeigt, die Sozial- und Kulturwissenschaften insgesamt betreffen und die weit über bildungspolitische Tendenzen und universitäre (Fehl-)Entwicklungen hinausgehen. Die Tragweite entsprechender Überlegungen wird deutlich, wenn wir uns die aktuellen Diskurse um eine neue Wende vergegenwärtigen. Im Anschluss an die Beobachtungen, die zur Rede von der Medialisierung der Lebenswelten geführt haben, zeichnet sich seit einigen Jahren unter den Titeln media turn, medial turn und mediatic turn ein weiterer paradigmatischer Wandel ab. In Analogie zu bekannten Fokussierungen auf Sprache (linguistic turn), Kognition (cognitive turn), Zeichen (semiotic turn) oder Bilder (pictural turn) wird hier das Augenmerk auf Medien gelegt. 
Die Rede von einer „medialen Wende“ meint auf einer meta-theoretischen Ebene eine Alternative und Ergänzung der etablierten Paradigmen, die sich durch eine Konzentration auf Medien, Medialität und Medialisierung auszeichnet. Auf einer empirischen Ebene wird die Bedeutung der Medien für Prozesse der Kommunikation, des Wissensaufbaus und der Wirklichkeitskonstruktion hervorgehoben. Der Ausdruck „Medialisierung der Lebenswelten“ beinhaltet gewissermaßen beide Aspekte: Die erfahrbare Alltagswelt und Beobachtungen der „Mediendurchdringung“ sowie die Unhintergehbarkeit medialisierter Welten und deren Funktion als Ausgangspunkte für unsere Erkenntnisbestrebungen. Im Anschluss an einige medienbegriffliche und zeitdiagnostische Überlegungen (2. Teil) werden im vorliegenden Beitrag unterschiedliche Versionen und Konzeptionen der „medialen Wende“ beschrieben (3. Teil).4 Auf diesem Hintergrund werden im vierten Teil einige Schnittstellen im Spannungsfeld von Medien- und Bildungsphilosophie skizziert. Abschließend wird der Bedarf an Bildung im Sinne zukunftsorientierter Reflexions- und Gestaltungsangebote nach dem mediatic turn verdeutlicht.

2. Ausgangsüberlegungen

2.1 Zeitdiagnostische Beschreibungsperspektiven

Das verfügbare Auswahlspektrum zeitdiagnostischer Beschreibungsperspektiven reicht von der Erlebnis- und Kommunikations- über die Informations- und Wissens- bis hin zur Medien- und Weltgesellschaft (s. Schimank/Volkmann 2000, 2002). Angesichts der Vielfalt der verfügbaren gesellschaftlichen Selbstbeschreibungen halte ich eine eindimensionale diagnostische Festlegung mit oder ohne daraus abgeleitete Therapievorschläge für problematisch. Vielmehr sollten wir unterschiedliche Beschreibungsperspektiven und deren Relation und Brauchbarkeit für Theorie und Praxis der Medienpädagogik in Erwägung ziehen. Gerade in multi-optionalen Ausgangslagen eröffnen Ansätze der Gestaltung von Erwägungskulturen (Blanck 2002, Blanck/Schmidt 2005) weit eher viable und tragfähige Entwicklungsdynamiken als salopp vorgetragene Behauptungen über die „wirkliche“ Verfassung der gesellschaftlichen oder medialen Wirklichkeit. Wenn wir von der Verwendung zeitdiagnostischer Ausdrücke im Sinne leerer Worthülsen einmal absehen, dann eröffnen die jeweiligen Beschreibungen unterschiedliche Reflexionshorizonte und Möglichkeiten der Kontingenzverarbeitung. Insgesamt haben wir es mit einer Inflation verschieden griffiger Selbstbeschreibungen von gesellschaftlichen Trends zu tun. Siegfried J. Schmidt spricht in diesem Zusammenhang von „operativen Fiktionen“ (Schmidt 2006: 4). Damit wird die doppelte Perspektive der Unerlässlichkeit solcher Beschreibungen für die Gestaltung von Kommunikationsprozessen und der Unvermeidlichkeit ihres fiktionalen Charakters betont. Er fasst solche operativen Fiktionen als reflexive Strukturen auf und unterscheidet dabei „im Wissensbereich Erwartungserwartungen und im Motivations- und Intentionsbereich Unterstellungsunterstellungen. Wenn wir miteinander kommunizieren, unterstellen wir stillschweigend, dass die Kommunikationsmittel, die wir verwenden, ähnlich verwendet werden, dass die Themen, die wir behandeln, ähnlich interpretiert werden u. s. w. Wir unterstellen auch, dass die Intentionen, ein Gespräch zu führen, von beiden Seiten loyal und vergleichbar sind. All das können wir nicht überprüfen, und deshalb spreche ich von Fiktion. Es handelt sich um eine operative Fiktion, weil ohne diese Fiktion keine Sprache, keine Kommunikation, keine Kognition funktionieren würde. Funktioniert eine operative Fiktion, bewährt sie sich; funktioniert sie nicht, brauchen wir eine neue, um mit dem Scheitern umgehen zu können.“ (Schmidt 2006: 4)

Damit rücken dynamische, prozessuale und transversale Aspekte im Umgang mit zeitdiagnostisch-soziologischen Beschreibungen in den Vordergrund. Tatsachen-Ansprüche werden so probeweise vorausgesetzt und auf ihre Anschließbarkeit überprüft und nicht als Faktum hingestellt. Analoges gilt für Beschreibungen, die auf Transformationsprozesse abheben: Auch hier liegt der Akzent auf der sukzessiven Beobachtung dieser Prozesse und deren Bedingungen und nicht auf einer einmal getroffenen Transformationsdiagnose, die in der weiteren Folge unterstellt wird. Vorderhand erscheinen auf diesem Hintergrund drei Beschreibungsperspektiven besonders relevant:

• Die Perspektive Medienkulturgesellschaft fokussiert die Ko-Evolution medialer, sozialer, politischer, ökonomischer und kultureller Veränderungen. Die Einführung neuer Medien schafft historisch immer wieder neue Spielräume der Kommunikation und Kognition sowie der Politik und Ökonomie. Die historischen Medien-Konstellationen stellen dann jeweils für gewisse Zeiträume Strukturbedingungen der Vergesellschaftung und der Wirklichkeitskonstruktion dar (vgl. Schmidt 2000: 175 ff, 2005).

• In der Perspektive Wissensgesellschaft werden Dimensionen der Schaffung, Verfügung, Verteilung und Tradierung von Wissen problematisiert. Dabei geht es vor allem um die Bedeutung unterschiedlicher Wissensformen sowie um deren Zusammenspiel und deren Stellenwert als Produktionsfaktoren. Gerade die Vielgestaltigkeit des Wissens, seiner Repräsentationsformen und seiner sozialen Verteilung stellt eine besondere Herausforderung für die Pädagogik dar. Diese lässt sich nicht mit euphorischen Wissensgesellschafts-Verkündungen, sondern nur mit differenzierten Reflexions- und Orientierungsangeboten bewältigen (vgl. Höhne 2004, Kübler 2005).

• In der Perspektive Netzwerkgesellschaft wird eine uralte menschliche Praxis weiter gedacht (vgl. Castells 2001, Fassler 2001, Barney 2004). Durch den Einfluss der Informationstechnologien entwickeln sich aus traditionellen Netzwerken heute Informationsnetzwerke, die weit reichende Transformationsprozesse in Arbeits-, Bildungs- und Lebenswelten in Gang gesetzt haben.

Als Beispiel für eine zukunftsweisende Zusammenschau, die wichtige Aspekte aller drei Beschreibungsperspektiven ansatzweise verbindet, sei hier auf die Analyse von „Wissensprozessen in der Netzwerkgesellschaft“ (Gendolla/Schäfer 2005) verwiesen.

2.2 Medienbegriffliche Differenzierungen

Nachdem eine einzelne Definition des Medienbegriffs den Spielraum zu sehr einschränken würde und eine integrative Darstellung der Vielzahl der aktuell diskutierten Medienbegriffe (vgl. Neumann-Braun 2000, Hoffmann 2002, Leschke 2003) hier nicht zu leisten ist, will ich einige Unterscheidungen treffen, die mir wichtig erscheinen. Die weitere Ausdifferenzierung von Medien als „multiplexen Systemen“ (Rusch 2002: 80-82) im Sinne einer transdisziplinären Medienwissenschaft bleibt als Aufgabe vorerst erhalten.5 Mit gewissen Vorbehalten lassen sich medienphilosophische von medienwissenschaftlichen Begriffen unterscheiden. Zu den ersten zählen allgemein-abstrakte Konzepte, in denen Medien als Anschauungsformen oder sinnliche Wahrnehmungsmedien (Raum, Zeit) aufgefasst werden. Auch allgemeine Charakterisierungen mit mehr oder weniger universellen Ansprüchen – etwa von Medium als Mitte, Mittel und Mittler – lassen sich hier nennen, oder Luhmanns Medium/Form-Unterscheidung (1997: 190-201). In medienwissenschaftlicher Absicht lassen sich semiotisch-kulturalistische und technisch-apparative Akzentsetzungen in der Begriffsbildung unterscheiden: Zu den ersteren zählen semiotische Kommunikationsmittel (Bild, Sprache, Schrift, Musik), und zu den letzteren technische Herstellungs-, Speicher- und Übertragungsmedien (Buchdruck, Radio, Film, TV, Computer, Internet, etc.). Hinzu kommen die Medienangebote als Resultate der Verwendung von Kommunikationsmitteln wie Texten, Radio-/Fernsehsendungen oder Websites, in denen beide Akzentsetzungen in vielfältiger Weise aufgehoben sein können, sowie Medieneinrichtungen, die in drei Gruppen gegliedert werden können:

a) Medienorganisationen: Medien-Verbände, Vereinigungen, Verlagshäuser, W3C-Konsortium, etc.,
b) Medieninstitutionen: Medienrecht, Regelungen der Medienökonomie und Medienpolitik,
c) intermediäre Institutionen und Organisationen: halbstaatliche oder politische Einrichtungen und Gruppen sowie Vermittlungsinstanzen zwischen Teilgruppen, regionalen, nationalen, transnationalen Sphären der Öffentlichkeit einschließlich neuer „Global Public Spheres“ (Volkmer 1999).

In der Luhmann’schen Konzeption spielen neben den Verbreitungsmedien auch symbolisch generalisierte Kommunikationsmedien (Anerkennung, Macht, Liebe, etc.) eine ausgezeichnete Rolle als gesellschaftliches Bindemittel und funktionales Äquivalent zur Moral (s. Luhmann 1997: 316f). Was die Bildungswissenschaften, aber auch die Bildungstheorie und die Medienpädagogik betrifft, so stand lange Zeit ein Verständnis von Medien im Sinne technischer Kommunikationsmittel im Vordergrund. Mit der Verbreitung der digitalen Informations- und Kommunikationstechnologien wurde diese Tendenz einerseits vor allem im Kontext von e-Learning-Entwicklungen weitergeführt, andererseits kamen vermehrt auch andere Medienbegriffe zum Tragen bis hin zu expliziten Auseinandersetzungen mit medientheoretischen und medienphilosophischen Ansätzen. Eine systematische Entfaltung von Medienbegriffen und deren Relevanz für medienpädagogische Fragestellungen, die breite Akzeptanz findet, liegt allerdings nicht vor.


3. Mediatic turn – Eine Diskurs-Skizze

3.1 Zur Fragwürdigkeit der Thematik

Unter Gesichtspunkten empirischer Wissenschaftsforschung ist die Ankündigung oder Ausrufung von Wenden aus mehreren Gründen fragwürdig. Abgesehen von historisch bedeutsamen Beispielen wie der Kopernikanischen Wende stehen Anspruch und Duktus der Darstellung in vielen Fällen in einer seltsamen Relation zur faktischen Rezeption und Verbreitung derselben. Nicht selten bleiben die Wirkungen beschränkt auf das nähere Umfeld eines Lehrstuhls oder eines freiberuflich schaffenden Akteurs. Selbst wenn man in Rechnung stellt, dass einige verkannte Genies zu Unrecht noch nicht für einen Nobelpreis vorgeschlagen worden sind, lässt die Differenz zwischen Selbst- und Fremdeinschätzung mitunter Zweifel und gemischte Gefühle aufkommen. Auch mit der Spannung zwischen medientauglichen Formen der Selbstdarstellung und oberflächlichen Theorie-Darstellungen verhält es sich ähnlich. Insgesamt scheinen im Zusammenhang von Wende-Verkündungen forschungspolitische Interessen, narzisstische Getriebenheit oder karrierestrategische Überlegungen oftmals wichtiger zu sein als problemadäquate Versuche der Klärung und sachliche Motive des wissenschaftlichen Arbeitens. Freilich kann hier eingewendet werden, dass heute ohne „visibility“, strategisches Networking und geschicktes Forschungsmarketing keine nennenswerten Drittmittel zu bekommen sind und ohne solche keine guten Evaluierungsergebnisse oder adäquaten Ausstattungen. Aber auch in Zeiten der Ökonomisierung der Wissenschaften im Allgemeinen und der betriebswirtschaftlich ausgerichteten Reorganisation von Universitäten im Besonderen wird der (meta-)theoretische Reflexionsbedarf nicht weniger.
Im Gegenteil: Gerade angesichts der Schattenseiten im tertiären Sektor und der fragwürdigen Entwicklungen des Primats bürokratischer Verwaltungen gegenüber wissenschaftlich relevanten Prozessen, der weit verbreiteten Tendenzen der Reduktion von Bildung auf (scheinbar) marktgerechte Qualifizierung, der Demotivierung und Innovationsbehinderung durch Hierarchie-Orientierung und Entscheidungszentralisierung, der beschleunigten Lähmung und Leistungsbeschneidung durch Kritikverbote und falsch verstandene Leistungsorientierungen, u. s. w. – gerade angesichts solcher Entwicklungen sind die Klärung von Forschungsinteressen und deren Kontexte und Hintergründe sowie die Prüfung der Angemessenheit von Formen der strategischen Kommunikation in wissenschaftspolitischer oder karrierebezogener Absicht

allemal wichtige Aufgaben. Zu den fragwürdigen Aspekten der Thematisierung von Wenden zählt auch deren Häufung. Wenn man jene wissenschaftlichen Qualifizierungsarbeiten einmal ausnimmt, in denen Behauptungen paradigmatischer Innovation mehr mit Vermessenheit und besonders ausgeprägten Selbstwertgefühlen als mit wissenschaftlicher Kompetenz und argumentativer Überzeugungskraft zu tun haben, dann bleiben immer noch zahlreiche „turns“, die überwiegend in der jüngeren Denkgeschichte artikuliert worden sind (s. Tabelle 1).

Beispiele für Wenden in der abendländischen Denkgeschichte | Beispiele für Wenden in der Pädagogik und Erziehungswissenschaft im 20. Jhdt.
Kopernikanische Wende | Realistische Wende (G. Roth)
Bewusstseinsphilosophische Wende (I. Kant) | Technologische Wende (F. von Cube)
Materialistische Wende (K. Marx) | Anthropologische Wende (F. Bollnow)
Linguistic Turn (L. Wittgenstein, R. Rorty et al.) | Emanzipatorische Wende (K. Mollenhauer)
Pragmatic Turn (F. de Saussure) | Antiautoritäre Wende (A. S. Neill)
Cognitive Turn (U. Neisser, F. Klix, M. Sternberg) | Alltagstheoretische Wende (H. Thiersch)
Symbolic Turn (E. Cassirer) | Humanistische Wende (V. Buddrus)
Cultural Turn (P. Janich) | Neokonservative Wende
Qualitative Turn (K. B. Jensen) | Antipädagogische Wende (E. von Braunmühl)
Pictural Turn (W. J. T. Mitchell) | Postmoderne Wende (D. Baacke)
Iconic Turn (H. Burda/Ch. Maar, F. Hartmann) | Environmental Turn (Ch. Doelker)

Tabelle 1: Beispiele für Wenden in den Geistes-, Kultur- und Erziehungswissenschaften

Mit diesen und anderen Wenden sind jeweils thematische Akzentverschiebungen, Aufmerksamkeitsverlagerungen, veränderte Aufgabenbereiche und Methodologien sowie neue Problemkonstellationen und Formen der Problembearbeitung verbunden. Je nach Temperament, kognitivem Stil und Rahmenbedingungen wird diesen Veränderungen mehr ein Charakter von Reformen oder von Revolutionen zugeschrieben.

Insgesamt scheint es, dass im 20. Jahrhundert ein beschleunigter Verbrauch an Wenden entstanden ist – Turns werden offenkundig in immer kürzeren Abständen thematisiert. Auch wenn dies per se noch keine intensivierte Reflexivität und erst recht nicht ein Indiz für gesteigerte meta-reflexive Aktivitäten bedeutet, so kann umgekehrt die Rede vom mediatic turn nicht so ohne weiteres als kognitives Epiphänomen einer technisierten conditio humana, als Nebeneffekt mediengesellschaftlicher Verhältnisse oder als Ausdruck jener allgemeinen Rastlosigkeit und Getriebenheit abgetan werden, die viele Medienkulturen auszeichnet. Hier geht es um mehr als um Momentaufnahmen aus spezifischen Netzkulturen oder um Gelegenheitsargumentationen von Hobby-EpistemologInnen, die mit solchen Kurzschlüssen pauschaliter ad acta gelegt werden könnten.

3.2 Mediatic turn – Versionen und Varianten

Die Fragwürdigkeit der Thematik mag durch weitere skeptische Argumente etwa im Hinblick auf den relativ geringen Differenzierungsgrad mancher Darstellungen oder die Vieldeutigkeit des Ausdrucks ‚Medien’ noch verstärkt werden. Insgesamt überwiegen jedoch die Gründe für eine Auseinandersetzung, zu denen ich insbesondere die folgenden zähle:

• In einigen der genannten Turns sind Teilbereiche und Aspekte medienbezogener Argumentationen enthalten, die in einen Zusammenhang gebracht werden können. Aber nicht nur die neuerliche Konjunktur kulturwissenschaftlicher und kulturphilosophischer Grundlagendiskurse oder der Stellenwert von Bildern sind hier von Bedeutung; auch medienökologische Zugänge und die Problematisierung von Medien als Umwelten im Sinne eines „Environmental turn“ (vgl. Doelker 2005: 267-273) verweisen auf die Thematik des mediatic turn und behandeln Teilaspekte derselben.

• Die Frage nach einem mediatic turn ist in mehreren Ländern (Bsp. Kanada, Schweden, Deutschland, Australien, Österreich und die Niederlande) – teilweise zeitgleich und teilweise zeitversetzt – ein Topos geworden, der zumindest einige AutorInnen beschäftigt.

• Die veröffentlichten Argumentationen zur Thematik sind mitunter knapp, aber wohldurchdacht und stichhaltig (Bsp. Krämer 1998a und 1998b, Margreiter 1999), und sie fordern auf zur vertiefenden Auseinandersetzung.

• Hand in Hand mit der Vernunftkritik der letzten 20 Jahre erfolgte eine breite Diskussion der Grundlagen der Moderne und des Pluralismus von Wissens-, Denk- und Lebensformen. In epistemologischer Hinsicht ist die konstruktivistische Einsicht, dass Erkenntnis allemal einen blinden Fleck in der Optik aufweist, und dass diese Blindheit nicht zeitgleich zum Gegenstand derselben Optik werden kann, unübersehbar virulent geworden.6 Insofern ist das wissenschaftliche Wissen genauso kontingent wie jede andere Form des Wissens (vgl. Gumbrecht 1991: 844). Der mediatic turn fokussiert nun die bislang vielfach vernachlässigten medialen Dimensionen im Zusammenhang der diversen Formen der Kontingenzbewältigung, Wissensproduktion und Selbstvergewisserung.

Nach dieser Abwägung stellt sich die Frage, was den mediatic turn nun auszeichnet. In einer ersten Annäherung lässt sich in Analogie zum linguistic turn, der die Sprachvermitteltheit menschlicher Geistestätigkeit in den Vordergrund stellt, als Kurzformel die Medienvermitteltheit aller Bemühungen um Erkenntnis, Wissen und Klärung von Selbst- und Weltverhältnissen nennen. Letzteres ist durchaus doppeldeutig zu verstehen und meint sowohl das Selbst- und Weltverhältnis einzelner Akteure als auch die Ebene von Verhältnissen, Konstellationen und (Möglichkeits-)Bedingungen. Während beim linguistic turn über weite Strecken sprachwissenschaftliche Dimensionen und in den sprachanalytischen Ausrichtungen vor allem Begriffsanalysen im Vordergrund standen, rücken beim mediatic turn – ähnlich wie beim symbolic turn oder beim cultural turn – symbolische, materiale und soziale Dimensionen verstärkt ins Blickfeld.
Zwar zeichnet sich in der Sprachphilosophie mit den Gebrauchstheorien von Bedeutung eine Relativierung von geisteswissenschaftlichen Zugängen ab – symbolische und materielle Praktiken, prozessorientierte Sichtweisen kultureller Programme und Produkte sowie technologische Gesichtspunkte bekommen allerdings erst im Zusammenhang des symbolic turn, des cultural turn und des mediatic turn jenes Gewicht, das ihnen nach den Postmoderne-Debatten und der „Austreibung des Geistes aus den Geisteswissenschaften“ (Kittler 1980) zukommen sollte. In diesem Sinne meint die Rede von einer „medialen Wende“ auf einer meta-theoretischen Ebene eine Alternative und Ergänzung der etablierten Paradigmen, die sich durch eine Konzentration auf Medien, Medialität und Medialisierung auszeichnet. Auf einer empirischen Ebene wird die Bedeutung der Medien für Prozesse der Kommunikation, des Wissensaufbaus und der Wirklichkeitskonstruktion hervorgehoben. Der Ausdruck „Medialisierung der Lebenswelten“ beinhaltet gewissermaßen beide Aspekte: Die erfahrbare Alltagswelt und Beobachtungen der „Mediendurchdringung“ sowie die Unhintergehbarkeit medialisierter Welten und deren Funktion als Ausgangspunkte für unsere Erkenntnisbestrebungen. Bei näherer Betrachtung lassen sich Versionen des mediatic turn unterscheiden, wobei zwischen impliziten und expliziten Varianten differenziert werden kann. Zu den impliziten Varianten zähle ich jene, die sich in ihren Arbeiten nicht ausdrücklich auf einen medial oder mediatic turn beziehen oder einen solchen proklamieren, die aber der Sache nach einschlägige Diskursstränge bearbeiten und deren Argumentationen ähnliche Reichweiten wie die der expliziten Varianten behandeln. Ob die ersten relevanten Anknüpfungspunkte in der altindischen Philosophie, in Sokrates’ (470-399 v. Chr.) Kritik der Schrift, die das Gedächtnis schwäche und die von seinem Schüler Platon (427-347 v. Chr.) im Dialog „Phaidros“ (Platon 1986) verschriftlicht worden ist, in der Erfindung der Zentralperspektive durch Filippo Brunelleschi (1377-1446) (vgl. Löffler o. J.), in der Errichtung des ersten Instituts für Zeitungskunde an der Universität Leipzig in Deutschland im Jahre 1916 durch Karl Bücher (1847-1930), im medienhistorischen „Entstehungsherd“ im Jahr 1929 (Andriopoulos/Dotzler 2002) oder in den medientheoretischen Arbeiten von Walter Benjamin, Günther Anders, Jean Baudrillard, Vilém Flusser, u. a. zu finden sind, mag die medienhistorische Forschung näher erkunden. Hier soll es genügen, exemplarisch auf vier relevante Diskursstränge des 20. Jahrhunderts zu verweisen, die nahe an den expliziten Auseinandersetzungen mit dem mediatic turn angesiedelt sind:7

• Da ist zunächst die universell anwendbare Medium/Form-Unterscheidung, die Niklas Luhmann (1997: 190-201) im Anschluss an Fritz Heider (2005) fruchtbar gemacht hat, der seinerseits bereits 1926 die Unterscheidung zwischen Ding und Medium im Kontext einer Theorie der Wahrnehmung der physikalischen Welt gebraucht hat.
Während bei Heider die Dinge feste Kopplungen in lose gekoppelten „Medien“ darstellen (Bsp. Schallwellen als Form im Medium Luftmoleküle), sind auch bei Luhmann Medien Ansammlungen lose gekoppelter Elemente, die die Bildung von Formen im Sinne temporärer oder dauerhafter Kopplungen solcher Elemente ermöglichen (Bsp. Sätze als Formen im Medium von Wörtern, die ihrerseits Formen im Medium von Buchstaben darstellen). Die systemtheoretische Fassung der Denkfigur von Formen als festen Kopplungen von Elementen, die als solche nicht verbraucht werden und die ansonsten lose gekoppelt sind, wird insbesondere von Dirk Baecker weiterentwickelt. Dabei bestimmt sich der Medienbegriff „aus der Differenz zur Form, und dies als Differenz zwischen loser und enger Kopplung“ (Baecker 1994: 174).

• Bei Marshall McLuhan (1992) wird die Form der Medien einerseits auf technologische und apparative Dimensionen eingeschränkt und andererseits im Sinne der „Ausweitung unserer eigenen Person“ (ebd.: 17) ausgedehnt.8 Elektronische Medien erweitern die sinnliche Wahrnehmung und das Handlungsvermögen, wobei räumliche und zeitliche Unterschiede relativiert oder neutralisiert werden. „Das Medium ist die Botschaft“ (McLuhan 1992: 17) lautet die bekannte Botschaft,9 denn „die ‚Botschaft’ jedes Mediums oder jeder Technik ist die Veränderung des Maßstabs, Tempos oder Schemas, die es der Situation des Menschen bringt“ (ebd.: 18). In diesem Sinne werden McLuhan zufolge Botschaften durch Medien geformt, wobei der „Inhalt“ eines Mediums „immer ein anderes Medium ist“ (ebd.: 18).

• Die zeitgenössische Kunst-Soziologie bietet mit dem Konzept der digitalen Mediamorphose (vgl. Blaukopf 1982, 1989; sowie Smudits 1994, 2002) vergleichsweise stärker fokussierte Anknüpfungspunkte für eine Auseinandersetzung mit dem mediatic turn. Diese materialistisch akzentuierten Überlegungen heben auf eine Produktivkrafttheorie der Medien ab und behandeln die mediale Formbestimmung des Kulturschaffens und der kulturellen Kommunikation auf dem Hintergrund von Einflüssen der historisch jeweils neuen Informations- und Kommunikationstechnologien sowie die damit verbundenen Veränderungen des Beziehungsgefüges zwischen Kreation und Produktion sowie Distribution und Rezeption.

• Unter den konstruktivistischen Gesprächsangeboten sind hier jene von Siegfried J. Schmidt in besonderer Weise anschlussfähig. Im Hinblick auf den gesamten Prozess von Wirklichkeitskonstruktionen unterscheidet er analytisch drei Bereiche: Poiesis (Handlung und Interaktion), Kognition und Kommunikation (Schmidt 2000: 42), wobei in allen drei Bereichen die Systemspezifik und nicht die „Realitätsspezifik“ dominiert.
In diesem zirkulär gedachten Zusammenhang dieser Prozessbereiche tritt an die Stelle des Rekurses auf „die Realität“ eine Pluralisierung von Wirklichkeitsmodellen, die nach Kriterien wie Viabilität, Operationalisierungsmodus und Problemlösungsrelevanz beurteilt werden können. Die Relevanz des Übergangs von der Kommunikativität zur Medialität, der mit der Entstehung des Gesamtmediensystems moderner Medienkulturgesellschaften einhergeht, wird in seinem Beitrag im vorliegenden Band verdeutlicht, der auch explizit auf einen media turn Bezug nimmt.

Was die ersten expliziten Auseinandersetzungen mit dem media(tic) turn betrifft, so will ich mich ebenfalls auf vier ausgewählte Diskursstränge des 20. Jahrhunderts beschränken, die hier kurz skizziert werden sollen:

• Göran Sonesson hat 1995 mit Blick auf die medialen Herausforderungen für kultursemiotische Überlegungen als einer der ersten explizit von einem „failure of the mediatic turn“ gesprochen (s. Sonesson 1997). Er geht davon aus, dass keine andere Disziplin für das Studium der Medien so angemessen erscheint wie die Semiotik (ebd.). Andererseits gelinge der mediatic turn nicht angesichts der gewichtigen Rolle von Kommunikationsmodellen, die in der mathematischen Informationstheorie wurzeln. Wegen der starken Konzentration auf Massenmedien werden Feinheiten der direkteren Kommunikationsformen nicht erfasst und raum-zeitliche Dimensionen nur inadäquat behandelt. Im Rückgriff auf Edmund Husserl und Alfred Schütz sowie Algirdas Greimas und James Gibson plädiert er stattdessen für eine Anknüpfung an die Lebenswelt-Diskurse und eine Befassung mit der Opposition Medien – Lebenswelt in Nachfolge der System-Lebenswelt-Opposition. “However, the prototypical Lifeworld would be immediate (or as little mediate as possible) and taken for granted but more than science and social institutions, media may well be able to transform secondary interpretations into significations taken for granted. Media, rather than the system world, may already be colonizing the Lifeworld” (ebd.). Entsprechend geht es um die Entwicklung von Konzepten der Abstufung von relativen Graden der Vermittlung und die Entwicklung einer Mediensemiotik im Sinne einer erweiterten Proxemik auf der Ebene der Kommunikation von Individuen und auf kultureller Ebene.

• Eine Reihe von medienphilosophischen Beiträgen zur Thematik hat Sybille Krämer (1998a, 1998b, 2004) verfasst.
„Von der sprachkritischen zur medienkritischen Wende?“ fragt sie im Titel des Einleitungsbeitrags zu ihrem Sammelwerk „Über Medien“ (1998a) und entwickelt ihre Überlegungen in Auseinandersetzung mit dem linguistic turn und dem cultural turn. „Was zuvor als Sprache, Text oder Kommunikation beschrieben wurde, kommt nun als Performanz kultureller Praktiken in den Blick“ (Krämer 1998a: 1), und an anderer Stelle heisst es: „So, wie die ‚linguistische Wende’ die Präferenz für Bewußtseinsphänomene ablöste durch eine Hinwendung zur Sprache, so scheint nun das Thema der Sprache selbst eine mediale Umakzentuierung zu erfahren“ (Krämer 1998b: 73).

Einer ihrer zentralen Ausgangspunkte im Anschluss an Jan und Aleida Assmann lautet: „Alles, was wir über die Welt sagen, erkennen und wissen können, das wird mit Hilfe von Medien gesagt, erkannt und gewusst“ (Krämer 1998b: 73). Sie problematisiert allerdings die Annahme, „Medien seien nicht nur Vehikel, sondern auch Quelle von Sinn“ (Krämer 1998b: 73), und distanziert sich von „fundamentalistischen“ Versionen eines medial turn, bei dem Medien das, was sie vermitteln, auch erzeugen, und so zum Apriori unseres Welt- und Selbstverhältnisses werden. Sie plädiert für eine gemäßigte Version, bei der Medien nicht als autonome Gegebenheiten und weniger als Mittel, sondern als Mitte und Mittler mit heteronomer Charakteristik aufgefasst werden, die im Wege unterschiedlicher Übertragungsmodalitäten Verbindungen zwischen heterogenen Welten stiften (vgl. Krämer 2005: 17). Entsprechend rückt die Boten-Metapher ins Zentrum der Überlegungen, wobei unterschiedliche Typen medialer Übertragung differenziert werden. Als Beispiele für solche Übertragungsmodalitäten fungieren etwa die Übertragung durch Hybridisierung im Fall göttlicher Botschaften durch Engel, die Eigentumsübertragung mittels Geld durch Indifferentialität oder die Übertragung durch Transkriptivität in Fällen von Krankheit oder Störung durch biologische und technische Viren (vgl. Krämer 2005: 16ff).

• Im deutschen Sprachraum hat insbesondere Reinhard Margreiter (1999) den Ausdruck „medial turn“ geprägt und in ersten Ansätzen auch in einer eigenen Fassung entfaltet. Er stützt seine Überlegungen auf ein Bündel medientheoretischer und medienphilosophischer Arbeiten und skizziert im Anschluss vor allem an die Symbolphilosophie von Ernst Cassirer10 die Umrisse eines „Medien-Apriori“ wie folgt: „Die – in empirischen Phänomenanalysen zu treffende – Unterscheidung eines sprach-, schrift-, buch-, bild- und rechnergestützten Denk- und Kulturtypus führt zwangsläufig zum Konzept eines (geschichtlich relativen) Medien-Apriori. Nicht nur die Neuen Medien, sondern auch die ‚alten’ Medien Oralität, Literalität und Buchdruck – genauer: die jeweilige historische Konstellation interagierender Medien – sind als dieses Apriori zu begreifen und funktional zu beschreiben. Medienphilosophie stellt somit weitaus mehr dar als eine sogenannte Bereichsphilosophie, denn Medialität ist nicht eine periphere, sondern die zentrale Bestimmung des menschlichen Geistes“ (Margreiter 1999: 17; kursiv im Orig.). Dabei geht es um zeitgemäße Fassungen der Frage nach der „Erfahrung der Wirklichkeit und der Wirklichkeit der Erfahrung“, die Sichtung, Ordnung und weitere Bearbeitung der „bereits verfügbaren Einsichten in die mediale Verfassung des Denkens und der Kultur“ und nicht zuletzt um die medienphilosophische Rekonstruktion der Philosophie als Theorieunternehmen (ebd.: 17).

• In den Niederlanden ist weiters eine Arbeitsgruppe um Wim Binsbergen und Jos de Mul damit befasst, unter dem Titel „mediatic turn“ und unter Berücksichtigung globalisierungstheoretischer Positionen eine Ontologie der Medialisierung auszubuchstabieren (Binsbergen/de Mul o. J.). Eine der zentralen Annahmen lautet, dass die Informations- und Kommunikationstechnologien in der zweiten Hälfte des 20. Jahrhunderts in vielen Gesellschaften nicht nur das Leben der Einzelnen und die Organisation sozialer Einrichtungen, sondern auch die Weisen verändert haben, wie wir uns selbst und die Welt wahrnehmen und interpretieren. In Abgrenzung von sprach- und zeichenrelativistischen Auffassungen des linguistic turn und des semiotic turn gehe es beim mediatic turn weniger um Fragen der Determinierung von Prozessen der Wahrnehmung und Interpretation einer gegebenen Welt durch Medien. Mit dem mediatic turn verlagert sich die Fragestellung entsprechend der Annahme, dass Medien zunehmend die Welten werden, in denen wir leben. So richtet sich das Augenmerk der Analyse beispielsweise nicht auf Formen des Mediengebrauchs, sondern auf Zusammenhänge von Gebrauchsformen und medialisierten Strukturen und deren Implikationen.

Diese und weitere Diskursstränge, die gegenwärtig entfaltet werden, stellen auch für die Bildungstheorie eine Herausforderung dar. Wichtig erscheint mir dabei, dass es bei den ernst zu nehmenden Diskursen rund um den mediatic turn mehr um eine kritische Weiterführung und reflektierte Auslotung von Ergänzungen zu bekannten Paradigmen geht als um einen Paukenschlag, bei dem alle bisherigen Klärungsbemühungen kurzerhand vom Tisch der Geschichte gewischt werden, um dann den frei gewordenen Raum mit einem neuen Paradogma (vgl. Mitterer 1992: 18) oder mittels Worthülsen-Marketing flink zu befüllen. Die Verbreitung verschiedener Formen der Memephoria11 ist auch in Wissenschaftskreisen bereits weit fortgeschritten.
Sehen wir zu, dass der mediatic turn in Fachkreisen für Medienkompetenz und Medienbildung eher Anlass zu differenzierten Auseinandersetzungen als zur Beförderung von Memephoria gibt.


4. Schnittstellenerkundungen

Die Zusammenhänge zwischen Medien und Bildung sind vielgestaltig. Dies wird schnell deutlich, wenn wir die Diskussion nicht auf einen Bildungs- oder Medienbegriff einschränken. Solche Einschränkungen können auf eigentümliche Weise miteinander korrespondieren. Dies ist etwa dann der Fall, wenn angesichts der „Medienhochkonjunktur“ übersehen wird, dass sich diese primär auf technologische Aspekte bezieht. In sozio-kultureller Hinsicht jedoch erscheint beispielsweise die blogsphärische Geschwätzigkeit mitunter als Trivialisierungsveranstaltung, die zwar durch Elemente einer „digital fluency“ charakterisiert sein mag, aber nicht selten auch durch die Unfähigkeit zur Kontextsensitivität, Selbstreflexion oder anteiligen Verantwortung. Was die Bildungswissenschaften betrifft, so ist eine Hochkonjunktur allenfalls in der quantitativ-empirischen Bildungsforschung im Verbund mit Standardisierungsbemühungen und Bildungscontrolling auszumachen. Aspekte der Persönlichkeitsbildung und Fragen nach der Bedeutung von Medien im Prozess der Sozialisation und Lebensbewältigung sowie im Selbst- und Weltverhältnis einzelner Menschen oder ganzer Gruppen spielen hier eine untergeordnete Rolle. Im Vordergrund stehen Fragen nach der Nutzbarkeit von digitalen Technologien zur Steigerung ökonomischer Leistungsfähigkeit, Kommerzialisierung des Wissens und marktorientierter Qualifizierung. Auch wenn in der Medienpädagogik andere Tendenzen auszumachen sind und Fragen nach der Bedeutung von Medien im Selbst- und Weltverhältnis und nach ihrer Rolle im Zusammenhang von Erziehungs-, Lehr-/Lern-, Wissens- und Bildungsprozessen im Vordergrund stehen, so heisst das noch nicht, dass die Prozesse der Medialisierung dabei umfänglich bedacht sind.
Ähnlich wie in vielen anderen Bereichen fällt auch hier auf, dass vielfach Denkfiguren und Argumentationsmuster tradiert werden, die das Verhältnis von Wirklichkeit und Medienwirklichkeit so darstellen, als ob Medien zur „eigentlichen“ Wirklichkeit gleichsam hinzukommen könnten oder auch nicht. Medien liefern dann Stoff für Problematisierungen des Verhältnisses zwischen tatsächlicher Wirklichkeit oder „der Realität“ und medialisierten Wirklichkeiten einschließlich der damit verknüpften Folgeprobleme. Hier besteht meines Erachtens in der Medienpädagogik und der Bildungsphilosophie gleichermaßen ein Reflexionsbedarf, der einige Rückgriffe auf medientheoretische und medienphilosophische Überlegungen erfordert. Was bedeutet der mediatic turn für die Bildungsphilosophie, und was heißt es in diesem Zusammenhang, wenn wir Medialität nicht als eine optionale Dimension auffassen, die zur Bestimmung von Erziehung, Bildung,

Sozialisation, Kommunikation, Gesellschaft und Kultur quasi hinzukommen kann oder auch nicht? Die potenziellen Konsequenzen des mediatic turn Diskurses sind weitreichend, auch wenn es alles andere als ausgemacht ist, dass Medienphilosophie sich als neue Leitdisziplin für die Geistes-, Kultur- und Sozialwissenschaften etablieren wird. Im Gegenteil: Die wissenschaftsdynamischen Konsequenzen des kaum überschaubaren Markts spekulativer Rhetorik, pseudo-theoretischer Gelegenheitsargumentation und inspirierender Aphoristik dürften sich in Grenzen halten, und die differenziert und umsichtig argumentierten konzeptionellen Überlegungen sind über die Medientheorie hinaus bislang kaum wahrgenommen worden. Inwiefern bietet nun der Diskurs des mediatic turn Anlässe und Anhaltspunkte zur Klärung bildungsphilosophischer Grundlagen? Eine Möglichkeit der Beantwortung dieser Frage besteht in der Erkundung einschlägiger Schnittstellen, die auf verschiedenen Ebenen angesiedelt sind und von denen hier einige kurz skizziert werden sollen: • Auf der Ebene der Begriffsklärung stellt sich die Frage, welche Begriffe und Konzepte von Medien, Medialität und Medialisierung für welche bildungsphilosophischen Problemstellungen brauchbar und relevant sind. Hochabstrakte, kategoriale Bestimmungen wie „Orts-/Zeitmedien“ oder Medium als „Mitte“ helfen im Kontext empirischer Untersuchungen kaum weiter. Die Medium/Form-Unterscheidung eröffnet fruchtbare Perspektiven der Theoriebildung, aber wie lässt sie sich sinnvoll mit medienwissenschaftlichen Medienbegriffen verknüpfen? Welche „Koordinatensysteme“ erweisen sich für medienpädagogische Fragestellungen als fruchtbar und wie können sie sich ergänzen? Angesichts einseitiger Akzentsetzungen und Medien-Desiderata in der Bildungsphilosophie mag es schon hilfreich sein, wenn medienphilosophische und medienwissenschaftliche Medienbegriffe differenziert und ins Kalkül gezogen werden.
Hand in Hand damit sollte auch eine nachvollziehbare Klärung der nicht-instrumentellen Dimensionen der Medientechnologien möglich sein. • Auf der Ebene medienanthropologischer Überlegungen im Spannungsfeld von überholten Wesensbestimmungen und biotechnologischen Gestaltungsphantasien geht es um zukunftsweisende Positionen im Anschluss an bekannte Konzeptionen der Erlösungs-, Erziehungs- und Unterstützungsbedürftigkeit menschlicher Akteure. Auch wenn die Versuche der eindeutigen Wesensbestimmung „des“ Menschen über weite Strecken längst differenzierten und historisch- wie kulturkritischen Betrachtungsweisen gewichen sind, so steht die Medienanthropologie erst am Anfang

(Keck/Pethes 2001, Albertz 2002, Pirner/Rath 2003). Die Diskussionen über den Umgang mit intelligenten Maschinen, Cyborgs und e-dwarfs, „Mixed-Reality“-Technologien und Mensch-Maschine-„Symbiosen“ machen aber jetzt schon deutlich, dass hier gewichtige Konsequenzen für viele konzeptionelle und pragmatische Fragen der Medienpädagogik insgesamt absehbar sind. Die Suche nach Antworten auf die Frage nach den Möglichkeiten und Grenzen technologiegetriebener Formen der Bildsamkeit und der Verblödung und ihrem Verhältnis zur ästhetischen Bildung hat erst begonnen. Auch im Zusammenhang von Fragen nach der Situiertheit des Wissens und der Leibgebundenheit von Erkenntnis sind medienanthropologische Überlegungen erst ansatzweise bedacht worden. Analoges gilt für neue Akzentsetzungen der Mediensozialisationsforschung etwa im Hinblick auf Fragen nach der relativen Bedeutung gesellschaftlicher Bindemittel, nach den Formen individueller Bindung, Verbindung und Verbundenheit, nach der Bedeutung von Mediendynamiken für die (Nicht-)Veränderung von Kommunikationsprozessen, nach den (un-)gleichzeitigen Erfahrungswirklichkeiten globaler Mediengenerationen (Volkmer 2006) oder nach der Relevanz von Prozessen der „Ludifikation“ für die individuelle und kulturelle Identitätsentwicklung (Raessens 2006). • Im Diskurs des mediatic turn werden Fragen nach der (Un-)Mittelbarkeit von Erfahrung, der Wirklichkeit der Erfahrung in Relation zur Erfahrung der Wirklichkeit und dem Verhältnis von Wirklichkeit und Medienwirklichkeit neuerlich zugespitzt. Antworten verlangen hier generell eine begriffliche Sorgfalt und reflexive Umgangsformen mit Common-sense-Darstellungen. Der Rekurs auf „die Realität“ und die Warnung vor deren Verlust mutet da seltsam an. Konzepte von Wirklichkeit, Realität und Virtualität müssen expliziert werden, wenn wir der Komplexität der Erfahrungszusammenhänge und der Entwicklungsdynamiken gerecht werden wollen.
Insbesondere die heute verbreiteten Auffassungen von „virtuell“ im Sinne von „auf Digitalisierungsprozessen beruhend“ haben nur indirekt mit traditionellen Verständnissen von „der Kraft oder Möglichkeit nach vorhanden“ zu tun. Leider wird auch in der Bildungsphilosophie kaum zwischen unterschiedlichen Virtualitätskonzepten differenziert. Die Rede von „virtueller Bildung“ oder „virtuellen Lernformen“ könnte erheblich ausdifferenziert werden angesichts der diversen Auffassungsweisen von „virtuell“ in der Philosophie (Bsp. Virtualität als „Emergenz-Boden“ jeglicher Aktualität), der Physik (Bsp. geometrisch konstruierte, scheinbare Bilder oder „virtuelle Teilchen“) und der Medientheorie (Bsp. hybride Bildformen und Formen der Modalisierung medialisierter Wirklichkeitserfahrungen) (vgl. Shields 2003). Bei näherer Betrachtung zeigt sich außerdem schnell, dass Wirklichkeit und Realität nicht nur in Philosophie und Wissenschaft, sondern auch in alltagsweltlichen Kontexten im Plural vorkommen (vgl. Slade 2002). • Damit korrespondiert ein Reflexionsbedarf in medienepistemologischer Hinsicht. Dabei geht es um Klärung von Referenzmodalitäten (Schmidt 1999: 134-139) im doppelten Sinne und nicht um die Beschwörung einer referenzlosen Epistemologie. Einerseits ist damit die Beschreibung von alltagsweltlichen Weisen der Referentialisierung von Wahrnehmungen (Beobachtungen erster Ordnung) gemeint, und andererseits die Klärung der Referenzmodalitäten im Wissenschaftskontext (Beobachtungen zweiter Ordnung). Wir müssen dabei eine Vielfalt kultürlicher Einstellungen in Betracht ziehen, wenn wir den alltäglichen Lebenswelten als medialisierten Wirklichkeitsbereichen Rechnung tragen wollen, die mehr oder weniger wache Akteure verschiedener Altersgruppen und Herkünfte in ihren Common-sense-Einstellungen als mehr oder weniger fraglos erleben. Der Rekurs auf „den“ geschlechtsunspezifischen „wache[n] und normale[n] Erwachsenen in der Einstellung des gesunden Menschenverstandes“ (Schütz 1979: 25) erscheint mir bei aller Wertschätzung für die zahlreichen Anknüpfungspunkte in der Sozialphänomenologie von Alfred Schütz auch für die Erforschung von Lebenswelten als Medienwelten problematisch. Die Klärung von Referenzmodalitäten ist überdies insofern erschwert, als wir heute vielfach mit einer doppelten Dynamik des „Verschwindens“ und der „Allgegenwart“ von digitalen Technologien konfrontiert sind. • In methodologischer Hinsicht macht der Diskurs des mediatic turn eine Neubestimmung des Stellenwerts von Werkzeugen im Forschungsprozess (Bsp. Software-unterstützte Verfahren der Datenanalyse), der Gegenstandskonstitution und deren kommunikative Stabilisierung im Forschungsprozess (Bsp.
Entstehen und Vergehen von Internet-Teilkulturen oder spezifischen Musikkulturen), des Sonderstatus der Schriftsprache im Wissenschaftsprozess, der Übergänge und Abgrenzungen von wissenschaftlichen und künstlerischen Darstellungen (Bsp. literarisch vs. wissenschaftlich motivierte Weblogs) bis hin zu den Darstellungsmodalitäten von Forschungsergebnissen (Bsp. Datei auf einem Datenträger vs. gebundene Schrift) erforderlich. Die Fragen betreffen nicht nur die empirisch ausgerichteten Ansätze in der Bildungsforschung, sondern auch die Bildungsphilosophie und deren Diskurs- und Darstellungsmodalitäten. • Im Zusammenhang des mediatic turn sind weiters Neubestimmungen von Bildung und Medienbildung sowie Verhältnisbestimmungen von Medienkompetenz und Medienbildung angezeigt. Bei aller Vielfalt an Konzepten und Komponenten-Modellen, die inzwischen vorliegen, und ungeachtet der strategischen Einsätze des Ausdrucks in bildungs-, sozial-, rechts-, wirtschafts- und wissenschaftspolitischen Zusammenhängen, wurde der Bedeutung der sprachtheoretischen Wurzeln des Medienkompetenz-Begriffs kaum Beachtung geschenkt (vgl. Niedermair 2000: 178-180). Auch die Relation der Begriffe ‚Bildung‘ und ‚Kompetenz‘ ist klärungsbedürftig, zumal diese nicht ineinander aufgehen (vgl. Pongratz u. a. 2007). Was die Verbreitung technologisch orientierter Varianten von Medienkompetenz-Begriffen betrifft, so ließe sich dieser Reduktionismus zumindest in konzeptioneller Hinsicht im Lichte von Rahmungskonzepten überwinden. Dies betrifft nicht nur mikrosoziologische Perspektiven im Anschluss an E. Goffman (vgl. Pietraß 2003), sondern auch makrosoziologische Perspektiven (vgl. Reese 2001) sowie unterschiedliche fachwissenschaftliche Framing-Konzeptionen. Dabei geht es um die doppelte Denkbewegung der Fähigkeiten und Zuständigkeiten für die Rahmung (Kontextualisierung) von Konzepten der Medienkompetenz und der Medienbildung einerseits und der Möglichkeiten der Bestimmung von Medienkompetenz als Rahmungskompetenz andererseits.12 Auf diese Weise lassen sich auch komplexe Problemlagen gut auf den Punkt bringen und Ansprüche der Beförderung von Medienkompetenz in ihrer Reichweite und Brauchbarkeit diskutieren. Und wie steht es um das Verhältnis von Bildung und Medienbildung? Wie kann Bildung, soweit sie sich nicht auf Aspekte der Qualifizierung, standardisierte und messbare Kompetenzen, wirtschaftspolitisch erwünschte Performanzleistungen oder Unbildung (Liessmann 2006) beschränkt, in zeitgemäßer Weise konzipiert werden? Ich zweifle, dass es ausreicht, den Aspekt der „neuen Medien“ in bekannte Konzepte mitaufzunehmen.
Wenn etwa Klafki (1990) angesichts neuerer Zeit- und Gesellschaftsanalysen die Fähigkeit zur Kritik und Selbstkritik, die Diskursbereitschaft, die Empathie und die Fähigkeit zum vernetzten Denken hervorhebt und zur Entwicklung einer Denkhaltung anregt, bei der Nebenfolgen prinzipiell beachtet werden (ebd.: 98 ff), dann ist dabei zwar an hochentwickelte Technologien, ihre faktischen und möglichen Folgen sowie damit verbundene politische und ökonomische Wirkungszusammenhänge, nicht aber an Zusammenhänge von Formen des Mediengebrauchs und medialisierten Strukturen gedacht. Auch das Motto der „Garantie des Bildungsminimums - Kultivierung der Lernfähigkeit“ von Tenorth (1994: 166 f) berücksichtigt zwar gewissermaßen ein „Medienkompetenzminimum“, nicht aber die Dynamiken der Medialisierung. Wenn aber die garantierten Bildungsangebote auf computer literacy fokussiert bleiben oder in Analogie zur Problematik der Verrechtlichung gedacht sind, dann werden Ansprüche der Herstellbarkeit von Handlungsfähigkeit der Individuen und Funktionstüchtigkeit von Gesellschaften nur sehr begrenzt einlösbar sein. Wenn wir hingegen davon ausgehen, dass die Frage nach dem Selbst- und Weltverhältnis in der Bildungs- und in der Medientheorie gleichermaßen bedeutsam ist, dann liegen neue Akzentsetzungen und Schnittstellenerkundungen nahe. Wenn wir weiters probeweise davon ausgehen, dass Medien „die historische Grammatik unserer Interpretationsverhältnisse“ (Krämer 1998b: 90) sind, dann lässt sich Medienbildung nicht sinnvoll auf einen Teilbereich von Bildung beschränken. Vielmehr stellt sich dann die Frage, wie Bildung ohne Medien heute überhaupt sinnvoll gedacht und modelliert werden kann. Einige Anhaltspunkte liegen bereits vor (vgl. exemplarisch Westphal 1998, Sesink 2004, Kerres/de Witt 2006). Die Anerkennung des epistemologischen Pluralismus und der Tragweite der Medialisierungsprozesse legt einen Bildungsbegriff nahe, für den zumindest drei Aspekte wichtig sind: Pluralitätskompetenz, reflexive Lernfähigkeit und Überwindung diskursiver Zwänge (vgl. Hug 1996: 55 ff).

Bildung als Pluralitätskompetenz
– Erläuterung: transversale Kompetenz, Befähigung zum gedeihlichen Umgang mit und zur geschickten Intervention in pluralen Lagen
– thematische Relevanz: Umgang mit Lebenswelten als Medienwelten, Codes, Formaten, Metaphernkompetenz
– deshalb: Bildung als Medienbildung – Bildung mit Medien statt gegen Medien

Bildung als reflexive Lernfähigkeit
– Erläuterung: Befähigung zur mehrperspektivischen Reflexion von Lernprozessen
– thematische Relevanz: (Re-)Organisation von Lernprozessen, Medien-, Affekt- und Aufmerksamkeitsmanagement
– deshalb: Bildung als Fähigkeit und Befähigung zum Differenzmanagement

widerstreitende Bildung
– Erläuterung: Überwindung diskursiver Zwänge, Verzicht auf harmonisierende Ideale
– thematische Relevanz: Referenzmodalitäten, Modularisierung und Modalisierung der Wirklichkeitserfahrung
– deshalb: erwägungsorientierte, kontextsensitive, bricolierende Bildung

Tabelle 2: Bildung im Zeichen des epistemologischen Pluralismus (vgl. Hug 1996)

Harmonisierende Tendenzen haben in diesem Konzept eine relative Bedeutung. Der Akzent liegt eher auf der Entwicklung von Kompetenzen zum Differenzmanagement (vgl. Schmidt 2002: 92).

Die Liste von Schnittstellen und Problembereichen ließe sich ausdifferenzieren und erweitern. Wichtig scheint mir, dass eine Bildungstheorie, die um die Relativität und Kontingenz ihrer Beschreibungen weiß (vgl. Gumbrecht 1991: 844), auch unter den Auspizien des mediatic turn gute Chancen zur Komplexitätsbewältigung und zu einer soliden Standortbestimmung hat.

5. Ausblick

Bildung wird nicht obsolet, weil die grundlegende Medialität unserer Selbst- und Wirklichkeitsbezüge angesichts technologischer und medienkultureller Entwicklungsdynamiken in besonderer Weise fragwürdig geworden ist. Im Gegenteil: Der Bedarf an differenzierten Verständnissen medieninduzierter Wissensformationen und nachvollziehbaren Beschreibungen medialer Hintergrundstrukturen ist gestiegen und Hand in Hand damit auch der Bedarf an Bildung. Wenn die Gewissheiten weniger werden, wird Bildung umso notwendiger: „Wenn sich nur gewisse Strukturen und diese auch bloß noch andeutungsweise anzeigen, dann bedürfen die Menschen mehr als nur der Ausbildung, auch mehr als einer lebenslangen. Die Menschen müssen befähigt werden, mit erheblich schwächeren Formen von Wissen und Bewußtsein umzugehen: sie brauchen dazu technische Fertigkeiten, Fähigkeiten zur Selbstkonstruktion vor Bildschirmen – einen Rest an metaphysischem Komfort zumindest noch eine Weile –, Fähigkeiten die schwankende Existenz in Netzen zu ertragen, und Kompetenzen der Kooperation mit emotionalen Wesen unter Bedingungen der entfesselten Kommunikation. Nichts weniger heißt Bildung nach dem mediatic turn.“ (Schönherr-Mann 2008)

In welcher Weise die digitalen Medien auch zur „Lesbarkeit der Welt“ (Blumenberg 1981) beitragen (können), und nicht nur zur Beförderung von Prozessen der Ökonomisierung, Trivialisierung und Beschleunigung der Lebensvollzüge, muss ausgelotet werden. Sie ermöglichen zweifellos auch Dynamiken des “informating” und der Erschließung neuer Dimensionen der Reflexivität, Handlungsfähigkeit und Sozietät, wie Bakardjieva (2007) im Anschluss an Zuboff (1988) betont. Insgesamt scheint es mir aber wichtig zu sein, das Spannungsfeld von „In-Formierungen“, die marschiert werden (Wallmannsberger 1998), und einer „In-formatio“, die Computer als Mittel und Medien menschlicher Einbildungskraft versteht (Sesink 2004), nicht aus dem Auge zu verlieren.


Literatur

Albertz, Jörg (Hrsg.) (2002): Anthropologie der Medien – Mensch und Kommunikationstechnologien. Berlin: Schriftenreihe der Freien Akademie
Andriopoulos, Stefan & Dotzler, Bernhard J. (Hrsg.) (2002): 1929. Beiträge zur Archäologie der Medien. Frankfurt am Main: Suhrkamp
Baecker, Dirk (1994): Kommunikation im Medium der Information. In: Maresch, Rudolf & Werber, Niels (Hrsg.): Kommunikation, Medien, Macht. Frankfurt am Main: Suhrkamp, 174-191
Bakardjieva, Maria (2007): The (Wo)man on the Net: Exploring the New Social Distribution of Knowledge. In: Hug, Theo (ed.): Media, Knowledge & Education - Exploring new Spaces, Relations and Dynamics in Digital Media Ecologies. Proceedings of the International Conference held on June 25-26, 2007. Innsbruck: Innsbruck University Press (forthcoming)
Barney, Darin D. (2004): The Network Society. Oxford: Polity Press
Bauer, Walter u. a. (Hrsg.) (1998): Weltzugänge: Virtualität – Realität – Sozialität. Baltmannsweiler: Schneider Verlag Hohengehren
Binsbergen, Wim van & Mul, Jos de (eds.) (o. J.): The mediatic turn: Aspects of the ontology of mediation. (s. Buchankündigung unter http://www.shikanda.net/general/gen3/index_page/philosop.htm [Stand: 2005-05-15])
Blanck, Bettina (2002): Erwägungsorientierung, Entscheidung und Didaktik. Stuttgart: Lucius & Lucius
Blanck, Bettina & Schmidt, Christiane (2005): „Erwägungsorientierte Pyramidendiskussionen“ im virtuellen Wissensraum open sTeam. In: Tavangarian, Djamshid & Nölting, Kristin (Hrsg.): Auf zu neuen Ufern! E-Learning heute und morgen. Münster, New York, München, Berlin: Waxmann, 67-76
Blaukopf, Kurt (1982): Musik im Wandel der Gesellschaft. München: Piper
Blaukopf, Kurt (1989): Beethovens Erben in der Mediamorphose. Kultur- und Medienpolitik für die elektronische Ära. Heiden: Verlag Arthur Niggli AG
Blumenberg, Hans (1981): Die Lesbarkeit der Welt. Frankfurt am Main: Suhrkamp
Castells, Manuel (2001): Das Informationszeitalter I: Der Aufstieg der Netzwerkgesellschaft.
Opladen: Leske + Budrich
Doelker, Christian (2005): media in media. Texte zur Medienpädagogik. Ausgewählte Beiträge 1975-2005, hrsg. von Georges Ammann und Thomas Hermann. Zürich: Verlag Pestalozzianum
Faßler, Manfred (2001): Netzwerke. Einführung in Netzstrukturen, Netzkulturen und verteilte Gesellschaftlichkeit. München: UTB (Wilhelm Fink)
Gendolla, Peter & Schäfer, Jörgen (Hrsg.) (2005): Wissensprozesse in der Netzwerkgesellschaft. Bielefeld: transcript Verlag
Goux, Melanie (2003): McLuhan. Weblog-Eintrag in „brushstroke.tv“, abrufbar unter: http://www.brushstroke.tv/week03_35.html [Stand: 2006-05-19]
Gumbrecht, Hans Ulrich (1991): Epistemologie/Fragmente. In: Gumbrecht, Hans Ulrich & Pfeiffer, Ludwig K. (Hrsg.): Paradoxien, Dissonanzen, Zusammenbrüche: Situationen offener Epistemologie. Frankfurt am Main: Suhrkamp, 837-850
Heider, Fritz (2005): Ding und Medium. Berlin: Kadmos Kulturverlag (zuerst erschienen 1926 in: Symposion. Philosophische Zeitschrift für Forschung und Aussprache 1, 109-157)
Hoffmann, Stefan (2002): Geschichte des Medienbegriffs. Hamburg: Felix Meiner
Höhne, Thomas (2004): Pädagogik und das Wissen der Gesellschaft. Erziehungswissenschaftliche Perspektiven. Internet-Dokument, abrufbar unter: http://geb.uni-giessen.de/geb/volltexte/2004/1830/ [Stand: 2006-05-19]
Hug, Theo (1996): Wissenschaftsforschung als Feldforschung – ein erziehungswissenschaftliches Projekt. In: Störfaktor. Zeitschrift kritischer Psychologinnen und Psychologen. Jg. 8 / H. 3, Nr. 32, 45-63
Hug, Theo (2007): Medienpädagogik unter den Auspizien des mediatic turn – eine explorative Skizze in programmatischer Absicht. In: Sesink, Werner u. a. (Hrsg.) (2007): Jahrbuch Medienpädagogik 6: Medienpädagogik – Standortbestimmung einer erziehungswissenschaftlichen Disziplin. Wiesbaden: VS Verlag für Sozialwissenschaften, 10-32
Hug, Theo & Friesen, Norm (2007): Outline of a Microlearning Agenda. In: Hug, Theo (ed.): Didactics of Microlearning. Concepts, Discourses, and Examples. Münster: Waxmann, 15-31
Keck, Annette & Pethes, Nicolas (Hrsg.) (2001): Mediale Antinomien: Menschenbilder als Medienprojektionen. Bielefeld: Transcript
Kerres, Michael & de Witt, Claudia (2006): Perspektiven der „Medienbildung“.
In: Fatke, Reinhard & Merkens, Hans (Hrsg.): Bildung über die Lebenszeit. Wiesbaden: VS Verlag für Sozialwissenschaften, 209-220
Keuffer, Josef & Oelkers, Jürgen (Hrsg.) (2001): Reform der Lehrerbildung in Hamburg. Abschlussbericht der Hamburger Kommission Lehrerbildung. Weinheim/Basel: Beltz
Kittler, Friedrich A. (1980): Austreibung des Geistes aus den Geisteswissenschaften: Programme des Poststrukturalismus. Paderborn u. a.: Schöningh

Klafki, Wolfgang (1990): Abschied von der Aufklärung? Grundzüge eines bildungstheoretischen Gegenentwurfs. In: Krüger, Heinz-Hermann (Hrsg.): Abschied von der Aufklärung. Perspektiven der Erziehungswissenschaft. Opladen, 91-104
Krämer, Sybille (1998a): Von der sprachkritischen zur medienkritischen Wende? Sieben Thesen zur Mediendebatte als eine Einleitung in diese Textsammlung. In: Krämer, Sybille (Hrsg.): 1-3, Internet-Dokument, abrufbar unter: http://userpage.fu-berlin.de/~sybkram/medium/kraemer1.html [Stand: 2006-06-29]
Krämer, Sybille (Hrsg.) (1998a): Über Medien. Geistes- und kulturwissenschaftliche Perspektiven. Berlin, Internet-Dokument, abrufbar unter: http://userpage.fu-berlin.de/~sybkram/medium/inhalt.html [Stand: 2006-06-29]
Krämer, Sybille (1998b): Das Medium als Spur und als Apparat. In: Krämer, Sybille (Hrsg.): Medien, Computer, Realität. Wirklichkeitsvorstellungen und Neue Medien. Frankfurt am Main: Suhrkamp, 73-94
Krämer, Sybille (2004): Über die Heteronomie der Medien. Grundlinien einer Metaphysik der Medialität im Ausgang einer Reflexion des Boten. In: Journal Phänomenologie, Heft 22, 18-38
Krämer, Sybille (2005): Boten, Engel, Geld, Computerviren. Medien als Überträger. In: Paragrana. Internationale Zeitschrift für Historische Anthropologie, Themenheft: Körpermaschinen – Maschinenkörper. Mediale Transformationen, Bd. 14, Heft 2, Berlin: Akademie-Verlag, 15-24
Kübler, Hans-Dieter (2005): Mythos Wissensgesellschaft. Gesellschaftlicher Wandel zwischen Information, Medien und Wissen. Eine Einführung. Wiesbaden: VS Verlag für Sozialwissenschaften
Leschke, Rainer (2003): Einführung in die Medientheorie. München: Fink
Liessmann, Konrad Paul (2006): Theorie der Unbildung. Wien: Zsolnay
Löffler, Davor (o. J.): Über die Auswirkungen der Entdeckung der Zentralperspektive. Internet-Dokument, abrufbar unter: http://userpage.fu-berlin.de/~miles/zp.htm [Stand: 2006-06-29]
Luhmann, Niklas (1997): Die Gesellschaft der Gesellschaft.
Bd. 1, Frankfurt am Main: Suhrkamp
Margreiter, Reinhard (1999): Realität und Medialität: Zur Philosophie des „Medial Turn“. In: Medien Journal 23, Nr. 1, 9-18
McLuhan, Marshall & Fiore, Quentin (1967): The medium is the massage. New York: Random House
McLuhan, Marshall (1992): Die magischen Kanäle. Understanding Media. Düsseldorf u. a.: Econ [engl. Erstausgabe 1964]
Meyer-Wolters, Hartmut (2006): Denen die Daten - uns die Spekulation. In: Pongratz, Ludwig A.; Wimmer, Michael & Niecke, Wolfgang (Hrsg.):

Bildungsphilosophie und Bildungsforschung, 37-65, auch online abrufbar unter: http://www.uni-koeln.de/phil-fak/paedsem/meyer-wolters/pdf/0410-01.pdf [Stand: 2007-09-29]
Mitterer, Josef (1992): Das Jenseits der Philosophie. Wider das dualistische Erkenntnisprinzip. Wien: Passagen Verlag
Neumann-Braun, Klaus (2000): Medien – Medienkommunikation. In: Neumann-Braun, Klaus & Müller-Doohm, Stefan (Hrsg.): Medien- und Kommunikationssoziologie. Eine Einführung in zentrale Begriffe und Theorien. Weinheim/München: Juventa, 29-40
Niedermair, Klaus (2000): Ist Medienkompetenz die Meta-Kompetenz in einer individualisierten und globalisierten Lebenswelt? In: Hug, Theo (Hrsg.): Medienpädagogik in der Globalisierung / Media Education and Globalization. (= H. 2 / 2000 der Zeitschrift SPIEL) Frankfurt am Main u. a.: Lang, 175-189
Pietraß, Manuela (2003): Medienkompetenz als “Framing”. Grundlagen einer rahmenanalytischen Bestimmung von Medienkompetenz. In: Medienwissenschaft Schweiz, Heft 2, S. 4-9
Pirner, Manfred L. & Rath, Matthias (Hrsg.) (2003): Homo medialis. Perspektiven und Probleme einer Anthropologie der Medien. München: Kopaed
Platon (1986): Phaidros oder Vom Schönen. Ditzingen: Reclam
Pongratz, Ludwig A.; Reichenbach, Roland & Wimmer, Michael (Hrsg.) (2007): Bildung – Wissen – Kompetenz. Bielefeld: Janus, auch als WWW-Dokument abrufbar unter: http://duepublico.uni-duisburg-essen.de/servlets/DerivateServlet/Derivate-16159/sammelband2006v1e.pdf [Stand: 2007-09-09]
Raessens, Joost (2006): Playful Identities, or the Ludification of Culture. In: Games and Culture. A Journal of Interactive Media. Vol. 1, No. 1, 52-57
Reese, Stephen D. (2001): Prologue - Framing Public Life: A Bridging Model for Media Research. In: Reese, Stephen D.; Gandy, Oscar H. Jr. & Grant, August E. (Eds.): Framing Public Life: Perspectives on Media and Our Understanding of the Social World.
Mahwah, NJ: Lawrence Erlbaum Associates, 7-31
Reinmann, Gabi (2007): Stellungnahme zu den „Empfehlungen zur Weiterentwicklung der Kommunikations- und Medienwissenschaften in Deutschland“ durch den Wissenschaftsrat vom 25. Mai 2007. WWW-Dokument, abrufbar unter: http://medienpaedagogik.phil.uni-augsburg.de/denkarium/wp-content/uploads/2007/06/Wissenschaftsrat_Juni07_Medien_Kommunikation_Text.pdf [Stand: 2007-09-09]
Rusch, Gebhard (2002): Medienwissenschaft als transdisziplinäres Forschungs-, Lehr- und Lernprogramm. In: Rusch, Gebhard (Hrsg.): Einführung in die Medienwissenschaft. Konzeptionen, Theorien, Methoden, Anwendungen. Wiesbaden: Westdeutscher Verlag, 69-82
Scheuerl, Hans (Hrsg.): Klassiker der Pädagogik. 2 Bde., München: C.H. Beck
Schimank, Uwe & Volkmann, Ute (Hrsg.) (2000/2002): Soziologische Gegenwartsdiagnosen. 2 Bände, Opladen: Leske + Budrich
Schmidt, Siegfried J. (1999): Blickwechsel. Umrisse einer Medienepistemologie. In: Rusch, Gebhard & Schmidt, Siegfried J. (Hrsg.): Konstruktivismus in der Medien- und Kommunikationswissenschaft. (= Delfin 1997) Frankfurt am Main: Suhrkamp, 119-145
Schmidt, Siegfried J. (2000): „Kalte Faszination“. Medien. Kultur. Wissenschaft in der Mediengesellschaft. Weilerswist: Velbrück Wissenschaft
Schmidt, Siegfried J. (2002): Von Bildung zu Medienkompetenz – und retour? In: Kappus, Helga (Hrsg.): Nützliche Nutzlosigkeit. Bildung als Risikokapital. Wien: Passagen, 63-96
Schmidt, Siegfried J. (2005): Lernen, Wissen, Kompetenz, Kultur. Vorschläge zur Bestimmung von vier Unbekannten. Heidelberg: C. Auer
Schmidt, Siegfried J. (2006): „Zwischen Fundamentalismus und Beliebigkeit.“ Gesprächsnotizen. Auszüge aus einem Gespräch zwischen Thomas Feuerstein und Siegfried J. Schmidt, das im Oktober 2005 stattfand. Erstabdruck in: Thoman, Klaus (Hrsg.) (2006): Thomas Feuerstein. Outcast of the Universe, Wien: 221-226; auch als WWW-Dokument abrufbar unter: http://www.myzel.net/Narration/schmidt.html [Stand: 2006-02-09]
Schönherr-Mann, Hans-Martin (2008): Bildung im Zeitalter des weltbildenden Bildschirms – Ein Essay. In: Hug, Theo (ed.): Media, Knowledge & Education - Exploring new Spaces, Relations and Dynamics in Digital Media Ecologies. Proceedings of the International Conference held on June 25-26, 2007. Innsbruck: Innsbruck University Press (forthcoming)
Schultz, Oliver Lerone (o. J.): Augmented Embodiment. Excavated Dynamics of Media Theory – Critical Inlets to the (Second) Medial Turn.
In: Hug, Theo (Hrsg.): Mediatic turn – Claims, Concepts and Cases. Frankfurt am Main u. a.: Lang (im Erscheinen)
Schütz, Alfred & Luckmann, Thomas (1979): Strukturen der Lebenswelt. Band 1, Frankfurt am Main: Suhrkamp
Sesink, Werner (2004): In-formatio. Die Einbildung des Computers. Beiträge zur Theorie der Bildung in der Informationsgesellschaft. Münster: Lit
Shields, Rob (2003): The Virtual. London / New York: Routledge
Slade, Christina (2002): The Real Thing. Doing Philosophy with Media.

Frankfurt am Main u. a.: Lang
Smudits, Alfred (1994): Medien, Kodierungen, Mediamorphosen. Die mediale Formbestimmung der Kommunikation. In: Semiotische Berichte, Jg. 18, 1-4
Smudits, Alfred (2002): Mediamorphosen des Kulturschaffens. Kunst und Kommunikationstechnologien im Wandel. Wien: Braumüller
Sonesson, Göran (1997): The multimediation of the lifeworld. In: Nöth, Winfried (ed.): Semiotics of the Media. Proceedings of an international congress, Kassel, March 1995. Berlin & New York: Mouton de Gruyter, 61-78, abrufbar auch im Internet unter: http://www.arthist.lu.se/kultsem/sonesson/media_1.html [Stand: 2005-05-15]
Tenorth, Heinz-Elmar (1994): „Alle alles zu lehren“. Möglichkeiten und Perspektiven allgemeiner Bildung. Darmstadt: Wissenschaftliche Buchgesellschaft
Tenorth, Heinz-Elmar (2003): Klassiker der Pädagogik. 2 Bde., München: Beck
Vander Wal, Thomas (2006): Memephoria. Weblog-Eintrag in „vanderwal.net“, abrufbar unter: http://www.vanderwal.net/random/entrysel.php?blog=1829 [Stand: 2006-06-19]
Volkmer, Ingrid (1999): International Communication Theory in Transition: Parameters of the New Global Public Sphere. Internet-Dokument, abrufbar unter: http://web.mit.edu/comm-forum/papers/volkmer.html [Stand: 2006-06-29]
Volkmer, Ingrid (Ed.) (2006): News in Public Memory. An International Study of Media Memories Across Generations. Frankfurt am Main u. a.: Lang
Wallmannsberger, Josef (1998): Notate und Anzettelungen zu einer arte povera virtueller Informierungen. In: Hug, Theo (Hrsg.): Technologiekritik und Medienpädagogik. Zur Theorie und Praxis kritisch-reflexiver Medienkommunikation. Baltmannsweiler: Schneider-Verlag Hohengehren, 189-199
Weber, Stefan (1999): Die Welt als Medienpoiesis. Basistheorien für den ‘Medial Turn’. In: Medien Journal 23, Nr. 1, 3-8
Westphal, Kristin (1998): Grundlegende bildungsphilosophische Überlegungen zu einer Theorie der medialen Erfahrung und Bildung. In: Bauer, Walter u. a. (Hrsg.): 97-116
Willke, Helmut (2005): Symbolische Systeme. Grundriss einer soziologischen Theorie. Weilerswist: Velbrück
Wissenschaftsrat (2007): Empfehlungen zur Weiterentwicklung der Kommunikations- und Medienwissenschaften in Deutschland. WWW-Dokument,

abrufbar unter: http://www.wissenschaftsrat.de/texte/7901-07.pdf [Stand: 2007-09-09]
Zuboff, Shoshana (1988): In the age of the smart machine: the future of work and power. New York: BasicBooks

Endnoten

1 Vgl. z.B. die Angebote im Rahmen der Ringvorlesung „Medien & Bildung“ (http://mms.uni-hamburg.de/blogs/medien-bildung/), der internationalen Tagung „Media, Knowledge & Education“ (http://medien.uibk.ac.at/mwb/) oder der heurigen Herbsttagung der DGfE-Kommission Bildungs- und Erziehungsphilosophie zum Thema „Medien, Technik und Bildung“ (http://dgfe.pleurone.de/termine/bildungundmedien/).
2 Der Ausdruck ‘Blackboardization’ wird gebraucht als “idiom for processes of norming learning cultures, trivialization of complex issues, misleading of trustful users, selling an e-learning approach as mother-of-all e-education, normalization of restraints, and implementation of structural bondages by asserting one Learning Content Management system as proprietary solution of priority” (Hug/Friesen 2007: 24).
3 Vgl. dazu auch die Einträge unter http://medienpaedagogik.phil.uni-augsburg.de/denkarium/?p=150.
4 Bei den nachfolgenden Abschnitten handelt es sich um eine überarbeitete Fassung der Argumentationen in Hug (2007).
5 Analoges gilt für eine detaillierte Auseinandersetzung der hier vertretenen Auffassung mit dem Medienkompaktbegriff von Siegfried Schmidt im vorliegenden Band.
6 Vgl. dazu die Erkundung des “blinden Flecks im Mediengebrauch” in Auseinandersetzung mit den Medientheorien von McLuhan und Luhmann von Krämer (1998b: 73-94).
7 Eine etwas anders akzentuierte Auflistung von „Basistheorien für den ‘Medial Turn’“ hat Weber (1999) erstellt.
8 Schultz (o. J.) setzt den ersten medial turn in den 60er Jahren mit den Arbeiten von Marshall McLuhan, Douglas Engelbart, Dick Higgins u. a. an und kritisiert im Zusammenhang des „second medial turn“ der 90er angemessene Bezugnahmen auf diese Arbeiten im deutschen Sprachraum.
9 Für alle, die sich über die unterschiedlichen Schreibweisen (message vs. massage) des Aphorismus im Text und im Buchtitel (McLuhan/Fiore 1967) wundern, mag das Ergebnis der Recherche von Melanie Goux (2003) erhellend sein. Sie hat bei Michael McLuhan angefragt und bekam folgende Antwort: „Much to my surprise, Michael McLuhan, Marshall’s son, responded. He wasn’t sure of the reason, so he forwarded my note to his brother Eric, who responded last week: »Actually, the title was a mistake. When the book came back from the typesetter, it had on the cover ‘Massage’ as it still does. The title should have read ‘The Medium is the Message’ but the typesetter had made an error. When Marshall McLuhan saw the typo he exclaimed, ‘Leave it alone! It‘s great, and right on target!’ Now there are four possible readings for the last word of the title, all of them accurate: ‘Message’ and ‘Mess Age,’ ‘Massage’ and ‘Mass Age.’«” (Goux 2003, kursiv im Org.)
10 Die Arbeiten von Cassirer sind zusammen mit denen von Saussure und Luhmann auch zentraler Referenzpunkt für die Theorie „Symbolischer Systeme“ von Helmut Willke (2005). Allerdings werden hier materiale Aspekte eher am Rande behandelt, während Margreiter das Augenmerk konsequent auf das Zusammenspiel von Symbolsystem und Medium richtet (Margreiter 1999: 16f).
11 Thomas Vander Wal hat diesen Ausdruck geprägt und charakterisiert ihn wie folgt: “Memephoria is the state of believing that spreading the latest meme as a mantra will earn you transcendence. Memephoria does not require one to understand the term used in the meme mantra, just to use it (correctly or in-correctly)” (Vander Wal 2006).
12 Vgl. den Foliensatz zum Thema “The Competence of Framing Media Literacy and Media Literacy as Framing Competence”, abrufbar unter http://web.mit.edu/comm-forum/mit5/papers/hug_handout.pdf.


Media Studies, Media Theory, or Media Philosophy? Claus Pias, Vienna

Introduction

The institutionalized academic study of media is young, even if it may by now seem otherwise to some of its pioneers. Even measured against other young academic disciplines such as art history or cultural studies, it has barely left the delivery room. Yet it arrives straightaway in the plural: "the media studies" (die Medienwissenschaften), whose family resemblance is often slighter than the shared surname suggests, have experienced an unparalleled boom over the last fifteen years, a veritable founding boom that is all the more astonishing given that the financial latitude of the universities seemed at the same time to grow ever tighter. That more and more scholars and institutions, not least under the pressure to design research eligible for third-party funding, therefore decide to bet on the attractiveness of the buzzword "media" is understandable enough, and the consequences are plain to see. By now, "media studies" (despite its highly varied forms, now in the singular) is established at some 45 German universities. This number must further be multiplied by the number of new degree programmes, which has grown drastically with the blanket introduction of BA and MA degrees. In North Rhine-Westphalia alone, a single federal state, more than 50 different media degrees are currently on offer.1 Added to this are numerous collaborative research centres and graduate schools. Pessimistic observers may already discern in this a supply arrangement without demand. The German Council of Science and Humanities (Wissenschaftsrat), at any rate, recently convened an expert commission on the future of media studies and drew some consequences in its recommendations of 25 May 2007 [WR 2007]. Reactions within the community ranged from mildly resigned sighs to open contradiction [GfM 2007]. Abbreviated and pointed, the most important pieces of advice read:

1. stop renaming all manner of institutes and chairs with the prefix "media" and instead pose the media question from within the respective disciplines of origin;
2. stop inventing ever more highly specialized hyphenated BA degrees in "media" and reserve the media question as a whole for the MA phase or graduate schools;
3. a stronger focus on epistemological questions is desirable, especially where it proceeds in research cooperation with media technology and computer science.

Beyond this, the Wissenschaftsrat introduces the new designation "mediality research" (Medialitätsforschung), which is linguistically only a limited success and will probably not catch on, but behind which stand plausible considerations.2

The following remarks do not presume to justify the recommendations of the Wissenschaftsrat, although on several points they arrive at similar conclusions. Nor do they claim to give an overview of the barely surveyable terrain of media studies research and its various approaches or "schools". Instead, they offer an idiosyncratic attempt, shaped by my own academic biography and by experiences with founding media studies institutions, to determine what the question concerning media means, in which contexts it can be posed, and for which historical reasons this is so. This promises at least a certain contouring of concepts such as media theory, media studies, and media philosophy.

1. There Are No Media

The question of media studies is not only a question of academic strategy but first of all one of the systematics of knowledge: what is the object of media studies? Unlike, say, German studies (Germanistik), which at its inception could still self-assuredly name "German national literature" as its object, media studies deals with a highly inhomogeneous domain of objects. And this is very probably not because it has not yet reached the operating state of a "normal science"; rather, it constitutes its distinctive peculiarity. For the foreseeable future, then, no consensual concept of the medium will be attainable. Light, water, and air; humans and animals; electricity and mass media; tables, doors, and windows; car, airplane, or railway; graphic notation systems and central perspective; telescope, gramophone, and measuring instruments; writing and voice: all of this and much more reaches into the object domain of media studies and becomes an object of its research, without thereby yielding an elementary definition of what a medium is. There is some reason to think that those attempts at clarification which by now promise a certain orienting and excluding function under the question "What is a medium?" [Münker 2003] are doomed to fail, because all too often they presuppose an ontological dignity instead of accepting media as a heterogeneous, variable, and historically unstable in-between. The expectation itself is plausible, since a science is usually expected (if only for reasons of research policy) to be able to name its object; yet in the case of media studies, its productivity rests precisely on the fact that it cannot. Media studies, one might conjecture, remain productive for as long as they do not settle what a medium is [Engell 2006]. Media studies are systematic, but neither unified nor explicit. Possibly they are a "rhapsodic science" whose findings do not join into a system and whose implicit logic discloses itself less from the things themselves than from a particular discursive strategy. In the course of institutionalization this understandably becomes a problem. Whoever studies media studies wants to know what he or she should read. Whoever pays for a degree programme in "media studies" wants to know what the students will be able to do afterwards. Hence, since the late 1990s, numerous introductions to media studies (tellingly, in the singular) have been published, intended to have a disciplining and canonizing effect, but for the most part they could proceed no differently than by placing various approaches, object domains, or schools side by side.

In our own attempt [Pias 1999] we tried to escape this dilemma in historical-epistemological fashion by assembling texts that presuppose neither a medium nor a concept of media, but rather first constitute the question of media, each in a different way. It is thus an attempt to observe, by way of examples, how something first becomes a medium at a particular time and within a particular problem situation. One example would be the computer, whose thematization shifted from epistemic thing (the early cybernetic phase) to technical thing (the 1960s) and then to "medium". What interests us about the computer as a "medium" is less the anthropological affronts of an "electronic brain" or its civilian and military uses as a calculator than the cultural or discursive conditions that it simultaneously creates and embodies. Such a becoming-media has different historical and social conditions and unfolds asynchronously:

In the USA, the protagonists of the personal computing movement, such as Ted Nelson and Lee Felsenstein, Alan Kay and Nicholas Negroponte, had around 1970 read the writings of Marshall McLuhan, who in turn had read Norbert Wiener, and they suddenly recognized that with computers they were dealing with media. For this it was not even necessary that McLuhan had engaged particularly intensively with computers (or "electronic brains"), only that he supplied concepts with which the function of media could be thought [Pias 2007]. In Germany, McLuhan had been read too (devoured "like Marx and Mao", as Friedrich Knilli, in 1971 the first professor of media studies in Germany, recalls), but he was not taken up in connection with computers. Nor was the work of Max Bense's school of information aesthetics, which transformed itself into semiotics around 1970. "The computer as medium" therefore became a topic in Germany only in the mid-1980s, that is, at a time when the PC no longer had to be culturally and technically invented but already sat on people's desks. University media studies, by contrast, began around 1970 first of all as film and television studies, on the media-technical basis of the video recorder [Zielinski 1985], which made film workable as an object in the seminar room (that is, showable and citable); in this sense it began as "video recorder studies". Although many more examples of this kind could be named, let us note: there are no media in a substantial and historically stable sense. Media cannot simply be reduced to apparatuses, practices, institutions, or forms of representation. We are dealing, rather, with a historical process, a "becoming-media" (as Joseph Vogl, in Deleuzian fashion, has called it), within which a heterogeneous ensemble of apparatuses, practices, institutions, or forms of representation unfolds medial functions. This becoming-media cannot be prejudged in its historical contingency, but it can be investigated in its epistemic effects [Hagen 2007].

2. Media Studies Is Not a Discipline

As problematic as the object domain of media studies is its disciplinary location. If one assumes that media are also whatever lies between the sciences and their objects, constituting and configuring those objects while itself becoming imperceptible, then the question of media addresses a problem of the critique of knowledge, and it does so not at a single disciplinary location but at several at once. It offers no promise of problem-solving; rather, it is a procedure for problematizing precisely those modes of representation, practices, apparatuses, and institutions of science within their respective disciplines. In this it resembles deconstruction or feminist theory, which function only within an always already ongoing scientific discourse and relate to it parasitically (in, be it noted, the best and most productive sense). Media studies is the onlooker at the fence of various sciences: it keeps to the margins (from where one gains a different perspective) and always has one leg outside the territory. It formulates questions that, in the best case, touch the basic concepts and occupational blind spots of the respective sciences, which (on Heidegger's observation) stop thinking as soon as they function. Nevertheless, it claims no outside and no elevated status as a meta-science. In this respect it serves as an immanent "renovation enterprise" [Münker 2003, 10] of the sciences, first of all of the humanities. Taking this argument seriously has several consequences. One would be that media studies is not a discipline but a particular kind of question that can arise in various disciplines. There would accordingly be, for example, a literary-studies question of media, an art-historical question of media, a medical question of media, a computer-science question of media, and so on, each of which, however, can at first be posed only from within literary studies, art history, medicine, or computer science.

That should be expected if only because this question presupposes a (historical and methodological) understanding of precisely those modes of representation, practices, apparatuses, and institutions of the respective disciplines whose knowledge and functioning are to be examined with regard to their mediality. Media studies cannot and must not fall below the level of those discourses whose medial constitution it problematizes. The image sciences (Bildwissenschaften), for example, which have been institutionalizing themselves academically since the "iconic turn" (or, in more European terms, the "pictorial turn"), embody this question of media for art history. The section "Computer als Medium" of the German Informatics Society (Gesellschaft für Informatik) embodies it within computer science. In both cases the movement is a tactical one, that is, it proceeds within sight of disciplinary routines and epistemic presuppositions, which remain in view even as one distances oneself from them. The options for interdisciplinarity, however, are considerably increased by the fact that, and insofar as, the form of the question is compatible. A historical example perhaps comparable to the situation of media studies is provided by the cybernetics of the 1950s and 1960s. What is interesting about cybernetics is that it constitutes less a discipline than an epistemology. It is neither itself a classical discipline with a delimited object domain or secured methods (and one needs at least one of the two), nor does it really stand outside the disciplines. Instead, it becomes effective within them. Through concepts such as "information", "feedback", or "cyborg" it led quite different disciplines, and from within themselves, to reformulate their knowledge, to reconceptualize it, or to revise their fundamental concepts. Economics (Tustin), anthropology (Bateson/Mead), ecology (Hutchinson), and many other disciplines arrived, under the sign of a cybernetic epistemology, at a critical and productive revision of their foundations. Interdisciplinarity came about through a shared set of models, figures of thought, or questions, which, however, always had to prove itself against the knowledge of the individual sciences. With this conception, however, cybernetics itself was unable to make the leap into institutionalization as an academic discipline, quite unlike (and this is the astonishing thing) media studies. Apparently neither the undefinable object domain nor the methodological critique of the disciplines' foundations was an obstacle to its success as a discipline. Media studies constitutes itself, as it were, by keeping no standing leg. That something thereby becomes a discipline which cannot be a discipline is what currently makes for the problematic (if not paradoxical) situation of the freshly established field.

For to the extent that it cuts off its various disciplinary roots and puts down roots of its own, it loses its mobile and parasitic productivity. It loses the object of its critique as soon as it asserts itself as autonomous. And it is forced to develop a canon of methods and objects, although this ought to run deeply against its own logic. Students who study media theory from day one and then ask their lecturers whether they should not first learn philosophy or art history or computer science are not at all confused; they have grasped the situation clearly. The logical consequence of institutionalizing a dissident approach is to behave dissidently toward it in turn. Whoever is academically socialized in media studies and takes it seriously necessarily finds himself in the role of demanding classical disciplines or of problematizing media studies itself. Permanent reflection, to follow a phrase of Helmut Schelsky's, is hard to institutionalize [Schelsky 1965]. Hence strange alliances are now forming: those who once wanted to do everything differently meet those who always knew better. Bourdieu would have enjoyed this.

3. Media Studies Is a Science

That media studies is nevertheless a science (Wissenschaft) should be stressed for at least two reasons (for there are, after all, students who justifiably ask why we do not place them in television jobs). First, the founding of media studies degree programmes, institutes, or even faculties in the 1990s stood under the premise of contemporary relevance. The economic boom of the "new media" made it strategically advisable to emphasize the "now" of media, through which these foundings were to prove themselves politically worthy of support. This nourished the supposition that media studies might be a professionally qualifying affair, a training that creates jobs, and given the advance praise, little serious effort was made to deny this imputation. By now, however, it has become clear that media studies too is a science, one that is just as much or as little a vocational training as any other (humanities) discipline. Historically it does not stand alone in this. English studies (Anglistik) arose in the nineteenth century for the simple reason that England was becoming ever more "important", which is to say it was at the mercy of the contingency of dominant political and economic interests [LiLi 2003]. In its first decades it was thus not actually clear what the object of English studies was (philology, translation, area studies, literary interpretation, and so on), but its continued existence was secured by its contemporary relevance alone. Second, the scientific character of media studies must be defended precisely because it is so problematic as a discipline. A discipline requires (as Wolfgang Coy summarized) teaching, research, and nationwide organization. The organization of media studies is still unclear.

There are rooms, positions, and money, but the different approaches from German studies, philosophy, sociology, economics, law, the engineering sciences, and smaller units such as film, theatre, and library studies add up, at first, to no more than a collection of specializations. This is why, for example, there are practically no generally recognized journals of media studies.3 The teaching is correspondingly disparate. The most recent reform of study programmes toward BA and MA degrees, in particular, has unfortunately erased many productive differences and nominally smoothed over exciting points of friction. It is therefore not the exception but by now the rule that someone studies "media studies" at place A and encounters only quantitative social-science methods, transfers to place B, where "media studies" is pursued from a strictly technology-historical perspective, or comes upon a "media studies" at place C that takes its orientation from Anglo-American cultural studies. What is the same on paper, and nowadays begins everywhere with "media", proves on site to be completely incompatible. That is anything but a disadvantage and should therefore not be denied but intensified; after all, why should anyone still change universities if the same thing is on offer everywhere? At the same time, it must also be noted that media studies research is enormously productive and operates at a high level of originality. This, one would consequently have to conjecture, also has to do with the fact that those now of publishing academic age have all not themselves studied media studies, or could at least still to a considerable degree help determine what media studies was to mean. As a science, in any case, media studies will have to prove in the coming years whether it can do more than point, by way of examples, to what the classical disciplines were blind to, and whether it can move continuously at this level of reflection.


4. Media Scholars Could Not Have Invented Media Studies

It is not a cult of origins but merely a banal historical observation that those who, conceptually and through their research approaches, sketched out what media studies might be did not and could not come from media studies. This holds for the literary scholar Marshall McLuhan as much as for the historian Elizabeth Eisenstein or the Germanist Friedrich Kittler. It is technical upheavals, social changes, or epistemological reflections within particular fields of knowledge that prompt one to think media. This too, in a way, illuminates the contradictions of the institutionalization of media studies. The media question presupposed, at least historically, that it thematizes something that was hitherto not thematizable within a science yet very much co-determined the "bias" [Innis 1964] of its knowledge; and it thematizes at the same time why it could not become thematizable. Presumably things cannot go on so romantically forever, although one can still observe the enormous pressure of acceleration under which media studies stands precisely through this dynamic of epistemological self-reflection. The modest insight to be gained from this observation is that whoever turns to the media question should not undercut the standards of the discipline in which it is posed. On the contrary: one must know them well, thematize them, and then go one better; that is, one must be inside and outside at once, so that a surplus of insight and critical capacity arises. This, if only owing to quantitative limits, occasionally falls by the wayside. A doctoral student who calls himself a "media historian" and believes that Germany around 1900 was "absolutist" will not be taken seriously by historians (and with good reason), even if he may otherwise raise questions that would deserve to be taken very seriously.

The same goes for image scholars without basic knowledge of art history (to say nothing of technical knowledge), or for media philosophers who no longer read, say, the classics of epistemology and the philosophy of science, or of the philosophy of technology and language. It goes without saying, however, that a discipline exposed to media-studies critique does not remain the same. We are thus back at that paradoxical point where one protects disciplinarity and problematizes it at the same time, reserves and extends it, presupposes and deconstructs it. It is therefore anything but absurd to defer the media question to post-BA study, or to introduce it only in an accompanying role, even if this may entail financial losses.

5. Media Studies Is Not the Same as Media Studies

What has been and is being merged disciplinarily under the name "media studies" is not only extremely different in system and method but also of different historical provenance. The discourse-analytically and technically oriented media studies propagated most resolutely by Friedrich Kittler in the 1980s is only one of its components. It is by no means representative of the totality of media studies, but rather an ambitious minority programme. Its specific provocation was aimed at the methods, concepts, and ideas of German studies, out of which it sought to drive "spirit" (Geist) and hermeneutics in the name of a technical discourse analysis of media (assisted by readings of Heidegger, Foucault, and Lacan) [Winthrop 2005]. Its language ("Kittler German"), its theoretical gesture, and a certain corpus of dissident readings have themselves (at least to a certain extent) become school-forming. For media studies as an institution, however, other and older movements play a no less significant role. One is Publizistik, which emerged during the First World War as an offshoot of economics. It first appeared as "newspaper science" (Zeitungswissenschaft), a subject that had compromised itself in the Nazi era and produced no untainted new generation of scholars [Meyen/Löblich 2006, 64]. Renamed "Publizistikwissenschaft" in the 1950s, it institutionalized itself, after a social-scientific turn in the 1960s, as "Publizistik- und Kommunikationswissenschaft" (communication studies). For professional-pragmatic reasons its methods were and are empirical and social-scientific, and not least for political reasons it celebrated considerable successes (at least since the rise of opinion polling).

For in coming to terms with National Socialism (in which it was itself deeply entangled), the role fell to it of shifting guilt onto the media. The demonstration that media manipulate us to the point of incapacitation was an essential component of a strategy of self-exculpation whose consequences can still be felt today. Conversely, it turned out that if one ascribes such power to "the media" (always meaning the classical mass media), one can also earn good money with "media consulting". If media are so powerful, good advice is both expensive and in demand. Communication studies thus soon had a methodologically secured research terrain, a financially secured position inside and outside the university, and a secure public presence to show for itself. The other movement is the (much younger) field of film and television studies, which began to unfold at universities in the late 1960s. It resulted not least from a crisis in the understanding of the objects of scholarship in the course of working through the Nazi past of German studies. In coupling aesthetics, technology, and the critique of ideology, but also with a view to practical media work, it discovered the political dimension of everyday life and popular culture and elevated hitherto little-esteemed phenomena to the rank of serious scholarly objects (marriage advertisements, leaflets, spaghetti westerns, pornography, television series, and so on). Its methods, however, were decidedly not empirical and social-scientific but mostly hermeneutic and work-oriented, or philological, and have over time developed great variety. Of film and television studies too, then, one can say that over time they were able to develop a certain canon of objects and methods and to establish themselves as a discipline (not as "big science", but in a classical way). One must therefore at least ask what it means when these different things are now (under whatever pressure) all filed under "media", thereby extinguishing productive differences across the board, at least nominally. This is also (but not only) a political error under the pressure of budget cuts.

***

Returning to the three concepts of the title, one might say that media studies is the thoroughly problematic (if not paradoxical) status and name of an academic discipline. Media theory, by contrast, is rather an event that has frequently occurred in history, though only in the recent past under this title: from the oft-cited Platonic critique of writing, through Christian doctrines of revelation or theories of the ether, from painting and film to scientific experimental arrangements, and so on. Media theory thus need not be science-shaped in our modern sense; it need not take place at universities, and it need not be set down in books and essays. It is therefore not bound to an academic media studies, but it can very well become the latter's preferred object within a historical-epistemological inquiry (for instance within media history or a medially oriented study of science). This understanding of media theory as a reflection on mediality before or beyond university media studies differs fundamentally (be it noted in passing) from the widespread view of sociological media research, which understands by media theory the methodical systematization of research procedures. This is the motor of many misunderstandings between the humanities and the social sciences. Media philosophy, after what has been said so far, would be the domain of discourse that poses the media question in a specific, namely philosophical, way (and coming from philosophy), as, each in a different manner, Herbert Hrachovec, Frank Hartmann, Dieter Mersch, Mike Sandbothe, or Joachim Vogel have done (coming from different philosophical directions such as analytic philosophy, pragmatism, or aesthetics). At the same time, one could also say that the question of the mediality of knowing and cognition is at its core already a philosophical question, which would make media philosophy an exemplary place (among many others) for thinking about media. This by no means implies that university philosophy actually poses the media question in the sense proposed here. On the contrary, philosophical epistemology is often anything but media philosophy. If one trusts a current introduction, epistemology is the "systematic analysis and elucidation of the concepts concerning cognition and knowledge" [Schnädelbach 2002, 28]. The same holds for ethics, which all too often treats with little attention and little expertise those media-technical changes that first raise certain ethical questions and configure them in a specific way.

The same can be said of political philosophy and the philosophy of science, a few exceptions aside. Against the primacy of the concept, a media philosophy would have to take into account that ensemble of aesthetics, practices, apparatuses, and institutions which, in historically contingent ways, gives us to think and to know, and which thereby also co-determines those concepts readily taken for "purely" philosophical ones. This means neither dissolving philosophical questions into technical or historical ones, nor that philosophy would be in a position to instruct other sciences in matters of the critique of knowledge, nor does it imply repeating the claims of the philosophizing natural scientists of the nineteenth century. It means only that philosophy can take up the historical, (cultural-)technical, and material aspects of mediality without having to renounce specifically philosophical standards and without having to lose philosophical questions.

Endnotes
1 Cf. http://www.medienstudienfuehrer.de and http://www.medienhochschulkompass.de/ (accessed 07/2007).
2 What is meant is »kulturwissenschaftliche Medienwissenschaft« as distinguished from »sozialwissenschaftliche Kommunikationswissenschaft« and »Medientechnologie« (WR 2007, 7f.). The approach of a »mediology«, little discussed in the German-speaking world, is mentioned only once in passing [cf. Hartmann 2003].
3 Except perhaps for the journal Medienwissenschaft, founded in 1984 (http://www.staff.uni-marburg.de/~medrez/index.html), whose focus for a long time lay on film and television studies.

References
[Engell 2006] Lorenz Engell, »Nach der Zeit«, lecture at the Institute of Philosophy, University of Vienna, 13 December 2006 (audio stream at: http://homepage.univie.ac.at/claus.pias/aktuell/WasWarenMedien/WasWarenMedien.html)
[GfM 2007] Harro Segeberg, »Stellungnahme der GfM«, in: Medienwissenschaft. Mitteilungen der Gesellschaft für Medienwissenschaft e.V., Hamburg 2007, pp. 20-22.
[Hagen 2007] Wolfgang Hagen, »Wie ist eine eigentlich so zu nennende Medienwissenschaft möglich?«, lecture at the Institute of Philosophy, University of Vienna, 6 June 2006 (audio stream at: http://homepage.univie.ac.at/claus.pias/aktuell/WasWarenMedien/WasWarenMedien.html)
[Hartmann 2003] Frank Hartmann, Mediologie. Ansätze einer Medientheorie der Kulturwissenschaften, Vienna 2003
[Innis 1964] Harold Adams Innis, The Bias of Communication, Toronto 1964
[LiLi 2003] »Konzeptionen der Medienwissenschaften«, Zeitschrift für Literaturwissenschaft und Linguistik, issues 132 (2003) and 133 (2004)
[Meyen/Löblich 2006] Michael Meyen/Maria Löblich, Klassiker der Kommunikationswissenschaft. Fach- und Theoriegeschichte in Deutschland, Konstanz 2006
[Münker 2003] Stefan Münker/Alexander Roesler/Mike Sandbothe (eds.), Medienphilosophie. Beiträge zur Klärung eines Begriffs, Frankfurt a.M. 2003
[Pias 1999] Claus Pias/Lorenz Engell/Joseph Vogl, Kursbuch Medienkultur. Die maßgeblichen Texte von Brecht bis Baudrillard, Stuttgart 1999 (5th ed. 2005)
[Pias 2007] Claus Pias, »Asynchron. Einige historische Begegnungen zwischen Informatik und Medienwissenschaft«, in: Informatik-Spektrum, 30/6 (2007)
[Schelsky 1965] Helmut Schelsky, »Ist Dauerreflexion institutionalisierbar?«, in: id., Auf der Suche nach Wirklichkeit, Düsseldorf 1965, pp. 250-275
[Schnädelbach 2002] Herbert Schnädelbach, Erkenntnistheorie zur Einführung, Hamburg 2002
[WR 2007] Wissenschaftsrat, Empfehlungen zur Weiterentwicklung der Kommunikations- und Medienwissenschaften in Deutschland, Oldenburg, 25 May 2007
[Winthrop 2005] Geoffrey Winthrop-Young, Friedrich Kittler zur Einführung, Hamburg 2005
[Zielinski 1985] Siegfried Zielinski, Zur Geschichte des Videorecorders, Berlin 1985

Media Philosophy—A Reasonable Programme?
Siegfried J. Schmidt, Münster

1. Media philosophy: discourse or discipline?
It is beyond any doubt that media have an enormous impact on our media-culture societies. Media influence our perception and our knowledge, our memory as well as our emotions. They create public spheres and public opinions and give rise to media realities. Media shape our socialisation and our communality. They transform economy, politics, science, religion and law. "What we know about our society, even about the world we are living in, we know via the mass media." (Luhmann 1996:9; my translation) Accordingly, "the media" have become a paramount subject of interdisciplinary discourses all over the world in recent decades. All these developments have become topics of scientific analyses as well as parts of media programmes. For decades, various academic disciplines have focused on an other-observation ("Fremdbeobachtung") of the media from an external standpoint, whereas the media increasingly tend to observe themselves as well as one another in order to turn this self-observation into parts of their respective programmes. The other-observation is carried out either by scholars of communication and/or media theory or by philosophers; but whereas the former are organised in academic disciplines, no established discipline entitled "media philosophy" exists to this day. Instead, the various approaches to a philosophical analysis of media are heterogeneous and lack a solid theoretical basis as well as a disciplinary organisation. Some scholars even hold the view that media do not fall within the province of philosophers at all.1 Some people deeply regret this deadlock, with regard not only to topics and discourses but also to future jobs and positions for scholars of a discipline "media philosophy" to come. Others welcome this stalemate, which leaves room for creative solutions of thematic as well as organisational matters. Let us take a brief look at some of the foreseeable options.
One of the current media-philosophical approaches concentrates its efforts on a reformulation of traditional philosophical topics in the framework of media efficiencies. The list of such topics is rather long and covers nearly all of the famous crucial subjects of philosophical discourse, reaching from reality, truth, culture, society, education or politics to time, space, emotion, the subject or entertainment. This kind of rethinking or reformulating of philosophical topics concentrates upon the question of how, in the co-evolution of media systems and society, our daily experiences as well as our theoretical modellings of these topics have changed on the historical way from writing to the Internet. A few examples may suffice.
• Consider the fundamental change which all concepts of time and space, as well as all daily experiences of and with time and space, have undergone since the introduction of the Internet.
• Consider the stepwise implementation of research topics such as media pedagogy or media psychology in the evolution of media-culture societies.
• Consider the questioning of all concepts and experiences of reality initiated by electronic simulation and virtual realities.
Today, zapping and infotainment are philosophical topics just as serious as the transformation of politics into media performances or attempts to replace ecclesiastically based religion by a "TV mass". Another media-philosophical approach is primarily concerned with the technical aspects of media and their efficiencies. It is argued that the traditional concept of man has faded away. Man has turned into a mere appendix of technical systems. His body (no more than tedious "wetware") is step by step replaced by hardware taking over the relevant functions. The logic of machines substitutes the traditional order of knowledge. New kinds of social relations arise in globally operating networks; new play-cultures are developed.
The digitalisation of democracy, the basic transformation of our modes of perception in the context of virtual realities, or even the transition of our "death culture" into virtual memorials on the Internet appear on the agenda. These few examples already make clear that a mere list of topics, concepts or organisational devices does not suffice to establish a (or the) future "media philosophy". Too many basic concepts wait for consensual definitions; too many selection problems regarding necessary, indispensable or merely additional topics of a future discipline "media philosophy" are still unresolved. Above and beyond that, one cannot be sure how the universities will develop in the near future in an international context of globalisation and informatisation. And, last but not least, we no longer believe in any kind of finality, except perhaps the finality of transition.2

In view of this situation, I shall in the following try to deal with those basic problems in the discourse about "the media" which I consider sufficiently essential to attract the attention of all scholars of media research, no matter how this research domain will be named and organised in the future.

2. Language and/as medium?
Nearly every scholar participating in the discourses on media advocates his or her own media concept, and these concepts range from light, sand and stone to technical distribution media or symbolically generalised communication media (sensu N. Luhmann) such as love, money or power. Yet most scholars agree that, of course, language is a medium, if not the "mother of all media". In the philosophy of science it is widely accepted that definitions of concepts should be judged not by their truth but by their acceptability and usefulness in the relevant discourses. Accordingly, I propose to draw a distinction between language and medium.3 Language I model as a system consisting of material items which can serve as semiotic instruments. These language components can be syntactically combined with one another into texts. Texts can be used in cognitive as well as in communicative processes in system-specific ways. Accordingly, language can be observed from two perspectives: from a process-oriented perspective, the use of language (viz. speaking) is a specific social activity co-ordinating human beings; from a meaning- or sense-oriented perspective, language is used by speakers to construct meanings in their cognitive domains as well as to initiate understanding processes in the social domain of communication. The materiality of language embodies socially stabilised experiences with the use of the respective signs and texts in relevant contexts. Due to their socialisation and to socially successful uses of language material, the native speakers of a language know precisely which cognitive operations are supposed to be attributed to the use(s) of a sign in a socially expected manner. (In this respect I still adhere to L. Wittgenstein's theory of meaning as successful use of language.) The production of meaning on the occasion of the perception of language material is bound to the cognitive domains of individuals.
Since the production of meaning is necessarily determined by the presuppositions and the conditions of action of the respective cognitive system, we must assume that meaning production is a highly subject-dependent process. Nevertheless, even cognitively autonomous individuals are able to communicate successfully because, due to their socialisation, they all refer in a comparable way to collective (cultural) knowledge which brings about a social co-orientation of subject-dependent cognitive operations. Hence the materiality of language, cognitive processes and communication form a mutually constitutive framework of interactive dependencies ("Wirkungszusammenhang") in the sense of General Systems Theory (Schlosser 1993). As many authors since H. R. Maturana have stated, language serves the purpose of coupling (in a purely structural way) the separate dimensions of cognition and communication. Texts or utterances, as highly structured language offers, engender socially expected cognitive and communicative processes and orient them semantically without being able to causally enforce the specific results a speaker or writer has intended or expected. We cannot step back behind our socialisation, especially not behind our linguistic competence, which can briefly be characterised as a complex social competence. We necessarily rely upon these competences whenever we make use of language material in cognition or communication. For this reason, our relation to our reality is fundamentally characterised by communicativity. Communication can only take place if partners are involved; and all communication processes prepare cooperative actions (cf. Janich 2006:260 ff.). Knowledge resulting from actions transforms experiences into expectations. Both experiences and expectations are determined by the distinctions and descriptions a specific language offers its users; in other words, they are conditioned by communicativity. Communication is performed on the basis of collective knowledge which communication partners impute to one another.
In other words, reflexivity can be regarded as the basic mechanism which enables communication, although the communication partners are endowed with closed (self-organising) cognitive systems which can neither be directly observed nor intentionally steered.


3. A systems-oriented concept of 'medium'
In view of the variety of definitions and models of and for 'medium' used in the current discussion, a concept of 'medium' is needed which meets the following conditions:
• it has to be as unequivocal as possible
• it has to be rendered plausible by empirical applications
• it has to allow for relevant differentiations in the domain of observation
• it has to be systems-oriented in order to avoid open or merely additive concepts.

As set out on many other occasions, I conceive of 'medium' as a compact concept ("Kompaktbegriff") which integrates four dimensions and areas of effect:
• communication instruments (such as languages, non-verbal behaviour or gestures)
• technological devices (such as print, TV or Internet technology on the side of receivers and producers)
• the social-systemic institutions operating such devices (such as publishing houses or television stations)
• media offers which result from the coalescence of these components and can only be interpreted with reference to this complex context of production.
The cooperation of these four components is modelled as systemic and self-organising. In these cooperation processes no component may be disregarded. Communication instruments such as languages are distinguished from media because they can be used in all media. It therefore makes sense to use the difference between communication instruments and media in order to observe and describe the differences in the uses of these instruments in the different media. An example here could be the Internet as a hybrid medium. The systemic interplay of the four components named above I call a medium system. Examples of medium systems are the print system, the broadcasting system, the television system or the film system. The entirety of all medium systems available in a society I call the total media system of that society. In the various medium systems, different action domains have arisen which mutually constitute each other. Generally, four of these domains can be observed, viz. the production, distribution, reception and post-processing4 of media offers. In these domains, different action roles have been developed which in the course of history have been professionalised and differentiated so as to support a division of labour, e.g. the role of the author, the player, the art director or the media agent. These action roles can be filled either by individual or by collective role-takers such as teams or target groups. It is in the ordered cooperation of actions and communications that media offers are fabricated. Therefore, they have to be regarded as system-specific results of processes and not as autonomous entities; this argument significantly bears on all kinds of analysis, interpretation and evaluation of media offers. The concept of action roles has the advantage that it can be applied to all medium systems. It enables exact observations of the differences between medium systems and stimulates synchronic as well as diachronic research. In addition, it is strictly systems-oriented, enables an empirical analysis of media offers and demands the differentiation between self-observation and other-observation (or observation from outside).

The systematisation of the aspects and operations in media systems (called media processes) which could and should be observed and described in media analyses refers to:
• the components of medium systems (communication instruments, technical devices, social organisations, media offers)
• action roles (production, distribution, reception, post-processing)
• reference systems (technique, economy, politics, law, socioculture)
• reaches (regional, national, international, global)
• directions of observation (diachronic, synchronic)
• kinds of observation (descriptive, normative).
All processes going on in a medium system are oriented by those subsystems of the culture of a society (= media culture) which shape media processes. The argumentation presented so far leads to the following hypothesis: the evolution of the total media system of modern media-culture societies from writing to the Internet has fundamentally changed our relation to the world and our modes of communication. This change can be described as a transition from communicativity to mediality.5

4. Influences, efficiencies, causalities: What does 'mediality' mean?
In the discussion about the mediality of our relation to the world, various positions compete with one another. The range of hypotheses extends from "man dominates the media" to "man has become a function of media techniques".6 This discussion suffers from a remarkable ambiguity of crucial terms like 'influence', 'efficiency' or 'causality'. In view of this situation, let me recall some plausible trivialities.
(a) Even the most developed semiconductor-driven systems are (still?) produced by humans, no matter what is hidden behind the user's interface. If nobody uses these technical systems they are worthless, and so far humans (still?) decide upon the meaning of these uses, and not (yet?) the technical systems themselves. One of the aficionados of the primacy of technique, Rudolf Maresch, quite recently remarked that the machines lack what defines man: imagination, the subconscious and emotionality of experience. "Information might be collected, stored and transferred in computer centres, but only in the human brains which operate them does information become knowledge."

(2006:5; my translation)
(b) I fully subscribe to Maresch's position. On the other hand, it should not be overlooked that technical devices are by no means neutral components of media systems. Since Marshall McLuhan, many scholars have emphasised that medium systems exercise structural effects on their users which are independent of the effects the semantic contents of media offers can trigger. Knut Hickethier has summed up such structural effects:
• construction and standardisation of our time concepts and our time experience
• insight into the semiotic nature of our relation to the world
• steering of attention
• shaping of emotions
• ranking of important and unimportant things
• presentation of kinds of behaviour
• orientation of socialisation and social adaptation (2003:230 ff.).
(c) Both communication instruments and all media since the emergence of writing have on the one hand expanded our forms of perception and on the other hand disciplined them in relation to the various medium-specific conditions of perception and use. This explains why there are literates and illiterates for every medium.
(d) Media offers are therefore not independent objects but the results of rather complex production, distribution and presentation processes following the economic, social and technical conditions of the respective medium systems. In other words, medium systems are necessarily conditioned by their system-specific logic. This also holds true for media actors' concepts of events, persons, data or objects beyond the respective medium system. By media processes, such events, persons etc. are transformed into media facts which result from medium-specific references to the reasonable and relevant presuppositions of all activities in the respective medium system. Accordingly, medium systems create and distribute media facts, and not representations of facts or events in "the reality".
It is worthwhile to keep this in mind in any discussion about media and reality.7
(e) As already mentioned, linear causal interventions into cognitive or social systems are not possible, because these systems can only operate according to their system-specific presuppositions and working conditions. It is therefore implausible to apply models of linear causality to the analysis of the relation between humans and media systems. More than 30 years of rather unsuccessful research on media effects underpin this view.

The assumption that media provide actors and societies with objective information and knowledge neglects the constitutive and strictly complementary role of the recipients. Media offers do not transport knowledge, meanings and values; instead, they offer actors well-structured semiotic materials which can then be used by actors for the production of meaning, knowledge or evaluation in their respective biographical and social situation. Herein might lie an explanation for our sparse knowledge about the actual effectiveness of media offers. It therefore seems plausible to apply models of co-evolution and enabling conditions. The history of media reveals that new technologies have only succeeded when a relevant number of users made use of them. Only then could new needs of communication be developed and served at the same time, thus necessarily changing the relation to the world of users as well as of non-users of the new medium. Walter Ong and Eric Havelock have provided evidence for this hypothesis; Elizabeth Eisenstein as well as Michael Giesecke have specified it for the printing press. They all show that the development and the success of a new medium system can be regarded as the creation as well as the formation of new societal needs. For this reason, I propose to work with models of circular causality which sufficiently respect the reflexivity of all conditions of the emergence and the acceptance of new media and their applications, including the observation of structural as well as semantic effects. So far, my considerations regarding 'mediality' can be summed up as follows. By their activities in media systems, humans create media-worlds (or media-realities) which compete with one another. Media systems work as observing and describing systems which do not start from "the reality" but from former descriptions of reality, which accordingly are transformed into or followed by new descriptions.
On this note, the description of reality and the reality of description coincide. This argument resolves the tedious question regarding the relation of media and reality. Due to their system-specific logic, media cannot represent an extra-medial reality in an objective way. Instead, they can only produce and present medium-specific realities. In this view we can observe a threefold observing constellation:
• Recipients of media offers observe what medium systems observe and how this observation is realised and presented. In other words, they (can) develop a competence of second-order observation by observing observers. This second-order competence necessarily reveals the contingency8 of all observations and descriptions: other things could have been described, and the descriptions could have been different.
• Reflexivity of observation and description also holds true for the medium systems themselves, which observe and describe one another with regard to the media offers produced as well as to the interests and modes of observation and description.
• Media observe and describe society, but in turn society observes the running of the media. Here the question arises how much second-order observation both sides can endure and whether and how they can make use of it in a creative way.
The insight into the mediality of our relation to the world gives rise to two basic problems:
• From an epistemological point of view, we nowadays are mostly concerned with realities adopted from the media, normally without considering the system's logic, i.e. the conditions of production and reception of the respective medium system.
• From a socio-political point of view, we have to respect the power which medium systems consciously or unconsciously (still?) exert upon recipients due to their sovereignty of public observation and description. Television, for example, still constitutes an important factor in influencing or even defining central categories of our social orientation such as democracy and freedom, terrorism and resistance, or power and violence, but also emotion and taste, appearance and property, gender and partnership. We know from various empirical studies how many young people use the daily TV soaps as instruments for orienting their own lives. Accordingly, programme makers bear a heavy load of responsibility which they must either accept or publicly reject, for whatever reason.9 (Events like the publication on the Internet of amateur photographs showing the maltreatment of Iraqi prisoners in Abu Ghraib make us realise how important the sheer availability of pictures can be in a medium system which is globally accessible and (not yet?) censored; even the US Senate had to accept these pictures as pieces of evidence.)
The discussion about the mediality of our relation to the world has quasi-automatically provoked a debate between epistemological realists and constructivists (sometimes far removed from argumentative fairness) about the relation between reality and representation. In the following I shall therefore briefly comment on this debate, nota bene not with the intention of solving this problem but of resolving it.10


5. Media and reality, or: On the seduction by unobserved dualisms
In recent years some philosophers, first of all Josef Mitterer11, have argued that the history of European philosophy has been, and still is, widely dominated by the postulation of seemingly evident dualisms such as language/reality, subject/object, media/reality, description/object of description, facts/statements or perception/object of perception. In 1998, Martin Seel once again propagated the resolution of the opposition realism/constructivism within the framework of his "philosophical realism", which, unfortunately, is itself based upon a basic dualism. I quote: "Only because there are objects which exist independently of our knowledge can media have cognising access to objects." (1998:352; my translation) In opposition to this seemingly irrefutable argument, Josef Mitterer has claimed that the dualism of object and description must and can be resolved, because the description and the object of description are one and the same. In our discourses (and that is the domain of our living, acting and communicating) we fabricate descriptions of objects so far which serve as starting points for further descriptions from now on. This argument can be reformulated using another terminology. Let us take as an example the processes of perception or description. An actor performs a perception process in the course and as a result of which he perceives something as something. In a process of description, an actor describes something which appears as an object of this description. In other words, these are three-part processes in which no single component is independent: perceiver, perception process and perceived are mutually self-constituting. This analysis reveals that all events and actions relating to human consciousness are system-specific, since they are tied to context-specific operations of actors. In other words, talk of objects can only mean talk of objects-of-perception or objects-of-description.
The actor must not be disregarded, and actors consist of bodies and brains. Objects, as Werner Heisenberg once said, are relations, references or posited references. In the light of these considerations, I propose to switch deliberately from the description of (identical) objects to the analysis of (complex) processes. Accordingly, I no longer ask whether or not an object X exists or whether our perception and description of X is true or false. Instead, I ask which process is running under which conditions and presuppositions, who performs this process, and what kind of results arise for the actors involved in this process.

Following this argumentation, Seel's seemingly clear argument can be resolved. To quote Seel: "Although the objects of recognition cannot be given to the perceiver in a language-independent way, they can exist independently of all perceiving. The earth existed long before any thought about the shape of the earth." (1998:36; Seel's emphasis, my translation) As a counter-argument let me quote from (the non-constructivist thinker) Carl Friedrich von Weizsäcker: "Whenever we speak seriously of reality, we speak of reality; if nobody speaks of reality, reality is not talked about." (1980:42; my translation) The interesting point here is that von Weizsäcker neither asserts nor denies the existence of reality. Instead, he points to the insight that without someone referring to reality, reality is not part of a discourse; but only in discourses can we talk about the existence or non-existence of reality or whatever else. Both Mitterer and von Weizsäcker emphasise that we, as human beings, can only act in the place (here and now) of discourses. In these discourses we can assert the existence of anything and everything in a beyond of discourses: statements we make here and now by reference to statements or descriptions which have been produced so far. This argumentation can be backed by the following consideration12: whatever we do, we do it in the gestalt of a positing or supposition ("Setzung"). We do this (something), not that (something else), although we could have done the latter. A supposition always takes on a certain gestalt, for us as well as (should we be under observation) for others: it is a supposition of type A and not of type B, C, M or X. As far as we can judge within a lifetime, every single supposition that we make here and now has been preceded by other suppositions to which we (can) relate, more or less consciously, as presuppositions ("Voraussetzungen"). All our suppositions to date therefore form a context of suppositions in given concrete situations.
We can refer to this context now by way of memories and narratives. This context of suppositions comprises the totality of our prior life experiences, which will in turn affect our future experiences, in terms of expectations, in every concrete situation. Every supposition makes at least one presupposition. As a rule, however, many presuppositions are made or drawn upon by a supposition. The nexus between supposition and presupposition is auto-constitutive, as neither can be meaningfully envisaged without the other. Supposition and presupposition are therefore strictly complementary. The presupposition of a supposition can only be observed in reflexive reference to the supposition. If one accepts the auto-constitution of supposition and presupposition, then one also accepts that there can be no beginning exempt from a presupposition. The only possible beginning is to make a supposition.

Whether we perceive or describe something, ponder something or become consciously aware of something as something particular, we are always executing a serious game of distinctions. We (and not anyone else) describe (and do not explain) something as that particular something (and not as something else). In doing so, we make use of linguistic resources whose semantic potential and social acceptance are tacitly presumed and, at the same time, confirmed by this very use as "viable" (i.e. as manageable or successful in the sense of E. von Glasersfeld). All this is realised (meaning nothing but: all this we can envisage or think in this way only, not in any other) as a happening in a particular situation at a particular point in time, i.e. in a context of suppositions. Suppositions constitute contingency because they must be selective with regard to other options. As selections they are decisions, and only qua decisions do they make contingency observable. This means that selection and contingency must be envisaged jointly; they constitute each other; they are strictly complementary. Let me recapitulate: all our cognitive and communicative processes are suppositions which rely on presuppositions. The most important presuppositions in this respect are language and media, modelled in terms of frameworks of interactive dependencies which interrelate materialities and possible semantic contents in a systemic way, followed by collective cultural knowledge as the basis and outcome of socialisation. Due to this cultural knowledge, which opens up the range of reflexivity in terms of expected expectations and imputed imputations, cognitively autonomous individuals are able to cooperate and to communicate with one another. By culture I understand the problem-solving programme of societies which orients the activities of actors and is in turn confirmed and stabilised by these oriented activities.
Discourses function via the co-presence of materiality and meaning-construction processes. This contemporaneity defines the mediality of our relation to the world. Language is inseparably bound to materialities; media are necessarily bound to technicality. For this reason there is no escaping communicativity and mediality. I therefore consider it plausible to characterise media-oriented societies as media-culture societies, which deserve a thorough analysis in the framework of media-cultural studies—whatever that framework will be called in future times.


6. Media science or media philosophy?
In their extensive research report, Christian Filk et al. (2004) have described the different approaches towards "a" media philosophy. As already mentioned at the beginning, rather controversial arguments have been put forward. Some authors claim that philosophy has always been media philosophy avant la lettre, which is simply to be prolonged into the future. Others call for the establishment of a new neo-pragmatist discipline able to solve practical problems in our society (Sandbothe 2001). F. Hartmann, on the other hand, recommends organising media philosophy as an interdisciplinary research platform and not as an academic discipline. A short look at the long-lasting efforts to install a new discipline called Kulturwissenschaft immediately reduces the attractiveness of the idea of founding a new academic discipline called "media philosophy". It is well known that intra-disciplinary conflicts, the permanent fight for a reasonable distribution of money, personal rivalries etc. frequently destroy creativity and engagement in social organisations such as academic disciplines. In addition, the subject "media & mediality" is of such importance that we need not fear it might fall into oblivion if nobody administers it ex officio. These considerations recommend an interdisciplinary approach to media problems in the organisational context of a research programme.13 In such a context the observation and description of all aspects of mediality from the perspectives of various disciplines can be organised according to the problems arising from the development of the total media system of our society. Media development does not wait for a media philosophy which notoriously runs late.
Neither the established media and communication sciences nor a philosophy which integrates the media into its research programme should take over full responsibility for the topic "mediality", the less so since both are still deeply rooted in their dualistic epistemological traditions and still maintain old-fashioned quasi-alternatives like theoretical/empirical or empirical/hermeneutic. Prospectively, we need extensive empirical research into the full range of aspects of media processes, on the basis of explicit theories, concepts and methods which allow for second-order observations and legitimate themselves via consequential results. Such results concern all kinds of participation in media processes, in the cognitive as well as in the social domain. They should help us to extend our critical and creative use of media. Media research must become aware of its autological character, which is to say that media can only be studied in media and the results must be presented in media, too. This insight has not been realised in those philosophies and sciences which have been determined by writing and books. As soon as we realise that there are no contents outside the media, we have to accept that research in media has to draw deliberately on all possibilities of observation and description offered by all media. In the times to come, new concepts of science and aesthetics, of rationality and creativity, should and surely will be developed in order to serve the needs of a media research programme we can only imagine today. Historical research has revealed the co-evolution of media systems, societies and individuals since the invention of language and writing. This development and its impact on the full complexity of our living in media-culture societies should be the grand subject of media research, including all its cognitive, emotional, moral and social aspects. Of course it should not be forgotten that all kinds of media research need active researchers and financial resources. So the question arises whether or not well-established scholars will be able and ready to orient their research interests towards aspects of mediality (which is difficult to believe, given the teaching load in times of the Bologna process), and whether or not new academic positions will be created for this specific research topic (which is not very likely, given the economic situation of the universities). The best solution would be to establish coordinators to organise interdisciplinary media research—but this issue is first and foremost a political one. However these questions will be answered, we face the following situation: If it is true that any society gets the media system it deserves (just because the media are social constructions for the construction of societies and their realities), then we need a system of observations and descriptions of all media which is capable of solving this societal problem. We are only just starting to develop such a system.
Nevertheless, I am sure that the solution of this problem is among the most crucial tasks of our media-culture societies.

Notes
1. See e.g. Margreiter 2007.
2. See Schmidt 2007.
3. See Schmidt 2000.
4. 'Post processing' covers all processes in which media offers are transposed into new media offers, e.g. screen adaptations of novels, the scientific analysis of daily soaps or all kinds of media critique.
5. See Krämer 1998 or Margreiter 2007:17.—This transition also concerns face-to-face communication, which can only be described in contrast to communication mediated by media and their impact upon direct communication.
6. See Maresch 2006.—In communication studies this alternative has been discussed since Blumler & Katz 1974, who coined the famous questions: "What do people do with the media?" and "What do media do with the people?"
7. See Schmidt 2005a and Pörksen 2006.
8. According to the philosophical tradition, everything which is neither necessary nor impossible is called 'contingent'.
9. See Schmidt 2005.
10. See the details in Schmidt 2007, 2007a.
11. See Mitterer 1992, 2001.
12. Schmidt 2007.
13. The same proposal has been published by the German Wissenschaftsrat in May 2007.

Literature
Blumler, J. G. & E. Katz: The Uses of Mass Communication. Current Perspectives in Gratification Research. Beverly Hills: Sage Publications 1974.
Filk, Chr., Grampp, S. & K. Kirchmann: "Was ist 'Medienphilosophie' und wer braucht sie womöglich dringender: die Philosophie oder die Medienwissenschaft? Ein kritisches Forschungsreferat." In: Allgemeine Zeitschrift für Philosophie H. 29/1, 2004: 39-65.
Hartmann, F.: Medienphilosophie. Wien 2000.
Hartmann, F.: Mediologie. Ansätze einer Medientheorie der Kulturwissenschaften. Wien: WUV 2003.
Hickethier, K.: Einführung in die Medienwissenschaft. Stuttgart/Weimar: Metzler 2003.
Janich, P.: Kultur und Methode. Philosophie in einer wissenschaftlich geprägten Welt. Frankfurt/M.: Suhrkamp 2006.
Krämer, S.: "Das Medium als Spur und als Apparat." In: S. Krämer (Hrsg.), Medien, Computer, Realität. Wirklichkeitsvorstellungen und Neue Medien. Frankfurt/M.: Suhrkamp 1998, 73-94.
Luhmann, N.: Die Realität der Massenmedien. Opladen: Westdeutscher Verlag, 2. Aufl. 1996.
Maresch, R.: Medienwissenschaft statt Philosophie? Die medienwissenschaftliche Revolution in Deutschland entlässt ihre Meisterschüler. http://www.heise.de/bin/tp/issue/r4/dl-artikel2.cgi? Abruf 21.12.2006.
Margreiter, R.: Medienphilosophie. Eine Einführung. Berlin: PARERGA 2007.
Mitterer, J.: Das Jenseits der Philosophie: wider das dualistische Erkenntnisprinzip. Wien: Passagen-Verlag 1992.
Mitterer, J.: Die Flucht aus der Beliebigkeit. Frankfurt/M.: Fischer 2001.
Münker, St.: "After the medial turn. Sieben Thesen zur Medienphilosophie." In: Münker, St., Roesler, A. & M. Sandbothe (Hrsg.), Medienphilosophie. Beiträge zur Klärung eines Begriffs. Frankfurt/M.: Fischer 2003, 16-26.
Pörksen, B.: Die Beobachtung des Beobachters. Eine Erkenntnistheorie der Journalistik. Konstanz: UVK 2006.
Sandbothe, M.: Pragmatische Medienphilosophie: Grundlegung einer neuen Disziplin im Zeitalter des Internet. Weilerswist: Velbrück 2001.
Schlosser, G.: Einheit der Welt und Einheitswissenschaft. Grundlegung einer allgemeinen Systemtheorie. Braunschweig/Wiesbaden: Vieweg 1993.
Schmidt, S. J.: Kalte Faszination. Medien, Kultur, Wissenschaft in der Mediengesellschaft. Weilerswist: Velbrück Wissenschaft 2000.
Schmidt, S. J.: "Medienentwicklung und gesellschaftlicher Wandel." In: M. Behner et al. (Hrsg.), Medienentwicklung und gesellschaftlicher Wandel: Beiträge zu einer theoretischen und empirischen Herausforderung. Wiesbaden: Westdeutscher Verlag 2003: 135-150.
Schmidt, S. J.: Kognitive Autonomie und soziale Orientierung. 3. Aufl., Münster: LIT 2004.
Schmidt, S. J. (Hrsg.): Medien und Emotionen. Münster: LIT 2005.
Schmidt, S. J.: "Fernsehwirklichkeiten: Eine systematisch verzerrte Debatte." In: P. Chr. Hall (Hrsg.), Info ohne -tainment? Orientierung durch Fernsehen: Kompetenz, Relevanz, Akzeptanz. Mainz: ZDF 2005a, 19-31.
Schmidt, S. J.: Histories & Discourses. Rewriting Constructivism. Exeter: Imprint Academic 2007.
Schmidt, S. J.: Beobachtungsmanagement. Über die Endgültigkeit der Vorläufigkeit. Audio-CD, 80 Minuten + Booklet 8 S. Köln: supposé 2007a.
Seel, M.: "Bestimmen und Bestimmenlassen. Anfänge einer medialen Erkenntnistheorie." In: DZPhil, Berlin 46 (1998) 3, 351-365.
Weizsäcker, Chr. Fr.: Der Garten des Menschlichen. Beiträge zur geschichtlichen Anthropologie. München: Hanser 1980.
Zierold, M.: Gesellschaftliche Erinnerung. Eine medienkulturwissenschaftliche Perspektive. Berlin: de Gruyter 2006.

Section 2: Philosophy of the Internet Philosophie des Internets

Science of Recording
Maurizio Ferraris, Turin

Introduction
In what follows I would like to recommend the creation of a new undergraduate course, or rather a brand-new academic program, with all its MAs, BAs, and PhD programs, dean and faculty, freshmen and freshwomen, applications, debates in newspapers, government commissions, areas of study, Erasmus programs, faculty impact factors, and so on. The name of this new academic program, or undergraduate course, would be "science of recording", and it would be designed to address the new needs of our society, which, in entering the new century, has to face the challenge of a world that was mistakenly thought to be dominated by communication (and, unfortunately, by all sorts of wannabe orators) and that unexpectedly discovers itself to be dominated by recording. Don't worry, I am just kidding. I have no such intention. My aim is simply to illustrate the significance of writing and recording in our world, and to point out that this significance has sometimes been overlooked, causing perspectival mistakes that, at least in my opinion, should be corrected; and there is no better occasion to start doing so than at a conference on the "Philosophy of the Information Society". There is no doubt that our society, and almost certainly every conceivable society, is a society of information: one cannot live without knowing; not even Robinson Crusoe could. A fortiori, in a complex society one cannot live without knowing. It is often thought that this means that we live in a society of communication, both in the sense that communication is necessary for society and in the sense of an unprecedented expansion of communication in our time. I doubt both contentions. Of course, a society must communicate in order to exist; but communication alone does not suffice. Indeed it seems a function that is subordinated to something more essential, namely recording.
The idea I would like to sketch here is that we live in a society of recording, and that this is the condition of possibility of a society of communication and, of course, of information. By saying that we are in a society of recording I simply mean that if we look at all the transformations that have characterized our time, we find that they have mainly happened in the realm of recording and not in that of communication.


Ontology of actuality
Let us begin by considering actuality. On 20 January 2008 the New York Times published an article on the new Japanese teen trend of reading novels (written and distributed) on cell phones. One would make a mistake in dismissing such a story as just another oriental weirdness, like the story according to which Japanese people sleep in cylinders. No doubt it is a weird story; however, its weirdness has nothing to do with the Far East, but has to do with all of us. This is because the story plainly contradicts a conviction that is typical of our time—young people don't read books, they just watch TV—and seems to deny the alleged primary function of phones, namely their being instruments designed for speaking, not for reading novels. Let us turn the page, so to say, but keep our eye on the New York Times. There was a lot of discussion some months ago about the announced disappearance in 2013 of the paper edition of the New York Times. However, this piece of news, though of symbolic significance, is not so surprising. Indeed, while the New York Times announces the end of its paper edition, entire forests are destroyed to produce the paper on which countless free publications are printed, publications which owe their existence to computers. Isn't it curious? We can dispense with paper thanks to computers, but will the "print" function ever be removed from word processors? It will never be removed—I bet—at least if we consider the moral of a further story that appeared in the newspapers: the IATA (International Air Transport Association) has placed its last order of paper tickets. From 2008 on, there will be no more paper tickets, only codes. Really? I don't think so. The right thing to say is that there will be codes that will be written on post-its, or printed on paper at the office, just as we already do with our e-tickets for trains and planes.
Once there are no official paper tickets, there will be a flourishing of homemade tickets, and the obvious reason for this is that it is hard to remember all those codes, and one cannot simply open one's laptop at the check-in desk to retrieve the code—for this would predictably irritate the people waiting in line. What we are confronted with is a somewhat paradoxical process: the disappearance (in principle) of paper, caused by the fact that it is possible to write "without paper", amounts to an invasion of paper unprecedented in human history. These are only three among numerous pieces of news whose subject is writing. These stories—which grab our attention more and more—often have the typical form of prophecies that have not been fulfilled, or of prophecies that have been fulfilled in the opposite way. Their characteristic mark is always the same: we were told that writing was going to be replaced by the radio, the television, and the telephone, and—here we are—writing and reading all day long, on our computers, cell phones and smartphones, which combine the functions of computers and phones. This prophecy has been followed by many others. For instance, that computers would be eaten alive by television (do you remember when they told us to use the TV screen as the screen of our computers?), when precisely the opposite happened. Or that paper would disappear from our tables, whereas it is precisely the possibility of writing without paper that has multiplied the quantity of paper with which we have to deal in our everyday life, from undesired advertisements to free newspapers and bargain books. And the prophecy according to which the postmodern world, the world of information, would be immaterial, while obsolete plastic and silica contribute to a great extent to the worsening of the waste emergency. All these wrong predictions are somehow suspect. We had better acknowledge an essential continuity in the phantasmagoria of technological transformations: what we are confronted with is the spreading—with or without paper—of something older than the Pyramids, namely writing, which is, and has been, the genuine vehicle of globalization, more so than jet planes. How is this possible? How is it possible that all current technology is essentially designed for the function of writing, i.e. recording? How is it possible (another not so implausible prophecy) that the control of energy resources is less valuable, from a political point of view, than the control of memory? In order to answer these questions it is not enough, I believe, to appeal to the power of technique.
Because even if it is true that technique engenders needs, technique also addresses existing needs that may sometimes be unexpressed, just as in the case of the explosion of writing: no one, neither computer makers nor phone makers, could have imagined what happened next. This means that what took place, somehow surprising us all, has to do with the very foundations of social reality.

Communication
Even if this is true, one might wonder whether there is something wrong, or at least exaggerated, in the two main presuppositions that have been at the core of reflection on the nature of society over the past century, namely that our society is a society of communication, and that the more synchronic communication is, the more effective it is (I speak, you listen, then you speak)—that is to say, that perfection in communication, as Plato already had it (the first philosopher to condemn writing), can be achieved by means of orality. On what was the idea based that modern times, more than any other, are characterized by an explosion of communication (an idea that has given rise to a great number of studies, disciplines and academic programs devoted to communication, which were simply unthinkable and nonexistent before the twentieth century)? Here is my answer. The idea behind the creation of something like a science of communication, or theories of information etc., is almost certainly the upshot of the introduction and impact of tools such as the radio (and its role in the great dictatorships of the Thirties and then in the Second World War), the first television broadcasts (which were not recorded), and telephones in every house. This system of communication was the last mark of modernity. But if this is true, why has there been such a huge comeback of writing? One could say that writing is already present in communication, for instance in newspapers. But we know that in the discourse about communication, newspapers were thought of as a fragile creature, threatened by television, a creature destined to disappear. In this case, too, the opposite has happened. Television shows have websites and can be watched offline. Moreover, we can read newspapers on computer screens, we can visit a radio or TV website, and access the infinite video resources of YouTube. The reflections on the science of communication go hand in hand with the ideal of dialogue, and the ideal of communicative transparency.1 The basic idea was that the task of philosophy was that of analyzing language and of enabling dialogue, the fundamental rationale of humanity. But things went a different way. On the one hand, the linguistic turn has ceased to be the crucial topic of philosophy (be it analytical or continental); on the other hand, no one talks about communicative transparency anymore, and this is not because human beings are evil, but simply because it could not be achieved.
Information does not look for, and does not need, transparency; and dialogue is a utopia that even its advocates do not practice. What happened instead is the triumph of control: the tracking of every single piece of information—ranging from our purchases on the web or in a supermarket to emails and phone calls—which increases the possibility of control.

Recording
The crucial point, then, is that what evidently prevails is not communication but, so to say, its condition of possibility, namely recording. In this phenomenon there is, somehow, a return to the origin, or better, a return to the essence. The first forms of writing were not intended for communication, but for the recording of debts and credits—nothing but a means to overcome the finitude of individual memory. But isn't this use of writing exactly what has driven its powerful comeback over the last thirty years? Since 1981 (the first PC), we have witnessed what I have already described as an explosion of writing, and this explosion is far from accidental. Mainly because what characterizes it is not writing as communication, but writing as recording; and precisely these kinds of phenomena have made something like globalization possible, which in turn is nothing but the possibility of overcoming the limits of synchronic communication and of transferring packs of recorded data, ranging from financial to personal data, from one part of the world to the other, thus transforming the orbis into an urbs. With the telephone this would not have been possible, just as it would not have been possible to make agreements and sign contracts (that is to say, to create social objects) without the parties involved in such contracts and agreements being physically present. We had better start from this consideration. Human beings are animals that communicate. But there would not be any communication without recording: would a communication between amnesiacs make any sense? Our task, at this point, is not to take recording for granted, in order to detect its real features—a task that is far from trivial. Communication is such a massive phenomenon that its condition of possibility and its purpose are pushed into the background (as I have just said, without the memory of what is communicated, without recording, there would be no communication; it would amount to talking past one another, and without recording in our heads we would have nothing to communicate). Indeed, there are three basic reasons behind the priority of recording over communication. Firstly, recording intervenes in fixing the subject of communication.
In fact, there are at least two good reasons for not communicating: having nothing to say, and not remembering what one wanted to say. As to their practical consequences, these two circumstances are equivalent, and this is why philosophers from Plato to Husserl have always stressed the necessity of fixing contents as the basis of knowledge and, after that, of communication. However, even if we do not consider hyperbolic scenarios like the one involving amnesiac speakers, it is enough to consider a messy speaker, who cannot focus on the content of her communications, muddling and confusing herself, and forgetting what she has to say. Almost certainly her defects would be perceived as defects in communication, but at their basis there would be defects in recording. Secondly, let us imagine that someone communicates with us in a non-fixed code, for instance by mixing different languages at random; very few things would be understandable. Or let us imagine that someone wanted to communicate with us by means of a private language, namely a language whose code nobody possesses, not even the speaker of that language. We would not understand anything, precisely as in the case of the messy speaker. And in this scenario, too, we would be dealing with something that looks, at least superficially, like a defect in communication, whereas the defect in question is one that concerns recording—in particular, that very important kind of recording that is codification. Finally, let us imagine being in a conversation with someone and formulating clear and distinct ideas in a perfectly fixed code. The other speaker, however, has taken a medicine that prevents her from recording segments of speech longer than a few words. By the time I am ending my utterance, the other speaker will have already forgotten its initial part, and when my utterance is complete, she will remember (for a very short time) only the last two words. No doubt such a speaker (who is completely unaware of being under the effect of a medicine, and who, even if she were told so, would forget it immediately) would believe that it is I who talk in a very confusing way and I who have serious defects of communication, whereas the truth is that it is she who has serious problems in recording. A first conclusion can be drawn from this simple thought experiment. By examining the classic model of communication,2 it is easy to see that there are elements in it that belong to recording. In other terms, only the function of contact, i.e. addressing someone, cannot be translated into terms of recording.

Code      → Recording
Addresser → Recording
Message   → Recording
Addressee → Recording
Contact   → Contact
Context   → Recording


Inscription
At this point, one can legitimately object that even if recording is important, what really matters is that what is recorded can manifest itself. In other words, aren't records that are hidden in our heads completely useless? Of course; but, first of all, a record isn't necessarily hidden in our heads. My shopping list is undoubtedly a record (since I did not intend to communicate what is written on it to someone else, and it is highly dubious that I intended to "communicate" its content to myself), but it is certainly not hidden in my head. Moreover, it can be read by someone else; but if the person who read it treated the list as a form of communication, he or she would be quite bizarre and somewhat indiscreet (unless this person is affected by one of those pathologies that make people think that everything talks to them, including laundry receipts and strangers' shopping lists). What I have called elsewhere3 "inscription in the technical sense" is a kind of record that is, at least in principle, accessible to others but is not necessarily destined to become communication, and I believe this is the answer to the objection according to which records are nothing if they are not communicated. If I ask someone the time and he or she answers, we have communication. But if I read the time of an ATM transaction on a receipt, can we really say that this is communication? There is a crucial difference, I believe. Recording is not only at the foundation of what we call "communication" but, contrary to appearances, at the very foundation of our social life. To illustrate this point, I propose two simple experiments: 1. keep all the tickets and pieces of paper collected in one day; 2. do the same on a one-week journey.
The quantity of paper collected would be very telling: a network of rights, obligations, possibilities, institutions and payments, encapsulated in documents, be they solemn documents such as the Magna Charta or more ordinary documents like parking cards, which nowadays are made of plastic. As I have argued elsewhere, the constitutive rule of social objects is Object = Inscribed Act: social objects are constituted by the records of acts that involve at least two persons and that are characterized by the fact of being inscribed on some physical support, ranging from marble to neurons, from computers to paper, of course. To summarize: if our biological life depends on DNA, our social life depends on another code, writing, whose function is to record, with or without paper, our social acts. This is why our future, even in the most extraordinary scenarios, will always require writing. Because many of the actions we perform in our everyday life are social acts, and such acts always leave a trace, be it the log with which telephone companies track our calls or the restaurant receipt, the train ticket or the taxi receipt, the sent email or the received one.

Imitation
If the hypothesis I am following can be accepted, the notion of inscription, which is a public, ordinary thing—something before everyone's eyes—could take the place of two rather esoteric notions that have been widely discussed over the last years: memes and collective intentionality. Let us begin with memes. Roughly thirty years ago,4 the social sciences started to take an interest in the notion of "meme": an information unit that can be replicated by a mind or some other support, and which constitutes the equivalent of the gene in genetics. The underlying idea is that a meme encodes information, and that it can be transmitted by imitation. Now, the biological metaphor, in my view, is not of any help, and neither is the idea of dealing with minimal units. From this point of view, appealing to inscriptions, which are things everybody knows about, has the same explanatory advantage as appealing to memes, without taking an almost science-fictional route. There are plenty of inscriptions in the world, and—metaphorically speaking—there are styles that remain impressed: behaviors, memories. This is how culture comes about, and—more specifically—how social objects come into existence, without thereby evoking esoteric entities. Inscriptions—both in a proper and a metaphorical sense (i.e. styles and the like, as hinted at above)—are transmitted by imitation (viz. through one of the most typical performances of writing: iteration), and this is all it takes to constitute a minimal social structure. It seems to me that here we have a very important intuition concerning society: society actually comes about by imitation and by iteration of behaviors (and this is a further proof of the fact that recording is more important than communicating). Now, let us consider this attentively.
If it is possible to explain the whole formation of social reality on the grounds of an imitation-iteration system—which is what inscriptions ensure—then we meet both the demands of common sense (which knows basically everything about social reality—think of envy, which is a form of imitation: this is what investments in advertising are based on), and those of philosophical and sociological reflection.5 Moreover, after having dismissed one obscure notion, we would be in a position to explain the whole of social reality without appealing to another notion, no less obscure than the former: the notion of "collective intentionality". Allegedly, collective intentionality6 is a weird kind of "social glue". It is what allows hyenas to cooperate in hunting a lion, and all of us to consider a 10-euro bill as possessing a value of 10 euros. Such a glue is not just peculiar; it is mysterious too.7 Now, it seems to me much simpler to ground society on imitation, and imitation on writing. In order to back up this thesis, think of what I have just said concerning imitation, and then consider the following counter-argument against collective intentionality: if the glue of social reality really is collective intentionality, what do we need documents for? Should we simply consider them memos of collective intentional states? This would be nonsense. If collective intentionality can do and undo basically everything at will—as in a sort of permanent revolution—why should we fix some of its states, which are doomed to be superseded anyway? (I wonder what Searle would think of the Trotskyist outcomes of his theory!) If things are not that way and we do indeed need documents, how can it be maintained that everything depends on collective intentionality? Let me give an example, in order to be clear. In Der Untergang there is a scene, after the battle of Berlin in April 1945, in which the last German defenders are enclosed in a fortified area; they are determined not to give up till the very end, and they clearly share the collective intention "we are at war and we will stand united till the end". At a certain point, the Russian soldiers arrive. For a long while everybody hesitates; then a German soldier drops his rifle, and one after another the others follow him and do the same. They all give up, and undoubtedly (since they are a group) at a certain point they have thought "we give up". However, the first one has surely thought "I give up". More importantly, the fact that everybody eventually gave up did not bring the war to an end.
To be sure, the war ended only after the surrender, first of the troops of the Berlin sector, then of all remaining German troops, had been signed. Therefore, what determined the social object "peace" was not collective intentionality, but a series of inscriptions. A last remark concerning collective intentionality. Recently, the so-called "mirror neurons" have been discovered and studied. These are neurons that fire both when an animal (a chimp) makes certain movements for a certain aim, and when it observes those very movements in the researcher or in another animal8. The idea was to find in these neurons the biological basis for collective intentionality. My point is whether it would not be more sensible to take them, more simply, as the basis of imitation. Actually, there is no reason to think that something more complex is going on here, and it would be an exaggeration to place the whole burden of social reality and its construction on a single kind of neuron. According to the hypothesis that I am putting forth here, the "imitation system" is based on mirror neurons, and this would not rule out that a related "inscription system" might be explained by a basis of "reading neurons"9, and by the evolution of the forms of inscription, along with bureaucracy and cultural systems: that is, the normal equipment through which we both explain the social world and act in it.

Power

Let me sum up what I have done so far, before coming to the most salient points of the paper. Actuality is characterized by an explosion of writing and recording with respect to which an explanation in terms of "communication" looks underdetermined. However, if we look at recording and inscription (as the public dimension of recording) insofar as they are constitutive elements of social reality, we are in a position to account for why writing is so predominant, and therefore we have an explanation of the actuality we are in. And we can do that without appealing to weird entities, but only to common-sense ingredients, which can show us how the social world is based on the rules of imitation and relies heavily on inscriptions of acts, up to the creation of a class of bureaucrats and the like, who after all are much more visible entities, with far clearer roles and functions, than memes and collective intentionality. Now, I would like briefly to explain in what way the theory of inscription I am presenting here can explain not only the construction of social reality but also, more specifically, the genesis of institutional reality, that is to say, of power. How is it possible that certain inscriptions can generate other inscriptions, and also endow them with a value? In other words, how is it that a document makes me a professor and authorizes me to hold exams, which is what other people need in order to get a degree, which in turn is what will authorize them to perform certain functions? How is it possible, a fortiori, that a document authorizes the Bank of Italy to issue banknotes that may be used all over Europe and that, if need be, can be used to generate other documents, for instance by buying stamps? I will explain this possibility on the basis of two elements, tradition and securitization. I start with tradition.
Both philosophy10 and common sense have always known the value of tradition, which guides our judgments through our prejudices, confers authority on practices that have been preserved through generations, and so on. Now, it would not be possible for a tradition to exist if recordings were not possible, be it in an oral form, as in the poems and rites of societies without writing, or in laws and documents, which often transfer the "holy" power of tradition (of "unwritten laws") into a more mundane space, that of bureaucracy, which, on the other hand, and as Kafka teaches us, is no less powerful than the traditional space. The clerks who are supposed to give us the forms to fill in manifest, through their rudeness, a power that is similar to that of sorcerers11. Even in bureaucratic societies we see a survival of traditional authorities, for instance in the rules of etiquette. These are institutional unwritten laws, which have been written down only on behalf of the uneducated, who are uneducated precisely because they need to read the rules of etiquette. Besides laws that must not be written (such as the rules of etiquette), there are secret societies whose laws cannot be written (although if we could we would do it, and often, very carefully, we indeed do). Usually such associations have complex rituals (Masonic initiations, the mafia code, etc.) that hold the place of writing. In any case, etiquette and secret societies are exceptions to the rule of "bureaucratization". Rather, they show how the development of a tradition of written laws is the most common outcome of the construction and distribution of power in modern societies. Let us now come to the role of securitization, which I understand not just in the economic sense (e.g. as applied to illiquid assets), but in a wider sense, with reference to the possibility of fixing values, activities, titles, appointments, and goods through writing. Here, by generalizing a hypothesis concerning the birth of capital12, my hypothesis is that the possibility of writing is at the base of the construction not just of capital, but of every form of power, which not by chance is always backed up by systems of distinction, classifications, hierarchies, and archives13. Strictly speaking, this is not merely a development of the idea that a society without a memory cannot be conceived; it encompasses a further element, the nexus between recording and power.
On the one hand, there is no power without an inscription that establishes it, be it a document in our pocket, an academic gown, a military rank, or a sachem's headdress. On the other hand, in the social world the recording of sensory data is obviously a source of power: indeed, one can acquire power simply by possessing certain recordings (think of telephone companies), or by recording what the customers of a supermarket buy, or more simply by having a list of addresses. In many cases there is privacy legislation that, at least in principle, protects such recordings, and this is quite telling about the power they confer on those who possess them (and it is not by chance that such recordings are usually the property of the State, namely the entity that, in modern societies, embodies legitimate power). A last observation on this. An argument e contrario on the role of inscription in the constitution of power may come precisely from what certain philosophers call "bare life"14, i.e. life without rights, life reduced to its biology. It is not by chance that such a life is called "bare", namely without inscriptions: without documents, clothes, mobile numbers, or credit cards. Such a life is described as void of any power, at the mercy of anyone; and it is not difficult to believe it if we think of the poor, of the sans papiers15, or of the inmates of the Lager. Now, it is easy to notice that the degree of powerlessness is inversely proportional to the number of inscriptions a subject has at his or her disposal. The poor person has no money, but has an identity, and thereby, at least in principle, rights; the sans papiers has no documents, but usually has a mobile phone, or certain documents whose validity is disputed; at the lowest level of the pyramid, void of any rights and powers, stands the Lager inmate. I take this to be a good example of how power depends on inscriptions.

Difference

I must now explain why recording is as powerful as I depict it. The underlying idea is that mere communication has no social import if it is not recorded. If someone communicates something to me and I immediately forget what I have been told, all I can do is ask them to repeat what they have just said and be more attentive, or (in case the object of communication is not easy to remember, say a telephone number or an address) get a pen and a piece of paper. Communication alone is worthless, and what I have said about information holds true (a fortiori) of an order or a promise, not to mention a performative, which would be completely meaningless if the element of recording were not present in communication. Still, we may ask how all that is possible. Let me come back to globalization and elaborate on it. As a matter of fact, for a long time I asked myself why, in a famous book16, Derrida coupled writing and difference, and I always found all his talk about différance17 cryptic and difficult to grasp. However, precisely the example of globalization (and all that it contains) can make the link between the two quite apparent, even trivial, and make the notion of différance concretely understandable. I will simply call it "difference", keeping in mind that this word refers not only to difference, but also to the act of differing, or deferring. I claimed before that what makes globalization possible is not the fact that goods and commodities travel, or that people at different places can communicate, but writing as a form of recording. Consider these three situations.

(1) A – travel – B

This is the model of physical transport, which has always existed and thus cannot be what produced globalization. If something from B arrives in A, or if you move from A to B, either way you are localized, not globalized.

(2) A – TV, telephone – B

You are in A and see B, or you speak with B. But at a certain moment, because of the time zones, you have to choose: either you are in A or in B. And indeed you stay in A.

(3) A – web – B

Through fast writing (the web) you are in A and in B at the same time, thanks to a difference (a postponement) that inscriptions make possible. Model (3), the model of globalization through writing, allows us to overcome the problem of time zones; it transmits recordings that are already stored in compact form, and it is the fastest and most prolific of the three. How is that possible? Indeed, we realize that the possibility of differing, of postponing, of avoiding the synchrony of exchanges is the basis not only of globalization, but of every kind of society. And this demonstrates that globalization is not an accident, but what society has been bound for ever since humanity dwelled in caves.

Notes

1. Habermas 1981.
2. Jakobson 1963.
3. Ferraris 2007.
4. Dawkins 1976.
5. E.g. the reflections by Gabriel Tarde on the "laws of imitation" (Tarde 1890).
6. Searle 1995, Tuomela 1995 and 2002.
7. I have discussed collective intentionality in Ferraris 2005, and in
8. Gallese 2003.
9. Dehaene 2007.
10. Gadamer 1960.
11. I owe the following comment to Giuliano Torrengo: "Bureaucracy in India is indeed massive (every 'official' action requires a lot of forms and documents, and generates as many of them). The bureaucrats (who are 10% of the population, sometimes including whole families) have real power, since by delaying or denying even just one step of the usually long procedure they can literally keep you stuck for days (and imagine if you need to reserve a train for the day after...). From this follows a weird mix of reverence and hatred on the part of the non-bureaucrats, and also a sort of 'submission' behavioural code, whose only purpose is to make the whole bureaucratic machinery work. The basic rules are: do not make the bureaucrat nervous or you won't get anything done, always claim that they are right, apologize all the time, etc."
12. De Soto 2000.
13. Bourdieu 1979.
14. Agamben 1995 (the expression is by Walter Benjamin).
15. Ferraris 2007.
16. Derrida 1967b.
17. Derrida 1968.

References

Agamben, G. (1995), Homo sacer. Il potere sovrano e la nuda vita, Torino, Einaudi.
Bourdieu, P. (1979), La Distinction, Paris, Éd. de Minuit.
Dawkins, R. (1976), The Selfish Gene, Oxford, Oxford University Press; Italian trans. by G. Corte and A. Serra, Milano, Mondadori, 1994.
Dehaene, S. (2007), Les neurones de la lecture, Paris, Odile Jacob.
Derrida, J. (1967b), L'écriture et la différence, Paris, Seuil.
–– (1968), "La différance", in Id., Marges de la philosophie, Paris, Éd. de Minuit.
De Soto, H. (2000), The Mystery of Capital: Why Capitalism Triumphs in the West and Fails Everywhere Else, New York, Basic Books.
Ferraris, M. (2005), Dove sei? Ontologia del telefonino, Milano, Bompiani.
–– (2007), Sans Papier. Ontologia dell'attualità, Roma, Castelvecchi.
Gadamer, H. G. (1960), Wahrheit und Methode, Tübingen, Mohr.
Gallese, V. (2003), "La molteplice natura delle relazioni interpersonali: la ricerca di un comune meccanismo neurofisiologico", Networks, 1, pp. 24–47.
Habermas, J. (1981), Theorie des kommunikativen Handelns, 2 vols., Frankfurt a.M., Suhrkamp.
Jakobson, R. (1963), Essais de linguistique générale, Paris, Éd. de Minuit.
Searle, J. R. (1995), The Construction of Social Reality, New York, Free Press.
Tarde, G. (1890), Les lois de l'imitation; new ed. Paris, Les Empêcheurs de Penser en Rond, 2001.
Tuomela, R. (1995), The Importance of Us, Stanford (CA), Stanford University Press.
–– (2002), The Philosophy of Social Practices, Cambridge, Cambridge University Press.


Weltkommunikation und World Brain. Zur Archäologie der Informationsgesellschaft
[World Communication and World Brain: On the Archaeology of the Information Society]
Frank Hartmann, Vienna

Philosophical Identity Crisis

The infrastructural network of telegraph communication was knotted together at those points that were most interesting in geopolitical as well as mercantile respects: along the trade routes, and until 1900 with London as the centre of the colonially organized world. Through these nodes the data streams of the modern form of life flow to this day; at them, in terms of transport and communications technology, "a world was knit and drawn together that had not existed before" (Schlögel 2003, 75). What does this new infrastructure mean for thinking? Surprisingly, this question was posed as early as the 19th century, and posed as a philosophical question about the new technology. In what follows, then, I am also concerned with an archaeology of media philosophy, in a material respect, not as a theoretical definition of concepts. For media are essentially not a matter of concepts or discourses, but of applications and infrastructures, of techniques and materialities. If the philosophy of the 20th century placed the conceptual representation of the world and the linguistic constitution of the lifeworld at its centre (the Linguistic Turn), the philosophy of the 21st century now faces concept-less numerical representations on surfaces, for which a Platonic effort, the critical distinction between being and appearance, no longer quite gets a grip. This Medial Turn will in consequence demand a new theory of perception in order to come to grips with the language of the new media (Manovich 2001), and a post-typographic epistemology (Giesecke 2007), which, however, must be distinguished from the discourses of picture theory. It is rather interesting to observe that media and networks are usually no topic for philosophy, because they supposedly do not really touch "enduring" philosophical questions and therefore count for some as a "passing matter" (Martin Seel). But the retreat into the security of "reflexive" textual worlds is not enough: from philosophers the public hopefully expects at least some spirit of contradiction in the face of the omnipresent net business. It also expects a discussion that is at eye level with its time, one that does not shy away from engaging with media technologies (cf. Debray 2003). There are not many examples in the history of recent philosophy that fulfil such an expectation, though first reconstructions are already being written (Margreiter 2007). They are found, moreover, rather among the outsider positions than in school philosophy, which since its post-idealist identity crisis has largely withdrawn from the problems of technically generated media modernity. The media-technological condensation of the space of experience in the 19th century, which problematized the transcendental approach of Kantian philosophy through the apparatuses of telegraphy and photography, was on principle not taken notice of in the philosophers' studies; technical media counted as "impositions" on philosophical knowledge (Rieger 2001). Medial apparatuses as an inherent component of the cognitive architecture of an enlightened reason beyond conceptual constellations appear in philosophical discourse only with a very great delay.1


Technology as a New Quality

The information society already arose in the 19th century, when, in the midst of industrial society, electricity was harnessed artificially; telecommunication on the basis of electricity prepared the way for global media culture (Hartmann 2006). These are developments that bear structural similarity to those of the late 20th century, and with the formation of world traffic even concepts such as networking already appear (Krajewski 2006, 53). With the historical fact of a Victorian Internet (Standage 1999) and its new geopolitical world order (Hugill 1999), one also comes upon the explicit thematization of the telegraph system by a philosophizing contemporary whose effect on the discourse of the 20th century has been systematically underestimated, although Ernst Kapp, who is at issue here, lived on just as unmistakably in Martin Heidegger's philosophical terminology as in Sigmund Freud's conception of man as a prosthetic god or Marshall McLuhan's media theory of the Extensions of Man.2 Tele-writing, tele-hearing, television: the new technologies based on cable and radio could no longer be adequately captured in the conceptual framework of pure tool technique. One made do with biologistic approaches, in part already with an organicist inflection. Thus, in the early media discourse, avant la lettre, the idea of "organ projection" [Organprojektion] appears in Kapp, and his approach addresses the cognitive implications of the newer technology and its challenge, that which sociologists call "large technical systems" (Braun/Joerges 1994).
Ernst Kapp (1808–1896), an adherent of the anti-idealist natural-scientific research in the wake of Alexander von Humboldt and of Carl Ritter, the founder of scientific geography, is no great figure in the history of modern philosophy, but that history owes him the first explicit "philosophy of technology".3 Kapp taught geography at a German Gymnasium and published on philosophical geography. From 1849 he lived with his family in Texas in a German exile commune, because as a freethinker and adherent of the bourgeois-democratic March movement he could not or would not remain a civil servant in Germany after its suppression. This point matters in a double sense, because in Texas Kapp worked as a farmer and carpenter, indeed even as a provider of wellness cures.4 This physical-practical experience reappears in his philosophical work, which seeks to fathom the world of tools and machines as part of the genesis of human "culture". For here, in the course of the argument, pragmatic references are made again and again, and it is worth dwelling on this a little; above all because in the media-theoretical literature the references to Kapp render his basic idea of organ projection almost invariably in grotesque abbreviation, as an anthropomorphization of technology and as a prosthesis theory.5 Moreover, this work of an academic outsider, written after his return to Germany, is one of the rare philosophical reactions to a new media situation of its time: the spread of telecommunication after 1850.

Artefacts and Organs

If, in the early nineteenth century, a reflective determination of the technical and of labour (as technical production out of subjective performance) began with G. W. F. Hegel in the sense of an instrumental reason, then in the European philosophy of modernity technology counted as an applied system of means that both leads to the realization of ethical life, and thus to the constitution of bourgeois consciousness, and constitutes a medium of its production of reality. Conceptually, technology stands for the extended form of subjective skills congealed in early modern machines and modern apparatuses, against the background of handed-down collective knowledge. Technology, and in further consequence the media, as means of human world-disclosure, are regarded in developmental-historical terms as a moment of liberation from the animal state, as a continued emancipation from nature in the direction of culture. Kapp casts this gesture of liberation in the figure of thought of organ projection; that is, the culturally developed technologies are understood evolutionarily as exteriorizations, functional outward transfers of the human organism. "First, it is demonstrated by incontestable facts that man unconsciously transfers the form, the functional relation and the normal proportions of his bodily articulation onto the works of his hand, and that he becomes conscious of these analogous relations to himself only afterwards. This coming-into-being of mechanisms after the organic model, as well as the understanding of the organism by means of mechanical devices, and altogether the carrying-through of the principle set up as organ projection for the attainment, possible only by this route, of the goal of human activity, is the real content of these pages." (Kapp 1877, preface)

When Kapp subsequently thematizes the "artefactual external world" of technology as a projection, and thus as an extension of the human organic endowment, this is by no means new, but merely carries out a thought he takes over from Humboldt, who for his part had reflected on which cultural changes might be in store through new media technology and electricity: "The creation of new organs (instruments for observing) increases the mental, often also the physical power of man. Faster than light, the closed electric current carries thought and will into the farthest distance. Forces whose silent working in elemental nature, as in the delicate cells of organic tissues, still escapes our senses today, will be recognized, used, awakened to higher activity, and will one day join the unending series of means that bring us closer to the mastery of individual domains of nature and to a more living knowledge of the world as a whole." (Alexander von Humboldt, quoted in Kapp 1877, 105f.)

This reflection on the new media (means), taken up in the first half of the 19th century, is adopted and carried further by Kapp, evidently because the course of events could only confirm and reinforce it. Media were not spoken of literally, only in substance, for in the land of poets and thinkers the new technologies demanded regulation by language policy. From the French came not only the technique of telegraphy but also the new terminology, which had to be disenchanted: "Médium", says Campe's dictionary of 1813 for the explanation and Germanization of the foreign expressions forced upon our language, "can in most cases be translated by Mittel [means], occasionally by Hülfsmittel [auxiliary means]." (quoted in Hoffmann 2002, 72)

Human perception, in any case, was extended by new and perfected apparatuses: scientific photography boomed and supplied, for instance, images of the moon and other celestial bodies that made outer space experienceable; microscopy was professionalized and microphotography developed, which yielded new insights into the structure of reality; optical or acoustic signs, encoded as electrical signals, were transmitted over great distances; the telegraph lines were extended internationally and made publicly accessible; communication in administrative buildings was rationalized by pneumatic tube systems; and so on. The unending series of means to which Humboldt had referred still rather hypothetically was installed, utterly real, through numerous technical innovations. As Hegel's lectures on world history put it so nicely with regard to "the new age": "the technical presents itself when the need is there."

Organicity of the Technical?

The question of techné, ancient Greek for the capacity of bringing-forth, in a wider sense also for craft, subsequently poses itself anew, because the new apparatuses and instruments differ fundamentally from traditional tools and their mechanics. Tools arose as a relief of the organic physis, a co-evolutionary relation obtaining between organ and tool that leads to a "growing-together of the tool with the human self"; in ancient Greek, organon stands both for a limb of the body and for its after-image, the tool (Kapp 1877, 60). As the hand occupies itself with an object, a reciprocity obtains between that object and the "making" of the human hand, which becomes the foundation of every mechanism. The latter, as mechanical support of the "work of the hands", is their mediated extension. Kapp understands this extension not as simple lengthening, as when a stick extends the reach of the arm, but, in the Humboldtian sense, as a "means of heightening the activity of the senses". Thus he arrives at the designation organ projection, this projecting being understood as an unconscious designing or an unnoticed bringing-forth. Hand, arm and teeth stand at the beginning of all history; the historical artefacts then known (cave finds and the first archaeological excavations) point to a developmental process which they, and with them man, have passed through; his activities and the names of the tools have a common root whose trace is found in language. More still: technical innovation is always to be thought from the operation, not from the invention; activity, not reflection, is the motor of progress, Kapp argues, no doubt also under the impression of the theory of evolution. New tools thus arise not through thinking about improvements, but through protracted optimization in tiny steps. Tools and human beings find each other, rather than the latter inventing the former, and culture is ultimately the technical realization of a natural endowment. That is Kapp's remarkable new foundation for "the cultural-historical grounding of the theory of knowledge in general."6

I would like to note at this point what cannot be elaborated within the given frame: technology and culture seem to stand in conflict, because in modernity, partly as a conceptual legacy of industrial culture, technology is hardly thought of otherwise than as an inorganic construct (as Gestell, a concept Kapp already used). In the high-tech workshops and "clusters of excellence" of our time, which have already begun to rebuild inorganic structures from the study of organic matrices (biosilicates, for instance), an entirely different picture of technology is currently emerging, one that is sometimes called "organicist". Here new nanotechnological ground is being broken, and beyond the mechanical world-picture the humanities' argument that technology must never be biologized will have to be re-evaluated, because ultimately a new level is being entered. What that means we can at present imagine as little as people of the industrial age once could when confronted with the phenomena of electromagnetism.

The Bio-Logic of Technology

Diagnostically speaking, Kapp's was a consciousness at the height of its time, for he articulated his philosophy of technology explicitly against the background of "large-scale industry" and "world communication" [Weltcommunication]. Telegraph and steam engine are, naturally, topics in his philosophy, and specifically in the perspective that they take on a seemingly self-acting movement withdrawn from direct perception. This quasi-demonic aspect of technology, however, is only one aspect, and one that would fall far too short as an explanation (that would amount to hypostasizing technology into a subject of the philosophy of history). This aspect pertains merely to the mechanical, to the tradition of automata of movement, "those artificial clockworks and gear-works of dancing and babbling jointed dolls", which deceive and which, as "mechanical frames", are good for nothing.7 In relation to technology there is more than the mere imitation of the organic, or the inversion of this relation in an anthropomorphization of technology. And it is precisely for this that Kapp proposed the concept of organ projection. That man technically reproduces his organic endowment, and by means of technology also potentiates it: this theoretical topos was to receive a rather prominent reissue in the 20th century, as the Extensions of Man, by which McLuhan (1964) sought to grasp the new electronic media.

Are media really extensions of man, perhaps even prostheses in the sense of a replacement of his bodily endowment? Organ projection is not to be understood quite so simply, and in the present context, and with reference to the remark made above on the relativity of the organic/inorganic distinction in technology, the concept needs to be sounded out more precisely.8 For Kapp was not concerned with explaining technical media in terms of the humanly organic, but conversely with comprehending the human in terms of its technical exteriorization (Leroi-Gourhan 1995). The following example makes this quite clear. Organic life brings forth stable forms because function demands it, as with bones. The newer science, as Kapp reports, gains new insights into nature through the analysis of microscopic sectional preparations. In doing so it finds that the inner structure of, say, the neck of the femur is "no lawless tangle of bony trabeculae and cavities", as one might hitherto have believed, but "well-motivated architecture" that meticulously corresponds to the lines of tension and compression to which the bone is exposed at that point (Kapp 1877, 113). Since the same forces are at work everywhere in nature, the practical execution of the tension and compression members of a modern bridge girder, or of a crane, arrives at forms congruent with the structure of bone. This congruence was not conscious to the bridge engineer, who did not know the bone cross-section at all. For at the normal level of perception man does not grasp this dimension of technology at all, a dimension which would be a kind of feedback of one part of nature with itself. That is thought in Hegelian, that is, a-humanist terms, and so what shines forth in modern technology is no enlightened subject, but always "only the reflection from the depths of the unconscious" (Kapp 1877, 123).

Figure 1 – Source: Kapp 1877

Now, drawing parallels between the structural similarity of the organic and the technical was not exactly unusual for that time. Kapp compared the cross-sectional surface of the deep-sea cable (Fig. 1) with that of a human nerve (Fig. 2). "The nerves are cable installations of the animal body, the telegraph cables are nerves of mankind!" (Kapp 1877, 141)

Figure 2 – Source: Kapp 1877

In this analogy between nerves and telegraphs Kapp draws, among other things, on the cellular pathology of the Berlin physician Rudolf Virchow and on the holistic physiology of the natural philosopher Carl Gustav Carus. The parallel between nerves and cables, widespread in the 19th century, indicates that the age of ascendant natural science and expanding media culture was searching for a way to comprehend the construction of new technical realities in a second nature in reciprocal relation with the conditions of the first nature. The theory of organic development, under the impression of Charles Darwin's teaching, and mechanical-industrial practice search for their ground, philosophically put: for the general condition of their possibility. This search has more to do with the sciences of man than would appear at first glance; yet only about a hundred years later, with Michel Foucault (1973), did it seem possible to set a historical-technical a priori alongside the formal-philosophical one. Man and technology are epistemically not clearly demarcatable regions of a discourse that in Kapp's time had definitely not yet found its place. Yet his argumentation stands clearly and distinctly on the side of technical productions which, as unconscious after-images, disclose the secrets of organic innervation. Just as man is real to him not as a detached spirit but only as "the soul made flesh", so telegraphy, and in particular the epochal transatlantic cable of 1866, must have appeared to him like a nerve conduit whose body is nothing less than mankind, and which would thus be an innervation of absolute spirit as technically implemented world communication.


Vernetzung zum World Brain

Die Nerven dieser Weltkommunikation bilden nicht nur Übertragungsverhältnisse, sondern auch instantane Synchronisierungsleistung aus. Die Metapher vom Weltgehirn liegt nahe. Nicht umsonst sollte Paul Otlet, der belgische Pionier der Datenbank-Logik, bald von einer neuen Maschinerie der Wissensvernetzung sprechen, dem Cerveau mécanique et collectif. Die Wissensmaschine Buch wird in einem neuen Datenspeicher aufgehen, so die damalige Vision des Hyper-Buches als einer qualitativ neuen Technologie, wobei die Telekommunikation die einzelnen Wissensinhalte in einem „Réseau de communication, de coopération et d’échanges“ vernetzen würde.9 Unmittelbar damit in Zusammenhang zu sehen ist die Vorstellung vom World Brain des Schriftstellers H. G. Wells, keine Science-Fiction, sondern die politisch motivierte Antizipation einer effizienten Wissensorganisation, die nach einem neuen Wissensapparat verlangt.10 Wie die Schriften Otlets und Wells’ zeugt auch die eher vergessene Publikation von Lewis Mumford, Technics and Civilisation (1934), von einem neuen Technik-Verständnis, nach dem gerade die Medien-Maschinerie es sei, die uns dazu verhelfe, unsere selbsterschaffene Welt zu verstehen.11 In allen genannten Publikationen wird die Ineffizienz der vorhandenen Wissensorganisation kritisiert und so der Boden bereitet für neue Auffassungen, die ihren Ort bald im Diskurs der Kybernetik finden sollten oder auch in dem von McLuhan beschworenen Jenseits der Schriftkultur.12 Dass im Zeitalter des Internets durch die Wissensvernetzung nicht nur die Einzelnen klüger werden, sondern die Emergenz einer höheren Wissensform anstehe – ob es sich dabei nun um ein Global Brain sozialer Intelligenz (Bloom 1999) handelt oder um eine kollektive Intelligenz (Lévy 1997) –, ist also ein schon älterer Gedanke, zu dem es nicht erst mit dem Bewältigungsdiskurs zum Internet kam. Schon bei Kapp erscheint der Weltgeist aufgehoben in der neuen Ontologie globaler Medientechnik.
Das makrosphärische Ereignis Weltkommunikation sorgt für eine bislang unbekannte telematische Erfahrungsdimension, der technischer Eigensinn bis hin zu einer Form geistigen Eigenrechts der Technik zukommt, für das als „nichtmenschlichen Zustand [dem Menschen] der ganz entsprechende Ausdruck fehlt“, und weiter: „Von der Beschaffenheit nichtmenschlicher ›Bewusstseine‹ macht der Mensch sich Vorstellungen nur auf Kosten der Integrität seines Selbstbewusstseins.“ (Kapp 1877, 161) Als ein unbewusstes Finden, eine erst im Nachhinein als solche erkennbare technische Nachformung, der alle Technikentwicklung entspräche, ist die Organprojektion gleichsam ein Technoimaginäres, das von den Grundformen der Werkzeuge bis hin zur Atlantikverkabelung am Werke ist und wofür wir keine Begriffe haben. Dem Geist steht aber offen, sich in diesen Entäußerungen wiederzuerkennen. Hätte nicht die Sprache – und darin liegt nach Cassirer eine echte philosophische Herausforderung – über ihrem Triumph als Werkzeug des Geistes den Geist des Werkzeugs vergessen (Cassirer 1995, 50f). Wie aber ist nun der Zentralgedanke dieser Technikphilosophie – die Exteriorisierung im Sinne der Kulturbegründung – zu verstehen? Das erschließt sich eben nicht, wenn man Kapps Medientheorie mit jener McLuhans kurzschließt, die Medien als Ausweitungen des Menschen13 begreifen will: Exteriorisierung ist nicht gleich Extension im Sinn von Prothese, sie bedeutet nicht Ausweitung, sondern Auslagerung. Obwohl Kapp diese Terminologie nicht benützt, kommt er ihr am nächsten dort, wo er von der Werkzeug entwickelnden Hand als einem „auswendigen Gehirn“ spricht (Kapp 1877, 151). Den Werkzeuggebrauch begreift er damit als eine mediologische Geste, denn das Werkzeug macht ein bestimmtes Anwendungswissen übertragbar auch in dem Sinn, dass dieses Wissen sich von seiner ursprünglichen Handlichkeit ablöst und in einer Maschinen- und Apparatekultur aufgeht. Diese ist ein Gebilde von Menschenhand, eine quasi reflexive Spiegelung der Hand, und damit ist das Artefakt eigentlich ein Manufact; die Hand ist der „reale Grund für das menschliche Selbstbewußtsein“ und bleibt es auch für Kunst und Wissenschaft „im transzendentalsten Fluge“ (ebd.).

Technik und Kulturmorphologie

Daraus folgt für Kapp kein Technikdeterminismus, sondern die Grundlage einer (nicht mehr wirklich ausgeführten) Kulturmorphologie, die im Werkzeug- wie im Sprachgebrauch die Gleichursprünglichkeit einer Mediengeste entdeckt. Es wird dadurch möglich, Kultur und Technik nicht in einem Konfrontationsverhältnis zu sehen, sondern eine Theorie der Medienevolution zu entfalten, die nicht dem Kulturpessimismus verfällt. André Leroi-Gourhan sollte in seinen Forschungen der 1950er Jahre über den Ursprung von Technik zu ganz ähnlichen Erkenntnissen kommen wie einst Kapp. Der französische Paläontologe betonte ebenfalls den Zusammenhang von Werkzeugen einerseits und Sprachgesten andererseits, die exteriorisierten Organen des Menschen entsprechen. Diese zivilisationsgeschichtlich wesentliche Leistung der Exteriorisierung – die Auslagerung des Geistes und, neben dem Werkzeuggebrauch, die Fähigkeit, Denken symbolisch zu codieren und damit auf die eine oder andere Weise tradierbar zu machen – ist für Kultur entscheidend. Menschen weiten sich in dieser projektiven Tätigkeit nicht einfach aus auf eine objektive Natur, die sie bearbeiten, sondern sie schaffen einen kollektiven Organismus, der Möglichkeiten zur Wissenskumulation gebraucht (im Werkzeuggebrauch, in Mythen, in Literaturen und allen anderen Formen von Überlieferungen). Zum „Fortschritt“ oder zur Ausdifferenzierung von Technik kommt es nach dieser Sicht, weil Werkzeug- und Symbolgebrauch zunehmende funktionale Entlastungen mit sich bringen, sowohl für die Einzelnen wie für die Gattung. Nicht nur der Werkzeuggebrauch, sondern auch die medialen Technologien des Speicherns, Übertragens und Verarbeitens von Daten und Informationen können in dieser Linie als fortgesetzte Befreiungsgesten gesehen werden.14 Technik, im umfassenden Sinn des Wortes, entsteht aus der Evolution ausgelagerter Operationsketten, die von Mensch zu Mensch, von Generation zu Generation und von Kultur zu Kultur übertragbar sind. Dieser Ansatz zeigt auch, wie sich biologische Anlagen in der Technik fortsetzen, wie die direkte Motorik der Geste zur indirekten Motorik der Maschine wird, die sich ihrerseits zum Automaten weiterentwickelt. Die Befreiung des Gedächtnisses durch die Entwicklung der Schrift und die Entdeckung des Buchdrucks, mit den noch unabsehbaren Folgen durch Mikroelektronik, Computersysteme und die globale Vernetzung von Wissensressourcen, gehört in diese Abfolge. So stehen auch die symbolmanipulierenden technischen Apparate wie die Fotografie und die Telegraphie in einem koevolutionären Zusammenhang mit der Kulturentwicklung und brechen eben nicht als völlig wesensfremde Macht über die Kultur herein – wie es das teleologische Denken und die Rhetorik von den „Erfindungen“ suggerieren.
Die Technik als umfassendes Phänomen, das im Gang der Geschichte sich entfaltet und zu höheren Formen der Existenz führt: für eine Zeit, die gerade erste Schritte in die Richtung machte, sich durch Anwendungen der Elektrizität und des Elektromagnetismus neue Dimensionen der Wirklichkeit zu erschließen, symbolisierte die Telegraphie also viel mehr als bloß eine Nachrichtentechnik. Sie steht für die Idee eines Brückenschlags zwischen getrennten Kulturen und einer Verschmelzung der Gegensätze, für neue Gemeinschaft und eine friedlich geeinte Menschheit, für einen technologisch fundierten Kulturalismus, an dem sich biologistische Untertöne ebenso vernehmbar machen wie heilsgeschichtliche Erwartungshaltungen. Über Wells’ Idee vom „World Brain“ bis hin zu Pierre Teilhard de Chardins Vision einer „Planetisierung des Menschen“ (planétisation) sollte die spirituelle Idee aller Vernetzungs-Ideologie explizit hervortreten, lange bevor es die technische Wirklichkeit des Internets gab.15 Die Frage nach der Technik als einem unentdeckten Muster der biologischen Evolution (Kelly 1994) hat uns bis heute nicht ganz losgelassen. Mit Kapp begegnen wir in der zweiten Hälfte des Telegraphenjahrhunderts einer Form der philosophischen Theoriebildung, die nach verborgenen Entwicklungsgesetzen und der Selbstorganisation des Sozialen fragt und die somit nicht die einzelnen „Erfindungen“ und den technischen Fortschritt verherrlicht, sondern die in den Verrichtungen und den Benennungen seiner Zeit eine philosophische Herausforderung explizit gemacht hat: von den Mitteln der Beschreibung der Wirklichkeit zu den Medien des Wirkens (in der Formulierung von Cassirer) überzugehen. Der Spiritismus mag dabei getrost auf der Strecke bleiben, denn die Organizität der Technik etwa in Form des „World Brain“ bedeutet bei näherem Hinsehen kein künstliches Denken (Artificial Intelligence), sondern ganz pragmatisch eine verbesserte Organisation des Wissens.

Endnoten

1 Martin Heidegger thematisierte in Sein und Zeit (1927) en passant Telekommunikationsmedien wie das Radio und seine Auswirkung auf das „Dasein“; etwas konkreter zur technischen Entwicklung des 19. Jahrhunderts wird Ernst Cassirer in Form und Technik (1930), in Cassirer 1995, 39-90; als philosophisches Problem ernst genommen hat die Medien aber erst (im Anschluß an die Kybernetik) Vilém Flusser (1983). Zum neueren Stand der Diskussion vgl. Lutz Ellrich: „Medienphilosophie des Computers“, in: L. Nagl / M. Sandbothe (Hg.) 2005, S. 343-358.
2 Freud artikulierte sein „Unbehagen in der Kultur“ 1929, McLuhan reflektierte die medientechnischen Ausweitungen des Menschen in den späten 1950er-Jahren; in der deutschen Übersetzung von Understanding Media. The Extensions of Man (McLuhan 1964) verschwand der Untertitel.
3 Ernst Kapp: Grundlinien einer Philosophie der Technik. Zur Entstehungsgeschichte der Cultur aus neuen Gesichtspunkten, Braunschweig 1877.
4 „Badenthal – Dr. Ernest Kapp’s Water-Cure“, Sisterdale, Texas. Biographische Information nach The Handbook of Texas Online – www.tsha.utexas.edu/handbook/online/articles/KK/fka1.html
5 Daher auch bis heute die philosophische Aufregung zur an sich nüchternen Frage, ob denn Computer denken können – wie sie bekanntlich zuerst gestellt wurde von Alan Turing: „Computing Machinery and Intelligence“, Mind 59, 1950. Zur Frage der Verselbständigung von Computersystemen in einer technischen Ko-Evolution vgl. Mainzer 2003.
6 Kapp 1877, S. 33. Kapp referiert hier einen Vortrag des Sprachforschers Lazarus Geiger: „Die Urgeschichte der Menschheit im Lichte der Sprache, mit besonderer Beziehung auf die Entstehung des Werkzeugs“, in: ders., Zur Entwicklungsgeschichte der Menschheit, Stuttgart 1871.
7 Kapp 1877, S. 103ff – der Begriff des Gestells in Heideggers Technikphilosophie hat hier seinen (nicht ausgewiesenen) Ursprung, ebenso die Frage nach dem „Meisternwollen“ der Technik durch den Menschen, vgl. Martin Heidegger: Die Technik und die Kehre, Pfullingen 1962.
8 Arbeiten Computer nach Prinzipien der mechanischen Logik oder nach jenen einer erst noch zu begreifenden Bio-Logik? Versuche zur Entdeckung und technischen Implementierung der letzteren etwa bei Heinz von Foerster lieferten keine ernst zu nehmenden Ergebnisse, vgl. dazu Krieg 2005.
9 Paul Otlet: Traité de documentation. Le livre sur le livre, théorie et pratique (1934), Liège 1989. Vgl. dazu Hartmann 2006, 218-225.
10 Vgl. W. Boyd Rayward: „H.G. Wells’s Idea of a World Brain: A Critical Re-Assessment“, in: Journal of the American Society for Information Science 50, May 1999, S. 557-579.
11 „Through the machine, we have new possibilities of understanding the world we have helped to create.“ – Lewis Mumford: Technics and Civilisation, San Diego 1934, S. 343.
12 Eigentlich Literacy als posttypographische Medienkompetenz, vgl. Herbert Marshall McLuhan: „Culture without Literacy“, in: Explorations vol. 1, Toronto 1953, S. 117-127.
13 „It is a persistent theme of this book“, schreibt Marshall McLuhan im Kapitel 10 von Understanding Media, „that all technologies are extensions of our physical and nervous systems to increase power and speed.“ McLuhan 1964, 90.
14 Vgl. Michel Serres: „Der Mensch ohne Fähigkeiten“, in: Schöttker (Hg.) 2003, 207-218.
15 Das Internet als Hybrid aus elektromagnetischer Telekommunikation und elektronischer Datenverarbeitung wäre demnach ein wesentlicher Schritt der Menschheitsentwicklung hin zu super-organisierten Formen, vgl. Pierre Teilhard de Chardin: L’Avenir de l’Homme, Paris 1959.

Literatur

Bloom, Howard (1999): Global Brain. Die Evolution sozialer Intelligenz, Stuttgart
Braun, Ingo und Joerges, Bernward (Hg.) (1994): Technik ohne Grenzen, Frankfurt am Main
Cassirer, Ernst (1995): Symbol, Technik, Sprache. Aufsätze aus den Jahren 1927-1933, Hamburg
Debray, Régis (2003): Einführung in die Mediologie, Bern
Flusser, Vilém (1983): Für eine Philosophie der Fotografie, Göttingen
Foucault, Michel (1973): Archäologie des Wissens, Frankfurt am Main
Giesecke, Michael (2007): Die Entdeckung der kommunikativen Welt. Studien zur kulturvergleichenden Mediengeschichte, Frankfurt am Main
Hartmann, Frank (2006): Globale Medienkultur. Technik, Geschichte, Theorien, Wien
Hartmann, Frank (2000): Medienphilosophie, Wien
Hoffmann, Stefan (2002): Geschichte des Medienbegriffs, Hamburg
Hugill, Peter J. (1999): Global Communications since 1844. Geopolitics and Technology, Baltimore
Kelly, Kevin (1994): Out of Control. The New Biology of Machines, Social Systems and the Economic World, London
Krajewski, Markus (2006): Restlosigkeit. Weltprojekte um 1900, Frankfurt am Main
Krieg, Peter (2005): Die paranoide Maschine. Computer zwischen Wahn und Sinn, Hannover
Leroi-Gourhan, André (1995): Hand und Wort. Die Evolution von Technik, Sprache und Kunst (1964/65), Frankfurt am Main
Lévy, Pierre (1997): Die kollektive Intelligenz. Eine Anthropologie des Cyberspace, Mannheim
Mainzer, Klaus (2003): Computerphilosophie zur Einführung, Hamburg
Manovich, Lev (2001): The Language of New Media, Cambridge/Mass.
Margreiter, Reinhard (2007): Medienphilosophie. Eine Einführung, Berlin
McLuhan, Marshall (1964): Understanding Media. The Extensions of Man, New York
Münker, Stefan et al. (Hg.) (2003): Medienphilosophie. Beiträge zur Klärung eines Begriffs, Frankfurt am Main
Nagl, L. und Sandbothe, M. (Hg.) (2005): Systematische Medienphilosophie, Berlin
Plessner, Helmuth (2003): Anthropologie der Sinne, Gesammelte Schriften III, Frankfurt am Main
Rieger, Stefan (2001): Die Individualität der Medien. Eine Geschichte der Wissenschaften vom Menschen, Frankfurt am Main
Schlögel, Karl (2003): Im Raume lesen wir die Zeit. Über Zivilisationsgeschichte und Geopolitik, München
Schöttker, Detlev (Hg.) (2003): Mediengebrauch und Erfahrungswandel, Göttingen
Standage, Tom (1999): The Victorian Internet. The Remarkable Story of the Telegraph and the Nineteenth Century’s Online Pioneers, London


Avatars and Lebensform: Kirchberg 2007

Michael Heim, Irvine

Preface

Several years ago, after a decade of experiments in the software industry, I returned to academia and found philosophy colleagues troubled by the term “virtual reality”—a term which enjoys wide usage in the field of immersive computing but which raises hackles in post-metaphysical philosophers. Some vocabulary in this paper may create similar unease, so a warning may be in order. What makes sense to software engineers may for philosophers carry too much baggage. Words like “empathetic” or “empathic” may cause similar discomfort for those with an allergy to Romanticism. While these adjectives are associated with poets like Wordsworth, the term “empathy” belongs equally to software designers and video-game artists, who use it to describe the opposite of “first-person shooter” software. Empathic software, as opposed to “shoot ‘em up” software, encourages the exchange of viewpoints beyond the first-person perspective and may even merge several perspectives. Rather than deepen a user’s first-person point of view, empathic software offers a socializing experience and is in fact sometimes called “social” software, “Net 2.0,” or “computer supported cooperative work.”

“Und eine Sprache vorstellen, heißt eine Lebensform vorstellen ... ‘Sprachspiel’ soll hier hervorheben, dass das Sprechen der Sprache ein Teil ist einer Tätigkeit, oder einer Lebensform ... Zu einem Sprachspiel gehört eine ganze Kultur.” (Ludwig Wittgenstein, Philosophische Untersuchungen, 19, 23; Vorlesungen über Ästhetik, 8)

Part I. The Promise and the Frustration

The author’s experiments with avatars in virtual worlds occurred over a ten-year period from 1993 to 2003. The experiments originated in Southern California at an intersection of graduate studies in design, architecture institutes, and the software industry’s nexus of government and technology centers such as Technology Training Corporation (Torrance, California), Eon Reality Inc. (Irvine, California), Art Center College of Design (Pasadena, California), the Long Beach Virtual Reality Group, the University of California at Los Angeles, the University of Southern California, and California State University (Long Beach). Rather than recount the projects in detail, a web page with links has been set up to accompany this paper; interested readers can follow the threads of the experiments by browsing the online resources at www.mheim.com/avatars. The upshot of the projects can be simplified for this paper as follows. The experiments examined avatars, or online identities, similar to those in currently popular multiplayer games like Warcraft and Second Life. Driven by participation and designed for interaction, these experiments were shaped by art designers, architects, and online educators. They played with a possible collective “we,” an ad hoc online community. During the 1990s, virtual worlds went from research to commercial game platforms. Unlike the later commercial avatars of Warcraft and Second Life, these early avatar experiments fostered non-programmed sociality. The model was less that of rule-based games than of town meetings, discussion groups, or even casual parties. The specific design of virtual environments underscored the sensory imagination needed to support sustained encounters. During these years, a progressive momentum energized the virtual reality community. Books and articles from this period held out a less-than-utopian but nevertheless hopeful promise.


II. Avatar Diplomacy?

“We need recovery. We should look at green again, and be startled anew (but not blinded) by blue and yellow and red. We should meet the centaur and the dragon, and then perhaps suddenly behold, like the ancient shepherds, sheep, and dogs, and horses — and wolves. Fairy-stories help us to make this recovery. In that sense only a taste for them may make us, or keep us, childish. Recovery (which includes return and renewal of health) is a re-gaining — regaining of a clear view. I do not say ‘seeing things as they are’ and involve myself with the philosophers, though I might venture to say ‘seeing things as we are (or were) meant to see them’ — as things apart from ourselves. We need, in any case, to clean our windows; so that the things seen clearly may be freed from the drab blur of triteness or familiarity — from possessiveness.” (J.R.R. Tolkien, “On Fairy-Stories”, 146)

The 2001 terrorist attacks against the United States brought a sense of urgency and a search for fresh kinds of outreach. Avatar experiments went beyond typical problems such as identity authentication, world design, and self-expression. The Internet was—at least in principle—a world-wide network. The trans-national dimension might reveal links between online communities and planetary politics. The project called “avatar diplomacy” made appeals to educational institutions to sponsor software kits for trans-national bonding. Schools across the globe might become seedbeds for linking transnationally divided peoples—not unlike the children’s “pen-pals” in previous generations. Software kits with support from college students were envisioned for school children in the United States, Europe, Scandinavia, and eventually the Middle East. Presentations were made to UNESCO, peace alliances, and several colleges and university departments. While the 2-year project for avatar diplomacy was welcomed in theory, it was never put into practice.


III. The Path into the Dark Forest

‘A te convien tenere altro viaggio’
rispuose poi che lagrimar mi vide
‘se vuo’ campar d’esto loco selvaggio’
(Dante: Inferno, Canto I, ll. 91-93)

Why did avatar diplomacy never get off the ground? The failure seemed to this writer not so much inertia as a desire to return to “business as usual.” After 9/11, academic leaders looked for comfort in self-imposed limits: “Our business is to be a school. We are academic institutions, not launch pads for action.” Where was the ability to feel and respond to a global situation? Does fear put a freeze on novel forms of communication? People who wear rubber gloves to open their mail—as prophylactic against anthrax poisoning—are in no mood to take long-range risks. Many analyses of the September 11, 2001 attacks on the Twin Towers see a “failure of imagination” in the inability of the U.S. intelligence community to “connect the dots.” In the years following the attacks, a similar failure of imagination may have weakened collective action in the newly developed social networks of the Internet. Is it not a larger failure of imagination not to connect information technology with trans-national communication? Or does Stoic apatheia slap every finger that reaches for the future? These were some of the questions that experimenters faced as the way forward became blocked and as research pathways trailed off into the dark woods.


IV. Leftover Notes from the “Music of Humanity”

”Hier kann ich, wie so oft, nicht umhin, mich im Vorübergehen an dem inneren und fast geheimnisvollen Zusammenhang des altphilologischen Interesses mit einem lebendig-liebevollen Sinn für die Schönheit und Vernunftwürde des Menschen zu weiden,—diesem Zusammenhang, der sich schon darin kundgibt, daß man die Studienwelt der antiken Sprachen als die ‘Humaniora’ bezeichnet, sodann aber darin, daß die seelische Zusammenordnung von sprachlicher und humaner Passion durch die Idee der Erziehung gekrönt wird und die Bestimmung zum Jugendbildner sich aus derjenigen zum Sprachgelehrten fast selbstverständlich ergibt. Der Mann der naturwissenschaftlichen Realien kann wohl ein Lehrer, aber niemals in dem Sinn und Grade ein Erzieher sein, wie der Jünger der bonae litterae.” (Thomas Mann, Doktor Faustus, 16)

Disappointment with the inadequacies of virtual subjectivity leads to a further question that goes to the heart of communication technology: Can the apathetic imagination be re-invigorated? Can collective subjectivity be re-tuned in a way that makes empathic performance more appealing to technological culture? If we have become deaf to “the music of humanity” (Wordsworth), can our listening be re-tuned, re-connected, re-sensitized? What tools are available for reviving a lethargic imagination? Certain threads of Heidegger’s thoughts about language point to language as usable heritage. The pre-given art works of poetic traditions enshrine residual power where imagination is preserved in a rich music of rhythm and patterned sounds. Several generations have protected and venerated these imaginative incantations. In searching for practical tools of revival, might not the current humanities—the interdisciplinary discussion of literature, history, music, and drama—turn to these works to awaken empathic imagination? By listening to the songs and legends built into language by poets and dramatists, might collective subjectivity regain depth of feeling—not as a personal psychological acquisition but as an awakening to the possibilities of feeling and imaginative action? Might the musicality of poetic language, as well as the making of musical improvisations, awaken lethargic imaginations to the “music of humanity” (Wordsworth)?


V. Visiting the Shrines of Imagination

“Wir sprechen und sprechen von der Sprache. Das, wovon wir sprechen, die Sprache, ist uns stets schon voraus. Wir sprechen ihr ständig nur nach. So hängen wir fortwährend hinter dem zurück, was wir zuvor zu uns eingeholt haben müßten, um davon zu sprechen. Demnach bleiben wir, von der Sprache sprechend, in ein immerfort unzureichendes Sprechen verstrickt. Diese Verstrickung sperrt uns gegen das ab, was sich dem Denken kundgeben soll. Allein diese Verstrickung, die das Denken nie zu leicht nehmen darf, löst sich auf, sobald wir das Eigentümliche des Denkweges beachten, d.h. uns in der Gegend umblicken, worin das Denken sich aufhält. Diese Gegend ist überall offen in die Nachbarschaft zum Dichten.” (Martin Heidegger, “Das Wesen der Sprache” in Unterwegs zur Sprache, 179; GA 12, 168 f.)

In recent years, some leading thinkers have taken up Heidegger’s advice and have moved their philosophizing into the neighborhood of poetry and the arts. Richard Rorty and Jacques Derrida are notable examples of philosophers who have practiced their discipline in the context of literature departments rather than in departments of philosophy. Like the philologist and author J.R.R. Tolkien, many “poetic dwellers” see the need for a “loose semantics” (Tolkien) that seeks to evoke multiple levels of imaginative reverberations rather than seek a literal correspondence that “pictures” a univocal world. The consistency that a Tolkien seeks is one of deep resonances and fully imagined configurations of details. Such an interpretive language can be an independent art work or a discursive prose that wraps around art works. The centrality of hermeneutics is nothing new. But the urgency of the technological horizon requires special emphases for fostering the music of humanity in the interpretive process. Some of the conditions that might be important for awakening empathic perception are:

(1) Disarming the fortresses of disciplinary towers;

(2) Suspending the apparent conflict between philosophy and poetry, logic and myth, by welcoming the loosely haunting semantics championed by great myth-makers and philologists like Tolkien—as opposed to the modernist semantics of propositional representation;

(3) Worrying less about the self-definition of the humanities or what the humanities are and focusing more on what the humanities can do and how they actively contribute;

(4) Retrofitting the studium humanitatis with an interactive dimension. This includes a physical, phonological, musical involvement on an amateur level. The song might again stand on its own as a felt experience with a participatory, energetic dimension.

In this way, the pre-modern style of creativity might address the spontaneity and improvisation valued in online interactions. A goal for the humanities might be to awaken empathic perception which may someday feed the self-organizing action of avatars.

“What was Wordsworth’s ‘healing power?’ How does his best poetry work so as to save not only the poet himself in his own crises, but so as to have been therapeutic for the imagination of so many poets and readers since? Six generations have passed since Wordsworth experienced his Great Decade (1797-1807), and still the attentive and dedicated reader can learn to find in him the human art he teaches better than any poet before or since, including precursors greater than himself (but no successors, as yet, of his eminence). Wordsworth educates the affective life of his reader. He teaches how to become a renovated spirit, free of crippling self-consciousness yet still enjoying the varied gifts of an awakened consciousness.” (Harold Bloom, Best Poems of the English Language, 323)

Note: Thanks go to Dr. Oliver Berghof, my colleague in Humanities Core at the University of California at Irvine for responding to an early draft of this paper. The views in it are, of course, entirely my own. —M.H.


Literature

Crockett, Tobey 2004, “Building a Bridge to the Aesthetic Experience: Artistic Virtual Environments and Other Interactive Digital Art”, Intelligent Agent Vol. 5 no. 1, http://tinyurl.com/2leapu
Crockett, Tobey 2005, “An Aesthetics of Play”, Banff, Alberta, Canada: Banff New Media Institute, http://tinyurl.com/2ro7ax
Feenberg, Andrew 1995, Alternative Modernity, Berkeley and Los Angeles: University of California Press.
Flieger, Verlyn 2002, Splintered Light: Logos and Language in Tolkien’s World, Kent, Ohio: Kent State University Press.
Heim, Michael 1987/1999, Electric Language: A Philosophical Study of Word Processing, New Haven: Yale University Press.
Heim, Michael 1993, The Metaphysics of Virtual Reality, New York: Oxford University Press.
Heim, Michael 1998, Virtual Realism, New York: Oxford University Press.
Jakobsson, Mikael 1999, “A Virtual Realist Primer”, in: Ehn, Pelle & Löwgren, Jonas (eds.), Studies in Arts and Communication #01, Malmö: Malmö University, http://tinyurl.com/yt4zd5
Jakobsson, Mikael 2006, “Virtual Worlds and Social Interaction Design”, Umeå University, Sweden, http://tinyurl.com/17a
Schroeder, Ralph (ed.) 2002, The Social Life of Avatars, London: Springer.
Shippey, Tom A., J.R.R. Tolkien: Author of the Century, New York: Houghton Mifflin.
Vitanza, Victor (ed.) 1998/2004, Cyber Reader, London: Allyn and Bacon.

Towards a Philosophy of the Mobile Information Society

Kristóf Nyíri, Budapest

1. The Rise of Mobile Studies

This introductory section of my present paper is a kind of report on the ongoing social science research programme I am directing: the project “Communications in the 21st Century”, launched in January 2001 and conducted jointly by T-Mobile Hungary (until 2004 Westel Mobile) and the Hungarian Academy of Sciences. In the framework of the project a number of international conferences were held, on the basis of which altogether eleven volumes—four Hungarian, one German, and six English—have been published. I will first give a very brief summary of these volumes, and then provide a more detailed description of some of the main results we arrived at. The eleven volumes are witness to the history of the mobile phone between 2001 and 2007, no doubt the most dynamic aspect of the recent history of technological and social transformation. But most of all they amount to a first laying of the foundations for, and at the same time the awakening to consciousness and self-reflection of, a young discipline: the social science of mobile communication. Initially, research on problems pertaining to the mobile arose as an interdisciplinary task. From the interdisciplinary research each of the participating disciplines profited, being forced to take account, on the level of theory, of the new medium which by now has come to constitute their main communicational environment. As a consequence of this taking account of the new realities, by 2005 a transformation was occurring which today has clearly become irreversible: the internal adaptation of the social sciences to the world of mobile communications. At the same time, an autonomous line of research emerged, based on a set of well-established paradigms of its own: the social science of mobile communication, Mobile Studies.
Both aspects of this juncture in the history of science are represented in Nyíri (2007a, 2007b), which on the one hand takes stock of the paradigmatic results of mobile studies, and on the other hand highlights some new perspectives of the social sciences becoming aware of their mobile environment.

Our first two Hungarian collections (Nyíri 2001a, 2001b) on the one hand gave an initial inventory of the pertinent research problems and conceptual resources, and on the other hand, by way of drawing, as it were, a sketch of the mobile horizons of the future, formulated the following theses/hypotheses:

1. Mobile communication and physical mobility mutually reinforce each other—the internet and mobile telephony ultimately result in more, not less, travel and personal encounters.
2. Since knowledge is information embedded in context, and mobile communication is markedly situation-dependent and thus context-creating, the mobile information society is likely to be a society of knowledge, not of mere information.
3. Mobile, interactive, multimedia communication amounts to a return, on a higher level, to primordial, less-alienated forms of communication.
4. Mobile communication induces changes in human cognitive faculties.
5. The nature of political communication becomes transformed.
6. The nature of scientific communication becomes transformed.

The third Hungarian-language collection (Nyíri 2002b) appeared as an enlarged variant of the parallel German and English volumes (Nyíri 2002a, 2003a). This collection discussed in depth the notion of the information society as a knowledge community; among its chapters were Robin Dunbar’s essay on gossip as a mechanism for maintaining community cohesion, and a definitive study on m-politics by Miklós Sükösd and Endre Dányi.2 Of the five other English-language volumes following upon (Nyíri 2003a), in the same year a further two appeared: (Nyíri 2003b), which played a pioneering role in the social scientific exposition of the notion of “m-learning”, a notion that today is very much in the centre of educational theory; and (Nyíri 2003c), presenting the mobile phone as the very answer to the communicational challenges of a decentralized global mass society.
In the collection (Nyíri 2005) we strove to demonstrate that the mobile phone is not just an instrument for enabling global contacts, but also a means to maintain local bonds, organizing the life of small regions and small territories. (Nyíri 2006a) once more turned to the issue of m-learning, this time on a markedly philosophical level. It told of the revolution in epistemology, and in particular in educational theory; of our mobile companion as part of our mind; of the return of collective thinking; and of the development of the world of non-formal learning. To (Nyíri 2007a) I have referred above. Besides the literature mentioned in note 1 above, I would here like to single out four further excellent monographs on the topic of mobile communication: Ling (2004), from which I received a major impetus for my Kirchberg paper (Nyíri 2006b); Levinson (2004); Goggin (2006); and, last but not

least, Castells et al. (2007). Between 1996 and 1998, Manuel Castells published his famous trilogy The Information Age (to which I will have occasion to come back presently), taking some 1400 pages to reach the conclusion that information and communication technologies were deepening, rather than closing, the gap between the rich and the poor. As he would characteristically put it in his unfathomable left-wing idiom: ICTs were instrumental in supplanting the “space of places” by a “space of flows”. In 1999, I put a vicious review of the work onto the web.3 In the following years I came to regret the viciousness, but certainly not the critical stance, of the review. And it is with great satisfaction that I note that, under the impact of the rise of the mobile phone, Castells himself has by today quite dramatically shifted his position. Castells et al. (2007), actually published in November 2006, fully recognizes the liberating effects of today’s dominant ICT, namely mobile telephony.

2. Knowledge Societies or Knowledge Communities?

2. 1. The Notion of a Knowledge Society

The new information and communication technologies herald the promise, and indeed have to a significant measure already brought about, changes which can be meaningfully discussed under the heading of knowledge societies. However, the term “knowledge societies” by now seems to have acquired two distinct, albeit related, meanings. In its first meaning it refers to the trend classically analyzed by Bell (1973). Bell’s central term is “post-industrial society”, but he also uses the terms “knowledgeable society” (Bell 1973, 263) and “knowledge society”. The term “knowledge society” first occurs on p. 212 of the book. As Bell here writes: Technology is one axis of the post-industrial society; the other axis is knowledge as a fundamental resource. Knowledge and technology are embodied in social institutions and represented by persons. In short, we can talk of a knowledge society. … The post-industrial society, it is clear, is a knowledge society in a double sense: first, the sources of innovation are increasingly derivative from research and development (and more directly, there is a new relation between science and technology because of the centrality of theoretical knowledge); second, the weight of the society—measured by a larger

proportion of Gross National Product and a larger share of employment—is increasingly in the knowledge field.

In a post-industrial society, knowledge and technology have become the central resources of society and economy. Perhaps it is the oft-used circumlocution “knowledge-based society” that best expresses this state of affairs; and perhaps it is useful to stress, as especially Castells does, that what characterizes this society “is not the centrality of knowledge and information, but the application of such knowledge and information to knowledge generation and information processing/communication devices, in a cumulative feedback loop between innovation and the uses of innovation” (Castells 1996, 32). To be sure, the expressions “knowledge society” or “knowledge-based society” are not ones Castells would use. His preferred terms are, on the one hand, the well-worn “information society”, and, on the other, “informational society”—the latter a phrase of his own coinage. I have indicated above that the term “information society” is misleading—confounding, as it were, information and knowledge; while Castells’ tormented neologism “informational society” is unhelpful at best. The term first occurs in Castells’ early book The City and the Grassroots (Castells 1983). It is here that he introduces, in the course of a rather hair-splitting elucidation, the concept “informational mode of development”. Modes of development, Castells stresses in the wake of Touraine, must be carefully distinguished from modes of production; the concept of a mode of development “refers to the particular form in which labour, matter, and energy are combined in work to obtain the product. 
Work is certainly related to social (class) relationships, but, in addition to the way through which the surplus is appropriated, it is also important to understand how the surplus is increased.” There are two types of mode of development, Castells here points out, the industrial and the informational, and then goes on to give a slightly confused explanation: “For the informational mode of production, productivity is based on knowledge... Informationalism is orientated towards technological development, that is, towards the accumulation of knowledge” (Castells 1983, 307). In his deservedly famous The Informational City—the book in which Castells, for the first and the last time, can actually bring himself to believe that the new information technologies might have a politically liberating potential—there occurs the felicitous formulation “what is specific to the informational mode of development is that here knowledge intervenes upon knowledge itself to generate higher productivity” (Castells 1989, 10), only to give way, in The Information Age, again, to a rather less transparent “analytical distinction” between

the notions of “information society” and “informational society”, with similar implications for information/informational economy. The term information society emphasizes the role of information in society. But … information, in its broadest sense, e.g. as communication of knowledge, has been critical in all societies, including medieval Europe which was culturally structured, and to some extent unified, around scholasticism, that is, by and large an intellectual framework... In contrast, the term informational indicates the attribute of a specific form of social organization in which information generation, processing, and transmission become the fundamental sources of productivity and power, because of new technological conditions emerging in this historical period (Castells 1996, 21).

To my review of Castells’ work I have referred above, in section 1. In its second meaning the term “knowledge society” is connected with the idea of universal access to knowledge, emphasizing that while such access is becoming increasingly attainable today, it was never a possibility in earlier ages. In a trivial sense, all societies are knowledge-based; however, much of the knowledge both in traditional societies and all through modernity was possessed by a minority, and much of that knowledge was not knowledge at all, but rather myth, superstition, lethal error or, at best, spurious learning. It does a disservice to progress to deny that we are today in fact witnessing a historical turning-point, and that past societies were ignorance-based societies rather than knowledge-based societies.

2. 2. Information and Knowledge

Echoing T. S. Eliot’s famous lines from the early 1930s—“Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”—John Naisbitt in his popular book Megatrends (Naisbitt 1982) bemoans the phenomenon that the world is “drowning in information, but is starved for knowledge”. Naisbitt’s formulation is taken up by Vartan Gregorian, among many others, in an address given in 1992.4 Gregorian—at that time President of Brown University—there also refers to Carlos Fuentes as saying that “one of the greatest challenges facing modern society and contemporary civilization is how to transform information into knowledge”. The conclusion Gregorian reaches is that today’s educational institutions must be careful to “provide not just information, but its distillation, namely knowledge”. The notion that “information” is somehow inferior to “knowledge” is not of recent origin. Although the Latin word informare, meaning the action of

forming matter, such as stone, wood, leather, etc., also took on the senses “to instruct”, “to educate”, “to form an idea”5—Cicero’s informare deos coniectura was explained as “imaginer en son esprit et conjecturer quels sont les dieux” by Robert Estienne in his Dictionarium Latinogallicum (1552)—“informare” in Italian, “informer” in French, and “to inform” in English from the beginning had the connotation of conveying knowledge that is merely particular. Perhaps another Latin word, informis—meaning unshapen, formless—had, with its French and English derivatives (“informe”, “inform”), a certain coincidental effect here. To have information amounted to knowing details, possibly unconnected. Hence the use of the word “information” in the contexts of criminal accusation, charge, legal process. John Locke, in his Essay Concerning Human Understanding (1690), might have thought that “information” had to do with “truth and real knowledge”;6 however, what the OED refers to as the “prevailing mod. sense” of inform, namely “to impart knowledge of some particular fact or occurrence”, or the Larousse phrase “informer quelqu’un de quelque chose”, indeed appear to capture the essentials of the concept. Thus Roszak (1986) can correctly point out that in the days of his childhood, shortly before the outbreak of World War II, “information” was a dull word, referring to answers to concrete questions, having the form of names, numbers, dates, etc. With Shannon’s and Weaver’s technical concept of information, put forward in Shannon and Weaver (1949), and with the emergence of computers, it also became a misleading—and glorious—word. Attempts at clarification of course abound. Daniel Bell made such an attempt. As he wrote: “By information I mean data processing in the broadest sense; the storage, retrieval, and processing of data becomes the essential resource for all economic and social exchanges. ... 
By knowledge, I mean an organized set of statements of facts or ideas, presenting a reasoned judgment or an experimental result, which is transmitted to others through some communication medium in some systematic form” (Bell 1979, 168). Slightly less straightforward from the point of view of the present argument is Dretske’s formula: “Roughly speaking, information is that commodity capable of yielding knowledge, and what information a signal carries is what we can learn from it” (Dretske 1981, 44). Let me sum up the foregoing by saying that knowledge can be usefully regarded as information in context. Now it is a standard observation that information sought through mobile phones is, characteristically, location-specific and situation-specific. It seems, then, that mobile communication tends to engender not just information, but information in context: that is, knowledge per se.


2. 3. Communication and Community

The early phase of the research project “Communications in the 21st Century” went under the title “The Mobile Information Society”, a phrase that has been current since 1999 or so. However, we have increasingly come to realize that it is a misleading phrase. For mobile communications point to a future which offers a wealth of knowledge, not just of information, and promise to re-establish, within the life of modern society, some of the features formerly enjoyed by genuine local communities. “Community” on the one hand, “society” on the other, clearly differ in their connotations; and it was Tönnies who, towards the end of the nineteenth century, crystallized this difference into a conceptual contrast. As Tönnies sees it, community involves “real”, “organic”, continuous associations. While the members of societies “are essentially separated in spite of all connecting factors”, the members of a community “remain essentially connected in spite of all separating factors”. As Tönnies of course states, “community is old, society is new, as a phenomenon and as a name”;7 however, the striking observation in the recent literature on mobile telephony is that through constant communicative connectedness a kind of turning back to the living, personal interactions of earlier communities is brought about. Certainly this is the message of the formula “perpetual contact” in Katz and Aakhus (2002). The “socio-logic”, indeed the “ontologies”, of perpetual contact receive here (Katz and Aakhus 2002, 305–309)—not without a sidelong glance, incidentally, at Heidegger—an especially profound analysis in the closing essay by Katz and Aakhus, “Conclusion: Making Meaning of Mobiles—a Theory of Apparatgeist”. Writing about fixed-line telephone networks Claude S. 
Fischer had already in the early 1990s marshalled arguments against the view that “the telephone is yet another of modernity’s blows against local Gemeinschaft, the close community” (Fischer 1992, 25). Adhering to a fundamental idea of the German Romantic philosophy of language, Tönnies propounded the view that it is not individual consciousness, but rather communication within the community, that is the agent of human thinking. “Mental life”, writes Tönnies, “manifests itself through communication, that is through the effect on kindred beings through signs, especially words pronounced by the use of vocal organs. From this develops thinking, i.e., the communication to oneself through audible or inaudible speech” (Tönnies 1957, 107). In the introductory chapter “The Theory of Community” Tönnies emphasizes that language, which “by means of gestures and sounds, enables expressions”, is not “a means and tool by which one makes oneself understood”, but it is “itself the living understanding”

(Tönnies 1957, 47). The same idea of course plays a major role also in Heidegger’s views, for whom “understanding” and “being together” (Mitsein) are intrinsically related to each other. As he puts it in the famous § 34 of Being and Time, making assertions or giving information is just a special case of “communication”. In its most general sense, communication is the relationship in which “being with one another is understandingly constituted”; “communication is never anything like a conveying of experiences ... from the interior of one subject into the interior of another” (Heidegger 1962, 205). John Dewey already in 1915 formulated the thesis that social life is not just maintained by communication, but indeed constituted by it. As his oft-quoted lines run: Society not only continues to exist by transmission, by communication, but it may fairly be said to exist in transmission, in communication. There is more than a verbal tie between the words common, community, and communication. Men live in a community in virtue of the things they have in common; and communication is the way in which they come to possess things in common (Dewey 1915, 4).

Dewey’s thesis is corroborated by contemporary research in evolutionary psychology. Robin Dunbar in his essay referred to above, in (Nyíri 2003a), propounds the view that language emerged in order to ensure social cohesion within primate groups at a stage where pre-verbal means of mutual attention had ceased to be effective due to growing group size. Language creates social cohesion and group identity; linguistic differences serve to isolate groups from each other. With the increasing influence of literacy, however, there arises a functional disorder: written language appears as the “correct” one in contrast to the merely spoken dialects (Sándor 2003). The new technologies of communication—the rise of secondary orality,8 especially in the form of mobile telephony—now promise to heal that disorder. Even the most cursory survey of the topic of communication and community would be one-sided without a reference to Deutsch (1953), a book it is imperative for contemporary philosophical research on communication to rediscover. Like Tönnies, Deutsch postulates a conceptual contrast between community and society, but in his case the dimension of communication plays a rather more explicit role than it did in Tönnies’ work. Deutsch applies the notion of complementarity, originally a concept in communications theory, to the issues of social communication, and defines communities as characterized by patterns of communication that display a high level of complementarity between information conveyed through various channels

(Deutsch 1953, 69 ff.). It is because of the drive to multimedia inherent in networked and mobile communication, and indeed because of the recent, quite overwhelming trend of telecommunications convergence—the amalgamation of the fixed-line phone, mobile telephony, internet access, and entertainment (IPTV)—that the approach of Deutsch today again appears as especially timely.

3. The Network Individual

In the age of telecommunications convergence, it appears to be warranted to speak of a new type of personality: the “network individual”. The network individual is the person reintegrated, after centuries of relative isolation induced by the printing press, into the collective thinking of society—the individual whose mind is manifestly mediated, once again, by the minds of those forming his/her smaller or larger community.9 This mediation is indeed manifest: its patterns can be directly read off the displays of our electronic communications devices, of which the mobile phone has clearly become the central and most important. Also, there is a theoretical framework at our disposal in which those patterns can be conveniently classified and interpreted: Robin Dunbar’s theory of the social brain. According to this theory,10 language came about primarily as a tool of social intelligence. People mostly converse about others and about each other; gossip is a cohesive force. Dunbar established a co-variation between on the one hand the neocortex volume of primates, and on the other, various aspects of primate social behaviour, including social group size. If a primate species embarks on a path to living in a larger group so as to be able to more effectively solve its ecological problems, it has to develop a sufficiently large neocortex to provide capacities for the social information processing needed. Calculations show that with a neocortex of the size humans possess, we should live in groups of about 150. And this in fact seems to be the case. “Although humans”, writes Dunbar, “can obviously cope with very large urban environments and even nation-states, the number of people within those large population units with whom one can say that one has a direct personal relationship is very much smaller. Censuses of the population units of hunter-gatherers, the size of scientific sub-disciplines, the number of people to whom one sends Christmas cards and the number of people of whom one can ask a favour all turn out to be about 150 in number” (Dunbar 2003, 58).
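The “calculations” Dunbar refers to can be sketched numerically. The regression coefficients below (intercept 0.093, slope 3.389, human neocortex ratio of roughly 4.1) are the values commonly reported for Dunbar’s 1992 primate study; they are quoted here as an assumption for illustration, not taken from the present text.

```python
# Hedged sketch: predicting mean group size from neocortex ratio via
# Dunbar's log10 regression. Coefficients are assumed, commonly cited values.
import math

def predicted_group_size(neocortex_ratio, intercept=0.093, slope=3.389):
    """Predict mean social group size from the neocortex ratio."""
    return 10 ** (intercept + slope * math.log10(neocortex_ratio))

human = predicted_group_size(4.1)
print(round(human))  # close to the "about 150" figure cited in the text
```

Plugging in the human neocortex ratio yields a prediction in the vicinity of 150, the figure the surrounding discussion relies on.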

Within this circle of 150 persons there is a series of smaller circles of individuals with whom we can maintain a relationship of a given degree of intensity. There is ample evidence to the effect that the number of persons we can have a particularly close connection with is limited to around 12–15, and that there is an inner circle of about 5 persons with whom this relationship is especially strong. We have, in addition, grounds to believe that there may be a series of layers, with upper boundaries at around 35 and 80–100, each associated with a declining level of emotional closeness. Each of us as it were sits in the centre of a series of expanding circles of 5, 15, 35, 80 and 150 persons (Dunbar 2003, 59). Let us now cast a glance upon our mobile phone. There are hundreds of telephone numbers stored (as well as, to say it parenthetically, thousands of e-mail addresses in our mailbox). The number of persons with whom in the course of time we have had SMS contacts is again several hundred—since quite often we have to send SMS messages even to strangers. Recall the formula, dismissive but not at all unusual, on the mobile answering device: “Please do not leave a message at this number. Send an SMS, or write an e-mail.” However, we conduct regular SMS communication with a limited number of persons only—the figure is certainly below 35, and with most people even below 15. Finally, MMS messages will not be exchanged beyond one’s circle of the most intimate friends—on average with 5 persons at the utmost. As Döring et al. (2006, 198) put it: “The average number of people a person exchanges MMS messages with is estimated to be 2 to 5, usually including his or her partner and close friends. This at the same time implies that MMS messages require the communication partners to share a high degree of contextual knowledge and are often incomprehensible to outsiders.”11 With the rise of Skype we have yet more access to a rich source of experiential data. 
How many people figure on one’s “Skype Contacts” list? According to my informal survey, the list seldom contains more than 35 Skype-names—that is, the number of persons with whom we occasionally talk over the internet does not exceed the third Dunbarian circle. I have chosen the word “occasionally”, since my impression is that the number of persons whom we regularly call using VoIP is nearer to 5 than to 15. And the number 15 seems to indicate the approximate upper limit of the circle of persons with whom we maintain chat contact. I myself find it frustrating if my Skype contacts list refers to more than 15 persons, and again and again delete the Skype names of those to whom I do not have a really close relationship. This is, after all, a list I have continuously before my eyes, and it shows intimate details. I learn who is online when, who has not touched

the computer for more than 5 minutes (“Away”), has deserted it for more than 20 minutes (“Not Available”), and who is online but does not wish to be contacted (“Do Not Disturb”). Also, I see faces. Chat in its newer versions appears to be restricted, quite unequivocally, to the two innermost Dunbarian circles. My list of approximately 15 persons of course contains names, too, which do not figure on the lists of my intimate chat partners—each of us inhabits the centre of different concentric circles. The friends of my friends are not necessarily my friends—and it is important that through my friends I should also be able to reach, when the need arises, strangers. We have arrived at Stanley Milgram’s famous small-world phenomenon (Milgram 1967), also known as the “six degrees of separation” pattern. In a way, I find it astonishing that Dunbar nowhere refers to Milgram, and indeed that research has practically never connected the two names with one another. For there is a rather obvious point where the results of the two meet: Milgram’s circle of acquaintances known on a first-name basis is identical with the Dunbarian circle of 150. And we might assume that should the number of individuals one has a personal connection with overstep the limit of 150—a development Dunbar holds impossible for cognitive reasons—the Milgram figure would in its turn decrease. The latter is today actually the case: a repeated experiment has yielded the number 4.6. As The Economist recently wrote: “Being able to keep in touch with a much wider range of people through technologies such as e-mail has brought everyone closer” (The Economist 2006, 4). Perhaps Dunbar does, after all, underestimate the effect of those most recent communications technologies upon our cognitive capacities. And with the ongoing convergence of telecommunications technologies, that effect is likely to become even more pronounced. 
In the global knowledge community on the rise, distances between people are characterized by ever smaller degrees of separation.
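The small-world effect discussed above can be mimicked on a toy network. The sketch below (all parameters are arbitrary choices for the demonstration, not drawn from Milgram’s or Dunbar’s data) builds a ring lattice in which everyone knows only their neighbours, then adds a handful of random long-range acquaintances, and measures how the average number of hops between people collapses—the mechanism behind “six degrees of separation”.

```python
# Toy small-world demonstration using only the standard library.
import random
from collections import deque

def ring_lattice(n, k):
    """Each node is linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def add_shortcuts(adj, m, rng):
    """Add up to m random long-range edges (chance acquaintances)."""
    n = len(adj)
    for _ in range(m):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)

def avg_separation(adj, sample, rng):
    """Mean shortest-path length (degrees of separation), sampled by BFS."""
    n = len(adj)
    total, count = 0, 0
    for _ in range(sample):
        src = rng.randrange(n)
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

rng = random.Random(0)
g = ring_lattice(1000, 5)            # purely local community ties
local = avg_separation(g, 30, rng)
add_shortcuts(g, 200, rng)           # a few long-range contacts
small_world = avg_separation(g, 30, rng)
print(round(local, 1), round(small_world, 1))
```

Even a small fraction of long-range ties shrinks the average separation from dozens of hops to a single-digit figure, which is the qualitative point behind the shrinking Milgram number cited from The Economist.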


Endnotes
1 For some first responses to this challenge see Roesler (2000), and the outstanding volume Katz and Aakhus (2002); further Katz (1999), Kopomaa (2000), Brown, Green and Harper (2002). Rheingold (2002) is a useful compilation of quotes and interviews.
2 Our Hungarian-language volumes are fully accessible, the German-language and English-language volumes partially accessible, via the www.socialscience.t-mobile.hu webpage.
3 See http://www.hunfi.hu/nyiri/castells_rev.htm. Printed as (Nyíri 2004).
4 See http://www.cni.org/docs/tsh/Keynote.html.
5 Recall, also, the original meaning of the Greek words eidos or idea: “pattern”, “visual form”.
6 Cf. book 3, chapter 10, sect. 34.
7 Compare Tönnies (1957, 33 ff. and 65). I had to modify the English translation at a number of points.
8 The term “secondary orality” was coined by Walter J. Ong. As he put it: “with telephone, radio, television and various kinds of sound tape, electronic technology has brought us into the age of ‘secondary orality’. This new orality has striking resemblances to the old in its participatory mystique, its fostering of a communal sense, its concentration on the present moment... But it is essentially a more deliberate and self-conscious orality, based permanently on the use of writing and print, which are essential for the manufacture and operation of the equipment and for its use as well. ... secondary orality generates a sense for groups immeasurably larger than those of primary oral culture” (Ong 1982, 136).
9 I have begun using the term “network individual”, for designating what I think is a new psychological type—and in a sense also the return to a primordial type of personality—in the early stages of the project “Communications in the 21st Century” (cf. http://www.socialscience.t-mobile.hu/2001_dec_konf/SUMMARIES.pdf; see also my preface in Nyíri 2003c, 16). The network individual is not the uprooted, free-floating being depicted by Barry Wellman. Wellman uses the term “networked individualism”. His description: “People remain connected, but as individuals rather than being rooted in the home bases of work unit and household. Individuals switch rapidly between their social networks. Each person separately operates his networks to obtain information, collaboration, orders, support, sociability, and a sense of belonging” (Wellman 2002).
10 See in particular (Dunbar 1996), as well as his essay in (Nyíri 2003a).
11 For a philosophical interpretation of the issue of MMS, image, and context, see Kondor (2007).


Literature Bell, Daniel 1973 The Coming of Post-Industrial Society: A Venture in Social Forecasting, New York: Basic Books. Bell, Daniel 1979 “The Social Framework of the Information Society”, in: M. L. Dertouzos and Joel Moses (eds.), The Computer Age: A Twenty-Year View, Cambridge, MA: MIT Press, 163–211. Brown, B., Green, N. and Harper, R. 2002 (eds.) Wireless World: Social and Interactional Aspects of the Mobile Age, London: Springer. Castells, Manuel 1983 The City and the Grassroots: A Cross-Cultural Theory of Urban Social Movements, London: Edward Arnold. Castells, Manuel 1989 The Informational City: Information Technology, Economic Restructuring, and the Urban-Regional Process, Oxford: Basil Blackwell. Castells, Manuel 1996 The Information Age: Economy, Society and Culture, vol. 1: The Rise of the Network Society, Oxford: Blackwell. Castells, Manuel et al. 2007 Mobile Communication and Society: A Global Perspective, Cambridge, MA: MIT Press. Deutsch, Karl W. 1953 Nationalism and Social Communication: An Inquiry into the Foundations of Nationality, New York: John Wiley & Sons. Dewey, John 1915 Democracy and Education, New York: Macmillan Co. Döring, Nicola, et al. 2006 “Contents, Forms and Functions of Interpersonal Pictorial Messages in Online and Mobile Communication”, in: Kristóf Nyíri (ed.), Mobile Understanding: The Epistemology of Ubiquitous Communication, Vienna: Passagen Verlag. Dretske, Fred I. 1981 Knowledge and the Flow of Information, Oxford: Basil Blackwell. Dunbar, R. I. M. 1996 Grooming, Gossip, and the Evolution of Language, Cambridge, MA: Harvard University Press. Dunbar, R. I. M. 2003 “Are There Cognitive Constraints on an E-World?”, in: Kristóf Nyíri, Mobile Communication: Essays on Cognition and Community, Vienna: Passagen Verlag, 57–69. Fischer, Claude S. 1992 America Calling: A Social History of the Telephone to 1940, Berkeley: University of California Press. 
Goggin, Gerard 2006 Cell Phone Culture: Mobile Technology in Everyday Life, London: Routledge. Heidegger, Martin 1962 Being and Time, transl. by John Macquarrie and Edward Robinson, Oxford: Basil Blackwell. Originally published as Sein und Zeit (1927). Katz, James E. 1999 Connections: Social and Cultural Studies of the Telephone in American Life, New Brunswick, NJ: Transaction Publishers. Katz, James E. and Aakhus, Mark 2002 (eds.) Perpetual Contact: Mobile Communication, Private Talk, Public Performance, Cambridge: Cambridge University Press. Kondor, Zsuzsanna 2007 “The Mobile Image: Experience on the Move”, in: Kristóf Nyíri (ed.), Mobile Studies: Paradigms and Perspectives, Vienna: Passagen Verlag, 25–33. Kopomaa, Timo 2000 The City in Your Pocket: Birth of the Mobile Information Society, Helsinki: Gaudeamus. Levinson, Paul 2004 Cellphone: The Story of the World’s Most Mobile Medium and How It Has Transformed Everything!, New York: Palgrave Macmillan. Ling, Rich 2004 The Mobile Connection: The Cell Phone’s Impact on Society, Amsterdam: Elsevier. Milgram, Stanley 1967 “The Small-World Problem”, Psychology Today, 60–67. Naisbitt, John 1982 Megatrends, New York: Warner Books. Nyíri, Kristóf 2001a (ed.) Mobil információs társadalom: Tanulmányok [The Mobile Information Society: Essays], Budapest: MTA Filozófiai Kutatóintézete. Nyíri, Kristóf 2001b (ed.) A 21. századi kommunikáció új útjai: Tanulmányok [New Perspectives on 21st-Century Communications: Essays], Budapest: MTA Filozófiai Kutatóintézete. Nyíri, Kristóf 2002a (ed.) Allzeit zuhanden: Gemeinschaft und Erkenntnis im Mobilzeitalter, Vienna: Passagen Verlag. Nyíri, Kristóf 2002b (ed.) Mobilközösség—mobilmegismerés: Tanulmányok [Mobile Society—Mobile Cognition: Essays], Budapest: MTA Filozófiai Kutatóintézete. Nyíri, Kristóf 2003a (ed.) Mobile Communication: Essays on Cognition and Community, Vienna: Passagen Verlag. Nyíri, Kristóf 2003b (ed.) Mobile Learning: Essays on Philosophy, Psychology and Education, Vienna: Passagen Verlag. Nyíri, Kristóf 2003c (ed.) Mobile Democracy: Essays on Society, Self and Politics, Vienna: Passagen Verlag. Nyíri, Kristóf 2004 “Review of Castells, The Information Age”, in: Frank Webster and Basil Dimitriou (eds.), Manuel Castells, London: SAGE, vol. III, 5–34. Nyíri, Kristóf 2005 (ed.) A Sense of Place: The Global and the Local in Mobile Communication, Vienna: Passagen Verlag. Nyíri, Kristóf 2006a (ed.)
Mobile Understanding: The Epistemology of Ubiquitous Communication, Vienna: Passagen Verlag. Nyíri, Kristóf 2006b “Time and Communication”, in: F. Stadler and M. Stöltzner (eds.), Time and History / Zeit und Geschichte, Frankfurt/M.: ontos verlag, 301–316. Nyíri, Kristóf 2007a (ed.) Mobile Studies: Paradigms and Perspectives, Vienna: Passagen Verlag. Nyíri, Kristóf 2007b (ed.) Mobiltársadalomkutatás: Paradigmák – perspektívák [Mobile Studies: Paradigms and Perspectives], Budapest: MTA / T-Mobile. Ong, Walter J. 1982 Orality and Literacy: The Technologizing of the Word, London:

Methuen. Rheingold, Howard 2002 Smart Mobs, Cambridge, MA: Perseus. Roesler, Alexander 2000 “Das Telefon in der Philosophie: Sokrates, Heidegger, Derrida”, in: Stefan Münker and Alexander Roesler (eds.), Telefonbuch: Beiträge zu einer Kulturgeschichte des Telefons, Frankfurt/M.: Suhrkamp, 142–160. Roszak, Theodore 1986 The Cult of Information, Cambridge: The Lutterworth Press. Sándor, Klára 2003 “The Fall of Linguistic Aristocratism”, in: Kristóf Nyíri (ed.), Mobile Communication: Essays on Cognition and Community, Vienna: Passagen Verlag, 71–82. Shannon, Claude E. and Weaver, Warren 1949 The Mathematical Theory of Communication, Urbana: The University of Illinois Press. The Economist 2006 Special survey The New Organisation, January 21st. Tönnies, Ferdinand 1957 Community and Society, East Lansing, MI: Michigan State University Press. Originally published as Gemeinschaft und Gesellschaft (1887). Wellman, Barry 2002 “Little Boxes, Glocalization, and Networked Individualism”, in: Makoto Tanabe, Peter van den Besselaar and Toru Ishida (eds.), Digital Cities II: Computational and Sociological Approaches, Berlin: Springer-Verlag, 10–25.

Section 3: Ethics and political Economy of the Information Society Ethik und politische Ökonomie der Informationsgesellschaft

On our Knowledge of Markets for Knowledge―A Survey

Gerhard Clemenz, Wien

1. Introduction

At the Lisbon Summit in 2000 the EU set itself the goal of transforming the European Union by 2010 into “the most competitive and dynamic knowledge based economy in the world capable of sustainable economic growth with more and better jobs and greater social cohesion”. I take this statement as a starting point for this paper for two reasons: on the one hand, it acknowledges the crucial role of knowledge in an advanced economy; on the other hand, it raises the question of what needs to be done in order to achieve this ambitious goal. In particular, since the EU is also committed to a market economy and to the maintenance of competition, the question arises of how well markets function with respect to the creation and distribution of knowledge, and of what measures may be required, either to support the market mechanism or to replace it by some other institution. This article deals with the first question and offers a survey of the problems encountered in markets dealing with knowledge. In the next section I briefly discuss the role of knowledge and information in economics. After that I point out a few difficulties in finding a precise and generally accepted definition of knowledge. Section 4 is the core of the article and discusses various types of market failure which may occur when the commodity produced and traded is knowledge. I conclude with a few suggestions for further research.

2. The role of knowledge and information in economics

Knowledge and information deserve a central role in economics for at least three reasons: 1) In advanced economies the information sector contributes around 50% of Gross Domestic Product, and it has been the fastest growing sector over the last twenty years. 2) Research, development and innovation are main contributors to economic growth and explain up to 50% of the increase in labor productivity. 3) Accounting for information poses a major challenge to economic theory in almost all of its branches.

Even though there is now general agreement about the importance of knowledge and information, it took some time before this was fully reflected in economics, both at the theoretical and at the empirical level. It is remarkable that “Austrian” economists who were contemporaries of Wittgenstein were among the pioneers in this area. Fritz Machlup was the first to point out the growing importance of the “creation and distribution of knowledge” (Machlup 1962, 1980) and is considered “one of the fathers of thinking about what has come to be labeled the information society and the information economy” (www.caslon.com.au/biographies/machlup.htm). Joseph Schumpeter realized very early the crucial role of innovations for economic growth and introduced the famous concept of “creative destruction” into economics. While his ideas were neglected in mainstream―i.e. neoclassical―economics for a long time, they are now widely believed to be the most solid foundation for both the theory and the policy of economic growth (Aghion and Howitt 2005). Friedrich von Hayek, a nephew of Ludwig Wittgenstein’s mother, insisted for a long time that a market economy is superior to a centrally planned economy because it is much more efficient with respect to the creation and utilization of social knowledge, which is scattered across a society and can never be efficiently centralized (Hayek 1945). Ironically, later research which was at least partly inspired by Hayek has shown that markets are far less efficient at creating and distributing knowledge or information than he apparently believed (e.g. Grossman and Stiglitz 1980), though he appears to have been correct about the still greater inefficiency of central planning.
It is the main theme of this paper to discuss the problems and shortcomings of markets when it comes to dealing with knowledge and information. It is worth mentioning, however, that the problems associated with knowledge and information as commodities are only a comparatively small part of the problems dealt with in the so-called economics of information (see Stiglitz 2001).


3. Some definitions

So far we have treated knowledge and information as synonyms and refrained from offering a precise definition of either. There are several good excuses for this. Consulting dictionaries or encyclopedias does not get us very far. In the most popular source of wisdom for the internet generation we learn that “knowledge is defined (Oxford English Dictionary) variously as (i) facts, information, and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject, (ii) what is known in a particular field or in total; facts and information or (iii) awareness or familiarity gained by experience of a fact or situation” (Wikipedia). Actually, in my edition of the Oxford Advanced Learner’s Dictionary the word “facts” is replaced by “understanding”. In any case, looking at various dictionaries makes one feel even worse than Churchill when dealing with economists. He once complained that when asking three economists one gets four opinions, two of them from Mr. Keynes. Looking at three dictionaries, one would be happy to get just four definitions of “knowledge”. In fact, the one I liked best can be found in the Webster Handy College Dictionary, where knowledge is defined as “awareness of facts, truths, or principles; a body of accumulated facts”. Interestingly, it was the only source I found in which “knowledge” and “information” are not treated as more or less synonymous. No wonder that one of the current pundits of the information society noted that “the distinction between information and knowledge is a tricky one” (Benkler 2006, p.313). I am not sure whether his own definition is really satisfactory: “[he uses] information … colloquially, to refer to raw data, scientific reports of the output of scientific discovery, news and factual reports.
“Knowledge” refer[s] to the set of cultural practices and capacities necessary for processing the information into either new statements on the information exchange, or more important in our context, for practical use of the information in appropriate ways to produce more desirable actions or outcomes from action” (Benkler 2006, p.313).

Machlup (1962), who attempted to measure the magnitude of the production and distribution of knowledge, distinguished five types of knowledge:
• practical knowledge,
• intellectual knowledge,
• pastime knowledge,
• spiritual or religious knowledge,
• unwanted knowledge, accidentally acquired and aimlessly retained.

Curiously, he included the distribution of typewriters and stationery as part of this knowledge industry. In any case, the distinction between knowledge and information is again somewhat blurred. Another eminent economist of this generation, Kenneth Boulding, made the following observation: “There is a little terminological problem here, that the word “knowledge” in English has some tendency to approach the meaning of “truth”. We have really no convenient word to describe the content of the human mind without regard to the question as to whether this cognitive content corresponds to anything outside it. For this reason I have in the past used the term “image”… I will revert to the term “knowledge” with a warning that I make no assumption about the content of people’s minds being true” (Boulding 1966, p.1). I am afraid that I cannot get any further than this, and I hope that for my purposes it is not necessary to do so. I shall use “information” and “knowledge” as synonyms, unless stated otherwise, and I follow Boulding: “I shall become very pragmatic at this point and consign the philosophical problems to my esteemed colleagues who make this their specialty, and I shall assume simply that knowledge exists” (Boulding 1966, p. 2). And, I might add, even though we may not be able to define knowledge in a precise way, we recognize it when we encounter it, at least when discussing it in the context of markets for knowledge.

4. Knowledge as a commodity and market failures

4.1 The first welfare theorem

One of the most remarkable achievements of economics in the 20th century was so-called “general equilibrium theory” and its welfare theorems. The main result of the “Arrow-Debreu model” is that in an economy with a complete set of perfectly competitive markets an equilibrium exists and is Pareto-efficient, i.e. there is no waste (Arrow and Debreu 1954). It would be quite misleading, however, to take this result as a proof of the overall efficiency of a (capitalist) market economy. In fact, quite the contrary is true, since the conditions for this result to hold are listed meticulously, and it is quite obvious that they are extremely unlikely to be satisfied in any real-world economy. If one or more of these conditions are violated, the efficiency of an equilibrium (if one exists) is no longer guaranteed and we get what has been termed market failure. Now the occurrence of market failures does not necessarily mean that free markets are inferior to other ways of organizing economic activities, but at least it makes it worthwhile to think about alternatives.

The main causes of market failure are market power, externalities, public goods (for definitions see below) and various forms of imperfect information. What is remarkable for our topic is the fact that knowledge as a commodity is very likely to induce all these traditional causes of market failure (Allen 1990) and, as if this were not enough, to add a few more not encountered with other commodities. In what follows I shall discuss some of these market failures in markets for knowledge in more detail. For the sake of completeness I should like to add that of course not all types of knowledge will under all circumstances display all, or even one, of the causes of market failure. The claim is rather that for each type of market failure one can find some type of knowledge leading to it.

4.2 Knowledge and market power

Knowledge and (market) power may be linked in various ways, and I shall confine myself to discussing two aspects.

4.2.1 Economies of scale and scope

A main reason for the existence of big firms, which obviously have considerable market power, is economies of scale and scope, i.e. the fact that it is less costly to produce given volumes of output within one large corporation than in many small independent firms. One reason for this phenomenon, which is also referred to as “sub-additivity of costs”, is the existence of large fixed costs which have to be incurred regardless of the volume of output. A moment’s reflection shows that such economies of scale and scope are likely to exist in the production of many types of knowledge. Consider the compilation and processing of huge amounts of data. There are large fixed costs for setting up the appropriate hardware, and similarly one needs flexible and powerful software. In comparison, the costs of using both for the generation of information by feeding in data are small. Or think of university departments. Certainly in economics, but I suspect philosophy is not so different, a critical mass of researchers with some variety of expertise is needed in order to achieve success and make it into the upper segments of the rankings. It is not just economies of scale―the bigger, the better―but also economies of scope that can be utilized: if various lines of research and expertise can be combined, the chances of new and relevant results improve.

Whether economies of scale and scope actually translate into market power depends on the appropriability of the knowledge generated. As I shall discuss in some detail below, knowledge very often cannot (and quite frequently should not) become the private property of anyone. With basic scientific research as conducted in universities this is pretty obvious. But of course there exist other types of knowledge with a large potential for profitable commercial use: knowledge that allows the production of known goods at lower costs, or of new or better goods. An economic agent in possession of such knowledge can use it in two―not mutually exclusive―ways: she can either use it for producing and selling goods herself, thus selling the knowledge indirectly, embodied in her products, or she can sell the knowledge to somebody else. As we shall see below, the second option has several difficulties, but for the moment I would like to concentrate on the first one.
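The sub-additivity argument can be sketched numerically. All figures below are purely illustrative assumptions (a hypothetical fixed cost and marginal cost), not estimates of any real industry:

```python
# Sub-additivity of costs: with a large fixed cost F and a small constant
# marginal cost c, producing everything in one firm is cheaper than splitting
# the same output across two firms, each of which must pay F again.

F = 1000.0  # fixed cost of setting up hardware and software (assumed)
c = 2.0     # marginal cost per unit of information produced (assumed)

def cost(q):
    """Total cost of producing q units under the fixed-cost technology."""
    return F + c * q if q > 0 else 0.0

def average_cost(q):
    return cost(q) / q

q1, q2 = 300, 500
print(cost(q1 + q2))                  # one large firm: 2600.0
print(cost(q1) + cost(q2))            # two small firms: 3600.0 (F paid twice)
print(average_cost(10), average_cost(1000))  # average cost falls with scale
```

The declining average cost in the last line is exactly the “the bigger, the better” effect described above.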

4.2.2 Arrow’s dilemma

Now suppose somebody has succeeded in creating knowledge that could be used in a commercially profitable way. If she retains exclusive possession of this knowledge then she obviously has a monopoly, implying a socially inefficient use of productive resources in the short run: in order to exploit her monopoly she will produce less than the social optimum and charge a price above marginal cost. The first welfare theorem would require that her knowledge become available to everybody in order to ensure perfect competition. However, if the creation of profitable knowledge is costly and risky, who would ever bother to engage in such activities if she were forced to give away her knowledge immediately after obtaining it? So here we have a problem pointed out by Arrow back in 1962: there is a conflict between short-run efficiency, which would require that socially useful knowledge become accessible to everybody, and long-run efficiency, which requires that economic agents be willing to make the investment and take the risk of creating such knowledge because they will be able to enjoy a handsome return by exploiting a (temporary) monopoly (Arrow 1962). Needless to say, the problem has not been solved yet―and presumably never will be. Ongoing debates about intellectual property rights, patents, copyrights and open source are a clear indication that Arrow’s dilemma has not lost its importance, though it has come up in different guises.
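The short-run side of the dilemma can be made concrete with a standard textbook monopoly calculation. The linear demand curve and the cost parameters below are illustrative assumptions:

```python
# Short-run inefficiency of a knowledge monopoly: with assumed linear demand
# p = a - b*q and constant marginal cost c, the monopolist restricts output
# and prices above marginal cost; the forgone surplus is the deadweight loss.

a, b, c = 10.0, 1.0, 2.0    # hypothetical demand and cost parameters

q_comp = (a - c) / b        # competitive output: price = marginal cost
q_mono = (a - c) / (2 * b)  # monopoly output: marginal revenue = marginal cost
p_mono = a - b * q_mono     # monopoly price, above marginal cost

profit = (p_mono - c) * q_mono                    # the innovator's reward
deadweight_loss = 0.5 * (p_mono - c) * (q_comp - q_mono)

print(q_comp, q_mono, p_mono)   # 8.0 4.0 6.0
print(profit, deadweight_loss)  # 16.0 8.0
```

The profit of 16 is precisely the prize that motivates the risky creation of the knowledge in the first place, while the deadweight loss of 8 is the short-run cost society pays for granting it.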


4.3 Knowledge and externalities

One of the currently most popular introductory textbooks in economics offers the following definition: an “externality arises when a person engages in an activity that influences the well-being of a bystander and yet neither pays nor receives any compensation for that effect” (Mankiw 2005, p.204). In this dramatic age of global climate change we usually associate externalities with negative effects like the pollution of water and air or the emission of greenhouse gases. New knowledge, or rather its use, however, very often causes positive externalities. Suppose again that an entrepreneur has made an innovation, say discovered a new technology that enables her to reduce the (marginal) costs of production drastically―meaning she enjoys a monopoly. Of course she will make a very handsome profit, but consumers will benefit as well: in order to maximize her profits the innovator will pass on part of the cost reduction to consumers. This affects their well-being, but they do not compensate the entrepreneur fully―they enjoy a positive externality. But there may also be other beneficiaries: consumers who need less of their income for buying the good will increase their demand for other goods―especially if they are complements of the good that has become cheaper. Alternatively, the innovation may reduce the costs of production, or improve the quality, of other goods―the progress that has been made with respect to electronic equipment has revolutionized the production of many other goods. In short, even if the new knowledge remains exclusively with the innovator, positive externalities will be created, with the effect that the private return to the innovation is considerably smaller than the social one.
There exist various estimates of the private and social returns to research and development, and while the absolute figures differ due to different methods of measurement, the ratio of social to private returns is remarkably stable: it is about 2:1 (Griliches 1995). As a consequence, we have a market failure: private entrepreneurs tend to spend less on R&D than would be socially optimal. In fact, this is a market failure most industrialists and their lobbies are quite happy to admit, since it is used―quite successfully―as an argument for obtaining subsidies. As I shall discuss below, things are not always that simple, and there are circumstances under which there is too much, or rather misdirected, private R&D investment. But let me pursue the problem of the insufficient generation of knowledge in a market economy a bit further. So far we have only discussed the occurrence of externalities without any direct use of the new knowledge by other economic agents. In many instances it is inevitable that at least some knowledge is given away the moment it is actually used. Think for example of the pharmaceutical industry. If a certain approach has turned out to work for some problem, one can infer that it is quite likely to work at least for related problems. A similar problem may affect the celebrated―though often doubted―efficiency of financial markets. It is claimed by some economists that financial markets are extremely efficient with respect to processing relevant information about the prospects of firms. Now suppose it is indeed possible to obtain information about firms which can be profitably used for transactions in the stock market. The problem is that other market participants can observe the actions of the agents who have obtained the information. In this way the information is revealed and is therefore no longer profitable. But if obtaining information is costly, then nobody has an incentive to become informed as long as she can get the information for free by observing what the informed agents are doing. So eventually nobody will be informed. But if nobody is informed there is again an incentive to become informed, and no equilibrium exists. This problem, first analyzed by Grossman and Stiglitz (1980), may look very special, but it points to a much more general one: using information reveals it at least partly to others and thereby makes it less valuable. To sum up this section: due to positive externalities which drive a wedge between the social and private returns to the creation of knowledge and information, markets may produce too little of both.
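The instability just described can be sketched with a toy model. Everything below, including the assumption that the trading gain falls linearly as more traders become informed, is an illustrative simplification of my own, not the actual Grossman-Stiglitz model:

```python
# A stylized sketch of the Grossman-Stiglitz argument: the gross trading gain
# from being informed shrinks as the share of informed traders grows, because
# their trades reveal the information through prices.

COST = 2.0   # cost of acquiring the information (assumed)
GAIN = 10.0  # trading gain when nobody else is informed (assumed)

def net_benefit_of_informing(share_informed):
    """Net payoff of paying for information, given the share already informed.
    The gross gain is assumed to fall linearly to zero as prices become
    fully revealing."""
    return GAIN * (1.0 - share_informed) - COST

print(net_benefit_of_informing(0.0))  # nobody informed: informing pays (8.0)
print(net_benefit_of_informing(1.0))  # everybody informed: a loss (-2.0)
# Neither corner is stable: when nobody is informed, each trader wants to buy
# the information; when everybody is informed, nobody wants to pay for it.
```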

4.4 Knowledge as a public good

Consulting again Mankiw (2004, p. 225) we find the following definition: “Public goods are neither excludable nor rival. That is, people cannot be prevented from using a public good, and one person’s use of a public good does not reduce another person’s ability to use it.” Interestingly, basic research, i.e. the creation of knowledge, is mentioned as an important example of a public good, immediately after national defense and before the old textbook favorite, the lighthouse. A distinction is drawn, however, between general knowledge, like a mathematical theorem, and specific knowledge, such as the invention of a better battery. Whereas the latter can be patented, and hence exclusion is possible, the former cannot. In a way this distinction is more apparent than real. The use of specific knowledge is not rival from a purely technical point of view: your ability to produce a better battery does not affect my ability to do likewise. What is rival, however, is the commercial utilization of this specific knowledge: if we both produce the better battery we compete for consumers and our profits will go down. Here we are back at Arrow’s dilemma. While markets are not particularly good at providing public goods―since by definition they are not excludable, it is impossible to charge a price for them―it has to be admitted that finding an efficient decision procedure for determining the quantities and types of public goods to be produced is not a trivial task.

4.5 Imperfect information about knowledge

A crucial assumption of the Arrow-Debreu general equilibrium model is that all market participants have perfect information about the qualities and prices of all goods offered. Though some introspection immediately reveals that this assumption is wildly unrealistic, its importance was ignored for a long time. Taking into account that the information of market participants is not only incomplete but also unequally distributed has far-reaching implications, which have been surveyed extensively by one of the pioneers of the economics of information, Joseph Stiglitz (2001). In this paper I want to discuss just a few aspects which are relevant when the commodity under consideration is knowledge. A useful classification of commodities based on the information of the consumer is the following:
1. Inspection goods: the buyer can evaluate the quality of a product on the spot, before buying it.
2. Experience goods: the buyer learns the quality of a good only by using it, i.e. after buying it.
3. Credence goods: the buyer is never able to evaluate the quality of the good or service she has bought.

Knowledge as a commodity sometimes belongs to the first category, but very often to the second and frequently to the third. In fact, if it is possible to evaluate the quality of some piece of knowledge before buying it, there is a problem for the market to work: in order to determine the value of knowledge, a prospective buyer needs to know what it is. But once she knows it herself she no longer needs to buy it. We shall return to this and similar problems in later sections. Experience goods are more common than one might think at first sight. Whether the car I am buying is of good quality or is what Americans call a “lemon” will only be revealed in the future. Closer to our topic, whether the lecture you are attending or the books you are reading are worth the time and money spent can only be judged after the fact.

Credence goods are often provided by experts offering services of which the buyers cannot say whether they were really necessary and/or whether they have actually been performed (Dulleck and Kerschbamer 2006). Again, cars are an example: most of us cannot really determine whether a particular part has to be replaced and, once we have accepted the advice of the expert, whether it actually has been replaced. Medical treatments often display similar problems, and so do many professional services. Research, i.e. the creation of knowledge, is another example. Very often it is not even possible to tell ex post what the expert has actually done, since the outcome we observe is not only the result of his actions but also of some random influences. If we are lucky we get away without an accident even if the brakes have not been repaired properly, and often we overcome some illness despite the treatment we receive (to be fair, both examples can just as well be turned into their opposite). More generally, in many markets we get asymmetric information: the seller knows more about the quality of the good or service he offers than the buyer. If providing better quality is costly, then the seller has an incentive to offer quality which is worse than what he pretends to offer. This may have important implications for the functioning of markets. In particular, prices may no longer be adjusted to equate supply and demand, but may rather serve as an incentive and selection mechanism. Think of labor markets, and in particular of labor markets for scientists. It is not an easy task to monitor what a researcher in some university department actually does. If we see a scientist at his desk staring into emptiness we cannot really tell whether he is thinking of his girlfriend or trying hard to solve a very difficult problem. What we can really observe―at best―is the final result, which depends on three things: the ability of the researcher, his effort, and his good luck.
Now suppose there is an excess supply of researchers. Should universities reduce salaries? If one believes in monetary incentives, the answer even most non-economists will like to hear is no! The reason is that lowering salaries has two negative effects on the quality of research: there is an adverse selection effect, since the most gifted researchers are likely to be able to find better paid jobs elsewhere, and there is a moral hazard problem, since badly paid researchers won’t care very much if they are fired for lack of success, and hence have little incentive to exert much effort (Akerlof and Yellen 1986). Since the recruitment of scientists and the provision of incentives are crucial for the production of knowledge, it should be obvious that relying on market forces alone will not do. They have to be supplemented by various other measures like peer review, the measurement of impact, and rankings, but also by developing a social structure and a value system that induce scientists to do their best. Ongoing debates about university reforms are a good indicator that this is not an easy task.
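The adverse selection half of this argument can be illustrated with a toy calculation. The uniform ability distribution and the rule that outside offers are proportional to ability are hypothetical assumptions chosen only to make the mechanism visible:

```python
# Toy adverse selection: each researcher's outside wage offer is assumed
# proportional to her ability, so cutting the university salary drives out
# exactly the most able, lowering the average ability of those who stay.

ABILITIES = [i / 100 for i in range(1, 101)]  # abilities 0.01 .. 1.00 (assumed)
OUTSIDE_FACTOR = 50.0  # outside offer = 50 * ability (assumed)

def average_ability_of_stayers(salary):
    """Average ability of researchers whose outside offer does not beat salary."""
    stayers = [a for a in ABILITIES if salary >= OUTSIDE_FACTOR * a]
    return sum(stayers) / len(stayers) if stayers else 0.0

print(average_ability_of_stayers(50.0))  # a high salary retains everyone
print(average_ability_of_stayers(25.0))  # a cut salary: the ablest half leaves
```

The moral hazard effect (less effort by those who stay) would come on top of this selection effect.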

4.6 Private and social value of knowledge

If markets are to be used to determine the production and distribution of knowledge, two conditions are necessary―though not sufficient―to ensure that the outcome is efficient: economic agents should be able to determine for themselves the value of the knowledge they are trading, and the private and social value of knowledge should coincide. We have already given examples above which show that under a variety of circumstances these conditions are not satisfied, but at this point I would like to discuss the problem in more detail.

4.6.1 Private value of knowledge

It has already been pointed out that knowledge as a commodity can quite often be viewed as an experience or even a credence good, implying that a buyer does not know for sure the value of what she is about to get, but can only make a more or less informed guess. It was shown some time ago that the winner of an auction for an object about which the bidders have different information and/or beliefs may suffer what has been termed “the winner’s curse” (Wilson 1969): it is not unlikely that the person who is willing to pay the most is also the most optimistic one, with exaggerated expectations. If you look for examples, remember some of the UMTS auctions a few years ago, or think of some transfers in professional football. In any case, the beliefs of individuals about the value of certain objects may lead to rather inefficient decisions and thus impede the smooth functioning of markets.

Interestingly, the private value of information may be negative even if the information is perfectly correct, i.e. economic agents may be willing to pay to prevent certain information from becoming available (Hirshleifer 1971). As an example, suppose there is a village and people know that one of the houses will be destroyed by fire, flood, or some other catastrophe, but they don’t know whose house it is. Assume it would be possible to find out at small cost which house it will be. If people are risk averse, then they will be willing to pay for this information not to be obtained, because as long as it is unknown who will be hit by the disaster they are willing to participate in a mutual insurance scheme. In fact, this example is less far-fetched than it looks at first sight. A very similar situation may arise in stock markets, and another example may gain dramatically in importance as early diagnoses of certain diseases become more accurate. As long as I only learn that I will get some disease, but without hope of prevention or cure, I would rather not know about it. In general, less information is often preferred to more if the latter reduces the opportunities for risk sharing.
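The village example can be checked with a small computation. The number of villagers, the size of the loss and the square-root utility function are illustrative assumptions; the only essential ingredient is that utility is concave, i.e. that villagers are risk averse:

```python
import math

# Hirshleifer's point numerically: with concave (risk-averse) utility,
# villagers are better off NOT knowing whose house will be destroyed,
# because ignorance lets them pool the risk in mutual insurance.

N, WEALTH, LOSS = 10, 100.0, 90.0
u = math.sqrt  # a concave utility function (assumed)

# Information suppressed: all N villagers share the loss equally,
# so each one's wealth is certain.
eu_insured = u(WEALTH - LOSS / N)

# Information revealed: the unlucky villager bears the whole loss alone,
# so ex ante each faces a 1-in-N lottery and no insurance can be written.
eu_revealed = (1 / N) * u(WEALTH - LOSS) + (1 - 1 / N) * u(WEALTH)

print(eu_insured > eu_revealed)  # True: everyone would pay to block the news
```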

4.6.2 Private vs. social value of knowledge

It has already been discussed above that knowledge is likely to generate positive externalities, thus driving a wedge between the social and private returns to knowledge-creating activities. Similarly, insofar as knowledge is a public good its private value is far below its social value, because its use is not excludable. While in both cases the social value of knowledge is greater than its private value, in some cases this relationship is reversed. Consider a so-called “patent race”: several firms are trying to make a certain innovation, say a new medicine, a new electronic device or what have you. For society as a whole it does not really matter whether the discovery is made a few days or even months sooner or later. For those participating in the race, however, it very often makes all the difference to be first rather than second, since usually “the winner takes it all”. As an example, just recall the battle between different video recording systems: in the end VHS was the only survivor, not necessarily because it was the best system, but because it was the first in the market and had an “installed base” of users which could not be successfully attacked by its competitors. In fact, this was not due to obtaining a patent first, but because video systems are an interesting example of network goods, a phenomenon we shall return to below. Note that such races are not only found in the commercial sphere; they occur in academia as well. As an example, consider the current race among some leading quantum physicists for the longest distance over which entangled light can be sent, or for the first usable quantum computer. Of course I hope that my esteemed colleague and friend at the University of Vienna, Anton Zeilinger, wins this race.
However, without in the least denying the importance of this work, from a social point of view it does not matter all that much whether he succeeds a few weeks earlier or later, though it may be extremely important for him, since the reputation―and the financial reward―of being first is disproportionately greater than that of being second. But it is not only in winner-takes-all situations that the private value of knowledge exceeds its social value. Some innovations are hardly improvements compared to existing best practice and serve only to shift profits from one firm to another. Market-guided research may also lead to other inefficiencies. There may be too much duplication of effort, and coordination of research activities would be socially preferable. A related problem concerns the variety of approaches. It can be shown that under certain assumptions firms tend to pursue very similar research strategies where, from a social point of view, diversification would be better. Again this has to do with risk aversion: if I do something similar to my competitor I forgo the chance to come up with something truly unique, but I reduce the risk of complete failure. It may be better to succeed―or fail―in the same way as my competitor than to fail because I tried to do something completely different. The situation is often quite similar in academia: it may be safer to remain within the mainstream, without much hope of coming up with something revolutionary (and valuable), but also without much risk of being deemed an unqualified failure. The problem is that for the individual researcher it is almost impossible to diversify her lines of research, so if she fails she bears the whole burden alone. To sum up this section, we note that it is often very difficult for a person to determine the value of some piece of knowledge or information; that when the private value can be determined it often differs from the social value; and that private risk taking may differ from socially efficient risk taking.
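The diversification point is elementary probability. The success probability below is an assumed number; the contrast between perfectly correlated and independent research lines is the point:

```python
# Two labs, each succeeding with probability p; society wants at least one
# success. If both pursue the same line, their outcomes are assumed perfectly
# correlated (both win or both fail together); on independent lines the
# chance that at least one succeeds is strictly higher.

p = 0.3  # success probability of any single research line (assumed)

p_success_same_line = p                  # correlated: one joint coin flip
p_success_diff_lines = 1 - (1 - p) ** 2  # independent draws

print(p_success_same_line)   # 0.3
print(p_success_diff_lines)  # roughly 0.51
```

Society prefers the independent lines, yet a single lab cannot diversify her own research: failing alone while the rival succeeds may look worse to her than failing together, which pushes both labs toward the same mainstream strategy.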

4.7 Network effects A network good has the property that its utility for an individual user is increasing in the number of other users. An obvious example is telecommunication: If there is only one person who owns a telephone its usefulness is quite limited. Many types of knowledge and knowledge goods are characterized by such network effects. The ICT-sector has been the most spectacular and fastest growing network industry, but it is by no means the only one. Network industries have been the subject of extensive research in recent years because they have interesting and important implications for the functioning of markets (Shy 2001). One of them is the possibility of lock-in effects. As has been shown by Brian Arthur (1989), society may get stuck with an inferior technology, either because it was the first and already had a large number of users―the

so-called installed base―by the time a better technology was available, or because it was superior at the beginning for a small number of users and thus got introduced, while a technology that would have been better for a large number of users never got started. An often quoted―though disputed―example is the QWERTY keyboard. Attacking an incumbent producer of a network good with a strong installed base is quite a risky business, even if a better product is available. Very often there are substantial switching costs―users have to learn how to operate the alternative, i.e. their old knowledge becomes obsolete and has to be replaced by new knowledge―and this makes it harder to assemble the critical mass of users necessary to compete with the incumbent. One may speculate at this point how far these considerations apply to academia when it comes to changing the prevalent paradigm. Some of the most important network goods, especially computer hardware and software, display crucial complementarities, i.e. they consist of several components which are only useful when put together. A computer is useless without an operating system, and the operating system by itself is useless without supporting software. Interestingly, it has been shown that under a wide variety of circumstances producers of the various components have strong incentives to make their products compatible. The main reason is that making some components compatible softens competition for others. The logic is as follows: suppose you have two components, hardware and some software, where the software is a network good. If there are two producers and their machines are not software compatible, i.e. in order to use a machine you have to use the software of its producer, then competition is very stiff, because in order to sell the machine I have to ensure a sufficiently large installed base.
If both machines are software compatible, then the network benefits are the same for both machines and price competition between them becomes weaker. Similarly, it is in the interest of producers that their operating systems be compatible with as many supporting software products as possible. So it is not surprising that technology sharing and open source are frequently encountered in this sector. At the same time it must be mentioned that in some situations private firms may have a strong incentive to keep relevant knowledge to themselves, even when it would be socially desirable to share it. This is particularly true if the incumbent is, or feels, strong enough to fight off any entrant. To sum up this section: network effects are likely to produce markets with a dominant incumbent who is difficult to attack once he has established himself. It takes a far superior product, a very strong entrant, or both to be successful. On the other hand, if there are several producers of

comparable strength, competition between them may well lead to technology sharing and socially desirable outcomes.
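The lock-in dynamics discussed in this section can be illustrated with a toy sequential-adoption model in the spirit of Arthur (1989). All parameters below (standalone values, network bonus, taste shocks) are my own illustrative choices, not taken from his paper.

```python
import random

def simulate_adoption(n_agents=1000, base=(1.00, 1.05), bonus=0.01, seed=0):
    """Agents arrive one at a time and adopt whichever of two
    technologies currently offers the higher payoff: its standalone
    value plus a network bonus proportional to its installed base,
    perturbed by a small idiosyncratic taste shock."""
    rng = random.Random(seed)
    installed = [0, 0]
    for _ in range(n_agents):
        payoffs = [base[i] + bonus * installed[i] + rng.uniform(0.0, 0.1)
                   for i in (0, 1)]
        installed[payoffs.index(max(payoffs))] += 1
    return installed

# After a brief period of mixed adoption, the growing network bonus
# swamps the taste shocks and every later agent joins the early leader:
# the market tips to a single technology.
print(simulate_adoption(seed=0))
```

Once one technology's installed-base lead makes its bonus exceed the largest possible taste shock, no later agent ever switches; which technology wins depends on the early, partly random draws, which is precisely the path dependence Arthur describes.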

4.8 Trading knowledge As we have seen, when knowledge is regarded as a commodity which can be traded in a market like any other commodity, we encounter a number of difficulties which are likely to lead to inefficient outcomes and various types of market failure:
• The paradigm of perfect competition, which is at the heart of the first welfare theorem of the Arrow-Debreu model, is unlikely to be applicable, since economies of scale and scope as well as network externalities will more often than not generate firms with considerable market power.
• Positive externalities, but also winner-takes-all situations and business-stealing effects, drive a wedge between the private and social costs and benefits of knowledge. In addition, risk aversion may lead to socially inefficient behavior. Taken together this implies that socially optimal activities for the creation of knowledge are the exception rather than the rule.
• Insofar as knowledge is a public good there is little incentive for its creation; granting patents and intellectual property rights, on the other hand, provides such incentives but hampers the socially efficient use of knowledge.
• Finally, it is often very difficult to determine the value of knowledge in advance, and often it is not clear what a particular piece of knowledge is actually worth even after one has acquired it. Clearly, this uncertainty does not help markets to perform well.
There is at least one additional difficulty which so far has not been dealt with explicitly. If I sell specific knowledge which can be used profitably, I don't actually give it away: if I tell somebody something I know, I still know it afterwards. Obviously, this seriously limits the willingness to pay for knowledge. Unfortunately, the effect also works the other way round. If I want to sell specific knowledge I have to tell the potential buyer what it is all about. But once I have told him, how can I stop him from using it without paying anything?
To sum up, markets do not seem to do very well when it comes to the creation and distribution of knowledge, and one might understand why economics has been referred to as the "dismal science". However, a few words of caution are in order. First of all, the benchmark we have used for evaluating market performance is an extremely demanding one: it is the first-best outcome in a perfect world, in which an omniscient and benevolent planner optimizes some well-defined welfare function. It should come as no surprise that market outcomes fall short of such an optimum. Secondly, having shown that markets are unlikely to deliver a first-best solution does not mean that there exists another institutional setting which fares any better. Note that many of the problems we have mentioned are intrinsic problems of knowledge, not of markets, i.e. they will be encountered in any alternative setup as well. Thirdly, markets and the forces of competition have been remarkably innovative with respect to finding new ways of organizing the production and distribution of goods and services under changing technological conditions. Much has been made of networks and open source; some authors claim that this is a completely new way of organizing the production and distribution of goods, outside of and in addition to the traditional forms of firms (hierarchies) and markets (polyarchies) (Benkler 2001, 2006; Powell 1990). While I find these contributions interesting, thought-provoking and informative, I also find some of their claims exaggerated. It is beyond the scope of this paper to go into details, but it should be mentioned that markets and firms do play a crucial role in networks. Firms are part of several networks, and in fact they are central organizers in some of them; firms have emerged as a consequence of successful open source projects; firms have created open source networks inside themselves; the products of open source networks compete in markets; and several such projects would not be possible without commercial competitors or forerunners.
In my view, networks at different levels and with different internal organizations are one, albeit an important, reaction to some of the problems pointed out above; but they are part of the market economy, and they can be fruitfully analyzed and understood with the tools developed in economics (Lerner and Tirole 2002, 2005).

5. Research Agenda It seems necessary to come down from the level of generality used in this survey and to be more concrete. It has to be acknowledged that there is no such thing as a commodity "knowledge" which can be treated as homogeneous, at least to the same degree as cars or computers. "Knowledge" is created in various forms, under various conditions and for various purposes, and it seems impossible to determine its properties and its handling in markets or other institutions without specifying the context. So in some sense we are

back at the old and unsolved epistemological question of what knowledge is. However, economic considerations along the lines presented above should be helpful. We should look at the type of knowledge we are considering, and Machlup's classification is probably a good starting point. We should look at the technical conditions under which knowledge is produced, e.g. the extent to which it can be modularized, and we should look at the ways in which it can be utilized, commercially and otherwise. And finally, we should keep in mind that economics provides a useful way to look at things, but it needs to be supplemented by the insights of other disciplines as well.

6. References
Aghion, P. and P. Howitt (2005), Appropriate Growth Policy: A Unifying Framework, The Joseph Schumpeter Lecture delivered at the 20th Annual Congress of the European Economic Association.
Akerlof, G.A. and J.L. Yellen (eds.) (1986), Efficiency Wage Models of the Labor Market, New York: Cambridge University Press.
Allen, B. (1990), Information as an Economic Commodity, American Economic Review 80 (2), 268-273.
Arrow, K.J. (1962), Economic Welfare and the Allocation of Resources for Invention, in The Rate and Direction of Inventive Activity: Economic and Social Factors, National Bureau of Economic Research, Princeton University Press.
Arrow, K.J. and G. Debreu (1954), Existence of an Equilibrium for a Competitive Economy, Econometrica 22, 265-290.
Arthur, B. (1989), Competing Technologies, Increasing Returns, and Lock-in by Historical Events, Economic Journal 99, 116-131.
Benkler, Y. (2001), Coase's Penguin, or, Linux and The Nature of the Firm, Yale Law Journal 112, 369.
Benkler, Y. (2006), The Wealth of Networks, New Haven and London: Yale University Press.
Boulding, K.E. (1966), The Economics of Knowledge and the Knowledge of Economics, American Economic Review 56 (2), 1-13.
Dulleck, U. and R. Kerschbamer (2006), On Doctors, Mechanics and Computer Specialists, Journal of Economic Literature 44 (1), 5-42.
Griliches, Z. (1995), R&D and Productivity: Econometric Results and Measurement Issues, in P. Stoneman (ed.), Handbook of the Economics of Innovation and Technological Change, Oxford: Blackwell, 52-89.
Grossman, S.J. and J.E. Stiglitz (1980), On the Impossibility of Informationally Efficient Markets, American Economic Review 70 (3), 393-408.
Hayek, F.A. (1945), The Use of Knowledge in Society, American Economic Review 35 (4), 519-530.
Hirshleifer, J. (1971), The Private and Social Value of Information and the Rewards to Inventive Activity, American Economic Review 61 (4), 561-574.
Lerner, J. and J. Tirole (2002), Some Simple Economics of Open Source, Journal of Industrial Economics 52, 197-234.
Lerner, J. and J. Tirole (2005), The Economics of Technology Sharing: Open Source and Beyond, Journal of Economic Perspectives 19, 99-120.
Machlup, F. (1962), The Production and Distribution of Knowledge in the United States, Princeton University Press.
Machlup, F. (1980), Knowledge and Knowledge Production, Princeton University Press.
Mankiw, N.G. (2004), Principles of Microeconomics, Thomson South-Western.
Powell, W.W. (1990), Neither Market nor Hierarchy: Network Forms of Organization, in B.M. Staw and L. Cummings (eds.), Research in Organizational Behavior, vol. 12, Greenwich, CT and London: JAI Press, 295-336.
Schumpeter, J. (1942), Capitalism, Socialism and Democracy, New York: Harper & Row.
Shy, O. (2001), The Economics of Network Industries, Cambridge University Press.
Stiglitz, J.E. (2001), Information and the Change in the Paradigm in Economics, Nobel Prize Lecture.
Wilson, R. (1969), Competitive Bidding with Disparate Information, Management Science 15, 446-448.

East–West Perspectives on Privacy, Ethical Pluralism and Global Information Ethics Charles Ess, Drury

Introduction: the Manichean Problem Information and Communication Technologies (ICTs) are both primary drivers and facilitating technologies of globalization—and thereby of exponentially expanding possibilities of cross-cultural encounters. Currently, over one billion persons throughout the planet have access to the Web: of these, Asian users constitute 35.8% of the Web population, while Europeans make up 28.3% of world users—and North Americans only 20.9% (Internet World Stats, 2007). Our histories teach us all too well that such encounters—especially those concerning potentially global ethical norms—always run the risk of devolving into destructive rather than emancipatory events. Specifically, these encounters risk pulling us into one of two contradictory positions. First, naïve ethnocentrisms too easily issue in imperialisms that remake "the Other" in one's own image—precisely by eliminating the irreducible differences in norms and practices that define distinctive cultures. Second, these imperialisms thereby inspire a relativistic turn to the sheerly local—precisely for the sake of preserving local identities and cultures. Hence the general problem: how might we foster a cross-cultural communication for a global information and computing ethics (ICE) that steers between the two Manichean polarities of ethnocentric imperialism and fragmenting relativism?

A Global ICE: Basic Requirements This difficulty is not new with ICTs and ICE—but it is complicated by the fact that ICTs, most especially the Internet, embed and foster the cultural norms and communicative preferences of their Western roots (see Ess

2006a, 2006b, 2007b). By the same token, as Soraj Hongladarom points out, until relatively recently computer ethics has remained largely the work of Western ethicists (2007, 110). But of course, there are sharp contrasts between the basic assumptions underlying Western ICE and those defining world traditions and cultures as shaped by Confucian thought, Buddhism, Indigenous traditions, etc. To begin with, in contrast with the modern Western emphasis on the atomistic individual as a primary reality, many of these traditions understand human beings as relational beings, ones whose identity and reality essentially turn on their relationships with others in the larger community (and, perhaps, with nature and/or divinity itself). So Barbara Paterson suggests that, in general, "In African philosophy, a person is defined through his or her relationships with other persons, not through an isolated quality such as rationality…" (2007, 157). This means in turn that "African thought sees a person as a being under construction whose character changes as the relations to other persons change" (ibid.). This notion of the human subject as a relational being is likewise found in Confucian thought (see Ames and Rosemont 1998, 49). These irreducible differences thus work to define the differences between cultures—and thereby between individuals as shaped by these cultures. Such foundational differences are thereby essential to our identities as cultures and as members of cultures. Given that persons and cultures have a basic right to identity (Ess 2006a), this immediately means that we are obliged to honor and foster the irreducible differences that define our individual and cultural identities. At the same time, however, as we seek to develop a global ICE, we must do so in ways that simultaneously foster and sustain a shared ethos or set of ethical practices.

How far ought we go towards “the Other”? We must ask still one more question before proceeding to develop a global ICE: How far do we want / need / ought to go to meet “the Other”? This question is central because our responses to it will determine how far we may remain satisfied with an ethics that emphasizes shared assumptions and obligations only—and how far we may be willing, if not required, to take up additional ethical obligations necessary in order to honor and foster the irreducible differences that define our cultural and individual identities. In the following, I begin to sketch out the characteristics of each of these responses.


Minimal standards—emphasis on commonalities What we can think of as a set of minimal ethical standards for a global ICE emphasizes commonalities more than differences, for the sake of largely pragmatic economic interests. These minimal standards begin with what Johnny Søraker has described as pragmatic arguments, i.e., arguments that appeal to our shared economic interests. Such arguments are strong candidates for inclusion in a global ICE precisely because they largely bypass foundational cultural and political differences (2006). Such arguments seem to work: for example, as a condition of joining the World Trade Organization, China agreed to the Human Subjects Protections endorsed by the World Health Organization as required for medical research: even though these protections were quite alien to the philosophical foundations of Chinese cultures and earlier medical practices, the economic advantages of WTO membership were too great for China to turn down (Döring, 2003). Moreover, we may expect a global ICE to include agreements on identical values and standards because globalization—as fueled by ICTs themselves—fosters cultural hybridization and the creation of "third identities" (i.e., syntheses of two distinct sets of cultural values, practices, beliefs, etc.) that represent precisely a shared, global identity. One of the clearest examples of such a third identity is in the domain of privacy. As a number of commentators have observed, young people in Asian countries—specifically Japan, Thailand, and China—increasingly insist on a Western-like practice of individual privacy, one that directly contradicts traditional Asian notions (see Nakada & Tamura 2005, Rananand 2007, and Lü 2005, respectively).
Clearly, young people in these countries are influenced by their exposure to Western notions of individual privacy: and, insofar as there is an increasingly identical set of understandings and values surrounding notions of individual privacy in both East and West, then we may expect that a global ICE will be able to develop a single, (quasi-) universal set of norms and practices for protecting that privacy.

Maximal standards: resonance But insofar as the irreducible differences defining diverse cultures and identities are not eradicated or overshadowed by such hybridizations, we are left with the difficulty of crafting a global ICE that will preserve such differences. As I've suggested, to do so depends in part on how far we believe we ought / need / want to go beyond pragmatic relationships that emphasize our shared commonalities—and thus, how far we are prepared to engage "the Other" as Other, i.e., precisely in ways that recognize, respect, and indeed foster our irreducible differences. A central model for encountering "the Other" in this second way is provided by the Japanese Buddhist and comparative philosopher Kitarō Nishida's understanding of resonance. This notion of resonance, we will see, is of interest in part because it represents a notion shared between such Western philosophers as Plato and Aristotle and such Eastern traditions as Confucian, Daoist and Buddhist thought.


Nishida and resonance Nishida draws on the language of German philosophy, emphasizing that to preserve our identities as irreducibly distinct from one another, our relationships with one another always take place across the difference of “absolute opposites” [Entgegengesetzter]. But if only sheer difference defines our relationship—then there will be no connection or unity [Vereinigung]. To describe human relationships as a structure that holds together both irreducible difference and relationship, Nishida turns to the term and concept of resonance. As with its musical definition, a resonant relationship entails a connection that simultaneously sustains the irreducible differences required to keep our identities and awareness separate: “The mutual [gegenseitige] relationship of absolute opposites [Entgegengesetzter] is a resonant [hankyō] meeting or response. … Here we encounter a unity of I and You and at the same time a real contradiction.” (Nishida 1988, 391f.; cited in Elberfeld 2002, 138f. Translation from the German by CE) As I have shown elsewhere, resonance and an affiliated pluralism are central to the work of eco-feminist Karen Warren (1990) and specifically the information ethics of Larry Hinman (1998). Similar notions of resonance emerge in contemporary political philosophy, most specifically in the work of Charles Taylor (2002) and his notion of strong complementarity (see Ess, 2006c). Such complementarity, moreover, is not restricted to other human beings. We may further seek—or believe ourselves required to seek—such resonance with the larger community, and/or the natural order, and/or perhaps even divinity (so far as we believe divinity to exist). Broadly speaking, the further we understand our interrelationship with “the Other” to extend, the more extensive our ethical obligations will become. 
Between Nishida and Taylor, then, we can discern models of resonance and complementarity for our engagements with "the Other"—whether in human, natural, and/or divine form—that insist on preserving and fostering the irreducible differences that define our identities as distinct from one another, while simultaneously sustaining relations that, ideally, foster the flourishing of all. This understanding of the sorts of harmonies we are to strive for, moreover, guides the ethical and political thought of a range of world traditions, including Aristotle, Confucian thought, African thought, etc. At the same time, this emphasis on harmony is likewise a theme shared by contemporary virtue ethics, ecofeminism and environmental ethics. Hence these notions of resonance, complementarity, and harmony appear to offer a kind of ethical lingua franca that may serve as common ground for a global ICE. But again, we will

also see that the ethical demands and obligations these notions entail go well beyond those that follow from an initial—but minimal—emphasis on commonalities alone, as we seek to foster engagements with "the Other" via globally distributed ICTs in ways that preserve the irreducible differences at work in such resonant relationships. To see how this is so, I first turn to the possible ways—first in theory and then in praxis—of developing such a global ICE, one that constructs a pluralism constituted by shared ethical norms and values alongside multiple interpretations or applications of these values as refracted through—and thus reflecting and preserving—irreducibly different cultural traditions, practices, etc.

Ethical pluralism West and East The difficulty of developing an ethics that works across diverse cultures and traditions is an ancient problem: thinkers in both Eastern and Western traditions have developed often highly sophisticated ways of resolving the apparently conflicting demands of agreement and difference. At the same time, the ancient Western and Eastern solutions in fact closely resemble one another in several fundamental ways.

Ethical Pluralism West: Plato, Aristotle, phronesis and “cybernetic pluralism” Both Plato and Aristotle—and subsequently, Aquinas—responded to this complex requirement in at least two key ways. To begin with, Plato develops a view that I have characterized as “interpretive pluralism” (Ess, 2006c). On this view, as elaborated especially in The Republic, we may conjoin shared ethical norms with irreducible differences by recognizing that diverse ethical practices may represent distinctive interpretations or applications of those shared norms. Such differences, that is, do not necessarily mean, as ethical relativists would argue, that there are no universally legitimate ethical norms or values: rather, such differences may mean only that a given norm or value is applied or understood in distinctive ways—precisely as required by the details of a given context as shaped by a particular tradition, cultural norms, and practices.

Aristotle builds on Plato's teaching in several ways, beginning with his notion of pros hen or "focal" equivocals. Such equivocals stand as a linguistic middle ground between homogeneous univocation (which requires that a term have one and only one meaning) and pure equivocation (where a single term may have multiple but entirely unrelated meanings—for example, "bat" can refer both to a winged mammal and to a wooden stick used in baseball). Pros hen or focal equivocals, by contrast, are terms with clearly different meanings that simultaneously relate or cohere with one another, as both point towards a shared or focal notion that anchors the meaning of each. Aristotle uses the example of "healthy" to illustrate his point: "… the term 'healthy' always relates to health (either as preserving it or as producing it or as indicating it or as receptive of it …" (Metaphysics 1003b2-4; cf. 1060b37-1061a7). So we could say, for example, that a particular diet is healthy1—and good kidney functioning may also be said to be healthy2: but the two terms are not univocals—that is, they do not have precisely the same meaning. On the contrary: by healthy1 we mean that the diet contributes to the state of being healthy—while healthy2 means that good kidney function is a reflection of the state of being healthy. At the same time, precisely because healthy1 and healthy2 refer to the same "state of being healthy" that, as a shared focal point, grounds their meanings, their differences in meaning are conjoined with a coherence or connection alongside these differences. For Aristotle, our ability to negotiate the complex ambiguities of pros hen equivocals is affiliated with a particular kind of practical judgment, i.e., phronesis.
Just as we can recognize and appropriately utilize terms that hold different but related meanings—so phronesis allows us to take a general principle (as the ethical analogue to the focal term grounding two pros hen equivocals) and discern how it may be interpreted or applied in different ways in different contexts (as the ethical analogues to the two pros hen equivocals—i.e., irreducibly different and yet inextricably connected). What phronesis thereby makes possible is an ethical pluralism which recognizes that shared ethical principles and norms will necessarily issue in diverse ethical judgments and interpretations, as required by irreducibly different contexts defined by an extensive range of fine-grained details. In fact, Aristotle's understanding of phronesis, and thus of ethical pluralism, is intimately connected with a central component of computation—namely, cybernetics. Most of us are familiar with the term—as originally developed by Norbert Wiener—as referring to the ability of computer systems to self-regulate and self-correct their processes through various forms of feedback mechanisms. But we need reminding here that "cybernetics" is derived from Plato's

use of the cybernetes. The cybernetes is a steersman, helmsman, or pilot, and Plato uses the cybernetes as a primary model of ethical judgment—specifically, of our ability to discern and aim towards the ethically justified path in the face of a wide range of possible choices. So Plato has Socrates observe in The Republic: … a first-rate pilot [cybernetes] or physician, for example, feels the difference between the impossibilities and possibilities in his art and attempts the one and lets the others go; and then, too, if he does happen to trip, he is equal to correcting his error. (Republic, 360e-361a, Bloom trans.; cf. Republic I, 332e-c; VI, 489c.)

“Cybernetics,” then, means more originally the capability of making ethical judgments in the face of specific and diverse contexts, complete with the ability to self-correct in the face of error and/or new information. This is to say, the cybernetes, as a model of ethical self-direction, thereby embodies and exemplifies the sort of ethical judgment that Aristotle subsequently identifies in terms of phronesis—i.e., precisely the ability to discern what general principles may apply in a particular context—and how they are to be interpreted to apply within that context as defined by a near-infinite range of fine-grained, ethically relevant details.

Bridge notions with Eastern thought: pluralism, harmony, and resonance These notions of judgment and pluralism are found throughout diverse religious and philosophical traditions—including, for example, Islam (Eickelman 2003) as well as Confucian thought (Chan 2003). Moreover, Rolf Elberfeld (2002) has extensively described how the metaphors of harmony and resonance appear in both Western and Eastern traditions, beginning with Plato's account of the role of music as critical to education in The Republic (401d). The metaphors of resonance and harmony, moreover, are clearly structures of pluralism: that is, these notions explicitly entail structures of connection alongside and in the face of irreducible difference. Specifically, the Chinese term ying (resonance) means precisely "a conjunction [Zugleich] of unity [Vereinigung] and division [Trennung]" (Elberfeld 2002, 132). Finally, Elberfeld demonstrates that these understandings of harmony, resonance, and a correlative ethical pluralism are also found in both ancient and contemporary Daoism and Buddhism (2002, 137f.). And, as we have seen, the highly influential Japanese comparative philosopher Nishida Kitarō takes up the Japanese version of resonance [hankyō] as key to our knowing one another as human beings. Hence it is clear that these notions of pluralism and resonance are shared cross-culturally—but, unlike simple commonalities, they further include the ability to articulate and preserve irreducible differences.

Pluralism in Contemporary ICE Indeed, there are at least two examples of such pluralism operating in contemporary theoretical work, beginning with Terrell Ward Bynum's synthesis of the work of Norbert Wiener and Luciano Floridi in what Bynum calls "flourishing ethics" (2006). Similarly, Luciano Floridi (2006) has developed a conception of what he calls a "lite" information ontology—precisely with a view towards avoiding, on the one hand, a cultural imperialism (resulting from unilaterally and homogeneously applying a single ethical framework across all cultures), while also avoiding, on the other hand, a merely relativist insistence on a local framework only, one that would thereby remain fragmented and isolated from other cultures and frameworks, as the effort to preserve their irreducible differences would (mistakenly) insist on avoiding all shared, putatively universal norms and values. A "lite" ontology can serve as a shared framework that allows precisely for a pluralistic diversity of understandings and applications of a shared notion of informational privacy—in effect, the focal, pros hen notion referred to by specific understandings and implementations of privacy within specific, and irreducibly different, cultural settings. In addition to these strong notions of pluralism in prominent ICE theories, a number of important examples instantiate pluralism in praxis.

Pluralism in Praxis As a first example: Bernd Carsten Stahl develops “critical reflexivity” as a procedurally-oriented approach to ICE, one intended precisely to avoid the Manichean polarities of homogenization and fragmentation that confront any effort to develop ethical norms to be shared across cultures. In doing so,

Stahl takes up the central difficulty of defining 'emancipation' in a way that would work cross-culturally. This requires a formal approach that emphasizes creating "…procedures that allow the individuals or groups in question to develop their own vision of emancipation or empowerment" (2006, 105). Such a procedural approach has the advantage that "the critical researcher will not prescribe certain features that she believes to be emancipatory, but that she gives the research subjects the chance to define their version of emancipation" (ibid., emphasis added). Such critical reflexivity and its allied procedural approach thereby issue in a pluralism that recognizes and respects the irreducible differences defining individual and cultural identities. Stahl sees such pluralism emerging from the application of this procedural approach to debates regarding government and the democratic uses of ICTs (2006, 105). In addition, Deborah Wheeler (2006) documents how women in Jordan have been able to take up ICTs in ways that are indeed emancipatory—where 'emancipation,' as Stahl describes, emerges from the agency of local actors who seek to determine the meanings and practices of 'emancipation' that make sense and work best within their specific cultural frameworks and real-world contexts. Stahl's critical reflexivity and procedural approach to defining central norms such as "emancipation" thus issues here, in praxis, in "emancipation" as a pluralistic concept, one that allows for diverse interpretations and implementations across different cultures. Similar examples of pluralism can be noted—e.g., Maja van der Velden's account of how such pluralism may be encoded in the source code of software used by diverse Indymedia groups around the world, such that each group is able to modify the software to meet local conditions and requirements while preserving its main features (2007).
More broadly, I have argued that such pluralism can be seen at work in notions of privacy as defined within Western contexts (using the U.S. and Germany as examples) and Eastern contexts (focusing primarily on China and Hong Kong). Here, the pluralism at work serves to reflect and preserve profound differences between Western notions of the individual as a primary but atomistic reality and Confucian notions of the self as a relational being. As we might expect, these differences lead to very different data privacy protections. While limited in comparison with Western rights and protections (which emphasize the importance of privacy rights for individuals as rational autonomies participating in democratic governance), privacy rights and data privacy protections are nonetheless emerging in Thailand, China, and Hong Kong (justified primarily by the contribution data privacy protections make towards economic development as online commerce becomes increasingly important in these economies). In this way, we again see a pluralistic, pros hen structure emerge. Privacy and data privacy protection serve as the ethical focal points towards which both Western and Eastern societies orient their laws—but each society understands and interprets the meaning of privacy and data privacy protection in ways that fit its specific context, traditions, values, norms, practices, etc. (Ess 2006c). As a last example, Soraj Hongladarom has taken up these apparent conflicts between Western and Eastern conceptions, with particular attention to the Buddhist traditions (Theravada and Mahayana) of Thai society (2007). Hongladarom moves beyond the initial contrast between the Western emphasis on the atomistic individual vs. Eastern conceptions of the individual as a relational being (Confucian thought) or, as in Buddhism, as a mistaken belief altogether (one that is, indeed, at the source of human dissatisfaction). Hongladarom draws on Nagarjuna’s distinction between the self as an empirical-conventional reality, on the one hand, and ultimate reality on the other: given this distinction, Buddhism is perfectly capable of endorsing and taking the individual self as real—at the empirical-conventional level. Indeed, the Buddhist striving towards Enlightenment (as the dissolution of the “self”) requires individual effort and responsibility—manifest, e.g., in the injunction to cultivate compassion towards others (Hongladarom 2007, 118). Hence Hongladarom argues that Buddhist societies such as Thailand have a prima facie reason to protect the privacy of such (empirical-conventional) individuals, especially as part of a movement towards establishing a more democratic society in Thailand. That is, the Buddhist injunction, in which each person is responsible for his or her own liberation, thereby sustains notions of equality and democracy that are at least closely similar to those developed and endorsed in Western societies.
In my terms, there emerges here yet again an interpretive pluralism regarding conceptions of the self and privacy as pros hen, ethical focal points: in such a pluralism, modern Western notions of the self (as an ultimate reality whose privacy is a positive good) and Buddhist conceptions of the self (as an empirical-conventional reality) are understood as diverse interpretations or understandings of focal notions of self and privacy—and thereby as conceptions that may nonetheless resonate or harmonize with one another. Indeed, Hongladarom and I have further argued that this harmony extends between the Buddhist notion of Attasammapanidhi, of ethical self-direction and self-adjustment, and Plato’s model of the cybernetes, the pilot or steersman who symbolizes a similar capacity for ethical self-correction (Hongladarom & Ess 2007, xix). Finally, Hongladarom points out that Buddhist ethics closely resemble Western-style virtue ethics and the pragmatic ethics of Richard Rorty (1975). Hongladarom’s analysis thus identifies and reinforces a further deep resonance between Western and Eastern thought—namely, between Western virtue ethics (whether in Socratic, Aristotelian, and/or contemporary feminist forms) and the ethical systems of Confucian thought and Buddhism. Taken together with the previous examples of privacy East-West, the Thai example again marks out in praxis as well as in theory the possibility of a global ICE—one constituted by shared ethical focal points (i.e., shared norms, values, etc.) that are nonetheless articulated and instantiated in diverse ways as these focal points are interpreted and applied in distinctive cultural contexts.

Emerging Rights / Duties?

In light of the theoretical foundations and practical expressions of pros hen or focal pluralism in an emerging and genuinely global ICE, what rights and obligations might emerge as we take up ICTs more and more into the fabric of our lives? I can see three layers of responses to this question.

1. Irresoluble conflict

While multiple instances demonstrate the possibility of resolving ethical differences within a pluralistic resonance or harmony, manifestly not all such conflicts will allow for such resolutions. So, for example, Dan Burk (2007) documents the intractable differences between U.S. and European Union approaches to copyright—with the U.S. property-oriented approach currently dominating over the E.U. author-oriented approach. Similarly, Pirongrong Ramasoota Rananand suggests that the tradition and affiliated customs of the Thai “surveillance state” may succeed in keeping “privacy” an interesting idea, but not a right articulated and defended in law (2007).

2. Minimal requirements: shared commonalities

Again, it is possible to begin our encounters with one another globally via ICTs with the straightforward quest for commonalities, including a set of minimal rights and obligations towards one another, justified at least by shared economic interests (Søraker 2007). What emerges from this approach is what Westerners will recognize as familiar but primarily negative obligations: do not violate another person’s privacy, right to intellectual property, etc.—by not sharing passwords, not hacking where you do not belong, not copying illegally, and so on.

3. Maximal requirements: meeting “the Other” online

More broadly, our emerging and global ICE depends very much on how far we want / will / need / ought to go in meeting “the Other” online. Presuming that we seek to meet with and engage “the Other” in a more robust way—i.e., one defined by our willingness to acknowledge not only commonalities but also the irreducible differences that define our individual and cultural identities—we are apparently required to move to a more complex mode of thinking and behaving, one shaped precisely by the structures of pluralism and harmony. To move in these more robust directions, we can perhaps draw at least initial guidance from the following considerations.

Cross-cultural communication ethics

Broadly speaking, two of the most important factors of successful cross-cultural communication that sustains the irreducible differences defining individual and cultural identities are trust and the ability to recognize and respond effectively to linguistic ambiguity—the ambiguity that allows for a pluralistic understanding of basic terms and norms as holding different interpretations or applications in diverse cultures (Ess and Thorseth 2006). Such pluralism allows precisely for a structure of both shared commonalities and irreducibly different understandings and practices that emerge from our distinctive cultures: thereby, pluralism and ambiguity are necessary conditions for cross-cultural encounters with one another that preserve these irreducible differences as part of the resonance that describes such engagements. Unfortunately, these dimensions of trust, ambiguity, and resonance may be hindered rather than fostered by online environments (cf. Søraker 2006; Grodzinsky & Tavani 2007). Moreover, these elements of human communication finally require the now familiar work of judgment—beginning with judgments as to how far or close one’s meaning is understood by “the Other,” and in turn, how far one understands the meanings of the Other: even though we may use the same word or term, its differences in our diverse cultural settings require very careful attention and judgment to determine whether or not we are sliding into equivocation and misunderstanding. But earning and sustaining trust, successfully recognizing and comfortably negotiating linguistic ambiguities, and exercising the needed judgment in establishing and sustaining resonant relationships that preserve our irreducible differences—these capacities are not easily captured in analytical frameworks, much less taught in any formal way. They can, of course, be learned through example and experience with embodied teachers: but this again means that the most important elements of successful cross-cultural communication may not be best learned in the disembodied context of contemporary online venues (cf. Dreyfus 2001).

Information justice and the cultivation of character?

Numerous writers have argued that the rights-based approaches of the West will not work well in “other” cultures. As we have seen, such approaches emphasize the autonomous individual as distinct from the larger community. Such an approach sharply contrasts with the basic assumptions regarding the individual as a relational being that shape the more communitarian, collectively oriented cultures and traditions of Africa, indigenous peoples, those countries shaped by Confucian and Buddhist traditions, etc. (cf. van der Velden 2007, 83). For his part, Hongladarom argues that the more radical Buddhist solution to the problem of protecting privacy is not simply to erect laws and create technological safeguards: it is rather to attack the root cause of our motivations to violate privacy in the first place—namely, egoism and its affiliated greed (2007, 120f.).

Conclusion

In short, a global ICE that seeks to move beyond shared commonalities and their comparatively negative requirements will apparently call upon us to take up a range of positive obligations and duties, as these are required if we are to preserve irreducible differences while simultaneously engaging in dialogue with “the Other.” Happily, these positive obligations and duties are not entirely foreign to the Western traditions: both ancient and contemporary feminist virtue ethics and ethics of care move us in these directions, as do the deontological ethics of Kant and others. As we work, individually and collectively, and especially cross-culturally, to develop a global ICE, part of our response, as I hope I’ve shown with some clarity, depends on how we respond to a central question: how far am I prepared to go today—i.e., how well am I prepared to take up relationships with “the Other” that entail not simply comparatively straightforward commonalities and pragmatic agreements, but further entail the difficult efforts to understand and negotiate ambiguity and irreducible difference, precisely in the name of preserving individual and cultural differences (perhaps, as Paterson argues, even preserving the environment), where such negotiations will require the skills, learned only slowly and over a lifetime, of judgment and the cultivation of compassion and care? Again, the cultivation of such virtues is not entirely alien to Western traditions: on the contrary, I have argued elsewhere for the necessity of an education that fosters Socratic critical thinking and moral autonomy as key to moving beyond one’s own culture towards a more encompassing understanding of a wide diversity of cultures—a movement captured in Plato’s Allegory of the Cave, and further exemplified in our notions of Renaissance women and men who attain the multiple cultural, linguistic, and communicative fluencies that allow them to live and work comfortably with “Others” around the globe. Contra “cultural tourists” and “cultural consumers,” whose ethnocentrism may only be reinforced rather than challenged by their online engagements, such a Socratic-Renaissance education would further foster, following Habermas and feminism, an empathic perspective-taking and solidarity with one’s dialogical partners—including our sister and fellow cosmopolitans (world citizens).
Of course, such education aims towards the development of phronesis, the practical wisdom required to negotiate the multiple contexts of ethics and politics, with the goal of achieving eudaimonia, human contentment and harmony, in one’s own society and the larger world (Ess 2004, 164). And, in terms that have emerged here, such an education would further highlight the importance of moving beyond pragmatic commonalities and shared economic interests to the pluralism of the cybernetes, the one who is able to discern what ethical course to pursue in a specific context—including the often radically diverse contexts of irreducibly distinct cultures—and who is able to correct her errors when they are made. Resonant with Socratic, Aristotelian, and feminist virtue ethics, such an education would further seek to foster the virtues of compassion and care. Such compassion and care are essential to healing the ruptures that follow upon our inevitable mistakes in our efforts to understand, work, and live with “the Other”—most especially as we venture out into new linguistic and cultural settings. Such compassion and care, finally, are essential to building and sustaining the trust essential to all human interactions. ICTs continue their apparently inexorable expansion throughout the world—meaning, they are taken up by more and more people in diverse cultural contexts and settings. It seems certain that if we are to avoid a homogeneous world culture—what Benjamin Barber famously called “McWorld” (1995)—more and more of us will need to take up the moral postures and communication skills of the cybernetes, rather than simply pursuing commonalities, pragmatics, and economic self-interest. Perhaps the dramatic scope and speed of cross-cultural encounters made possible precisely by ICTs might help more and more people recognize the need for such exemplary ethics and cultivation of character: but such hopes, of course, must recognize the multiple ways in which most of our online engagements rather foster the minimal obligations entailed by seeking out simply shared interests and pragmatic commonalities, especially as these engagements are oriented towards easy consumption.

References

Ames, Roger, and Rosemont, Henry. (1998). The Analects of Confucius: A Philosophical Translation. New York: Ballantine Books.
Aristotle. (1968). Metaphysics I-IX (Vol. XVII, Aristotle in Twenty-Three Volumes). H. Tredennick, trans. Cambridge, Mass.: Harvard University Press.
______. (1960). Posterior Analytics, Topica. H. Tredennick & E. S. Forster, trans. Cambridge, Mass.: Harvard University Press.
Barber, Benjamin. (1995). Jihad vs. McWorld: How Globalism and Tribalism Are Reshaping the World. New York: Ballantine Books.
Burk, Dan. (2007). Privacy and Property in the Global Datasphere. In S. Hongladarom and C. Ess (eds.), 94-107.
Bynum, Terrell Ward. (2006). A Copernican revolution in ethics? In G. Dodig-Crnkovic & S. Stuart (eds.), Computing, Philosophy, and Cognitive Science. Newcastle-upon-Tyne, UK: Cambridge Scholars Press.
Chan, Joseph. (2003). Confucian Attitudes towards Ethical Pluralism. In Richard Madsen and Tracy B. Strong (eds.), The Many and the One: Religious and Secular Perspectives on Ethical Pluralism in the Modern World, 129-153. Princeton: Princeton University Press.

Döring, Ole. (2003). China’s struggle for practical regulations in medical ethics. Nature Reviews Genetics 4 (March), 233-239.
Dreyfus, Hubert. (2001). On the Internet. New York: Routledge.
Eickelman, D. F. (2003). Islam and Ethical Pluralism. In Madsen and Strong (eds.), The Many and the One: Religious and Secular Perspectives on Ethical Pluralism in the Modern World, 161-180. Princeton and Oxford: Princeton University Press.
Elberfeld, Rolf. (2002). Resonanz als Grundmotiv ostasiatischer Ethik [Resonance as a Fundamental Motif of East Asian Ethics]. In Rolf Elberfeld and Günter Wohlfart (eds.), Komparative Ethik: Das gute Leben zwischen den Kulturen [Comparative Ethics: The Good Life between Cultures], 131-141. Cologne: Edition Chora.
Ess, Charles. (2004). Computer-Mediated Colonization, the Renaissance, and Educational Imperatives for an Intercultural Global Village. In Robert Cavalier (ed.), The Internet and Our Moral Lives, 161-193. Albany, NY: SUNY Press.
______. (2006a). From Computer-Mediated Colonization to Culturally-Aware ICT Usage and Design. In P. Zaphiris and S. Kurniawan (eds.), Advances in Universal Web Design and Evaluation: Research, Trends and Opportunities, 178-197. Hershey, PA: Idea Publishing.
______. (2006b). Du colonialisme informatique à un usage culturellement informé des TIC [From computer-based colonialism to culturally informed ICT usage]. In J. Aden (ed.), De Babel à la mondialisation: apport des sciences sociales à la didactique des langues [From Babel to globalization: the contribution of the social sciences to language teaching], 47-61. Dijon: CNDP - CRDP de Bourgogne.
______. (2006c). Ethical Pluralism and Global Information Ethics. Ethics and Information Technology 8 (4: November), 215-226.
______. (2007a). Cybernetic Pluralism in an Emerging Global Information and Computing Ethics. International Review of Information Ethics 7 (September). Draft version online:
______. (2007b). Déclinaisons culturelles en ligne : observation « de l’autre » [Cultural variations online: observing “the Other”]. Etudes De Linguistique Appliquée, No. 146 (avril, mai, juin 2007).
Ess, Charles, and Thorseth, May. (2006). Neither relativism nor imperialism: Theories and practices for a global information ethics. Ethics and Information Technology 8 (3), 91-95.
Floridi, Luciano. (2006). Four Challenges for a Theory of Informational Privacy. Ethics and Information Technology 8 (3), 109-119.
Grodzinsky, Frances S., and Tavani, Herman T. (2007). Online Communities, Democratic Ideals, and the Digital Divide. In S. Hongladarom & C. Ess (eds.), 20-30.

Hinman, Lawrence. (1998). Ethics: A Pluralistic Approach to Moral Theory. Fort Worth: Harcourt, Brace.
Hongladarom, Soraj. (2007). Analysis and Justification of Privacy from a Buddhist Perspective. In S. Hongladarom and C. Ess (eds.), 108-122.
Hongladarom, Soraj, and Ess, Charles. (2007). Introduction. In S. Hongladarom and C. Ess (eds.), xi-xxxiii.
Hongladarom, Soraj, & Ess, Charles (eds.). (2007). Information Technology Ethics: Cultural Perspectives. Hershey, PA: Idea Group Reference.
Internet World Stats. (2007). Internet Usage Statistics - The Big Picture: World Internet Users and Population Stats. Retrieved April 8, 2007, from http://www.internetworldstats.com/stats.htm
Lü, Yao-huai. (2005). Privacy and data privacy issues in contemporary China. Ethics and Information Technology 7, 7-15.
Nakada, Makoto, & Tamura, Takanori. (2005). Japanese conceptions of privacy: An intercultural perspective. Ethics and Information Technology 7, 27-36.
Paterson, Barbara. (2007). We Cannot Eat Data: The Need for Computer Ethics to Address the Cultural and Ecological Impacts of Computing. In S. Hongladarom and C. Ess (eds.), 153-168.
Plato. (1991). The Republic of Plato. Translated, with Notes, an Interpretive Essay, and a New Introduction by Allan Bloom. New York: Basic Books.
Rananand, P. R. (2007). Information Privacy in a Surveillance State: A Perspective from Thailand. In S. Hongladarom and C. Ess (eds.), 124-137.
Rorty, R. (1975). Philosophy and the Mirror of Nature. Princeton, NJ: Princeton University Press.
Søraker, Johnny. (2006). The Role of Pragmatic Arguments in Computer Ethics. Ethics and Information Technology 8 (3), 121-130.
Stahl, B. C. (2006). Emancipation in cross-cultural IS research: The fine line between relativism and dictatorship of the intellectual. Ethics and Information Technology 8 (3: July), 97-108.
Tavani, Herman. (2007). Ethics and Technology: Ethical Issues in an Age of Information and Communication Technology, 2nd ed. New York: John Wiley.
Taylor, Charles. (2002). Democracy, inclusive and exclusive. In R. Madsen, W. M. Sullivan, A. Swidler, & S. M. Tipton (eds.), Meaning and Modernity: Religion, Polity, and Self, 181-194. Berkeley: University of California Press.
van der Velden, Maja. (2007). Invisibility and the Ethics of Digitalization: Designing so as not to Hurt Others. In S. Hongladarom & C. Ess (eds.), 81-93.

Warren, K. J. (1990). The power and the promise of ecological feminism. Environmental Ethics 12 (2: Summer), 123-146.
Wheeler, Deborah. (2006). Gender sensitivity and the drive for IT: Lessons from the NetCorps Jordan Project. Ethics and Information Technology 8 (3: July), 131-142.
Wiener, Norbert. (1948). Cybernetics: or Control and Communication in the Animal and the Machine. New York: John Wiley.


Information Society: A Second “Great Transformation”?

Peter Fleissner, Wien

1. Introduction: Goods and Services, Use-Value and Exchange-Value

To understand, from the perspective of political economy, what is going on in the so-called information society, we should identify and understand the new kinds of goods and services that are produced, distributed and consumed via digital information and communication technologies (DICT). To perform this task we go back to basics. Let us start with the notion of “useful things”. Useful things have many attributes, and we can therefore use them in many ways. The usefulness of a thing makes it a use-value, because by its intrinsic characteristics it can satisfy some human need, whether real or imaginary, positive or negative. Although elementary, the concept of a useful thing is not trivial, because the notion of usefulness is rather tricky: it reflects the complex web of the society in question. What is useful in one society can become useless in another, and vice versa; therefore even a use-value is not invariant over time. However, there is more to be told. Already Aristotle stated that beyond the use-value of an object there is another kind of value, exchange-value, which has marked the definition of a commodity up to now: “The one (i.e. use-value, P.F.) is peculiar to the object as such, the other (i.e. exchange-value, P.F.) is not, as a sandal which may be worn, and is also exchangeable. Both are uses of the sandal, for even he who exchanges the sandal for the money or food he is in want of, makes use of the sandal as a sandal. But not in its natural way. For it has not been made for the sake of being exchanged.”

More than 2000 years later, in 1776, Adam Smith (1776) repeated Aristotle’s distinction, this time on the level of the value of an object: “The word value, it is to be observed, has two different meanings, and sometimes expresses the utility of some particular object, and sometimes the power of purchasing other goods which the possession of that object conveys. The one may be called ‘value in use’; the other, ‘value in exchange.’”

Marx used this source in the first volume of “Das Kapital”, which begins with the following famous paragraph: “The wealth of those societies in which the capitalist mode of production prevails, presents itself as ‘an immense accumulation of commodities,’ its unit being a single commodity. Our investigation must therefore begin with the analysis of a commodity.”

The tacit assumption behind this definition of the commodity is materiality (Stofflichkeit). The commodity is reified or codified in a useful thing, an object. This useful thing is tangible; it has a certain lifetime, is independent of the producer or consumer, and can be stored and resold. Examples are apples or computers. In the contemporary market, we find other entities as well: there are also services. Within the framework of economic circulation, services are a rather strange animal. Although they represent use-values, we consume them at the time they are produced. They do not have any continuous and permanent existence inside or outside the market; they cannot be stored or accumulated. Let us take express mail as an example: you will fail if you try to resell the service of the delivery of a letter. Alternatively, try to bring a haircut you have received at a coiffeur to the market again. Of course, you will not find any customer. In market economies one can sell services once, and they are able to attract financial remuneration, but it is not possible to resell them, and you cannot invest them either. And strangely enough: although services have become more and more important in the economy of our days (they account for more than two thirds of economic activity in developed countries), it is still true that they do not directly contribute as such to economic growth. The growth of the service industries themselves has to rely on material products, on commodities in the full meaning of the definition of the classical political economists. Marx called these material products the “surplus product”, which is necessary for economic growth; in terms of labor time, he called it “surplus labour”. Surplus labor expressed in money terms is the basis for profits. In an economy in stationary equilibrium, this connection means: no growth, no profits. This equation works in both directions.
If we assume (1) an economy that is closed (without contact to the outside world, without exports or imports), if we (2) do not allow the economy to deplete resources (our economy should only be based on flows, not on stocks), and if (3) the economy relied completely on services, such an economy could hardly survive for more than a few days. Because people fundamentally need material inputs as basic ingredients of their consumption, and because the expansion and replacement of production machinery is likewise based on material products, people in such an economy would sooner or later die of hunger. Moreover, if they survived by a miracle, they would have to face a shrinking economy up to the moment when the production facilities are completely worn out. However, this restriction on their direct contribution to economic growth does not prevent services from providing us with important indirect effects: more educated people—education is another kind of service—are possibly more productive in their jobs than people who do not take advantage of education and training. The output of a service could also be a new technological principle of production, but it needs new machines to incorporate this invention. Services are also able to enrich the range of consumption, and therefore the well-being of people. To express this case in the language of political economy: services represent pure use-values. People may consume them, and by consuming them the characteristics and properties of people at work or at leisure can change. To summarize, the essential difference between material products and services is not their ability to be traded at markets. One can buy and sell both kinds of use-values, and one can associate a price with both of them. Their basic difference is, on the one hand, the ability of material products to contribute to the surplus product, to the surplus value and to the total amount of possible profits on the level of the aggregate economy, while, on the other hand, services are in principle not able to do so. The latter allow the vendor to earn profits, but only the branches of material production provide the basis for them.
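The shrinking-economy argument can be turned into a small numeric sketch (my own illustration under simple assumptions, not a model from the text): in a closed, flow-only economy producing nothing but services, nothing replaces the wear and tear of the material capital stock, so that stock decays geometrically towards zero.

```python
# Toy sketch (illustrative assumptions: 10% depreciation per period,
# service output proportional to capital, no material production at all).
capital = 100.0      # material production facilities (machines, buildings)
depreciation = 0.1   # fraction of capital worn out each period
periods = 0

while capital > 1.0:                    # until facilities are nearly worn out
    services = capital * 0.5            # services still flow while capital lasts
    capital -= capital * depreciation   # wear is never replaced by new machines
    periods += 1

# Capital follows 100 * 0.9**n and drops below 1 after n = 44 periods.
```

The point of the sketch is only that the decay is inevitable under these assumptions: with no branch of material production, the service flow shrinks in step with the capital stock until production ceases.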
If there were no service production in the economy, the rates of profit in the branches of material production would be higher. In an economy with services, the branches of material production “share” the possible profit with the service-producing capitalists. The redistribution of profits from commodity production to services is effected by a change in relative prices.


2. Commercialization of Human Communication

After this excursion into the basics of political economy, let us come back to the contemporary information society and look for new developments and new goods and services associated with it. At the beginning of the 20th century, the German sociologists Ferdinand Tönnies and Max Weber identified two different ideal types (“Idealtypen”) of relations between human beings. They called them community (“Gemeinschaft”) and society (“Gesellschaft”). In communities, emotions and/or traditions create the links of social relationship (as in a tribe, a family, or a military unit based on camaraderie). Communication is direct, face-to-face. With the emergence of mass markets where goods are exchanged anonymously, and with the nation state based on impersonal law, another type of human relations developed: society. There, communication is related to “rationally motivated compromises” of human interests. It is indirect and no longer face-to-face. Over time, more and more technical innovations assisted the indirect communication process, from Chappe’s simple mechanical-optical telegraph, the wired telephone, and one-way communication via radio stations to the first heavy wireless communication devices of the sixties and seventies of the last century. Nowadays, in the age of globalization, digital information and communication technologies provide the means of chatting worldwide, bridging different locations in space. Direct face-to-face communication is extended to voice exchange mediated by electronic devices. Commercialized (mobile) communication via cell phones enhances the traditional communication of people in the family, in their settlements and at work. However, this new kind of communication comes at a certain price. Even the seemingly free services of Voice over IP cannot be used if one does not have the necessary technical devices at hand, like a PC, a laptop and access to the Internet, as well as the necessary skills.
From the point of view of a sociologist, one could say that technologically mediated communication creates the possibility of new communities, this time no longer restricted to certain local spaces but on a global scale. For the first time in history, people have the possibility of selectively creating new topologies among themselves based on shared or mutually related interests. The distance between them shrinks to the need for dialing a cell phone number. Nevertheless, at the same time the market has become the impersonal and commercial link mediating personal communication between individuals or small groups of people.

Economically speaking, the former type of direct, face-to-face communication has been transformed into a service transferring speech from one location to another in a very short time, but one usually has to pay for it—speech is now commercialized within a global market—thus limiting access to those who can afford it.

3. Commodification of Cultural Activities

However, DICT can do more than bridge distant locations via telephony and transform human speech from a human activity into a commercial service. By an interesting interaction of technology and law, they can create information goods which show all the properties of commodities, like apples or PCs. We call this process the commodification of essential parts of human culture.

The Role of Technology

The transformation of a volatile service/activity into a commodity can be described as the result of a two-step procedure: (1) the application of technology (reification and reanimation), and (2) a legal procedure (intellectual property rights and copyrights).

Reification

In a first step, a special group of services, i.e. all human activities that can be technically described within the framework of the binary system (or any other system of numbers), can be reified with the help of digital media and digital computers. DICT make it possible to codify flows of information into a pattern of energy distribution that can be stored in various material devices, at ever falling costs. Human activities (flows) are stored as stocks of information, as if they were frozen into energy patterns of a material carrier.

Reanimation

Pop or classical concerts, theatre performances, actors posing for a movie, lectures, storytellers, but also the scenes you encountered on your holidays or the first steps of your child, are subject to digital reification on a

carrier into stocks of information. In a second step, however, the carrier can be used to reanimate the activities of the past. Like in a time machine, they can be moved into the present. The stocks are unfrozen and transformed into flows again at a later point in time. You can enjoy a concert or a movie many years after its first performance.

Copying

Nevertheless, reification and reanimation are only part of the potential of the technology. While technology prepared the ground for commodification by creating the physical/energetic basis of a commodity, which can therefore be stored, re-sold and accumulated, at the same time it undermines the possibility of commodification through the threat that commodities can be copied and transferred via the Internet at nearly no cost. This would be an ideal precondition for a future society in which the distribution of information and knowledge of all kinds is possible nearly for free and accessible to everybody. Under capitalistic conditions, however, there exists a particular interest in not choosing such a framework. Private enterprises do not like it when (in their language) "free riders" show up. Free riders could copy the content and resell it at a lower price or, in the extreme, give it away free of charge. Either way, the market would be undermined and could no longer be used for making proper profits. The process of commodification comes under the threat of being reversed. While the group of potential users of software and digital content will favor free riding, the management of the companies involved would prefer a situation that enables them to sell their output at a proper price.

The Role of Law

To ensure this, lawyers have invented particular regulation mechanisms: copyrights, patents, licenses or, generally speaking, intellectual property rights. Enterprises called on the law for support. Copyright laws confront people who would like to make copies with the threat of a fine. To secure the market for reified services, the European Union has issued two European Directives on copyright in the information society. The "Directive 2001/29/EC on the harmonisation of certain aspects of copyright and related rights in the information society" of 22 May 2001 contains several regulations on net security, while the "Directive 2004/48/EC of the European Parliament and of the Council on measures and procedures to ensure the enforcement of intellectual property rights" of 29 April 2004 intends to

give a copyright owner proper instruments for the realization of his rights. "Member States shall bring into force the laws, regulations and administrative provisions necessary to comply with this Directive by 29 April 2006" (Directive 2004/48/EC, Art. 20, Par. 1). Even if laws cannot really make copying (technically) impossible, they are sufficient to keep up the market for reified services. Under such preconditions, the commodification process is completed and leads to the intended result: new areas of human activity can be marketed, opening up new sources of exchange-value and, most importantly, new sources of profit. In fact, big and small businesses exploit two areas of commodification at once: there is a market for carriers of information, representing reified services, and a market for devices to bring them to life again, to reanimate and replay the past activities. In particular, this is true for all cultural activities in which flows of information are involved, like talking, writing, developing software, doing research, inventing, singing, dancing, painting, designing, playing music and creating movies. The market has conquered everything. It remains a subject for further investigation to identify the positive and negative effects of commodification and commercialization, for whom they are useful and for whom they may be detrimental, and whether it is possible to assess the net result. One should note that the process of reifying human activities is not completely new. It started with the human ability to paint and write, continued with the invention of the printing press, with photography and film fixed on paper or celluloid, and later with tapes and records. Recently, the potential for storing information has grown once more with Compact Discs (CD) and Digital Video Disks (DVD). However, the size of the market is larger than ever, and new synergies are being exploited (convergence of technologies, newly emerging markets).

4. Commercialization of Labor in the 19th Century

The above-described transformation of human activities into marketable goods and services seems to be of the same level of importance for society as the creation of the labor market. Karl Polanyi described this contradictory development in scholarly detail in his famous book "The Great Transformation". He showed eloquently that, after the active transformation of soil and money into commodities, the commercialization of work opened the doors for a capitalist society. After half a century of protective measures for peasant

work and the introduction of a kind of minimum wage by the Speenhamland System, a "free" labor market was established and allowed the capitalist system to take off in a qualitatively new way. The Speenhamland System was a method of giving relief to the poor, based on the price of bread and the number of children a man had. It further complicated the 1601 Elizabethan Poor Law because it allowed the able-bodied, those who were able to work, to draw on the poor rates. It was set up in the Berkshire village of Speen by local magistrates who held a meeting at the Pelican Inn on 6 May 1795. They felt that 'the present state of the poor law requires further assistance than has generally been given them'. A series of bad harvests had put wheat in short supply, and consequently the price of bread had risen sharply (http://www.dialspace.dial.pipex.com/town/terrace/adw03/peel/poorlaw/speen.htm). While in the beginning the laborers and the feudal lords were both in favor of the Speenhamland Law (it increased wages for the workers and saved costs for the owners of land), it became evident that the Law demoralized the laborers, "deterring them from honest work, and making the very concept of an independent working man an incongruity" (Polanyi 1957, 224). The quality of life of the workers was so low that Members of Parliament spoke about them as a special, miserable race. Before the middle of the 19th century, the system became exhausted. "By the Poor Law Amendment of 1834 the social stratification of the country was altered, and some of the basic facts of English life were reinterpreted along radically new lines. The New Poor Law abolished the general category of the poor, the 'honest poor', or 'laboring poor'…The former poor were now divided into physically independent workers who earned their living by laboring for wages. This created an entirely new category of the poor, the unemployed, who made their appearance on the social scene.
While the pauper, for the sake of humanity, should be relieved, the unemployed, for the sake of industry, should not be relieved …That this meant penalizing the innocent was recognized …The perversion of cruelty consisted precisely in emancipating the laborer for the avowed purpose of making the threat of destruction through hunger effective” (Polanyi 1957, op.cit.). This structure became the prototype for the liberal economic policies applied later on in many parts of the world. The capitalist economy was transformed into capitalist society.


5. Final Remark

This is not the place to discuss the possible effects of the commercialization and commodification of cultural activities, but there is a wealth of literature directly or indirectly related to them. Good examples of possible effects can be found in Lawrence Lessig's books (Lessig 1999; 2001; 2004; 2006). There are also many publications on alternatives to restrictive intellectual property rights (e.g. Open Source Jahrbuch 2004; 2005; 2006; 2007). It is interesting to observe that the contemporary information society creates new information goods and services that are no longer subject to scarcity. On the contrary, in principle cultural activities and knowledge could be made available in plentiful supply. In real life, however, people cannot access this abundance fully: at the same time, society has created artificial scarcity, as if the economy were still restricted to the realm of material production.

References

Aristotle: De Rep. l. i. c. 9, see http://www.econlib.org/library/YPDBooks/Marx/mrxCpANotes.html, footnote 47.
Lessig, Lawrence (2006): Code v2, see http://pdf.codev2.cc/Lessig-Codev2.pdf
Lessig, Lawrence (2004): Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity, New York: The Penguin Press.
Lessig, Lawrence (2001): The Future of Ideas: The Fate of the Commons in a Connected World, New York: Vintage Books.
Lessig, Lawrence (1999): Code and Other Laws of Cyberspace, New York: Basic Books.
Marx, Karl: Das Kapital, Vol. 1, see http://www.marxists.org/archive/marx/works/1867-c1/ch01.htm#S1
Open Source Jahrbuch 2004-2007, annually, Berlin: Lehmanns Media.
Polanyi, Karl (1957): The Great Transformation: The Political and Economic Origins of Our Time, Boston: First Beacon Paperback edition, Beacon Press, Beacon Hill (copyright 1944 by Karl Polanyi). German edition: Polanyi, Karl (1978): The Great Transformation: Politische und ökonomische Ursprünge von Gesellschaften und Wirtschaftssystemen, Wien: suhrkamp taschenbuch wissenschaft 260, Suhrkamp Taschenbuch Verlag, Deutsche Rechte: Europa Verlag GesmbH.
Smith, Adam (1776): An Inquiry into the Nature and Causes of the Wealth of Nations, Book 1, Chapter 4.

Internet and the flow of knowledge: Which ethical and political challenges will we face?

Niels Gottschalk-Mazouz, Stuttgart

Introduction

The term "knowledge" is used more and more frequently in diagnoses of societal change (as in "knowledge society"). According to Bell (1973), since the 1970s we have been experiencing the first phase of such a change towards a knowledge society, consisting of a rapid expansion of the academic system and a growth of investment in research and development in many countries. In this phase, as Castells (1996) points out, information technology has been rapidly changing the workplace as well as the composition of social organisations. In this first phase, the focus was on scientific knowledge, its production and its application in expert cultures. Since the mid-1990s, however, this focus has been widening, such that one can speak of a second phase of the knowledge society (Drucker 1994a, 1994b; Stehr 1994; see also Knorr-Cetina 1998; Krohn 2001). Now it is no longer only scientific knowledge that is seen as driving the change, but also ordinary knowledge and practical knowledge, as know-how. The change is, as I would put it, autocatalytic, for what is typical of knowledge societies is "not the centrality of knowledge and information, but the application of such knowledge and information to knowledge generation and information processing/communication devices, in a cumulative feedback loop between innovation and the uses of innovation" (Castells 1996: 32). Science, too, has been changing to become part of this loop, as shown by the rise of the applied sciences and the acknowledgement of issues of uncertainty and ignorance (cf. Heidenreich 2002: 4 ff.; see also Hubig 2000 and Böschen & Schulz-Schaeffer 2003). The most

216 significant change in this second phase however is the popularization of the Internet, that is seen as a key factor that governs societal change today. So what exactly is this “knowledge” that is driving present knowledge societies? Can we rely on the philosophical analysis of the term to get some insight here? And, what are the ethical and political challenges that we had to face so far? The first section of this paper will be devoted to answering these questions. The second section looks at present transformations: Much is said about “Web 2.0” at the moment, the rise of a “social internet”. In this section, we will therefore ask how knowledge is changing with these Web 2.0 developments, and which ethical and political challenges they are bringing about. In the third and final section, we will discuss how the internet, information and communication technologies in general might evolve, and try to sketch ethical and political challenges that we may have to face in the future. In short, we will be looking at the flow of knowledge and how it may change as the internet continues to develop, and most importantly its resulting ethical and political challenges. Much of the following has been discussed in the past by other authors and much of it has also become common knowledge. The material here is presented in new frameworks. The first section redefines the concept of “knowledge” in an innovative way. The second section analyses the notion of “social” more closely than usual and combines the results with those of the first section. In the third part the analysis focuses on possible paths for evolution of the internet with respect to its effects on knowing-how, as opposed to conventional analysis which has focused on knowing-that (if it has focused on knowledge effects at all).

1. Knowledge

Philosophy has been trying to get a grasp of knowledge from its very beginning, and, it would seem, with immediate success. Has not Plato pointed out a definition of knowledge that has remained valid ever since? According to this definition, knowledge is justified true belief. In fact, most contemporary philosophers seem to endorse this tripartite definition, or a mild refinement of it, as necessary and sufficient (see Steup 2006 for an overview). Some of these refinements state, following Gettier 1963, that at least one additional condition has to be met for knowledge. Others, following Sartwell 1992, want to drop the justification condition. This condition has turned out to be the most controversial; there is an ongoing debate about a notion of justification that is suitable for

the definition of knowledge (if any notion is). Plato himself, however, did not subscribe to this definition (cf. the aporetic dialogue Theaitetos, where he explores definitions of knowledge at great length). Through the centuries, there has always been criticism of this definition (cf. Ritter et al. 2004, 855-957). In the 20th century, a consensus developed that the tripartite definition aims at knowledge only insofar as it can be expressed in the paradigmatic sentence: S knows that p (with S a subject and p a proposition).2 The critics then argued that there is more to knowledge than just knowing-that, namely, that there is also knowing-how (S knows how to f, with f any action verb) and knowledge of objects (S knows x, with x any object), and that these forms of knowledge might be more fundamental.3 It is also easy to see that we attribute knowledge not only to subjects but also to objects (y contains knowledge, with y any suitable object like a textbook or an electronic knowledge base). Both these criticisms undermine the belief condition. It was also in the 20th century that the value of the truth condition became questionable: following a coherence theory of truth, truth is grounded in justification, and thus not independent of it. Following a pragmatic theory of truth, it is grounded in knowing-how, undermining the tripartite definition that takes knowledge as knowing-that. So all in all, the tripartite definition is problematic: it seems too narrow to grasp all important aspects of knowledge. There is another philosophical tradition of defining knowledge, established in the late 20th century, namely defining knowledge as some kind of information (e.g. Dretske 1981). But it has not led us to a clear, uncontroversial and inclusive definition, either.
This can be seen from the facts that the properties that are supposed to distinguish knowledge from (other) information vary from author to author, that the properties ascribed to information themselves vary from author to author (some conceive of information quite narrowly, e.g. admitting only true information, which is highly contested; cf. note 2 of this paper for a possible reason for this), and, finally, that some authors define knowledge in terms of information while others define information in terms of knowledge. So all in all, the use of "information" to define knowledge is problematic: it simply passes the buck to a more technical, less well-known term that is itself in greater need of explanation than knowledge is (see Gottschalk-Mazouz 2006 for more on this topic). Both the tripartite strategy and the information strategy of defining knowledge try to find out what knowledge "really is", but for our purpose it seems more interesting to find out what knowledge "is like" and what it "can do", i.e. to look for features, not for (alleged) substances of knowledge. Given these shortcomings, it may well be that such substantialist definitions fail

to grasp the most important aspects of the knowledge that is driving knowledge societies. While for inner-philosophical dispute it may have been productive to deliberately narrow the focus,4 in an interdisciplinary discourse on knowledge societies the same narrowing is unlikely to be productive. That is, we should not simply stipulate philosophy's (or any other academic discipline's) definitions. We should rather at least try to meet the standard requirement of saving the phenomena (cf. Aristotle, EN 1145b2-7) with our definitions. To be able to do this, we have to show what the features of this knowledge are that make it drive knowledge societies. One way of doing this is to look at contributions from key authors in sociology, economics, psychology, computer science and the philosophy of technology that led us to describe the current society as transforming into a knowledge society in the first place, and, more specifically, at those parts of their contributions to the knowledge society discourse where they try to define knowledge. Although these attempts are far from convincing as definitions (all of them are too loose, lopsided or in parts circular), they nevertheless underline important typical features of knowledge. In the following, seven typical features of such knowledge will be exposed.5 These features are neither necessary nor sufficient for knowledge. But after browsing through 40-50 definitions and seeing the features repeat themselves, I think that they are typical of the kind of knowledge that is at stake here. 'Knowledge' will thus be reconstructed as a complex concept (Komplexbegriff; see Gottschalk-Mazouz 2006: 25-27, similar to a so-called cluster concept in linguistics) that is comprised of these seven typical features. I see this as a prerequisite to discussing the aptness of any strict definition to sharpen the contours of the phenomenon "knowledge society" even further.
So what does knowledge mean when we talk about “knowledge society” and related matters such as “knowledge management”, “globalisation of knowledge” etc.? What are the typical features of such knowledge?

F1 Knowledge has a practical aspect

Knowledge is valued because it helps to solve problems. This includes problems of orientation, evaluation and reflection. Thus, knowledge does not only consist in knowing objective facts. Moreover, due to its practical aspect, every chunk of knowledge is related to (practical) situations: not just one situation, but (typically) situations of a similar kind, i.e. to (practically defined, broader or more specific) domains of knowledge.


F2 Knowledge is person-bound or not

Knowledge comes in two forms: it is either person-bound (as it naturally is in psychology) or not person-bound (but rather bound to, or incorporated in, objects). In other words, it is either personalised or (externally) represented. If knowledge is perceived as a good or a commodity, as it is in economics and elsewhere (see Toffler et al. 1994 for knowledge changing from being a public to being a private good), then this refers in large part to represented knowledge (and to its practical use, of course, cf. F1). Representation is not confined to knowledge of facts, but also extends to knowledge about possibilities, evaluations etc. To a certain extent, knowing-how can also be externally represented (e.g. in algorithms, recipes or machines), and knowledge of objects can be represented in paintings or novels. External representations can take the form of a text, picture or sound: anything that can be understood as carrying knowledge. The whole dynamics of knowledge production and use cannot be understood without incorporating both personalised and represented knowledge and their interdependencies. The production of personalised knowledge requires represented knowledge (if knowledge remained bound to person x, it would never reach person y), and the production and use of represented knowledge requires personalised knowledge.

F3 Knowledge has a normative structure

The normative fine structure of knowledge is at least two-fold: knowledge consists of recognised claims, i.e. claims that are not only recognised as claims ("knowledge candidates") but also as successful claims. With respect to F1, one can say that knowledge candidates are regarded as possible solutions to more or less given problems. The normative components of these claims can be further analysed, e.g. as consisting of normative commitments and entitlements, which allows one to reformulate some of the insights of the tripartite definition (see Brandom 1994, 200-203, for this).

F4 Knowledge is internally networked

An entity taken as knowledge normally has an internal structure (i.e. its parts stand in certain relations to each other), and the whole entity stands

in certain relations to other (external) entities. I propose to conceive of these relations as network-like. Learning also means integrating knowledge into already existing knowledge. This integration happens in explicit or implicit processes of interpretation, justification, application and complementation (to name just a few). Thus, knowledge has an internal structure whose parts are typically regarded as knowledge themselves (but on a different level of knowledge formation). Knowledge typically does not consist of a single sentence or another single "atomic" representation. Typically, it has an internal structure that is not only syntactic or sentential but also has, for example, axiomatic, taxonomic or narrative structures. Metaphorically speaking, knowledge is a net that allows one to catch fish of a certain kind in a given environment (i.e. it serves cognition and problem solving in a certain domain).

F5 Knowledge is externally networked

Knowledge has to be related to other knowledge if it is to count as knowledge. This was already the case in the tripartite definition, for a belief has to be justified by something else to count as knowledge. The justifier, however, is normally again justified or justifiable by something else, and all these justifications are very often possible in more than one way (only rarely is there "the" single reason for anything). So a holistic picture of the network character of knowledge emerges: knowledge is networked with knowledge both internally (i.e. consisting of knowledge) and externally (i.e. supporting other knowledge as knowledge). The cut that singles out a chunk of knowledge then seems somehow artificial. It immediately follows that knowledge presupposes (other) knowledge, such that we do not start with one single (and 100 percent certain) piece of knowledge and reconstruct our web of knowledge from there.

F6 Knowledge is dynamic

Castells (1996) characterised knowledge as generating knowledge in a "cumulative feedback loop". In their best effort to reformulate the tripartite definition according to their needs, Nonaka and Takeuchi (1995) wrote that "knowledge is a dynamic human process of justifying personal belief toward the 'truth'." They explicitly added the notion of "dynamic" to such a

definition, characterising it all in all as a process and not a state or a proposition. That knowledge is dynamic means that it is changing, and this change is not simply growth, be it even a nonlinear one. Knowledge is acquired and discarded, recognised, used/applied, sold and bought, written down, transferred, shared or kept secret, reformulated, etc. New knowledge can deflate old knowledge or make it more valuable. Knowledge can also be forgotten or disappear when unused for a long time. From time to time, there may also be larger conceptual changes (cf. Kuhn 1962 and Foucault 1966). Knowledge lives in time, so to speak, but it also lives in space, as there are local cultures of knowledge on the one hand and a global circulation of (some) knowledge on the other, and interactions between the two.

F7 Knowledge has institutional contexts

Institutions matter for knowledge generation, formation and distribution: this is evident from schools, universities, laboratories, libraries, archives etc. The acknowledgement of something as knowledge proceeds by "individual and institutional recognition", as Hubig (1997: 173) puts it: it is recognition by other institutions or, ultimately, individual recognition that lets a given institution be in charge or become obsolete. Nevertheless, knowing is no longer an individualistic enterprise (if it ever was one). It is easy to see that in modern, functionally differentiated societies those who possess knowledge are no longer only individuals or small groups, and that hardly any subject nowadays can acquire relevant knowledge alone. But vice versa, knowledge is also important for the maintenance of institutions; it fits and supports certain institutions. Berger and Luckmann speak of knowledge as "sheltering canopies over the institutional order" (1966: 102). Therefore, knowledge is governed by institutions but also stabilizes institutions (and the practices and power relations that come with them). These seven features are the most important features of knowledge, at least of the kind of knowledge that drives (evolving) knowledge societies. Before we proceed, let us have a quick look at how the tripartite definition shows up in these features. The belief criterion is reflected in F2 ("person-bound") and in F7 (as individual recognition). The justification criterion is reflected in F5 (externally, inferentially networked), and the truth criterion, depending on your theory of truth, in F1 (pragmatic), F5 (coherentist) or F2/F7 (constructivist). However, the components of the tripartite definition are

clearly neither necessary nor sufficient conditions for knowledge, or at least not for the kind of knowledge at stake here.

2. Present Internet challenges

From the very beginning of the knowledge society discourse, information technology has been seen as playing a major role as a catalyst of societal change. Nowadays, it is the internet in particular that is seen as playing this role. So what kind of knowledge is provided through the internet, and how might it be influenced by current and future technological change? In the light of the features pointed out above, the kind of knowledge provided through the internet can be characterised as represented (because it is not person-bound) and as consisting of chunks of knowledge candidates ("chunks" because it is typically scattered and comes in pieces that have to be compared and assembled, and "candidates" because it has to be recognised as knowledge on an individual basis, as institutional recognition mechanisms are mostly absent). It is highly dynamic: pages are added and altered rather quickly, and so are the page links, and thus the external networking structures change. To be able to understand, assess and productively use these representations, a significant level of skill and person-bound knowledge is necessary. The institutional and organisational background is also quite complex (you need at least a host, a published address, a provider, a phone/cable company, computer hardware, an operating system and a browser). All these things have to harmonize in a certain way if knowledge is to be provided via "the internet". We will now give an overview of the ethical and political challenges of the kinds of knowledge flow made possible by the internet as it was known to us one or two years ago (Web 1.0), as we are experiencing it changing today (Web 2.0) and, in the next part of this paper, as we may experience it in the future (Web 3.0). Challenges of Web 1.0 knowledge flow included a great many topics that have already been widely discussed elsewhere (cf.
Langford 2000; Hamelink 2001), focusing on concerns over matters such as: plurality of information sources, viewpoints and debunking; access; copyright; privacy; security; free speech/censorship; netiquette; networking/political organisation and action; sabotage and "defacing"; neutrality of the net infrastructure (e.g. ICANN). With Web 2.0 (cf. O'Reilly 2005 for this term), however, the social dimension is becoming more important and flow patterns are changing accordingly. We have begun to see a shift from single to common uses (wikis, blogs, rating), and from static programs to fluid applications/web services. While with Web 1.0 we typically had a one-upload/multiple-download pattern (left in Fig. 1), with Web 2.0 we are now experiencing live streaming, collaborative upload and automated partial download patterns (right in Fig. 1: top, middle and bottom, respectively).

Fig. 1: Knowledge flow patterns in Web 1.0 (left) and Web 2.0 (right). A, B, … denote users; 1, 2, … denote internet nodes. Explanations are given in the text.
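Purely as an illustration, the flow patterns of Fig. 1 can also be sketched as small directed graphs. The user and node labels follow the caption, but the concrete edge sets below are an assumption, not a reproduction of the original diagram:

```python
# Illustrative sketch of the knowledge flow patterns of Fig. 1 as directed
# graphs: edges run from uploaders to internet nodes and from nodes to
# downloaders. Users are letters, internet nodes are numbers (see caption).
# The concrete edge sets are assumed for illustration.

# Web 1.0: one upload, multiple downloads via a single node.
web1 = {
    ("A", 1): "upload",
    (1, "B"): "download", (1, "C"): "download", (1, "D"): "download",
}

# Web 2.0 (one of several patterns): collaborative upload, where several
# users write to the same node, which several other users then read.
web2_collaborative = {
    ("A", 1): "upload", ("B", 1): "upload", ("C", 1): "upload",
    (1, "D"): "download", (1, "E"): "download", (1, "F"): "download",
}

def uploaders(graph):
    """Users that contribute content in a given flow pattern."""
    return sorted({u for (u, v), kind in graph.items() if kind == "upload"})

print(uploaders(web1))                # ['A']
print(uploaders(web2_collaborative))  # ['A', 'B', 'C']
```

The shift from a single uploader to many uploaders per node is exactly the change from the left to the right half of the figure.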

For a closer analysis of these changes and their effects on knowledge flows, we need some terminology that allows us to characterise the commonalities and "the social" that are involved in Web 2.0. The concept of the social good can be used to accomplish this task. Two notions can be distinguished: (1) a social good that affects the social community, and (2) a social good that is affected by and/or affects common action. Let us have a look at the latter, where we find the constitution, distribution and consumption of these goods, each either substantial or normative. Persons constitute goods together either substantially (production) or normatively (evaluation), and if normatively, then either driven by corresponding intentions (external goals; poiesis) or by mutual recognition (internal goals; praxis). Some goods can be constituted only together and only in a normative way (a promenade, friendship, competition, a contract, a promise, we-intentions), and can be dissolved in the same way (a contract, a promise). Taylor calls these goods "irreducibly social goods" and argues that they "essentially incorporate common understandings of their value" (1997: 140). Sociality in the distribution of goods means

that persons distribute together: one gives while the other takes (1:1), but also many-to-one, one-to-many and many-to-many (advanced peer-to-peer networks). Sociality in the consumption of goods comprises substantial consumption (use and dissolution) as well as normative dissolution (of valuations), again in two respects: poietic (e.g. "this brand isn't cool any longer") and practical (as in the reciprocal cancellation of a contract, promise etc.). Of course, there are also combinations, such as identity gains through consumption (music, books, clothes, etc.) as an example of the normative constitution of one good through the substantial consumption/use of another. These terminological distinctions can be combined with the aspects of knowledge explained in part one in a topical (7x6) matrix:

Fig.2: Web 2.0 knowledge flow analysis matrix: Knowledge features F versus sociality dimensions S.

This matrix makes it possible to ask a bundle of precise questions that cover the field of Web 2.0 knowledge flows. These questions are of the following form: “How are changes in Sx related to changes in Fy?” Two examples may illustrate this. Take the matrix field (1,1) first, where we want to know: “How are changes in the substantial constitution of knowledge related to changes in the practical aspects of knowledge?” The answer is that a broader scope of (sometimes quite trivial) practices is addressed (howtos, recipes, after-buy experiences, “Lebensberatung”, etc.), that a great many persons are in a position to contribute, and that, finally, an explosion of production of everyday knowledge (loosely structured boards, more rigidly structured Wikis, …) results. The second example is (2,7): “How are changes in the substantial distribution of knowledge related to changes in the institutional/organisational contexts of knowledge?” The answer is that these changes are juridical (e.g. copyright and patent law), economic (changing user fees/flatrates), and organisational (broadband, networked (home) servers), that distribution patterns change (circulation of source code, download and upload of audio/video content, filesharing, plagiarism), and that applications and content change incrementally, automatically and open-endedly, leading to a “perpetual beta” of software (cf. O’Reilly 2005) and of knowledge. The best illustration of this may be Wikipedia. Of course, these answers have to be worked out in more detail. By working through the whole bundle of matrix questions, however, the various aspects of the problem that are already addressed in the literature (cf. Quigley 2007) can be organised and checked for blind spots. Cross-cutting topics that have attracted recent attention—ignorance/non-knowledge and social exclusion in particular—can also be addressed within the framework of the above matrix. The former can be understood as indicating “defects” in some feature Fx (e.g. F5: missing justification) in the presence of other features that are attributable. The latter can, in general, be seen as multiple involuntary social uncoupling (education, work, health, politics). In a knowledge society, this consists in not being involved in many matrix field activities at once (“patterns of exclusion”; identifying these may help to develop strategies for inclusion as well).
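The bundle of matrix questions can be generated mechanically from the two axes. A minimal sketch, using placeholder labels F1…F7 and S1…S6 (the paper’s actual seven knowledge features and six sociality dimensions are defined in part one, which is not reproduced here):

```python
# Placeholder labels; the real feature/dimension names come from part one.
FEATURES = [f"F{i}" for i in range(1, 8)]      # seven knowledge features
DIMENSIONS = [f"S{j}" for j in range(1, 7)]    # six sociality dimensions

def matrix_questions(features, dimensions):
    """Yield one question of the paper's form per (dimension, feature) field."""
    for j, s in enumerate(dimensions, start=1):
        for i, f in enumerate(features, start=1):
            yield (j, i), f"How are changes in {s} related to changes in {f}?"

questions = dict(matrix_questions(FEATURES, DIMENSIONS))
print(len(questions))       # 42 fields in the 7x6 matrix
print(questions[(2, 7)])    # the paper's second example field
```

Working through all 42 fields systematically is exactly the “checked for blind spots” procedure the text describes.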

3. Future internet challenges

When speculating about the future of the internet, two visions are constantly referred to: the Semantic Web and Ubiquitous Computing. The former means providing ontological metadata so that humans and machines can get rid of synonymies and use knowledge in a more productive fashion (see Herman 2007). The latter means an infrastructure that contains standardised IT elements with ad-hoc node updating, such that the virtual-real and online-offline distinctions vanish in a so-called “augmented reality”, or better yet in “augmented actuality”6. To establish an augmented actuality by facilitating human action was the core objective of ubiquitous computing:

Inspired by the social scientists, philosophers, and anthropologists at PARC, we have

been trying to take a radical look at what computing and networking ought to be like. We believe that people live through their practices and tacit knowledge so that the most powerful things are those that are effectively invisible in use. This is a challenge that affects all of computer science. Our preliminary approach: Activate the world. Provide hundreds of wireless computing devices per person per office (…) This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. We call our work “ubiquitous computing”. This is different from PDA’s, dynabooks, or information at your fingertips. It is invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere. (Weiser 1996).

In fact, this infrastructure seems to be about to develop. You find Linux driving a supercomputer, a laptop, a mobile phone or a (wrist) watch. Your TV, fridge or car might also be running on Linux. What is more, microprocessors and components are also becoming more and more standardised (Intel, ARM, PPC, MIPS etc.). The impact of these two visions on the flow of knowledge is classically seen in the domain of symbolically represented, explicit knowing-that. For the Semantic Web, this seems evident, because the whole idea is to attach explicit ontological categories to each entry. For Ubiquitous Computing, this is less evident, I think. Nevertheless, the focus has been on knowing-that effects (privacy etc.) and on identity effects (Heesen et al. 2005; IRIE 2007). But there is also a direct impact on action (knowing-how, implicit), and this is my main point here: The (social) constitution, distribution and consumption of knowing-how may become possible now, because in addition to software (and this means: algorithms), firmware, training data, learned patterns, etc. can also be shared, and parts of the infrastructure can dynamically and cooperatively be updated, reprogrammed or deactivated. The rest of this text elaborates on what this means to us and asks about the corresponding ethical and political challenges. The difference in focus can be explained with respect to the action cycle: 1. aiming, 2. executing, 3. evaluating (goal reached?), and then starting to aim again. We (and our things) rely more and more on network assistance in (1), (2) and (3), as things get more responsive, in a broad sense of this word, and we get more responsive to them. We thus form human-machine systems of cognition and action.
According to the classical analysis, Web 3.0 affects (1) and (3): Things will let us know, provide us with knowledge about possible aims and suitable means; things will talk to each other but ultimately to us; we will live in information-enhanced environments; we will be able to search in real space (Google Maps/Earth). According to the complementary

analysis proposed here, maybe even greater challenges affect (2). Our means of action are directly at stake. As things talk to things (other people’s things), they facilitate the surveillance of our actions; and as their talking changes the behaviour of the things, they facilitate the control of our actions, such that things might function otherwise or not at all any more (functions include, but are not limited to, data transfer). The list of ethical and political challenges according to the classical analysis is long enough (see Heesen et al. 2005 or Greenfield 2006 for an overview). It comprises RFID tagging (attaching short-range wireless unique identifiers and data storage) allowing for the surveillance of the trajectories of small things (and of their owners…), audiovisual recognition allowing for the logging and storing of data on real and virtual operations and moves, the storage of this data for an indefinite time, the combination of this huge amount of data (data mining), and the combination of this “flow data” with static data (registers, knowledge bases, harvested WWW information—a semantic web would be very useful for this). Thus, the main challenge according to the classical analysis is privacy, broadly construed (“it’s all about knowing-that”). According to the complementary analysis proposed here, there are even more ethical/political challenges to be considered: not only the surveillance of real and virtual moves, but also the control of these moves (functionings), leading to a direct control of action (not only by law or incentives). As soon as these functionings are compromised, we are deprived of knowing-how; we simply cannot do certain things any more. Political/subversive action thus becomes important; an example of this is hacking functions/firmware versus control and DRM (“digital restrictions management”, as some cynics say).
The struggle will be more about what you can do than about what you know or what you are allowed to do, and thus about power in the classical sense, conceivable as control over means (cf. Weber 1980: 28). It will be about an infrastructure design according to norms/values (where privacy is respected or disrespected by design, where control might be built in, where the infrastructure design itself might be intransparent and exclusive). A shift in control over means might also induce conceptual changes through mediality effects (Hubig 2006: 183 ff.; 229 ff.): Our self-experience is mediated through our (more and more technically enhanced) grip on the world, and our self-understanding is more and more governed by technological analogies. These shifts may lead to a redefinition of the actor-means distinction and of concepts of identity and personhood. Many young people have no problem publishing even very personal things on the internet. Many customers willingly give away data on their customer behaviour for free or for

minor bonuses. This indicates that the concepts of personhood and privacy, which are central to Western culture but are perceived quite differently in other cultures anyway (Nakada and Tamura 2005; see also Hongladarom and Ess 2006), might be changing. It seems to me that for various reasons (security-related, business-related, cultural, etc.) mutual surveillance is becoming more and more accepted. Will mutual control become accepted as well? However that may be,7 in the confusing struggles over control, personal or mutual trust will be more important than (unenforceable and inadequately simple) rules or oughts, such that virtue ethics might be an adequate framework if one wanted to analyse this point in more depth. To sum up, the Web 3.0 scenario may ultimately look like this: We will be experiencing smarter, adaptive, responsive, but less robust and less versatile things (because they are programmed to a certain extent). We will have a Semantic Web that makes smart things smarter (and the automatic surveillance and control of their use more effective). We will also encounter artificial agents: programs or machines that act in the name of an individual (or organisation) with legally binding consequences (but unclear accountability) and effects on competitive advantage (raising fairness issues). First signs of this can be experienced today: when placing an eBay bid, you are competing not only with humans directly, but also with eBay’s bidding agents and with eBay snipers—but this is only software. We will experience the propagation of candidates for what is traditionally referred to as know-how, but in this case embodied in machines. It will to a certain extent be possible to share different firmware modules, training data, etc. of the “smart things” that support our actions.
It will depend on individual and institutional recognition whether what is provided by these machines and their algorithms counts as know-how, and whether it is regarded as useful, annoying or dangerous. The trends in controlling knowing-that and knowing-how seem to follow a similar logic. In the name of anti-terror warfare, some people want to know not only what you know but also what you are doing and plan to do. It may even be communicated to us all that they want to know all this. This may lead to large-scale action reporting and surveillance. But, moreover, the same people also want to be able to intercept your actions and to force other actions. This may lead to large-scale action control. The technical infrastructure that we are about to establish will in principle allow for all this to happen. These options are very attractive to both the “good guys” and the “bad guys”: federal agencies, police etc. on the one hand and organised crime, terrorists etc. on the other. The most successful strategies, however, will not

necessarily be high-tech. As soon as one side relies on the new digital options, the other side might try to establish analogue counter-cultures. In the long run, however, these options may very well become more radical, for mainstream technologies may converge towards NBIC (Nano-Bio-Information-Cognitive Technology). This would create opportunities for fine-scale surveillance and control of even more (biophysical) processes: an analysis of the formation of a “biopower” (cf. Foucault 1974) may then have to be combined with that of the “technopower” sketched above, if one still wants to be able to analyse the political and ethical challenges of knowledge flows in the internet age.

Endnotes

1 The first part of this paper draws on work that was presented in Munich, Berlin and Bayreuth. It has been published in an extended version in German elsewhere (Gottschalk-Mazouz 2007). The second part was also presented in Bayreuth and was previously unpublished. Some ideas of the third part were presented in Stuttgart, where I learned a lot about Ubiquitous Computing while participating in the preparation of the Stuttgart Collaborative Research Centre SFB 627 in 2003, and also thereafter from the people working in this centre and their publications (see http://www.nexus.uni-stuttgart.de). I am currently elaborating this part for a German anthology on electronic surveillance (Gottschalk-Mazouz, to be published). I want to thank the various audiences as well as the Kirchberg audience for their helpful remarks. My special thanks go to Nadia Mazouz for intense discussions on how a philosopher should deal with an interdisciplinary subject like this. The views expressed here and any remaining errors and flaws should nevertheless be attributed to myself.
2 For “information”, there is no such model sentence. This may be one reason why ordinary language intuitions apparently do not converge on, say, the question of a truth presupposition of information.
3 See Ryle (1949) and Russell (1925) for these types of knowing; for the priority of knowing-how over knowing-that, credit is frequently given to Wittgenstein and his remarks on rule-following (see PI 185-243).
4 Personally, I do not think so, so I followed the analytic discussion to some of its dead ends and tried to reconstruct a broader, process view of knowledge from the writings of Plato, Aristotle and Brandom in my Habilitation (Gottschalk-Mazouz 2006).

5 They were extracted from the literature elsewhere (Gottschalk-Mazouz 2006). The quotes used there stem either from key writings frequently given reference to, from textbooks, or from leaders in their disciplinary scientific field (be it on a national or international scale).
6 I prefer “virtual actuality” here because there will be changes in the domain of action as well. The term is borrowed from Hubig 2007, who uses it to distinguish reality (all that is the case) from actuality (“Wirklichkeit” as all that we experience), reintroducing the Latin realitas/actualitas distinction to the virtuality debate.
7 In Gottschalk-Mazouz (to be published) a positive answer is argued for.

References

Bell, D. (1973): The Coming of the Postindustrial Society: A Venture in Social Forecasting. New York.
Berger, P.; Luckmann, T. (1966): The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Garden City/New York.
Böschen, S.; Schulz-Schaeffer, I. (ed.) (2003): Wissenschaft in der Wissensgesellschaft. Wiesbaden.
Brandom, R. (1994): Making it Explicit. Cambridge, Mass./London.
Castells, M. (1996): The Rise of the Network Society. Oxford.
Drucker, P. F. (1994a): “The Age of Social Transformation”. In: The Atlantic Monthly 273 (11), 53-80.
Drucker, P. F. (1994b): Knowledge Work and Knowledge Society. The Social Transformations of this Century. L. Godkin Lecture at Harvard University’s John F. Kennedy School of Government (May 4, 1994). http://www.ksg.harvard.edu/ifactory/ksgpress/www/ksg_news/transcripts/drucklec.htm (5.11.05).
Dretske, F. I. (1981): Knowledge and the Flow of Information. Cambridge, Mass.
Foucault, M. (1966): Les Mots et les choses: Une archéologie des sciences humaines. Paris.
Foucault, M. (1974/2001): „La naissance de la médecine sociale“, lecture of 1974. In: Dits et écrits, vol. 2. Paris, 207-228.
Gettier, E. L. (1963): “Is Justified True Belief Knowledge?”. In: Analysis 23, 121-123.
Gottschalk-Mazouz, N. (2006): Gründe geben und nehmen. Philosophische Untersuchungen zu „Wissen“ und „Nichtwissen“ in der Wissensgesellschaft. Habilitationsschrift, Universität Stuttgart.

Gottschalk-Mazouz, N. (2007): „Was ist Wissen? Überlegungen zu einem Komplexbegriff an der Schnittstelle von Philosophie und Sozialwissenschaften“. In: Wissen in Bewegung. Dominanz, Synergien und Emanzipation in den Praxen der ‚Wissensgesellschaft‘, S. Ammon et al. (ed.), Weilerswist, 21-40.
Gottschalk-Mazouz, N. (to be published): „Überwachung und Macht & die Spezifik technisierter Überwachung“. In: 1984.exe – Gesellschaftliche, politische und juristische Aspekte moderner Überwachungstechnologien, Gaycken, S.; Kurz, C. (ed.), Bielefeld.
Greenfield, A. (2006): Everyware. The Dawning Age of Ubiquitous Computing. Indianapolis.
Hamelink, C. J. (2001): The Ethics of Cyberspace. London/Thousand Oaks.
Heesen, J.; Hubig, C.; Siemoneit, O.; Wiegerling, K. (2005): Leben in einer vernetzten und informatisierten Welt: Context-Awareness im Schnittfeld von Mobile und Ubiquitous Computing. Abschlussbericht – Teilprojekt D3; SFB 627 Bericht Nr. 2005/05. Stuttgart (http://elib.uni-stuttgart.de/opus/volltexte/2007/3172/).
Heidenreich, M. (2002): „Merkmale der Wissensgesellschaft“. In: Lernen in der Wissensgesellschaft, Bund-Länder-Kommission für Bildungsplanung und Forschungsförderung et al. (ed.), Innsbruck et al., 334-363. Cited according to http://web.uni-bamberg.de/sowi/europastudien/dokumente/blk.pdf (14.11.05).
Herman, I. (2007): Semantic Web (http://www.w3.org/2001/sw/, 1.10.06).
Hongladarom, S.; Ess, C. (ed.) (2006): Information Technology Ethics: Cultural Perspectives. Hershey, NY.
Hubig, C. (1997): Technologische Kultur. Leipzig.
Hubig, C. (ed.) (2000): Unterwegs zur Wissensgesellschaft: Grundlagen - Trends - Probleme. Berlin.
Hubig, C. (2006): Die Kunst des Möglichen I. Technikphilosophie als Reflexion der Medialität. Bielefeld.
IRIE (2007): Call for Papers: Ethical Challenges of Ubiquitous Computing. http://www.i-r-i-e.net/cfp8.htm.
Knorr-Cetina, K. (1998): „Sozialität mit Objekten. Soziale Beziehungen in post-traditionalen Wissensgesellschaften“. In: Technik und Sozialtheorie, W. Rammert (ed.), Frankfurt am Main/New York, 83-120.
Krohn, W. (2001): „Wissenschaft und Lebenswelt“. In: Wissensgesellschaft: Transformationen im Verhältnis von Wissenschaft und Alltag, Franz, H. et al. (ed.), Bielefeld, 10-17.
Kuhn, T. S. (1962): The Structure of Scientific Revolutions. Chicago.
Langford, D. (ed.) (2000): Internet Ethics. New York.

Nakada, M.; Tamura, T. (2005): “Japanese conceptions of privacy: An intercultural perspective”. In: Ethics and Information Technology 7, 27-36.
O’Reilly, T. (2005): What Is Web 2.0. Design Patterns and Business Models for the Next Generation of Software (http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html, 10.5.06).
Quigley, M. (ed.) (2007): Encyclopedia of Information Ethics and Security. Hershey, PA.
Ritter, J.; Gründer, K.; Gabriel, G. (ed.) (2004): Historisches Wörterbuch der Philosophie in 12 Bänden, Bd. 12. Basel.
Russell, B. (1925): “Knowledge by Acquaintance and Knowledge by Description”. In: Mysticism and Logic, London, 209-235.
Ryle, G. (1949): The Concept of Mind. New York.
Sartwell, C. (1992): “Why Knowledge is Merely True Belief”. In: The Journal of Philosophy 89, 167-180.
Stehr, N. (1994): Arbeit, Eigentum und Wissen. Zur Theorie von Wissensgesellschaften. Frankfurt a. M.
Steup, M. (2006): “The Analysis of Knowledge”. In: The Stanford Encyclopedia of Philosophy (Spring 2006 Edition), Edward N. Zalta (ed.) (http://plato.stanford.edu/archives/spr2006/entries/knowledge-analysis/).
Taylor, C. (1995/1997): “Irreducibly Social Goods”. In: Philosophical Arguments, Cambridge, Mass., 127-145.
Toffler, A.; Toffler, H.; Keyworth, G. A.; Gilder, G. (1994): “Cyberspace and the American Dream: A Magna Carta for the Knowledge Age” (http://www.pff.org/issues-pubs/futureinsights/fi1.2magnacarta.html, 5.5.2006).
Weber, M. (1980): Wirtschaft und Gesellschaft: Grundriß der verstehenden Soziologie, J. Winckelmann (ed.), Tübingen.
Weiser, M. (1996): Ubiquitous Computing (http://www.ubiq.com/ubicomp/, 5.5.2006).

Will the Open Access Movement be successful?

Michael Nentwich, Wien

The starting point: beginnings only

No doubt, from the point of view of scholars around the world, Open Access (OA) seems to be the obvious solution to the evident problems of scholarly publishing in the present age of commodification. Access to the academic literature would be universally available and hence not restricted to those lucky enough to belong to wealthy institutions able to afford all the necessary subscriptions. Furthermore, many believe that only if we have a fully digital, openly accessible archive of the relevant literature, enhanced with overlay functions such as commenting, reviewing and intelligent quality filtering, will we be able to overcome the restrictions of the present, paper-based scholarly communication system. Many initiatives have been launched (e.g. the Berlin Declaration1), some funding agencies have already reacted by adopting Open Access policies (notably the British Wellcome Trust2, but also the German DFG3 or the Austrian FWF4), new journal models are being tested to prove that Open Access is a viable economic model (e.g. BioMedCentral5), Open Access self-archiving servers flourish around the world (not least in philosophy), and even high politics has reacted (most recently the European Commission6). A few years ago, this author boldly predicted that a third phase of (re-)de-commodified scholarly publishing was around the corner, after the old de-commodified period and the present age of almost universal commodification (Nentwich 2001). But still, after a decade or so of initiatives (a well-known timeline on Open Access goes back to the 1990s; the Budapest Initiative7 dates from 2002), of testing and promoting, only a fraction of the available scientific literature is Open Access (a rough estimate is 15 %8). It is growing, no doubt, but we are a long way from universal Open Access. So, will the Open Access Movement be successful? Or, put differently, can it be successful?
What are the chances that the incumbents—the big commercial (as well as the non-profit, associational) publishing industry—will

give way to a de-commodified future? Is there a middle ground where all the players and interests could meet? This paper will contribute to this open debate by analysing recent trends and weighing the arguments put forward (this contribution, however, is not an account of the overwhelming number of papers published on this issue, but cites them very selectively9).

Colourful roads: white, yellow, blue, green, and gold

Since its beginnings, the Open Access movement has supported two approaches: academic publications should be made available for free either via digital repositories parallel to the print publication or, alternatively, via (electronic) journals or books that are themselves available for free. The promoters of the SHERPA/RoMEO project10 labelled these the green and the golden road to Open Access. Still, there are a few further colours involved: Seen from an Open Access perspective, the so-called white publishers are the “bad guys”, as they ask their authors to hand over exclusive copyright and do not allow any form of Open Access archiving; the “yellows” and “blues” allow their authors to archive either the pre-print or the post-print version; the “greens”, finally, allow both versions to be archived. Some of the “green” publishers do so only on condition that the author pays for it. The “golden” road to Open Access, by contrast, is a different approach: the journal or book itself is accessible for free, and the financing comes not from the reader but from the author side. Either the author or his/her institution pays a publication fee (usually between 500 and 2000 € per article), or the publication series itself is supported by a research institution’s current budget, by the membership fees of a scholarly association, by a specific grant of a funding agency, or by a patron or sponsor. In some respects, the green road is more conservative: the present publication system stays more or less the same, the publishers—whether commercial or non-profit—sell their products as usual, but they allow, under varying conditions, that authors make a digital copy of their paper available for public use. The result is two parallel worlds: the one we got used to over the last decades, namely the journals and books that we buy; and a new one of digital archives in which authors or their institutions publish their research results a second time.
Although the green road seems the more efficient way to make almost all publications available, it has two big disadvantages: first, it needs special attention from the side of the authors; they have to do something actively, OA is not automatic here. Second, versioning is a problem: the colours mentioned above indicate that it is not clear which version is going to be archived, which one of several is the reliable one, and which one should be quoted. In other words, the parallel world will never be exactly the same as the original one. The golden road, by contrast, is in a sense more revolutionary, as it tends to replace the old system by a new one and, indeed, some of the newly founded Open Access journals have been explicitly set up to compete with existing non-OA journals. Even some of the commercial publishers are testing the idea and have launched OA journals with an author-pays system; or they offer a hybrid model in which individual articles are bought off to gain OA status while the rest of the published articles remain non-OA. The big advantage of this road—and this is the reason to label it “golden”—is that there would be no question about the right version of a “golden” OA article: the latest, the published, and hence the quotable version is just the one published on the journal’s or book’s website. However, only a tiny minority of all journals and books are made available under this system. In practice, both roads are being used so far: the RoMEO database11 shows an increasing proportion of “coloured”, that is non-white, publishers, and at the same time we observe that there are quite a few OA journals already on the market, as documented by the DOAJ database12. Funding agencies react to this colourful map with a two-tier approach: they encourage their funded researchers to deposit their publications in an archive, or they promise to pay the author fees due to the publishers. Some research institutions act likewise with respect to their research staff members—some of them have registered their respective institutional policy in the ROARMAP database13.
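The colour scheme described above amounts to a small decision table. A sketch of it in code (note one assumption: the text leaves open which of yellow and blue covers pre-prints and which post-prints; the assignment below follows the RoMEO convention of yellow = pre-print, blue = post-print):

```python
# Self-archiving permissions per publisher "colour", as described in the text.
# Yellow/blue assignment is assumed (RoMEO convention), not stated in the text.
ROMEO_COLOURS = {
    "white":  {"preprint": False, "postprint": False},  # no archiving at all
    "yellow": {"preprint": True,  "postprint": False},
    "blue":   {"preprint": False, "postprint": True},
    "green":  {"preprint": True,  "postprint": True},   # both versions allowed
}

def may_archive(colour, version):
    """May an author self-archive this version under the given policy colour?"""
    return ROMEO_COLOURS[colour][version]

print(may_archive("green", "postprint"))  # True
print(may_archive("white", "preprint"))   # False
```

The “golden” road falls outside this table by design: there the published version itself is openly accessible, so no parallel archive copy is needed.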

Open questions

However, most observers are unanimous in their assessment: despite all declarations and initiatives, despite all progress, we are far away from a situation in which all papers are available Open Access in some form. In the following I shall discuss some of the most important issues and arguments that help to explain this state of affairs.

Is the Open Access movement strong enough?

Open Access activists are quite visible: a simple Google search for “Open Access” will result in over 49 million hits. So everyone looking for OA will find them on the web, will get access to the declarations, will be impressed

by the hundreds and thousands of signatories, and will discover many big and even high-level conferences devoted to OA. Seen from this perspective, OA seems to be strong. It also seems to be quite successful: when we look at the constant growth of OA journals in the DOAJ database and of green publishers in the RoMEO database, we tend to conclude that “we are almost there”. However, my strong impression is that the activists involved in the movement are only a few; this is not a mass movement, and most scholars and academic authors have not heard of it or even thought about repositories. Clearly, there are—as always—distinct differences between the various research areas, but there are enormous dark spots, both on the disciplinary level and regionally. Take the example of Austria. There are almost no institutional repositories, only very few Open Access journals14, only two signatories of the Berlin Declaration15 and only a handful of actors who have Open Access on their agenda. Not even the big players like the universities and the Academy of Sciences have an explicit OA policy yet. My preliminary conclusion: there is always the chance of a “revolution from above”, but the movement lacks a strong basis so far. Open Access is not an issue for most researchers. Given the fact that Open Access involves quite some distributed activity by every scientific author, this lack of awareness and agenda is probably at the root of the modest success of the OA Movement.

Are the incumbents too strong?

No doubt, the big players in the scholarly publishing business (Elsevier, Springer, Sage etc.) are very strong and tenacious (Nentwich 2003, 407 ff). They control an enormous share of the market; hence their market (bargaining) power is much bigger than that of the largest consumer consortia (the libraries). Among other effects, this leads to disadvantageous licensing agreements. For instance, libraries are not allowed to access the digital versions of journals they have once subscribed to after the end of the subscription period (by contrast, a paper journal is still available for future use after you cancel your subscription); and the agreements include stipulations that bind the libraries for long periods, with the effect that they cannot take advantage of consortia agreements. Taken together, this has led to a situation in which the expected advantages of electronic publishing did not pay off for the customers. Furthermore, vis-à-vis their authors, publishers are in an even stronger position, and there is no equivalent to consumer law governing the relationship between publisher and author. Uninformed and legally inexperienced authors usually never question the general terms and conditions, and they sign over the exclusive copyright to the publisher. While this undermines all efforts towards Open Access, it is quite understandable: when an author has successfully completed the peer review process and is offered the publication contract, this is certainly not the time to risk impact points by arguing with a big anonymous enterprise. While genuine Open Access journals can be quite successful (see for instance the BioMedCentral journals, to name just one prominent example), it takes a lot of effort and activism to found such new journals, and it takes time for them to become established. Obviously, as long as a new journal has no reputation, as long as it is not recognised by the indexing services, in particular by the “almighty” Thomson Scientific (which runs the ISI Citation Indices16), authors cannot be blamed for choosing the incumbents’ well-established journals.

Cheaper for whom?

As a rule, OA journals are electronic journals, and as such their production costs are, no doubt, considerably lower than those of print journals. Nevertheless, there are costs, and publishers, including OA publishers, argue that they are considerable, charging at least 500 € per paper to the author or his/her institution, often six times as much. It all depends on how you look at it. There are many low-budget OA journals that are run by scholarly associations and the enthusiasm of a few. The costs per published paper may be considerably less than 500 €, depending on what quality you expect when it comes to layout, proofreading and language checking (I am not speaking of content quality here: this is, obviously, not negotiable), on the amount of volunteered labour (which is not available at the same level in all fields), and on the level of internal support by a scholarly association. In my view, there are convincing arguments that the scholarly world could deliver what is needed for scholarly communication at much lower costs. Many of the cost items put forward by those arguing in favour of the present commercial model are not defensible when we look at the genuine needs of the research community. I am not only talking about a profit share of a third of the turnover, but also about marketing etc. By contrast, if you want to preserve the highest formal standards as well as a good share of profit for the publisher, a publishing system that turns from user-pays to author-pays will not be cheaper. This is how the publishers and their followers argue: you should not count on making the system cheaper altogether. The key issue is the margin of profit. A system run by libraries, scholarly associations, non-profit university presses and the research community would surely be much cheaper, even if it were done as professionally as by the commercial

publishing houses. In addition, it is important to differentiate between the target groups. A fully operational OA publishing system will be cheaper overall than the present one on condition that it is fully electronic and that the market is less concentrated than hitherto, that is, if many new (non-profit) players challenge the few big ones. Seen from the perspective of an individual institution, however, it is by no means clear whether the OA system would be cheaper or more expensive. Research institutions whose authors publish a lot may find themselves paying more in author charges in the future than they did for subscriptions in the past (Bauer 2006). At the other end of the continuum, private research enterprises, e.g. in the biotech or pharmaceutical sector, usually do not publish much (but rather patent their results). What they would have to pay in a fully OA environment would be much less; in other words, OA subsidises private research. It seems difficult to estimate how much smaller the contribution of the private sector would be to the otherwise publicly funded system, and even more difficult to say whether this would be less or more than the indirect subsidies now given to the publishing industry. In any case, the uncertainties about winners and losers and the differing interests of the many actor groups involved in scholarly publishing are probably part of the explanation why there is no forceful movement in one direction only.
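The asymmetry between publication-heavy and publication-light institutions can be made concrete with a toy calculation. Every figure below (the per-paper author fee, the subscription budgets, the publication counts) is a hypothetical assumption chosen only to illustrate the mechanism, not an empirical value:

```python
# Toy comparison of subscription spending vs. author-pays (APC) spending.
# All numbers are illustrative assumptions, not empirical data.

APC_PER_PAPER = 2_000  # assumed author fee per paper, in EUR

institutions = {
    # name: (annual subscription spend in EUR, papers published per year)
    "publication-heavy university": (1_000_000, 800),
    "patent-oriented biotech firm": (500_000, 20),
}

for name, (subscriptions, papers) in institutions.items():
    apc_total = papers * APC_PER_PAPER
    delta = apc_total - subscriptions  # positive: pays more under author-pays
    print(f"{name}: subscriptions {subscriptions:,} EUR, "
          f"author fees {apc_total:,} EUR, difference {delta:+,} EUR")
```

Under these assumed figures, the publication-heavy institution would pay more in author fees than it previously paid in subscriptions, while the patent-oriented firm would pay far less: precisely the subsidy effect described above.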

Do politics play too weak a role?

Politics in the narrow sense deals with Open Access only occasionally. The main actors in recent developments have been the European Commission and the OECD17. The EU, for instance, is going to invest €85m over the next years to support the OA infrastructure18. This is an important step, but more must follow in order to keep the infrastructure sustainable beyond the initial funding. States, by contrast, rarely if ever formulate an Open Access agenda, despite the enormous sums of public money spent on “buying back”, in published form, research results that were financed by the state in the first place. Some observers even argue that inaction in this area is a form of indirect subsidy to the publishing industry. However, no competition or subsidy case has been filed yet, and copyright law does not restrict the bargaining power of the publishers. There is only one area in which some states (and the EU) have recently started to become active, in an indirect way: by stipulating that research directly financed by public funds has to be made Open Access. An important amendment to this stipulation would be

that the necessary financial means for author fees be made available, because otherwise the research output would indeed be Open Access, but not properly published in the top journals. This is a first step, but many research funders still do not tie the support of a research project to Open Access publishing. Furthermore, a lot of research is only indirectly financed by the state, namely via the salaries of the researchers. As far as I can see, only a tiny minority of officials in research administration are as yet aware of the problem and of the possible solution, Open Access; in this respect, the situation differs little from the awareness among researchers. Open Access, we have to conclude, is not yet considered a public duty. The issue is not “sexy” in political terms, perhaps because Open Access taken seriously would hurt parts of the industry. If national governments are not active, the actors in the field, in particular the research institutions, the funders, the libraries and the archives, could step in and formulate institutional policies. Some do so. The ROARMAP database lists those who already have. While some of those who are active may not be aware of that database, the list of OA policies is not yet impressive: as of the end of July 2007, only 54 policies had been registered worldwide. Taking the example of academic institutions in the German-speaking world, fewer than ten universities in Germany and Switzerland, none in Austria, and two research funders, among them the Austrian FWF, have registered their policies. Seen from the perspective of OA promoters, there is much room for awareness-raising activities.

Is there no way out of double windmill situations?

The main issue, as far as I can see, is that double windmill situations are involved that make it particularly difficult to turn the tide in favour of Open Access. They are all of the same type: you cannot move to Open Access unless other conditions are fulfilled first. First, a reasonable author will not choose an alternative route to publish his/her papers unless the alternative is equally renowned; otherwise s/he would risk his/her career. Without the best pieces of the best authors, however, the new journals will not become attractive enough; only a handful of Nobel Prize winners have actively supported OA journals like those of BioMedCentral. Policies of scholarly associations and research institutions that accept OA journals as equivalent to traditional journals may be a way to overcome this lock-in, as would the active involvement of those who do not need further publications as badly as newcomers at the beginning of their careers. Second, libraries and research institutions do not want to cancel subscriptions to key journals that are not Open Access, expensive as they are, as long as their customers (the readers) rely on them. Without savings on the subscription side, there is not enough room for manoeuvre to support authors who have to pay for their articles to be published; here, additional financing for the interim period of transformation would help. Third, the small publishers are in a particularly odd position. They usually have only a few cash cows, so switching to OA is equivalent to biting the hand that feeds them. Unless they find other sources of income, they may face hard times in a changing environment that demands OA, since their bargaining power is weaker than that of the big players and of the library consortia. I believe that the small publishers will either turn into non-profit or small-profit publishers, that is, into service providers, or specialise in niche products, special services and sophisticated products that academia cannot provide for itself. Fourth, even the big publishers face a double windmill: their first option is to keep things much as they are, because the established model with its high profits seems unbeatable; yet they feel the pressure from their customers and from public bodies. Turning into “green” publishers seems to undermine their business model, and offering expensive hybrid models might not be accepted by the market. But if they do not give an inch, they may lose control altogether once an avalanche is triggered. In my opinion, it is in their interest to serve their customers, and this might mean meeting them halfway.
While the original, peer-reviewed final text will be available in an OA version, the version with a fancy layout, additional editorial comments, a journalistic news section, and add-ons such as high-resolution pictures and video streams will have to be paid for. I believe there is a market for these “luxurious” publication products. Many publishers, however, seem to want to stick to the traditional model, just as the music industry did a while ago, rather unsuccessfully. There are no easy ways out of the windmill situations; a fast transformation from the commodified to the OA system is therefore not to be expected. As exemplified above, only slow evolution and a step-by-step approach seem capable of overcoming these barriers.

Outlook: The odds are not so good after all, but …

Summarising the above account, one needs to be a great optimist to believe in a swift transformation of the present academic publishing system towards one based on Open Access. Although OA is no panacea, I am convinced that it is the superior system. The main barrier is the lack of coherent agency within the research community; coherence, however, is not to be expected across the board, as the actors have divergent interests. I therefore expect not a “golden” revolution but a “green” evolution, that is, the slow but steady growth of a parallel infrastructure of institutional and other repositories that will coalesce into a universal, decentralised Open Access database. It seems likely that at some point in the not-so-distant future all publishers will turn “green” and accept this parallel system. Depending on the field, and at different times, this basic, community-run database may in the long run become the first point of reference (at least for the majority of researchers), with the present commercial journal databases playing second fiddle. Where there is a big market outside academia, as in the biotech, pharmaceutical or legal fields, OA may never be more than a niche; by contrast, in fields that already have an important pre-print culture with large archives, or a more inward-oriented communication culture, the OA database may soon become the major point of reference. Fields with an important monograph culture, such as philosophy, may go either way; this projection might be a good starting point for discussion. In any case, it is my conviction that what I would call “cyber-entrepreneurs” are the single most important ingredient in this transformation process: without them, not much will happen. Coming back to my initial question, my answer is yes, there is a chance that the Open Access movement will be successful, but it will be a lengthy and cumbersome process.

References

Bauer, B., 2006, Kommerzielle Open Access Publishing-Geschäftsmodelle auf dem Prüfstand: ökonomische Zwischenbilanz der „Gold Road to Open Access“ an drei österreichischen Universitäten, GMS Medizin Bibliothek Information 6(3), http://www.egms.de/pdf/journals/mbi/2007-6/mbi000050.pdf.
Nentwich, M., 2001, (Re-)De-commodification in academic knowledge distribution?, Science Studies 14(2), 21–42.
Nentwich, M., 2003, Cyberscience: Research in the Age of the Internet, Vienna: Austrian Academy of Sciences Press.

Endnotes

1 http://oa.mpg.de.
2 http://www.wellcome.ac.uk/doc_WTD002766.html.
3 http://www.dfg.de/forschungsfoerderung/wissenschaftliche_infrastruktur/lis/projektfoerderung/foerderziele/open_access.html.
4 http://www.fwf.ac.at/de/public_relations/oai/index.html.
5 http://www.biomedcentral.com.
6 http://ec.europa.eu/research/science-society/pdf/scientific-publication-study_en.pdf.
7 http://www.soros.org/openaccess/initiatives.shtml.
8 This estimate is based on a personal communication by Stevan Harnad to this author and is certainly open for debate.
9 See for instance the OA Webliography 2005 at http://www.escholarlypub.com/cwb/oaw.htm.
10 http://www.sherpa.ac.uk.
11 http://www.sherpa.ac.uk/romeo.php.
12 http://www.doaj.org.
13 http://www.eprints.org/openaccess/policysignup.
14 Bruno Bauer recently counted eight OA journals and five repositories in Austria (see Bauer 2006).
15 The Austrian Research Fund FWF and the Rectors’ Conference of the Austrian universities.
16 http://portal.isiknowledge.com.
17 http://www.oecd.org/document/0,2340,en_2649_34487_25998799_1_1_1_1,00.html.
18 http://europa.eu/rapid/pressReleasesAction.do?reference=IP/07/190&format=HTML&aged=1&language=EN&guiLanguage=en.

Globalised Production of (Academic) Knowledge – a Competitive Game
Ursula Schneider, Graz

Ethnography as a Method of Self-Reflection

The context of discovery of the following reflections lies in lived practice, namely an unease with the rituals of academic practice that allegedly serve quality assurance. The thesis put forward is that results1 are increasingly generated in an industrialised fashion in a global context under conditions of competition, for which a fair amount of empirical evidence can be adduced. Adopting a critical perspective necessarily provokes criticism. Thinking advances in this way when the criticism engages with the arguments presented and does not rely exclusively on interpretations drawn from the critics’ own frame of reference. To forestall an unnecessary exchange of blows, the author justifies her one-sided focus on the problematic aspects of an industrialised production of knowledge (in universities) with the pragmatic argument that the zeitgeist emphasises its positive effects in an almost mantra-like fashion, and there are good arguments for those effects as well.

The following remarks fall into two categories of argument. A smaller part concerns the knowability of what we vaguely call reality: here the familiar limits of the epistemic project of modernity are at issue, whereby, so the thesis of this paper, different perspectives now compete more strongly for market acceptance because competition is increasing. A larger part of the observations takes an ethnological perspective on one’s own system: which rituals, which constructions of heroes, which modalities of inclusion and exclusion are at work in the academic system? The focus of the investigation lies on practices and their effects. The author does not further discuss the fact that every system needs the modalities described for reasons of identity formation, orientation, coordination and coping with complexity. Rather, she is concerned with the observation that these modalities stem from the context of the (village) community and are therefore readily ignored in the anonymous, “disembedding” context of modern society. Given the advanced possibilities of applying paradigms from cultural studies to the political and economic systems, she nevertheless finds the relative abstinence from transferring them to the academic system surprising. The paper starts from the observation that this system lacks systematic self-reflection, which is supported by the small number of studies dealing with the conditions of production and reproduction within it.2 The author’s immediate experience stems from the social sciences, to which almost all the empirical evidence is confined, while the theoretical derivation of the questions put to the empirical material can presumably be transferred to other disciplines.

Historically, the paper deliberately chooses a narrow window: the period between 1970 and 2007, a time in which the dominant societal and economic-policy paradigms shifted from Keynesianism and the social market economy to neoliberal concepts, accompanied by repeated reforms of the European universities. Viewed dichotomously, the reforms can be described as a movement from self-organisation, evolution and cooperation towards empowered, professional management bodies, strategically planned priorities and a strengthening of competitive elements (cf. Goedegebuure & van Vught, 1996). While business enterprises are developing counter-models to the Fordist organisation of production in order to accommodate the peculiarities of knowledge work, so-called New Public Management3 seems instead to build on a mechanistic way of thinking about organisations, one shaped by notions of control and feasibility.
For reasons of space, only a brief typological sketch of what is meant by Fordism can be given here:

1. An extremely pronounced division of labour, both horizontal and vertical, yields specialisation efficiency (e.g. through learning-curve effects) while simultaneously creating coordination problems. In the academic system this feature corresponds to a strong differentiation of individual disciplines, which branch out into the smallest niches and which, at least in the social sciences, cannot be joined into one or several connecting common theories.

2. Intelligent work preparation (machine layout, planning, programming) is separated from monotonous work execution. Transferred to the academic sector, this means that young researchers fit themselves into existing programmes and standards for working through them, standards devised by research and technology councils and editorial boards which, and this is the assumed difference to the more liberal preceding era, negatively affect careers when questioned. For if productivity is measured as output, work that shakes premises tends to slow one’s advancement within them. The quasi-industrial organisation of knowledge production is most striking in the example of working through DNA sequences according to a programmed methodology. An increasing separation of management and execution is evident across Europe in the trend to run universities and research institutions on the model of business corporations and to apply methods of strategic planning in order to decide “rationally” about priority themes and the allocation of funds. This approach is credited with a higher probability of success than an unguided, decentralised evolution in line with the idea of the freedom of research. A separation is thereby established between persons who steer scientific work through output control, rule-setting and resource allocation, and persons who do research, that is, execute, within this framework, albeit at the highest level. This raises questions of research ethics that are not easy to answer.

3. Specialisation and “relief” from the meta-knowledge of shaping the context result in alienation, partial de-skilling and motivation problems, since the question of meaning can hardly be answered from the work context any longer. This should apply above all to researchers, who are trained in asking the “why” question.

The reference to the de-skilling of one of the most formally educated groups is likely to be experienced as threatening to identity (as is the entire ethnological reflection conducted here), and can nevertheless be argued for by the necessary intense concentration on a small segment of questions and methods, and by the equally necessary ignorance of all other segments. In what follows, the conditions of Fordist knowledge production are considered along the criteria listed in the abstract.

Globalisation

Science is, by its very nature, at least according to the claim of modernity, universal. Nevertheless, different scientific traditions can be distinguished (cf. Galtung, 1981), each of which contributes differently to the acquisition of knowledge. The two world wars of the twentieth century threw the European, and in particular the German-speaking, humanities and social sciences back onto their own boundaries, a process that is now being reversed by globalisation. Universal science succeeds more easily on the basis of a universal language, which is why we currently observe the growing spread of “off-shore” English as the new lingua franca. This does not come without side effects, whose clinical trials are taking place in reality, as it were. English, for example, creates the illusion of mutual understanding where shifts of meaning and interculturally different interpretations of concepts are in play. Combined with the dominance of English-language journals, scholarly work in a non-native language implies, on the one hand, losses of nuance and, on the other, a stagnation in the development of the non-English languages of science. If one assumes the social construction of knowledge (cf. Berger & Luckmann 1967), traditions of thought are moreover lost whose diversity could contribute to a higher degree of new insight. The standardisation of language and text formats is, in any case, a hallmark of industrial mass production.

Digitalisation

For the current generation of students there is hardly any intellectual life before the Internet; what cannot be found there is scarcely taken note of any more. Metatheoretical and theoretical concepts can be rapidly received on the net, and their further processing is ascribed more value than the concepts themselves, since it lies in their nature to appear self-evident once they have spread. The possibilities of rapid processing, dissemination and reproduction cause problems in attributing copyright, in judging originality and in coping with the abundance of data. One may suspect that quality is no longer judged by the substance of the base texts but refers only to the “tags” with which such texts are classified. Since a great deal of information is lost in indexing, this is probably unavoidable for pragmatic reasons, but problematic with regard to quality assurance, as will be shown.

Differentiation

Here Luhmann’s approach is followed, which links differentiation to the emergence of a system’s own guiding distinction, as well as of programmes and a language with which that guiding distinction can be processed (cf. Luhmann, 1997). Precisely because it narrows the horizon, specialisation leads on the one hand to efficiency and on the other to communication problems with outsiders, to suboptimal solutions for the overall system and to structural irresponsibility with regard to external effects4. In the system of industrialised global academic production, specialisation is visible in an almost exponential increase in publications through the founding of journals and the fanning out of disciplines (such as medicine or business administration) into sub-disciplines (such as internal medicine or finance) and of the latter into finely ramified sub-fields (such as vascular medicine or behavioural finance). In narrative form: “You have to become a niche sociologist if you still want to contribute new insights” (Walzl, 2007). Founding new niches counts as a risky but effective career strategy. This appears to have the following consequences: specialists in the sub-fields read exclusively texts from those sub-fields, because merely keeping up with them demands constant effort (cf. the aspect of abundance). They develop their own “language games”, which at the explicit surface concerns concepts and methods, but at the implicit level means the development of different “practice” (Brown & Duguid, 1999; Polanyi, 1966). From this arise problems of mutual understanding, losses of clarity and overview, and the constant reinvention of the wheel. These effects, however, can hardly be perceived within the system, because the individual “biotopes” remain unconnected. Boundary-crossers are frequently hindered in their careers; interdisciplinarity and transdisciplinarity (cf. Stehr, 2003 and Gibbons et al. 1994) appear to be merely buzzwords developed to compensate for this finding, taking place “when the monologues are dutifully recited side by side”.

“Empiric Turn”

Economics, as a discipline situated between the natural sciences and the humanities, has always struggled for recognition, which it sought to attain by imitating the research approaches of the natural sciences (cf. Snow, 1993). As a consequence of globalisation, the social sciences have turned towards positivist (critical-rationalist) epistemic traditions, a turn reflected in the predominance of the procedure of hypothesis testing. However, findings about human beings and their interactions do not constitute accepted conventions, so the industrially processed hypotheses can be traced back to an arbitrary multiplicity of theories. There is indeed something to be said for the attempt to step out of a stage of normativity that counts as pre-scientific within this paradigm, when it is set against an unreflected normativity of the kind that long characterised the German-speaking social sciences in particular. On the other hand, one must ask what statements can be derived from an “as-is state” that, owing to the lacking standardisation of the underlying theories, lacking resources and lacking access to the “living object”5, is ultimately valid only for the restricted context of the survey. Furthermore, there is the fundamental question of whether the methods of Newtonian physics are suited to dealing with the double contingency that results from the impossibility of directly grasping the inner processes of the subjects researched and the subjects researching, which is why double interpretation is always in play. As an analysis of recent volumes of the relevant journals in international management showed, paradigmatic-conceptual contributions are declining in favour of empirical work. In the latter, the emphasis lies on statistical methods of data processing, while the context of discovery and the quality of data collection are hardly reflected upon.

Observers increasingly note that the problems treated with such procedures lack relevance. Jensen, himself a much-cited author of formal work, speaks of “… toy problems, raised by an article in another journal” and of “… rigorous, but empty theorems and results springing from uninteresting statistics” (Jensen, 2007). What remains are findings restricted in space and time, in survey technique and in logic, which have to prove themselves not against phenomena but on the market for attention within the community. Owing to the problem of abundance, this market functions only on the meta-level of indexing, much as the semantic web envisages for the automatic processing of content (cf. Daconta et al., 2003). Research on the functioning of the academic market itself barely exists; policy and theory-building deal with production, not with diffusion. This contradicts a conception of knowledge as process rather than product, according to which production and diffusion interlock (cf. Schneider, 2001, 51). Even less attention is paid to the spread of findings into social practice, which takes place via the channels of education, consulting, conference activity and publication. When ultimately arbitrary results are then also selectively picked and superficially received in order to underpin political interests, the state of academic “fast food” polemically sketched above acquires weight for democratic politics. The product metaphor for scientific findings, from which intellectual property rights can also be derived, would consequently have to include the recognition of warranty claims, yet the latter could be adjudicated only at great expense, as can already be observed today in cases of medical malpractice. What has here been called the “empiric turn”, combined with mass production, leads in any case to a situation in which essential selection decisions take place implicitly, producing unpredictable but path-dependent effects that do not meet the claim to rationality associated with a science of the Enlightenment. Where partial and heavily qualified social “theories” are translated into social techniques, a manipulative turn takes place, from which high demands on the ethics of researchers can be derived. The empiric turn can be assigned to the second feature of the Fordist organisation of knowledge production: practices of fund allocation place paradigms, and even middle-range theories, beyond dispute and make advance decisions about which research questions promise success.

The efficiency gain thus achievable is sold politically as relieving researchers, who can now concentrate entirely on the context of justification. Together with the efficiency gain from differentiation, the result is a transition from craft to industrial manufacture, with productivity gains preferably measured as increases in output, such as the number of projects acquired, dissertations supervised or papers published. Admittedly, for the latter, additional self-imposed standards of qualification come into play, such as journal rankings and impact factors.

Publication Abundance, Information (Data!) Overflow

As can be traced in the rapidly rising number of submissions and in the growth of publication outlets, sub-disciplines and conferences, by now even the publications within a single specialty can no longer be surveyed. The entry of India and China into the global research community reinforces this effect through the sheer volume of potential submissions. If double-blind evaluation is to be retained as best practice, this means that substantially more reviews are needed, for which only people are available who must in turn produce substantially more themselves. Thoroughness can then hardly be guaranteed, which is why multi-layered procedures are used. At the base stands the substance of the ideas. Reviewers check this substance for its conformity with what has been thought before, which presupposes a particular mode of presentation: use of the terminology and figures of argument customary in the reviewers’ niche, use of the methods recognised as demanding in that niche, and reference to the sources recognised in that niche (information loss 1). In saying this I do not adopt a completely relativist epistemological position. But I do hold that the manner and circumstances of cognition influence what can be known. Furthermore, I observe empirically that under conditions of acceleration a computer-like matching against familiar terminologies and argument structures takes place, rather than an engagement with deviating innovations. At level 2, texts are given “tags” which, on the one hand, provide keywording in the sense of accepted surveys of the terrain and, on the other, estimate quality by method and references. In parallel, the community decides, either through the market or through administrative procedures, which journals are to be regarded as “leading”.

Level 3 then supplies meta-indices in the form of journal rankings, on the basis of which further careers are decided (information loss 2). Because funds are limited and awarded ever more competitively, marginal performance differences (say, 360 versus 354 points on an artificially unified scale) are in this way converted, via self-reinforcement, into large differences in resources and reputation. Abundance also seems to help explain the observation that a small share of papers is referred to again and again. Presumably to reduce the uncertainty of acceptance by the community, a need arises for core works (salient papers) to which the majority refers, so that authors can cover themselves by citing them. It would be an interesting research question to investigate why the initial spark occurs; chance, the authors’ provenance, fit with currents of the zeitgeist, or “actual” innovation in insight would be variables worth considering. Thereafter, at any rate, self-reinforcement effects operate that function much like compound interest on money capital: works are cited because they have already been cited, and their substantive content recedes into the background. This is in turn of interest in the context of a stronger differentiation of income and resources on the basis of evaluated “performance”. The central question is whether the observed abundance of research serves the progress of knowledge, hinders it, or leaves it unaffected. Combined with the evaluation practices considered in the next section, there is some indication that we are dealing with hindrance.
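The compound-interest analogy can be made concrete with a minimal preferential-attachment simulation. This is a sketch under invented parameters (20 papers, 1000 citations), not a model of any real citation database: each new citation goes to an existing paper with probability proportional to its current citation count, so early leads snowball.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# 20 hypothetical papers, initially uncited; 1000 citations arrive one by
# one, each going to a paper with probability proportional to (citations + 1).
# The "+1" ensures that uncited papers can be picked at all.
citations = [0] * 20
for _ in range(1000):
    weights = [c + 1 for c in citations]
    target = random.choices(range(len(citations)), weights=weights)[0]
    citations[target] += 1

print(sorted(citations, reverse=True))
```

The resulting distribution is highly skewed: a handful of “salient papers” accumulate most of the citations while the rest remain marginal, even though all papers started out identical.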

Evaluation Rituals

Transferring management approaches to the production process of knowledge implies operational target agreements and continuous measurement of results. Combined with the review and publishing strategies of academic journals and with the project-award practices of research funding, this leads to a short-term orientation, to "information pollution" through the repeated use of the same ideas and claims, and above all to an opportunistic acceptance of the so-called mainstream, which in turn impairs (discontinuous) innovation. Linear intervention in the complex process of knowledge creation triggers an immediate reaction among junior researchers: they invert the research process, defining research questions backwards from the desired result or from whatever topics currently enjoy attention. Such an orientation towards the "market" could contribute to a better allocation of intellectual resources to societal problems if the market actually represented the latter. Because of increasing self-reference, however, this holds at best by accident, which is why the loss of genuine curiosity, unfolding independently and in a decentralised manner, registers economically as a deficiency of quasi-central steering and epistemologically as a danger of insufficient originality. In highly differentiated niches, double-blind review is rarely blind, since research questions, writing style and references can be attributed with relative ease. Because common base theories are lacking, and because different paradigms demand different approaches, even conscientious reviewers can arrive at very different assessments. An inter-rater reliability of 12 per cent (cf. Starbuck, 2006, 19), reported for Administrative Science Quarterly, hardly suggests high validity and objectivity for the method of peer review, even though this method is admittedly the best available before findings have to pass the long test of history. It does, however, tempt people (human, all too human) to abuse, as is briefly discussed in what follows.
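For illustration, low inter-rater reliability can be quantified with Cohen's kappa, which corrects raw reviewer agreement for the agreement expected by chance alone. The accept/reject verdicts below are invented for the example; a kappa near zero means the two reviewers agree hardly more often than chance would predict.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Raw (observed) proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts of two reviewers on six submissions.
reviewer_1 = ["accept", "reject", "reject", "accept", "reject", "reject"]
reviewer_2 = ["reject", "reject", "accept", "accept", "reject", "accept"]
kappa = cohens_kappa(reviewer_1, reviewer_2)
```

Here the reviewers agree on half the submissions, yet kappa is approximately zero, because half is exactly what their marginal accept/reject rates would produce by chance.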

The Rise of Fraud, Abuse and Manipulation

As Di Trocchio reports, the phenomena named in this heading are by no means new (cf. Di Trocchio, 1994). Spectacular cases, such as that of the Korean stem-cell researcher Hwang, nevertheless seem to indicate that competitive pressure leads to a scarcity of attention and thus to the temptation to strike ever shriller notes in publication. On the basis of the recorded incidents and their differing detectability across epochs, it is not possible to substantiate an empirical increase in fraud, manipulation and abuse. What can be established, alongside the pressure of success and evaluation already mentioned, are improved technical possibilities of fraud (and of its detection), as well as a growing media interest in all its forms, from denied co-authorship through plagiarism and the purchase of titles to the theft of ideas under review and the falsification of measurement data and results. As with the unintended implicit selections cited above, questions arise concerning the validity, reliability and one-sided political effect of falsified results. Lawrence, a co-editor of Nature, reports deliberate obstruction of competitors racing towards similar discoveries, as once happened with the AIDS virus: time-consuming additional laboratory tests are demanded, for example (cf. Lawrence, 2003).

Increasing Self-Reference

Following Luhmann's systems theory, systems are characterised by the fact that they continually reproduce the elements and relations of which they consist; he speaks of autopoiesis. He links this to the feature of operational closure, in the sense that systems can process nothing except that for which they have developed basal operations. They can therefore not be influenced directly from outside, but only irritated. Irritation is set off by triggers capable of addressing a basal operation and thereby stimulating the system, that is, setting it oscillating. Transferred to the academic system, it becomes clear why its "inmates" react only to one another, or only to internal processes (such as the acquisition of reputation). The multi-layered referencing discussed above has the consequence that several layers of interpretive scenery slide in between the phenomena to be explained (or understood) and the current interpretations offered by the researchers "in office". Since these layers often have more to do with the historical interpretations of earlier authors than with the phenomenon itself, it is plausible to assume that they can obstruct the view of the actual phenomenon. When entire seminars and workshops devote themselves to the exegesis of a single sentence by one of the greats of the field, developing in the process an acronym-rich insider language that experts from other niches cannot follow, both the contribution to knowledge and, above all, the practical effectiveness of such activity in the lifeworld may legitimately be called into question. Put differently: manifold processes of decontextualising phenomena from their spatio-temporal and social settings, and of recontextualising them by assigning them to schools of thought, can transform the unavoidable but productively irritable self-reference into a quasi-autistic glass bead game whose legitimation to the outside world then succeeds only incompletely.

Conclusions, or Rather: Open Questions

This essay is an attempt to consider potential aberrations in the European, and increasingly global, system of producing academic knowledge from an ethno- and knowledge-sociological perspective. It largely refrains from explaining the causes of these aberrations, since that would have required a longer engagement with questions of individual and collective identity formation, with questions of "face" on the one hand and with questions of power and influence on the other. The emphasis lies instead on the (retro)effects of the postulated aberrations on the academic system. The latter is not measured against some predecessor model, however defined, but analysed by criteria drawn from industrial organisation theory. Drawing on the few sources that venture into self-reflection of a tendentially identity-threatening kind, and on many years of cross-cultural experience in this system, confirmed by many interlocutors, mostly behind cupped hands, I6 assume that motives springing from group identity and from individual striving for power, together with unconscious closures in the thinking of the actors in the system, matter more for the rankings by which prestige and resources are distributed than the rational-choice model commonly assumes. This is not(!) to impute that the majority of participants have thrown overboard every ethos of truth-seeking and collegial fairness. The fruitfulness of a peer discourse that sharpens insight, or, as we would put it in the language of business, of a prototyping cycle of draft, critique and improved draft, is expressly endorsed as an ideal model. The argument made here is rather that under the present conditions of a quasi-industrial production of knowledge, serious prototyping is the exception rather than the rule. The hypothesis of a quasi-industrial production of knowledge is developed in analogy to the Fordist approach to producing industrial goods. It thus describes specialisation (and the "laicisation" beyond one's own focus that necessarily accompanies it), a vertical division of labour between those who decide on the paradigmatic and resource framework and "executing" research within frames decided in advance and sharpened and narrowed by competition, and finally a focus on efficiency that tends to screen out questions of effectiveness or of meaning. As in all communication, academic discourse continuously and mutually negotiates positions alongside content.
In conjunction with the external developments of globalisation and digitisation, to which legislators have responded with university reforms that model governance on corporate management, this means that the assumption of objective and valid quality control must be questioned as an illusion. Such questioning has to do with weaknesses and deficits of the actors involved only secondarily. It rests primarily on structural aspects: specialisation that turns into fragmentation; an abundance of information that renders selection increasingly arbitrary; and competition, combined with bureaucratic instruments of discipline, which heightens the pressure to raise and accelerate the productivity of ideas, at least at the level of measurable appearances. The question is how this affects the quality of ideas, and of the discourse about ideas, within the system. In the end, two questions must remain open: how valid are the results produced under the conditions described, and how much innovation does the system permit? Validity is not as unassailable as it is often presented, but neither is it missed as thoroughly as an emphasis on critique might suggest. Innovation, by contrast, appears threatened in the present scientific system, because the 5% inspiration bound up with it not only eludes planned-economy approaches but may even react counterproductively to them. What helps here, however, is trust in a spirit of inquiry which we assume (metaphysically!) will break through again and again, in spite of the structures.

Hypothesis: Industrialised Mass Production of "Knowledge" in the Social Sciences

GLOBALISATION
Description/Evidence: Dismantling of the boundaries between national scholarly traditions; dominance of the English language. Indicators: origin of the authors in English-language publication outlets; dominance of English-language journals in points systems.
Effects: The return to a universalist model with a lingua franca increases the chances of diffusion, contributes to information overflow, and favours simple expression and the adoption of the thought structures of native-speaker authors.

DIGITISATION & NETWORKING
Description/Evidence: Changed time horizons and reception practices. Indicator: student papers compared over time.
Effects: Early presence influences impact more than substance; older sources are increasingly screened out.

DIFFERENTIATION
Description/Evidence: Fine-graining or fragmentation of knowledge (Luhmann, 1997). Indicator: growth of disciplines and publication outlets.
Effects: Mutual speechlessness; failure of inter- and transdisciplinarity; reinvention of the wheel (systemic inefficiency).

EMPIRIC TURN
Description/Evidence: Shift of focus from the problem (the question) to the method; devaluation of paradigmatic and conceptual work (Galtung, 1981). Indicator: articles published in top journals (of the social and economic sciences).
Effects: Statistics replaces logic; testing of elements rather than of their relations; build-up of defensive routines; lack of relevance of the hypotheses examined.

INFORMATION OVERFLOW / PUBLICATION GLUT
Description/Evidence: Mismatch between production and quality control (Jensen, 2007; Bennis & O'Toole, 2005). Indicators: increase in publications; increase in authors; increase in dissemination media.
Effects: "Accidental" creation of core works through defensive citation; superficial reviews based on meta-level criteria (indexing of text); low inter-rater reliability.

COMPARATIVE EVALUATION
Description/Evidence: The Austrian Universitätsgesetz 2002, together with the experience of the British, Scandinavian and US systems, as the basis of decisions on careers and the allocation of funds.
Effects: Transformation of marginal differences into pronounced inequality of opportunity; inversion of the research process (planning backwards from the result); orientation towards the mainstream; withdrawal of originality and creativity.

GROWING TEMPTATION TO FRAUD, MANIPULATION AND ABUSE
Description/Evidence: Indicators: detected cases; life cycle of ideas that later prove prizeworthy.
Effects: Manipulation of data and of results; cartel formation; plagiarism; slowing down of competitors through imposed requirements (Lawrence, 2003; Di Trocchio, 1994).

INCREASING SELF-REFERENCE
Description/Evidence: Reference to multi-layered constructions of observers of observers; construction of dualities as the motor that keeps the cognitive system running.
Effects: Loss of relevance; "pseudo-problems from pseudo-worlds".

Endnotes
1 The terms knowledge or cognition are deliberately avoided here. Results are regarded as products in the sense of economic theory. The question of intellectual property rights is not considered here, although it depends quite essentially on the open debate over whether academic knowledge is viewed as a collective and public good or as a private one.
2 By contrast, work concerned with questions of output and output comparison is on the increase.
3 New Public Management refers to the steering of public-sector activities and aims to transfer methods proven in the private economy to that sector.
4 Economists call external effects those side-effects of carrying out one's own basic operations that affect third parties and whose consequences third parties must bear, such as environmental pollution, but also full employment.
5 It is almost impossible to escape the usual presuppositions of duality. In order not to presuppose this question but to keep it workable, I use quotation marks (cf. Mitterer, 2001).
6 A small linguistic detail on which worldviews divide. Some reviewers and journals demand impersonal formulations in order to underline the effort at distance in the service of undistorted observation and analysis. Others also allow the first person, with which authors take responsibility for what is said. In this sense I use the first person here.

References
Bennis, Warren G., O'Toole, James (2005): "How Business Schools Lost Their Way", Harvard Business Review 83, 96–105.
Berger, Peter L., Luckmann, Thomas (1967): The Social Construction of Reality, New York: Doubleday Anchor.
Cook, Scott D. N., Brown, John S. (1999): "Bridging Epistemologies: The Generative Dance Between Organizational Knowledge and Organizational Knowing", Organization Science 10, 381–400.
Daconta, Michael, Obrst, Leo, Smith, Kevin (2003): The Semantic Web. A Guide to the Future of XML, Web Services, and Knowledge Management, Indianapolis: Wiley.
Di Trocchio, Federico (1994): Der grosse Schwindel: Betrug und Fälschung in der Wissenschaft, Frankfurt a. Main et al.: Campus Verlag.
Dueck, Gunter (2000): "Neue Wissenschaften und deren Anwendung", Informatik Spektrum 23, 60–64.
Galtung, Johan (1981): "Structure, Culture and Intellectual Style. An Essay Comparing Saxonic, Teutonic, Gallic and Nipponic Approaches", Social Science Information 20, 817–856.
Gibbons, Michael, Limoges, Camille, Nowotny, Helga, Schwartzman, Simon, Scott, Peter, Trow, Martin (1994): The New Production of Knowledge, London: Sage Publications.
Goedegebuure, L., van Vught, F. (1996): "Comparative Higher Education Studies: The Perspective from the Policy Sciences", Higher Education 32 (4), 371–394.
Jensen, Michael (2007): Keynote at EURAM 2007, 16–19 May 2007, Paris, http://www.euram2007.org/r/default.asp?iId=KIEHK, last accessed 20 May, 11:30 a.m.
Lawrence, Peter A. (2003): "The Politics of Publication. Authors, Reviewers and Editors Must Act to Protect the Quality of Research", Nature 422, 259–261.
Luhmann, Niklas (1997): Die Gesellschaft der Gesellschaft, 2 vols., Frankfurt a. Main: Suhrkamp Verlag.
Polanyi, Michael (1966): The Tacit Dimension, Garden City: Doubleday & Company.
Power, Michael (1997): The Audit Society. Rituals of Verification, Oxford: Oxford University Press.
Schneider, Ursula (2001): Die sieben Todsünden im Wissensmanagement. Kardinaltugenden für die Wissensökonomie, o.O.: FAZ Verlag.
Snow, Charles P. (1993): The Two Cultures, Cambridge: Cambridge University Press.
Starbuck, William H. (2006): The Production of Knowledge: The Challenge of Social Science Research, Oxford: Oxford University Press.
Stehr, Nico (2003): Wissenspolitik. Die Überwachung des Wissens, Frankfurt a. Main: Suhrkamp.
Walzl, F. (2007): "Chaos der Disziplinen? Von der Fragmentierung der Soziologie zur transdisziplinären Kooperation", lecture at the congress "Nachbarschaftsbeziehungen" of the Austrian Sociological Association, Graz, 25 September 2007.

Section 4: Electronic philosophy resources and Open Source / Open Access

Philosophy in an Evolving Web: Necessary Conditions, Web Technologies, and the Discovery Project Thomas Bartscherer, Chicago Paolo D’Iorio, Paris

Introduction

The Internet has been a mixed blessing for humanities scholars, and especially for philosophers. On the positive side, instant access to an inexhaustible fount of information caters to our inveterate and equally inexhaustible curiosity. And recent developments in both habits of use and technological capacity―captured, albeit loosely, in the notion of Web 2.0―have made the Web, in particular, ever more hospitable to philosophy. And yet for the purposes of academic research, the Internet has heretofore suffered from decisive shortcomings. The very freedoms that make cyberspace a lively forum for intellectual exchange make it treacherous for scholarship. The litany of questions is familiar enough: how reliable, for example, are the articles in Wikipedia? Or those in any given electronic journal? Or the translations that can be found on, say, the Pirate Nietzsche Page? Is that website even in existence any longer? And if not, what happens if I cited it as a source in an essay I wrote? Perhaps one may think I had it coming, citing a website with a name like that. But how about the more staid-sounding Conference: A Journal of Philosophy and Theory, hosted by Columbia University? Or Earth Ethics? Or Acta Philosophica: An International Journal of Philosophy hosted by the Pontifical University of the Holy Cross? All were once available on the Web but are presently defunct, and while that last one may yet be resurrected, there is no shortage of examples of online resources that have disappeared forever. And there is the more general and more pervasive problem of broken links.1 In a nutshell, the questions pertain to standards of quality and stability of sources. In this paper, we shall be considering how digital technology―and in particular recent innovations in networking and the Semantic Web―can be exploited to assist scholars in conducting academic research while at the same time avoiding the pitfalls that render the Internet a false friend of scholarship.

We begin with an attempt to articulate certain principles that are independent of any given technology but are of fundamental significance for the humanities in general as they become ever more intertwined with emerging information technology. In light of these principles, we then turn to a detailed look at one particular example―the Discovery Project, recently launched under the aegis of the European Union's eContentplus programme―that is presently developing a web-based network for academic research in philosophy.2 And we conclude by noting three major challenges that confront this project and all similar initiatives aimed at integrating humanities research and digital technology.

Conditions of Possibility

The digitization of humanities research thus far has in many cases been driven more by capacity than need, with each advance in technology inspiring a new set of aspirations and plans, many of which never come to fruition or quickly lose relevance. In what follows, we shall be discussing a project that, through its engagement with emerging technologies, runs precisely these risks. While such risks are unavoidable in this field, we believe that they can be minimized, and it is to this end that we would like to emphasize the importance of beginning with a reflection on how humanities scholarship has traditionally been structured and practiced. The more accurate one's understanding of the underlying presuppositions of academic research, the better equipped one will be to conceive, design, and evaluate technological support for it. Concretely, we would like to try to formulate certain principles that undergird scholarship in the human sciences as it has come to be practiced in the West. To borrow a Kantian formulation, one could say that these are conditions necessary for the possibility of scholarship. While not purporting to have assembled an exhaustive list, we offer the following preliminary set of interrelated conditions:

i. Stability―Once published, a scholarly source must remain in a fixed form at a fixed, citable address. This is a familiar expectation in the context of paper publishing. The citation―complete with the name of the source, its author, the publisher, and the publication date―points to a source that is fixed in its published form. Insofar as academic research functions as a science, it requires that authors make and defend claims which are then open to refutation. The stability of sources is a necessary precondition for this activity. Once published, a text cannot be "unpublished," nor can

someone other than the author change the text without the author's consent. Subsequent changes made or authorized by the author, meanwhile, result in the creation of a new source document―e.g., a second edition that will be identified as such in its title and in any citations pointing to it.

ii. Accessibility―Both primary sources and secondary scholarship in a given field should be made generally available for consultation and review by others working in that field. Without access to primary sources, scholars are, to varying degrees, at the mercy of intermediaries and constantly at risk of over-reliance on the argument from authority. Access to secondary sources, meanwhile, is necessary not only so that the knowledge may be shared, but so that arguments may be tested, verified, or refuted. This has traditionally been ensured by the preparation and publication of texts on the part of scholars, and by the establishment and support of libraries that acquire this material and furnish it to scholars.

iii. Durability―In principle, published texts should be preserved and permanently available. In the paper world, this has traditionally been the responsibility of libraries and archives.

iv. Dissemination―While properly thought of as a means by which accessibility and durability are achieved, dissemination merits emphasis because of the key role it has played in the development of scholarship. The reproduction and distribution of texts―originally in manuscript, then in print and now also in photographic and digital formats―has been, in practice, the most effective means to ensure that scholars have access to primary and secondary material. At the same time, the more broadly a text is disseminated, the more likely it will be that it survives the vicissitudes of time.

v. Standards of Quality―This is, to be sure, complex and controversial territory, but it is in the nature of academic scholarship that standards be established and enforced.
In academic publishing, editorial prerogative, institutional oversight, and peer review have all traditionally played roles in maintaining standards of quality.


Discovery

To see how these principles might be realized in the digital environment, we turn now to the Discovery Project, a multi-national initiative co-financed by the European Commission under the aegis of the eContentplus programme and launched in November 2006 with a total budget of € 2,912,289.00.3 Discovery is developing a two-tiered infrastructure consisting of: 1) philosource, an extensive digital library of primary sources and peer-reviewed secondary material that is augmented with metadata for Semantic Web compatibility; and 2) philospace, a networked peer-to-peer desktop application that works in conjunction with philosource to process and manipulate Semantic Web data, permitting users to add metadata and build ontologies.4 While philospace is designed to exploit the great power of the Internet to support open, fluid, loosely structured, fast-moving intellectual exchange, philosource is directly concerned with assuring the necessary conditions for the possibility of scholarship.5
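The role that Semantic Web metadata plays in philosource can be pictured with a minimal subject-predicate-object store. Everything below is a simplified stand-in: the URI and the property names are invented for illustration, and a real system would use RDF tooling rather than this toy class.

```python
class TripleStore:
    """Minimal stand-in for an RDF-style metadata store: each published
    item is described by (subject, predicate, object) triples."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return sorted(
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        )

store = TripleStore()
item = "http://example.org/philosource/item/42"   # hypothetical URI
store.add(item, "dc:title", "The Wanderer and his Shadow")
store.add(item, "dc:creator", "Friedrich Nietzsche")
store.add(item, "ex:reviewedBy", "editorial-board")
titles = store.query(subject=item, predicate="dc:title")
```

Once every item carries such triples, a client like philospace can query, extend, and relate the metadata without touching the published sources themselves, which is the point of keeping description and content separate.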

Philosource

Philosource will be a digital library of primary sources and a publishing venue for scholarship, both new and previously published. While it is conceived as a continually expanding collection, it will originally be launched with a substantial kernel of material consisting of representative philosophical works, drawn from the entire history of western philosophy, that are not yet freely available online in reliable scholarly editions. It will be helpful first to sketch an overview of that initial critical mass of content before taking up the question of how philosource proposes to meet the conditions outlined above.6

a. Ancient Philosophy

Three authoritative reference works will form the core of the ancient material in philosource:
– a complete electronic edition of the fragments and testimonia of the pre-Socratic philosophers, based on Die Fragmente der Vorsokratiker edited by Diels and Kranz, in ancient Greek with German and Italian translations.
– a complete electronic edition of all testimonia related to Socrates and the so-called Minor Socratics, based on Giannantoni's Socratis et Socraticorum Reliquiae and including, in addition, the text of Aristophanes' Clouds and Xenophon's Socratic writings, all in ancient Greek.
– the complete text of Diogenes Laertius' Lives of the Philosophers, in ancient Greek with accompanying Italian translation.
This material is being prepared in Rome by the Istituto per il Lessico intellettuale europeo e Storia delle idee (ILIESI) of the Italian Consiglio Nazionale delle Ricerche (CNR).

b. Early Modern Philosophy and Science

This group of materials, also being prepared by ILIESI, contains a selection of philosophical and scientific texts in Latin, Italian and French, from the 16th to the 18th centuries. Authors represented, and some sample texts, include G. Bruno, De l'infinito, universo et mondi and Spaccio de la bestia trionfante; R. Descartes, Meditationes de prima philosophia and Passions de l'âme; B. Spinoza, Tractatus politicus and Ethica ordine geometrico demonstrata; G. W. Leibniz, De primae philosophiae principia (Monadologia) and Principes de la nature et de la grâce fondés en raison; G. B. Vico, Principi di una scienza nuova and De uno universi iuris principio; and A. G. Baumgarten, Meditationes philosophicae de nonnullis ad poema pertinentibus.

c. Post-Enlightenment Philosophy

NietzscheSource (or the HyperNietzsche Project7, at the Institut des Textes et Manuscrits Modernes, CNRS-ENS, Paris) has already digitized roughly 30,000 pages of primary material for the study of Friedrich Nietzsche, including manuscripts, correspondence, first editions of Nietzsche's published works, photos, and other biographical documents.8 Roughly 8000 pages of this material have already been published on the website, with the balance to come over the next two years. This facsimile edition is one of the three main components of the site's collection of primary sources. The second is an xml-encoded digital version of the Colli-Montinari critical edition of Nietzsche's published works, posthumous fragments, and correspondence. Finally, NietzscheSource will provide access to a complete genetic edition (including all the manuscripts, fair copies, proofs, and the first edition) in facsimile and scholarly transcription of The Wanderer and his Shadow and Daybreak. The former has already been published and the latter is currently being proofread for publication on the site. The Wittgenstein Archives at the University of Bergen (WAB) will contribute 5000 pages of the Wittgenstein Nachlass to philosource, including

material from the Big Typescript complex (1929-1934), the Brown Book complex (1934-1936), the “Lecture on Ethics,” and “Notes on Logic,” in both facsimile and critical transcription, including texts in German and English with Wittgenstein’s own translations of English texts into German and vice versa. These primary sources will be accompanied by a continually expanding multilingual resource of Wittgenstein research material, including text, audio, and video files from WAB’s collection “«Fragments»: Views into Wittgenstein Research.”9

d. Contemporary Philosophers

Some three hundred video and/or sound segments featuring contemporary philosophers addressing a range of philosophical topics will be made available by RAI Radiotelevisione Italiana (RAINET). These segments are drawn from several philosophy-related series in RAI’s vast archives. Included are thematically organized lectures and interviews on the history of philosophy and contemporary philosophical problems, drawn from RAI’s Enciclopedia Multimediale delle Scienze Filosofiche (Multimedia Encyclopaedia of the Philosophical Sciences); specially designed hour-long “video monographs” from the series The Roots of European Philosophy; half-hour segments presenting philosophical perspectives on social and political issues from the series Philosophy and Current Affairs; and material from other relevant series. Prominent philosophers featured in these programs include H.G. Gadamer, Gilles Deleuze, Paul Ricoeur, and Gianni Vattimo. In addition to being a multi-media digital library, philosource will be a publishing venue that maintains standards of quality at least as demanding as those in the best paper journals. Material submitted for publication will be subjected to double-blind peer review by an Editorial Board of established experts. Secondary work accepted for publication will be fully integrated with both the primary material available on the given website and with all the other secondary material on the site. So, for example, if an article cites either a primary or secondary source that is present on the site, links will automatically be established from article to source. Likewise, when viewing any source on the site, the user will have immediate access to all the other material that cites that particular source. This system can function at various levels of granularity. For example, while viewing Nietzsche’s Daybreak one could choose to see a list of all the articles that refer to the text as a whole, to a given section of the text, or to a particular aphorism.
The system could, theoretically, be refined to an even more narrow focus―homing in on a particular line, phrase, or word. Moreover, the affiliated websites that constitute philosource will be similarly integrated, meaning that an article published on the Nietzsche site that references the Diels-Kranz edition of the pre-Socratics would be automatically linked to the Diels-Kranz source, and so forth. Returning now to the question of the conditions for the possibility of scholarship, Discovery will be instituting two measures to ensure the stability of scholarly sources. The first is a principle to which Discovery is committed and one that is made clear to authors through the publication agreement: an item published in philosource cannot subsequently be modified or deleted by anyone, including the author.10 Second, every item published in philosource is given a permanent URI (Uniform Resource Identifier), which will make the item available via a simple, intuitive, and citable web address. For example, a complete facsimile edition of the notebook labeled N IV 1 in Nietzsche’s archive is available at the address . One can also specify page number and even individual notes on the page, so goes directly to the first note on page 5 of the notebook N IV 1. Nietzsche’s published works are accessed through a similar system: accesses the third aphorism of The Wanderer and his Shadow. Secondary sources, meanwhile, are identified by author, so for example the first contribution by Mazzino Montinari to HyperNietzsche bears the stable address . As has often been observed, much of the activity on the Web resembles a vast, ongoing conversation, more like spontaneous, ephemeral face-to-face communication―hence the ubiquity of the term “chat”―than like the authoring of books and articles. This aspect of the web is most manifest in what has come to be called the blogosphere. But scholarly conversation, whether in the hard or the human sciences, is a special kind of conversation.
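The addressing scheme described above (a work, then a page or section, then an individual note or aphorism) lends itself to simple hierarchical identifiers, and citation links can then be aggregated at any level of granularity by prefix matching. The sketch below only illustrates the principle; the path layout and the example base URL are invented, not the actual philosource addresses.

```python
def make_citation_uri(base, work, *locators):
    """Compose a stable, hierarchical identifier: base/work/locator/..."""
    return "/".join([base.rstrip("/"), work, *map(str, locators)])

def citing_articles(target_uri, citation_index):
    """All articles citing target_uri at any granularity: an article that
    cites a note *within* a work also counts as citing the work."""
    return sorted({
        article
        for article, cited in citation_index
        if cited == target_uri or cited.startswith(target_uri + "/")
    })

BASE = "https://example.org/source"  # hypothetical base address
notebook = make_citation_uri(BASE, "N-IV-1")
note = make_citation_uri(BASE, "N-IV-1", 5, 1)  # page 5, first note

# Toy citation index: (article id, URI it cites).
index = [
    ("montinari-1", note),
    ("essay-2", make_citation_uri(BASE, "N-IV-1", 7)),
    ("essay-3", make_citation_uri(BASE, "WS", 3)),
]
```

Querying the notebook as a whole then returns both articles that cite anything inside it, while querying the individual note returns only the one article that cites that exact passage.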
At its core lie the making, critique, and defense of claims, or in other words, demonstration followed by refutation or confirmation. And this in turn requires stable sources. While the value of such stability has long been recognized in the world of paper publishing, it has heretofore been largely neglected in the hectic, future-obsessed digital realm.11 If the notion of ensuring such stability on the Web were to take hold, it may mean that the next "revolution" in digital culture will take the form of a partial "restoration," a renewal of time-honoured customs in a world altogether new.

Our second principle, accessibility, is optimized by philosource. Let us take Nietzsche studies as an example. In this field, Nietzsche's published texts and Nachlass are customarily regarded as primary sources. Various critical editions of his work have been published over the years, but all of them, even the celebrated Colli-Montinari edition, remain editions. They are not without weaknesses, and they always act, to some extent, as a screen or filter between the scholar and the original source. This is especially so in the case of the Nachlass, for which the real primary sources are the manuscripts stored in the Nietzsche archives. The Colli-Montinari edition makes much of this material available in a convenient form, but at the cost of serious distortion.12 By making digital reproductions of those manuscripts available online for free, philosource brings scholars much closer to the original sources and permits competing claims to be adjudicated on that more solid basis. Moreover, the increased accessibility of secondary sources leads to an overall gain for the field, as arguments are subjected to increased scrutiny and are therefore more likely to be refuted or confirmed by other scholars.

What we have called the durability of scholarly resources is a concept that overlaps somewhat with both stability and accessibility, but it places more emphasis on the bare existential issue: whether scholarly material will be preserved for the future. Discovery responds to this need in two ways. First, the project is committed to developing the institutional structures necessary to ensure free access in perpetuity to the material it publishes. While not a guarantee of success, prioritizing long-range planning from the start at least maximizes the chances that solutions will be found. Second, Discovery subscribes to a principle clearly articulated and defended by the project whose acronym communicates the key idea: LOCKSS, or Lots of Copies Keep Stuff Safe.13 Whatever commercial advantages the various regimes of "digital rights management" may or may not have, from a preservationist's perspective they are troubling. As students of antiquity know, the more copies that are made of a text, the more likely it is to survive.
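The principle that more copies improve a text's odds of survival can be illustrated with a minimal replica audit. This is only a sketch of the general idea, assuming SHA-256 digests and a simple majority rule; the real LOCKSS system relies on a far more elaborate peer-polling protocol, and the holder names below are invented.

```python
# Minimal sketch of the "lots of copies" idea: compare content digests
# across replicas and flag any copy that disagrees with the majority.
# Illustrative only; not the actual LOCKSS polling protocol.
import hashlib
from collections import Counter

def audit_replicas(replicas):
    """Given {holder: content_bytes}, return (majority_digest, deviant_holders)."""
    digests = {holder: hashlib.sha256(content).hexdigest()
               for holder, content in replicas.items()}
    majority, _count = Counter(digests.values()).most_common(1)[0]
    deviants = sorted(h for h, d in digests.items() if d != majority)
    return majority, deviants

# Three libraries hold the same text; one copy has silently decayed.
text = b"Also sprach Zarathustra ..."
replicas = {"library-a": text, "library-b": text, "library-c": b"corrupted"}
majority, deviants = audit_replicas(replicas)
```

The more independent holders participate, the more robust the majority vote, which is precisely why wide replication, rather than restrictive rights management, serves preservation.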
This insight lies behind the decision to use Copyleft licenses in philosource and thereby to encourage the wide distribution and reduplication of scholarly material. Philosource, like many other recent initiatives, thus exploits the power of the Internet as a tool for the dissemination of information in order to ensure both the accessibility and the durability of research resources. Often, however, electronic dissemination is pursued energetically without sufficient attention being paid to questions of stability and quality. Discovery aims to see whether a research paradigm can be developed for the Internet in which all of these conditions are met and are in fact mutually reinforcing.

To ensure standards of quality, Discovery relies on double-blind peer review, which is, in terms of objectivity, the most rigorous form of quality control in academic publishing. Each affiliated site is responsible for establishing an Editorial Board to oversee the review and publishing procedures on that site. In contrast to some traditional publishers, the Editorial Board of NietzscheSource, for example, will not be appointed by the publisher, nor, at the other extreme, will it consist simply of all users of the site, as in the case of organizations run by direct democracy. Rather, the community of users of the site votes to elect a representative Editorial Board drawn from the members of the community, and the Board is then responsible for all editorial decisions.

While the program adopted by Discovery is expressly designed to provide for the conditions indicated, it must be acknowledged that even for a project so designed there are significant challenges to maintaining those conditions in the digital environment over the long term. We will consider some of those challenges at the close of this essay.
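The stable, citable addresses described earlier follow a hierarchical pattern (notebook, page, note, or work and aphorism) that can be sketched in miniature. The path format and function names below are purely hypothetical, since the actual philosource address scheme is not reproduced here; the sketch only shows how such a simple, human-readable, hierarchical citation identifier can be built and parsed.

```python
# Hypothetical sketch of a hierarchical, citable address scheme of the
# kind described above (work -> page -> note). The path pattern is
# invented for illustration; it is NOT the real philosource scheme.

def citation_path(work, page=None, note=None):
    """Build a stable, human-readable citation path for a source item."""
    parts = [work]
    if page is not None:
        parts.append(f"page-{page}")
        if note is not None:
            parts.append(f"note-{note}")
    return "/".join(parts)

def parse_citation(path):
    """Recover the (work, page, note) components from a citation path."""
    parts = path.split("/")
    work = parts[0]
    page = int(parts[1].split("-", 1)[1]) if len(parts) > 1 else None
    note = int(parts[2].split("-", 1)[1]) if len(parts) > 2 else None
    return work, page, note
```

Because such a path is deterministic and never reassigned, a citation in a printed footnote and a hyperlink in a digital article can both resolve to exactly the same passage, which is the stability requirement discussed above.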

Philospace

We now turn to the other aspect of Discovery, which aims to harness the power of emerging Semantic Web technology for the purposes of research in philosophy. The idea here is to develop a desktop application, called philospace, that functions in conjunction with philosource but that also exploits the potential of collaborative, open source scholarship through the use of peer-to-peer networking and semantically structured metadata. If philosource is a kind of library, philospace is like a personal workspace that, through networking, becomes a public forum for scholarly exchange.

One way to think about philospace is to compare it to how scholars have traditionally worked. Imagine a scholar in 1960 working on the concept of eternal recurrence in Nietzsche. She may own a copy of The Gay Science and have filled the margins of §341 with handwritten notes, which might well include citations of related passages in the same book, in other works by Nietzsche, or in works by other authors. This scholar would also likely have written down thoughts in notebooks, including citations to other sources. The idea behind philospace is to permit scholars to collaborate at this stage of research, before their work is ready for publication. It is as if our scholar working on eternal recurrence were to make her copy of The Gay Science, together with all her annotations, citations, and notes, available to others interested in the same topic and were, in turn, to have access to similar material that her colleagues make available, all in a highly structured, easily navigable environment. With philospace, scholars will be able to augment and enrich the content of the philosource material with new, un-reviewed contributions without compromising the integrity of that which has been officially published. Sophisticated filtering mechanisms will be designed to permit scholars to select what they wish to make public and to sift through the mass of material from other scholars quickly and easily.

Philospace will be a customized version of an existing application called DBin.14 In technical terms, DBin is a peer-to-peer desktop application, similar to existing file-sharing applications, but one that traffics in Semantic Web metadata to permit browsing, searching, and the editing of annotations. Philospace will rely on a "Brainlet", a domain-specific application, to tailor the program to the specific needs of each research community. DBin permits the gradual, sustainable development of a local database that supports semantically driven browsing and intelligent interaction with local media and files. DBin also accommodates a number of experimental modules to deal with specific kinds of metadata (audio metadata extraction, textual analysis, desktop integration) and provides a domain-specific user interface. Importantly, DBin also includes an RDF subgraph digital signature facility that uses personalized trust policies to filter out unwanted information.
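The trust-based filtering just mentioned can be sketched as follows. The data model is deliberately simplified: annotations are plain (subject, predicate, object, author) tuples and the trust policy is a dictionary of author ratings, both invented for illustration rather than taken from DBin's actual RDF and signature machinery.

```python
# Illustrative sketch of trust-based filtering of annotation metadata,
# in the spirit of DBin's personalized trust policies. Triples-with-author
# are plain tuples; the policy and all names are invented for illustration.

def filter_annotations(annotations, trust, threshold=1):
    """Keep annotations whose author meets the local trust threshold.

    annotations: list of (subject, predicate, object, author) tuples
    trust: {author: trust_level}; unknown authors default to 0
    """
    return [a for a in annotations if trust.get(a[3], 0) >= threshold]

# One scholar's local trust policy: alice is trusted, mallory is unknown.
annotations = [
    ("GS-341", "comments-on", "eternal recurrence and amor fati", "alice"),
    ("GS-341", "comments-on", "irrelevant spam", "mallory"),
]
visible = filter_annotations(annotations, trust={"alice": 2})
```

The point of making the policy personal is that each scholar decides locally whose unreviewed contributions to see, without anything being deleted from the shared network.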

Future Challenges

In closing, we would like to call attention to three interrelated and significant problems that will be confronted by the Discovery project, and by any similar initiative, in the foreseeable future. The first is the difficulty of coordination and execution. To harmonize such a large and diverse set of actors and material is exceedingly difficult in the decentralized world of academic scholarship, and that difficulty is amplified by the instability of rapidly evolving technology. In this regard, there is reason to be optimistic about Discovery's prospects. The partners comprising the consortium have a proven record of accomplishment, the objectives are clear, and thus far at least, collaboration has proceeded smoothly. But still, the road from potential to actual is full of pitfalls.

Even if the project achieves its major objectives, it is not certain that there will be financial support available to preserve those achievements beyond the initial founding stage, which is financed through a limited-duration government grant. Moreover, as noted in the discussion of stability and durability, it is essential to the success of Discovery that the material in philosource, at least, be sustained in perpetuity. Here again, there is reason for optimism. Most of the partners that comprise Discovery are firmly established, well-financed public institutions with substantial physical, not just "virtual", presence and assets. These institutions are rooted in their respective societies and have stable sources of funding. And it should also be noted that the cost of simply maintaining, as opposed to augmenting, a project like philosource is relatively minuscule: at a bare minimum, one large server and one staff person could keep it all going.

Perhaps the most profound concern, however, is the threat of obsolescence. From a technological perspective, it is not enough simply to maintain such a resource; it must be consistently upgraded to sustain a functioning and productive relationship with other information systems. From a theoretical perspective, meanwhile, the danger of obsolescence is no less serious: the way we think about the organization of information is constantly affected by the tools at our disposal, and those tools are changing at a rapid rate. In this, however, we are simply confronting the radical uncertainty of the future, which to one degree or another haunts all human endeavour and about which one can only wonder. If we remain optimistic at this point, it is with the belief that Aristotle was right when he said that philosophy begins in wonder.

References

Bartscherer, T. (2003). Ecce HyperNietzsche: It's Not Just the Philology of the Future Anymore. NMEDIAC: The Journal of New Media and Culture 2 (2). http://www.ibiblio.org/nmediac/fall2003/ecce.htm (accessed 10 September 2007).

Bartscherer, T. (2007). La naissance d'Hyper enfanté par l'esprit de la critique génétique. In Gifford, P. & Schmid, M. (Eds.) La création en acte: devenir de la critique génétique. Amsterdam: Rodopi.

D'Iorio, P. (Ed.) (2000a). HyperNietzsche. Modèle d'un hypertexte savant sur Internet pour la recherche en sciences humaines. Questions philosophiques, problèmes juridiques, outils informatiques. Paris: Presses Universitaires de France.

D'Iorio, P. (Ed.) (2000b). HyperNietzsche. Modèle d'un hypertexte savant sur Internet pour la recherche en sciences humaines. Questions philosophiques, problèmes juridiques, outils informatiques. Paris: Presses Universitaires de France. http://www.hypernietzsche.org/doc/puf/ (accessed 10 September 2007).

D'Iorio, P. (2002). Principles of HyperNietzsche. Diogenes, 49, 58-72.

D'Iorio, P. (2007). Nietzsche on New Paths: The HyperNietzsche Project and Open Scholarship on the Web. In Fornari, M. C. (Ed.) Friedrich Nietzsche. Edizioni e interpretazioni. Pisa: Edizioni ETS.

Gerike, I. (2000). Les manuscrits et les chemins génétiques du Voyageur et son ombre. In D'Iorio, P. (Ed.) HyperNietzsche. Modèle d'un hypertexte savant sur Internet pour la recherche en sciences humaines. Questions philosophiques, problèmes juridiques, outils informatiques. Paris: PUF.

Groddeck, W. (1991). 'Vorstufe' und 'Fragment'. Zur Problematik einer traditionellen Unterscheidung in der Nietzsche-Philologie. In Stern, M. (Ed.) Textkonstitution bei mündlicher und bei schriftlicher Überlieferung. Tübingen: Niemeyer.

Ho, J. (2005). Hyperlink obsolescence in scholarly online journals. Journal of Computer-Mediated Communication. http://jcmc.indiana.edu/vol10/issue3/ho.html (accessed 2 September 2007).

Endnotes

1 It is worth noting that the common practice of citing the date of access when using electronic sources is of little help if the source subsequently disappears. For a study of broken link syndrome, see (Ho, 2005).
2 See note 3.
3 For more information about Discovery, see the project's website: . For more information about the eContentplus programme, see . The present authors are both currently affiliated with the project, of which D'Iorio is the director.
4 For a basic introduction to the Semantic Web, metadata, and digital ontologies, see the site of the World Wide Web Consortium (W3C): , , and .
5 It is worth emphasizing that Discovery is an integrated project, bringing together both humanities scholars and IT specialists, stakeholders from both the academy and industry. Partners include: the Institut des Textes et Manuscrits Modernes of the Centre National de la Recherche Scientifique, Paris; Lessico Intellettuale Europeo e Storia delle Idee (ILIESI), Rome; the Wittgenstein Archives at the University of Bergen (WAB), Bergen; the Department of Electronics, Artificial Intelligence and Telecommunications at the Polytechnic University of Marche (DEIT), Ancona; the Italian public broadcaster Rai Radiotelevisione Italiana (RAINET), Rome; and the Pisa-based IT company Internet Open Solutions (Net7). The project will also hold workshops and training sessions designed to encourage support from academia, private industry, and governmental bodies to sustain the project after the initial period of EU funding.
6 The following overview is adapted from the Discovery website: .
7 NietzscheSource is the successor to the HyperNietzsche Project. Material originally published at www.hypernietzsche.org will still be available through the original address or at www.nietzschesource.org.
8 For more information about the project, see (Bartscherer, 2003; Bartscherer, 2007; D'Iorio, 2000a; D'Iorio, 2000b; D'Iorio, 2002; D'Iorio, 2007).
9 See .
10 Corrections and additions of any kind can of course be made after the original publication, but such addenda will be clearly marked as subsequent additions and the date of entry will be noted. The original text will always remain accessible in its original form.
11 There are exceptions, prominent among them the Netpreserve consortium: .
12 For a discussion of some weaknesses in the Colli-Montinari edition, see and (Groddeck, 1991; Gerike, 2000).
13 See .
14 DBin is being developed at the Semantic Web and Multimedia Group (SEMEDIA) at the Università Politecnica delle Marche, Ancona, Italy. See .


Some thoughts on the importance of open source and open access for emerging digital scholarship

Stefan Gradmann, Berlin

1 Introduction

Both the open source and the open access movements have their roots in the 'hard' sciences rather than in the Social Sciences and Humanities (SSH). They have traditionally been concerned with open access to source code for computational data processing and with open access to scientific information published as journal articles. Still, the basic assumption of the present contribution is that there is a specific open source and open access agenda within the SSH and that this may affect these disciplines, once such an agenda is fully in place, in a way hardly conceivable in the 'hard' sciences. However, understanding the full impact and potential of such approaches in the SSH requires reflection upon broader methodological issues. Two vectors or primary oppositions are of specific interest in this respect:

− the scholarly information continuum as a whole, its evolution from print-based to electronic working paradigms, and the revolutionary changes that can be foreseen as a consequence;
− the specific difference of the SSH as opposed to the Science-Technology-Medicine (STM) culture in relating signifiers to significates, and the specific impact of the digital revolution resulting from this specific difference.

Exploring these two vectors, this contribution will try to indicate the constituent elements of an 'open' agenda for the digital humanities.


2 Evolution of the scholarly information continuum from print to XML

As W. McCarty has put it, "Academic publishing is one part of a system of highly interdependent components. Change one component [...] and system-wide effects follow. Hence if we want to be practical we have to consider how to deal with the whole system."1 Thus, in order to understand the coming paradigm shifts it is useful first to consider the evolution of the print-based scholarly information continuum, which has been stable and basically unchanged for centuries. This continuum can be conceived as a circular workflow centered around basically monolithic and static printed information objects, as sketched in Figure 1 below.

Figure 1: The traditional scholarly information continuum (a circular workflow around the printed document: author = write; review = read + write; publish = print; manage (library) = write (create meta-data, describe, classify); apprehend = read; quote = write; annotate = write)

In this traditional view of the scholarly information chain, typical stages such as 'authoring', 'reviewing', 'publishing', 'managing', 'apprehension', 'quotation' and 'annotation' of scholarly information objects were implemented using very few and very stable cultural techniques (basically reading and writing). Furthermore, these stages were organized in linear, circular workflows with no or at most marginal modifications in sequence, and centered around well understood, monolithic entities (documents). With the advent of digital media and working instruments this functional sequence remained practically unchanged in a first phase, during which the individual steps were simply electrified, using digital means to emulate what had been done using traditional cultural techniques before, as indicated in Figure 2 below.

Figure 2: The traditional continuum in emulation mode (the same circular workflow, now centered on PDF or another print-analogue format: author = write (Office/LaTeX); review = read + write (Office/LaTeX); publish = print (PDF); manage (library) = automated library functions (meta-data creation/'write'); apprehend = read; quote = write (Office/LaTeX); annotate = write (Office/LaTeX))

This scholarly value chain in emulation mode is somewhat similar to incunabula in the early print age: just as the latter preserved major characteristics of medieval folios, the former kept (and partially still keeps) typical characteristics of the traditional value chain. Not only is the circular sequence preserved, but its individual stages also remain functionally unchanged, and the use of well known cultural techniques remains constitutive. The same is true for the information object at the center of the circle, which uses print-analogue formats such as PDF to emulate basic characteristics of the 'bookish' information support. The first real qualitative change within this functional continuum happens with its transition to a third phase, which is illustrated in Figure 3 below together with some of the questions related to this process. In this third phase, individual stages in the still basically unchanged linear function paradigm are now remodeled digitally and thus undergo substantial changes. Transition to this phase is currently under way and is more or less advanced depending on the different scientific cultures.

Figure 3: The scholarly information continuum going digital (the workflow now centers on an XML/XSLT object: author = generate XML/XSLT; review = e-annotation (public?); publish = stabilise, version, add identifier; manage (library) = automated library functions (meta-data creation/'write'); apprehend = 'read'?; reference = identify, 'point to', reference microstructures, 'quote'?; annotate = e-annotate (inline? linked to the 'document'? how?))

Authoring of scholarly documents, for instance, turns into generating content using some XML syntax, with appropriate presentation modes produced using XSLT or similar processing techniques. The reviewing stage turns into a more or less public and open procedure of digital annotation. 'Publishing' in this context may be the equivalent of stabilizing document content and applying version information and a unique identifier. 'Quotation', instead of replicating parts of external documents, more and more turns into identifying external information objects and referencing their internal microstructure. It remains unclear to what extent the term 'reading' can still be applied to the related acts of apprehension. And it becomes more and more evident that the 'library' metaphor is increasingly inappropriate for the fundamentally changed management methods for digital information objects. Yet even if the formative power of traditional cultural techniques rapidly decreases within the individual stages as part of the transition from analogue to digital representation modes, other basic characteristics of the traditional continuum remain unchanged at this stage: the scholarly value chain remains linear-circular and is focused on a seemingly still well understood monolithic information object, the 'document'. However, these two remaining characteristics may in turn be subject to de-construction in a next phase that is already casting its shadow and which is likely to influence the continuum as indicated in Figure 4 below.

Figure 4: A de-construction of the scholarly information continuum (the stages author, review, publish, manage, apprehend, reference and annotate now relate to each other in arbitrary networked order around a de-constructed 'Doc')
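The digitally remodeled 'publishing' stage described above (stabilise, version, add identifier) can be illustrated with a minimal sketch: an authored XML fragment is frozen by stamping it with a version and a stable identifier. All element and attribute names here are invented for illustration; no particular schema such as TEI is implied.

```python
# Sketch of 'publish = stabilise, version, add identifier':
# freeze an authored XML document by stamping identifier + version.
# Element and attribute names are invented for illustration.
import xml.etree.ElementTree as ET

def publish(xml_text, identifier, version):
    """Stabilise an authored XML document: stamp identifier and version."""
    root = ET.fromstring(xml_text)
    root.set("id", identifier)       # stable identifier for citation
    root.set("version", version)     # frozen version of the content
    return ET.tostring(root, encoding="unicode")

draft = "<article><p>Eternal recurrence in GS 341.</p></article>"
published = publish(draft, "doc-0001", "1.0")
```

Once stamped, the content behind a given (identifier, version) pair is never changed; corrections would yield a new version rather than silently overwriting the old one, which is what makes the published object citable.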

Two tendencies can already be outlined regarding this future phase: the stages that used to be organized in a sequential-circular mode will increasingly relate to each other in almost any networked order, and the central information object, the 'document', loses its monolithic character, itself becoming a networked cluster of information entities with increasingly dynamic and diffuse borders. We will thus be facing a triple paradigm shift, one which has specific consequences with respect to the different scholarly and scientific cultures. If one accepts, at least as a working hypothesis, the distinction established by C. P. Snow in his Rede lecture on "The Two Cultures" and considers the respective consequences of the triple paradigm shift for the sciences (henceforth STM) and the humanities (SSH), striking differences are almost evident. In such a perspective, the erosion of the linear/circular function paradigm only marginally affects the way 'publication' is conceived in the SSH, because of the prevalent 'monolithic' publication practice in this culture:

− journal articles and related peer reviewing procedures are still rather marginal;
− authors in the SSH still tend to work in 'splendid isolation', collaborative authoring still being an exception (such as the present contribution!).

The declining formative power of traditional cultural techniques certainly affects the SSH (and probably much more than the sciences), but this does not specifically affect the publishing function. However, the de-construction of the 'document' notion in digital, networked settings vitally affects the SSH in a very specific way. This process fundamentally changes the conditions of production and publication as well as the conditions of apprehension and reuse of scholarly documents. The consequences touch the very core of scholarly work, which in both of its main strands is fundamentally concerned with documents both as objects and as instruments of scholarly activity. As shown in Figure 5 below, both the 'aggregation' strand (arrows pointing down) and the 'modeling' strand have their point of origin in digital corpora (and thus most of the time in document clusters) and produce new documents in turn!

Figure 5: The two strands of scholarly work based on document corpora. Recoverable labels: scholarly publication; tools for communication & coordination (groupware, collaboratories); digital heuristics: modeling and visualisation tools for generating hypotheses; transformation & statistics, automated, unambiguous (tools for statistical analysis, transforming tools such as automated translation, parsers); corpus-based modeling; formal declarative, interactive, tolerating ambiguity (tools for semantic analysis, syntax for meta-languages such as XML, meta-languages such as TEI); digital corpora/sources; referential structures (grammar type); referential data (lexicon type); aggregation tools (converter type); newly aggregated primary data (e-Editions, etc.).

And this observation leads organically to a closer investigation of the specific relation between the SSH (especially the hermeneutically rooted disciplines) and the constituent representation modes of documents as complex signs.


3 The pandora box of semiotics …

When considering this issue in more detail, it becomes clear that signification and document modeling in all discussion related to electronic publishing have, up to now, basically been coined on the information model prevailing in the empirical sciences. In this model, scientific research as the core activity is completely dissociated from the publication process. Only once 'research' has yielded 'results' are these in turn 'packaged' in discourse and published (typically as a journal article): in this extremely robust and not very complex 'container' model of scientific publishing it is perfectly sufficient to remain on the 'emulation level' outlined above, since the publishing stage is not at the core of scientific work anyway. However, scholarly publishing in the SSH takes place in a substantially different information model: scholarly research and discursive 'packaging' cannot be separated in this perspective, and the published results of the core scholarly activity are again documents. This accordingly results in complex document models and publishing formats heavily intertwined with core research operations. In such a view, the 'container' models used in 'hard science' publishing are over-reductionist and inappropriate, and complex relations between signifiers and significates are constitutive.

Clearly, behind the different information models underlying the respective publication cultures of the STM and the SSH another, even more fundamental semiological difference is hidden. In fact, dominant discourse in electronic STM publishing communities (mostly emanating from computer science) uses terms such as 'document', 'sign' or 'name' quite naively and without referring to their inherent semiological complexity. This results in a (technically) high-level nominalist regression: the 'Pointer → Object' model, in which 'words' are supposed to point to 'real' things, as in Figure 6 below.

Figure 6: Words pointing to 'things' (Words → Things)

The perfect incarnation of such thinking are the 'ontologies' of the Semantic Web!2 As opposed to this very simple mode of conceiving the relation between words and things, it is useful to consider the linguistic model of signification that developed in the 20th century, starting from de Saussure's theory of the sign and considerably refined by Hjelmslev, Eco and others,3 as indicated in the (much simplified) Figure 7 below.

Figure 7: A simplified model of the semiological space. Recoverable labels: FORM: syntactic system (types), semantic system (types); signifiant: produced units (tokens); signifié: interpreted units (tokens); SUBSTANCE: sounds (acoustics, phonetics), 'things'.

Signifiers and significates cannot be dissociated in this vision, as it is impossible to consider form and substance of the constituents independently: produced and interpreted individual units always have to be seen as part of their respective systemic context. And both sounds and real 'things' are not part of the representational space in such a view. Such thinking was once described by a senior computer scientist as "opening the pandora box of semiotics", but the fact is that exactly such thinking is required to understand the way the SSH relate to documents, which in turn must be conceived as complex significant units that are themselves part of a system made up of such units (vulgo 'literature'). It then becomes clear that (electronic) text is not just a transcription of speech acts (parole); at the same time it must be noted that the notion of 'text' basically remains a blank spot in linguistics and is still subject to fundamental research as a complex, semiological digital object. In such an approach the model used above might tentatively translate to electronic documents as in Figure 8 below.

Figure 8: A tentative representational model for electronic documents (Doc: signifiant = XML + schema; signifié = OWL?; underlying substance: bits & bytes and 'content')


4 … and a way to re-think the 'document' notion

The heart of the issue thus seems to be a better understanding of the metamorphosis of the 'document' notion in the digital context. A very competent attempt in this direction has been made by the French research group RTP-DOC (CNRS), which has used the pseudonym Roger T. Pédauque to publish fundamental work relating to the de-construction of the 'document' notion currently under way in the digital, networked context.4 RTP-DOC presents the evolution of the 'document' notion in the passage from printed to digital documents along three paradigms:

• Form (vu = 'look at', morphosyntax): the document as a material or non-material structured object; the corresponding chapter is Forme, signe et médium, les re-formulations du numérique;
• Sign (lu = 'read', semantics): the document as a meaningful instance, thus both intentional and part of a sign system; the corresponding chapter is Le texte en jeu: permanence et transformations du document;
• Medium (su = 'knowledge, interpretation, apprehension', pragmatics): the document as a vector of communication, part of a social reality with its constituting temporal and spatial processes of mediation; the corresponding chapter is Document et modernités.

In each of the three conception paradigms one of the aspects is used as a dominant, yet non-exclusive vector for developing equations that distinguish traditional, electronic and future web-based document notions, with each of these equation triples resulting in a definition of the respective nature of the 'electronic document'. Thus, the 'form' vector, in which object nature is constitutive, can be summed up in these three equations:

1. Traditional document = medium + inscription
2. Electronic document = structures + data
3. XML document = structured data + stylesheet

And these in turn result in a first definition: "An electronic document is a data set organized in a stable structure associated with formatting rules to allow it to be read both by its designer and its readers".

Likewise, the 'sign' vector, focused on the meaningful nature of documents, yields the following three equations:

1. Traditional document = inscription + meaning
2. Electronic document = informed text + knowledge
3. Semantic Web document = informed text + ontologies

And the resulting definition reads: "An electronic document is a text whose elements can potentially be analyzed by a knowledge system in view of its exploitation by a competent reader".

Finally, the 'medium' vector, organized around the 'document' as a social phenomenon, has these three equations:

1. Traditional document = inscription + legitimacy
2. Electronic document = text + procedure
3. Web document = publication + measured usage/access

With the following definition associated: "An electronic document is a trace of social relations reconstructed by computer systems."

Without going into the rich discussions within RTP-DOC in more detail, it should be evident that the conceptual framework proposed by this group could serve as an excellent foundation for re-building consensus regarding the 'document' notion and for a better understanding of the nature of digital, networked document resources. Such an understanding is in turn required in order to better understand the specific impact of digital publication techniques in the SSH, as the 'document' notion is at the semiological heart of hermeneutically based scholarly work.
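RTP-DOC's 'form' equation for the XML document (structured data + stylesheet) can be made concrete with a small sketch in which a tiny rule table stands in for a real XSLT stylesheet. The element names and rendering rules are invented for illustration only.

```python
# Sketch of the 'form' equation: XML document = structured data + stylesheet.
# A minimal rule table stands in for an XSLT stylesheet; the same structured
# data yields a reading view only when combined with formatting rules.
import xml.etree.ElementTree as ET

RULES = {"title": "<h1>{}</h1>", "p": "<p>{}</p>"}  # the 'stylesheet'

def render(xml_text, rules):
    """Combine structured data with formatting rules to produce a reading view."""
    root = ET.fromstring(xml_text)
    return "".join(rules[child.tag].format(child.text) for child in root)

doc = "<doc><title>Le texte en jeu</title><p>Permanence et transformations.</p></doc>"
html = render(doc, RULES)
```

The point of the equation is visible here: the XML carries only structure and content, and it becomes readable "both by its designer and its readers" only through the associated formatting rules.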


6 Our Cultural Commonwealth: the need for a triple ‘open’ agenda in the SSH

All the observations made in the preceding chapters converge in what one could call a triple ‘open’ agenda for the Social Sciences and the Humanities, as it is already partly expressed in the report on cyberinfrastructure for the social sciences and humanities prepared for the American Council of Learned Societies Commission under the title “Our Cultural Commonwealth”5.

6.1 Open and standardized document models

6.2 Open sources …

6.3 … and open source processing instruments

First of all, and as should be clear from the above, the effectiveness of the digital paradigm shift in the humanities vitally depends on open and non-proprietary techniques for document modelling and authoring. This is even more evident if one considers not just isolated documents, but webs of interrelated documents pointing and referring to each other. And it becomes particularly striking if one considers the need to maintain coherent webs of documents over time for decades or even centuries. In such a perspective, introducing document protection technology such as the DRM used in the audiovisual industry would create ridiculous and nightmarish functional scenarios!

Second, for innovative processing of digital sources to work at all, a very specific understanding of the term ‘open source’ needs to be consistently and systematically applied: such scholarly work requires the free availability of all source material! Hence the primary characteristic of cyberinfrastructure according to the ACLS report’s statement on source material: “It will be accessible as a public good”.

Finally, the heuristics used for novel corpus modelling and aggregation work, as well as their technical implementations and foundations, need to be open source in the more traditional sense of the term, as well as based on open standards, in order to enable future digital hermeneutical heuristics. This is emphasised in recommendation 7 of the ACLS report, which states: “Develop and maintain open standards and robust tools”.

The author of this contribution is convinced that at least substantial progress in all three areas of this triple agenda of openness is required for genuine digital scholarship to happen at all!

Endnotes

1 http://lists.village.virginia.edu/lists_archive/Humanist/v17/0336.html
2 The following paper gives a very valuable discussion of the profound inappropriateness of positivist ontology based approaches in the SSH: Benel, Aurélien et al.: Truth in the Digital Library: From Ontological to Hermeneutical Systems. Proceedings of the Fifth European Conference on Research and Advanced Technology for Digital Libraries (Lecture Notes in Computer Science, 2163). Heidelberg 2001, pp. 366-377.
3 Probably the best introductions to this semiological approach are still Eco, Umberto: La struttura assente. Milano 1968 and Eco, Umberto: A Theory of Semiotics. Bloomington 1976.
4 Two publications are of interest here: Pédauque, Roger T.: Le document à la lumière du numérique. Toulouse 2006 and Pédauque, Roger T.: La redocumentarisation du monde. Toulouse 2007, as well as the web presence of the group at http://rtp-doc.enssib.fr.
5 http://www.acls.org/cyberinfrastructure/

The Necessary Multiplicity1

Cameron McEwen

One day, walking in the Zoological Gardens, we2 admired the immense variety of flowers, shrubs, trees, and the similar multiplicity of birds, reptiles, animals. Wittgenstein: I have always thought that Darwin was wrong: His theory doesn’t account for all this variety of species. It hasn’t the necessary multiplicity. (Rhees, 160)

1. Natural inclination to plurality

In ‘The Digital Wittgenstein’3, the guess was made that the Tractatus may be read as a meditation on digitality. ‘Digitality’ names what the digital is as differentiated from the analog. Here the further guess will be essayed that also the later Wittgenstein may be seen as deeply—that is, fundamentally—concerned with the entailments and manifestations of that logical digitality set out in the Tractatus. But, as is fitting, with a fundamental difference. ‘Digitality’ leads inherently beyond the sort of crystalline definition practiced in the Tractatus to the messier sort of multiform observations and remarks comprising the later work. Wittgenstein himself may simply have followed this inherent inclination, or exfoliation, which lies within the digital itself, that “natürliche Neigung (…) nach allen Richtungen”4 which he notes in his prefatory remarks to the Philosophische Untersuchungen (= PU). Compare Hegel on the transition from the philosophy of logic to the philosophy of nature: “Die absolute Freiheit der Idee aber ist, daß sie (…) sich entschließt, das Moment ihrer Besonderheit (…) sich als Natur frei aus sich zu entlassen.” (Hegel, 1840, #244)5 The point at stake may be approached in the question: if a fundamental concern for the digital characterizes both Wittgenstein’s early and late periods, how is it that it came to be expressed in such different ways of doing philosophy? Perhaps it was this sort of question he intended to raise when he wrote (again in his prefatory remarks to the PU): Vor vier Jahren aber hatte ich Veranlassung, mein erstes Buch (die “Logisch-Philosophische Abhandlung”) wieder zu lesen und seine Gedanken zu erklären. Da schien es mir plötzlich, daß ich jene alten Gedanken und die neuen zusammen veröffentlichen sollte: daß diese nur durch den Gegensatz und auf dem Hintergrund meiner ältern Denkweise ihre rechte Beleuchtung erhalten könnten.6

It is the aim of this paper to argue that ‘digitality’ may be the key to what Wittgenstein had in mind with this suggestion and therefore also to the relation between his early and later philosophy. Perhaps this might also throw light on the contested questions concerning his method or methods after 1929. Indeed, since it is the distinctive mark of the digital to display identity via difference, it may be that the only way to preserve a continuity of the early and late Wittgenstein and at the same time to respect the deep differences between the two, is to take up consideration of them within a digital perspective. In this way, the mode of analysis employed in looking at Wittgenstein would correspond to that deployed (as I will argue) by him. Both would be digital. And perhaps it is just such correspondence (the nature of which remains to be specified) at work in the approach to Wittgenstein that is requisite for entering into his philosophical space: Willst Du nicht, daß gewisse Menschen in ein Zimmer gehen, so hänge ein Schloß vor, wozu sie keinen Schlüssel haben. Aber es ist sinnlos, darüber mit ihnen zu reden, außer Du willst doch, daß sie das Zimmer von außen bewundern! Anständigerweise, hänge ein Schloß vor die Türe, das nur denen auffällt die es öffnen können, und den andern nicht. (VB, 7-8)7

2. Mind the Gap

In his highly interesting ‘Postscript’ to Recollections of Wittgenstein, Rush Rhees contrasts Wittgenstein’s way of thinking with that of Otto Weininger: ‘brute’ differences (…) are not important for Weininger. (…) Weininger sees in this only a difference between ‘more complete’ and ‘less complete’, ‘more serious (greater intensity of consciousness)’ and ‘less serious’, ‘more creative of one’s life’ and ‘less creative’. (…) Weininger says often that in a woman there are not even the possibilities of a higher nature. But he has stacked the cards at the outset by speaking as though the difference between one soul or character and another were always a difference of degree (not of nature), a difference of higher or lower on the same scale. (Rhees, 184-185, parenthetical remarks in the original)

By contrast, Rhees describes how: Wittgenstein would emphasize that one man’s nature and another’s are not the same,

and that what is right (or imperative) for one man may not be right for another. So that in certain cases it would be wrong to think of examples in the lives of other people and put trust in their solutions, or to ask what someone whom I admire would have done in a situation like this. “Don’t take the example of others as your guide, but nature.” (Rhees, 187, parenthetical remark in the original. The passage cited from Wittgenstein is from VB, 41: “Laß Dich nicht von dem Beispiel Anderer führen, sondern von der Natur!”)

Rhees charges that Weininger “stacked the cards at the outset”. The implication is that more than one “outset” is possible: if stacking the deck is blamable, or at least if Wittgenstein did not do so, or if he did so differently, Weininger’s “outset” must be only one of multiple options. Further, “outset” in this sense must be fundamental or essential: such “differences in nature”, as Rhees notes, are “brute”, “absolute”, “irreducible” (Rhees, 184, 185, italics in the original). Otherwise they would not constitute an “outset” at all, but only some sort of continuation, some sort of “higher or lower on the same scale”. The critique of Weininger’s thinking that is made here in contrast to Wittgenstein’s is therefore that the possibility of “differences in nature” or of “different scales”, each with “a different standard, with different sorts of methods and evidence (…) never occurred to him or had been brushed aside.” (Rhees, 186) “Weininger leaves no room for this.” (Rhees, 187) Wittgenstein makes this same point to Russell in two letters from February and March, 1914. In the first of these he refers to “enormous differences in our natures”, “how totally different our ideas are” and “fundamental differences”. It would follow that philosophical rigour in Wittgenstein’s sense must exactly leave such room by continually revisiting the possibilities (plural) at first principle and thereby interrogating whether it itself has made its “outset” properly—“anständig”, as Wittgenstein would say (a word with overtones in German of “outset”, even while meaning “properly”).
Philosophical experience would therefore be distinguished from the non-philosophical by two repetitive motions: a vertical motion by which first principle(s) would continually be revisited and questioned; and an abysmal horizontal motion at first principle by which a new and different “outset” would be taken up from among the competing possibilities at origin.8 To “leave room” for this somersault movement of inquiry would therefore require “leaving room” in two further senses. First, “room” would have to be left between experience and its “outset” so that the two might be differentiated, would not ‘run together’. Only so could a vertical motion be made to revisit and question one’s “outset” before or otherwise aside from

experience. Second, “room” would have to be left between “outsets”, since only so could “outsets” be distinguishably plural. Respecting “room” in these multiple senses, or failing to do so, is exactly what is ultimately at stake in the question of digital and analog forms.9

3. Singularity and Plurality

Wittgenstein observed about his thought: Meine Art des Philosophierens ist mir selbst immer noch, und immer wieder, neu, und daher muß ich mich so oft wiederholen. (VB, 1)10

The combination of repetition (“immer noch”, “immer wieder”, “Wiederholung”) with “neu” is strange and noteworthy. Doesn’t repetition mean exactly not new? These go together only where a repeated retreat to “outset” is ventured such that a fresh beginning is enabled. Wittgenstein suggests that he continually surprised even himself through this somersault action: “mir selbst immer noch, und immer wieder, neu”. On the one hand, such originary, and originating, exploration differentiates philosophy from science and from non-philosophical experience generally. Rhees quotes Wittgenstein as observing: In fact, nothing is more conservative than science. Science lays down railway tracks. And for scientists it is important that their work should move along those tracks. (Rhees, 202, italics in the original)

On the other hand, this sort of somersault in thought (where, as Hegel says in the preface to the Phänomenologie, it is necessary to walk for once on one’s head)11, accounts for the peculiarly obscure ways along which, according to Wittgenstein, philosophy arrives at its results. Elsewhere in Recollections Of Wittgenstein, Maurice O’Connor Drury cites ‘A Lecture on Ethics’ in which Wittgenstein describes a “difficulty” which characterizes a philosophical exposition: the hearer is incapable of seeing both the road he is led [along] and the goal which it leads to. That is to say: he either thinks: “I understand all he says, but what on earth is he driving at” or else he thinks “I see what he’s driving at, but how on earth is he going to get there.” All I can do is again to ask you to be patient and to hope that in the end you

may see both the way and where it leads to. (Rhees, 77-78, from LE, 4)

Drury then goes on to quote Wittgenstein in conversation as follows: philosophy is like trying to open a safe with a combination lock: each little adjustment of the dials seems to achieve nothing; only when everything is in place does the door open. (Rhees, 81)

These images of an obscure pathway and of a combination lock suggest that many of the remarks and examples in the later work may tend away from the result to which Wittgenstein would lead, but yet are necessary to its achievement. The difference from the Tractatus might be said to be that the TLP provides a ladder which ultimately must be discarded, while the later work provides countless ladders—which must be discarded. This plurality is fundamental. But plurality requires borders and breaks and changes of direction and these must be treated as fundamentally as the plurality they enable: “room” must be left! An all important part of the exercise in the later work therefore consists in showing both the possibility and the resulting necessity of quitting one avenue (or railway track) of inquiry and beginning another, just as the deployment of a combination lock first of all requires knowing how to change the direction of the dial between the numbers of the combination. More, the less the numbers of such a lock have to do with one another on the way to the final combination, the better. Indeed, the worst case for a ‘combination’ lock would be to have only a single number and, therefore, only a single direction on the way to it: Ich habe diese Gedanken alle als Bemerkungen, kurze Absätze, niedergeschrieben. Manchmal in längeren Ketten, über den gleichen Gegenstand, manchmal in raschem Wechsel von einem Gebiet zum andern überspringend. - Meine Absicht war es von Anfang, alles dies einmal in einem Buche zusammenzufassen, von dessen Form ich mir zu verschiedenen Zeiten verschiedene Vorstellungen machte. Wesentlich aber schien es mir, daß darin die Gedanken von einem Gegenstand zum andern in einer natürlichen und lückenlosen Folge fortschreiten sollten. Nach manchen mißglückten Versuchen, meine Ergebnisse zu einem solchen Ganzen zusammenzuschweißen, sah ich ein, daß mir dies nie gelingen würde. 
Daß das beste, was ich schreiben konnte, immer nur philosophische Bemerkungen bleiben würden; daß meine Gedanken bald erlahmten, wenn ich versuchte, sie, gegen ihre natürliche Neigung, in einer Richtung weiterzuzwingen. - Und dies hing freilich mit der Natur der Untersuchung selbst zusammen. Sie nämlich zwingt uns, ein weites Gedankengebiet, kreuz und quer, nach allen Richtungen hin zu durchreisen. (PU, 9)12

In this further passage from the prefatory remarks to PU, Wittgenstein again contrasts two different methods or ways of proceeding. One way tends toward singularity: “einmal in einem Buche zusammenzufassen... in einer natürlichen und lückenlosen Folge fortschreiten... zu einem solchen Ganzen... in einer Richtung”. The other is plural: “zu verschiedenen Zeiten verschiedene Vorstellungen... immer nur philosophische Bemerkungen... ein weites Gedankengebiet, kreuz und quer, nach allen Richtungen hin.” Wittgenstein emphasizes that these are contrasting standards (“von Anfang”, “wesentlich”, “das beste”, “Natur der Untersuchung”) which he characterizes in several ways which go beyond vocabulary in the direction of poetry. For example, the standard of singularity is described through a series of infinitives: “zusammenzufassen”, “zusammenzuschweißen”, “weiterzuzwingen”. These at once express an imperative to be followed and illustrate that imperative via the grammatical fusion of the infinitive form of separable verbs in German: so “zusammenzufassen” describes an action, but the word itself is also an image of that action. In contrast, the infinitive form used to characterize the plural standard (“zu durchreisen”) is separated, so that it, too, both describes an action (“nach allen Richtungen hin zu durchreisen”) and is also an image of the plurality which is implicated in that action. Wittgenstein emphasizes this contrast of infinitive forms by ending the passage with the exceptional one. This same contrast between singular and plural forms is also presented in Wittgenstein’s use of two of the same words (in the same order) to describe both standards (namely, “zusammen” and “zwingen”); these are used in combined form to describe the singular standard (“zusammenzufassen”, “zusammenzuschweißen”, “weiterzuzwingen”) and in separate form to describe the plural standard (“dies hing freilich mit der Natur der Untersuchung selbst zusammen. Sie nämlich zwingt uns...”).
Singularity and plurality considered as standards restate the question of “room” broached above. Singularity refuses such room, therefore refuses the attendant possibilities of a separate ‘space’ of outset and of multiple possibilities ‘there’.13 In fundamental contrast, plurality insists on such room and space. The former is analog, the latter, digital.

4. Community: Yes and No

Later in his ‘Postscript’ to Recollections, Rhees describes walking with Wittgenstein in 1945 and telling him of his idea of joining the Trotskyite Revolutionary Communist Party:

Wittgenstein stopped walking at once and grew more serious as he did if you mentioned a problem that he’d thought about. ‘Now let’s talk about this.’ We sat down on a park bench. He got up almost at once, because he wanted to illustrate what he said by walking back and forth. His main point was: When you are a member of the party you have to be prepared to act and speak as the party has decided. (…) Perhaps the party line will change. But meanwhile what you say must be what the party has agreed to say. You keep along that road. Whereas in doing philosophy you have got to be ready constantly to change the direction in which you are moving. At some point you see that there must be something wrong with the whole way you have been tackling the difficulty. You have to be able to give up those central notions which have seemed to be what you must keep if you are to think at all. Go back and start from scratch. (Rhees, 208)14

Rhees then cites Wittgenstein from Zettel: In 1931 he wrote in parenthesis: “(The philosopher is not a citizen of any community of ideas. That is what makes him into a philosopher.)” (Rhees, 208)15

The parenthesis expresses Husserlian ‘bracketing’ and illustrates the remove of the philosopher from any “party line”. This second recollection of Rhees again shows Wittgenstein contrasting two fundamentally different modes of experience and existence. There is that of the “party line” where “you keep along that [single] road”. And there is that of philosophy where “you have got to be ready constantly to change the direction in which you are moving.” Underlying these contrasting modes of orientation are essentially different acceptances regarding first principle—or first principles. Wittgenstein speaks of “the whole way you have been tackling the difficulty”, of “central notions which have seemed to be what you must keep if you are to think at all”, of the possibility and resulting imperative to “go back and start from scratch.” What is at stake here are standards or forms or first principles. Non-philosophical orientation rests on a conception of first principle which is essentially—fundamentally—singular. Whereas the constantly changing ways of the philosopher correspond to—respond to—a plurality of possibilities for “outset” at origin. It might therefore be said that, in order to be in community with the community of original possibilities, it is requisite that “the philosopher is not a citizen of any community of ideas”.


5. Crossing Over

In the first of the two recollections discussed here from Rhees in his ‘Postscript’, Wittgenstein is said to differ from Weininger by allowing for “brute” or “absolute” differences, “differences in nature”, and thereby refusing the notion of a single scale with only degrees of difference. In the second, he is again pictured as arguing against a single scale (“the party line”) and for that original plurality which is required if among multiple “central notions” it is possible to “start [again] from scratch”. A number of questions emerge concerning the vertical and then horizontal movement of the philosopher whereby she retreats to multiform origin and there is able to “start from scratch” with a new “outset”. How is such (so to speak) crossing over possible? Crossing back to multiform origin and crossing across at origin to other possibilities of “outset”? To repeat: in doing philosophy you have got to be ready constantly to change the direction in which you are moving. At some point you see that there must be something wrong with the whole way you have been tackling the difficulty. You have to be able to give up those central notions which have seemed to be what you must keep if you are to think at all. Go back and start from scratch.

Digital systematicity is the answer to these questions. For crossing back and forth is fundamental to the digital, it is “natural”, “brute”, even “absolute” to it. Digital systematicity is such ‘crossing over’: Wittgenstein’s “kreuz und quer”. Now in the Tractatus, Wittgenstein set out what might jokingly be called ‘the logical structure of the world’ in terms of the digital. Famously, he thought at the time that this completed what philosophy had to accomplish and thereby showed how insignificant this was. During the 1920s, further consideration of the matter (including his reading of Spengler, Nietzsche, Kierkegaard and Plato), doubtless augmented by the experience of teaching in the Volksschule (where different children would have learned in different ways at different speeds), seems to have revealed to him, however, that the Tractatus was not after all the last word and that philosophy might indeed have a vocation, or vocations, which he had not previously recognized.


6. The Turning Point

There are many reasons to guess that the fulcrum on which Wittgenstein’s life and thinking turned at this time (and continued to do so until his death twenty years later) was the question of essential plurality or “necessary multiplicity”. This is the signature difference of the digital from the analog, the former being plural at origin, the latter singular—but just how does such digitality function and just what does it entail? What follows from “necessary multiplicity”? Wittgenstein had long recognized that the logical structure of the digital (and the digital structure of logic) has an ethical dimension. Even before the war he had written to Russell of his desire “ein anderer Mensch [zu] werden”; he wanted to become a “different” person with a “different” relation to others (perhaps especially to those who struck him as unthinking). His decision to become a Volksschule teacher served both these demands. He might become “different” by taking up a “different” relation—one of service—to ordinary people16. In this way, Wittgenstein drew both personal and social consequences from the logic of digitality described in the Tractatus. Exactly because “difference” is fundamental to digitality, and because digitality constitutes the logical structure of the world, it was both possible and necessary for the individual to be “different” (to become different and to live differently). By the end of the 1920s, Wittgenstein’s attempt to live digitally may have rebounded on his conception of philosophy. For if everything turns on original difference and multiplicity, how could the digital alone be true? How could philosophy have only one method with only one formulation? How could it ever be, as Wittgenstein wrote of the Tractatus in his prefatory remarks, “unantastbar und definitiv”? How could it not have the essential possibility of difference which is the very signature of digitality?
Wittgenstein’s desire to be different in accord with original difference may have found a new impetus in this way. Implicated in this impetus was the insight that the strength of the analog (a strength manifested, as he recognized, throughout modern civilization) was not some accident or trick or inexplicable fate, but the expression of the fundamental place of the analog in comprising, with the digital, original difference and “necessary multiplicity”. Only so could origin have a “multiplicity” which would be fundamental or “necessary”. Only so could the relation between the analog and the digital itself be digital. Essential plurality, or digitality, demands fundamental difference at first principle. The implication, as Hegel saw, is that the analog is necessary to

the digital in order for the digital to be digital. The digital must tolerate the analog both at first principle and throughout experience as its distinctive way of being itself, “ohne den er das leblose Einsame wäre”.17 For Wittgenstein’s philosophical work, the guess may be made that he would now turn to methods, plural, including analog methods, as ways of communicating and teaching insight into that original difference to which he had, always, found himself called to respond.

7. A Guess at the Riddle18

This paper is offered as a kind of thought experiment, since its author is certainly no expert on Wittgenstein. Its suggestions, which remain to be tested against the full gamut of his texts, are these:

• in accord with one strand (and the most important strand) of the German tradition at least since Kant, Wittgenstein held that human beings are exposed to discrete first principles which are plural in number and which are mutually exclusive (except insofar—and this ‘insofar’ is monumental—as they are equally present at origin);

• these first principles are various modes of identity (therefore also of difference) which have competing universal claim and application (a remarkable notion which has been present in philosophy at least since Plato’s gigantomachia19);

• these principles govern relations holding everywhere, between (e.g.) the “outset” of human experience and its manifestation, between logic and the Realwissenschaften, between one human being (culture, time) and another human being (culture, time), between humans and the natural world, between individuals and God (and so on): such is the universal claim made by each of them;

• the contest of these competing principles therefore poses ontological, epistemological and ethical puzzles at the “outset” of human experience, to which a peculiar kind of “Schritt zurück”20 must be attempted to that space of “outset” where, alone, the competing universal claims can be ‘decided’ a priori;

• the most important affirmation of this tradition is that the finite and the infinite do not contradict one another (as they do for analog experience on account of its basis in undifferentiated unity), but instead implicate each other—“Das Unendliche - wie gesagt - konkurriert mit dem Endlichen nicht. Es ist das, was wesentlich kein Endliches ausschließt.”21

• since the two basic types of these principles have become familiar outside of philosophy over the last half-century as analog and digital processes, and since the nihilism which is engulfing the world is anchored in analog presupposition, the time may be at hand when a principial solution to nihilism in digital experience may become generally available (instead of occurring only to isolated individuals with no means of communicating it to the culture at large);

• this would then meet Wittgenstein’s hope for a different sort of civilization or Lebensform, one where his work would be understood, since, on the one hand, it is analog presupposition which characterizes modernity (“Unsere Zivilisation ist durch das Wort ‚Fortschritt’ charakterisiert. Der Fortschritt ist ihre Form”)22 and, on the other hand, language is anchored in Lebensform (“Und eine Sprache vorstellen, heißt eine Lebensform vorstellen”23);

• as is appropriate to a gigantomachia, however, the opposing power of the analog is just as fundamental as the digital and just as little liable to final subjugation: no rapture is to be awaited.24


8. Wittgenstein on Weininger

In a letter to G. E. Moore (August 23, 1931), Wittgenstein commented on Weininger as follows: I can quite immagine that you don’t admire Weininger very much, what with that beastly translation & the fact that W. must feel very foreign to you. It is true that he is fantastic but he is great & fantastic. It isn’t necessary or rather not possible to agree with him but the greatness lies in that with which we disagree. It is his enormous mistake which is great. I.e. roughly speaking if you just add a “~” to the whole book it says an important truth.25

In the context of the gigantomachia of the digital and analog forms set out here, Wittgenstein’s comments may be understood as situating Weininger on the analog side26 of the battle of “outsets”. This battle (machia) is “great” (giganto), “enormous”, because it concerns fundamental forms in, as Plato says, their “quarrel about reality” (γιγαντομαχία περὶ τῆς οὐσίας)27. It is “foreign” and “fantastic” because this battle takes place a priori, before experience, in what Hegel therefore characterizes as a realm of shadows: Das System der Logik ist das Reich der Schatten, die Welt der einfachen Wesenheiten, von aller sinnlichen Konkretion befreit. Das Studium dieser Wissenschaft, der Aufenthalt und die Arbeit in diesem Schattenreich ist die absolute Bildung und Zucht des Bewußtseins.28

In this Schattenreich, says Plato, “an interminable battle is always going on between the opposing camps (ἐν μέσῳ δὲ περὶ ταῦτα ἄπλετος ἀμφοτέρων μάχη τις […] ἀεὶ συνέστηκεν)”29. The battle is “interminable” because it is waged by ontological forms whose reality is absolute and whose right to be is therefore not subject to diminution: no final victory is possible for either side in a battle between forms of the real. There is yet something between (ἐν μέσῳ δέ) these gigantic forms which holds them together at origin in a contested sort of peace despite their gigantic differences. Plato sees the vocation of the philosopher as consisting in the recognition and response to this original “between”: It seems that only one course is open to the philosopher who values knowledge and truth above all else. He must refuse to accept from the champions of the forms the doctrine that all reality is changeless, and he must turn a deaf ear to the other party who represent reality as everywhere changing. Like a child begging for ‘both’, he must declare that

reality or the sum of things is both at once [τὸ ὄν τε καὶ τὸ πᾶν συναμφότερα] (Sophist 249d)

Wittgenstein is following Plato’s view here when he asserts that Weininger’s “greatness lies in that with which we disagree. It is his enormous mistake which is great.” Only through acknowledgement and appreciation of what is fundamentally different at origin is it possible to valorize what binds them together ‘there’. This is an “unspeakable” power that is even more shadowy than the shadow-forms it both separates and holds together, but it is a power which nonetheless is able to exercise its sway with them and, therefore, with differences wherever they appear. If with the great, how not with the small? So it is that Wittgenstein observes in regard to Weininger: “roughly speaking if you just add a “~” to the whole book it says an important truth”. This is a negative sign at origin and is therefore fundamentally different from a negative sign belonging to a “single scale”. At origin, the negative expresses an exfoliation which is essential. Its study in differences throughout experience is therefore an act both of mindfulness and of piety.

9. Beyond the Dreams of Philosophy
After Drury became a doctor specializing in psychiatry, Wittgenstein said to him: “I wouldn’t be altogether surprised if this work in psychiatry turned out to be the right thing for you. You at least know that ‘There are more things in heaven and earth’ etc.” (Rhees, 152)30 Drury notes that the appeal to such variety and “necessary multiplicity” beyond the grasp of philosophy was not unusual for Wittgenstein. He recalls that Wittgenstein “was fond of quoting the proverb: ‘It takes many sorts to make a world’, adding, ‘That is a very beautiful and kindly saying.’” (Rhees, 148)


Endnotes
1 This is part 2 of an extended project concerning ‘the digital Wittgenstein’.
2 “We”, that is, Wittgenstein and Drury.
3 http://wab.aksis.uib.no/wab_contrib-mec.page
4 “natural inclination (...) towards all different directions”.
5 Wallace translates: “Enjoying however an absolute liberty, the Idea (…) resolves to let the ‘moment’ of its particularity (…) go forth freely as Nature.” From the InteLex Hegel in translation database: http://www.nlx.com/titles/titlhegt.htm
6 “Four years ago I had occasion to re-read my first book (the Tractatus Logico-Philosophicus) and to explain its ideas to someone. It suddenly seemed to me that I should publish those old thoughts and the new ones together: that the latter could be seen in the right light only by contrast with and against the background of my old way of thinking.” This and the remaining Wittgenstein texts in the footnotes are taken from the Blackwell editions available in the InteLex ‘Collected Works’ database: http://www.nlx.com/titles/titllwtr.htm
7 “If you do not want certain people to get into a room, put a lock on it for which they do not have the key. But it is senseless to talk with them about it, unless you want them all the same to admire the room from outside! The decent thing to do is: put a lock on the doors that attracts only those who are able to open it & is not noticed by the rest.”
8 Cf. McGuinness, 18: “It seemed to her [Hermine Wittgenstein in Familienerinnerungen] that all the pictures [in the Wittgenstein art collection], curiously enough, were characterized by (...) a stressing of the verticals and horizontals which she wanted to call ‘ethical’.”
9 “To leave room” is used here in the passive sense of “to respect existing room”: for such “room” enables what we do, and is not enabled by us.
10 “I myself still find my way of philosophizing new, & it keeps striking me so afresh, & that is why I have to repeat myself so often.”
11 See Dante, Purg. 4: Vassi in Sanleo e discendesi in Noli, / montasi su in Bismantova ’n Cacume / con esso i piè; ma qui convien ch’om voli...
12 “I have written down all these thoughts as remarks, short paragraphs, of which there is sometimes a fairly long chain about the same subject, while I sometimes make a sudden change, jumping from one topic to another.--It was my intention at first to bring all this together in a book whose form I pictured differently at different times. But the essential thing was that the thoughts should proceed from one subject to another in a natural order and without breaks. After several unsuccessful attempts to weld my results together into such a whole, I realized that I should never succeed. The best that I could write would never be more than philosophical remarks; my thoughts were soon crippled if I tried to force them on in any single direction against their natural inclination.--And this was, of course, connected with the very nature of the investigation. For this compels us to travel over a wide field of thought criss-cross in every direction.”
13 ‘Room’ and ‘space’ as used here are mutually implicating: both might be ‘Raum’ in German. Regarding ‘there’, Heidegger takes the “da” in “Dasein” to indicate how humans fundamentally belong to this “eigentümlichen Bereich” of ‘outset’.
14 Compare Karl Wittgenstein, Ludwig’s father, writing in 1898: “an industrialist must take risks: when the moment demands it, he must be prepared to place everything on a single card, even at the danger of (...) losing his initial stake, and having to start again from the beginning.” (Quoted in McGuinness, 14) These words will have impressed Ludwig on account of his age (he was 9) and on account of the fundamental change of direction undertaken by Karl at just this time, when he gave up the various directorships which had made him the single most powerful industrialist in Austria and simply withdrew into private life. This astonishing change, at the young age of 50, must have lent striking illustration and emphasis to Karl’s prescription of the need to be ready “to start again from the beginning”.
15 Z #445: “(Der Philosoph ist nicht Bürger einer Denkgemeinde. Das ist was ihn zum Philosophen macht.)”
16 That Wittgenstein never gave up this idea may be seen not only in his hospital service during WW2, but also in his repeated ambition to quit philosophy in order to study medicine, and in his advice to friends that they should take up some such social employment.
17 “Without which he would be lifelessly alone”. Hegel’s Phänomenologie ends as follows (italics have been added): “beide zusammen, die begriffene Geschichte [that is, the logical system and the messy realm of the finite] bilden die Erinnerung und die Schädelstätte des absoluten Geistes, die Wirklichkeit, Wahrheit und Gewißheit seines Throns, ohne den er das leblose Einsame wäre; nur - ‘aus dem Kelche dieses [endlichen] Geisterreiches / schäumt ihm seine Unendlichkeit’.” Wittgenstein’s use of the word ‘nur’ in the preface to PU (“nur philosophische Bemerkungen”, “nur ein Album”) should be considered in this context, where precisely limitation, the discrete as marked by the word “nur”, is accorded fundamental importance. As Heidegger observes in Identität und Differenz (18): “Nur” – dies meint keine Beschränkung, sondern ein Übermaß. (“Only” – this does not indicate restriction, but exorbitance.)
18 This title is taken from a C. S. Peirce paper which remained unfinished and unpublished in his lifetime. See: http://www.cspeirce.com/menu/library/bycsp/guess/guess.htm
19 Sophist 246a-249c.
20 A “Schritt zurück” is required, since there is no human experience where such a decision has not ‘always already’ been made and from which it is therefore necessary to be liberated (as Schiller notes in the citation below) if a “new” and free – a priori – decision is to be made. The phrase “Schritt zurück” appears frequently in Heidegger. He seems to have taken it from Schiller’s 20. Brief über die ästhetische Erziehung des Menschen: “Der Mensch kann nicht unmittelbar vom Empfinden zum Denken übergehen; er muß einen Schritt zurücktun, weil nur, indem eine Determination wieder aufgehoben wird, die entgegengesetzte eintreten kann.” The concluding lines of Hegel’s Phänomenologie, following “nur”, are also, of course, taken from Schiller (“Die Freundschaft”).
21 PB, 157: “As I’ve said, the infinite doesn’t rival the finite. The infinite is that whose essence is to exclude nothing finite.” Compare the end of Hegel’s Phänomenologie given in note 17 above.
22 VB, 7: “Our civilization is characterized by the word progress. Progress is its form …”
23 PU, #19: “And to imagine a language means to imagine a form of life.”
24 Indeed, since rapture would obviate “the necessary multiplicity”, such a hope (or fear) is itself analog.
25 Ludwig Wittgenstein: Gesamtbriefwechsel/Complete Correspondence, The Innsbruck Electronic Edition – http://www.nlx.com/titles/titllwgb.htm
26 As described in ‘The Digital Wittgenstein’, there is not one analog side to the gigantomachia, but two. The digital constitutes a third side to the battle and also the overall form of the battle itself.
27 Sophist 246. Cf. Monk, 3-4, with added emphasis: “This [Wittgenstein’s ‘very sense of being a philosopher’] points not to a change of opinion, but to a change of character – the first of many in a life that is marked by a series of such transformations, undertaken at moments of crisis and pursued with a conviction that the source of the crisis was himself. It is as though his life was a battle with his own nature.”
28 Die Wissenschaft der Logik, Erster Teil – Die objektive Logik, Einleitung: Allgemeiner Begriff der Logik. Miller translates: “The system of logic is the realm of shadows, the world of simple essentialities freed from all sensuous concreteness. The study of this science, to dwell and labour in this shadowy realm, is the absolute culture and discipline of consciousness.”
29 Sophist 246.
30 There are more things in heaven and earth, Horatio, / Than are dreamt of in your philosophy. Hamlet 1.5.

References
Hegel, G.W.F., Enzyklopädie der philosophischen Wissenschaften, Berlin, 1830.
Hegel, G.W.F., Wissenschaft der Logik, Nuremberg, 1812-1816.
Heidegger, Martin, Identität und Differenz, Pfullingen, 1957.
McGuinness, Brian, Wittgenstein, A Life (Young Wittgenstein 1889-1921), London and Berkeley, 1988.
Monk, Ray, Ludwig Wittgenstein: The Duty of Genius, New York, 1990.
Peirce, Charles S., “A Guess at the Riddle”, http://www.cspeirce.com/menu/library/bycsp/guess/guess.htm
Plato, Sophist, translated by F.M. Cornford, in The Collected Dialogues of Plato, Princeton, 1963.
Rhees, Rush, Recollections of Wittgenstein, Oxford (Oxford University Press), 1984.
Wittgenstein, Ludwig, “A Lecture on Ethics”, in Philosophical Review, v. 74 (1965), pp. 3-12. (=LE)
Wittgenstein, Ludwig, Philosophische Bemerkungen, Oxford (Blackwell), 1964. (=PB)
Wittgenstein, Ludwig, Philosophische Untersuchungen, Oxford (Blackwell), 2nd ed., 1958. (=PU)
Wittgenstein, Ludwig, Vermischte Bemerkungen, Oxford (Blackwell), 2nd ed., 1980. (=VB)
Wittgenstein, Ludwig, Zettel, Oxford (Blackwell), 1967. (=Z)


Abstracts and Biographies


THOMAS BARTSCHERER, PAOLO D’IORIO, Philosophy in an Evolving Web: Necessary Conditions, Web Technologies, and the Discovery Project
This article considers how digital technology—and in particular recent innovations in networking and the Semantic Web—can be exploited to assist scholars in conducting academic research while at the same time minimizing the risks posed by web-mediated scholarship. We argue that a clear understanding of how humanities scholarship has traditionally been structured and practiced is a prerequisite for the success of any large-scale digital humanities initiative, and we therefore attempt to articulate what we have identified as the conditions necessary for the possibility of scholarship, conditions which are independent of any given technology. We discuss the following five conditions: stability, accessibility, durability, dissemination, and standards of quality. We then turn to a detailed look at the Discovery Project, recently launched under the aegis of the European Union’s eContentplus programme, to consider if and how the project meets the conditions specified. We conclude by noting three major challenges that confront this project and all similar initiatives aimed at integrating humanities research and digital technology.

THOMAS BARTSCHERER is completing a doctoral thesis at the University of Chicago’s Committee on Social Thought and is Associate Director of the Language and Thinking program at Bard College in New York, where he also teaches in the Humanities. He is co-editor of Switching Codes, a collection of essays on the impact of digital technology on thought and practice in the arts and the humanities (University of Chicago Press, forthcoming), and has co-edited Erotikon: Essays on Eros, Ancient and Modern (University of Chicago Press, 2005). Bartscherer holds an MA from the University of Chicago and a BA from the University of Pennsylvania.
He has been a Visiting Scholar at the École Normale Supérieure (2001-03) and the University of Heidelberg (1997-99). He is a research associate with the eContentplus project Discovery.

DR. PAOLO D’IORIO is senior research scholar at the Institut des Textes et Manuscrits Modernes (CNRS-ENS, Paris) and research associate at the Oxford Internet Institute and at the Maison Française d’Oxford. In recent years he has held a major Humboldt Foundation fellowship at the University of Munich, where he directed the HyperNietzsche Project. He is now coordinator of two European initiatives whose common goal is to establish an infrastructure for humanities scholarship on the Internet: the COST Action “Open Scholarly Communities on the Web” and the eContentplus project “Discovery. Digital semantic corpora for virtual research in philosophy”.

CHRIS CHESHER, Binding time: Harold Innis and the balance of new media
This paper evaluates how Harold Innis’s 1950s theory of communication can help us to understand a 21st-century world dominated by digital media. For Innis, how the media of an era relate to time and space helps define its civilisation: in its materials, languages and social relationships. Space-binding media (such as papyrus, or electronic communication) facilitate command and control over territory, supporting empire-building. Time-binding media (such as stone, and spoken communication), on the other hand, operate to maintain cultural continuity, and tend towards more stately, priestly structures. The computer, as the dominant medium of the current era, tends to generate distinctive relationships with time and space (without simply determining social outcomes). Digitality seems to efface media materiality, as software operates across proliferating networks of different materials: electronic, semiconducting, magnetic, optical and so on. However, materiality always returns. In line with Innis’s warnings, the implementation of computers has always been out of balance. Command functions have always been cheaper and more effective than memory or storage. Digital cultural artifacts are notoriously unstable, as physical media deteriorate, standards are superseded, and expertise is lost. At the same time, some configurations of new internet services such as chat, digital images, social networking and microblogging suggest new patterns in the formation and maintenance of cultural memory.

CHRIS CHESHER is Director of the Digital Cultures program at the University of Sydney. Coming from a background in media studies, his research has consistently engaged with new media, including recent work on mobile phones and everyday life; computer games and screen cultures; and the interplay between human and non-human language users on the internet.


GERHARD CLEMENZ, On our Knowledge of Markets for Knowledge—A Survey
It is a commonplace that advanced economies are knowledge(-based) economies, meaning that knowledge is a major factor in the creation of economic benefits. The importance of knowledge for economic development was emphasized long ago by leading “Austrian” economists such as Joseph Schumpeter and Friedrich Hayek. Both contemporaries of Wittgenstein claimed that the superiority of a capitalist economy over its alternatives stems from its efficiency with respect to the creation, processing and utilization of knowledge. But knowledge, however defined, differs in many respects from most other goods. Markets are fairly efficient when it comes to the allocation of conventional goods, but various difficulties and paradoxes arise when they are supposed to trade knowledge and provide incentives for its creation. We shall consider some of them and discuss whether alternative institutional setups may, after all, not be as bad as some Austrian economists maintained.

GERHARD CLEMENZ is currently full professor at the Department of Economics of the University of Vienna and Chairman of the Senate of the University of Vienna. He is a past president of the Austrian Economic Association and a certified expert witness in cartel cases. Past positions include Full Professor of Economics at the University of Regensburg and Bundesbank Professor at the Free University of Berlin. His main current research interests include industrial organisation, research and development, vertical integration, and competition policy. He has published numerous articles in international journals, including International Economic Review, European Economic Review, Journal of Industrial Economics, Journal of International Economics, Scandinavian Journal of Economics, etc.

CHARLES ESS, East-West Perspectives on Privacy, Ethical Pluralism and Global Information Ethics
Information and Communication Technologies (ICTs) are both primary drivers and facilitating technologies of globalization – and thereby, of exponentially expanding possibilities of cross-cultural encounters. Such encounters may foster greater communication and economic development: but they further risk evoking a Manichean dilemma between ethnocentric imperialism (i.e., the overt or covert imposition of our own cultural norms, values, and practices upon “the Other”) and a fragmenting relativism (one inspired in part by the need for tolerance and the protection of cultural identity and diversity – but which further opens the field to those who would simply impose their own views by force). Discerning middle grounds between these Manichean polarities is a central task of an emerging global information and computing ethics (ICE). I argue that we may discern and articulate the middle grounds required by a shared global ICE – first of all, by developing a critical self-awareness of how specific Western norms, values, and practices are embedded in ICTs themselves. I then turn to current discussion of conceptions of privacy and efforts at data privacy protection in both Western and Eastern contexts: these examples from praxis highlight a series of guidelines and approaches to the sorts of cross-cultural dialogues required for an ICE that establishes middle grounds between imperialism and relativism. Next, I highlight a number of recent theoretical developments in especially East-West dialogues regarding ICE – most importantly, an emerging virtue ethics that, as rooted in both Eastern and Western traditions, may serve as an important bridge between cultures and thereby sustain an ICE that avoids imperialism and relativism.
Finally, I elaborate on an ethical pluralism that emerges in conjunction with each of these elements: this pluralism seeks precisely to establish a middle ground between ethnocentric imperialism and fragmenting relativism by understanding shared norms and values as allowing for diverse interpretations, applications, and understandings – specifically, as mediated through phronesis, ethical judgment – that thereby reflect and preserve the distinctive norms and values defining specific cultures. Both at the levels of meta-theory and praxis (e.g., with regard to the issue of privacy), then, such ethical pluralism recommends itself as an important component of an emerging global ICE.

CHARLES ESS is Professor of Philosophy and Religion and Distinguished Professor of Interdisciplinary Studies, Drury University (Springfield, Missouri, USA), and Professor II, Programme for Applied Ethics, Norwegian University of Science and Technology (Trondheim). Dr. Ess has received awards for teaching excellence and scholarship, and has published extensively in comparative (East-West) philosophy, applied ethics, discourse ethics, history of philosophy, feminist Biblical studies, and Computer-Mediated Communication (CMC). With Fay Sudweeks, Dr. Ess co-chairs the biennial Cultural Attitudes towards Technology and Communication (CATaC) conferences. He has served as a Visiting Professor at the IT-University, Copenhagen (2003), the Université de Nîmes (2007), and Aarhus University (2007), and as a Fulbright Senior Scholar at the University of Trier (2004).

AUGUST FENK, A view on the iconic turn from a semiotic perspective
The “iconic” or “pictorial turn” pulled along by the “digital turn” has sometimes been considered a re-turn to preliterate societies. But the picture material in question is in fact embedded in a mix of materials from diverse sense-modalities and symbolic modalities. From a cognitive perspective such materials may be viewed as modules of an almost inexhaustible “external memory system” which can now be retrieved, linked and extended by its users. Some facets become apparent in a semiotic analysis of these modules: they deprive us of many “indexical” cues which might otherwise allow us to assign them directly to concrete agents and intentions, places and points of time. And if the term iconicity is reserved for cases of similarity established by a subject’s simulating (picturing, imitating, modelling …) activities, all “automatic recordings”, including those in digital format, are non-iconic. Paradoxically, they are at best iconic in cases of artificial reworking or forgery.

AUGUST FENK (b. 1945) studied Psychology and Physical Anthropology in Vienna and graduated there in 1971 with an experimental dissertation on slow brain potentials in problem solving. Since 1973 he has been at the University of Klagenfurt (Department of Media and Communication Studies), where he qualified in 1980 to lecture in Psychology with experimental studies on sensory storage in language processing. He has contributed to cognitive psychology, quantitative linguistics, and theoretical semiotics. After two terms of office as Chair of the Faculty of Humanities, he took two sabbaticals: winter semester 2003/04 and summer semester 2005. His current research centers on statistical language universals and on cognitive and semiotic aspects of visualization.


MAURIZIO FERRARIS, Science of Recording
In 2006, all the newspapers published a document proving that Günter Grass had been a soldier in the Waffen-SS. A sort of purloined letter in the sense of Poe, or, so to speak, a Leave of Grass. This kind of use of documents will become ever more powerful and widespread as paper disappears and traces, records and inscriptions of every sort enter the internet age, where they can be found by Google. How can we manage documents in a world characterized by the explosion of writing? The problems related to privacy, constantly increasing in advanced societies, are usually interpreted in the light of the recurrent image of a Big Brother, that is, a big watching eye, on the model of Bentham’s Panopticon. On the one hand, it is surely true that infrared viewers are nowadays widespread, as are the cameras that constantly survey every aspect of our lives in banks, stations, supermarkets, offices and private buildings. On the other hand, however, the power of this big eye would be useless without registration, which is exactly what transforms a vision into a document. No doubt the recent debates about phone interceptions are just the tip of an iceberg: the question we are facing here is an important one for democracy, and a complete grasp of the category of documentality (the social inscription of our life in the digital age) is required in order to answer it satisfactorily.

MAURIZIO FERRARIS is professor of philosophy at the University of Turin, where he heads the Center for Theoretical and Applied Ontology (http://www.ctaorg.org/) and the Laboratory of Ontology (http://www.labont.it/), and is director of “Rivista di Estetica”.
He has been awarded many literary and research prizes (he is a fellow of the Humboldt Foundation and of the Italian Academy at Columbia University, and has received the “Claretta” Prize, the Valitutti Prize, and the Castiglioncello Philosophical Prize). He is the author of 32 books (among the most recent: “Goodbye Kant!”, Milan 2004; “Dove sei? Ontologia del telefonino”, Milan 2005; “Jackie Derrida”, Turin 2006; “Babbo Natale, Gesù Adulto. In cosa crede chi crede?”, Milan 2006) and more than 1,000 scientific articles. He is a regular contributor to the Sunday cultural supplement of “Il Sole-24 Ore”, to “La Stampa” (Turin) and to “Il Secolo XIX” (Genoa). A more extensive curriculum, with a complete bibliography and descriptions and reviews of his works, can be found at http://www.labont.it/ferraris/.


PETER FLEISSNER, Information Society: A second “Great Transformation”?
Contemporary capitalism is usually characterized as acting on global markets, with production dominated by transnational companies, an important role for financial capital, and the backing of neoliberal ideologies. But for an appropriate assessment of the emerging information society this is not sufficient. The author proposes to capture the economic content of the information society through two essentially new features: the commodification and commercialisation of many areas of human activity. Culture, knowledge, the arts, research and entertainment are being globally conquered by the market. Commodification and commercialisation always have ambiguous effects. On the one hand, Intellectual Property Rights and copy-protection technology generate an artificial shortage of digital goods which are nowadays in principle superabundant (they could be distributed nearly for free to everybody on the globe); on the other hand, commodification and commercialisation can also lead to new goods or services and a better quality of products. This trend of commodification and commercialisation of human culture can be compared, in extension and importance, to the commercialisation of work during the first half of the 19th century in England, which was described by Karl Polanyi in his well-known book “The Great Transformation” (1944). There he located the basic transformation of a capitalistic economy into a capitalistic society.
PETER FLEISSNER, born in 1944 and living in Vienna since 1962, retired in October 2006 from the chair on Design and Assessment of New Technologies which he had held at the University of Technology in Vienna, Austria, since 1990, after seven years of work as a temporary agent for the European Union (1997-2000 Head of the Department “Technology, Employment, Competitiveness and Society” of the Seville-based Institute for Prospective Technological Studies of the Joint Research Centre of the European Commission in Spain; 2000-2004 Head of the Department “Research and Networking” of the European Monitoring Centre on Racism and Xenophobia). Before that, he had worked for the Austrian Academy of Sciences; for the International Institute for Applied Systems Analysis, Laxenburg, Austria; as a research scholar at MIT (Massachusetts Institute of Technology); and at the Institute for Advanced Studies, Vienna, Austria. He is an elected member of the Leibniz-Sozietät, Berlin, Germany, and was a member of the Board of Directors of the research institute of the Black Sea Economic Association (ICBSS) in Athens, Greece, and of the Advisory Board of the recently founded United Nations University/College of Europe (Bruges, Belgium). He has worked as a consultant for Siemens, Munich; for ORF (Austrian Radio and Television); for the Science Center, Berlin; for the Institute for Systems Technology and Innovation Research (ISI), Karlsruhe; for the Austrian Chamber of Labour; and for the Austrian Association of Industrialists. He is the (co)author of more than 15 books and of hundreds of book chapters and articles in scientific journals and research reports. For a complete list of publications and activities see: http://www.arrakis.es/~fleissner

NIELS GOTTSCHALK-MAZOUZ, Internet and the flow of knowledge: Which ethical and political challenges will we face?
The classic philosophical definition of knowledge does not address important features of the kind of knowledge that is central to the so-called knowledge societies, or so it is argued here. A different conceptual approach is therefore proposed, one that combines and interprets typical features of knowledge that have been articulated in a range of disciplines, including sociology, economics, psychology and information science. Using this approach, the kind of knowledge provided through the internet is examined. I characterise it as representative and as consisting of chunks of “knowledge candidates”, for it has to be recognised as knowledge on an individual basis, since institutional recognition is mostly absent; and I discuss some of the core ethical and political challenges this raises. In doing so, recent or planned innovations are considered (Web 2.0, the Semantic Web, Ubiquitous Computing). With respect to the flow of knowledge and the types of knowledge that constitute this flow, it is argued that these innovations might interact in a way that would intensify the propagation of candidates of what has traditionally been referred to as know-how, embodied in machines as parts of man-machine systems. If this prospect is plausible, then further struggles will focus on surveillance and, finally, on control of the actual functioning of such systems. That is to say, these struggles will ultimately be about what you can (or cannot) do, and thus about power. Following this hypothesis, the main ethical and political challenges in these struggles are seen as those typical of power issues.

DR. NIELS GOTTSCHALK-MAZOUZ is Wissenschaftlicher Assistent at the Universität Stuttgart. He studied physics, philosophy and history of science in Berlin, Leipzig and Tübingen and graduated from the Technische Universität Berlin (Dipl.-Phys., 1995). He joined the philosophy faculty of the Universität Stuttgart in 1997, where he obtained his doctorate in 1999 with a thesis on discourse ethics and his habilitation in 2006 with a thesis on concepts of knowledge. His research has covered a variety of subjects in applied ethics, political philosophy and the philosophy of technology, including interdisciplinary work on energy, the environment, healthcare, and ICT. More information and a list of publications are available at http://www.uni-stuttgart.de/philo/index.php?ngm

STEFAN GRADMANN, Some thoughts on the importance of Open Source and Open Access for an emerging digital scholarship
Both constituents of the term ‘Open Access’ are of equal and vital importance if innovative scholarship is to work effectively in a networked digital setting. ‘Access’ in such a context implies much more than just being able to read scholarly sources freely: the freedom to process and reaggregate such sources must also be part of such a setting (while fully respecting source authenticity and integrity). ‘Open’, too, has at least three distinct connotations here: (1) the absence of economic and legal barriers to access on the WWW, (2) the availability of scholarly sources in open, transparent and non-proprietary formats, and (3) the requirement that the methods used for processing sources should themselves be implemented as open source software. The presentation will explore all three aspects of ‘openness’ but will put particular stress on the issue of open document formats and models, as well as on the reasons why open source based approaches are appropriate for effective scholarly work in the e-humanities. Economic considerations, as well as the need to retain control of processing methods, are considered in this context, together with the responsibilities such a strategy implies for an open source based community of scholarship in philosophy.

DR. STEFAN GRADMANN is Professor of Library and Information Science at Humboldt-University in Berlin, with a focus on knowledge management and semantics-based operations. He studied Greek, philosophy and German literature in Paris and Freiburg (Brsg.) and received a Ph.D. in Freiburg in 1986. After additional training in scientific librarianship (Cologne, 1987-1988) he worked as a scientific librarian at the State and University Library in Hamburg from 1988 to 1992. From 1992 to 1997 he was director of the GBV Library Network; from 1997 to 2000 he was employed by Pica B.V. in Leiden as product manager and senior consultant. From 2000 to March 2008 he was Deputy Director of the University of Hamburg Regional Computing Center. He was Project Director of the GAP (German Academic Publishers) project of the German Research Association and technical co-ordinator of the EC-funded FIGARO project. Stefan has numerous European and international contacts in the areas of open access publishing and humanities computing. He was an international advisor to the ACLS Commission on Cyberinfrastructure for the Humanities and Social Sciences and as such contributed to the report “Our Cultural Commonwealth” (http://www.acls.org/cyberinfrastructure/OurCulturalCommonwealth.pdf). Stefan is currently heavily involved in building Europeana, the European Digital Library, and more specifically is leading WP2 on technical and semantic interoperability as part of the EDLnet project.

FRANK HARTMANN, Weltkommunikation und World Brain. Zur Archäologie der Informationsgesellschaft
Among the outstanding mental effects of the globalisation that set in as early as the mid-nineteenth century is the fact that improbabilities such as telecommunication redefine a space hitherto determined in terms of transport. The net of telegraph communication was knotted at those points that were most interesting from a mercantile point of view. The data streams of the modern form of life still hang on such nodes today; at them, in terms of transport and communications technology, "a world was knotted and drawn together that had not existed before" (Karl Schlögel). What kind of infrastructure was this? Surprisingly, this question was also posed philosophically. The new technologies could no longer be grasped in the conceptual vocabulary of tool technique. Thus the idea of organ projection surfaces in the early media discourse – avant la lettre – and a new philosophy of technology devotes itself to the cognitive implications of the recent developments, whose tenor already bears an astonishing resemblance to the theoretical discourse that accompanied the Internet. Taking Ernst Kapp as its case, the talk examines an eccentric representative of a new philosophy of technology whose motifs were directed at media effects of a kind that, until recently, academic philosophy systematically suppressed.
FRANK HARTMANN, Univ.-Doz. Dr. habil., lives in Vienna. He studied philosophy, sociology, art history, and media and communication studies at the University of Vienna, receiving his doctorate at its Institute of Philosophy in 1987; habilitation in 2000, since when he has been a (non-tenured) university lecturer in media and communication theory at the University of Vienna, a lecturer in the Department of Communication Studies at the University of Salzburg, and a visiting professor at the Faculty of Philosophy of the University of Erfurt (2007). He also works professionally as an author, science journalist, and PR and media consultant. His work and publications focus on media philosophy and media history, with an emphasis on visual communication. Selected publications: Medien und Kommunikation (Vienna: facultas wuv/UTB 2008), Globale Medienkultur. Geschichte und Theorien (Vienna: facultas wuv/UTB 2006, 2nd edition), Bildersprache. Otto Neurath, Visualisierungen (with Erwin K. Bauer, Vienna: WUV 2006), Mediologie. Ansätze einer Medientheorie der Kulturwissenschaften (Vienna: WUV 2003), Medienphilosophie (Vienna: WUV/UTB 2000), Cyber-Philosophie. Medientheoretische Auslotungen (Vienna: Passagen 1999, 2nd edition). Internet: www.medienphilosophie.net. E-mail: [email protected]

MICHAEL HEIM, Avatars and Lebensform: Kirchberg 2007
The avatar is a pointer to human presence on computer networks, but it is also a phantom of future transnational communities. The first half of this paper scans initiatives in avatar research—a journey from promise to frustration. Experiments suggest that virtual-worlds technology holds promise for transnational empathy, beyond first-person point-of-view, which might someday ease conflicts between nations. But the unfulfilled promise reveals a disconnect between technological evolution and human empathy. The second half of the paper shifts to educational practices that might support trans-socializing technology. By reviving cultural artifacts like poetry, music, and drama, the interdisciplinary humanities might increase the likelihood of imaginatively adopting another's perspective. The discussion and emulation of art works may belong to the future of avatars.
MICHAEL R. HEIM is the author of several books including The Metaphysics of Virtual Reality (Oxford, 1993), Virtual Realism (Oxford, 1998), and Electric Language: A Philosophical Study of Word Processing (Yale, 1985, 1999). His books have been translated into several languages. He was recently a contributing collaborator for the award-winning Austrian Zukunftsmagazin ".copy", and during the 1990s he organized six national conferences in Washington, DC for the Data Processing Management Association. At Art Center College of Design in Pasadena, California, he pioneered the use of 3D virtual worlds (cyberforum@artcenter), and in 2001 he held the first cross-campus Digital Cultures Fellowship from the University of California. He currently lectures in Humanities at the University of California at Irvine. His website is: www.mheim.com.

THEO HUG, Medienphilosophie und Bildungsphilosophie – Ein Plädoyer für Schnittstellenerkundungen
Unlike the relationship between the philosophy of education and educational research, that between media philosophy and the philosophy of education has so far remained largely a desideratum. This is surprising insofar as, in recent years, both media philosophy and the philosophy of education have called into question the idea that media are merely an optional subdomain. This is expressed, on the one hand, in talk of a "medial turn", which at a meta-theoretical level signifies an alternative and complement to the established paradigms and which, in analogy to the familiar focusings on language (linguistic turn), cognition (cognitive turn), signs (semiotic turn) or images (pictorial turn, iconic turn), is characterised by a concentration on media, mediality and medialisation. At an empirical level, the significance of media for processes of communication, knowledge construction and the construction of reality is emphasised. The expression "medialisation of lifeworlds" encompasses both aspects, as it were: the everyday world of experience and observations of "media penetration", as well as the inescapability of medialised worlds and their function as points of departure for our epistemic endeavours. On the other hand, questions about the significance of media in educational processes and about the status of media education are becoming increasingly relevant within the philosophy of education. Starting from a reconstruction of different versions and conceptions of the "medial turn", the talk explores several interfaces between recent strands of discourse in the philosophy of education and media philosophy.
THEO HUG, Dr. phil., is associate professor of educational sciences at the University of Innsbruck and coordinator of the Innsbruck Media Studies research group. His areas of interest include media education and media literacy, e-education and microlearning, theory of knowledge and philosophy of science. He is particularly interested in the interfaces of medialization and knowledge dynamics as well as learning processes. Some of his recent work focuses on instant knowledge, bricolage and the didactics of microlearning.

CAMERON MCEWEN, The Necessary Multiplicity
The later Wittgenstein was certainly a pluralist. But there are many different pluralisms. With particular reference to recollections of Wittgenstein from Rush Rhees and Maurice O'Connor Drury, the conjecture is advanced that the later Wittgenstein's pluralism was ontological and digital. The later work therefore continued and refined the Tractarian positions that the foundation of the world is not singular and that the relation of worlds based on these plural foundations is digital, not analog. But just as Wittgenstein himself came to this position in a way which was not purely digital (otherwise he would already have known what he despaired of finding), so the relation of the analog to the digital had to be rethought in the later work in order to account for such achievements as language acquisition. At the same time, this allowed Wittgenstein to illustrate and to teach that method of persistent questioning and repeated re-starting through which he himself had come to discover his calling.
CAMERON MCEWEN is a partner in the electronic publisher InteLex (http://www.nlx.com/). His research interests include Wittgenstein and Heidegger and are generally dedicated to an understanding of the history and implications of ontological plurality (Plato's "gigantomachia").


MICHAEL NENTWICH, Will the Open Access Movement be successful?
There is no doubt that, from the point of view of scholars around the world, Open Access seems the obvious solution to the evident problems of scholarly publishing in the present age of commodification. Access to the academic literature would be universally available and hence not restricted to those lucky enough to belong to wealthy institutions able to afford all the necessary subscriptions. Furthermore, many believe that only with a fully digital, openly accessible archive of the relevant literature, enhanced with overlay functions such as commenting, reviewing and intelligent quality filtering, will we be able to overcome the restrictions of the present, paper-based scholarly communication system. Many initiatives have been launched (e.g. the Berlin Declaration), some funding agencies have already reacted by adopting Open Access policies (notably the British Wellcome Trust, but also the German DFG and the Austrian FWF), new journal models are being tested to prove that Open Access is a viable economic model (e.g. BioMedCentral), Open Access self-archiving servers flourish around the world (not least in philosophy), and even high politics has reacted (most recently the European Commission). But still, after a decade or so of initiatives, testing and promotion, only a tiny fraction of the available scientific literature is Open Access. It is growing, no doubt, but we are a long way from universal open access. So, will the Open Access movement be successful? Or, put differently, can it be successful? What are the chances that the incumbents—the big commercial (as well as the not-for-profit, associational) publishing industry—will give way to a de-commodified future? Is there a middle ground where all the players and interests could meet? This paper contributes to this open debate by analysing recent trends and weighing the arguments put forward on both sides.
MICHAEL NENTWICH studied business (Handelswissenschaften) at the Vienna University of Economics and Business as well as political science and law at the University of Vienna; Mag.iur. 1988, Dr.jur. 1995. From 1990 to 1992 he was a research associate at the ITA (Institute of Technology Assessment); from 1991 to 1996 he was an assistant professor at the Research Institute for European Affairs of the Vienna University of Economics and Business, with numerous publications on European law and the constitutional politics of the EU; in 1994/95 he was a research fellow at the Universities of Warwick and Essex (England), and in 1998/99 a visiting scholar at the Max Planck Institute for the Study of Societies in Cologne. Since summer 1996 he has been a researcher at the ITA, working mainly in the field of information and communication studies, in particular the Internet; in March 2004 he completed his habilitation in science and technology studies at the University of Vienna. Since January 2006 he has been director of the ITA.

KRISTÓF NYÍRI, Towards a Philosophy of the Mobile Information Society
The mobile phone is not just the most successful machine ever invented, spreading with unheard-of speed; it is also a machine which answers deep, primordial human communicational urges. The emergence of the mobile phone, far from being an insignificant event from the point of view of philosophy, forces the latter to reconsider a range of old issues, as well as to face a number of radically new ones. In fact the phenomenon of the mobile phone constitutes an obvious challenge to the humanities. Worldwide, there were some 2.5 billion mobile phone users by August 2007. This figure, impressive enough by itself, reflects some fundamental conditions and changes. Combining voice calls with text messaging, MMS and e-mail, and on its way to becoming the natural interface through which to conduct shopping, banking, booking flights, and checking in, the mobile phone is turning into the one universal instrument of mediated communication, mediating not just between people, but also between people and institutions, and indeed between people and the world of inanimate objects. Furthermore, the mobile is today emerging as the dominant medium in the sense of that strange singular in the plural, "media"—both as mass media and new media. The age group perhaps most deeply affected by the rise of the mobile is that of children. Ubiquitous communication fulfils a deeply human urge, and children especially suffer if deprived of the possibility of keeping in touch. Also, Dewey's observation that we need schools—artificial educational environments—because the young can no longer move around in the world of adults and thus learn spontaneously, appears by now to have once more lost its force. The medium in which the young play, communicate, and learn is increasingly identical with the world in which adults communicate, work, do business, and seek entertainment.
The mobile is clearly creating an organic learning environment. The phrase "The Mobile Information Society" has been current since 1999 or so. However, one must increasingly come to realize that it is a misleading phrase. For mobile communications point to a future which offers a wealth of knowledge, not just of information, and promise to re-establish, within the life of modern society, some of the features formerly enjoyed by genuine local communities. "Community" on the one hand and "society" on the other clearly differ in their connotations; it was Tönnies who, towards the end of the nineteenth century, crystallized this difference into a conceptual contrast. However, the striking observation in the recent literature on mobile telephony is that constant communicative connectedness brings about a kind of return to the living, personal interactions of earlier communities.
KRISTÓF [J. C.] NYÍRI, born 1944, is Professor of Philosophy at the Budapest University of Technology and Economics, Institute of Applied Pedagogy and Psychology. He is a member of the Hungarian Academy of Sciences and of the Institut International de Philosophie, and was Leibniz Professor at the University of Leipzig for the winter term 2006/07. His main fields of research are the history of philosophy in the 19th and 20th centuries; the impact of communication technologies on the organization of ideas and society; the philosophy of images; and the philosophy of time. Some main publications: Tradition and Individuality, Dordrecht: Kluwer, 1992; "Electronic Networking and the Unity of Knowledge", in Stephanie Kenna and Seamus Ross (eds.), Networking in the Humanities, London: Bowker-Saur, 1995; "The Concept of Knowledge in the Context of Electronic Networking", The Monist, July 1997; "The Picture Theory of Reason", in Berit Brogaard and Barry Smith (eds.), Rationality and Irrationality, Wien: öbv-hpt, 2001; Vernetztes Wissen: Philosophie im Zeitalter des Internets, Vienna: Passagen Verlag, 2004; "Time and Communication", in F. Stadler and M. Stöltzner (eds.), Time and History, Frankfurt/M.: ontos verlag, 2006. Further information: www.hunfi.hu/nyiri. E-mail: [email protected].

CLAUS PIAS, Medienwissenschaft, Medientheorie oder Medienphilosophie?
The study of "media" is a fairly young phenomenon, yet one that has managed to institutionalise itself quickly and comprehensively. There are historical precedents for such newcomers, such as computer science or general literary studies. Despite various attempts at canonisation, however, the name, programme and subject matter of what is lumped together as media studies (Medienwissenschaft), media theory or media philosophy remain blurred. Matters are not made any easier by the fact that, under the pressure of grant-oriented proposal design, a number of incompatible research approaches now gather under the promising label "media". Finally (especially in the German-speaking world), institutional history plays a considerable role, with an empirical, social-scientific communication studies on the one hand and a hermeneutic, humanities-oriented film and television studies on the other. Against this background, five theses will be presented and discussed: 1. There are no media. 2. Media studies is not a discipline. 3. Media studies is a science. 4. Media scholars could not have invented media studies. 5. Media studies is not the same as media studies.
CLAUS PIAS is Professor of Epistemology and Philosophy of Digital Media at the Institute of Philosophy of the University of Vienna. He studied electrical engineering in Aachen, and art history, philosophy and German studies in Bonn and Bochum. Current research topics and projects: the history and theory of computer simulation, the history of knowledge of cybernetics, scaling as a technical and philosophical problem, the epistemology of the Cold War, and the edition of the critical writings of Hermann Bahr (1863-1934).

SIEGFRIED J. SCHMIDT, Media Philosophy—A Reasonable Programme?
There is no serious dissent, either in scholarly or in everyday discourse, about the growing importance of media in our media-culture society. As is usual in such a situation, media and their importance are increasingly subjected both to self-observation and to observation from outside. The self-observation of the media leads to a growing self-reference, which can quite practically be turned into programming. Observation from outside has been taken up, on the one hand, by empirically oriented media and communication scholars and, on the other, by philosophers. While media and communication studies have long been organised as disciplines, one cannot to this day speak of a media philosophy, either in the sense of a fixed theoretical edifice or in the sense of an established subdiscipline of philosophy. One media-philosophical approach concentrates on reformulating traditional philosophical topics in the light of the efficacy of media. The list of topics treated is very long, ranging from reality, truth, culture, society, education and politics to space, time, emotion, the subject and entertainment. A second media-philosophical approach concentrates on media-specific topics such as net culture, new social forms on the Internet, globalisation, the changing relevance of the body, changes in perception, gaming cultures, or the digitalisation of democracy, with overlaps with the traditional topics mentioned above being unavoidable. This brief characterisation of the thematic level also implies, in my view, a recommendation for resolving the organisational question of a media philosophy. If the topics cannot, for arguable reasons, be sharply separated from one another, it is advisable to organise media philosophy as a necessarily interdisciplinary research programme rather than as a traditional academic discipline. Everything will depend on developing media competence as a second-order observer. And for that we need a media philosophy that is clear about the fact that it can only take place in the media, under the clear motto "construction and contingency".
SIEGFRIED J. SCHMIDT, born 1940, studied philosophy, German studies, linguistics, history and art history. He held full professorships in Bielefeld, Siegen and Münster, and retired from the University of Münster in 2006.

URSULA SCHNEIDER, Globalisierte Produktion von (akademischem) Wissen – ein Wettbewerbsspiel
The context of discovery for the following reflections is lifeworld practice, namely an unease with the rituals of academic practice that supposedly serve quality assurance. The thesis is advanced that, in a global context and under conditions of competition, results are increasingly generated in an industrialised manner, for which a fair amount of empirical evidence can be adduced. This industrialisation process has its roots in a differentiation between and within disciplines, in the use of a world language and a focus on publication outlets from the Anglo-Saxon research sphere, in digitalisation, in the sheer quantity of publications, and in the "audit society" (Power, 1997), which has developed an obsession with measurement and comparison. The latter observably leads to an increase in fraud, manipulation and abuse, as well as to a heightened degree of self-reference, which makes the commonly causally interpreted correlations between R&D expenditure and economic growth questionable, at least with respect to social-scientific knowledge. On the side of the consequences of these observations, deficits in creativity, aesthetics and independent-mindedness, a decline in lifeworld relevance on the one hand and the unreflected support of domination on the other, as well as a growing speechlessness within and between disciplines, deserve closer examination.
URSULA SCHNEIDER has, after studies in economics and education, been responsible for the Institute of International Management at the Karl-Franzens-Universität since 1994. Before that she worked as an auditor and held chairs in various German states and in the USA. Research and teaching stays have taken her to the Collège d'Europe, St. Gallen, Australia, the USA, India, Thailand and Malaysia, allowing her to get to know different academic systems. Her research focuses on globalisation, knowledge production, organisation theory and the measurement of qualities. She has won several research prizes and published a large number of monographs and papers.


THE EDITORS
HERBERT HRACHOVEC, born 1947, is associate professor of Philosophy at the University of Vienna and a member of the University's academic senate. He has founded an electronic archive for philosophy texts and a repository for audio documents of a theoretical nature. For more detailed information see http://hrachovec.philo.at.
ALOIS PICHLER, born 1966, is a member of staff at the Department of Culture, Language and Information Technology (AKSIS) at UNIFOB AS, Bergen, Norway. He is Director of the Wittgenstein Archives at the University of Bergen and a member of the Executive Committee of the Austrian Ludwig Wittgenstein Society. Homepage: http://teksttek.aksis.uib.no/people/alois.