Psychology in Historical Context: Theories and Debates 9781138683846, 9781138683853, 9781315544304




Psychology in Historical Context
Theories and Debates

Psychology, the study of mind and behaviour, has developed as a unique discipline in its brief history. Whether as it currently takes place, or how it has been conducted over the past 140 years or so since it became recognized as a separate field of study, there has been constant debate on its identity as a science. Psychology in Historical Context: Theories and Debates examines this debate by tracing the emergence of Psychology from parent disciplines, such as philosophy and physiology, and analyzes key topics such as:

- the nature of science, itself a much misunderstood human activity often equated with natural science;
- the nature of the scientific method, and the relationship between data gathering and generalization;
- the nature of certainty and objectivity, and their relevance to understanding the kind of scientific discipline Psychology is today.

This engaging overview, written by renowned author Richard Gross, is an accessible account of the main conceptual themes and historical developments. Covering the core fields of individual differences, cognitive, social, and developmental psychology, as well as evolutionary and biopsychology, it will enable readers to understand how key ideas and theories have had impacts across a range of topics. This is the only concise textbook to give students a thorough grounding in the major conceptual ideas within the field, as well as the key figures whose ideas have helped to shape it.

Richard Gross has been writing Psychology texts for both undergraduate and A-level students for 30 years. He has a particular interest in the philosophical aspects of Psychology, including the nature of the discipline, the free will/determinism debate, and the defining features of personhood.

Psychology in Historical Context
Theories and Debates

Richard Gross

First published 2018 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

and by Routledge
711 Third Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2018 Richard D. Gross

The right of Richard D. Gross to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
A catalog record for this book has been requested

ISBN: 978-1-138-68384-6
ISBN: 978-1-138-68385-3
ISBN: 978-1-315-54430-4

Typeset in Frutiger and Sabon by Wearset Ltd, Boldon, Tyne and Wear

To Mike Stanley (1944–2016) Much missed friend and colleague

Contents

Preface ix
Acknowledgements xi
1 Historical perspectives: Psychology as the study of … what? 1
2 Scientific perspectives: Psychology as the study of … how? 29
3 Challenging the mainstream: new paradigms for old 67
4 People as Psychologists: common sense Psychology 91
5 People as organisms: Biopsychology 109
6 People as environmentally controlled organisms: Behaviourism 133
7 People as information processors: Cognitive Psychology 157
8 Humans as an evolved species: Evolutionary Psychology 181
9 Individuals as driven by unconscious forces: Psychodynamic Psychology 203
10 People as self-determining organisms: Humanistic-phenomenological and Positive Psychology 223
11 People as diverse: group and individual differences 245
12 People as selves: subjectivity, individuality, and social construction of identity 271
13 People as deviant: psychiatry and the construction of madness 293
References 318
Index 345

Preface

The question ‘What is Psychology?’ just won’t go away. As much now as when it began to emerge as a separate discipline, Psychology has always had an identity crisis: what kind of discipline is it? Is it a science? If so, what kind of science? How does it resemble the natural sciences and how does it differ from them? Is it more of a social science, or does it somehow manage to blend science and the humanities in a way that no other discipline does?

In a way, Psychology in Historical Context is just one more attempt to suggest what Psychology’s identity might be. Unlike many such attempts, it considers the historical development of each of the major schools of thought/theoretical orientations that are commonly identified as forming part of ‘Psychology’. Such a historical approach inevitably also takes account of philosophical, social, cultural, and political influences, including the rise of science as a predominant source of knowledge and ‘truth’.

Psychology in Historical Context combines a critical approach to ‘mainstream’ Psychology with a brief overview of some of the major features of each of the main schools of thought. Recurring themes throughout the text include reductionism, essentialism, the nomothetic/idiographic distinction, free will and determinism, ethical issues, the nature of science and methodology, and the nature of knowledge and truth.

Psychology in Historical Context was originally conceived as a dedicated text for the compulsory Conceptual and Historical Issues in Psychology (CHIP) component of UK undergraduate degrees. While still aimed at this component, it’s hoped that it will have much wider appeal as an introductory text: rather than being topic-based – as the majority of introductory Psychology texts are – it is based around different conceptions of what human beings are like as promoted by different schools of thought, including their past and present influence. This is a way of considering what Psychology’s subject matter is.
By discussing these schools of thought within the context of the nature of science – its methods and the resulting knowledge it produces – Psychology in Historical Context also considers the kind of (scientific) discipline Psychology is – and should be.

Embedded throughout the text are several ‘Pause for Thought’ questions relating to methodological, theoretical, ethical, and other issues. They are all aimed at helping the reader assimilate the material. Some of these are numbered, denoting that suggested answers are provided at the end of the chapter. Those that aren’t numbered are more opinion-/experience-based questions, often suitable for seminar discussion and/or personal reflection.

Throughout the text I use ‘Psychology’ (upper case ‘P’) to denote the discipline (and, likewise, ‘Experimental Psychology’, ‘Clinical Psychology’, etc.), and ‘psychology’ (lower case ‘p’) to denote its subject matter (e.g. ‘human psychology’, cognition, motivation). This distinction is quite crucial when discussing the uniqueness of Psychology as a discipline.


(Whether referring to the discipline or the subject matter, the adjective ‘psychological’ is always used with a lower case.)

Finally, Psychology in Historical Context is aimed as much at provoking discussion and debate as it is at informing; to this extent, it is less a textbook and more a source book. While each chapter can be read in isolation from the rest, the first three form a mini-section, each building on the previous one(s); it would be useful to have read these before reading any of the subsequent chapters. However you go about it, I hope you find the text useful and thought-provoking.


Acknowledgements

I’d like to thank Russell George at Routledge for believing in this project – and me – while determining just what form it was going to take. It has proved to be one of the most challenging – but also satisfying – writing projects that I’ve undertaken, and Russell’s unwavering support has made it possible. I’d also like to thank all those involved in the production of this book at its various stages, including, recently, Alex Howard.


Chapter 1 Historical perspectives Psychology as the study of … what?

If there’s a logical place to start one’s study of Psychology, it’s by looking at how it came to be where it is now; in other words, its history. But as we shall see below, this isn’t as straightforward as it may sound: there are different ways of ‘doing history’ and different resultant histories. Put another way, there’s more than one history of Psychology.

Some may disagree with the premise that we should start with Psychology’s past, claiming instead that the logical starting point is to decide what Psychology is about, its subject-matter. But again, this too is a matter of debate and disagreement. In terms of one of its histories, Psychology’s subject-matter (its ontology) is defined differently by different schools of thought or theoretical approaches, which have developed over time (roughly, the past 140 years, albeit with considerable overlap between them). Part of this debate relates to similarities and differences between human beings and non-human animals; sometimes this is addressed directly, sometimes it’s ignored altogether.

These approaches (such as Structuralism, Behaviourism, and Cognitive Psychology) differ not only in terms of what they consider the appropriate subject-matter to be, but also in terms of the methods used (or advocated) for studying this subject-matter. This relates to the debate regarding the nature of science (in general) and the validity and appropriateness of using certain methods to investigate human beings/people (in particular).

The implication of the preceding paragraphs is that, regardless of how we view Psychology’s history, and regardless of how its subject-matter is defined, Psychology adopts a scientific approach. This, in turn, raises two major questions: (1) What do we mean by science? And (2) what kind of science is/should Psychology be? These questions relate to epistemology and methodology: the nature of the knowledge we are trying to acquire and the methods used to acquire it.
These are discussed in the remainder of this chapter.


Psychology’s histories

The contribution of Wilhelm Wundt: founding father or origin myth?

BOX 1.1 KEY FIGURE: Wilhelm Wundt (1832–1920)

- Wundt originally trained as a doctor, practising briefly as an assistant pathologist, before studying physiology. In between, he had conducted his own experimental research into aspects of anatomy and physiology.
- He worked at the University of Heidelberg, in Germany, as assistant to the famous physiologist Hermann Helmholtz. (Helmholtz studied the speed of the nerve impulse in sensory and motor nerves, colour vision, and was the first to propose ‘unconscious inference’ in visual perception: see Gross, 2015.)
- Like Helmholtz and Fechner (see Box 1.2), Wundt was beginning to explore ways of subjecting psychological processes to experimental tests.
- He believed that there were now sufficient grounds for establishing a whole new field of Experimental Psychology; this could be taught in universities alongside more traditional subjects (including both the natural sciences and the humanities).
- Wundt set out this proposal in the introduction to his Contributions to the Theory of Sensory Perception (1862).

Figure 1.1 Wilhelm Wundt.

On introductory Psychology courses and in introductory textbooks, the history of Psychology is often presented in terms of the emergence of a new, separate discipline during the second half of the nineteenth century from its ‘parent’ disciplines of physiology, philosophy, and biology (among others). The year 1879 is often cited as Psychology’s ‘birthday’, when Wilhelm Wundt opened the first laboratory (in Leipzig, Germany) devoted exclusively to the study of human conscious thought. The so-called laboratory was actually a small, single room used for demonstration purposes. This was later converted into a ‘private institute’ of Experimental Psychology.

Wundt aimed to explore the structure of human thought – hence, his approach is commonly referred to as structuralism; it’s also commonly referred to as introspectionism, based on his use of the method of introspection by which participants were trained to observe their own mental states. Introspection’s aim was to analyse conscious thought into its basic elements and perception into its constituent sensations (much as chemists analyse compounds into elements). Wundt and his co-workers recorded and measured the results of their introspections under controlled conditions using the same physical surroundings, the same ‘stimuli’ (such as a clicking metronome), the same verbal instructions to each participant, and so on. This emphasis on measurement and control marked the separation of the ‘new Psychology’ from philosophy.


The aim of Experimental Psychology was to systematically vary the stimuli and conditions that produce differing mental states; Wundt believed that it should be possible to manipulate and observe the facts of consciousness (conscious mental states) using introspection in a comparable way to how research is conducted in physics, chemistry, and physiology. For these reasons, Wundt (together with William James – see below) is traditionally regarded as the ‘founder’ of the new science of Experimental Psychology. As he stated in the preface to his Principles of Physiological Psychology (1874, 1974), ‘The work I here present to the public is an attempt to mark out a new domain of science’ (quoted in Fancher, 1979).

People from all over the world came to visit the institute, returning to their own countries to establish laboratories modelled on that of Wundt. Wundt founded the journal Philosophische Studien (‘Philosophical Studies’), which, despite its name, was the world’s first to be primarily devoted to Experimental Psychology; this reflected the popularity and success of the ‘new Psychology’.

While this account seems straightforward and uncontroversial, there are some major problems with it. First, some have challenged the (widely held) claim that Wundt was the founder of Experimental Psychology (as described in Box 1.2).

BOX 1.2

- According to Richards (2010), Gustav Fechner’s (1860 [1966]) Elemente der Psychophysik (‘Elements of Psychophysics’) is generally regarded as the work that marks the beginning of Experimental Psychology. According to Bunn (2010), Fechner’s (1801–1887) greatest achievement was to show how Psychology could initiate a programme of systematic empirical enquiry (1) without possessing any standard units of measurement and (2) without committing the ‘Psychologists’ fallacy’, according to which the analysis of subjective experience is confused with objective reality (Leary, 1990). Fechner claimed that Psychology’s task was to search for a functional relationship between the ‘physical and the psychical that would accurately express their general interdependence’ (psychophysics).
- Building on the work of E.H. Weber, Fechner recorded the relationship between changes in the magnitude of a stimulus as measured objectively and as experienced. Weber had earlier discovered that the just noticeable difference (j.n.d.) – the amount of difference needed for a change to be perceived – was a function of the size of the stimulus. Fechner proposed that sensation is proportional to the logarithm of the stimulus intensity. This generalized formula was a way of quantifying experienced, as opposed to objectively measured, size (or magnitude) (the ‘Weber–Fechner law’).
- According to Bunn (2010), this remains Psychology’s sole claim to having formulated a scientific law (although it only holds under certain conditions). Ironically, Fechner’s aim – and belief – was to help solve the mind–body (or mind–brain) problem (see Chapter 2); his legacy, instead, was the subdiscipline of Psychophysics (Richards, 2010), which remains the ‘gold standard’ for Experimental Psychology (Robinson, 2010).
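The logarithmic relationship summarized in Box 1.2 can be written out formally. The following is a sketch using the conventional psychophysics notation (ΔI for the j.n.d., I for stimulus intensity, S for sensed magnitude, I_0 for the absolute threshold), none of which appears in the text itself:

```latex
% Weber's law: the just noticeable difference \Delta I is a
% constant fraction k of the stimulus intensity I
\frac{\Delta I}{I} = k

% Fechner's step: treat each j.n.d. as one unit of sensation,
% so dS = \frac{1}{k}\,\frac{dI}{I}; integrating from the absolute
% threshold I_0 (where S = 0) gives the Weber--Fechner law
S = \frac{1}{k} \ln \frac{I}{I_0}
```

Equal ratios of physical intensity thus produce equal increments of sensation, which is the sense in which the law quantifies experienced rather than objectively measured magnitude; as Bunn notes, it holds only under certain conditions (roughly, a middle range of intensities).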

Consistent with the brief description of Psychophysics given in Box 1.2, Wundt believed that introspection was only applicable to psychophysiological phenomena, that is, ‘lower mental processes’ (or immediate objects of conscious awareness), such as sensations, reaction times, and attention. Wundt’s and Fechner’s shared belief that the experimental method was restricted to the investigation of the most basic psychological mechanisms is often overlooked in accounts of structuralism and Wundt’s (claimed) pioneering research in Leipzig. These mechanisms, he argued, are amenable to study using the methods of the Naturwissenschaften (natural sciences), with the experiment at their heart.

However, in order to study memory, thinking, language, personality, social behaviour, myth, and cultural practices (the ‘higher mental processes’), we need to study communities of people (Völkerpsychologie). Human minds exist within human communities and are too complex to be amenable to experimental manipulation (Danziger, 1990). Instead, these higher mental processes – with language underpinning all the other collective/social processes (see the discussion of Social Constructionism below) – can only be studied using the methods of the Geisteswissenschaften: the social or human sciences and the humanities. (This distinction is discussed further below.)

Despite Wundt’s beliefs regarding the limitations of the experimental method, it has been portrayed as being of paramount importance in Psychology’s construction of itself as a natural science. As a consequence, Jones and Elcock (2001) believe that Wundt’s advocacy of this approach for the study of certain psychophysiological processes has been blown out of all proportion, creating what they call an ‘origin myth’ – a distorted account of how Psychology ‘began’.

According to Richards (2010), Wundt’s importance lies largely in his creation of the experimental laboratory – which included some (for the time) high-tech equipment that helped promote the image of Psychology as a hard scientific discipline (Draaisma and de Rijcke, 2001) – combined with the fact that he attracted large numbers of American postgraduate students who laid the foundations of American Psychology during the 1880s and 1890s. However, most of those returning Americans (including the English-born Titchener, who emigrated to the US) failed to establish his introspective methodology as a permanent approach within American Psychology. Although Wundt helped to put Experimental Psychology ‘on the map’, beyond that:

his status appears to derive more from the symbolic significance he acquired for others than the success of his Psychology. The discipline wanted a founding father with good experimental scientific credentials, and his American ex-students naturally revered him as their most influential teacher even while subsequently abandoning most of what he taught them. (Richards, 2010, p. 39)

As we shall see below, the most outspoken critic of Wundt’s introspectionism was the American founder of Behaviourism, John B. Watson. However, according to Fancher (1979), there were pre-existing, underlying cultural differences between Germany and the US that made it difficult for American Psychologists to embrace Wundt’s work. As a representative of the German intellectual tradition that was interested in the mind in general, Wundt wanted to discover the universal characteristics of the mind which can account for the universal aspects of human experience. By contrast:

Americans, with their pioneer tradition and historical emphasis on individuality, were more concerned with questions of individual differences … and the usefulness of those differences in the struggle for survival and success in a socially fluid atmosphere. These attitudes made Americans especially receptive to Darwin’s ideas about individual variation, evolution by natural selection, and the ‘survival of the fittest’ when they appeared in the nineteenth century. (Fancher, 1979, p. 147)


The contribution of William James: Functionalism

BOX 1.3 KEY FIGURE: William James (1842–1910)

- According to Fancher (1979), William James was arguably the greatest writer and teacher Psychology has ever had.
- The older brother of the famous novelist Henry James, William went to Harvard (in 1861) to study chemistry, but he soon shifted his interest to physiology. Like Wundt, James trained to be a doctor.
- He interrupted his medical studies in order to go to Germany (staying from 1867 to 1868), largely for health reasons. While there, he read Wundt’s article (‘Recent Advances in the Field of Physiological Psychology’), which persuaded him that the time had come for Psychology to begin to be a science.
- Still in poor health, he returned to Harvard to complete his medical training. But he finally found his niche when he started teaching physiology and anatomy there. He then changed his course title to ‘The Relations between Physiology and Psychology’.
- James never founded an institute for psychological research, and, in fact, did relatively little research himself. But he used his laboratory to enrich his classroom presentations, and his classic textbook, The Principles of Psychology (1890), was a tremendously popular success, making Psychology interesting and personally relevant. The two-volume ‘masterpiece’ (Richards, 2010) includes chapters on brain function (see Chapter 5 of this volume), habit, the stream of consciousness, the self (see Chapter 12), attention, association, the perception of time, memory (see Chapter 7), perception, instinct, free will (see text below), and emotion (see text below). The Principles gave us the immortal definition of Psychology as ‘the science of mental life’.
- Following the publication of The Principles, James became increasingly interested in philosophy and thought of himself less and less as a Psychologist. However, he was the first American to draw attention to the work of Sigmund Freud, who, at the time, was still a rather obscure neurologist from Vienna (see Chapter 9).

According to Fancher (1979), what James proposed was not so much a theory as a point of view (as much philosophical as psychological); this directly inspired Functionalism (or Pragmatism), according to which ideas must be useful and meaningful to people’s lives. For example, he emphasized the functions of consciousness rather than its content (which is consistent with Darwinian views regarding why consciousness evolved – see Chapters 5 and 7). A good illustration of this Functionalist/Pragmatist approach relates to the debate regarding free will (see Box 1.4).

- What do you understand by the term ‘free will’?
- To what extent is the possession of free will part of our (common sense) concept of a person?
- What do you understand by the term ‘determinism’ in the context of science?
- To what extent do the terms free will and determinism conflict? Are they necessarily mutually exclusive?


BOX 1.4 James’ ‘soft determinism’

- In The Principles, James related ‘the will’ to attention. He described effort, or the sensation of voluntary effort, as the primary subjective indication that an act of will has occurred.
- But should a scientific Psychology recognize the existence of free will? If having free will is part of our common sense, taken-for-granted understanding of what it means to be a person, is this compatible with a scientific understanding of the mind?
- Belief in determinism seems to fit best with the scientific view of the world, while belief in free will seems required by our social, moral, political, and legal practices, as well as our personal, subjective experience.
- Faced with this dilemma, James concluded that both views are true (i.e. useful) in their respective domains: belief in determinism works best in the world of science, while belief in free will works best in our everyday interactions with other people and social institutions.
- Psychology as a science can only progress by assuming that (‘hard’) determinism applies to human thought and behaviour as it does to the physical, inanimate world, but this doesn’t mean that we have to abandon the idea of free will in other contexts. Scientific explanation doesn’t provide the ‘ultimate’ explanation; as Fancher and Rutherford (2012) say, ‘Both views were essentially articles of faith, incapable of absolute proof or disproof’ (p. 313).
- A second solution to the dilemma is what James called soft determinism (what Locke, Hume, and other empiricist philosophers – see text below – called compatibilism (Flanagan, 1984)). According to soft determinism, the question of free will rests on the type(s) of cause our behaviour has, not whether it is caused or not caused.
- According to James, if the proximate (i.e. immediate) cause of our actions is processing by a system such as conscious mental life (CML: consciousness itself, purposefulness, personality, and personal continuity), then they count as free, rational, voluntary, purposive actions.
- As far as hard determinism is concerned, CML is itself caused; the immediate causes are only part of the total causal chain that results in the behaviour we’re trying to explain. According to this view, if our behaviour is caused at all (i.e. it isn’t random, the opposite of ‘free’), then there’s no sense in which we can be said to act freely.

Functionalism, in turn, helped stimulate interest in individual differences, which determine how well or poorly people will adapt to their environments. James’ theory of emotion (see Gross, 2015) claimed that behaviour (such as running away) produces changes in our conscious experience (e.g. fear). So, James inverted the common sense view (according to which we run away because we’re frightened) by claiming that we experience fear as a result of running away. Whatever the rights and wrongs of this counter-intuitive account, it had the effect of making consciousness seem less important than previously believed; this, in turn, helped lead American Psychology away from a focus on mentalism and towards behaviour (Leahey, 2000).


Behaviourism: Watson’s new brand of Psychology

BOX 1.5 KEY THINKER: John Broadus Watson (1878–1956)

- Having been steered in a religious direction by his deeply pious mother, Watson developed a fierce rebellious streak that remained part of his character.
- His first academic post – as a PhD student – in 1900 was at the University of Chicago, in the still undivided Philosophy–Psychology department, headed by John Dewey (see Table 6.2).
- He objected to the introspective methods required for much of the psychological research involving people. What appealed to him was the animal research being conducted by Jacques Loeb (a staunchly mechanistic biologist who coined the term ‘tropism’) and Henry Donaldson, a neurologist who studied the nervous system of white rats.
- His doctoral research focused on the correlation between the increasing behavioural complexity of rats and the increasing growth of myelin sheaths around the neurons in their brains (see Chapter 5).
- Watson was highly regarded both within the department at Chicago and beyond. In 1908, he moved to Johns Hopkins University on a full professorship. In 1909 he became departmental head and also took on the role of editor of Psychological Review. From his position of power, he began pressing the university president to separate Psychology from Philosophy and to establish new links between Psychology and Biology.
- After taking charge at Johns Hopkins, Watson continued to teach courses based on the work of Wundt and James, while conducting his own animal research. But he became increasingly critical of the use of introspection; in particular, he argued that introspective reports were unreliable and impossible to verify: such reports are based exclusively on private, subjective experience, which means that the investigator has no means of accessing them to check their accuracy. Surely this was no way for a scientific Psychology to proceed!

Figure 1.2 John Broadus Watson.

(Based on Fancher and Rutherford, 2012)

Watson’s objections to early twentieth-century Psychology

However, there were other issues that Watson had with the state of Psychology in the early 1900s. According to Richards (2010):

1 As Watson saw it, Psychology’s continuing philosophical concerns (in particular, the nature of consciousness and the mind–body problem) kept it bogged down in intractable metaphysical debates that have little practical value.
2 He was committed to a positivist view of science, according to which only overt, visible, measurable phenomena are amenable to scientific investigation. This ruled out consciousness or ‘mind’ and ruled in behaviour.


3 Loeb’s (1901) Comparative Physiology of the Brain and Comparative Psychology adopted a highly reductionist approach, claiming that psychological questions can ultimately be answered in physiological terms. Watson was greatly impressed by Loeb’s ideas.
4 He believed that Psychology was too human-centred. Just as the then rapidly developing discipline of genetics was concerned with heredity as a general phenomenon (i.e. across species), so Psychology should be concerned with behaviour in general. Again, in the same way as geneticists were confining their empirical research to a single, convenient species (the fruit fly), so Psychologists could use the white (laboratory) rat as their ‘species of choice’. Rats could serve as a convenient ‘behaving organism’ for studying behaviour in general.
5 He was increasingly unhappy with Psychology’s hereditarian bias (the result of its commitment to an evolutionary theoretical orientation). This bias was incompatible with Watson’s belief that the aim of science was ‘prediction and control’ and the related belief that it should provide practical knowledge. While he had originally accepted a limited role for heredity, this was downgraded to a belief that ‘unlearned’ behaviour comprises a few physiologically controlled reflexes (see text below).

According to Skinner (1974), Watson was the first explicit Behaviourist. Famously, Watson’s 1913 article ‘Psychology as the Behaviourist views it’ is commonly referred to as his ‘Behaviourist Manifesto’:

Psychology as the behaviourist views it is a purely objective natural science. Its theoretical goal is the prediction and control of behaviour. Introspection forms no essential part of its methods, nor is the scientific value of its data dependent upon the readiness with which they lend themselves to interpretation in terms of consciousness. The behaviourist, in his efforts to get a unitary scheme of animal response, recognizes no dividing line between man and brute. The behaviour of a man, with all its refinement and complexity, forms only a part of the behaviourist’s total scheme of investigation. (Watson, 1913, p. 158)

Three features of Watson’s manifesto deserve special emphasis:

1 Psychology must be purely objective, excluding all subjective data or interpretations in terms of conscious experience. Whereas Wundt’s introspectionism/structuralism used objective observations of behaviour to supplement introspective data, Watson argued that these should be the sole and exclusive subject matter. He was redefining Psychology as ‘the science of behaviour’, instead of the ‘science of mental life’ (Fancher, 1979).
2 While Wundt was attempting to describe and explain conscious mental states, Watson’s goals were to predict and control.
3 Watson wanted to remove the traditional distinction between human beings and non-human animals. If, as Darwin had shown, humans evolved from more simple species, then it follows that human behaviour is simply a more complex form of the behaviour of other species; i.e. the difference is merely quantitative (one of degree) rather than qualitative (a difference of kind). Consequently, rats, cats, dogs, and pigeons were to become the major source of psychological data. Since ‘psychological’ now meant ‘behaviour’ rather than ‘consciousness’, animals that were convenient to study, and whose environments could easily be controlled, could replace people as experimental subjects.


Watson claimed that only by modelling itself on the natural sciences could Psychology legitimately call itself a science: only events/phenomena that can be intersubjectively verified (that is, agreed upon by two or more people) are suitable for scientific investigation. Cognition, thinking, believing, feeling, and so on are private events, inaccessible – and unverifiable – to anyone else, and so should be excluded from a science of Psychology. To the extent that most (if not all) Psychologists would agree with the principle of intersubjective verifiability, they would regard themselves as Methodological Behaviourists (Blackman, 1980; Skinner, 1987). Belief in the importance of empirical methods, especially the experiment, as a way of collecting quantifiable, statistically analysable data about human (and non-human) behaviour is now what most Psychologists share and practise (i.e. mainstream Psychology). So, what was revolutionary in 1913 has become ‘orthodox’ and taken for granted. (For a critique of mainstream Psychology, see Chapter 3.)

Watson’s legacy

According to Mills (1998), ‘Historians agree that behaviourism was the dominant force in the creation of modern American psychology’ (p. 1). In addition to the three major aspects of the manifesto highlighted above, Behaviourism made adjustment to change (what Watson called ‘habit’) the central focus of psychological research, and adopted a pragmatic commitment to using psychological research to solve real-world problems. Although others had also identified these themes, Watson did much after 1913 to flag them before both fellow Psychologists and the general public. Much of today’s Psychology still reflects Watson’s Methodological Behaviourism, according to which behaviour is taken as an index of events occurring ‘in other universes of discourse’ (Lattal and Rutherford, 2013, p. 3).

Pause for thought …
1 What do you think Lattal and Rutherford mean by this reference to ‘other universes of discourse’?

As Malone (1982) puts it, behaviour remains the ‘ambassador of the mind’. This is consistent with the claim made by Cognitive Psychologists that cognitive processes can only be inferred from people’s performance on a variety of experimental tests (of memory, attention, problem-solving, pattern recognition, etc.). While Watson’s concept of learning was narrowly focused on classical conditioning (as described by the Russian physiologist Ivan Pavlov – see Chapter 6), learning – broadly considered as adjustment to change – remains a major focus of psychological research, incorporating everything from post-traumatic stress to habituation to habit-forming drugs. Similarly, the applications of Psychology (Clinical, Educational, etc.) continue to be its bread and butter (see Chapter 3); this gives credibility to Watson’s conviction that Psychology should be practically relevant.

The actual distinctiveness of Watson’s position has been challenged by those who, with the hindsight of history, argue that Behaviourism was not so much a break from the past as an emerging synthesis of pragmatism, Comparative Psychology, Experimental Psychology, and various other psychological and historical viewpoints (Lattal and Rutherford, 2013). It was in fact Yerkes and Morgulis who introduced the English-speaking world to Pavlov’s


work on conditioning in 1909, rather than Watson himself. But regardless of whether Watson’s ideas were completely new or revolutionary, no one could have championed Behaviourism more forcefully or passionately than Watson; he believed it could change the world (Lattal and Rutherford, 2013):

If not a watershed moment, 1913 at least marked a gradual turning point that eventually led to the instantiation of behaviourist theory as the dominant (although never hegemonic) theoretical approach in American departments of psychology for much of the mid-20th century. (Lattal and Rutherford, 2013, p. 5)

Samelson (1981, 1985) has closely examined the degree of acceptance of Watson’s ideas among academic Psychologists between 1910 and 1940. He discovered that Watson was at first treated very much as an extremist whose views were considered to be peripheral in the history of Psychology. In fact, most of the Psychology establishment were highly sceptical up to 1920. However, Watson’s views on education and child-rearing became well known to the general public through articles written for this audience; also, during the 1920s, Clark L. Hull (1884–1952), a clear supporter of Watson’s ideas, was put in charge of the newly created Centre for the Study of Human Behaviour (after 1929 the Institute of Human Relations). While it’s dangerous to speak of a Behaviourist domination of Experimental Psychology before the late 1920s, it becomes more appropriate to do so for the period 1930–1956 (Murray, 1995).

In addition to the influence of Behaviourism (in one form or another) on applied areas of Psychology (especially Clinical) and mainstream topics in Psychology, Watson’s manifesto generated a huge amount of debate and controversy:

all agree that as time passed since its publication it became a focal point for controversy stoked by historians, critics, and behaviourists alike. (Lattal and Rutherford, 2013, p. 6)

Pause for thought …
2 Name some applied areas of Psychology and mainstream topics to which classical/Pavlovian conditioning has made a contribution.

Skinner’s Radical Behaviourism

BOX 1.6 KEY FIGURE: B.F. Skinner (1904–1990)
• Skinner had originally wanted to be a professional author of fiction (novels and short stories), but despite having some articles published in various newspapers and magazines, he abandoned this ambition in favour of Psychology.
• However, later in his career he did write a utopian novel, Walden Two, a fictional account of a society modelled on his real-life – and life-long – research into operant conditioning (see Chapter 6).


• He’d read about Watson’s work and from this he adopted a life-long enthusiasm for positivism. This was reinforced during his time at Harvard as a doctoral student, where he came under the influence of William Crozier, a hardline positivist who rejected any hypotheses regarding mechanisms that might mediate between environmental influences and behaviour. This is reflected in Skinner’s rejection of free will (see Box 1.7).
• His first teaching post was at the University of Minnesota (1936). During the Second World War, he began training pigeons to peck at a mark to guide a missile towards a target. The project was abandoned with the advent of radar, but it is believed to have worked.
• In 1948, Skinner returned to Harvard as a professor of Psychology.
• Not only did he found what’s known as Radical Behaviourism, with behaviour analysis at its centre (see text below and Chapter 6), but he strongly believed that its methods could be used to make people behave morally. According to Harré (2006):

As his conviction of the rightness of his conception of psychology grew, Skinner adopted an almost messianic ambition to redesign the human world and to redeem its evils by the universal application of the techniques of operant conditioning. (p. 18)

• The techniques of operant conditioning are described in detail in Chapter 6.

Skinner, generally regarded as the ‘arch-behaviourist’, rejected Watson’s insistence on ‘truth by agreement’. According to Radical Behaviourism, cognitions are covert behaviours (‘within the skin’) that should be studied by Psychologists along with overt behaviours (capable of being observed by two or more people). He was not ‘against cognitions’, but argued that so-called mental activities are ‘metaphors or explanatory fictions’: behaviour attributed to them can be explained more effectively in other ways, specifically the principles of reinforcement derived from his experimental work with rats and pigeons. What’s ‘radical’ about Radical Behaviourism is the claim that thoughts, feelings, sensations, and other private events cannot be used to explain behaviour but are to be explained through behaviour analysis (BA). Since private events cannot be manipulated, they cannot serve as independent variables – but they can serve as dependent variables (in other words, they aren’t causes, but effects). Consistent with Radical Behaviourism is Skinner’s claim (shared by Freud) that our common sense belief in free will is an illusion (see Chapter 5).

BOX 1.7 Skinner’s view of free will as an illusion
• In Beyond Freedom and Dignity (1971), Skinner argued that behavioural freedom is an illusion. In James’ terms, he was as hard a determinist as they come.
• Radical Behaviourists regard their view of behaviour as the most scientific, because it provides an account in terms of material causes, all of which can be objectively defined and measured. Free will is just one more explanatory fiction.
• Because the causes of our behaviour are often hidden from us in the environment, the free will myth/illusion survives. So, what is the nature of these causes?


• When we act in order to achieve certain desirable outcomes (positive reinforcers), or when what we do is dictated by force or punishment (or by their threat, i.e. negative reinforcement), we’re clearly not acting freely. However, most of the time we’re unaware of the environmental causes of our behaviour; hence, it looks and feels as if we are choosing to act that way.
• Even when we believe (or ‘know’) that we’re not being forced to behave in a particular way (or being coerced by negative reinforcement), our behaviour is still determined by the pursuit of past positive reinforcers. When we perceive others as behaving freely, we’re simply ignorant of their reinforcement histories.

• Try to formulate some arguments against Skinner’s claims as outlined in Box 1.7.
• For example, he maintains that the only reason anybody behaves lawfully is (1) to avoid being fined or imprisoned or (2) because of past positive reinforcement for obeying the law. Are these the only possible explanations?

It could be argued that Skinner is merely assuming that it’s the threat of imprisonment or past reinforcement for ‘good behaviour’. But people may sometimes choose to behave lawfully – either because their sense of right and wrong coincides with what the law dictates as lawful/unlawful, or because the risk of getting caught outweighs the benefits to be gained from the unlawful behaviour. Skinner fails to (and perhaps is unable to) offer any additional evidence to show that his argument (‘when someone appears to be making a choice, we have simply failed to identify the real pay-offs’) is more valid than the opposing argument (‘when someone appears to be making a choice, they probably are’). As Morea (1990) says, Skinner’s argument seems to be of the ‘Heads I win, tails you lose’ variety. (Ironically, this is an argument that Popper (1959) uses against Freud’s psychoanalytic theory, rejecting it as unscientific because there’s no way of proving Freud wrong – see Chapter 9. Skinner, as the ‘arch-behaviourist’ and advocate of experimental manipulation, really should know better!)

However, some years after publication of Beyond Freedom and Dignity, Skinner suggested that instead of our behaviour being determined by positive and negative reinforcement, it is merely shaped and modified by them; this allows for some active part to be played by the actor. Indeed, Skinner (1986) states that operant behaviour is the field of ‘intention, purpose, and expectation’. He had always contrasted operant behaviour with respondent behaviour, the former being emitted by the organism, the latter elicited by some triggering stimulus (see Chapter 6). These types of behaviour can be thought of as voluntary and involuntary, respectively.
However, O’Donohue and Ferguson (2001) argue that, just because Skinner uses the word ‘purposive’ to describe operant behaviour (aimed at changing the environment and producing particular consequences), we shouldn’t take this to imply that the individual has free will.

• If ‘voluntary’ implies ‘emitted’ rather than ‘elicited’, and if ‘voluntary’ or ‘purposive’ don’t imply ‘free’, try to identify some other meanings of ‘free’ which may be more challenging to Skinner’s claim that free will is an illusion.


Gross (2014) identifies a number of definitions or senses in which the term is used:

1 Having a choice (i.e. we could have behaved differently given the same circumstances).
2 Not being coerced or constrained (i.e. we’re not having a loaded gun put to our heads – literally or metaphorically).
3 Voluntary (as related to the subjective experience/phenomenology of voluntary movement: i.e. what it feels like to voluntarily move one’s arm upwards – see Chapter 5).
4 Deliberate control (i.e. the opposite of automatic, as in consciously trying to make a decision or solve a problem).

If we accept one or more of these as capturing what ‘free’ conveys, then Skinner’s account is extremely limited; while it may be consistent with his account of operant conditioning (which includes the processes of behaviour shaping and modification), his failure to grant cognitive processes any kind of causal power means that the above senses of ‘free’ are totally overlooked.

As Leslie (2002) maintains, from the perspective of BA, the theories of Cognitive Psychology (see below and Chapter 7) are doomed to fail, based on a mistaken assumption about the necessary features of psychological explanations: both overt (visible) behaviour and the other apparently ‘private’ aspects of human psychology arise from interaction with the environment. While Methodological Behaviourism proposes to ignore such inner states (because they are inaccessible), Radical Behaviourism ignores them only as variables used to explain behaviour (because they are irrelevant): they can be translated into the language of reinforcement theory (Garrett, 1996). According to Nye (2000), Skinner’s ideas are also radical because he applied the same type of analysis to both covert and overt behaviour.

According to Skinner (1974):

Behaviourism is not the science of human behaviour; it is the philosophy of that science. Some of the questions it asks are these: Is such a science really possible? Can it account for every aspect of human behaviour? What methods can it use? Are its laws as valid as those of physics and biology? Will it lead to a technology, and if so, what role will it play in human affairs? (p. 3)

(Many of these questions can be asked of Psychology as a whole and are addressed in Chapter 2.)

So, Radical Behaviourism isn’t a scientific law or set of empirical findings. Rather, it is meta-scientific: it attempts to define what a science of behaviour should look like. As O’Donohue and Ferguson (2001) put it, Radical Behaviourism is a philosophy of Psychology. At the end of his life, Skinner (1990) stated that ‘Cognitive science is the creation science of psychology’ (p. 1209).

• What do you understand by ‘creation science’? (Another commonly used term is ‘creationism’.)
• What do you think Skinner means when he says that Cognitive Psychology is the creation science of Psychology? (Recall what he says about mental events as ‘explanatory fictions’.)


According to creationists, the world was purposely designed by ‘God’ (or some creator). By analogy, Skinner regards Cognitive Psychology as adopting the same view of human behaviour, i.e. it’s produced by our beliefs, intentions, etc. As we saw above, Skinner rejects mental events as having any causal (and, hence, explanatory) properties. As a believer in the evolutionary continuity of species, Skinner would reject creationism in favour of evolutionary theory (see Chapter 8). According to Lattal and Rutherford (2013), Skinner’s BA continues to dominate mainstream Behaviourism, in both theory and practice, and is a far cry from Watson’s original version.

Gestalt Psychology and the cognitive revolution

According to Murray (1995), Behaviourism had a delayed effect: it wasn’t until about 1930 that Anglo-American Psychologists took Watson’s ideas seriously. But then, especially under the influence of Hull, Behaviourism came to dominate Experimental Psychology until the mid-1950s. At about that time, several researchers proposed turning the clock back and allowing mentalistic terms such as ‘images’, ‘goals’, and ‘the self’ to re-enter scientific Psychology (terms which had been widely used at the end of the nineteenth century). However, in Germany, the use of such terms had never been ‘outlawed’ as it was in the US in particular. Following the publication in 1912 of an influential paper (on the perception of apparent movement) by Max Wertheimer (1880–1943), and up until 1933, the Gestalt movement came to dominate German Psychology. Its influence then shifted to North America with the enforced emigration of the three pioneers of Gestalt Psychology: Wertheimer, Kurt Koffka (1886–1941), and Wolfgang Köhler (1887–1967). Only some of their writings were translated into English, including Koffka’s (1935) Principles of Gestalt Psychology, but they seemed to have little influence compared with the ideas of Hull, Skinner, and other neo-behaviourists (Murray, 1995).

Gestalt Psychology and empiricism

BOX 1.8 Empiricism and its influence on natural science
• Watson’s Behaviourism represents an extreme form of empiricism, a philosophical theory associated mainly with seventeenth- and eighteenth-century British philosophers, in particular John Locke (1632–1704), David Hume (1711–1776), and Bishop Berkeley (1685–1753).
• Central to empiricist philosophy is the belief that the only source of true knowledge about the world is sensory experience – what comes to us via our senses or what can be inferred about the relationships between such sensory facts. This belief proved to be one of the major influences on the development of physics and chemistry.
• It follows from this belief that, as applied to explaining human behaviour, it is environmental influences that are key (i.e. events that impinge on the individual from outside).
• The word ‘empirical’ is often used synonymously with ‘scientific’, implying that what scientists do is carry out experiments and observations as means of collecting ‘facts’ about the world. This, in turn, implies other very important assumptions


regarding the nature of scientific activity and its relationship to the phenomena under investigation:

1 An empirical approach is different from a theoretical one, since the latter doesn’t involve the use of experiment, measurement, and other forms of data collection. Philosophers, rather than scientists, use theory and rational argument (as opposed to data collection) in trying to establish the truth about the world.
2 The objective truth about the world (what it is ‘really like’) can be established through properly controlled experiments and other empirical methods.
3 Science can tell us about reality independently of the scientist and of the activity of trying to observe/measure it.

(These and other related issues are examined in Chapter 2 when discussing the nature of science and, in particular, their relevance and application to the study of human behaviour.)

According to Murray (1995), like Behaviourism, Gestalt Psychology represented a rebellion against trends that had been current in German Psychology departments in the first decade of the twentieth century. But rather than being a rebellion aimed predominantly at introspection (as Watson’s had been), the Gestalt rebellion was aimed at the empiricism described in Box 1.8 and, in particular, at the claim that the starting point for a valid Psychology was the assumption that the brain took simple, single sensations and, by a process of association, identified, evaluated, and combined them. (It is this principle of association that lies at the heart of classical and operant conditioning, but Watson and Skinner argued that association between stimuli and responses occurs ‘automatically’, without any intervening cognitive events.) Instead, the Gestalt Psychologists argued that sensing, identifying, evaluating, and combining all form part of the same immediate process. The perceiver is presented with a representation of reality that is instantly ‘clear’ in the sense that she or he can act on the basis of accurate information about the environment.

The Gestaltists were rejecting Wundt’s structuralism, which – as we saw above – claimed that introspection could be used to analyse conscious experience into its most basic elements (sensations and feelings). Sensations are the raw sensory content of consciousness, devoid of all ‘meaning’ or interpretation; all conscious thoughts, ideas, and perceptions were assumed to be combinations of sensations. Introspection made it possible to cut through the learned categories and concepts that define our everyday experience of the world, and so expose the ‘building blocks’ of experience.
According to Murray (1995):

The whole thrust of their [Gestaltists’] intellectual endeavour, insofar as sensation and perception were concerned, was to stress the predominance of the whole over the parts (or, rather, the difference between the whole and the parts); individual parts are always judged by the subject, in a manner that depends on automatic functioning of the subject’s brain, within their global context. (p. 12; emphasis in original)

Indeed, Wertheimer, Koffka, and Köhler devoted their lives to trying to answer the question: ‘Why do brain processes tend to produce perceptual organizations of remarkable clearness of structure?’ (Köhler, 1969, p. 164). Part of the answer was provided by Christian von Ehrenfels (1859–1932), not himself a Gestaltist, but someone who anticipated the Gestalt School. He invented the term ‘form quality’ or ‘Gestalt quality’, as in the example of a square that not only consists of four lines combined in a particular way, but which also has ‘squareness’. It’s the squareness of a square that is immediately perceived, rather than the four lines that are somehow (unconsciously) combined to form the perception of a square. The perceived square has a quality of wholeness or independence of form (Gestalt quality). Von Ehrenfels anticipated the later work of Rubin (1915) on figure–ground perception: he stressed that if the Gestalt quality (e.g. squareness) is to emerge, it must somehow stand out (i.e. as figure) against its background (ground).

According to Boring (1950), the Gestalt School was new and different enough from the work of von Ehrenfels (and others who’d anticipated Gestalt ideas) to be considered a movement in itself. It was less devoted to philosophy and more to experimental demonstrations, many of which have become a familiar part of mainstream Psychology (such as those illustrating the laws of Prägnanz, including figure/ground, constancy, and closure). Best known for their studies of perception, the Gestalt Psychologists also conducted studies of memory and problem-solving. American Social Psychologist Solomon Asch, best known for his studies of conformity, was hugely influenced by Gestalt principles; this influence was perhaps seen most clearly in his studies of impression formation (e.g. the halo effect: Asch, 1946; see Gross, 2015).

While not claiming that the Gestalt Psychologists directly influenced the cognitive revolution (see below), Murray (1995) believes that they anticipated many of its findings; also, there’s a correspondence between their scientific terminology and the terminology that has become acceptable as a result of the cognitive revolution.
Murray also argues that the scientific terminology acceptable in a system of Cognitive Psychology will depend on the level of discourse: at a high level, terms like ‘image’, ‘goals’, and ‘self’ are acceptable, while at a lower level ‘habits’ (as used by Watson) might be acceptable. However, since humans have the ability to form ‘mental representations’ of the world that might include images, it’s doubtful whether a psychological system based entirely on ‘habits’ would be adequate to describe the mental life or overt behaviour of humans. (This relates to the debate regarding the kind of differences that exist between humans and non-human animals – see Gross, 2012. It also relates to the philosophical issue of reductionism – see Chapter 2.) Indeed, Gestalt Psychologists maintained that human behaviour – and probably that of apes, cats, and dogs – could only be properly described and explained if it is accepted that they do have mental representations such as images. More specifically, these mental representations need to concern possible future events, thus allowing the formulation of goals, plans, and intentions. Koffka believed that in humans it was the ‘ego’ (self) that would be in charge of this planning activity.

In the case of learning strings of words or nonsense syllables (list-learning), the neo-behaviourists had thought of the human memorizer as passively learning a sequence of speech responses (‘speech-habits’). In contrast, the Gestalt Psychologists stressed the active processes of re-organizing and re-structuring the material that had been memorized. This was extended to problem-solving in both humans and other primates, famously in Köhler’s studies involving chimpanzees (see Gross, 2015).

Just as during the period of Behaviourist domination there were those who argued for a mentalistic description even of animal behaviour (notably Edward C. Tolman (1886–1959) and his theory of latent learning – see Chapter 6), so the ‘cognitive revolution’ is more of an ongoing movement (Murray, 1995) (but see Box 1.9). Some have also questioned whether there really was a ‘cognitive revolution’: many of the scientific concepts used by cognitive scientists in the late 1900s had been widely used in a general European/American tradition dating back to the 1800s and had persisted in the Gestalt School.

A claim can be made … that the Gestalt psychologists defended mentalistic traditions throughout the period of behaviourist domination; it can also be claimed that the cognitive revolution involved a re-discovery of basic mentalistic concepts that had been discussed not only by the Gestalt psychologists and the later nineteenth century psychologists but by almost all psychologists going back to Aristotle. (Murray, 1995, p. 3)

Murray believes that Gestalt Psychology formed an important barrier against the spread of Behaviourism into Europe and also provided many original ideas which have yet to be absorbed properly into cognitive science. The ‘cognitive revolution’ is commonly taken to refer to the events described in Box 1.9.

BOX 1.9 The cognitive revolution
• According to Gardner (1985), George A. Miller fixed the exact date as 11 September 1956. At a Symposium on Information Theory held at the Massachusetts Institute of Technology (MIT) over 10–12 September, the 11th stood out by virtue of two particular paper presentations: (1) Allen Newell and Herbert Simon demonstrated that a proof of symbolic logic could be carried out by a computer; (2) Noam Chomsky proposed a new theory of language based on the idea that natural language shares certain properties with mathematics, including transformations (such as that from an active sentence to a passive sentence).
• Also, Miller described his work on the limitations of memory (published as ‘The Magical Number Seven, Plus or Minus Two’), which can be enhanced by chunking (see Gross, 2015).
• There was extensive discussion of the mental attributes of conscious experience. This was addressed in Donald Broadbent’s (1958) Perception and Communication, which, while referring frequently to Hull’s work, now conceptualized learned motor habits as residing in a long-term store. Items entered long-term memory usually by first entering consciousness. While an attention process filtered out the important from the unimportant material, the more the individual ‘processed’ or rehearsed the material in consciousness, the more likely it was to enter the long-term store. These processes were represented by a flowchart, including feedback systems; this was, at the time, a novel way of scientifically representing psychological functioning.
• Broadbent’s attempts to explain selective attention represent an information-processing approach; he also helped to popularize analogies between human memory systems and (other) physical storage systems. One of the best-known and, arguably, most debated and controversial is that between humans as information processors and computers (see Gross, 2014).
• Broadbent’s model of selective attention has become a key feature of mainstream Cognitive Psychology, along with alternative models and accounts of divided attention.


Pause for thought …
3 Try to give some examples of flowcharts as used in psychological theories or research. Try drawing one yourself to represent some process of your choice. What do you consider to be one or more advantages and disadvantages of flowcharts?

As we noted in Box 1.9, Broadbent drew heavily on Hull’s learning theory, and Broadbent’s later book (Behaviour, 1961) was described by Joynson (1974) as ‘representative of the position which the modern behaviourist adopts’ (p. 30). What this shows is that Behaviourism wasn’t ‘killed off’ by the ‘cognitive revolution’, and that it’s possible for one and the same Psychologist to believe in theories and ideas that would be considered Behaviourist and Cognitive simultaneously. (We’ll say more about Broadbent in relation to Psychology and common sense in Chapter 4.)

Apart from the presentations and publications listed in Box 1.9, 1956 is also significant for the publication of Bruner et al.’s A Study of Thinking. The authors discussed the role of different cognitive strategies that participants use when trying to figure out what unifying ‘concept’ is illustrated by a set of diverse objects or drawings (i.e. what features/attributes they have in common). This book represents another major break from the Behaviourist stranglehold that American Psychologists, in particular, had been in since 1913. ‘Thought’ had now finally reappeared on the scientific psychological map and was once more valid, respectable subject-matter.

According to Murray (1995):

From the 1950s onwards writings on cognitive psychology have proliferated to the extent that behaviourism has come to be seen as providing an historically important but old-fashioned and unhelpful vocabulary for describing the myriad-faceted aspects of human conscious experiences associated with memorizing, retrieving, decision-making, problem-solving, concept-forming, recognizing, attending, identifying, imaging, skill-learning, language-learning, and creative imagining. (p. 21)

Murray lists a number of other key landmarks from the 1960s onwards, including Ulric Neisser’s Cognitive Psychology (1967), which christened the discipline, Paivio’s Imagery and Verbal Processes (1971), Newell and Simon’s Human Problem Solving (1972) (largely responsible for introducing the computer analogy), Tulving’s Elements of Episodic Memory (1983; see Chapter 7), and Parallel Distributed Processing (McClelland et al., 1986). Despite these and a vast number of subsequent publications, the cognitive revolution did not spawn a unified school of cognitive scientists (Murray, 1995).

A critical history of Psychology

The history of Psychology presented above constitutes what Harris (2009) describes as a narrowly intellectual history:

Dissociated from national and world events, the history of psychology becomes a history of the intellectual discussions within elite groups such as university professors. Removed from the social world, the discoveries of psychologists are presented as the products of individual inspiration, motivated by a timeless quest for knowledge. (p. 21)

Such a historical account is a list of great theories and the great experiments designed to test the theories from which they are derived. When such theories conflict, their accuracy is determined on logical (as opposed to ideological or political) grounds. While there may be reference to the current zeitgeist (‘spirit of the time’), it is said to operate through the genius of the individual psychologist, in a process resembling telepathy more than Social Psychology (Schultz, 1969). Such histories are ‘Whiggish’, ‘presentist’, and ‘celebratory’. Box 1.10 explains what is meant by these terms.

BOX 1.10 The components of an intellectual history of Psychology
• Like political histories written in England when the Whig party (the political opponents of the Tories) was in power (during the period 1680–1850), Whiggish histories assume that the current status quo is a preordained result of historical progress, i.e. Psychology’s history describes gradual progression from ignorance to enlightenment.
• Guided by this false assumption, such histories view events according to the values and biases of the present; this produces an essentially non-historical presentist view of the past.
• From the perspective of Critical Psychology (see text below and Chapter 2), the most relevant feature of such histories is their failure to appreciate the validity of earlier scientific trends if they conflict with today’s orthodoxy. So, instead of appreciating past versions of Psychology by the standards of their time, they are categorized as either helping or hindering the development of currently accepted, dominant, theories; this reinforces today’s orthodoxy and provides its supporters with a celebratory account of their inevitable rise to power (Samelson, 1974).
• For example, Cognitive Psychologists may acknowledge Wundt’s pioneering research, at the same time ignoring its true nature. They’re likely to focus on the experiments that seem most familiar or relevant today, overlooking his social psychological and anthropological work. Remember that for Wundt, only the most basic human attributes/abilities could be investigated using the methodology of natural science (in particular the experiment), while distinctive human traits and behaviours required the methods used in the humanities (such as hermeneutics: see Chapter 3). The result is a view of Wundt as the father of today’s Cognitivists, robbing him of his wider, more philosophically complex vision (Bock, 1988).
• Also, the presentist history ignores the more egalitarian social relationships that existed in Wundt’s laboratory, where the roles of experiment designer and ‘subject’ were interchangeable. Treating one class of ‘participant’ unethically or ignoring their subjectivity – as happens today – would have been unthinkable (Danziger, 1990). But today’s distinction between subject/participant and experimenter seems so natural that it is projected back into Wundt’s era.
(Based on Harris, 2009)


• What do you think might be considered unethical about the distinction between subject/participant and experimenter?

(It may be useful to think about other social situations in which people play different roles and whether there are any complementary roles that are equivalent to those in the experimental context – see Chapter 3, pages 77–78.)

An unbroken lineage: another origin myth?

While the intellectual history offered above starts relatively recently with Wundt in the late 1800s, textbooks of the history of Psychology often go much further back chronologically. For example, Brysbaert and Rastle (2013) start by looking at the creation of writing and the discovery of numbers, before examining the views of the Ancient Greek philosophers, including Plato and Aristotle. Walsh et al. (2014), likewise, begin with the Ancient Greeks, but as we shall see later, their approach to history is very different. By starting with the Ancient Greeks, it’s easy to create the (mistaken) impression that they were concerned with the same problems and issues as present-day Psychologists (Jones and Elcock, 2001): the notion of an ‘unbroken lineage’ reinforces the idea that Psychology has a history as long as any other science, but Jones and Elcock see this as an ‘origin myth’. Why is this a mistaken, misleading view of Psychology’s history?

Have psychological concepts always been the same? Natural and psychological kinds

Just as psychological concepts are culturally relative (see e.g. Gross, 2014; Chapter 2), so they are historically relative. According to Danziger (1997), modern Psychology is deeply ahistorical: it fails to see psychological categories and concepts from a historical perspective. One reason for this is Psychology’s wish to identify itself with natural science:

Psychological research is supposed to be concerned with natural, not historical, objects, and its methods are considered to be those of natural science, not those of history. Psychology is committed to investigating processes like cognition, perception and motivation, as historically invariant phenomena of nature, not as historically determined social phenomena. Accordingly, it has strongly favoured the experimental approach of natural science and rejected the textual and archival methods of history. (Danziger, 1997, p. 9)

Related to this is the implicit belief in scientific progress (what we earlier referred to as ‘Whiggish’ history – see Box 1.10). As a scientific discipline develops, so knowledge accumulates and we move closer to ‘the truth’: the past simply consists of that which has been superseded.

The main reason for bothering with it at all is to celebrate progress, to congratulate ourselves for having arrived at the truth which the cleverest of our ancestors could only guess at. (Danziger, 1997, p. 9)

Implicit in this view is the assumption that psychological domains, such as ‘intelligence’, ‘personality’, and ‘motivation’ (and other traditional topics covered by textbooks and undergraduate courses) are true reflections of the actual structure of a timeless human nature (Psychology’s ontology or fundamental subject-matter). So, even though pre-twentieth-century writers may not have organized their reflections around such topics, they are still presented as having had theories about them. If changes in such categories are acknowledged at all, it is their present-day form that is taken to define their ‘true’ nature: older work is interesting only in so far as it anticipates what we now know to be true.

• Based on the earlier discussion of precursors of Cognitive Psychology in particular, to what extent do you agree with this conclusion?

According to Danziger (1997):

the essence of psychological categories (insofar as they have one) lies in their status as historically constructed objects. There are no ‘perennial problems’ driving the history of psychology through the ages…. At different times and in different places psychologically significant categories have been constructed and reconstructed in attempts to deal with different problems and to answer a variety of questions, many of them not essentially psychological at all. (p. 12)

Even the categories of physics are historical constructions, and so are subject to change. Danziger observes that Aristotle used concepts such as ‘psyche’ which have become equated with ‘mind’ through translation from Greek into Latin, then into various modern languages. He concludes that many of the fundamental categories of twentieth-century Psychology are, effectively, twentieth-century inventions: concepts such as ‘intelligence’, ‘behaviour’, and ‘learning’ have been given such radically changed meanings by modern Psychology that there were, simply, no earlier equivalents.

Similarly, Gergen (2001), in discussing postmodern Psychology (i.e. post-‘mainstream’ Psychology; see above and Chapter 3), argues that to understand psychological concepts we have to analyse the historical conditions giving rise to various conceptions of the mind. For example, how did our conceptions of mental life come into being, and what function did they play in cultural life? Psychologists are now joining historians in examining these questions, resulting in a substantial literature on the historical genesis and transformation of anger, child development, boredom, the sense of smell, the concept of an independent self (see Chapter 12), and more. Gergen goes on to say that the postmodern dialogues make us keenly aware of the historical and cultural location of the empiricist tradition in Psychology (the mainstream).

However, there are also other, much older, psychological categories (such as ‘emotion’, ‘motive’, ‘consciousness’, and ‘self-esteem’) whose meaning has been largely retained as used in modern Psychology. But even here, this continuity of meaning may only apply within our Western cultural tradition (i.e. the one in which Experimental Psychology evolved). As Gergen (2001) points out:

The view of human beings as constituted by universal mechanisms (cognitive, emotional etc.), causally related to environmental antecedents and behavioural consequences, is not derived from ‘what is the case’. Rather, this conception of the person is an outgrowth of a particular tradition, including both its linguistic genres and the institutions in which they are embedded … In this sense, what we take to be ‘the real’, what we believe to be transparently true about human functioning, is a byproduct of communal construction. (p. 3)

At the same time, the very notion of ‘Psychology’ doesn’t exist before the eighteenth century. While there was plenty of reflection about human experience and conduct, to imagine that this reflection was ‘psychological’ in our sense involves projecting the present onto the past. Before the eighteenth century, there were theological, philosophical, rhetorical, medical, aesthetic, and political categories – but no psychological ones (Danziger, 1997).

• What does Danziger’s analysis imply about psychological concepts/categories in relation to ‘natural kinds’?

Clearly, psychological concepts/categories do not refer to ‘natural kinds’, that is, aspects of reality that exist independently of any attempt to describe or explain them. While belief in the existence of the physical world (‘nature’) is something that only a philosopher might challenge, the claim that ‘attitudes’ and ‘intelligence’ exist in an equivalent way is much more difficult to defend. Although we might talk as if attitudes and intelligence existed independently of Psychology’s attempt to investigate them, they are mere constructions (or hypothetical constructs), used to make sense of observable behaviour. While they may not refer to ‘real’, objectively existing phenomena, they nevertheless have the power to influence people’s behaviour and experience (that is, their ‘psychology’). This unique feature of Psychology will be discussed further in Chapter 2.

Revisionist histories of Psychology

Revisionist historians of Psychology use history to criticize the status quo – challenging the ceremonial/celebratory history that supports it (Harris, 2009). According to Harris (2009):

Because the United States contains the vast majority of the world’s psychologists and is a highly psychologised society, the development of US psychology has offered much to revisionist historians … the history of psychology in the United States has been the intellectual terrain most contested since the 1960s. (pp. 23–24)

Harris goes on to say that revisionist histories have been written about European Psychology and, more recently, scholars have turned their attention to the developing world to study how Psychology is transformed as it crosses national and cultural boundaries. (See Chapter 11 for a discussion of intelligence and intelligence testing and Chapter 13 regarding mental disorder.)

An instance of revisionist history that Harris describes at length centres on the writings of Experimental Psychologist Leon Kamin, and palaeontologist and popularizer of science Stephen J. Gould, both highly critical of the concept of the IQ (intelligence quotient) and its measurement. According to Harris, their work represents a challenge to conventional (‘intellectual’) accounts while at the same time avoiding the earlier conspiratorial, simplistic ‘storytelling’, as in the claim (especially during the 1960s) that Psychology developed to serve the forces of racism, male chauvinism, and class bias. Kamin’s and Gould’s revisionist critique of IQ has itself been challenged by the ‘new historians’ of Psychology (see Chapter 11).

The new history of Psychology

By the 1980s, amateur historians such as Kamin and Gould had been replaced by scholars trained in the history of science, women’s history (see below), and social history. The result was a more mature and sophisticated view of Psychology’s past, including a critique of earlier histories; this is commonly referred to as ‘the new history of psychology’ (Harris, 2009). This new style of history was ‘more contextual, more critical, more archival, more inclusive, and more past-minded’ (Furomoto, 1989, p. 30). In other words, its practitioners focus on non-elite groups in Psychology, look at consumers of psychological information, use archival records to supplement the field’s official literature, and recreate the social context in which intellectual trends develop. By studying only elite Psychologists – almost all male – Gould inadvertently overlooked the rank-and-file opposition to racist notions of intelligence, which tended to be female (Harris, 2009). In conclusion, Harris states that

social injustice would be far easier to reduce if destructive social forces could be blamed on the motives of a scientific elite. However, in the history of Psychology psychologists rarely have a monopoly on ignorance and prejudice. Rather, the social prejudice and blindness of scientists and clinicians is usually no greater than that of politicians, popular writers, or business executives. This is the lesson that the new history draws from the development of the intelligence test. (Harris, 2009, p. 28)

Feminist revisions of the history of Psychology

According to Gross (2014), feminism is a social and political movement that arose outside Psychology (often used synonymously with the ‘women’s movement’ of the 1970s in particular); many who would describe themselves as feminists were, and are, academics who criticized their particular discipline for being ‘gender blind’ (Kelly, 1988). If what feminists have in common is a condemnation of any or all forms of oppression of women, then we would expect that those engaged in occupations and professions (including Psychology) would be critical of such treatment of women as goes on within their occupation or profession. However,

feminist thinkers and writers are not just against oppression and discrimination against women, they are also for the recognition of the achievements, contributions and experience of women as being valid and important in their own right, and not just as matters to be understood and evaluated in comparison with men. Feminist Psychologists, therefore, criticise Psychology as a discipline – its methods, theories, and applications – from a feminist perspective. (Gross, 2014, p. 218; emphasis in original)


Sexism within Psychology

Bernstein and Russo’s much-cited 1974 article entitled ‘The history of psychology revisited: or, up with our foremothers’ consisted largely of a quiz, which their Psychology colleagues failed miserably.

The questions included the following:
1 Who developed the Cattell Infant Intelligence Test Scale?
2 What do the following have in common? The Bender–Gestalt Test, the Taylor Manifest Anxiety Scale, the Kent-Rosanoff Word Association Test, the Thematic Apperception Test (TAT), and the Sentence Completion Method.
3 The following are the last names of individuals who have contributed to the scientific study of human behaviour. What else do these names have in common? Ausubel, Bellak, Brunswick, Buhler, Dennis, Gardner, Gibson, Glueck, Harlow, Hartley, Hoffman, Horowitz, Jones, Kendler, Koch, Lacey, Luchins, Lynd, Murphy, Premack, Rossi, Sears, Sherif, Spence, Staats, Stendler, Whiting, Yarrow.

Answers: (1) Psyche Cattell. (2) A woman was either the senior author or the sole author of each test/method. (3) They are the surnames of female social scientists.

• While you may have recognized some of the names in question 3, this may only be because they had more famous and familiar husbands with whom they jointly published research (e.g. Beatrix and Allen Gardner’s (1969) attempt to teach American Sign Language to Washoe: see Chapter 7). Again, we tend to automatically assume that the ‘Harlow’ in the list is Harry Harlow of rhesus monkey/surrogate mother fame. We’re all guilty (including Bernstein and Russo’s colleagues).
• What other husband and wife pairs can you identify in question 3? And what was the nature of their joint research?
• An important omission from the list above is that of Mamie and Kenneth Clark, who conducted a famous ‘doll study’ (1939/1947), which provided a means of assessing children’s ethnic preferences and identifications (see Gross, 2008).

Related to this is a strong tendency to assume that a Psychologist whose name is unfamiliar to you is male. Even if statistically it’s very likely you’ll be correct, this is not the basis for making such assumptions. Instead, it reflects a masculinist bias – the belief that the contributions made by men to Psychology are more important than those made by women. As Scarborough and Furomoto (1987) state, the history of Psychology is the history of male Psychology. The default state of scientists appears to be male, and history in Psychology appears to be HIS-story – a term often used in archetypal feminist ideology (Griffin, 2012). (Gross, 2014, p. 220; emphasis added)

According to Paludi (1992), the answers to Bernstein and Russo’s quiz include what are hardly ‘household names’ precisely because they’ve been kept invisible by male (American) historians of (largely American) Psychology. Based on their colleagues’ poor showing on their quiz, Bernstein and Russo (1974) argued that women Psychologists needed to be rediscovered. In response to the neglect of women’s contributions to Psychology and to the recognition that women’s history has the potential to transform women’s self-understanding, a sub-field of women’s history in Psychology has evolved in the US in recent years (Paludi, 1992). This draws on Lerner’s (1979) model, namely:
1 finding lost or overlooked women and putting them back into the history (compensatory history);
2 noting women’s contributions (contribution history); and
3 noting how history is constructed through a male (androcentric) perspective and reconstructing it from the perspective of women (reconstruction history).

Examples of (1) and (2) are given below. According to Stevens and Gardner (1982), notable omissions include:
1 Mary Calkins’ theory of self and her invention of the paired associates method (of measuring short-term memory); she was excluded, in 1890, from a graduate Psychology programme because she was female;
2 Christine Ladd-Franklin’s theory of colour vision;
3 Margaret Washburn’s hugely important books on animal behaviour;
4 Magda Arnold’s theory of emotion;
5 Margaret Harlow’s contribution to an understanding of the role of tactile stimulation in mothering.

Despite completing their theses, both Calkins and Ladd-Franklin were refused their PhDs.
In the UK, women’s involvement in Psychology was unusually favourable compared with other sciences (such as physiology). For example, women have been accepted in the British Psychological Society (BPS) since its inception in 1901, although not in the Royal Society until about 1949. While being aware of research taking place in the US and Europe, British female Psychologists weren’t afraid to be theoretically independent; also, there’s no evidence of the separate spheres of operation for men and women (‘territorial segregation’) that became prevalent later in the 1900s, especially in the US, with women occupying ‘caring’ practitioner roles and men ‘understanding’ scientist roles (Valentine, 2010).

Valentine also notes that female Psychologists investigated a wide range of topics, with no preference for ‘soft’ over ‘hard’ subjects.
• What do you think is meant by ‘soft’ and ‘hard’ in this context?
• Based on stereotypical views of gender roles, how do you imagine the split between ‘soft’ and ‘hard’ would have taken place? (The reference above to ‘caring’ and ‘scientist’ roles might help you here.)


BOX 1.11 Two eminent British female Psychologists
• Beatrice Edgell (Figure 1.3) was the first British woman to obtain a PhD in Psychology. She established one of the first psychological laboratories in the country (at Bedford College, London). She was interested in determining the extent to which the mind could be measured; she investigated time perception and memory.
• Victoria Hazlitt was a student and subsequent colleague of Edgell’s. Her studies of animal learning (1919) anticipated much later work on learning sets, place vs. response learning (see discussion of Tolman’s research in Chapter 6), and the over-learning reversal effect. In the context of her pioneering study of university selection, her contrast between introverts and extroverts foreshadowed Hudson’s (e.g. 1968) work on convergers and divergers. She was critical of Piaget’s claims regarding when children can understand logical relationships (see Gross, 2015).
(Based on Valentine, 2010)

Figure 1.3 Beatrice Edgell.

Conclusions

What the chapter has tried to show is that the ‘story’ of how Psychology became a separate field of study, and, specifically, a scientific discipline, can be told in more than one way. As perhaps with any attempt to trace the beginnings of some process or movement, there will be disagreement as to how far back you need to go; also, the deeper you dig, the more complex the emerging picture becomes. The relationship between the present and the past works in both directions: not only does the past help shape the present (without the past there’d be no present), but the way we perceive and assess the past reflects (unconsciously) present values and perspectives. To be truly historicist, we must assess the past in its own terms, which, in turn, means contextualizing the past in the past.

Having said that, there can be little argument regarding the fact that what we understand by ‘Psychology’ has changed over time; this reflects the various schools of thought or theoretical perspectives (Structuralism, Functionalism/Pragmatism, Behaviourism, Cognitive Psychology, etc.) that have emerged, sometimes as direct challenges to pre-existing approaches, sometimes co-existing with them. Those discussed in this chapter certainly don’t exhaust all major schools; only brief mention has been made of psychodynamic approaches (see Chapter 9), the relatively recent Evolutionary Psychology (see Chapter 8), and Humanistic Psychology (see Chapter 10). The very existence of such a variety of ways of trying to capture the nature of ‘human psychology’ is likely to reflect the inherent complexity of that psychology; however, as we shall see in Chapter 2, there may be no way of pinning it down to see it for what it ‘really’ is, only theoretical views of what it might be.


Pause for thought – answers
1 Other theoretical perspectives or ways of accounting for phenomena. These represent different levels of explanation, each of which, according to anti-reductionists, is valid in its own right. (See Chapter 2 for a discussion of reductionism.)
2 Probably the most widely used application is behaviour therapy as a major tool in Clinical Psychology. Linked to this, classical conditioning is a major account of how phobias are learned, and, to a lesser extent, post-traumatic stress disorder (PTSD) (see Gross, 2015).
3 Perhaps the best-known example is Atkinson and Shiffrin’s (1968, 1971) multi-store model of memory (MSM) (see Chapter 7). The MSM emphasizes the role of rehearsal as a means of transferring information into the long-term store. There are plenty of other examples from both Cognitive and Social Psychology.


Chapter 2

Scientific perspectives: Psychology as the study of … how?

We learned a number of things about Psychology from Chapter 1, including the following:
• Its emergence as a separate discipline took place during the late 1800s, with its roots in philosophy and physiology in particular.
• A number of different schools of thought or theoretical perspectives/approaches appeared between this time and the mid-1900s, often as a reaction to/against an already existing approach and often overlapping with it.
• Different schools of thought have tended to be more or less dominant within particular parts of Western Europe and the US, reflecting both scientific/academic and more general social influences and traditions.
• It’s more accurate to talk about the ‘histories’ of Psychology, rather than a single, universally accepted ‘history’ taken to reflect ‘what actually happened’.
• These different approaches focus on different aspects of ‘human functioning’ (i.e. their subject matter); it’s almost as if human beings are being defined differently according to which theoretical lens we happen to be looking through. This relates to ontology: the branch of philosophy concerned with the fundamental characteristics of reality: what exists.

According to Teo (2009), Psychology has excluded or neglected key problems or pretended they don’t exist (see Box 2.1).

BOX 2.1 The problematic nature of Psychology
According to Teo (2009), three interconnected issues make Psychology problematic:
1 A limited understanding of the complexities of its subject matter and ontology: specifically, the nature of human mental life, human nature in general, and the nature of psychological categories.
2 A preference for a selectively narrow epistemology (the branch of philosophy concerned with where knowledge comes from) and methodology.
3 A lack of reflection (critical thinking) on Psychology’s ethical-political concerns and praxis (which emphasizes the ethical-political nature of all psychological practices).


• If the emphasis of Chapter 1 was on ontology (Teo’s first point), this chapter will focus more on epistemology and methodology (Teo’s second point). However, something that Chapter 1 demonstrated is that Psychology’s histories are inextricably linked to the notion of science: different schools of thought not only focus on different aspects of human (and non-human) psychology, but they advocate, to varying degrees, and in some cases hardly at all, the use of scientific/empirical methods.
• When not qualified, ‘science’ is implicitly taken to denote ‘natural science’. As we noted in Chapter 1, especially in relation to Behaviourism’s rejection of introspective methods used by advocates of Structuralism, subject matter (ontology) and ways of investigating it and finding evidence of it (epistemology and methodology) go hand-in-hand: a central issue for Psychology is the appropriateness of its methods for investigating its subject-matter, more specifically, the appropriateness of the methods of natural science.
• This, in turn, raises three further fundamental questions:
(a) What should the subject-matter of Psychology be?
(b) What do we mean by (natural) science?
(c) Are there different kinds of science (or different ways of doing science) and, if so, is one or more of these more appropriate for studying ‘psychology’ (depending on how this is defined) than others?
• Each of the schools of thought discussed in Chapter 1 offers an answer to (a) and we shall have cause to revisit these in the context of (b) and (c).

A brief history of science

Between the time of the Ancient Greek philosophers and about 1500, science (‘knowledge’) and philosophy (‘love of knowledge’) were one and the same; together with theology, they merged into Orthodox scholasticism. Man (i.e. human beings) was regarded as being at the centre of the universe/God’s creation, and, consistent with this belief, the sun was believed to orbit the earth, which is at the centre of the universe (and so doesn’t move). This geocentric model of the universe was based on the views of Claudius Ptolemy (c.85–165), the influential Greek astronomer and geographer. The period from 1500 to 1750 is commonly referred to as the scientific revolution. Box 2.2 describes its major features.

BOX 2.2 Major features of the scientific revolution
• The Copernican revolution (1542) (derived from Nicolas Copernicus, 1473–1543) replaced the geocentric model with the heliocentric model: the planets (including the Earth) revolve around the sun. By the mid-1600s, this had become scientific orthodoxy.
• The first of Johannes Kepler’s (1571–1630) ‘Laws of planetary motion’ stated that planets move in ellipses (not circles).
• According to Francis Bacon (1561–1626), ‘knowledge = power’; he advocated mastery over the forces of nature, as well as the separation of philosophy and science from theology.
• Galileo Galilei (1564–1642: usually just ‘Galileo’) was a life-long supporter of the heliocentric model. His most enduring contribution was mechanics and he was generally regarded as the first truly modern physicist.
• He also advocated experimental hypothesis testing, which was a fairly radical proposal at the time. Similarly, he was the first to show that the language of mathematics could be used to describe the behaviour of material objects – and not just purely abstract entities. Both of these innovations are now taken-for-granted aspects of scientific practice.
• Arguably, the key figure in the scientific revolution as far as Psychology is concerned was René Descartes (1596–1650), the French philosopher, mathematician, and scientist. According to his ‘mechanical philosophy’, the physical world consists simply of inert particles of matter (or ‘corpuscles’) interacting and colliding with one another. The laws governing the motion of these particles held the key to understanding the structure of the Copernican universe (Okasha, 2002).
• Descartes’ mechanistic philosophy (mechanism or machine-ism) applied to the human body as much as any other physical object. Crucially, he distinguished between the materialist body and the non-materialist mind (or soul); this dualist philosophy (or dualism) has become part of our common sense understanding of the world (at least in Western culture) and is also an implicit aspect of Psychology, psychiatry, and social science.
• Descartes was also responsible for introducing the concept of reductionism, according to which all complex objects and processes can best be understood by breaking them down into their constituent parts/elements.
• The scientific revolution culminated in the work of Isaac Newton (1643–1727). Building on Descartes’ mechanism, Newton (in his 1687 Mathematical Principles of Natural Philosophy) proposed three laws of motion and his famous principle of universal gravitation. These laws applied equally to planets and physical objects in our everyday world.
Newtonian physics provided the framework for science for the next 200 years or so, quickly replacing Cartesian (i.e. Descartes’) mechanical philosophy/physics. Physics is the most fundamental of all the natural sciences (the others being chemistry and biology).

Associationism and empiricism

Just as Descartes’ ideas regarding mechanism, materialism, and reductionism contributed to the scientific revolution and, much later, to the science of Psychology, so Associationism, which lay at the core of the predominantly British school of empiricist philosophy, became central to science in general, and Psychology in particular. The idea of association lay at the core of empiricism, according to which all knowledge is based on (mostly) direct experience. The use of empirical (‘through the senses’) methods helps to define science as an activity distinct from philosophy.

Regarding how one thing can remind us of another, Aristotle made the fundamental observations that the British empiricists came to call the ‘laws of association’: the reminder can occur due to similarity (they’re so much alike), contrast (they’re so different), or contiguity (they’re commonly perceived at the same time or in the same place). Similarity and contrast eventually came to be subsumed under contiguity. For the associationists, association was the only mental operation – except for sensation.

Thomas Hobbes (1962/1651), the first of the British associationists, didn’t actually use the term ‘association’, but described a single mental operation which he called ‘motion’: an external object affects the senses through what we now call the ‘stimulus’ of light,



sound, pressure, or chemical action (i.e. some kind of physical motion); the motion of the stimulus is communicated into the organism through a sense organ. Hobbes, therefore, reduced everything to physical terms. However, he had to concede that the organism reacts to the stimulus by muscular movement, the direction of which is either towards the stimulus – approach (as in desire) – or away from it – avoidance (as in aversion). Some desires, such as that for food, are innate (or ‘native’), while others are acquired through experience.

While Hobbes was concerned with the sequence of thoughts (later called successive association), John Locke focused more on simultaneous association. Locke is, arguably, the most important of the associationist/empiricist British philosophers as far as Psychology in general, and Behaviourism in particular, are concerned.

BOX 2.3 KEY THINKER: John Locke (1632–1704)

L Locke had rejected an academic career in favour of politics and public affairs. But he attempted to integrate his political beliefs with a larger and more general philosophy of mind, derived partly from the earlier work of Descartes.
L Locke is probably best known for his An Essay Concerning Human Understanding (1690), which discussed the nature of human knowledge from an empiricist perspective, i.e. as the result of concrete sensory experience.
L He endorsed Aristotle’s suggestion that the mind, at birth, is a tabula rasa (‘blank slate’) or ‘white paper devoid of all characters’, capable only of recording impressions from the external world and subsequently recalling and reflecting upon them.
L All human knowledge comes from experience, and the best means of obtaining truth is through observation and experimentation (as used by Galileo, Newton, and other pioneering scientists). For Locke, the recent discoveries of these men represented the pinnacle of human knowledge.
L Locke rejected Descartes’ claim that the human mind is innately ‘given’ and active before being exposed to any kind of sensory experience. In other words, he denied the existence of innate ideas (such as infinity, perfection, and other ‘universals’).

(Based on Fancher and Rutherford, 2012)

Pause for thought …

Locke adopted the observational/experimental methods of the great contemporary scientists as his ideal model for how the human mind operates best. At least in theory, their ‘discoveries’ were based on a mass of observations which then led to the detection of recurrent patterns and regularities that formed the basis of their scientific laws. Locke’s Essay assumed that a human mind operates basically according to this model, developing all its knowledge from observation of the external world – and without any prior expectations or presuppositions.


1 What name is given to this way of collecting data and drawing general conclusions from them?
2 Formulate some arguments against this method as the most accurate way of capturing the nature of the scientific method.
3 What major alternative is there to this method? (See Chapter 1.)

Locke distinguished between two types of ideas:

1 simple ideas (such as redness, roundness, coldness, or sweetness); and
2 complex ideas, formed through combinations of simple ideas (e.g. redness, roundness, and sweetness may combine to produce the idea of an apple).

Some complex ideas represent things that don’t actually exist (such as unicorns), but their components must have been experienced concretely (such as horses and long, pointed objects).

Pause for thought …

4 According to Locke, if a blind woman who could identify objects by touch were to have her sight restored (e.g. through cataract removal), would she be able to identify objects through sight alone? Give reasons for your answer by reference to psychological research. (See, e.g., Gross, 2015.)

According to the association of ideas, experience can cause ideas to become linked in infinitely varying combinations. Again, Locke distinguished between two kinds or categories of association:

1 natural associations, which include the redness and roundness of apples and (especially) the lawfully interconnected phenomena discovered by scientific analysis; and
2 accidental associations, which include culturally determined customs, superstitions, and one’s idiosyncratically connected experiences.

While only (1) constitutes true, valid knowledge, (2) can be just as compelling and convincing (as in the association between goblins and darkness – Fancher and Rutherford, 2012). Finally, Locke’s examples of association suggest that he believed contiguity and similarity to be the key mechanisms (see above).

Locke’s later influence

Other British empiricists influenced by Locke’s ideas include the Irish bishop George Berkeley (1685–1753), who applied associationist principles to the analysis of visual



depth perception, and the Scottish philosopher David Hume (1711–1776), whose law of association by contiguity claims that ideas experienced either simultaneously or in rapid succession (i.e. contiguously in time) will tend to become connected. His law of association by similarity maintains that ideas or experiences that resemble each other will also tend to become linked.

L Try defining the terms ‘cause’ and ‘effect’.
L How are they related?
L Do they exist objectively, independently of the perceiver?

BOX 2.4 Hume’s account of cause and effect

L The notions of cause and effect are central to positivism, which, in turn, is a fundamental principle of natural science (see text below).
L According to Hume, all we know about causal connections between certain causes and certain effects in the world is that the causes typically precede the effects. There’s no necessary connection between the two: all we have is the experience of ‘constant conjunction’ (or contiguity), which leads to the habitual belief that, given the cause, the effect will follow.
L In other words, for Hume, cause-and-effect relationships don’t exist objectively, but are mentally constructed.

Pause for thought …

5 How might you criticize Hume’s account of cause and effect? (How else might you describe ‘constant conjunction’?)
6 How do Psychology (especially laboratory) experiments claim to tease apart cause and effect?

In the nineteenth century, James Mill (1773–1836) and his son, John Stuart Mill (1806–1873), claimed that almost all important individual differences between people (in character, conduct, and intelligence) arise according to associationist principles: that is, largely because of people’s experiences and associations, rather than because of their native (inborn) endowments (see Chapter 11). (This claim, of course, relates to the nature–nurture debate, which has been central to Psychology for most of its history and which recurs throughout this book.)



BOX 2.5 Locke and Leibniz

L Gottfried Wilhelm Leibniz (1646–1716) was a German philosopher, born in Leipzig (where Wundt established the first widely recognized Psychology laboratory – see Chapter 1).
L While acknowledging that some knowledge is acquired as Locke described, Leibniz (based on Descartes’ nativism) rejected the tabula rasa view of the mind. He argued that the mind is innately predisposed to organize knowledge in certain specific ways (e.g. according to the laws of logic and other ‘necessary truths’).
L While Locke compared the mind to a tabula rasa, Leibniz likened it to a veined block of marble whose internal structure (‘fault lines’) predisposes it to be sculpted into certain shapes more easily than others (the shapes are ‘innate’ in the marble, but it still requires a sculptor to expose and clarify them). For Leibniz, ideas and truths take the form of inclinations, dispositions, tendencies, or natural potentialities; for Locke, we are either actually aware of something or we are not.
L Rather than rejecting Locke’s empiricism, Leibniz believed that he was spelling out some of the details that remained implicit in Locke’s account (such as what is involved in ‘reflection’).

(Based on Fancher and Rutherford, 2012)

Leibniz’s influence on Psychology

Another important difference between Locke and Leibniz was the latter’s claim that the mind is constantly active (even during sleep), including his belief in unconscious mental activity. Leibniz proposed a continuum:

1 At one end of the continuum are clear, distinct, and rational apperceptions: ideas aren’t simply ‘registered’ in consciousness but become subject to focused attention and rational analysis in terms of underlying laws and principles. Apperception is also a reflective activity: through it, we become self-aware.
2 Perceptions are more mechanical and indistinct.
3 Minute perceptions are real, but never actually enter consciousness. They help to maintain our sense of continuity as an individual, distinctive self (see Chapter 12), through countless minute perceptions and unconscious memories of our previous states; they also play an important role in human motivation.

Pause for thought …

As Fancher and Rutherford (2012) observe, Leibniz was ahead of his time in even suggesting the possibility of unconsciously motivated behaviour.

7 How do Freud’s ideas regarding the mind and unconscious motivation differ from Leibniz’s? (See Chapter 9.)

Not only did Leibniz anticipate the role of the unconscious in Freud’s and other psychodynamic theories, but Wundt explicitly adopted a Leibnizean, as opposed to a Lockean, perspective when trying to establish Psychology as a separate scientific discipline (see Chapter 1).



In the context of Developmental Psychology, theories such as Piaget’s (1950s) account of cognitive development progressing through a series of fixed, biologically based stages in an active mind reflect the influence of Leibniz. As we noted in Box 2.5, for Leibniz the mind’s inherent ‘shapes’ (as in marble’s natural fault lines) are ‘innate’, but it still requires a sculptor (environmental experience) to expose and clarify them. This is reflected in Piaget’s characterization of the child as a scientist whose exploration facilitates the unfolding (maturation) of the stages (see Gross, 2015).

BOX 2.6 Nature–nurture interdependence and the flexibility of the human mind

L More recently, and relevant to the nature–nurture debate, Karmiloff-Smith (1996) argues that there is a trade-off in nature between pre-specification (such as maturation), on the one hand, and plasticity, on the other; this trade-off produces the characteristic flexibility of the human mind.
L Inborn biases represent another example of pre-specification. For example, very young babies already seem to understand that unsupported objects will fall (move downwards) and that a moving object will continue to move in the same direction unless it encounters an obstacle (Spelke, 1991). However, these pre-existing conceptions are merely the beginning of the story: what then develops is the result of experience filtered through these initial biases, which constrain the number of developmental pathways that are possible (Bee, 2000).
L Both Freud and Piaget focused on the interaction between biological maturation and experience with the physical world (especially Piaget) and the social world (especially Freud).
L Bandura’s theory of triadic reciprocal causation (e.g. Bussey and Bandura, 1999) adds the person’s behaviour to the interaction: one’s behaviour can change the environment.

Pause for thought …

8 What do you understand by the term ‘plasticity’ when describing the human brain? (See Chapter 5.)

Finally, Leibniz played a role in the history of the idea of ‘Artificial Intelligence’ – the belief that machines (in particular, digital computers) are capable of displaying the intelligence normally displayed only by human beings (see Chapter 6).

The characteristics of ‘classical’ (natural) science

As a way of summarizing (and elaborating on) the preceding discussion of the influences on the emergence of science, in particular during the seventeenth and eighteenth centuries, Table 2.1 lists the major features of ‘classical’ or natural science.



Table 2.1 The major features of ‘classical’ or natural science

1 Empiricism: based on the theories of Locke and the other British empiricist philosophers, the use of observation, experiments, and other ‘scientific’ methods for the gathering of evidence or ‘facts’ about the world.

2 Mechanism (‘machine-ism’): based on Descartes’ ‘mechanical philosophy’, the belief that the physical world (including the planets) consists simply of inert particles of matter that interact and collide with one another. This applies to the human body conceived as a physical object.

3 Materialism: Descartes’ distinction between the material body (governed by mechanistic laws) and the non-material mind (or soul); this constitutes his dualistic philosophy (or dualism) (see Box 2.8).

4 Reductionism: Descartes’ claim that all complex objects and processes can best be understood by breaking them down into their constituent parts/elements.

5 Determinism: derived from Newton’s physics, the belief that every event has a specific cause: given the cause, the event (the effect) is completely predictable.

6 Nomothetic approach: from the Greek nomos, meaning ‘law’, the attempt to establish general, universal laws or principles.

7 Induction (or the inductive method): these universal laws/principles are based on a number of separate observations or experiments (‘samples’). The laws are ‘discovered’ (i.e. they exist ‘objectively’: see following point) when a sufficient number of simple, unbiased, unprejudiced samples have been taken: from the resulting sensory evidence (‘data’/sense-data), generalized statements of fact take shape. We gradually build up a picture of what the world is ‘really’ like based on a number of separate samples.

8 Positivism: based on Descartes’ philosophical dualism, which allowed scientists to treat matter as inert and distinct from ‘mind’ or ‘soul’, the world (including the human body) could be described objectively. This became the ideal of science, and was extended to the study of human behaviour and social institutions in the mid-1800s by Auguste Comte (see Box 2.7), who called it positivism.

9 Realism/correspondence theory of truth: this is really another way of addressing the issue of objectivity, also linked to the aims of science (see following point). According to realism (or ‘scientific realism’), science aims at true statements about just what there is in the world and how it behaves – at all levels. The world (‘reality’ or the universe) exists independently of how we represent it (sometimes called ‘external realism’), i.e. through thought, perception, or language. Similarly, the correspondence theory of truth claims that true statements reflect the nature of what they represent, i.e. true (scientific) statements correspond to the ‘facts’ about the world (Chalmers, 2013; Searle, 1995).

10 Aims: prediction and control.

11 Progress measured in terms of cumulative knowledge: the long-term progress of science involves the accumulation of confirmed facts and laws. Over time, increasing amounts of knowledge (‘facts’) are acquired and the world (‘reality’) reveals itself in greater detail.



BOX 2.7 KEY THINKER: Auguste Comte (1798–1857)

L Comte was born in Montpellier, France, to devoutly religious Catholic parents who were also staunch monarchists; this was five years after the French Revolution that ousted the French monarchy and founded the French republic.
L The young Comte was an ardent supporter of the Revolution and embraced the causes of individual freedom and republicanism from a young age. He was an academically brilliant child and at just 14 declared that he no longer believed in God.
L He is widely recognized as the father of positivism and inventor of the term ‘sociology’.
L He played a key role in the development of the social sciences.
L He believed that the human mind followed a historical sequence (the ‘law of three stages’): theological, metaphysical, and positive. In the first two stages, attempts were made to understand the nature of the world through supernatural and metaphysical explanations. In the positive stage, observation and experiment became the principal means of searching for the truth.
L The law of three stages was first applied to the development of science; Comte later applied it to human intellectual development in general and argued that it held the key to the future progress of humanity.
L Rather than celebrating the rationality of the individual and wanting to protect people from state interference (in keeping with the Enlightenment humanism of the 1700s), Comte’s positivist ideology made a fetish of the scientific method: a new ruling class of technocrats should decide how society should be run and how people should behave.
L This represents a contradiction within his own thinking. On the one hand, he embraced the Enlightenment belief that if people were freed from the strait-jackets represented by the Church, monarchy, and aristocracy, they would be able to create a better society where justice and liberty would flourish. On the other hand, people are too easily swayed and need the guidance of philosophers and scientists to find their way.

Figure 2.1 Auguste Comte.

(Based on Hewett, 2008)

Even within the context of natural science, many (if not most) of the features listed in Table 2.1 have been challenged. In the following sections, we shall describe some of these challenges and begin to ask how far the methods of natural science can appropriately be applied to the study of human psychology.

Empiricism

We noted in Chapter 1 that the vast majority of Psychologists would regard themselves as Methodological Behaviourists: reflecting the legacy of Watson, the use of empirical (scientific) methods, in particular controlled experiments, is a taken-for-granted modus operandi for most research Psychologists. This quantitative approach is part-and-parcel of what is commonly referred to as mainstream Psychology.



More generally, belief in the necessity of collecting data in some form and exposing those data to some form of analysis is a principle shared by probably all Psychologists – including those who are clearly not Experimental Psychologists and who offer radically different alternatives to the experimental approach, such as discourse analysis and other qualitative methods. (These are discussed further below in relation to Social Constructionism.)

In In Defence of Empirical Psychology (1973), Donald Broadbent, an ardent Behaviourist (and also a highly influential figure within Cognitive Psychology: see Chapters 1 and 6), equated empirical Psychology with Behaviourism. He also pointed out that ‘empirical Psychology’ has two meanings: (1) the one relating to Methodological Behaviourism as described above; and (2) a philosophical account of human nature with Associationism at its core (as proposed by the British empiricists, most importantly Locke: see above); this second meaning became central to Watson’s brand of Behaviourism (see above and Chapter 6).

In this second sense, ‘empiricism’ denotes the view that all, or most, of our knowledge is derived from our experience of the external world; it is usually contrasted with rationalism, according to which our knowledge is prior to, and independent of, experience (e.g. innate/inborn, or derived from logical reasoning). These extreme views lie at the heart of the nature–nurture debate, which has run through much of mainstream Psychology’s history (see Box 2.6).

As we noted above, being an empiricist doesn’t oblige you to perform experiments: what makes you a scientist is the commitment to going out into the world and exploring/investigating it. If you’re also a positivist, you’ll claim that the result of your exploration is the identification of ‘facts’: your efforts result in their discovery, implying that they already exist in the form in which you find them. We shall discuss positivism in more detail below, where we will re-emphasize the point that there’s a wide variety of empirical methods open to Psychologists; many of these methods are totally incompatible with the notion of the researcher as a discoverer of pre-existing facts.

Materialism and mechanism

Descartes’ belief in the material nature of the world – including the human body – is one side of a philosophical coin, the other being his belief that the human mind (or soul) is non-material. These twin beliefs constitute his philosophical dualism.

BOX 2.8 Descartes’ philosophical dualism

L Descartes divided the universe into two fundamentally different ‘realms’ or ‘realities’: (1) physical matter (res extensa), which is extended in time and space; and (2) non-material, non-extended mind (res cogitans).
L This distinction between matter and mind allowed scientists to treat matter as inert and totally distinct from themselves: the world could be described objectively, without reference to the human observer. Objectivity became the cornerstone of science and was extended by Comte to the study of human behaviour and social institutions (see Box 2.7).
L Descartes believed that the material world comprises objects assembled like a huge machine and operated by mechanical laws that could be explained in terms of the arrangements and movements of its parts (mechanism or ‘machine-ism’). As Heather (1976) points out, this is consistent with the common sense view of Psychology as the study of ‘what makes people tick’.
L As far as living organisms are concerned, Descartes compared (non-human) animals to clocks composed of wheels and springs. Likewise, the human body is part of a perfect cosmic machine, at least in principle controlled by mathematical laws. However, the mind (or soul), possessed only by human beings, can only be known through introspection (see Chapter 1).

Consciousness and the mind–brain/body problem

Descartes’ dualism represents one attempt to describe the relationship between mind (or consciousness, the essence of mind) and body, often referred to as the mind–body or mind–brain problem. His particular version of dualism is interactionist – but only in one direction: the mind can influence the body (and behaviour), but not vice versa. While this is consistent with the idea of free will (see Chapters 1, 5, 6, 8, 9, and 10), it also raises a fundamental philosophical problem:

L How can two ‘things’ be related when one is physical (the brain has size, weight, shape, density, and exists in space and time: res extensa) and the other (res cogitans) apparently lacks all these features?
L How can the non-material mind influence or produce changes in the material brain/body?
L The ‘classic’ example given by philosophers to illustrate this problem is the act of raising one’s arm. From a strictly scientific perspective, it should be impossible for the decision to raise one’s arm to produce/bring about the actual raising of the arm.

Science – including Psychology and neurophysiology – has traditionally rejected Descartes’ brand of dualism. This is the problem of mental causation and is discussed further in Chapter 5 in relation to the research of Benjamin Libet in particular. Related to this is what Chalmers (e.g. 2007) calls the hard problem of consciousness.

BOX 2.9 The ‘hard problem of consciousness’

L According to Chalmers (2007), ‘consciousness’ is an ambiguous term, referring to different phenomena, some of which are easier to explain than others.
L The ‘easy’ problems of consciousness are those that seem directly accessible to the standard methods of cognitive science, including: (1) the ability to discriminate, categorize, and react to environmental stimuli; (2) the integration of information by a cognitive system; (3) the reportability of mental states; (4) the ability of a system to access its own internal states; (5) the focus of attention; (6) the deliberate control of behaviour; and (7) the difference between wakefulness and sleep.
L What makes these ‘easy’ is the fact that there’s no real argument about whether these phenomena can, in principle at least, be explained scientifically (in terms of neural – see Chapter 5 – or computational – see Chapter 7 – mechanisms).
L By contrast:

    The hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it’s like to be a conscious organism. This subjective aspect is experience.
    (Chalmers, 2007, p. 226; emphasis in original)



Alternative theories of the mind–brain relationship

All theories can be categorized as either dualist (both mind and brain exist) or monist (only mind or only matter exists). According to epiphenomenalism, the mind is a kind of by-product of the brain (it has no influence on the brain), but the brain can influence the mind. Like Descartes’ theory, the influence is one-way only (but in the opposite direction). True interactionism involves a two-way influence between mind and brain, while psychophysical parallelism claims there’s no mind–brain interaction of any kind.

One form of monism is mentalism or idealism. Within Psychology, Humanistic-phenomenological approaches, such as those of Rogers and Maslow (see Chapter 10), and constructionist approaches (such as Social Constructionism, discussed below) can be regarded as reflecting a mentalist/idealist view of the mind–brain relationship.

Other influential monist accounts within Psychology have a materialist bent. Skinner’s Radical Behaviourism represents a peripheralist form of materialism (see Chapters 1 and 6). According to centralist materialism (or mind–brain identity theory), mental processes turn out, as a matter of fact, to be nothing more than physical states of the brain: we are simply very complicated physico-chemical mechanisms. An even more extreme centralist account is eliminative materialism, which is discussed further in relation to reductionism below.

Pause for thought …

9 What do you think ‘peripheralist’ refers to in Skinner’s Radical Behaviourism?
10 Drawing on your knowledge of Biopsychology (see Chapter 5), try to relate the examples below to the theories of the mind–brain relationship outlined above. Specifically, do these examples involve interactions between mind and brain, and, if so, in what direction is the influence taking place?
   (1) the effects of psychoactive drugs;
   (2) electrical stimulation of the brain;
   (3) Sperry’s study of split-brain patients;
   (4) stress;
   (5) placebo effects.

Reductionism

While in Box 2.8 Descartes is credited with having argued for reductionism as a feature of science, Luria (1987) traces the origins of reductionism to the mid-nineteenth-century view within biology that an organism is a complex of organs, which, in turn, are complexes of cells. To explain the basic laws of the living organism, we have to study as carefully as possible the features of separate cells.

From its biological origins, Luria describes how reductionism was extended to science in general. For example, the properties of a protein molecule could be uniquely determined or predicted in terms of the properties of the electrons or protons making up its atoms. According to Ellis (2013), a physicist:



    There’s a basic assumption that the things you see – be it humans, computers or trees – can ultimately be boiled down to the behaviour of the particles they are composed of. Biology is determined by chemistry, which is in turn governed by the underlying physics.
    (p. 28)

For ‘boiled down to’ in this quote from Ellis, read ‘explained in terms of’. Much of modern science is rooted in this bottom-up, reductionist view of cause and effect. The basic principle is that the more basic, fundamental, or ‘micro’ the explanation, the more accurate, precise, and valid it is (the closer to ‘the truth’ it is). Physics is usually regarded as the most fundamental of all sciences (Okasha, 2002).

While reductionism’s ultimate aim (according to its supporters) is to account for all phenomena in terms of microphysics, any attempt to explain something in terms of its components or constituent parts can be thought of as reductionist. In the previous section, when discussing theories of the mind–brain relationship, we noted that eliminative materialism is an extreme form of reductionist materialism. What makes it reductionist is the attempt to replace a psychological account of behaviour with an account in terms of neurophysiology. A much-quoted example of this approach is Crick’s (1994) The Astonishing Hypothesis: The Scientific Search for the Soul. According to Crick:

    You, your joys and your sorrows, your memories and your ambitions, your sense of identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules.
    (p. 3)

Crick seems to be making two separate but related claims here: (1) our joys and sorrows etc. and the behaviour of nerve cells etc. are identical (they are one and the same); and (2) nerve cells etc. have causal powers, i.e. the ability to bring about these psychological states (or ‘behaviour’).

According to Harré et al. (1985), as far as (1) is concerned, ‘identity’ has at least two distinct meanings that are relevant to the mind–brain relationship:

L Token identity: while it’s generally agreed that we cannot have a mind without a brain, mind states and brain states aren’t systematically correlated; this is indicated by the neurophysiological and neurological evidence. For example, we cannot just assume that the same neurophysiological mechanism will be used by two different people both engaged in the ‘same’ activity of reading (Broadbent, 1981). There are many ways that ‘the brain’ can perform a particular task. (This relates to the concept of multiple realization – Okasha, 2002.)
L Type identity: mind–brain identity is usually taken to imply precisely this systematic correlation: whenever a mind state of a certain type occurs, a brain state of a certain type also occurs.

In the absence of evidence to support type identity, token identity is the ‘safer’ option. Token identity also means that there must always be a place for an autonomous psychological account of human thought and action.


Scientific perspectives

• What do you understand by the term ‘autonomous psychological explanation’?
• Try to think of some particular behaviours which not only require an autonomous psychological explanation but also cannot legitimately be explained in terms of individual actors (as opposed to social rules or conventions).

According to Rose (1992), a materialist and an anti-reductionist, the mind is never replaced by the brain. Instead, there are two separate, independent ‘languages’, one pertaining to the mind, the other to the brain. Freud was also a materialist who believed that no single scientific vocabulary (such as anatomy) could adequately describe – let alone explain – all aspects of the material world; he believed in the autonomy of psychological explanation (see Chapter 9).

The existence of different ‘languages’ for describing minds and brains (or different levels of description or universes of discourse) relates to the question: how relevant or necessary is it to know what’s going on inside our brains when we act (or decide or perform any other mental operation) when trying to understand or explain those actions or operations? As a materialist, you would argue that the brain is necessarily implicated in everything we do – and the mind doesn’t represent a different kind of reality (as Descartes believed); but as an anti-reductionist, you would at the same time believe that what we do can be described/explained in its own terms, i.e. as a psychological rather than a neurophysiological event.

This leads logically on to examples of behaviour that require a social-level description/explanation as opposed to a psychological one (i.e. at the level of the individual). A common example involves the ‘debate’ between a football supporter (and lover of ‘the beautiful game’) and the non-supporter who dismisses it as (traditionally) ‘22 men kicking a bit of leather (or plastic) round a field’. The latter is being reductionist (although neither party may realize it!).

A different kind of example is given by Heather (1976). He argues that mainstream academic Psychology is biased towards giving individualistic explanations where social accounts would be more appropriate (see Chapter 12). A car driver signals her intention to turn right by sticking her hand out of the window. According to Heather:

No explanation which attempted to reduce the behaviour in question to lower levels could ever succeed in explaining its symbolic significance as a gesture. (p. 30)

‘Lower levels’ here might include a complete and accurate account of all the brain activity, muscle activity, and kinaesthetic feedback involved in the movement of the driver’s arm. But however complete this account might be, it could tell us nothing about the meaning of the movement within the culture, and at the point in history, in which it takes place. However, having rejected a neurophysiological explanation, we need to go further and try to explain it within its social context:

The fact that the arm signal means what it does exists as a social fact, independently of any individual understandings of it. To explain why the driver acted as she did, we need to take into account not only her understanding of the meaning of the gesture, but her further understanding that other drivers understand it also and will themselves act on that understanding … the behaviour in question is best described as symbolic interaction. (Heather, 1976, pp. 30–31; emphasis in original)

(Mead’s (1934) symbolic interactionism is discussed in Chapter 12.)

Determinism

Arguably, it is this characteristic which best captures the common sense view of what (classical/natural) science does: whatever else science aims to do, it is the means by which we discover the causes of the phenomena we’re trying to understand. This is made explicit in the work of the great pioneering scientists/philosophers, including Galileo, Descartes, and Newton. According to the philosophical doctrine of determinism:

In the case of everything that exists, there are antecedent conditions, known or unknown, given which that thing could not be other than it is…. More loosely, it says that everything, including every cause, is the effect of some cause or causes … if true, it holds not only for all things that have existed but for all things that do or will ever exist. (Taylor, 1963, p. 34)

‘Everything that exists’ includes people and their thoughts and behaviour, so a ‘strict’ determinist believes that these are caused in just the same way as (other) ‘things’ or events in the world. But, according to Gross (2014), this begs a fundamentally important question:

Are thoughts and behaviour the same kind of thing or event as chemical reactions in a test tube, a volcanic eruption, or the firing of neurons in the brain? We do not usually ask ourselves if the chemicals ‘agreed’ to combine in a certain way, or if the volcano just ‘felt like’ erupting, or if the neurons ‘decided’ to fire. (p. 132; emphasis in original)

If we used such terms literally to refer to the action of chemicals, volcanoes, or neurons, we’d be guilty of anthropomorphism: attributing human abilities and characteristics to nonhuman entities (including animals). The literal, primary use of such terms is when describing people; they are part of our concept of a person, which, in turn, forms an essential part of ‘everyday’ common sense psychology (see Chapter 4).
Belief in free will presupposes that we have a ‘mind’; deciding, agreeing, and so on are precisely the kinds of things that minds do. However, while free will implies having a mind, the converse isn’t true: having a mind doesn’t imply free will. In terms of James’ account of free will (see Chapter 1), free will is compatible with soft determinism but not with hard determinism; it is the latter that is usually implied by the term and is consistent with Taylor’s definition above.

James (1917) also made an important distinction between determinism and indeterminism. Consistent with Taylor’s definition, determinism maintains that everything is caused, inevitably and predictably, such that things can only turn out (or have turned out) in the way they do: every event is caused by a particular preceding event, and this can be traced back to the beginning of time. By contrast, according to indeterminism, we cannot predict with any certainty or accuracy how things are going to turn out. We cannot know which of two possible outcomes will occur in any one case until one or other possibility actually occurs – thereby excluding the other possibility. As Shotter (1975) puts it:

Actualities seem to float in a wider sea of possibilities from out of which they are chosen; and, somewhere, indeterminism says, such possibilities exist, and form a part of truth. (p. 109)

In an indeterministic world, the meaning or significance of people’s actions can only be assessed in the context of what they might have done but didn’t actually do – real possibilities (i.e. options) do exist. In the Enlightenment world (see above), it was man’s task as a rational thinker to gain intellectual mastery over it (cf. Bacon’s ‘knowledge = power’), in the hope that this would lead to a technical mastery (i.e. the power to change what already exists). But in an indeterminate world, many of the sharp distinctions possible in theory are blunted in practice; examples of such false dichotomies are (1) mind vs. brain/body (see above); (2) free will vs. determinism; (3) subject vs. object; and (4) self vs. other.

According to Shotter (1975), what modern, mainstream Psychology has overlooked – especially in its more extreme mechanistic-behaviouristic manifestations as a natural science of behaviour – is that human beings aren’t simply immersed directly in nature but are in a culture in nature. Thus:

People must not be treated like organisms that respond directly in relation to their position in the world, but as rather special organic forms which deal with nature in terms of their knowledge of their ‘position’ in a culture; that is, in terms of a knowledge of the part their actions play in relation to the part played by other people’s actions in maintaining (or progressing) the culture. (pp. 13–14)

If there is such a thing as ‘human nature’, it lies in our ability to choose alternative courses of action within an indeterminate world. Indeed, it is the central task of such a self-defining animal to

[give] form to the act of living itself; it is up to him to imagine new possibilities for being human, new ways of how to live, and to attempt to realize them in practice – and this is essentially a moral (and a political) task, not just an intellectual one. (Shotter, 1975, p. 111)

Determinism in psychological theory

• In what ways can Watson’s Behaviourism and Freud’s psychoanalytic theory be described as deterministic? (See Chapters 6 and 9.)

While in most respects diametrically opposed, Watson and Freud had one fundamental belief in common, namely the view that behaviour is determined (i.e. caused by influences beyond the individual’s control) – and, hence, predictable. For Watson, these influences are external (environmental stimuli), while for Freud they are internal (unconscious thoughts, memories, etc.).


Freud used the term psychic determinism to capture the nature of this influence, and, consistent with it, he argued that free will is an illusion (as did Skinner: see Chapters 1 and 9). For Watson, Psychology as a science had to be based on what could be observed by a detached observer, rather than what the individual reports about his/her thoughts, etc. As we saw in Chapter 1, most Experimental Psychologists would describe themselves as Methodological Behaviourists: they see empirical methods – in particular, the controlled experiment – at the core of the research process. This is related to what is sometimes called scientism or methodolatry (see Chapter 3).

Determinism and the Psychology experiment

According to Heather (1976), although Psychology has come a long way since the early days of Behaviourism, it is still fundamentally mechanistic in its account of human beings:

Man continues to be described as though he were some complicated piece of machinery. Thus, he is regarded as something passive and inert, propelled into motion only by the action of some force, either external or internal, upon him. His behaviour is fully explicable, in principle, in terms of ‘causes’ which he has no control over. (pp. 18–19)

As Heather points out, there’s a supreme irony in this view of human behaviour: stemming from Watson’s bid to turn Psychology into a natural science, this mechanistic approach reflects a now outdated (nineteenth-century) approach within physics – the natural science par excellence. In addition, as we saw in Box 2.4, Hume exposed the widely shared understanding of ‘cause’ as a metaphysical notion: all we can ever observe is a correlation between events; causes and effects don’t exist objectively in the real world. Partly for these reasons, Psychologists rarely use ‘cause’ and ‘effect’; instead, they might talk about ‘antecedent events’ and ‘consequences’ (as Skinner did – see Chapter 6) or, more usually, in the context of controlled (especially laboratory) experiments, independent and dependent variables.

Criticism of traditional empirical methods in general, and controlled experiments in particular, has focused on their artificiality. This is often discussed as the problem of internal versus external validity (see Box 2.10).

• What do you understand by ‘internal validity’ and ‘external validity’ in the context of a Psychology experiment?
• What do you understand by ‘experimental control’?
• Try to define ‘independent variable’, ‘dependent variable’, and ‘extraneous variables’.
• Give some examples of ‘situational’ and ‘participant variables’.

When behaviour is studied in the laboratory, it’s being removed from its normal sociocultural context (see Chapter 3). Part of this sociocultural context relates to the social roles and the associated power differences. What makes the laboratory experiment such an unnatural and artificial situation is the fact that it’s almost totally structured by one ‘participant’ – the experimenter. This is as much an ethical issue as it is a practical issue. Feminist Psychologists have been among the most outspoken critics of this decontextualization of behaviour; we shall revisit this issue below when discussing positivism.


BOX 2.10 The problem of internal vs. external validity

• If the experimental setting (and task) is regarded as similar or relevant enough to everyday, ‘real-life’ situations to enable us to generalize the results, then we assess the study as having high external (or ecological) validity.
• Modelling itself on natural science (see text above), Psychology attempts to overcome the problem of investigating human behaviour, which is often highly complex, by using experimental control. This involves isolating an independent variable (IV) and ensuring that extraneous variables (variables other than the IV which are likely to affect the dependent variable (DV)) don’t affect the outcome.
• But this begs the crucial question: how do we know when all the relevant extraneous variables have been controlled? While it’s relatively easy to control the more obvious situational variables (task instructions, room temperature, time of day), this is less straightforward in the case of participant variables (such as age, gender, and cultural/ethnic background); this might relate to the availability of such groups (or other practical reasons) or uncertainty regarding exactly which participant variables are relevant (theoretical considerations or reasons of judgement).
• Ultimately, it’s a matter of what the experimenter believes is important – and possible – to control (Deese, 1972). If judgement and intuition are involved, then control and objectivity are matters of degree – whether in Psychology or physics. (Objectivity is the critical feature of positivism, which is discussed further in the text below.)
• It’s the variability/heterogeneity of human beings that makes them so much more difficult to study than, say, chemicals. Chemists don’t usually have to worry about how two samples of a particular chemical might differ, whereas Psychologists have to acknowledge the influence of individual differences between participants.
• Nor can we just assume that the IV (or ‘stimulus’ or ‘input’) is identical for every participant, that is, definable in some objective way and exerting a standard effect on everyone. The attempt to define IVs (and DVs) independently of the participant can be seen as a form of reductionism.
• Complete control would mean that the IV alone is responsible for the DV, ignoring the relevance of demand characteristics and experimenter bias (see text below).
• But even if complete control were possible (i.e. the experiment’s internal validity were assured), we’d still be faced by a fundamental dilemma: the greater the degree of control over the experimental situation, the greater the difference between it and the real-life situation it’s intended to reproduce. The higher the internal validity (i.e. the greater the artificiality), the lower the external/ecological validity.
• In conducting controlled experiments, the Psychologist is bringing the behaviour into a specially created environment (the laboratory), where the relevant variables can be controlled in a way that’s impossible in naturally occurring settings. In doing so,

Psychologists have constructed an artificial environment and the resulting behaviour is similarly artificial. It’s no longer the behaviour they were trying to understand! (Gross, 2015, p. 48)

(Based on Gross, 2015)


Nomothetic approach

If the controlled experiment is regarded as the most powerful method of research within Psychology, this is partly because (according to its advocates) it allows us to generalize (but see Box 2.10 above). This is possible only because the characteristics of individual participants are irrelevant; what matters are group averages – it is these, and not individual performance, that are subjected to statistical analysis.

This nomothetic approach best describes Cognitive, Bio-, and to some degree Social Psychologists, all of whom focus on basic psychological processes, with the emphasis very much on the process (such as learning, memory, perception, motivation, emotion, conformity, and obedience). At times, it’s almost as if the process were disembodied, with the learner or perceiver being almost irrelevant as far as our understanding of the process is concerned. The process is ‘lifted’ from its sociocultural context and studied as a universal aspect of human functioning. The influence of gender, cultural background, personality, etc. is largely ignored, implying that in some respects we are like all other human beings (Kluckhohn and Murray, 1953).

However, and ironically, probably the clearest demonstration of the nomothetic approach is in the study of individual differences. Rather than focusing on individuals as individuals (how we are like no other human beings), this area of Psychology aims to establish group norms (how everyone is like some other human beings). The underlying assumption is that there’s a relatively small number of ways in which people differ from each other, the major candidates being personality, intelligence, age, gender, and ethnic and cultural background. As discussed in Chapter 11, people are compared through the use of psychometric tests (‘mental measurement’).
The results are then analysed using a statistical technique called factor analysis, used to identify the basic factors or dimensions that constitute personality, intelligence, etc.; this provides the basis for comparing people with each other. Factor-analytic theories of intelligence include Spearman’s two-factor theory (1904, 1967), Burt’s (1949, 1955) and Vernon’s (1950) hierarchical model, Thurstone’s (1938) primary mental abilities, and Guilford’s (1959) structure of intellect. Hans Eysenck’s (1953, 1965) and Raymond Cattell’s (1965) factor-analytic theories of personality, and the five-factor model (or the Big Five) (Costa and McCrae, 1992; Digman, 1990; Goldberg, 1993; McCrae and Costa, 1989), are the best-known such theories of personality.

While not as high-profile a debate as ‘nature–nurture’ or ‘free will and determinism’, Psychology’s history has seen an ongoing controversy involving the nomothetic approach and its ‘rival’, the idiographic approach (from the Greek idios, meaning ‘own’ or ‘private’). This is the study of individual norms and is often associated with Gordon Allport (1937, 1961) (see Chapter 12). While it has long been believed that Allport was the first to introduce the terms ‘nomothetic’ and ‘idiographic’ into American Psychology, Hurlburt and Knapp (2006) claim that it was Hugo Munsterberg, as far back as the 1890s. (Allport was a student of Munsterberg at Harvard (Robinson, 2012).) The terms were originally used by Windelband and Dilthey (see below).
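The logic of factor analysis described above can be sketched numerically. The following is a minimal numpy illustration, not the psychometric software researchers actually use: it fabricates scores on five ‘tests’ that all draw on a single latent factor (the loadings, sample size, and seed are invented for illustration), then uses the leading eigenvalue of the tests’ correlation matrix – a principal-components stand-in for a full factor analysis – to show that one underlying dimension accounts for most of the shared variance, in the spirit of Spearman’s general factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                                           # invented sample size
g = rng.normal(size=n)                             # latent 'general' factor
loadings = np.array([0.8, 0.7, 0.75, 0.65, 0.7])   # hypothetical test loadings
noise = rng.normal(size=(n, 5)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise             # five correlated 'test' scores

R = np.corrcoef(scores, rowvar=False)              # 5x5 correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]              # eigenvalues, largest first
share = eigvals[0] / eigvals.sum()                 # variance on the first component
print(share)
```

With loadings of this size, well over half the total variance falls on the first component, which is why a single ‘factor’ summarizes the five tests so economically.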

Having identified the nomothetic approach as a feature of classic/natural science (in Table 2.1), it’s ironic that some of the ‘arch scientists’ within Psychology based their ideas on the intensive study of individual organisms.

• Try to name some of these famous scientists and identify the organisms with which they’re associated.


• Wundt trained Psychology graduates to introspect (see Chapter 1).
• Ebbinghaus studied his own memory (see Chapter 7).
• Pavlov studied the digestive process in dogs (see Chapter 6).
• Skinner investigated operant conditioning in rats and pigeons using a ‘Skinner box’ (see Chapter 6).
• Freud used psychoanalysis with individual patients (see Chapter 9).

This ‘Wundtian’ paradigm involves a series of ‘case studies’ (the study of individuals – human or non-human) to see if a similar effect or pattern occurs in each case; if so, a general law or effect could be proposed. Other users of this paradigm include Titchener, James (see Chapter 1), Watson, Thorndike (see Chapter 6), Alzheimer, and Kraepelin (see Chapter 13). While this approach has become marginalized within Psychology, it’s still going strong in related disciplines such as neurology (Ramachandran, 2011; see Chapter 12). The Humanistic theories of Abraham Maslow (1954, 1968) and Carl Rogers (1951, 1961) also represent the idiographic approach, as does George Kelly’s (1955) personal construct theory (see Chapter 4).

• Like nature–nurture and free will–determinism, is ‘nomothetic–idiographic’ a false dichotomy (i.e. are they mutually exclusive, so that only one or the other can be true)?
• Does it make sense to talk about a totally unique individual (a person who is like no other person)?
• Must we agree with Allport (1937) that, since all science is nomothetic, and since Psychology should be concerned with the study of individuals, Psychology therefore cannot be a science? This could be re-phrased as: what is the relationship between individual cases and general principles?

The first two questions will be revisited in Chapter 11; in the rest of this section, we shall try to address the third.

Individual cases and general principles

When discussing the pioneering work of Wundt (in Chapter 1), we noted that he believed that introspection was only applicable to psychophysiological phenomena, that is, ‘lower mental processes’ such as sensations, reaction times, and attention. These mechanisms, he argued, are amenable to study using the methods of the Naturwissenschaften (natural sciences, such as physics and chemistry), with the experiment at their heart. However, in order to study memory, thinking, language, personality, social behaviour, myth, and cultural practices (the ‘higher mental processes’), we need to study communities of people (Völkerpsychologie). These higher mental processes can only be studied using the methods of the Geisteswissenschaften (philosophy, history, biography and literary criticism, the social or human sciences, and the humanities). The distinction between the Naturwissenschaften and Geisteswissenschaften (or ‘moral sciences’) was made, independently, by two nineteenth-century German philosophers, Wilhelm Windelband (1848–1915) and Wilhelm Dilthey (1833–1911).

• Naturwissenschaften aim to establish general laws, allowing predictions based on statements regarding cause-and-effect relationships (see above). The natural sciences are concerned with the natural world, and so, quite appropriately, attempt to explain it in terms of ‘natural laws’ (‘laws of nature’) and analyse it into elements.
• By contrast, the Geisteswissenschaften involve Verstehen, an intuitive, empathic understanding of human mental activity (i.e. consciousness), stressing the inner unity of individual life and the person as an articulated whole (Valentine, 1992). Rather than treating the individual case as incidental to the discovery of general laws, the social sciences focus primarily on the particular (whether this be a person, historical event, or literary work).

Windelband, a German working at Harvard at the invitation of William James, argued (in an 1899 Psychological Review article) that the nomothetic/idiographic distinction (associated with the Naturwissenschaften and Geisteswissenschaften, respectively) was too divisive: in reality, natural sciences are interested in individual cases, while the humanities are interested in theories and generalizations (Robinson, 2012). (This seems to contradict Holt (1967), who quotes Windelband as claiming that all the disciplines concerned with ‘man and his works’ by their very nature cannot generalize.) See Figure 2.2.

[Figure 2.2 here: a branching diagram headed ‘Windelband/Dilthey’. One branch – the natural sciences (Naturwissenschaften) – runs: general laws → nomothetic → group norms (differential psychology/individual differences) → normative → variable-centred. The other branch – the moral (human) sciences (Geisteswissenschaften) – runs: Verstehen → idiographic → individual norms (personology/psychology of personality) → ipsative → person-centred. The corresponding distinctions are attributed to Stern (1921), Allport (1937), Cattell (1944), Bem (1983), and Mischel (1983).]

Figure 2.2 The nomothetic–idiographic distinction and its relationship to other, corresponding distinctions.

One way of addressing the relationship between individual cases and general principles is to consider the status of single-subject designs.


• Before reading on, try to formulate some arguments for and against the use of single-subject designs.
• Consider how the distinction between these and group studies may be yet another false dichotomy.

As we noted above, an individual case can be, and often is, the subject of scientific investigation. Where data from individual participants are seen as reliable and representative, single-subject designs are considered to be acceptable (Valentine, 1992). From the idiographic perspective, the very notion of an individual being ‘representative’ is contentious: it implies the opposite of ‘unique’. But the fact that the study of individual cases takes place at all within mainstream, nomothetic Psychology suggests that the study of individuals may not, in and of itself, be incompatible with the aim of generalization (Gross, 2014).

Indeed, Thorngate (1986) claims that the study of averages is often ill-suited to providing information about what people ‘in general’ do; typically, it cancels out systematic patterns in individual persons. Similarly, Eysenck (1966) regards his dimensional theory of personality as a compromise between the (extreme) idiographic and nomothetic approaches, with the latter treating important individual differences between participants as irrelevant (see Chapter 12). According to Lee (2012), in trying to achieve statistical significance, researchers can sometimes forget that experimental populations comprise groups of individuals. This seems rather odd, given that whether it’s a laboratory rat, or someone with a psychological disorder, the principal unit of analysis in the science of Psychology is the individual organism (Barlow and Nock, 2009).

Increasingly, the two approaches are being seen as complementary (not opposites) and interdependent (e.g. Hilliard, 1993; Thorngate, 1986). As Hilliard puts it, most single-case research clearly involves determining the generality across subjects (the nomothetic level) of the relationships uncovered at the individual (idiographic) level. Even Allport himself didn’t reject the nomothetic approach out of hand, and he advocated the use of diverse methods to study the individual (both qualitative and quantitative).
Finally, it’s important to note that we shouldn’t equate ‘quantitative methods’ and ‘qualitative methods’ with ‘nomothetic’ and ‘idiographic’, respectively. This is because Windelband’s original use of the terms referred to research objectives – not methods. According to Robinson (2012):

If psychology is going to find a harmonious solution to the nomothetic–idiographic riddle, and reconcile the tension between the general and the individual, it must re-embrace this lost Wundtian tradition, for there lies the key. (p. 166)

Induction (or the inductive method)

As we noted in Table 2.1, the universal laws/principles which the nomothetic approach aims to establish are based on a number of separate observations or experiments (‘samples’). The laws are ‘discovered’ (i.e. they exist ‘objectively’) when a sufficient number of simple, unbiased, unprejudiced samples have been taken: from the resulting sensory evidence (‘data’/sense-data), generalized statements of fact take shape. We gradually build up a picture of what the world is ‘really’ like based on a number of separate samples.


However, as many philosophers and others have argued, there’s no such thing as unprejudiced/unbiased observation (which is what the inductive method assumes). Observation is always selective, interpretative, pre-structured, and directed: we must have at least some idea of what we’re looking for to know when we’ve found it! Goldberg (2000) cites a philosophy professor who argued that ‘data’ (‘that which is given’) would be more accurately called ‘capta’ (‘that which is taken’).

‘Facts’ don’t gradually ‘emerge’ from the sensory evidence/data accumulated from repeated observations or experiments (as the inductive method would have us believe). ‘Data’ and ‘facts’ are not the same; ‘evidence’ usually implies numbers and recordings which need to be interpreted through the lens of a theory. ‘Facts’ don’t exist objectively and cannot be discovered through ‘pure observation’. As Deese (1972) puts it: ‘Fact’ = Data + Theory. While the use of empirical methods is a distinguishing feature of science, theory is just as crucial; without it, data have no meaning.

The major alternative to the inductive method is the hypothetico-deductive method. Karl Popper (1972), for example, favours the deduction of testable statements (hypotheses) from a new theory (i.e. one that meets the objections to/problems associated with an existing theory). While the inductive method is aimed at replicating (i.e. confirming) results from previous research as a way of helping to discover ‘the facts’, Popper’s alternative approach is related to his belief that the fundamental feature of a scientific theory is that it should be falsifiable (i.e. it should be possible to show it to be false). This is discussed further in Box 2.11. In practice, the hypothetico-deductive and inductive methods are both involved in the scientific process and are complementary.

BOX 2.11 Falsifiability and pseudo-science

• Setting out to confirm or verify a theory (or specific hypothesis), according to Popper, is too easy and doesn’t add to our store of knowledge.
• Taking a classic demonstration from philosophy, if we want to confirm that all swans are white, every additional white swan we observe only reinforces what we already know or believe. By contrast, we only need to observe a single black swan to show the claim to be false (falsification). That single black swan would far outweigh the information provided by observation of thousands or even millions of white ones.
• However, the claim that all swans are white is clearly easily tested – and so easily falsifiable. But what about Freud’s claim that dependent men prefer big-breasted women? When this is found to be the case, the claim is, of course, supported. But if such men actually prefer small-breasted women (Scodel, 1957), Freudians can use the defence mechanism of reaction formation to argue that an unconscious fixation with large breasts may manifest itself as a conscious preference for the opposite (a case of ‘heads I win, tails you lose’: Eysenck, 1985; Popper, 1959).
• Based on this and other parts of Freud’s psychoanalytic theory, both Eysenck and Popper rejected the entire theory as pseudo-science. However, it’s a mistake to see reaction formation as representative of the theory as a whole (e.g. Kline, 1984, 1989; see Chapter 9).

So, for Popper, a truly scientific theory is one that makes specific predictions (hypotheses) which can be empirically tested and which can be disproved. While this might seem intuitively quite plausible, the explaining away of data that appear to conflict with a theory seems to be used routinely by ‘respectable’ scientists whom Popper wouldn’t wish to accuse of engaging in pseudo-science – and has led to important scientific discoveries (Okasha, 2002). (Okasha gives an example from Newton’s gravitational theory.)

Pause for thought …

11 Based on the ‘All swans are white’ example in Box 2.11, what conclusions can you draw regarding the validity of claiming that theories can be proved?

Parapsychology and pseudo-science

The term ‘Parapsychology’ was first introduced in the 1930s to refer to the scientific investigation of paranormal phenomena (Evans, 1987). Indeed, most researchers in this field (and the related ‘Anomalistic Psychology’) consider themselves to be scientists applying the usual rules of scientific enquiry to admittedly unusual phenomena. However, many critics have dismissed this research area as being a pseudo-science. Yet as measured by various criteria, including Popper’s falsifiability (e.g. Mousseau, 2003), in general, Parapsychology appears to meet the implicit criteria of science rather better than it meets the criteria of pseudo-science (Holt et al., 2012).

Another criterion that has been applied is confirmation/verification vs. refutation. Mousseau (2003) reports that in the sample she studied, almost half of the ‘fringe’ articles (those published in ‘fringe’ journals, such as the Journal of Parapsychology) reported a null/negative outcome (disconfirmation), compared with none of the articles published in ‘mainstream’ scientific journals (such as the British Journal of Psychology). By this criterion, Parapsychology would appear to be more scientific than the more mainstream research areas. This relates to publication bias (or the ‘file-drawer’ problem), described in Box 2.12.

BOX 2.12 Publication bias (or the ‘file-drawer’ problem)
- A recurring issue within science in general, and Parapsychology in particular, is the concern that only successful studies tend to be published (i.e. those that produce statistically significant results); those that fail to find a significant result – or one opposite to the predicted outcome – are more often left in researchers’ file drawers (and so remain unpublished).
- The result is that the database of known studies may not accurately reflect the true state of affairs; in the case of paranormal phenomena, this would mean a strong bias in their favour.
- With the usual cut-off for statistical significance being set at 0.05, on average, 1 in 20 studies will be apparently successful by chance alone. This makes it necessary to know how many studies have been conducted in total (Milton, 2005). (See Gross, 2014.)
- A classic case of publication bias involved the well-known American Parapsychologist Daryl Bem and the prestigious Journal of Personality & Social Psychology (JPSP).

- In 2011, Bem published, in the JPSP, a series of nine experiments involving over 1,000 participants, which seemed to support the existence of precognition (‘feel the future’).
- He invited other researchers to conduct replications, and the invitation was accepted by three British researchers: Chris French, Richard Wiseman, and Stuart Ritchie. Each ran his own independent study, and Bem actually provided them with the software used in the original study. All three studies obtained null results.
- When they submitted their results for publication (in 2012), several journals, including the JPSP, refused to review them, claiming that they didn’t publish replication studies! (This is a problem for mainstream Psychology in general.) This makes it virtually impossible to assess a finding and can create the mistaken impression that an effect is much more robust than it actually is (Wiseman, 2012).
- Finally, their results were published by PLoS ONE, an open-access journal. They received widespread coverage around the world, raising doubts about the validity of Bem’s original findings and stimulating much-needed discussion of the place of replication in science and the value of the current peer-review system. (Bem was one of the two reviewers for the British Journal of Psychology; not surprisingly, he didn’t recommend the paper for publication!) (French, 2012)
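The ‘1 in 20’ arithmetic behind the file-drawer problem in Box 2.12 can be illustrated with a short simulation. This is a hypothetical sketch (the sample sizes, seed, and test are all our own assumptions, not from the text): many studies test an effect that does not exist, yet with a significance cut-off of 0.05 a predictable minority come out ‘significant’ by chance, and if only those reach print, the published record misleads.

```python
import random

random.seed(42)

def run_study(n_per_group=30):
    """Simulate one two-group study of a NON-EXISTENT effect:
    both groups are drawn from the same normal distribution."""
    group_a = [random.gauss(0, 1) for _ in range(n_per_group)]
    group_b = [random.gauss(0, 1) for _ in range(n_per_group)]
    # Crude z-test on the difference of means (sigma known to be 1).
    mean_diff = sum(group_a) / n_per_group - sum(group_b) / n_per_group
    se = (2 / n_per_group) ** 0.5
    z = mean_diff / se
    return abs(z) > 1.96  # 'significant' at roughly p < 0.05, two-tailed

studies = [run_study() for _ in range(2000)]
published = [s for s in studies if s]  # file drawer: only the 'hits' get out

print(f"Studies run:        {len(studies)}")
print(f"'Significant' hits: {len(published)}")
```

Roughly 5 per cent of the 2,000 null studies come out ‘significant’ purely by chance. A reader who sees only the published hits would conclude the effect is real, which is exactly Milton’s (2005) point: without knowing how many studies were conducted in total, the published database cannot be interpreted.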

Positivism

Along with the ‘imperative’ that science be objective, positivism regards science as unbiased and value-free. The scientist is pursuing ‘the truth’ through the use of highly controlled, experimental methods which shine a light on what the world is ‘really like’ (how things are when not being observed and measured by scientists). The scientist’s own values, biases, and so on have no influence on the scientific process.

- Try to formulate some objections to this positivist view of science as applied to the study of people.

Feminist critiques of science

Some of the most outspoken critics of a positivist Psychology have been Feminist Psychologists (see Chapter 1). They ask whether it’s possible for scientific enquiry to be neutral, totally free of bias, and completely independent of the value system of the human scientists who are doing the science. According to Prince and Hartnett (1993):

   Decisions about what is, and what is not, to be measured, how this is done, and most importantly, what constitutes legitimate research are made by individual scientists within a socio-political context, and thus science is ideological… . Scientific psychology has reified concepts such as personality and intelligence … and the scientific psychology which ‘objectively’ and ‘rationally’ produced means of measuring these reifications has been responsible for physical assaults on women such as forced abortions and sterilisations.
   (p. 222; emphasis added)


Prince and Hartnett note that, between 1924 and 1972, over 7,500 women in the US state of Virginia alone were forcibly sterilized, in particular unmarried mothers, prostitutes, and children with learning difficulties and behavioural problems; in all cases, the criterion used for sterilization was their mental age as measured by the Stanford–Binet intelligence test (Gould, 1981). (This, and other intelligence tests, are discussed in detail in Chapter 11. They have been accused of both gender and cultural bias.)

- What is meant by ‘reification’?

Reification refers to treating some human ability or quality (essentially, a hypothetical construct), such as intelligence, as if it has a literal, tangible, independent, objective existence, permitting it to be measured. Scientific ‘facts’ regarding that ability or quality can then be used in ways that may benefit or disadvantage different groups in society. The very decision to study intelligence, for example, and to develop tests for measuring it, implies that (some) Psychologists believe not only that this is possible but – much more relevant to science’s claim to be value-free – that it’s important to do so. Such decisions aren’t made in a politico-cultural vacuum and so cannot be regarded as objective, neutral, and value-free.

Rather than advocating that Psychology should (aim to) be value-free, objective, and ‘scientific’, many Feminist Psychologists argue that we should stop denying the role of values, and instead acknowledge that psychological research must always take wider social reality into account. They call for a new value-laden approach to research: unless and until Psychology ‘comes clean’ about its values and biases, it will never be able to adequately reflect the reality of its subject-matter, namely, human beings (see Gross, 2014, and Chapter 11).

Realism/correspondence theory of truth

As we noted in Table 2.1, this is really another way of addressing the issue of objectivity and is also linked to the aims of science (see section below). What we believe exists objectively (or, what’s ‘real’) depends very much on the particular universe of discourse one is using. As we saw when discussing reductionism above, many scientists regard physics as the ‘ultimate’ science, such that the explanations of the world offered by all other sciences will eventually be ‘taken over’ by one single explanation, namely (micro)physics. However, the reality of microphysics (sub-atomic particles) is very different from that of chemistry or biology – let alone Psychology.

What’s more, Psychologists themselves cannot agree about ‘psychological reality’. As we saw in Chapter 1, different theoretical perspectives/schools of thought see psychological reality differently, ranging from Watson’s focus on overt behaviour, through Cognitive Psychology’s computer analogy, to Freud’s unconscious mental life.

Reflexivity, language, and the nature of psychological kinds

But there’s an even thornier problem facing Psychology: the very nature of the reality it claims to investigate may be qualitatively different from that of other (natural) sciences. This relates to the ontological status of Psychology’s subject-matter. Whereas in orthodox sciences there is always some external object of enquiry – rocks, electrons, DNA, stars – existing essentially unchanging in the non-human world (even if never finally knowable ‘as it really is’ beyond human conceptions), this is not so for Psychology:

   ‘Doing Psychology’ is the human activity of studying human activity, it is human psychology examining itself – and what it produces by way of new theories, ideas and beliefs about itself is also part of our psychology!
   (Richards, 2010, p. 7)

Richards is describing reflexivity, a self-referring relationship that is unique to Psychology as a (scientific) discipline. Whereas ‘doing chemistry’ doesn’t require chemical processes or change the nature of such processes, the discipline of Psychology actually contributes to ‘the dynamic psychological processes by which human nature constantly recreates, re-forms, and regenerates itself, primarily in western cultures’ (Richards, 2010, p. 7). In other words, what Psychologists say about ‘what makes people tick’ can – and does – actually affect the ticking. This is what Richards means when he claims that Psychology is part of its own subject-matter.

Taking this one step further, the ‘underlying reality’ that Psychologists attempt to reveal may be no deeper than the language they use to identify/describe it. In one sense, what we are studying is quite literally language. As Danziger (1997) observes, the most basic instrument of scientific investigation is language. The entire investigative process is so immersed in language that it’s simply taken for granted, and its role becomes invisible. But if this is true of science in general, in Psychology’s case it becomes absolutely critical.

The rocks, electrons, DNA, and stars that Richards refers to above are examples of natural kinds: the (English) words used to denote things that have an independent and tangible existence (although rocks are rather more tangible for non-geologists than electrons are for the non-physicist). By contrast, the ‘intelligence’, ‘emotion’, ‘motivation’, ‘personality’, etc. that constitute Psychology’s subject-matter are more accurately described as psychological kinds.
Whereas our knowledge of a chemical compound (a natural kind) can change without changing the nature of the compound itself, a change in how intelligent we think we are (a psychological kind), for example, can change how we think of ourselves. As Danziger (1997) puts it, people’s actions, experiences, and dispositions are not independent of how they are categorized. As we shall see in Chapter 3, psychological categories (such as intelligence) feed into the essentialism of mainstream (Experimental) Psychology, i.e. they have a taken-for-granted quality about them and seem to be describing what people are actually, ‘really’ like (their ‘essence’ or human nature). They seem ‘natural’ – but only to members of that particular linguistic community. While the concept of ‘natural kinds’ has nothing to do with culture, the natural-appearing kinds of Psychology have everything to do with it (Danziger, 1997).

Another feature of Psychology’s uniqueness as a science is that ‘doing science’ is just one more kind of human behaviour (or aspect of psychological functioning) and so can – in principle – be explained by Psychology. In recent years, much has been written of (1) the (unconscious) biases and prejudices involved in scientific practice (both individual and institutional); (2) the sociological and political contexts in which it takes place; and (3) in the case of Psychology especially, the historical specificity of scientific concepts. These all detract from the view of science in general – and Psychology in particular – as a positivistic discipline; much of the critique has come in the form of Social Constructionism (see Chapter 3).

Aims

Consistent with his view of Psychology as a natural science, Watson (in his 1913 ‘Behaviourist manifesto’: see Chapter 1) spelled out ‘prediction and control’ as the appropriate aims of the discipline.


- How does this relate to positivism?
- How does it relate to Windelband/Dilthey’s distinction between Naturwissenschaften and Geisteswissenschaften?
- What do you consider to be the most appropriate aims of Psychology?

Prediction and control are what we’d expect to be given priority by a positivist approach, with Skinner giving control almost exclusive emphasis. From a different perspective, Cattell, a major personality theorist (see Chapter 12), states that:

   The scientific study of personality seeks to understand personality as one would the mechanism of a watch, the chemistry of the life processes in a mammal or the spectrum of a remote star. That is to say, it aims at objective insights; at the capacity to predict and control what will happen next; and at the establishment of scientific laws of a perfectly general nature.
   (Cattell, 1981, in Kline, 1988)

Cattell is clearly writing from the perspective of the Naturwissenschaften, which seeks ‘understanding’ in terms of universal laws of cause and effect. The kind of understanding (Verstehen) associated with the Geisteswissenschaften is very different and has nothing to do with cause and effect (see above). Kline (1988) objects to Cattell’s claims for two main reasons:

1 While there’s no disagreement about what a watch is or isn’t, this cannot be said of personality. As we’ve noted above, ‘personality’ is a hypothetical construct, a ‘psychological kind’ with no independent existence beyond the minds that constructed it.
2 It’s reductionist. While it’s perfectly possible to study human physiology in an objective, scientific way, this isn’t the same as studying personality or psychology. There’s more to, say, our experience of the colour red than the neural, biochemical, or electrical changes which accompany it: you cannot explain the former by accounting for the latter.

A much-cited critic of control as the primary aim of Psychology is George Miller, a key figure in the cognitive revolution of the 1950s (see Chapters 1 and 7). See Box 2.13.

BOX 2.13 Psychology as a means of promoting human welfare (Miller, 1969)
- In his 1968 presidential address to the American Psychological Association (APA), Miller set out the role he believed Psychology should play in society, namely ‘a means of promoting human welfare’ by ‘giving psychology away’.
- By this he meant encouraging non-Psychologists (‘ordinary people’) to practise Psychology, to be their own Psychologists, helping them to do better what they already do through familiarizing them with (scientific) psychological knowledge. Psychology shouldn’t be the ‘property’ of scientific/professional experts: psychological principles and techniques can usefully be applied by everyone (starting with 12-year-old children).

- This represents a ‘policy document’ or blueprint for what Psychology ought to be – a prescription for its social function. In the context of the debate regarding Psychology’s status as a (natural) positivist science, Miller was advocating certain values, quite a radical thing to be doing in the 1960s (Gross, 1999).
- Rather than control, Miller argued that understanding and prediction are the appropriate aims of Psychology. Indeed, he was advocating that Psychology should be aimed at helping people achieve self-understanding (‘giving psychology away’).

According to Murphy et al. (1984), Miller’s presidential address captures the turmoil that Psychology was experiencing during the late 1960s. Many Psychologists have endorsed his sentiments, including Shotter (1975) and Kay in his 1972 presidential address to the British Psychological Society (BPS). Murphy et al. identify two major issues raised by the radical critics of Psychology in the late 1960s:

1 Psychology’s dehumanizing image of human beings. This is related to the notion of behavioural control (see above). What makes it dehumanizing is that people are capable of self-control: imposing a behavioural technology of control (as advocated by Skinner and other behaviour analysts) removes a basic human freedom, as well as implying that people are machine-like (Heather, 1976).
2 Psychology’s ignoring of the real-world settings within which human beings live their lives. Miller advocates that the starting point must be what people themselves believe their problems to be. This is reminiscent of Joynson’s (1974) attack on Behaviourists for looking at people as objects, from the outside, while ignoring their experience and rejecting the validity of their attempts to explain their own behaviour (see Chapter 4).

Radical (or Critical) Psychologists regard any attempt to study people as objects (‘subjects’) as both unethical and scientifically unsound: since people can and do choose how to act, any theoretical approach which ignores this feature of human beings (‘human nature’?) must be presenting only a partial or inaccurate account.
Miller claimed that changing how we see ourselves is a more appropriate aim for Psychology than developing a technology which has, traditionally, been the measure of the effectiveness of natural science; the latter is more tangible than the former and the effects of applications of Skinner’s operant conditioning (see Chapter 6), for example, may be much easier to quantify than the effects of Freud’s view of people as irrational and driven by unconscious forces (see Chapter 9). However, for Miller, it’s precisely the impact on our conception of ourselves – and not technological applications of psychological principles – which makes Psychology (potentially) revolutionary. (This is similar to Richards’ points regarding the uniqueness of Psychology; see above.)

Can – and should – Psychology be value-free?

As we noted above, many Feminist Psychologists argue that the major ‘sin’ of mainstream Psychology is its denial of values, resulting in its sexism, specifically, its masculinist bias (see Gross, 2014). Carol Gilligan (1993), for example, argues that not only do Psychologists (and other scientists) have a responsibility to make their values explicit about important social and political issues, but their failure to do so may (unwittingly) contribute to prejudice, discrimination, and oppression.

58

Scientific perspectives

In Lindsay’s (1995) presidential address to the BPS (‘Values, ethics, and psychology’), he reminded his fellow Psychologists that Psychology does not operate in a social or value-free vacuum: we must be aware of the social situation, in particular the predominant values in society at any one time. Similarly, in McAllister’s (1997) BPS presidential address (‘Putting psychology in context’), she argues that while scientific knowledge is being applied in positive ways to enhance human life and address human problems, we need to take account of the wider sociopolitical context in which this takes place.

According to Prilleltensky and Fox (1997), some mainstream Psychologists recognize the societal sources of much individual distress and propose minor reforms of social institutions to help individuals function more effectively. In general, when people become Psychologists, they expect to do some good – and often they do. However, Critical Psychologists see Psychology itself as a mainstream social institution with its own negative consequences. The underlying values and institutions of modern societies (especially, but not exclusively, capitalist societies) reinforce misguided efforts to obtain fulfilment, while maintaining inequality and oppression.

   Because psychology’s values, assumptions and norms have supported society’s dominant institutions since its birth as a field of study, the field’s mainstream contributes to social injustice and thwarts the promotion of human welfare.
   (Prilleltensky and Fox, 1997, p. 4)

Many Critical Psychologists agree that certain values are of key importance, including social justice, self-determination and participation, caring and compassion, health, and human diversity. While this list may not be very controversial, individuals and societies in the real world must make choices between competing values; also, values must be advanced in a balanced way.
For example, mainstream Psychology contributes to value imbalance by endorsing individualism (see Chapter 3) and de-emphasizing values related to mutuality, connectedness, and a psychological sense of community. Not only is this politically and ethically unacceptable, but it misrepresents the nature of human behaviour, which is inherently social. As Prilleltensky and Fox (1997) say:

   An individual’s behaviour can only be understood in the context of interaction with other human beings within socially created institutions…. It is this interaction with others throughout our lives that shape our values, our goals, our very views of our selves.
   (p. 10)

The particular combination of values needed for human welfare and to create a better society (a ‘Good Society’) also changes from society to society, culture to culture, group to group, and time to time. Some values have more potential for transforming society than others, including the efforts of Feminist Psychologists to enhance women’s power relative to men’s through encouraging self-determination.

Progress is measured in terms of cumulative knowledge

Before the publication of Thomas Kuhn’s (1962) The Structure of Scientific Revolutions, the standard, unquestioned account of how science develops claimed that it involves a steady, continuous accumulation of knowledge providing an increasingly accurate account of how the world works (‘the truth’). This is the Whig version of the history of science that we discussed in Chapter 1. By contrast, Kuhn stressed the discontinuities – a set of alternating ‘normal’ and revolutionary phases in which scientific communities experience periods of turmoil, uncertainty, and angst (Naughton, 2012).


BOX 2.14 KEY THINKER: Thomas Kuhn (1922–1996)
- Kuhn was a physicist, graduating from Harvard in 1943. He worked on radar for the remainder of the Second World War before returning to Harvard to complete his PhD (on quantum physics).
- In teaching a course on science for humanities students, he examined old scientific texts. This led him to conclude that Aristotle, by the standards of present-day physics, knew almost nothing about mechanics (see Chapter 1). But it became blindingly obvious to Kuhn that Aristotle’s science had to be evaluated within the intellectual context of his time (i.e. Aristotle’s) – not ours.
- He moved to the University of California, Berkeley, in 1956, where he wrote Structure …, which is one of the most-cited academic books of all time. Kuhn is widely regarded as, possibly, the most influential philosopher of science of the twentieth century.
- Not only did his contribution mark a fundamental break from several key positivist doctrines, but it also marked the beginning of a new style of philosophy of science that brought it closer to the history of science.
- His emphasis on the importance of communities of scientists, united around a shared paradigm (see text below), essentially triggered the growth of a new academic discipline, namely the sociology of science: researchers began to examine scientific disciplines in a comparable way to how anthropologists study exotic tribes. Science isn’t regarded as a sacred, untouchable product of the Enlightenment (see Chapter 1), but as just another subculture (see Gross, 2014).

Figure 2.3 Thomas Kuhn.

Paradigms, revolutions, and normal science

Before a discipline can be considered to be a science, there must be an identifiable paradigm. As we noted above when discussing induction as a feature of classical science, ‘facts’ don’t exist objectively or independently of theoretical interpretation of the data that scientists gather as part of the research process. Kuhn, along with other philosophers of science (e.g. Feyerabend, 1965; see below), claims that empirical observations are theory-laden: our theory literally determines how we see the world. In short, a paradigm is

   an entire scientific outlook – a constellation of shared assumptions, beliefs, and values that unite a scientific community and allow normal science to take place.
   (Okasha, 2002, p. 81)

Before a science has become established, it’s said to be pre-paradigmatic (‘pre-scientific’ or ‘immature’), with competing schools of thought offering different procedures, theories, even metaphysical presuppositions. Once a paradigm becomes established, a stage of normal science
is said to have been reached. The community of researchers sharing some common intellectual framework engage in solving puzzles thrown up by the discrepancies (or anomalies) between what the paradigm predicts and what’s revealed by experiment or observation. Most of the time, these anomalies can be resolved, either by incremental changes to the paradigm or by attributing them to experimental error, and the status quo is restored. However, over long periods (many decades or even centuries), unresolved anomalies accumulate and eventually some scientists begin to question the validity of the paradigm itself. At this point, the discipline enters a period of crisis, which can be resolved only by a new paradigm replacing the now deficient, old paradigm. When this paradigm shift occurs, a scientific revolution has taken place, and there’s a return to a stage of normal science. And so the process recurs (see Figure 2.4).

[Figure 2.4 A summary of Kuhn’s stages of scientific development: Pre-science → Paradigm → Normal science → Revolutionary science, with a return to normal science under the new paradigm.]

An evaluation of Kuhn’s stage theory

- A demonstration of the radical nature of Kuhn’s theory is its complete rejection of Popper’s account of the scientific process. As we saw above, Popper saw falsifiability as the defining characteristic of a truly scientific theory. But for Kuhn, the last thing normal scientists seek to do is to refute the theories embodied in their paradigm. ‘Normal scientists’ aren’t trying to test the paradigm; on the contrary, they accept the paradigm unquestioningly and conduct their research within the limits it sets.
- While claiming that all sciences – natural (Naturwissenschaften) and human/social (Geisteswissenschaften) – involve interpretation, Kuhn argued that only the latter involve hermeneutic reinterpretation; this reflects changes in social and political systems.
- Many people were also enraged by Kuhn’s description of most scientific activity as ‘puzzle-solving’, as though this trivializes the scientific process. Puzzles come in all shapes and sizes, and solving them often involves great ingenuity and effort.
- Naughton (2012) gives the example of the discovery of the Higgs boson in 2012, which had been predicted by the ‘standard model’ of particle physics first proposed in the 1970s.
- Perhaps most controversially, Kuhn argued that competing paradigms are incommensurable: there’s no rational, objective way of assessing their relative merits (i.e. showing that one or other is closer to ‘the truth’). Two paradigms may be so different as to make it impossible for them to be compared with each other in any straightforward way. But doesn’t this imply that scientific revolutions must be based, at least partly, on irrational grounds? In turn, doesn’t this mean that paradigm shifts aren’t the great intellectual breakthroughs we take them to be?
- Again, if we cannot choose between competing paradigms in terms of how well they ‘fit the facts’, and if all knowledge is constructed (defined in terms of particular theories), then we’re faced with the problem of relativism: how do we choose between a good and a poor paradigm (Agassi, 1996)? If the facts about the world are paradigm-relative, then they change when the paradigm changes. This challenges the very core of positivist science and implies that a theory’s inherent worth is almost irrelevant. Lakatos (1970) has attempted to combine Kuhn’s analysis of paradigms with the possibility of avoiding the relativity problem; Feyerabend (1978) takes relativism to the extreme, such that ‘anything goes’ – see Gross (2014).
- In a postscript to the second edition of Structure (1970), and in later writings, Kuhn moderated his tone considerably and accused his early critics of having misread his intentions. He wasn’t trying to cast doubt on the rationality of science, but rather to offer a more realistic, historically accurate picture of how science actually develops. By neglecting the history of science, positivists had provided an over-simplistic, even idealistic, account of the scientific process. Part of Kuhn’s legacy has been to show that the history of science is crucial for understanding its development.
- At least as importantly, Kuhn has demonstrated that science is an inherently social activity: the existence of a scientific community, united by a common paradigm, is a prerequisite for the practice of normal science. Related to this is the fundamental point that paradigms change not because newer ones explain the available data better than the older ones, but because the newer ones become more popular: part of this involves conformity processes (see Gross, 2015).
- More broadly, as noted in Box 2.14, Kuhn has had a huge influence on sociologists of science: science should be seen as a product of the society in which it is practised (the ‘Sociology of Scientific Knowledge’). The claim that social and political factors external to science influence the outcome of scientific debates is central to Social Constructionism; the influence of such factors can extend to the very content of accepted scientific theories (see Chapter 3).
- Despite his anti-positivist views, Kuhn was strongly pro-science; ironically, he played a role in the rise of cultural relativism, which argues that science provides only one version of the ‘truth’ about the world (another feature of Social Constructionism).

Pause for thought … 12
- How do you think Kuhn’s argument regarding the incommensurability of different paradigms bears on the Whiggish view that scientific progress is measured in terms of cumulative knowledge? (See text above.)
- In terms of Kuhn’s account of the developmental stages of science (pre-science, normal science, revolutionary science), how would you assess the scientific status of Psychology? (Looking back at Chapter 1, in Psychology’s history did/do any of the schools of thought described constitute a paradigm, and have there been any revolutions?)

Conclusions

If old and new paradigms are incommensurable, then we cannot say that scientific revolutions involve the replacement of ‘wrong’ ideas by ‘right’ ones (i.e. the ‘linear’ view of scientific progress). To make this claim presupposes the existence of a common framework for evaluating the two paradigms, which is precisely what Kuhn is denying. As Okasha (2002) puts it:

   Incommensurability implies that scientific change, far from being a straightforward progression towards the truth, is in a sense directionless: later paradigms are not better than earlier ones, just different.
   (p. 86; emphasis added)

Whether Psychology has ever had a paradigm is (still) hotly debated. Table 2.2 shows a variety of views regarding the status of Psychology in terms of Kuhn’s stages/phases of scientific development.

Table 2.2 The status of Psychology in relation to Kuhn’s stages of scientific development

Pre-science: Kuhn himself argued that Psychology is pre-paradigmatic; Joynson (1980) and Boden (1980) agree.

Normal science: according to Valentine (1982), Behaviourism comes as close as anything could to a paradigm. It provides (1) a clear definition of the subject-matter (behaviour as opposed to ‘the mind’); (2) fundamental assumptions, in the form of the central role of learning (in particular, conditioning) and the analysis of behaviour into stimulus–response units, allowing prediction and control; and (3) a methodology, with the controlled experiment at its core.

Revolution: Palermo (1971) and LeFrancois (1983) argue that Psychology has already undergone several paradigm shifts. Structuralism, represented by Wundt’s introspectionism, was replaced by Watson’s Behaviourism, which in turn was largely replaced by Cognitive Psychology (based on the computer analogy and the concept of information processing).


Glassman and Hadad (2013) disagree with Palermo and LeFrancois, arguing that, unlike physics, Psychology has never had a single paradigm, i.e. a complete reorientation of the discipline as a whole. Instead, there have been several schools of thought (or theoretical approaches), often overlapping (both historically and geographically), as we saw in Chapter 1. These have all enjoyed varying degrees of popularity (at different times and in different countries), but it’s arguable whether any one merits the status of a paradigm as Kuhn defines it.

Smith et al. (1998) identify psychoanalysis, Behaviourism, sociobiology, and the information-processing and cognitive-developmental approaches as paradigms; the last is the most important as far as Developmental Psychology is concerned. For Davison et al. (2004), the current paradigms in the study and treatment of mental disorders (psychopathology) are the Biological, Psychoanalytic, Humanistic-existential, Behaviourist, and Cognitive.

In conclusion, this chapter and Chapter 1 have focused on what Harré (e.g. 1989) refers to as the ‘old paradigm’, another term for ‘mainstream Psychology’. In Chapter 3, we shall take a look at some fundamental changes within Psychology during the past 20–30 years; the ‘new paradigm’ (Harré, 1995a) is an umbrella term for a number of theoretical and methodological challenges to the mainstream. Whether or not these changes justify the application of Kuhn’s concept of a paradigm, the social construction of knowledge, together with its relativism, lies at the heart of this challenge.

Pause for thought – answers

1 Induction (or the inductive method). This would lend itself to a seminar presentation (see pages 51–54).
2 Hypothetico-deductive method.
3 No: she wouldn’t have had the relevant sensory (i.e. visual) experience. In technical psychological language, she would lack cross-modal transfer.

Much of the early evidence from the study of cataract patients (Hebb, 1949; Sinha, 2013; Von Senden, 1960/1932) supported Locke. A famous exception is Gregory and Wallace’s (1963) study of S.B.; he could recognize objects visually, but only if he was already familiar with them via touch (he displayed good cross-modal transfer). He had difficulty recognizing unfamiliar objects through vision alone.

5 ‘Constant conjunction’ is an alternative term for ‘correlation’; this refers to two (or more) events or stimuli (‘variables’) occurring at the same time. Therefore, if we take the cause to precede the effect, then we cannot equate (or ‘reduce’?) a cause-and-effect relationship to a correlation.


The standard argument for not being able to infer cause and effect from a correlation is that there may be a third variable that is the cause of both the correlated variables. For example, if we claim that smoking causes schizophrenia, it remains possible that there is some genetic factor that predisposes individuals with a certain gene (or genes) both to smoke and to develop the symptoms of schizophrenia.


6 Experimental control involves manipulating one (or more) independent variable (IV) to see its effect on one (or more) dependent variable (DV), while holding all other (relevant) variables constant. This suggests that ‘cause’ and ‘IV’, and ‘effect’ and ‘DV’, are more-or-less synonymous – at least in the context of psychological experiments.

7 For Freud, the unconscious, preconscious, and conscious represent ‘levels’, or separate ‘layers’, of consciousness – rather than a continuum. Thoughts, ideas, etc. don’t move easily between the levels/layers – especially from the unconscious to the conscious.

Although most present-day Cognitive Psychologists would agree with Kant that consciousness exists on a continuum, they would argue that there’s ample evidence to suggest that much of our mental processing takes place outside our immediate awareness (i.e. the cognitive unconscious; see Chapter 6) (Beyerstein, 2007).

8 In discussing gender and the brain, Eliot (2012) states that, while brain differences are indisputably biological, they aren’t necessarily hardwired. What’s crucial, and often overlooked, is that experience itself changes brain structure and function.

9 It refers to overt behaviour (as opposed to what’s going on in the brain, i.e. a ‘centralist’ form of materialism).

10 (1) External influences → brain → affective (mood)/cognitive changes. (2) External influences → brain → affective/cognitive/behavioural changes. (3) Change to brain structure → two separate ‘minds’/consciousness. (4) Defined in terms of a person’s perception of their ability to meet the demands being made on them, this illustrates ‘mind’ → ‘stress emotions’ (anxiety, depression, etc.). (5) If a placebo is physiologically inert, then it’s the belief that it has real effects which accounts for these (physiologically real) effects; so another case of ‘mind’ → brain/body.

11 Induction alone can never be sufficient to prove the truth or validity of a psychological theory. ‘Prove’ implies ‘certainty’, and because it only takes one black swan to disprove the claim that all swans are white, there is always the (logical) possibility that it will be disproved. Until such disproof occurs, the most we can claim for any theory is that there is x amount of evidence to support it. Rather than certainty, the existing evidence (built up through inductive methods) provides some degree of probability (which is the basis of statistical significance).

12 Traditional philosophy of science had no real difficulty in choosing between competing theories – you simply make an objective comparison, in the light of available evidence, and decide which is the better (closer to ‘the truth’). But this clearly presumes that the theories use a common language to express their respective claims, and Kuhn rejects this presumption; paradigms represent different worldviews and so cannot be compared.

By the same token, the later paradigm doesn’t supersede the earlier one; i.e. it doesn’t replace the ‘wrong’ version of the truth with the ‘right’ (or ‘more right’) version (as claimed by the Whiggish view of scientific history, which takes a linear view of scientists moving gradually towards ‘the truth’).

Chapter 3
Challenging the mainstream: new paradigms for old

As we noted at the end of Chapter 2, scientific Psychology, in so far as it has based itself on classical, natural science (Naturwissenschaften), has been called the ‘old paradigm’ (Harré, 1989) (or modern/modernist science). In that chapter we identified the major characteristics of classical science and considered their relevance and appropriateness as far as Psychology is concerned. The next logical step is to focus on the old paradigm – what are the defining features of positivist Psychology as it has tried to emulate the natural sciences? As before, we shall then assess these features of the old paradigm in terms of their appropriateness for a psychological science of human beings. Finally, we shall consider major assumptions and practices of the ‘new paradigm’ (Harré, 1995a), which represents a range of challenges to mainstream Psychology. These challenges have come in many guises and are known by many different names: some of the most commonly cited are Social Constructionism, Discursive Psychology, Feminist Psychology, and Cultural Psychology. These are sometimes referred to, collectively, as Critical Psychology; sometimes, Critical Psychology (and Critical Social Psychology) is discussed as a challenge to the mainstream in its own right. We should also note that certain challenges have come from directions that lie far outside anything that could ever remotely be described as ‘new paradigm’ (or postmodern/postmodernist).

Mainstream Psychology: modernism and the old paradigm

Modernity refers to the ‘age’ (i.e. a particular period in the history of Western culture, dating from around 1770, often called the post-Enlightenment); it marks the emergence from the ‘dark ages’ of the medieval period. Modernism refers to the ‘spirit of the age’ (or zeitgeist) of modernity, a commentary on it (Stainton Rogers et al., 1995), or a cultural expression of it (in art, architecture, literature, music, etc.) (Kvale, 1992). For Enlightenment thinkers, it was no longer necessary to bend unquestioningly to the totalitarian force of religious – or royal – decree (Gergen, 2001). Largely influenced by Descartes’ account of the conscious, rational, mind (see Chapter 2), the Enlightenment aimed at replacing irrationality with reason, superstition with empirically validated ‘true knowledge’, and through this to pursue ‘human betterment’. Science – especially the human sciences – was crucial for achieving these goals.

The bio-social sciences both were founded by way of Modernism and have played a significant role in making the modern world in which we live (i.e. the world which we, the writers – and most of the readers – of this book inhabit)…. Notions of individuals-in-society, of interpersonal relations, of ‘selves’ and social forces and so on, are so established that they can appear as if they were ‘natural’ ways of speaking about the world, rather than modern constructions.
(Stainton Rogers et al., 1995, p. 16)

• Give some examples of modern psychological constructions that we implicitly believe reflect what people are really, objectively, like. (See Danziger’s discussion of ‘natural’ and ‘psychological’ kinds in Chapter 1.)

As Stainton Rogers et al. observe, we take constructions such as personality, attitudes, thinking, feeling, and doing as essential qualities of human experience. What they mean here by ‘essential’ is not ‘necessary’, but ‘inherent’ or ‘actually part of the make-up of ’: essentialism is one important assumption made by mainstream Psychology. Science, as a fundamental feature of the Enlightenment, represents the antidote to dogma: the individual person, rather than God and the Church, became the focus for issues of truth and morality. It was now up to individuals to make judgements (based on objective, scientific evidence) about the nature of reality and, therefore, what were appropriate moral rules for humans to live by (Burr, 1995).

BOX 3.1 The centrality of individual knowledge (Gergen, 2001)

• The seventeenth- and eighteenth-century construction of the individual mind served as the major rationalizing device for the nineteenth-century beginnings of a systematic Psychology.
• The individual mind came to be a pre-eminent object of study, and knowledge of the human mind could be understood as an achievement of the individual minds of scientific investigators.
• If individual mentality is the source of all human conduct, then unlocking the secrets of the mind is to gain a certain degree of control over human action. The individual investigator’s capacities for observation and rationality make him/her best-equipped for such study.
• These twin assumptions continue to underlie research in modern Psychology. In explaining cognitive schemas, information storage and retrieval (see Chapter 7), emotions (see Chapter 5), and so on, the individual Psychologist improves our capacities for prediction and control of human activity.
• Understanding of these fundamental processes might aid the treatment of mental illness, improve education, reduce crime and prejudice, create fulfilling lives, and so on. Through systematic enquiry into individuals’ mental states, we may progressively move towards an ideal society.


Often just referred to as individualism, this modernist focus on processes taking place within individuals reinforces the essentialism described above. Individualism is one of two ‘unexamined pre-assumptions’ which, according to Harré (1989), have haunted – and still do – mainstream Psychology. On this assumption, human action can be fully explained by reference to things taking place ‘inside’ the ‘envelope of the individual’ (such as cognitive processes, based on the analogy of the running of a computer program) or things that individuals ‘possess’ (such as attitudes, motives, and personality). Instead of describing human customs and practices, mainstream Psychologists have looked for (or imagined) mechanisms; and instead of crediting people with the skills necessary for performing correctly, they have attributed hidden states to them (Harré, 1989).

By implication, society is reduced to the cognitive processes and related behaviours of very large numbers of apparently isolated, self-sufficient ‘units’ or ‘atoms’. A further implication is that by studying any one individual, the Psychologist is able to explain how every person functions: people are interchangeable in the sense that, as human beings, we all display the same basic cognitive processes and related behaviours. This is universalism. In turn, these cognitions and behaviours are removed from any real-life context or situation, laid bare – and reproduced – in the psychological laboratory (much like inspecting a sample of a chemical under a microscope). This is called decontextualization (or context-stripping: see Box 3.2).

Science, scientism, and the science of Psychology

Contemporary academic Psychology is a large, multifaceted organism, using a range of techniques to address an array of substantive questions; it’s also clearly developing, as demonstrated by the change of emphasis from behaviour to cognition (Smith et al., 1995) (see Chapter 1). However, despite this significant change of emphasis (what Harré (1995a) calls the ‘first cognitive revolution’), certain central assumptions and practices within the discipline have remained basically the same, and it is these that constitute mainstream Psychology. One of these is individualism, discussed above. The other is scientism (see Box 3.2).

BOX 3.2 Scientism

• Van Langenhove (1995) defines scientism as the borrowing of methods and terminology from the natural sciences in order to discover causal mechanisms that explain psychological phenomena.
• According to Gergen (2001), Enlightenment figures such as Newton and Bacon (see Chapter 2) convincingly demonstrated that if we view the cosmos as material in nature – as composed of causally related entities – and available to observation by individual minds, then enormous strides can be made in our capacities for prediction. Indeed, Gergen states that we typically define knowledge in terms of our ‘discovery’ of cause–effect relationships.
• It maintains that all aspects of human behaviour can and should be studied using the methods of natural science, with its claims being the sole means of achieving ‘objective truth’ (see the discussion of positivism in Chapter 2). This can be achieved by studying phenomena removed from any particular context (context-stripping) and in a value-free way (another feature of positivism).
• The most reliable way of achieving these goals – in particular, discovering cause–effect relationships – is through the laboratory experiment: it provides the greatest degree of control over relevant variables (see Chapter 2).



According to Smith et al. (1995), although much research has moved beyond the confines of the laboratory experiment, the same ‘positivist logic and empiricist impulse’ (p. 2) is still central to how psychological investigation is conceived and conducted:

And within psychology’s conception of itself, method and measurement are accorded privileged status. Whether concerned with mind or behaviour (and whether conducted inside or outside the laboratory), research tends to be construed in terms of the separation (or reduction) of entities into independent and dependent variables and the measurement of hypothesized relationships between them.
(Smith et al., 1995, p. 2)

Similarly, Van Langenhove (1995) claims that, despite the vigorous criticism of the natural science model since the mid-1970s (see Chapter 2), Psychology is still to a large extent ‘submerged’ in that model; the controlled experiment lies at the core of this model. Van Langenhove sees this as having far-reaching effects on the way Psychology pictures people: as more-or-less passive and mechanical information-processing devices (‘objects’) whose behaviour can be split up into variables. It also affects how Psychology deals with people: in experiments, people are treated not as single individuals but as interchangeable ‘subjects’, leaving no room for individualized observations (see above and the discussion in Chapter 2 of the nomothetic–idiographic ‘debate’).

According to Stainton Rogers et al. (1995), mainstream Psychology doesn’t just use certain methods, it is ‘methodolatrous’, i.e. it idolizes scientific method, according it superior status as the only self-respecting means of discovering ‘true knowledge’. The reason for this methodolatry has more to do with the way Psychologists wish to present themselves than with an open-minded evaluation of its practical utility: it offers the credentials of being ‘serious’ and ‘scholarly’, since empirical investigation can only be conducted by ‘experts’, and so confers exclusivity. ‘Understanding people’ is the province of the ‘Academy’, which Foucault (1970) defines as the locus of secret and exclusive – and thus prized – scholarly knowledge; only those within the Academy can fully comprehend this knowledge.

Methodolatry thus serves the purpose of enabling psychologists to cast themselves as the sole architects of legitimate knowledge about the ‘science of behaviour’.
(Stainton Rogers et al., 1995, p. 226)

Various terms have been used to describe this focus on methodology rather than on subject matter (these are primary and secondary, respectively): methodologism (Teo, 2005), the cult of empiricism (Toulmin and Leary, 1985), and the methodological imperative (Danziger, 1985). The experimental-statistical methodology is applied to all research questions, almost ignoring the question of its appropriateness: as Teo (2009) quips, the best thermometer in the world is worthless for measuring speed.

Methodolatry etc., in turn, results in a methodological theory of knowledge (or epistemology). Instead of asking about the nature of knowledge in Psychology, such as whether studies will still be valid 100 years from now and in all cultures (which they should be if they identify a ‘natural law’), it’s assumed that accepting/following the methodological rules used by succeeding generations of Psychologists will automatically (inevitably?) produce relevant, valid psychological knowledge. Such an epistemology also prevents critical questions from being asked, such as ‘What are the personal, social, and political-economic interests involved in conducting a particular study?’ and ‘Who benefits from which results?’



Pause for thought …

1 Quite apart from some of the methodological limitations of the laboratory experiment as discussed in Chapter 2, what do you consider to be some of the ethical issues raised by controlled experiments?

Table 3.1 Major features of mainstream Psychology

• Positivism
• Quantitative methods; statistical analysis; reduction to numbers
• Scientism/methodolatry
• Cause and effect (determinism)
• Atomism
• Decontextualization/context-stripping
• Objectivity
• Universalism
• Value-free
• Realism
• Essentialism
• Individualism
• People-as-objects/entities
• Mechanism/materialism

Challenging the mainstream: crisis and the emergence of a new paradigm

According to Smith et al. (1995):

Psychology is in a state of flux. There appears to be an unprecedented degree of questioning about the nature of the subject, the boundaries of the discipline and what new ways of conducting psychological research are available.
(p. 1)

Different Psychologists will react to this in different ways: turmoil is anxiety-provoking, threatening, challenging, exciting, even a cause for celebration. Smith et al. describe themselves as feeling an excitement generated by a discipline engaging in healthy self-reflection: for a discipline that has traditionally taken individualism and, more importantly, scientism as its foundations, turning its attention to the theoretical basis of its work is a positive development. However, despite the move away from the Behaviourist emphasis on external behaviour towards recognition of the mind (i.e. cognition) as valid subject-matter (see Chapter 1), and despite a considerable amount of psychological research moving beyond the confines of the laboratory experiment, mainstream Psychology has retained its attachment to scientism and its underlying equation of ‘science’ with ‘natural science’.



Revisiting Psychology’s Whig history

According to Van Langenhove (1995), the problem with Experimental Psychology is that it pretends that its experiments are, in principle, no different from experiments in natural science. This implies that human beings are treated as if they are natural objects. While it’s clear that the natural sciences – which do deal with natural objects – are causal sciences, this is far from obvious in the case of human beings (see Chapter 4). However, the image of natural science (in particular, physics) underlying scientism was out-of-date when Psychologists (such as Watson) began modelling their research on it. Also, it’s a mistake to equate the history of Psychology with the history of experimental laboratory research started by Wundt in 1879.

• Based on the account given in Chapter 1, in what ways is it a misconception to regard Wundt – as he is traditionally portrayed – as the originator of the Psychology experiment modelled on the Newtonian physics experiment?

• Wundt was interested in private, individual consciousness – not in overt behaviour (which was why Watson condemned his whole approach as non-scientific/objective). Human beings were certainly not treated as natural objects.
• For Wundt, ‘the problem for a scientific psychology of mind was the creation of conditions under which internal perception could be transformed into something like scientific observation’ (Danziger, 1990, p. 35). Laboratories were such a setting because they enabled the Wundtian Psychologist to ‘tap’ the perceptions before the participants could have self-conscious reflections on them (Van Langenhove, 1995).
• The social organization of Wundtian experimentation was completely different from today’s ways of conducting experiments: for Wundt, experimenters and ‘subjects’ were interchangeable: an experiment ‘was nothing but the practice of a self-conscious community of investigators’ (Van Langenhove, 1995, p. 17). What was going on inside the minds of individuals was intensively explored, which contrasts sharply with mainstream Psychology’s focus on group data/averages (see Chapter 2).
• Also, as we noted in Chapter 1, Wundt believed that experiments (the major method used in the Naturwissenschaften) were only suitable for investigating ‘lower-level’ psychological processes (psychophysiological phenomena, such as sensations, reaction times, and attention). However, in order to study memory, thinking, language, personality, social behaviour, myth, and cultural practices (the ‘higher mental processes’), we need to study communities of people (Völkerpsychologie). Human minds exist within human communities and are too complex to be amenable to experimental manipulation (Danziger, 1990). These higher mental processes can only be studied using the methods of Geisteswissenschaften – social or human sciences and the humanities.

Hermeneutics: behaviour is meaningful

According to Van Langenhove (1995), the crucial difference between the working of causal mechanisms (the legitimate subject-matter of natural science) and human actions is that the latter have meaning for the actors themselves and are mainly intentional (as opposed to determined by external forces).



Concern with meaning lies at the heart of hermeneutics, the science of interpretation. What interpretation provides is understanding, as opposed to explanation: the former is necessarily subjective (as when interpreting a passage in the Bible or a Shakespeare play), while the latter is factual and objective (as in natural science, where the ‘facts’ are beyond individual opinion and constitute ‘the truth’) (Goldberg, 2015). (We noted in Chapter 2 a number of arguments against this positivist view of natural science; consequently, where non-natural objects (i.e. human beings) are involved, these arguments assume even greater relevance.) Freud’s psychoanalytic theory – and therapy – represents a major application of hermeneutic principles (see Chapter 9).

BOX 3.3 Hermeneutics explained

• The term hermeneutics (a Latinized version of the Greek hermeneutice) has been part of common language since the early 1600s. But its history dates back to Ancient Greek philosophy.
• According to Spinoza, a Dutch-Jewish philosopher (1632–1677), in order to understand the most dense and difficult sections of the Holy Scriptures, we must keep in mind the historical horizon in which these texts were written, as well as the mind that produced them.
• In the case of both the Holy Scriptures and nature, our understanding of the parts depends on our understanding of a larger whole, which, again, can only be understood if the parts are understood. Movement back and forth between the parts and the whole of the text is known as the hermeneutic circle.
• The circle comprises three steps: (1) a tentative knowing of what is to be uncovered/disclosed; (2) an approach that makes things comprehensible; and (3) a grounding in a definite conception. The circle is the structure of meaning. The process begins with the fact that we routinely know what we’re looking for: as Heidegger (1962/1927), commonly regarded as the father of hermeneutics, would say, interpretation is grounded in a fore-conception (see below).
• Dilthey, who, independently of Windelband, distinguished between the Naturwissenschaften and Geisteswissenschaften, made a second, related distinction between Erlebnis (lived experience, related to self-understanding) and Verstehen (related to understanding of others).
• Arguably, Dilthey’s most important contribution to hermeneutics is the fact that he was the first to ground hermeneutics in a general theory of human life and existence, paving the way for what’s referred to as the turn to ontology.
• Heidegger’s (1927) Sein und Zeit (‘Being and Time’) completely transformed the discipline of hermeneutics: hermeneutics is neither a matter of understanding linguistic communication nor a methodological basis for the human sciences, but rather it concerns the most fundamental conditions of man’s being in the world (ontology).
• The natural sciences model tends to ignore the pre-scientific aspects of human existence. For Heidegger, understanding isn’t something we consciously do or fail to do, but something we are: understanding is part of what it means to be a human being. The world is familiar to us in a basic, intuitive way: we don’t have to gather a collection of neutral facts, from which we may then infer a set of universal statements, laws, or judgements, in order to understand the world; it is implicitly intelligible to us. The fundamental familiarity with the world is brought to reflective consciousness through the work of interpretation; this makes things, objects, the fabric of the world appear as something (Heidegger, 1927).


• Gadamer (e.g. 1976) extended Heidegger’s ideas by exploring the consequences of the ontological turn for our understanding of the human sciences. It’s through language that the world is opened up for human beings: we cannot really understand ourselves unless we understand ourselves as situated in a linguistically mediated, historical culture. Language is our second nature.
• This has consequences for our understanding of art, culture, and historical texts (the subject-matter of the human sciences). These cannot be neutral or value-free objects for scientific investigation (unlike natural objects); they’re part of the horizon in which we live and through which our worldview gets shaped. The past is passed on to us through the complex and ever-changing fabric of interpretations, which gets richer and more complex over the decades and centuries.

(Based on Goldberg, 2015; Ramberg and Gjesdal, 2005)

In drawing a distinction between psychoanalytic understanding and that provided by, for example, psychodynamic psychotherapy (see Chapter 9 and Gross, 2015), Goldberg (2015) distinguishes between hermeneutic science, which deals with meanings, and empirical science, which involves rules and establishes facts (see Chapter 2). This is the difference between (1) what things mean to us and (2) what can be measured or explained in a ‘scientific’ manner, respectively.

Meanings are open to a number of interpretations, all of which may be true, and so may allow us to pick and choose. Facts and truths do not allow for interpretation and choice, and they are but one category of meaning. Thus, meaning covers a large area.
(Goldberg, 2015, p. 18)

According to Goldberg (2015), empirical science aims to discover widely applicable categories, thereby deriving normal or abnormal standards and goals; examples include both physiological functions (such as blood pressure) and psychiatric conditions. However, such normative considerations aren’t usually relevant in interpretive science, which resists searching for correct or true decisions. Hermeneutics involves a process of continually revealing what may be thought of as true, a to-and-fro dialogue of questions and answers that continues until an agreement is reached (as distinct from universal truths) (Ingram, 1985); the latter is sometimes referred to as pluralism. At the risk of oversimplification, while empirical (natural) science seeks ‘the truth’, the human sciences and humanities seek multiple ‘truths’ through the use of hermeneutic methods. Another important difference is that while hermeneutics favours qualitative analysis, leading to knowledge of particular individuals, mainstream Experimental Psychology emphasizes group averages (based on quantitative analysis), in which individual scores are swamped.

However, as we saw in Chapter 2, the study of (1) particular/individual cases and (2) groups is a false dichotomy.

• Explain what this means and give some examples of how the study of individuals can generate data that are generalizable in a way that’s required by traditional, natural science (see pages 48–51).



Van Langenhove (1995) argues that, at the dawn of the emergence of the social sciences as institutional practices (i.e. separate disciplines), roughly between 1800 and 1830, the natural sciences and hermeneutic models were both available as approaches to the study of people and society. While the former came to dominate mainstream Psychology, the latter has never disappeared altogether. However:

If Psychology should picture and treat people as persons rather than as natural objects, then speech acts should be taken as the substance of the social and psychological world. This would make the hermeneutical approach a far better model for psychology than the misplaced scientism of the natural sciences model.
(Van Langenhove, 1995, p. 23)

Discursive Psychology and the ‘second cognitive revolution’

If hermeneutics represents an arguably more valid alternative to the scientism of mainstream Psychology, another challenge came in the form of what Harré (1995a) calls the ‘second cognitive revolution’. This was marked by the return to the traditional, common sense idea that Psychology is the study of active people, singly or in groups, using material and symbolic tools to accomplish all sorts of projects according to local standards of correctness (see Chapter 4). A ‘scientific’ version of the informal psychology of everyday life is realized in Discursive Psychology; this is related to a number of other disciplines, including linguistics, anthropology, and narratology. According to Harré, ‘The second cognitive revolution is nothing other than the advent of discursive psychology!’ (1995a, p. 144).

According to this approach, the mind isn’t a mental machine processing information (see Chapter 7); rather, ‘the mind’ denotes certain activities (i.e. skilled uses of symbols), performed both publicly and privately. Just as discourse is primarily public, and only secondarily private, so cognition (the use of various devices for mental tasks) is also primarily public. ‘Discourse’ usually implies verbal presentation of thought or argument; more broadly, it refers to various kinds of cognitive activities which utilize devices that point beyond themselves and which are normatively constrained (i.e. subject to standards of correctness or incorrectness). While language is just one example, it’s by far the most important such device.

The first cognitive revolution (using the computer analogy – see Chapters 2 and 7) gave the impression that behind what we’re doing is another set of cognitive processes – the processing of information. But there are no other such processes:

There are just the neurophysiological processes running in the brain and nervous system and the discursive processes engaged in by skilled actors in the carrying out of this and that project. Neurophysiological processes are governed by the causality of physics and chemistry, discursive activities are governed by the rules and conventions of symbol use.
(Harré, 1995a, p. 158)

Psychology is not the study of the reactions of isolated individuals to some environmental contingency, but, rather, is:

The study of how, why, when, and where active people use signs for all kinds of purposes: thinking, planning, anticipating, judging, lying, doing calculations, forming consortia, denigrating rivals, presenting recollections of the past, and so on.
(Harré, 1995a, pp. 157–158)

Challenging the mainstream

According to Potter (1996), the central idea of Discursive Psychology is that the main business of social life is found in interaction, stressing the practical dimension of social life. For example, how does a husband produce a particular narrative (account) of relationship breakdown to show that the problem is his wife’s rather than his own? Or how is a rape victim presented as subtly responsible for the attack and how might she resist such a presentation? Through analysis of tape recordings, transcripts, videos, and other records of interaction, Discursive Psychologists have found that they need to rework notions such as memory (see Chapter 7), attitudes, the self, and attribution of responsibility (see Chapter 12).

Pause for thought …
1 Broadly speaking, how do you think a Cognitive Psychologist and a Discursive Psychologist might frame ‘memory’?

While a Cognitive Psychologist would see ‘memory’ as a cognitive process taking place inside the head of an individual, or as a manifestation of hidden subjective phenomena, for a Discursive Psychologist ‘memory’ is the phenomenon. The discursive approach sees remembering as a fundamentally social activity; this makes it necessary to consider the structure of the social group in which it takes place, especially the power relationships between the members (both as individuals and as representatives of memorially relevant categories of persons, such as gender) (Harré, 1995a).

The difference between the mind or personality as seen in this way and the traditional view is that we see it as dynamic and essentially embedded in historical, political, cultural, social and interpersonal contexts. (Harré, 1993, p. 27)

BOX 3.4 KEY THINKER: Rom Harré (born 1927)

L Rom Harré (full name: Horace Romano Harré) was born in New Zealand, where he studied chemical engineering, mathematics, and philosophy.
L He gained a BPhil at Oxford University (1956), after which he taught at the Universities of Birmingham and Leicester. He then returned to Oxford in 1960, where he came to play an important part in the ‘discursive turn’ (see text above). Arguably, his most influential book to date has been The Explanation of Social Behaviour (1972), written with Paul Secord.
L After compulsory retirement from Oxford in 1995, he joined the Psychology Department of Georgetown University, Washington, DC.

Figure 3.2 Rom Harré.

L Harré is one of the world’s most prolific social scientists and received a Lifetime Achievement Award from the Society for Theoretical and Philosophical Psychology (Division 24 of the American Psychological Association).
L Recently, he has pioneered positioning theory, which he defined as being:

Based on the principle that not everyone involved in a social episode has equal access to rights and duties to perform particular kinds of meaningful actions at that moment and with those people…. A cluster of short-term disputable rights, obligations and duties is called a ‘position’. (Harré, 2012, p. 193)

L Positioning theory is concerned with ‘how people use words (and discourse of all types) to locate themselves and others’. Positioning has direct moral implications (such as an individual or group being located as ‘trusted’/‘distrusted’, ‘with us’/‘against us’, ‘to be saved’/‘to be wiped out’) (Moghaddam and Harré, 2010, p. 2).
L Positioning theory is a Social Constructionist approach (see text below), which first emerged in the 1980s primarily in the area of gender studies. Other notable positioning theorists include the Feminist Psychologist Wendy Hollway and Social Psychologists Jonathan Potter (see text above) and Margaret Wetherell.

The feminist critique of science

According to Nicolson (1995), during the preceding 100 years there have been two major emphases in feminist scholars’ critiques of the aims, objectives, and methods of positivist science:

(1) The traditional scientific insistence on context-stripping as a function of its preferred methodology (see above), and the consequent failure to take into account the implications this has for the construction of knowledge about women.
(2) The identification of the clear sense in which positivist science, far from being value-free, displays a clear bias towards the ‘pathologization’ of women, particularly in relation to reproductive and mental health (see Chapter 13).

For Feminist Psychologists, a crucial feature of the context-stripping in mainstream Psychology is the exclusion of structural/power relations between individuals. This results in a failure to identify the relationship between certain kinds of behaviour and the consequences of patriarchy (or ‘patriarchal culture’, a set of pervasive values which privilege maleness/masculinity over femaleness/femininity – though not necessarily in a conscious or overtly misogynistic way; Nicolson, 1995). Because mainstream Psychology fails to recognize its inherent masculinist bias, it’s taken to be identifying ‘natural’ behaviours (such as those relating to gender-stereotypical roles).

There is evidence that psychological science, by not problematizing power and context, has actively contributed to the subordination of women through reinforcing misogynist mythology under the label ‘science’. (Nicolson, 1995, p. 123)

Nicolson describes the example of studies of the menstrual cycle. While a number of studies have claimed that the menstrual cycle produces behavioural, cognitive, and emotional deficiencies, reanalysis of the data and reassessment of the experimental methods used have shown the flawed nature of the evidence.

Why is the scientific method gender-biased?

Nicolson (1995) identifies three major problems associated with the positivist nature of mainstream Psychology in relation to the study of women and gender differences. These are outlined in Box 3.5.

BOX 3.5 What makes mainstream Psychology gender-biased?

L The experimental environment aims to take the individual ‘subject’s’ behaviour – rather than the ‘subject’ herself – as the unit of study; the meaning of the behaviour (including the social, personal, and cultural context in which it occurs) is thus made irrelevant. Consequently, gender differences in competence and behaviour are attributed to intrinsic rather than contextual factors (either biological or those associated with gender role socialization).
L Far from being context-free, Experimental Psychology takes place in a very specific context which characteristically disadvantages women (Eagly, 1987). A woman is placed in a ‘strange’ environment, stripped of her social roles and accompanying power and knowledge (whether professional role or the set of competencies by which she defines her capabilities), and is expected to respond to the needs of (almost inevitably) a male experimenter. In this situation, she will have lost any social power she might have achieved in the outside world: an anonymous woman interacting with a man who is in charge, with all the social meaning attributed to gender power relations (Leonard, 1984).
L Perhaps most crucially, scientists fail to take account of the influence of the relationship of power to knowledge (e.g. Foucault, 1977). Psychology relies for its data on the practices of socialized and culture-bound individuals, so that to explore ‘natural’ or ‘culture-free’ behaviour (i.e. behaviour unaffected by culture, social structures, or power relations) is, by definition, impossible. Yet this normally goes unacknowledged, especially in relation to how the media report scientific ‘discoveries’. Such reports can influence how individuals assess their own behaviours in relation to the scientific ‘norms’.

(Based on Nicolson, 1995)

Social Constructionism: postmodernism and the new paradigm

As we’ve seen, Discursive Psychology challenges the mainstream principle of individualism, and positioning theory has been described as a Social Constructionist approach. We’ve also considered some of the major issues identified by Feminist Psychologists in relation to mainstream, positivist Psychology, a crucial example being essentialism.

Feminist Psychology and Social Constructionism

According to Buss (1978), a much-cited article by Weisstein (1971/1993) foreshadowed the paradigm shift within Psychology (from old to new), from the view that reality constructs the person to the view that the person constructs reality. (The full title of Weisstein’s article is ‘Psychology Constructs the Female; or, The Fantasy Life of the Male Psychologist (with

Some Attention to the Fantasies of His Friends, the Male Biologist and the Male Anthropologist)’.) She, along with a few other pioneers, explicitly used the term ‘Social Constructionism’ as a way of questioning the bases of psychological knowledge.

One form that the construction of gender can take is social expectations of behaviour – both other people’s expectations of our behaviour and our expectations of our own behaviour. More specifically, expectations can influence behaviour through the self-fulfilling prophecy; Weisstein cites classic studies by Rosenthal and his colleagues (Rosenthal, 1966; Rosenthal and Jacobson, 1968), which demonstrate how expectations can change experimental outcomes (see Gross, 2015). Weisstein concludes that, even in carefully controlled experiments, and without any obvious or conscious difference in behaviour, the hypotheses we start with will significantly influence another organism’s behaviour (see Chapter 2). Weisstein also discusses Milgram’s famous obedience experiments as demonstrating the very powerful impact of the social situation on the behaviour of individuals.

Consistent with this continuing shift towards studying the importance of context, more recent studies of gender have reliably shown that the behaviour we associate with ‘gender’ depends more on what an individual does than on biological sex (e.g. Eagly, 1987). Maccoby (1990) re-analysed studies that claimed to show that little girls are ‘passive’ and little boys are ‘active’. She concluded that boys and girls don’t differ as groups, in some consistent, trait-like way; rather, their behaviour depends on the gender of the child they’re playing with: girls as young as three are only passive when a boy is present but are just as independent as boys in an all-girl group.

These and other similar findings led Tavris (1993) to conclude that:

Gender, like culture, organizes for its members different influence strategies, ways of communicating and ways of perceiving the world. The behaviour of men and women often depends more on the gender they are interacting with than on anything intrinsic about the gender they are – a process that West and Zimmerman (1987) call ‘doing gender’. (pp. 160–161)

A major aspect of the context of people’s lives is the power they have (or lack) in influencing others and in determining their own lives. Clearly, the ‘two cultures’ of women and men are unequal with respect to power, status, and resources. Tavris (1993) believes that many behaviours and personality traits thought to be typical of women (such as the ability to ‘read’ non-verbal cues, the tendency to blame themselves for their shortcomings, and low self-esteem) turn out to be typical of women – and men – who lack power: they seem to be the result of powerlessness, not the cause.

What is postmodernism?

Earlier in the chapter, we distinguished between modernity (a particular period in the history of Western culture, dating from around 1770) and modernism (the ‘spirit of the age’, or zeitgeist, of modernity). According to Burr (2015), the core of postmodernism as an intellectual movement is to be found in art and architecture, literature, and cultural studies – rather than in the social sciences. It represents a questioning and rejection of the fundamental assumptions of modernism. For example, the German philosopher Nietzsche claimed that the Enlightenment had turned science, reason, and progress into its own dogmas. This represents the beginnings of postmodernism, which rejects both the idea that there can be any kind of ultimate truth and structuralism, the view that the world as we see it is the result of hidden structures.

Postmodernism emphasizes the co-existence of a multiplicity and variety of situation-dependent ways of life (pluralism).

We in the West are now living in a postmodern world … that can no longer be understood by the appeal to one over-arching system of knowledge, such as a religion. Developments in technology, in media and mass communications means that there are available to us many different kinds of knowledge…. Postmodernism thus rejects the notion that social change is a matter of discovering and changing the underlying structures of social life through the application of a grand theory or metanarrative. (Burr, 2015, p. 14)

As we noted in Chapter 2, modernist, positivist science presupposes the existence of a stable, underlying reality which, through repeated observations (inductive approach) and/or the testing of hypotheses derived from a theory (hypothetico-deductive approach), waits to be ‘discovered’. Diametrically opposed to this is the postmodernist view that the world can be constructed (or construed) in many different ways. By the same token, in the context of Psychology, postmodernist approaches (the ‘new paradigm’) challenge and (often) reject the assumptions and practices of mainstream Psychology (the ‘old paradigm’) (see Box 3.5).

Postmodernism has provided the ‘cultural and intellectual backcloth’ against which Social Constructionism has taken shape, and which to some extent gives it its peculiar flavour (Burr, 2015). Similarly, Gergen (1995) claims that ‘Social constructionist dialogues are essentially constituents of the broader dialogues I am terming postmodern’ (p. 148). While not all postmodern theorists would consider themselves as constructionists, most constructionists have ‘drawn great sustenance from the conversations within the postmodern domains …’ (p. 148). Box 3.6 shows how Social Constructionism represents one of the major ‘planks’ of postmodernist thinking.

BOX 3.6 Some major features of postmodernist thinking (based on Gergen, 2001)

L From an objective to a socially constructed world. For modernists, the world is simply ‘out there’, available for observation; for postmodernists, to speak of ‘the world’ or ‘mind’ at all requires language, and words such as ‘matter’ and ‘mental process’ aren’t mirrors which we use to reflect the objective world (or labels for the things in it), but are constituents of language systems. In the context of Psychology, what we take as ‘reality’, what we believe to be transparently true about human functioning, is a by-product of communal construction. This doesn’t imply that nothing exists outside of our linguistic practices, only that whatever exists simply exists – regardless of those practices.
L Language: from truthful practice to pragmatic practice. Language is the product not of the individual mind but of cultural process: our descriptions and explanations are generated within our relationships (with each other and the world). As Wittgenstein (1953) argued, language gains its meaning not from its mental or subjective underpinnings, but from its use in action (or ‘language games’, as Wittgenstein called them). On this account, to ‘tell the truth’ isn’t to provide an accurate picture of ‘what actually happened’, but to participate in a set of social conventions, a way of putting things sanctioned within a given ‘form of life’. To be ‘objective’ is to play by the rules (conventions and practices) within a given tradition of social practices (or culture, one example being science).
L From individual reason to communal rhetoric. Language is a system that both precedes and outlives the individual: private rationality is a form of cultural participation simply removed from the immediate pressures/demands of relationship. How could we deliberate privately on matters of justice, morality, or optimal strategies of action, for example, except through the terms of public culture? In the context of science, the individual scientist is only ‘rational’ if she or he adopts the codes of discourse common to her or his particular scientific community.

Given the close ties between postmodern thinking and Social Constructionism, it’s not surprising to find the themes described in Box 3.6 recurring when we try to define the latter. Before we do this, it’s worth looking at some of the influences on the development of Social Constructionism, in particular sociological influences (Box 3.7).

BOX 3.7 Sociological influences and the ‘crisis’ in Social Psychology

L Burr (2015) describes the influence of a number of major philosophers, including Kant, Nietzsche, and Marx. Despite their differences, they shared the belief that knowledge is at least in part a product of human thought rather than grounded in external reality.
L Early in the twentieth century, the sociology of knowledge began to consider how sociocultural forces construct knowledge. More specifically, the sociology of scientific knowledge (SSK) focuses on the practices that help construct scientific knowledge. While aimed mainly at physics and biology, Danziger (e.g. 1990, 1997), Richards (e.g. 2002, 2010) and others have applied the SSK to Psychology. As with Feminist Psychology (see above), SSK has influenced both how science is understood and how some researchers within Psychology work (Jones and Elcock, 2001). Edwards (1997) and Potter (1996) use SSK within their Social Constructionist approaches (see Gross, 2014).
L A major sociological influence on Social Constructionism was Berger and Luckmann’s (1966) The Social Construction of Reality. This book draws on the subdiscipline of symbolic interactionism (Mead, 1934), according to which people construct their own and others’ identities through their everyday encounters with each other in social interaction (see Chapter 12). Related to symbolic interactionism is another sociological subdiscipline, namely ethnomethodology, which tries to understand the processes by which ordinary people construct social life and make sense of it to themselves and each other (see Chapter 4).
L The emergence of Social Constructionism is usually dated from Gergen’s (1973) paper, ‘Social psychology as history’. In it, Gergen argued that all knowledge, including psychological knowledge, is historically and culturally specific, and that we must, therefore, extend our enquiries beyond the individual into social, political, and economic realms for a proper understanding of the evolution of present-day psychology and social life. Moreover, there’s no point in looking for once-and-for-all descriptions of people or society, since the only constant feature of social life is that it’s constantly changing. Social Psychology thus becomes a form of historical scholarship: all we can ever do is try to understand and account for how the world appears to us at the present time.
L Gergen’s paper was written at the time of ‘the crisis in Social Psychology’. As a discipline, Social Psychology was at least partly rooted in the attempts by Psychologists to provide the US and British governments during the Second World War with knowledge that could be used for propaganda and manipulation of the public. The research was almost exclusively laboratory-based, reflecting the influence of Experimental (i.e. mainstream, ‘general’) Psychology; hence, Social Psychology emerged as an empiricist, positivist science that served, and was funded by, those in positions of power within both government and industry (Burr, 1995).
L Starting in the late 1960s to early 1970s, some Social Psychologists were becoming increasingly concerned about these issues: the ‘voice’ of ordinary people was seen as missing from Social Psychology’s research practices: by concentrating on decontextualized laboratory behaviour, the discipline was ignoring the real-world contexts which give human action its meaning (see Chapter 4). Several books were published, each proposing its own alternatives to positivist science by focusing on the accounts of ordinary people (e.g. Harré and Secord, 1972: see Box 3.4) and by challenging the oppressive and ideological use of Psychology (e.g. Armistead, 1974; Brown, 1973). These concerns are clearly seen today in those who adopt a Social Constructionist approach.

Defining Social Constructionism

According to Burr (1995, 2015), there’s no single definition of Social Constructionism that all those who would describe themselves as Social Constructionists would agree on. Rather, what links them is a kind of ‘family resemblance’ (what Rosch (1973) meant by ‘prototypes’ or ‘fuzzy sets’), i.e. we could loosely categorize as Social Constructionist any approach based on one or more key assumptions (Gergen, 1985) that are described below (Burr, 1995, 2015).

1 A critical stance towards taken-for-granted knowledge: our observations of the world don’t unproblematically reveal the true nature of the world, and conventional knowledge isn’t based on objective, unbiased ‘sampling’ of the world. The categories we use to understand the world don’t necessarily correspond to natural (‘real’) categories or distinctions (i.e. essentialism); Social Constructionism, therefore, adopts an anti-essentialist and an anti-positivist position.

2 Historical and cultural specificity: how we commonly understand the world, and the categories and concepts we use, are historically and culturally specific; this means that all ways of understanding are historically and culturally relative. The particular forms of knowledge available within any culture are artefacts (products) of that culture; this includes the knowledge generated by science. The theories produced by Psychology thus become time- and culture-bound and cannot be taken as once-and-for-all, universalist explanations of ‘human nature’.

L According to the universalist assumption, since we’re all human, we’re all fundamentally alike in significant psychological functions, and cultural or social contexts of diversity don’t affect the important ‘deep’ or ‘hard-wired’ structures of the mind. The corollary of this assumption is that the categories and standards derived from the study of Western European/North American populations are suitable for ‘measuring’, understanding, and evaluating the characteristics of (all) other populations (Much, 1995).

Pause for thought …
2 What do you understand by ‘ethnocentrism’ and ‘disciplinary parochialism’?

BOX 3.8 Cross-cultural and Cultural Psychology

L It follows from the view that knowledge is culturally created that we shouldn’t assume that our ways of understanding are necessarily any better (i.e. closer to ‘the truth’) than other ways. Yet this is precisely what mainstream (Social) Psychology does.
L According to Much (1995), a new (Trans)Cultural Psychology emerged in North America (e.g. Bruner, 1990; Cole, 1990; Shweder, 1990) as an attempt to overcome biases of ethnocentrism and disciplinary parochialism that have too often limited the scope of understanding within the social sciences.
L Shweder (1990) makes the crucial distinction between Cultural Psychology and Cross-cultural Psychology (CCP: a branch of Experimental, Social, Cognitive, and Personality Psychology). CCP presupposes the categories and models based on (mostly experimental) research with (limited samples of) Euro-American populations; it has mostly either ‘tested the hypothesis’ or ‘validated the instrument’ in other cultures, or ‘measured’ the social and psychological characteristics of members of other cultures using the methods and standards of Western populations, usually taken as a valid universal norm.
L Similarly, Gergen (1996) identifies two major roles played by culture within CCP, in both cases being relegated to secondary importance: (1) cultural differences serve the same basic scientific role as the study of personality, i.e. as a moderator/qualifier for more general theoretical claims (regarding, say, learning, motivation, memory, perception) – cultural variations are either deemphasized or simply bracketed for ‘later study’; (2) culture provides the testing ground for the universality of the general theory – cultural particularities are mere ‘impediments’ to demonstrating the validity of the theory.
L Sinha (1990) has questioned the predominance of vertical collaboration, that is, Psychologists from developing countries (Psychology’s Third World: Moghaddam and Studer, 1997) working on research initiated by investigators in developed nations (Psychology’s First and Second Worlds: Moghaddam and Studer, 1997). He advocates horizontal collaboration among researchers working on practical problems across various regions of a country or with those in other developing nations. Misra and Gergen (1993) have explored important limitations of North American theories and research practices when imported into the Indian cultural context (see Gross, 2014).
L According to Much (1995), the new Cultural Psychology rejects the universalist model of CCP, implying that:

An ‘intrinsic psychic unity’ of humankind should not be presupposed or assumed. It suggests that the processes decisive for psychological functioning … may be local to the systems of representation and social organization in which they are embedded and upon which they depend. (Stigler et al., 1990, p. xiii)

L In stark contrast to the universalist assumption, a genuinely Transcultural Psychology (‘the interplay between the individual and society and [symbolic] culture’; Kakar, 1982, p. 6) would base its categories, discriminations, and generalizations upon empirical knowledge of the fullest possible range of existing human forms of life, without privileging one form as the norm or standard for evaluation.

Pause for thought …
According to Burr (2015), the assumption made by Social Constructionism regarding the historical and cultural specificity of psychological knowledge challenges the traditional view of scientific progress (as championed by the Whig history of Psychology; see Chapter 2).
3 Briefly outline this traditional, widely held view.
4 How does Kuhn challenge this view of science in general (including Psychology)?

3 Knowledge is sustained by social processes: our currently accepted ways of understanding the world (‘truth’) don’t reflect the world as it ‘really’ is (objective reality), but are constructed by people through their everyday interactions. Language, especially, is of central importance. As Burr (1995) says, we’re born into a world where our culture’s conceptual frameworks and categories already exist; we acquire them through our acquisition of language and they’re reproduced every day by everyone who shares a culture and a language. Language is a necessary prerequisite for thought as we know it.

L By giving a central role to social interactions and seeing these as actively producing taken-for-granted knowledge of the world, it follows that language itself is more than simply a way of expressing our thoughts and feelings (as typically assumed by mainstream Psychology). Just as we often don’t know what we’re thinking/feeling until/unless we try to put these thoughts/feelings into words, so, when people talk to each other, they (help to) construct the world: language is a form of action (i.e. it has a performative role). This represents an anti-realist view of the world (see above).

4 Knowledge and social action go together: there are many possible ‘social constructions’ of the world, each bringing with it, or inviting, a different kind of action: how we account for a particular behaviour (what caused it) will dictate how we react to, and treat, the person whose behaviour it is. Related to this is individualism, which, as we saw above, refers to mainstream Psychology’s approach to explaining attitudes, motives, cognitions, and other kinds of ‘internal’ human attributes. For Social Constructionists, explanations of these ‘things’ are to be found neither inside the individual psyche, nor in social structures or institutions (as advocated by sociologists); rather, the appropriate focus of enquiry is the interactive processes that take place routinely between people. Instead of seeing knowledge as what people have (or don’t have), it is something that people do together (Burr, 1995).

What counts as Social Constructionist?

In addition to Discursive Psychology (discussed above), Critical (Social) Psychology represents a major form of Social Constructionism. Compared with Discursive Psychology, Critical Psychology looks more broadly at the structure of discourse in its cultural context, asking where discourses come from and how they constrain people’s lives. It also focuses on how selfhood, subjectivity, and power are reflected in specific discursive practices. While mainstream Psychology sees ‘single parent’, ‘woman’, ‘individual’, ‘self’, and so on, as ‘natural’, objective categories (see Chapter 2), for Critical Psychology they are ‘subjective positions’, constructed through discourse, providing us with ways of thinking and communicating about ourselves (Gross, 2014). For Gough and McFadden (2001), Critical Social Psychology challenges social institutions and practices – including the discipline of Psychology itself – that contribute to forms of inequality and oppression.

Major influences on the development of Critical (Social) Psychology include Michel Foucault (see Box 3.9), the French psychoanalyst Lacan (1901–1981), Marxism, and contemporary feminism (see discussion of Feminist Psychology above and in Chapter 2). Not surprisingly, Critical (Social) Psychology (compared with Discursive Psychology) is more explicitly political, advocating empowerment and emancipation of those who are oppressed by psychological and other discourses (McGhee, 2001).

BOX 3.9 KEY THINKER: Michel Foucault (1926–1984)

L Foucault was the son of a surgeon, who wanted him to pursue medical training. But instead, Foucault graduated in philosophy (1948), having come under the influence of the famous French philosopher Merleau-Ponty. He then graduated in Psychology (1950) and was awarded a diploma in psychopathology in 1952.
L While teaching in several European countries over the next three years, he was working on his first major book, Madness and Civilization (1961). In it, he combined historical and philosophical analyses in order to throw light on a fundamental aspect of human psychology.

Figure 3.2 Michel Foucault.

L Foucault’s interest in politics was largely influenced by his homosexual relationship with Daniel Defert, who was politically active. Foucault was drawn into the 1968 student rebellion in France and was actively involved in setting up an institution to help prisoners’ voices be heard outside the prison walls.
L He was elected to the Collège de France in 1970, the ultimate academic accolade, after which he began work on his most influential book, Discipline and Punish, a study of the changing ways that the criminal justice system uses the bodies of convicted criminals. This work transformed our conception of the role of prisons and the nature of criminality. The rest of his life was devoted to his massive study of sexuality.
L Central to all his thinking was the idea that human beings’ main characteristics are the product of discursive practices; these aren’t fixed and universal, but are historically unstable (i.e. they change over time). Such central person-categories as criminality, madness, and sexuality have been constructed through changing ways of talking and writing about people; these have become means of subtly exercising power in the social order.
L According to Harré (2006):

Foucault’s studies [have] been a reinforcement of the post-modernist claim that there is no such thing as a fixed and permanent human nature, the same at all times and places. Human nature has not been just superficially modified by culture and history. This has profound implications for the project of psychology. Psychological research cannot be revealing universal laws of cognitive functioning and social categorizing…. If there is no universal human nature there can be no laws of it … psychologists mistake a local ethnography for a universal science. (p. 252; emphasis in original)

Based on Table 3.1, draw a diagram summarizing the major features of postmodern Psychology (the ‘new paradigm’). Do this before looking at Figure 3.3 (page 87).


Micro- and macro-Social Constructionism

Burr (2015) distinguishes between micro- and macro-Social Constructionism; the differences are summarized in Table 3.2 (page 88). (Social Representation Theory (SRT) is a highly influential Social Constructionist theory within Social Psychology; see Gross et al., 1997.)

Conclusions: the problem of relativism

A recurring problem with Social Constructionism relates to relativism, the belief that there’s no absolute, ultimate, objective truth (as claimed by mainstream Psychology and positivist science in general) or universal values. There are only truths – different accounts or versions of the truth as judged from different perspectives, or different values, reflecting different group memberships and experiences (i.e. pluralism).

The problem (if there is one) is that relativism/pluralism seems to deny the ‘reality’ of the world as we experience it: our common sense understanding of the world seems to correspond to scientific realism and the related correspondence theory (see above). Taking the example of anger, Gergen (1997) claims that any inner feelings seem to be irrelevant to the meaning of anger: he characterizes emotion as a feature of social interaction (there are social conventions that dictate how one should react when we or another person displays anger), rather than an inner or private experience (McGhee, 2001).

But are common sense and Social Constructionist accounts mutually exclusive? Wetherell and Still (1998), for example, say ‘no’. They point out that Social Constructionists share the common sense belief that people will be killed if their plane crashes into a hill in New Zealand; at the same time, ‘New Zealand’ and ‘hills’ are constructed objects. While there really is some land in the Southern Hemisphere called ‘New Zealand’, etc., that shouldn’t blind us to how our understanding and knowledge of it are relative to a whole set of constructions relating to ‘death’, ‘cause of death’, ‘countries’, etc.


[Figure 3.3 appears here. Its boxes trace a progression from primitive belief in magic, through religious belief, to ‘Enlightenment’ modernism (reason replaces irrationality; focus on the individual; ‘true’, empirically validated knowledge replaces superstition; the progressive accumulation of knowledge) and then to postmodernism (focus on difference; avoidance of false dichotomies and reification; continual change of perspective; discontinuity of knowledge; relativism; pluralism). Beneath these, the ‘old paradigm’ of mainstream Psychology – quantitative methods/statistical analysis/reduction to numbers; POSITIVISM; cause-and-effect/determinism; atomism; SCIENTISM/METHODOLATRY (‘people as natural objects’); universalism (Cross-Cultural Psychology); decontextualization (context-stripping); objectivity; value-freedom; realism; essentialism; individualism; entities; mechanism/materialism; structuralism – is contrasted with the ‘new paradigm’ of postmodern Psychology: qualitative methods; understanding/describing/interpreting (hermeneutics); SOCIAL CONSTRUCTIONISM; holism; historical/cultural specificity (Cultural Psychology); subjectivity (phenomenology); the social nature of knowledge; the need to make values explicit; relativism; anti-essentialism; focus on context; processes; poststructuralism.]

Figure 3.3 Summary of major features of modernism and postmodernism in general, and modern and postmodern Psychology in particular, highlighting the major differences.

Challenging the mainstream

Table 3.2 Major differences between micro- and macro-Social Constructionism

Micro-Social Constructionism
1 This sees social construction taking place within everyday discourse between people interacting with each other.
2 Multiple versions of the world are potentially available through this process; we have direct access only to the various discourses, not to the ‘real world’.
3 Any reference to power is linked to the effects of discourse.
4 Gergen (e.g. 2009; Gergen and Gergen, 2012) focuses on the constructive force of interaction, stressing the relational embeddedness of individual thought and action.
5 Shotter (e.g. 1995) uses conversation analysis (CA) as his focus, emphasizing ‘joint action’. His form of Social Constructionism is termed Dialogical Psychology.
6 In the UK, several Discursive Psychologists (including Potter, Edwards, Antaki, and Billig) emphasize discourse in interaction, many using CA. They focus on the analysis of naturally occurring interactions in order to reveal the rhetorical devices that people use to achieve their interactional goals (discourse analysis).

Macro-Social Constructionism
1 The constructive power of language is derived from – or at least tied up with – material or social structures, social relations, and institutional practices. The concept of power, therefore, is central.
2 This is particularly influenced by the work of Foucault (see Box 3.9). Its major current form is Critical Discourse Analysis (CDA), which has also proved popular with feminist analyses of power (see text above and Chapter 2). Feminist Poststructuralist (postmodern) Discourse Analysis (FPDA) analyses how speakers are ‘positioned’ by different and often competing discourses.
3 The emphasis on power highlights various forms of social inequality, such as gender, race and ethnicity, disability, and mental health, with a view to challenging these through research and practice.

Source: Burr (2015)

Another ‘solution’ to the relativism problem is to resist going down that path: some Social Constructionists (principally those associated with the macro version) maintain some concept of a reality existing outside of discourse and texts (Burr, 2015). A major reason for this resistance is the implications for morality and political action that follow from a relativist position:

If all accounts of the world are equally valid, then we appear deprived of defensible grounds for our moral choices and political allegiances. Discursive psychology … therefore becomes politically impotent and ineffective in terms of applications. (Burr, 2015, p. 27)

However, many Discursive Psychologists have rejected this criticism; for example, Kitzinger and Frith (1999) and Wetherell (2012) have argued that CA and DP, respectively, are compatible with feminism.



Pause for thought – answers

1 Traditionally, the most frequently discussed ethical issues arising from controlled experiments are (1) consent; (2) informed consent; (3) deception; and (4) the right to withdraw from the study.

2 Ethnocentrism: the strong human tendency to use our own ethnic or cultural group’s norms and values to define what is ‘normal’ and ‘natural’ for everyone (i.e. ‘reality’; Triandis, 1990). Historically, Psychology has been dominated by white, middle-class males in the US. For the last century, they’ve enjoyed a monopoly as both the researchers and the ‘subjects’ of the discipline (Moghaddam and Studer, 1997), constituting the core of Psychology’s First World (Moghaddam, 1987). Mainstream Psychology has had a history of ethnocentric bias in its assumptions and approach. It’s become almost a ‘standing joke’ that Experimental (Social) Psychology is really the study of the American undergraduate Psychology major:

By the standards of experimental psychology’s own espoused principles of positivist science, the practice of limiting one’s observational field would necessarily be an erroneous and misleading one. The population upon which American psychology is founded can hardly be considered a representative sample of humankind. It is not even a representative sample of the contemporary North American population. (Much, 1995, p. 98)

Disciplinary parochialism: Psychology’s tendency to focus on ‘locally available’ people to study, which, in practice, as described above, tends to be white, middle-class North American males (who also represent the majority of the researchers). Ethnocentrism, combined with essentialism, then leads to generalization of research findings involving this population to ‘people in general’ (i.e. universalism).
3 Before the publication of Thomas Kuhn’s (1962, 1970) The Structure of Scientific Revolutions, the standard, unquestioned account of how science develops claimed that it involves a steady, continuous accumulation of knowledge providing an increasingly more accurate account of how the world works (‘the truth’).

4 By contrast, Kuhn stressed the discontinuities – a set of alternating ‘normal’ and revolutionary phases in which scientific communities experience periods of turmoil, uncertainty, and angst (Naughton, 2012).


Chapter 4
People as Psychologists
Common sense Psychology

It could be argued that many of the characteristics of mainstream, positivist Psychology that we’ve discussed in Chapters 2 and 3 have seeped into our everyday thinking about human psychology, that is, our assumptions and the inferences we make regarding ‘what makes people tick’. The very use of the word ‘tick’ by the layperson (non-Psychologist) as a characterization of what the science of Psychology is concerned with suggests the impact of mechanism (as originally proposed by Descartes; see Chapter 2).


Try to identify some of the other features of positivist science and indicate how they might have become part of our everyday (common sense) explanation of ‘what makes people tick’.

Running through our everyday thinking about everything (people and the physical world) is realism/the correspondence theory of truth. As we noted in Table 2.1 (page 37), this is really another way of addressing the issue of objectivity. Even if, from a philosophical perspective, we would describe ourselves as anti-realists (as many Social Constructionists would), in our everyday dealings with other people and physical things, we cannot help but act as if realism were self-evidently true. Similarly, a quantum physicist believes that her coffee will stay contained within the cup, despite her belief as a physicist that ‘reality’ is in fact composed of particles that are too small to see (and which obey different laws than those governing ‘whole’, visible objects). In other words, everyday realism is part of our common sense understanding of the world: it’s what we take for granted, part of the culturally determined account of ‘reality’. This operates at a different level (or universe) of discourse from that of physics, chemistry, and other natural sciences.

Another feature of our everyday, common sense understanding of the world is to do with psychological reality. Again, in our day-to-day dealings with other people, we take ‘intelligence’, ‘personality’, ‘motivation’, ‘emotion’, ‘memory’, etc. to have a reality on a par with physical attributes (such as eye/hair/skin colour, height, and weight). As we discussed in
Chapter 2, ‘intelligence’ etc. are taken to be natural kinds (as are rocks, electrons, DNA, and stars; Richards, 2010), when they’re more accurately described as psychological kinds. This distinction is important for two main reasons:

1 Psychological kinds are hypothetical constructs, that is, they don’t refer to anything that can be directly observed or measured but can only be inferred from observable behaviour. Equally important, they seem to be necessary in order to account for the observed behaviour. However, there’s a risk of thinking of them as ‘things’ or ‘entities’ (i.e. reification), rather than as a way of trying to make sense of behaviour.

2 Whereas our knowledge of a chemical compound (a natural kind) can change without changing the nature of the compound itself, a change in how intelligent we think we are (a psychological kind), for example, can change how we think of ourselves. As Danziger (1997) puts it, people’s actions, experiences, and dispositions are not independent of how they are categorized.

Related to this is what Richards calls reflexivity, a self-referring relationship that is unique to Psychology as a (scientific) discipline. As Richards (2010) puts it, the discipline of Psychology actually contributes to ‘the dynamic psychological processes by which human nature constantly recreates, re-forms, and regenerates itself, primarily in western cultures’ (p. 7). In other words, what Psychologists say about ‘what makes people tick’ can – and does – actually affect the ticking (hence, Psychology is part of its own subject-matter).

‘Reflexivity’ has a somewhat different meaning in the context of psychological research (in particular, interpretative research, as in hermeneutics – see Chapter 3). This is described in Box 4.1, which highlights the very different nature of qualitative research (of which hermeneutics is an example) and the quantitative approaches inherent within mainstream, positivist Psychology.

BOX 4.1 Reflexivity in research
- Researchers inevitably bring traces of their experiences (such as their social class, gender, age, ethnicity, historical and social locations) into their research.
- This obliges researchers to display reflexive self-awareness; as Wilkinson (1988) puts it, the researcher and researched are participants in the same enterprise – ‘a dialogue of knowledge construction’ (p. 495).
- Two forms of reflexivity identified by Wilkinson are personal and methodological; Willig (2008) has added epistemological.
- In personal reflexivity, researchers acknowledge that they’re an active presence at all stages of their research; it involves reflecting on how their values, experiences, interests, beliefs, political commitments, and social identities shape the research. Decisions about the conduct of a study aren’t solely a matter of dispassionate scientific judgement: they’re also shaped by a researcher’s personal history, social identity, values, and experiences. These factors can influence every stage of the research process, from choice of topic to data interpretation. By reflecting on these potential influences, the researcher might consider alternative interpretations of the data otherwise overlooked.
- Methodological reflexivity involves considering the use of innovative methods for researching topics that may fall outside mainstream Psychology. For example, Magnusson and Maracek (2012) describe various methods for studying gender in ‘real time’ and real life; these methods answer questions about gender as a set of meanings and relationships that are continually reproduced and contested – what people do, not what men and women are. Methodological reflexivity can also help researchers choose between research methods which improve knowledge about gender issues and those which don’t.
- Epistemological reflexivity requires researchers to scrutinize the categories, methods, and procedures they use. What part do the research methods play in shaping or even creating ‘the evidence’? What assumptions has the researcher made prior to designing the research and how might they have affected the outcome? How could the research question have been investigated differently?
(Based on Magnusson and Maracek, 2012)

As we saw in Chapter 3, psychological categories (such as intelligence) feed into the essentialism of mainstream (Experimental) Psychology, i.e. they have a taken-for-granted quality about them and seem to be describing what people are actually, ‘really’ like (their ‘essence’ or ‘human nature’). This can be regarded as yet one more feature of our common sense understanding of what people are like. These categories seem ‘natural’ – but only to members of that particular linguistic community. While the concept of ‘natural kinds’ has nothing to do with culture, the natural-appearing kinds of Psychology have everything to do with it (Danziger, 1997).

- What do these observations regarding culture tell us about the very meaning of ‘common sense’?
- Try to identify some examples of common sense within your own sociocultural group.

We often use the term ‘common sense’ to imply that particular knowledge or understanding is somehow ‘natural’, doesn’t have to be learned in the way that (most) other knowledge has to be, and, most importantly, has nothing to do with wider cultural experience (which also implies that it’s ‘universal’). But Danziger explicitly makes the point that ‘common sense’ is culture-relative and so isn’t universal, i.e. what’s taken to be common sense in one society/culture may not be in another. Perhaps the ‘common’ in ‘common sense’ simply implies ‘shared’ by members of certain (sociocultural) groups, in which case we shouldn’t be surprised that cultural learning is involved. Indeed, the ‘sense’ part of ‘common sense’ may hint at the implicit cultural understanding that we acquire but aren’t explicitly taught. Learning can take place in the absence of explicit attempts to teach (as in Bandura’s famous ‘Bobo-doll’ experiments, where children learned to reproduce the behaviour of adult models through observational learning). Again, given the critical role of language, both in culture generally and as the basis of our knowledge in particular, and as we noted in Chapter 2, there’s a sense in which what Psychologists are studying is quite literally language. As Danziger (1997) observes, language is the most basic instrument of scientific investigation: the entire investigative process is so immersed in language that it’s simply taken for granted, and its role becomes invisible. But if this is true of science in general, in Psychology’s case it becomes absolutely critical. This may apply as much to ‘common sense’ as it does to psychological research.



The causal explanation of behaviour

Another example of the symmetry between positivist science and common sense relates to explanation. As we saw in Chapter 2, the determinist nature of positivism explains observed phenomena in terms of observable causes; similarly, our everyday, common sense accounts of people’s behaviour (our own and others’) are often couched in causal terms (e.g. ‘I went into the back of the car in front because someone went into the back of me!’, or ‘He became aggressive because he had too much to drink!’).

- But are causes all of the same kind?
- And are the kinds of cause identified by positivist natural science the most appropriate for explaining human behaviour?

According to Winch (1958), the phenomena of human behaviour differ essentially from those of inert matter in that they have a dimension of ‘meaningfulness’, which the latter do not. Similarly:

The phenomena into which the physical sciences inquire are essentially meaningless, in that the order they display is only a causal regularity; insofar as they can be said to have significance, it is only a borrowed significance which our theories lend them. It is human beings who endow natural phenomena with what meaning they have, for natural phenomena do not endow their own actions with meaning, as do human beings. (Ryan, 1970, p. 16; emphasis in original)

This attribution of meaning runs parallel with the claim that human beings possess self-understanding (unlike inert matter and natural phenomena). But what does ‘meaningfulness’ convey in this context? Ryan (1970) provides one answer of fundamental importance:

The categories in terms of which we are to analyse and explain social and political life must involve concepts of purpose and intention, concepts which are those in whose terms the agents themselves understand their own behaviour. We are not simply interested in such regularities as social life happens to display, but in the significance which the agents themselves attach to the actions which go to create these regularities. Hence causation plays a secondary role, if any, and the depth of understanding which we aim at goes beyond anything possible in those sciences where causal regularities are the only object of inquiry. (pp. 16–17)

As Ryan then points out, this creates some problems quite unlike any faced by the natural scientist: the account given by the agent of the intentions and goals implied by his/her behaviour isn’t the only possible account. This takes us to the root of ‘meaningfulness’: even if the agent’s own account could be shown to be mistaken, the very fact that she or he provides an account at all (in terms of goals/intentions etc.) – unlike physical objects and natural phenomena – is what makes the Psychologist’s explanations so much more complex than (and qualitatively different from) those of the natural scientist. Related to this is the difference between different types of cause, as discussed in Box 4.2.



BOX 4.2 The difference between reasons as causes and mechanical causes
- The examples given above of common sense causal explanations of behaviour are what might be called mechanical causes: the driver who is shunted from behind into the car in front is hardly very different from a billiard ball that’s propelled along the snooker table and then knocks into another; similarly, the effect of alcohol on behaviour (aggressive or otherwise) describes a mechanical process over which the person has no control (except the choice to consume the alcohol in the first place).
- These examples are deliberately atypical: most of our behaviour isn’t ‘caused’ in this mechanical way but is explicable in terms of reasons (in other words, reasons are a special type of cause; Peters, 1960).
- Ryan (1970) identifies two main differences between reasons and mechanical causes:
  1 Reasons can be assessed as good or bad, proper or improper, but this doesn’t apply to mechanical causes (which just ‘are’). So, reasons (may) have a ‘moral’ dimension.
  2 A person making a decision (especially a difficult one) isn’t engaged in a causal inquiry into his own motives (i.e. how it is that we come to make the decisions we do), but is wondering what is the right thing to do. Again, clearly, this indicates the moral dimension of reasons as distinct from causes, which, as such, are morally neutral.
- This, in turn, relates to the concepts of free will and moral/legal responsibility. Only because we believe that people have free will, and are able to control their behaviour, are we able to attribute them with legal/moral responsibility for their actions. As Flanagan (1984) says, ‘ought’ and ‘should’ seem to imply ‘can’: we assume that people are capable of rising above the causal pressures presented by the material world and, in turn, this implies some conception of freedom of choice. Similarly, Koestler (1967) states that, whatever our philosophical convictions (regarding the free will issue), in everyday life it’s impossible to carry on without the implicit belief in personal responsibility; this, in turn, implies free will.
- Mechanical causes also convey that something happens to us, which contradicts the usual sense of agency that is part-and-parcel of our common sense concept of a person. By contrast, ‘reasons’ convey that we (believe we) are in control of our behaviour (see text below).
What Box 4.2 shows is that, in the context of our discussion regarding the nature of common sense, most of our everyday attempts to explain our own and others’ behaviour take the form of trying to identify reasons as opposed to mechanical causes. But how far does this distinction move us forward? If reasons aren’t mechanical causes, what if ‘our’ reasons are themselves caused in a more mechanical way? Is this possible? If our reasons can only be inferred (rather than directly identified), how can we be sure they are (the) real reasons? These and many other questions are meant to indicate just how complex our everyday account of behaviour is.



Pause for thought …
1 What did Freud mean by overdetermination and how is this relevant to our discussion above of different types of cause? (See Chapter 9.)

There’s another sense in which the meaning of a person’s actions (for the person him/herself) can be determined ‘from the outside’, that is, without knowing the actor’s goals or intentions: this relates to the meaning of the situation in which the action takes place, which usually ensures that the meaning is the same for everyone involved.

Take the often-cited example of writing one’s name on a piece of paper (Peters, 1960).
- What are some of the different meanings that this could have – depending on the situation?

The person might be writing a cheque, signing a death certificate, showing a child how to write, and so on – almost indefinitely. However, one of these descriptions is the correct one:

The social element in such situations rests on the fact that what an individual can intelligibly intend to do depends on the kinds of rules which go to make up his society; it is these social rules which provide the skeleton of meanings within which the individual can frame intentions, decide on his goals and the like. (Ryan, 1970, p. 17; emphasis added)

Regardless of the ‘correct’ description of the example above of writing one’s name, what can crudely be described as the same physical movements are involved. Conversely, we might perform the ‘same’ act (i.e. the same socially meaningful function, following the appropriate set of rules) but in a variety of physical ways (e.g. handing over bank notes, signing a cheque, nodding, or saying ‘OK’). These are all ways of ‘bankrupting’ oneself (Peters, 1960).

Movements, actions, and Behaviourism

What these examples demonstrate is the fundamental difference between ‘actions’ (or ‘acts’) and ‘movements’: movements in themselves are meaningless, while actions can be performed using a range of physical movements which only become meaningful when performed in a particular social situation understood as such by all those participating (and by observers); it’s the implicit situational rules which ‘convert’ movements into cheque signing, signing a death certificate, going bankrupt, etc. As Peters (1960) says:

Man is a rule-following animal. His actions are not simply directed towards ends; they also conform to social standards and conventions, and unlike a calculating machine he acts because of his knowledge of rules and objectives. (p. 5; emphasis in original)



Recall from Chapter 1 that Watson, in his 1913 ‘Behaviourist Manifesto’, argued that Psychology must be purely objective, excluding all subjective data or interpretations in terms of conscious experience. He redefined Psychology as ‘the science of behaviour’ instead of the ‘science of mental life’ (Fancher, 1979); all reference to mental concepts was ‘banned’ from Behaviourism. Joynson (1974) believes that Watson seemed to think that getting rid of mental concepts would be relatively easy, once the decision had been made to put Psychology on an objective footing modelled on the natural sciences. However:

The use of mental concepts is not a philosophical importation which we can abandon at will. It is deeply ingrained in our habits of thought, and pervades every observation and every interpretation which we make of human behaviour. Our very perception of behaviour is riddled through and through with attributions of mental life. (Joynson, 1974, p. 32)

Watson’s use of the term ‘behaviour’ is much closer to what we called ‘movements’ above than it is to ‘actions’; he seemed to imply that, in order to be objective, Psychologists must reduce actions to movements. But his use of ‘response’ would seem to be at a higher level of description than, say, ‘production of x drops of saliva’, and higher still than a description of the physiological processes (those in both the central and autonomic nervous systems; see Chapter 5) that accompany salivation. Returning to Joynson:

When we describe a man as smiling, hesitating, waiting, looking, threatening, pausing, approaching, avoiding, nodding or cringing, we are describing his behaviour as the expression of the mental life of an experiencing agent. To describe it objectively – that is, as sheer bodily change, as ‘colourless movement’ – demands an effort of abstraction of which most people, including most behaviourists, are wholly incapable. (1974, p. 32)

The layperson’s understanding

In Psychology and Common Sense (1974), Joynson argues (as does Heather, 1976) that human beings aren’t, like the objects of natural science, ‘things which do not understand themselves … we can already predict and control our behaviour to a remarkable extent ourselves’ (p. 2). Again:

This ability which we all have, to understand ourselves and others, presents psychologists with a paradoxical task. What kind of understanding does he seek, of a creature which already understands itself? (p. 2)

Joynson goes on to say that the Psychologist has often reacted to this problem by simply ignoring it, or by denying that the layperson’s understanding needs to be taken seriously. For Joynson, the fundamental question is: if the Psychologist didn’t exist, would it be necessary to invent him? For Skinner ‘it is science or nothing’ (1971, p. 160) and Broadbent also rejects the validity of our everyday understanding of ourselves and others (see Chapters 2 and 7); Joynson calls this the ‘behaviourists’ prejudice’. Yet it seems inevitable that we try to make sense of our own and other people’s behaviour (by virtue of our cognitive abilities and the nature of social interaction), and to this extent we can all be considered psychologists (notice the use of a lower case ‘p’ here).



Heather (1976) points to ordinary language as embodying our ‘natural’ understanding of human behaviour: as long as human beings have lived they’ve been psychologists, and language gives us an ‘elaborate and highly refined conceptual tool, developed over thousands of years of talking to each other’ (p. 20).

Common sense vs. psychological research

According to Joynson (1974), in cases where common sense understanding is confirmed by scientific research, this provides scientific grounds for accepting what was previously merely an intuitive guess. However, it’s too easy just to assume that the intuitive guess has been justified. Equally, when Psychologists’ conclusions contradict common sense, it’s too easy to infer that it’s the Psychologist’s conclusions that are correct and common sense that is wrong.

The layman’s understanding, though often imperfect, is not to be universally dismissed as intuitive guesswork, necessarily inferior to the special methods of the scientific psychologist. On the contrary, the layman’s conclusions may well be based on long and varied experience, frequently interpreted, of course, by a highly trained intelligence. Experiment in psychology, by contrast, typically operates over short periods of time, in very restricted environments, and on narrow segments of behaviour. It would not be surprising if common sense often proved to be as reliable as experiment, and sometimes more reliable. (Joynson, 1974, pp. 8–9)

While Joynson is arguing more for recognition of the value of common sense understanding than against the use of scientific method in Psychology, we surely need to look for ways of reconciling these two potentially – and sometimes actually – conflicting viewpoints. One ‘solution’ is proposed by Legge (1975), described in Box 4.3.

BOX 4.3 Formal and informal psychology (Legge, 1975)
- Other ways of making this distinction are: ‘professional vs. amateur’; ‘scientific vs. non-scientific’ (intuitive, ‘natural’, common sense).
- Our common sense understanding is unsystematic and doesn’t constitute a body of knowledge; this makes it very difficult to ‘check’ an individual’s ‘theory’ about human nature, as does the fact that every individual has to learn from his/her own experience. So, part of the aim of formal Psychology is precisely to provide such a systematic body of knowledge, which represents the unobservable bases of our ‘gut reactions’.
- But could it be argued that informal/common sense psychology does provide a ‘body of knowledge’ in the form of proverbs or sayings or folk wisdom, handed down from generation to generation (such as ‘birds of a feather flock together’; ‘too many cooks spoil the broth’; and ‘don’t cross your bridges before you come to them’)? (Gross, 2015).
- These proverbs may contain at least a grain of truth. But for each one we can find another which states the opposite (e.g. ‘opposites attract’; ‘many hands make light work’; and ‘time and tide wait for no man’, or ‘nothing ventured, nothing gained’).

98

People as Psychologists L Common sense doesn’t help us reconcile these contradictory statements – but

formal Psychology can, by trying to identify the conditions under which each statement holds true: they only appear contradictory if we assume that only one or the other is true. In this way, we can see scientific Psychology as throwing light on our everyday, informal understanding, not necessarily negating or invalidating it (Gross, 2015; see Rolls, 2007). L Legge believes that most psychological research should indeed be aimed at demonstrations of ‘what we know already’ but that it should aim to go one step further: only the methods of science can provide us with the public, communicable body of knowledge that we’re seeking. As Allport (1960) famously stated, ‘Science aims to achieve powers of understanding, prediction, and control above the level of unaided common sense’ (p. 147); this is meant to apply to Psychology as much as it does to the natural sciences.

Two other major 'solutions' to the problem of reconciling common sense and scientific understanding come in the form of Fritz Heider's account of common sense psychology and George Kelly's personal construct theory.

Heider's common sense Psychology

According to Harré (2006), Heider's major work, The Psychology of Interpersonal Relations (1958), was a 'sustained attempt at the analysis of the "commonsense" or vernacular concepts with which people manage their lives' (p. 194). It is perhaps the most widely acknowledged attempt to formulate an explicit statement about naive psychology (Bennett, 1993). As Heider put it: 'there exists a system hidden in our thinking about interpersonal relations, and that system can be uncovered' (1958, p. 14).

For Heider, 'balance' was the central theoretical concept: cognitive 'forces' within the individual would tend towards equilibrium. Attitudes could be positive or negative, which determined how people acted towards one another. Heider combined 'balance' with the belief that the concepts embodied in ordinary language are the means through which people manage their social lives. The balance principle was central to the Social Psychology of the mid-1900s (see Chapter 12), and The Psychology of Interpersonal Relations pointed towards Discursive Psychology (see Chapter 3).
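The balance principle is often summarised by later balance-theory work as a simple sign rule (a standard textbook formalisation rather than Heider's own notation): in a triad consisting of a person P, another person O, and some object or issue X, the triad is balanced when the product of the signs of the three attitudes is positive. A minimal sketch in Python (the function name is illustrative):

```python
# Sign-rule sketch of Heider's balance principle (a common textbook
# formalisation, not Heider's own notation). Each relation in a
# P-O-X triad is +1 (a positive attitude) or -1 (a negative one);
# the triad is balanced - the cognitive 'forces' are in equilibrium -
# when the product of the three signs is positive.

def is_balanced(p_o: int, p_x: int, o_x: int) -> bool:
    """p_o: P's attitude to O; p_x: P's attitude to X; o_x: O's attitude to X."""
    return p_o * p_x * o_x > 0

# I like my friend, and we both like the same band: balanced.
harmony = is_balanced(+1, +1, +1)

# I like my friend, but she likes a band I dislike: imbalanced,
# creating pressure to change one of the attitudes.
tension = not is_balanced(+1, -1, +1)
```

On this rule, an imbalanced triad is one in which the 'forces' have not reached equilibrium: something has to give (one of the attitudes changes) for balance to be restored.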

BOX 4.4 KEY THINKER: Fritz Heider (1896–1988)

- Heider was born in Vienna into a prosperous upper middle-class family, part Hungarian, part Austrian.
- Having failed to become an architect (his father's profession), he entered the Law School at the University of Graz in 1914. Like many students in the German-speaking world, Heider took courses at several universities, eventually concentrating on philosophy. He also began studying some Psychology at this time.
- He moved to Berlin, where he pursued his interest in Psychology; both Wertheimer and Köhler, two of the founders of Gestalt Psychology (see Chapter 1), were teaching at the institute, as was Kurt Lewin, a Gestalt Psychologist and major figure in the development of American Social Psychology.
- After spending some time abroad rather aimlessly, he returned to Berlin in 1926 to work with Lewin. He then moved to Hamburg University, to the Psychology department led by William Stern (see Chapter 11). When visiting Vienna, he attended meetings of the Psychoanalytic Society and also of the positivist Vienna Circle.
- In 1930, Heider moved to the US, where he soon married and started a family. He worked at the University of Kansas until 1971. He died, aged 92, in 1988.

(Based on Harré, 2006)

Heider wished to apply Gestalt principles of (object) perception to the perception of people (social or person perception) (see below), but he also argued that the logical starting point for studying how people understand their social world is 'ordinary' people themselves:

1 How do people usually think about and infer meaning from what goes on around them?
2 How do they make sense of their own and other people's behaviour?

These questions relate to common sense (or naive) psychology. The 'ordinary' person ('person in the street') is a naive scientist, linking observable behaviour to unobservable causes (much like the professional scientist); it's these inner causes (such as abilities, wants, emotions, personality), rather than the observable behaviour itself, that provide the meaning of what people do. Such basic assumptions about behaviour need to be shared by members of a culture, for without them social interaction would be chaotic.

    It is important that we do subscribe to a common psychology, since doing this provides an orientating context in which we can understand, and be understood by, others. Imagine a world in which your version of everyday psychology was fundamentally at odds with that of your friends – without a shared 'code' for making sense of behaviour, social life would hardly be possible. (Bennett, 1993, p. 4)

Heider's approach to the study of the naive psychologist is based on three key assumptions:

1 To understand someone's behaviour requires attending to how he or she construes his or her social world. Regardless of the truth or falsity of a person's beliefs, 'If a person believes that the lines in his palm foretell his future, this belief must be taken into account in explaining certain of his expectations and actions' (Heider, 1958, p. 5).
2 People are motivated in their everyday perception by the needs for prediction and control. For example, we often try to assess in advance how others might judge our intended behaviour, so that we can either design a different plan of action or find ways of averting whatever unfavourable reactions might occur.
3 The perception of objects and people isn't fundamentally different: in both cases, we're concerned primarily with establishing the invariant characteristics that predispose an entity to behave in particular ways.

As important as it is to identify the internal, unobservable causes of behaviour, common sense psychology (at least that shared by members of western cultures) doesn't maintain that these are the only causes. In fact, Heider distinguished between (1) personal or dispositional (internal) causes; and (2) situational or environmental (external) causes. This distinction lies at the heart of attribution theory, which deals with the general principles that govern how the social perceiver selects and uses information to arrive at causal explanations (Fiske and Taylor, 1991).

One of the major 'tasks' involved in everyday social interaction is deciding whether other people's behaviour can/should be explained in terms of abilities, emotions, or other internal causes, or in terms of external causes (such as others' behaviour, the demands of the situation, or aspects of the physical environment). This decision is the attribution process and is what theories of attribution try to explain (see Gross, 2015). Understanding which set of factors should be used to account for another person's behaviour will make the perceiver's world seem more predictable and provide a greater sense of control over it. According to Antaki (1984), attribution theory:

    Promise[s] to uncover the way in which we, as ordinary men and women, act as scientists in tracking down the causes of behaviour; it promises to treat ordinary people, in fact, as if they were psychologists. (p. 210)

While the attribution process may represent an inherent feature of common sense or naive psychology (at least in Western culture), might there also be psychological theories (i.e. those originally proposed by scientific Psychologists) that have proved so influential that they have become absorbed into the mainstream culture and so have themselves become part of common sense psychology? Try to give an example.

By becoming absorbed into mainstream culture, such theories may have become detached from the identity of the Psychologist(s) responsible for them, becoming part of what we 'know' about human beings. Popular beliefs, such as 'gay men, as children, have had too close a relationship with their mother', 'the child's early years are critical', and 'boys need a father', can all be traced, more or less directly (and more or less accurately), to Freud's psychoanalytic theory (see Chapter 9). It's not the objective truth (or otherwise) of these claims that matters here, but the impact they've had on the thinking and experience of ordinary people (Gross, 2014; see Chapter 3 for a discussion of reflexivity). As Thomas (1990) observes, Freudian assumptions regarding morality, family life and childhood, and mental illness are now part of the fabric of literature and the arts. (The 'infiltration' of aspects of Freud's theory into French middle-class society, in particular, also illustrates social representation theory: see Gross et al., 1997.)

Kelly's Psychology of Personal Constructs: persons as scientists

Three years before Heider published his most influential book (see above), George Kelly published A Theory of Personality: The Psychology of Personal Constructs. In it, he claimed that not only are scientists human, but (all) humans can be thought of as scientists.


BOX 4.5 KEY THINKER: George Kelly (1905–1967)

- Kelly was an only child, born on a small farm in Kansas; his father had previously been a clergyman, his mother a schoolteacher.
- At age 13 he was sent away to boarding school; he went on to study for a degree in maths, followed by a master's degree in sociology.
- He took several jobs following graduation and was rather rootless for a while, until he met and married Gladys Thompson.
- In 1929 he received a fellowship for study abroad. He chose to go to Edinburgh, where he took the bachelor of education degree, after which he became a postgraduate student at the University of Iowa. He graduated in 1931, with a PhD in Educational Psychology (speech and reading disabilities).
- Despite the Great Depression, he secured a teaching post at a college in Kansas, where, during the next ten years, he set up a system of providing psychological support for communities living in the most rural areas of Kansas. What he concluded from working with these mainly farming communities was that they often had difficulty making sense of their lives; the most effective means of helping them was through constructive alternativism: people perceive the world through a specific and personal construction which may need to be revised so as to create alternative construals (see text below).
- During the Second World War, Kelly worked as an Aviation Psychologist. In 1945, he started work at the University of Maryland, followed almost immediately by appointment as Director of Clinical Psychology at Ohio State University. He stayed there until 1965. It was during this period that A Theory of Personality: The Psychology of Personal Constructs was published (1955).
- He visited many universities, both abroad and in the US, and had become a highly respected Clinical Psychologist. He was appointed Professor of Clinical Psychology at Brandeis University in 1965. He died just two years later.

Figure 4.1 George Kelly.

(Based on Harré, 2006)

Kelly's Personal Construct Theory (PCT) is a theory about the personal theories of each one of us (making it idiographic; see Chapter 3); it applies as much to Kelly himself (as the originator of the theory) as to everyone else. If science is primarily a human activity, then any valid psychological theory must be able to account for that activity, in this case, the construction of scientific psychological theories; unlike most psychological theories, PCT can do this quite easily (and so displays reflexivity). As Weiner (1992) puts it:

    Kelly's theory … can explain scientific endeavours, for Kelly considered the average person an intuitive scientist, having the goal of predicting and understanding behaviour. To accomplish this aim, the naive person formulates hypotheses about the world and the self, collects data that confirm or disconfirm these hypotheses, and then alters personal theories to account for the new data. Hence, the average person operates in the same manner as the professional scientist, although the professional scientist may be more accurate and more self-conscious in their attempts to achieve cognitive clarity and understanding. (p. 223)

Hypotheses about the world take the form of constructs, which represent our attempts to interpret events (including our own and others' behaviour); we put them to the test every time we act. Given his background in maths and engineering (see Box 4.5), it's perhaps not surprising that Kelly chose the model of man-the-scientist. He wondered why it was that only those with university degrees should be privileged to feel the excitement and reap the rewards of scientific activity (Fransella, 1980), and this applies as much to Psychologists as to any other professional group:

    It is customary to say that the scientist's ultimate aim is to predict and control. This is a summary statement that psychologists frequently like to quote in characterizing their own aspirations. Yet, curiously enough, psychologists rarely credit the human subjects in their experiments with having similar aspirations. It is as though the psychologist were saying to himself, 'I being a psychologist, and therefore a scientist, am performing this experiment in order to improve the prediction and control of certain human phenomena: but my subject, being merely a human organism, is obviously propelled by inexorable drives welling up within him, or else he is in gluttonous pursuit of sustenance and shelter'. (Kelly, 1955, p. 5; emphasis in original)

- What can you infer from Kelly's quote above regarding how Psychologists view the people they study?
- What does 'subject' convey?
- What objections might you make against its use?

(It might be useful to consider the view of Feminist Psychologists regarding science, as discussed in Chapter 2.)

It's almost as if, in their scientific Psychologist role, researchers (unconsciously and implicitly) think of themselves and their fellow human participants as different kinds of creature: there's definitely an implied superiority on the Psychologist's part in Kelly's account of the experimental situation. People are reduced to the status of subject, implying that the researcher is in control and dictates what happens in that situation; subjects merely respond to external events in a passive, unthinking way. 'Subject' is a dehumanizing term (Heather, 1976; see Chapter 2).

Based on Kelly's concept of 'constructs', we might regard the 'subject' as one who is desperately trying to construe the construction process of the Psychologist (Fransella, 1980). From this perspective, 'subject' is inappropriate: not only are ordinary people scientists, but Psychologists can only hope to understand and predict other people's behaviour to the extent that they're aware of the constructs that those others place upon events. As we noted earlier when discussing Heider's theory, to understand a behaviour requires attending to how the actor construes his/her social world – regardless of the truth or falsity of a person's beliefs. So, while certain behaviour of another person might appear extraordinary to the observer, it can make perfect sense in the context of the actor's own worldview. As Fransella (1980) says, to understand others' behaviour, we have to know what construct predictions are being put to the test.

From the perspective of PCT, the Psychologist and the client ('subject') are equal partners; the former no longer enjoys a higher status or is 'in charge' (Weiner, 1992). If 'subject' reduces the person to something less than a whole person, for PCT the person is the irreducible unit. According to Salmon (1978), research within a PCT framework would look very different from its traditional (mainstream) form: it would be about 'the process whereby people come to make sense of things'; it would involve working with and not on subjects; and the researcher's own constructions would be made explicit.

    The results obtained will be seen as less important, in the end, than the whole progress of the research itself – which, after all, represents one version of the process it is investigating. The crucial question, about any research project, would then be how far, as a process, it illuminated our understanding of the whole human endeavour to make sense of our lives, and how fruitful it proved in suggesting new exploratory ventures. (Salmon, 1978, p. 43)

These views regarding the nature of psychological research are echoed in Feminist Psychology and in collaborative/new paradigm research (see Chapters 2 and 3).

The repertory grid technique

The original test used for eliciting personal constructs was the Role Construct Repertory Test ('Rep Test'), which was designed for individual use by a Clinical Psychologist. This has been succeeded by the Repertory Grid Test ('Rep Grid'), which is used as a major research instrument. The Rep Grid is a very flexible instrument and can be used in different ways. The basic method involves the following steps.

- Write a list of the most important people in your life (elements).
- Choose any three elements. Ask yourself: 'In what ways are two of these alike and different from the third?' The descriptions given (e.g. 'My father and boyfriend are emotional, my mother is not') constitute a construct, which is expressed as a bipolar opposite ('emotional–unemotional').
- Apply this construct to all the remaining elements.
- Now select another set of three elements and repeat the whole process. This continues until either (1) you have produced all the constructs you can (usually no more than 25 with one set of elements) or (2) a sufficient number has been produced, as judged by the investigator.
- All this information can be collated in the form of a grid, with elements across the top and constructs down the side; you insert a tick or a cross indicating which pole of the construct applies (e.g. a tick denotes 'affectionate' and a cross 'not affectionate', as shown in Figure 4.2).

Constructs                          Mother   Father   Boyfriend   Psychology lecturer
Emotional (✓) / Unemotional (✗)       ·        ·         ·        N/A
Protective (✓) / Unprotective (✗)     ·        ·         ·         ·
etc.

Figure 4.2 A sample Repertory Grid (Rep Grid) (based on Gross, 2015); each cell holds a tick or a cross for that element.

The Rep Grid can be factor-analysed (see Chapter 11), and this often reveals that many constructs overlap (i.e. they mean more or less the same thing): most people's construct systems comprise 3–6 major constructs.

Having said that PCT as a whole is idiographic, the Rep Grid can be used nomothetically, as in Bannister and Fransella's (1966, 1967) Grid Test of Thought Disorder, given to thought-disordered patients with schizophrenia (see Gross, 2015). This comprises standardized elements and constructs (the same for everyone and predetermined by the researcher); it has been standardized on large numbers of similar patients so that an individual score can be compared with group norms. However, this is probably rather far removed from how Kelly intended the technique to be used.

It has also been used to study how patients participating in group psychotherapy change their perception of each other (and themselves) during the period of therapy: the group members themselves are the elements and a number of constructs are provided (Fransella, 1970). Fransella (1972) used it extensively with people being treated for severe stuttering. However, its uses aren't confined to clinical situations. Elements aren't necessarily people but could be occupations, religions, cars, or whatever; Shackleton and Fletcher (1984) argue that the Rep Grid stands on its own as a technique – that is, you don't have to believe in Kelly's PCT in order to use it.
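The collation step, and the idea that overlapping constructs can be detected statistically, can be sketched in a few lines of Python. This is a minimal illustration (the data, function names, and the simple percentage-agreement measure standing in for factor analysis are all ours, not from the text):

```python
# A minimal sketch of a Rep Grid as a data structure: constructs are
# bipolar pairs, elements are the important people in one's life, and
# each cell records which pole applies (True = first pole, None = N/A).
from itertools import combinations

elements = ["mother", "father", "boyfriend", "psychology lecturer"]

# In practice these would be elicited from the person via element
# triads; here they are hard-coded for illustration.
grid = {
    ("emotional", "unemotional"): {
        "mother": False, "father": True, "boyfriend": True,
        "psychology lecturer": None,          # N/A, as in Figure 4.2
    },
    ("protective", "unprotective"): {
        "mother": True, "father": True, "boyfriend": False,
        "psychology lecturer": False,
    },
}

def triads(elems):
    """All sets of three elements from which a construct could be elicited
    ('In what ways are two of these alike and different from the third?')."""
    return list(combinations(elems, 3))

def overlap(grid, c1, c2):
    """Proportion of elements rated the same on two constructs - a crude
    stand-in for the factor analysis used to spot overlapping constructs."""
    pairs = [(grid[c1][e], grid[c2][e]) for e in grid[c1]
             if grid[c1][e] is not None and grid[c2][e] is not None]
    return sum(a == b for a, b in pairs) / len(pairs)
```

A high `overlap` score between two constructs suggests they mean more or less the same thing to this person, which is how factor analysis ends up reducing most people's systems to a handful of major constructs.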

Constructive alternativism

While published in 1955, PCT can be thought of as ahead of its time to the extent that it was built on a major theoretical plank of postmodern thinking, namely anti-realism (see Chapter 3). Although a real world of physical objects and events exists, no one individual has the privilege of 'knowing' it: all we can do is place our personal constructs upon it. The better our constructs 'fit' the world generally, the better our control over our own, personal world.

According to Kelly, there's no way of getting 'behind' our interpretation of the world to check whether it matches what the world is 'really' like. All we have are our own interpretations: we necessarily see the world 'through goggles' which cannot be removed. However, they aren't set in stone, fixed once and for all: the person-as-scientist is constantly engaged in testing, checking, modifying, and revising his/her unique set of constructs. This, in brief, is constructive alternativism.

PCT also adopts a phenomenological approach to the study of personality: it attempts to understand people in terms of their experience and perception of the world – a view of the world through the individual's own eyes, rather than an observer's interpretation or analysis imposed on the person (see Chapter 10).


PCT and free will

As we have just noted, people are free to the extent that they can change their personal constructs. However, these constructs also restrict freedom, because we can only choose from the constructs we have at any one time and, to a very large extent, they determine our behaviour:

    Constructs are the channels in which one's mental processes run. They are two-way streets along which one may travel to reach conclusions. They make it possible to anticipate the changing tide of events … constructs are the controls that one places on life … the life within him as well as the life which is external to him. Forming constructs may be considered as binding sets of events into convenient bundles which are handy for the person who has to lug them. Events, when so bound, tend to become predictable, manageable, and controlled. (Kelly, 1955, p. 126)

So, we need constructs to prevent the world from seeming totally chaotic and unpredictable. But control is a special case of determinism: once we construe the situation in a particular way (e.g. 'serving a meal for guests' – a superordinate construct), then everything we do within that situation (e.g. cutting a pie for dessert) becomes a subordinate construct relative to the superordinate one. We're free to define natural events as we wish, but if we want to predict them accurately, we need some kind of construction that will serve the purpose; it's the structure that we erect that rules us.

This means that freedom and determinism are two sides of the same coin: neither is absolute; each is relative to something else. Once I see the world in a certain way, what I do inevitably follows (it is determined); but I'm at the same time free to change my constructs, just as scientists are free to change their theories.

Pause for thought …

2 While PCT focuses on individuals (rather than groups or social institutions), what parallels are there between Kelly's theory and Kuhn's account of scientific development (see Chapter 3)?

Conclusions: PCT as a total Psychology

As Bannister and Fransella (1980) and Fransella (1981) point out, Kelly deliberately uses very abstract language when describing his theory; this is intended to avoid the limitations of a particular time and culture and, indeed, is aimed at redefining Psychology as a 'content free' account of people. PCT isn't so much a theory of personality (despite the title of his 1955 book) as a total Psychology. Kelly isn't concerned with the separate subdivisions of the discipline as traditionally dealt with in most textbooks. For example, the concept of motivation can be dispensed with: we don't need concepts like 'drive' or 'need' or 'psychic energy' to explain what makes people 'get up and go'. Man is a form of motion, and a basic assumption about life is that 'it goes on': we don't need something to make us go, because going on is the thing itself (Kelly, 1962). In other words, humans are inherently, 'naturally', motivated.


Similarly, we don't need a separate account of 'emotion': 'anxiety', for example, is the awareness that what we're confronted with isn't 'covered' by our current construct system (we don't know how to construe/make sense of it). For some, this account is far too cognitive and rational: what about the subjective experience ('gut feeling') that we label as anxiety? It's almost as if emotional experiences and 'behaviour' itself are being drowned in a sea of constructs. Peck and Whitlow (1975) believe that Kelly trivializes important aspects of behaviour, including emotion, motivation, and learning, as well as neglecting situational influences on behaviour; PCT appears to place the person in an 'empty world'. However, they also argue that it constitutes a brave and imaginative attempt to create a comprehensive, cognitive theory of personality.

Pause for thought – answers

1 Freud used the term 'overdetermination' to refer to the fact that much of our behaviour (and our thoughts and feelings) has multiple causes, some conscious, some unconscious. By definition, we only know about the conscious causes, and these are what we normally take to be the reasons for our actions. However, if the causes also include unconscious factors, then the reasons we give for our behaviour can never tell the whole story; indeed, the unconscious causes may be the more important – and relevant.

This view of the individual as never being fully aware of all the reasons for his/her behaviour is one of irrational man: we don't know ourselves as well as we would like, or as well as we think we do. Overdetermination is one aspect of psychic determinism, the view that everything we do, think, and feel has a cause (often unconscious). It follows that what we often call 'accidents' (implying something that happens beyond the control of the victim) aren't random, chance events; indeed, the cause (or contributory cause) may actually turn out to be the victim him/herself. For Freud, although accidents as we normally understand them can occur, it's more common for them to be the consequence of our own, unconscious, wishes and motives.

2 We could perhaps take each of the stages in Kuhn's account and relate them to how individuals act on their constructs and how these may change.

- Pre-science (absence of paradigm): babies/young children may lack constructs in Kelly's sense, but inborn reflexes, then voluntary actions, followed by early language-related schemata, could be thought of as early constructs (or proto-constructs?).
- Normal science (a paradigm has come into existence): the set of personal constructs that an individual possesses at any one time may be the equivalent of a paradigm; it's these that are constantly being tested through an individual's behaviour, sometimes being confirmed and sometimes being challenged.
- Revolution (the 'old' paradigm is, over time, replaced by a 'new' paradigm): just as a paradigm provides an overall worldview, so a change in the individual's overall set of constructs provides a different way of interpreting and understanding the world.

Chapter 5
People as organisms: Biopsychology

According to Legge (1975), the range of topics that can fall under the umbrella of 'Psychology' is as wide as Psychology itself, but they can be classified as focusing either on the processes/mechanisms underlying various aspects of behaviour (the process approach) or more directly on people themselves (the person approach). Clearly, Biopsychology forms part of the process approach.

While the biological processes/mechanisms relate to the (mainly) internal body in which they take place, they can also be regarded as 'disembodied' in the sense that the process is the same regardless of whose body it is. The processes are described and explained in a totally decontextualized way, and are taken to be universal, ahistorical, and unaffected by individual differences (including culture and gender). It's also assumed that they can be studied objectively. In other words, Biopsychology displays all the characteristics of mainstream, positivist Psychology (see Chapters 2 and 3). But, as Toates (2001) observes, outside the laboratory there's a limit to how far biological manipulation can take place in order to reveal a cause–effect behavioural chain (a major assumption of determinism): biological factors need to be interpreted within a context of rather subtle psychological principles.

Toates (2001) identifies four strands of the application of biology to the understanding of behaviour; these are described in Box 5.1.

BOX 5.1 Applying biology to the understanding of behaviour

- Biopsychology is concerned with how things work in the 'here and now', i.e. the immediate (proximate) determinants of behaviour. A biological perspective can provide clear insights into what causes people to behave in particular ways. For example, when we touch a hot saucepan (cause) and immediately withdraw our hand in pain and fear (effect), there are known pathways within the body (in this case, a spinal reflex, the basic functional unit of the nervous system (NS)) that mediate between cause and effect. This example demonstrates that behaviour is an integral part of our biological make-up.
- We inherit genes from our parents, and these genes play a role in determining our body's structure. Genes affect behaviour through this bodily structure, most obviously through the NS.
- A combination of genes and environment affects the growth and maturation of our body, the main focus being the NS and behaviour. Development of the individual is called ontogenesis.
- The assumption that humans have evolved from simpler forms, rooted in Darwin's theory of evolution, relates to both the physical structure of our body and our behaviour: we can gain insight into behaviour by considering how it has been shaped by evolution (see Chapter 8). Development of species is called phylogenesis.

(Based on Gross, 2015; Toates, 2001)

As shown (and implied) by Box 5.1, when considering biological influences on behaviour, we're looking at the following:

- the NS (with an emphasis on the brain, which together with the spinal cord forms the central nervous system (CNS));
- the peripheral nervous system (PNS) (comprising the somatic nervous system (SNS) and autonomic nervous system (ANS));
- the endocrine (or hormonal) system, which has important links with both the brain and the ANS;
- in all three cases, both structure (anatomy) and function (physiology);
- genetics (see Chapter 11).

The emphasis in this chapter is on the brain and the relatively recent use of brain-imaging techniques to identify areas of the brain that underlie a wide range of human behaviours and cognitive processes as part of neuroscience. Before we consider some of the issues raised by this research, we need to take a look at the history of how Psychologists came to be interested in human biology.

- Try to formulate some arguments against the psychobiological approach as outlined above.
- For example, in what ways might it be considered to be reductionist? (Look back at Chapter 3.)

The Biopsychological approach attempts to explain human – and non-human – psychological processes and behaviour in terms of the operation of physical/physiological structures (such as interactions between neurons – nerve cells – and hormones). In turn, these processes are explained in terms of smaller constituent processes, such as synaptic transmission between neurons (the means by which information flows from one neuron to another). Ultimately, reductionism claims that Psychology as a whole will be explicable in terms of biology, which, in turn, can be understood in terms of chemistry and physics. For some


Psychologists, this entails losing sight of the whole person and fails to reflect experience and everyday social interaction (see Chapter 4 and the discussion below of neuroscience).

G.F. Stout had alerted Psychologists to the consequences of this reductionist agenda as far back as 1896. He referred to the belief held by some physiologists that ‘the only way of explaining the phenomena of consciousness is by connecting them with the physical phenomena of the brain and nervous system’ and to the further belief that, if this were accomplished, Psychology would be ‘absorbed in physiology’ (Stout, 1896, Vol. 1, p. 3). He argued that the consequences would be disastrous:

    The distinctive aim of the psychologist is to investigate mental events themselves, not their mechanical accompaniments or antecedents. If the course of mental events is not regulated by discoverable uniformities capable of being interconnected so as to form a coherent system, the psychologist has nothing to do. It is incorrect to say that on this assumption his science becomes absorbed into physiology. It does not become absorbed; it simply ceases to exist in any form whatever.
    (Stout, 1896, Vol. 1, pp. 3–4)

One form that biological reductionism has taken is epiphenomenalism, the view that, while consciousness or mental life is ‘real’ – and can be included as part of psychological enquiry – it is a mere byproduct of brain activity. Consciousness is produced by brain action, but isn’t itself capable of any reciprocal reaction. Epiphenomenalism represents one philosophical ‘solution’ to the mind–brain/mind–body problem (and a major rejection of Descartes’ dualism: see Chapters 2 and 3). According to epiphenomenalism:

    Mental life was no more responsible for the determination of behaviour than the smoke from an engine determined its speed or direction. The task of psychology was simply to discover what neural events corresponded to what mental events, and nothing more.
    (Joynson, 1974, p. 25)

Based on Stout’s argument above regarding what the Psychologist’s aim should be, Joynson claims that a scientist cannot legitimately accept epiphenomenalism and still call him/herself a Psychologist. While the natural sciences (Naturwissenschaften) are aimed at acquiring knowledge of the material world, the aim of Psychology is to account for how such knowledge is acquired (more in keeping with the Geisteswissenschaften) (see Chapter 2).


Try to formulate some arguments in favour of the view that mental life or consciousness is more than a mere epiphenomenon.

William James (1890) argues that if mental life had no causal efficacy, it couldn’t contribute anything to the struggle for survival – and hence no reason could be given for why it should have evolved. Conversely, if consciousness did evolve because of its survival value, could it have done so unless it had causal properties (i.e. unless it could actually bring about changes in behaviour) (Gregory, 1981)? This is one of the more psychologically (as opposed to philosophically) relevant – and interesting – questions concerning the mind–brain relationship.



What do you think common sense Psychology tells us regarding whether or not consciousness has causal properties?

There’s no doubt that our everyday understanding of psychology – and our everyday experience – tells us that our mind affects our behaviour. This is part of what we mean by the concept of a person (see Chapter 12). According to Humphrey (1986), our everyday experience is that consciousness ‘makes all the difference in the world’: we’re either awake, alert, and conscious or flat on our backs, inert, and unconscious, and when we lose consciousness (e.g. sleep), we lose touch with the world.

Humphrey takes an evolutionary view of consciousness, arguing that, if Darwin’s theory is correct, then consciousness (the ‘inner eye’) – like all other natural abilities and structures – must have come into being because it conferred some kind of biological advantage on those creatures that possessed it. According to Humphrey, that advantage relates to the biological challenge that human beings have had to meet, specifically, the human need to understand, respond to, and manipulate the behaviour of fellow human beings.

    In evolutionary terms I suspect that the possession of an ‘inner eye’ served one purpose before all: to allow our own ancestors to raise social life to a new level. The first use of human consciousness was – and is – to enable each human being to understand what it feels like to be human and so to make sense of himself and other people from the inside.
    (Humphrey, 1993, p. 2; emphasis in original)

James (1890) claimed that, according to ‘automaton-theory’, it should be possible, in principle, to predict everything that Shakespeare ever wrote, if we had sufficient knowledge of his brain, without in the least understanding what passed through his mind when he wrote it. Not only does this strike us – quite rightly – as absurd (and what does ‘sufficient’ mean in this context?), but for James the claim regarding the obscurity of how mental life can affect bodily behaviour is matched by the obscurity of the claim that brain activity can produce mental life. He rejected the automaton-theory as an ‘unwarrantable impertinence’.

In support of reductionism, Toates (2001) gives the example of Parkinson’s disease (PD): the greatest insight into the cause and possible cure of PD has come from reducing it to the biological level. We know that PD is caused by the malfunction and death of certain neurons in a particular brain region. However, while there may be a fairly straightforward causal link between this neuron malfunction and the movement disorder that characterizes PD, explaining the associated mood disorder is rather more difficult. This, in turn, raises the more general philosophical issue regarding the mind–brain relationship.

A history of anatomy and physiology

The Ancient Greek philosophers’ reflection on the human condition went under the name of ‘philosophy’, and it wasn’t until the sixteenth century that the word psychologia was used for the first time (in 1540, by the German religious reformer Philipp Melanchthon). This was a period of religious barbarism in what would become Europe, making ripped-apart human corpses readily available and, thereby, getting around the informal Catholic ban on the dissection of human bodies. This revolutionized the study of anatomy, revealing in particular how nerves innervate muscles (Bazan, 2016).


These anatomical discoveries undermined the traditional view of the body, widely held for the previous 1,500 years. Fysica (Aristotle’s term for the natural sciences) had been the principal source of knowledge in the Jewish, Christian, and Muslim worlds. For Aristotle, the soul had primacy over the body; indeed, the body was merely a clay that must be moved to life by inspiration (i.e. by anima – the breath of the soul). Now, functions that had previously been attributed to the soul became attributed to the body. The brain became the major organ of sensory functions and displaced the heart as the seat of the emotions and thinking.

According to Melanchthon’s form of dualism, anatomia (pertaining to the body) and psychologia (pertaining to the soul) comprised a new anthropologia (anthropology: doctrine of man), which became ‘absorbed’ into the world of the Reformation (Bazan, 2016). According to the Dutch reformer Rudolph Snellius (1594), the soul possessed new exclusive properties (never previously attributed to it), namely thinking (imagination) and free will (Bazan, 2016). This was incorporated into Descartes’ philosophical dualism, which distinguished between res extensa and res cogitans (see Chapter 2). As well as being a philosopher, Descartes dissected animals and human corpses and was also familiar with research on blood circulation; based on this anatomical research, he concluded that the body is a complex device capable of moving without the soul (thus contradicting Aristotle’s doctrine of the soul). Although Descartes never used the term ‘psychologia’, by the end of the 1600s distinguishing between anatomy and psychology was commonplace, especially in medical literature (Bazan, 2016).

Prior to the anatomical ‘revolution’ of the 1500s, Galen, the second-century Roman physician, surgeon, and philosopher, had recognized the psychological relevance of the brain. However, instead of focusing on the solid brain tissue, he proposed that it was the brain’s three fluid-filled cavities (the ventricles) that were important. Each ventricle was responsible for a different mental faculty: imagination, reason, and memory. According to Galen, the brain controlled our body’s activities by pumping fluid from the ventricles through the nerves to other organs. (This was related to his theory of the four humours – blood, yellow bile, black bile, and phlegm. A predominance of any one determines a particular type of temperament; we enjoy perfect health when these elements are in balance. Eysenck’s theory of personality dimensions is built on Galen’s account of temperament; see Chapter 13.)

Fluid-based theories of the brain continued to dominate until well into the seventeenth century. Indeed, Descartes compared the brain to a hydraulic-powered machine. A major flaw in this account is that a fluid couldn’t move quickly enough to explain the speed of our reactions (O’Shea, 2013). A more enlightened approach came when a new generation of anatomists provided an increasingly accurate picture of the brain’s structure. Some key advances are described in Box 5.2.

BOX 5.2 Some key early discoveries regarding how the brain works
• The seventeenth-century English doctor Thomas Willis argued that the key to understanding how the brain works lay in the solid tissue, rather than the ventricles.
• A hundred years later, Luigi Galvani and Alessandro Volta showed that an external source of electricity could activate nerves and muscles. This helped to explain how we’re able to respond to events so rapidly. But it wasn’t until the 1800s that the German physiologist Emil Du Bois-Reymond confirmed that nerves and muscles themselves generate electrical impulses.
• At the beginning of the 1900s, the Spanish anatomist Santiago Ramon y Cajal identified the neuron as the building-block of the brain. He discovered a variety of neurons that isn’t found in the cells of any other organ of the body.
• Cajal used the method of silver staining (discovered by Camillo Golgi) to pick out individual neurons as discrete – but connected – functional and anatomical units. As well as revealing the neuron’s basic structure (see Gross, 2015), this discovery helped form the basis of the widely held view that the brain isn’t a single fused network but is organized into discrete circuits (the neuron doctrine).
• A nerve impulse is a wave of physical and chemical excitation that passes along a neuron, analogous to (although quite different from) an electric current passing along a wire. This is the basic process by which information is passed between neurons.
• Cajal proposed that a nerve impulse travels through a neuron in one direction only. The cell body and its branched projections (dendrites) gather incoming information from other neurons; this information is then transmitted along the axon to the synapse (strictly, the synaptic gap or cleft), the ‘junction’ with neighbouring neurons. Here, the nerve impulse triggers the release of molecules of chemicals (neurotransmitters), which carry the signal across the synaptic gap. Once they reach the other side (i.e. the receiving neuron), these molecules briefly flip ‘electrical switches’; this can result in either (1) exciting the neuron into sending its own signal, or (2) temporarily inhibiting its activity, making it less likely to fire in response to other incoming signals.
• There are several different neurotransmitters, including noradrenaline, dopamine, serotonin, and gamma aminobutyric acid (GABA).
• Cajal and Golgi shared the 1906 Nobel Prize in Physiology or Medicine.
• Most surprisingly, Cajal observed that the complexity of insect neurons matched – and sometimes exceeded – that of humans; this suggests that human abilities depend on how neurons are connected, not on any special features of the cells themselves. This ‘connectionist’ view opened the door to a new way of thinking about information processing in the brain; it is still dominant within modern neuroscience (see text below and Chapter 7).
(Based on O’Shea, 2013; Tallis, 2011)

A brief journey around the brain

The brain, together with the spinal cord, comprises the central nervous system (CNS); the peripheral nervous system (PNS) subdivides into (a) the somatic nervous system (SNS) (involved in voluntary bodily movements) and (b) the autonomic nervous system (ANS) (which controls the activity of the viscera: heart, stomach, intestines, glands, etc.). The ANS has two branches: (1) the sympathetic and (2) the parasympathetic (see Gross, 2015). The NS as a whole comprises approximately 100 billion (100,000,000,000) neurons; about 80 per cent are found in the brain, especially in the cerebral cortex, the topmost outer layer. However, neurons aren’t the only type of brain cell, as described in Box 5.3.


BOX 5.3 Non-neuronal brain cells
• Glial cells (or glia) are 9–10 times more numerous than neurons, and come in different forms (e.g. astrocytes, oligodendrocytes, or radial glia).
• As well as helping to maintain neurons, glial cells play an important role in regulating synapse development and functioning; they also promote neuronal survival and protection after injury, support learning and memory, and regulate mood.
• Radial glia could be important for preventing neurodegeneration (especially relevant to understanding Alzheimer’s disease).
• Astrocytes modify the connections between neurons (through regulating signals across synapses).
• Oligodendrocytes provide the insulating myelin sheath around the neuron’s axon.
• Microglia are ‘master multitaskers’, controlling the growth of new neurons, new connections, and neuronal pruning (the reduction in the size of the cerebral cortex, especially in adolescence but also in the adult brain); they seem to be highly active in the hippocampus, which plays a major role in memory (see Chapter 7).
• Spindle cells are a recently evolved type of brain cell unique to higher primates (including chimpanzees and gorillas). They’re noticeably large, with unusually long, spindle-shaped bodies, and are found only in the front (anterior) part of the cingulate cortex (the ACC). Spindle cells are implicated in our emotional response to others during social interactions; people with autism and schizophrenia may have abnormal or misplaced spindle cells.
(Based on Constandi, 2013; Fields, 2004; Moyer, 2013a, 2013b; Phillips, 2004)

Major structures

During the first five weeks of foetal life, the neural tube changes its shape to produce five bulbous enlargements; these are generally accepted as the basic divisions of the brain, namely:
• the myelencephalon (comprising the medulla oblongata);
• the metencephalon (pons and cerebellum);
• the mesencephalon (tectum and tegmentum);
• the diencephalon (thalamus and hypothalamus); and
• the telencephalon (cerebral hemispheres or cerebrum, basal ganglia, and limbic system).
‘Encephalon’ means ‘within the head’.

During foetal development the outside of the brain gradually becomes more folded/wrinkled (or convoluted); this is necessary if its 2,500-square-centimetre surface area is to fit inside the relatively small skull. An overlapping, but broader, division into hindbrain (rhombencephalon), midbrain (mesencephalon), and forebrain (prosencephalon) is shown in Figure 5.1. Figure 5.2 shows a lateral (side-on) view of the left cerebral hemisphere.

Figure 5.1 Division of the human brain into hindbrain, midbrain, and forebrain.

Localization and lateralization of brain function

Different functions, such as vision, hearing, movement, and sensation, are located in different lobes (occipital, temporal, parietal, and frontal, respectively). The four lobes are found in both cerebral hemispheres, so, in this respect, the hemispheres can be regarded as mirror images of each other. There are also distinct areas dealing with, for example, speech production (Broca’s area) and comprehension (Wernicke’s area); this too illustrates functional localization. However, Broca’s and Wernicke’s areas are found only in the left hemisphere; this illustrates functional lateralization (or hemispheric asymmetry). A very early attempt to chart functional localization came in the form of phrenology. This is described in Box 5.4.

Figure 5.2 Lateral (side-on) view of the left cerebral hemisphere, showing the frontal, parietal, occipital (incorporating the visual cortex), and temporal (incorporating the auditory cortex) lobes, the central fissure (fissure of Rolando), the lateral fissure (fissure of Sylvius), and the cerebellum.


BOX 5.4 Phrenology
• Starting in the 1790s, Gall’s phrenology (originally called craniology) helped the brain – together with the sense organs – become the main meeting point between physiology and Psychology.
• Franz Joseph Gall (1758–1828) had originally tried to address what makes one pupil more clever than another, or why a child has one ability rather than another. These questions have intrigued people over and over again in an age of mass education.
• Phrenology claimed that there’s a correlation between strength of ability and size of brain areas; as the enlargement of one part of the brain rather than another affects the shape of the head, it was possible to ‘read’ the bumps and reveal character.
• It took a long time for historians to take phrenology seriously: it seems obviously silly to believe in reading a person’s character from feeling the bumps on his/her head. Yet it was Gall perhaps more than anyone else who convinced people that the brain is the organ of the mind; it was public enthusiasm for phrenology that spread interest in the brain and the belief that knowledge of the brain would greatly enhance human self-help.
• Between 1810 and 1819, Gall (assisted by J.C. Spurzheim) published the four-volume Anatomy and Physiology of the Nervous System, a major contribution to the descriptive study of the brain. In it, Gall ‘elaborated a language of physiognomy familiar to painters and sculptors and in everyday life (for example, “egghead” and the derogatory “flat head”)’ (Smith, 2013, p. 41).
• What was new, and a sign of the direction of science, was proposing a ‘doctrine of the brain’ as a basis for ‘perfect knowledge of human nature’. Knowledge of the brain was now seen as a guarantor of the truth.
• Phrenologists described individual differences – and group differences – as built into the physical fabric of people. Such prejudiced thinking about race and gender was very characteristic of nineteenth-century empirical science; it often implied an underlying determinism.
(Based on Richards, 2010; Smith, 2013)

Comte, the founder of positivism (see Chapter 2), was an advocate of phrenology, believing that it represented a positive science of facts about human character that could form the basis for social policy. He specifically dismissed the possibility of Psychology as a field of study in its own right: objective scientists should study the brain (i.e. be phrenologists) or study people in society (i.e. be sociologists); they cannot study supposed subjective mental states in-between (Smith, 2013).

According to Richards (2010), the main problem with phrenology was that experimental demonstration of localization of brain function (i.e. actually identifying basic faculties which determine innate capacities) was extremely difficult. Also, the Catholic physiologist Pierre Flourens (1794–1867), based on his 1840s research in which he removed parts of the brains of pigeons (ablation), argued for the unity of the cortex as the organ of the spiritual mind. He claimed that ablation caused only general behavioural deficits, not specific losses (Richards, 2010; Smith, 2013).


However, by the 1860s there was a swing back towards localization, following the discovery of Broca’s area in 1861 (see above). In 1870, Fritsch and Hitzig reported highly localized motor regions of the cortex using new electro-stimulation techniques (Richards, 2010). Wernicke’s area was identified in 1874. Against this, Karl Lashley’s research with rats, in the 1920s and 1930s, demonstrated:
1 The law of mass action: the learning of difficult problems depends on the amount of damage to the cortex, and not on the location of the damage.
2 The law of equipotentiality: corresponding parts of the brain are capable of taking over the function normally performed by the damaged area.
This was based on Lashley’s failure to find specific neural circuits related to the learning of particular types of problem.

Ablation (the surgical removal of parts of the brain) is just one example of an invasive method used to study the brains of non-humans.
• Why do Psychologists study non-human animals?
• Try to formulate some arguments for and against the use of such methods: these should focus on ethical (rather than practical) issues, in particular animal suffering.

Another major contributor to the localization/lateralization debate was Roger Sperry, a Neuropsychologist who had conducted split-brain operations (cutting the corpus callosum, which connects the two hemispheres of the brain – callosotomy) on monkeys in the 1950s. He had then devised tests for assessing the abilities of each hemisphere separately. During the 1960s, Sperry and his colleagues adapted these tests for use with human commissurotomy patients (in which, in addition to the corpus callosum, the smaller anterior and hippocampal commissures, and, in some cases, the massa intermedia, are also cut). These patients had undergone the surgery as a last-resort treatment for debilitating epilepsy. Sperry’s research is commonly referred to as ‘split-brain studies’ and raises questions regarding whether each hemisphere constitutes a separate brain, or even a separate mind. This is discussed further in Chapter 12.

Why do Psychologists study animals?

There are very good practical/scientific reasons for the use of non-humans in psychological research. For example, there’s an underlying evolutionary continuity between humans and other species, which gives rise to the assumption that differences between humans and other species are merely quantitative (as opposed to qualitative): other species may display simpler behaviour and have more primitive nervous systems than humans, but they aren’t of a different order from humans. In fact, the mammalian brain is built on similar lines in rats, cats, dogs, monkeys, and humans, and neurons are the same in all species and work in the same way. These similarities of biology are, in turn, linked to behavioural similarities. So, studying the simpler cases is a valid and valuable way of finding out about the more complex ones. This claim was one major aspect of Watson’s ‘Behaviourist manifesto’ and Skinner’s analysis of behaviour (see Chapters 1 and 6).

How Psychologists work with animals

The Guidelines for Psychologists Working with Animals (British Psychological Society, 2007) point out that research is not the only reason Psychologists work with animals,


even though, not surprisingly, it’s what has caused the most controversy and media attention. Animals are sometimes used in practical teaching within Psychology degree courses, and increasingly animals are being used in various forms of psychological therapy (including companion animal visiting schemes in hospitals or hospices, pet-keeping within prison rehabilitation schemes, and in behaviour therapy for the treatment of specific animal phobias). Psychologists may also be asked to advise on therapy for animals whose behaviour appears to be disordered in some way, as well as training animals for commercial purposes (Gross, 2015).

How Psychologists should study/work with animals

BOX 5.5 Guidelines for Psychologists working with animals (British Psychological Society, 2007)
Ten major areas are covered as follows:
1 Legislation: the Animals (Scientific Procedures) Act (1986) governs any scientific procedure that may cause pain, suffering, distress, or lasting harm to a ‘protected’ animal (i.e. all non-human vertebrates and a single invertebrate species, Octopus vulgaris). Psychologists working with animals in ways not covered by the Act should aim to maintain standards at least as high as those proposed in the Guidelines for research use. In addition, Psychologists should be aware that they have a more general duty of care towards any protected animal under the Animal Welfare Act (2006).
2 Replacing the use of animals: alternatives to intact behaving organisms, such as video recordings from previous work or computer simulations, may be useful – especially in a teaching context. Two specific examples are the ‘Ratlife’ project (video) and ‘Sniffy the virtual rat’ (computer simulation).
3 Choice of species and strain: Psychologists should choose a species that is scientifically and ethically suitable for the intended use: the species should be chosen for the least amount of suffering while still attaining the scientific objective. The choice must be justified as part of the application for a Project Licence (under the 1986 Act).
4 Number of animals: the 1986 Act requires use of the smallest number of animals sufficient to achieve the research goals.
5 Procedures: see ‘Legislation’ above. Permission to perform regulated procedures requires a Project Licence, which is granted only after weighing the benefits and costs (in welfare terms) to the animal subjects. In addition, the actual performance of a regulated procedure requires a Personal Licence, given only after successful completion of appropriate training. When applying for a licence, investigators must also discuss their proposal with a Local Ethical Review Committee (which must include a veterinary surgeon).
6 Procurement of animals: common laboratory animals must come from Home Office Designated Breeding and Supply Establishments.
7 Animal care: the 1986 European Convention (Article 5) provides that:
    Any animal used or intended for use in a procedure shall be provided with accommodation, an environment, at least a minimum of freedom of movement, food, water and care, appropriate to its health and well-being. Any restriction on the extent to which an animal can satisfy its physiological and ecological needs shall be limited as far as practicable.
8 Disposing of animals: if animal subjects must be killed during or subsequent to the study, this must be done as humanely and painlessly as possible (as defined by the Act). A veterinary surgeon should be consulted regarding current methods of euthanasia.
9 Animals in Psychology teaching: whoever the students are, ethical issues should be discussed with them. Only advanced undergraduates and postgraduate students would be eligible to apply for a Personal Licence, and any procedures would be carried out only under an existing Project Licence.
10 The use of animals for therapeutic purposes: in all cases, the same considerations concerning the general care and welfare as detailed for experimental animals apply. But there are also specific considerations, such as the individual animal’s temperament and training being suitable for the planned task (e.g. a hospital-visiting dog should be calm, placid, and sociable with people). Contact with the client/patient needs to be carefully monitored.
(Based on Gross, 2015)

The issue of animal suffering: do animals feel pain?

The case for: non-humans are sentient

The very existence of the Guidelines described in Box 5.5 presupposes that animals are, like humans, sentient, roughly defined as the capacity for emotion, pleasure, and pain (Boyle, 2009). Sentience may be applied to species which Regan (2006) calls ‘subjects of a life’, aware of what happens to them and that events affect their lives. Boyle (2009) cites a range of evidence pointing to the belief that non-humans are sentient:
• Comparative neuroanatomical research emphasizes continuity across species with regard to the CNS, which is found in all vertebrates from apes to bats to fish. The genes underlying NS development have been virtually unchanged throughout evolution, and, as we noted earlier, neurons are similar across species. The same is true for neurotransmitters, hormones, and chemicals. Both the limbic system in general and the amygdala in particular (which mediate emotion), and the sensory input to it, are strikingly similar among vertebrates.
• Although most pain research has focused on mammals, other simpler vertebrates also have the neuroarchitecture to allow them to identify stimuli that hurt. But where do we draw the line? Recent research has focused on fish.
• According to Sneddon (2006; Sneddon et al., 2003), fish feel pain, implying that angling is cruel. They tested the neural responses of rainbow trout and injected the fish with mild poisons. Undoubtedly: (1) fish have specific neural receptors that respond to heat, mechanical pressure, and acid, and the neurons fire in a way very similar to the firing patterns of human neurons in response to aversive stimuli; (2) fish behave abnormally when their lips are injected with bee venom and vinegar, rocking from side to side and breathing very rapidly; and (3) the abnormal behaviours and symptoms are not seen – or at least not to the same extent – in fish that are simply handled or given an injection of a harmless substance.
• The degree to which invertebrates experience pain is less certain, but the inclusion of octopi as a ‘protected’ species in the Animals (Scientific Procedures) Act (1986) (see Box 5.5) suggests strongly that they do.
as a ‘protected’ species in the Animals (Scientific Procedures) Act (1986) (see Box 5.5) suggests strongly that they do.


• Evidence is accumulating that birds display many remarkable skills once thought to be restricted to humans and/or other great apes. For example, European magpies display mirror self-recognition (Prior et al., 2008), New Caledonian crows make and use several types of tool (Chappell and Kacelnik, 2002), and even chickens can be deceptive and use sophisticated signals to convey their intentions (Smith and Zielinski, 2014).

The case against: sentience doesn’t equal consciousness

According to Robinson (2004), to claim that fish feel pain depends on a scientific definition of pain, but there is no such definition. We might define it in terms of the actions of neurons in response to aversive stimuli, but this is only the physiological cause: when we use the word ‘pain’ we usually mean the experience. Even behavioural responses needn’t be correlated with an experiential mental state. However, isn’t it reasonable to conclude that if another vertebrate species behaves in the same way as we do in response to a similar/equivalent stimulus, and if its physiological responses to that stimulus are the same as ours, then it feels what we feel?

Taking a parallel example, DeGrazia (2002) argues that, as with pain, we usually use the word ‘anxiety’ to denote more than the physiological and behavioural components, namely, consciousness of the self existing through time or temporal self-awareness. This is what allows us to be aware of what is happening to us (Robinson, 2004). But isn’t this a circular argument? DeGrazia is asking us to assume what we’re trying to prove, namely that non-humans experience things as we do. In other words, ‘anxiety’ (and, by the same token, ‘pain’) comes pre-loaded with meanings that pertain to our subjective experience and emotions – the essence of being human.

Feelings versus emotions

According to Wilhelm (2006), scientists are finally beginning to believe that mammals, at least, have some form of emotions. Damasio (2003) distinguishes between:

1 primary, almost instinctive emotions that help an individual mesh with a group (including fear, anger, disgust, surprise, sadness, and joy) – they are physical signals of the body responding to stimuli; and
2 feelings, which stem from self-reflection – they represent sensations that arise as the brain interprets (1) (see below).

Damasio attributes (1) to many species. Even the primitive sea slug, Aplysia, shows fear: when its gills are touched, its blood pressure and pulse increase and it shrivels. These aren't reflexes, but elements of a fear response. However, such organisms don't have feelings.

Damasio also identifies social emotions (sympathy, embarrassment, shame, guilt, pride, envy, jealousy, gratitude, admiration, contempt, and indignation). These aren't limited to humans (e.g. gorillas, wolves, and dogs all display them). Yet even in such cases, as with (1), some neuroscientists argue that these are largely automatic and innate responses and include them among the routinized survival mechanisms.

Extending Robinson's argument above regarding pain, we can ask: if non-humans are incapable of experiencing pain and feelings, then can they be said to 'suffer'? According to Robinson (2004):

    Suffering depends upon our sense of ourselves, our sense of the passage of time and of the changing fortunes in our lives. When we are subjected to adverse stimuli we feel pain, anxiety and fear for the very reason that we are conscious of what is happening to us – we experience the stimuli, not only respond automatically to them. But because we share so much of our evolutionary history with animals, the outward signs of these responses are similar.
    (p. 22; emphasis in original)

Fish don't have an area of the brain corresponding to our own neural pain-processor – the neocortex: although the same signals are sent to the brain, there's no recognizable pain-experience-producing region to go to when they arrive. There are no brain regions that produce the unpleasantness of pain; they have little more than a brainstem (no cerebral hemispheres).

While the 'sentience argument' helps us distinguish animals from physical objects, sentience on its own is insufficient to demonstrate that non-humans are capable of experiencing pain, feelings, suffering, and so on. These experiences are part of what we mean by 'consciousness' (or, strictly, self-consciousness; see Gross, 2014). Whether or not self-consciousness is a uniquely human capacity is a moot point (see Gross, 2012).

According to Wilhelm (2006), ultimately it's not possible to prove through observation whether an animal possesses conscious feelings – no more than we can be sure about what another person is truly experiencing. But since we know from experimental studies that some animals, at least, are self-aware, it's not unreasonable to think they could also be cognizant of their emotions. Also, as Boyle (2009) observes, sentience involves the capacity for emotion and pain, whether or not the experience is cognitively sophisticated or human-like. If non-humans are capable of feeling emotion, then we have yet another reason to seriously consider how well we treat them.

What do you think of the claim that particular non-human species, namely those closest to us in an evolutionary sense, should be given 'special consideration' in the context of animal experiments?

Research using great apes (gorillas, chimpanzees, bonobos, and orang-utans) was banned in 1998. But about 10,000 experiments, mainly on marmosets and macaques (and other primates), are carried out every year, with the UK leading the field (an annual total of almost 4,000). Beginning in 2008, the European Union called for a ban on all experiments involving primates (McKie, 2008).

Speciesism

According to Gray (1991), most people (both experimenters and animal rights activists) would accept the ethical principle that inflicting pain is wrong. But we're sometimes faced with having to choose between different ethical principles, which may mean having to choose between human and non-human suffering. Gray believes that speciesism (discriminating against and exploiting animals because they belong to a particular (non-human) species; Ryder, 1990) is justified, and argues that not only is it not wrong to give preference to the interests of one's own species, one has a duty to do so.

For Gray (1991), in many cases the decision not to carry out certain experiments with animals (even if they'd be subjected to pain or suffering) is likely to have the consequence that more people will undergo pain or suffering that might otherwise be avoided. One of the problems associated with the pro-speciesism argument is that medical advance may become possible only after extensive development of knowledge and scientific understanding in a particular field (Gray, 1991). In the meantime, scientific understanding may be the only specific objective that the experiment can readily attain. It's at this interim stage that the suffering imposed on experimental animals will far outweigh any (lesser) suffering eventually avoided by people, and this is at the core of the decisions that must be made by scientists and ethical committees.

Localization revisited

Returning to the issue of localization of brain function, Richards (2010) observes how, in the 1940s and 1950s, Wilder Penfield (with various colleagues) reversed the picture again. His work is described in Box 5.6.

BOX 5.6 KEY THINKER: Wilder Penfield (1891–1976)

• Penfield was born in the state of Washington, US, but moved to Wisconsin with his mother and two siblings when he was eight; this followed the failure of his father's medical practice.
• His mother fostered in him a burning ambition to win a Rhodes Scholarship to Oxford University, which he achieved in 1914.
• Two eminent Merton College professors had a great impact on him: (a) Sir Charles Sherrington (physiology) instilled a taste for meticulous research into the mechanisms of the NS; and (b) Sir William Osler (medicine) encouraged a desire to reduce patients' suffering. Penfield's discoveries were made in the course of working with people suffering from brain tumours, and so both of these aims were achieved together.
• He returned to the US to begin his medical training at Johns Hopkins Medical School, gaining his MD in 1918. He then returned to Oxford to work with Sherrington, and later held a research fellowship in London.
• For several years at Columbia University, New York, he worked as a neurosurgeon (to relieve suffering) and a neuroscientist (conducting research).
• In 1928 he moved to McGill University's medical staff (in Montreal, Canada), where he continued his treatment and study of epilepsy.
• Patients were required to remain conscious during the surgery. The surgeon needs to know when the boundary between the brain abnormality (such as a tumour) and the active healthy brain has been reached: this can only be achieved by asking the patient to perform simple cognitive tasks.
• While feeling no pain, patients reported vivid experiences when specific parts of the cortex were electrically stimulated (specific memories, highly distinct smells, etc.). In this way, Penfield was able to map out the cortex in terms of its functions based on over 1,000 operations; he stressed that these were always performed in the interests of the patient (i.e. to reduce suffering), but they simultaneously provided an unrivalled neuroscientific opportunity.

(Based on Harré, 2006)


The implications of Penfield's work for consciousness and free will

As regards the mind–brain relationship, Penfield was an anti-reductionist: while he believed that the functions of the mind are largely carried out through brain mechanisms (i.e. the brain is necessary for mental functions), he held that these mental functions are independent of those mechanisms. He was unable to find brain mechanisms that accounted for 'mind-action', even though those mechanisms 'awaken the mind' and 'give it energy' (Penfield, 1975, p. 104).

When Penfield electrically stimulated the motor cortex usually associated with normal (i.e. voluntary) limb movements, patients reported feeling that their arms and legs were being moved (i.e. involuntarily) – these were very different experiences.

What conclusions can you draw from these findings regarding the existence of free will?

These findings demonstrate that the subjective experience (phenomenology) of voluntary movement cannot be reduced to the stimulation of those brain areas normally associated with such movement. Doing things voluntarily (i.e. 'freely') simply feels different from the same things 'just happening' (Gross, 2015).

Similarly, Delgado (1969) stimulated a part of the primary motor area in a patient's left hemisphere, causing the patient to form a clenched fist with his right hand. When asked to try to keep his fingers still during the next stimulation, the patient failed, remarking, 'I guess, doctor, that your electricity is stronger than my will' (Gross, 2015, p. 857). According to Gross (2015):

    These examples support the claim that having free will is an undeniable part of our subjective experience of ourselves as people. The sense of self is most acute (and important and real for us) where moral decisions and feelings of responsibility for past actions are involved (Koestler, 1967).
    (p. 857)

Modern neuroscience: brain imaging

Research like Penfield's, together with data from cases of brain-damaged patients, contributed hugely to our understanding of localization of brain function. However, particularly in the case of brain damage, interpreting the data isn't as straightforward as it may first appear: just because damage to (or absence of) a particular brain area is associated with a deficit in some cognitive ability or behaviour, doesn't mean that that brain area normally controls that ability or behaviour (the problem of 'subtractive logic'; Richards, 2010). Without knowledge of how the system as a whole works, we simply cannot be sure which brain area is normally responsible for the ability or behaviour in question.

Starting in the 1990s, however, non-invasive scanning/imaging techniques became the major tool of neuroscientists trying to monitor brain activity as a whole. The major techniques are described in Box 5.7.


BOX 5.7 Major forms of brain imaging

• EEG imaging and the geodesic net: while the long-used electroencephalogram (EEG) involves fitting a small number of electrodes to the scalp in order to record the brain's electrical activity, EEG imaging uses 32 electrodes. The data are fed into a computer, which translates them into coloured moving images on a monitor. This has been adapted for studying brain development in babies in the form of a geodesic net, which comprises 64 or 132 electrodes. The computer calculates the likely brain areas that generated the voltages observed on the scalp.
• Computerized axial tomography (CAT): a moving X-ray beam takes pictures from different positions around the head and these are converted by the computer into 'brain slices' (apparent cross-sections of the brain). CAT scanning is used primarily for the detection and diagnosis of brain injury and disease.
• Positron emission tomography (PET): this uses the same computer-calculation approach as CAT, but uses radiation to compute the brain slices. A radioactive tracer is added to a substance used by the body (such as oxygen or glucose); as the marked substance is metabolized, PET shows the pattern of how it's being used. This provides a more accurate diagnosis of possible brain abnormalities than CAT.
• Magnetic resonance imaging (MRI): this is like a CAT scan, but instead of radiation, MRI involves a strong magnetic field being passed through the head and measures its effects on the rotation of hydrogen nuclei in the body. Because hydrogen molecules are present in substantially different concentrations in different brain structures, the resulting brain slices are much clearer (higher resolution) than CAT pictures.
• Functional MRI (fMRI): while both CAT and MRI produce only still images, fMRI monitors blood flow in the brain over time as people perform different kinds of task. This provides real-time images of brain function and so is used as much to study the normal as the damaged/diseased brain.

Other techniques include (1) average evoked potentials (AEPs); (2) radioactive labelling; (3) single-photon/positron emission computerized tomography (SPECT); (4) superconducting quantum imaging/interference device (SQUID); and (5) 3D brain mapping (see Gross, 2015).

What can brain imaging tell us about brain function?

Methodological issues

PET scans pinpoint in brilliant colour the brain regions where neurons are active during a particular mental task; they've shed new and exciting light on many brain diseases and pathological conditions, including epilepsy, Parkinson's disease, Alzheimer's disease, Huntington's disease, and Down's syndrome. Because they're very sensitive to brain changes during episodes of schizophrenia, depression, and other mental disorders, PET scans are being used extensively in psychiatry (Sabbatini, 1997). Perhaps more controversially, PET and MRI scans are also being used to study the brain in relation to criminal behaviour, especially violent crime (e.g. Raine et al., 1997, 2000, 2004).

More recently – and more generally – brain imaging studies (using in particular fMRI) have identified the brain regions underlying a wide range of (normal) human behaviour and cognitive processes. Gergen (2010) says that both in terms of professional interest and public recognition, the 'brain and behaviour' movement is in evidence everywhere. Cortical accounts of behaviour include aggression, happiness, altruism, social understanding, self-harm and suicide, economics, aesthetic judgement, ethics, jealousy, social decision-making, romantic love, empathy, and envy. The multicoloured images have become iconic symbols of science in general, and neuroscience in particular (Gross, 2015).

As we noted in Box 5.7, fMRI claims to reveal brain activity in real time; this is what makes it so persuasive. The areas that 'light up' while the participant reads some text or looks at pictures of faces are taken to be the neural correlates of the behaviours/cognitions involved in the task. But is this interpretation valid? According to Dobbs (2005), Psychologists have praised fMRI for finally making psychology quantifiable, and cognitive neuroscientists have cited it frequently in the recent, vastly expanding understanding of the brain. But just how reliable and valid are the findings from these imaging studies?

Satel and Lilienfeld (2013) dispute the 'real time' claim:

    Scientists can't just look 'in' the brain and see what it does. Those beautiful colour-dappled images are actually representations of particular areas in the brain that are working the hardest – as measured by increased oxygen consumption – when a subject performs a task…. Despite well-informed inferences, the greatest challenge of imaging is that it is very difficult for scientists to look at a fiery spot on a brain scan and conclude with certainty what is going on in the mind of a person.
    (p. xii)

More specifically, PET scans depend on tracing the location of radioactive material (see Box 5.7). So, as McGhee (2001) points out, we need to be sure that sufficient radioactive material was injected in the first place for it to appear during a scan.
This is especially important if we're looking at a small brain region or low-activity processes. If we correctly predict activity in area X, but not in Y, we need to be sure this reflects different neural activity levels, and not the result of insufficient radioactive material being injected to show up at Y.

A problem shared by PET, MRI, and fMRI is the sheer computational load involved (McGhee, 2001). High-powered computers are needed to collect, store, and present the data collected from multiple scans, and special software is available to ensure that successive images are matched together correctly to produce accurate displays over time – without compromising the structural detail. Nevertheless, the faster images are captured, the greater the amount of 'noise' that threatens to obscure the signal (Menon and Kim, 1999, in McGhee, 2001).

A further difficulty is that different studies identify different 'hot spots' (areas of greatest neural activity) for the same task. Farah and Aguirre (1999), for example, found that 17 separate studies indicated between them 84 different candidates for the precise brain region involved in object recognition.

Gabrieli (1998) has identified some additional technical and methodological issues in the interpretation of neuro-imaging, including the following:

• The images generated by PET and fMRI are not images of neural activity as such, but of nearby (local) blood flow (haemodynamic response) or metabolic changes (such as glucose metabolism); the latter merely indicate neural activity. This makes the images much more indirect evidence of brain activity.

• Residual activity can be found in parts of the brain that are severely damaged; these traces may be of processes that aren't directly involved in those being studied, but merely correlated with them. It could be that in undamaged brains, much of the activity we observe is nothing to do with the critical processes that actually produce the behaviour the participant is displaying.

Because neurons take milliseconds to fire, and the blood surge follows two to six seconds later, an increase in blood flow might be 'feeding' more than one operation. Also, thousands or even millions of neurons may have to fire to significantly light up a region: it's as if an entire section of a stadium has to shout to be heard (Dobbs, 2005).

According to McGhee (2001), scans can only describe brain activity, not brain function: for the latter we'd need to interpret the stimuli presented to the participant and the patterns of response over several trials.

In the context of education, Bennett (2013) dismisses the creative right brain/logical left brain dichotomy (see Gross, 2015) as a 'neuromyth'. Satel and Lilienfeld (2013) similarly dispute conclusions that have been drawn from imaging studies of addiction, advertising ('neuromarketing'), and lie detection.

Conceptual issues: the free will debate

As Satel and Lilienfeld (2013) observe, criminal defence lawyers (especially in the US) are increasingly drawing on neuroscientific research findings to argue that their client's brain 'made' them commit murder or some other violent crime (neurodeterminism). Not only are such arguments central to the whole notion of criminal (and moral) responsibility, but they're symptomatic of a wider tendency to grant a kind of inherent superiority to brain-based explanations over all other accounts of human behaviour. What Satel and Lilienfeld call neurocentrism is a form of reductionism (and relates to the debate regarding levels of explanation/universes of discourse; see Chapter 3).

As noted in Chapter 4, part of (Western culture's) common sense psychology is the belief that individuals are criminally – and morally – responsible for their actions. However, according to neurodeterminism, and the related biologism, that common sense belief may be untenable. According to biologism, (1) our minds are our brains, and (2) our brains are evolved organs designed, as are all organs, by natural selection to 'maximize the replicative ability of the genes whose tool the brain is' (Tallis, 2011, p. 51). Not only may we have to abandon the notion of free will, and, consequently, of personal responsibility, but

    to be identified with our brains is to be identified with a piece of matter, and this, like all other pieces of matter, is subject to, and cannot escape from, the laws of material nature…. Our destiny, like that of pebbles and waterfalls, is to be predestined.
    (Tallis, 2011, p. 51)

• Is it valid to equate 'me' with 'my brain'?
• Even if it is, does this necessarily exclude the possibility of free will?
• If it can be shown that my brain becomes activated before I become conscious of my intention to act in a particular way, does this necessarily mean that my belief that I'm acting freely is merely an illusion?


According to the eminent neurophysiologist Colin Blakemore,

    The human brain is a machine which alone accounts for all our actions, our most private thoughts, our beliefs…. All our actions are products of the activity of our brains. It makes no sense (in scientific terms) to try to distinguish sharply between acts that result from conscious attention and those that result from our reflexes or are caused by disease or damage to the brain.
    (Blakemore, 1990, p. 270)

If we are our brains, or certain neural discharges in them, then we're surely totally 'unfree'. 'Free will' is usually taken to imply – both in common sense terms and philosophically – that when a voluntary (i.e. non-reflex) act is performed, it is 'I' who decides to perform it, not 'my brain'. The sequence of brain areas that are activated (the prefrontal cortex (PFC), which sends signals to the premotor cortex, which programmes the actions of and sends signals to the primary motor cortex), plus the unique involvement of the dorsolateral PFC in all subjective experience of deciding when and how to act, are taken to be a direct result of my decision.

But a famous set of experiments conducted by the neurophysiologist Benjamin Libet in the 1980s, and repeated and refined many times since, produced findings suggesting that our brain makes decisions to act before we become consciously aware of them, so that they're not really our decisions at all. Haggard and Eimer (1999) described Libet's original experiment as 'one of the most philosophically challenging … in modern scientific psychology' (p. 291). Libet's experiment is described in Box 5.8.

BOX 5.8 Who decides: me or my brain? (Libet, 1985; Libet et al., 1983)

• Libet and his colleagues asked participants to flex their finger/wrist at least 40 times, at times of their own choosing. The researchers measured:
  (1) the time at which the action occurred (M). This could be easily detected by using electrodes attached to the wrist (electromyogram/EMG);
  (2) the beginning of brain activity in the motor cortex. This could also be detected by placing electrodes on the scalp (EEG: see Box 5.7), which detect a gradually increasing signal (the readiness potential (RP));
  (3) the time at which the participant consciously decided to act (the 'moment of willing' (W)).
• This 'moment of willing' is the most difficult to measure: participants were asked to note the position of a spot of light (moving around a clock-like circular screen) at the moment they decided to act. They could then report where the spot of light had been at that critical moment.
• The critical question is: which comes first?
• W occurred about 200 milliseconds (one-fifth of a second) before the action.
• The RP began about 300–500 milliseconds before that (i.e. 500–700 milliseconds before the action).

What do these findings suggest about the concept of free will?

The occurrence of W before the finger-flexing is consistent with the concept of free will. However, we'd expect the RP to begin after W: the fact that it occurred 300–500 milliseconds before W is clearly contrary to what belief in free will would predict. There was activity in the brain for anything up to half a second before participants became subjectively aware of having made the decision – consciousness lagged behind brain activity. According to Blackmore (2005), for a conscious decision to precede any brain activity would be nothing short of magic: it would mean that consciousness could 'come out of nowhere' and influence physical events in the brain (as proposed by dualists such as Descartes; see Chapter 2).

Nonetheless, Libet's results caused a storm of debate among philosophers, neuroscientists, physiologists, and Psychologists, which has been raging ever since (Banks and Pockett, 2007). If our conscious decisions aren't the cause of our actions (as suggested by Libet's findings), then we don't have conscious free will. Even worse, the denial of conscious free will is a challenge to our sense of selfhood: we don't have control over our choices but are merely conduits for unconsciously made decisions (Banks and Pockett, 2007).

However compelling these conclusions may be, they're also counter-intuitive (see Chapter 4). While this in itself doesn't mean they're mistaken, Libet himself refused to reject the concept of free will. Although consciousness clearly couldn't have initiated the participants' movements, it was still capable of stepping in and vetoing them before they were performed ('free won't' – Libet, 1985, 1999); this 'rescues' free will, but at the cost of seriously restricting its role (see Gross, 2014). However, is the 'opposition' between (1) free, consciously willed decisions and (2) unconscious, brain-caused decisions a false dichotomy?
Have we become unnecessarily 'hung-up' on the need for free will to be manifested as fully, 100 per cent conscious? According to Baggini (2015):

    Given that the brain plays a fundamental role in consciousness, wouldn't it be more surprising if nothing was going on in your brain before you made a decision? … And what else could be making thoughts possible other than neurons firing?
    (p. 32; emphasis in original)

Baggini goes on to say that because we don't yet understand the mind–brain relationship, we don't yet know how to talk about it:

    When describing Libet's experiments, it is very easy to talk about your brain deciding before you become aware of it, as though the brain were not part of you.
    (p. 32; emphasis in original)

This demonstrates what Baggini calls the mereological fallacy: mistaking parts for wholes. This leads us from the perfectly acceptable 'we make up our minds before we become aware that we have done so' to the disturbingly different 'our minds are made up for us, by our brains' (Baggini, 2015, p. 32). In other words, my brain, just like all my unconscious (or less-than-conscious) decisions, ideas, thoughts, etc., are part of a whole which is 'me'. (This will be revisited in Chapter 7 in relation to the cognitive unconscious and in Chapter 9 in relation to Freud's psychoanalytic theory and neuropsychoanalysis.)

Conclusions: the acculturated brain

As we noted above, the development of anatomy and physiology helped to focus attention on the brain as the key to understanding behaviour and the mind (as distinct from the soul).


Wundt, as a trained physiologist and a pioneer of the new discipline of Psychology, represents a bridge between the two disciplines (his 1874 landmark textbook was titled Principles of Physiological Psychology) (see Chapter 1); his work also marked Psychology's split from philosophy. Paradoxically, Wundt's work also established Psychology as a domain distinct from physiology itself (Bazan, 2016). For example, in 1867 Wundt rejected a naively materialistic approach and defended the idea of the autonomy of the mental: the laws governing the mind are fundamentally different from those that govern material nature (including the brain). The physiologist Helmholtz was another proponent of this view (see Chapter 1).

By the end of the nineteenth century, therefore, Psychology had emerged as a discipline in its own right, using what was regarded as an appropriate level of explanation of its subject-matter. However, the influence of physiology was still to be observed in many different areas of Psychology. Gergen (2010) gives the examples of William McDougall, who proposed (in 1908) that virtually all significant behaviour is determined by biological instincts, and Floyd Allport, who began his ground-breaking Social Psychology text (1924) with a chapter on the 'physiological basis of human behaviour'.

If Psychology was to be accepted as a natural science (Naturwissenschaft), it needed mental processes to be grounded in neurophysiology. The mid-1950s 'cognitive revolution' (see Chapters 1 and 7) complemented this grounding, in the sense that once the mind (in the form of 'cognition') was once more accepted as valid subject-matter for Psychology, the natural next step was to try to account for cognition in the brain.
Evolutionary Psychology’s claim to have identified universal, inherent predispositions (‘human nature’) (see Chapter 8) and behavioural genetics’ attempt to quantify how much of the variability for any given trait can be attributed to genetic differences, and different kinds of environment (see Chapter 11), both further increased the plausibility of cortical determination. Finally, brain-scanning allowed Psychologists to move beyond inference and conjecture about mental states, to direct observation (Gergen, 2010): This shift to neurological explanations simultaneously guaranteed psychology status as a natural science. In Edward O. Wilson’s (1998) terms, psychology could join the quest for consilience, or the grand unification of the sciences devoted to establishing natural laws. (Gergen, 2010, p. 3) However, we cannot just assume the objectivity of neuroscientific research findings. As we noted in Chapter 3, in the very framing of their research, scientists necessarily make unjustified and culturally derived assumptions about human nature (Danziger, 1997; Gergen et al., 1996). For example, research into the cortical location of emotion – as opposed to reason – presupposes the Western distinction (since the eighteenth century) between them (another false dichotomy?). Indeed, most of the data supporting a cortical explanation of human action can be interpreted as demonstrating the significance of cultural process: given the vast popularization of brain-based explanations in the national media, the key questions are those to do with their ideological and political impact (Gergen, 2010). For example, to propose that activities such as love, altruism, empathy, justice-seeking, and religious worship are manifestations of neural determinism is to transform their very meaning; in turn, this undermines esteemed cultural traditions (Gergen, 2010). Gergen argues that:

130

People as organisms

1 all attempts to link brain states to psychological processes depend on culturally constructed conceptions of mind; 2 brain states can neither cause nor be correlated with psychological states or behaviour; 3 a dependency on ‘hard-wiring’ explanations of behaviour lends itself to less straightforward and ultimately ‘empty’ explanations; and 4 neural accounts of human activity are largely irrelevant to everyday understanding (see Chapter 4). In effect, the brain in itself proves of limited significance in either determining or providing a basis for understanding human action. On the contrary, it is far more promising, both scientifically and in terms of societal value, to view the brain primarily as an instrument for achieving culturally constructed ends. (Gergen, 2010, pp. 4–5) Finally, while the brain might facilitate most human behaviour, it does not determine it (Gergen, 2010, p. 16). Also, consistent with a postmodern view of scientific knowledge (see Chapter 3), There is no Truth about the brain and its relationship to human activity. There are multiple perspectives, each with its own particular assumptions, values, and goals … the brain does not determine the contours of cultural life; cultural life determines what we take to be the nature and importance of brain functioning. (Gergen, 2010, p. 19; emphasis added)


Chapter 6
People as environmentally controlled organisms: Behaviourism

When using the term ‘Behaviourism’, people (including Psychologists) may have different things in mind. They might be referring to one or more of the following:

1 A general psychological approach to the study of human beings which focuses on overt (‘public’) behaviour, as opposed to the approach advocated by Freud and other psychodynamic theorists and therapists (see Chapter 9).
2 The ideas and research of John B. Watson, who first coined the term ‘Behaviourism’ in 1913 and who argued that all human and non-human animal behaviour can be explained in terms of classical conditioning. This is a form of associative learning.
3 The ideas and research of B.F. Skinner (beginning in the 1930s), which represent an alternative ‘brand’ of Behaviourism, according to which the crucial learning process involves operant conditioning. While this is also a form of associative learning, it differs from classical conditioning in quite fundamental ways.
4 Any account of human functioning which excludes the role of mental (or cognitive) events: the only influences on learning and behaviour in general which are of any importance for scientific Psychologists to consider are external, environmental events (or stimuli).
5 Stimulus–Response (S–R) Psychology.
6 All of the above.

While most of these are valid, there’s much more to ‘Behaviourism’ than just these descriptions. This chapter aims to add to these descriptions – and to point out which of the above is/are incorrect – in order to provide a comprehensive picture of this highly influential approach within Psychology. But, as with all the other major approaches, when we discuss ‘Behaviourism’ we’re not denoting a single, unified theory or methodology: Behaviourism is itself diverse, both historically and in terms of the ideas, research methods, and applications which fall under the ‘banner’ of ‘Behaviourism’.

In the beginning …

Behaviourism (at least in its Watsonian form) has its roots in Associationism (a philosophical theory), physiology (in particular Pavlov’s study of digestion in dogs), and two earlier forms of Psychology, namely, Functionalism (beginning with William James) and Animal Psychology (including Watson’s own pre-1913 research with rats).

Pavlov’s physiological research: psychic secretions

BOX 6.1 KEY THINKER: Ivan Pavlov (1849–1936)
• From an early age, Pavlov was persuaded that science was the key to social and political progress. He also believed in the value of scientific knowledge for its own sake.
• He was convinced that the nervous system was the main – if not the only – means by which the internal organs were stimulated to perform their various functions.
• It wasn’t until he was in his forties that Pavlov gained a secure academic post – professor at St. Petersburg’s Military Medical Academy. He could now create and staff his own laboratory and pursue his long-standing ambition, namely the experimental investigation of the physiology of digestion.
• One of his innovations was to surgically create openings (fistulas) in different parts of the digestive tracts of dogs, such as the salivary ducts and isolated areas of the stomach (a procedure he made safe through the use of aseptic surgical procedures). After feeding the animals various substances, he collected, measured, and chemically analysed the resulting secretions from the different parts of the digestive system. Funnels were attached to the fistula, allowing secretions to be collected and precisely measured (e.g. the number of drops of saliva).
• For these studies, Pavlov was awarded the Nobel Prize for Physiology in 1904 (incidentally, the year of Skinner’s birth).
• Placing a drop of dilute acid on a dog’s tongue immediately, automatically, and involuntarily produced a considerable amount of saliva. An incidental observation was that dogs that were used to the laboratory routine and apparatus would start salivating even before the acid was administered, while merely being placed in the apparatus. Pavlov called these ‘psychic secretions’; they were clearly learned (a result of the dog’s experience), while salivation in response to the acid was an innate reflex.

[Figure 6.1: Ivan Pavlov.]

The ethics of Pavlov’s research

As noted in Box 6.1, Pavlov used aseptic techniques (at the time not yet used routinely), which ensured that his experimental animals survived the surgery. Despite a great many dogs being ‘sacrificed’ (i.e. dying in his pursuit of knowledge), he was a foremost advocate of their humane use in scientific research. He set up a memorial to his dogs, describing the dog as ‘man’s helper and friend from prehistoric times’. Any suffering should be kept to an absolute minimum (Harré, 2006).


Conditioned reflexes

Having determined that ‘psychic secretions’ are the product of experience (see Box 6.1), Pavlov faced a dilemma: they seemed to fall squarely within the domain of Psychology, whose widely used introspection-based methods he believed inappropriate for a discipline that claimed to be a science – let alone a natural science. Pavlov thought of himself as a rigorous, totally scientific physiologist, and he feared being linked with the soft-minded Psychologists (Fancher and Rutherford, 2012). Pavlov finally resolved his dilemma after re-reading Sechenov’s (1965/1863) Reflexes of the Brain, which tried to account for all behaviour – including such ‘higher cognitive functions’ as thinking, willing, and judging – in terms of an expanded concept of the reflex. Pavlov now decided that his dogs’ psychic secretions could be redefined in purely physiological terms relating to the reflex.

Pause for thought …
1 How could you characterize Sechenov’s account of higher cognitive functions in terms of brain-related reflexes?
2 In principle, do you agree or disagree – with him and Pavlov – that all behaviour can be explained this way, giving your reasons? (See Chapter 1.)

Some reflexes are natural, biologically determined, reflex responses to particular environmental ‘events’ (or stimuli), such as a drop of acid on the tongue triggering salivation. But if the drop of acid is presented enough times with another neutral stimulus (such as a ringing bell or ticking metronome) which doesn’t naturally trigger salivation, the bell or metronome on its own will trigger salivation. This can be restated like this: the bell or metronome will come to trigger salivation on condition that it is presented simultaneously with the acid. In other words:

• The acid is an unconditional (or, more commonly, unconditioned) stimulus (UCS).
• Salivation triggered by the acid is an unconditional/unconditioned response (UCR).
• A bell or metronome that, on its own, triggers salivation after being paired with the acid is a conditional/conditioned stimulus (CS).
• Salivation triggered by a bell or metronome alone is a conditional/conditioned response (CR).

Classical conditioning

What we have just described is the basic process of classical (Pavlovian or respondent) conditioning (Pavlov, 1927). ‘Respondent’ denotes the automatic nature of the response (conditioned or unconditioned): it’s triggered by the stimulus and is, as we’ve seen, a reflex (learned or unlearned). This is summarized in Figure 6.2.

The first line in Figure 6.2 summarizes the situation before any learning has taken place. In the middle line, the ‘+’ denotes the pairing of the CS and UCS, and is what takes place during learning. This pairing can be arranged in different ways, as described in Table 6.1. The third line shows that learning has taken place.


Acid on tongue (UCS) → Salivation (UCR)

Bell (CS) + Acid on tongue (UCS) → Salivation (UCR)

Bell (CS) → Salivation (CR)

Figure 6.2 The basic procedure involved in classical conditioning.

• What do you think would happen if the CS were presented after the UCS?

Table 6.1 Four types of classical conditioning and their outcomes

Delayed/forward: the CS is presented before the UCS and remains ‘on’ while the UCS is presented and until the UCR appears. Conditioning is judged to have occurred when the CR appears before the UCS is presented. Optimum learning occurs when there’s a half-second interval between the CS and UCS. This is the ‘basic’ or standard method as shown in Figure 6.2 and as typically used in the laboratory, especially with non-human animals.

Backward: the CS is presented after the UCS. Usually, this produces very little, if any, learning in laboratory animals.

Simultaneous: the CS and UCS are presented together. Conditioning is judged to have occurred when the CS on its own produces the CR.

Trace: the CS is presented and removed before the UCS is presented, leaving only a ‘memory trace’ of the CS to be conditioned. The CR is usually weaker than in delayed or simultaneous conditioning.

Pause for thought …
3 Can you think of any real-life examples of backward and simultaneous conditioning?
4 How do you think Pavlov operationally defined the strength of conditioning?

Not only did Pavlov identify these different kinds of conditioning, he also discovered that there’s much more involved in this type of learning than what we’ve described so far.


Higher-order conditioning

Pavlov demonstrated that a strong CS could be used instead of acid on the tongue (or food) to produce salivation in response to a stimulus never previously paired with the acid/food. For example, he paired a buzzer (previously paired with food) with a black square; after ten pairings, using delayed conditioning, the dog salivated a small but significant amount at the sight of the black square – before the buzzer was sounded. While the buzzer and food pairing is referred to as first-order conditioning, the black square and buzzer is second-order conditioning. The buzzer (originally paired with food) acts as if it were a UCS, so that the black square doesn’t need to be paired with food in order to produce salivation.

Generalization and discrimination

In generalization, the CR transfers spontaneously to stimuli that are similar to, but different from, the original CS. However, as the stimuli become increasingly different from the original CS, the CR gradually weakens and eventually stops altogether; this describes discrimination. Pavlov also trained dogs to discriminate in the original conditioning procedure. For example, if a high-pitched bell was paired with food but a low-pitched bell wasn’t, the dog started salivating in response to the former, but not to the latter (discrimination training). An interesting phenomenon related to discrimination is what Pavlov called experimental neurosis. This is described in Box 6.2.

BOX 6.2 Experimental neurosis
• Pavlov trained dogs to salivate in response to a circle but not to an ellipse.
• He then gradually changed the shape of the ellipse until it became almost circular.
• As this happened, the dogs started behaving in ‘neurotic’ ways – whining, trembling, urinating and defecating, refusing their food, and so on.

Pause for thought …
5 Try accounting for the dogs’ neurotic behaviour in terms of generalization and discrimination.
6 Given Pavlov’s concern for his dogs’ welfare, you might find it odd that he would subject them to such distress. How might he have justified such experiments?

Extinction and spontaneous recovery

Once a dog has been conditioned to salivate to, say, a bell, when the bell is repeatedly presented without food, the CR gradually weakens and eventually stops altogether (extinction). However, if the dog is removed from the experimental situation and returned to it two or so hours later, it will start salivating again in response to the bell, without any food being involved (spontaneous recovery).

Pause for thought …
7 What does spontaneous recovery tell us about what’s happening when extinction takes place?
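The acquisition and extinction curves described above can be illustrated computationally. The following Python sketch is not from the original text: it uses a simple error-correction update on an associative strength V, in the spirit of later formal models of Pavlovian learning (e.g. Rescorla and Wagner’s), and all function names and parameter values are arbitrary choices for illustration only.

```python
# Illustrative sketch (not from the text): associative strength V between a
# CS (bell) and a UCS (acid/food), updated trial by trial.

def run_trials(v, n_trials, ucs_present, alpha=0.3, lam=1.0):
    """Update associative strength v over n_trials.
    ucs_present: True when the CS is paired with the UCS (acquisition),
    False when the CS is presented alone (extinction).
    lam is the maximum strength the UCS will support."""
    history = []
    for _ in range(n_trials):
        target = lam if ucs_present else 0.0   # no UCS -> strength decays
        v += alpha * (target - v)              # error-correction update
        history.append(round(v, 3))
    return v, history

v = 0.0                                          # before learning: CS is neutral
v, acq = run_trials(v, 10, ucs_present=True)     # CS + UCS pairings
v, ext = run_trials(v, 10, ucs_present=False)    # CS alone: extinction

print(acq)   # strength rises across the paired trials
print(ext)   # and falls back towards zero when the UCS is omitted
```

Note that this bare-bones sketch treats extinction as simple unlearning; as the ‘Pause for thought’ question hints, spontaneous recovery suggests that real extinction is something other than the straightforward erasure of the original learning.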

BOX 6.3 Science and society: the ‘Pavlovianization’ of Soviet Psychology
• As we’ve seen above, Pavlov distanced himself from the then current Psychology, insisting that his conditioning research was ‘strictly physiological’ and concerned with ‘higher nervous activity’.
• Indeed, he speculated about what was likely to be going on in the brain (specifically, the cerebrum; see Chapter 5). However, this was a theory based entirely on inference from observed reflex behaviours.
• Despite the physiological nature of his work, in 1950, 14 years after his death, Pavlov was ‘politically canonized’ and placed alongside Marx and Lenin as one of the great classics of Soviet physiology, psychiatry, and Psychology.
• Stalin was behind the order to reconstruct these sciences according to Pavlov’s teachings: the Soviet dictator was determined to ‘Russificate’ Soviet science (i.e. give credit to Russian predecessors in preference to non-Russians).
• While there were very few references to Pavlov in Russian Psychology textbooks published before 1950, there were a great many after 1950.
• Pavlov’s description of language as the ‘second-signal system’ opened the door for Soviet scientific studies of thinking and concept formation (see Chapter 7).
(Based on Woodworth, 1964)

Functionalism and the study of animal behaviour

According to Marx and Hillix (1963), in the broadest sense of the term, Functionalism is concerned with ‘What do men do?’ and ‘Why do they do it?’ (Woodworth, 1948). More specifically, ‘a functionalist is characteristically concerned with the function of the organism’s behaviour and consciousness in its adaptation to its environment’ (Marx and Hillix, 1963, p. 84). In other words, functionalists ask: what are behaviour and consciousness for?

Functionalism was the first recognized school of American Psychology, and this largely reflects the influence of evolutionary theory and a practical (‘pioneering’) spirit. As we noted in Chapter 1, one of the two great pioneers of ‘modern’ (Experimental) Psychology, William James, is usually described as a functionalist. With regard to functionalism specifically, James is what Marx and Hillix (1963) call a pioneer, as distinct from founders and developers. The key figures within each category are listed in Table 6.2.


Table 6.2 Leading figures in American Functionalism

Pioneers: George T. Ladd (1842–1921), Edward W. Scripture (1864–1945), James McKeen Cattell (1860–1944), G. Stanley Hall (1844–1924), James Mark Baldwin (1861–1934), William James (1842–1910), Edward L. Thorndike (1874–1949)

Founders: John Dewey (1859–1952), James R. Angell (1869–1949)

Developers: Robert S. Woodworth (1869–1962), Henry Carr (1873–1954)

Source: Marx and Hillix (1963)

Marx and Hillix (1963) identify three major influences on the pioneers, all British in origin, namely:

1 Francis Galton’s (1822–1911) study of individual differences, mental (psychometric) tests, and statistics (these are discussed in Chapter 11);
2 Charles Darwin’s (1809–1882) Evolutionary Theory (see Chapter 8); and
3 the studies of animal behaviour by George John Romanes (1848–1894) and C. Lloyd Morgan (1852–1936).

William James’ impact on Psychology is discussed in Chapter 1 and at various other points throughout the book, such is his lasting influence. The focus in the rest of this section will be on one of the pioneers, Thorndike, who is also regarded as a pioneering associationist; arguably, this makes his impact on Behaviourism, specifically on Skinner’s work on operant conditioning, on a par with Pavlov’s.

BOX 6.4 KEY THINKER: Edwin L. Thorndike
• Thorndike studied under William James at Harvard University and under Cattell at Columbia.
• At Harvard, he began his investigation of animal learning, where he trained chicks to run through improvised mazes (formed by placing books on end).
• He continued this sort of research at Columbia, where he now worked with cats and dogs in his famous puzzle box. His doctoral dissertation was titled Animal Intelligence: An Experimental Study of the Associative Processes in Animals. This was first published in a Psychological Review Monograph Supplement (1898), then, with additional material on associative learning in chicks, fish, and monkeys, in a 1911 book. At the end of his career, he shifted his attention to the study of problems of human learning and education.
• He described himself as a connectionist. While a Functionalist in his emphasis on the utilitarian aspects of Psychology (i.e. how it could maximize the benefits to people and reduce their suffering), for Thorndike, Psychology was primarily the study of stimulus–response connections (or bonds).
• His understanding of ‘stimulus’ and ‘response’ was far broader than how the terms are commonly understood – and certainly far broader than the discrete ‘events’ studied by Pavlov and on which Watson’s Behaviourism was based (see text below).
• At the same time, Thorndike advocated the quantification of objective data as the basic principle of scientific research.
• He’s best known for his law of effect (1898), based on his early research with animals in puzzle boxes. He was impressed by their gradual learning of the correct response (e.g. with cats, operating the latch which would automatically release the flap so they could escape) and gradual elimination of incorrect ones. Accidental (i.e. chance/random) success played a large part in this process, which has come to be called trial-and-error learning. What was being learned was a connection between the stimulus (the manipulative components of the box) and the response (the behaviour that resulted in escape).
• Further, the S–R connection is ‘stamped in’ when pleasure results (e.g. a piece of fish waiting for the cat outside the box) and ‘stamped out’ when it doesn’t. This is the law of effect and represents a crucial way of distinguishing between classical and operant conditioning, which Skinner was to do 40 years later.

Behaviourism: Watson’s new brand of psychology

Conditioned emotional reactions

Watson was the first Psychologist to apply Pavlovian/classical conditioning to human behaviour, both as an explanatory device and in an experimental setting. The latter involved an 11-month-old baby, Albert B. (better known as ‘Little Albert’), destined to become one of the most famous children in the entire psychological literature (along with Freud’s case study of ‘Little Hans’; see Chapter 9). The study itself was to become part of social science folklore and clinched Watson’s fame as the father of Behaviourism (Simpson, 2000). This is described in Box 6.5.

BOX 6.5 Conditioned emotional reactions (Watson and Rayner, 1920)
• The aim of the study was to provide an empirical demonstration of the claim that various kinds of emotional response can be conditioned, in this particular case, fear.
• Albert’s mother was a wet-nurse in the Harriet Lane Home for Invalid Children, where Watson and Rayner happened to be working.
• Albert was described as ‘healthy from birth’ and ‘on the whole, stolid and unemotional’. When he was about nine months old, Watson and Rayner tested his reactions to various stimuli – a white rat, a rabbit, a dog, a monkey, masks with and without hair, cotton wool, burning newspapers, and a hammer striking a four-foot steel bar just behind his head. Only the last of these frightened him, and so was designated the UCS (and fear the UCR). The other stimuli were neutral with regard to fear.
• The experiment began when Albert was just over 11 months old. The rat and UCS were paired: as Albert reached out to stroke the rat, Watson crept up behind him and brought the hammer crashing down on the steel bar.
• This occurred seven times in total over the next seven weeks. By this time, the rat (the CS) produced a fear response (CR) without the need for Watson’s ‘intervention’. Watson and Rayner had succeeded in deliberately producing a rat phobia in a baby.

Pause for thought …
Watson (1931) believed that the child’s UCRs (fear, rage, and love) to simple stimuli are merely the starting points in building up those ‘complicated habit patterns’ that we later call our emotions. For example, jealousy isn’t innate or inevitable, but rather is a rage response to a (conditioned) love stimulus (e.g. stiffening the whole body, reddening of the face, exaggerated breathing, verbal recrimination, and possibly shouting).
8 To what extent would you agree with Watson’s analysis of emotion?
Watson also proposed that as children grow up, their behaviour becomes increasingly complex, but is basically the same kind of behaviour as it was earlier on (i.e. a series of conditioned emotional responses (CERs) that become added and recombined). The basic learning process by which this increasing complexity occurs (i.e. classical conditioning) is involved at all ages.
9 How could you characterize this view of developmental change, and how might it be contrasted with theories such as those of Freud (see Chapter 9) and Piaget (see Gross, 2015)?

‘Little Albert’ as ‘classic research’: do we really know what happened?

The ‘Little Albert’ study is often cited as an example of ‘classic research’, being reported in every Psychology textbook across the decades. But, ironically (according to Cornwell and Hobbs, 1976), this ‘classic’ reputation has resulted in the details of the experiment being obscured, making Psychologists less (rather than more) cautious, producing a false impression of familiarity with the details and ‘painting out the warts’. In ‘The strange saga of Little Albert’, Cornwell and Hobbs argue that the experiment is a classic example of how a piece of research can become misreported/misrepresented until it assumes ‘mythical proportions’.

In 1917, Watson was awarded a grant to conduct research into the development of reflexes and instincts in infants. He began his study of Albert in 1919, aimed at demonstrating how CERs come about – and how they can be removed. The results were published in 1920.


Watson and Rayner stressed the limited nature of their evidence. They may have intended to study other children, but were unable to continue their research at Johns Hopkins. In the same year as publication of the study, in a sensationally publicized case, Watson divorced his wife and immediately married Rosalie Rayner. He was forced to resign his academic post.

In 1921, Watson and Rosalie Rayner Watson published a second account of Little Albert (in Scientific Monthly). In it, they stated that Albert did show fear in response to a ‘loss of support’ (being held and then let go) – the opposite of what they’d claimed in the 1920 article. Also, Watson subsequently referred to this 1921 paper as the original (although it’s the earlier one that’s usually cited by others). A third account was given in some lectures by Watson that were eventually incorporated into his Behaviourism (1924). It’s also recounted in other books and articles; each account is referred to at least once as ‘the original’.

In a survey of 76 ‘general psychology’ books at Glasgow University, Cornwell and Hobbs found at least one distortion in 60 per cent of the 30 different reports of the experiment. They ask if all the errors are just the result of carelessness – or were other (unconscious) motives at play? Cornwell and Hobbs observe that several accounts seemed to paint the study in a more favourable light, both methodologically and ethically. For example:

• The implication is made that Albert was one of a number of infants studied by Watson.
• On the assumption that a child will instinctively show a fear of rats, the rat is often reported as a ‘rabbit’, making Albert’s initial lack of fear seem more plausible and his conditioned fear response more striking.
• Many accounts claim that Albert’s conditioned fear was removed before he left the hospital. Indeed, Watson and Rayner knew a month in advance that he would be leaving. Eysenck (1976), for example, gives details of (fictitious) extinction involving pieces of chocolate!

Pause for thought …
This last ‘myth’ raises two major questions:
10 Did Watson and Rayner really intend to remove the CER as they say they did? Even if they did, does this ‘let them off the hook’ ethically?
• How might Watson and Rayner have done it?

Little Peter and the beginnings of behaviour therapy

Watson and Rayner state that they would have attempted several different methods in an attempt to remove Albert’s conditioned fear response. One, in which the person is constantly confronted with the feared stimulus without any means of escape, sounds like what’s now called flooding, a form of forced reality testing. Another seems to be describing systematic desensitization (SD): the fear is gradually extinguished by exposing the person to increasingly frightening forms of the feared stimulus in combination with a pleasurable stimulus/activity – or, more commonly today, a state of deep muscle relaxation. Finally, modelling involves the person observing another person (the model) interacting with the feared stimulus without fear.

All these forms of behaviour therapy have been named and developed since the Little Albert study (starting in the 1950s) (see Gross, 2015). However, just four years after ‘Little Albert’, ‘Little Peter’ entered the psychological literature by becoming the first person to be treated using what we now call SD.

BOX 6.6 The elimination of children’s fears (Jones, 1924)
• Watson supervised the treatment of Peter, a two-year-old living in a charitable institution; he had an extreme fear of rabbits, rats, fur coats, feathers, cotton wool, and so on. He showed no fear of wooden blocks and similar toys.
• The treatment was conducted by Mary Cover Jones, who describes the case of Peter as a sequel to that of Little Albert (‘Albert grown a bit older’).
• Jones used the method of direct unconditioning to remove Peter’s naturally occurring phobias.
• A rabbit was put into a wire cage in front of Peter while he ate his lunch in his high chair. At first, the caged rabbit placed anywhere in the room was sufficient to induce a fear response; the cage was gradually moved closer and Peter tolerated this (steps 1–4). By step 6, Peter could tolerate the rabbit being out of its cage, and by step 13 he could hold it on his lap. He then stayed alone in the room with the rabbit (step 14), allowed it in the play pen with him (step 15), fondled it affectionately (step 16), and, finally, let the rabbit nibble his fingers (step 17).

Eysenck and Rachman (1965), Wolpe and Rachman (1960), and other Behaviourist Psychologists and therapists claim that the case of Little Albert exemplifies how all phobias (i.e. abnormal fears) are acquired, that is, through classical conditioning. For example, Wolpe and Rachman argue that any neutral stimulus, simple or complex, that happens to make an impact on a person at more-or-less the same time that a fear response is evoked (by some other stimulus) can acquire the ability to evoke that same response (i.e. it becomes a CS); also, this fear response (CR) can be generalized to stimuli that resemble the CS.

While there is evidence showing that some phobias are CERs, the claim that all phobias are acquired this way – and only in this way – is extreme and difficult to defend. For example, some phobias are easier to induce experimentally (in people who don’t already have them) than others. Also, it’s widely agreed that certain ‘naturally occurring’ phobias (not deliberately induced) are more common than others: rats, jellyfish, cockroaches, spiders, and slugs are consistently rated as frightening, while rabbits, ladybirds, cats, and lambs are consistently rated as non-frightening (Bennett-Levy and Marteau, 1984).

These and similar findings are consistent with Seligman’s (1970) concept of biological preparedness: different species are biologically equipped to acquire certain CRs more easily if they have higher survival value. This implies that there’s more to phobias than just classical conditioning. Indeed, Rachman (1977) maintains that direct conditioning of any kind accounts for relatively few phobias; rather, many phobias are acquired on the basis of information transmitted through observation and instruction (such as in modelling; see above). Interestingly, Jones (1924) thought that Peter’s phobias were probably not directly conditioned fears; she wondered where his fear of white rats, for example, might have originated.
According to Harris (1997), Albert’s fear was difficult to induce and transitory, and Watson and Rayner recognized that the whole experiment was a failure! Yet every textbook account tells us how easily they created a rat phobia, which then generalized into a lifelong fear of rabbits and other white furry things. But why should textbook authors want to present the study in a more favourable light than it appears to deserve? Harris claims that the study served as a ‘celebratory’ origin myth for Behaviourists, who wanted their speciality to have a long and convincing past. Different accounts all suggested that Watson had tapped into the power of Behaviourism as a theory and technology: the case of Little Albert helps to legitimize the whole Behaviourist enterprise.

While most textbook authors have no special ‘allegiance’ to Watson, the Little Albert experiment can be seen as a case study in the creation of myths, even within a so-called (or self-proclaimed) scientific Psychology. It also demonstrates the importance of reading original material (i.e. primary sources) whenever possible!

Watson and the nature–nurture debate

Given Watson’s views regarding fear, rage, and love as the only unconditioned (i.e. unlearned/innate) responses involved in human emotion, and his emphasis on classical conditioning in general, it’s perhaps not too surprising that he adopted a radical environmentalist position in relation to behaviour as a whole. According to radical environmentalism, environmental factors are overwhelmingly more important than innate factors in determining behaviour. He denied the existence of ‘capacity, talent, temperament, mental constitution and characteristics’, and, perhaps most famously, he claimed that the systematic application of conditioning principles could give caretakers almost total control over their children’s development:

    Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, merchant-chief and yes, even beggarman and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. (Watson, 1931, p. 104)

Watson’s Behaviourist Manifesto and modernism

According to Lattal and Rutherford (2013), Watson’s manifesto was delivered in the era that has subsequently been characterized as modernist. Indeed, Watson’s brand of Behaviourism has itself been regarded as a form of psychological modernism (e.g. Buckley, 1989). While Modernity refers to the period in the history of Western culture dating from around 1770 (often called the post-Enlightenment), Modernism refers to the zeitgeist (‘spirit of the age’) of modernity, a commentary on it (Stainton Rogers et al., 1995), or a cultural expression of it (in art, architecture, literature, music, and so on) (Kvale, 1992).

Like all modernist movements, Watson’s Behaviourism represented a break from tradition. He forcefully distinguished Behaviourism from the older structuralist and functionalist schools, effectively announcing the arrival of what he considered to be a truly modern Psychology (Lattal and Rutherford, 2013):

    The Modernist Era has been replaced by other historical movements in the hundred years since the manifesto’s publication, and the social, political, and intellectual contexts in which Watson’s ideas arose and took hold have changed. Through all of these changes, however, Watson’s 1913 assessment of what psychology was and what it could be remains a touchstone in the history of psychology … a history of psychology textbook would be incomplete without a review of Watson’s manifesto. (Lattal and Rutherford, 2013, p. 2)


People as environmentally controlled organisms

Beyond making Psychology relevant to solving everyday problems, Watson also had a utopian vision (Morawski, 1982): Behaviourism could actually make the world a better place. This isn’t as well known as Skinner’s vision (as depicted in his 1948 utopian novel, Walden Two; see below).

Pause for thought …

11 Are there any utopians among recent past or present Psychologists on the level of Watson or Skinner?

Basic principles of operant conditioning

Skinner (1938) made a fundamental distinction between:

1 respondents (or respondent behaviour), which are triggered automatically or elicited by particular environmental stimuli; and
2 operants (or operant behaviour), which are essentially voluntary or emitted by the organism.

These are related to classical Pavlovian (or respondent) conditioning and instrumental/Skinnerian (or operant) conditioning, respectively. In making these distinctions, Skinner wasn't rejecting Pavlov's and Watson's ideas and research achievements. Rather, he was highlighting his belief that most human and non-human animal behaviour is emitted rather than elicited. He was interested in how animals operate on their environment, and how this operant behaviour is instrumental in bringing about certain consequences; these consequences, in turn, determine the probability of that behaviour being repeated. Compared with Pavlov's or Watson's, Skinner's learner is much more active.

Just as Watson's ideas were based on the earlier work of Pavlov, so Skinner's operant conditioning grew out of the earlier work of another American, Thorndike (see Box 6.4). Skinner devised a form of puzzle box (what he described as an 'automated operant chamber', but commonly referred to as a 'Skinner box'), designed for a rat or pigeon to do things in (press a lever or peck at an illuminated key), rather than to escape from. The experimenter decides exactly what the relationship shall be between pressing the lever/pecking the key and the delivery of a food pellet, providing total control of the animal's environment – but it's the animal that has to do the work.

Skinner's behaviour analysis

In Thorndike's law of effect, 'stamping in' refers to the effect that a piece of fish has on the cat's successful escape from the puzzle box. But for Skinner, this term was too mentalistic; as for Watson, the mind was to have no place in a scientific explanation of behaviour (a feature of his Radical Behaviourism; see below). Instead, he used the term 'strengthen', which he deemed more objective and descriptive. Regardless of the term, the idea is that certain consequences of operant behaviour make that behaviour more likely to occur again. Similarly, other, aversive (literally, 'painful') consequences (such as electric shock) 'stamp out' or 'weaken' the behaviour they follow. In Skinner's terminology, these consequences act as positive reinforcers or punishers, respectively. Negative reinforcers also strengthen the behaviour they follow, but work in a different way: when behaviour results in the removal of, or escape from, some aversive state of affairs, that behaviour is being negatively reinforced. According to Skinner's version of the law of effect, behaviour is shaped and maintained by its consequences. Behaviour analysis can be summarized as the 'ABC of operant conditioning' (Blackman, 1980), as outlined in Box 6.7.

BOX 6.7 The ABC of operant conditioning

The analysis of behaviour requires an accurate but neutral representation of the relationships (or contingencies) between the following:

- Antecedents: the stimulus conditions (such as the lever, the click of the food dispenser, a light that may go on when the lever is pressed).
- Behaviours: operants (such as lever pressing or key pecking).
- Consequences: what happens as a result of the operant behaviour (positive reinforcement, negative reinforcement, or punishment).

(Based on Blackman, 1980)
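The three-term relationship in Box 6.7 can be sketched as a simple lookup from antecedent–behaviour pairs to consequences. This is a toy illustration only; the stimulus, behaviour, and consequence labels are invented for the example, not drawn from Blackman's account:

```python
# A minimal three-term (A-B-C) contingency table for a Skinner box.
# All labels here are illustrative assumptions.
CONTINGENCIES = {
    ("light_on", "lever_press"): "positive_reinforcement",  # food pellet delivered
    ("light_off", "lever_press"): "no_consequence",         # pressing 'off schedule'
    ("light_on", "grooming"): "no_consequence",
}

def consequence(antecedent, behaviour):
    """Return the consequence the environment arranges for this
    antecedent-behaviour pair (default: nothing happens)."""
    return CONTINGENCIES.get((antecedent, behaviour), "no_consequence")

print(consequence("light_on", "lever_press"))   # positive_reinforcement
print(consequence("light_off", "lever_press"))  # no_consequence
```

The point of the representation is that the consequence depends jointly on the antecedent and the behaviour, which is exactly what makes the analysis 'neutral': nothing inside the organism appears in the table.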

Another important distinction is that between primary and secondary reinforcers:

- Primary reinforcers (such as food, water, and sex) are natural reinforcers (reinforcing in themselves).
- Secondary (or conditioned) reinforcers acquire their reinforcing properties through association with primary reinforcers, i.e. we have to learn to find them reinforcing (through classical conditioning); examples for humans include tokens and money.

In a Skinner box, if a click accompanies presentation of each pellet of food, the rat will eventually come to find the click reinforcing on its own; the click can then be used as a reinforcer for getting the rat to learn a new response. The use of clickers in dog training is an applied example. Secondary reinforcers are important because they 'bridge the gap' between the operant response and the primary reinforcer (which it may not be possible to present immediately).

Contingencies and schedules of reinforcement

In a Skinner box using rats, each bar-press causes a pen mechanism touching a continuously moving roll of paper to rise by a small fixed amount; this provides a permanent cumulative record of all the rat's bar-presses. When a hungry rat is first placed in the box, bar-presses are, typically, infrequent and accidental; this reflects the fact that the animal is just exploring its new environment. But once the first few accidental presses are reinforced with food, the rate increases dramatically and remains high and steady, provided the rat remains hungry. Having established this as a reliable and predictable pattern, Skinner varied the contingencies of reinforcement, that is, the specific arrangement by which lever pressing was reinforced by food. A major way of doing this was to determine schedules of reinforcement, defined in terms of either (1) the time interval that has elapsed since the last reinforcement (which can be fixed or variable) or (2) the ratio of lever presses to reinforcement (again, either fixed or variable).


Each schedule is associated with a characteristic pattern of responding. Rats and pigeons (and probably most mammals and birds) typically 'work harder' (press the lever/peck the disc at a faster rate) for scant reward: when reinforcements are relatively infrequent and irregular or unpredictable, they go on pressing/pecking long after the reinforcement has stopped being presented. Each schedule, therefore, can be analysed in terms of (1) pattern/rate of responding and (2) resistance to extinction. The five major schedules identified by Ferster and Skinner (1957), together with the characteristic pattern/rate of responding and resistance to extinction, are described in Table 6.3.

Table 6.3 Major reinforcement schedules and associated (1) pattern/rate of responding and (2) resistance to extinction

Continuous reinforcement: every single response is reinforced. Response rate is low but steady. Resistance to extinction is very low: the quickest way to bring about extinction.

Fixed interval (FI): e.g. reinforcement is given every 30 seconds (FI 30), provided the response has occurred at least once during that interval. There's a pause following each reinforcement, then response rate speeds up as the next reinforcement becomes available. Overall response rate is fairly low. Resistance to extinction is fairly low – it occurs quite quickly.

Variable interval (VI): e.g. reinforcement is given on average every 30 seconds (VI 30), but the interval varies, unpredictably, from trial to trial. Response rate is very stable over long periods. There's still some tendency to increase response rate as time elapses since the previous reinforcement. Resistance to extinction is very high; it occurs very slowly and gradually.

Fixed ratio (FR): reinforcement is given for a fixed number of responses – e.g. every ten responses (FR 10) – however long this may take. There's a pronounced pause following each reinforcement, then a very high response rate leading to the next reinforcement. Resistance to extinction: as for FI.

Variable ratio (VR): reinforcement is given on average every ten responses (VR 10), but the number varies, unpredictably, from trial to trial. Response rate is very high – and very steady. Resistance to extinction is very high – the most resistant of all the schedules.
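The difference between fixed- and variable-ratio schedules in Table 6.3 can be sketched as a short simulation. This is a toy illustration, not a model of Skinner's apparatus, and approximating VR 10 by reinforcing each response with probability 1/10 is an assumption of the sketch:

```python
import random

def run_schedule(n_responses, reinforced):
    """Count the reinforcers earned over a run of responses;
    reinforced(i) decides whether response number i is reinforced."""
    return sum(1 for i in range(1, n_responses + 1) if reinforced(i))

# Fixed ratio 10 (FR 10): exactly every tenth response is reinforced.
fr10 = lambda i: i % 10 == 0

# Variable ratio 10 (VR 10): each response is reinforced with probability
# 1/10, so reinforcement arrives on average every ten responses, but
# unpredictably -- the unpredictability that makes VR responding so
# resistant to extinction.
random.seed(1)
vr10 = lambda i: random.random() < 0.1

print(run_schedule(1000, fr10))  # exactly 100 reinforcers
print(run_schedule(1000, vr10))  # about 100, varying from run to run
```

Interval schedules (FI and VI) would additionally need a clock, since they reinforce the first response made after a fixed or variable time has elapsed rather than counting responses.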

Pause for thought …

12 Try to think of at least one example from everyday life of each of the reinforcement schedules described in Table 6.3.

Skinner's Radical Behaviourism

Skinner maintained that cognitions are covert behaviours ('within the skin') that should be studied by Psychologists along with overt behaviours (those capable of being observed by two or more people). He was not 'against cognitions', but argued that so-called mental activities are 'metaphors or explanatory fictions'; behaviour attributed to them can be more effectively explained in other ways.


For Skinner, these more effective explanations of behaviour come in the form of the principles of reinforcement. What’s ‘radical’ about Radical Behaviourism is the claim that thoughts, feelings, sensations, and other private events cannot be used to explain behaviour but are to be explained in an analysis of behaviour (behaviour analysis). Since private events cannot be manipulated, they cannot serve as independent variables – but they can serve as dependent variables. Some recent studies seem to support Skinner’s claim that our common sense belief regarding free will is an illusion (see Chapter 4).

Skinner and free will

Radical Behaviourism probably represents the most outspoken expression among Psychologists of the view that people are not free; the most explicit and accessible account of this view is Skinner's Beyond Freedom and Dignity (1971). In it, he argues that behavioural freedom is an illusion. Radical Behaviourists argue that their view of behaviour is the most scientific, because it provides an account in terms of material causes; these can all be objectively defined and measured. Free will is one of those 'explanatory fictions' that are effects – not, as commonly understood, causes. In terms of the distinction between soft and hard determinism (see Chapter 1), Skinner was undoubtedly a very hard determinist!

Skinner claims that the illusion (or myth) of free will survives because the causes of human behaviour are often hidden from us in the environment. When what we do is dictated by force or punishment – or by their threat (i.e. negative reinforcement) – it's obvious to everyone that we're not acting freely (as in crimes punishable by imprisonment); in these cases, we know what the environmental causes of our behaviour are. Similarly, it's often obvious what positive reinforcements ('incentives' or 'carrots') are shaping or maintaining our behaviour. However, most of the time we're unaware of the environmental causes of our behaviour, so it looks (and feels) as if we're behaving freely.

Often, when we believe we're acting freely, all this means is that we're free of punishments or negative reinforcements; on these occasions, our behaviour is still determined by the pursuit of things that have been positively reinforced (or reinforcing) in the past. Doing what we 'want' (i.e. behaving 'freely') is simply doing what we've previously been positively reinforced for doing. When we perceive others as behaving freely, we're simply ignorant of their reinforcement histories.
Strictly, Skinner argues that, rather than our behaviour being determined by positive reinforcements and the threat of punishment, it is merely shaped and modified by them; this is more consistent with his emphasis on operant behaviour (which, remember, is emitted by the active organism, rather than elicited in a passive organism by environmental stimuli). Indeed, Skinner (e.g. 1974) states that intention and purpose are what operant behaviour is all about, being found in the contingencies of reinforcement (the present circumstances and past consequences) – not inside the person. Operant behaviour is also purposive in the sense that its function is to change the environment and produce particular consequences. However, according to O’Donohue and Ferguson (2001), purposive behaviour doesn’t imply that the individual has free will, or that behaviour isn’t determined, because all behaviour is determined. (This argument, in turn, rests on the assumption that ‘free’ and ‘determined’ are opposites. But the real opposite of determined is ‘random’; see Gross, 2014.)

Morality and 'autonomous man'

In Beyond Freedom and Dignity (1971), Skinner claims that what we call good or bad behaviour more-or-less equates to how others reinforce it: 'good' is what benefits others (what's positively reinforced) and 'bad' is what harms others (what's punished). This removes morality from human behaviour, either within the individual or within society. If we could arrange reinforcement appropriately, so that there was only mutually beneficial behaviour, we'd have created utopia. But in Skinner's utopia (as described in Walden Two), how can the planners plan? You must be free in the first place in order to be able to plan. For Skinner, 'oughts' are not 'moral imperatives': rather than reflecting moral rules and guidelines, they offer practical rules and guidelines (Morea, 1990).

Rather than seeing it as the utopia Skinner intended, Carl Rogers (see Chapter 12) likened Walden Two to Orwell's Nineteen Eighty-Four, a nightmarish dystopia that warns against a punitive society where people are treated as automata by those in power (O'Donohue and Ferguson, 2001). Many critics saw Skinner as a totalitarian, fascist, evil scientist, with his denial of free will ('autonomous man') at the heart of the condemnation. Skinner believed that only a technology of behaviour could rescue mankind: since social ills are caused by behaviour, it follows that the remedy is to change the variables that control behaviour. While his critics claimed that any attempt to control behaviour is an infringement of personal liberty, for Skinner, freedom versus control (or behavioural engineering) is a false dichotomy: all behaviour is controlled all of the time.

Pause for thought …

13 What's meant by a 'false dichotomy'?
14 To what extent do you agree with Skinner's claim that behaviour is controlled all of the time?

According to Leslie (2002), from the perspective of behaviour analysis, the theories of Cognitive Psychology (see Chapter 7) are doomed to fail, being based on a mistaken assumption about the necessary features of psychological explanations:

The mistake is to assume that behaviour (what someone does) is necessarily caused by cognition (what the person thinks). Behaviour analysis instead states that both overt (visible) behaviour and the other apparently 'private' aspects of human psychology arise from interaction with the environment.
(p. 8)

While Methodological Behaviourism proposes to ignore such inner states (they are inaccessible), Radical Behaviourism ignores them only as variables used to explain behaviour (they are irrelevant); they can be translated into the language of reinforcement theory (Garrett, 1996). According to Nye (2000), Skinner's ideas are also radical because he applied the same type of analysis to both covert and overt behaviour. According to Skinner (1974):

Behaviourism is not the science of human behaviour; it is the philosophy of that science. Some of the questions it asks are these: Is such a science really possible? Can it account for every aspect of human behaviour? What methods can it use? Are its laws as valid as those of physics and biology? Will it lead to a technology, and if so, what role will it play in human affairs?
(p. 3)


So, Radical Behaviourism is not a scientific law or set of empirical findings. Rather, it is meta-scientific – it attempts to define what a science of behaviour should look like. According to O'Donohue and Ferguson (2001), Radical Behaviourism is a philosophy of science or, more precisely, a philosophy of Psychology. At the end of his life, Skinner (1990) described cognitive science (see below) as the creationism of contemporary Psychology. As Lattal and Rutherford (2013) observe, historians have amply demonstrated that Skinner's Radical Behaviourism has been both friend and foe of many of the entrenched values of twentieth-century American life. Skinner's behaviour analysis continues to dominate mainstream Behaviourism in both theory and practice and is a far cry from Watson's original version.

Another feature of Skinner's Radical Behaviourism is his 'empty organism' view of the learner (human or non-human): there's nothing 'going on' inside the individual person or animal – either cognitive or physiological – that makes any difference to its emitted behaviour, either before or after learning. For Skinner, only an empty organism view was compatible with a 'science of behaviour'. But according to Harré (2006):

It soon became evident that the constraints which his positivism forced on him as to what a science should comprehend quickly became metaphysical dogmas. The expulsion of thinking and physiology from his positivist conception of the project of psychology meant that at best he would come up with a bit of natural history and at worst a pseudo-science.
(p. 20; emphasis in original)

Methodological issues: the Skinner box revisited

Referring to the Skinner box, Harré also points out that, as a very rare phenomenon in Psychology:

Skinner's thought was driven by the apparatus he invented…. With the help of this apparatus, Skinner made the discovery that in one way or another dominated the rest of his life.
(p. 20)

This discovery, of course, was operant conditioning. Not only was the apparatus the driving force in his account of operant conditioning and, indeed, the philosophy of science that Radical Behaviourism represents, but it is often cited in the debate regarding 'single-subject experimental designs' and their scientific status.

Pause for thought …

15 Explain the difference between the nomothetic and idiographic perspectives in relation to the study of human beings (see Chapter 1).
16 According to each of these perspectives, how scientifically valid is the study of (1) individual participants/subjects and (2) groups?
17 In what ways might the scientific validity (or otherwise) of studying either individuals or groups be considered a false dichotomy?
18 Is the study of single subjects compatible with an experimental study of human beings (or non-human animals)?


Conclusions: the cognitive nature of conditioning

Tolman's Cognitive Behaviourism

The work of Edward C. Tolman (1886–1959) represents an interesting case study of the possibility that one and the same Psychologist could have been described as a Behaviourist in his lifetime, but a Cognitive Psychologist by current criteria. This is discussed in Box 6.8.

BOX 6.8 Tolman: Behaviourist or Cognitive Psychologist?

- Although working within the Behaviourist tradition during most of the 1920s, 1930s, and 1940s, Tolman can be considered to be an important predecessor of the cognitive revolution. He'd actually spent some time during the 1920s studying with Koffka, one of the founders of Gestalt Psychology, in Germany (see Chapter 1).
- In his book Purposive Behaviour in Animals and Man (1932), Tolman presented evidence which he believed demonstrated conclusively that no adequate account of learning in rats could omit reference to their goals in solving a problem: a rat put in a maze wasn't a mere machine that, having by chance found its way to the goal-box, then mechanically repeated the movements that got it there.
- According to Tolman (1948), rats form a 'cognitive map' of the maze, a symbolic representation of the whole (or most of the) maze; the maze constitutes what Tolman called a sign-gestalt for the rat, which leads to the development of 'means–end readiness', or a plan to navigate the maze in order to repeat the pleasurable experience of obtaining the food reward.
- Cognitive maps represent expectations regarding which part of the maze will be followed by which other part – an understanding of its spatial relationships. This is related to his place (or sign) learning theory.
- Indirect support for the cognitive map explanation comes from a famous experiment which demonstrated latent learning (Tolman and Honzik, 1930). This demonstrated that reinforcement may be important in relation to performance (rats' ability to find their way to the goal-box) but isn't necessary for the learning itself (i.e. knowing where the goal-box is located and how to get there) (see Gross, 2015).

The very cognitive notion of 'expectations' has subsequently been used to explain what is taking place in classical conditioning (the most 'un-cognitive' account of learning!). As Box 6.9 shows, conditioning cannot be reduced to the strengthening of S–R associations by the automatic process called reinforcement. It's more appropriate to think of it as a matter of detecting and learning about relations between events: animals typically discover what signals or causes events that are important to them (such as food, water, danger, or safety). Salivation (as in classical conditioning) and lever pressing (as in operant conditioning) are simply convenient indices (or measures) of what the animal has learned (i.e. environmental relationships) (Mackintosh, 1978).


BOX 6.9 The role of cognition in conditioning

- Pavlov himself described the conditioned stimulus (CS) as a 'signal' for the unconditioned stimulus (UCS), the relationship between the CS and UCS as one of 'stimulus substitution', and the conditioned response (CR) as an 'anticipatory' response (or 'psychic secretions'). These terms suggest that his dogs were expecting the food to follow the bell.
- Consistent with this interpretation, Rescorla (1968) presented two groups of animals with the same number of CS–UCS pairings, but group 2 received additional presentations of the UCS on its own. Group 1 displayed much stronger conditioning than group 2, suggesting that the most important factor (at least in classical conditioning) is how predictably the UCS follows the CS – not how often the CS and UCS are paired.
- According to Bandura (1977), reinforcement's main function is to provide the learner with information regarding the likely consequences of certain behaviour under certain conditions, i.e. it improves our ability to predict future outcomes (pleasant or unpleasant). It also motivates the learner through anticipation of future outcomes.
- This cognitive interpretation of reinforcement forms part of Bandura's social learning theory (SLT). While not denying the role of either classical or operant conditioning, SLT focuses on observational learning (or modelling), in which cognitive factors are crucial. This is reflected in Bandura's renaming of SLT as social cognitive theory (1986, 1989).
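The logic of Rescorla's finding can be expressed as a simple contingency calculation: conditioning tracks the difference between the probability of the UCS given the CS and its probability in the CS's absence. The probabilities below are illustrative assumptions for the sketch, not the values used in the 1968 experiments:

```python
def contingency(p_ucs_given_cs, p_ucs_given_no_cs):
    """Rescorla-style contingency: how much better the CS predicts the
    UCS than the UCS's base rate in the CS's absence."""
    return p_ucs_given_cs - p_ucs_given_no_cs

# Group 1: the UCS only ever follows the CS -> perfect predictiveness.
print(contingency(1.0, 0.0))  # 1.0
# Group 2: the same pairings plus unsignalled UCS presentations, so the
# UCS also arrives without the CS -> weaker contingency, and (as Rescorla
# found) weaker conditioning, despite the identical number of pairings.
print(contingency(1.0, 0.5))  # 0.5
```

On this view, what matters is the predictive relation between events, not the raw count of CS–UCS pairings, which is exactly the point of the Rescorla result summarized in Box 6.9.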

Pause for thought – answers

1 This is an example of atomism, itself a form of reductionism. Here, Sechenov's 'atoms' are brain-related reflexes and the implication is that higher cognitive functions can be explained away by identifying these reflexes (i.e. psychological explanations will no longer be needed because the physiological explanation will supersede them). Again, the force of a reductionist argument is that, in this case, the physiological components are more basic or fundamental than the psychological ones; in turn, this implies that the former have greater validity. (According to arch-reductionists, the sub-atomic particles identified by physics will ultimately replace all other levels of explanation.)

2 Clearly, most behaviour doesn't comprise naturally occurring responses to specific environmental triggers (i.e. respondent behaviour). This is why Skinner distinguished between respondent (involuntary) and operant (voluntary) behaviour. Dividing psychological life into stimulus–response (S–R), or independent/dependent variables (as in mainstream Experimental Psychology), neglects subjectivity, agency (i.e. free will), and meaningful reflection and action in concrete contexts (Holzkamp, 1992; Tolman and Maiers, 1991).


Behaviour analysts (i.e. those who study operant rather than classical conditioning) acknowledge the limitations of their approach. For example, Leslie (2002) admits that operant conditioning cannot provide a complete account of psychology from a behavioural perspective – even in principle. Similarly, O'Donohue and Ferguson (2001) admit that the science of behaviour cannot account for creativity (as in music, literature, and science).

3 - Backward: in the coconut commercial, the idyllic, tropical scene is set, and then the coconut is introduced.
  - Simultaneous: the sound of the dentist's drill occurs at the same time as the drill touches your tooth.

4 The number of drops of saliva produced.

5 It was as if the dogs didn't know how to respond: (1) was the stimulus a circle – in which case, through generalization, they 'should' salivate; or (2) was it an ellipse – in which case, through discrimination, they 'shouldn't'?

6 Pavlov likened the dogs' neurotic behaviour to stress-induced breakdowns observed in human beings. Understanding how these breakdowns occur is the first step to developing methods to try to treat them. (Through Watson's adoption of classical conditioning, behaviour therapies were later designed to help people with phobias and other mental disorders; see Chapter 13.)

7 Instead of the CR being 'erased', it's merely suppressed.

8 By taking overt behaviour as the only way of identifying – or describing – an emotion, Watson is omitting what many would say are the true defining features, namely, the subjective experience of fear, anger, etc. It's also true (as Schachter's (1964) cognitive labelling theory claims) that the context in which our subjective experiences take place influences how we interpret that experience (as anger, fear, etc.). So, clearly, Watson's theory is a grossly oversimplified account of emotion. It's also reductionist: complex human emotions are broken down into simple, component CERs, which are added together to produce the emotion.
But are there really enough discrete/distinct CERs to account for the range of human emotions? Also, Watson takes no account of cultural and historical differences in the experience and expression of emotion.

9 Watson was proposing a quantitative view of development: as the child grows up, its behaviour becomes increasingly complex, but it's basically the same kind of behaviour it was earlier (i.e. a series of CERs that are added and recombined). The same basic principles are involved at all ages (namely, those of classical conditioning). By contrast, Freud's and Piaget's developmental theories see development as passing through a series of qualitatively distinct stages, with different kinds of behaviour, cognition, and feelings involved at each stage.


10 At the beginning of their 1920 article, Watson and Rayner state that 'a certain responsibility attaches to such a procedure' (p. 1) (i.e. experimentally inducing a CER) and that they were very hesitant about doing so. They eventually decided to go ahead, since 'responsibilities would arise anyway as soon as the child left the sheltered environment of the nursery for the rough and tumble of the home' (p. 1). By 'responsibility' they seem to mean 'risk', thus giving a false impression of having agonized over the ethical issues involved. But what sort of justification is it to maintain that Albert would have acquired the CERs anyway? How likely was he to have encountered a rat, and was he likely at all to have encountered any of the other stimuli while his ears were being assaulted by the sound of a full-grown man striking a hammer on a large steel bar right behind his head? (Gross, 2008). The fact that Little Albert was 'stolid and unemotional', and the belief that the experiment could do him no harm, cannot be used in their defence either: some stimulus was needed which would frighten him, otherwise the experiment couldn't have taken place. This means that Watson and Rayner knowingly decided to cause distress to an 11-month-old child. The ethics of their techniques seemed to draw little open criticism at the time – either from the university administration or from other Psychologists. According to Hulse (in Simpson, 2000): 'Times were just different, people were trusted to behave themselves…. It's only been in the last 20 to 30 years that issues of ethics in science have become profoundly part of the consciousness of scientists.'

11 Arguably, Positive Psychology comes closest. According to Linley et al. (2006), Positive Psychology is the scientific study of optimal human functioning. It focuses on the positive aspects of human functioning and experience, such as happiness or wellbeing, fulfilment, and flourishing (see Gross, 2014).

12

- Continuous: receiving a high grade for every assignment; receiving a tip from every customer served.
- Fixed interval: being paid regularly (e.g. weekly/monthly); taking a 15-minute break for every hour of revision.
- Variable interval: irregular payments received by self-employed people.
- Fixed ratio: piece work (the more boxes packed, the more money earned); commission or bonuses.
- Variable ratio: gambling.

13 Seeing two 'things' (influences, theories, beliefs) as opposed to each other, or contradictory, or mutually exclusive, when they aren't. A classic example in Psychology is nature–nurture (see Gross, 2014).


14 If 'controlled' is taken to mean 'not random', then it seems difficult to disagree with Skinner. But his claim is consistent, of course, with his beliefs regarding free will as an illusion. The issue centres on who – or what – is doing the controlling. For Skinner, it's always and only environmental influences (reinforcements or punishments), of which we may be aware or unaware. For those who believe in free will, it's the person, him/herself, who is 'in control'.

15 The nomothetic approach (from the Greek nomos, meaning 'law') underlies classical, natural scientific research, aimed at establishing/discovering universal laws and principles. The idiographic approach (from the Greek idios, meaning 'own' or 'private') relates to the study of unique events, individuals, etc. and is the approach underlying the humanities (such as history and biography) (see Chapter 11).

16 - By definition, the nomothetic approach focuses on groups (i.e. large numbers of largely interchangeable people); the aim is to generalize across or between individuals, thus producing group norms.
   - The idiographic approach focuses on the study of individuals; the aim is to identify individual norms (generalizing within the individual).

17 This, potentially, could be the topic of a seminar paper (see Chapter 3 in Gross, 2014).

18 Skinner would certainly say so! Typically, he studied the behaviour of one rat or pigeon at a time (the 'single-subject experimental design'). Here, 'experimental' denotes that only one variable is manipulated at a time; this allowed him to uncover causal (functional) relationships. Behaviourists adopt a nomothetic approach (see above): they're not seeking idiosyncratic information about the particular situation under investigation, but general principles that allow accurate prediction and control in a wide variety of situations. Part of Skinner's rationale for studying individual organisms was that groups do not behave – only individuals do.
Although general principles may apply to a group, the specifics of these principles may vary between organisms. The single-subject design allows the researcher to account for individual differences but still identify general principles. The individual rat or pigeon acts as its own control: its behaviour is measured before, during, and after the reinforcement contingencies are applied (O’Donohue and Ferguson, 2001).


Chapter 7
People as information processors: Cognitive Psychology

As we noted in Chapter 1, there’s a common misperception regarding the relative dominance of Behaviourism and its replacement by Cognitive Psychology as the paradigmatic approach. Behaviourism became dominant in the US (but not until the 1930s), while ‘mentalism’ in one form or another (in particular, Gestalt Psychology) remained influential, especially in Germany. Conversely, even in the 1880s, ‘the mind’ was being studied in a way that has more in common with Behaviourism than with how it was subsequently conceived: Ebbinghaus’ pioneering study of memory was based on Associationism, while the so-called ‘cognitive revolution’ of the mid-1950s translated ‘mind’ into ‘cognition’/cognitive processes, seeing the person as an information processor, with the computer analogy at the centre of the paradigm. Like Behaviourism, this ‘revolution’ was a largely US phenomenon.

However, as we saw in Box 1.9 (page 17), the English Psychologist Donald Broadbent was a key figure in this move back towards the mind and the attributes of conscious experience. While Broadbent’s (1958) Perception and Communication referred frequently to Hull’s work, he conceptualized learned motor habits as residing in a long-term store. Items entered long-term memory usually by first entering consciousness. While an attention process filtered out the important from the unimportant material, the more the individual ‘processed’ or rehearsed the material in consciousness, the more likely it was to enter the long-term store. These processes were represented by a flowchart, including feedback systems; this was, for the time, a novel way of scientifically representing psychological functioning.

Broadbent’s attempt to explain selective attention represents an information-processing approach; he also helped to popularize analogies between human memory systems and (other) physical storage systems.
One of the best-known (and, arguably, most debated and controversial) of these analogies is that between humans as information processors and computers (see Gross, 2014). Broadbent’s model of selective attention has become a key feature of mainstream Cognitive Psychology, along with alternative models and accounts of divided attention.



The mainstream study of memory

Arguably, memory is the most important of all human cognitive abilities. According to Richards (2010), the phenomenon of memory is so pervasive as to be (in its broadest sense) synonymous with consciousness:

    Without it the world would be a chaotic mess of meaningless sensations; we would know nothing about our surroundings or ourselves, and, bar a few hard-wired reflexes, be unable actually to do virtually anything. Underpinning everything from perception to personality, motivation to intelligence, memory is not just one, discrete category of human behaviour or psychological phenomenon. (Richards, 2010, p. 132; emphasis in original)

Similarly, Blakemore (1990) argues that:

    Without the capacity to remember and to learn, it is difficult to imagine what life would be like, whether it could be called living at all. Without memory we would be servants of the moment, with nothing but our innate reflexes to help us to deal with the world. There could be no language, no art, no science, no culture. Civilisation itself is the distillation of memory. (p. 43)

Despite the change from ‘mind’ to ‘cognition’ (relating to Wundt’s approach and the cognitive revolution respectively), the seeds of the latter had been sown much earlier by a German Psychologist, Hermann Ebbinghaus.

The Ebbinghaus tradition: memory as association

In Memory: A Contribution to Experimental Psychology (1885), Ebbinghaus reported on his rigorously controlled memory experiments, which really mark the beginning of the systematic investigation of memory. He wanted to study memory in its ‘purest’ form, that is, unrelated to already existing knowledge (or associations); this also represented a way of studying memory objectively. His method for achieving this is described in Box 7.1.

BOX 7.1 Pure memory and nonsense syllables (Ebbinghaus, 1885)
- In order to rule out the use of familiar, meaningful words (which are already part of a complex web of associated ideas), Ebbinghaus made up a very large number of three-letter nonsense syllables (a consonant, followed by a vowel, followed by another consonant, such as ZUT and JEH).
- Ebbinghaus spent several years using himself as the sole subject of his research.
- He read lists of nonsense syllables out loud, and when he felt he’d recited a list sufficiently often to retain it, he tested himself. If he could recite a list correctly twice in succession, he considered it to be learned.


- After recording the time taken to learn a list, he then started to learn another one.
- After specific periods of time, he’d return to a particular list and try to memorize it again.
- He calculated how many fewer trials it took him to relearn the list, expressed as a percentage of the number of trials it had taken to learn it in the first place (a savings score).
- Ebbinghaus found that memory declines sharply at first, before levelling off. For example, in one set of experiments involving a series of eight different lists of 13 nonsense syllables, he found savings scores of:
  - 58 per cent, 20 minutes after training
  - 44 per cent, 60 minutes after training
  - 34 per cent, 24 hours after training
  - 21 per cent, 31 days after training.
- These findings have been replicated by other researchers many times.

- Look back to Chapter 2 and consider the arguments in support of single-subject designs as consistent with a nomothetic approach.
- How could you sum up Ebbinghaus’ findings regarding memory loss?

Ebbinghaus found that most memory loss occurs within the first minutes after training; once the memory has survived this ‘hurdle’, it seems to become much more stable (Rose, 2003). By systematically varying the number of times he read a list, or the delay between learning a list and trying to recall it, Ebbinghaus could measure the influence of repetition and delay on the processes of remembering and forgetting (Cohen, 1990).
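The savings measure itself amounts to simple arithmetic: the proportion of the original learning effort that is ‘saved’ when relearning a list after a delay. A minimal sketch (the trial counts in the example are invented for illustration; they are not Ebbinghaus’ data):

```python
def savings_score(original_trials, relearning_trials):
    """Savings score: the percentage of the original learning effort
    'saved' when relearning a list after a delay."""
    saved = original_trials - relearning_trials
    return 100 * saved / original_trials

# Hypothetical illustration: a list that took 20 trials to learn
# and 9 trials to relearn the next day yields a 55% savings score.
print(savings_score(20, 9))   # 55.0
print(savings_score(10, 10))  # 0.0 (no saving: relearning cost as much as learning)
```

A score of 100 would mean the list was recalled without any relearning at all; Ebbinghaus’ declining scores (58 per cent after 20 minutes, 21 per cent after 31 days) trace his forgetting curve.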

An evaluation of Ebbinghaus’ research

As a number of researchers have pointed out, Ebbinghaus’ associationist approach characterizes memory as a passive process, whereby lists of (meaningless) items are simply repeated enough times to be automatically stored in memory (e.g. Cohen, 1990).

More positively, Ebbinghaus’ research was innovative in that it embodied all the fundamental features of the measurement of psychological capacity. According to Danziger (1990):

    Psychological measurement became a generally useful tool only when it was interpreted as constituting a measurement of individual capacity rather than of individual experience … this … enormously expanded the potential scope of psychological measurement, because the number of measurable capacities was limited only by the ingenuity of psychologists in devising tasks with quantified performance criteria. (pp. 141–142)

In contrast with Ebbinghaus’ focus on performance as a way of measuring capacity, Wundt had focused on memory as a subjective experience. In an experiment conducted by one of Wundt’s students (the American H.K. Wolfe), the experimental ‘subject’ was presented with a standard tone and subsequently with comparison tones at varying intervals. The subject’s task was to judge whether the pitch of the standard and comparison tones was



the same or different. The proportion of correct judgements could then be plotted as a function of the time that had elapsed between the two tone presentations. As Danziger (1990) explains, what was supposedly being measured was the subject’s ability to reproduce accurately the sensory memory of the standard tone in order to be able to compare it with the comparison tone:

    For Wundt, psychological reproduction meant the reproduction of a subjective experience, and any attempt at quantification implied a truly mental measurement. (Danziger, 1990, p. 143; emphasis added)

For Ebbinghaus, psychological reproduction had a rather different significance: it wasn’t about whether a particular subjective sensation could be accurately reproduced, but rather whether a certain objective result could be achieved – irrespective of any private experience. In both cases, psychological measurement depended on matching two (or more) things, but

    In Wundt’s case there was an attempt at direct matching of two subjective experiences, whereas for Ebbinghaus it was some public product of the individual’s memorizing activity that was matched against the criterion. (Danziger, 1990, p. 143)

What Wundt was really interested in was the expression of a subjective experience, while for Ebbinghaus the primary focus was on the matching of the subject’s activity with objective requirements. Ebbinghaus’ methodology established the category of achieved performance as a fundamental organizing principle of experimental psychological research (Danziger, 1990). This is reflected in both Watson’s and Skinner’s brands of Behaviourism, but also in the later study of memory – and other cognitive processes – that became central to Cognitive Psychology following the ‘cognitive revolution’. Mainstream memory research has built on Ebbinghaus’ focus on achieved performance, with major theories trying to account for the performance data.
Arguably, the single most influential theory of memory is the multi-store model (MSM) (sometimes called the dual-memory model because of its emphasis on short-term memory (STM) and long-term memory (LTM)). The MSM is described in Box 7.2.

BOX 7.2 The multi-store model of memory (Atkinson and Shiffrin, 1968, 1971)
- This is an attempt to explain how information flows from one storage system to another (i.e. sensory memory, STM, and LTM).
- The model sees these as permanent structural components of the memory system (i.e. built-in features of the human information-processing system).
- In addition to these structural components, the memory system comprises more transient control processes; the critical example is rehearsal.
- Rehearsal (1) acts as a buffer between sensory memory and LTM by maintaining incoming information within STM; and (2) transfers information to LTM.
- Information from sensory memory is scanned and matched with information in LTM; if a match occurs (i.e. pattern recognition), then it might be fed into STM together with a label from LTM (see Figure 7.1).


[Figure 7.1 shows a flow diagram: incoming information (via sensory memory) enters short-term memory (STM); a rehearsal buffer maintains information within STM; information is transferred to long-term memory (LTM); information not processed while in STM is forgotten.]

Figure 7.1 The multi-store (two-process) model of memory (MSM) (based on Atkinson and Shiffrin, 1968, 1971).

Evidence for the MSM derives from three main sources:

1 Experimental studies of STM and LTM (sometimes called two-component tasks), including the serial position effect (e.g. Glanzer and Cunitz, 1966; Murdock, 1962). These are mainly concerned with (a) capacity (as in Ebbinghaus’ research; see above), that is, how much information can be stored; and (b) duration, that is, how long information can be held in storage (e.g. the Brown–Peterson technique; Brown, 1958; Peterson and Peterson, 1959).
2 Studies of coding: how sensory input is represented – or ‘converted’ – by the memory system in a way that allows storage to take place. Studies include those of Baddeley (1966) and Conrad (1964).
3 Studies of brain-damaged patients (famous cases include H.M. (Milner et al., 1968) and Clive Wearing (e.g. Blakemore, 1990; Wearing, 2005)). If STM and LTM are distinct, there should be certain kinds of brain damage that impair one without affecting the other. For example, surgical treatment of epilepsy can result in anterograde amnesia (severe memory deficits for events occurring after the surgery). In retrograde amnesia, a patient fails to remember what happened before the surgery or accident that caused the amnesia.
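The flow the MSM describes can be caricatured in a few lines of code. This is only a sketch: the seven-item capacity, the displacement rule, and the all-or-none transfer of rehearsed items are illustrative assumptions, not Atkinson and Shiffrin’s actual parameters.

```python
# Toy caricature of the multi-store model: items enter a limited-capacity
# STM; rehearsed items transfer to LTM; items displaced from STM without
# rehearsal are forgotten. Capacity of 7 is an illustrative assumption.
STM_CAPACITY = 7

def present(items, rehearsed):
    stm, ltm, forgotten = [], [], []
    for item in items:
        if len(stm) == STM_CAPACITY:            # STM full: oldest item is displaced
            displaced = stm.pop(0)
            (ltm if displaced in rehearsed else forgotten).append(displaced)
        stm.append(item)
    for item in stm:                            # rehearsed items still in STM reach LTM
        if item in rehearsed:
            ltm.append(item)
    return ltm, forgotten

ltm, forgotten = present(list("ABCDEFGHIJ"), rehearsed={"A", "B", "J"})
print(ltm)        # ['A', 'B', 'J']
print(forgotten)  # ['C']
```

Even this crude version reproduces the model’s central claim: what survives is determined by rehearsal, not by mere exposure.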

While the two-component task studies and studies of coding offer substantial support for the MSM, the study of brain-damaged patients has led to the view of LTM as multiple (i.e. there are different kinds of LTM), whereas, according to the MSM, it is unitary (i.e. LTM is all the same). Tulving (1972) distinguished between episodic memory (EM) and semantic memory (SM), while Cohen and Squire (1980) distinguished between procedural memory (PM) (Anderson, 1985; Tulving, 1985) and declarative memory (DM). The MSM also saw STM as being unitary, but this too has been challenged by Baddeley and Hitch’s (1974; Baddeley, 2007) account of working memory (WM). According to the



WM model, STM comprises a central executive (a flexible, modality-free ‘controller’, resembling a pure-attentional system) plus sub- or slave-systems (articulatory/phonological loop, visuospatial scratch/sketch pad, and episodic buffer).

A third major alternative to the MSM is the levels of processing (LOP) model (Craik and Lockhart, 1972). While the MSM saw rehearsal as the crucial transient control process (see above), it failed to distinguish between different kinds of rehearsal. What the MSM describes is what Craik and Watkins (1973) call maintenance rehearsal (repeating the to-be-remembered material). But according to LOP, it’s elaborative rehearsal that plays a much more important role in remembering (i.e. what we do with/how we process the material). LOP also turns the MSM on its head: it begins with the control processes and takes the structural components (the memory system) as the result of the operation of those processes (essentially, memory is a byproduct of perceptual analysis). (Detailed discussion of all the above models can be found in Gross, 2015.)

The constructivist approach to memory

Another way of evaluating Ebbinghaus’ approach is to compare it with the work of Bartlett, who, long before the cognitive revolution, challenged the associationist approach in fundamental ways.

BOX 7.3 KEY THINKER: Frederic Charles Bartlett (1886–1969)

[Photograph: Frederic Bartlett]

- Bartlett had major health problems as a child and much of his schooling was home-based. His first taste of higher education was an external degree offered by University College, London, comprising logic and ethics, and sociology. At that time (early 1900s), a course in logic (‘laws of thought’) doubled up as what we’d now call Cognitive Psychology.
- He then went to St. John’s College, Cambridge, where he was much influenced by W.H.R. Rivers, one of the most eminent anthropologists of the era. Rivers had worked in the German tradition of Psychophysics (see Chapter 1); like Wundt, he had also turned to cultural studies as the complement to Psychophysics.
- Though never losing his interest in anthropology, Bartlett began work in Experimental Psychology under Cyril Burt (famous for his work on IQ (intelligence quotient) in twins; see Chapter 11).
- During the First World War, Bartlett stayed in Cambridge, taking over Burt’s Experimental Psychology courses. He became the Director of the Psychology laboratory after the war until 1931, when he became the first Professor of Experimental Psychology. During this time he revived his interest in anthropology, publishing Psychology and Primitive Culture in 1923. But he’s best known for Remembering (1932), which not only anticipated much that emerged during the ‘cognitive revolution’

in the mid-1950s, but also laid the foundations for the much more recent research focus on real-world phenomena and the use of the ‘indicative case’.
- As with the First World War, Bartlett’s research interests became much more practical during the Second World War (such as refining training programmes for pilots). By 1945, the Cambridge laboratory facilities had become the Applied Psychology Research Unit.
- Bartlett now turned to the study of thinking, as a cognitive skill, using methods similar to those used to study remembering (see text below). Thinking: An Experimental and Social Study appeared in 1958.
- He enjoyed unusual public recognition for a Psychologist. He was elected to the Royal Society in 1932 and knighted in 1948.

(Based on Harré, 2006)

Bartlett criticized Ebbinghaus’ use of nonsense syllables for excluding ‘all that is most central to human memory’: his search for the pure ‘mechanism’ of recall and recognition was deliberately divorced from both context and meaning. So, for Bartlett, Ebbinghaus’ studies of ‘repetition habits’ were practically worthless in trying to understand how real people, in real situations, remembered real matters of interest (Harré, 2006). Instead of studying subjects’ passive responses to meaningless stimuli presented by an experimenter, Psychologists should examine people’s ‘effort after meaning’.

- What do you understand by ‘effort after meaning’?
- According to Bartlett, what is it that Ebbinghaus’ use of nonsense syllables lacked?

According to Bartlett, when we remember something in real life, the memory is rarely, if ever, an exact copy of the original; rather, it is an attempt at interpretation, making the past more logical, coherent, and generally ‘sensible’. This involves making inferences or deductions about what could/should have happened. We reconstruct the past by trying to fit it into our existing understanding of the world (our schemas). Unlike a computer’s memory, where the output exactly matches the input, human memory is an ‘imaginative reconstruction’ of experience.

(Interestingly, the faculty of memory had a special status in medieval thinking about the mind (Saunders and Fernyhough, 2016). According to Carruthers (1998), the medieval conception of memoria embodied something much richer than modern notions of memory as a passive store of information. For thinkers of the Middle Ages, remembering was an active reconstructive process involving the recombination of different forms of information into new cognitive representations (a complectio); the latter comprised emotional and motivational elements, as well as cognitive ones.)

For Bartlett, therefore, Ebbinghaus’ studies lacked ecological validity (or mundane realism): they don’t reflect the way that ‘remembering’ occurs in everyday life. The very use of the word ‘remembering’ (the title of Bartlett’s classic text) conveys an active process, in contrast with ‘memory’, which conveys a ‘place’ where experience/learning is stored. In terms of this distinction, Bartlett’s reconstructivist approach has much more in common with the LOP model than the MSM: what they share is a view of remembering as a product of how we process the to-be-remembered material, rather than merely repeating it (through maintenance rehearsal).



Famously, Bartlett used serial reproduction, in which one person reproduces some material, a second person has to reproduce the first reproduction, a third reproduces the second, and so on, up to six or seven reproductions. This method is meant to replicate the process by which rumours and gossip are spread, or legends passed from generation to generation (especially in non-literate communities), and so reinforces the claim that Bartlett’s research has the ecological validity that Ebbinghaus’ lacked. One of the most famous pieces of material Bartlett used was the Inuit folk tale translated as ‘The War of the Ghosts’. A summary of his findings using this folk tale is shown in Box 7.4.

BOX 7.4 Examples of reconstructive remembering based on ‘The War of the Ghosts’
- The story became noticeably shorter; after six or seven reproductions, it shrank from about 330 to 180 words.
- Despite becoming shorter and details being omitted, the story became more coherent: no matter how distorted it became, it remained a story (with a beginning, middle, and end).
- It also became more conventional, retaining only those details which could be easily assimilated to the participants’ shared past experiences and cultural backgrounds.
- It became more clichéd: any peculiar or individual interpretations tended to be dropped.

(Based on Gross, 2015)
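Structurally, serial reproduction is a pipeline: each person’s reproduction becomes the next person’s input. The sketch below is a caricature under stated assumptions: the hypothetical reproduce function merely drops words to mimic the shortening Bartlett observed, whereas his participants transformed meaning, not just length.

```python
def reproduce(words):
    # Hypothetical stand-in for one participant's reproduction: keep
    # roughly every other word, mimicking only the shortening effect.
    return words[::2]

def serial_reproduction(story, generations):
    """Chain reproductions: each output is the next participant's input."""
    versions = [story]
    for _ in range(generations):
        versions.append(reproduce(versions[-1]))
    return versions

story = ["word"] * 330                       # a stand-in ~330-word story
versions = serial_reproduction(story, 6)
print(len(versions[0]), len(versions[-1]))   # 330 6
```

The point of the chained structure is that distortions compound: whatever each ‘participant’ does to the material, the next one inherits and amplifies.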

The point regarding how the story became more conventional is especially pertinent, given that Bartlett has been accused by some of ‘selling out’ to mainstream Experimental Psychology. While, as we’ve seen, his approach was radically different from that of Ebbinghaus, we noted in Box 7.3 how Bartlett had been drawn to anthropology in the early part of his career. Yet, according to Rosa (1996), as his Psychology career progressed, he found it necessary to be more faithful to mainstream individualism and essentialism (see Chapter 3). Putting it more forcefully, Douglas (1980) claimed that Bartlett (author of the best book on remembering) became absorbed into the institutional framework of Cambridge University Psychology and restricted by the conditions of the experimental laboratory. According to Shotter (1990), Bartlett came to treat remembering as a wholly inner process. However, at the same time, he recognized the importance of understanding the experimental situation as being just as socially located as any other setting (Middleton and Crook, 1996).

Pause for thought …
1 What do you take Middleton and Crook to mean by this claim that the experimental situation is ‘socially located’? Try to think of some examples of the ‘social location’ of Psychology laboratory experiments.



Memory and culture

In describing Bartlett’s use of serial reproduction above, we noted that this may be an especially important means of passing legends from generation to generation, especially in non-literate societies. In other words, it may have ecological validity as a method for remembering culturally significant material. It may even enhance remembering in such societies compared with that in literary (text)-biased Western societies. Indeed:

    Anecdotal evidence abounds that people with oral traditions have phenomenal memories…. Some serious students, perhaps influenced by these anecdotes, have argued that memory skills in preliterate societies develop differently from, if not better than, those in literate societies. (Segall et al., 1999, p. 111)

They name Bartlett as one of these ‘serious students’. Because individuals in literate societies can rely on memory banks (such as telephone directories, history books, and computers), they may have lost memory skills through lack of practice (Segall et al., 1999). Indeed, the recent advent of Google may even be changing what it means to remember something.

BOX 7.5 How Google is changing our brains
- According to Wegner and Ward (2013), the internet isn’t just replacing other people as sources of memory and someone to share information with, but also our own cognitive faculties, undermining the impulse to ensure that some important, just-learned facts get inscribed into our biological memory banks. They call this the Google effect.
- Wegner and Ward describe a study that looked at how quickly we turn to the internet when trying to answer a question. The findings suggested that the internet comes to mind quickly when we don’t know the answer to a question: our first impulse is to think of our all-knowing ‘friend’ that can tell us what we need.
- Research has shown that participants who’ve just found answers on a website experience the illusion that their own mental capacities had produced this information – not Google (or whichever search engine was used). Using Google gives people the sense that the internet has become part of their own cognitive tool set.

    The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before – when their reliance on the internet means that they may know ever less about the world around them. (Wegner and Ward, 2013, p. 53; emphasis added)

Segall et al. (1999) cite cross-cultural evidence of superiority of memory among people reared in societies with a strong oral tradition. However, the picture is more complicated than this suggests. Cole et al. (1971) and Cole and Scribner (1974) reported a complex series of memory experiments with schooled and unschooled Kpelle in Liberia. In one experiment involving several trials, Kpelle participants heard and then were asked to recall the names of 20 common items, five each in four categories (food, clothing, tools, and utensils).



Compared with US participants, the Kpelle participants recalled fewer items and improved less over trials; there was little improvement with age, and there was almost no clustering into semantic categories (a mnemonic skill evident in American children over ten). Neither did the Kpelle groups seem to learn by ‘rote’ (i.e. recalling the words in the order in which they’re presented). This surprising result prompted Scribner (1974) to propose that the categories used were inappropriate (i.e. not meaningful or relevant) for the Kpelle: when allowed to use their own categories, they used clustering in their recall (although more clustering didn’t always produce better recall). A further finding was that the use of conceptual organization as a means of remembering was more likely to be characteristic of people who’d been school-educated.

Cole and Scribner (1974) argued that schooling teaches people to remember aggregates (categories or clusters) of material that aren’t first perceived as interrelated; people become practised in learning new organizing principles, whose acquisition then facilitates the remembering of instances that relate to the principle.

Based on research in Mexico and Morocco, Wagner (1981) concluded that the structure of memory (STM capacity, including the recency effect, and forgetting rate, as studied by Ebbinghaus) is universal and relatively invariant across populations. By contrast, the control processes (acquisition strategies, such as clustering and rehearsal, and retrieval strategies) are culturally influenced.

An important implication of Bartlett’s work is that memory is a social phenomenon that cannot be studied as a ‘pure’ process, as Ebbinghaus had attempted to do. Because he stressed the influence of previous knowledge and background experience, Bartlett found that remembering is integrally related to the social and cultural contexts in which it’s practised.
These contexts determine the function and goals of the activity of ‘remembering’ (Mistry and Rogoff, 1994); this helps account for the amazing memory for lines of descent and history of Iatmul elders in New Guinea, needed to resolve disputes over property claims by conflicting clans. Bartlett himself described the prodigious ability of Swazi herdsmen to recall individual characteristics of their cattle. But since Swazi culture revolves around the possession and care of cattle, this ability isn’t so surprising. What these examples demonstrate is that remembering is a means of achieving a culturally important goal, rather than being the goal itself (Mistry and Rogoff, 1994).

Discursive Psychology and everyday remembering

A postmodernist, Social Constructionist account of memory is very much in keeping with Bartlett’s view of remembering as located in everyday practice (such as conversation). In contrast with the associationist approach (starting with Ebbinghaus but continuing into the MSM and beyond), which studies memory almost as a disembodied process removed from any real-life, sociocultural setting, Discursive Psychology (DP) focuses on what people are trying to accomplish when they remember something (see Chapter 3).

Edwards et al. (1992) point out a fundamental difference between (1) associationist or information-processing approaches to memory and (2) discursive remembering.

1 In the former, ‘memory’ is measured as a discrepancy between input and output: lists of items, prose passages, input sentences, etc. are presented to participants, who are asked to recall (sometimes with prompts and probes) the input. Any discrepancies between recall (output) and input are then used as the basis for theorizing about intervening cognitive structures and processes. The input and output must be comparable, i.e. they must share the same representational form (e.g. word recall of word presentation).



2 In everyday ‘remembering’, people overwhelmingly describe things rather than recite (or reproduce) them. Here, input and output often don’t match (remembering is often ‘cross-modal’; Edwards and Middleton, 1987). Language exists primarily as a domain of social action, communication, and culture: everyday talk about past events is rarely an attempt to reproduce something as accurately as possible. Indeed, one of the prime features of everyday remembering is that through it people try to establish what it is that ‘actually, merely and definitively happened’ (p. 442).

    In an important sense … the truth of original events is the outcome, not the input, to the ordinary reasoning processes that talk displays. This derives from the vagaries of description, an essential feature of the workings of language as a mode of representation. (Edwards et al., 1992, p. 442; emphasis in original)

(Largely inspired by Bartlett’s work, research into eyewitness testimony (EWT) reveals how unreliable/inaccurate people’s recollection of the past can be – even when they’re trying to match ‘input’ and known ‘output’; see Gross, 2014, 2015.)

Beginning in the late 1960s, Ulric Neisser (1967, 1976, 1979) changed the nature of Cognitive Psychology in general, and memory research in particular, basing his approach on Bartlett’s work more than that of any other contemporary Psychologist (Humphreys, 1997). Perhaps the best illustration of Neisser’s work is his analysis of the ‘Watergate-case memory’ of John Dean (Neisser, 1981, 1982). This is described in Box 7.6.

BOX 7.6 Remembering it like John Dean
- Dean had been adviser to Republican President Richard Nixon during the ‘Watergate scandal’. A Congressional investigative committee was set up to examine Nixon’s involvement in illegal activities, namely the break-in at the Democratic Party’s Watergate offices in Washington, DC. This led to Nixon’s resignation in 1974.
- Dean’s testimony to the Committee was one of the central features of the case against Nixon. Dean (the key witness) initially displayed remarkable feats of recollection, through ‘thinking himself’ back into crucial meetings held in the Oval Office of the White House (‘retrospective recollection’).
- However, when tapes of critical meetings later became ‘available’, it became apparent that Dean’s literal recall had in fact been rather poor: the mass of detail he provided had been largely mistaken.
- Neisser’s aim was to discover what an analysis of Dean’s testimony could reveal about how his memory worked, as well as about memory in general. In particular, he set out to show that there was a sense in which Dean could be accurate, while apparently mis-remembering almost all of the important details.
- In Neisser’s analysis, not only did Dean mis-remember the details of time, place, and conversation, he also often mis-remembered even the gist of what had happened and what was said. Despite this, ‘there is usually a deeper level at which he is right. He gave an accurate portrayal of the real situation, of the actual characters and commitments of the people he knew, and of the events that lay behind the conversations he was trying to remember’ (Neisser, 1981, p. 4).

(Based on Edwards and Potter, 1992)



Neisser’s somewhat confusing claim that Dean could be both wrong and right at the same time has echoes of Bartlett’s observation that where the environment is constantly changing, literal recall is extraordinarily unimportant. ‘Accuracy’ doesn’t have to equate with ‘literal recall’. Neisser’s ‘deeper level’ of accuracy is the third of a three-part distinction between varieties of accurate recall:

1 verbatim recall (or ‘literal memory’);
2 gist (a more holistic level of accuracy, in which ‘themes’ and ‘storylines’ are judged to be correct – although this may be difficult to determine in practice – but accuracy of detail is often poor); and
3 repisodic memory (an even more holistic level at which recall is distilled from many different but related experiences, in which some significant essence of the truth remains – despite all sorts of inaccuracies of detail).

So, while Dean’s testimony was substantially inaccurate at the verbatim level, he was increasingly accurate at the gist and repisodic levels (Neisser, 1981).

Neisser’s concern with identifying various ‘levels’ of accurate recall can be seen as part of the broader attempt to establish an ‘ecological psychology of memory’; this seeks to emphasize true (i.e. accurate) remembering, in contrast with the information-processing and constructivist approaches’ emphases on forgetting and the general unreliability of memory. For Neisser, ‘gist’ and ‘repisodic memory’ are both ways of ‘getting it right about the past’ (Edwards and Potter, 1992, p. 36).
Despite believing that Neisser didn’t go far enough, Edwards and Potter (1995) argue that Neisser’s (1981) ‘ecological’ analysis of Dean’s testimony was a welcome departure from much traditional, mainstream memory research in three ways:

1 it examined memory in a natural context;
2 it shifted emphasis to a functional view of memory (by attempting to make sense of Dean’s personal goals); and
3 crucially, it changed the definition and assessment of veridicality (truth or accuracy).

Consistent with (3) above, for Edwards and Potter (1995) the question of the accuracy of Dean’s account isn’t what’s relevant: what they’re interested in is how Dean constructed his account and made it effective. While mainstream Psychology takes qualitative data such as interview transcripts as evidence of intra-psychic processes and states, DP both reframes the status of language in Psychology and considers Cognitive Psychology’s usual subject-matter (internal structures and processes) as largely irrelevant (Burr, 2015). As Edwards and Potter (1995) put it, mainstream Psychology sees remembering as

a kind of distorted re-experiencing, overlaid or altered by subsequent experience and by the machinations of inner cognitive structures and experiences, with the report serving merely (and directly) as evidence of those underlying processes. (Edwards and Potter, 1995, p. 35)

Experience tells us that we’re unlikely to find an absolute version of the truth against which we can measure the accuracy of accounts of past events. When people are asked to provide an accurate account of an event, much ‘memory work’ is done to build ‘an acceptable, agreed or communicatively successful version of what really happened’ (Edwards and Potter, 1995, p. 34).


According to Rosa (1996), Bartlett’s view of remembering lay somewhere between the mainstream and DP positions. He rejected the ‘dissolving’ (or ‘reduction’) of the individual into a set of cognitive processes, while acknowledging their reality. However, he tied these inner processes – and personal experience – to social practices and cultural materials. For Bartlett, one of the key interfaces between the individual and the social in remembering is the schema, which is widely regarded as one of his greatest contributions to Psychology (see Box 7.7).

BOX 7.7 The function of schemas
• As we noted earlier, Bartlett (1932) argued that we reconstruct the past by trying to fit it into our existing understanding of the world (our schemas or schemata).
• Schemas provide us with ready-made expectations, which help us to interpret the flow of information reaching the senses.
• They also help to make the world more predictable, and allow us to ‘fill in the gaps’ when our memories are incomplete.
• When new information conflicts with existing schemas, distortions of what’s recalled can result. A famous illustration is an experiment by Allport and Postman (1947), in which white participants were briefly shown a picture of two men, one African-American, the other white, apparently engaged in an argument. The white man is holding a cut-throat razor in one hand and pointing his finger in the black man’s face. As participants described the picture to someone who hadn’t seen it – and using Bartlett’s serial reproduction – the details changed: most significantly, the razor was reported as being held by the black man.

Pause for thought …
2 What can you infer about the schema the participants were using, which helps account for the distortion that occurred? If the participants had been black, might the results have turned out differently – and if so, how?

Someone who was very much influenced by Bartlett was Jerome Bruner, one of the central figures in the 1956 cognitive revolution (see Chapter 1).

BOX 7.8 KEY THINKER: Jerome Seymour Bruner (1915–2016)
• Bruner was born in New York to a well-off middle-class Jewish family.
• He was born blind and unable to see until he was two, following a series of operations. His father died when he was 12, and his widowed mother moved house annually, disrupting his schooling.
• However, he entered Duke University in 1933, quickly finding his feet as a Psychology major. Following some problems with the university management, he began work on animal experiments under the supervision of William McDougall.
• But he decided to move to Harvard to begin his graduate studies (in 1938), where the emphasis was much more on human beings than animals. He was greatly influenced by the writings of Gordon Allport (see Chapter 12).
• His doctoral thesis focused on the power of propaganda broadcasts to influence public opinion. He completed his doctorate in 1947 and remained at Harvard until 1972, when he moved to the new Watts Chair in Psychology at Oxford.
• By the early 1950s, Bruner and his colleagues (especially George Miller, 1920–2012) became convinced of the need for an institution dedicated to the study of human cognition. A grant from the Carnegie Corporation helped fund the Centre for Cognitive Studies at Harvard; by 1960, this had become the world’s key location for such studies. Bruner, Miller, and their colleagues happily used terms such as ‘mind’, ‘expectation’, ‘perception’, and ‘meaning’, which had been ‘banned’ by hardline Behaviourists (see Chapter 1).
• When he moved to Oxford in 1972, it was the pre-eminent centre for philosophy in the world; the philosophy of mind was particularly related to his increasingly narratologically oriented Psychology (see text below). Most of his time at Oxford was devoted to the study of human development (see Gross, 2015).
• He returned to the US in 1982 and published his autobiography in 1983. In the mid-1980s, he joined the New School for Social Research, then New York University.
(Based on Harré, 2006)

Bruner first became famous for a series of experiments (on perception) which helped to re-establish cognition’s central place in Psychology; they invalidated one of Behaviourism’s fundamental principles, namely that environmental contingencies and stimuli are sufficient to account for the entire range of human psychological phenomena, either directly or through conditioning. Bruner’s experiments were also relevant to another powerful idea:

It had been a widely assumed but rarely explicitly formulated principle that there is a culture-free basis for all human psychology in the functions of the brain and nervous system. The experiments … were on perception, which might have been thought to be the branch of psychology most closely tied to neurophysiological systems common to all humanity. Yet, even there, concepts like ‘meanings’ and ‘conventions’ seem to be called for to account for the experimental results … whatever way his experiments are interpreted, they mark Bruner’s place as one of the founders of cognitive psychology. (Harré, 2006, p. 57)

Some of these experiments are described in Box 7.9.


BOX 7.9 Bruner’s ‘Judas Eye’ experiments
• The experiments were aimed at testing the fundamental principle that ‘the world looked different depending on how you thought about it’ (Bruner, 1983, p. 65).
• The ‘Judas Eye’ metaphor is based on the way a peephole in a door allows a mere glimpse of a visitor, and yet, we’re invariably able to recognize the whole person. We typically ‘go beyond the information given’ in perceiving the world around us.
• Bruner and various colleagues (including Leo Postman) based the experiments on various principles of Psychophysics (see Chapter 1), such as the relative magnitudes of a sensation and the physical stimulus producing it, and perceptual threshold (how long it takes for something to be consciously recognized/identified).
• In one experimental series, participants were briefly presented with the words ‘Paris in the the spring’ arranged within a triangle, so that the two instances of ‘the’ were on separate lines. Typically, participants reported seeing ‘Paris in the spring’. This demonstrated the interaction between context and expectations (Bruner and Postman, 1949; Bruner et al., 1952).
• The influence of expectations and context, plus that of motivation and emotion, instructions, past experience, individual differences (including cultural differences), is closely related to Bartlett’s schema concept (see Box 7.8).
• A related concept is perceptual set, defined by Allport (1955a) as a perceptual bias or predisposition or readiness to perceive particular features of a stimulus. According to Vernon (1955), set acts as (1) a selector (the perceiver has certain expectations which help focus attention on particular aspects of the incoming sensory information); and (2) an interpreter (the perceiver knows how to deal with the selected data, how to classify, understand, and name them, and what inferences to draw from them).
• As a whole, this research demonstrated the role played by schemata in a wide range of common psychological phenomena, such as understanding words and recognizing objects.
Perception was shown to be an aspect of cognition (rather than neurophysiology), and ‘meaning’ was to become an indispensable concept in Psychology. Existing knowledge was involved in perceiving, which had seemed the most biological of psychological processes (Harré, 2006).

In 1955, Bruner and his colleagues, Goodnow and Austin, began studying how people classify things into groups/categories. They used cards of different shapes and colours, and participants had to work out the classification system they were using; they did this by asking whether or not any two cards with certain characteristics belonged to the same category. The results were published in A Study of Thinking (1956), which is commonly taken to be the inaugural work in what Harré calls the first cognitive revolution (Harré, 1989, 2006; see Chapter 1). Later in his life, Bruner observed that the 1950s cognitive revolution was quickly hijacked by the computationalists. We now turn to their ideas.

Artificial intelligence and the Computational Theory of Mind

As we saw in Chapter 1, one of two key paper presentations given at the Symposium on Information Theory held at the Massachusetts Institute of Technology (MIT) in September 1956 was Newell and Simon’s demonstration that a proof of symbolic logic could be carried out by a computer. It was this Symposium that is taken by many to mark the cognitive revolution.


We’ve also seen in this chapter how the information-processing approach (which, with the computer analogy, is at the heart of the cognitive revolution) has helped shape the MSM and other major accounts of human memory. Both Newell and Simon’s paper and the information-processing approach in general reflect the impact on Psychology of computer science. According to Garnham (1988), the uneasy relationship between the two disciplines lasted until the late 1970s, when the new discipline of Cognitive Science emerged from the parent disciplines of Cognitive Psychology, Artificial Intelligence (AI), philosophy, linguistics, anthropology, and neuroscience. Indeed, by the late 1970s Cognitive Psychologists had more in common with AI researchers than with other Psychologists, and AI researchers had more in common with Cognitive Psychologists than with other computer scientists (Garnham, 1988).

What is AI?

According to Dreyfus (1987), the basic goal of AI research is to produce genuine intelligence by means of a programmed digital computer. This requires, in effect, that human knowledge and understanding be defined in terms of formally specified elements and sequences of rule-governed operations. Computer programs comprise formal systems: a set of basic elements or pieces and a set of rules for forming and transforming the elements or pieces (Flanagan, 1984). For example, a computer programmed to play noughts and crosses is an automatic formal system, and every modern computer is just such an automated, self-regulating imitator of some formal system. Boden (1987b) defines AI as the science of making machines do the sorts of things that are done by human minds. While the ‘machines’ in question are, typically, digital computers, she’s at pains to point out that AI isn’t the study of computers; rather, AI is the study of intelligence in thought and action. Computers are its tools, because its theories are expressed as computer programs that are tested by being run on a machine.
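Flanagan’s notion of a formal system – basic elements plus rules for transforming them – can be made concrete with a small sketch. The tokens and rewriting rules below are invented purely for illustration; the point is that the system transforms shapes blindly, which is exactly what an ‘automatic formal system’ amounts to:

```python
# A minimal sketch of an 'automatic formal system': a set of basic
# elements (the tokens 'A' and 'B') plus rules for transforming strings
# of them. The rules are invented for illustration; note that nothing
# here assigns the symbols any meaning.

RULES = {
    "A": "AB",   # wherever 'A' appears, rewrite it as 'AB'
    "B": "A",    # wherever 'B' appears, rewrite it as 'A'
}

def step(s: str) -> str:
    """Apply every applicable rule once, left to right."""
    return "".join(RULES.get(ch, ch) for ch in s)

def run(start: str, n_steps: int) -> list[str]:
    """Run the system for n_steps, recording each string produced."""
    history = [start]
    for _ in range(n_steps):
        history.append(step(history[-1]))
    return history

print(run("A", 4))  # ['A', 'AB', 'ABA', 'ABAAB', 'ABAABABA']
```

The transformations are entirely mechanical: the program never consults what ‘A’ or ‘B’ stand for, only their shapes, which is the feature of formal systems on which the later debate about meaning turns.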

• Do you believe that computers literally think/behave intelligently (that is, reproduce/duplicate the human equivalent), or are they merely simulating (mimicking) human thought/intelligence?

This distinction corresponds to that between strong (‘hard’) and weak (‘soft’) AI, described in Box 7.10.

BOX 7.10 Hard and soft AI (Searle, 1980)
• According to weak AI, the main value of the computer in the study of the mind is that it provides a very powerful tool (e.g. it enables us to formulate and test hypotheses in a more rigorous and precise fashion than before).
• But according to strong AI, the computer isn’t merely a tool; rather, the appropriately programmed computer really is a mind in the sense that computers can be literally said to understand and have other cognitive states.
• Further, because the programmed computer has cognitive states, the programs aren’t mere tools that enable us to test psychological explanations: the programs are themselves explanations.


The Computational Theory of Mind

Advocates of strong AI (e.g. Turing, Johnson-Laird, Newell, Simon, Minsky, Boden) believe that people and computers are merely different manifestations of the same underlying phenomenon, namely automatic formal systems (the Computational Theory of Mind (CTM)). One current, high-profile advocate of CTM is the Canadian-born Psychologist and linguist Steven Pinker, for whom:

The mind is what the brain does; specifically, the brain processes information and thinking is a kind of computation. The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one area of interaction with the world … On this view, psychology is engineering in reverse. In forward-engineering, one designs a machine to do something; in reverse-engineering, one figures out what a machine was designed to do. (Pinker, 1997, p. 21)

(What ‘designed to do’ refers to here is evolution, which is discussed in Chapter 8.)

According to Boden (1987a), the essence of intelligence is the ability to creatively manipulate symbols (or process information), given the requirements of the task in hand (e.g. mathematical tasks involve the processing of numerical information). But symbols themselves have no inherent similarity to what they symbolize, and represent things in a purely formal way. Therefore, in computer languages, symbols stand for whatever objects, relations, or processes we wish: the computer manipulates the symbols, not their meaning. Again, programs consist of rules for manipulating symbols, and don’t refer to anything in the world. According to CTM, all intelligent systems are symbol manipulators, including human minds. If symbols are meaningless to a computer, it follows that they’re also meaningless to a human mind. But, in that case, what’s the ‘meaning’ which, according to Boden, the human programmer assigns to the meaningless symbols?

BOX 7.11 The Chinese room (Searle, 1980)
• Suppose that I’m locked in a room and am given a large batch of Chinese writing (what the people giving it to me call a ‘script’); I know no written or spoken Chinese.
• I’m then given a second batch (a ‘story’) plus a set of rules, in English (the ‘program’), for correlating the two batches (which are merely formal symbols for me, i.e. I can identify them entirely by their shapes).
• I’m then given a third batch of Chinese symbols (‘questions’), plus some more English instructions; these enable me to correlate elements of this third batch with the first two batches and to give back particular Chinese symbols in response to particular third-batch symbols (‘answers to the questions’).
• After a while I get so good at following the instructions that, from the perspective of someone outside the room, my answers are indistinguishable from those of native Chinese speakers.
• However, all I’m doing is manipulating uninterpreted formal symbols and in this respect am simply behaving like a computer, i.e. performing computational operations on formally specified elements. I’m simply a realization of a computer program.


Searle’s (1980) famous ‘Chinese room’ Gedanken experiment (‘thought experiment’) is aimed at refuting CTM’s (and hence hard AI’s) equation of intelligence with symbol manipulation, and its reduction of human intelligence to ‘formal systems’.

Crucial to the AI debate is the philosophical notion of intentionality. Consciousness comprises a number of different mental states (including intentions, beliefs, desires, perceptions, wishes, fears, etc.), and what they have in common is that they all refer to things in the world apart from themselves (they all have ‘aboutness’). This is (part of) what we mean by saying that the world is meaningful and that we understand it. According to Searle, humans possess intentionality, while computers do not: the symbols a computer manipulates (or transforms or produces) have no meaning for the computer. The formal system which is imitated by every modern computer consists of pure syntax (a set of rules for manipulating symbols) and lacks any semantic content (reference to anything in the world). When we use and understand a language, we don’t just shuffle uninterpreted formal symbols (as in the Chinese room); we actually know what they mean. By definition, the program is syntactical, and syntax by itself is never sufficient for semantics (Searle, 1987). The Chinese room is intended to demonstrate precisely this point: manipulating symbols doesn’t – and cannot – amount to understanding.
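The purely syntactic rule-following Searle describes can be caricatured in a few lines of code. The ‘rule book’ entries below are invented placeholders: the program pairs input shapes with output shapes and, as Searle insists, nothing in it understands Chinese:

```python
# A toy version of Searle's rule book: a purely syntactic mapping from
# input symbol strings to output symbol strings. The entries are invented
# placeholders; the program 'answers questions' by shape-matching alone,
# with no access to what any symbol means.

RULE_BOOK = {
    "你好吗": "我很好",        # shapes in -> shapes out; no understanding involved
    "你会说中文吗": "会",
}

def room(question: str) -> str:
    """Return whichever symbol string the rules pair with the input shapes."""
    return RULE_BOOK.get(question, "请再说一遍")  # default: 'please say that again'

print(room("你好吗"))  # -> 我很好
```

To an outside observer the replies may look competent, yet the lookup involves nothing that could count as understanding; that gap between performance and comprehension is precisely what the thought experiment trades on.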

The ‘Chinese room’ and the Turing test

Searle is trying to refute the major methodological presupposition of strong AI, the Turing test (another thought experiment). This was proposed by Alan Turing (1950) as an objective way of deciding whether a computer could validly be said to have been programmed to ‘think’. The idea is that a human judge holds a three-way conversation with a computer and another person, and tries to tell them apart. The judge can see neither the person nor the computer, and can only communicate via a computer keyboard and screen. The judge has to decide which is which from their answers to his/her typed questions; if they cannot be distinguished, the computer (that is, its program) has passed the test.

Through the ‘Chinese room’, Searle is trying to show that the Turing test (or ‘imitation game’) isn’t the ultimate test of machine intelligence that supporters of strong AI have traditionally taken it to be. Turing himself predicted that by the year 2000 computers could be programmed to play the imitation game so well that the judge wouldn’t have more than a 70 per cent chance of making the correct identification after five minutes of questioning. Clearly, the Turing test represents an operational, behaviouristic definition of ‘thinking’: an appropriate kind of ‘output’ or performance, regardless of what may be going on ‘inside’. Yet this is precisely the definition that the Chinese room is aiming to show is invalid. However, Richard Gregory (1987) argues that the Chinese room doesn’t show that computer-based robots, for example, cannot be intelligent like we are: even we wouldn’t become intelligent if we were raised in such a restricted and artificial environment. According to Ford and Hayes (1998), the central limitation of the Turing test is that it’s species-centred: it assumes that human thought is the ultimate, highest form of thinking against which all others must be judged.
Most contemporary AI researchers explicitly reject the goal of the Turing test: rather than restricting the scope of AI to the study of how to mimic (or reproduce) human thinking, the proper aim is to create a computational science of intelligence itself, whether human, animal, or machine (Ford and Hayes, 1998).
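The blind, text-only protocol of the imitation game can be sketched as a toy harness. Everything here is an invented stand-in (a canned-reply ‘machine’, a scripted ‘person’, and a judge that exploits the give-away reply); it illustrates the procedure, not a serious attempt to pass the test:

```python
import random

# A toy harness for Turing's 'imitation game'. Both 'respondents' are
# invented stand-ins; the point is only the protocol: the judge sees
# answers from hidden channels 'A' and 'B' and must name the machine.

def machine(question: str) -> str:
    # A deliberately weak program with a give-away canned reply.
    return "I would rather not say."

def person(question: str) -> str:
    # Stand-in for the human respondent; in a real test a person types here.
    return f"Let me think about {question!r} for a moment."

def imitation_game(questions, judge) -> bool:
    """One round: the judge sees only transcripts labelled 'A' and 'B',
    and must name the machine's channel. Returns True if the judge is right."""
    respondents = [machine, person]
    random.shuffle(respondents)            # hide who is behind each label
    channels = dict(zip("AB", respondents))
    transcript = {label: [r(q) for q in questions]
                  for label, r in channels.items()}
    return channels[judge(transcript)] is machine

def judge(transcript):
    """Spot the machine by its canned reply; returns 'A' or 'B'."""
    for label, answers in transcript.items():
        if all(a == "I would rather not say." for a in answers):
            return label
    return "A"

# This weak program never fools the judge; Turing's prediction was that by
# 2000 a good program would be misidentified almost as often as not.
print(all(imitation_game(["What is a sonnet?"], judge) for _ in range(100)))  # -> True
```

The random channel assignment is what makes the test ‘blind’: the judge can rely only on the answers themselves, which is exactly the operational, performance-based definition of thinking the text describes.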

The computer analogy: humans and computers as information processors

It could be argued that the CTM represents the computer analogy taken to its logical conclusion. But while the CTM sees the nature of the human brain as being irrelevant to our intelligence (it’s symbol manipulation that matters), for others the brain is absolutely crucial in trying to explain human intelligence.

• Try to identify some of the similarities and differences between computers and human beings as information processors.
• Try to think of some limitations of the computer analogy.
• Can computers ever be like brains?

According to Lachman et al. (1979), computers take a symbolic input, recode it, make decisions about the recoded input, make new expressions from it, store some or all of the input, and give back a symbolic output. By analogy, most Cognitive Psychology is about how people take in information, recode and remember it, make decisions, transform their internal knowledge states and transform these into behavioural outputs. According to Parkin (2000), the parallels between human beings and computers as information processors are compelling. Some of his specific examples are given in Box 7.12.

BOX 7.12 Some similarities between humans and computers as information processors
• Computers operate in terms of information streams, which flow between different components of the system. This is conceptually similar to how we assume symbolic information flows through human information channels (as in the MSM; see text above).
• All computers have a central processing unit (CPU), which carries out the manipulation of information. At the simplest level, a central processor might take a sequence of numbers and combine them according to a particular rule in order to compute an average. Many Cognitive Psychologists see this as comparable to how people perform the same operation.
• Computers have databases and information stores, which are permanent representations of knowledge the computer has acquired. In many ways, this is comparable to our (permanent) LTM.
• Information sometimes needs to be held temporarily while some other operation is performed. This is the job of the information buffer, a feature of computers and information-processing models of human attention (see Gross, 2015) and memory (see text above).
(Based on Parkin, 2000)
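Lachman et al.’s take-in/recode/decide/store/output cycle, and the parallels Parkin draws, can be sketched as a toy pipeline. This is an illustration only: every stage name and rule below is invented, and no claim is made that human processing literally works this way:

```python
# A toy input -> recode -> decide -> store -> output cycle, loosely
# following Lachman et al.'s description of a computer. All stage names
# and rules are invented for illustration; the point is only the flow of
# symbolic information between components.

store = []  # stands in for a long-term store

def recode(raw: str) -> list[str]:
    """Recode a raw symbolic input into internal tokens."""
    return raw.lower().split()

def decide(tokens: list[str]) -> str:
    """Make a simple decision about the recoded input."""
    return "question" if tokens and tokens[-1].endswith("?") else "statement"

def process(raw: str) -> str:
    tokens = recode(raw)                    # recode the input
    kind = decide(tokens)                   # make a decision about it
    store.append(tokens)                    # store some or all of it
    return f"{kind}: {len(tokens)} tokens"  # give back a symbolic output

print(process("Is memory a computer?"))  # -> question: 4 tokens
```

Each stage passes recoded symbols to the next, which is the sense in which information ‘flows between components’; whether anything comparable happens in people is, as the text stresses, an assumption of the analogy rather than an established fact.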

Pause for thought …
3 Based on Box 7.12, how valid do you consider the computer analogy to be?
4 How useful in general do you consider analogies in Psychology to be?


Perhaps surprisingly, although Pinker (1997) believes that thinking is computation, that doesn’t mean that the computer is a good analogy for the mind. Some of the most important reasons for his dislike of the analogy relate to his support of the principles of Evolutionary Psychology (see Chapter 8).

Can computers ever be like brains?

According to Modha (2014), an IBM researcher, current computers are very logical, sequential (serial), and quantitative. They’re based on a 70-year-old architecture that separates memory from processing, and works in a step-by-step fashion, executing a series of pre-written ‘if X, then do Y’ equations. They’re fast number-crunchers and can process a lot of data – but they don’t really think. In the brain, memory and computation are closely interconnected.

According to Rose (2003), the very concept of AI implies that intelligence is simply a property of the machine itself. However, the neuronal system of brains, unlike computers, is radically indeterminate:

Brains and the organisms they inhabit, above all human brains and human beings, are not closed systems, like the molecules of a gas inside a sealed jar. Instead they are open systems, formed by their own past history and continually in interaction with the natural and social worlds outside, both changing them and being changed in their turn. (p. 101)

This is consistent with the view that for a computer to ‘think’ and be conscious (i.e. display intentionality), it would have to be equipped with a sensory apparatus (such as a camera); this would enable it to see the objects represented by the symbols it manipulates (Harnad, in Raley, 2006). Harnad proposes a ‘Robotic Turing test’: to merit the label ‘thinking’, a machine would have to both pass the test and be connected to the outside world.

Brains are capable of modifying their structural, chemical, and physical output in response to environmental events (they display considerable plasticity); they’re also extraordinarily resilient in the face of injury, with undamaged parts taking over the function of damaged areas (they display great redundancy) (see Chapter 5). None of this can be said (at least, not yet) of computers.
Rose (2003) argues that brains process and remember information based on its meaning; this is very different from information in a computer sense (i.e. meaningless symbols). A critical difference between human and computer memory is that:

Every time we remember, we in some sense do work on and transform our memories, they are not simply being called up from store and, once consulted, replaced unmodified. Our memories are re-created each time we remember. (p. 104)

This, of course, is a way of acknowledging Bartlett’s theory of reconstructive memory (see above).

Conclusions: do we need a brain to be brainy?

As we’ve seen, for supporters of the CTM, intelligence is defined exclusively in terms of the manipulation of symbols according to specified rules. As Pinker (1997) puts it:


Information and computation reside in patterns of data and in relations of logic that are independent of the physical medium that carries them … a given program can run on computers made of vacuum tubes, electromagnetic switches, transistors, integrated circuits, or well-trained pigeons, and it accomplishes the same things for the same reasons. (p. 24)

But where does this leave the brain? Flanagan (1984), as well as others, finds it implausible that our evolutionary history, genes, biochemistry, anatomy, and neurophysiology have nothing essential to do with our defining features – even though it remains logically possible. He asks whether an inorganic device that operated according to all known laws concerning plants would be expected to undergo photosynthesis. Searle (1987) believes that mental states and processes are real, biological phenomena in the world, as real as photosynthesis, digestion, lactation, etc., and that they’re caused by processes going on in the brain: the intrinsically mental features of the universe are just higher-level physical features of brains. Penrose (1987) also argues that the actual construction of the brain (the hardware) is important – and not just the carrying out of some appropriate program (software).

Searle’s view has been dubbed carbon/protoplasm chauvinism (Torrance, 1986), i.e. his only basis for denying that computers/robots think is that they’re not made of flesh and blood. Gardner (1985) wonders whether he’s proposing that intentionality is a substance ‘secreted’ by the brain: if Searle is claiming that, by definition, only humans are capable of intentionality/consciousness, then the controversy is pointless and the Chinese room loses its force. To Boden (1993), it isn’t obvious that neuroprotein (‘that grey mushy stuff inside our skulls’) is the kind of stuff that can support intelligence, while metal and silicon aren’t.
A major problem is that we simply don’t know what makes the brain conscious and so we cannot design a conscious machine; merely looking at the brain’s structure reveals nothing about how it functions (Parkin, 2000). Still, the brain is a physical entity and people are conscious, so there must be some design features (presumably physical) which make consciousness possible (McGinn, 1987). While we know that the computer hardware doesn’t produce (initiate) the program, it’s highly probable that the brain does help to produce mental states (Teichman, 1988).

Pause for thought – answers

1 To regard empirical research in general, and the laboratory experiment in particular, as objective (as required by mainstream, positivist Psychology) involves two related assumptions: (1) researchers only influence participants’ behaviour (the outcome of the experiment) to the extent that they decide what hypothesis to test, how the variables should be operationalized, what design to use, and so on; and (2) the only factors influencing participants’ performance are the objectively defined variables manipulated by the experimenter. However, as primarily a social situation, a Psychology experiment involves both the experimenter and the participant bringing things to the experimental situation that aren’t directly/explicitly related to the experiment. Two major forms of influence on the outcome are:


(1) experimenter bias (e.g. Rosenthal, 1966; Rosenthal and Fode, 1963; Rosenthal and Lawson, 1964): this doesn’t involve the biased experimenter ‘mishandling’ the data, but the bias somehow creating a changed environment in which human participants or animal subjects (such as rats) actually behave differently; and

(2) demand characteristics (Orne, 1962): these refer to the sum total of the cues within the experimental situation that convey the experimental hypothesis to participants. People’s strong tendency to want to please the experimenter (and not ‘upset the experiment’) is what makes the experiment a social situation. The experimenter and participant are playing different but complementary roles, and for this to proceed smoothly, each must have some idea of what the other expects of him or her.

This relates to the notion of experimental control and the problem of internal vs. external validity (discussed in detail in Chapter 2). While using laboratory experiments, Bartlett regarded them as ‘emergency’ situations in which participants find a suitable response to simply ‘cope’ (1932). He argued that the individual is always an ‘individual-in-the-laboratory’, an ‘individual-in-his-everyday-working-environment’, or an ‘individual-in-a-given-social-group’ – and never a pure and simple individual.

2 Presumably, the white participants used a schema which included the belief that black men are prone to violence. We’d expect black participants to have a rather different schema of black men, making them less likely to distort the violence-related details of the picture. Allport and Postman’s findings are, clearly, consistent with Bartlett’s theory of reconstructive memory.
3 While the computer analogy can be helpful in this way, there’s a related risk involved, namely that we take it (too) literally: while it (as with all analogies) is saying ‘let’s consider the mind as if it were an information-processing machine with a central processor, long-term storage and so on’, there’s a tendency towards reification, that is, to turn ‘the mind’ (essentially an abstract summary term for a wide range of different skills and processes) into something physical/tangible (a ‘computing machine’). Generally, it’s easier to think about physical, concrete entities than ideas or abstract notions. This makes it more likely that ‘as if ’ will become ‘is’. Notice that in Box 7.12 Parkin states that the flow of information streams between different components is ‘conceptually similar to how we assume symbolic information flows through human information channels’. This is nothing more than an assumption, albeit one that may help us see things more clearly. 4 In general, analogies can be a useful way of trying to make sense of something by comparing it with something else, as when we try to 178


explain something complex and/or unfamiliar by comparing it with something more simple and/or familiar. Often, in everyday interaction, or in more formal situations (such as a consultation with a doctor or in a teaching situation), we draw on what we and others already know as a basis for making sense of something new.


Chapter 8
Humans as an evolved species: Evolutionary Psychology

Evolutionary Psychology is the most recent major school of Psychology, having come into its own only since the 1990s (Moghaddam, 2005). To appreciate and assess its contribution to our understanding of ourselves, we must trace its origins from Comparative Psychology (the study of animal behaviour), ethology (the study of animal behaviour in its natural habitat) and sociobiology (the systematic study of the biological basis of all social behaviour; Wilson, 1975). However, we must first consider what lay behind all of these disciplines, namely Darwin’s theory of evolution, a revolutionary, momentous, biological theory with huge implications for Psychology as a whole (Fancher and Rutherford, 2012).

Darwin’s theory of evolution

BOX 8.1 KEY THINKER: Charles Darwin (1809–1882)
• Darwin was born in Shrewsbury, England, into a wealthy and distinguished family. His father, Robert, was an eminent physician, and his mother came from the Wedgwood chinaware-producing family.
• His grandfather, Erasmus, had been one of the most famous intellectual figures of the day: doctor, inventor, poet, and general man of science (including the formulation of an early evolutionary theory).
• While not excelling at school and disliking the then-standard classics curriculum, he displayed a strong curiosity and love of nature, spending hours observing, collecting, classifying, and experimenting. He was also very popular with almost everybody.

Figure 8.1 Charles Darwin.

• At 16, he was sent to Edinburgh to study medicine. But this didn’t work out and he soon moved to Cambridge University to train as an Anglican clergyman. He graduated in 1831 with an ordinary/pass (i.e. non-Honours) degree.
• During all of these changes, his passion for natural history never diminished, and the opportunity arose to take the post of naturalist on HMS Beagle. The planned two-year voyage stretched to five, aiming initially for the east coast of South America, followed by the west coast, before finally returning home via the Galapagos Islands, Tahiti, New Zealand, Australia, and the Cape of Good Hope.
• Darwin made a number of geological discoveries, but it would be his biological discoveries that were to shape his future and, quite fundamentally, the future of Western science.
• He found and shipped home to England thousands of biological specimens (both fossilized remains of several large extinct creatures and living plants and animal species), many of which won immediate scientific recognition (partly by virtue of being previously unknown to science).
• Darwin habitually asked himself about the possible functional adaptiveness of all animal characteristics – both anatomy and behaviour – and the geographical distributions of species. For example, giant tortoises (‘galapagos’) showed slight but characteristic differences in the shapes of their shells depending on which particular island they lived on. Several populations of common brown finches differed only in the shape and size of their bills, again depending on which island they inhabited.
• HMS Beagle finally docked at Falmouth in 1836; his reputation had preceded him. He was elected to the Royal Society in 1839, and also married that year as well as publishing the edited journal from his voyage. Darwin was now a leading popular naturalist and travel writer.
• He is buried in Westminster Abbey, next to Newton: England’s two greatest and most influential scientists, side by side.

(Based on Fancher and Rutherford, 2012)

Darwin now began to deliberate on how the millions of different species that inhabit the earth originally came into existence. The widely held, orthodox, Church of England view was the argument from design (what we’d now call creationism), with God being the designer-in-chief. His grandfather, Erasmus, had already promoted a rival theory of gradual evolution, and the French zoologist Jean-Baptiste Lamarck (1744–1829) had proposed that species evolve and change as the result of inheriting bodily changes produced by the voluntary exercise or disuse of particular organs. Neither view of evolution was taken very seriously as an alternative to the design argument, and the very notion of evolution was still not ‘respectable’. But Darwin believed that the idea of evolution (or ‘transmutation’) of species deserved to be acknowledged. Animal breeders had long been producing strikingly different varieties or breeds of domestic animals by careful selection of parental stock over several generations. But what was the mechanism by which countless stable species emerged in nature?



BOX 8.2 Evolutionary theory and the Malthusian cycle
• Like all scientific theories, Darwin’s theory of evolution is a product of (nineteenth-century) Western culture. The enormous diversity of the samples he collected raised many questions about how and why life forms took the shapes they did. The answers Darwin gave were in fundamental ways shaped by the cultural climate of his era.
• Britain was undergoing enormous economic, political, and social transformation. Starting in the 1750s, the modernization of farming forced hundreds of thousands of people from the countryside to seek work in the new industrial urban centres. Working and living conditions were mostly awful, but the population doubled between 1800 and 1830.
• Dramatic increases in the numbers of poor people sparked fierce debates about government welfare policies. One group, led by Harriet Martineau, argued that government support for the poor would encourage them to have more children, only increasing the burden on taxpayers.
• Martineau and others of her political persuasion found scientific support in the writings of Thomas Malthus (1766–1834), an economist and priest. He argued that most human beings are destined to live in poverty because their capacity to increase population greatly exceeds their capacity to increase food production. So, a cycle of population increase and famine, followed by population decline and relative prosperity, is inevitable (the ‘Malthusian cycle’); this was consistent with the thinking of major economists of the time.
• Even the economist Adam Smith (1723–1790), author of The Wealth of Nations, was pessimistic about the plight of the masses in the newly industrialized society. Despite his belief that the division of labour and free market competition (‘capitalism’) would create greater wealth, he also believed that wages would remain at a minimal level: workers had little power over factory owners, who had an endless pool of workers to choose from.
• The Malthusian cycle directly implied that in human societies there was fierce competition for survival. Indeed, in any species, according to Darwin, countless individuals will be conceived over many generations, but only a proportion of them will survive the demands of their environment in order to be able to reproduce. Survivors will disproportionately tend to be the best adapted to overcoming the dangers of their respective environments; if their adaptive characteristics (what enabled them to survive) are capable of being passed on to their offspring (are inheritable), then the offspring will also tend to have them and so survive in order to pass them on to their own offspring. Here was a possible mechanism for Darwin’s evolutionary theory.

(Based on Fancher and Rutherford, 2012; Moghaddam, 2005)
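Malthus’s argument is, at bottom, arithmetical: a population that multiplies by a constant factor each generation must eventually overtake a food supply that grows only by a constant increment. A minimal sketch of this logic (the doubling rate, the fixed increment, and the function name are illustrative assumptions, not figures from Malthus):

```python
def generations_until_famine(population, food, growth_factor=2, food_increment=100):
    """Count generations until a geometrically growing population
    first exceeds an arithmetically growing food supply."""
    generations = 0
    while population <= food:
        population *= growth_factor  # geometric: 2, 4, 8, 16, ...
        food += food_increment       # arithmetic: +100 per generation
        generations += 1
    return generations

# However generous the initial food surplus, geometric growth overtakes it.
print(generations_until_famine(population=2, food=1000))    # → 10
print(generations_until_famine(population=2, food=100000))  # → 16
```

A hundredfold larger surplus buys only six extra generations – the qualitative point behind the ‘Malthusian cycle’.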

According to Moghaddam (2005), Darwin’s theory of evolution can be ‘boiled down’ to six basic tenets:

1 Members of any species produce new members in excess of the actual numbers that can survive, given limited resources.
2 This results in a fierce competition for survival.
3 Within all species, individuals differ on a variety of characteristics.



4 Different characteristics will increase or decrease the chances of survival.
5 Individuals with more adaptive characteristics are more likely to reproduce and so pass these characteristics on to future generations.
6 Thus, there will be a change in the characteristics of species towards the most advantageous variants.

As we noted earlier, the very idea of evolution was likely to be met with hostility, especially with respect to its implications for the role of human beings in nature:

The literal Bible story placed humanity in a category separate from animals – formed on the sixth and final day of creation, in God’s own image, and granted dominion over the rest of the earth’s inhabitants. But Darwin recognized that human beings, with their evident anatomical similarities to many animals, would logically have to be included in any consistent evolutionary system. (Fancher and Rutherford, 2012, p. 247)
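The mechanism Moghaddam’s six tenets describe can be captured in a toy simulation (the population size, the 10 per cent survival advantage, the starting frequency, and the seed are all illustrative assumptions, not part of Darwin’s account): a variant that only slightly improves survival chances comes to dominate the population within relatively few generations.

```python
import random

def simulate_selection(pop_size=1000, advantage=0.1, generations=100, seed=1):
    """Toy model of tenets 1-6: track the frequency of a variant that
    slightly raises its carrier's chance of surviving to reproduce."""
    random.seed(seed)
    # Tenet 3 (variation): start with the variant in half the population.
    population = [True] * (pop_size // 2) + [False] * (pop_size - pop_size // 2)
    for _ in range(generations):
        # Tenets 1-2: more offspring than resources allow, so survival is a
        # weighted lottery; tenets 4-5: carriers get slightly better odds.
        weights = [1.0 + advantage if carrier else 1.0 for carrier in population]
        population = random.choices(population, weights=weights, k=pop_size)
    # Tenet 6: the population's characteristics shift towards the variant.
    return population.count(True) / pop_size

print(simulate_selection())  # close to 1.0: the advantaged variant takes over
```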

Darwin and Psychology

On the Origin of Species by Means of Natural Selection was published in 1859. Although it dealt almost exclusively with plants and animals, a fierce debate quickly ensued over its implications regarding human beings: are we God’s special creation or are we ‘descended from the apes’? Darwin’s chief supporter was Thomas Henry Huxley (1825–1895), an expert on primate anatomy. This, together with the discovery of the skulls and stuffed bodies of gorillas in 1861 (previously unknown to Western science) and other fossil finds, helped make evolutionary theory widely accepted within the scientific community.

Although the Origin of Species largely ignored the implications of Darwin’s theory for human beings, he did suggest that human mental qualities would eventually be understood as the result of evolution. Others were left to consider this prospect, including his cousin, Francis Galton (see Chapter 11). Darwin eventually published three seminal works devoted to humans: The Descent of Man and Selection in Relation to Sex (1871), The Expression of the Emotions in Man and Animals (1872), and ‘A Biographical Sketch of an Infant’ (1877).

In the first of these, he claimed that humans have descended from animal ancestors. He noted the structural, anatomical similarities between humans and the higher animals, including the brain (‘the most important of all the organs’), and that human embryological development involves passing through stages that closely resemble other species. He argued that ‘there is no fundamental difference between man and the higher mammals in their mental faculties’ (1871, p. 66). For example, dogs clearly experience many of the same emotions as humans, including jealousy, pride, shame, and even a rudimentary sense of humour (but see Chapter 2). Animals also demonstrate memory, attention, curiosity, and even imagination (based on their apparent dreaming). They even show basic reason, as when they learn from experience and communicate with each other. He concluded that:

The difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind. (1871, p. 126)



• What kind of difference is Darwin describing here – and what is the opposite?
• How was Watson, the founder of Behaviourism, influenced by Darwin’s claim regarding human–animal differences?

As we saw in Chapter 1, one of the central features of Watson’s (1913) ‘Behaviourist manifesto’ related to the quantitative (as opposed to qualitative) difference between humans and non-humans such as rats, cats, and dogs: the same basic principles of conditioning apply to all species, with the simpler ones being easier than the more complex ones to study scientifically (see Box 8.8).

Race and gender

Also in The Descent of Man, Darwin touched on two topics, race and gender, that have proved controversial ever since. As Fancher and Rutherford (2012) observe, the Victorian era in Britain was characterized by extreme views regarding race and the causes of ethnic differences. According to the polygenists, non-European ‘savage’ peoples represented a distinctly different species of being, while the monogenists believed in the common ancestry and relatedness of all human groups. However, the monogenists also proposed widely varying explanations of racial differences, ranging from cultural/environmental to a belief in descent from Noah’s son, Ham.

Darwin was clearly a monogenist, arguing that environmental and educational factors were extremely influential in producing individual differences; slavery, in particular, had horrible effects on its victims. At the same time, he believed that the widely varying environmental conditions that different groups experienced would inevitably create differing selection pressures among them: such pressures might help account for the evolution, over time, of slightly different races or ‘sub-species’ (1871, p. 608; see Box 8.3), as in the case of dark skin colour being a natural adaptation to more direct exposure to sunlight. While this may in itself be a perfectly ‘innocent’ account of racial differences, Fancher and Rutherford (2012) point out that:

An elaboration of this theory held that the struggle for survival in harsh northern climates promoted the development of inventiveness and creativity and accounted for a presumed intellectual superiority of the so-called Nordic races. (p. 253)

This idea was more strongly supported by some of Darwin’s contemporaries and followers than by Darwin himself, although he never explicitly repudiated it. According to Shields and Bhatia (2009, p. 113), Darwin’s ambiguity (or ambivalence?)
regarding evolution and race is less significant than the fact that his theory ‘was stripped of nuance and appropriated [by others] to serve a scientific racism that aimed to prove comparative differences in mental abilities’. What has become the ‘race and IQ debate’ is discussed in Chapter 11.

Pause for thought …
1 Do you consider Darwin to have been a racist? What do you understand by the term ‘race’? Does it have a biological reality or could it be a social construction?



BOX 8.3 The concept of race
• The classification of people into different racial types based on physical appearance has a long history in Western culture.
• Darwin’s theory of evolution introduced the concept of ‘race’, based on his description of numerous ‘races’ within each animal species. Human beings as a whole constitute a ‘species’ with fertile mating within it; however, individual (human) ‘races’ represent ‘varieties’ or ‘sub-species’, each being partially isolated reproductively from the others (Banton, 1987).
• According to this model, Nordics (Northern Europeans) and Africans, for example, have maintained their distinctiveness because mating was predominantly within each group (possibly for geographic reasons).
• Although the Darwinian idea of race as a subspecies promoted the concept of geographical race, it didn’t exclude the view that races may become separate types: a subspecies may evolve to the point where it’s no longer able to interbreed with other subspecies.
• According to Fernando (1991), Darwin’s view was flexible and egalitarian compared with that of certain of his contemporaries, such as Robert Knox. Knox was a doctor and teacher of medical students at Edinburgh and a strong and extremely influential propagandist for racial theories in Britain (Banton, 1987).
• In The Races of Men (1850), Knox maintained that external characteristics (mainly skin colour) reflected internal ones (such as intelligence and capacity for cultural pursuits): dark races were generally inferior and ‘hybrids’ (‘mixed-race’) were eventually sterile. The view of race as a human type had been taken to an extreme to produce the concept of humankind being divisible into ‘pure’ races that don’t mix (Fernando, 1991).
• In The Descent of Man, Darwin seemed to have succumbed to the mood of his times: he talked of the likely extinction of ‘savage races’ because of their inability to change habits when brought into contact with ‘civilized races’. He then joined Galton in calling for eugenic measures to maintain the integrity of the latter (Banton, 1987; see Chapter 11).
• Twentieth-century genetics at first described different races in terms of blood types, but this proved unreliable. More recently, scientific advances have enabled geneticists to identify human genes that code for specific enzymes and other proteins. But the genetic differences between the classically described races (European, Indian, African, East Asian, New World, and Oceanian) are, on average, only slightly higher (10 per cent) than those that exist between nations within a racial group (6 per cent), and the genetic differences between individuals within a population are far greater than either of these (84 per cent) (Fernando, 1991). Race defined as ‘genetically discrete groups’ doesn’t exist (Bamshad and Olson, 2003).
• Nevertheless, the traditional view persists that people resembling each other in obvious physical ways belong to a ‘race’ that represents a genetically distinct human type. Anthropologists, biologists, and medical people are all guilty of perpetuating this myth, despite the widely held belief that ‘race’ has lost its scientific meaning. For example, two people of different ‘races’ can have more genes in common than two ‘members’ of the same ‘race’ (Bamshad and Olson, 2003).
• According to Richards (1996), race is a social, not a biological, category. Similarly, Wetherell (1996) argues that:

‘Race’ is a social rather than a natural phenomenon, a process which gives significance to superficial physical differences, but where the construction of group divisions depends [on] … economic, political and cultural processes. (p. 184)

• Wetherell points out that many writers prefer to put quotation marks around ‘race’ to indicate that we’re dealing with one possible social classification of people and groups, rather than an established biological/genetic ‘reality’.

In The Descent of Man, Darwin introduced the concept of sexual selection, a variant of natural selection, according to which characteristics that are specifically favourable to reproductive success are gradually selected for and evolve. Within particular species, females and males prefer certain qualities in their mates and choose them accordingly; those individuals displaying such qualities (or displaying them in an extreme or exaggerated way) are more likely to mate, which makes the genes (not Darwin’s term because genetics post-dates his theory) that determine those qualities more likely to be passed on and survive. Classic examples are the spectacular colours, ornamentation, and mating displays of male birds (such as the peacock’s feathers). These characteristic female–male differences are collectively called sexual dimorphism.

Darwin also believed that sexual selection had influenced human evolution, resulting in some characteristic mental as well as physical differences. While attributing some positive mental qualities to women (such as tenderness and unselfishness), Darwin was quite clear where he stood regarding intellectual differences: like most of his Victorian male contemporaries, he simply assumed that men were intellectually superior. According to his complementarity hypothesis, men and women have evolved separate but mutually necessary characteristics. According to the variation hypothesis, males have been modified through evolution to a greater extent than females, resulting in greater variability among males than among females. While Darwin himself didn’t apply this latter hypothesis to intellectual differences, others subsequently did, asserting that a large population of men will contain more extreme cases of extremely high intellectual ability (counterbalanced by more cases of extreme stupidity) than a comparably sized group of women, who’ll be more closely clustered around the mean. Here was a potential ‘explanation’, of course,

for the great preponderance of males among the eminent figures in history, as well as rationalization for restricting education for the presumably highly ‘gifted’ to boys and young men. (Fancher and Rutherford, 2012, p. 255)
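The statistical logic of the variation hypothesis – as distinct from its empirical truth – can be illustrated with two normal distributions that share a mean but differ in spread (the means, spreads, and cut-off below are purely illustrative assumptions):

```python
from statistics import NormalDist

wider = NormalDist(mu=100, sigma=16)     # hypothetical higher-variance group
narrower = NormalDist(mu=100, sigma=12)  # hypothetical lower-variance group

cutoff = 140  # an arbitrary 'extreme' threshold
tail_wider = 1 - wider.cdf(cutoff)
tail_narrower = 1 - narrower.cdf(cutoff)

# Same average, yet the wider distribution supplies many times more members
# beyond the cut-off - and, by symmetry, the same excess at the low tail.
print(tail_wider / tail_narrower)
```

Nothing in this arithmetic supports the Victorian inference itself; it only shows why, if one group really were more variable, both tails would be over-represented.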

Comparative Psychology

As we noted above (and see Chapter 6), Watson was greatly influenced by Darwin’s account of the continuity between species, whereby humans are only more complex animals than, say, rats. This makes them comparable in a way that hadn’t been accepted previously: Darwin gave the study of animals a new impetus and significance.



Shortly before his death, Darwin granted full access to his notes on animal behaviour to a younger friend, George Romanes (1848–1894). Romanes combined Darwin’s notes with details of his own research and published them in two ground-breaking books, Animal Intelligence (1882) and Mental Evolution in Animals (1883). Romanes described his work as Comparative Psychology: studying the similarities and differences among various nonhuman species’ psychological functions could shed light on their human counterparts (by analogy with comparative anatomy, i.e. the study of similarities and differences in physical structures). This brief definition of Comparative Psychology corresponds to one of four uses to which animal behaviour research has been put (Richards, 2010), namely ‘to trace the evolutionary roots of human behaviour’ (p. 234). According to Richards, belief in an evolutionary continuum between human and animal behaviour quickly generated an interest in ‘the animal in the human’ – how far human behaviour retains pre-human ancestral features. This, in turn, generated elaboration of the concept of ‘instinct’.

BOX 8.4 The changing face of ‘instinct’
• ‘Instinct’ refers to an in-built determinant of behaviour that has evolved over time; traditionally, instincts were opposed to environmental influences. The Behaviourists represented the polar opposite view, namely that behaviour is almost totally the product of environmental influences (a false dichotomy?).
• Inspired by Darwin’s evolutionary theory, William James was interested in applying Darwin’s ideas to human psychology. In particular, he described instincts (such as fear, love, and curiosity) as driving forces of human behaviour, proposing that:

Nothing is commoner than the remark that man differs from the lower creatures by the almost total lack of instincts and the assumption of their work by reason. (James, 1890, p. 389)

• However, he went on to add that human behaviour might be characterized by more instincts rather than fewer compared with other animals. This idea has been embraced by modern Evolutionary Psychologists. For example, in rejecting Watson’s tabula rasa (‘blank slate’) view, Pinker (1997) claims that humans have an instinct for learning: learning is made possible by innate machinery designed to do the learning (see text below).
• The concept of instinct has been criticized for being too imprecise to be of scientific value. If the only evidence of an instinct’s existence (I) is the display of certain behaviour (B), then to describe B as determined by I involves circular reasoning; what we need is independent evidence that I exists.
• A related limitation of the concept is that it sounds deceptively like an explanation for behaviour, when, in fact, it’s nothing more than a label or description.
• Also, many so-called instinctive behaviours can be modified by experience; this makes it difficult to see where an instinct ends and learning begins.
• During the 1930s, ethologists, such as Konrad Lorenz (1903–1989), Nikolaas (Niko) Tinbergen (1907–1988), and Karl von Frisch (1886–1982) (who shared the 1973 Nobel Prize in Physiology or Medicine), combined evolutionary/functional explanations with causal explanations. Unlike Comparative Psychologists before them, ethologists observed animal behaviour in its natural habitat (i.e. in the environments in which it evolved) (see text below).
• Growing out of developments in ethology during the 1960s and 1970s, sociobiology represents a further extension of Darwin’s evolutionary theory. Edward O. Wilson (1975) defined it, in Sociobiology: The New Synthesis, as the systematic study of the biological basis of all social behaviour. ‘Instinct’ is now rarely used; instead, controversy and debate centred around whether or not there’s such a thing as ‘human nature’.
• Recent alternatives to ‘instinct’ are ‘drive’ and ‘program’. Midgley (1995) distinguishes between two kinds of instinct: (1) closed instincts are behaviour patterns that are genetically fixed in every detail (‘learning is just maturation’ (p. 51)); and (2) open instincts are ‘programs with a gap’ (p. 51): parts of the behaviour pattern are innately determined, but others are left to be filled in by experience.

(Based on Richards, 2010; Workman and Reader, 2008)

• What do you understand by the term ‘instinct’?
• How might instincts and environmental factors be related?

Ethology

As we noted in Box 8.4, ethologists gave ‘instinct’ a more precise meaning by introducing causal (i.e. environmental) explanations (combining them with biological ones). According to Hinde (1982), when ethologists consider any class of behaviour, they’re concerned with four issues:

1 What immediately causes it. Specific stimuli called sign stimuli (or releasers) trigger instinctive patterns of behaviour, some of which are called fixed action patterns (FAPs). An innate releasing mechanism (IRM) is a hypothetical mechanism within the nervous system which mediates between the sign stimulus/releaser and the FAP.
2 How such behaviour has developed over the course of the animal’s life-cycle (ontogeny).
3 What the useful consequences of such behaviour are (its function).
4 How the behaviour has evolved within the species (phylogeny).

Tinbergen’s (1951) The Study of Instinct represents one of the key landmarks in the history of ethology. Essentially, an instinct is an inherited behaviour pattern which is common to all members of a species (and so is often used synonymously with species-specific behaviour); it is innate (inborn and unlearned) and stereotyped (i.e. it takes the same form every time it’s displayed).

Arguably the best-known example of ethological research is Lorenz’s (1935) study of imprinting in goslings. Imprinting is a genetically determined learning process which occurs in a young bird soon after hatching, when following a moving object; it learns the object’s characteristics, such that it can recognize it and so becomes attached to it. In the wild, the mother is usually the first moving object the bird will see, and attachment is manifested as a tendency to follow her: so following is both a cause and effect of imprinting.



BOX 8.5 Konrad Lorenz (1903–1989): ethologist and Nazi sympathizer
• In the 1973 Nobel Prize awards ceremony address, ethology was identified as an important new science: not only did it enhance our understanding of lower organisms (insects, birds, etc.), but it also had a far-reaching influence on ‘social medicine, psychiatry, and psychosomatic medicine’. It could potentially provide a new approach to understanding the human condition (an anthropic shift: Burkhardt, 2005).
• Lorenz, an Austrian, was perhaps the least reluctant of his fellow ethologists to make the anthropic shift; indeed:

His enthusiasm for the task had some pronounced political implications and his involvement with and contribution to the ideology of National Socialism trailed him for the latter half of his career. (Heneghan, 2012, p. 1)

Figure 8.2 Konrad Lorenz.

• Lorenz greatly welcomed the Anschluss (the unification of Austria and Germany in 1938). In 1940, he published a paper in which he hypothesized that both the domestication of animals and, by analogy, of people living in civilized conditions (especially large cities) produced deficiencies compared with wild forms of those species. The domesticated forms are generally uglier and more pathological.
• In another 1940 paper, Lorenz argued that Darwinian thinking served as a basis for Nazi ideology (National Socialism) because of its emphasis on race as a biological factor (see Box 8.3). (In his insistence on race as the proper evolutionary unit, Lorenz is out of step with contemporary understanding of evolutionary processes, which emphasizes the individual or the gene as the basis for natural selection; see text below.)
• During the war, Lorenz was posted to Poland, where he became a military Psychologist; part of his role was to help determine the ‘Germanizing potential’ of the local population. One conclusion he reached, based on a battery of psychological tests, was that children of German–Polish marriages were likely to lose the best qualities of both races.
• After the war, Lorenz tried to ‘muddy the waters’ of what he’d done during it, claiming that he was frightened (by the prospects of domestication) into expressing his thoughts using the worst of Nazi terminology.

(Based on Heneghan, 2012)



Pause for thought …
2 Given what we know about Lorenz’s views regarding National Socialism, how should we regard – and react to – his scientific contribution (i.e. his work as an ethologist)? Assuming that we can separate these two things, is there a case for omitting him from textbook accounts of ethology altogether? If not, are textbook authors morally bound to point out his political views?

Lorenz and aggression

According to Heneghan (2012), Lorenz’s work remains important because of his consistent attempts to extend the insights of his science to the human condition. On Aggression (Lorenz, 1966) probably represents the most famous and comprehensive ethological account of human aggression. While based largely on the study of insects and fish, it defined aggression as ‘the fighting instinct in beast and man which is directed against members of the same species’ (Lorenz, 1966, p. 3).

In non-humans, aggression is basically constructive/adaptive, under the control of rituals (such as appeasement gestures), but in humans it has become distorted (it’s no longer under ritualized control). However naturally aggressive humans may be as a species compared with other species (and Lorenz argues that they are naturally highly aggressive), their superior brains have enabled them to construct weaponry; this removes combat from close-up, eye-to-eye situations, making appeasement gestures, for example, less effective. According to Lea (1984), human technology enables our intentions to override our instincts. Surely our technology is a part of our cultural environment, in which case Lorenz seems to be arguing that non-biological influences prove greater than our biology.

Many Psychologists and others argue that cultural influences are far more important determinants of human aggression than biological factors; whatever potential for aggression we may have inherited as a species, it’s culturally overridden and re-packaged into forms which fit current circumstances. In most cases, cultural forces teach or support non-aggression, but when prosocial aggression is required (such as disciplining children and wrong-doers, assertiveness, self-defence, and even warfare), cultural processes teach and sustain it.

Sociobiology

According to Workman and Reader (2008), the term ‘sociobiology’ had been around for at least 20 years before the 1975 publication of Wilson’s landmark Sociobiology: The New Synthesis (see Box 8.4). It attempts to understand all types of human and non-human social behaviour (including altruism, aggression, dominance, and sexual behaviour) in evolutionary terms. Hinde (1982) regards Wilson’s book as a landmark in biology, integrating population biology, ecology, ethology, and related disciplines, helping to bring evolutionary theory and behavioural biology together.

However, Hinde is also very critical of Wilson’s claim that, eventually, sociobiology would engulf ethology and Comparative Psychology and that behaviour would be reduced to neurophysiology and sensory physiology. By definition, social behaviour takes place between two or more individuals – and so cannot be reduced to what determines the behaviour of any one individual in isolation. In the rest of this section, we shall focus on altruism to illustrate what sociobiological explanations look like and what their limitations might be.


Humans as an evolved species

The case of altruism

- What do you understand by the term ‘altruism’?
- Are there different types of altruism (relating to humans and non-humans, respectively)?
- Is the concept compatible with Darwin’s theory of natural selection?

Rabbits commonly bang their feet on the ground as a warning to other rabbits of some threat or danger. This illustrates biological altruism, as distinct from psychological altruism; these roughly apply to non-human species and human beings respectively. We wouldn’t normally attribute altruistic motives/intentions to the rabbit which warns its fellow rabbits of an approaching hunter, unlike the human kidney donor: we’d normally assume the decision to donate to be based on a number of considerations and values. While the rabbit’s foot-banging is part of its biologically determined repertoire of behaviour (in response to certain environmental conditions), there’s certainly no necessity or inevitability about kidney donation: it may, however, arouse strong feelings and raise moral, religious, and practical questions. For these reasons, we do usually infer altruistic motives and intentions from altruistic acts.

Having made this distinction, we need to ask if (taking the foot-banging example) rabbits are as altruistic (i.e. unselfish) as they seem. According to Brown (1986), the biological world abounds in examples of altruism, defined in terms of the prospects for survival and reproductive success of the altruistic organism relative to the ‘beneficiaries’ of the altruistic behaviour. By drumming its feet on the ground, the altruistic rabbit increases the chances of other rabbits escaping and, ultimately, producing offspring, while at the same time reducing its own chances (by, for example, drawing attention to itself or wasting valuable seconds in making its own escape). Perhaps the ultimate in altruism is displayed among bees, wasps, and ants, in which specialized castes of workers or soldiers are produced which are completely sterile: sterile worker bees, for example, forgo their ability to reproduce through helping their mother, the queen, to do so (Trivers and Hare, 1976). However, aren’t animals naturally selfish according to Darwin’s evolutionary theory?
This produces the paradox of altruism, which is described in Box 8.6.

BOX 8.6 The paradox of altruism

- From the perspective of Darwin’s theory of natural selection, it’s truly remarkable for members of a species to help each other in the ways described in the text above – and quite the opposite of what could be considered ‘natural’. Indeed, Wilson saw this as ‘the central theoretical problem of socio-biology; how can altruism, which by definition reduces personal fitness, possibly evolve by natural selection’ (1975, p. 3).
- As we’ve seen, individual animals survive if they can adapt to their environment by virtue of physical (and behavioural) characteristics (produced by random genetic variation or mutations). These better-adapted individuals will, on average, produce more offspring and, since those offspring will tend to carry the genes for those adaptive characteristics, those genes and characteristics become increasingly more commonplace within the population. In this way, animal populations become differentiated, and when different strains become so different that they can no longer interbreed (because their genotypes, i.e. their genetic make-up, are too dissimilar), a new species has evolved.
- This process of natural selection, therefore, ‘operates single-mindedly and relentlessly in favour of traits that improve the chances of survival and the number of offspring of the individual animal acting’ (Brown, 1986, p. 90; emphasis added).
- But isn’t this the complete reverse of what happens when animals act altruistically? Such animals wouldn’t last long enough to reproduce successfully!
- Natural selection predicts that individuals will act to benefit themselves – and not their group or species.
- The ‘paradox of altruism’ refers to this apparent contradiction between Darwin’s theory of natural selection and observed altruistic behaviour in several species.

This then raises the question: is it possible for an animal to behave altruistically and in accordance with the laws of natural selection at the same time? The answer is ‘yes’, because altruism turns out to be only apparent: altruistic behaviour is in fact selfish behaviour in disguise. In order to understand this, we need to shift attention away from the individual, self-contained organism (e.g. the individual rabbit) to the gene as the fundamental unit of evolution (Roediger et al., 1984). This is the approach of sociobiology: the selfish gene as opposed to the selfish rabbit.

Can genes be selfish?

The most general explanation of apparent altruism is Hamilton’s (1964) theory of kin selection. If we think of an individual animal as a set of genes rather than as a separate, ‘bounded’ organism, then it should be regarded as distributed across kin: it shares some proportion of its genes with relatives, according to how closely related they are. It follows that an individual can preserve its genes through self-sacrifice: if a mother dies in the course of saving her three offspring from a predator, she will have saved 1.5 times her own genes (since each offspring inherits one half of its mother’s genes). So, in terms of genes, an act of apparent altruism can turn out to be extremely selfish: surrendering your own life as an individual may reap a net profit as measured by the survival of your genes in your relatives. This means that individuals are selected to act not to maximize their own fitness (their own survival and reproductive success), but to maximize their inclusive fitness (their own survival and reproductive success plus that of relatives) (Hinde, 1982).

Perhaps taking these arguments to their logical conclusion, Dawkins, in The Selfish Gene (1976), argued that we are ‘survival machines’ – robot vehicles blindly programmed to preserve the selfish molecules known as genes:

Selfishness, whatever that might be, was not just transmitted by genes but also actually belonged to them. Genes were not only – as Wilson had said – the real scene of the process but also its only active agents. Humans and other animals were not agents but the genes’ helpless vehicles, though they were still, also, themselves selfish. (Midgley, 1995, pp. xvii–xviii; emphasis added)


As Pinker (1997) puts it:

People don’t selfishly spread their genes; genes selfishly spread themselves. They do it by the way they build our brains.… Our goals are subgoals of the ultimate goals of the genes, replicating themselves. (p. 44)

- What kind of theoretical/conceptual approach does Dawkins’ selfish gene argument represent?
- What does the selfish gene concept imply for the concept of ‘society’?
- Based on the distinction between biological and psychological altruism (see text above), how does Dawkins’ argument relate to human altruism?

Dawkins’ argument is clearly reductionist: all human and non-human animal behaviour is explained in terms of genes’ ‘need’ to replicate themselves. This renders human motives and intentions (and human consciousness as a whole) irrelevant and lacking any causal influence (i.e. free will doesn’t exist or is, at best, an illusion). (Interestingly, this is the same conclusion reached by the Behaviourists, in particular Skinner, albeit coming from a completely different theoretical position.) Dawkins could also be accused of anthropomorphism: he’s attributing human characteristics (selfishness) to non-human entities (genes); as Midgley (1995) puts it, he’s ‘personifying’ the genes (p. xvii).

This view of genes also implies that there’s no such thing as ‘society’: Sociobiologists have ridiculed the idea that groups of organisms might gain a survival advantage over other groups because they shared some beneficial characteristic. However, this focus on genes may not tell the whole story. A small but growing number of evolutionary biologists claim that it ignores crucial evolutionary processes at higher levels – among groups, species, and even whole ecosystems. For example, genes rarely act alone, but operate as part of networks of interacting genes; on this basis, it’s the network that’s selected, not the individual gene. Even Dawkins agrees that the idea of ‘species selection’ has some credibility. Natural selection may not only favour certain genes, but it can also favour particular societies. Provided a group of individuals can cooperate without any cheats trying to sneak an unfair advantage, then it may evolve as a single unit. Indeed, Darwin’s own solution to the altruism paradox was to suggest that natural selection operates among groups of organisms.
The Sociobiological explanation fails to make the fundamental distinction between biological (or evolutionary) and psychological (or vernacular) altruism (Sober, 1992); only higher mammals, in particular primates and, especially, human beings, display the latter. But are human beings capable of biological altruism?

BOX 8.7 Human beings and biological altruism

- Even if there were no evidence for psychological altruism (which there is: e.g. Batson, 1991; see Gross, 2015), trying to reduce human altruism to apparent (i.e. biological) altruism would prove tricky.
- The very meaning of ‘kinship’ differs within human contexts in a way that doesn’t apply within animal populations.
- According to Brown (1986), the closeness of kinship is constructed very differently in different societies: there’s no simple correspondence between perceived and actual (genetic) kinship.
- If altruistic behaviour directly reflected actual (‘objective’) kinship, rather than learned perceptions of kinship, it would be impossible for adoptive parents to give their adopted children the quality of care they do.
- As a species, much human behaviour is altruistic, and, as Brown says:

  Human altruism goes beyond the confines of Darwinism because human evolution is not only biological in nature but also cultural, and, indeed, in recent times primarily cultural. (Brown, 1986, p. 98; emphasis added)

- At the same time, biological altruism may be triggered under very specific conditions, such as highly arousing emergency situations. People often display a rapid, almost unthinking, reflexive type of helping (impulsive helping) in extreme situations (such as natural – and other kinds of – disasters).

Universal Darwinism and selfish memes

The idea of natural selection acting on genes isn’t the only way of applying evolutionary theory to the mind (Blackmore, 2010). Natural selection can be thought of as a simple algorithm (a set of precise rules/instructions that can predict a precise outcome from a known starting point). According to the principle of ‘Universal Darwinism’ (Dawkins, 1976), if you have variation, selection, and heredity, then you must get evolution (a simple, mindless algorithm); genes aren’t the only replicators, and natural selection shouldn’t be confined to biology. (Dawkins coined the term ‘replicator’ to refer to the information that’s copied with variations or errors: its nature influences its own probability of replication.)

Dawkins wanted to break people’s habit of thinking only about genes as replicators, and so proposed a new one. Whenever people copy skills, habits, or behaviours through imitating others, a new replicator is at work: a ‘meme’ (based on the Greek ‘mimeme’) is a unit of cultural transmission or imitation, such as tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches (Dawkins, 1976). Darwinism is another example. Memetics refers to the scientific study of memes (Blackmore, 1999). According to Blackmore (2010):

The whole of human culture can be seen as a vast new evolutionary process based on memes, and human creativity can be seen as analogous to biological creativity. On this view, biological creatures and human inventions are both designed by the evolutionary algorithm. Human beings are the meme machines that store, copy and recombine memes. (p. 234)

Blackmore (2007) claims that memetics provides the best explanation of what makes us human (see Gross, 2014), and that memes have used us to build the cultures that we live in.

- Try to formulate some arguments against this view of culture.


Is there more to culture than memes?

Blackmore (2007) herself acknowledges that after more than 30 years, memetics is still not a thriving science. She considers several reasons for this, including some writers finding memetics deeply unsettling in the way it undermines free will and the power of human creativity and consciousness. For example, instead of thinking of our ideas as our own creations, and working for us, Blackmore (1999) argues that we must think of them as autonomous, selfish memes, working only to get themselves copied. Again, she rejects the reality of beliefs and selves: there are only memes being copied or not. But is this a good reason for rejecting any theory?

According to Malik (2006), the trouble with arguments like Blackmore’s is that, by their own criteria, they provide us with no reasons for believing in them:

From an evolutionary point of view, truth is contingent. Darwinian processes are driven by the need, not to ascertain truth, but to survive and reproduce. Of course, survival often requires organisms to have correct facts about the world. A zebra that believed that lions were friendly … would not survive for long. But although natural selection often ensures that an organism processes the correct facts, it does not always do so. Indeed, the argument that self-consciousness and agency are illusions designed by natural selection relies on the idea that evolution can select for untruths about the world because such untruths aid survival. (p. 168)

Malik is saying that the logic of Blackmore’s argument undermines our confidence in its own truth. If we’re simply sophisticated animals or machines, then we cannot have any confidence in that claim: the claim is unreliable because it’s made by mere machines! (Gross, 2014). Humans are only able to do science ‘because we possess the capacity to transcend our evolutionary heritage, because we exist as subjects, rather than simply as objects’ (Malik, 2006, p. 170).
As Malik says, the relationship between humans as physically determined beings (humans-as-objects) and as conscious agents (humans-as-subjects) is clearly one of the most difficult problems for both scientists and philosophers (see Chapters 2 and 3). What makes humans ‘exceptional’ (i.e. unique as a species) is this ‘dual character’ of being simultaneously both the scientist and what’s being investigated (cf. Richards, 2010).

- What do we mean by ‘human nature’?

BOX 8.8 What do we mean by ‘human nature’?

- According to Malik (2006):

  Human nature is not simply natural…. On the one hand, human nature means that which expresses the essence of being human, what Darwinists call ‘species-typical’ behaviour. On the other hand, it means that which is constituted by nature, in Darwinian terms, that which is the product of natural selection. (p. 170)

- In non-humans, the two meanings are synonymous, but, unlike non-humans:

  The human essence – what we consider to be the common properties of our humanity – is as much a product of our historical and cultural development as it is of our biological heritage…. Being both social and rational means that the common social goals, opportunities and constraints are often tackled in a similar fashion in different societies. (Malik, 2006, p. 170)

- Ironically, it’s also our history and culture that account for certain individual differences, i.e. people differ partly due to cultural differences (see Chapters 3, 4, and 11).
- Midgley (1995) argues that neither are non-humans mere machines (or automata, as Descartes described them; see Chapters 1 and 2): only machines are machines.
- Midgley also argues that taking biological/evolutionary/genetic and social/cultural arguments as alternatives represents a false dichotomy (even in relation to non-humans). As applied to human beings, this represents an even greater potential error, since it’s so much more difficult – both practically (scientifically) and conceptually – to separate the relative influences of the two sets of factors (but see Chapter 11).
- If we take Malik’s first sense of ‘human nature’ (i.e. species-typical/species-specific behaviour), then we could argue that human nature is the sum total of all such distinctive human behaviours and abilities. But they are no more than tendencies or capacities, which become channelled through interaction with environmental influences (both physical and sociocultural) (see Box 2.6, page 36).
- This view takes both the ‘natural’ and the ‘sociocultural’ as equally necessary, thus avoiding a number of false dichotomies (e.g. nature–nurture, individual–cultural).
- If culture is itself a product of human nature, then we have an endless cycle of interaction, as shown in Figure 8.3.

Figure 8.3 Ongoing cycle of interaction between human nature and culture. [Diagram: ‘human nature’ (= sum total of species-typical/-specific behaviour) in continuous interaction with environmental influences (physical and sociocultural).]

- In some respects, human nature may represent qualitatively different behaviours/abilities from those of non-humans (‘exceptionalism’), while in other respects, the differences may be quantitative (‘evolutionary continuity’) (see Gross, 2014).
- (See Box 8.9.)


Evolutionary Psychology

Evolutionary Psychology (EP) is a development of sociobiology (and is often referred to as ‘neo-’ or ‘modern Darwinism’). The term was coined by the anthropologist John Tooby and Psychologist Leda Cosmides (Pinker, 1997). EP brings together two scientific revolutions: (1) the cognitive revolution of the mid-1950s (see Chapters 1 and 7); and (2) the revolution in evolutionary biology of the 1960s and 1970s. The former helps us to understand how a mind is possible and what kind of mind we have; the latter helps us to understand why we have our kind of mind.

While acknowledging their debt to sociobiology, Evolutionary Psychologists claim that it often ignored the role of the mind in mediating links between genes and behaviour. EP tries to explain human behaviour in terms of the underlying computations that occur within the mind (see Chapter 7) in addition to Darwinian theory. This marriage of sociobiology and Cognitive Psychology put EP squarely in the centre ground of Psychology (Workman and Reader, 2008).

BOX 8.9 Some basic features of EP

- The fundamental assumption of EP is that the human mind (our ‘organs of computation’/‘the Blind Programmer’: Pinker, 1997, p. 36) is the product of evolution just like any other (bodily) organ; we can gain a better understanding of the mind by examining the evolutionary pressures that shaped it.
- This occurred in the Environment of Evolutionary Adaptedness (EEA), thought to be the African savannah (Rose, 2000) during the Pleistocene period between 10,000 and one million years ago (Tooby and Cosmides, 1997).
- According to Barkow et al. (1992), the mind consists of a collection of specialized, independent modules, designed by natural selection to solve problems that faced our hunter-gatherer ancestors (such as acquiring a mate, raising children, and dealing with rivals). The solutions often involve such emotions as lust, fear, affection, jealousy, and anger. Together, these modules and the related emotions comprise human nature (see Box 8.8).
- Traditionally, Psychology has tried to identify proximate mechanisms (‘immediate’ causes relating to our goals, knowledge, disposition, or life-history, such as ‘Why are some people more prejudiced than others?’). By contrast, EP asks ultimate questions (e.g. ‘What evolutionary advantage was conferred by prejudice?’).
- EP rejects the Standard Social Science Model (SSSM), according to which: (1) humans are born as blank slates (see Chapters 1 and 6): knowledge, personality traits, and cultural values are acquired from the cultural environment – there’s no such thing as human nature (see Box 8.8); (2) human behaviour is infinitely malleable – there are no biological constraints on how people develop; (3) culture is an autonomous force that exists independently of people; (4) human behaviour is determined by a process of learning, socialization, or indoctrination; and (5) learning processes are general – they can be applied to a variety of phenomena.
- EP is largely concerned with universal features of the mind; individual differences are expressions of the same universal human nature as it encounters different environments. The crucial exception is gender: men’s and women’s mental modules have evolved very differently (sexual dimorphism; see text above).

(Based on Workman and Reader, 2008)

Humans as an evolved species

EP, male violence, and rape

One example of sexual dimorphism relates to feelings about whether, when, and how often it is in one’s interests to mate. Because women are more selective about their mates and more likely to delay intercourse (since a woman has more to lose or gain genetically than a man does), men, to achieve sexual access, more often try to overcome female resistance. According to Thornhill and Wilmsen-Thornhill’s (1992) rape adaptation hypothesis (RAH), men’s use of violence can be a very effective means of controlling the reluctant woman. During human evolutionary history there was enough directional selection on males in favour of traits that solved the problem of forcing sex on a reluctant partner to produce a psychological tendency specifically towards rape.

- What’s your response to the RAH?
- Try to formulate some arguments that challenge its validity.

Not only does the RAH recast an oppressive form of behaviour in a much more positive light (it’s ‘adaptive’), but it also portrays it as a natural characteristic of men (‘they can’t help it’). Not surprisingly, it has been condemned not simply for trying to explain men’s sexual coercion but for justifying it (Edley and Wetherell, 1995). Feminists especially would be repelled by such a proposal. Ironically, they also share with Evolutionary Psychologists the view that male violence cannot be dismissed as pathological – but for very different reasons. This is illustrated by the case of Marc Lepine (see Box 8.10).

Feminism and the politics of masculinity

From a feminist perspective, what defines men is their power relative to women, their advantages and privileges; inevitably, men’s character and psychology will be structured and shaped through this relationship with women (Edley and Wetherell, 1995). More importantly, from a practical point of view, feminism has provided a context in which many women, globally, have been enabled to name their experiences of violence and abuse and ‘break the silence’ (Kelly, 1988).

BOX 8.10 The case of Marc Lepine

- In 1989, Marc Lepine murdered 14 female students at the University of Montreal, before killing himself.
- Inevitably, the media, as well as social scientists and others, speculated as to the motive for the killings.
- For many feminists, the answer was horrifyingly clear: Lepine had made a political statement that pushed violence against women to its ultimate expression, namely mass murder.
- Lepine’s suicide note revealed the very real link between gender and aggression in his troubled mind: it was followed by a ‘hit list’ of 19 radical feminists. But instead of murdering high-profile feminist activists, he acted against women engineering students whose inroads into a male-dominated world might just as easily have fed his rage.

(Based on Cherry, 1995)


Lakeman (1990, cited in Cherry, 1995), a feminist activist, analysed the media’s tendency to avoid viewing Lepine’s actions as the expression of male violence towards women and the women’s movement. They individualized the murders, portraying Lepine as a madman acting out a brutal scenario. By asking ‘Is there a little bit of him in all men?’, an analytic framework is generated that contextualizes Lepine’s actions in the study of men’s daily lives and the cultural construction of masculinity (Cherry, 1995).

In Against Our Will: Men, Women and Rape (1975), Susan Brownmiller looked at the extensiveness of rape in wartime; she used the Vietnam War, including the infamous My Lai massacre, as one of the most horrific examples. In 1981, an entire issue of Journal of Social Issues was devoted to the study of rape, including an article by Malamuth (1981) on how men assessed the likelihood that they’d use rape under various conditions. Malamuth located ‘rape proclivity’ (natural tendency/predisposition) as an aspect of social learning theory and the processes of behavioural inhibition/disinhibition (see Gross, 2015). Feminist theory and research suggests that it’s better located in the dual frameworks of gender role and power-dominance relations. Rape proclivity has meaning at both a psychological (i.e. individual) and societal level (i.e. society’s tolerance of both the abuse of power and the abuse of women as legitimate targets) (Cherry, 1995).

Conclusions: are we ‘stuck’ in the Pleistocene period?

By presenting ‘rape proclivity’ as an evolved male adaptation (as the RAH does), the social meaning of rape is largely ignored/overlooked. As we’ve seen, the focus of EP as a whole is on our evolution as a species (or, as male or female in the case of sexual dimorphism) during the Pleistocene period. This relates to one of four fallacies of what Buller (2009, 2013) calls ‘Pop EP’, best represented by Buss (e.g. 1995), Pinker (e.g. 1997), and Tooby and Cosmides (e.g. 1997). The four fallacies are:

1 analysis of Pleistocene adaptive problems provides clues to the mind’s design;
2 we know, or can discover, why distinctively human traits evolved;
3 our modern skulls house a Stone Age mind; and
4 the psychological data provide clear support for EP (see Gross, 2015).

As far as (3) is concerned, Evolutionary Psychologists claim that the timescale of human history has been too short for evolutionary selection pressures to have produced significant change. But Buller (2009, 2013) argues that this claim is mistaken at both ends of the scale:

- Some human psychological mechanisms emerged in a more ancient evolutionary past. For example, Panksepp (in Buller) claims that the emotional systems he calls Care, Panic, and Play date back to early primate evolutionary history, while Fear, Rage, Seeking, and Lust have even earlier, pre-mammalian origins.
- The idea that we’re stuck with a Pleistocene-adapted psychology greatly underestimates the rate at which natural and sexual selection can drive evolutionary change. Selection can radically change the life-history of a population in as few as 18 generations (for humans, roughly 450 years). Environmental changes since the Pleistocene – both natural (such as climate change) and human-made (such as the agricultural and industrial revolutions) – have undoubtedly altered the selection pressures on human psychology.
- If, as EP maintains, human psychological characteristics are the product of gene–environment interaction, then even with negligible genetic evolution since the Pleistocene, such environmental changes would have produced traits that are likely to differ in important ways from those of our ‘Stone Age’ ancestors.
200

Humans as an evolved species

Pause for thought – answers

1 This would lend itself well to a seminar discussion. Ultimately, any belief, scientific or otherwise, must be considered in the social and historical context in which it is expressed. However, arguably, Darwin’s view regarding race could be considered the basis on which scientific racism is built – i.e. the attempt to present racist views as scientific ‘fact’. For example, Chris Brand, a Psychologist at the University of Edinburgh, had his book, The g Factor: General Intelligence and its Implications, withdrawn by the publisher (John Wiley) in April 1996 before its publication. He denied being a ‘racist’ while being perfectly proud to be a racist in the scientific sense; that is, a believer in the view that race and psychology have deep links (most likely genetic). This is discussed further in Chapter 11.

2 This too would make an excellent topic for a seminar. There follow a few ‘pointers’.

- From a strictly rational perspective, the validity (or otherwise) of a scientist’s scientific claims should have nothing to do with the application of those claims by others and/or by the scientist him/herself. Besides, much of Lorenz’s work, especially that on which the Nobel Prize award was based (including the collaboration with Tinbergen), both pre- and post-dated the Nazi era.
- The counter-argument is that research and scientific practice as a whole is driven (consciously or unconsciously) by attitudes, biases, and values (see Chapter 2). As far as National Socialism is concerned, Lorenz made his views and his affiliations perfectly clear.
- Margaret Mead (1973, in Sax, 1997), the famous anthropologist, expressed shock that questions about Lorenz’s Nazism were still being raised, pointing out that he was a prisoner-of-war in Russia and had been ‘systematically persecuted ever since’.
- Tinbergen chose to forgive Lorenz after the war, so that they could continue with the scientific work that the war had interrupted (Burkhardt, 2005). Even at the time of the Nobel Prize award (1973), Lorenz continued to claim that he was guilty of little more than naivety regarding Nazi intentions; Tinbergen corroborated this (Heneghan, 2012).

One might … [refuse] to read the works of those who have been besmirched by their involvement with so heinous a regime as the Third Reich…. Since Lorenzian ethology, that is, classic ethology, has … been somewhat absorbed into socio-biology, behavioural ecology, and evolutionary psychology, it may be that one can choose to leave the works of Lorenz on the shelf. (Heneghan, 2012, p. 3)


Chapter 9
Individuals as driven by unconscious forces
Psychodynamic Psychology

For many non-Psychologists, ‘Psychology = Freud’ and ‘Freud = sex(uality) and the unconscious mind’. Freud certainly regarded himself as a scientist (see Chapter 2), having originally wanted to pursue a career in physiological research; but (for practical and cultural reasons) he trained as a doctor, specializing in neurology (disorders of the nervous system). He’s probably best known as the founder of psychoanalysis, at the time a revolutionary way of treating people with psychological disorders (psychoneuroses); this represents the original form of psychotherapy from which all subsequent methods have developed (see Chapter 13 and Gross, 2015).

Freud’s psychoanalytic methods evolved alongside the associated explanations of what lay at the root of his patients’ problems, namely unresolved unconscious conflicts, often stemming from childhood trauma. His psychoanalytic theory as a whole included his metapsychology, his account of the structure of the psyche, or a general model of the mind (both conscious and unconscious).

The impact of his ideas is reflected in the number of ‘Freudian’ concepts used in everyday language in Western cultures (most of the time not being recognized as such). This illustrates very well the uniqueness of Psychology as a scientific discipline: it has the ability to change how we speak and think about ourselves and other people (i.e. Psychology can change psychology; Richards, 2010; see Chapter 2). As Harré (2006) says, these have become part of our common sense (or folk) Psychology (see Chapter 4).

Pause for thought …

1 How many ‘Freudian’ and other psychodynamic terms and concepts can you name that are commonly used in everyday language?

Defining ‘psychodynamic’

The term ‘psychodynamic’ denotes the active forces within the personality that motivate behaviour, and the inner causes of behaviour (in particular, the unconscious conflict between the id, ego, and superego that comprise the ‘psychic apparatus’ or the personality as a whole). While Freud’s psychoanalytic theory (sometimes ‘psychoanalysis’, denoting both his meta-Psychology and his psychotherapeutic methods) was the original psychodynamic theory, the psychodynamic approach as a whole includes all those theories and approaches to therapy based on his ideas; examples include:

- Ego Psychology (e.g. Freud’s daughter, Anna (1895–1982))
- Psychosocial theory (Erik Erikson (1902–1994))
- Analytical Psychology (Carl Gustav Jung (1875–1961))
- Individual Psychology (Alfred Adler (1870–1937))
- Object relations school (e.g. Ronald Fairbairn (1889–1964), Melanie Klein (1882–1960), Margaret Mahler (1897–1985), Donald Winnicott (1896–1971), John Bowlby (1907–1990)).

So, while Freud’s psychoanalysis is psychodynamic, all the other approaches listed above are psychodynamic but not psychoanalytic (i.e. the two terms aren’t synonymous). Because of their enormous influence, this chapter will focus on Freud’s ideas.

Freud, science, and Psychology

According to Richards (2010), psychoanalytic thought has always represented a problem as far as academic (mainstream) Psychology is concerned. While psychoanalysis represents a hugely rich source of concepts and hypotheses, they notoriously resist experimental evaluation (Richards, 2010). We noted in Chapter 2 that Popper rejected psychoanalytic theory as a scientific theory on the grounds that it is unfalsifiable. As well as discussing the validity of Popper’s claim, we shall consider the recent subdiscipline of neuropsychoanalysis (NP), one of the many spin-offs of neuroscientific research (see Chapter 5); NP aims to link psychodynamic concepts and neuroscientific mechanisms (Northoff, 2012).

In terms of the distinction between the Naturwissenschaften (natural sciences) and the Geisteswissenschaften (humanities), Freud’s work as a whole probably belongs more to the latter. As we noted in Chapter 2, Freud’s theory provides methods and concepts that help us to interpret and ‘unpack’ underlying meanings (it has great hermeneutic strength). Popper’s claim of unfalsifiability underlines the fact that these meanings (both conscious and unconscious) cannot be measured in any precise, objective way; Freud offers a way of understanding that, while less easily tested, may capture the nature of human experience and action more appropriately (Stevens, 1995).

The concept of the unconscious mind

According to Harré (2006):

Freud likens the ‘discovery’ of the role of the unconscious as the main force in our mental lives to the Copernican revolution in astronomy and Darwin’s proof of the descent of human beings from the animal kingdom. It is the third blow to human self-esteem. We are not in absolute control of our thoughts, feelings and actions. (p. 276)



BOX 9.1 KEY THINKER: Sigmund Freud (1856–1939)

- Freud was born in Freiberg, Moravia (now in the Czech Republic), then part of the Austro-Hungarian empire, to a quite poor middle-class Jewish family. In 1860, his family moved to Vienna.
- His father, a wool merchant, 22 years older than his mother, had two sons from a previous marriage; one had a son of his own just before Sigmund’s birth (making him younger than his nephew). Freud was one of eight children borne by his mother.
- His early interests in history and the humanities pointed towards a law career, but he then ‘discovered’ science and enrolled in 1873 in the University of Vienna’s medical school.
- One outstanding teacher there was the philosopher Franz Brentano (1838–1917), whose Psychology from an Empirical Standpoint (1973/1874) distinguished between Psychology’s essential subject-matter (‘acts’) and that of the natural sciences (‘objects’). Brentano coined the term intentionality to denote the ‘aboutness’ of mental phenomena (see Chapter 4).
- Brentano also argued that any adequate psychological theory must be ‘dynamic’, capable of accounting for the influence of constantly changing motivational factors on thought. He also distinguished between the ‘objective reality’ of physical objects and the ‘subjective reality’ of private thought.
- In addition to these issues, Brentano introduced Freud to the literature on unconscious thought (see text below), and Freud had planned to take a philosophy degree with Brentano after completing his medical training.
- However, another teacher, Ernst Brücke (1819–1892), director of the Physiological Institute and co-founder of the ‘new’ physiology (which sought mechanistic explanations for all organic phenomena), proved an even greater influence on Freud.
- Freud’s wish to pursue a career in physiological research was scuppered by several factors: Jews were much less discriminated against in Austro-Hungary compared with other Eastern European countries, but entry to the professions was still difficult – with the notable exception of medicine. Also, he needed to become financially independent of his father so that he could marry Martha Bernays, with whom he’d recently fallen in love. So he chose the more realistic path of completing his medical training, which would allow him to set up a private medical practice.
- He became a prize student, supervised by the famous brain anatomist Theodor Meynert (1833–1892). In 1885, he was awarded a grant to study in Paris under the celebrated neurologist Jean-Martin Charcot (1825–1893).
- He returned to Vienna in 1886 and gave a lecture to the Vienna Medical Society about Charcot’s theory of hysteria and use of hypnosis to treat it. But this didn’t find favour with Meynert, and Freud felt himself to be an outsider within the Viennese medical establishment.
- Despite this, and needing to supplement his income, he began to specialize in the treatment of patients with hysteria.
- He lived and worked in Vienna until 1938, when the Nazis allowed him to move to London; he died there in 1939.

Figure 9.1 Sigmund Freud.

(Based on Fancher and Rutherford, 2012; Harré, 2006)



Freud blamed this ‘discovery’ for the fierce attacks made on his theories, especially when they were first made public. However, it’s probably much more accurate to blame his account of infantile sexuality in general (the claim that sexual needs are present from birth) and the seduction theory in particular. The seduction theory (i.e. the claim that child sexual abuse was rife among the middle-class families of Vienna) was the precursor of Freud’s infamous Oedipal theory.

Also, underlying Freud’s claim that the discovery of the unconscious represented as momentous a scientific achievement as those of Copernicus (see Chapter 2) and Darwin (see Chapter 8) is his belief that he was largely – if not exclusively – responsible for its discovery. However, we noted in Box 9.1 that Brentano had introduced his student, Freud, to an already considerable literature on the unconscious. The beliefs that Freud ‘discovered the unconscious’, or coined the concept, or was the first to explore it in any systematic way, are all myths. All we can say for sure is that Freud discovered the Freudian unconscious (see below).

As Moghaddam (2005) points out, the notion of an unconscious has historical roots dating back to Plato, one of the great Ancient Greek philosophers. According to the famous simile of the cave, unenlightened people are like prisoners in a cave: there’s a fire behind them, but they cannot turn around to see the light of the fire, or the daylight outside the cave; all they can see are the shadows cast by the fire onto the walls of the cave. If a person manages to break free from the chains and escape into the daylight, they will be dazzled by the light, making it more difficult to see the shadows inside the cave again.
The simile of the cave is part of a long tradition of scholarship about how people can be mistaken in their beliefs about the world and themselves: we’re often unaware of what we do and don’t know, and so we often act on the basis of mistaken beliefs (Moghaddam, 2005). Moghaddam identifies five particularly important influences on the idea of the unconscious since the scientific revolution of the seventeenth and eighteenth centuries (see Chapter 2). These are described in Box 9.2.

BOX 9.2 Influences on the concept of the unconscious

- Perceptual thresholds: Leibniz and other German philosophers had discussed ideas relating to levels of consciousness, and how some things pass a threshold and enter consciousness while others do not. The idea of thresholds became a major feature of Psychophysics (see Box 1.2, page 3). One of Fechner’s ideas that had a particularly profound impact on Freud’s thinking was the iceberg metaphor: most mental activity (and content) lies beneath the water (i.e. is subconscious) but has an enormous impact on behaviour.
- Unconscious inferences: the role of the unconscious was highlighted by Hermann Helmholtz (1821–1894) and later by the Gestalt Psychologists (see Chapter 1, pages 14–17). Helmholtz (1866) stated that: ‘The psychic activities that lead us to infer that there in front of us … is a certain object of a certain character, are generally not conscious activities, but unconscious ones. In their result they are equivalent to a conclusion’ (trans. 1925, pp. 2–4). Similarly, according to Richard Gregory’s constructivist theory of perception: ‘We may think of sensory stimulation as providing data for hypotheses concerning the state of the external world. The selected hypotheses … are perceptions’ (Gregory, 1973, pp. 61–63; emphasis in original). Both Helmholtz and Gregory regard perception as indirect and based on inference (see Gross, 2015).
- Variations in idea intensity: according to Johann Friedrich Herbart (1776–1841), the human mind is a collection of ideas varying in intensity; those that aren’t strong enough to become conscious remain intact and can be brought into consciousness through associated ideas. These speculations find an echo in Ebbinghaus’ (1885) experimental study of memory (see Chapter 7): ‘forgotten’ material remains within memory (available but inaccessible), as demonstrated by the phenomenon of re-learning.
- Hypnotism: initially popularized by the Austrian Franz Mesmer (1734–1815), it gained some scientific respectability through its adoption by Charcot (see Box 9.1) and his student Pierre Janet (1859–1947). One implication of Charcot’s work is that there are aspects of experience that lie outside the reach of introspection and conscious awareness; this, in turn, implies that personality comprises several layers, some of which can influence behaviour without our knowledge.
- False consciousness: according to the German Karl Marx (1818–1883), the capitalist controllers of media, education, the Church, and other major institutions also control the perceptions of the proletariat regarding society and their place within it. In particular, the proletariat fails to recognize that (and how) their class interests differ from those of the capitalist owners of industry; class interest acts as a ‘hidden hand’ moving capitalists to preserve the class system. Discussions of false consciousness highlight the belief that individuals are affected by influences they’re unaware of.

- What would you say is the common denominator in the five influences described in Box 9.2 (i.e. what do they have in common regarding the nature of the unconscious)?

If there is a common thread running through these disparate accounts of the unconscious, it is the idea that there’s much more going on within our minds than we can possibly know at any one time: you don’t have to be a Freudian (or any other kind of psychodynamic theorist) to believe in the conscious-mind-as-the-tip-of-an-iceberg metaphor. It could be argued that this belief has become part of our common sense understanding of our own and others’ psychology (see Chapter 4); in itself, there’s nothing especially contentious about this description of the unconscious. So, what’s different about Freud’s account of the unconscious and is there anything about it which is contentious?

The Freudian unconscious

To appreciate the distinctive character of Freud’s account of the unconscious, we need to consider most other parts of his metapsychology, as well as how his therapeutic methods helped to clarify for him the nature and content of the unconscious.



The unconscious represents one of three levels of consciousness, which, in turn, are interrelated with the three components of the ‘psychic apparatus’ or structure of the personality (the id, ego, and superego):

1 The conscious mind refers to those thoughts, feelings, wishes, memories, and so on that are currently accessible (i.e. we are fully aware of them). The ego represents the conscious part of the mind, together with some aspects of the superego (see below and Box 9.3), namely those moral rules and values that we’re able to express in words.
2 The ego also controls the preconscious, a kind of ‘ante-room’, an extension of the conscious, whereby thoughts etc. that we’re not fully aware of at this moment could become so quite easily if we direct our attention to them (e.g. you suddenly notice a ticking clock that’s been ticking away all the time). The preconscious also processes ill-defined id urges or impulses (see Box 9.3) into perceptible images, and part of the superego also functions at a preconscious level.
3 The unconscious comprises (1) id urges/impulses; (2) all repressed material (see below); (3) the unconscious part of the ego (the part involved in dream work, neurotic symptoms, and defence mechanisms); and (4) part of the superego (such as the free-floating anxiety or vague feelings of guilt or shame which we find difficult to explain, and ‘finding yourself’ behaving in ways that seem to reflect how your parents treated you or parental values without being able to say what these values are).

Unconscious material can only become conscious through the use of special techniques, in particular free association, dream interpretation, and transference; these are the basic methods that Freud used in his psychoanalytic therapy, all designed to ‘make the unconscious conscious’. Others include (the interpretation of) resistance and parapraxes (‘Freudian slips’, which constitute the ‘psychopathology of everyday life’).

This account of levels of consciousness is sometimes taken – mistakenly – to indicate that Freud believed they correspond to particular parts of the brain. However, he did believe there had to be ‘somewhere’ where thoughts etc. ‘were’ at any one moment (a ‘topographical’ account), and he left his options open (Jacobs, 1992):

our psychical topography has for the present nothing to do with anatomy; it has reference not to anatomical localities, but to regions in the mental apparatus, wherever they may be situated in the body. (Freud, 1915a, p. 177; emphasis in original)

Repression

If there’s one feature of the Freudian unconscious that makes it distinctive from all other accounts, it’s the part played by repression. According to Jacobs (1992), this arguably represents the single most important theoretical concept, and Freud himself singled it out as a special cornerstone ‘on which the whole structure of psychoanalysis rests. It is the most essential part of it’ (Freud, 1914a, p. 73). Repression is needed by virtue of the inherent conflict within the psychic apparatus (see below).

The term ‘repression’ had previously been used by the German philosopher Arthur Schopenhauer (1788–1860) in 1844, although Freud said that he didn’t read his work until much later in life. The term was also used by Herbart in 1824 (see Box 9.2). As for Freud, he first used the term in an initial publication co-authored with Josef Breuer; this later formed the first chapter of Studies on Hysteria (1895). There, repression described a



phenomenon whereby unacceptable feelings are ‘removed’ from conscious thought and ‘forced’ to stay in the unconscious; however, this isn’t always successful, and the feelings can manifest as (are converted into) physical/bodily symptoms (such as blindness, deafness, paralysis, headaches). In the absence of any physical disease or injury, these symptoms were described as ‘hysterical’. According to Jacobs (1992), such an account uses a ‘mechanistic, quasi-hydraulic image’ (p. 37): feelings and ideas are dammed up, but under growing pressure find an alternative route back into consciousness (‘the return of the repressed’).

Not only is repression highly individual, it’s also an ongoing process (rather than a one-off event); this requires a great deal of psychic energy. Repression can be thought of as the ‘master’ ego defence mechanism (or just ego defence): repression is often just the ‘first step’ in keeping threatening or forbidden thoughts or feelings out of consciousness; a second line of defence involves the use of one or more of several others (such as displacement, denial, isolation, reaction formation, projection, regression, rationalization, and sublimation). Many of these were originally proposed (or implied) by Freud, and later elaborated by his daughter, Anna Freud (1936).

The structure of the personality

As Jacobs (1992) observes, this represents one of the more hypothetical/speculative aspects of Freud’s theorizing: rather than trying to explain the direct observations of himself and his patients, the id, ego, and superego (Freud’s metapsychology) are hypothetical constructs designed to make sense of the unobservable.

The id

It is the dark, inaccessible part of our personality…. We approach the id with analogies: we call it a chaos, a cauldron full of seething excitations…. It is filled with energy reaching it from the instincts, but it has no organization … but only a striving to bring about the satisfaction of instinctual needs subject to the observance of the pleasure principle. (Freud, 1933, pp. 73–74; emphasis added)

The laws of logic don’t apply within the id, so that (as in dreams) ideas can sit side by side which elsewhere would be considered contradictory; also, there’s no recognition of the passage of time. Again:

It contains everything that is inherited, that is present at birth, that is laid down in the constitution – above all, therefore, the instincts. (Freud, 1940, p. 145)

For Freud, the human organism is a complex energy system. The kind of energy needed to fuel or operate the psychic apparatus is psychic energy, which performs ‘psychological work’. The id is the source of psychic energy. Since the id is in closer contact with the body than with the outside world, and since it’s unaffected by logic or reason, and its sole aim is to reduce excitation (tension) to a minimum, it’s said to be governed by the pleasure principle (seeking pleasure and avoiding pain). In this way, the id can be thought of as the infantile part of the personality, what we are before the environment (especially other people) has begun to exert any influence over us – it’s the pre-socialized part of the personality.



Pause for thought … 2

What, in Skinner’s account of operant conditioning, corresponds to Freud’s pleasure principle?

The id retains its infantile character throughout our lives: whenever we act on impulse, selfishly, or demand ‘I want it and I want it now!’, our id is controlling our behaviour (the ‘spoiled child’ of the personality). The only real development that occurs within the id is the primary process: a form of thinking in which an image of the object needed to reduce tension is formed. However, the id is incapable of distinguishing between the subjective memory-image and the real thing; that’s left to the ego.

The ego

[T]he ego seeks to bring the influence of the external world to bear upon the id and its tendencies, and endeavours to substitute the reality principle for the pleasure principle which reigns unrestrictedly in the id…. The ego represents what may be called reason and common sense, in contrast to the id, which contains the passions. (Freud, 1923, p. 25; emphasis added)

The ego gradually develops (starting at a few months after birth) as psychic energy is ‘borrowed’ from the id and directed outwards towards external reality. It can be described as the ‘executive’ of the personality, the planning, decision-making, rational, and logical part; these functions are made possible by secondary process thinking, roughly equivalent to the cognitive processes of attention, perception, remembering, reasoning, problem-solving, and so on (see Chapter 7). It enables us to distinguish between a wish and reality, inside and outside, subjective from objective, and so on (through the reality principle). While the ego enables us to postpone the satisfaction of our needs until an appropriate time and place (deferred gratification), its priority is the consequences of our actions rather than whether they are (inherently) good or bad, right or wrong. So, like the id, the ego is amoral, although other people are taken into account (but for reasons of expediency rather than morals).

The superego

The long period of childhood, during which the growing human being lives in dependence on his parents, leaves behind it as a precipitate the formation in his ego of a special agency in which this parental influence is prolonged. It has received the name of super-ego…. This parental influence of course includes … not only the personalities of the actual parents but also the family, racial and national traditions handed on through them, as well as the demands of the immediate social milieu which they represent. (Freud, 1940, pp. 146–147)

Only when the superego has developed (at age 5–6, when the child’s Oedipal conflict is resolved; see Gross, 2015) can a person be described as a moral being. The superego represents the internalization or introjection of a set of moral values which determine that certain behaviour is right or wrong. While moral judgements often involve the belief that particular actions are inherently (i.e. in themselves) good or bad, these judgements are actually culturally determined and culturally relative; in other words, cultural (and sub-cultural) rules and values



determine how individual members perceive the rightness or wrongness of particular behaviour without consciously linking it to those rules and values. Internalization of these values occurs, according to Freud, through identification with the same-sex parent (identification-with-the-aggressor in the case of boys, anaclitic identification in the case of girls).

The superego represents the ‘judicial’ branch of the personality and comprises two components: (1) the conscience, which threatens the ego with punishment (in the form of guilt) for wrongdoing; and (2) the ego-ideal, which promises the ego rewards (in the form of pride and high self-esteem) for good, socially positive behaviour.

Several critics of Freud have argued that terms like ‘id’, ‘ego’, and ‘superego’ are bad metaphors: they don’t correspond to any aspect of psychology or neurophysiology, and they encourage reification (treating metaphorical terms, or hypothetical constructs, as if they were ‘things’ or entities). However, in Freud and Man’s Soul (1985), Bruno Bettelheim defends Freud and criticizes his translators (see Box 9.3).

BOX 9.3 Lost in translation: Bettelheim’s defence of Freud

- Bettelheim (1983) points out that much of Freud’s terminology was mistranslated, which has led to a misrepresentation of those parts of his theory.
- For example, Freud himself never used the Latin words id, ego, and superego; instead, he used the German das Es (‘the it’), das Ich (‘the I’), and das Über-Ich (the ‘over-I’), which were meant to capture how the individual relates to different aspects of the self.
- The Latin terms tend to depersonalize Freud’s use of ordinary, familiar language, giving the impression that they describe different ‘selves’ that we all possess.
- The Latin words (preferred by his American translators to lend greater scientific credibility to the theory) turn the concepts into cold, technical terms which arouse no personal associations: whereas the ‘I’ can only be studied from the inside (through introspection), the ‘ego’ can be studied from the outside (as behaviour observable by others).
- In translation, Freud’s ‘soul’ became scientific Psychology’s ‘psyche’ or ‘personality’. Freud’s careful and original choice of words facilitated an intuitive understanding of his meaning.

No word has greater and more intimate connotation than the pronoun ‘I’…. If anything, the German Ich is invested with stronger and deeper personal meaning than the English ‘I’…. Where Freud selected a word that, used in daily parlance, makes us feel vibrantly alive, the translators present us with a term from a dead language that reeks of erudition precisely when it should emanate vitality. (Bettelheim, 1983, pp. 53–55; emphasis in original)

Psychic determinism and free will

According to James Strachey, one of Freud’s translators and editor of the ‘Standard Edition’ of Freud’s collected works:

Behind all of Freud’s work … we should posit his belief in the universal validity of the law of determinism … Freud extended the belief [derived from physical phenomena] uncompromisingly to the field of mental phenomena. (Strachey, 1962, p. 17)



Similarly, Sulloway (1979) states that Freud’s entire life’s work in science (and he very much saw himself as a scientist) was characterized by an abiding faith in the notion that all vital phenomena, including psychical ones, are rigidly and lawfully determined by the principle of cause and effect (see Chapter 2). Combined with his belief that dreams have meaning and can, therefore, be interpreted, the extreme prominence he gave to the technique of free association in his psychoanalytic therapy is perhaps the most explicit manifestation of this determinist position.

However, once again we find that Freud’s own German term has been mistranslated, producing a misrepresentation of the original. This time, Freud’s intention to convey an uncontrollable ‘intrusion’ (‘Einfall’) by preconscious ideas into consciousness was translated as ‘free association’, implying almost the complete opposite; in other words, while the German word is perfectly consistent with the idea of thoughts being determined beyond the person’s control, the English translation is perfectly consistent with the idea of free will. In turn, this preconscious material reflected unconscious ideas, wishes, and memories, which was what Freud was really interested in: here lay the principal cause(s) of his patients’ neurotic problems (Sulloway, 1979).

Ironically, the fact that the causes of our thoughts, actions, and (apparent) choices are unconscious (mostly actively repressed) is what accounts for the illusion that we are free: we believe we have free will because we are (by definition) unaware of the true, unconscious causes of our actions.

Pause for thought … 3

How does Skinner’s account of free will mirror Freud’s (see Box 1.7, pages 11–12)?

The application of this general philosophical belief in causation to mental phenomena is called psychic determinism. Freud’s aim was to establish a ‘scientific Psychology’ through applying to the human mind the same principles of causality as were in his time considered valid in the natural sciences (see Chapter 2). If all mental activity is the result of unconscious forces that are instinctual, biological, or physical in origin, then human Psychology could be formulated in terms of the interaction of forces that were, in principle, quantifiable (Rycroft, 1966).

Brown (1961) argues that, strictly speaking, the principle of causality isn’t a scientific ‘law’ but rather a necessary assumption for science to happen at all – see Hume’s account of cause and effect (Box 2.4, page 34). Freud’s predecessors and contemporaries (including James, Watson, and McDougall) all took the principle of causation for granted. But they distinguished between (1) behaviour for which one or more clear-cut cause(s) were known (or could be claimed) and (2) random events resulting from many separate and apparently trivial causes. These Psychologists took most psychological events to be of the latter kind, meaning that they could only be discussed in broad descriptive terms (Brown, 1961).

However, Freud disagreed: in his early studies of hysterical patients, he showed that their apparently irrational symptoms were in fact meaningful when seen in terms of painful, unconscious memories; they weren’t chance events, and their causes could be revealed by psychoanalysis. The same reasoning was then applied to other seemingly random, irrational events, to parapraxes, and to dreams. According to Gay (1988), a crucial feature of Freud’s theory as a whole is that there are no accidents in the universe of the mind:



Every event, no matter how accidental its appearance, is as it were a knot in intertwined causal threads that are too remote in origin, large in number, and complex in their interaction to be readily sorted out. True: to secure freedom from the grip of causality is among mankind’s most cherished, and hence most tenacious, illusory wishes. But Freud sternly warned that psychoanalysis should offer such fantasies no comfort. Freud’s theory of the mind is, therefore, strictly and frankly deterministic. (Gay, 1988, p. 119)

However, Gross (2014) suggests that Gay’s conclusions need qualifying in the following ways:

1 Freud didn’t deny that human choices are real and, indeed, one of the aims of therapy is to ‘give the patient’s ego freedom to decide one way or another’ (Freud, in Gay, 1988). If we become aware of our previously unconscious memories, feelings, and so on, then we’re freed of their stranglehold (although there’s more to therapeutic success than simply ‘remembering’). The whole of psychoanalysis is based on the belief that people can change. While change might be very limited (famously, he claimed that therapy aims at converting ‘neurotic misery into everyday unhappiness’), at least it is possible.
2 Freud acknowledged that ‘accidents’ can and do occur: sometimes, things happen beyond an individual’s control which have nothing to do with his/her unconscious mind (such as being struck by lightning). However, an ‘accident-prone’ person is likely to be unconsciously helping to bring the event(s) about (and so these aren’t true accidents).
3 Freud’s concept of psychic determinism doesn’t require that there’s a one-to-one correspondence between cause and effect. One form of psychic determinism is overdetermination, according to which much of our behaviour (and thoughts and feelings) has multiple causes (some conscious, some unconscious). The conscious causes are what we normally take to be the reasons for our behaviour; but if the causes are also unconscious, then these reasons can never tell the whole story. For Freud, the latter are always the more important.
4 According to the psychoanalyst Charles Rycroft (1966), the principle of psychic determinism remains an assumption, which Freud made out of scientific faith rather than on actual evidence. Freud denied more than once that it’s possible to predict whether a person will develop a neurosis, or what kind it will be. Instead, he claimed that all we can do is ascertain the cause retrospectively, a process more in keeping with history than science (Rycroft, 1966).

- How does this distinction between history and science relate to that between Naturwissenschaften and Geisteswissenschaften (see Chapter 2)?

As noted in Chapter 2, history belongs to the Geisteswissenschaften (humanities) and ‘science’ is commonly used to refer to the Naturwissenschaften (natural sciences). As well as Freud’s own emphasis on science and, in particular, the principle of causality, Rycroft observes that:

Much of Freud’s work was really semantic and … he made a revolutionary discovery in semantics, viz. that neurotic symptoms are meaningful disguised communications, but … owing to his scientific training and allegiance, he formulated his findings in the conceptual framework of the physical sciences. (Rycroft, 1966, p. 14, emphasis added)


Individuals as driven by unconscious forces

Freud and hermeneutic science

This ‘semantic argument’ is supported by the title of what many people regard as Freud’s greatest work, The Interpretation of Dreams (1900) (as opposed to the ‘Cause’ of Dreams). This, in turn, relates to the hermeneutic strength of Freud’s theories. As we saw in Chapter 3, Goldberg (2015), in drawing a distinction between psychoanalytic understanding and that provided by, for example, psychodynamic psychotherapy, distinguishes between hermeneutic science, which deals with meanings, and empirical science, which involves rules and establishes facts (see Chapter 2). This is the difference between (1) what things mean to us and (2) what can be measured or explained in a ‘scientific’ manner, respectively.

Goldberg (2015) cites Brandom’s (2008) distinction between (1) algebraic understanding, which follows rules, and (2) hermeneutic understanding, which is more basic and involves concepts and meanings that can never be systematized:

Ordinary discourse or conversation is regularly an effort at grasping meanings and so often becomes a hermeneutic exercise in the attempt to interpret what another person has said. Although some may hope that all human discourse and forms of writing may come to admit of … clear and exact understanding, the richness of language makes rule-following impossible…. Psychoanalysis should be recognized as a hermeneutic activity: efforts to make it an empirical science on par with brain studies or to reduce it to some collection of core beliefs … are, by definition, acts of futility. (Goldberg, 2015, p. 19)

At the same time, to oppose hermeneutic studies (understanding/interpretation) to empirical studies (facts, truths, and predictions) would present us with a false dichotomy. Goldberg believes that it’s probably better to consider the two approaches as relevant at different times and to various degrees: ‘it is a rare fact that has no meaning whatsoever’ (Goldberg, 2015, p. 20).
Goldberg suggests that recent advances in neuroscience (see below and Chapter 5) could happily sit beside psychoanalytic interpretation; what matters is that they complement each other, rather than neuroscientific explanations trying to absorb those interpretations (i.e. reducing the latter to the former).

Freud and neuroscience

Support for certain aspects of Freud’s theories has come from the relatively new subdiscipline of neuropsychoanalysis (NP), which aims to link psychodynamic concepts (such as unconscious motivation and memory, dreams, and the ego defence mechanisms) with neuroscientific mechanisms (Northoff, 2012) (see Chapter 5). Solms (2006) describes the aim of NP as introducing the psyche into neuropsychology – to demonstrate that we cannot possibly understand the brain if its subjective aspects are neglected or even ignored. Solms and other leading figures in NP see their research as a continuation and completion of Freud’s attempt to establish a scientifically based account of the human mind (Northoff, 2012). (For a brief summary of some of this research, see Gross, 2015.)

The unconscious: Freud and Jung compared

In terms of Jung’s Analytical Psychology, repressed material that plays such a crucial role in the Freudian unconscious represents only one kind of unconscious content. For Jung, the Freudian unconscious is predominantly ‘personal’, composed of the individual’s particular and unique experiences. The personal unconscious also includes things we’ve forgotten, as well as all those things we think of as being ‘stored in memory’ and which could be consciously remembered without too much effort (Freud’s preconscious).


Associated groups of feelings, thoughts, and memories may cluster together to form a complex, a quite autonomous and powerful ‘mini-personality’ within the total psyche; an example would be Freud’s Oedipus complex. Jung looked for the origin of complexes in the collective (or racial) unconscious, which is arguably what most distinguishes his theory from Freud’s. While Freud’s id is part of each individual’s personal unconscious and represents our biological inheritance (see above), for Jung the mind (through the brain) has inherited characteristics that determine how a person will react to life experiences, and what type of experiences these will be. Our evolutionary history as a species is all-important as far as the collective unconscious is concerned (see Chapter 8).

The collective unconscious can be thought of as a reservoir of latent (or primordial) images, or archetypes (an archetype being a prototype or ‘original model or pattern’). These relate to the ‘first’ or ‘original’ development of the psyche, stemming from our ancestral past (human, pre-human, and animal; Hall and Nordby, 1973), and constitute predispositions or potentialities for experiencing and responding to the world in the same way as our ancestors did (e.g. we’re naturally predisposed to fear the dark or snakes).

Jung identified a large number of archetypes, including birth, rebirth, death, power, magic, the hero, the child, the trickster, God, the demon, the wise old man, the earth mother, and the giant. He gave special attention to the persona, the anima/animus, the shadow, and the self (see Chapter 12 and Gross, 2015).

Pause for thought …

4 Can you think of any supporting evidence for Jung’s collective unconscious? (One research area you may want to consider is the concept of preparedness and its relationship to how we acquire phobias of certain classes of stimuli.)

Brown (1961) identifies three major sources of evidence for the collective unconscious; these are described in Box 9.4.

BOX 9.4 Evidence for the collective unconscious

- The ‘extraordinary’ similarity of themes in the mythologies of various cultures.
- The recurring appearance in therapy of symbols which have become detached from any of the patient’s personal experiences, and which become increasingly more like the primitive and universal symbols found in myths and legends.
- The content of the fantasies of psychotic patients (especially schizophrenics; see Chapter 13), which are full of themes such as death and rebirth (similar to those found in mythology).

According to Brown, members of all cultures share certain common experiences, so it’s not surprising that they dream or create myths about archetypal themes. (Based on Brown, 1961)


The scientific status of Freud’s theory

We saw in Chapter 2 that both Popper (1959) and Eysenck (1985) rejected Freud’s theory on the grounds of its unfalsifiability. The example given in Box 2.11 was the defence mechanism of reaction formation. However, Kline (1984, 1989) believes that it’s a mistake to take reaction formation as typical of Freudian theory as a whole; it comprises a collection of hypotheses, some of which are more central than others, and some of which are better supported by empirical evidence than others. The very use of the term ‘evidence’ in relation to psychoanalytic concepts implies that some at least are testable, which, in turn, implies that they are, in principle, falsifiable.

As we noted above, recent support for certain aspects of the Freudian unconscious has been provided by NP, one of the many spin-offs of neuroscientific research. Whatever the criticisms of the use of brain-scanning techniques in relation to psychological processes and the conclusions that have been drawn from this research (see Chapter 5), the very fact that NP exists seems to counter the unfalsifiability charge. Some would argue that you ‘can’t get much more scientific’ than using brain-scanning methods (although, as with any scientific method, it’s always how the resulting data are interpreted that ultimately matters).

According to Bargh (2014), contemporary Cognitive Psychologists have recast the Freudian worldview, adopting a more pragmatic view of what defines our unconscious self. For example, Nobel laureate Daniel Kahneman (2013) has described the modern distinction between automatic and controlled thought processes (corresponding to unconscious and conscious, respectively). Automatic thought processes represent one facet of the cognitive unconscious (Kihlstrom, 1987); others include:

- blindsight (Weiskrantz, 1986, 2007): the ability to ‘see’ without conscious awareness (and despite damage to, or surgical removal of, some part of the visual cortex);
- prosopagnosia (e.g. McNeil and Warrington, 1993; Ramachandran, 1998), a form of ‘face blindness’: the inability to consciously perceive faces (including those of one’s partner etc.).

In both cases, loss of explicit conscious recognition is combined with the capacity for implicit behavioural recognition. It’s now widely believed that most of the processing undertaken by the brain occurs without our awareness (Velmans, 1991). In a much-cited article, Nisbett and Wilson (1977) argue that we don’t have direct access to cognitive processes at all; instead, we have access only to the ideas and inferences that are the outputs of those processes. Nisbett and Wilson claim that our common-sense, intuitive belief that we can accurately account for our own behaviour is illusory: what really guides our behaviour is unavailable to consciousness. (This, of course, is consistent with Freud’s claim that the most important reasons for our actions (‘the’ reasons) are unconscious, although there may be accompanying conscious reasons (‘our’ reasons); see above.)

Similarly, Frith and Rees (2007) argue that probably the major development in consciousness research during the past 150 years has been the demonstration of unconscious, automatic psychological processes in perception, memory, and action. But the downside to this development is that when we believe we’re consciously detecting or discriminating a stimulus, we may be merely guessing (because the ‘real work’ is being performed unconsciously).

Recovered memories and the false-memory debate

Since the early 1990s, considerable publicity has been devoted to court cases in the US in which parents are being sued for damages by their teenage or adult children who accuse them of child sexual abuse (CSA) that has come to light in the course of psychotherapy. It’s assumed that the therapist has provided a safe and supportive environment, enabling the victim to consciously remember these hitherto repressed events; these recollections are referred to as recovered memories (RMs).

However, from the perspective of the accused parents, these are false memories (FMs), implanted by therapists into the minds of their emotionally vulnerable patients. These unethical, unscrupulous therapists are, in turn, accused by parents of practising recovered-memory therapy, which induces false-memory syndrome (FMS). Both parents and retractors (those who accused their parents, then later withdrew the accusations) have sued therapists and hospitals for implanting FMs in the patients’ minds. The False Memory Syndrome Foundation was set up in the US in 1992, followed, in 1993, by the British False Memory Society.

As Gross (2015) observes, this account of the FM debate raises several interrelated issues, ranging from the Psychology of memory and forgetting (see Chapter 7) to the nature and ethics of psychotherapy in general (and psychoanalysis in particular). When children sue their parents over alleged CSA, inevitably the family is torn apart and individual lives can be ruined. But the FM debate has also caused division within the ranks of Psychology and psychiatry.

Two key questions are: (1) do RMs exist? and (2) do FMs exist and, if so, how might they be created? As far as (1) is concerned, the answer depends very much on how the concept of repression is understood. If these memories have been repressed and are then retrieved from the unconscious during therapy, then there must first be sound evidence that repression exists. Until recently, the strongest evidence in support of repression has been clinical (rather than experimental), but evidence from NP is accumulating (see above). It’s also important to understand Freud’s view of memory.
According to Mollon (2000), a common misrepresentation is that repressed memories are preserved in their original form, like video recordings. But in a paper on Screen Memories (1899), Freud claimed that memories, especially those of events that occurred some time ago, may be constructed like dreams. A ‘screen memory’ is one that’s apparently emotionally insignificant, but is actually a substitute for a more troubling memory with which it has become associated:

Our childhood memories show us our earliest years not as they were but as they appeared at the later periods when the memories were aroused … the childhood memories did not … emerge; they were found at that time. And a number of motives, with no concern for historical accuracy, had a part in forming them, as well as in the selection of the memories themselves. (Freud, 1899, p. 303)

So, the subjective sense of remembering doesn’t mean that the memory is literally accurate. Memories are like dreams or works of fiction, constructed out of psychodynamic conflict, serving wish-fulfilment and self-deception (Mollon, 2000). If Freud is right, then RMs can no longer be memories of actual CSA, but phantasies of abuse. This is consistent with Freud’s rejection of the seduction theory (his original belief that actual CSA was rife at the time) in favour of the Oedipal theory (the universal conflict between every child’s desire for the opposite-sex parent and dealing with the rivalry of the same-sex parent).

Despite making this distinction, Freud’s theory of repression and his therapeutic methods constitute the basic tools of RM therapists, making Freud the arch-enemy of accused parents (Esterson, 2000, personal communication).

217

Individuals as driven by unconscious forces

However, if memories are essentially constructed, rather than ‘discovered’ or ‘recovered’ (‘unearthed’ to use an archaeological analogy which Freud himself used), it becomes easier to understand how FMS occurs: vulnerable patients can easily be ‘persuaded’ that a constructed memory (a phantasy that CSA took place) is, in fact, an objectively true, historically verifiable event (the CSA actually happened). (Gross, 2015, p. 357; emphasis added)

A report published in the British Journal of Psychiatry (Brandon et al., 1998) distinguished between (1) CSA that is reported in childhood or kept secret though unforgotten (as in the many recent cases of serial abuse such as that perpetrated by Jimmy Savile), and (2) RMs of CSA, previously totally forgotten, that emerge in adulthood during therapy, usually in women in their thirties or forties. For some patients, RMs can escalate into FMS, in which a person’s identity comes to centre around the ‘memory’ of a traumatic experience which is objectively false but strongly believed to be true. Brandon et al. cite the findings of studies that demonstrate these two kinds of CSA-related memories.

Regarding how FMs might be created, Elizabeth Loftus and her colleagues (e.g. Laney et al., 2008) have found evidence broadly consistent with Bartlett’s theory of reconstructive memory and Loftus’ own research into eyewitness testimony (see Chapter 7 and Gross, 2014). This is supported by a review of the research by Clifasefi et al. (2013). But the fact that FMs can be created doesn’t, of course, mean that all RMs are false (Loftus, 1997). Indeed, in the British Psychological Society’s Guidelines for Psychologists working with clients in contexts in which RM-related issues may arise, Frankland and Cohen (1999) state that at least some RMs of CSA are recollections of historical events. However, there’s genuine cause for concern that FMS is also a real phenomenon.

Conclusions: Freud and feminism

According to the American feminist writer Shulamith Firestone (1970):

Both Freudianism and Feminism came as reactions to one of the smuggest periods in Western civilization, the Victorian Era, characterized by its family-centredness, and thus its exaggerated sexual oppression and repression. Both movements signified awakening: but Freud was merely a diagnostician for what Feminism purports to cure. (pp. 43–44)

Appignanesi and Forrester (2000) discuss the much-cited historical link between psychoanalysis and feminism; any hint of a link of any kind was anathema to one substantial section of feminist thought during the 1980s and 1990s. But, beginning in the 1970s, some feminists argued that feminism needed psychoanalysis for its own purposes: to develop a theory of sexual difference in patriarchal society, in order to answer the question of the nature and origins of women’s oppression and social subordination, and of how society transforms biological sexuality into products of human activity.

In her ground-breaking Psychoanalysis and Feminism (1974), Juliet Mitchell (the New Zealand-born, British-based feminist) argued that:

a rejection of psychoanalysis and of Freud’s works is fatal for feminism…. If we are interested in understanding and challenging the oppression of women, we cannot afford to neglect it. (p. xv)

218

Individuals as driven by unconscious forces

Mitchell claimed that psychoanalysis operates entirely within the human, cultural field (as distinct from the psychobiological field): it represents the best guarantee against a lapse back into the essentialist doctrine of original femininity, that is, what women are rather than what they become.

By contrast, the American feminist Kate Millett (in Sexual Politics, 1969) condemned Freud as the ‘strongest individual counterrevolutionary force in the ideology of sexual politics’ (p. 178). This view of Freud as ‘the principal ideologue in the modern oppression of women, the patriarchal apologist for male chauvinism’ (Appignanesi and Forrester, 2000) was fairly representative of feminist opinion at that time.

The revolution that Millett alludes to above is, of course, the sexual revolution of the 1960s, which had been partly responsible for the women’s liberation movement. The ideal of each individual woman taking control of her life was a consequence of the transformations in legal and medical sexual technology, in particular the advent of the female contraceptive pill (‘the Pill’) and the struggles for abortion law reform. While Freud’s emphasis on sexuality might have been liberating in itself, his account of female sexuality was regarded as highly oppressive. For example, Betty Friedan (in The Feminine Mystique) argued that:

Freud was accepted so quickly and completely at the end of the forties that for over a decade no one even questioned the race of the educated American woman back to the home…. After the depression, after the war, Freudian psychology became much more than a science of human behaviour, a therapy for the suffering. It became an all-embracing American ideology, a new religion…. Freudian and pseudo-Freudian theories settled everywhere, like fine volcanic ash. (Friedan, 1963, pp. 115–116)

So what were the features of the Freudian zeitgeist that feminists found so abhorrent?
These included: (1) the view of a woman as a castrated, stunted man; (2) the claim that women’s superego was weak and underdeveloped compared with men’s (making them morally ‘inferior’); (3) the belief that women’s sexuality was passive and masochistic; and (4) the claim that, rather than being a theory of sexual differences, psychoanalysis was a rationalization and legitimization of already existent social roles (Appignanesi and Forrester, 2000).

Two specific features aroused the greatest anger and opposition to Freud’s work: (1) penis envy and (2) the vaginal orgasm. Penis envy is central to the girl’s Oedipus complex, whereby, once she has observed anatomical sex differences, she looks to her father to give her a baby (unconsciously, a penis substitute). Famously, Karen Horney (1924) and Clara Thompson (1943), both eminent psychoanalysts, argued that while Freud’s observations of penis envy were correct, what females envied was not the male’s sexual ‘privileges’ but his social privileges:

The penis of penis envy is not a real, bodily penis, it is simply a phallic symbol, but not a Freudian phallic symbol. The young girl immediately perceives it as a symbol of power and prestige. In this interpretation, penis envy is a real and justified index of female oppression in a patriarchal society. (Appignanesi and Forrester, 2000, p. 458)

Moreover, it is men, not women, who equate lack of a penis with inferiority!

Freud seems to have been largely responsible for the ‘Myth of the Vaginal Orgasm’ (Koedt, 1974). He claimed that clitoral orgasm is ‘adolescent’ (i.e. immature) and that when women start having regular sexual intercourse they should transfer the ‘site’ of the orgasm to the vagina. (This, of course, is consistent with his view of the superior, active, ‘penetrating’ male: the woman is merely a passive recipient of the penis.) Also, whereas for men psychoanalytic therapy aims to develop their capacities, for women the aim, according to Freud, is to help them resign themselves to their limited, inferior sexuality. Interestingly, Koedt cites evidence that women who do prefer a vaginal orgasm tend to be more anxious and to experience their bodies as more depersonalized (i.e. less ‘their own’).

In line with medical opinion at the time, Freud argued that to have a sensitive clitoris is to be flawed by male and scientific standards. According to Bleier (1984):

Obscured by this scientific mysticism about pleasure and women’s eroticism was scientific information that only the modern women’s movement made widely available to women – that the clitoris is richly supplied with nerve endings, while the vagina … is not so supplied. Instead of sexual surgery, the far more subtle and pervasive medical ‘treatment’ of women for their sexuality or their restlessness became their assignment to neurotic immaturity and psychoanalysis. (p. 170)

Freud’s theory of the vaginal orgasm required women to deny their own senses and knowledge about their own eroticism in order to be a mature female; the effects were profound and far-ranging, deepening many women’s sense of inferiority, inadequacy, and guilt. It

reinforced the phallocentricity of sexuality by defining women’s sexuality in terms only of the penis and her sexual ‘normality’ in terms of her orgasmic ability in the conventional missionary position of heterosexual intercourse. (Bleier, 1984, p. 171)

The ‘solution’ for many women was to fake orgasm with vaginal intercourse, something recommended in gynaecological textbooks at least into the 1960s (Scully and Bart, 1973).
Freud’s theories on women’s sexuality pervaded popular consciousness and were also incorporated into everyday medical opinion and practice. The findings of Kinsey and of Masters and Johnson regarding the importance of clitoral responses were largely ignored. It was through the women’s movement that these findings became widely available; this, along with greater openness and acceptability of discussion around masturbation and lesbianism ‘have greatly advanced the liberation of women’s sexual pleasure from mere service to men’s sexuality, from male standards and values in sexual practices, and from phallocentrism’ (Bleier, 1984, p. 173).

Pause for thought – answers

1 Richards (2010) suggests the following examples:

(a) having an inferiority or mother complex;
(b) projecting one’s anger;
(c) regressing;
(d) doing something unconsciously;
(e) suffering from neurotic anxiety;
(f) being repressed or fixated;
(g) having a fragile persona;
(h) having an inflated ego;
(i) free-associating.

Others might include: Oedipus complex, penis envy, rationalization, displacement (‘kicking the cat’), erogenous zones, anal-retentive personality, ego-trip.

2 The principle of reinforcement. According to Skinner, behaviour that results in either positive reinforcement (the presentation of something pleasant, such as food) or negative reinforcement (the removal or avoidance of something unpleasant, such as electric shock) is more likely to be repeated.

3 Look back at Box 1.7 and highlight the parallels in the two accounts. Despite coming from radically different overall theoretical positions regarding free will/determinism, there’s remarkable common ground.

4 Rosenhan and Seligman (1984) propose an interaction between biological and conditioning factors that predisposes us to acquire certain phobias more readily than others.

- According to the concept of (biological) preparedness (or prepared conditioning), we’re genetically predisposed to be afraid of things that were the source of danger in our evolutionary past. However, while certain animals that might have posed a threat to our ancestors tend to be those about which we’re most likely to develop a phobia (such as snakes and spiders – both of which are poisonous in many parts of the world), whether or not we do so depends very much on our early experiences with them (Workman and Reader, 2008).
- Hugdahl and Ohman (1977) have shown that in laboratory experiments, people without pre-existing phobias can be conditioned more easily to snakes than to flowers. However, snakes have a negative ‘reputation’ or social status, while flowers are generally viewed positively and are completely non-threatening. So, although preparedness as an explanation for direct conditioning may not be valid, preparedness for observational and instructional learning (learning from others) is possible (Murray and Foote, 1979; Rachman, 1977).


Chapter 10

People as self-determining organisms: Humanistic-phenomenological and Positive Psychology

During the 1950s, certain psychoanalysts and other psychotherapists encountered a puzzling phenomenon: social standards had become far more permissive than in Freud’s day, especially with regard to sexuality. In theory, this more liberal attitude should have helped to reduce id–superego conflicts and the number of resulting neuroses (see Chapter 9). However, while hysterical neurosis and repression did seem to be less common than during the Victorian era, more people than ever were opting for psychotherapy:

And they suffered from such new and unusual problems as an inability to enjoy the new freedom of self-expression (or … to feel much of anything), and an inner emptiness and self-estrangement. Rather than hoping to cure some manifest symptom, these patients desperately needed an answer to a more philosophical question: how to remedy the apparent meaninglessness of their lives. (Ewen, 1988, p. 369; emphasis added)

Some theorists approached this development from within a psychodynamic perspective (such as Erikson’s identity crisis theory and Fromm’s escape from freedom). Freud’s insights may well have been brilliantly relevant to the Victorian mentality (sex was ‘officially’ repulsive – at the very least not meant to be pleasurable, especially for women – and people were rational and fully aware of their motives etc.; see Chapter 9). However, constructs such as psychic determinism and id, ego, and superego, together with Freud’s pessimistic view of human nature, were now reinforcing the modern patient’s apathy and depersonalization by portraying personality as mechanical, fragmented, malignant, and totally preordained by prior causes (Ewen, 1988).

Two of the better-known and most outspoken critics of Freudian pessimism were Abraham Maslow and Carl Rogers; they shared the view that human nature is inherently positive, healthy, and constructive, albeit coming from the positions of academic Psychologist and psychotherapist, respectively. Both believed that we strive to fulfil our potential ((self-)actualization) unless we’re prevented from doing so by destructive external forces; such destructive influences are very common, making (self-)actualization extremely difficult to achieve. Their respective ‘solutions’ to the changes within psychotherapy described above represented a major alternative to both the psychodynamic approach and the application of Behaviourist principles in behaviour therapy (or ‘behavioural psychotherapy’; see Chapter 6).

Eastern and Western Psychology compared: inner, outer, or both?

Much of the discussion of Psychology in previous chapters has focused on what, strictly, should be referred to as ‘Western Psychology’, an empirical science that, as revealed through concepts such as ‘scientism’ and ‘methodolatry/methodologism’ (see Chapter 3), puts objectivity, measurement, cause and effect, experimentation, and so on above its subject-matter in importance (‘method’ before ‘meaning’). The literal meaning of ‘psychology’ (from the Greek psyche = ‘soul’ and logos = ‘study’) no longer (if, indeed, it ever did) reflects what takes place in its name within the Western ‘version’. According to Graham (1986):

All knowledge is fundamentally cosmology inasmuch that it is an attempt by man to explore the universe in which he finds himself, and to understand thereby his own existence and nature. In the sense that personality, intellect, will and emotions comprise the human self, essence, or soul, man’s attempts to understand himself constitute the study of the soul, or literally … psychology. Cosmology is thus intrinsic to psychology. (Graham, 1986, p. 11; emphasis added)

However, as Graham points out, cosmologies differ, often radically, between peoples and cultures; so, not surprisingly, ‘psychology’ differs according to the worldview within which it is embedded. In contrast to Western science’s positivism (see Chapter 2), Eastern culture and its institutions (including its religions, Buddhism, Taoism, and Zen Buddhism) are traditionally humanistic: they’re centred around the human potential for self-transcendence or becoming (i.e. to place value outside oneself, to pursue some higher purpose or cause; Batson and Stocks, 2004). Eastern Psychology is rooted in the tradition of mysticism, with an emphasis on the spiritual (in the non-religious sense), the subjective, and the individual.
So, while Eastern Psychology’s dominant ethos is humanistic, that of Western Psychology is mechanistic and impersonal. As Fromm (1951) puts it, Psychology:

[in] trying to imitate the natural sciences and laboratory methods of weighing and counting, dealt with everything except the soul. It tried to understand those aspects of man which can be examined in the laboratory, and claimed that conscience, value judgements, and knowledge of good and evil are metaphysical concepts, outside the problems of psychology; it was often more concerned with insignificant problems which fitted the alleged scientific method than with devising new methods to study the significant problems of man. Psychology thus became a science lacking its main subject matter, the soul. (pp. 13–14)

As Graham (1986) puts it, ‘Bereft of its soul or psyche, psychology became an empty or hollow discipline; study for its own sake’ (p. 21). This, of course, raises again the fundamental issue as to what is the appropriate subject-matter for Psychology, or, put another way, what should its subject-matter be? (See Chapters 1 and 4.)

- Having familiarized yourself with the history of Western Psychology (see Chapter 1), and in the light of the discussion above of Eastern Psychology, what do you consider the focus of Psychology should be?
- An alternative way of thinking about this issue is to ask what, ultimately, are we trying to find out about ourselves (i.e. human beings)?

In as much as Western Psychology is no longer faithful to its original subject-matter, the soul, it could be argued (especially from an Eastern perspective) that it's not true Psychology at all. Many Western Psychologists would deny that the spiritual traditions of the East constitute 'Psychology' in anything but the loosest sense. What's at issue here is the fundamental perspective adopted by each: Psychology East and West represent the polar extremes of mystical insight and scientific outlook, respectively (Graham, 1986).

Graham observes that this dichotomy can be seen as corresponding to the two aspects of mind as conceived by Indian thought: the inward-looking aspect directed towards the essential nature of human beings, and the outward-looking aspect directed towards the world of things and external appearances. In the Eastern tradition, both aspects are viewed as complementary facets of one whole or unity; virtue and harmony consist in maintaining a dynamic balance between them. However, humans tend to divide and separate, emphasizing one or the other (what Ornstein, 1976, likens to blindness to half of the visual field/hemianopia).

In the pursuit of understanding innermost being the cultures of the East, most notably India, have tended to ignore the material world, developing their spiritual, poetic, artistic and mystical traditions and cultivating thereby an attitude to life quite alien to Western eyes. For in the West, with its reverence for the intellect and rationality, and the outward appearance of things, the inner man is neglected as science and technology progress apace.
(Graham, 1986, p. 21)

If Western science in general, and Psychology in particular, is 'blind' to the 'inner' human being, then they may be 'missing the point' in terms of their ultimate aims (or, at least, missing half the point!).

- Looking back at Chapter 2, try to explain how mainstream Psychology dehumanizes the person.

As Graham (1986) observes, scientific method is implicitly reductionist (from the Latin reducere = 'to take away'). Mainstream Psychology, in reducing the study of man to 'objective facts' (i.e. overt behaviour) and 'banning' the study of experience, takes away what is 'essentially and fundamentally his humanness' (p. 25). This reduces people to mere things or objects; it's a small step from this to accepting the idea that man is a machine, and nothing but a machine (Heather, 1976).


Psychology, in adopting the mechanistic formulations of nineteenth century physical science, did precisely that. Man came to be viewed as functioning like a clock or engine, the workings or mechanisms of which could be elucidated and regulated by psychological science, hence the notion that in identifying these mechanisms or laws of behaviour, psychologists are discovering what makes man 'tick'.
(Graham, 1986, pp. 25–26)

Humanistic Psychology: the 'third force'

As a reaction against such a mechanistic, dehumanizing view of the person, and in an attempt to reconcile Eastern and Western perspectives, Abraham Maslow in particular, along with other Humanistic Psychologists (including Rogers, Fromm, and Rollo May), emphasized the 'human' characteristics of human beings. (It seems ridiculous that it was necessary at all to explicitly address these characteristics: it seems 'obvious' that Psychology would be concerned with subjective experience, but we know from Chapter 1 why this became necessary.) The term 'Humanistic Psychology' was first coined by John Cohen, a British Psychologist (in a 1958 book with that title), aimed at condemning 'ratomorphic robotic psychology'. But it was primarily in the US, and especially through the writings of Maslow, that Humanistic Psychology became popularized and influential, being hailed as a 'third force' (the other two being Behaviourism and Psychoanalytic theory). Rather than rejecting these two major approaches, Maslow hoped that his approach would act as a unifying force, integrating the subjective and objective, the private and public aspects of the person, providing a complete, holistic Psychology. He insisted that a truly scientific Psychology must embrace a humanistic perspective, treating its subject-matter as fully human. Before we look at what 'fully human' means, let's consider some of the influences on Maslow's thinking.

BOX 10.1 KEY THINKER: Abraham Maslow (1908–1970)

- Maslow was the oldest child of Jewish immigrants to New York City from Russia.
- His childhood was both economically and emotionally deprived, and growing up in a non-Jewish neighbourhood made him feel isolated and discriminated against. He found solace in libraries and learning.
- He originally enrolled at Cornell University (at that time the only Ivy League institution willing to accept more than token numbers of Jewish students), but soon transferred to the University of Wisconsin, having been greatly influenced by Watson's Behaviourism.
- At Wisconsin, Maslow gained his Bachelor's degree in Psychology, then his PhD (in 1934). His doctoral thesis, supervised by Harry Harlow, dealt with the sexual behaviour of monkeys. During this time, still a committed Behaviourist, Maslow began to read the works of Freud and Adler.

Figure 10.1 Abraham Maslow.

- In 1935, he contributed to an American Psychological Association symposium at Michigan, chaired by the eminent Edward L. Thorndike; as a consequence, Thorndike offered Maslow a postdoctoral fellowship to work as his assistant at Columbia Teachers College on a large project ('Human Nature and the Social Order'). This was a huge breakthrough for a Jewish academic, with anti-Semitism still rife within most Psychology departments.
- He married his high-school sweetheart, Bertha Goodman, in 1928; his first-hand experience with his two daughters soon persuaded him to abandon his Behaviourist convictions.
- In 1937, he accepted a low-paid, rather modest post at Brooklyn College. But on his own initiative, he established personal friendships with several neo-Freudian psychoanalysts (including Adler, Fromm, and Horney; see Chapter 9) and Gestalt Psychologists (Wertheimer and Kurt Goldstein (1878–1965); see Chapter 7), along with some important anthropologists, mainly émigrés from Nazi Germany and other parts of Europe.
- Goldstein was actually a neurologist, who applied Gestalt concepts in his analysis of brain-injured soldiers: the brain as a whole seemed to try to take over the functions of damaged areas. He described this overarching tendency of the human 'organism' to maintain its integrity and wholeness, despite injury, as a motive towards 'self-actualization'.
- One of the anthropologists he befriended was Ruth Benedict, who persuaded Maslow to spend several weeks living within a Blackfoot Indian community in western Canada. He concluded from this experience that (1) cultural factors set conditions within which specific personality traits are more or less likely to occur; but also that (2) all human beings share a basic humanity and basic needs that override cultural differences. As Leibniz had put it in the 1600s, experience shapes but doesn't create the mind, which has a pre-existing structure (see Box 2.5, page 35).

(Based on Ewen, 1988; Fancher and Rutherford, 2012)

Major features of Humanistic Psychology

1 Humanistic Psychology acknowledges individuals as perceivers and interpreters of themselves and their world, trying to understand the world from the perceiver's perspective, rather than from the position of a detached observer. This represents a phenomenological approach, which is described in Box 10.2. Both Gordon Allport (see Chapter 12) and George Kelly (see Chapter 4) were influenced by phenomenology.

BOX 10.2 Phenomenology

- Edmund Husserl (1859–1938), the founder of phenomenology as a philosophical movement, was critical of both the introspectionism of Wilhelm Wundt and Watson's Behaviourism (see Chapter 1).
- However, his fundamental aim was to provide a firm basis for all disciplines – sciences, arts, and humanities – by establishing the meaning of their most fundamental concepts (such as 'perception' in Psychology) through providing a valid method.
- To achieve this, Husserl decided to begin with the problem of how objects and events appeared to consciousness: nothing could even be spoken about or witnessed if it didn't come through someone's consciousness (which included preconscious and unconscious processes).
- He advocated (Husserl, 1925/1977, 1931/1960, 1936/1970) a return to the things themselves, as experienced.
- His core philosophical belief was a rejection of the presupposition that there's something 'behind' or 'underlying' or 'more fundamental than' experience. Rather, what appears is to be taken as 'reality': we should begin our investigation with what is experienced, the thing itself as it appears (i.e. the 'phenomenon').
- While phenomenology was originally intended to apply to all disciplines, the infant science of Psychology's concern with conscious thought, not surprisingly, soon became the most 'fertile' area for Husserl's approach.
- Contrary to positivism, Husserl maintained that human experience in general is not a lawful response to the 'variables' assumed to be in operation. Rather, experience comprises a system of interrelated meanings (or gestalten) that's bound up in a totality (the 'lifeworld') (Husserl, 1936/1970): the human realm essentially entails embodied, conscious relatedness to a personal world of experience. The natural scientific approach is inappropriate: human meanings are the key to the study of lived experience, not causal variables.
- For phenomenology, then, the individual is a conscious agent, whose experience must be studied from the 'first-person' perspective. Experience is of a meaningful lifeworld.
- Because of the crucial influence of phenomenology, the approach of Maslow, Rogers, etc. is often referred to as the Humanistic-phenomenological approach.

(Based on Ashworth, 2003; Giorgi and Giorgi, 2003)

2 Humanistic Psychology recognizes that people help determine their own behaviour and aren't simply slaves to environmental contingencies (as proposed by Skinner) or to their past (as proposed by Freud). Probably the most well-developed account of free will within Humanistic Psychology is that of Rogers. If we want to understand another person, experience is all-important; in particular, we need to understand his/her self-concept. Every experience is evaluated in terms of our self-concept, and most human behaviour can be regarded as an attempt to maintain consistency between our actions and our self-image (see Chapter 12).

Understanding the self-concept is also central to Rogers' client-/person-centred therapy. His experience over many years as a therapist convinced him that real change does occur in therapy: people choose to see themselves and their life situation differently. Therapy and life are about free human beings struggling to become more free. While personal experience is important, it doesn't imprison us; how we react to our experience is something we ourselves choose and decide (Morea, 1990).

However, we sometimes fail to acknowledge certain experiences, feelings, and behaviours if they conflict with our (conscious) self-image: they're incongruent precisely because they're not consistent with our view of ourselves, which makes them threatening. They're denied access to awareness (they remain unsymbolized) through actual denial, distortion, or blocking; these defence mechanisms prevent the self from growing and changing, and widen the gulf between our self-image and reality (our true feelings, and our actual behaviour). Defensiveness, lack of congruence, and an unrealistic self-concept can all be seen as a lack of freedom, which therapy is designed to restore.

Rogers' view of human beings as growth-oriented contrasts dramatically with Freud's view (in Civilization and its Discontents, 1930) of human beings as 'savage beasts', whose aggressive tendencies and unpredictable sexuality can only be controlled by civilization's structures. However, Rogers' deep and lasting trust in human nature didn't blind him to the reality of evil behaviour:

In my experience, every person has the capacity for evil behaviour. I, and others, have had murderous and cruel impulses, desires to hurt, feelings of anger and rage, desires to impose our wills on others…. Whether I, or anyone, will translate these impulses into behaviour depends … on two elements: social conditioning and voluntary choice…. I believe that, theoretically at least, every evil behaviour is brought about by varying degrees of these elements.
(Rogers, 1982, in Thorne, 1992)

By distinguishing between 'human nature' and 'behaviour', Rogers manages to retain his optimistic view of human beings ('good people can behave badly'). But in Freedom to Learn for the 80s (1983), he states that science is making it clear that human beings are complex machines and their behaviour is determined.

- How do you think Rogers might reconcile this belief in determinism with self-actualization, psychological growth, and freedom to choose? (Looking back at William James' views in Chapter 2, are there different kinds of determinism?)

One proposed solution is a version of soft determinism. Unlike neurotic and incongruent people, whose defensiveness forces them to act in ways they’d prefer not to, the healthy, fully functioning person chooses to act and be the way she/he has to: it’s the most fulfilling option (Rogers, 1983). To whatever extent you may agree with Rogers, Humanistic Psychologists regard the self, soul, or psyche, personal responsibility and agency, choice and free will, as legitimate issues for Psychology. Indeed, these in many ways define what it means to be human.

Maslow’s contribution Maslow is probably best known for his (1) hierarchy of human needs (1954) and (2) his study of self-actualizers.

Hierarchy of needs

According to Maslow, human beings are subject to two quite different sets of motivational states or forces:

- those that ensure survival by satisfying basic physical and psychological needs (physiological, safety, love and belongingness, and esteem needs) – these deficiency motives (or D-motives) are mostly engaged in because they satisfy those needs (they're a means to an end, exceptions being sexual arousal, elimination, and sleep); and
- those that promote self-actualization, that is, realizing one's potential, 'becoming everything that one is capable of becoming' (Maslow, 1970), especially in the intellectual and creative domains – these growth, being, or B-motives are intrinsically satisfying (satisfying in themselves). Examples include being a good doctor or carpenter, playing the violin, the steady increase of understanding about the universe or about oneself, the development of creativeness in whatever field, and, most importantly, simply the goal of becoming a good human being (Maslow, 1968).

We share the need for food with all living creatures, the need for love with (perhaps) the higher apes, and the need for self-actualization with no other species.

Traditionally, the hierarchy has been presented in textbooks as a triangle or pyramid, with physiological needs at the base and self-actualization needs at the apex. According to Rowan (2001), Maslow himself never presented it in this form; he believes that it's much more logical to portray it as a simple ladder. Rowan also proposes that 'competence or mastery' should be inserted between 'safety' and 'love and belongingness'. He also distinguishes between 'self-esteem' needs and 'esteem from others' needs; these are two quite different things, as Maslow himself later observed (1965).

However the hierarchy might be pictured, it was intended to emphasize the following points:

1 Needs lower down in the hierarchy must be satisfied before we can fully attend to needs at the next level above.

- Try to think of some specific examples of lower-level needs that must be met before higher-level needs can be addressed.
- Try to think of an exception to this general rule.

If you’re trying to concentrate on what you’re reading while your stomach is rumbling with hunger, you probably won’t absorb (pun intended!) much of Maslow’s theory; similarly if you’re tired or in pain (Gross, 2015). However, exceptions include the starving artist who finds inspiration despite hunger or the mountaineer who risks his/her life for the sake of challenge and adventure (what Maslow would call a peak experience – again, pun intended!) (see below). 2 Higher-level needs are a later evolutionary development in the development of the human species (phylogenesis); self-actualization is a relatively recent need to appear. This applies equally to the development of individuals (ontogenesis): clearly, babies are more concerned with their bellies than their brains; however, this is always a relative preference (babies’ brains need stimulation from birth, but this becomes relatively more important as they get older). 3 The higher up the hierarchy we go, the greater the need becomes linked to life experience, and the less its biological ‘flavour’. Individuals will achieve self-actualization in different ways, through different activities, and by different routes; this is related to experience, not biology: Self-actualization is idiosyncratic, since every person is different…. The individual [must do] what he, individually, is fitted for. A musician must make music, an artist must paint, a poet must write, if he is to be ultimately at peace with himself. What a man can be, he must be. (Maslow, 1968, pp. 7, 25; emphasis in original)

230

People as self-determining organisms

4 Following (3), the higher up the hierarchy we go, the more difficult the need is to achieve: many human goals are remote and long-term, and can only be reached in a series of steps. This pursuit of ends that lie very much in the (sometimes quite distant) future is a unique feature of human motivation, and individuals differ considerably in their ability to set and achieve such goals.

Maslow and Rogers compared

While Maslow put 'self-actualization' at the top of his need hierarchy, Rogers preferred the term 'actualizing' (or 'actualizing tendency'); these relate to 'a psychology of being' and 'a psychology of becoming', respectively. According to Graham (1986):

A danger inherent in any psychology of being such as that proposed by Maslow is that it has the tendency to be static and not account for movement, change, direction and growth, with the result that self-actualization or self-discovery comes to be viewed as an end in itself rather than as a process.
(pp. 53–54)

Rogers, although taking a broadly similar view to Maslow, draws particular attention to the individual in the process of becoming a fully functioning person; this is central to his Self Theory, which is discussed in Chapter 12. As Graham (1986) observes, Rogers and Maslow clearly share common emphases, as outlined in Box 10.3.

BOX 10.3 Major similarities and differences between Maslow and Rogers

- They both recognize the fundamental pre-eminence of the subjective and the tendency towards self-actualization; the latter is synonymous with psychological health and represents the realization of the person's inherent capacities for growth and development (both of which are viewed as good or neutral). (But see text above regarding Rogers' use of 'actualization'.)
- Their approach is essentially holistic (rather than reductionist): every individual is a unique totality, no aspect of which can be studied in isolation.
- While Maslow was fundamentally concerned with human motivation and the effects of goals and purposes on behaviour, Rogers was essentially concerned with perception: the primary object for psychological study is the person and the world as viewed by that person him/herself. So, for Rogers, the individual's internal phenomenological frame of reference constitutes the proper focus of Psychology (again, see Chapter 12).
- As we noted earlier, Rogers was first and foremost a therapist. By emphasizing the therapist's personal qualities (genuineness/authenticity/congruence, unconditional positive regard, and empathic understanding), he helped open up the provision of psychotherapy to non-medically qualified therapists ('lay therapy'), including Psychologists. This is especially relevant in the US, where, until recently, only psychiatrists could practice psychoanalysis. Rogers originally used the term 'counselling' as a strategy for silencing psychiatrists who voiced their opposition to Psychologists offering psychotherapy. In the UK, the outcome of Rogers' campaign has been the development of a counselling profession whose practitioners are drawn from a wide variety of disciplines, with neither psychiatrists nor Psychologists dominating (Thorne, 1992).

Maslow's study of self-actualization

Although in theory we're all capable of achieving self-actualization, most of us won't do so – or only to a limited degree. Maslow was particularly interested in the characteristics of people whom he considered to have achieved their potential as persons, including Albert Einstein, William James, Eleanor Roosevelt, Abraham Lincoln, Baruch Spinoza, Thomas Jefferson, and Walt Whitman. In Motivation and Personality (1954, 1970, 1987), he identified 19 characteristics of the self-actualized person, as shown in Table 10.1.

Table 10.1 Characteristics of the self-actualized person

1 Perception of reality: an unusual ability to detect the spurious, the fake, and the dishonest in personality, and in art. Also, not frightened by the unknown and tolerant of ambiguity.
2 Acceptance: both self-acceptance and acceptance of others, a relative lack of overriding guilt, crippling shame and anxiety. Also a lack of defensiveness.
3 Spontaneity: simplicity and naturalness, lack of artificiality or straining for effect, a superior awareness of their own desires, opinions, and subjective reactions in general. Not the same as impulsiveness.
4 Problem-centring: not ego-centred, usually having a mission in life, some problem outside themselves which enlists much of their energies.
5 Solitude: a liking for solitude and privacy, not needing other people in the ordinary sense (and still liking others' company – but being able to choose).
6 Autonomy: self-contained/self-sufficient, able to maintain a relative calm in the midst of circumstances that would drive others to suicide. Self-movers.
7 Fresh appreciation: the ability to see familiar things in a new way ('through a child's eyes'), with awe, wonder, and even ecstasy.
8 Peak experiences: spontaneous mystical experiences.
9 Human kinship: a deep sense of identification, sympathy, and affection, and connection with others (as if we all belonged to a single family).
10 Humility and respect: a democratic character structure in the deepest sense; can learn from anyone who has something to teach them.
11 Interpersonal relationships: these can be profound.
12 Ethics: definite moral standards, although these may not always be conventional.
13 Means and ends: experiences and activities are valued for their own sake – not just as means to an end.
14 Humour: but this is never at other people's expense.
15 Creativity: they are creative (they don't 'have creativity').
16 Resistance to enculturation: maintain a certain detachment from the surrounding culture.
17 Imperfections: can be ruthless, absent-minded, impolite, stubborn, and irritating, and may experience internal conflicts.
18 Values: topmost portion of their value system is entirely unique.
19 Resolution of dichotomies: no conflict between head and heart, reason and instinct (they're synergistic).

Source: Rowan (2001)


Rowan (2001) adds a further 11 characteristics to Maslow's list (Table 10.2).

One way of measuring self-actualization is to study people's peak experiences, moments of ecstatic happiness when people feel most 'real' and alive (see Table 10.1). Maslow (1962) interviewed several people, many of whom were successful in their chosen fields. This confirmed his view that at such moments the person is concerned with 'being' and is totally unaware of any deficiency needs or the possible reactions of others. Csikszentmihalyi (1975) interviewed a wide variety of prominent sportsmen and reported experiences similar to those reported by Maslow, of ecstatically losing themselves in the highly skilled performance of their sport.

Such peak experiences cannot, normally, be consciously planned and, yet, for many, the growth of Humanistic Psychology is almost synonymous with deliberate attempts to enhance personal growth through Encounter Groups and other short, intensive, group experiences. (The 'Encounter Group movement', popular in California during the 1960s in particular, is more closely associated with Rogers' 'psychology of becoming' than with Maslow's 'psychology of being'; see above.)

Whatever the empirical support or otherwise for Maslow's idiographic theory, it undoubtedly represents an important counterbalance to the nomothetic approach of other personality theorists, such as Raymond Cattell and Hans Eysenck (see Chapter 12), by attempting to capture the richness of the personal experience of being human.

Table 10.2 Rowan's (2001) 11 additional characteristics of self-actualizers

1 Authenticity: combination of self-respect and self-enactment ('walks the talk'). No gap between intentions and actions.
2 Integration: no split between thinking/feeling, mind/body, masculine/feminine, etc. Any new conflicts are worked through.
3 Non-defensiveness: more inclined to acknowledge the truth of what the other person says, rather than try to disprove it.
4 Vision-logic: not constrained by the rules of formal logic or 'bullied by an either–or' (false dichotomy).
5 Paradoxical theory of change: change occurs not by trying to go somewhere you're not, but by staying with what is.
6 The real self: without this, there can be no authenticity: the ultimate, pure 'I' considered as a separate being.
7 'I created my world': we are responsible for everything.
8 Intentionality: this and commitment go very close together.
9 Intimacy: the setting aside of roles.
10 Presence: to be genuinely 'with' another person.
11 Openness: totally receptive and responsive to another.

Humanistic Psychology as a science of human being

According to Wilson et al. (1996), the Humanistic approach doesn't constitute an elaborate or comprehensive theory, but rather should be seen as a set of uniquely personal theories of living created by humane people optimistic about human potential. It has wide appeal to those looking for an alternative to more mechanistic, deterministic theories. Like Freud's theories, many of its concepts are difficult to test empirically and it cannot account for the origins of personality. Since it describes but doesn't explain personality, it's subject to the nominal fallacy (Carlson and Buskist, 1997).

However, also like Freud's theories, it shouldn't be condemned as a whole. As we've seen, self-actualization has been investigated empirically (and not just by Maslow), and Rogers was a prolific researcher, during the 1940s, 1950s, and 1960s, into his client-/person-centred therapy. According to Thorne (1992), this body of research constituted the most intensive investigation of psychotherapy attempted anywhere in the world up to that time. Its major achievement was to establish beyond all question that psychotherapy could and should be subjected to the rigours of scientific enquiry.

In The Psychology of Science: A Reconnaissance (1969), Maslow identified ten major dilemmas faced by everyone trying to study human beings. In most cases, he resolves the dilemma by grasping both sides of it in the manner of Eastern thought (see above).

- Before reading on, remind yourself of the major characteristics of science as identified in Chapter 2 and the challenges to these described in Chapter 3.

1 Humanism vs. mechanism. (Western) science is often seen as mechanistic and dehumanized. Maslow sees his work as concerned with the rehumanization of science, but this doesn't involve rejecting anything: his conception of science in general and Psychology in particular is inclusive of mechanistic science, which he regards as too narrow and limited to serve as a comprehensive philosophy (but not 'wrong').

2 Holism vs. reductionism. If we want to do Psychology, in the sense of learning about people, in practice we often have to approach one person at a time:

Any clinician knows that in getting to know another person it is best to keep your brain out of the way, to look and listen totally, to be completely absorbed, receptive, passive, patient and waiting rather than eager, quick and impatient. It does not help to start measuring, questioning, calculating or testing our theories, categorizing or classifying. If your brain is too busy, you won't hear or see well. Freud's term 'free-floating attention' describes well this non-interfering, global, receptive, waiting kind of cognizing another person.
(Maslow, 1969, pp. 10–11)

If we adopt this approach, we have a chance of being able to describe the person holistically rather than reductively, seeing the whole person, rather than some selected and split-off aspects. But this can only be achieved by approaching the person as a person – not as a physical object.

3 I–Thou vs. I–It. Maslow was way ahead of his time in recognizing the distinction made by the German philosopher Martin Buber (1878–1965) between two ways of approaching another person; this I–Thou vs. I–It distinction (Buber, 1923/1958) is only today being adopted by many others as an important feature of the research process (Rowan, 2001). Maslow takes the 'I–Thou' and compares it with the more conventional starting point of spectator knowledge, advocating the former for all sciences, not just Psychology:

Can all the sciences, all knowledge be conceptualized as a resultant of a loving or caring interrelationship between knower and known? What would be the advantages to us of setting this epistemology alongside the one that now reigns in 'objective science'? Can we simultaneously use both? … we can and should use both epistemologies as the situation demands. I do not see them as contradictory but as enriching each other…. Reality seems to be a kind of alloy of the perceiver and the perceived, a sort of mutual product, a transaction.
(Maslow, 1969, pp. 108–111; emphasis in original)

4 Courage vs. fear. Most research and most knowledge comes from deficiency motivation (i.e. it's based on fear and conducted to allay anxiety, making it basically defensive).

5 Science and sacralization. Science notoriously seems to oppose religion and also emotions such as reverence, mystery, wonder, and awe (this is what's meant by 'desacralization' and is related to (4) above). But is it in the intrinsic nature (essence) of science or knowledge that it must strip away values in a 'countervaluing' way? On the contrary, claims Maslow.

6 Experiential knowledge vs. spectator knowledge. For Maslow, experiential (first-person) knowledge is necessary but not sufficient and shouldn't be separated from verbal-conceptual (third-person) knowledge; they're hierarchically integrated and need each other.

Most psychological problems do and should begin with phenomenology rather than with objective, behavioural laboratory techniques … we must usually press on from phenomenological beginnings towards objective, experimental, behavioural laboratory methods.
(Maslow, 1969, p. 47; emphasis in original)

7 The comprehensive vs. the simple. Scientific work has two directions or goals: one towards simplicity and condensation, the other towards total comprehensiveness and inclusiveness. To make human sense, we should move back and forth between these two goals.

8 Suchness vs. abstraction. 'Abstractness meaning' (classifications), which tends to reduce things to some unified explanation, and 'Suchness meaning' (direct experience of something's true nature), are complementary. 'Cool' scientists might stress abstraction and explanation, while 'warm' scientists might stress suchness and understanding; great scientists integrate both. (This distinction mirrors that between the Naturwissenschaften and the Geisteswissenschaften, respectively; see Chapter 2.)

9 Values and value-free. If we claim that science can tell us nothing about why, only about how, and that it cannot help us choose between good and evil, then science becomes merely an instrument, a technology, to be used equally by good and bad people. But Maslow believes that science can discover the values by which people should live. Science itself is governed by a set of values, and some scientists actually confess to trying to shape the culture as they'd like it to be; certainly, in the human sciences this idea is becoming more popular and a critical approach is valued now more than it has ever been (Rowan, 2001; see Chapter 3).


10 Maturity vs. immaturity. Science is incredibly ‘masculine’ in the sense of idealizing the stereotyped image of the male. Maslow sees this as a sign of immaturity: the mature scientist will have many ‘feminine’ traits as well. (See the feminist critique of science in Chapter 2.)

Summing up, Rowan (2001) claims that:

Humanistic Psychology is not just psychology. It is indebted to Eastern thought. And it is interested in science – not from the point of view of simply accepting the standard view of science as postulated in a myriad academic texts, but rather of creating a newer view of science as a human endeavour, which calls on the whole person rather than just on the intellect. (p. 21)

Rowan goes further: Humanistic Psychology has some claim to be the only true Psychology. Most Psychology, using ‘empiric-analytic inquiry’, makes the classic mistake of trying to study people by using the ‘eye of flesh’ (Wilber, 1983), that is, how we perceive the external world of space, time, and objects; this ‘isolates their behaviour – the observable actions they pursue in the world – and ignores most of what is actually relevant – their intentions, their meanings, their visions’ (Rowan, 2001, p. 20).

By contrast, Humanistic Psychology is the classic way of using the eye of the mind/reason, by which we obtain knowledge of philosophy, logic, and the mind itself. While positivist science (including Behaviourism and Cognitive Psychology) involves a monologue (‘a symbolizing inquirer looks at a nonsymbolizing occasion’), Humanistic Psychology involves a dialogue (‘a symbolizing inquirer looks at other symbolizing occasions’) (Rowan, 2001, p. 22). The paradigm of empiric-analytic inquiry is, ‘I see the rock’; the paradigm of humanistic inquiry is, ‘I talk to you and vice versa’.

Empiric-analytic inquiry can proceed without talking to the object of its investigation – no empirical scientist talks to electrons, plastic, molecules, protozoa, ferns, or whatever, because he or she is studying preverbal entities. But the very field of humanistic inquiry is communicative exchange or intersubjective and intersymbolic relationships (language and logic), and this approach depends in large measure on talking to and with the subject of investigation … any science that talks to its subject of investigation is not empirical but humanistic, not monologic but dialogic. (Rowan, 2001, p. 22; emphasis in original)

In other words, Humanistic Psychology is ‘real psychology, proper psychology, the type of psychology that is genuinely applicable to human beings’ (Rowan, 2001, p. 22).

Humanistic Psychology and Existentialism

So far, we’ve seen how both Eastern philosophy and religion, and Western philosophy (in the form of phenomenology), helped to shape Humanistic Psychology. One further major influence, again from Western philosophy, was Existentialism, which, in turn, has helped generate the recent subdiscipline of Existential Psychology. While most Existentialists are also Phenomenologists, there are many Phenomenologists who aren’t Existentialists. Box 10.4 describes some of the origins of Existentialism.


BOX 10.4 Some of the historical roots of Existentialism
• Existential thinking can be traced back to one of the oldest known written documents, the 4,000-year-old Babylonian Gilgamesh Epic. In it, the hero, Gilgamesh, reflecting on the death of his friend, Enkidu, expresses his fear of his own death; this (‘death terror’) has become one of the core issues within Existentialist philosophy and Existential Psychology (see text below).
• Consideration of existential issues can also be found in the work of the great thinkers of the Western classical era, such as Homer, Plato, Socrates, and Seneca, and continued through the work of theologians such as Augustine and Aquinas.
• Existential issues were also explored in the blossoming arts and humanities of the European Renaissance, as in the writing of Cervantes, Dante, Milton, Shakespeare, and Swift.
• The arts became even more focused on these issues during the Romantic period of the nineteenth century, as in the poetry of Byron, Shelley, and Keats, the novels of Balzac, Dostoyevsky, Hugo, and Tolstoy, and the music of Beethoven, Brahms, Bruckner, and Tchaikovsky.
• More recently, the plays of Beckett, O’Neill, and Ionesco, the classical music of Mahler and Cage, the rock music of John Lennon and the Doors, and the surrealist paintings of Dali, Ernst, Tanguy, and many others have explored fundamental issues relating to human being.

One could even say that virtually everyone who is considered a ‘great artist’ explored existential issues in his or her work in one form or another. Indeed, the expression of deep existential concerns may be the underlying commonality of all great artistic creation. (Pyszczynski et al., 2004, p. 5)

(Based on Pyszczynski et al., 2004)

Within Psychology, a loosely defined Existentialist movement began to emerge, initially as a reaction to orthodox Freudian theory. In Europe, theorists such as Ludwig Binswanger (1881–1966) and Medard Boss (1903–1990), both Swiss psychoanalytic psychiatrists, and Viktor Frankl (1905–1997), an Austrian neurologist, psychiatrist, and Holocaust survivor, argued for the importance of basing our analyses of human behaviour in the phenomenological world of the subject (see Box 10.5). In Binswanger’s words, ‘There is not one space and time only, but as many spaces and times as there are subjects’ (1956, p. 196).

Otto Rank (1884–1939), an Austrian psychoanalyst, was perhaps the first theorist to incorporate existential concepts into a broad account of human behaviour: the twin fears of life and death play a critical role in the development of the child’s self-concept and throughout the lifespan. Rank also discussed art and creativity, the soul and the will, all of which are to be found in later existential psychological theorizing (Pyszczynski et al., 2004).

Like Binswanger, Boss, Frankl, and Rank, other major figures with existential leanings, such as Karen Horney (see Chapter 9), weren’t just influential theorists but practising psychiatrists, psychoanalysts, or both. Horney emphasized our conception of the future as a critical determinant of behaviour. Another noteworthy figure is Erich Fromm (see Chapter 9), perhaps best known for his analysis of the pursuit and avoidance of freedom.


BOX 10.5 KEY THINKER: Viktor E. Frankl (1905–1997)
• Frankl was Professor of Neurology and Psychiatry at the University of Vienna Medical School.
• He founded logotherapy, what’s come to be called the Third Viennese school of psychotherapy (after Freud’s psychoanalysis and Adler’s Individual Psychology).
• Logotherapy, Frankl’s own version of existential psychotherapy, is much less retrospective and introspective than psychoanalysis:

Logotherapy focuses rather on the future … on the meanings to be fulfilled by the patient in his future. (Logotherapy, indeed, is a meaning-centred psychotherapy.) … In logotherapy, the patient is actually confronted with and reoriented toward the meaning of life. And to make him aware of this meaning can contribute much to his ability to overcome his neurosis. (Frankl, 2004, p. 104)

Figure 10.2 Viktor Frankl.

• In 1945, shortly after his release from a Nazi concentration camp, he spent nine intensive days writing a psychological account of his three years in Auschwitz, Dachau, and other Nazi prison camps.
• The original German version bears no title on the cover because Frankl was initially committed to publishing an anonymous report that would never earn its author literary fame. The English version, expanded to include a short overview of logotherapy, first appeared as From Death Camp to Existentialism, and finally under its well-known title, Man’s Search for Meaning (1946/2004).
• The book describes Frankl’s harrowing experiences and his desperate efforts, and those of many fellow inmates, to sustain hope in the face of unspeakable suffering. Those who lost meaning simply gave up and died at Auschwitz. But those who managed to retain some sense of purpose maintained at least some chance of survival.
• Frankl asserted that the human quest for meaning is a fundamental human tendency. Under certain extreme conditions, finding meaning could be the difference between life and death. His Existential Psychology of meaning and purpose was aimed at replacing psychoanalysis and Behaviourism.
• Frankl’s 30 books have been translated into 23 languages, including Japanese and Chinese. He was a visiting professor at Harvard, as well as at other US universities. He received 29 honorary doctoral degrees from around the world.

(Based on Frankl, 2004; McAdams, 2012)


More recent examples of existential practitioner-theorists include R.D. Laing’s involvement in the radical ‘anti-psychiatry’ movement during the 1960s and 1970s (see Chapter 13), Ernest Becker’s discussion of ‘death terror’ (in his classic The Denial of Death, 1973), and Irvin D. Yalom’s ‘givens of existence’ (as discussed in Existential Psychotherapy (1980) and Staring at the Sun: Overcoming the Dread of Death (2008); see below).

Defining Existentialism

According to the philosopher H.J. Blackham (1961):

The peculiarity of existentialism … is that it deals with the separation of man from himself and from the world … existentialism goes back to the beginning of philosophy and appeals to all men to awaken from their dogmatic slumbers and discover what it means to become a human being. (pp. 151–152)

There cannot be any objective, universal answers to the question of what it means to be human: ‘man is and remains in his being a question, a personal choice’ (p. 152) and is more than anything that can be said of him.

Another philosopher, John Macquarrie (1972), believes that part of the problem of defining Existentialism is that what was intended as a serious philosophy has often been vulgarized to the level of a fad. Consequently, ‘existentialist’ gets applied to all sorts of people and activities that are only remotely – if at all – connected with Existentialist philosophy. However, there’s also a kind of elusiveness built into Existentialism itself. Agreeing with Blackham, Macquarrie notes that Existentialist philosophers deny that reality can be neatly packaged in concepts or presented as an interlocking system. Our experience and our knowledge are always incomplete and fragmentary; only a divine Mind, if there is one, could know the world as a whole – and perhaps even for such a Mind there’d be gaps and discontinuities.

Because there’s no common body of doctrine shared by all Existentialists (unlike certain other schools of philosophy), Macquarrie prefers to describe Existentialism as a ‘style of philosophizing’ rather than as ‘a philosophy’. As such, it can lead those who adopt it to very different conclusions regarding the world and human beings’ place in it. This is demonstrated by the three ‘greats’: Søren Kierkegaard (1813–1855), Danish philosopher, theologian, and religious author; Martin Heidegger (1889–1976), German philosopher; and Jean-Paul Sartre (1905–1980), French philosopher, novelist, and playwright.

Despite the lack of ‘doctrine’, it’s possible to identify some recurring Existentialist themes which distinguish it from other philosophical schools/approaches. These include freedom, decision, and responsibility, which constitute the core of personal being. The exercise of freedom and the ability to shape the future are what distinguish humans from all other creatures on the planet (see Gross, 2012): it’s through free and responsible decisions that we become authentically ourselves. This has been expressed in the concept of ‘self as agent’, in contrast with traditional Western philosophy’s ‘self as (thinking) subject’ (especially since Descartes; see Chapters 2 and 12). The focus has been very much on the individual, whose quest for authentic selfhood concerns the meaning of personal being; this implies a view of the person as an isolated, if not dislocated, creature. Other recurring themes include finitude, guilt, alienation, despair, and death (Macquarrie, 1972) (see below).

While the key Existentialist thinkers (including Kierkegaard, Heidegger, and Sartre) approach existential questions from very diverse perspectives and sometimes draw dramatically different conclusions,


[they all] … addressed the questions of what it means to be a human being, how we humans relate to the physical and metaphysical world that surrounds us, and how we can find meaning given the realities of life and death. Most important, they considered the implications of how ordinary humans struggle with these questions for what happens in their daily lives. Thus, existential issues were not conceived of as material for abstruse musings of philosophers and intellectuals, but, rather, as pressing issues with enormous impact on the lives of us all. (Pyszczynski et al., 2004, p. 6)

Experimental Existential Psychology

In his classic text on existential psychotherapy, Yalom (1980) described existential thought as focused on human confrontation with the fundamentals of existence. He viewed Existential Psychology as rooted in Freudian psychodynamics, in the sense that it explored the motivational consequences of important human conflicts. However, the fundamental conflicts of concern to Existentialists are very different from those emphasized by Freud (namely, those involving ‘suppressed instinctual strivings’ or ‘internalized significant adults’) (see Chapter 9); they focus on conflicts that flow from the individual’s confrontation with the givens of existence (see Box 10.6).

In other words, Existential Psychology attempts to explain how ordinary human beings come to terms with the basic facts of life that we all have to deal with; these are deep, potentially terrifying issues, and, consequently, people typically avoid confronting them directly. Indeed, many people claim that they never think about such things. Nevertheless, Yalom argues, these basic concerns affect us all – whether we realize it or not.

BOX 10.6 The ‘givens of existence’ (Yalom, 1980)
• Fear of death. The inevitability of death is a simple fact of life of which we’re all aware; this awareness in an animal that desperately wants to live creates a conflict that cannot be brushed aside. This is the most-studied of Yalom’s four ‘givens’ (see Gross, 2012).
• Freedom. The concern with freedom reflects the conflict between (a) a desire for self-determination/self-control, and (b) the sense of groundlessness and ambiguity that results when we realize that much of what happens in our lives is really up to ourselves – and that there are few, if any, absolute rules to live by.
• Existential isolation. No matter how close each of us becomes to another, there remains a final, unbridgeable gap; we each enter life alone and must depart from it alone. This fundamental isolation is the inevitable consequence of the very personal, subjective, and individual nature of human experience that can never be fully shared with another being.
• Meaninglessness. This is a result of the first three givens. In a world where the only true certainty is death, where meaning and value are subjective human creations rather than absolute truths, and where one can never fully share one’s experience with others, what meaning does life have? The very real possibility that human life lacks meaning lurks just behind the surface of our attempts to cling to whatever meaning we can find or create. According to Yalom, the crisis of meaninglessness stems from the dilemma of a meaning-seeking creature who finds itself in a universe that has no meaning.


All these ‘givens of existence’ have become the focus of the (relatively) new subdiscipline of Experimental Existential Psychology (known as XXP). Yalom acknowledged that these four concerns were by no means a complete list; others that have been (and are being) actively explored under the XXP banner include: how we humans fit into the physical universe, how we relate to nature, and how we come to terms with the physical nature of our bodies; questions regarding beauty, spirituality, and nostalgia; and questions about the role of existential concerns in intrapersonal, interpersonal, and intergroup conflict.

Another of these additional concerns is identity. We all feel the need to ‘find ourselves’ – to make sense of our diverse views and experiences of the world, and to integrate them into a coherent and consistent sense of who we are. Uncertainty about our identity can lead to defensive psychological moves, such as more zealous defence of our attitudes (McGregor, 2006).

Work on the self has told us a great deal about the malleability and multiplicity of identities, their socially constructed nature, and the desire to sustain a coherent sense of self over the lifespan – a story about the self, or self-narrative. (Greenberg, in Jones, 2008)

• Does any one of the ‘givens of existence’ and other existential issues described above strike you as more fundamental – or have more personal significance for you – than the others? Try to identify the reasons for your choice.
• Are there any existential concerns that you think could/should be added to the list?

According to Jones (2008), XXP crystallized out of a 2001 conference organized by Psychologists Jeff Greenberg, Sander Koole, and Tom Pyszczynski. The Handbook of Experimental Existential Psychology was published in 2004. Since then, the field has continued to grow: hundreds of researchers around the world are exploring the ‘human confrontation with reality’. This has required crossing traditional academic boundaries, since traditionally it has been existentialist philosophers (such as the ‘great three’; see above), novelists (such as Camus), and psychotherapists (such as Yalom), plus cultural anthropologists (such as Becker), who have addressed such issues. A crucial factor in overcoming scepticism about the very possibility of a Psychology of existential concerns has been the development of rigorous experimental tools for probing these issues. The rise of XXP has coincided with, and drawn inspiration from, renewed interest in the role of non-conscious psychological processes in guiding attitudes and behaviour (such as the ‘cognitive unconscious’; see Chapter 9). Indeed, a key tenet of XXP is that the effects of existential concerns are mediated by processes outside of conscious awareness (Jones, 2008).

Terror management theory

As we noted in Box 10.6, fear of death is the most-studied of Yalom’s ‘givens of existence’. Terror management theory (TMT) (e.g. Solomon et al., 1991a, 1991b, 2004) represents a broad theoretical account of how we cope with this fundamental fact of life. According to TMT, human beings, like all forms of life, are the products of evolution by natural selection; over extremely long periods of time, they have acquired adaptations


(either gradually or in abrupt ‘punctuated’ moments; Gould, 2002) that enabled individual members of their species to successfully compete for resources needed for survival and reproduction in their respective environmental niches (see Chapter 8).

So, what are the distinctive human evolutionary adaptations? One answer relates to our highly social nature, linked, in turn, to our vast intelligence:

These attributes fostered cooperation and division of labour and led to the invention of tools, agriculture … and a host of other very useful habits and devices that allowed our ancestral forbears to rapidly multiply from a small band of hominids in a single neighbourhood in Africa to the huge population of Homo sapiens that currently occupy almost every habitable inch of the planet. (Solomon et al., 2004, p. 16)

A major aspect of human intelligence is self-awareness: we’re alive and know we’re alive, and this sense of self enables us to reflect on the past and contemplate the future, which help us function effectively in the present. While knowing we’re alive is tremendously uplifting and potentially joyous and awe-inspiring, we’re also perpetually troubled by the knowledge that all living things, including ourselves, ultimately die: death can rarely be anticipated or controlled (Kierkegaard, 1944/1844). Human beings, therefore, by virtue of our awareness of death and our relative helplessness and vulnerability, are in constant danger of being overwhelmed by terror; this terror is compounded by our profound unease at being corporeal creatures (creatures with a body) (Rank, 1941/1958). Becker (1973) neatly captured this uniquely human existential dilemma like this:

Man … is a creator with a mind that soars out to speculate about atoms and infinity…. Yet at the same time, as the Eastern sages also knew, man is a worm and food for worms. (p. 26)

Homo sapiens solved this existential dilemma by developing cultural worldviews: humanly constructed beliefs about reality, shared by individuals in a group, that serve to reduce the potentially overwhelming terror resulting from death awareness.

Culture reduces anxiety by providing its constituents with a sense that they are valuable members of a meaningful universe. Meaning is derived from cultural worldviews that offer an account of the origin of the universe, prescriptions of appropriate conduct, and guarantees of safety and security to those who adhere to such instructions – in this life and beyond, in the form of symbolic and/or literal immortality. (Solomon et al., 2004, p. 16)

Pause for thought …
1 What do you understand by ‘symbolic immortality’? Give some examples.
2 How can we achieve literal immortality?


According to the mortality salience hypothesis, if cultural worldviews and self-esteem provide beliefs about the nature of reality that function to reduce anxiety associated with death awareness, then asking people to think about their own mortality (the mortality salience paradigm (MS paradigm)) should increase the need for the protection provided by such beliefs. (For discussion of XXP research using the MS paradigm and other research relating to the TMT, see Greenberg et al., 2004 and Gross, 2012.)

Conclusions: Humanistic Psychology and Positive Psychology

Positive Psychology (PP) is about ‘happiness’ (Seligman, 2003). It can be defined as the scientific study of the positive aspects of human subjective experience, of positive individual traits, and of positive institutions. It can be understood as a reaction against Psychology’s almost exclusive emphasis on the negative side of human experience and behaviour, namely mental illness, during the second half of the twentieth century.

The history and origins of PP

PP as we know it can be traced to Martin Seligman’s 1998 presidential address to the American Psychological Association (Seligman, 1999). He realized that Psychology had largely neglected two of its three pre-Second World War aims, namely (1) helping all people lead more productive and fulfilling lives, and (2) identifying and nurturing talent and giftedness. Following the end of the war in 1945, with the establishment of the US Veterans Administration (1946) and the US National Institute of Mental Health (1947), the third aim – curing mental illness – became, as we noted above, the focus of Psychology; it became a healing discipline, based on a disease model and illness ideology (Linley et al., 2006; see Chapter 13). However, according to Linley et al.:

Positive psychology has always been with us, but as a holistic and integrated body of knowledge, it has passed unrecognized and uncelebrated, and one of the major achievements of the positive psychology movement to date has been to consolidate, lift up, and celebrate what we do know about what makes life worth living. (p. 4)

Research into what we now call PP has been taking place for decades. In broad terms, PP has common interests with aspects of Humanistic Psychology, in particular the latter’s emphasis on the fully functioning person (Rogers, 1951) and the study of healthy individuals (Maslow, 1968). More than 50 years ago, Maslow stated:

The science of psychology has been far more successful on the negative than on the positive side. It has revealed to us much about man’s shortcomings, his illness, his sins, but little about his potentialities, his virtues, his achievable aspirations, or his full psychological height. It is as if psychology has voluntarily restricted itself to only half its rightful jurisdiction, and that, the darker, meaner half. (Maslow, 1954, p. 354)


Maslow even talked specifically about a positive psychology, although he meant a more exclusive focus on people at the extreme positive end of the distribution, rather than what’s understood today by PP. Nevertheless, in a broad sense, there’s a strong convergence between the interests of Humanistic Psychology and modern PP (Linley, 2008). Given the often contentious relationship between PP and Humanistic Psychology, Seligman et al.’s (2005) acknowledgement that PP has built on the earlier work of Rogers and Maslow represents quite a significant development (Joseph and Linley, 2006).

While Rogers, Maslow, and Positive Psychologists shared the aim of wanting to understand the full range of human experience, Rogers and Maslow were also vigorous critics of the medical (or disease) model as applied to Psychology; it was their alternative view of human nature that made their positive Psychology also a Humanistic Psychology (Joseph and Linley, 2006). Rogers and Maslow recognized that Psychology’s adoption and application of the medical model could help those with psychological problems. But at the same time it also served to alienate and damage people: the medical model (then and now) has pervaded how Western culture views psychological problems and, as such, it was implicitly accepted and went unchallenged (see Chapter 13). While some Positive Psychologists are similarly critical of the medical model, PP as a movement largely continues to operate within it and so condones the medicalization of (certain kinds of) human experience.

Pause for thought – answers
1 Symbolic immortality can be achieved by perceiving oneself as part of a culture that endures beyond one’s lifetime, or by creating visible testaments to one’s existence in the form of great works of art or scientific accomplishments, impressive buildings or monuments, amassing vast fortunes or properties, or (simply) by having children.
2 Literal immortality is achieved via the various afterlives promised by almost all organized world religions.


Chapter 11
People as diverse: group and individual differences

All the major theoretical perspectives discussed in previous chapters (especially Chapters 1 and 5–10) have focused on universals, i.e. the defining processes, abilities, and characteristics of (human) psychology. As we noted (in, say, Chapter 7), these processes and abilities are often examined as if they’re disembodied: for example, remembering is typically ‘laid bare’ through controlled experiments, in which the assumption is made that the particular participants being tested are typical or representative of ‘people’ (i.e. people in general). Having controlled for ‘participant variables’ (i.e. any differences between participants that might affect the outcome), usually through random assignment to different experimental conditions, the researcher is then ‘free’ to focus on manipulating key variables that relate directly to the hypothesis being tested.

This describes how mainstream Psychology largely proceeds; it corresponds to the respects in which every man is like all other men (Kluckhohn and Murray, 1953), and to what Legge (1975) calls the process approach (or ‘General Psychology’). According to the universalist assumption, since we’re all human, we’re all fundamentally alike in significant psychological functions, and cultural/social contexts of diversity don’t affect the important ‘deep’ or ‘hardwired’ structures of the mind; the corollary of this assumption is that the categories and standards developed on Western European and North American populations are suitable for ‘measuring’, understanding, and evaluating the characteristics of other populations.

But are people as interchangeable as this account of mainstream Psychology would have us believe? Can we safely and validly generalize the findings from controlled experiments to ‘people in general’, as is commonly done? The conclusions drawn from such experiments typically emphasize how the process works under different conditions, based on the assumption that it works in the same way for everyone. But are such conclusions necessarily valid?

• How do people differ from each other in ways that are of interest/relevance to Psychology?


This question is addressed by Differential Psychology: the study of individual differences. These differences correspond to the ‘participant variables’ described above, the major categories being personality, intelligence, age, gender, and cultural/ethnic background. (According to Kluckhohn and Murray (1953), these illustrate the respects in which every man is like some other men; they fall within Legge’s (1975) person approach.) Why should these matter to Psychologists interested in, say, memory, learning, or perception? Let’s take the example of cultural background. Some answers to the question are provided in Box 11.1.

BOX 11.1 The biases of mainstream Psychology: Eurocentrism and stressing the similarities
• According to Amir and Sharon (1987), two Israeli Psychologists, ‘For all intents and purposes, social psychology is the study of second-year American psychology students’ (p. 385).
• As we’ve seen in earlier chapters, Psychology as a discipline has been largely dominated by Psychologists from the US, UK, and other Western cultures, and the large majority of participants in psychological research have been members of those same cultures.
• Apart from their accessibility, the main argument used to justify the practice of studying mostly student behaviour is based on the universalist assumption (see text above).
• According to Moghaddam et al. (1993):

Until very recently there were few members of minority groups among professional Social Psychologists or the subjects they studied. Thus, Social Psychology is largely monocultural in that both the researchers attracted to the discipline and the subjects it has studied share a single [US] culture. Historically, both researchers and subjects … have shared a lifestyle and value system that differs not only from that of most other people in North America, such as ethnic minorities and women, but also the vast majority of people in the rest of the world. (p. 10; emphasis added)

• Yet the findings from this research, and the theories based upon it, have been applied to people in general, as if culture were irrelevant. It’s implicitly assumed that ‘human being’ = ‘human being from Western culture’; this is commonly referred to as the Anglocentric or Eurocentric bias (Gross, 2014).
• This, in turn, represents a form of ethnocentrism, according to which we take the values and standards of our own culture to judge (usually unfavourably) members of (all) other cultures.
• Based on an analysis of the best-selling Social Psychology textbooks, Smith and Bond (1998) estimated that only about 10 per cent of the world’s population is being sampled. While this may not be a problem in, say, physics, it very definitely is in the study of behaviour (particularly social behaviour). Instead of an objective, universal account of behaviour, what mainstream Psychology presents is a predominantly North American, and to a lesser degree European, picture of human behaviour.


People as diverse

Pause for thought …
1 What corresponding bias is involved in mainstream Psychology’s study of (mainly) males and generalizing the results to females? (See Chapter 2.)

Cross-cultural Psychology: differences between individuals as members of different cultures

As we saw in Chapter 2, the search for universal principles of human behaviour is perfectly consistent with mainstream positivist Psychology. However, the recognition of culture as an important independent variable in its own right (i.e. an influence on behaviour) makes the study of variability in behaviour among different social and cultural groups around the world hugely important as a way of testing and challenging the validity of the universalist approach; this is the approach adopted by Cross-cultural Psychology (CCP).

• What do you think the major goals of CCP might be?

According to Jahoda (1978), the immediate (and modest) goals of CCP are (1) to describe varieties of social behaviour encountered in different cultural settings and to try to analyse their origins; and (2) to sort out what’s similar across different cultures and, thus, likely to represent our common human heritage (the universals of human behaviour). For Segall et al. (1999), CCP is trying to determine how sociocultural variables influence human behaviour; to do so, it sometimes focuses on behavioural differences across cultures, and sometimes on universal patterns of behaviour. But the ultimate goal is always to discover how culture and individual behaviour are related. To attain this goal, Cross-cultural Psychologists are confronted by different general orientations, which Segall et al. refer to as extreme absolutism and relativism (see Box 11.2).

BOX 11.2 Absolutism vs. relativism

• Absolutism is associated with mainstream Psychology as it has been conducted in most US and European universities during the 1900s; relativism is the approach central to anthropology during the same period.
• CCP sits between the two, borrowing certain aspects from each. For example, cultural relativism (Boas, 1911), extended by Herskovits (1948), was meant primarily to warn against invalid cross-cultural comparisons, flavoured by ethnocentric value judgements (see Box 11.1). Berry et al. (1992) ‘borrowed’ the term ‘relativism’ to denote one pole of a dichotomy, with ‘absolutism’ at the other pole.
• Relativists give more weight to cultural influences than to biological ones; the reverse is true for absolutists.
• Relativists attribute group differences mainly to cultural factors, while absolutists attribute them largely to non-cultural factors.
• Relativists have little/no interest in intergroup similarities, while absolutists believe that species-wide basic processes can cause many similarities between groups (‘the search for the psychic unity of mankind’).
• Relativists advocate strictly ‘emic’ research, while absolutists use standardized psychological instruments, resulting in ‘imposed etics’. Essentially, this distinction corresponds to whether or not it’s considered valid (or possible) to measure human behaviour independently of the context in which it takes place (see Box 11.3).

(Based on Segall et al., 1999)

• How do you think relativists and absolutists would argue that behaviour cannot or can be studied independently of its (or any) context, respectively?

The emic–etic distinction

Although CCP lies somewhere between absolutism and relativism (see Box 11.2), Shweder (1990) sees it as a branch of Experimental Social, Cognitive, and Personality Psychology. Most of the research that’s been conducted under the banner of CCP has presupposed the categories and models of mainstream Psychology involving (limited samples of) Euro-American populations. It has mostly either ‘tested the hypothesis’ or ‘validated the instrument’ in other cultures, or ‘measured’ the social and psychological characteristics of members of other cultures using the methods and standards of Western populations, and taken the latter as a valid universal norm (see Box 11.1).

Perhaps inevitably – if not justifiably – when a Western Psychologist studies members of some other culture, she/he will use theories and measuring instruments developed in his/her ‘home’ culture; these can be used for studying both cross-cultural differences and universal aspects of human behaviour. For example, aggression is a cultural universal – but how it’s expressed may be culturally specific. Similarly, there are good reasons for believing that schizophrenia is a universal mental disorder, with core symptoms found in a wide range of cultural groups; but there also appear to be culture-specific factors that influence (1) the form the symptoms will take; (2) the specific reasons for the onset of the disorder; and (3) the likely outcome (the prognosis) (Brislin, 1993; see Chapter 13).

The distinction between universal and culture-specific behaviour is one version of what’s come to be known in CCP as the emic–etic distinction. This also refers to problems inherent in the cross-cultural use of instruments developed in a single culture (Segall et al., 1999; see Box 11.3).

BOX 11.3 The emic–etic distinction

• The terms ‘emic’ and ‘etic’ are based on Pike’s (1954) distinction in linguistics between phonemics (the study of sounds as they contribute to the meaning of a language) and phonetics (the study of universal sounds used in human language, independently of their relationship to meaning).
• As applied to the study of cultures, etics refers to culturally general concepts; these are easier to understand because, by definition, they’re common to all cultures.
• Emics refers to culturally specific concepts; these include all the ways that specific cultures deal with etics: it’s the emics of another culture that are often so difficult to understand (Brislin, 1993).
• According to Pike (1954), the terms should be thought of as referring to two different viewpoints regarding the study of behaviour: the etic approach studies behaviour from outside a particular cultural system, while the emic approach looks at behaviour from the inside (Segall et al., 1999).
• When the researcher uses an instrument or observational technique from their own culture, this represents an emic for that culture. When this emic is used in an ‘alien’ culture, based on the assumption that it’s a valid way of comparing the two cultures, it’s said to be an imposed etic for the alien culture (Berry, 1969).

According to Hwang (2005), when the research paradigm of Western Psychology is transplanted blindly to non-Western countries, without adequate modification to fit the local cultures, it’s usually irrelevant, inappropriate, or incompatible with understanding the mentalities of non-Western people: such a practice has been described as a kind of academic imperialism/colonialism.

Many attempts to replicate (i.e. repeat) American studies in other parts of the world involve an imposed etic: they all assume that the situation being investigated has the same meaning for members of the alien culture as it does for members of the researcher’s own culture (Smith and Bond, 1998). That assumption has often been shown to be false when the results from the original (i.e. American) study aren’t replicated (i.e. reproduced or confirmed). A much-cited example is given in Box 11.4.

BOX 11.4 Making fools of Western Psychologists

• Glick (1975) asked members of the Kpelle people (of Liberia and Guinea, West Africa) to sort 20 familiar objects into groups.
• They did this by using functional groupings (such as knife with orange, potato with hoe), rather than taxonomic groups (which the researcher thought more appropriate).
• When their way of classifying the objects was challenged, they often explained that this was how a wise man would do it!
• When the exasperated researcher finally asked ‘How would a fool do it?’, the objects were immediately arranged into four neat piles of food, tools, clothing, and utensils (i.e. taxonomic groups!).
• The Kpelle participants and the American researcher differed in their beliefs regarding the intelligent way of doing things!

According to Gross (2014):

This is a kind of ‘putting two fingers up’ at the researcher for being ethnocentric, imposing an etic, and making culturally inappropriate assumptions. Being asked to perform a task outside its usual context (from practical to abstract) could so easily produce the conclusion that the people being studied lack the basic ability to classify, when it is the inappropriateness of the task that is at fault. (p. 251)

Can intelligence tests be culture-fair?

The example of the Kpelle people is commonly cited in the context of the measurement of intelligence (through intelligence quotient (IQ) tests); specifically, whether it’s possible to design a culture-fair IQ test.

• What do you understand by the concept of a ‘culture-fair’ IQ test?
• Do you think it’s (theoretically) possible and why (or why not)?

According to Moghaddam et al. (1993), the emic–etic distinction implies that Social Psychology cannot discover cultural universals unless it adopts a cross-cultural approach: the methods used by Psychologists need to be adapted so that the same processes or abilities are being studied in different cultures. But how do we know that we’re studying the same process or ability? What does ‘same’ mean in this context? For Brislin (1993), the question is ‘Do the concepts being investigated, and especially the way the concepts are being measured, have the same meaning in the different cultures?’ (p. 77). Brislin describes three approaches that have been used to deal with this fundamental issue of equivalence: translation, conceptual, and metric.

1 Translation equivalence: discovering whether concepts can be easily expressed in the language of the different cultures being studied represents the first step. If the material doesn’t translate well, this might be due to emic aspects with which the translators are unfamiliar. Alternatively, there may not be readily available terms in the language to capture certain aspects of the concept being translated.

2 Conceptual equivalence: this begins with the assumption that there will probably be different aspects of a concept that serve the same purpose in different cultures. It often begins by identifying the etic aspects, followed by a further identification of the emic aspects related to the etic in the various cultures being studied.

• What do you think the etic for ‘intelligence’ might be?

The etic for ‘intelligence’ might be ‘solving problems the exact form of which haven’t been seen before’, while the emic might include ‘mental quickness’ (US and Western Europe), ‘slow, careful, deliberate thought’ (the Baganda people of Uganda; Wober, 1974), and ‘responsibility for the community’ (getting along well with others) (the Chi-Chewa people of Zambia; Serpell, 1982). All these emics are conceptually equivalent: they all form part of the definition of intelligence as used by respected adults in their respective cultures (and are the responses that these adults are likely to give if asked ‘Which young people in your community are considered intelligent?’).

3 Metric equivalence: this focuses on the analysis of the same concepts across cultures, based on the assumption that the same scale (after proper translation procedures have been carried out) can be used to measure the concept. Brislin gives an example that again concerns intelligence. After careful translation, an IQ test produces a score of, say, 120 for an American woman and for a woman in Chile. The assumption is that the intelligence scale (metric) is measuring exactly the same concept (intelligence) in the two countries and that a score in one country can be directly compared with that in another. But even if translation has been carried out satisfactorily, this is no guarantee of conceptual equivalence; this, surely, is the most important criterion for assessing the cross-cultural equivalence of an IQ test (as well as being a prerequisite for metric equivalence).

Brislin states that while translation equivalence has actually been used in cross-cultural research (and isn’t particularly controversial), conceptual and metric equivalence are largely theoretical and much more controversial (they beg more questions than they answer). However, even if conceptual equivalence can be established, there remains the question of who constructs the test: (1) if it’s a Western Psychologist, the test needs to be translated, which begs the question of metric equivalence; (2) if it’s a local test constructor, there’s no need for translation. But in either case, we seem to be left with the question of whether the test is culture-fair (Gross, 2014; see Box 11.5).

BOX 11.5 Can a test be culture-fair?

• According to Frijda and Jahoda (1966), a culture-fair test could comprise: (1) a set of items that are equally unfamiliar to all possible persons in all possible cultures, so that everyone would have an equal chance of passing (or failing) it; or (2) multiple sets of items, modified for use in each culture to ensure that each version of the test would be equally familiar; this would give members of each culture about the same chance of being successful with their respective version.
• While (1) is a virtual impossibility, (2) is possible in theory, but very difficult to construct in practice.
• Clearly, culturally mediated experience always interacts with a test’s content to influence actual test performance:

The root of all measurement problems in cross-cultural research is the possibility that the same behaviours may have different meanings across cultures or that the same processes may have different overt manifestations. As a result, the ‘same’ test … might be ‘different’ when applied in different cultures. Therefore the effort to devise culturally fair testing procedures will probably never be completely successful. The degree to which we are measuring the same thing in more than one culture, whether we are using the same or different test items, must always worry us. (Segall et al., 1999, p. 137)


None of the three types of equivalence takes into account the meaning of the experience of taking an intelligence test (what we might call ‘experiential equivalence’; Gross, 2014). Taking tests of various kinds is a familiar experience for members of Western cultures, both within and outside the educational context. But what about cultures in which there’s no generally available schooling? The very nature or form of the tasks involved in an intelligence test (as distinct from its content) has a cultural meaning (as illustrated by Glick’s (1975) study of the Kpelle people; see Box 11.4) (Gross, 2014).

Individual differences within cultures

The preceding discussion of individual differences has placed them in the context of cultural differences: one way in which human beings differ from each other is in terms of their cultural background. We have seen how a fundamental human attribute, intelligence, is defined differently, manifests itself differently, and has different meanings, depending on the cultural context in which it’s being studied. All of these differences, in turn, make attempts to measure it through use of the ‘same’ tests in different cultural settings highly problematical (the ‘equivalence problem’).

But does this imply that measuring differences in intelligence (or other fundamental abilities or characteristics) within a particular culture is uncomplicated and without its own problems and controversies? The answer is a resounding ‘no’. The rest of the chapter will be devoted to charting and examining the attempts by Psychologists to measure and compare the intelligence of people in America and Western European countries; these attempts have helped define some of the major debates and controversies within mainstream Psychology, including nature–nurture (or heredity and environment), the ‘Race and IQ’ debate, and the reification of abstract concepts (or hypothetical constructs; see Chapter 3). As Smith (2013) notes, the inspirational figure for much of the modern interest in individual differences was Francis Galton, younger cousin of Charles Darwin (see Box 11.6).

BOX 11.6 KEY THINKER: Francis Galton (1822–1911)

• Galton was born in Birmingham, England, to a wealthy banker father, a descendant of founders of the Quaker religion.
• As a young child, he appeared to be academically gifted, but his later schooldays weren’t happy and he excelled only at maths, at the time considered less important than classics.
• At 16, he enrolled at Birmingham General Hospital as a medical student. He interrupted his medical training at 18 to attend Cambridge University, but graduated aged 22 with an ordinary degree (having failed to achieve his childhood ambition of gaining an Honours degree).
• He went to London to resume his medical training, but when his father died the next year, leaving him a substantial fortune, he gave up formal study and joined the idle rich.
• He left England in 1850 to go to south-west Africa, where he developed a talent for map-making (winning him the Royal Geographical Society’s gold medal for 1853). For the next ten years, he became involved in geography, travel, and meteorology.
• Beginning in the early 1860s, stimulated by Darwin’s Origin of Species (see Chapter 7), Galton turned his attention to measuring and trying to account for individual differences. Although Darwin hadn’t discussed human beings in his book, Galton soon grasped its implication that humans must be constantly evolving, like other species. The most distinctive human variations, and those most likely to form the basis of future evolution, were intellectual and psychological (although presumably mediated by small inheritable differences in the brain).
• As well as the influence of evolutionary theory, Galton was motivated by his own fears about ‘the condition of England’, that is, the population’s persistent poverty. A significant feature of his intense interest in individual differences was concern for their social consequences.
• His creative intellectual step was to argue that the way to understand individual inheritance is to study the distribution of variation in populations, rather than directly studying the physiology of heredity (which at the time was fraught with difficulty).
• His belief in the importance of using statistics would also encourage people to think of human beings, like the rest of nature, as subject to natural law. Galton had a sacred respect for the laws of nature. To bring knowledge of people within those laws required recognition that (1) mental variation is inherited in the same way as physical variation; and (2) inherited variation is overwhelmingly responsible for character.
• He had no time for the soul and specifically opposed Victorian belief in the moral will and self-help shaping a person’s life. He opposed ‘nature’ and ‘nurture’ (terms that he coined in 1874, ‘nature’ referring to everything we bring with us into the world, and ‘nurture’ denoting everything that influences us after our birth) and fully backed the former.
• He was convinced about the continuity of biological evolution and human advancement: progress depends on the quality and distribution of inheritable variations passed down from generation to generation.

(Based on Fancher and Rutherford, 2012; Smith, 2013)

Not only had Galton’s personal experience led him to believe that intellectual differences between individuals must be primarily innate, but he’d also observed that intellectual eminence tended to run in families. After reading Origin of Species, he decided to examine his belief statistically and his conclusions were published in Hereditary Genius: An Inquiry into its Laws and Consequences (1869). In the book, Galton offered three new arguments in support of his belief in the major role of heredity: (1) the normal distribution curve; (2) specific patterns of eminent relatives; and (3) the comparison of adoptive versus biological relatives (Fancher and Rutherford, 2012).

The normal distribution

As part of the International Health Exhibition held in London in 1884, Galton set up an anthropometric (human measuring) laboratory. Here, members of the public could test themselves on such things as muscular strength, visual acuity, reaction time, head size, and other physical measurements. In all, over 9,000 people had been tested by the end of the Exhibition.


Pause for thought …
At the time, these tests were thought of as mental tests, measuring aspects of intelligence. From a present-day perspective, how would we challenge this way of classifying them?
2 What kinds of processes/abilities do you think modern IQ tests are claiming to measure?

If we’re surprised that Galton believed these tests were measuring ‘intelligence’, what was his rationale? He reasoned that people with the highest intellectual abilities must have the most powerful and efficient nervous systems and brains. Since the power of a person’s brain was probably related to its size, the initial and simplest indicator of his/her natural intelligence was head size (reflecting brain size). He further reasoned that people’s neurological efficiency must be related to their reaction time.

As Fancher and Rutherford (2012) observe, these earliest ‘intelligence tests’ involved measures and phenomena that had been very important in the then recent rise of Experimental Psychology – but with a new twist. As we saw in Chapter 1, Fechner’s Psychophysics had explored the limits of sensory discrimination, and Wundt had measured reaction times. However, these earlier studies were aimed at establishing general psychological principles, applicable to all people (with individual differences being largely irrelevant). By contrast, Galton was operating within the new Darwinian framework that emphasized variability and adaptation: individual differences were the focus of his thinking and research.

We also saw in Chapter 1 that William James directly inspired Functionalism (or Pragmatism), according to which ideas must be useful and meaningful to people’s lives. For example, he emphasized the functions of consciousness rather than its content (which is consistent with Darwinian views regarding why consciousness evolved). Functionalism, in turn, helped stimulate interest in individual differences, which determine how well or poorly people will adapt to their environments.

According to the great Belgian statistician Adolphe Quetelet (1796–1874), when measurements such as height or weight are collected from large populations, they typically produce a bell-shaped curve (the normal distribution).
The 9,000 sets of measurements made in the Anthropometric Laboratory formed a normal distribution. (It was the applied mathematician Karl Pearson (1857–1936) who first used the term ‘normal distribution’ in the 1890s; he became the first Galton Professor of Eugenics at University College, London, in 1911; Smith, 2013; see Box 11.7).
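The bell-shaped pattern that Quetelet and Galton relied on can be sketched in a few lines of code. The numbers below are purely illustrative (the sample size of 9,000 merely echoes the Exhibition figure; no real anthropometric data are used): a measurement built from many small, independent chance influences tends towards the normal curve, with roughly 68 per cent of cases falling within one standard deviation of the mean and about 95 per cent within two.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is repeatable

def simulated_trait(n_influences=40):
    """One 'measurement': the sum of many small, independent chance influences."""
    return sum(random.uniform(-1, 1) for _ in range(n_influences))

# Simulate a population of 9,000 measurements (illustrative sample size only).
sample = [simulated_trait() for _ in range(9000)]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)

# Hallmarks of the normal curve: ~68% of cases within 1 SD of the mean,
# ~95% within 2 SDs.
within_1sd = sum(abs(x - mean) <= sd for x in sample) / len(sample)
within_2sd = sum(abs(x - mean) <= 2 * sd for x in sample) / len(sample)

print(f"mean = {mean:.2f}, sd = {sd:.2f}")
print(f"within 1 SD: {within_1sd:.0%}, within 2 SD: {within_2sd:.0%}")
```

Plotting a histogram of `sample` would show the familiar symmetrical bell, clustered around the mean and tailing off towards the extremes.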

Pause for thought …
3 Draw a normal curve for some hypothetical data of your own choosing. Describe the key features of the curve.
4 What can it tell us about the hereditary nature of the characteristic/ability being measured?


Patterns of genius

Based on the family trees of 12 groups of eminent people (including judges, writers, scientists, and champion wrestlers), Galton identified two general patterns: (1) the eminent relatives of eminent people tended to be genetically closer than more distant (e.g. first- or second-degree relatives rather than third-degree); and (2) there was a clear tendency for relatives to excel in the same fields.

• While these findings are consistent with Galton’s hereditarian position, how might you challenge this conclusion?

Since closer relatives are more likely to share (similar) environments than more distant relatives are, those supporting an environmentalist position would predict the same pattern of results.

Studies of adoptive versus biological relatives

Galton proposed a research design in which the intelligence of adopted relatives is compared with that of biological relatives, predicting that the latter should be far more alike than the former. As we shall see below, this basic design represents a major way in which modern behaviour genetics has attempted to tease apart the relative weight of genetic and environmental influences in relation to IQ test scores (and other individual differences).

While being a staunch believer in the influence of heredity, Galton acknowledged the (limited) role of environmental factors. He also recognized that nature and nurture can often interact with each other in complicated ways. In an 1875 article, he proposed another research technique which lies at the heart of behaviour genetics – the twin study method. This is based on the fact that there are two types of twin: (1) identical or monozygotic (MZ) twins have developed from a single fertilized ovum that has split at an early point following conception; and (2) non-identical or dizygotic (DZ) twins have developed together in the womb, following the fertilization of two separate ova by two different sperm. So, while the former are genetically identical, the latter are no more alike genetically than ordinary siblings (and so are also called fraternal twins). The basic twin method involves comparing separated MZs with same-sex DZs who have grown up together.

Another feature of Galton’s legacy to mainstream Psychology is described in Box 11.7.
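The logic of the twin method can be reduced to simple arithmetic. The sketch below uses Falconer’s formula, a standard behaviour-genetics rule of thumb that is not named in the text, and the correlation values are invented for illustration, not taken from any actual twin study.

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's rough heritability estimate from twin correlations.

    The reasoning behind the twin method: MZ twins share virtually all
    their genes, DZ twins about half, so if heredity matters the MZ
    correlation should exceed the DZ correlation; doubling that gap
    gives a crude estimate of the genetic share of the variation.
    """
    return 2 * (r_mz - r_dz)

# Hypothetical correlations, chosen only to show the arithmetic.
r_mz, r_dz = 0.80, 0.55
h2 = falconer_heritability(r_mz, r_dz)
print(f"estimated heritability: {h2:.2f}")  # 2 * (0.80 - 0.55) = 0.50
```

If MZ and DZ pairs were equally alike (r_mz = r_dz), the estimate would be zero, which is exactly the environmentalist prediction noted above for eminent families sharing similar environments.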

BOX 11.7 Eugenics

• Galton introduced the term ‘eugenics’ in Inquiries into Human Faculty and Its Development (1883); it refers to the attempt to improve the human race through selective breeding.
• In the opening paragraph of Hereditary Genius, he’d stated: ‘As it is easy … to obtain by careful selection a permanent breed of dogs or horses gifted with peculiar powers of running, or of doing anything else, so would it be quite practicable to produce a highly-gifted race of men by judicious marriages during several consecutive generations.’ (pp. 449–450)
• Based on the belief that industrial society and international competition placed growing demands on people, he advocated a social policy to increase the number of individuals with the necessary energy and intelligence.
• Galton called for a policy to solve social problems and enhance human wellbeing by promoting differential birth rates for groups with different inherited aptitudes. He was especially keen to make it easier for young people of ‘good stock’ to marry and have many children. At the start of the twentieth century, with fears of degeneration widespread and intense international competition a reality, this policy proved very popular.
• His conviction that human ability is strongly inheritable suggested to him that eugenics should be a workable reality. This became his consuming passion for the second half of his life.
• The impact of evolutionary theory on Psychology was symptomatic of its broader cultural impact in reinforcing such beliefs as the ‘naturalness’ of competitive capitalism and the natural superiority of the white ‘races’. Galton’s eugenics concerns were the principal manifestation of this in Psychology.
• Of the many ideas that he developed after 1883, two of the most important for the history of Psychology – and which had implications beyond their original eugenic purposes – were intelligence tests and statistical correlation (see text below).
• While Galton originated the idea of intelligence testing (in a eugenic context), it was left to others to make such tests a reality, starting with Alfred Binet in France. Without doubt, Galton is responsible for ensuring that the whole issue of intelligence testing became inextricably linked to genetics, eugenics, and the nature–nurture controversy (see text below).

(Based on Fancher and Rutherford, 2012; Richards, 2010; Smith, 2013)

Statistical correlation

In 1888, Galton presented the idea of correlation coefficients, which were adopted and refined by Pearson, (in)famous for his ‘product moment’ test of correlation (Pearson’s r), one of the most widely used of all statistical tools in psychological, biological, and sociological research. Its application extends far beyond the biological and hereditary relationships that Galton originally investigated, to literally any situation in which we want to explore the degree of association between any two measurable variables. Many modern researchers regard Galton’s pioneering statistical work as his greatest contribution to science (Fancher and Rutherford, 2012).
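Pearson’s r can be computed directly from its definition: the covariance between two variables, divided by the product of their variabilities, giving a value between −1 and +1. The height and weight figures below are invented purely to show the arithmetic; they are not Galton’s or Pearson’s data.

```python
import math

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient.

    Computed from the definition: sum of cross-products of deviations
    from the means, divided by the product of the root sums of squared
    deviations. Ranges from -1 (perfect negative) to +1 (perfect positive).
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented measurements: taller people in this toy sample tend to be heavier,
# so r should be strongly positive.
heights = [160, 165, 170, 175, 180]
weights = [55, 60, 66, 70, 78]
print(round(pearson_r(heights, weights), 3))  # → 0.995
```

A value near +1, as here, means the two measures rise together; a value near −1 would mean one falls as the other rises; a value near 0 would mean no linear association at all.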

Binet and the advent of the IQ test

As we’ve seen, the measurements that Galton used in the Anthropometric Laboratory proved to be invalid as measures of intelligence (ironically, through the use of Pearson’s r). The history of IQ tests as we know them is closely tied to the importance of education in the history of Psychology. As Smith (2013) observes, in response to the large expansion of public education in many countries in the late nineteenth century, there were calls to make education a science; this, in turn, called for a Psychology of the child.

The reality of mass education was often large classes, a standard curriculum, and instruction that was difficult to distinguish from discipline. Educationalists referred to intelligence in comparing a child’s performance relative to that of other children and to expected standards (as measured by examinations).


It was a small step to refer to intelligence as if it were a natural object, a ‘thing’ in each child, and thus to make intelligence the explanation rather than the measure of performance. The social nature of the measurement process disappeared from view, while the psychological factor supposedly observed gained prominence. (Smith, 2013, p. 111; emphasis added)

BOX 11.8 KEY THINKER: Alfred Binet (1857–1911)

Figure 11.1 Alfred Binet.

• Binet was born in Nice, France, and raised there and in Paris, mainly by his mother (his parents separated soon after his birth).
• Having studied for a law degree, he decided not to practise. He then tried unsuccessfully to complete medical training. This triggered a severe breakdown.
• By chance, Binet came across accounts of the new Experimental Psychology and realized he’d found his vocation. He then ‘discovered’ psychodynamic Psychology and was taken on by Charcot as an unpaid assistant and trainee (see Chapter 9). He stayed with Charcot for eight years, publishing prolifically on a range of topics (from mental imagery to hypnotic reactions of hysterics to sexual ‘fetishism’, a term that he coined).
• At about the time he finished working with Charcot, he began testing his two young daughters using Galton’s tests designed for the Anthropometric Laboratory. Although these tests were intended primarily for young adults, his daughters often performed just as well if not better than normal adults.
• Binet also found that major differences between children and adults involved skills that were largely untapped by Galton’s tests. He concluded that Galton’s measures were totally inappropriate as a means of discriminating intelligence differences between adults; much more promising would be tests involving more complex functions, such as language and abstract reasoning.
• Testing his daughters and his intensive study of Charcot’s hysterical patients convinced Binet of the essential individuality of all participants in psychological research (see Chapter 2).
• In 1891, he had a chance meeting with Henri Beaunis, a physiologist and director of the newly created Laboratory for Physiological Psychology at the Sorbonne in Paris. He started work there as an unpaid assistant.
• Binet soon gained recognition as France’s leading Experimental Psychologist, and succeeded Beaunis as director in 1894. In 1895, he founded L’Annee Psychologique, the first French journal explicitly devoted to Experimental Psychology.
• He stayed at the Sorbonne until his death.

(Based on Fancher and Rutherford, 2012)

One research area that Binet had pursued while at the Sorbonne was the role of suggestibility in psychological experiments. An early and striking example of the ‘tenacity of unconscious bias and the surprising malleability of “objective” data in the interest of a preconceived idea’ (Gould, 1981, p. 147) is Binet’s discovery of a positive correlation


People as diverse

between head size and intelligence. When he first decided to study intelligence, it was ‘natural’ that Binet should use craniometry (the measurement of skulls, first used by Paul Broca; see Chapter 5). However, he suspected that, unconsciously and unknowingly, he would distort the actual measurements, producing the results he expected. He concluded that ‘The idea of measuring intelligence by measuring heads seemed ridiculous’ (Binet, 1900, in Gould, 1981).

Pause for thought …
5 Explain what is meant by ‘a positive correlation between head size and intelligence’.
6 What would a negative correlation between them mean?
7 What term is used for the process by which a researcher unconsciously produces the results she/he expects?

At the beginning of the 1900s, Binet and many others became increasingly interested in the problem of mental subnormality. Prior to the advent of universal education, most such children had either left school at an early age – or never attended in the first place. Now they were legally obliged to go to school, so special attention and special schools became necessary.

In 1904, Binet was commissioned by the French minister of public education to develop ways of identifying those children whose lack of success in normal (mainstream) classrooms suggested the need for some form of special education. Together with Theodore Simon (1873–1961), a young doctor who’d begun working with him in 1899, Binet brought together a large series of short tasks related to everyday-life problems (such as counting coins and assessing the ‘prettier’ face), but supposedly involving such basic reasoning processes as ‘direction (ordering), comprehension, invention and censure (correction)’ (Binet, 1909, in Gould, 1981). Binet and Simon started out with few theoretical presuppositions regarding ‘intelligence’ and instead proceeded empirically, identifying groups of children who’d already been diagnosed as subnormal or normal (by teachers or doctors).

Pause for thought …
8 If Binet’s and Simon’s tests subsequently proved capable of successfully identifying those children requiring special education, what conclusion could we draw about the tests?
9 What do we call these groups of children already assessed (by teachers or doctors) as being normal or not, in the context of test construction?
10 Try to identify a different example of such groups in relation to personality differences.

The outcome was the first recognized test of intelligence (1905), with tasks arranged in ascending order of difficulty. In 1908, the concept of mental age (MA) was introduced, and a child’s general intellectual level was calculated as chronological age (CA) minus MA. Children whose MA was sufficiently behind their CA could then be recommended for special education.

In 1912, the German Psychologist William Stern pointed out that division is more appropriate than subtraction: it’s the relative (not the absolute) size of the difference that matters, hence IQ = (MA / CA) × 100 (multiplying by 100 gives a whole number). The IQ (intelligence quotient) had been born!

Binet refused to speculate on the meaning of the IQ: intelligence is too complex to capture with a single number and represents the average of many performances, not an entity with an independent, objective existence (see above). Not only did he refuse to label IQ as a measure of innate intelligence, Binet also rejected it as a general device for ranking all pupils in terms of mental ability (as opposed to identifying those needing special education). Whatever the cause of poor school performance, Binet’s aim was to identify these children in order to help and improve, not to label in order to limit. It is this, rather than belief in (or denial of) innate intellectual differences, which, according to Gould (1981), distinguishes hereditarians from their opponents.

It was H.H. Goddard (1866–1957) who first introduced the Binet–Simon Scale to the US, translating Binet’s related articles and generally promoting their use. While agreeing with Binet that the tests worked best in identifying those falling just outside (i.e. below) the normal range, Goddard regarded the scores as measuring a single, innate entity. (Similarly, in the UK, Spearman (1904, 1927, 1967) claimed that every intellectual activity involves a general factor – g, or general intelligence – and a specific factor – s. Individual differences are largely attributable to differences in g, which is actually an abbreviation for noegenesis – the ability to infer relations.)
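Stern’s quotient can be sketched in a few lines of code. The ages below are hypothetical, purely to illustrate his point that the relative (not absolute) lag matters: a two-year lag means something very different at CA 4 than at CA 12.

```python
# Stern's ratio IQ, as described above: IQ = (MA / CA) x 100.
# All ages here are hypothetical, for illustration only.

def iq(mental_age, chronological_age):
    """Stern's intelligence quotient: mental age over chronological age, times 100."""
    return round(100 * mental_age / chronological_age)

print(iq(2, 4))    # 50  -- a 2-year lag at CA 4: severe
print(iq(10, 12))  # 83  -- the same 2-year lag at CA 12: far milder
print(iq(12, 12))  # 100 -- MA equal to CA gives the mean IQ of 100
```

Subtraction (CA − MA) gives 2 in the first two cases; division exposes the difference between them, which is exactly Stern’s argument.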
Goddard’s purpose in advocating use of the tests was to recognize people’s limits in order to segregate them; this, in turn, was aimed at preventing them from ‘breeding’, so as to prevent further declines of the American ‘stock’. Like Galton, Goddard was a eugenicist who saw both immigration into the country and the prolific reproduction of the feebleminded within it as already threatening that stock (Gould, 1981).

The Army Alpha and Beta tests

Robert Yerkes (1876–1956), an ethologist, primatologist, and Psychologist, was frustrated by Psychology’s image as a ‘soft’ science, if a science at all. He wanted to prove that Psychology could be as rigorous a science as physics (i.e. it must involve measurement and quantification) (see Chapter 2). The most promising source of such quantification lay in the embryonic field of psychometrics (mental measurement).

With the approach of the First World War, Yerkes wondered if Psychologists might be able to persuade the US Army to test all of its recruits: this could be the opportunity that Psychology had been waiting for to change itself from dubious art into respected science. Yerkes campaigned, within both the Federal government and Psychology, and finally got his way. As Colonel Yerkes, he supervised the testing of 1.75 million recruits, bringing together all the major hereditarians of the day, notably Goddard and Lewis Terman (1877–1956), a Stanford University Psychologist.

In 1916, Terman had introduced ‘The Stanford Revision of the Binet–Simon Scale’, an extensive re-working of Binet’s test adapted for the US population and standardized on a considerably larger sample of children. From 1916 onwards, the Stanford–Binet became the standard for almost all IQ tests that followed, including most of the written (group) tests.


Pause for thought …
11 What is meant by ‘standardization’ in the context of a psychometric test?
12 Why is it so important?

The ‘Stanford–Binet’ quickly became the most widely used individual intelligence test in North America. Using Stern’s ‘formula’, the mean level of intelligence as defined/measured by this test was 100. Together, Yerkes, Goddard, and Terman constructed the new Army Alpha and Beta tests in 1917. These are described in Box 11.9.

BOX 11.9 The Army Alpha and Beta tests

[Figure 11.2 Army Beta for testing innate intelligence. The original figure shows the Beta’s 20 numbered pictorial items, which cannot be reproduced here.]

• The eight-part Army Alpha test was designed for literate recruits and consisted of a written examination, taking less than an hour and given to large numbers of people at the same time. The test items included analogies, filling in the next number in a sequence, etc.
• The seven-part Army Beta test was designed for illiterate recruits and those who failed the Army Alpha. It comprised pictorial tests as shown in Figure 11.2.

Those who failed the Army Beta would be recalled for an individual examination and guidance for proper military placement.


Yerkes claimed that the tests measure ‘native intellectual ability’ (i.e. unaffected by environmental factors, such as acquired knowledge and education). But how could this possibly be true when the Army Alpha included questions such as:

1 Crisco is a: patent medicine, disinfectant, toothpaste, food product?
2 The number of Kaffir’s legs is: 2, 4, 6, 8?
3 Christy Mathewson is famous as a: writer, artist, baseball player, comedian?

Regarding the Army Beta, in what sense could (un)familiarity with phonographs (item 18), tennis courts (16), and light bulbs (7) (see Figure 11.2) be considered a measure of innate intelligence?

Men who were illiterate in English (either through lack of schooling or foreign birth) should have taken the Army Beta, but there was considerable inconsistency between camps in their ability to allocate recruits appropriately. This resulted in a systematic bias that substantially lowered the mean scores of black people (who were treated with less concern and more contempt by everyone) and immigrants: many men took the Alpha and scored either zero or close to zero, not because they were innately stupid but because they were illiterate and should have taken the Beta but didn’t. Although the Beta comprised only pictures, numbers, and symbols, it still required pencil work, and, in three parts, a knowledge of numbers and how to write them. Yerkes had overlooked – or consciously bypassed – this crucial aspect of test taking. The conditions of testing, and the basic character of the test, make it ludicrous to believe that the Beta measured any internal state of ‘intelligence’.

While the tests had a strong impact on screening men for officer training, their major impact was felt outside the military. They were the first mass-produced written tests of intelligence, and Yerkes (1921) claimed that they could now rank and stream everybody (not just those with special educational needs). The era of mass testing had begun.

Based on a sample of 160,000 cases, Boring (1923) converted the Alpha, Beta, and individual examination scales to a common standard, so that racial and national averages could be constructed. From this analysis, three interrelated ‘facts’ emerged, which continued to influence social policy in the US for many years to come:

• The average MA of white American adults was a shocking 13 – to put this in perspective, ‘high-grade defectives’ (Goddard called them morons, from the Greek for ‘foolish’) had an MA of 8–12! Terman had previously set the standard at 16. These findings became a rallying point for eugenicists, who attributed the decline of national intelligence to the unrestrained interbreeding of the poor and feeble-minded, the spread of Negro blood through interbreeding, and the immigration of Southern and Eastern Europeans – ‘dregs’.
• The darker-skinned people of Southern Europe and the Slavs of Eastern Europe were less intelligent than the fair-skinned Western and Northern Europeans.
• The lowest average MA was the Negro’s 10.41. Where intensity of skin colour was graded, the lighter groups scored higher!

While Boring’s ‘facts’ seemed convincingly to support the hereditarian claim that differences in intelligence are innate, there was also evidence that average test scores for foreign-born recruits rose consistently with years of residence in the US.

• How would you express this finding as a correlation?
• What does it suggest regarding the role of environmental factors?

This positive correlation between time spent in the country and test scores indicates that familiarity with American culture – and not innate intelligence – determined differences in the scores. (By implication, it also strongly suggested that the Alpha and Beta tests were not valid tests of innate intelligence – if, indeed, this can be achieved by any test.) Yerkes admitted this possibility, but argued that those immigrants who’d been in the country longer were the innately more intelligent Northern Europeans! Heads I win, tails you lose!

The Army data had their most immediate and profound impact on the ‘great immigration debate’, a major political issue in the US at that time. Although the 1924 Restriction Act may have been passed without scientific backing, its timing, and especially its peculiar character, clearly reflected the lobbying of eugenicists who used the Army data as their major weapon (debates in Congress continually cited the Army data). Eugenicists not only wanted limits to overall immigration, they also wanted to impose harsh quotas on nations of inferior stock – a feature of the Act that might never have been implemented, or even considered, without the Army data and eugenicist propaganda. The eugenicists fought and won one of the greatest victories of scientific racism in US history (Gould, 1981).

The quotas stood and slowed immigration from Southern and Eastern Europe to a trickle. Throughout the 1930s, Jewish refugees, anticipating the Holocaust, tried to emigrate but weren’t admitted – even when quotas from Western and Northern Europe weren’t filled. Estimates suggest that up to six million were barred between 1924 and 1939; we know what happened to many who wished to leave but had nowhere to go. As Gould (1981) puts it, the paths to destruction are often indirect, but ideas can be agents as surely as guns and bombs.
One illustration of the destructiveness of eugenicist dogma is the way that Yerkes explained away data that threatened to challenge his eugenicist conclusions. For example, he found a correlation of 0.75 between test score and years of education for 348 men who scored below the mean on the Alpha: only one had ever attended college, four had graduated from high school, and only ten had ever attended high school. But Yerkes argued that the causation ran the other way: men with more innate intelligence simply choose to spend more time in school! The strongest correlations came from black–white differences. Again, the fact that black people spent relatively little time in school compared with white people was explained in terms of the former’s low motivation, based on low innate intelligence.


• Try to formulate some arguments against Yerkes’ reasoning. (You may wish to think in terms of what you can legitimately infer from a correlation.)

Yerkes seems to be illegitimately inferring a cause from a correlation: (1) it may be a lack of schooling that causes the low IQ scores; and (2) there may be a third factor that accounts for both the low IQ scores and the short time spent in school, such as racial segregation (at that time officially sanctioned – if not mandated), poor conditions in black schools, and economic pressure to leave school and find work among the poor (which African-Americans were disproportionately likely to be).

You can only infer which of two correlated factors is the cause of the other based on some theory about how they are related (Deese, 1972); in Yerkes’ case, this is the hereditarian theory. So, Yerkes is presenting data to support a theory, but for that to work, the data must first be interpreted according to that very same theory: a classic example of circular reasoning. (Gross, 2008, p. 397; emphasis added)

In the second half of the twentieth century, it was Arthur Jensen (1923–2012) in the US who revived the ‘race and IQ’ debate, with a highly controversial 1969 article entitled ‘How much can we boost IQ and scholastic achievement?’. In it, he claimed that the failure of preschool compensatory programmes (such as Operation Headstart) was due to the innate inferiority of black children (see Gross, 2015).

In 1980, Jensen published Bias in Mental Testing, in which he argued (a little perversely, given the title) that IQ tests are not biased – consistent with the hereditarian position of his predecessors. In his review of Jensen’s book, Gould (1987) notes that Jensen bypasses the whole issue of heritability (a highly ambiguous and misunderstood concept; see Gross, 2015) and seems to advocate dropping discussion of what causes differences in test scores. But Gould regards causation as the explicit motivating theme of the 1969 article and as implicit throughout the book. For Gould, the crucial question is what exactly Jensen means by ‘bias’.
A crucial distinction is made in Box 11.10.

• What do you think is meant by ‘bias’ in relation to a psychometric measure such as an IQ test?

BOX 11.10 V-bias and S-bias (Gould, 1987)
• Our ordinary understanding of the term in the present context is that the poorer performance of black people is the result of environmental deprivation relative to white people; it’s linked to the idea of fairness, maintaining that black people have received a poor deal, for reasons of education and upbringing (i.e. nurture), rather than nature. This is the non-technical (or vernacular) meaning (V-bias).


(This is mirrored by the intercultural bias of tests discussed in the text above in relation to the emic–etic distinction.)
• Jensen confines his discussion to a far narrower, technical (exclusively statistical) sense of bias (S-bias). An IQ test is S-biased if the same IQ score predicts different school grades (or some equivalent performance criterion) for black and white children (intercept bias): both groups have the same slope, but whites have a higher y-intercept (i.e. the same IQ score predicts higher school grades for white than for black children).
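The intercept bias described in Box 11.10 can be illustrated with a minimal numerical sketch. The slope and intercept values below are invented solely for illustration; the point is only the shape of the situation: one shared slope, two y-intercepts, so the same score predicts different criterion values.

```python
# Sketch of Gould's 'intercept bias' (S-bias): two groups share a regression
# slope but differ in intercept, so an identical IQ score predicts different
# school-grade criteria. All coefficients are hypothetical.

def predicted_grade(iq_score, intercept, slope=0.05):
    """Linear prediction of a school-grade criterion from an IQ score."""
    return intercept + slope * iq_score

# Hypothetical intercepts: same slope for both groups, one y-intercept higher.
INTERCEPTS = {"group_A": 1.0, "group_B": 0.5}

same_score = 100
grades = {g: predicted_grade(same_score, b) for g, b in INTERCEPTS.items()}

# The identical score of 100 predicts different criterion values for the two
# groups -- the defining feature of an S-biased (intercept-biased) test.
print(grades)  # {'group_A': 6.0, 'group_B': 5.5}
```

As Gould’s argument goes, showing that real tests lack this property (i.e. that both groups’ means lie on the same line) says nothing about V-bias, the vernacular sense of unfairness.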

As Gould points out, no sensible tester would want to construct a test in which the same score means different things for different kinds of people. Jensen devotes most of his book to showing that S-bias doesn’t affect mental tests and that it can be corrected when it does. But in showing that tests are S-unbiased, all Jensen has managed to show is that the lower black and higher white average scores lie on the same line.

While Jensen acknowledges the difference between this and V-bias, he argues that the culture-fairness of a test (or its degree of ‘culture-loadedness’ – its V-bias) cannot be defined objectively, and so we should only discuss S-bias. But in saying this he seems to be undermining the hereditarian position: only by assuming that tests have no V-bias can IQ differences be taken to reflect innate, genetic differences. But if V-bias cannot be defined or measured objectively, it cannot be ruled out as an influence on test scores. In short, Jensen’s book fails to address the critical issue, namely the meaning of black children’s lower average scores.

Gould (1996) contends that Herrnstein and Murray (1994) make the same basic mistake (confusing S- and V-bias and explicitly addressing only the former) in their controversial The Bell Curve. This is just one of several books since Jensen’s that promote the hereditarian argument and right-wing social policies of a strongly racial nature (Howe, 1997). Even more overtly racist are Rushton’s (1995) Race, Evolution, and Behaviour and Brand’s (1996) The g Factor; the latter achieved the rare notoriety of being withdrawn by its British publisher after Brand announced that he was ‘perfectly proud to be a racist in the scientific sense’.

Gould’s and Kamin’s revisionist history of IQ testing

As we noted in Chapter 1, both Gould and Kamin (1974) are highly critical of the concept of the IQ and its measurement. According to Harris (2009), their work represents a challenge to conventional (‘intellectual’) accounts, while at the same time avoiding the earlier conspiratorial, simplistic ‘storytelling’, as in the claim (especially during the 1960s) that Psychology developed to serve the forces of racism, male chauvinism, and class bias.

BOX 11.11 Kamin’s and Gould’s revisionist histories of IQ and its measurement
• In The Science and Politics of IQ (1974), Kamin argued that Terman, Yerkes, and Goddard were motivated as much by social concerns as by scientific curiosity. IQ tests were biased in favour of prosperous, white Protestant males whose families had lived in the US for many generations. Terman et al. argued that immigrants, African-Americans, Native Americans, Jews, and women are genetically inferior.


• According to Kamin, these prejudices were reflected not just in their academic writings, but also in their complicity with inhumane social policies, including (1) the 1920s’ restrictive immigration laws (see text above); and (2) involuntary sterilization laws passed by many states to stop the spread of mental retardation.
• The account of the Army tests in the text above was based on Gould’s The Mismeasure of Man (1981). He suggested that the hereditarianism of the 1920s and 1970s echoed nineteenth-century attempts to reduce human personality traits to biology (as in assessing criminals by measuring the angles of noses and foreheads, or craniometrists measuring intelligence in terms of skull shape and volume). Gould argued that Jensen’s racial interpretation of IQ data was no more scientific than craniometry.
• Building on Kamin’s portrayal of Goddard, Gould described Goddard’s famous study of an allegedly degenerate, rural New Jersey family – the Kallikaks. Consistent with eugenic beliefs, Goddard depicted successive generations of Kallikaks as mentally retarded, immoral, and criminal.
• Gould reprinted some of the illustrations from Goddard’s book and accused him of retouching photos to make the Kallikaks look demented and depraved.

However, many of Kamin’s and Gould’s criticisms have themselves been criticized and challenged by the ‘new history of Psychology’ (see Chapter 1). In relation to intelligence testing, for example, it was concluded that Psychologists and their IQ tests had played at best a peripheral role in the restrictive US immigration laws of the 1920s (see Box 11.11). Rather, racist politicians had decided long before that Eastern and Southern Europeans were inferior and didn’t need IQ tests to tell them that (Samelson, 1975). The shared racism of Terman, Yerkes, and Goddard was also shown to be an illusion: it was found that they disagreed sharply among themselves on issues ranging from the inferiority of immigrants to the link between IQ and crime and delinquency (Zenderland, 1998). An assistant professor at Smith College, Margaret Curti, had made similar arguments to those of Gould back in 1926.

According to Fancher (1988), not only was Gould mistaken in accusing Goddard of retouching photos of the Kallikak family, but he totally misjudged Goddard’s motives and methods for promoting Psychology. Goddard wanted the Kallikaks to look normal in order to emphasize the point that mental retardation isn’t always obvious: his motive was professional expansion as opposed to oppressing the underprivileged.

Conclusions: revisiting culture and the ethics of IQ research

According to Howard Gardner (1993), contextualization is all-important:

Rather than assuming that one would possess a certain ‘intelligence’ independent of the culture in which one happens to live, many scientists now see intelligence as an interaction between, on the one hand, certain … potentials and, on the other, the opportunities and constraints that characterize a particular cultural setting. (p. xvii)

This can be seen as an anti-hereditarian argument: Jensen, Yerkes, and the others accused by Kamin and Gould of being hereditarians do assume that intelligence can be measured separately from its cultural expression – because it’s essentially biological.


Another anti-hereditarian argument rests on the concept of race (see Chapter 8). According to Fernando (1991), the genetic differences between the ‘classically described’ races (European, African, etc.) are, on average, only slightly greater than those between nations within a racial group. This means that it’s very difficult to discuss white–black differences in terms of racial differences: these would account for only a small fraction of the genetic differences needed by hereditarians to explain the commonly found 15-point IQ difference.

While the action of biological evolution (Darwinian selection) is very slow, ‘cultural evolution’ (‘the inheritance of acquired characteristics’; Gould, 1981, p. 325) works quickly. This means that differences between groups are likely to be more cultural than genetic in origin (Segall et al., 1999).

According to the perspective of Cultural Psychology (as distinct from CCP; see above), culture influences which behaviours are considered to be intelligent, the processes underlying intelligent behaviour, and the direction of intellectual development (Miller, 1997). While not rejecting all use of psychometric tests, Miller argues that all measures of intelligence are culturally grounded, with performance at least partly dependent on culturally based understandings (see the example above of the Kpelle people tested by Glick). Even in the controlled conditions of the experimental laboratory, intellectual performance reflects individuals’ interpretations of the meaning of situations and their background presuppositions, rather than pure g (Miller, 1997; see Chapter 3). Psychological theories of intelligence must offer accounts that are relative to a particular time and context. But the hereditarians argue for a universal, culture-free, unchanging, objectively measurable, biologically determined property called g (Miller, 1997).
Furthermore, these ‘universal’ theories are associated with a right-wing social philosophy, which their advocates aren’t afraid of revealing. For example, in the Preface to The Bell Curve, Herrnstein and Murray claim that affirmative action (in the UK, ‘positive discrimination’) in education and the workplace ‘is leaking poison into the American soul’; they advocate a return to living with inequality. Their underlying assumption is that the US is a meritocracy: a society in which the position of individuals in the status hierarchy is determined by inherited ability (and effort), rather than inherited wealth and privilege. In a meritocracy, more intelligent people rise to the top and marry others like themselves; their children also inherit higher intelligence and are successful in their turn. The over-representation of ethnic minorities among the poor and at the bottom of the status hierarchy is explained by their supposedly having inherited lower g (Moghaddam, 2005).

Arguably, the most fundamental question we can ask about group differences in IQ relates to the ethical status of studying such differences in the first place. This, in turn, relates to an even more fundamental question: what are the underlying assumptions of the major ethical codes (such as the Code of Ethics and Conduct (BPS, 2006) and the Ethical Principles of Psychologists and Code of Conduct (APA, 2002))? According to Brown (1997), a core assumption underlying ethical codes is that what Psychologists do as researchers, clinicians, teachers, etc. is basically harmless and inherently valuable – because it’s based on (positivist) science (see Chapter 2).
Consequently, a researcher can adhere strictly to ‘scientific’ research methodologies, obtain technically adequate informed consent from participants (and not breach any of the other major prescribed principles), but still conduct research which claims to show the inferiority of a particular group (a finding which, presumably, wouldn’t be to the advantage of that group). Because it’s conducted according to ‘the rules’ (both methodological and ethical), the question of whether it’s ethical in the broader sense to conduct such research at all is ignored.


Brown gives the examples of Jensen’s (1969) and Herrnstein’s (1971) research, neither of which was ever considered by mainstream Psychology to have violated Psychology’s ethics by the very questions they asked regarding the intellectual inferiority of African-Americans. While individual black participants weren’t harmed by being given IQ tests – and may even have found them interesting and challenging – how the findings were interpreted and used:

weakened the available social supports for people of colour by stigmatizing them as genetically inferior, thus strengthening the larger culture’s racist attitudes. Research ethics as currently construed by mainstream ethics codes do not require researchers to put the potential for this sort of risk into their informed consent documents. (Brown, 1997, p. 55)

Pause for thought …
13 What does ‘informed consent’ involve, and how does it differ from mere ‘consent’?
14 Try to identify some limitations to the concept of ‘informed consent’: can it ever be total?
15 If Herrnstein’s and Jensen’s black participants had been told of the potential effects of the findings, how might this have affected their willingness to consent?

According to Anderson (2007), what might be harmless curiosity when applied to other topics can have profound and negative consequences for individuals’ lives when the subject matter is racial differences. Nor does knowledge of racial differences add anything to our understanding of the underlying mechanisms involved in intelligence and its development. Jensen’s and Herrnstein’s research (highlighted in The Bell Curve) has profoundly harmed African-Americans.

Ironically – and poignantly – while The Bell Curve received much methodological criticism, only black Psychologists, such as Hilliard (1995) and Sue (1995), have raised the more fundamental question of whether simply conducting such studies might be ethically dubious:

To ask this question about the risks of certain types of inquiry challenges science’s hegemony as the source of all good in psychology. (Brown, 1997, p. 55)

Pause for thought – answers
1 Androcentrism (or the masculinist bias) takes men as some sort of standard or norm against which women are compared and judged (e.g. Tavris, 1993). (See Chapter 3 and Gross, 2014.)
2 Today, we take it for granted that intelligence involves ‘higher’ mental processes, such as thinking, decision-making, problem-solving, verbal and mathematical reasoning, and logic.


3 The majority of the scores fall within the middle ranges of the curve (more specifically, roughly 68 per cent fall within one standard deviation (SD) either side of the mean – but the SD will differ from population to population). At the ‘tails’ of the curve, the scores become more extreme (in either a high or low direction).
4 Normal distributions don’t, in themselves, ‘prove’ that the characteristic/ability being measured is hereditary, but they’re consistent with such a claim. Failing to produce such a distribution suggests that the characteristic/ability probably isn’t largely hereditary (so a normal distribution represents a necessary condition or prerequisite).
5 The larger the head size, the higher the intelligence score (i.e. the two variables ‘change’ in the same direction: as one goes up, so does the other).
6 The larger the head size, the lower the intelligence score (i.e. the two variables ‘change’ in opposite directions: as one goes up, the other goes down).
7 Experimenter bias.
8 We can conclude that the tests are valid, i.e. they measure what they claim to measure. (But remember that a prerequisite of validity is reliability: the tests must produce the same or very similar scores whenever they’re used and whoever administers them.)
9 Criterion groups.
10 A way of validating the Neuroticism/Stability scale of Eysenck’s Personality Questionnaire (EPQ) is to test groups of patients diagnosed by psychiatrists as neurotic.
11 Standardization requires testing a large, representative sample of the population for whom the test is intended; otherwise the resulting norms cannot be used legitimately for certain groups of individuals.
12 In the 1960 revision of the Stanford–Binet test, Terman and Merrill took only the population included in the census as their reference group; this excluded many migrant and unemployed workers. More seriously, both the 1916 and 1960 revisions had been standardized on white samples only, yet they were to be used with both white and black children. As Ryan (1972) says, these tests were, therefore, tests of white abilities: any comparison between black and white children was really an assessment of ‘how blacks do on tests of white intelligence’. The 1973 revision did include black children in the 2,100-strong standardization sample.
13 While ‘consent’ involves voluntarily agreeing to participate, ‘informed consent’ requires that the participant understands the purpose of the study and the procedure involved.
14 As Gale (1995) argues, participants cannot have full knowledge of the procedure until they’ve actually experienced it, and the researchers themselves may not fully appreciate it without undergoing it themselves. In this sense, it’s difficult to argue that full prior knowledge can ever be guaranteed. Also, there’s more to informed consent than just this ‘information’ criterion. The status of the experimenter, the desire to please others and not let them down, and the desire not to look foolish by withdrawing after the experiment has begun all influence the participant and seem to detract from truly choosing freely in the way that ethical codes assume.
15 We can only assume that (at least some of) the black participants would not have consented to participate. Assuming that it wasn’t the researchers’ conscious intention to use their findings to the disadvantage of African-Americans as a group, they couldn’t have informed their participants of this in advance; nor could they have been required to tell the participants what they expected the outcome to be. However, as Brown (1997) argues, isn’t it the very fact that such research is being conducted at all that is at the heart of the ethical debate involved here?


Chapter 12
People as selves
Subjectivity, individuality, and social construction of identity

In William James’ classic Principles of Psychology (1890), a whole chapter is devoted to discussion of the Self (see Chapter 1 and below). Within mainstream Psychology, the Self (or self-concept) is a central topic and you’re sure to find a whole section (or even a whole chapter) devoted to it in most general, introductory textbooks. Sometimes, the topic is discussed as part of Social Psychology, sometimes as part of Developmental Psychology, and sometimes in relation to personality theory; this illustrates a general feature of mainstream Psychology, namely the artificial division of human experience into separate categories. However, cutting across this artificial separation of different aspects of experience (which, subjectively at least, is a continuous, holistic process) is how the topic of the Self has been dealt with by the major theoretical approaches which, together, comprise mainstream Psychology.

L Briefly describe how each of the major theoretical approaches (discussed in Chapters 5–10) deals with the Self.

Biopsychology and the self

As far as Biopsychology is concerned, we said very little specifically about the Self. However, we might begin with an infamous, extreme reductionist, neurobiological account provided by Francis Crick in The Astonishing Hypothesis (1994):

You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their attendant molecules. (p. 3; emphasis added)

This is meant to suggest that our consciousness isn’t real because the only real things in the world are those revealed by the natural sciences. Crick doesn’t provide any evidence for this claim, which looks like wild philosophical speculation (Midgley, 2014), yet he assures us that it has somehow been scientifically established:

The scientific belief is that our minds – the behaviour of our brains – can be explained by the interaction of nerve cells (and other cells) and the molecules associated with them. This is to most people a really surprising concept. It does not come easily to believe that I am the detailed behaviour of a set of nerve cells. (Crick, 1994, p. 7; emphasis added)

Midgley (2014) quotes neuroscientist Susan Greenfield (2000), who, looking at an exposed brain in an operating theatre, reflected that ‘This was all there was to Sarah, or indeed to any of us…. We are but sludgy brains.’ Similarly, in The Mind Machine (1990), the eminent neurophysiologist Colin Blakemore states that:

The human brain is a machine which alone accounts for all our actions, our most private thoughts, our beliefs…. All our actions are products of the activity of our brains. It makes no sense (in scientific terms) to distinguish sharply between acts that result from conscious attention and those that result from our reflexes or damage to the brain. (p. 270; emphasis added)

However, not all neuroscientists are extreme reductionists. One such is Vilayanur S. Ramachandran. Box 12.1 presents a striking example of what a non-reductionist psychobiological explanation of the Self looks like.

BOX 12.1 Jason Murdoch: a man with a fragmented self (Ramachandran, 2011)
L Following a car accident, Jason Murdoch entered a semi-conscious state of vigilant coma (akinetic mutism). He couldn’t walk, talk, understand what others said to him, or initiate actions, and didn’t recognize anyone, including his parents and siblings.
L He’d suffered damage to his anterior cingulate cortex in the front of the brain.
L However, if his father phoned him from an adjoining room, Jason suddenly became alert and talkative, recognizing his father and engaging him in conversation. When his father re-entered Jason’s hospital room, he lapsed back into his ‘zombie’ state. This cluster of symptoms is called telephone syndrome.
L Except when he’s on the phone, Jason is no longer a person.

How does Ramachandran justify this claim? Except when speaking to his father by phone, Jason is unable to form ‘rich, meaningful metarepresentations, which are essential to not only our uniqueness as a species but also our uniqueness as individuals and our sense of self’ (Ramachandran, 2011, p. 246; emphasis added).



L What do you understand by the term ‘metarepresentations’?
L Why do you think they’re essential to our uniqueness as a species and our individual sense of self?

Very early in evolution, the mammalian brain developed the ability to create first-order sensory representations of external objects (such as a rat’s perception of a cat as a furry, moving thing), eliciting only a very limited number of responses (such as a reflex escape response). However, the human cortex evolved to enable us to form metarepresentations (representations of representations); these higher forms of abstraction are linked to our capacity for language and symbolic thought. A cat can be thought of/described as a mammal, a pet, a bird-predator, or any number of other things, and we can form a whole series of associations and meanings with it: unlike a rat, we can just think or talk about cats without having to respond to them in any way at all. According to Ramachandran:

Metarepresentations are also a prerequisite for our values, beliefs, and priorities…. They are linked to our sense of self and enable us to find meaning in the outside world – both material and social – and allow us to define ourselves in relation to it. (Ramachandran, 2011, p. 247)

Returning to Jason Murdoch, his ‘visual self’ is essentially dead and gone: he’s no longer able to form metarepresentations of what he sees. However, the auditory Jason lives on and functions much as it did before his accident. As Ramachandran says:

Some of the ‘pieces’ of Jason have been destroyed, yet others have been preserved and retain a surprising degree of functionality. Is Jason still Jason if he can be broken into fragments? … a variety of neurological conditions show us that the self is not the monolithic entity it believes itself to be. This conclusion flies directly in the face of some of our most deep-seated intuitions about ourselves…. What the neurology tells us is that the self consists of many components, and the notion of one unitary self may well be an illusion. (p. 247; emphasis added)

Pause for thought …
1 What does Ramachandran mean by the self as a ‘monolithic entity’ and the self as ‘unitary’?

Ramachandran identifies seven major defining features of the self:

1 unity (see ‘Answers to “Pause for Thought” questions’ below)
2 continuity
3 embodiment
4 privacy
5 social embedding
6 free will
7 self-awareness.



He goes on to discuss a number of syndromes/disorders that ‘straddle the boundary between psychiatry and neurology’ (p. 253) which provide invaluable clues regarding how the self is created and sustained in normal brains.

Split-brain studies: do we have more than one self?

While Ramachandran identifies unity as one key defining feature of the self, studies of split-brain patients (whose hemispheres are separated by cutting the corpus callosum that connects them) suggest otherwise (see Chapter 5). The basic methodology used by Sperry was to present (mainly visual) stimuli to one or other hemisphere by controlling the direction of the input (lateralized testing of the left and right halves of the visual field) and then, for example, asking the participant to select from a collection of objects with one or other hand (the hands and objects concealed by a screen): by ensuring that only the left or right hemisphere received any particular stimulus, and knowing that the right hand is controlled by the left hemisphere (and vice versa), it was always known (strictly, inferred) which hemisphere was responding in any particular task situation (see Gross, 2012).

Pause for thought …
2 Try to identify some of the methodological issues that arise when results based on the study of split-brain patients are applied to people in general.
3 In what ways could split-brain patients be considered unrepresentative of adults in general?

Arguably, the most interesting and controversial question relating to lateralization in general, and split-brain patients in particular, is: do the two halves of the brain represent two kinds of consciousness (two minds)? Ornstein (1986) cites reports of an entire hemisphere (right or left) being removed (hemispherectomy) – again for the treatment of epilepsy – without destroying the ‘person’.

So, if possession of a ‘mind’ requires only one hemisphere, does having two hemispheres make it possible that we have two minds? Indeed, do split-brain patients have two minds, two separate, distinct modes of consciousness? (Gross, 2012, p. 342)

Both Ornstein and Sperry certainly think so (thus supporting the double brain theory, which essentially reduces the mind – or self or personality – to a hemisphere of the brain). According to Sperry (1964, in Apter, 1991), ‘when the brain is bisected, we see two separate “selves” (essentially a divided organism with two mental units, each with its own memories and its own will) competing for control over the organism’. Does this mean that the normal role of the corpus callosum is to keep the two hemispheres in exact synchrony, affording us a single, unified self?

According to Colvin and Gazzaniga (2007), the first descriptions of split-brain patients’ abilities to simultaneously execute conflicting actions, directed by each hemisphere independently, led to the strong version of the bicameral mind argument (BMA); they cite Puccetti’s (1981) version of this, but an earlier version was proposed by Jaynes (1976). The BMA is described in Box 12.2.

BOX 12.2 The bicameral mind argument (BMA)
L According to Jaynes (1976), ancient human beings had no sense of an interior, directing self; rather, they accepted commands from what appeared to them to be an externalized agency, which they blindly obeyed.
L This externalized self was a consequence of the split between the two hemispheres. Unlike the modern unicameral (i.e. integrated) brain, the ancient brain was bicameral, with the two hemispheres working essentially independently of each other.
L The logical, language-using left hemisphere generated ideas and commands, which the right hemisphere then obeyed. These commands were subjectively perceived by the right brain as coming from ‘outside’ – as if spoken by a god!
L One source of evidence given by Jaynes in support of his BMA is the auditory hallucinations (‘voices’) heard by schizophrenic patients (see Chapter 13).
(Based on Colvin and Gazzaniga, 2007)

Colvin and Gazzaniga propose that the two hemispheres are co-conscious and that their functional independence in the split-brain patient represents an exaggeration of the normal (i.e. intact) state. While this proposal sounds like support for the BMA, Colvin and Gazzaniga say that the critical test of the BMA is how the following question is answered: is anything gained by functional independence of the two hemispheres? Puccetti (1981) says ‘no’: in the normal individual, the two hemispheres operate in parallel, and the function of the corpus callosum is to duplicate conscious experience on both sides of the brain, without subsequent fusion. Colvin and Gazzaniga agree with others (e.g. Bogen, 1981) who have argued against the BMA: for example, although the two hemispheres may be co-conscious in parallel, hemispheric asymmetries in cognitive processing produce an inequality between their conscious representations – even in the intact brain.

According to Thomas Nagel (e.g. 1994), the American philosopher of science, there are neither two minds or selves associated with the two hemispheres nor just one. There’s no whole number of individual minds present: what would these numbers mean? What, really, is a ‘single person’? Our sense of the unity of our own consciousness may be less clear than we had supposed. The natural conception of a single person controlled by a mind possessing a single visual field may come into conflict with the physiological facts when it is applied to ourselves (Nagel, 1994).

Behaviourism and the self

As we saw in Chapters 1 and 6, Watson, the founder of Behaviourism, rejected all earlier attempts at observing the ‘mind’ (as in Wundt’s structuralism/introspectionism) because this cannot be undertaken objectively (as required by a scientific Psychology). Similarly, Skinner, while not denying the existence of inner, private mental activity, claimed that mental events are ‘explanatory fictions’ – they contribute nothing to the explanation of behaviour.



While not explicitly addressing the concept of self, by implication Watson, Skinner, and other (philosophical/radical) Behaviourists would deny its existence, and/or its explanatory value.

Cognitive Psychology and the self

While the self has never been a mainstream topic within Cognitive Psychology, much recent mainstream Social Psychology – especially in the US – has become overwhelmingly cognitive in its basic approach to explaining a wide range of traditionally social behaviours (i.e. social behaviours and phenomena are explained in terms of – reduced to – the cognitive processes taking place within the heads of individuals). It’s in this socio-cognitive context that the self has been discussed in recent decades.

According to Leary (2004), an American Psychologist and neuroscientist, the self is a cognitive structure that permits self-reflection and organizes information about oneself. (It also has motivational features, in particular, self-consistency, self-evaluation, and self-enhancement.) Although many Psychologists have made a distinction between the public and private selves, strictly speaking, there’s actually only a private self. The psychological self resides in the individual’s cognitive-affective apparatus; all self-processes involve self-reflection (i.e. the private self) (Leary, 2004).

At the most fundamental level, the self is the cognitive apparatus that permits self-reflexive thought – the cognitive structures and associated processes that permit people to take themselves as an object of their own thought and to think consciously about themselves. (Leary, 2004, p. 207)

‘Taking ourselves as objects of our own thought’ describes the ‘I–me’ distinction made by William James in 1890. It’s generally accepted that James was the first to introduce the term ‘self’ into Psychology (Burns, 1980), seeing it as comprising two components – the ‘I’ and the ‘Me’. The ‘I’ is the inner and centre self, our awareness, that which looks out at all else, while the ‘Me’ refers to aspects of ourselves that we experience and interact with. When ‘we’ look at ourselves in the mirror, it’s our ‘I’ that’s looking and our ‘Me’ that’s seen. Again, the ‘I’ is private, while the ‘Me’ is social. The social self is one of four major components (the others being the spiritual, material, and bodily). But the social self isn’t unitary: there are as many social selves as there are individuals who recognize the person and carry an image of him/her in their minds. The importance of social processes in the nature and development of the self is central to two early, but still influential, accounts, namely those of Charles Cooley (1902) and George Herbert Mead (1934) (see below).

Psychodynamic Psychology and the self

As we saw in Chapter 9, Freud himself never used the Latin words id, ego, and superego; instead, he used the German das Es (‘the it’), das Ich (‘the I’), and das Über-Ich (‘the over-I’), which were meant to capture how the individual relates to different aspects of the self. The Latin terms tend to depersonalize Freud’s use of ordinary, familiar language, giving the impression that they describe different ‘selves’ which we all possess! The Latin words (preferred by his American translators to lend greater scientific credibility to the theory) turn the concepts into cold, technical terms which arouse no personal associations: whereas the ‘I’ can only be studied from the inside (through introspection), the ‘ego’ can be studied from the outside (as behaviour observable by others). In translation, Freud’s ‘soul’ became scientific Psychology’s ‘psyche’ or ‘personality’ (Bettelheim, 1983).

We also saw in Chapter 9 the contrast between Freud’s and Jung’s concepts of ‘the unconscious’. Freud’s concept corresponds to Jung’s personal unconscious, which he distinguishes from the collective unconscious; the latter comprises a number of archetypes, including the Self (see Box 12.3).

BOX 12.3 Jung’s Self archetype
L Jung took the term ‘self’ not from its customary usage in Western Psychology but from the Hindu notion of the Self (or ‘Atman’), that aspect of divine power which resides in every individual as the source of being.
L The ordinary, everyday use of ‘self’ in Western culture is equivalent to developing a capacity to understand the meaning of ‘I’ and is concerned with the experience of subjectivity as a coherent and continuous sense of being a particular person.
L Jung calls this the personal self (or ‘the ego’), the self of which we’re conscious; as such, it forms a content of consciousness as well as being its centre.
L By contrast, the Jungian self is always that which transcends consciousness, that which is greater than what I take to be ‘my self’. It describes the totality of the psyche.
L As part of the collective unconscious, the Self is the ‘archetype of archetypes’, which unites the personality, giving it a sense of ‘oneness’ and firmness/stability.
L The ultimate aim of every personality is to achieve a state of selfhood through the process of individuation; this is a lifelong process, attained by very few individuals, Jesus and Buddha among them.
L The Self is commonly represented as a mandala, an age-old symbol of wholeness and totality, found all over the world.
(Based on Colman, 2000)

According to Colman (2000), Jung was ahead of his time in recognizing the shifting and multiple nature of ego consciousness. He saw the ‘ego-complex’ (what I take to be me, my personal identity) as just one among many complexes (or ‘sub-personalities’), any of which could invade or disrupt the conscious mind or even act independently of it.

Jung would also have recognized the arguments of Social Constructionism and Discursive Psychology: the self as a socially constructed fiction, maintained via a complex web of interweaving social narratives, is what he meant by the persona (another major archetype). The persona (from the mask worn by actors in Ancient Greek drama) is the face we put on for the world, how we appear to others and, sometimes, also to ourselves – but it’s not who we truly are. In this way, Jung takes issue with Social Constructionism when the latter maintains that there’s no self beyond social appearances (Colman, 2000). (The Social Constructionist account of the self is discussed further below.)

Since the self includes the unconscious as well as the conscious mind, and the unconscious is, by definition, unknown to consciousness, the greater part of the self must remain

277

People as selves

forever unknowable (Colman, 2000). The archetypes of the collective unconscious underlie conscious experience but can never themselves become conscious: we can’t experience them directly but only how they’re multiply represented in consciousness.

Fordham (1987) points out the contradiction between Jung’s description of the self as both (1) a totality and (2) an archetype within the totality (albeit the central one). If the self is the totality of the psyche – including all the archetypes – how can it also be one of them? Rather than thinking of archetypes as discrete entities ‘in’ the psyche, they’re better thought of as modes of experiencing, tendencies to experience the world and ourselves in particular ways.

Colman (2000) also notes the similarity between Jung’s concept of individuation (see Box 12.3) and self-actualization. This brings us neatly to Humanistic Psychology.

Humanistic Psychology and the self

Central to Maslow’s Humanistic theory is the concept of self-actualization. As we noted in Chapter 10, while Maslow put ‘self-actualization’ at the top of his need hierarchy, Rogers preferred the term ‘actualizing’; these relate to ‘a psychology of being’ and ‘a psychology of becoming’, respectively. According to Graham (1986), an inherent danger in any psychology of being (like Maslow’s) is that self-actualization (or self-discovery) comes to be viewed as an end in itself rather than as a process (as it is for Rogers). Maslow’s description of ‘peak experiences’ (see Table 10.1) undoubtedly refers to experiences of psychic wholeness such as Jung describes, as well as those described in the literature of spirituality and mysticism (Colman, 2000). Rogers, although taking a broadly similar view to Maslow, draws particular attention to the individual’s actualizing tendency and the process of becoming a fully functioning person; this is central to his Self Theory (see below).

BOX 12.4 KEY THINKER: Carl Rogers (1902–1987)
L Rogers was born in Chicago, to deeply religious parents of a fundamentalist evangelical persuasion.
L He was very isolated as a child and teenager, largely as a result of his parents’ efforts to protect their children from the temptations of modern life.
L After the family moved to a farm outside Chicago, Rogers developed an intense interest in science. He enrolled at the University of Wisconsin to study scientific agriculture.
L The new-found intellectual and emotional freedom led him to explore his religious convictions, leading him to the decision to become a Christian minister. He switched from agriculture to history, which he thought more appropriate for religious work.
L Rogers was selected to spend six months in China, which provided the perfect context in which to break free of his parents’ restrictive religion and to achieve intellectual, spiritual, and emotional independence.


Figure 12.1 Carl Rogers.

L On returning to the US, he enrolled for a correspondence course in Introductory Psychology; the main text was James’ The Principles of Psychology. He also married his long-term girlfriend, Helen, in 1924, just after graduating in History.
L Shortly after the wedding, the Rogers moved to New York, where Carl began his theological studies. But during his two years there, he became increasingly restless and found an outlet by taking several courses at the nearby Teachers’ College of Columbia.
L In 1926, he left the Seminary and enrolled at Teachers’ College to train as a Clinical and Educational Psychologist. His doctoral research involved developing a test for measuring the personality adjustment of 9–13-year-old children.
L In the academic year 1927–1928, Rogers was awarded a Fellowship at the Institute of Child Guidance, which was largely committed to psychoanalytic theory and methods. His test for 9–13-year-olds was deemed useful as a clinical instrument: it enabled them to explore their attitudes to themselves, their peers, and their families within the context of their daydreams and fantasy life.
L In 1928, he began his first professional job as a Psychologist at the Child Study Department of the Rochester Society for the Prevention of Cruelty to Children. Rather than applying any particular theoretical approach, he responded in a pragmatic way to the many highly disturbed children – and their parents – who came for diagnosis and help.
L However, he became influenced by the ideas of the heretic psychoanalyst Otto Rank (see Chapter 9), leading him to see people as self-directing and finding their own way forward. The therapist’s task is to rely on the client for the direction of therapeutic movement (see text below).
L In 1939 he moved to Ohio State University. On 11 December 1940 he delivered a lecture (‘Newer concepts in psychotherapy’), which he came to consider the birthday of client-centred therapy: he argued that therapy shouldn’t be interested in solving problems but rather in helping individuals to grow and develop, focusing on the present (not the past); the therapeutic relationship itself is central in that growth process.
L In 1945, Rogers moved to the University of Chicago, having been invited there to establish a counselling centre. In 1951 he published Client-Centred Therapy.
L For Rogers, the pinnacle of his career was receiving the Distinguished Scientific Contribution Award by the American Psychological Association in 1956.
L During his 12 years at Chicago, his research into therapeutic relationships was to have a profound effect on the whole field of counselling and psychotherapy in the years ahead. His On Becoming a Person (1961) had a huge impact on educators, philosophers, therapists, scientists, artists, and ‘ordinary people’.
L In 1963 he moved to California’s newly created Western Behavioural Sciences Institute, a non-profit organization concerned with mainly humanistically oriented research. He became involved in the encounter group movement (see Chapter 10).
L In his old age, he became increasingly drawn to the concerns of everyday life and the problems confronting the global community, in particular world peace and crossing cultural and racial boundaries.
L Unbeknown to Rogers, when he died in 1987 he’d been nominated for the Nobel Peace Prize.
(Based on Thorne, 1992)



Rogers’ self theory

Central to Rogers’ theory and his client-centred therapy (later renamed ‘person-centred’ therapy) is the concept of the self: an organized, consistent set of perceptions and beliefs about oneself. It includes my awareness of ‘what I am’ and ‘what I can do’ and influences both my perception of the world and my behaviour; we evaluate every experience in terms of it, and most human behaviour can be understood as an attempt to maintain consistency between our self-image and our actions. However, this consistency isn’t always achieved, and our self-image (and related self-esteem) may differ quite radically from our actual behaviour and from how others see us. For example, a person may be highly successful and respected by others and yet regard herself as a failure (an example of what Rogers calls incongruence): being told you’re successful is incongruent (inconsistent) with the fact that you don’t hold this view of yourself (Gross, 1987).

Pause for thought …
4 What do you understand by the terms ‘self-image’ and ‘self-esteem’?

Incongruent experiences, feelings, actions, and so on conflict with our (conscious) self-image; because we prefer to act and feel in ways that are consistent with our self-image, such experiences may be threatening and so denied access to awareness (they may remain unsymbolized) through actual denial, distortion, or blocking. These defence mechanisms (see Chapter 9) prevent the self from growing and changing and widen the gulf between self-image and reality (our actual behaviour or true feelings). As the self-image becomes more and more unrealistic, so the incongruent person becomes increasingly confused, vulnerable, dissatisfied, and, eventually, seriously maladjusted.

The self-image of the congruent person is flexible and realistically changes as new experiences occur; the opposite is true for the incongruent person. The congruent person is in the best position to self-actualize, while the greater the gap between self-image and ideal-self, the less fulfilled the individual will be. Most of us are sufficiently flexible and realistic to be able to acknowledge these discrepancies and not have to use defence mechanisms to cope when things don’t go exactly as we’d want them to. How our self-concept develops is described in Box 12.5.

BOX 12.5 How the self-concept develops
L Many of Rogers’ therapeutic clients had trouble accepting their own feelings and experiences; they seemed to have learned during childhood that, in order to obtain the love and acceptance of significant others (especially their parents), they had to act and feel in dishonest ways (they had to deny parts of themselves). Rogers called this conditional positive regard.
L This applies, in varying degrees, to almost every child: love and praise are withheld until the child conforms to parental and wider social standards of conduct. In order to maintain conditional positive regard, we suppress actions and feelings that significant others disapprove of, rather than using our own spontaneous perceptions and feelings as guides to our behaviour.
L We gradually develop conditions of worth: those conditions under which positive regard will be forthcoming; these become internalized.


L In this way, we perceive and are aware of those experiences that coincide with the conditions of worth but distort or deny those that don’t.
L This denial and distortion leads to a distinction between the organism – the whole of one’s possible experience, everything we do, feel, and think – and the self – the recognized, accepted, and acknowledged part of a person’s experience. Ideally, the two would coincide; but for most of us, they don’t.
L Corresponding to the need for positive regard (the universal wish to be loved and accepted by significant others) is the need for positive self-regard (the internalization of those values and behaviour approved of by others, whereby we come to think of ourselves as good and lovable and worthy). Positive self-regard corresponds to high self-esteem.
L In order to experience positive self-regard, our behaviour and experience must match our conditions of worth. The problem here is that this can produce incongruence through the denial of our true thoughts and feelings. But since the need for positive regard and positive self-regard is so strong, these conditions of worth can supersede the values associated with self-actualization.
L Many adult adjustment problems are bound up with an attempt to live by other people’s standards instead of one’s own.

Congruence and self-actualization are enhanced by substituting organismic values for conditions of worth; this blurs the distinction between the self and the organism. The greater the unconditional positive regard (total, non-judgemental acceptance of everything we say and do), the greater the congruence between (1) self-image and reality; and (2) selfimage and ideal-self. This complete acceptance is precisely what the therapist offers the client in Rogers’ client-/person-centred therapy; it enables the client to accept certain feelings and thoughts as their own instead of denying, distorting, and disowning them (illustrated by responses such as ‘I don’t know why I did that’ or ‘I wasn’t feeling myself ’). Finally, positive self-regard is no longer dependent on conditions of worth.

The self in historical context

The preceding discussion is a summary of how mainstream Psychology's major theoretical orientations have addressed the concept of self. What they all share is an implicit acceptance of the psychological reality of the self, but we must beware of mistaking psychological kinds for natural kinds (see Chapter 2).

• Briefly explain the difference between natural and psychological kinds.

While natural kinds have always existed independently of human scientists, psychological kinds are language-based, hypothetical concepts which, by definition, change over time (within the same culture) and vary between cultures (at any one time). According to the French Psychologist Serge Moscovici (1925–2014), 'the individual' is the greatest invention of modern times (1985). According to Baumeister (1987), the 'problematic nature' of selfhood has concerned the layperson as well as professional Psychologists (see Chapter 4). Popular books and movies commonly recognize the need to 'find oneself' and to 'be oneself'. Erik Erikson (1968) observed that the rapid popularization of the term 'identity crisis', originally a term of psychological jargon, suggested that there was already widespread interest in the phenomena it described (see Chapter 9). (Erikson originally used the term to describe the adjustment problems faced by Second World War veterans returning to civilian life, and only later applied it to adolescents.) Self-actualization has become increasingly accepted by mainstream society (especially US society) as a legitimate and important aspect of life. According to Baumeister (1987):

It is plausible that the self simply has some ineffable fascination that has made it a perennial puzzle. One can read Kant and Descartes and even the ancient thinkers as if they were grappling with all these same issues of selfhood. A careful look at historical evidence suggests, however, that the concern with problems of selfhood is essentially a modern phenomenon. The medieval lords and serfs did not struggle with self-definition the way modern persons do.
(p. 163)

For Baumeister, the self has become a problem in the course of historical development. He identifies four major forms of this problem:

1 How identity is actively or creatively defined by the person.
2 The nature of the relationship between the individual and society.
3 How the person understands his/her potential and then fulfils it.
4 How and how well persons know themselves.

Based on historical data and literature (especially fiction), Baumeister summarizes the development of the self-concept as described in Box 12.6.

BOX 12.6 The historical development of the self concept

• The late medieval period (from about the eleventh to the fifteenth centuries) gradually developed a crystallized concept of the unity of the single human life.
• The early modern period (roughly 1500–1800) came to stress the distinction between the inner self and the outer self, to value individuality, and increasingly to recognize human development and change. Puritanism increased self-consciousness and recognized the possibility of self-deception.
• In the Romantic era (late 1700s and early 1800s), persons began to seek and emphasize secular forms of fulfilment; this involved a deep conflict between the individual and society.
• During the Victorian era (roughly 1830–1900), there were crises relating to each of the four forms of the problem described in the text above.
• In the early 1900s, themes of alienation and of devaluation of selfhood indicated concern over the individual's helpless dependency on society.
• Since 1945 (the end of the Second World War), individuals have accommodated to the changed social realities but have continued to seek ideals and means of self-definition and fulfilment. In addition, in relation to the four forms of the problem, respectively, there has also been (1) an emphasis on personal uniqueness, and value of self-exploration; (2) personality and socioeconomic status; (3) quest for celebrity and for means of self-actualization; and (4) accommodation and myth-making.

(Based on Baumeister, 1987)


• What do you understand by 'unique' when applied to a person?
• Does the wholly unique individual exist?

You'll recall from Chapter 11 that Kluckhohn and Murray (1953) equated (1) General Psychology with the respects in which 'every man is like all other men'; and (2) Differential Psychology (or individual differences) with the respects in which 'every man is like some other men'. They also identified a third way of thinking about human variability, namely those respects in which 'every man is like no other men' (i.e. what makes us unique). However, does it make sense to talk about a completely unique individual? We could argue that to focus on individuals in this sense of uniqueness goes against the grain of mainstream (positivist) Psychology (as was argued in Chapter 2). According to Gordon Allport, one of the greatest of all personality theorists:

Each person is an idiom unto himself, an apparent violation of the syntax of the species. An idiom develops in its own peculiar context, and this context must be understood in order to comprehend the idiom. Yet at the same time, idioms are not entirely lawless and arbitrary; indeed they can be known for what they are only by comparing them with the syntax of the species. Now the scientific training of the psychologist leads him to look for universal processes common to the species, and to neglect the idiomatic pattern of becoming. While he may say that his subject matter is human personality, his habits lead him to study mind-in-general rather than mind-in-particular.
(Allport, 1955b, pp. 19–20; emphasis in original)

What Allport says here is consistent with our discussion of the nomothetic/idiographic distinction in Chapter 2, namely that these approaches are compatible with one another and not mutually exclusive. However, is there a crucial difference between (1) the study of particular cases (e.g. Skinner's multiple experiments using a small number of individual rats/pigeons) in order to draw general conclusions and (2) the study of particular cases (e.g. people) for their own sake (i.e. studying them as unique individuals)?

Allport often uses the term 'unique' and he leaves us in no doubt what he means. In Pattern and Growth in Personality (1961), he distinguished between three types of personal traits or dispositions:

• Cardinal traits refer to a particular, all-pervading disposition (e.g. greed, ambition, or lust) which dictates and directs almost all of an individual's behaviour; in practice, these are very rare.
• Central traits are the basic building blocks that make up the core of the personality; they constitute the individual's characteristic ways of dealing and interacting with the world (e.g. honest, happy-go-lucky, and loving); a surprisingly small number of these is usually sufficient to capture the essence of a person.
• Secondary traits are less consistent and influential than central traits, referring to tastes and preferences that may change quite quickly and don't define 'the person' as central traits do.

These three types of individual traits are peculiar (idiosyncratic) to each person, in at least three senses:


1 A trait that's central for one person may only be secondary for another – and irrelevant for a third. What makes a trait central or secondary isn't what it is but how often and how strongly it influences the person's behaviour (Carver and Scheier, 1992).
2 Some traits are possessed by only one person; indeed, there may be as many separate traits as there are people.
3 Even if two different people are given (for convenience) the same descriptive label (say, 'aggressive'), it may have different meanings for the two individuals – to that extent, it isn't the same trait.

For Allport, since personality dispositions reflect the subtle shadings that distinguish a particular individual from all others, they must often be described at length, making it very difficult to compare people. While the idiographic approach contends that people are not comparable (everyone is, in effect, on a 'different scale'), comparing people in terms of a specified number of traits or dimensions (in order to determine individual differences) is precisely what the nomothetic approach involves. According to this view,

traits have the same psychological meaning for everyone … people differ only in the extent to which the trait is present.
(Gross, 2014, p. 47; emphasis in original)

For example, according to Eysenck (e.g. 1965), everyone is more or less introverted: everyone will score somewhere on the introversion–extroversion scale. According to this nomothetic approach, the difference between individuals is quantitative (a matter of degree only); by contrast, the idiographic approach sees differences between people as qualitative (a difference in kind).

Allport himself acknowledged that people may be compared with each other, but only in terms of common traits – basic modes of adjustment applicable to all members of a particular cultural, ethnic, or linguistic group. Common traits are what's measured by personality scales, tests, or ratings, but at best they can provide only a rough approximation to any particular personality. This again relates to the concept of psychological meaning: there are many different ways of being, say, aggressive, competitive, materialistic, or ambitious within the same cultural setting.

Disagreement between Allport and nomothetic theorists (such as Eysenck) isn't so much to do with whether or not they believe in the idea of uniqueness, but rather with how uniqueness is defined. For Eysenck (1953), for example, the unique individual is simply the point of intersection of a number of quantitative variables (such as scores on the introversion/extroversion, neuroticism/stability, and psychoticism scales). Since he believes that we can all be placed somewhere on each of these scales/dimensions, Eysenck is defining uniqueness in terms of common traits. But for Allport, this is a contradiction in terms, since only individual traits can capture the individuality of individuals – the structure and coherence of the individual's personal make-up.

According to Krahé (1992), the idiographic claim that there are unique traits that apply to only one individual is undoubtedly false, if taken literally: traits are defined as differential constructs referring to a person's position on a trait dimension relative to others. But at the other extreme, Krahé believes that the traditional (nomothetic) view of traits as explanatory constructs that apply to everyone is equally misguided.

For Holt (1967), the nomothetic/idiographic distinction is a false dichotomy. All descriptions involve some degree of generalization, so that when we describe an individual case, there's always (at least implicitly) a comparison being made with other instances of the category or class to which the individual belongs. To describe this individual, we must already have (and be applying) our concept of 'a person'.


If our concept of a person includes their uniqueness, this at least is something that everyone has in common and is perfectly consistent with Eysenck’s (nomothetic) concept of uniqueness. Indeed, could we even recognize a person who was totally unlike any other, in any respect, as a person? (Gross, 2014, p. 48)

The social origins of the self

• How do you understand the relationship between the individual (self) and society? (For example, does 'society' exist objectively and independently of the individual, or can they never be completely separated – in a conceptual/theoretical sense?)

Michael Argyle (1925–2002), one of Britain's most eminent Social Psychologists of the twentieth century, was among the most notable exponents of the mainstream essentialist James/Allport view of the self.

People have a need for a distinct and consistent self-image and a need for self-esteem. This may result in attempts to elicit responses from others which provide confirmation of these images and attitudes towards the self. The self-image is one of the central and stable features of personality, and a person cannot be fully understood unless the contents and structure of his self-image are known.
(Argyle, 1983, p. 192)

Argyle identifies four major influences on the development of the self-concept, namely (1) the reaction of others (see Box 12.7); (2) comparison with others; (3) social roles; and (4) identification with models (see Chapter 9). As we might expect from a Social Psychologist, the first three influences imply a greater emphasis on the 'Me' than the 'I'.

One of the earliest and most influential theories of self was proposed by George Herbert Mead (1863–1931), the American philosopher, sociologist, and Psychologist. Mead's Mind, Self and Society (1934) was influenced by both James' distinction between 'I' (self as knower) and 'Me' (self as known) (see above) and Cooley's (1902) theory of the 'looking-glass self' (see Box 12.7).

BOX 12.7 The 'looking-glass self' (Cooley, 1902)

• Cooley maintained that the self is reflected in the reactions of other people, who represent the 'looking-glass' for oneself: in order to understand what we are like, we need to see how others see us. This is how the child gradually builds up an impression of what she/he is like.
• At first, the infant is unaware of self and others and makes no distinction between 'me' and 'not-me'; it simply experiences a 'stream of impressions'; these gradually become integrated and discriminated, so that the self/other distinction is finally made.


Mead also believed that knowledge of self and other develops simultaneously, both being dependent on social interaction: self and society represent a common whole and neither can exist without the other; the self doesn't (cannot) pre-exist social interaction, and emerges from it. This represents a thoroughly social account of the individual, which transcends the dualism of self/other (Burr, 2015).

Mead turns on its head mainstream psychology's model of individual persons, who are conscious and have minds, and who come to interact with other individuals, affect and be affected by them, so producing something that is called society. Instead, he sees consciousness and mind … as the outcomes of social interaction. Mead's individual does not exist independently of society but is instead made possible by social interaction.
(Burr, 2015, p. 216)

The key to the development of mind is the distinctly human ability to use symbols to represent things and events, especially our use of language. Not only is language crucial for social interaction, but it's also the fundamental means by which we come to represent ourselves to ourselves. This self-interaction means that the person ceases to be a mere responder, whose behaviour is the product of what acts upon him/her from outside, or inside, or both; instead, we act towards our world, interpret what we encounter, and organize our actions on the basis of this interpretation. The person is 'over against' their world, not merely 'in' it, defining it and constructing his/her action rather than merely 'releasing' it.

Mead also stresses that the self is a process and not a structure (such as Freud's ego); neither is it an organized body of needs and motives, nor a collection of attitudes, norms, and values. What makes the self a self is a reflexive process: it acts upon and responds to itself. (This is Mead's way of making the 'I'/'Me' distinction: the experiencing 'I' cannot be an object, cannot itself be experienced, since it is the very act of experiencing; what we experience and interact with is our 'Me'.) Mead's developmental theory is outlined in Box 12.8.

BOX 12.8 Mead's developmental theory of the self

• Initially, the child thinks about its behaviour as good or bad in terms of his or her memory of how his or her parents describe it. 'Me' at this stage is a combination of the child's memory of its own actions and how they were reacted to by others.
• The child's pretend play (especially playing 'mummies and daddies' or 'doctors and nurses') helps it to understand and incorporate adult attitudes and behaviour. The child here isn't merely imitating but also 'calls out in himself the same responses as he calls out in the other'. For example, she/he is being both the child and the parent, and, as the parent, is responding to him/herself as the child.
• Play then becomes distinguished from games, which involve rules; this requires the child to assume the roles of all the participants.
• In these ways, the child acquires a variety of social viewpoints/'perspectives', which are then used to accompany, direct, and evaluate its own behaviour. This is how the socialized part of the self (Mead's 'Me') expands.
• At first, these perspectives are based on specific adults. But in time, the child comes to react to itself and its behaviour from the viewpoint of a 'typical mother', 'typical doctor', or 'people in general' (the generalized other).
• The incorporation of the generalized other marks the final, qualitative change in the 'Me'; it provides the child with a self.


Mead's work represents the beginning of symbolic interactionism (SI); this was continued by Herbert Blumer (1900–1987), a sociologist who, like Mead, was based at the University of Chicago. A major and more recent contribution, with its roots in the sociology of knowledge, is Berger and Luckmann's classic The Social Construction of Reality (1966). Central to SI is the view that people construct their own and others' identities through their everyday encounters in social interaction. (As Burr (2015) points out, this is consistent with ethnomethodology, which arose in the US in the 1950s and 1960s; it focuses on the processes by which ordinary people construct social life and make sense of it to themselves and others; see Chapter 4.)

Berger and Luckmann offer a solution to the problem of how to conceptualize the relationship between the individual and society. They argue that the apparently objective social world is constructed by human action and interaction; humans are thoroughly social animals: 'Man's specific humanity and his sociality are inextricably intertwined. Homo sapiens is always, and in the same measure, homo socius' (p. 69; emphasis added). The relationship between individual and society is bi-directional: human beings continually construct the social world, which then becomes a reality to which they must respond. Because we're born into a social world that already exists, it assumes the status of an objective reality for us and subsequent generations.

Like Mead before them, Berger and Luckmann emphasize the role of language as a shared symbolic system through which we construct huge social structures; these appear to have an existence and an origin outside of human activity but can only be human constructions:

Language now constructs immense edifices of symbolic representations that appear to tower over the reality of everyday life like gigantic presences from another world. Religion, philosophy, art, and science are the historically most important symbol systems of this kind.
(Berger and Luckmann, 1966, p. 55)

The relationship between individual and society, therefore, is a dialectical process rather than a conflict between two pre-existing entities. It allows us to think of the person as both agentic, always constructing the social world, and at the same time constrained by it, to the extent that we cannot avoid living within the institutions and frameworks of meaning handed down to us by previous generations (Burr, 2015).

The Postmodernist self

We noted in Chapter 3 that Berger and Luckmann's (1966) book was a major influence on the development of Social Constructionism (which includes Discursive Psychology and Critical (Social) Psychology). In turn, Social Constructionism represents the major theoretical approach within postmodernism. In the present context, we want to know the status of the person in Discursive Psychology.

We also saw in Chapter 3 how Discursive Psychology radically departs from mainstream Anglo-American Psychology, which has focused on internal states (such as cognitions, emotions, and attitudes); these are taken to lie behind what people say and do. Put another way, these internal states are seen as having an independent existence, such that when people, for example, express an attitude, there exist both the attitude itself (inside the head) and the verbal (or some other) expression of it.


Discursive Psychologists have taken these internal states (usually regarded as private) and moved them into the public, social realm. According to Burr (2015), while their arguments for doing this are convincing (see below), the underlying model of the person is ambiguous:

The person here is an active and skilful participant in social life, busily engaged in constructing accounts for various purposes, but it is hard to answer the question of why accounts might be constructed in one way rather than another without recourse to concepts such as self, belief or motivation. We are left without any clue as to 'who' is doing the constructing and why.
(Burr, 2015, p. 151; emphasis added)

Burr discusses the work of Rom Harré (see Chapter 3), who locates his theoretical approach within Discursive Psychology, using it to re-frame various psychological and physiological phenomena (including emotion, coughing, memory, and practical skill) as social and performative. He also provides a way of understanding the role of neurological and physiological functioning in psychological phenomena; this is a much more inclusive and integrated approach than most discursive accounts (which don't acknowledge the role of neurology/physiology). Harré's approach also contrasts with mainstream Psychology's reductionist treatment of biological factors (whereby psychological phenomena are explained away by reference to brain activity, etc.).

So, for Harré, neurological functions are a prerequisite for psychological events (such as thoughts and emotions) – but they don't cause (or constitute) those events. In other words, our brains represent an underlying foundation for all our cognition and action, but beyond that, it is we, the actors, who determine what we do and think (not our brains; see Chapter 5). However, we can only understand these actions and cognitions by considering the sociolinguistic context in which they take place. Harré gives the example of skills (such as playing the piano or carving wood):

To demonstrate a skill means to have performed in accordance with some culturally and historically specific definition, and to have had one's efforts accounted for in skills-type language … thinking of even simple skills as the straightforward manifestation of physiological events is very shaky. We are immediately drawn into issues about who decides what constitutes a skill, how it is deemed to have been performed, and under what circumstances claims to skilful performance are accepted or rejected.
(Burr, 2015, p. 152)

Harré's major contribution has been his account of self as a linguistic phenomenon. As we've seen, in mainstream Psychology the self is understood as what motivates and directs the person's behaviour; it's 'located' within the person and, despite being conceptualized differently from different theoretical perspectives, it's seen as existing as part of every person's make-up. Harré reframes the self as a function of language: the language we use when discussing 'the self' lures us into the mistaken belief that it exists as an entity, an object that can be studied like any other.

Typically, the words we use (i.e. nouns and pronouns) denote things, tangible objects that we can see and feel (they have an indexical function – they're labels for the object). Sometimes, however, nouns and pronouns appear to have an indexical function, but there's nothing to which they actually refer. The pronouns 'I' and 'Me' fall into this second category: we mistakenly assume that there must be specific entities that these words denote in the way that most nouns and pronouns do. The fact that 'I' and 'Me' exist leads us to believe that we're 'autonomous individuals, that each of us is represented by a coherent, unified self, and … that this self contains mechanisms and processes, the subject matter of psychology, that are responsible for our actions' (Burr, 2015, p. 153).

According to Harré (1995b, 1999), 'I' specifies a location for the acts performed by a speaker:

To be a person is to be a singularity, to be just one person…. It is part of the grammar of the 'person' concept…. Personhood is so bounded by the singularity of each human being's embodiment that neither more nor less than one person per body is permitted to stand.
(Harré, 1999, p. 103)

In other words, 'I' draws attention to the body of a specific speaker who occupies a unique location, both physically (literally) and socially (metaphorically). 'I' also commits the speaker to the consequences of what they say; so, for example, when we make a promise, we're making a public commitment (not describing an internal thought or feeling). To promise something, then, is a moral act. We use such words to perform actions in a 'moral universe':

The human individual is, above all, in those societies that recognize autonomy, a moral phenomenon…. 'I' is a word having a role in conversation, a role that is not referential, nor is the conversation in which it dominates typically descriptive, fact-stating. It is a form of life, a moral community that has been presupposed by the uses of the first person, not a kind of hidden inner cognitive engine.
(Harré, 1989, p. 26)

• How plausible do you find Harré's account of the self?

It could be argued that there's something counter-intuitive about Harré's account; ironically, this might have something to do with the kind of language he uses, which is quite abstract, painting a picture of a person that's quite different from the one we're familiar with through our everyday experience and common-sense understanding (see Chapter 4).

Indeed, some Social Constructionists have recently become increasingly dissatisfied with the status given to subjective experience within that approach. Many constructionist accounts imply a Psychology devoid of persons. Burr (2015) and Cromby (2004) argue that this neglect of subjectivity 'reifies the social' and creates a Psychology that is 'devoid of much that is significantly and recognizably human' (p. 799). These criticisms have led many Social Constructionists to re-examine their conceptions of subjectivity, particularly with regard to embodiment (the importance of having a body) and emotion. (For further discussion, see Burr, 2015, and Gross, 2015.)

Burr (2015) cites Phenomenology (see Chapter 10) as sharing with Social Constructionism a rejection of essentialism while taking embodied experience seriously. Also, Kelly's Personal Construct Theory (see Chapter 4), which she describes as a constructivist theory that rejects mainstream Psychology's assumptions and contains rich insights drawn from its clinical history, is 'one of the most promising resources for developing a social constructionist psychology' (p. 233).

Bringing together the concepts of self, personhood, subjectivity (and self-consciousness), and the moral community, it might be instructive to ask: when is a non-human animal a person? (See Box 12.9.)


BOX 12.9 When is an animal a person?

• Rutkin (2016) cites a conference held in June 2016 at the Institute for Research in Cognitive Science in Philadelphia, to discuss the ethical implications of recent neuroscientific research involving non-human animals.
• The organizer, Martha Farah, a cognitive neuroscientist, claims that neuroscience is remodelling many conventional boundaries (such as those between animals and machines, and between one species and another).
• From an ethical perspective, perhaps the most troubling boundary-blurring involves our closest evolutionary relatives, chimpanzees and other great apes.
• While animal welfare has been hotly debated in Western countries for many years, some recent research has brought the issue to the fore. In 2015, under pressure from activists and Congress, the US National Institutes of Health closed its chimp research programme and sent the animals to sanctuaries.
• The Nonhuman Rights Project (NRP) continues to pursue legal action to free captive chimps. The NRP's president, Stephen Wise, has been focusing its legal challenge on the notion of 'personhood'.
• In the eyes of the law, a person is something distinct from a human, and distinct from a thing. Personhood carries major implications for the legal, ethical, and psychological status of the being said to possess it.
• Respect for the animal may be a core aspect of what 'personhood' means.
• But personhood has been attributed to entities that aren't even animate, such as a New Zealand river of importance to an indigenous group, and a mosque in Pakistan.
• In the case of non-human animals, the debate centres around which cognitive and other capacities they need to possess; this is another way of asking 'what makes a person a person?'.
• But rather than trying to define personhood in this way (i.e. a creature is either a person or not a person – a false dichotomy), we could place species on a scale, with 'zero moral status' at one end and 'full moral status' at the other; rights could then be awarded in proportion to the species' intelligence and other attributes (including degree of self-consciousness; see Gross, 2012).

(Based on Rutkin, 2016)

• What do you understand by the term 'personhood'?
• Is it simply another term for 'human being'?

Conclusions: the self-concept as a cultural phenomenon

Potter and Wetherell (1987) suggest that the kinds of ways we have available for talking about ourselves shape our experience of ourselves as human beings. To illustrate this claim, they describe the Maori, a non-Western, non-industrialized culture.

In Maori culture, the person is invested with a particular kind of power (mana), a gift from the gods in accordance with their family status and birth circumstances. Mana is what enables the person to be effective, whether in battle or in everyday social interaction. However, this power isn't a stable resource: it can be increased or decreased depending on the person's day-to-day conduct. For example, it could be reduced by forgetting a ritual observance or committing some misdemeanour. People's social status, and their successes and failures, are attributed to external forces, not internal states (such as personality or motivation). In fact, mana is only one of the external forces that inhabit the individual. People living in such a culture would necessarily experience themselves quite differently from the way people in Western culture are used to.

If one views the world in this kind of way, with the individual seen as the site of varied and variable external forces … then different kinds of self-experience become possible. Specifically, individuals can cease to represent themselves as the centre and origin of their actions, a conception which has been taken to be vital to Western concepts of the self. The individual Maori does not own experiences such as … fear, anger, love, grief; rather they are visitations governed by the unseen world of powers and forces.
(Potter and Wetherell, 1987, p. 105; emphasis added)

As Burr (2003) observes, this suggests that the very experience of being a person, the kind of mental life one can have, perhaps even how we experience sensory information, depends on the particular ways of accounting for ourselves that are available in our culture. As Harré (1985) puts it: 'To be a self is not to be a certain kind of being, but to be in possession of a certain kind of theory' (p. 262).

Consistent with these claims, Smith and Bond (1998) make the distinction between independent and interdependent selves, a feature of individualist and collectivist cultures, respectively. The individualism–collectivism cultural syndrome (Triandis, 1990) refers to whether one's identity is defined by personal choices and achievements (the autonomous individual: individualism) or by characteristics of the collective group to which one is more or less permanently attached, such as the family, tribal or religious group, or country (collectivism). While people in every culture display both, the relative emphasis in the West is towards individualism, and in the East towards collectivism.

Pause for thought – answers

1 The self as a ‘monolithic entity’ implies that it is in total, overall control of everything we do. It also conveys the idea of unity. So, the idea of the self as ‘unitary’ is already contained within the ‘monolith’ metaphor (a monolith is a large piece of stone used in buildings). Unity is one of seven features of the self as defined by Ramachandran (see text above).

Despite the teeming diversity of sensory experience that you are deluged with moment to moment, you feel like one person. Moreover, all of your various (and sometimes contradictory) goals, memories, emotions, actions, beliefs, and present awareness seem to cohere to form a single individual. (Ramachandran, 2011, pp. 250–251)


2 Behaviour is normally the product of an interaction between the two hemispheres (Sperry, 1974): the performance of one in isolation might be deceptive regarding its role prior to surgery, when it interacts with the opposite hemisphere. As Toates (2001) says, processing systems might be reorganized following surgery, especially if the patient is young, so that the hemisphere’s performance is changed.

3 The surgery is a last resort after years of suffering and failed medication (Kosslyn et al., 1999); we cannot just assume that these patients are like controls in all other respects. Even split-brain patients wouldn’t normally act in such a way that information is projected only to one hemisphere. Research has shown that even a few spared callosal fibres (i.e. connecting tissue that hasn’t been cut during surgery) can support transfer of information between hemispheres (Funnell et al., 2000).

4 Self-image and self-esteem represent two of three major components of the self-concept; the third is the ideal-self.

Self-image refers to how we describe ourselves, the kind of person we think we are (including social roles, personality traits, and physical characteristics (the bodily self)). The bodily self includes (usually temporary) bodily sensations (such as pain, cold, hunger, etc.) and more permanent features (including skin and eye colour, and biological sex); the latter also include what we count as belonging to our body (what’s part of us). Gordon Allport (1955b), a very eminent personality theorist, gives two examples of how intimate our bodily sense is and just where we draw the boundaries between ‘me’ and ‘not me’:

- Imagine swallowing your saliva – or actually do it! Now imagine spitting it into a cup and drinking it! Clearly, once we’ve spat it out, we have disowned it – it’s no longer part of us!
- Imagine sucking blood from a cut in your finger (something we do quite automatically if the cut is minor). Now imagine sucking the blood from a plaster on your finger! Again, once it’s soaked into the plaster, it has ceased to be part of ourselves.

While the self-image is essentially descriptive, self-esteem is essentially evaluative: it refers to the extent to which we like or approve of ourselves, how worthwhile a person we think we are. This can relate to specific aspects of our self (such as how physically attractive we think we are) or to our overall ‘value’ as a person. Our self-esteem is (partly) determined by how much our self-image differs from our ideal-self (ego-ideal or idealized self-image), i.e. the kind of person we’d like to be. Generally, the greater the gap between our self-image and our ideal-self, the lower our self-esteem (see text above).

Chapter 13

People as deviant

Psychiatry and the construction of madness

As we saw in Chapter 12, every person is in certain respects like all other persons (Kluckhohn and Murray, 1953). This captures the nature of ‘General’ Psychology, which in turn describes mainstream Psychology. Kluckhohn and Murray also claim that every person is in certain respects like some other people; this describes the field of Differential Psychology (or individual differences) (also a part of mainstream Psychology). As regards individual differences, those ‘respects’ might relate to personality, intelligence, culture, gender, or age. These characteristics are ways of defining human diversity and are commonly investigated by mainstream Psychology in one of two ways:

1 As ways of comparing people, based on the nomothetic (law-like) approach; it’s assumed that (except, arguably, in the case of gender) every individual can be placed on a continuum (say, of intelligence). People differ only in where on the continuum they belong.

2 As participant (or person) variables that need to be controlled in the context of (usually laboratory) experiments; this ensures that only the independent variable affects the outcome (the dependent variable) and not individual differences between participants. (Experimental designs are aimed at achieving this control.)

There are other ways of comparing people, this time in terms of characteristics and/or behaviour relating to social norms of acceptability and desirability. For example, while people with either very high or very low IQs are statistically deviant (by definition, most people have ‘average intelligence’; see Chapter 11), the latter are likely to be judged less favourably, as less ‘worthy’ members of society. In the case of personality, ‘deviants’ are either (1) those who score at the extreme ends of dimensional scales (such as introversion–extroversion; neuroticism–stability), or (2) those who display behaviours/patterns of thinking which would lead them to be judged as having a particular psychological disorder.
In (1), the difference between deviants and non-deviants is quantitative: introverts display many more introvert traits than extroverts; in (2), the difference is qualitative: for example, someone either has schizophrenia or they don’t. These are often referred to as the dimensional and categorical approaches, respectively, and are traditionally supported by Psychologists and psychiatrists, respectively.



So, in a scientific context, ‘deviant’ usually means either (1) those who fall outside the statistical average (deviation-from-the-average or statistical infrequency), or (2) those who are designated as belonging to a category of psychological disorder or abnormality, which, by definition, most people don’t belong to.

According to Littlewood and Lipsedge (1989), every society has its own characteristic pattern of normative behaviour and beliefs – expectations about how people should behave and what they should think. These norms define what’s (un)acceptable and (not) permissible, as well as what’s (un)desirable.


Try to identify some examples of behaviour that’s (1) unacceptable; (2) tolerable; (3) acceptable/permissible; (4) desirable; (5) required/obligatory.

Burglary and fraud are illegal, while adultery and abortion breach fundamental moral or religious principles; child sexual abuse, rape, and bigamy are both illegal and breaches of moral/religious principles. While there’s usually little disagreement as to whether something is illegal, there’s more scope for disagreement when it comes to the immorality of (illegal) acts: most people will condemn child abuse (in any form) and rape, but people have very different views regarding the smoking of marijuana (Gross, 2014). ‘Tolerable’ behaviour is at the fringes of illegality and/or immorality (such as gambling, heavy drinking, and co-habiting). Getting married is both acceptable/permissible and desirable as far as most people are concerned, but how do we judge people who aren’t married (or don’t have a partner)? While it’s clearly neither illegal nor immoral to be single, is it somehow ‘not quite right’ (is being ‘coupled’ our ‘default option’?).

So, what about schizophrenia and homosexuality? Again, neither is illegal (although the latter has only relatively recently – since 1967 – ceased to be a criminal offence in the UK), and homosexuality is still immoral as far as many religious people are concerned (and condemned by religious institutions). But might there be additional ways in which these both may be considered deviant?

- They could both be seen as threatening and challenging our basic view of the world, what Scheff (1966) called residual rules: the ‘unnameable’ expectations we have regarding such things as ‘decency’ and ‘reality’.
- Because these rules are themselves implicit, taken for granted, and not articulated, behaviour that violates them is seen as strange and sometimes frightening – but it’s usually very difficult to say why!
- Similarly, Becker (1963) believes that the values on which psychiatric intervention is based are, generally speaking, middle-class values regarding decent, reasonable, proper behaviour and experience. These influence the process of diagnosis of patients who, in state-funded (National Health Service (NHS) in the UK) hospitals, at least, are predominantly working class.
- In addition to the breaking of residual rules, the mere fact of being different may contribute to the unacceptability of people with schizophrenia (‘schizophrenics’) and others deemed ‘abnormal’. If every society has its own characteristic pattern of normative behaviour and beliefs, then ‘outsiders’ (those who deviate from these norms), even if they are not seen as physically dangerous, are threatening simply because they are different. As a way of confirming our own identity (which is so much bound up with these norms), we push the outsiders even further away, and ‘By reducing their humanity, we emphasize our own’ (Littlewood and Lipsedge, 1989, p. 27). (Gross, 2014, p. 157)
- ‘Outsiders’ include criminals (who break the ‘laws of the land’), those whose behaviour and beliefs conflict with our moral code, and those, like the ‘mentally ill’ (or who have ‘mental health problems’ or a ‘psychological disorder’), who break residual rules.

All three groups can be regarded as deviants: their behaviour is considered to deviate from certain standards held by the person making the judgement (Gross, 2014). Although some forensic psychiatrists and Criminological (or Forensic) Psychologists are interested in the causes and treatment of criminality as a form of social deviance, the main focus of Abnormal and Clinical Psychology (as well as psychiatry) is on the kinds of unacceptable behaviour and experience which, in their extreme form (as in schizophrenia and other forms of psychosis), have been referred to as madness.

A brief history of madness

As we noted above, homosexuality was a criminal offence in the UK until as recently as 1967. This illustrates the historical relativism of what we define as deviant; the change in the law was to some extent the outcome of changing social attitudes and moral opinions. While homosexuality was de-criminalized, this applied only to consenting males over the age of 21; the age of consent was lowered to 18 in 1994 and only brought in line with the heterosexual age of consent (16) in 2001. In 2012, Russia passed a law banning the promotion of ‘non-traditional sexuality’ to under-18s, and in several countries (including Iran, Saudi Arabia, Sudan, Yemen, and parts of Nigeria and Somalia), homosexuality (at any age) is punishable by death! So, what’s defined as deviant is also culturally relative.

A brief historical tour shows clearly that deviancy – in some form or another – is an inherent part of human society and culture: when people live together, certain types of difference will always be given a negative valuation. As Littlewood and Lipsedge (1989) put it, there will always be outsiders in our midst.

Prehistoric period

Archaeological findings suggest that some types of psychological abnormality must have been recognized as far back as the Stone Age. Skeletal remains reveal that attempts were made to relieve brain pressure by drilling small holes in the skull (a procedure similar to what’s now called trephining). While this appeared to be done deliberately, it’s unlikely to have been based on any knowledge of brain pathology: it’s much more likely that the operation was performed in the belief that the holes would allow ‘evil spirits’ to escape.

Pre-Classical period

Although primitive superstitions persisted into and beyond the Classical period (see below), earlier attempts were being made to find a more rational approach to the understanding and treatment of the mentally abnormal. For example, in about 2600 bce in China, some forms of faith healing, diversion of interest, and change of environment began to be used; by 1140 bce, institutions for the ‘insane’ had been established, where patients were cared for until ‘recovery’. Old Testament sources indicate that the Hebrews saw mental illness as a punishment from God.



The Classical period

Ancient Greece saw the replacement of supernatural explanations with observation and reason – what many describe as the most significant change of worldview humanity has experienced. There appeared the first form of a medical approach to abnormality: priests were gradually replaced by physicians (as described in Box 13.1).

BOX 13.1 The early Classical medicine men

- Hippocrates (460–377 bce), the ‘Father of Medicine’, was among the first to promote the ‘medical model’ of madness. His was the first attempt to identify categories of illness, each with its own physiological cause (see discussion of Kraepelin in the text below).

Men ought to know that from the brain and from the brain only arise our pleasures, joys, laughters and jests…. Those who are mad through phlegm are quiet, and neither shout nor make a disturbance; those maddened through bile are noisy, evil-doers, and restless, always doing something inopportune. (Hippocrates, 1931, p. 9)

- Terms such as ‘phlegmatic’ survive today. Hippocrates was just as convinced as biological psychiatry is today (see text below) that he was discovering the physical causes (i.e. bodily fluids/humours) of illnesses, rather than promoting a simplistic theory that justifies and camouflages the social control of unacceptable or disturbing behaviour.
- The Dogmatist sect argued that Hippocrates had already discovered everything worth discovering.
- Pythagoras (c.569–475 bce) was the first to teach a natural explanation for psychological abnormality: he identified the brain as the centre of intelligence and attributed mental illness to brain disorder.
- Plato (428–347 bce) recognized the existence of individual differences in intelligence (and other psychological characteristics). Mental disorder is partly moral, partly physical, and partly divine in origin. He was the first to claim that criminals are mentally disturbed.
- Aristotle (384–322 bce) believed in a physiological basis for mental illness (as taught by Hippocrates). He considered but rejected the possibility of psychological causes.
- Greek physicians settled in Rome, the most eminent of them being Aesclepiades (124–40 bce) and Galen (129–199). Galen recognized the duality of physical and psychological causation in mental illness, identifying such varied factors as head injuries, alcoholism, fear, adolescence, menopause, and financial difficulties. But at the same time he assigned specific divine or astrological influences to particular bodily organs. He’s best known for his extension of Hippocrates’ ideas: three forms of mental illness (mania, melancholia, and phrenitis (brain fever)) are caused by imbalances in the four bodily fluids: black bile, yellow bile, phlegm, and blood. So, for example, melancholia (similar to ‘depression’) is due to an excess of black bile. Galen’s influence lives on, notably in Eysenck’s personality theory, in which the four temperaments – melancholic, phlegmatic, sanguine, and choleric – are clear derivatives (see Chapter 11 and Figure 13.1).

[Figure 13.1 Eysenck’s dimensions of personality (1965), showing the lasting influence of Galen’s theory of the four bodily fluids (or humours). The figure arranges trait words around two axes – unstable–stable (vertical) and introverted–extroverted (horizontal) – with the four quadrants labelled with Galen’s temperaments: melancholic (unstable introverted: moody, anxious, pessimistic, reserved, unsociable, quiet), choleric (unstable extroverted: touchy, restless, aggressive, excitable, changeable, optimistic, active), phlegmatic (stable introverted: passive, careful, thoughtful, peaceful, controlled, reliable, even-tempered, calm), and sanguine (stable extroverted: sociable, outgoing, talkative, responsive, easygoing, lively, carefree, leadership).]

(Based on Costello et al., 1996; Read, 2013a)

The medieval period/Middle Ages

With the dissolution of the Roman Empire, observation, reason, and physiological theories were replaced by variations on the old religious themes. For example, in the New Testament Jesus was venerated as the healer of the sick and the caster out of demons. According to Sue et al. (1990), the Church (which had become inseparable from the State) demanded unconditional adherence to its tenets: certain truths were deemed sacred and those who challenged them – including scientists – were denounced as heretics. Rationalism and scholarly scientific works went ‘underground’ for many years, preserved mainly by Arab scholars and European monks. Natural and supernatural explanations of illness became (con)fused: illness was perceived as punishment for sin, so the sick person was assumed to be guilty of wrongdoing. The Dark Ages (400–900) were especially bleak for the mentally ill.

During the earlier Middle Ages (up to about 1000), however, care for the mentally ill was relatively humane, as demonstrated by the original Bethlem Hospital in London (later to degenerate into the infamous Bedlam ‘snake-pit’). People thought to be possessed by the Devil weren’t held responsible, making exorcism – the treatment of choice – a benevolent process directed against the Devil (not a punitive process aimed at the individual). However, from 1000 onwards attitudes and practices began to change, with the focus on witchcraft (see Box 13.2).



BOX 13.2 Mental illness and witchcraft

- The ancient belief in demonic possession was once again widespread as an explanation for mental illness, punishable by stoning and other forms of torture.
- However, some of the afflicted were considered more guilty than others, in particular those thought to be the Devil’s agents (rather than his victims). Witches (predominantly, but not exclusively, women) were to be identified and destroyed.
- To assist with that task, a manual for identifying and examining witches (Malleus Maleficarum: ‘Witch Hammer’, 1486) was published. This became a widely respected authority and guide for witch-hunts. Unfortunately for the mentally ill, the manual equated abnormal behaviour with possession by the Devil: possessed individuals were assumed to be witches.
- Spanos (1978) estimated that between 1450 and 1600, well over 100,000 people were executed as witches. Feminist writers (such as Ussher, 1991) have argued that the witch-hunts constituted a systematic persecution of women, substantially because of sexism and bigotry deriving, largely, from a fear of the erosion of male power and privilege.
- Religious and social factors had created an intense fear and hatred of women and a fiercely misogynist Church. Women were officially described as ‘a foe to friendship, an unescapable punishment, a necessary evil, a natural temptation, a desirable calamity … an evil of nature painted with fair colours’ (Malleus Maleficarum, p. 43). Many of those who were tortured and murdered for witchcraft were guilty of nothing more than using herbs and brews for healing – a clear threat to male priests’ monopoly on healing – or, simply, being female! (Read, 2013a)

The Renaissance period

According to Foucault (1971), up to the 1400s the greatest fear was of death, but at this point in time madness ‘makes an entrance as the ghastly scourge of the Western mind’ (in Parker et al., 1995). Whereas death occurs at the end of a life, madness was regarded as an ever-present possibility. At the end of the Middle Ages, those with ‘diseased minds’ began to fill the space previously occupied by those suffering from leprosy (Parker et al., 1995). Asylums and Houses of Correction (such as the Bethlem Hospital, originally the monastery of St. Mary of Bethlehem and converted to a hospital by Henry VIII in 1547; see above) began to appear all over Europe. At first, the ‘mad’ were incarcerated with all kinds of social deviants but, increasingly, specialist asylums developed; this reflected changes in attitudes regarding how such individuals should be treated.

An important figure in this attitude change was Johan Weyer (1515–1588), a German physician who published a scientific analysis of witchcraft in 1563 which rejected the notion of demonic possession. Weyer is regarded by some as the ‘Father of Modern Psychiatry’ (Costello et al., 1995). In many ways, he represented a return to the ideas of Hippocrates and Galen, but he was severely criticized by both the Church and the State.

1700–1900

Despite the use of ‘hospital’ or ‘asylum’ for those institutions where the ‘mad’ were taken, they were more like prisons, and ‘care’ is a wildly inaccurate term for what they experienced once inside. Nothing was done for the inmates other than to confine them under terrible conditions, often chained to the wall (preventing them from lying down to sleep) or to a large iron ball which they had to drag around whenever they moved. At least in France, ‘The mad were locked away not for being mad but for being poor’ (Read, 2013a; emphasis added).

Foucault (1967) calls this period the ‘Great Confinement’, which served the economic function of forcing inmates to work for very little pay (under the guise of ‘exercise’ or ‘occupational therapy’); it also served the political function of suppressing, under the guise of helping the poor and sick, the increasing number of uprisings among the unemployed. It also bolstered a moral belief in hard work. Foucault argues that the mad were being locked away, all over Europe, along with anyone else considered inconvenient or threatening: calling prisons ‘hospitals’ conceals this.

The treatments provided behind the locked doors (such as blood-lettings and purges) were ‘physical cures whose meaning had been borrowed from a moral perception’ (Foucault, 1967, p. 159; emphasis added). Many were based on purification of the body, symbolizing the notion that mental illness was caused by moral contamination. The quest for the best purifying agent led to force-feeding tartar, chimney soot, woodlice, and soap! (Read, 2013a)

Two much-cited men, Philippe Pinel (1745–1826) in France and William Tuke (1732–1822) in England, are portrayed by most British and American textbooks of Abnormal Psychology as great liberators and reformers, who freed the insane from their chains and generally improved their living conditions. Their approaches have come to be known as ‘Moral Treatment’ (Scull, 1981). Significantly, Pinel found it impossible to differentiate the effects of madness from the effects of cruel treatment.
Unlike many before them, Pinel and Tuke and other fellow reformers (notably Benjamin Rush (1745–1813) and Dorothea Dix (1802–1887) in America) were honest about their aim to impose society’s moral code on deviant individuals (such as eradicating celibacy, promiscuity, apathy, and laziness): it was, as they themselves acknowledged, a form of social control. However, this moral treatment approach had little impact on the stranglehold of biological theories of madness: the categorizations and the quest for physical explanations continued, and doctors began to play a major part in this new-style asylum (unlike their role in the huge confining hospitals). But they weren’t taking on the role of medical expert; rather, they were seen as authority figures adding weight to the efforts of the attendants:

The doctor’s intervention is not made by virtue of a medical skill or power that he possesses in himself and that would be justified by a body of objective knowledge. It is not as a scientist that homo medicus has authority in the asylum, but as a wise man. If the medical profession is required, it is as a juridical and moral guarantee, not in the name of science. (Foucault, 1967, p. 270; emphasis added)

Foucault also used the leper as a metaphor for the mentally ill. From the eleventh to the fourteenth century, leprosy swept through Europe and the leper house became part of the social landscape. When the Crusades ended, the disease began to disappear and the vast structure set up to deal with it started to become redundant. Over the following few centuries, the emptying leper houses would be filled by the mentally ill, taking on the role that the lepers had once played. As we’ve seen, they were chained to the walls, coerced into obedience, and then later subjected to ‘moral’ improvement.



In providing this account, Foucault was re-interpreting the role of Pinel and Tuke:

It was possible to take the chains off only after madness was mastered by Reason; it was only possible after the essence of the mad person’s discourse had been smothered by Reason’s silence. The birth of psychiatry, as we know it today, is the breakdown of dialogue between Reason and Unreason. The psychiatric hospital is the City of Reason which borrows some of its repressive measures from the penal system but most of all invents its own code which is personified by the Doctor. He is at the same time the Father, the Judge, the Family and the Law. (Foucault, 1967, p. 58)

While Foucault is describing the situation pertaining in the early 1800s and beyond, his observations still have relevance – and validity – today. As we shall see later when discussing the views of anti-psychiatrists (such as Szasz and Heather), the authority of psychiatrists still originates more from the political power we all grant them than from any scientific expertise. Perhaps psychiatrists themselves were – and are – unaware of the source of this power (Foucault, 1967).

The emergence of psychiatry: the obsession with classification

Throughout the 1800s, progress in medicine was impressive; the search for similar success in relation to madness is understandable. The founding of insane asylums meant that, for the first time, sizeable groups of patients with mental disorders were brought together, giving physicians their first opportunity to observe and assess their behaviour. These studies laid the groundwork for the scientific model of mental disorder (Wilson et al., 1996).

One of the earliest ‘modern’ attempts at classifying mental disorders was Pinel’s four-way classification: (1) melancholia (severe depression); (2) mania (marked agitation, grandiose thinking, and elation) without delirium (lost awareness of one’s surroundings, time, and self); (3) dementia (memory loss, personality change, deterioration in judgement and personal habits); and (4) idiotism (mental retardation). While this and many other new classifications of madness were soon abandoned, the fragile new profession of psychiatry couldn’t afford to give up the search for mental ‘illnesses’ altogether: its survival as a medical science depended on it (Read, 2013b).

In 1822, one physiological cause for one form of madness was found: Bayle’s discovery of brain damage in dementia paralytica, which was later identified as the result of syphilis. This fuelled the hope that the physiological causes of other mental ‘illnesses’ would be found; however, this hope was to prove unfounded (Read, 2013b). From the late 1800s up until the present, much of this misplaced hope, together with research effort and theoretical speculation, became focused on (what became known as) schizophrenia.

The story of modern psychiatry begins with Kraepelin, whose ideas – if not his terminology – continue to be influential in the twenty-first century (see Box 13.3). Kraepelin’s early research (into the effects of alcohol etc.) focused on exogenous (i.e. external) conditions (as opposed to endogenous, i.e. internal, ones).
He mapped this distinction onto a general distinction between neurosis and psychosis.



BOX 13.3 KEY THINKER: Emil Kraepelin (1856–1926)

- Kraepelin was born in Germany, but otherwise little is known about his childhood and family background.
- He began his medical training in Wurzburg, but also studied for a while in Leipzig, where he met Wundt (see Chapter 1). Wundt encouraged him to undertake psychological research, but he chose psychiatry. Even before graduating, he won a prize for an essay on treating mental disorders.
- Kraepelin graduated from Wurzburg in 1878 and became an assistant to von Gudden in Munich. He studied the effects of alcohol, fatigue, and infectious diseases on mental illness, but was already beginning to focus on causation rather than content.
- In 1882 he moved to Leipzig to take a post in a clinic close to Wundt’s laboratory, where he began further experimental research. He became a university lecturer in 1883, moving back to Munich to work again with von Gudden.
- After more changes of job, Kraepelin returned to Heidelberg, where he began a long collaboration with Alois Alzheimer. In 1904 he moved to Munich, taking Alzheimer with him. He became director of the Deutsche Forschungsanstalt für Psychiatrie until his retirement in 1922. He could now implement his life-long plan to link scientific research into the causes of mental disorders with their diagnosis and treatment.

[Figure 13.2 Emil Kraepelin.]

(Based on Harré, 2006)

Pause for thought …

1 What do you understand by the difference between neurosis and psychosis? (It might help to give an example of each: phobias are neuroses, while schizophrenia is a psychosis.)

Kraepelin, who published the first recognized textbook of psychiatry in 1883, was firmly focused on psychoses, the four most important being:

- dementia praecox (‘senility of youth’). Kraepelin himself wasn’t happy with the term, which was renamed schizophrenia by Bleuler in 1911. In 1893, he claimed to have discovered a group of people in whom deterioration (delusions, hallucinations, attention deficits, and bizarre motor activity) begins in adolescence and continues inevitably into a permanent dementia;
- paraphrenia (paranoid delusions or feelings that later develop into dementia praecox);
- manic-depressive insanity (since renamed ‘bipolar disorder’); and
- paranoia proper.



Psychological abnormality was now individualized (located within the patient, with little or no acknowledgement of the social context) and pathologized (seen as an organic, i.e. physical/bodily, dysfunction). As summarized above, Kraepelin claimed that certain groups of symptoms occur together sufficiently often to be called a ‘disease’ or ‘syndrome’. Each category of mental illness is distinct from the others, with its own origins (probably hereditary), symptoms, course, and outcome. Kraepelin’s classification has formed the basis for the two major diagnostic/classificatory systems used in modern Western psychiatry, namely the Diagnostic and Statistical Manual of Mental Disorders (first published by the American Psychiatric Association in 1952; the latest edition, DSM-5, was published in 2013) and the World Health Organization’s International Classification of Diseases (ICD) (first published in 1948; latest edition, ICD-10, 2000).

According to Harré (2006), Kraepelin described his patients in vivid and living detail. But he recognized his ignorance regarding the underlying neuropathology involved in each category: he simply took it as read that some malfunctioning of the brain was the cause of abnormal thinking and behaviour. While claiming that dementia praecox was an incurable degenerative illness, some people so diagnosed did actually recover; in these cases, he claimed that they had been wrongly diagnosed (they didn’t have the illness in the first place!). So, only those who don’t recover provide evidence that the illness exists! According to Read (2013b):

This circular logic is still invoked today. When people diagnosed ‘schizophrenic’ go on to lead productive lives, they are often told that the diagnosis must have been wrong. (Read, 2013b, p. 22)

The American psychiatrist Harry Stack Sullivan (1892–1949) complained, in 1927, that ‘The Kraepelinian diagnosis by outcome has been a great handicap, leading to much retrospective distortion of data, instead of careful observation and induction’ (p. 760).

Pause for thought … 2 What do you understand by the quote from Harry Stack Sullivan? Could Kraepelin’s reasoning have been any different?

Kraepelin couldn’t, of course, wait for outcomes before making his diagnoses; all he could do, like his predecessors, was identify particular behaviours (i.e. symptoms) and apply a medical-sounding label. He then claimed to have proved the existence of dementia praecox by showing that people who had it exhibited its symptoms. This is another circular argument (Read, 2013b): the only evidence for the existence of the illness is the symptoms, and the symptoms are already designated as those belonging to the illness. There’s no independent evidence for the existence of dementia praecox – only the presupposition that it exists!

To break into this vicious circle, Kraepelin needed to identify some form of underlying physical pathology that might be linked to the symptoms. Autopsies, conducted by Alzheimer, revealed nothing abnormal (Bentall, 2009). By 1913, Kraepelin admitted that the causes ‘are at the present time still wrapped in impenetrable darkness’ (p. 224). As Read (2013b) observes, ‘at the present time’ has been used ever since by researchers ‘forever on the verge of finding the biological cause of schizophrenia’ (p. 23).

To claim to have discovered an illness without identifying any consistent symptoms or outcome, and with no observable cause, stretches the basic rules of medical science. To propose a meaningless name to avoid assumptions that might be tested positions Kraepelin’s invention beyond the realms of science altogether. (Read, 2013b, p. 23)

What is ‘mental illness’?

Given that research has (so far) failed to identify an underlying biological cause of schizophrenia (whether a disease of the brain, such as a biochemical disorder or abnormal brain anatomy, or genetic), we need to ask in what sense ‘schizophrenia’ and other ‘mental illnesses’ actually exist. But before doing that, we need to address a more general, fundamental question: what do we mean by ‘mental illness’?

Throughout the chapter so far, various terms have been used to denote the basic subject-matter: ‘psychological abnormality’, ‘mental disorder’, ‘madness’, ‘insanity’, and ‘mental illness’. While ‘insanity’ is, strictly, a legal term, ‘the insane’ is grammatically more acceptable than ‘the mad’. Psychologists are likely to prefer ‘psychological abnormality’ or ‘mental disorder’, while psychiatrists (especially those of a biological persuasion) will prefer ‘mental illness’ (or the alternative, ‘psychopathology’); the latter, as medically trained practitioners, work within the medical model (see Box 13.4).

BOX 13.4 The medical model: abnormality as mental illness

• Many writers have observed that the vocabulary we use to refer to psychological abnormality is borrowed from medical terminology.
• Deviant behaviour and thinking are referred to as mental illness or psychopathology, and classified on the basis of symptoms, the classification being called a diagnosis.
• The methods used to try to change the behaviour and thinking are called therapies, and these are often administered in mental or psychiatric hospitals.
• If the symptoms disappear, the patient is said to be cured (Maher, 1966).
• The use of such vocabulary – both in a Psychology context and in everyday discourse – reflects the pervasiveness of the ‘sickness’ model of psychological abnormality (together with terms such as ‘syndrome’, ‘prognosis’, ‘in remission’). In other words, whether we realize it or not, when we think about psychological abnormality, we construe it as if it indicated some underlying illness.

How valid do you think the medical model is as a way of conceptualizing psychological abnormality? Try to formulate some arguments for and against it.

1 Many defenders of the medical model (MM) have argued that it’s more humane to regard a psychologically disturbed person as sick (or ‘mad’) than plain ‘bad’: it’s more stigmatizing to be regarded as morally defective. However, when we label someone as sick or ill, we’re removing from them all responsibility for their behaviour: just as we don’t usually hold someone responsible for having cancer, for example, so ‘mental illness’ implies that something has happened to the person, who is a victim and who is, accordingly, to be put in the care (and sometimes the custody) of doctors and nurses who will take over responsibility. (It could be argued that the stigma attached to mental illness is actually greater than that attached to labels which do imply the person’s responsibility: our fear of mental illness is even greater than our fear of becoming involved in crime or other immoral/illegal activities, because we believe the former is something over which we have little or no control.)

• While it may be more humanitarian to care for people in hospitals than to torture them for witchcraft, exorcize their evil spirits, or chain them to asylum/prison walls, there’s a sense in which these ‘inhumane’ practices were more honest than what many consider to be the current abuses of psychiatry. When people were imprisoned, society was saying, quite unambiguously, ‘We don’t approve of your behaviour and will not tolerate it!’, making its values clear but also not removing responsibility from the person whose behaviour was being condemned. However, when political dissidents in the former Soviet Union were diagnosed as suffering from schizophrenia, that society was saying ‘No one in their right mind could hold the views you espouse, so you must be out of your mind!’; this bypasses the actual issues raised by the dissident’s beliefs and removes responsibility for those beliefs from the ‘patient’ (Szasz, 1973, 1974; see below).

2 Defining psychological health and illness is much more problematic than defining physical/bodily health and illness. As we saw when discussing homosexuality earlier, not only do norms differ between cultures (they’re culturally relative), but they change over time within the same culture (they’re historically relative). For these reasons, Heather (1976) believes that the criteria used by psychiatrists to assess abnormality must be seen in a moral context, not a medical one. The fact of cultural relativism, he argues, makes psychiatry an entirely different kind of enterprise from legitimate medicine. Psychiatry’s claim to be an orthodox part of medical science rests upon the concept of mental illness, but far from being another medical specialism, psychiatry is a ‘quasi-medical illusion’ (Heather, 1976, p. 63; emphasis in original).

3 Probably the most radical critic of the concept of mental illness is Thomas Szasz (1920–2012), the Hungarian-born American psychiatrist, famous for books such as The Myth of Mental Illness (1972), The Manufacture of Madness (1973), and Ideology and Insanity (1974).
• According to Szasz, the basic assumption made by psychiatrists is that ‘mental illness’ is caused by diseases or disorders of the nervous system (in particular, the brain), which are revealed in abnormal behaviour and thinking. If this is the case, it would be better (and more accurate) to call them ‘diseases of the brain’ or neurophysiological disorders: this would remove the confusion between any physical, organic defect (which must be understood in an anatomical and physiological context) and any ‘problems in living’ the person may have (which must be understood in an ethical and social context).

• As we noted earlier, since Kraepelin’s time, no progress has been made in identifying the underlying cause of schizophrenia (and other major disorders) – there are only theories (such as the dopamine hypothesis and the genetic theory; see Gross, 2015). But even if there were conclusive evidence for any one of these theories, all this would do, in Szasz’s terms, is reinforce his distinction between neurophysiological disorders and problems in living: it would do nothing to demonstrate the reality of ‘mental illness’. In practice, Szasz argues, the vast majority of cases of ‘mental illness’ are actually cases of problems in living and should be referred to as such. It is the exception to the rule to find a ‘mentally ill’ person actually suffering from some organic brain disease (as in Alzheimer’s disease or alcohol poisoning).

• Along with the neurosis/psychosis distinction (see above), a distinction was traditionally made between organic and functional psychosis: ‘functional’ conveys that there’s no demonstrable physical basis for the abnormal behaviour and that something has gone wrong with how the person functions in the network of relationships making up his/her world (Bailey, 1979). Most organic/biological psychiatrists believe that medical science will, in time, identify the physical causes of all disorders (making all abnormality ‘mental illness’) (see Box 13.5). However, this belief hardly constitutes evidence (Heather, 1976); even if such evidence were provided in the case of schizophrenia and other psychoses, it would still leave major categories of mental disorder (such as what was denoted by ‘neurosis’, and personality disorder) which even the biological ‘hardliners’ acknowledge aren’t bodily diseases in any sense. Nevertheless, since 1980, classification and diagnosis have moved increasingly in the ‘mental illness’ direction, as described in Box 13.5.

BOX 13.5 The re-medicalization of psychiatry

• An important difference between diagnosis in general medicine and psychiatry is the role of signs and symptoms. When doctors diagnose physical illnesses, they look for (1) signs of disease (the results of objective tests, such as blood tests and X-rays, plus physical examination) and (2) symptoms (the patient’s report of pain etc.); more weight is attached to the former.
• By contrast, psychiatrists, traditionally, are much more reliant on symptoms; there’s no real psychiatric equivalent of blood tests etc.
• However, observation of the patient’s behaviour, talking to relatives and others about the patient’s behaviour, and, increasingly, the use of brain-scanning techniques (such as CAT and PET; see Chapter 5) contribute additional data.
• Nevertheless, DSM and ICD are based largely on the abnormal experiences and beliefs reported by patients; we have no objective or biological markers for most neurotic or psychotic disorders (Frith and Cahill, 1995).
• Alongside this increase in the use of scanning techniques is the way that mental disorders are conceptualized within DSM and ICD. The early editions of DSM (1952, 1968) were strongly influenced by the Freudian psychodynamic approach (see Chapter 9); this had superseded psychiatry’s biomedical view as embodied in Kraepelin’s pioneering work (see text above).
• However, beginning in the 1970s, psychiatrists in many Western countries – especially the US – abandoned this approach in favour of the biomedical view; this once again became the ‘official’ approach.
• This re-medicalization of mental health problems wasn’t the result of new scientific discoveries; rather, it reflected dramatic changes in the external economic, political, and social environment of medicine as a whole (Horwitz, 2002). The future of psychiatry demanded that it reinvent itself to resemble biomedicine (see text above).

• Szasz’s ‘problems in living’ and the kind of troubling experiences and conflicted relationships that psychoanalysis was aimed at interpreting became reconceptualized as diseases; this made it necessary to devise specific, concrete indicators that would enable clinicians to diagnose them.
• DSM-III (1980) embodied this re-conceptualization. One of its innovations was to define disorders in terms of symptom checklists: the symptom criteria were meant to be objective and, as a major feature of this, all psychodynamic assumptions about aetiology (i.e. the causes of disorders) were removed.
• A major change in DSM-IV (1994) (compared with DSM-III-R, 1987) was the removal of the category ‘organic mental disorders’, which was replaced by ‘delirium, dementia, amnesic and other cognitive disorders’. This change was made because ‘organic’ implies (as we noted earlier) that the other major categories don’t have a biological basis. Since the prevailing view was now that all disorders are biologically caused, ‘organic’ was both misleading and redundant. (ICD-10, 1992, retains a separate category for organic disorders.)
• If it’s not the brain that is diseased, then in what sense can we think of the mind as being diseased? Szasz (1972) responds by saying that we can only attribute disease to the mind metaphorically: minds can be sick only in the sense that jokes or economies are sick. In a literal sense, it’s logically impossible for a non-spatial, non-physical mind to be suffering from a disorder of a physico-chemical nature. The only ‘escape’ from this conclusion is to equate the mind with the brain (see Chapter 5), which, for Szasz, simply reduces everything to neurophysiological disorders and excludes ‘problems in living’ (except those that are a consequence of a neurophysiological disorder).

4 In The Manufacture of Madness (1973), Szasz re-writes the history of psychiatry. It wasn’t a new science beginning with Pinel and Tuke but a practice with a long lineage going back to the Spanish Inquisition and witch-hunting (see above). Words such as ‘Jew’, ‘witch’, ‘homosexual’, ‘Communist’, and ‘mentally ill’ are interchangeable. Belief in mental illness has replaced beliefs in demonology and witchcraft, and exists – or is ‘real’ – in exactly the same sense in which witches existed or were real, serving the same political purposes. Whenever society wishes to exclude certain others from its midst, it attaches to them stigmatizing labels (Szasz, 1974). Under the guise of a science, psychiatry is engaged in issues of a moral and political dimension. Szasz’s argument is often referred to as the conspiratorial model of madness (Kotowicz, 1997). According to Siegler et al. (1972):

Schizophrenia is a label which some people pin on other people, under certain social circumstances. It is not an illness, like pneumonia. It is a form of alienation which is out of step with the prevailing state of alienation. It is a social fact and political event. (p. 101; emphasis added)

According to Szasz, underlying the labelling process is the need to predict other people’s behaviour: the ‘mentally ill’ are far less predictable, and we find this unsettling and disturbing. Attaching a diagnostic label represents a symbolic recapture, and this may be followed by a physical capture (hospitalization, drugs, etc.). Just as psychiatric diagnosis describes the whole person, representing a new and total identity (unlike diagnosis in general medicine), so psychiatric hospitals (like prisons and boarding schools) are total institutions (Goffman, 1968): they embrace all aspects of an inmate’s life. Like all institutions, psychiatric hospitals have both an overt, official purpose and a covert, unofficial purpose; the former is to help the mentally ill recover from their illness, while the latter is to destroy the patient’s previous personal identity and re-mould it into a form required by the institution. In this way, they operate as agencies of social control.

Do mental disorders exist?

As we saw above, homosexuality was illegal in the UK until as recently as 1967; it was also defined by the DSM as a mental disorder (in some form or another) until 1987 (see Gross, 2014). These changes occurred as a result of social protest movements (such as the women’s and gay rights movements, especially in the US). The very fact that successive editions of the DSM have revised both the number and the definitions of mental disorders, removing some but (often) adding more than were removed, suggests that they don’t exist in the way that bodily diseases (such as diabetes or arthritis) exist. For example, DSM-5 (2013) has removed the various sub-types of schizophrenia (catatonic, paranoid, hebephrenic, etc.).

These changes to classification (and diagnosis) don’t reflect medical scientific discoveries; they are changes in how psychiatrists think about mental disorders, as well as changes in social and cultural values. As we noted above, the DSM has moved progressively towards a re-medicalization of mental disorder: this represents a change in how (a majority of) biologically oriented psychiatrists believe it should be understood. Indeed, the head of the National Institute of Mental Health (NIMH) has indicated that priority for future research funding will be given to studies that formally adopt a ‘clinical neuroscience’ perspective that contributes to an understanding of mental disorders as ‘developmental brain disorders’ (Insel, 2009). This is being achieved partly through the development of research domain criteria (RDoC), which will represent a biological alternative to DSM-5, with a strong focus on biological processes and an emphasis on neural circuits (Sanislow et al., 2010). The RDoC framework conceptualizes mental illnesses as brain disorders; this isn’t just reductionist, but also excludes the possible validity of alternative theoretical perspectives (Widiger, 2012).

According to Kupfer et al. (2002), both epidemiological and clinical studies have shown extremely high rates of comorbidity, i.e. the same patient being diagnosed with two or more different disorders. This undermines the hypothesis (originally proposed by Kraepelin) that the categories represent distinct syndromes with distinct aetiologies: while the categories show considerable overlap, they’re based on the assumption that each disorder has a specific and distinct cause. Also, the crucial criterion of validity for a categorical system like DSM is that it should specify the treatment required for each diagnosed disorder; but the lack of treatment specificity is the rule rather than the exception (Kupfer et al., 2002). Single diagnostic categories – especially viewed, as they are in DSM-5, as ‘psychobiological dysfunctions’ – are hardly likely to do justice to the complexity of most (if not all) mental disorders (Widiger, 2012). According to Rutter (2003), mental disorders appear to be the result of a complex interaction between (1) an array of biological vulnerabilities and dispositions and (2) several significant environmental, psychosocial events that often exert their progressive effects over an extended period of time. The symptoms and pathologies of mental disorders appear to be highly responsive to a wide variety of neurobiological, interpersonal, cognitive, and other mediating and moderating variables that develop, shape, and form a particular individual’s psychopathology profile.

Another way of addressing the question of the (objective) existence of mental disorders is to take a close look at their cultural relativism. Both DSM and ICD have been criticized for making the unwarranted assumption that diagnostic categories have the same meaning when carried over to a new cultural context (Kleinman, 1977, 1987). (This is another example of the issue we discussed in relation to IQ tests in Chapter 11.) This issue has potentially been obscured by the fact that the panels that finalize these diagnostic categories are unrepresentative of the world’s population (White, 2013). Inevitably, this has led to the omission of so-called culture-bound syndromes (CBSs) (see Box 13.6).

BOX 13.6 Culture-bound syndromes

• A large number of anthropological studies have found that, in a wide range of non-Western cultures, there are apparently unique ways of ‘being mad’ (Berry et al., 1992): there are forms of abnormality that aren’t documented and recognized within DSM and ICD.
• Examples include ‘brain fag’ syndrome (West Africa) and Koro (South-East Asia, China, and India).
• These ‘exotic’ disorders are usually described in terms that relate to the particular culture in which they’re reported. So, while anthropologists stress ‘culture-specific’ disorders (thus focusing on cultural differences), traditional Western (biological) psychiatrists emphasize ‘universal’ disorders (stressing the similarities) (see Chapter 11).
• When a ‘new’ disorder is observed in some racially alien, non-Western society, the syndrome itself is perceived as alien to the existing classification system: it’s sufficiently ‘outside’ the mainstream to be unclassifiable.
• According to Fernando (1991), the concept of a CBS has been generated by the ideology of Western psychiatry: psychopathology in the West is seen as culturally neutral, while psychopathology that’s distinctively different from that seen in the West is culture-bound. In other words, Western psychiatrists have, traditionally, regarded the disorders included within DSM and ICD as culture-free.
• Fernando believes this distinction represents a form of ethnocentrism within Western psychiatry; the concept of a CBS has a distinctly racist connotation.

Pause for thought … 3 Are there any mental disorders familiar to Western psychiatrists that could be thought of as being a CBS?

According to White (2013), because people living in ‘Western’ countries tend to see the world through a cultural lens that has been tinted by psychiatric conceptualizations of ‘mental illness’, they’re blind to how specific to ‘Western’ countries these conceptualizations are. In other words, like the fish being the last to discover water, we fail to see the culture that surrounds us and attribute it only to those we see as very different from ourselves. This is not only ethnocentric (see above) but very short-sighted! As Magnusson and Maracek (2012) argue, judgements about suffering, dysfunction, and deviance are necessarily – and unavoidably – made through the lens of culture. These judgements often serve to bolster the prevailing norms and social structure; in modern societies, the mental health professions have become a major regulatory force (supporting Szasz’s observations; see above). As feminist critics have pointed out, these judgements often encode gender imperatives, as well as class, cultural, and ethnic bias.

Despite the widespread agreement that schizophrenia has a substantial genetic component and associated brain pathology (making it a candidate for being a ‘real’, objective ‘mental illness’), culture seems to play a role in patterning how it’s expressed, including (1) recognition of the disorder; (2) the symptoms an individual presents; (3) its course/progression; and (4) family and community responses (Jenkins and Barrett, 2004). Comparisons of high-income, industrialized countries and non-industrialized, ‘Third World’ countries have shown that, at least in some of the latter, people diagnosed with schizophrenia have shorter periods of acute illness and are more likely to experience substantial or complete recovery/remission over a period of several years (Hopper et al., 2007; World Health Organization, 1979). Although men and women in Europe and North America are equally likely to be diagnosed with schizophrenia, in Ethiopia, for example, men are five times more likely to be diagnosed than women (Alem et al., 2009). Again, while women in Europe and North America tend to have a better prognosis than men (Canuso and Padina, 2007), this trend is reversed in some non-Western countries (Alem et al., 2009).

Pause for thought … 4 What conclusions might you draw from these findings regarding male–female differences in diagnosis rates and prognosis?

Transcultural psychiatry (TCP) is a branch of psychiatry concerned with the cultural and ethnic context of mental illness (White, 2013). In its early incarnation, TCP reflected the racist attitudes prevailing at that time regarding naive ‘native’ minds, but over time it came to be understood that psychiatry is itself a cultural construct. It now concerns itself with how a medical symptom, diagnosis, or practice reflects the social, cultural, and moral context (Kirmayer, 2001).

Evidence from various parts of the world (including China, Japan, Peru, Sri Lanka, and Tanzania) shows how the introduction of Western psychiatric conceptualizations of mental illness has potentially changed how distress is manifested, or introduced barriers to recovery. For example, in Japan the understanding of depression has changed over the last 20 years, leading to a massively increased market for antidepressants. This ‘aggressive pharmaceuticalization’ has resulted in the abandonment of psychological and social treatments for depression (Kitanaka, 2011).

Making sense of madness: the work and legacy of R.D. Laing

In Chapter 12, R.D. Laing was named as a fairly recent example of an existential practitioner-theorist. At the height of his career, in the mid-to-late 1960s, Laing (1927–1989), a Scottish psychiatrist, was the most widely read psychiatrist in the world. His work was also foundational for the development of the existential-phenomenological approach to existential therapy in the UK (Cooper, 2017). But perhaps more importantly, Laing advocated a radically different attitude towards people with ‘mental illness’ – especially schizophrenia – from the then predominant approach within mainstream psychiatry:

Although Laing was ultimately ostracised from the psychiatric community, his impassioned calls for the ‘mentally ill’ to be treated with care, respect and understanding may have played a critical role in the development of more humane attitudes towards sufferers of severe psychological distress. (Cooper, 2017, p. 115)

Indeed, it’s been claimed that ‘Laing did for the psychotic what Freud did for the neurotic’ (Kirsner, 2015, p. 70). He fostered a culture in which their experiences and behaviours are more likely to be listened to, and treated with respect, rather than dismissed out of hand as ‘madness’.

BOX 13.7 Influences on Laing’s ideas

• While Laing (1965) acknowledged that his key work, The Divided Self: An Existential Study in Sanity and Madness, wasn’t a direct application of any established existential philosophy, he also stated that the existential tradition was the major influence on his own thinking (see Chapter 11).
• He drew liberally from Kierkegaard, Nietzsche, Tillich, Heidegger, Merleau-Ponty, Sartre, Buber, and Husserl. Of these, he ‘lifted’ in particular Husserl’s phenomenological claim that to fully understand another person’s lived experience, we must put to one side all attempts at categorizing, labelling, or diagnosing them; instead, we should try to stay with that person’s lived experience at a purely descriptive level.
• Laing was also influenced by those European psychiatrists who’d attempted to formulate psychological difficulties in existential-phenomenological terms. The earliest of these was Karl Jaspers, whose General Psychopathology (1963/1913) tried to move away from detached, scientific observations and causal explanations of ‘abnormal psychic phenomena’ to describing them in terms of the patients’ actual, meaning-oriented, conscious lived experiences.
• Despite Laing’s intense dislike of Binswanger’s unethical attitude towards severely mentally ill people, the influence of Ludwig Binswanger’s (1963) existential account of schizophrenia can be seen clearly in Laing’s own. For example, Binswanger claimed that schizophrenics withdraw from independent, autonomous selfhood, try to defend themselves against dissolution, and experience a sense of naked horror in the face of potential annihilation.
• While ‘mad behaviour’ is typically seen as chaotic, it actually displays method and meaning (see Box 13.8).
• Laing trained as a psychoanalyst (at the Tavistock Institute in London) and met many of the psychoanalytic ‘greats’ of the British school of ‘object relations’, including John Bowlby and D.W. Winnicott, as well as more traditional psychoanalysts such as Marion Milner (his supervisor) and Charles Rycroft (his therapist). It was these figures – especially Winnicott – who influenced Laing, rather than Freud’s theories directly (see Chapter 9 and Gross, 2015). However, the key Freudian therapeutic practice of free association (see Chapter 9) formed a bridge between psychoanalysis and the existential philosophical tradition: it provided clients with the freedom to express themselves without the restraints represented by the therapist’s expectations and assumptions. Laing also believed in the importance of interpreting the transference (the client’s displacement of feelings and memories relating to significant others onto the therapist).
• The anthropologist Gregory Bateson and the ‘Palo Alto’ group (Bateson et al., 1956) provided another source of influence: they argued that schizophrenia may be the result of dysfunctional (pathological/contradictory) patterns of communication (‘double-binds’) within the family.
• Like Karl Marx, Laing was deeply committed to challenging social injustice, and to the view that an individual cannot be understood in isolation from their family system, which, in turn, can only be understood within the broader socio-politico-cultural context. Also like Marx, as well as Heidegger and other existential philosophers, Laing came to believe that ‘normal’ human beings are hugely alienated from their own selves and potential (see Chapter 12).

(Based on Cooper, 2017)

Pause for thought … 5 Based on the definition above, try to think of some examples of ‘double-binds’.

As a trainee psychiatrist in the 1950s, Laing came to reject the prevailing psychiatric worldview.

• While being associated with the ‘anti-psychiatry’ movement, he explicitly rejected the term (coined by David Cooper). For Laing, what mattered was the fact that psychiatric treatments could be imposed against a patient’s will (even if they were a voluntary patient).
• He detested the ‘unspeakable violence’ of lobotomies (psychosurgery), electro-convulsive therapy (ECT), and padded cells.
• He felt there was a complete breakdown of genuine, human relationships between psychiatrists and patients.
• He saw psychiatrists as having an unparalleled degree of power over patients.
• By labelling, objectifying, ‘thingifying’, and dismissing certain people as ‘mentally ill’ and ‘dysfunctional’, the psychiatric system failed to acknowledge the sense and meaning of patients’ symptoms.

In contrast to this approach, Laing argued that psychiatrists and psychotherapists should aim to enter their clients’ phenomenologically lived world; this would reveal a far greater meaning and purposiveness in clients’ thoughts, feelings, and behaviours than they’d initially believed. At its core, then, Laing’s work can be seen as an attempt

to extend the existential ethic of humanisation … to all human beings, however much distress and psychosis they are experiencing – that is, that schizophrenics, as much as anyone else, should be treated as human beings, and not as something other than human. (Cooper, 2017, pp. 119–120; emphasis in original)

The meaningfulness of schizophrenia

The Divided Self was a study of schizoid and schizophrenic people. Its aim was to make madness, and the process of going mad, comprehensible, and to offer an existential account of some forms of madness. However, Laing wasn’t trying to present a comprehensive theory of schizophrenia, nor to explore its constitutional and organic aspects. The mere attempt to make schizophrenia intelligible was itself revolutionary, and the book’s publication marked an important moment in the history of psychiatry (Kirsner, 2015):

He attempted to listen to psychotic patients and to treat their speech and actions as potentially understandable and meaningful. Psychotics for Laing were not beyond all reason; like other human beings psychotics could be seen as agents whose experience could be understood as meaningful. (Kirsner, 2015, p. 70)

Bleuler, who coined the term ‘schizophrenia’ (literally, ‘divided self ’), remarked that schizophrenics were stranger to him than the birds in his garden (Laing, 1965, p. 24). ‘For Laing, such a lack of understanding of the world of the psychotic wasn’t so much given as constructed’ (Kirsner, 2015, p. 70). Throughout his life, Laing challenged Jaspers’ description of psychotic phenomena as ‘ununderstandable’ and as qualitatively different from sane experience. According to Kirsner (2015), The Divided Self was devoted to showing that the thoughts and actions of psychotics are ‘expressions of human subjectivity, not simply emanations of psychobiological processes’:

Seen in context and from the point of view of the patient as agent, madness could be understood as resulting from choices within psychosocial and biological parameters. (Kirsner, 2015, p. 70; emphasis added)

So what is schizophrenia like?

Laing tried to ‘get inside the head’ of someone with schizophrenia by trying to see the world as that person sees it. This existential-phenomenological analysis retained the categories of classical (mainstream) psychiatry, but proceeded from the assumption that what the schizophrenic person says and does is intelligible if you listen carefully enough and relate to their ‘being-in-the-world’. What Laing found was a split in the patient’s relationship with the world and with the self. The schizophrenic (and the schizoid personality, who may well develop full-blown schizophrenic symptoms) experiences an intense form of ontological insecurity, and everyday events may threaten his/her very existence. The major features of ontological insecurity are described in Box 13.8.

BOX 13.8 The major features of ontological insecurity

- Engulfment: this refers to the dread of being swallowed up by others if involvement becomes too close; this is commonly expressed as ‘being buried, drowned, caught and dragged down into quicksand’, being ‘on fire, bodies being burned up’, feeling ‘cold and dry – dreads fire or water’. To be loved is more threatening than to be hated; indeed, all love is a form of hate.
- Implosion: this refers to the fear that the world, at any moment, will come crashing in and obliterate their identity; the schizophrenic feels empty, like a vacuum, and they are the vacuum; anything (‘reality’) can threaten that empty space, which must be protected at all costs.
- Petrification/depersonalization: this involves fear of being turned to stone (catatonia), fear of being turned into a robot or automaton (thought-control), and fear of turning others into stone. To consider another person as a free agent can be threatening: you can become an it for them; in order to prevent the other depersonalizing you, you may have to depersonalize the other.

Laing’s later models of madness

The model of schizophrenia explored within The Divided Self (originally published in 1960) is commonly referred to as the psychoanalytic model. According to Heather (1976), this book represents the first of three major landmarks in the development of Laing’s thought; the other two also correspond to the publication of major books.

In Self and Others (1961), Laing maintained that ‘schizophrenia’ doesn’t refer to any kind of entity (clinical, existential, or otherwise) but rather refers to an interpersonal ploy used by some people (parents, GPs, psychiatrists, etc.) in their interactions with others (the ‘schizophrenic’). According to this family interaction model, schizophrenia can only be understood as something which takes place between people (and not inside them, as maintained by the psychoanalytic model). To understand individuals, we must study not individuals but interactions between individuals (the subject-matter of social phenomenology). The family interaction model was influenced by, and is consistent with, Bateson et al.’s (1956) research into ‘double-binds’ (see above).

In Sanity, Madness and the Family (1964), Laing and Esterson present 11 family case studies, in all of which one member becomes a diagnosed schizophrenic; they try to make schizophrenia intelligible in the context of what happens within the patient’s family, and in so doing further undermine the disease model (see above).

Finally, in The Politics of Experience (1967), two new models emerged.

1 According to the conspiratorial model, schizophrenia is a label, a form of violence perpetrated by some people on others. The family, GPs, and psychiatrists conspire against the schizophrenic in order to keep them in check: to maintain their definition of reality (the status quo), they treat the schizophrenic as if they were sick; they imprison them in a psychiatric hospital where they’re degraded and invalidated as human beings.

2 According to the psychedelic model, the schizophrenic person is an exceptionally eloquent critic of society, and schizophrenia itself is a natural way of healing our own appalling state of alienation (‘normality’). Schizophrenia is a voyage into ‘inner space’, a ‘natural healing process’; unfortunately, this is rarely allowed to occur because we’re too busy treating the patient.

Conclusions: the social construction of madness

As we noted at the beginning of the chapter, in a scientific context, one way in which ‘deviant’ is defined is as deviation-from-the-average (or statistical infrequency). But, as we also noted, different examples of such deviation can be judged differently depending on particular sociocultural norms. For example, both Picasso’s creative genius and Hitler’s megalomania are statistically rare (‘deviant’/abnormal), but we tend to value the former much more highly than the latter (Gross, 1987).


What this illustrates is the difficulty – if not impossibility – of defining normality/abnormality in an objective way (i.e. free of value judgements, biases, prejudices, etc.). This is mirrored by the difficulty involved in trying to define ‘mental illness’ as a whole, or particular examples or categories of mental disorder, in an objective way (in terms of measurable, identifiable underlying physical causes, such as brain disorders).

So, how are we to understand the nature of psychological abnormality? According to Maddux et al. (2012), we need to try to understand the process by which people go about trying to conceive of, and define, psychopathology, what they’re trying to achieve by doing this, and how and why these conceptions are continually debated and revised. They begin their attempt to address these issues by accepting that ‘psychopathology’ isn’t scientifically but socially constructed.

As discussed in Chapter 3, Social Constructionism involves ‘elucidating the process by which people come to describe, explain, or otherwise account for the world in which they live’ (Gergen, 1985, pp. 3–4). From this perspective, words and concepts such as ‘psychopathology’ and ‘mental disorder’ ‘are products of a particular historical and cultural understanding rather than … universal and immutable categories of human experience’ (Bohan, 1996, p. xvi). Universal or ‘true’ definitions of concepts don’t exist because they depend primarily on who does the defining: the definers are usually people with power, and so the definitions reflect and promote their interests and values (Muehlenhard and Kimes, 1999).
From the Social Constructionist perspective, sociocultural, political, professional, and economic forces influence professional and lay conceptions of psychopathology:

Our conceptions of psychological normality and abnormality are not facts about people but abstract ideas that are constructed through the implicit and explicit collaborations of theorists, researchers, professionals, their clients, and the culture in which all are imbedded and that represent a shared view of the world and human nature. For this reason, mental disorders and the numerous diagnostic categories of the DSM were not ‘discovered’ in the same manner that … a medical researcher discovers a virus. Instead, they were invented (Raskin and Lewandowski, 2000). (Maddux et al., 2012, p. 14; emphasis added)

However, Maddux et al. stress that to say that mental disorders are invented doesn’t mean that they’re ‘myths’ in the sense that Szasz described them (see above). In fact, Szasz himself would agree with Maddux et al. in stating that ‘human psychological distress and suffering are … real’ (p. 17).

Pause for thought …

The reality of the distress associated with ‘mental illness’ is demonstrated by the debate regarding assisted suicide for those who aren’t terminally ill but who are experiencing unbearable mental suffering.

6 Try to formulate arguments for and against the rights of such individuals to assisted suicide.


Also, to claim that conceptions of psychopathology are socially constructed – rather than scientifically derived – doesn’t mean that ‘the patterns of thinking, feeling, and behaving that society decides to label as psychopathology cannot be studied objectively and scientifically’ (Maddux et al., 2012, pp. 17–18).

A way of thinking about the validity of psychiatric diagnoses is to ask whether mental disorders are real and possess an underlying essence (an underlying reality or true nature that all members of a category have in common; Gelman, 2003; see Chapters 2 and 3). As we’ve seen, proponents of the medical/disease model maintain that each disorder is universal and has a biologically based causation with discrete boundaries: each disorder is distinct and separate from all the others (Ahn et al., 2006). Another way of describing essentialism is in terms of ‘natural kinds’ (e.g. Hacking, 1994; see Chapter 3). According to Plato, the world is divided into fundamental or natural categories that exist as categories whether or not humans know about them. In this view, the task of science is to produce knowledge that gets increasingly closer to reality (the ‘truth’) by discovering these naturally – and objectively – existing kinds and describing their true properties; these true properties are taken to be the inherent – or essential – meaning of the categories (Magnusson and Maracek, 2012).

According to an alternative view, thought to have originated from another group of Greek philosophers – the Sophists – the categories, assumptions, and measurements people use to classify the world aren’t found in nature, but are human-made: they’re the product of people’s efforts to make sense of the world, and distinctions between categories (in this case, mental disorders) and the meaning attached to them are matters of social negotiation. More generally,

there’s no reason to expect that any categorization scheme (such as DSM) will be used everywhere … nor can we assume that newer categorizations (later editions of DSM) come closer to reality (‘the truth’) than earlier ones (Hacking, 1994). If essentialism sees science as discovering reality, this alternative view [that of the Sophists] is consistent with SC. (Gross, 2014, p. 168)

Finally, and more generally, because:

What it means to be a person is determined by cultural ways of talking about and conceptualizing personhood … identity and disorder are socially constructed, and there are as many disorder constructions as there are cultures. (Neimeyer and Raskin, 2000, pp. 6–7)


Pause for thought – answers

1

Table 13.1 Comparison between neurosis and psychosis

Neurosis | Psychosis
1 Only a part of the personality involved/affected. | 1 The whole personality is involved/affected.
2 Contact with reality maintained. | 2 Contact with reality lost.
3 Person has insight (recognizes she/he has a problem). | 3 Person has no insight.
4 An exaggeration of ‘normal’ behaviour (a quantitative difference). | 4 Discontinuous with ‘normal’ behaviour (a qualitative difference).
5 Often triggered by stress. | 5 No precipitating cause.
6 Treated mainly by psychological methods. | 6 Treated mainly by physical methods.

Source: Gross (1987)

- Note that this distinction is no longer made, although the terms ‘neurotic’ and ‘psychotic’ are still used (especially the latter).
- The exogenous/endogenous distinction was used mainly in relation to depression: reactive (‘exogenous’ or ‘neurotic’) depression was thought to be triggered by life events, while endogenous (‘psychotic’) depression was thought to reflect inherent aspects of the person’s make-up (in particular, chemical imbalances in the brain). This distinction is no longer recognized (see Gross, 2015).

2 He waited to see whether or not his patients improved before concluding that, in hindsight (‘on reflection’), they couldn’t have had dementia praecox after all. Instead, he could/should have built up evidence based on a large number of patients, studied over a long period of time, to see whether or not they recovered. If they didn’t (or, at least, a majority of them didn’t), then he could have drawn the conclusion that they did, indeed, have the illness (i.e. people with dementia praecox don’t recover).

- As we saw in Chapter 2, the weakness of induction is that we can never be certain that our conclusions are true: the next observation could show them to be false (e.g. ‘all swans are white’ only requires a single black swan to disprove the claim). Similarly, it only takes one patient diagnosed with dementia praecox to recover to cast doubt on the claim that ‘patients with dementia praecox do not recover’.

3 It has been suggested that anorexia nervosa, pre-menstrual syndrome (PMS), and chronic fatigue syndrome might actually be largely culture-bound to European-American populations (Fernando, 1991; Kleinman, 2000; Lopez and Guarnaccia, 2000).

4 Unless you have reason to believe that there are male–female genetic differences that themselves differ between different countries/cultural groups, the only logical conclusion is that there are sociocultural differences between Western and non-Western countries that account for the gender differences.

5 A mother induces her son to give her a hug, but when he does so tells him ‘don’t be such a baby’ (Gross, 1987).

6 This lends itself to a seminar or debate, but the following facts might help:

- The Netherlands, Belgium, and Switzerland currently permit assisted suicide for non-terminal illnesses that are causing unbearable suffering – including mental suffering.
- Switzerland was the first country to permit assisted suicide, in 1942, subsequently joined by several other countries (most recently Canada).
- In the UK there’s a long-standing movement to legalize assisted dying, but a high-profile bill was thrown out by Parliament in 2015. However, most UK campaigners draw the line at individuals who aren’t terminally ill.
- Paulan Starcke is a Dutch psychiatrist who has helped individuals with psychiatric problems to end their lives. But in her clinic (‘End of Life’, in The Hague), in 2012–2013, only six out of 121 requests from people with a psychological condition were granted. Psychiatrists (at least two) must agree that (1) the person is mentally competent; (2) has had a long-standing wish to die (usually decades); and (3) there’s no prospect of treatment.
- Dignitas, the Swiss assisted dying clinic, claims that just having the option of assisted suicide can help: paradoxically, making it available may actually make it less likely to happen.

(Based on Wilson, 2016)


References

Agassi, J. (1996) Prescriptions for responsible psychiatry. In W. O’Donohue and R.F. Kitchener (eds) The Philosophy of Psychology. London: Sage.
Ahn, W.-k., Flanagan, E.H., Marsh, J.K., and Sanislow, C.A. (2006) Beliefs about essences and the reality of mental disorders. Psychological Science, 17(9), 759–766.
Alem, A., Kebede, D., Fekadu, A., Shibre, T., Fekadu, D., Beyero, T., Medlin, G., Negash, A., and Kullgren, G. (2009) Clinical course and outcome of schizophrenia in a predominantly treatment-naive cohort in rural Ethiopia. Schizophrenia Bulletin, 35(3), 646–654.
Allport, F.H. (1924) Social Psychology. Boston, MA: Houghton Mifflin.
Allport, G.W. (1937) Personality: A Psychological Interpretation. New York: Holt.
Allport, G.W. (1955a) Theories of Perception and the Concept of Structure. New York: Wiley.
Allport, G.W. (1955b) Becoming. New Haven, CT: Yale University Press.
Allport, G.W. (1960) Personality and Social Encounter. Boston, MA: Beacon Press.
Allport, G.W. (1961) Pattern and Growth in Personality. New York: Holt, Rinehart & Winston.
Allport, G.W. and Postman, L. (1947) The Psychology of Rumour. New York: Holt, Rinehart & Winston.
American Psychiatric Association (1952) Diagnostic and Statistical Manual of Mental Disorders. Washington, DC: American Psychiatric Association.
American Psychiatric Association (1968) Diagnostic and Statistical Manual of Mental Disorders (2nd edition). Washington, DC: American Psychiatric Association.
American Psychiatric Association (1980) Diagnostic and Statistical Manual of Mental Disorders (3rd edition). Washington, DC: American Psychiatric Association.
American Psychiatric Association (1987) Diagnostic and Statistical Manual of Mental Disorders (3rd edition revised). Washington, DC: American Psychiatric Association.
American Psychiatric Association (1994) Diagnostic and Statistical Manual of Mental Disorders (4th edition). Washington, DC: American Psychiatric Association.
American Psychiatric Association (2000) Diagnostic and Statistical Manual of Mental Disorders (4th edition revised). Washington, DC: American Psychiatric Association.
American Psychiatric Association (2013) Diagnostic and Statistical Manual of Mental Disorders (5th edition). Washington, DC: American Psychiatric Association.
American Psychological Association (2002) Ethical Principles of Psychologists and Code of Conduct. Washington, DC: American Psychological Association.
Amir, Y. and Sharon, I. (1987) Are social-psychological laws cross-culturally valid? Journal of Cross-Cultural Psychology, 18, 383–470.
Anderson, J.R. (1985) Cognitive Psychology and its Implications. New York: Freeman.


Anderson, M. (2007) Biology and intelligence: The race/IQ controversy. In S. Della Sala (ed.) Tall Tales about the Mind & Brain: Separating Fact from Fiction. Oxford: Oxford University Press.
Antaki, C. (1984) Core concepts in attribution theory. In J. Nicholson and H. Beloff (eds) Psychology Survey 5. Leicester: British Psychological Society.
Appignanesi, L. and Forrester, J. (2000) Freud’s Women. London: Penguin Books.
Apter, A. (1991) The problem of who: Multiple personality, personal identity and the double brain. Philosophical Psychology, 4(2), 219–248.
Argyle, M. (1983) The Psychology of Interpersonal Behaviour (4th edition). Harmondsworth: Penguin.
Armistead, N. (1974) Reconstructing Social Psychology. Harmondsworth: Penguin.
Asch, S.E. (1946) Forming impressions of personality. Journal of Abnormal & Social Psychology, 41, 258–290.
Ashworth, P. (2003) The origins of qualitative psychology. In J.A. Smith (ed.) Qualitative Psychology: A Practical Guide to Research Methods. London: Sage.
Atkinson, R.C. and Shiffrin, R.M. (1968) Human memory: a proposed system and its control processes. In K.W. Spence and J.T. Spence (eds) The Psychology of Learning and Motivation, Vol. 2. London: Academic Press.
Atkinson, R.C. and Shiffrin, R.M. (1971) The control of short-term memory. Scientific American, 224, 82–90.
Baddeley, A.D. (1966) The influence of acoustic and semantic similarity on long-term memory for word sequences. Quarterly Journal of Experimental Psychology, 18, 302–309.
Baddeley, A.D. (2007) Working Memory, Thought and Action. Oxford: Oxford University Press.
Baddeley, A.D. and Hitch, G. (1974) Working memory. In G.H. Bower (ed.) Recent Advances in Learning and Motivation, Vol. 8. New York: Academic Press.
Baggini, J. (2015) Freedom Regained: The Possibility of Free Will. London: Granta.
Bailey, C.L. (1979) Mental illness: A logical misrepresentation? Nursing Times, 3 May, 761–762.
Bamshad, M.J. and Olson, S.E. (2003) Does race exist? Scientific American, 289(6), 78–85.
Bandura, A. (1977) Social Learning Theory (2nd edition). Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (1986) Social Foundations of Thought and Action. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (1989) Social cognitive theory. In R. Vasta (ed.) Six Theories of Child Development. Greenwich, CT: JAI Press.
Banks, W.P. and Pockett, S. (2007) Benjamin Libet’s work on the neuroscience of free will. In M. Velmans and S. Schneider (eds) The Blackwell Companion to Consciousness. Oxford: Blackwell Publishing.
Bannister, D. and Fransella, F. (1966) A grid test of schizophrenic thought disorder. British Journal of Social & Clinical Psychology, 5, 95–102.
Bannister, D. and Fransella, F. (1967) A Grid Test of Schizophrenic Thought Disorder. Barnstaple: Psychological Test Publications.
Bannister, D. and Fransella, F. (1980) Inquiring Man: The Psychology of Personal Constructs (2nd edition). Harmondsworth: Penguin.
Banton, M. (1987) Racial Theories. Cambridge: Cambridge University Press.
Bargh, J.A. (2014) Our unconscious minds. Scientific American, 310(1), 20–27.
Barkow, J., Cosmides, L., and Tooby, J. (eds) (1992) The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press.


Barlow, D.H. and Nock, M. (2009) Why can’t we be more idiographic in our research? Perspectives on Psychological Science, 4(1), 19–21.
Bartlett, F.C. (1923) Psychology and Primitive Culture. Cambridge: Cambridge University Press.
Bartlett, F.C. (1932) Remembering: A Study in Experimental and Social Psychology. Cambridge: Cambridge University Press.
Bartlett, F.C. (1958) Thinking: An Experimental and Social Study. London: George Allen & Unwin.
Bateson, G., Jackson, D.D., Haley, J., and Weakland, J. (1956) Towards a theory of schizophrenia. Behavioural Science, 1, 251–264.
Batson, C.D. (1991) The Altruism Question: Toward a Social-Psychological Answer. Hillsdale, NJ: Erlbaum.
Batson, C.D. and Stocks, E.L. (2004) Religion: Its core psychological functions. In J. Greenberg, S.L. Koole, and T. Pyszczynski (eds) Handbook of Experimental Existential Psychology. New York: The Guilford Press.
Baumeister, R.F. (1987) How the Self became a problem: A psychological review of historical research. Journal of Personality & Social Psychology, 52(1), 163–176.
Bazan, A. (2016) The role of biology in the advent of psychology: Neuropsychoanalysis and the foundation of a mental level of causality. In J. De Vos and E. Pluth (eds) Neuroscience and Critique: Exploring the Limits of the Neurological Turn. London: Routledge.
Becker, E. (1973) The Denial of Death. New York: Free Press.
Becker, H.S. (1963) Outsiders: Studies in the Sociology of Deviance. New York: Free Press.
Bee, H. (2000) The Developing Child (9th edition). Boston, MA: Allyn & Bacon.
Bennett, M. (1993) Introduction. In M. Bennett (ed.) The Child as Psychologist: An Introduction to the Development of Social Cognition. Hemel Hempstead: Harvester Wheatsheaf.
Bennett, T. (2013) Leave those kids alone! New Scientist, 219(2932), 26–27.
Bennett-Levy, J. and Marteau, T. (1984) Fear of animals: What is prepared? British Journal of Psychology, 75, 37–42.
Bentall, R. (2009) Doctoring the Mind. London: Allen Lane.
Berger, P.L. and Luckmann, T. (1966) The Social Construction of Reality. Harmondsworth: Penguin.
Bernstein, M.D. and Russo, N.F. (1974) The history of psychology revised: Or, up with our foremothers. American Psychologist, 29, 130–134.
Berry, J.W. (1969) On cross-cultural comparability. International Journal of Psychology, 4, 119–128.
Berry, J.W., Poortinga, Y.H., Segall, M.H., and Dasen, P.R. (1992) Cross-Cultural Psychology. Cambridge: Cambridge University Press.
Bettelheim, B. (1983) Freud and Man’s Soul. London: Flamingo.
Beyerstein, B.L. (2007) The neurology of the weird: Brain states and anomalous experience. In S. Della Sala (ed.) Tall Tales about the Mind & Brain: Separating Fact from Fiction. Oxford: Oxford University Press.
Binswanger, L. (1956) Existential analysis and psychotherapy. In F. Fromm-Reichman and J. Moreno (eds) Progress in Psychotherapy. New York: Grune & Stratton.
Binswanger, L. (1963) Being-in-the-World: Selected Papers of Ludwig Binswanger (J. Needleman, trans.). London: Condor Books.
Blackham, H.J. (1961) Six Existentialist Thinkers. London: Routledge & Kegan Paul.
Blackman, D.E. (1980) Image of man in contemporary behaviourism. In A.J. Chapman and D.M. Jones (eds) Models of Man. Leicester: British Psychological Society.


Blackmore, S. (1999) The Meme Machine. Oxford: Oxford University Press.
Blackmore, S. (2005) Consciousness: A Very Short Introduction. Oxford: Oxford University Press.
Blackmore, S. (2007) Imitation makes us human. In C. Pasternake (ed.) What Makes Us Human? Oxford: Oneworld.
Blackmore, S. (2010) Consciousness: An Introduction (2nd edition). London: Hodder Education.
Blakemore, C. (1990) The Mind Machine. London: BBC Publications.
Bleier, R. (1984) Science and Gender: A Critique of Biology and Its Theories on Women. New York: Pergamon Press.
Boas, F. (1911) The Mind of Primitive Man. New York: Macmillan.
Bock, P.K. (1988) Continuities in Psychological Anthropology: A Historical Introduction (2nd edition). New York: Freeman.
Boden, M.A. (1980) Artificial intelligence and intellectual imperialism. In A.J. Chapman and D.M. Jones (eds) Models of Man. Leicester: British Psychological Society.
Boden, M.A. (1987a) Artificial Intelligence and Natural Man (2nd edition). Cambridge, MA: Harvard University Press.
Boden, M.A. (1987b) Artificial intelligence. In R. Gregory (ed.) Oxford Companion to the Mind. Oxford: Oxford University Press.
Boden, M.A. (1993) The impact on philosophy. In The Simulation of Human Intelligence. Oxford: Blackwell.
Bogen, J.E. (1981) Mental numerosity: Is one head better than two? Behavioural & Brain Sciences, 4(1), 100–101.
Bohan, J. (1996) The Psychology of Sexual Orientation: Coming to Terms. New York: Routledge.
Boring, E.G. (1923) Intelligence as the tests test it. New Republic, 6 June, 37.
Boring, E.G. (1950) A History of Experimental Psychology (2nd edition). Englewood Cliffs, NJ: Prentice-Hall.
Boyle, E. (2009) Neuroscience and animal sentience. www.animalsentience.com.
Brand, C. (1996) The g Factor: General Intelligence and its Implications. New York: John Wiley.
Brandon, S., Boakes, J., Glaser, D., and Green, R. (1998) Recovered memories of childhood sexual abuse: Implications for clinical practice. British Journal of Psychiatry, 172, 293–307.
Brentano, F. (1973) Psychology from an Empirical Standpoint. London: Routledge. (Originally published 1874, Leipzig: Dunker Humbolt.)
Brislin, R. (1993) Understanding Culture’s Influence on Behaviour. Orlando, FL: Harcourt Brace Jovanovich.
British Psychological Society (2006) Code of Ethics and Conduct. Leicester: British Psychological Society.
British Psychological Society (2007) Guidelines for Psychologists Working with Animals. Leicester: British Psychological Society.
Broadbent, D.E. (1958) Perception and Communication. New York: Pergamon.
Broadbent, D.E. (1961) Behaviour. London: Eyre & Spottiswoode.
Broadbent, D.E. (1973) In Defence of Empirical Psychology. London: Methuen.
Broadbent, D.E. (1981) Non-corporeal explanations in psychology. In A.F. Heath (ed.) Scientific Explanation. Oxford: Clarendon Press.
Brown, J. (1958) Some tests of the decay theory of immediate memory. Quarterly Journal of Experimental Psychology, 10, 12–21.


Brown, J.A.C. (1961) Freud and the Post-Freudians. Harmondsworth: Penguin.
Brown, L.S. (1997) Ethics in Psychology: Cui bono? In D. Fox and I. Prilleltensky (eds) Critical Psychology: An Introduction. London: Sage.
Brown, P. (1973) Radical Psychology. London: Tavistock.
Brown, R. (1986) Social Psychology: The Second Edition. New York: Free Press.
Brownmiller, S. (1975) Against Our Will: Men, Women and Rape. New York: Simon & Schuster.
Bruner, J.S. (1983) In Search of Mind. New York: Harper & Row.
Bruner, J.S. (1990) Acts of Meaning. Cambridge, MA: Harvard University Press.
Bruner, J.S. and Postman, L. (1949) On the perception of incongruity: A paradigm. Journal of Personality, 18, 206–223.
Bruner, J.S., Busiek, R.D., and Minturn, A.J. (1952) Assimilation in the immediate reproduction of visually perceived figures. Journal of Experimental Psychology, 44, 151–155.
Bruner, J.S., Goodnow, J.J., and Austin, G.A. (1956) A Study of Thinking. New York: John Wiley.
Brysbaert, M. and Rastle, K. (2013) Historical and Conceptual Issues in Psychology (2nd edition). Harlow: Pearson Education Ltd.
Buber, M. (1958) I and Thou (trans. R. Gregory Smith). Edinburgh: T. & T. Clark. (Originally published 1923.)
Buckley, K.W. (1989) Mechanical Man: John Broadus Watson and the Beginnings of Behaviourism. New York: The Guilford Press.
Buller, D.J. (2009) Four fallacies of pop evolutionary psychology. Scientific American, 300(1), 60–67.
Buller, D.J. (2013) Four fallacies of pop evolutionary psychology. Scientific American, 22(1), 44–51.
Bunn, G. (2010) The experimental psychologist’s fallacy. The Psychologist, 23(12), 964–967.
Burkhardt, R.W. (2005) Patterns of Behaviour: Konrad Lorenz, Niko Tinbergen, and the Founding of Ethology. Chicago, IL: University of Chicago Press.
Burns, R.B. (1980) Essential Psychology. Lancaster: MTP Press.
Burr, V. (1995) An Introduction to Social Constructionism. London: Routledge.
Burr, V. (2003) Social Constructionism (2nd edition). Hove: Routledge.
Burr, V. (2015) Social Constructionism (3rd edition). London: Routledge.
Burt, C. (1949) The structure of the mind: A review of the results of factor analysis. British Journal of Educational Psychology, 19, 110–111, 176–199.
Burt, C. (1955) The evidence for the concept of intelligence. British Journal of Educational Psychology, 25, 158–177.
Buss, A.R. (1978) The structure of psychological revolutions. Journal of the History of the Behavioural Sciences, 14, 57–64.
Buss, D.M. (1995) Evolutionary psychology: A new paradigm for psychological science. Psychological Inquiry, 6, 1–49.
Bussey, K. and Bandura, A. (1999) Social cognitive theory of gender development and differentiation. Psychological Review, 106, 676–713.
Canuso, C.M. and Padina, G. (2007) Gender and schizophrenia. Psychopharmacology Bulletin, 40(4), 178–190.
Carlson, N.R. and Buskist, W. (1997) Psychology: The Science of Behaviour (5th edition). Needham Heights, MA: Allyn & Bacon.
Carruthers, M. (1998) The Craft of Thought. Cambridge: Cambridge University Press.
Carver, C.S. and Scheier, M.F. (1992) Perspectives on Personality (2nd edition). Boston, MA: Allyn & Bacon.


Cattell, R.B. (1944) Psychological measurement: normative, ipsative, interactive. Psychological Review, 51, 292–303.
Cattell, R.B. (1965) The Scientific Analysis of Personality. Harmondsworth: Penguin.
Chalmers, A. (2013) What is this Thing Called Science? (4th edition). Maidenhead: McGraw-Hill Education.
Chalmers, D. (2007) The hard problem of consciousness. In M. Velmans and S. Schneider (eds) The Blackwell Companion to Consciousness. Oxford: Blackwell Publishing.
Chappell, J. and Kacelnik, A. (2002) Tool selectivity in a non-primate, the New Caledonian crow (Corvus moneduloides). Animal Cognition, 5, 71–78.
Cherry, F. (1995) The Stubborn Particulars of Social Psychology: Essays on the Research Process. London: Routledge.
Clark, K. and Clark, M. (1939) The development of consciousness of self in the emergence of racial identification in Negro pre-school children. Journal of Social Psychology, 10, 591–597.
Clark, K. and Clark, M. (1947) Racial identification and preference in Negro children. In T. Newcomb and E. Hartley (eds) Readings in Social Psychology. New York: Holt, Rinehart & Winston.
Clifasefi, S.L., Bernstein, D.M., Mantonakis, A., and Loftus, E.F. (2013) ‘Queasy does it’: False alcohol beliefs and memories may lead to diminished alcohol preferences. Acta Psychologica, 143, 14–19.
Cohen, G. (1990) Memory. In I. Roth (ed.) Introduction to Psychology. Hove: Erlbaum, in association with the Open University.
Cohen, J. (1958) Humanistic Psychology. London: Allen & Unwin.
Cohen, N.J. and Squire, L.R. (1980) Preserved learning and retention of pattern-analysing skills in amnesia: Dissociation of knowing how from knowing that. Science, 210, 207–210.
Cole, M. (1990) Cultural psychology: A once and future discipline? In J.J. Berman (ed.) Nebraska Symposium on Motivation: Cross-Cultural Perspectives. Lincoln, NE: University of Nebraska Press.
Cole, M. and Scribner, S. (1974) Culture and Thought: A Psychological Introduction. New York: John Wiley.
Cole, M., Gay, J., Glick, J.A., and Sharp, D.W. (1971) The Cultural Context of Learning and Thinking: An Exploration in Experimental Anthropology. New York: Basic Books.
Colman, W. (2000) Models of the Self. In E. Christopher and H.M. Solomon (eds) Jungian Thought in the Modern World. London: Free Association Books.
Colvin, M.K. and Gazzaniga, M.S. (2007) Split-brain cases. In M. Velmans and S. Schneider (eds) The Blackwell Companion to Consciousness. Oxford: Blackwell Publishing.
Conrad, R. (1964) Acoustic confusion in immediate memory. British Journal of Psychology, 55, 75–84.
Constandi, M. (2013) The mind minders. New Scientist, 220(2938), 44–47.
Cooley, C.H. (1902) Human Nature and Social Order. New York: Schocken.
Cooper, M. (2017) Existential Therapies (2nd edition). London: Sage.
Cornwell, D. and Hobbs, S. (1976) The strange saga of Little Albert. New Society, March, 602–604.
Costa, P.T. and McCrae, R.R. (1992) Revised NEO Personality Inventory (NEO-PI-R). Odessa, FL: Psychological Assessment Resources.
Costello, T.W., Costello, J.T., and Holmes, D.A. (1995) Abnormal Psychology. London: HarperCollins.
Craik, F.I.M. and Lockhart, R. (1972) Levels of processing. Journal of Verbal Learning & Verbal Behaviour, 11, 671–684.

323

References

Craik, F.I.M. and Watkins, M.J. (1973) The role of rehearsal in short-term memory. Journal of Verbal Learning & Verbal Behaviour, 12, 599–607.
Crick, F. (1994) The Astonishing Hypothesis: The Scientific Search for the Soul. London: Simon & Schuster.
Cromby, J. (2004) Between constructionism and neuroscience: The societal co-construction of embodied subjectivity. Theory & Psychology, 14(6), 797–821.
Csikszentmihalyi, M. (1975) Beyond Boredom and Anxiety: Experiencing Flow in Work and Play. San Francisco, CA: Jossey-Bass.
Damasio, A.R. (2003) Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. Orlando, FL: Harcourt.
Danziger, K. (1985) The methodological imperative in psychology. Philosophy of the Social Sciences, 15, 1–13.
Danziger, K. (1990) Constructing the Subject: Historical Origins of Psychological Research. New York: Cambridge University Press.
Danziger, K. (1997) Naming the Mind: How Psychology Found its Language. London: Sage.
Darwin, C.R. (1859) The Origin of Species by Means of Natural Selection. London: John Murray.
Darwin, C.R. (1871) The Descent of Man and Selection in Relation to Sex. London: John Murray.
Darwin, C.R. (1872) The Expression of the Emotions in Man and Animals. Chicago, IL: University of Chicago Press.
Darwin, C.R. (1877) A biographical sketch of an infant. Mind, 2, 285–294.
Davison, G.C., Neale, J.M., and Kring, A.M. (2004) Abnormal Psychology (4th edition). New York: Wiley & Sons.
Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press.
DeGrazia, D. (2002) Animal Rights: A Very Short Introduction. Oxford: Oxford University Press.
Deese, J. (1972) Psychology as Science and Art. New York: Harcourt Brace Jovanovich.
Delgado, J.M.R. (1969) Physical Control of the Mind. New York: Harper & Row.
Digman, J.M. (1990) Personality structure: Emergence of the five-factor model. Annual Review of Psychology, 41, 417–440.
Dobbs, D. (2005) Fact or phrenology. Scientific American Mind, 16(1), 24–31.
Douglas, M. (1980) Evans-Pritchard. London: Fontana.
Draaisma, D. and de Rijcke, S. (2001) The graphic strategy: The uses and functions of illustrations in Wundt’s Grundzüge. History of the Human Sciences, 14, 1–24.
Dreyfus, H.L. (1987) Misrepresenting human intelligence. In R. Born (ed.) Artificial Intelligence: The Case Against. Beckenham: Croom Helm.
Eagly, A.H. (1987) Sex Differences in Social Behaviour: A Social-Role Interpretation. Hillsdale, NJ: Lawrence Erlbaum.
Ebbinghaus, H. (1885) On Memory. Leipzig: Duncker.
Edley, N. and Wetherell, M. (1995) Men in Perspective: Practice, Power and Identity. Hemel Hempstead: Harvester Wheatsheaf.
Edwards, D. (1997) Discourse and Cognition. London: Sage.
Edwards, D. and Middleton, D. (1987) Conversation and remembering: Bartlett revisited. Applied Cognitive Psychology, 1(2), 77–92.
Edwards, D. and Potter, J. (1992) Discursive Psychology. London: Sage.
Edwards, D. and Potter, J. (1995) Remembering. In R. Harré and P. Stearns (eds) Discursive Psychology in Practice. London: Sage.


Edwards, D., Middleton, D., and Potter, J. (1992) Towards a discursive psychology of remembering. The Psychologist, October, 441–455.
Eliot, L. (2012) The truth about boys and girls. Scientific American Mind, 21(2), 22–29.
Ellis, G. (2013) View from the top. New Scientist, 219(2930), 28–29.
Erikson, E.H. (1968) Identity: Youth and Crisis. New York: Norton.
Evans, C. (1987) Parapsychology: A history of research. In R.L. Gregory (ed.) The Oxford Companion to the Mind. Oxford: Oxford University Press.
Ewen, R.B. (1988) An Introduction to Theories of Personality (3rd edition). Hillsdale, NJ: Lawrence Erlbaum Associates.
Eysenck, H.J. (1953) The logical basis of factor analysis. In D.N. Jackson and S. Messick (eds) Problems in Human Assessment. New York: McGraw-Hill.
Eysenck, H.J. (1965) Fact and Fiction in Psychology. Harmondsworth: Penguin.
Eysenck, H.J. (1966) Personality and experimental psychology. Bulletin of the British Psychological Society, 19, 1–28.
Eysenck, H.J. (1976) The learning theory model of neurosis: A new approach. Behaviour Research & Therapy, 14, 251–267.
Eysenck, H.J. (1985) Decline and Fall of the Freudian Empire. Harmondsworth: Penguin.
Eysenck, H.J. and Rachman, S. (1965) The Causes and Cure of Neurosis. London: Routledge & Kegan Paul.
Fancher, R.E. (1979) Pioneers of Psychology. New York: Norton.
Fancher, R.E. (1988) Henry Goddard and the Kallikak family photographs: ‘Conscious skulduggery’ or ‘Whig history’? American Psychologist, 42, 585–590.
Fancher, R.E. and Rutherford, A. (2012) Pioneers of Psychology (4th edition). New York: W.W. Norton & Company, Inc.
Farah, M.J. and Aguirre, G.K. (1999) Imaging visual recognition: PET and fMRI studies of the functional anatomy of human visual recognition. Trends in Cognitive Sciences, 3, 179–186.
Fechner, G. (1966) Elements of Psychophysics, Vol. 1 (trans. H.E. Adler). New York: Holt, Rinehart & Winston. (Originally published 1860.)
Fernando, S. (1991) Mental Health, Race & Culture. London: Macmillan Press in association with MIND Publications.
Ferster, C.B. and Skinner, B.F. (1957) Schedules of Reinforcement. New York: Appleton-Century-Crofts.
Feyerabend, P.K. (1965) Problems of empiricism. In R. Colodny (ed.) Beyond the Edge of Certainty. Englewood Cliffs, NJ: Prentice-Hall.
Feyerabend, P.K. (1978) Science in a Free Society. London: NLB.
Fields, R.D. (2004) The other half of the brain. Scientific American, 290(4), 26–33.
Firestone, S. (1970) The Dialectic of Sex: The Case for Feminist Revolution. New York: Bantam.
Fiske, S. and Taylor, S.E. (1991) Social Cognition (2nd edition). New York: McGraw-Hill.
Flanagan, O. (1984) The Science of the Mind. Cambridge, MA: MIT Press.
Ford, K.M. and Hayes, P.J. (1998) On computational wings: Rethinking the goals of artificial intelligence. Scientific American Presents, 9(4), 78–83.
Fordham, M. (1987) Explorations into the Self. London: Academic Press.
Foucault, M. (1967) Madness and Civilization: A History of Insanity in the Age of Reason. London: Tavistock Publications. (Originally published 1961.)
Foucault, M. (1970) The Order of Things. New York: Vintage Books.
Foucault, M. (1977) Discipline and Punish: The Birth of the Prison. London: Allen Lane.
Frankl, V.E. (2004) Man’s Search for Meaning. London: Rider. (Originally published 1946.)


Frankland, A. and Cohen, L. (1999) Working with recovered memories. The Psychologist, 12(2), 82–83.
Fransella, F. (1970) And there was one. In D. Bannister (ed.) Perspectives in Personal Construct Theory. London: Academic Press.
Fransella, F. (1972) Personal Change and Reconstruction: Research on a Treatment of Stuttering. London: Academic Press.
Fransella, F. (1980) Man-as-scientist. In A.J. Chapman and D.M. Jones (eds) Models of Man. Leicester: British Psychological Society.
Fransella, F. (1981) Personal construct psychology and repertory grid technique. In F. Fransella (ed.) Personality: Theory, Measurement and Research. London: Methuen.
French, C. (2012) Peering into the future of peer review: A curious case from Parapsychology. Psychology Review, 18(2), 26–29.
Freud, A. (1936) The Ego and the Mechanisms of Defence. London: Chatto & Windus.
Freud, S. (1899) Screen Memories. Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. 3. London: Hogarth Press.
Freud, S. (1900/1976a) The Interpretation of Dreams. Pelican Freud Library (4). Harmondsworth: Penguin.
Freud, S. (1901/1976b) The Psychopathology of Everyday Life. Pelican Freud Library (5). Harmondsworth: Penguin.
Freud, S. (1905/1977a) Three Essays on the Theory of Sexuality. Pelican Freud Library (7). Harmondsworth: Penguin.
Freud, S. (1914a) On the History of the Psychoanalytic Movement. Pelican Freud Library (15). Harmondsworth: Penguin.
Freud, S. (1914b) Remembering, Repeating and Working Through. Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. 12. London: Hogarth Press.
Freud, S. (1915a/1984) The Unconscious. Pelican Freud Library (11). Harmondsworth: Penguin.
Freud, S. (1915b) Repression. Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. 14. London: Hogarth Press.
Freud, S. (1920/1984) Beyond the Pleasure Principle. Pelican Freud Library (11). Harmondsworth: Penguin.
Freud, S. (1923/1984) The Ego and the Id. Pelican Freud Library (11). Harmondsworth: Penguin.
Freud, S. (1930) Civilization and its Discontents. Pelican Freud Library (12). Harmondsworth: Penguin.
Freud, S. (1933) New Introductory Lectures on Psychoanalysis. Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. 22. London: Hogarth Press.
Freud, S. (1940) An Outline of Psychoanalysis. Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. 23. London: Hogarth Press.
Freud, S. and Breuer, J. (1895) Studies on Hysteria. Pelican Freud Library (3). Harmondsworth: Penguin.
Friedan, B. (1963) The Feminine Mystique. New York: Norton.
Frijda, N. and Jahoda, G. (1966) On the scope and methods of cross-cultural research. International Journal of Psychology, 1, 110–127.
Frith, C. and Cahill, C. (1995) Psychotic disorders: Schizophrenia, affective psychoses, and paranoia. In A.A. Lazarus and A.M. Colman (eds) Abnormal Psychology. London: Longman.
Frith, C. and Rees, G. (2007) A brief history of the scientific approach to the study of consciousness. In M. Velmans and S. Schneider (eds) The Blackwell Companion to Consciousness. Oxford: Blackwell Publishing.


Fromm, E. (1951) Psychoanalysis and Religion. London: Gollancz.
Funnell, M.G., Corballis, P.M., and Gazzaniga, M.S. (2000) Cortical and subcortical interhemispheric interactions following partial and complete callosotomy. Archives of Neurology, 57, 185–189.
Furumoto, L. (1989) The new history of psychology. In I.S. Cohen (ed.) The G. Stanley Hall Lecture Series, Vol. 9. Washington, DC: American Psychological Association.
Gabrieli, J.D.E. (1998) Cognitive neuroscience of human memory. Annual Review of Psychology, 49, 87–115.
Gadamer, H.-G. (1976) Philosophical Hermeneutics (trans. D.E. Linge). Berkeley, CA: University of California Press.
Gale, A. (1995) Ethical issues in psychological research. In A.M. Colman (ed.) Psychological Research Methods and Statistics. London: Longman.
Galton, F. (1869) Hereditary Genius: An Inquiry into its Laws and Consequences. London: Macmillan.
Galton, F. (1883) Inquiries into Human Faculty and its Development. London: Macmillan.
Gardner, H. (1985) The Mind’s New Science: A History of the Cognitive Revolution. New York: Basic Books.
Gardner, H. (1993) Multiple Intelligences: The Theory in Practice. New York: Basic Books.
Garnham, A. (1988) Artificial Intelligence: An Introduction. London: Routledge & Kegan Paul.
Garrett, R. (1996) Skinner’s case for radical behaviourism. In W. O’Donohue and R.F. Kitchener (eds) The Philosophy of Psychology. London: Sage.
Gay, P. (1988) Freud: A Life for Our Time. London: J.M. Dent & Sons.
Gelman, S.A. (2003) The Essential Child: Origins of Essentialism in Everyday Thought. New York: Oxford University Press.
Gergen, K.J. (1973) Social psychology as history. Journal of Personality & Social Psychology, 26, 309–320.
Gergen, K.J. (1985) The social constructionist movement in modern psychology. American Psychologist, 40, 266–275.
Gergen, K.J. (1995) Culture and self in postmodern Psychology: Dialogue in trouble? (Interview with K.J. Gergen, Part 1). Culture & Psychology, 1(1), 147–159.
Gergen, K.J. (1996) Postmodern culture and the revisioning of alienation. In F. Geyer (ed.) Alienation in Contemporary Culture. New York: Greenwood Press.
Gergen, K.J. (1997) The place of the psyche in a constructed world. Theory & Psychology, 7(6), 723–746.
Gergen, K.J. (2001) Psychological science in a postmodern context. American Psychologist, 56, 803–813.
Gergen, K.J. (2010) The acculturated brain. Theory & Psychology, 20(6), 1–20.
Gergen, K.J., Gulerce, A., Lock, A., and Misra, G. (1996) Psychological science in cultural context. American Psychologist, 51, 496–503.
Gergen, M.M. and Gergen, K.J. (2012) Playing with Purpose: Adventures in Performative Social Science. Walnut Creek, CA: Left Coast Press.
Gilligan, C. (1993) Letter to readers (preface). In a Different Voice (2nd edition). Cambridge, MA: Harvard University Press.
Giorgi, A. and Giorgi, B. (2003) Phenomenology. In J.A. Smith (ed.) Qualitative Psychology: A Practical Guide to Research Methods. London: Sage.
Glanzer, M. and Cunitz, A.R. (1966) Two storage mechanisms in free recall. Journal of Verbal Learning & Verbal Behaviour, 5, 928–935.
Glassman, W.E. and Hadad, M. (2013) Approaches to Psychology (6th edition). Maidenhead: McGraw-Hill Higher Education.


Glick, J. (1975) Cognitive development in cross-cultural perspective. In F. Horowitz (ed.) Review of Child Development Research, Vol. 4. Chicago, IL: University of Chicago Press.
Goffman, E. (1968) Asylums: Essays on the Social Situation of Mental Patients and Other Inmates. Harmondsworth: Penguin.
Goldberg, A. (2015) The Brain, the Mind and the Self: A Psychoanalytic Road Map. Hove: Routledge.
Goldberg, L.R. (1993) The structure of phenotypic personality traits. American Psychologist, 48, 26–34.
Goldberg, S. (2000) Attachment and Development. London: Arnold.
Gough, B. and McFadden, M. (2001) Critical Social Psychology: An Introduction. Basingstoke: Palgrave.
Gould, S.J. (1981) The Mismeasure of Man. New York: Norton.
Gould, S.J. (1987) An Urchin in the Storm. Harmondsworth: Penguin.
Gould, S.J. (1996) The Mismeasure of Man (revised and expanded edition). Harmondsworth: Penguin.
Gould, S.J. (2002) The Structure of Evolutionary Theory. Cambridge, MA: Harvard University Press.
Graham, H. (1986) The Human Face of Psychology: Humanistic Psychology in Historical, Social and Cultural Context. Milton Keynes: Open University Press.
Gray, J.A. (1991) On the morality of speciesism. The Psychologist, 4(5), 196–198.
Greenberg, J., Koole, S.L., and Pyszczynski, T. (eds) (2004) Handbook of Experimental Existential Psychology. New York: The Guilford Press.
Greenfield, S. (2000) Brain Story: Unlocking our Inner World of Emotions, Memories, Ideas and Desires. London: BBC Books.
Gregory, R.L. (1973) Eye and Brain (2nd edition). New York: World University Library.
Gregory, R.L. (1981) Mind in Science. Hove: Lawrence Erlbaum.
Gregory, R.L. (1987) In defence of artificial intelligence: A reply to John Searle. In C. Blakemore and S. Greenfield (eds) Mindwaves. Oxford: Blackwell.
Gregory, R.L. and Wallace, J. (1963) Recovery from Early Blindness. Cambridge: Heffer.
Gross, R. (1987) Psychology: The Science of Mind and Behaviour. London: Edward Arnold.
Gross, R. (1999) Key Studies in Psychology (3rd edition). London: Hodder Education.
Gross, R. (2008) Key Studies in Psychology (5th edition). London: Hodder Education.
Gross, R. (2012) Being Human: Psychological and Philosophical Perspectives. London: Routledge.
Gross, R. (2014) Themes, Issues and Debates in Psychology (4th edition). London: Hodder Education.
Gross, R. (2015) Psychology: The Science of Mind and Behaviour (7th edition). London: Hodder Education.
Gross, R., Humphreys, P., and Petkova, B. (1997) Challenges in Psychology. London: Hodder & Stoughton.
Guilford, J.P. (1959) Three faces of intellect. American Psychologist, 14, 469–479.
Hacking, I. (1994) The looping effects of human kinds. In D. Sperber, D. Premack, and A.J. Premack (eds) Causal Cognition: A Multidisciplinary Approach. Oxford: Clarendon Press.
Haggard, P. and Eimer, M. (1999) On the relation between brain potentials and awareness of voluntary movements. Experimental Brain Research, 126, 128–133.
Hall, C.S. and Nordby, V.J. (1973) A Primer of Jungian Psychology. New York: Mentor.
Hamilton, W.D. (1964) The genetical evolution of social behaviour, I and II. Journal of Theoretical Biology, 7, 1–16, 17–52.


Harré, R. (1985) The language game of self ascription: A note. In K.J. Gergen and K.E. Davis (eds) The Social Construction of the Person. New York: Springer.
Harré, R. (1989) Language games and the texts of identity. In J. Shotter and K.J. Gergen (eds) Texts of Identity. London: Sage.
Harré, R. (1993) Rules, roles and rhetoric. The Psychologist, 6(1), 24–28.
Harré, R. (1995a) Discursive psychology. In J.A. Smith, R. Harré, and L. Van Langenhove (eds) Rethinking Psychology. London: Sage.
Harré, R. (1995b) The necessity of personhood as embodied being. Theory & Psychology, 5(3), 369–373.
Harré, R. (1999) Discourse and the embodied person. In D.J. Nightingale and J. Cromby (eds) Social Constructionist Psychology: A Critical Analysis of Theory and Practice. Buckingham: Open University Press.
Harré, R. (2006) Key Thinkers in Psychology. London: Sage.
Harré, R. (2012) Positioning theory: Moral dimensions of social-cultural psychology. In J. Valsiner (ed.) Oxford Handbook of Culture and Psychology. New York: Oxford University Press.
Harré, R. and Secord, P.F. (1972) The Explanation of Social Behaviour. Oxford: Blackwell.
Harré, R., Clarke, D., and De Carlo, N. (1985) Motives and Mechanisms: An Introduction to the Psychology of Action. London: Methuen.
Harris, B. (1997) Repoliticizing the history of psychology. In D. Fox and I. Prilleltensky (eds) Critical Psychology: An Introduction. London: Sage.
Harris, B. (2009) What Critical Psychologists should know about the history of Psychology. In D. Fox, I. Prilleltensky, and S. Austin (eds) Critical Psychology: An Introduction (2nd edition). London: Sage.
Heather, N. (1976) Radical Perspectives in Psychology. London: Methuen.
Hebb, D.O. (1949) The Organization of Behaviour. New York: Wiley.
Heidegger, M. (1962) Being and Time. London: SCM Press. (Originally published 1927.)
Heider, F. (1958) The Psychology of Interpersonal Relations. New York: Wiley.
Helmholtz, H. von (1866) Concerning the perceptions in general. In Treatise on Physiological Optics, Vol. 3 (3rd edition) (trans. J.P.C. Southall, Opt. Soc. Am., Sect. 26). (Reprinted New York: Dover, 1962.)
Heneghan, L. (2012) 10 things wrong with environmental thinking: Konrad Lorenz and Nazism. http://10thingswrongwithenvironmentalthought.blogspot.co.uk/2012/08/konradlorenz-and-nazism.html.
Herrnstein, R.J. (1971) IQ. Atlantic Monthly, September, 43–64.
Herrnstein, R.J. and Murray, C. (1994) The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press.
Herskovits, M.J. (1948) Man and His Works: The Science of Cultural Anthropology. New York: Alfred A. Knopf.
Hewett, C.J.M. (2008) Progress of the Human Mind: From Enlightenment to Postmodernism. Workshop, September. www.thegreatdebate.org.uk/Comte1Print.html.
Hilliard, A.G. (1995) The nonscience and nonsense of the bell curve. Focus: Notes from the Society for the Psychological Study of Ethnic Minority Issues, 10–12.
Hilliard, R.B. (1993) Single-case methodology in psychotherapy process and outcome research. Journal of Consulting & Clinical Psychology, 61(3), 373–380.
Hinde, R.A. (1982) Ethology. London: Fontana.
Hippocrates (1931) The Sacred Disease (trans. W. Jones). London: Heinemann.
Hobbes, T. (1962) Leviathan. London: Collins. (Originally published 1651.)
Holt, N.J., Simmonds-Moore, C., Luke, D., and French, C.C. (2012) Anomalistic Psychology. Basingstoke: Palgrave Macmillan.


Holt, R.R. (1967) Individuality and generalization in the psychology of personality. In R.S. Lazarus and E.M. Opton (eds) Personality. Harmondsworth: Penguin.
Holzkamp, K. (1992) On doing psychology critically (trans. C.W. Tolman). Theory & Psychology, 2(2), 193–204.
Hopper, K., Harrison, G., Janca, A., and Sartorius, N. (2007) Recovery from Schizophrenia: An International Perspective. Results from the WHO-coordinated International Study of Schizophrenia. Oxford: Oxford University Press.
Horney, K. (1924) On the genesis of the castration complex in women. International Journal of Psychoanalysis, 5, 50–65.
Horwitz, A.V. (2002) Creating Mental Illness. Chicago, IL: University of Chicago Press.
Howe, M. (1997) IQ in Question: The Truth about Intelligence. London: Sage.
Hudson, L. (1968) Frames of Mind: Ability, Perception and Self-Perception in the Arts and Sciences. London: Methuen.
Hugdahl, K. and Ohman, A. (1977) Effects of instruction on acquisition of electrodermal response to fear relevant stimuli. Journal of Experimental Psychology, 3, 608–618.
Humphrey, N. (1986) The Inner Eye. London: Faber & Faber.
Humphrey, N. (1993) Introduction. In N. Humphrey, The Inner Eye (new edition). London: Faber & Faber.
Humphreys, P. (1997) Memory. In R. Gross, P. Humphreys, and B. Petkova, Challenges in Psychology. London: Hodder & Stoughton.
Hurlburt, R.T. and Knapp, T.J. (2006) Munsterberg in 1898, not Allport in 1937, introduced the terms ‘idiographic’ and ‘nomothetic’ to American Psychology. Theory & Psychology, 16(2), 287–293.
Husserl, E. (1925; trans. 1977) Phenomenological Psychology. The Hague: Martinus Nijhoff.
Husserl, E. (1931; trans. 1960) Cartesian Meditations: An Introduction to Phenomenology. The Hague: Martinus Nijhoff.
Husserl, E. (1936; trans. 1970) The Crisis of European Sciences and Transcendental Phenomenology. Evanston, IL: Northwestern University Press.
Hwang, K.K. (2005) The indigenous movement. The Psychologist, 18(2), 80–83.
Ingram, D. (1985) Hermeneutics and truth. In R. Hollinger (ed.) Hermeneutics and Praxis. Notre Dame, IN: University of Notre Dame Press.
Insel, T.R. (2009) Translating scientific opportunity into public health impact: A strategic plan for research on mental illness. Archives of General Psychiatry, 66, 128–133.
Jacobs, M. (1992) Freud. London: Sage.
Jahoda, G. (1978) Cross-cultural perspectives. In H. Tajfel and C. Fraser (eds) Introducing Social Psychology. Harmondsworth: Penguin.
James, W. (1890) The Principles of Psychology. London: Macmillan.
James, W. (1917) The dilemma of determinism. In The Will to Believe and Other Essays in Popular Philosophy. London: Longmans.
Jaspers, K. (1963) General Psychopathology, Vol. 1 (trans. J. Hoenig and M.W. Hamilton). Baltimore, MD: Johns Hopkins University Press. (Originally published 1913.)
Jaynes, J. (1976) The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston, MA: Houghton Mifflin Co.
Jenkins, J.H. and Barrett, R.J. (eds) (2004) Schizophrenia, Culture and Subjectivity: The Edge of Experience. Cambridge: Cambridge University Press.
Jensen, A.R. (1969) How much can we boost IQ and scholastic achievement? Harvard Educational Review, 39, 1–123.
Jensen, A.R. (1980) Bias in Mental Testing. London: Methuen.
Jones, D. (2008) Running to catch the sun. The Psychologist, 21(7), 580–583.


Jones, D. and Elcock, J. (2001) History and Theories of Psychology: A Critical Perspective. London: Arnold.
Jones, M.C. (1924) The elimination of children’s fears. Journal of Experimental Psychology, 7, 382–390.
Joseph, S. and Linley, P.A. (2006) Positive psychology versus the medical model? American Psychologist, May–June, 332–333.
Joynson, R.B. (1974) Psychology and Common Sense. London: Routledge & Kegan Paul.
Joynson, R.B. (1980) Models of man: 1879–1979. In A.J. Chapman and D.M. Jones (eds) Models of Man. Leicester: British Psychological Society.
Kahneman, D. (2013) Thinking, Fast and Slow (reprint edition). New York: Farrar, Straus & Giroux.
Kakar, S. (1982) Shamans, Mystics and Doctors. London: Unwin.
Kamin, L.J. (1974) The Science and Politics of IQ. Potomac, MD: Lawrence Erlbaum Associates.
Karmiloff-Smith, A. (1996) The connectionist infant: Would Piaget turn in his grave? Society for Research in Child Development Newsletter, Fall, 1–2, 10.
Kay, H. (1972) Psychology today and tomorrow. Bulletin of the British Psychological Society, 25, 177–188.
Kelly, G.A. (1955) A Theory of Personality: The Psychology of Personal Constructs. New York: Norton.
Kelly, G.A. (1962) Europe’s matrix of decision. In M.R. Jones (ed.) Nebraska Symposium on Motivation. Lincoln, NE: University of Nebraska Press.
Kelly, L. (1988) Surviving Sexual Violence. Cambridge: Polity Press.
Kierkegaard, S. (1944) The Concept of Dread (trans. W. Lowrie). Princeton, NJ: Princeton University Press. (Originally published 1844.)
Kihlstrom, J.F. (1987) The cognitive unconscious. Science, 237(4821), 1445–1452.
Kirmayer, L.J. (2001) Cultural variations in the clinical presentation of depression and anxiety: Implications for diagnosis and treatment. Journal of Clinical Psychiatry, 62(13), 22–30.
Kirsner, D. (2015) Laing’s The Divided Self and The Politics of Experience. In M. Guy Thompson (ed.) The Legacy of R.D. Laing: An Appraisal of His Contemporary Relevance. London: Routledge.
Kitanaka, J. (2011) Depression in Japan: Psychiatric Cures for a Society in Distress. Princeton, NJ: Princeton University Press.
Kitzinger, C. and Frith, H. (1999) Just say no? The use of conversation analysis in developing a feminist perspective on sexual refusal. Discourse & Society, 10, 293–317.
Kleinman, A. (1977) Depression, somatisation and the ‘new cross-cultural psychiatry’. Social Science & Medicine, 11, 3–10.
Kleinman, A. (1987) Anthropology and psychiatry: The role of culture in cross-cultural research on illness. British Journal of Psychiatry, 151, 447–454.
Kleinman, A. (2000) Social and cultural anthropology: Salience for psychiatry. In M.G. Gelder, J.J. Lopez-Ibor, and N.C. Andreasen (eds) New Oxford Textbook of Psychiatry. Oxford: Oxford University Press.
Kline, P. (1984) Personality and Freudian Theory. London: Methuen.
Kline, P. (1988) Psychology Exposed. London: Routledge.
Kline, P. (1989) Objective tests of Freud’s theories. In A.M. Colman and J.G. Beaumont (eds) Psychology Survey No. 7. Leicester: British Psychological Society.
Kluckhohn, C. and Murray, H.A. (1953) Personality formation: The determinants. In C. Kluckhohn, H.A. Murray, and D.M. Schneider (eds) Personality in Nature, Society and Culture (2nd edition). New York: Knopf.


Knox, R. (1850) The Races of Men: A Fragment. London: Renshaw.
Koedt, A. (1974) The myth of the vaginal orgasm. In The Radical Therapist. Harmondsworth: Penguin.
Koestler, A. (1967) The Ghost in the Machine. London: Pan.
Koffka, K. (1935) Principles of Gestalt Psychology. London: Routledge & Kegan Paul.
Köhler, W. (1969) The Task of Gestalt Psychology. Princeton, NJ: Princeton University Press.
Kosslyn, S.M., Gazzaniga, M.S., Galaburda, A.M., and Rabin, C. (1999) Hemispheric specialization. In M.J. Zigmond, F.E. Bloom, S.C. Landis, J.L. Roberts, and L.R. Squire (eds) Fundamental Neuroscience. San Diego, CA: Academic Press.
Kotowicz, Z. (1997) R.D. Laing and the Paths of Anti-Psychiatry. London: Routledge.
Kramer, H. and Sprenger, J. (1941) Malleus Maleficarum (trans. M. Summers). London: Pushkin. (Originally published 1486.)
Kraepelin, E. (1913) Dementia Praecox. In Psychiatrie (8th edition) (trans. R. Barclay). Melbourne, FL: Krieger.
Krahé, B. (1992) Personality and Social Psychology: Towards a Synthesis. London: Sage.
Kuhn, T.S. (1962) The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.
Kuhn, T.S. (1970) The Structure of Scientific Revolutions (2nd edition). Chicago, IL: University of Chicago Press.
Kupfer, D.J., First, M.B., and Regier, D.A. (2002) Introduction. In D.J. Kupfer, M.B. First, and D.A. Regier (eds) A Research Agenda for DSM-V. Washington, DC: American Psychiatric Association.
Kvale, S. (ed.) (1992) Psychology and Postmodernism. London: Sage.
Lachman, R., Lachman, J.L., and Butterfield, E.C. (1979) Cognitive Psychology and Information Processing. Hillsdale, NJ: Lawrence Erlbaum Associates.
Laing, R.D. (1961) Self and Others. Harmondsworth: Penguin.
Laing, R.D. (1965) The Divided Self: An Existential Study in Sanity and Madness. Harmondsworth: Penguin.
Laing, R.D. (1967) The Politics of Experience and the Bird of Paradise. Harmondsworth: Penguin.
Laing, R.D. and Esterson, A. (1964) Sanity, Madness and the Family. Harmondsworth: Penguin.
Lakatos, I. (1970) Falsification and the methodology of scientific research programmes. In I. Lakatos and A. Musgrave (eds) Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press.
Laney, C., Bowman Fowler, N., Nelson, K.J., Bernstein, D.M., and Loftus, E.F. (2008) The persistence of false beliefs. Acta Psychologica, 129(1), 190–197.
Lattal, K.A. and Rutherford, A. (2013) John B. Watson’s Behaviourist Manifesto at 100. Mexican Journal of Behavioural Analysis, 39, 1–9.
Lea, S.E.G. (1984) Instinct, Environment and Behaviour. London: Methuen.
Leahey, T.H. (2000) A History of Psychology: Main Currents in Psychological Thought (4th edition). Englewood Cliffs, NJ: Prentice-Hall.
Leary, D.E. (1990) The psychologist’s dilemma: To subject the self to science – or science to the self? Theoretical & Philosophical Psychology, 10(2), 66–72.
Leary, M.R. (2004) The self we know and the self we show: Self-esteem, self-presentation, and the maintenance of interpersonal relationships. In M.B. Brewer and M. Hewstone (eds) Emotion and Motivation. Oxford: Blackwell Publishing.
Lee, A. (2012) The person in psychological science. The Psychologist, 25(4), 292–293.


LeFrancois, G.R. (1983) Psychology. Belmont, CA: Wadsworth Publishing Co.
Legge, D. (1975) Introduction to Psychological Science. London: Methuen.
Leonard, P. (1984) Personality and Ideology. Basingstoke: Macmillan.
Lerner, G. (1979) The Majority Finds its Past: Placing Women in History. New York: Oxford University Press.
Leslie, J.C. (2002) Essential Behaviour Analysis. London: Arnold.
Libet, B. (1985) Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioural & Brain Sciences, 8, 529–539.
Libet, B. (1999) Do we have free will? Journal of Consciousness Studies, 6, 47–57.
Libet, B., Gleason, C.A., Wright, E.W., and Pearl, D.K. (1983) Time of conscious intention to act in relation to onset of cerebral activity (readiness potential): The unconscious initiation of a freely voluntary act. Brain, 106, 623–642.
Lindsay, G. (1995) Values, ethics, and psychology (1995 presidential address). The Psychologist, 8(11), 493–498.
Linley, P.A. (2008) Positive psychology (history). In S.J. Lopez (ed.) The Encyclopaedia of Positive Psychology. Oxford: Blackwell.
Linley, P.A., Joseph, S., Harrington, S., and Wood, A.M. (2006) Positive psychology: Past, present, and (possible) future. Journal of Positive Psychology, 1(1), 3–16.
Littlewood, R. and Lipsedge, M. (1989) Aliens and Alienists: Ethnic Minorities and Psychiatry (2nd edition). London: Routledge.
Locke, J. (1690) An Essay Concerning Human Understanding (ed. P.H. Nidditch). Oxford: Clarendon Press.
Loeb, J. (1901) Comparative Physiology of the Brain and Comparative Psychology. London: John Murray.
Loftus, E. (1997) Creating false memories. Scientific American, September, 50–55.
Lopez, S.R. and Guarnaccia, P.J. (2000) Cultural anthropology: Uncovering the social world of mental illness. Annual Review of Psychology, 51, 571–598.
Lorenz, K.Z. (1935) The companion in the bird’s world. Auk, 54, 245–273.
Lorenz, K.Z. (1966) On Aggression. London: Methuen.
Luria, A.R. (1987) Reductionism. In R.L. Gregory (ed.) The Oxford Companion to the Mind. Oxford: Oxford University Press.
McAdams, D.P. (2012) Meaning and personality. In P.T.P. Wong (ed.) The Human Quest for Meaning: Theories, Research and Applications (2nd edition). New York: Routledge.
McAllister, M. (1997) Putting psychology in context (1997 presidential address). The Psychologist, 11(1), 13–15.
McClelland, J.L., Rumelhart, D.E., and the PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2: Psychological and Biological Models. Cambridge, MA: MIT Press.
Maccoby, E.E. (1990) Gender and relationships: A developmental account. American Psychologist, 45, 513–520.
McCrae, R.R. and Costa, P.T. (1989) More reasons to adopt the five-factor model. American Psychologist, 44, 451–452.
McDougall, W. (1908) An Introduction to Social Psychology. London: Methuen.
McGhee, P. (2001) Thinking Psychologically. Basingstoke: Palgrave.
McGinn, C. (1987) Could a machine be conscious? In C. Blakemore and S. Greenfield (eds) Mindwaves. Oxford: Blackwell.
McGregor, I. (2006) Offensive defensiveness. Psychological Inquiry, 17, 299–308.
McKie, R. (2008) Ban on primate experiments would be devastating, scientists warn. Observer, 7 October, 14.
References

Mackintosh, N.J. (1978) Cognitive or associative theories of conditioning: Implications of an analysis of blocking. In S.H. Hulse, H. Fowler, and W.K. Honig (eds) Cognitive Processes in Animal Behaviour. Hillsdale, NJ: Lawrence Erlbaum.
McNeil, J.E. and Warrington, E.K. (1993) Prosopagnosia: A face-specific disorder. Quarterly Journal of Experimental Psychology, 46A, 1–10.
Macquarrie, J. (1972) Existentialism. Harmondsworth: Penguin.
Maddux, J.E., Gosselin, J.T., and Winstead, B.A. (2012) Conceptions of psychopathology: A social constructivist perspective. In J.E. Maddux, J.T. Gosselin, and B.A. Winstead (eds) Psychopathology: Foundations for a Contemporary Understanding (3rd edition). New York: Routledge.
Magnusson, E. and Marecek, J. (2012) Gender and Culture in Psychology: Theories and Practices. Cambridge: Cambridge University Press.
Maher, B.A. (1966) Principles of Psychopathology: An Experimental Approach. New York: McGraw-Hill.
Malamuth, N.M. (1981) Rape proclivity among males. Journal of Social Issues, 37, 138–157.
Malik, K. (2006) What science can and cannot tell us about human nature. In R. Headlam Wells and J. McFadden (eds) Human Nature: Fact and Fiction. London: Continuum.
Malone, J.C. (1982) The second offspring of General Process Learning Theory: Overt behaviour as the ambassador of the mind. Journal of the Experimental Analysis of Behaviour, 38, 205–209.
Marx, M.H. and Hillix, W.A. (1963) Systems and Theories in Psychology. New York: McGraw-Hill.
Maslow, A.H. (1954) Motivation and Personality. New York: Harper and Row.
Maslow, A.H. (1965) Eupsychian Management. New York: Irwin Dorsey.
Maslow, A.H. (1968) Toward a Psychology of Being (2nd edition). New York: Van Nostrand Reinhold.
Maslow, A.H. (1969) The Psychology of Science: A Reconnaissance. New York: Henry Regnery (Gateway Edition).
Maslow, A.H. (1970) Motivation and Personality (2nd edition). New York: Harper and Row.
Maslow, A.H. (1987) Motivation and Personality (3rd edition). New York: Harper and Row.
Mead, G.H. (1934) Mind, Self and Society. Chicago, IL: University of Chicago Press.
Middleton, D. and Crook, C. (1996) Bartlett and socially ordered consciousness: A discursive perspective. Culture & Psychology, 2(4), 379–396.
Midgley, M. (1995) Beast and Man: The Roots of Human Nature (revised edition). London: Routledge.
Midgley, M. (2014) Are You an Illusion? London: Routledge.
Miller, G.A. (1969) Psychology as a means of promoting human welfare. American Psychologist, 24, 1063–1075.
Miller, J. (1997) Theoretical issues in cultural psychology. In J.W. Berry, Y.H. Poortinga, and J. Pandey (eds) Handbook of Cross-Cultural Psychology, Vol. 1, Theory and Method. Boston, MA: Allyn & Bacon.
Millett, K. (1969) Sexual Politics. London: Rupert Hart-Davis.
Mills, J.A. (1998) Control: A History of Behavioural Psychology. New York: New York University Press.
Milner, B., Corkin, S., and Teuber, H.L. (1968) Further analysis of the hippocampal amnesic syndrome: 14-year follow-up study of H.M. Neuropsychologia, 6, 215–234.
Milton, J. (2005) Methodology. In J. Henry (ed.) Parapsychology: Research on Exceptional Experience. London: Routledge.
Misra, G. and Gergen, K.J. (1993) On the place of culture in psychological science. International Journal of Psychology, 28, 225–253.
Mistry, J. and Rogoff, B. (1994) Remembering in cultural context. In W.J. Lonner and R.S. Malpass (eds) Psychology and Culture. Boston, MA: Allyn & Bacon.
Mitchell, J. (1974) Psychoanalysis and Feminism: A Radical Reassessment of Freudian Psychoanalysis. London: Penguin.
Modha, D. (2014) A computer that thinks. New Scientist, 224(2994), 28–29.
Moghaddam, F. (1987) Psychology in the three worlds: As reflected by the crisis in social psychology and the move towards indigenous third world psychology. American Psychologist, 42, 912–920.
Moghaddam, F. (2005) Great Ideas in Psychology: A Cultural and Historical Introduction. Oxford: Oneworld Publications.
Moghaddam, F. and Harré, R. (2010) Words, conflicts and political processes. In F. Moghaddam and R. Harré (eds) Words of Conflict, Words of War: How the Language We Use in Political Processes Sparks Fighting. Santa Barbara, CA: Praeger.
Moghaddam, F. and Studer, C. (1997) Cross-cultural psychology: The frustrated gadfly’s promises, potentialities and failures. In D. Fox and I. Prilleltensky (eds) Critical Psychology: An Introduction. London: Sage.
Moghaddam, F., Taylor, D.M., and Wright, S.C. (1993) Social Psychology in Cross-Cultural Perspective. New York: W.H. Freeman & Co.
Mollon, P. (2000) Freud and False Memory Syndrome. Cambridge: Icon Books.
Morawski, J. (1982) Assessing psychology’s moral heritage through our neglected utopias. American Psychologist, 37, 1082–1095.
Morea, P. (1990) Personality: An Introduction to the Theories of Psychology. Harmondsworth: Penguin.
Moscovici, S. (1985) Social influence and conformity. In G. Lindzey and E. Aronson (eds) Handbook of Social Psychology (3rd edition). New York: Random House.
Mousseau, M.-C. (2003) Parapsychology: Science or pseudo-science? Journal of Scientific Exploration, 17, 271–282.
Moyer, M.W. (2013a) Glia spark seizures. Scientific American Mind, 24(2), 16.
Moyer, M.W. (2013b) Without glia, the brain would starve. Scientific American Mind, 24(2), 17.
Much, N. (1995) Cultural psychology. In J.A. Smith, R. Harré, and L. Van Langenhove (eds) Rethinking Psychology. London: Sage.
Muehlenhard, C.L. and Kimes, L.A. (1999) The social construction of violence: The case of sexual and domestic violence. Personality & Social Psychology Review, 3, 234–245.
Murdock, B.B. (1962) The serial position effect in free recall. Journal of Experimental Psychology, 64, 482–488.
Murphy, J., John, M., and Diener, E. (1984) Dialogues and Debates in Social Psychology. London: Lawrence Erlbaum/Open University.
Murray, D.J. (1995) Gestalt Psychology and the Cognitive Revolution. Hemel Hempstead: Harvester Wheatsheaf.
Murray, E.J. and Foote, F. (1979) The origins of fear of snakes. Behaviour Research & Therapy, 17, 489–493.
Nagel, T. (1974) What is it like to be a bat? Philosophical Review, 83, 435–450.
Nagel, T. (1994) Consciousness and objective reality. In R. Warner and T. Szubka (eds) The Mind–Body Problem. Oxford: Blackwell.
Naughton, J. (2012) Thomas Kuhn: The man who changed the way the world looked at science. Guardian, 19 August. www.theguardian.com/science/2012/aug/19/thomas-kuhn-structure-scientific-revolutions.
Neimeyer, R.A. and Raskin, J.D. (2000) On practising postmodern therapy in modern times. In R.A. Neimeyer and J.D. Raskin (eds) Constructions of Disorder: Meaning-making Frameworks for Psychotherapy. Washington, DC: American Psychological Association.
Neisser, U. (1967) Cognitive Psychology. New York: Appleton-Century-Crofts.
Neisser, U. (1976) Cognition and Reality: Principles and Implications of Cognitive Psychology. San Francisco, CA: W.H. Freeman.
Neisser, U. (1979) The concept of intelligence. In R.J. Sternberg and D.K. Detterman (eds) Human Intelligence: Perspectives on its Theory and Measurement. Norwood, NJ: Ablex.
Neisser, U. (1981) John Dean’s memory: A case study. Cognition, 9, 1–22.
Neisser, U. (1982) Memory Observed: Remembering in Natural Contexts. San Francisco, CA: W.H. Freeman.
Newell, A. and Simon, H.A. (1972) Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.
Nicolson, P. (1995) Feminism and psychology. In J.A. Smith, R. Harré, and L. Van Langenhove (eds) Rethinking Psychology. London: Sage.
Nisbett, R.E. and Wilson, T.D. (1977) Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259.
Northoff, G. (2012) Psychoanalysis and the brain: Why did Freud abandon neuroscience? Frontiers in Psychology, 3, 1–11.
Nye, D. (2000) Three Psychologies: Perspectives from Freud, Skinner and Rogers (6th edition). Belmont, CA: Wadsworth/Thomson Learning.
O’Donohue, W. and Ferguson, K.E. (2001) The Psychology of B.F. Skinner. Thousand Oaks, CA: Sage Publications.
O’Shea, M. (2013) The human brain. New Scientist Instant Expert 31, i–viii; 6 April.
Okasha, S. (2002) Philosophy of Science: A Very Short Introduction. Oxford: Oxford University Press.
Orne, M.T. (1962) On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783.
Ornstein, R.E. (1976) The Mind Field. Oxford: Pergamon.
Ornstein, R.E. (1986) The Psychology of Consciousness (revised 2nd edition). Harmondsworth: Penguin.
Paivio, A. (1971) Imagery and Verbal Processes. New York: Holt, Rinehart & Winston.
Palermo, D.S. (1971) Is a scientific revolution taking place in psychology? Psychological Review, 76, 241–263.
Paludi, M.A. (1992) The Psychology of Women. Dubuque, IA: William C. Brown.
Parker, I., Georgaca, E., Harper, D., McLaughlin, T., and Stowell-Smith, M. (1995) Deconstructing Psychopathology. London: Sage.
Parkin, A.J. (2000) Essential Cognitive Psychology. Hove: Psychology Press.
Pavlov, I. (1927) Conditioned Reflexes. Oxford: Oxford University Press.
Peck, D. and Whitlow, D. (1975) Approaches to Personality Theory. London: Methuen.
Penfield, W. (1975) The Mystery of the Mind. Princeton, NJ: Princeton University Press.
Penrose, R. (1987) Minds, machines and mathematics. In C. Blakemore and S. Greenfield (eds) Mindwaves. Oxford: Blackwell.
Peters, R.S. (1960) The Concept of Motivation. London: Routledge & Kegan Paul.
Peterson, L.R. and Peterson, M.J. (1959) Short-term retention of individual items. Journal of Experimental Psychology, 58, 193–198.
Phillips, H. (2004) The cell that makes us human. New Scientist, 182(2452), 32–35.
Pike, K.L. (1954) Emic and etic standpoints for the description of behaviour. In K.L. Pike (ed.) Language in Relation to a Unified Theory of the Structure of Human Behaviour, Pt. 1. Glendale, CA: Summer Institute of Linguistics.
Pinker, S. (1997) How the Mind Works. New York: Norton.
Popper, K. (1959) The Logic of Scientific Discovery. London: Hutchinson.
Popper, K. (1972) Objective Knowledge: An Evolutionary Approach. Oxford: Oxford University Press.
Potter, J. (1996) Attitudes, social representations and discursive psychology. In M. Wetherell (ed.) Identities, Groups and Social Issues. London: Sage, in association with the Open University.
Potter, J. and Wetherell, M. (1987) Discourse and Social Psychology: Beyond Attitudes and Behaviour. London: Sage.
Prilleltensky, I. and Fox, D. (1997) Introducing critical psychology: Values, assumptions, and the status quo. In D. Fox and I. Prilleltensky (eds) Critical Psychology: An Introduction. London: Sage.
Prince, J. and Hartnett, O. (1993) From ‘psychology constructs the female’ to ‘females construct psychology’. Feminism & Psychology, 3(2), 219–224.
Prior, H., Schwartz, A., and Gunturkun, O. (2008) Mirror-induced behaviour in the magpie (Pica pica): Evidence of self-recognition. PLoS Biology, 6(8).
Puccetti, R. (1981) The case for mental duality: Evidence from split-brain data and other considerations. Behavioural & Brain Sciences, 4, 93–123.
Pyszczynski, T., Greenberg, J., and Koole, S.L. (2004) Experimental existential psychology: Exploring the human confrontation with reality. In J. Greenberg, S.L. Koole, and T. Pyszczynski (eds) Handbook of Experimental Existential Psychology. New York: The Guilford Press.
Rachman, S. (1977) The conditioning theory of fear-acquisition: A critical examination. Behaviour Research & Therapy, 15, 375–387.
Raine, A., Buchsbaum, M., and LaCasse, L. (1997) Brain abnormalities in murderers indicated by positron emission tomography. Biological Psychiatry, 42, 495–508.
Raine, A., Lencz, T., Bihrle, S., LaCasse, L., and Colletti, P. (2000) Reduced prefrontal grey matter volume and reduced autonomic activity in antisocial personality disorder. Archives of General Psychiatry, 57(2), 119–127.
Raine, A., Ishikawa, S.S., Arce, E., Lencz, T., Knuth, K.H., Bihrle, S., LaCasse, L., and Colletti, P. (2004) Hippocampal structural asymmetry in unsuccessful psychopaths. Biological Psychiatry, 55(2), 185–191.
Raley, Y. (2006) Electric thoughts? Scientific American Mind, 17(2), 76–81.
Ramachandran, V.S. (1998) The unbearable likeness of being. Independent on Sunday, 22 November, 22–24.
Ramachandran, V.S. (2011) The Tell-Tale Brain: Unlocking the Mystery of Human Nature. London: Windmill Books.
Ramberg, B. and Gjesdal, K. (2005) Hermeneutics. Stanford Encyclopaedia of Philosophy, 9 November. http://plato.stanford.edu/entries/hermeneutics.
Rank, O. (1958) Beyond Psychology. New York: Dover Books. (Originally published 1941.)
Raskin, J.D. and Lewandowski, A.M. (2000) The construction of disorder as human enterprise. In R.A. Neimeyer and J.D. Raskin (eds) Constructions of Disorder: Meaning-making Frameworks for Psychotherapy. Washington, DC: American Psychological Association.
Read, J. (2013a) A history of madness. In J. Read and J. Dillon (eds) Models of Madness: Psychological, Social and Biological Approaches to Psychosis (2nd edition). London: Routledge & ISPS.
Read, J. (2013b) The invention of schizophrenia. In J. Read and J. Dillon (eds) Models of Madness: Psychological, Social and Biological Approaches to Psychosis (2nd edition). London: Routledge & ISPS.
Regan, T. (2006) Sentience and rights. In J. Turner and J. D’Silva (eds) Animals, Ethics and Trade: The Challenge of Animal Sentience. London: Earthscan.
Rescorla, R.A. (1968) Probability of shock in the presence and absence of CS in fear conditioning. Journal of Comparative & Physiological Psychology, 66, 1–5.
Richards, G. (1996) Arsenic and old race. Observer Review, 5 May, 4.
Richards, G. (2002) Putting Psychology in its Place: A Critical Historical Overview (2nd edition). Hove: Routledge.
Richards, G. (2010) Putting Psychology in its Place: Critical Historical Perspectives (3rd edition). London: Routledge.
Robinson, A. (2004) Animal rights, anthropomorphism and traumatized fish. Philosophy Now, 46, 20–22.
Robinson, D.K. (2010) Founding fathers. The Psychologist, 23(12), 976–977.
Robinson, O. (2012) A war of words: The history of the idiographic/nomothetic debate. The Psychologist, 25, 164–167.
Roediger, H.L., Rushton, J.P., Capaldi, E.D., and Paris, S.G. (1984) Psychology. New York: Little, Brown & Co.
Rogers, C.R. (1951) Client-Centred Therapy: Its Current Practice, Implications and Theory. Boston, MA: Houghton-Mifflin.
Rogers, C.R. (1961) On Becoming a Person: A Therapist’s View of Psychotherapy. Boston, MA: Houghton-Mifflin.
Rogers, C.R. (1983) Freedom to Learn for the 80s. Columbus, OH: Charles E. Merrill.
Rolls, G. (2007) Taking the Proverbial: The Psychology of Proverbs and Sayings. London: Chambers.
Romanes, G.J. (1882) Animal Intelligence. London: Kegan Paul, Trench, Trubner.
Romanes, G.J. (1883) Mental Evolution in Animals. London: Kegan Paul, Trench, Trubner.
Rosa, A. (1996) Bartlett’s psycho-anthropological project. Culture & Psychology, 2(4), 377–378.
Rosch, E.H. (1973) Natural categories. Cognitive Psychology, 4, 328–350.
Rose, S. (1992) The Making of Memory: From Molecules to Mind. London: Bantam Books.
Rose, S. (2000) Escaping evolutionary psychology. In H. Rose and S. Rose (eds) Alas, Poor Darwin: Arguments Against Evolutionary Psychology. London: Jonathan Cape.
Rose, S. (2003) The Making of Memory: From Molecules to Mind (revised edition). London: Vintage.
Rosenhan, D.L. and Seligman, M.E.P. (1984) Abnormal Psychology. New York: Norton.
Rosenthal, R. (1966) Experimenter Effects in Behavioural Research. New York: Appleton-Century-Crofts.
Rosenthal, R. and Fode, K.L. (1963) The effects of experimenter bias on the performance of the albino rat. Behavioural Science, 8, 183–189.
Rosenthal, R. and Jacobson, L. (1968) Pygmalion in the Classroom: Teacher Expectations and Pupils’ Intellectual Development. New York: Holt, Rinehart & Winston.
Rosenthal, R. and Lawson, R. (1964) A longitudinal study of the effects of experimenter bias on the operant learning of laboratory rats. Journal of Psychiatric Research, 2, 61–72.
Rowan, J. (2001) Ordinary Ecstasy: The Dialectics of Humanistic Psychology (3rd edition). Hove: Brunner-Routledge.
Rubin, E. (1915) Synsoplevede Figurer. Copenhagen: Gyldendalske Boghandel. (Translated into German (1921) as Visuell wahrgenommene Figuren, same publisher.)
Rushton, J.P. (1995) Race, Evolution and Behaviour. New Brunswick, NJ: Transaction Publishers.
Rutkin, A. (2016) Almost human? New Scientist, 231(3080), 16–17.
Rutter, M. (2003) Pathways of genetic influences on psychopathology. Zubin Award Address at 18th Annual Meeting of the Society for Research in Psychopathology, Toronto, Canada (October).
Ryan, A. (1970) The Philosophy of the Social Sciences. London: Macmillan.
Ryan, J. (1972) IQ: The illusion of objectivity. In K. Richardson and D. Spears (eds) Race, Culture and Intelligence. Harmondsworth: Penguin.
Rycroft, C. (1966) Introduction. In C. Rycroft (ed.) Psychoanalysis Observed. London: Constable.
Ryder, R. (1990) Open reply to Jeffrey Gray. The Psychologist, 3, 403.
Sabbatini, R.M.F. (1997) The PET scan: A new window into the brain. Brain & Mind, March/May. www.cerebromente.org.br/n01/pet.htm.
Salmon, P. (1978) Doing psychological research. In F. Fransella (ed.) Personal Construct Psychology. London: Academic Press.
Samelson, F. (1974) History, origin myth, and ideology: Comte’s ‘discovery’ of social psychology. Journal for the Theory of Social Behaviour, 4, 217–231.
Samelson, F. (1975) On the science and politics of the IQ. Social Research, 42, 467–488.
Samelson, F. (1981) Struggle for scientific authority: The reception of Watson’s behaviourism. Journal of the History of the Behavioural Sciences, 17, 399–425.
Samelson, F. (1985) Organizing for the kingdom of behaviour: Academic battles and organizational policies in the twenties. Journal of the History of the Behavioural Sciences, 21, 33–47.
Sanislow, C.A., Pine, D.S., Quinn, K.J., Kozak, M.J., Garvey, M.A., Heinssen, R.K., Wang, P.S., and Cuthbert, B.N. (2010) Developing constructs for psychopathology research: Research domain criteria. Journal of Abnormal Psychology, 119(4), 631–639.
Satel, S. and Lilienfeld, S.O. (2013) Brainwashed: The Seductive Appeal of Mindless Neuroscience. New York: Basic Books.
Saunders, C. and Fernyhough, C. (2016) The medieval mind. The Psychologist, 29(11), 880–883.
Sax, B. (1997) What is a ‘Jewish Dog’? Konrad Lorenz and the cult of wildness. Society and Animals, 5(1), 3–21.
Schachter, S. (1964) The interaction of cognitive and physiological determinants of emotional state. In L. Berkowitz (ed.) Advances in Experimental Social Psychology, Vol. 1. New York: Academic Press.
Scheff, T.J. (1966) Being Mentally Ill: A Sociological Theory. Chicago, IL: Aldine Press.
Schultz, D.P. (1969) A History of Modern Psychology. New York: Academic Press.
Scodel, A. (1957) Heterosexual somatic preference and fantasy dependence. Journal of Consulting Psychology, 21, 371–374.
Scribner, S. (1974) Developmental aspects of categorized recall in a West African society. Cognitive Psychology, 6, 475–494.
Scull, A. (1981) Moral treatment reconsidered. In A. Scull (ed.) Madhouses, Mad-doctors and Madmen. Philadelphia, PA: University of Pennsylvania Press.
Scully, D. and Bart, P. (1973) A funny thing happened on the way to the orifice: Women in gynaecology textbooks. American Journal of Sociology, 78, 1045–1049.
Searle, J.R. (1980) Minds, brains and programs. Behavioural & Brain Sciences, 3, 417–457.
Searle, J.R. (1987) Minds and brains without programs. In C. Blakemore and S. Greenfield (eds) Mindwaves. Oxford: Blackwell.
Searle, J.R. (1995) The Construction of Social Reality. London: Penguin Books.
Sechenov, I.M. (1965) Reflexes of the Brain. Cambridge, MA: MIT Press. (Originally published 1863.)
Segall, M.H., Dasen, P.R., Berry, J.W., and Poortinga, Y.H. (1999) Human Behaviour in Global Perspective: An Introduction to Cross-Cultural Psychology (2nd edition). Needham Heights, MA: Allyn & Bacon.
Seligman, M.E.P. (1970) On the generality of the laws of learning. Psychological Review, 77, 406–418.
Seligman, M.E.P. (1999) The president’s address. American Psychologist, 54, 559–562.
Seligman, M.E.P. (2003) Positive psychology: Fundamental assumptions. The Psychologist, 16(3), 126–127.
Seligman, M.E.P., Steen, T.A., Park, N., and Peterson, C. (2005) Positive psychology progress: Empirical validation of interventions. American Psychologist, 60, 410–421.
Serpell, R. (1982) Measures of perception, skills and intelligence: The growth of a new perspective on children in a Third World country. In W. Hartup (ed.) Review of Child Development Research, Vol. 6. Chicago, IL: University of Chicago Press.
Shackleton, V.J. and Fletcher, C.A. (1984) Individual Differences: Theories and Applications. London: Methuen.
Shields, S. and Bhatia, S. (2009) Darwin on race, gender and culture. American Psychologist, 64, 113.
Shotter, J. (1975) Images of Man in Psychological Research. London: Methuen.
Shotter, J. (1990) Social construction of remembering/forgetting. In D. Middleton and D. Edwards (eds) Collective Remembering. London: Sage.
Shweder, R.A. (1990) Cultural psychology – what is it? In J.W. Stigler, R.A. Shweder, and G. Herdt (eds) Cultural Psychology. New York: Cambridge University Press.
Siegler, M., Osmond, H., and Mann, H. (1972) Laing’s models of madness. In R. Boyers and R. Orrill (eds) Laing and Anti-Psychiatry. Harmondsworth: Penguin.
Simpson, J.C. (2000) It’s all in the upbringing. Johns Hopkins Magazine, April. www.jhu.edu/~jhumag/0400web/35.html.
Sinha, D. (1990) Concept of psychological well-being: Western and Indian perspectives. National Institute of Mental Health and Neurosciences Journal, 8, 1–11.
Sinha, P. (2013) Once blind and now they see. Scientific American, 309(1), 36–43.
Skinner, B.F. (1938) The Behaviour of Organisms. New York: Appleton-Century-Crofts.
Skinner, B.F. (1948) Walden Two. New York: Macmillan.
Skinner, B.F. (1971) Beyond Freedom and Dignity. New York: Knopf.
Skinner, B.F. (1974) About Behaviourism. New York: Knopf.
Skinner, B.F. (1986) Is it behaviourism? Behavioural & Brain Sciences, 9, 716.
Skinner, B.F. (1987) Skinner on Behaviourism. In R.L. Gregory (ed.) The Oxford Companion to the Mind. Oxford: Oxford University Press.
Skinner, B.F. (1990) Can Psychology be a science of mind? American Psychologist, 45, 1206–1210.
Smith, C.L. and Zielinski, S.L. (2014) Brainy bird. Scientific American, 310(2), 46–51.
Smith, J.A., Harré, R., and Van Langenhove, L. (1995) Introduction. In J.A. Smith, R. Harré, and L. Van Langenhove (eds) Rethinking Psychology. London: Sage.
Smith, P.B. and Bond, M.H. (1998) Social Psychology Across Cultures (2nd edition). Hemel Hempstead: Prentice Hall Europe.
Smith, P.K., Cowie, H., and Blades, M. (1998) Understanding Children’s Development (3rd edition). Oxford: Blackwell.
Smith, R. (2013) Between Mind and Nature: A History of Psychology. London: Reaktion Books.
Sneddon, L.U. (2006) Ethics and welfare: Pain perception in fish. Bulletin of the European Association of Fish Pathology, 26(1), 6.
Sneddon, L.U., Braithwaite, V.A., and Gentle, M.J. (2003) Do fish have nociceptors? Evidence for the evolution of a vertebrate sensory system. Proceedings of the Royal Society: Biological Sciences, 270(1520), 1115–1121.
Sober, E. (1992) The evolution of altruism: Correlation, cost and benefit. Biology & Philosophy, 7, 177–188.
Solms, M. (2006) Putting the psyche into neuropsychology. The Psychologist, 19(9), 538–539.
Solomon, S., Greenberg, J., and Pyszczynski, T. (1991a) A terror management theory of social behaviour: The psychological functions of self-esteem and cultural worldviews. In M. Zanna (ed.) Advances in Experimental Social Psychology, Vol. 24. Orlando, FL: Academic Press.
Solomon, S., Greenberg, J., and Pyszczynski, T. (1991b) A terror management theory of self-esteem. In C.R. Snyder and D. Forsyth (eds) Handbook of Social and Clinical Psychology: The Health Perspective. New York: Pergamon Press.
Solomon, S., Greenberg, J., and Pyszczynski, T. (2004) The cultural animal: Twenty years of terror management theory and research. In J. Greenberg, S.L. Koole, and T. Pyszczynski (eds) Handbook of Experimental Existential Psychology. New York: The Guilford Press.
Spanos, N.P. (1978) Witchcraft in histories of psychiatry: A critical analysis and an alternative conceptualization. Psychological Bulletin, 85, 417–439.
Spearman, C. (1904) General intelligence, objectively determined and measured. American Journal of Psychology, 15, 201–293.
Spearman, C. (1927) The Abilities of Man. London: Macmillan.
Spearman, C. (1967) The doctrine of two factors. In S. Wiseman (ed.) Intelligence and Ability. Harmondsworth: Penguin. (Originally published 1927.)
Spelke, E.S. (1991) Physical knowledge in infancy: Reflections on Piaget’s theory. In S. Carey and R. Gelman (eds) The Epigenesis of Mind: Essays on Biology and Cognition. Hillsdale, NJ: Erlbaum.
Sperry, R.W. (1974) Lateral specialization in the surgically separated hemispheres. In F.O. Schmitt and F.G. Worden (eds) The Neurosciences Third Study Program. Cambridge, MA: MIT Press.
Stainton Rogers, R., Stenner, P., Gleeson, K., and Stainton Rogers, W. (1995) Social Psychology: A Critical Agenda. London: Polity Press.
Stevens, R. (1995) Freudian theories of personality. In S.E. Hampson and A.M. Colman (eds) Individual Differences and Personality. London: Longman.
Stevens, R. and Gardner, S. (1982) The Women of Psychology: Expansion and Refinement. Cambridge, MA: Schenkman.
Stigler, J., Shweder, R.A., and Herdt, G. (eds) (1990) Cultural Psychology: Essays on Comparative Human Development. New York: Cambridge University Press.
Stout, G.F. (1896) Analytic Psychology, Vols 1–2. London: Swann Sonnenschein & Co. Ltd.
Strachey, J. (1962–1977) Sigmund Freud: A sketch of his life and ideas. (This appears in each volume of the Pelican Freud Library.) Originally written for the Standard Edition of the Complete Psychological Works of Sigmund Freud, 1953–1974. London: Hogarth Press.
Sue, D., Sue, D.W., and Sue, S. (1990) Understanding Abnormal Behaviour. Boston, MA: Houghton Mifflin.
Sue, S. (1995) Implications of the bell curve: Whites are genetically inferior in intelligence? Focus: Notes from the Society for the Psychological Study of Ethnic Minority Issues, 16–17.
Sullivan, H.S. (1927) Tentative criteria of malignancy in schizophrenia. American Journal of Psychiatry, 84, 759–782.
Sulloway, F.J. (1979) Freud, Biologist of the Mind: Beyond the Psychoanalytic Legend. New York: Basic Books.
Szasz, T. (1972) The Myth of Mental Illness. London: Paladin.
Szasz, T. (1973) The Manufacture of Madness. London: Paladin.
Szasz, T. (1974) Ideology and Insanity. Harmondsworth: Penguin.
Tallis, R. (2011) Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. London: Routledge.
Tavris, C. (1993) The mismeasure of woman. Feminism & Psychology, 3(2), 149–168.
Taylor, R. (1963) Metaphysics. Englewood Cliffs, NJ: Prentice Hall.
Teichman, J. (1988) Philosophy and the Mind. Oxford: Blackwell.
Teo, T. (2005) The Critique of Psychology: From Kant to Postcolonial Theory. New York: Springer.
Teo, T. (2009) Philosophical concerns in critical psychology. In D. Fox, I. Prilleltensky, and S. Austin (eds) Critical Psychology: An Introduction (2nd edition). London: Sage.
Thomas, K. (1990) Psychodynamics: The Freudian approach. In I. Roth (ed.) Introduction to Psychology, Vol. 1. Hove: Open University/Lawrence Erlbaum.
Thompson, C. (1943) Penis envy in women. Psychiatry, 6, 12–35.
Thorndike, E.L. (1898) Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement 2 (Whole No. 8).
Thorndike, E.L. (1911) Animal Intelligence. New York: Macmillan.
Thorne, B. (1992) Carl Rogers. London: Sage.
Thorngate, W. (1986) The production, detection and explanation of behaviour patterns. In J. Valsiner (ed.) The Individual Subject and Scientific Psychology. New York: Plenum Press.
Thornhill, R. and Wilmsen-Thornhill, N. (1992) The evolutionary psychology of men’s coercive sexuality. Behavioural & Brain Sciences, 15(2), 363–375.
Thurstone, L.L. (1938) Primary mental abilities. Psychometric Monographs, No. 1.
Tinbergen, N. (1951) The Study of Instinct. Oxford: Clarendon Press.
Toates, F. (2001) Biological Psychology: An Integrative Approach. Harlow: Pearson Education Ltd.
Tolman, C.W. and Maiers, W. (eds) (1991) Critical Psychology: Contributions to an Historical Science of the Subject. Cambridge: Cambridge University Press.
Tolman, E.C. (1932) Purposive Behaviour in Animals and Man. New York: Century.
Tolman, E.C. (1948) Cognitive maps in rats and man. Psychological Review, 55, 189–208.
Tolman, E.C. and Honzik, C.H. (1930) Introduction and removal of reward, and maze performance in rats. University of California Publications in Psychology, 4, 257–275.
Tooby, J. and Cosmides, L. (1997) Evolutionary psychology: A primer. www.psych.ucsb.edu/research/cep/primer.html.
Torrance, S. (1986) Breaking out of the Chinese room. In M. Yazdani (ed.) Artificial Intelligence: Principles and Applications. London: Chapman & Hall.
Toulmin, S. and Leary, D.E. (1985) The cult of empiricism in psychology, and beyond. In S. Koch and D.E. Leary (eds) A Century of Psychology as Science. New York: McGraw-Hill.
Triandis, H.C. (1990) Theoretical concepts that are applicable to the analysis of ethnocentrism. In R.W. Brislin (ed.) Applied Cross-Cultural Psychology. Newbury Park, CA: Sage.
Trivers, R.L. and Hare, H. (1976) Haplodiploidy and the evolution of social insects. Science, 191, 249–263.
Tulving, E. (1972) Episodic and semantic memory. In E. Tulving and W. Donaldson (eds) Organization of Memory. London: Academic Press.
Tulving, E. (1983) Elements of Episodic Memory. New York: Oxford University Press.
Tulving, E. (1985) How many memory systems are there? American Psychologist, 40, 385–398.
Turing, A.M. (1950) Computing machinery and intelligence. Mind, 59, 433–460.
Ussher, J.M. (1991) Women’s Madness: Misogyny or Mental Illness? Hemel Hempstead: Harvester Wheatsheaf.
Valentine, E. (1992) Conceptual Issues in Psychology (2nd edition). London: Routledge.
Valentine, E. (2010) Women in early 20th-century experimental psychology. The Psychologist, 23(12), 972–974.
Valentine, E.R. (1982) Conceptual Issues in Psychology. London: Routledge.
Van Langenhove, L. (1995) The theoretical foundations of experimental psychology and its alternatives. In J.A. Smith, R. Harré, and L. Van Langenhove (eds) Rethinking Psychology. London: Sage.
Velmans, M. (1991) Intersubjective science. Journal of Consciousness Studies, 6(2/3), 299–306.
Vernon, M.D. (1955) The functions of schemata in perceiving. Psychological Review, 62, 180–192.
Vernon, P.E. (1950) The hierarchy of ability. In S. Wiseman (ed.) Intelligence and Ability. Harmondsworth: Penguin.
Von Senden, M. (1960) Space and Sight: The Perception of Space and Shape in the Congenitally Blind Before and After Operations (trans. P. Heath). London: Methuen. (Originally published 1932.)
Wagner, D.A. (1981) Culture and memory development. In H.C. Triandis and A. Heron (eds) Handbook of Cross-Cultural Psychology, Vol. 4, Developmental Psychology. Boston, MA: Allyn & Bacon.
Walsh, R.T.G., Teo, T., and Baydala, A. (2014) A Critical History and Philosophy of Psychology. Cambridge: Cambridge University Press.
Watson, J.B. (1913) Psychology as the behaviourist views it. Psychological Review, 20, 158–177.
Watson, J.B. (1924) Behaviourism. New York: Norton.
Watson, J.B. (1931) Behaviourism (2nd edition). London: Kegan Paul, Trench, Trubner & Co.
Watson, J.B. and Rayner, R. (1920) Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1–14.
Wearing, D. (2005) Forever Today. London: Corgi Books.
Wegner, D.M. and Ward, A.F. (2013) How Google is changing your brain. Scientific American, 309(6), 50–53.
Weiner, B. (1992) Human Motivation: Metaphors, Theories and Research. Newbury Park, CA: Sage.
Weiskrantz, L. (1986) Blindsight: A Case Study and Implications. Oxford: Clarendon Press.
Weiskrantz, L. (2007) The case of blindsight. In M. Velmans and S. Schneider (eds) The Blackwell Companion to Consciousness. Oxford: Blackwell Publishing.
Weisstein, N. (1993) Psychology constructs the female, or, the fantasy life of the male psychologist (with some attention to the fantasies of his friends, the male biologist and the male anthropologist). (This is a revised/expanded version of ‘Kinder, Kuche, Kirche as scientific law: psychology constructs the female’, 1971.) Feminism & Psychology, 3(2), 195–210.
West, C. and Zimmerman, D.H. (1987) Doing gender. Gender & Society, 1(2), 125–151.
Wetherell, M. (1996) Group conflict and the social psychology of racism. In M. Wetherell (ed.) Identities, Groups and Social Issues. London: Sage, in association with the Open University.
Wetherell, M. (2012) Affect and Emotion: A New Social Science Understanding. London: Sage.


References

Wetherell, M. and Still, A. (1998) Realism and relativism. In R. Sapsford, A. Still, M. Wetherell et al. (eds) Theory and Social Psychology. London: Sage in association with the Open University.
White, R. (2013) The globalization of mental illness. The Psychologist, 26(3), 182–185.
Widiger, T.A. (2012) Classification and diagnosis: Historical development and contemporary issues. In J.E. Maddux and B.A. Winstead (eds) Psychopathology: Foundations for a Contemporary Understanding (3rd edition). New York: Routledge.
Wilber, K. (1983) Eye to Eye: The Quest for the New Paradigm. Garden City: Anchor.
Wilhelm, K. (2006) Do animals have feelings? Scientific American Mind, 17(1), 24–29.
Wilkinson, S. (1988) The role of reflexivity in feminist psychology. Women’s Studies International Forum, 11(5), 493–502.
Willig, C. (2008) Introducing Qualitative Research in Psychology (2nd edition). Milton Keynes: Open University Press.
Wilson, C. (2016) Psychiatry’s last taboo. New Scientist, 231(3083), 16–17.
Wilson, E.O. (1975) Sociobiology: The New Synthesis. Cambridge, MA: Harvard University Press.
Wilson, G.T., O’Leary, K.D., Nathan, P.E., and Clark, L.A. (1996) Abnormal Psychology: Integrating Perspectives. Needham Heights, MA: Allyn & Bacon.
Winch, P.G. (1958) The Idea of a Social Science. London: Routledge.
Wiseman, R. (2012) Wired for weird. Scientific American Mind, 22(6), 52–57.
Wittgenstein, L. (1953) Philosophical Investigations. Oxford: Blackwell.
Wober, M. (1974) Towards an understanding of the Kiganda concept of intelligence. In J.W. Berry and P.R. Dasen (eds) Culture and Cognition. London: Methuen.
Wolpe, J. and Rachman, S. (1960) Psychoanalytic evidence: A critique based on Freud’s case of Little Hans. Journal of Nervous & Mental Diseases, 131, 135–145.
Woodworth, R.S. (1948) Contemporary Schools of Psychology. New York: Ronald.
Woodworth, R.S. (1964) Contemporary Schools of Psychology (9th UK edition). London: Methuen.
Workman, L. and Reader, W. (2008) Evolutionary Psychology: An Introduction (2nd edition). Cambridge: Cambridge University Press.
World Health Organization (1979) Schizophrenia: An International Follow-up Study. New York: Wiley.
World Health Organization (1992) The ICD-10 Classification of Mental and Behavioural Disorders: Clinical Descriptions and Diagnostic Guidelines. Geneva: WHO.
Wundt, W. (1974) Grundzüge der physiologischen Psychologie (Principles of Physiological Psychology). Leipzig: Engelmann. (Originally published 1874.)
Yalom, I.D. (1980) Existential Psychotherapy. New York: Basic Books.
Yalom, I.D. (2008) Staring at the Sun: Overcoming the Dread of Death. London: Piatkus Books.
Yerkes, R.M. (1921) Psychological examining in the United States Army. Memoirs of the National Academy of Sciences, 15, 1–890.
Zenderland, L. (1998) Measuring Minds: Henry Herbert Goddard and the Origins of American Intelligence Testing. New York: Cambridge University Press.


Index

Page numbers in italics denote tables, those in bold denote figures. 3D brain mapping 125 ablation 117 absolutism 247–8 abstractness meaning 235 accidental associations 33 accidents 212–13 achieved performance 160 actions and movements 96–7 actualizing tendency 231, 278 Adler, Alfred 204, 227 adoptive versus biological relatives studies 255 Aesclepiades 297 affirmative action 266 agency 95 aggression 191 Aguirre, G.K. 126 AI see Artificial Intelligence (AI) aims 56–9 akinetic mutism 272 algebraic understanding 214 Allport, Floyd 130 Allport, Gordon 48, 51, 99, 169, 170, 171, 227, 283, 284 altruism 192–5 Alzheimer, Alois 49, 301, 302 Alzheimer’s disease 115, 125 American Psychiatric Association 302 American Psychological Association (APA) 57, 243 Amir, Y. 246 anaclitic identification 211 Analytical Psychology 204, 214–15

anatomy, history of 112–14 Ancient Greek philosophy 20, 112, 206, 315 Anderson, M. 267 androcentric perspectives 24–5 Angell, James R. 139 Anglocentric bias 246 anima/animus 215 animal studies 118–23; Comparative Psychology 182, 187–8; ethical issues 119, 120, 122–3, 290; ethology 182, 188–91; Functionalism 139–40; Pavlov 134–8, 136, 136; Skinner 145, 146–7, 147 Animal Welfare Act (2006) 119 animals: altruism 192; consciousness 121, 122; emotions 121–2, 184; pain 120–3; and personhood 290 Animals (Scientific Procedures) Act (1986) 119 Antaki, C. 101 anterior cingulate cortex 272 anterograde amnesia 161 anthropic shift in ethology 190 anthropometric laboratory 253–4 anthropomorphism 44, 194 anti-essentialism 82 anti-positivism 82 anti-psychiatry movement 239, 300, 311 anti-realism 105 anti-reductionism 43, 124 apperception 35 Appignanesi, L. 218, 219 Apter, A. 274



archetypes 215, 277, 278 argument from design 182 Argyle, Michael 285 Aristotle 20, 21, 31, 32, 60, 113, 297 Army Alpha and Beta tests 259–63, 260 Arnold, Magda 25 Artificial Intelligence (AI) 36, 171–4, 176 Asch, Solomon 16 Ashworth, P. 227–8 association 15, 31–4; memory as 158–62 Associationism 31–4, 39; and Behaviourism 133, 139–40; memory 157, 158–62, 166 associative learning 133 astrocytes 115 Atkinson, R.C. 160–1, 161 atomism 71 attribution process 101 attribution theory 101 auditory hallucinations 275 Austin, George A. 171 autism 115 automatic formal systems 172, 173 automatic thought processes 216 automaton-theory 112 autonomic nervous system 110, 114 autonomous man 149 autonomy of psychological explanation 42–3 average evoked potentials (AEPs) 125 axons 114 backward conditioning 136 Bacon, Francis 30, 69 Baddeley, A.D. 161–2 Baggini, J. 129 Baldwin, James Mark 139 Bamshad, M.J. 186 Bandura, A. 36, 93, 152 Bannister, D. 105, 106 Bargh, J.A. 216 Barkow, J. 198 Bartlett, Frederic Charles 162–4, 165, 167, 169, 218 basal ganglia 115, 116 Bateson, Gregory 311, 313 Baumeister, R.F. 281, 282 Bazan, A. 112, 113 Beaunis, Henri 257 Becker, Ernest 239, 242


Becker, H.S. 294 becoming 224, 231 behaviour: biological influences on 109–10; causal explanation of 94–6, 100–1; and construction of gender 79; decontextualization of 46; in James’ theory of emotion 6; social level explanations of 43–4, 59; species-specific 189; unconsciously motivated 35 behaviour analysis 11, 13, 14, 145–6, 148, 149, 150 behaviour genetics 130, 255 behaviour therapy 142–3 behavioural control 56–8 Behaviourism 4, 18, 39, 133–52, 157; ethical issues 142; and Functionalism 134, 138–40, 139; Little Albert study 140–4; Methodological 9, 13, 38, 149; and modernism 144–5; and morality 148–50; as paradigm 63, 64; Pavlov 134–8, 136, 136; and the self 275–6; Skinner 10–14, 15, 41, 49, 133, 145–50, 147, 275–6; Tolman 16, 151; Watson 7–10, 14, 15, 39, 45–6, 56, 97, 133, 140–5, 185, 275–6 behaviourists’ prejudice 97 Bem, Daryl 53–4 Benedict, Ruth 227 Bennett, M. 100 Bennett, T. 127 Berger, P.L. 81, 286 Berkeley, George 14, 33–4 Bernstein, M.D. 24, 25 Berry, J.W. 247 Bethlem Hospital, London 297, 298 Bettelheim, Bruno 211 Bhatia, S. 185 biases 55, 246; ethnocentrism 83, 246, 308; Eurocentrism 246; experimenter 47; gender 24–5, 58, 77–8; hereditarian 8; inborn 36; intelligence tests 263–4; intercept 264; masculinist 24–5, 58, 77; publication 53–4; V-bias and S-bias 263–4 bicameral mind argument (BMA) 274–5 Big Five model 48 Binet, Alfred 256–9 Binet–Simon Scale 259 Binswanger, Ludwig 237, 310


biological altruism 192, 194–5 biological preparedness 143 biological reductionism 110–12 biologism 127 Biopsychology 109–12; animal studies 118–23; brain imaging 124–9, 130, 216, 305; history of anatomy and physiology 112–14; and the self 271–5; see also brain bipolar disorder 301 birds 121 Blackham, H.J. 239 Blackman, D.E. 146 Blackmore, S. 129, 195–6 Blakemore, Colin 128, 158, 272 blank slate 32, 35, 188, 198 Bleier, R. 220 Bleuler, Eugen 301, 312 blindsight 216 Blumer, Herbert 286 BMA see bicameral mind argument (BMA) Boas, F. 247 Boden, M.A. 64, 172, 173, 177 Bohan, J. 314 Bois-Reymond, Emil Du 114 Bond, M.H. 246, 291 Boring, E.G. 16, 261–2 Boss, Medard 237 Bowlby, John 204, 310 Boyle, E. 120, 122 BPS see British Psychological Society (BPS) brain 110, 113–18, 123–31, 176–7; fluidbased theories of 113; function 113–14, 125–9; functional lateralization 116–18, 274–5; functional localization 116–18, 123–4; neurodeterminism 127, 130–1; phrenology 116, 117; split-brain studies 118, 274–5; structure 113, 114–18, 116; see also mind–brain relationship brain cells: neurons 114; non-neuronal 115 brain fag syndrome 308 brain imaging 124–9, 130, 216, 305 brain-damaged patient studies 161 Brandom, Robert 214 Brandon, S. 218 Brentano, Franz 205, 206 Breuer, Joseph 208 Brislin, R. 250–1 British False Memory Society 217

British Journal of Psychiatry 218 British Psychological Society (BPS) 25, 58, 59, 118–20, 218 Broadbent, Donald 17, 18, 39, 97, 157 Broca, Paul 258 Broca’s area 116, 118 Brown, J. 161 Brown, J.A.C. 212, 215 Brown, L.S. 266–7 Brown, R. 192, 195 Brownmiller, Susan 200 Brown–Peterson technique 161 Brucke, Ernst 205 Bruner, Jerome Seymour 18, 169–71 Brysbaert, M. 20 Buber, Martin 234, 310 Buller, D.J. 200 Bunn, G. 3 Burr, V. 79, 80, 81, 82, 84, 86, 88, 88, 286, 288, 289, 291 Burt, C. 48 Burt, Cyril 162 Buss, A.R. 78 Buss, D.M. 200 Cajal, Santiago Ramon y 114 Calkins, Mary 25 callosotomy 118 carbon chauvinism 177 cardinal traits 283 Carr, Henry 139 Carruthers, M. 163 case studies 49–51, 50 CAT scanning 125 categorical approach 293 Cattell, James McKeen 139, 139 Cattell, Raymond 48, 57, 233 causality, principle of 212, 213 cause and effect 34, 46, 69, 71, 212, 213 CBSs see culture-bound syndromes (CBSs) CCP see Cross-cultural Psychology (CCP) celebratory history 19, 144 central nervous system (CNS) 110, 114; animals 120; see also brain central traits 283, 284 centralist materialism 41 Centre for the Study of Human Behaviour 10 cerebellum 115, 116



cerebral cortex 114 cerebrum 115, 116 CERs see conditioned emotional responses (CERs) Chalmers, D. 40 Charcot, Jean-Martin 205, 207, 256 Cherry, F. 199, 200 child sexual abuse 206, 216–18 Chinese room 173–4, 177 Chomsky, Noam 17 chunking 17 cingulate cortex 115 class interest 207 classical conditioning 9–10, 15, 133, 135–8, 136, 136, 140–4; role of cognition in 151, 152 Classical period 113, 296–7; see also Ancient Greek philosophy client-centred therapy 228–9, 279 Clifasefi, S.L. 218 clitoral orgasm 219–20 clustering 166 CML see conscious mental life (CML) CNS see central nervous system (CNS) Cognitive Behaviourism 151 cognitive development 36, 63 cognitive maps 151 Cognitive Psychology 9, 13, 14, 16–18, 19, 64, 76, 149, 157; Artificial Intelligence (AI) 36, 171–4, 176; Computational Theory of Mind (CTM) 173–5, 176–7; computer analogy 17, 18, 157, 174–6; and the self 276; unconscious mind 216; see also memory cognitive revolution 16–18, 69, 130, 157, 169–72; second 75–6 cognitive unconscious 216 Cohen, John 226 Cohen, L. 218 Cohen, N.J. 161 Cole, M. 165–6 collective unconscious 215, 277, 278 collectivism 291 Colman, W. 277 Colvin, M.K. 274–5 commissurotomy 118 common sense psychology 44, 91–107; causal explanation of behaviour 94–6, 100–1; Heider 99–101; Personal


Construct Theory (PCT) 101–7, 105, 289; vs. psychological research 98–9 communal construction 80 Comparative Psychology 182, 187–8 compatibilism 6 compensatory history 25 complementarity hypothesis 187 complexes 215, 277 Computational Theory of Mind (CTM) 173–5, 176–7 computer analogy 17, 18, 157, 174–6 computerized axial tomography (CAT) 125 Comte, Auguste 37, 38, 38, 39, 117 conceptual equivalence 250–1 conditional positive regard 280–1 conditioned emotional responses (CERs) 140–4 conditioned reflexes 135 conditioned response (CR) 135, 136, 136, 152 conditioned stimulus (CS) 135, 136, 136, 152 conditioning: classical 9–10, 15, 133, 135–8, 136, 136, 140–4; operant 10, 13, 15, 133, 145–7, 147, 150; role of cognition in 151, 152 conditions of worth 280–1 confirmation vs. refutation 53 connectionism 140 Conrad, R. 161 conscience 211 conscious mental life (CML) 6 consciousness 15, 111–12, 124; animals 121, 122; apperception 35; and brain activity 128–9; Descartes’ dualism 40; epiphenomenalism 41, 111; evolutionary view of 111, 112; false 207; hard problem of 40–1; idea intensity 207; intentionality 174, 176, 177; James’ view of 5–6, 111; levels of 206, 208; and memetics 196; and memory 17, 157, 158; perceptual thresholds 206; and sentience 121, 122; see also Self; unconscious mind conspiratorial model of madness 306 conspiratorial model of schizophrenia 311–13 Constandi, M. 115 constant conjunction 34


constructive alternativism 105 constructivist approach to memory 162–71 constructivist theory of perception 207 constructs 22, 57, 92; id, ego, and superego 209–11; Personal Construct Theory (PCT) 101–7, 105 context-stripping 46, 69, 71, 77, 82, 109 contiguity 31, 33, 34 contingencies of reinforcement 146, 148 contrast 31 contribution history 25 controlled experiments 2–3, 9, 15, 38, 46–7, 48, 69, 70, 79 controlled thought processes 216 conversation analysis (CA) 88, 88 Cooley, Charles 276, 285 Cooper, David 311 Cooper, M. 310–11 Copernicus, Nicolas 30 Cornwell, D. 141, 142 corpus callosum 118 correlation coefficients 256 correspondence theory of truth 37, 55–6, 86, 91 Cosmides, Leda 198, 200 Costa, P.T. 48 Costello, T.W. 296–7 counselling profession 231 covert behaviours 11, 13 CR see conditioned response (CR) Craik, F.I.M. 162 craniometry 258 creationism 13–14, 182 Crick, Francis 42, 271–2 criminal responsibility 127 Critical Discourse Analysis (CDA) 88 Critical Psychology 59, 85 Crook, C. 164 Cross-cultural Psychology (CCP) 83–4, 247–52 Crozier, William 11 CS see conditioned stimulus (CS) CTM see Computational Theory of Mind (CTM) cult of empiricism 70 Cultural Psychology 83–4, 266 cultural relativism 20, 62, 82, 93, 247; mental illness 295, 304, 307–9 cultural transmission 195–6

cultural worldviews 242–3 culturally general concepts 248, 249 culturally specific concepts 248, 249 culture 45; and aggression 191; Crosscultural Psychology 83–4, 247–52; and human nature 197, 197; individualism– collectivism cultural syndrome 291; and intelligence tests 250–2, 264, 265–6; and memory 165–6; and mental illness 295, 304, 307–9; and moral values 210–11; and the self 290–1; transcultural psychiatry 309 culture-bound syndromes (CBSs) 308 cumulative knowledge 37, 59–63 Curti, Margaret 265 Czikszentmihalyi, M. 233 Damasio, A.R. 121 Danziger, K. 20, 21, 22, 56, 72, 81, 92, 93, 159, 160 Darwin, Charles 139, 181–8, 181, 252, 253 Darwin, Erasmus 181, 182 Davison, G.C. 63 Dawkins, Richard 193, 194, 195 Dean, John 167–8 death, fear of 240 death terror 237, 239 declarative memory 161 decontextualization 46, 69, 71, 77, 82, 109 Deese, J. 52 defence mechanisms 208, 228, 280 deferred gratification 210 Defert, Daniel 85 DeGrazia, D. 121 dehumanizing image of human beings 58 delayed conditioning 136 Delgado, J.M.R. 124 demand characteristics 47 dementia 300 dementia paralytica 300 dementia praecox 301, 302 dendrites 114 dependent variables 46, 47 depersonalization 313 depression 125 Descartes, René 31, 32, 35, 37, 39–40, 67, 113 determinism 6, 37, 44–7, 71, 94, 106; Freud’s view of 46, 212–14, 223; hard 6, 44, 148;



determinism continued Humanistic Psychology 228–9; neurodeterminism 127, 130–1; psychic determinism 46, 212–13, 223; Skinner’s view of 11–13, 148; soft 6, 44, 148, 229 Developmental Psychology 36, 63 developmental theory of self 286 deviance 293–5; see also mental illness Dewey, John 7, 139 Diagnostic and Statistical Manual of Mental Disorders (DSM) 302, 305, 306, 307 Dialogical Psychology 88, 88 diencephalon 115, 116 Differential Psychology 246, 283; see also individual differences Digman, J.M. 48 Dilthey, Wilhelm 48, 49 dimensional approach 293 dimensional theory of personality 51, 296, 297 disciplinary parochialism 83 discourse analysis 39, 88 discrimination, in conditioning 137 Discursive Psychology 75–6, 85, 88, 88; everyday remembering 166–9; and the self 277, 287–9 disease model 243, 244, 303–4 divided attention 17, 157 Dix, Dorothea 299 dizygotic twins 255 Dobbs, D. 126 Donaldson, Henry 7 dopamine 114 double brain theory 274 double-binds 311 Douglas, M. 164 Down’s syndrome 125 dream interpretation 208, 212, 214 Dreyfus, H.L. 172 DSM see Diagnostic and Statistical Manual of Mental Disorders (DSM) dual-memory model 160–2, 161 dualism 31, 37, 39–41, 113 Eastern Psychology 224, 225, 236 Ebbinghaus, Hermann 49, 157, 158–60, 163, 207 ecological validity 163, 164, 165 Edgell, Beatrice 26, 26


Edwards, D. 81, 166–7, 168 EEG imaging 125 effect: cause and 34, 46, 69, 71, 212, 213; law of 140, 145–6 ego 16, 204, 208, 210, 223, 276–7 ego defence mechanism 208, 209 Ego Psychology 204 ego-ideal 211 Ehrenfels, Christian von 15–16 Eimer, M. 128 elaborative rehearsal 162 Elcock, J. 4, 20, 81 electroencephalogram (EEG) 125 eliminative materialism 41, 42–3 Ellis, G. 42 emic–etic distinction 248–52 emotional reactions, conditioned 140–4 emotions: animals 121–2, 184; James’ theory of 6; social 121; and spindle cells 115 empirical methods 9, 15, 38, 46–7, 48, 69, 70 empiricism 14–15, 31–4, 37, 38–9; cult of 70 empty organism view 150 Encounter Groups 233 endocrine system 110 engulfment 312 Enlightenment 38, 45, 67–8, 79 Environment of Evolutionary Adaptedness (EEA) 198 epiphenomenalism 41, 111 episodic memory 161 epistemological reflexivity 92, 93 epistemology 1, 29, 70 equipotentiality, law of 118 equivalence of concepts 250–1 Erikson, Erik 204, 223, 281–2 Erlebnis (lived experience) 73 essentialism 56, 68, 69, 71, 82, 93, 315 Esterson, A. 313 ethical codes 266–7 ethical issues 19, 29, 46, 58; animal studies 119, 120, 122–3, 290; Behaviourism 142; intelligence tests 266–7; Little Albert study 142; Pavlov’s research 134; see also morality ethnocentrism 83, 246, 308 ethnomethodology 81, 286


ethology 182, 188–91 etics, imposed 247, 249 eugenics 186, 255–6, 259, 262 Eurocentrism 246 evil behaviour 229 Evolutionary Psychology 130, 181, 188, 198–200 evolutionary theory 14, 110, 139, 181–7, 253, 256; altruism 192–5; consciousness 111, 112; and eugenics 186, 255–6, 259, 262; and gender 187; kin selection 193, 195; and race 185–7, 190; rate of evolutionary change 200; sexual selection 187, 200; species selection 194; and terror management theory 241–2; Universal Darwinism 195 Ewen, R.B. 223, 226–7 existential isolation 240 Existential Psychology 236, 240–3 Existentialism 236–40, 310, 311 existential-phenomenological analysis 309, 312 exorcism 297 expectations: in conditioning 151, 152; and perception 171 experiential knowledge 235 Experimental Existential Psychology 240–3 experimental neurosis 137 Experimental Psychology 2–4, 72, 162, 254, 257 experimenter bias 47 explicit conscious recognition 216 external validity 46, 47 extinction of conditioning 137, 147, 147 eyewitness testimony 167, 218 Eysenck, Hans 48, 51, 52, 113, 142, 143, 216, 233, 284, 296, 297 face blindness 216 factor analysis 48 Fairbairn, Ronald 204 false consciousness 207 false dichotomies 45, 149, 197, 214, 284 false memories 216–18 False Memory Syndrome Foundation 217 false-memory syndrome (FMS) 217, 218 falsifiability 52–3, 61–2, 204, 216 family interaction model of schizophrenia 311–13

Fancher, R.E. 4, 5, 6, 7, 8, 32, 35, 97, 181–2, 183, 184, 185, 187, 205, 226–7, 252–3, 254, 255–6, 257, 265 FAPs see fixed action patterns (FAPs) Farah, Martha J. 126, 290 fear of death 240 Fechner, Gustav 3, 206, 254 feelings, animals 121–2 female sexuality, Freud’s theories on 219–20 Feminist poststructuralist Discourse Analysis (FPDA) 88 Feminist Psychology 46, 58, 59; critiques of positivism 54–5, 77–8; and Freud 218–20; male violence 199–200; revisions of history of Psychology 23–5; and Social Constructionism 78–9 Ferguson, K.E. 12, 13, 148, 150 Fernando, S. 186, 266, 308 Ferster, C.B. 147 Feyerabend, P.K. 62 Fields, R.D. 115 figure–ground perception 16 ‘file-drawer’ problem 53–4 Firestone, Shulamith 218 first-order conditioning 137 first-order sensory representations 273 fish 120, 122 five-factor model 48 fixed action patterns (FAPs) 189 fixed interval reinforcement 147 fixed ratio reinforcement 147 Flanagan, O. 95, 177 Fletcher, C.A. 105 flexibility of human mind 36 flooding 142 Flourens, Pierre 117 fluid-based theories of brain 113 fMRI see functional MRI (fMRI) FMS see false-memory syndrome (FMS) foetal development 115 forced reality testing 142 Ford, K.M. 174 Fordham, M. 278 forebrain 115, 116 Forrester, J. 218, 219 forward conditioning 136 Foucault, Michel 70, 85–6, 85, 88, 298, 299–300 Fox, D. 59



Frankl, Viktor 237, 238 Frankland, A. 218 Fransella, F. 104, 105, 106 fraternal twins 255 free association 208, 212, 310 free will 40, 44, 95, 113, 124; Freud’s view of 212–14; Humanistic Psychology 228–9; James’ view of 5, 6, 44; and memetics 196; neuroscientific research 127–9; Personal Construct Theory (PCT) 106; Skinner’s view of 11–13, 148 freedom, desire for 240 French, Chris 54 Freud, Anna 204, 209 Freud, Sigmund 5, 12, 35, 36, 43, 45–6, 49, 52, 73, 101, 203, 204–20, 205, 223, 229, 276–7 Friedan, Betty 219 Frijda, N. 251 Frisch, Karl von 188 Frith, C. 216 Frith, H. 88 Fritsch, Gustav 118 Fromm, Erich 223, 224, 226, 227, 237 frontal lobe 116 fully functioning person 229, 231, 243, 278 functional lateralization of brain 116–18, 274–5 functional localization of brain 116–18, 123–4 functional MRI (fMRI) 125, 126 functional psychosis 305 Functionalism 5–6, 254; and Behaviourism 134, 138–40, 139 Furomoto, L. 23 fuzzy sets 82 Gabrielli, J.D.E. 126 Gadamer, H.-G. 74 Galen 113, 297, 298 Galilei, Galileo 31 Gall, Franz Joseph 117 Galton, Francis 139, 186, 252–6 Galvani, Luigi 113 gamma aminobutyric acid (GABA) 114 Gardner, Howard 17, 177, 265 Gardner, S. 25 Garnham, A. 172 Gay, P. 212–13


Gazzaniga, M.S. 274–5 Geisteswissenschaften (humanities) 4, 49–50, 50, 57, 62, 72, 111, 213 gender: construction of 79; and Evolutionary Psychology 198, 199–200; and evolutionary theory 187; and power 77, 78, 79, 199–200; see also Feminist Psychology gender bias 24–5, 58, 77–8 generalization 48–51, 50; in conditioning 137 genes 110, 192–4 geocentric model of the universe 30 geodesic net 125 Gergen, K.J. 21–2, 68, 69, 80, 81–2, 83, 86, 88, 126, 130–1, 314 Gestalt Psychology 14–17, 99, 100, 157, 206, 227 Gestalt quality 16 Gilgamesh Epic, The 237 Gilligan, Carol 58 Giorgi, A. 227–8 Giorgi, B. 227–8 gist 167, 168 givens of existence 239, 240–1 Gjesdal, K. 73–4 Glassman, W.E. 63 glial cells 115 Glick, J. 249 Goddard, H.H. 259, 260, 264, 265 Goldberg, A. 73–4, 214 Goldberg, L.R. 48 Goldberg, S. 52 Goldstein, Kurt 227 Goodnow, Jacqueline J. 171 Google 165 Gorgi, Camillo 114 Gough, B. 85 Gould, Stephen J. 22–3, 257–8, 259, 263–5 Graham, H. 224, 225–6, 231, 278 Gray, J.A. 122–3 Greenberg, Jeff 241 Greenfield, Susan 272 Gregory, Richard 174, 207 Grid Test of Thought Disorder 105 group averages 74 group norms 48, 50 group psychotherapy 105 Gudden, Bernhard von 301


Guidelines for Psychologists Working with Animals 118–20 Guilford, J.P. 48 habits 16 Hadad, M. 63 Haggard, P. 128 Hall, G. Stanley 139 halo effect 16 Hamilton, W.D. 193 happiness 243 hard AI 172, 173, 174 hard determinism 6, 44, 148 Harlow, Harry 226 Harlow, Margaret 25 Harnad, Stevan 176 Harré, Rom 11, 42, 63, 69, 75, 76–7, 76, 86, 99–100, 102, 123, 150, 162–3, 169–70, 203, 204, 205, 288–9, 291, 301, 302 Harris, B. 18–19, 22–3, 143–4, 264 Hartnett, O. 54–5 Hayes, P.J. 174 Hazlitt, Victoria 26 Heather, N. 39, 43–4, 46, 98, 304, 313 Heidegger, Martin 73–4, 239, 310, 311 Heider, Fritz 99–101 heliocentric model of the universe 30, 31 Helmholtz, Hermann 2, 130, 206, 207 hemispherectomy 274 hemispheric asymmetry see functional lateralization of brain Heneghan, L. 190, 191 Herbart, Johan Friedrich 207, 208 hereditarian bias 8 heredity 8, 253, 255; see also nature– nurture debate hermeneutic understanding 214 hermeneutics 72–5; and Freud 214 Herrnstein, R.J. 266, 267 Herskovits, M.J. 247 heterogeneity 47 Hewett, C.J.M. 38 hidden states 69 hierarchical model 48 hierarchy of needs 229–31 higher mental processes 4, 49, 72 Hilliard, A.G. 267 Hilliard, R.B. 51

Hillix, W.A. 138–9, 139 hindbrain 115, 116 Hinde, R.A. 189, 191 hippocampus 115 Hippocrates 296–7, 298 historical relativism 20–2, 82, 295 Hitch, G. 161–2 Hitzig, Eduard 118 Hobbes, Thomas 31–2 Hobbs, S. 141, 142 holistic approach 226, 231, 234 Hollway, Wendy 77 Holt, R.R. 284 homosexuality 294, 295, 307 horizontal collaboration 83 hormonal system 110 Horney, Karen 219, 227, 237 Hull, Clarke L. 10, 14, 17, 18, 157 human nature 45, 82, 86, 130, 196–7, 197, 198 human welfare 57–8 Humanistic Psychology 41, 49, 226–9; and Existentialism 236–40; hierarchy of needs 229–31; phenomenology 227–8; Positive Psychology 243–4; as a science of human being 233–6; and the self 228–9, 278–81; self-actualization 223, 227, 230, 231, 232–3, 232, 233, 234, 280, 281, 282 Hume, David 6, 14, 34, 46 humours 113, 297 Humphrey, N. 112 Huntington’s disease 125 Hurlburt, R.T. 48 Husserl, Edmund 227–8, 310 Huxley, Thomas Henry 184 Hwang, K.K. 249 hypnotism 207 hypothalamus 115, 116 hypothetical constructs 22, 57, 92; id, ego, and superego 209–11; Personal Construct Theory (PCT) 101–7, 105 hypothetico-deductive method 52–3 ‘I–me’ distinction 276, 285 I–Thou vs. I–It 234–5 ICD see International Classification of Diseases (ICD) id 204, 208, 209–10, 223, 276–7



idea intensity 207 idealism 41 identical twins 255 identification-with-the-aggressor 211 identity 241; token 42; type 42 identity crisis theory 223, 281–2 idiographic approach 48–51, 50, 233, 283–4 idiotism 300 imagination 113 implicit behavioural recognition 216 implosion 312 imposed etics 247, 249 impression formation 16 imprinting 189 impulsive helping 195 inborn biases 36 inclusive fitness 193 incommensurability 62, 63 incongruence 228, 229, 280, 281 independent variables 46, 47 indeterminism 44–5 individual cases 49–51, 50 individual differences 6, 34, 47, 48, 139, 197, 245–6, 283, 293; and cultural differences 247–52; Galton 252–6; see also intelligence tests individual norms 48–51, 50 Individual Psychology 204 individual traits 283–4 individual uniqueness 282, 283–4 individualism 43, 59, 68–9, 71, 84, 291 individualism–collectivism cultural syndrome 291 individuation 277 inductive method 37, 51–4 infantile sexuality 206 inferences, unconscious 206 information-processing approach 17, 63, 157, 158–62, 166, 172 innate releasing mechanism (IRM) 189 instinct 188–9 instrumental conditioning see operant conditioning intellectual history of Psychology 18–19 intelligence quotient (IQ) 22–3, 259 intelligence tests 55, 256; Army Alpha and Beta tests 259–63, 260; Binet 256–9; culture-fairness 250–2, 264, 265–6;


ethical issues 266–7; Galton 253–4, 256, 257; mental age 258–9, 261–2; and race 261, 262–5, 266, 267; V-bias and S-bias 263–4 intentionality 174, 176, 177 interactionism 41 intercept bias 264 internal validity 46, 47 internalization of moral values 210–11 International Classification of Diseases (ICD) 302, 305 internet, and memory 165 interpretation see hermeneutics intersubjective verifiability 9 introspectionism 2–4, 7, 8, 15, 64 involuntary behaviour 12 IQ (intelligence quotient) 22–3, 259; see also intelligence tests IRM see innate releasing mechanism (IRM) Itamul elders, New Guinea 166 Jacobs, M. 208, 209 Jacobson, L. 79 Jahoda, G. 247, 251 James, William 5–6, 44, 49, 50, 111, 112, 138, 139, 139, 188, 254, 271, 276, 285 Janet, Pierre 207 Jaspers, Karl 310, 312 Jensen, Arthur 263–4, 265, 267 Johnson, Virginia E. 220 Jones, D. 4, 20, 81, 241 Jones, Mary Cover 143 Journal of Personality & Social Psychology 53–4 Journal of Social Issues 200 Joynson, R.B. 18, 58, 64, 97, 98, 111 ‘Judas Eye’ experiments 171 Jung, Carl Gustav 204, 214–15, 277–8 just noticeable difference (j.n.d) 3 Kahneman, Daniel 216 Kakar, S. 84 Kamin, Leon 22–3, 264–5 Kant, Immanuel 81 Karmiloff-Smith, A. 36 Kay, H. 58 Kelly, George 49, 101–7, 102, 227, 289 Kepler, Johannes 30 Kierkegaard, Soren 239, 310


kin selection 193, 195 Kinsey, Alfred 220 Kirsner, D. 310, 312 Kitzinger, C. 88 Klein, Melanie 204 Kline, P. 57, 216 Kluckhohn, C. 246, 283, 293 Knapp, T.J. 48 knowledge: cumulative 37, 59–63; experiential 235; methodological theory of 70; and Social Constructionism 84; sociology of scientific knowledge (SSK) 81 Knox, Robert 186 Koedt, A. 219, 220 Koestler, A. 95 Koffka, Kurt 14, 15, 16 Köhler, Wolfgang 14, 15, 99 Kool, Sander 241 Koro 308 Kpelle people, Liberia 165–6, 249 Kraepelin, Emil 49, 300–3, 301 Krahé, B. 284 Kuhn, Thomas 59–63, 60, 61, 64 Kupfer, D.J. 307 Lachman, R. 175 Ladd, George T. 139 Ladd-Franklin, Christine 25 Laing, R.D. 239, 309–13 Lakatos, I. 62 Lamarck, Jean-Baptiste 182 language 17, 56, 84, 93; and postmodernism 80–1; and the self 286, 287, 288–9 Lashley, Karl 118 latent images 215 latent learning 16, 151 lateralization of brain function 116–18, 274–5 Lattal, K.A. 9, 10, 14, 144, 150 law of association by contiguity 34 law of association by similarity 34 law of effect 140, 145–6 law of equipotentiality 118 law of mass action 118 law of three stages 38 Lea, S.E.G. 191 learning: associative 133; imprinting 189;

instinct for 188; latent 16, 151; observational 93, 152; place 151; re-learning 207; trial-and-error 140; see also classical conditioning Leary, M.R. 276 Lee, A. 51 LeFrancois, G.R. 63, 64 legal/moral responsibility 95, 127 Legge, D. 98–9, 109, 245, 246 Leibniz, Gottfried Wilhelm 35–6, 206, 227 LepineIn, Marc 199–200 leprosy 299 Lerner, G. 25 Leslie, J.C. 13, 149 levels of processing (LOP) model 162 Lewin, Kurt 99, 100 Libet, Benjamin 128–9 Lilienfeld, S.O. 126, 127 limbic system 115, 116 Lindsay, G. 59 Linley, P.A. 243 Lipsedge, M. 294, 295 literal recall 167, 168 Little Albert study 140–4 Little Peter study 142–3 Littlewood, R. 294, 295 localization of brain function 116–18, 123–4 Locke, John 6, 14, 32–3, 35 Loeb, Jacques 7, 8 Loftus, Elizabeth 218 Logotherapy 238 long-term memory (LTM) 160–2, 161 looking-glass self 285 LOP see levels of processing (LOP) model Lorenz, Konrad 188, 189, 190–1, 190 lower mental processes 3–4, 49, 72 LTM see long-term memory (LTM) Luckmann, T. 81, 286 Luria, A.R. 41–2 McAdams, D.P. 238 McAllister, M. 59 McClelland, J.L. 18 Maccoby, E.E. 79 McCrae, R.R. 48 McDougall, William 130, 170 McFadden, M. 85 McGhee, P. 126, 127



Macquarrie, John 239 macro-Social Constructionism 86, 88 Maddux, J.E. 314–15 magnetic resonance imaging (MRI) 125, 126 Magnusson, E. 92–3, 308 Mahler, Margaret 204 mainstream Psychology 9, 38, 67–70, 71, 87, 293; gender bias 77–8 maintenance rehearsal 162 Malamuth, N.M. 200 Malik, K. 196–7 Malone, J.C. 9 Malthus, Thomas 183 Malthusian cycle 183 mania 300 manic-depressive insanity 301 Maori 290–1 Maracek, J. 92–3, 308 Martineau, Harriet 183 Marx, Karl 81, 207, 311 Marx, M.H. 138–9, 139 masculinist bias 24–5, 58, 77 masculinity, politics of 199–200 Maslow, Abraham 49, 223, 226–7, 226, 229–33, 232, 234–6, 243–4, 278 mass action, law of 118 Massachusetts Institute of Technology (MIT) 17, 171 Masters, William H. 220 material causes 11–12, 148 materialism 31, 37, 39–41, 42–3, 71 maturation 36, 110 May, Rollo 226 Mead, George Herbert 276, 285, 286–7 meaning, human quest for 238 meaningfulness 94 meaninglessness 240 mechanical causes 95 mechanism 31, 37, 39–41, 46, 71, 224, 225–6, 234 medical model 243, 244, 303–4 medieval period 163, 282, 297–8 medulla oblongata 115, 116 melancholia 300 Melanchthon, Philipp 112, 113 memes 195–6 memetics 195–6 memory 16, 17, 76, 113, 157, 158; as


association 158–62; brain-damaged patient studies 161; constructivist approach to 162–71; and culture 165–6; declarative 161; episodic 161; everyday remembering 166–9; false memories 216–18; Freud’s view of 217; and internet 165; levels of processing (LOP) model 162; long-term 160–2, 161; multi-store model (MSM) 160–2, 161; procedural 161; reconstructive 163–4, 176, 218; recovered memories 216–18; re-learning 207; repisodic 168; repression 208–9, 217; and schemas 163, 169, 171; screen memories 217; semantic 161; sensory 160, 161; short-term 160–2, 161; as subjective experience 159–60; working 161–2 menstrual cycle 77 mental age 258–9, 261–2 mental causation 40–1 mental illness: classification of 300–2; cultural relativism 295, 304, 307–9; culture-bound syndromes 308; defining 303–7; emergence of psychiatry 300–3; history of 295–300; Laing 309–13; re-medicalization of 305–6, 307; social construction of 313–15; transcultural psychiatry (TCP) 309; see also schizophrenia mental representations 16 mental subnormality 258 mentalism 41 mereological fallacy 129 meritocracy 266 Merleau-Ponty, Maurice 310 mesencephalon 115, 116 Mesmer, Franz 207 metapsychology, Freud’s 203, 208, 209–11 metarepresentations 272–3 metencephalon 115, 116 methodolatry 46, 70, 71, 87, 224 Methodological Behaviourism 9, 13, 38, 149 methodological imperative 70 methodological reflexivity 92–3 methodological theory of knowledge 70 methodologism 70, 224 methodology 1, 29 metric equivalence 250 Meynert, Theodor 205

Index

microglia 115 microphysics 42, 55 micro-Social Constructionism 86, 88 midbrain 115, 116 Middle Ages 163, 282, 297–8 Middleton, D. 164 Midgley, M. 189, 193, 194, 197, 272 Milgram, Stanley 79 Mill, James 34 Mill, John Stuart 34 Miller, George A. 17, 57–8, 170 Miller, J. 266 Millett, Kate 219 Mills, J.A. 9 Milner, Marion 310 mind–brain identity theory 41 mind–brain relationship 31, 37, 39–41, 42–3, 111–12, 124; neuroscientific research 127–9 minute perceptions 35 misogyny 298 Misra, G. 83 Mitchell, Juliet 218–19 modelling 142, 152 modernism 67–70, 87; and Behaviourism 144–5 modernity 67 Modha, D. 176 Moghaddam, F. 77, 183–4, 206, 246, 250 Mollon, P. 217 monism 41 monogenists 185 monozygotic twins 255 morality: and Behaviourism 148–50; and superego 210–11 moral/legal responsibility 95, 127 Morea, P. 12 Morgan, C. Lloyd 139 Morgulis, S. 9–10 mortality salience hypothesis 243 Moscovici, Serge 281 Mousseau, M.-C. 53 movements and actions 96–7 Moyer, M.W. 115 MRI see functional MRI (fMRI); magnetic resonance imaging (MRI) Much, N. 82, 83 multi-store model (MSM) of memory 160–2, 161

Munsterberg, Hugo 48 Murdoch, Jason 272, 273 Murphy, J. 58 Murray, C. 266 Murray, D.J. 14, 15, 16, 17, 18 Murray, H.A. 246, 283, 293 My Lai massacre 200 myelencephalon 115, 116 myelin sheath 115 mysticism 224, 225 Nagel, Thomas 40, 275 naive psychology 99, 100 National Institute of Mental Health (NIMH) 307 nativism 35 natural associations 33 natural kinds 20–2, 56, 91–2, 93, 281 natural selection see evolutionary theory nature–nurture debate 34, 39, 144, 198, 252, 253, 255 Naturwissenschaften (natural sciences) 4, 49–50, 50, 57, 62, 67, 72, 111, 213 Naughton, J. 62 Nazi concentration camps 238 Nazi ideology 190 needs, hierarchy of 229–31 negative reinforcement 12, 145–6, 148 Neimeyer, R.A. 315 Neisser, Ulric 18, 167–8 nerve impulses 114 nervous system 109, 110, 114; animals 120; see also brain neurocentrism 127 neurodegeneration 115 neurodeterminism 127, 130–1 neuronal pruning 115 neurons 114 neurophysiology 42 neuroprotein 177 neuropsychoanalysis 204, 214, 216 neuroscience 124–9, 130, 290; and Freud 214 neurosis 300 neurotransmitters 114 new history of Psychology 23–5, 265 Newell, Allen 17, 18, 171, 173 Newton, Isaac 31, 69 Nicolson, P. 77–8

Nietzsche, Friedrich 79, 81, 310 Nisbett, R.E. 216 Nixon, Richard 167 nominal fallacy 234 nomothetic approach 37, 48–51, 50, 233, 283–4, 293 Nonhuman Rights Project (NRP) 290 nonsense syllables 158–9, 163 noradrenaline 114 normal distribution 254 normal science 60–1, 61, 62, 64 NRP see Nonhuman Rights Project (NRP) Nye, D. 13, 149 object relations school 204 objectivity 37, 39, 47, 55–6, 71 observational learning 93, 152 occipital lobe 116 octopus 119, 120 O’Donohue, W. 12, 13, 148, 150 Oedipus complex 206, 215, 217, 219 Okasha, S. 53, 60, 63 oligodendrocytes 115 Olson, S.E. 186 ontogenesis 110, 189, 230 ontological insecurity 312–13 ontology 29 operant behaviour 12, 145, 148 operant conditioning 10, 13, 15, 133, 145–7, 147, 150 oral traditions 165–6 organic psychosis 305 origin myths 4, 20, 144 Ornstein, R.E. 274 Orthodox scholasticism 30 O’Shea, M. 113–14 Osler, William 123 overdetermination 213 pain, animals 120–3 Paivio, A. 18 Palermo, D.S. 63, 64 Paludi, M.A. 25 paradigm shifts 61, 61, 62, 64 paradigms 60–1, 61, 62, 63 paradox of altruism 192–3 paranoia 301 paraphrenia 301 parapraxes 208, 212

parapsychology 53–4 parasympathetic nervous system 114 parietal lobe 116 Parkin, A.J. 175 Parkinson’s disease 112, 125 parochialism, disciplinary 83 participant variables 47, 245, 246, 293 pathologization of women 77 patriarchy 77 Pavlov, Ivan 9–10, 49, 134–8, 134, 136, 136, 152 Pavlovian conditioning see classical conditioning PCT see Personal Construct Theory (PCT) peak experiences 230, 233, 278 Pearson, Karl 254, 256 Pearson’s r 256 Peck, D. 107 Penfield, Wilder 123–4 penis envy 219 Penrose, R. 177 people-as-objects 71 perception 15–16, 35, 100, 171, 231; constructivist theory of 207 perceptual set 171 perceptual thresholds 206 peripheral nervous system 110, 114 peripheralist materialism 41 person approach 246 persona 215, 277 personal construct theory 49 Personal Construct Theory (PCT) 101–7, 105, 289 personal reflexivity 92 personal self 277 personality 57; dimensional theory of 51, 296, 297; factor-analytic theories of 48; id, ego, and superego 203–4, 208, 209–11, 223, 276–7; individual traits 283–4; see also Personal Construct Theory (PCT) person-centred therapy 228–9, 279 personhood 290 PET scanning 125, 126 Peters, R.S. 96 Peterson, L.R. 161 Peterson, M.J. 161 petrification 313 phenomenology 124, 227–8, 235, 289, 313

Phillips, H. 115 philosophical dualism 31, 37, 39–40, 113 phobias 140–4 phrenology 116, 117 phylogenesis 110, 189, 230 physiology, history of 112–14 Piaget, Jean 26, 36 Pike, K.L. 248, 249 Pinel, Philippe 299, 300, 306 Pinker, Steven 173, 176–7, 188, 194, 200 place learning theory 151 plasticity 36, 176 Plato 20, 206, 297, 315 pleasure principle 209 Pleistocene period 200 pluralism 74, 80 polygenists 185 pons 115, 116 Popper, Karl 12, 52–3, 61–2, 204, 216 population growth 183 positioning theory 77 positive discrimination 266 Positive Psychology (PP) 243–4 positive reinforcement 12, 145, 148 positive self-regard 281 positivism 7, 11, 34, 37, 38, 39, 54–5, 57, 62, 71, 87; Feminist critiques of 54–5, 77–8 positron emission tomography (PET) 125, 126 Postman, Leo 169, 171 postmodernism 21, 79–81, 87; and the self 287–9 Potter, Jonathan 76, 77, 81, 167, 168, 290–1 power 88; and gender 77, 78, 79, 199–200 power differences 46 PP see Positive Psychology (PP) Pragmatism see Functionalism pre-science 60, 61, 64 pre-specification 36 preconscious 208, 212, 214 prediction and control 56–8 predispositions 215 prefrontal cortex 128 Prehistoric period 295 presentist history 19 Prilleltensky, I. 59 primary mental abilities 48

primary process thinking 210 primary reinforcers 146 primates 16, 115, 118, 122, 290 primordial images 215 Prince, J. 54–5 principle of causality 212, 213 problematic nature of Psychology 29 problem-solving 16 procedural memory 161 process approach 245 progress see scientific progress prosencephalon 115, 116 prosopagnosia 216 protoplasm chauvinism 177 prototypes 82 proximate mechanisms 198 pseudo-science 52, 53–4 psychedelic model of schizophrenia 311–13 psychiatry 304; emergence of 300–3; Laing 309–13; re-medicalization of 305–6, 307; transcultural 309 psychic determinism 46, 212–13, 223 psychic energy 209 psychoanalysis 203, 204, 212, 213, 218–19, 305, 306, 310 psychoanalytic model of schizophrenia 311–13 psychoanalytic theory 12, 45–6, 63, 73, 101, 203, 204; false memories 216–18; and feminism 218–20; and free will 212–14; and hermeneutic science 214; and neuroscience 214; psychic determinism 46, 212–13, 223; recovered memories 216–18; repression 208–9, 217; scientific status of 52, 204, 216; structure of the personality 203–4, 208, 209–11, 223, 276–7; unconscious mind 204, 206–9, 214–15 Psychodynamic Psychology 203–4, 223, 256; and the self 276–8; see also psychoanalytic theory psychological abnormality see mental illness psychological altruism 192, 194 psychological kinds 20–2, 56, 57, 92, 93, 281 psychological reality 91–2 psychologists’ fallacy 3 psychometric tests 48, 139, 259; see also intelligence tests

psychopathology 63; see also mental illness psychophysical parallelism 41 psychophysics 3, 171, 206, 254 psychophysiological processes 3–4, 49, 72 psychosis 300, 301; organic and functional 305 Psychosocial theory 204 psychotherapy 74, 105, 203, 208, 212, 214, 223, 231, 234 Ptolemy, Claudius 30 publication bias 53–4 Puccetti, R. 275 punishment 12, 145–6, 148 Puritanism 282 puzzle boxes 139, 140, 145, 146–7, 147, 150 Pyszczynski, Tom 237, 240, 241 Pythagoras 297 qualitative methods 39, 51, 74 quantitative methods 38, 51, 71, 74 Quetelet, Adolphe 254 race: and evolutionary theory 185–7, 190; and intelligence tests 261, 262–5, 266, 267 Rachman, S. 143 radial glia 115 Radical Behaviourism 10–14, 41, 147–50 radical environmentalism 144 radioactive labelling 125 RAH see rape adaptation hypothesis (RAH) Ramachandran, Vilayanur S. 272, 273–4 Ramberg, B. 73–4 Rank, Otto 237, 279 rape 199, 200 rape adaptation hypothesis (RAH) 199, 200 Raskin, J.D. 315 Rastle, K. 20 rationalism 39 RDoC see research domain criteria (RDoC) reaction formation 52, 216 Read, J. 296–7, 298, 299, 302–3 Reader, W. 188–9, 191, 198 realism 37, 55–6, 71, 86, 91 reality principle 210 reasons, as causes 95 reconstruction history 25 reconstructive memory 163–4, 176, 218

recovered memories 216–18 reductionism 31, 37, 41–4, 47, 57, 225, 234; biological 110–12; neurocentrism 127; and the self 271–2; selfish gene concept 194 redundancy, brain 176 Rees, G. 216 reflective activity 35 reflexes 135 reflexive self-awareness 92 reflexivity 55–6, 92; in research 92–3 Regan, T. 120 rehearsal 160, 161, 162, 166 rehumanization of science 234 reification 55, 211 reinforcement: cognitive interpretation of 151, 152; negative 12, 145–6, 148; positive 12, 145, 148 reinforcement schedules 146–7, 147 relativism 20–2, 62, 86–8, 247–8; historical 20–2, 82, 295; see also cultural relativism re-learning 207 Repertory Grid Test 104–5, 105 repetition, and memory 159, 163 repisodic memory 168 replication studies 54 repression 208–9, 217 Rescorla, R.A. 152 research domain criteria (RDoC) 307 research objectives 51 residual rules 294 resilience, brain 176 resistance to extinction 147, 147 respondent behaviour 12, 145 respondent conditioning see classical conditioning retrograde amnesia 161 revisionist histories of Psychology 22–5 revolutionary science 61, 61, 62, 63, 64 rhombencephalon 115, 116 Richards, G. 3, 4, 7–8, 55–6, 81, 92, 117, 123, 158, 187, 188–9, 204, 255–6 Ritchie, Stuart 54 Rivers, W.H.R. 162 Robinson, A. 121–2 Robinson, O. 51 Rogers, Carl 49, 149, 223, 226, 228–9, 231, 234, 244, 278–81, 278 Role Construct Repertory Grid 104

Romanes, George John 139, 188 Romantic era 282 Rosa, A. 164, 169 Rosalie Rayner 140–4 Rosch, E.H. 82 Rose, S. 43, 176 Rosenthal, R. 79 Rowan, J. 230, 232, 233, 233, 236 Rubin, E. 16 Rush, Benjamin 299 Russo, N.F. 24, 25 Rutherford, A. 6, 7, 9, 10, 14, 32, 35, 144, 150, 181–2, 183, 184, 185, 187, 205, 226–7, 252–3, 254, 255–6, 257 Rutkin, A. 290 Rutter, M. 307 Ryan, A. 94, 95, 96 Rycroft, Charles 213, 310 S-bias 263–4 Salmon, P. 104 Samelson, F. 10 Sartre, Jean-Paul 239, 310 Satel, S. 126, 127 schedules of reinforcement 146–7, 147 Scheff, T.J. 294 schemas 163, 169, 171 schizophrenia 105, 115, 125, 248, 275, 294, 300; cultural relativism 309; Kraepelin’s work on 301, 302; Laing’s models of 311–13; ontological insecurity 312–13; Szasz’s concept of 304–5 Schopenhauer, Arthur 208 scientific progress 19, 20, 37, 59–63 scientific realism see realism scientific revolution 30–1, 61, 61, 62, 63, 64 scientism 46, 69–70, 71, 72, 87, 224 screen memories 217 Scribner, S. 165–6 Scripture, Edward W. 139 Searle, J.R. 172, 173–4, 177 Sechenov, I.M. 135 second-order conditioning 137 secondary process thinking 210 secondary reinforcers 146 secondary traits 283, 284 seduction theory 206, 217 Segall, M.H. 165, 247–8, 251 selective attention 17, 157

self 14, 16, 35, 215; as agent 239; animals and personhood 290; Behaviourism 275–6; Biopsychology 271–5; Cognitive Psychology 276; and culture 290–1; defining features of 273; developmental theory of 286; in historical context 281–5; Humanistic Psychology 228–9, 278–81; individual uniqueness 282, 283–4; and language 286, 287, 288–9; looking-glass self 285; postmodernism 287–9; Psychodynamic Psychology 276–8; social origins of 285–7; split-brain studies 118, 274–5 self-actualization 223, 227, 230, 231, 232–3, 232, 233, 234, 277, 280, 281, 282 self-awareness 242; temporal 121 self-concept see self self-consciousness 122 self-control 58 self-determination 59 self-esteem 230 self-image 228–9, 280, 281, 285 self-interaction 286 self-transcendence 224 self-understanding 73, 94 selfish gene concept 193–4 selfish memes concept 195–6 selfishness 193–4 Seligman, Martin 143, 243, 244 semantic memory 161 sensory memory 160, 161 sentience 120–2 serial position effect 161 serial reproduction 164, 165, 169 serotonin 114 sexism 58, 298; within Psychology 24–5 sexual dimorphism 187, 198, 199–200 sexual revolution 219 sexual selection 187, 200 sexuality: infantile 206; women’s 219–20 Shackleton, V.J. 105 shadow 215 Sharon, I. 246 Sherrington, Charles 123 Shields, S. 185 Shiffrin, R.M. 160–1, 161 short-term memory (STM) 160–2, 161 Shotter, J. 45, 58, 88, 164

Shweder, R.A. 83, 248 Siegler, M. 306 sign stimuli 189 silver staining 114 similarity 31, 33; law of association by 34 simile of the cave 206 Simon, Herbert 17, 18, 171, 173 Simon, Theodore 258 simultaneous association 32 simultaneous conditioning 136 single-photon/positron emission computerized tomography (SPECT) 125 single-subject experimental designs 50–1, 150 Sinha, D. 83 situational variables 47 Skinner, B.F. 8, 10–14, 15, 41, 49, 57, 97, 133, 145–50, 147, 275–6 Skinner box 145, 146–7, 147, 150 Skinnerian conditioning see operant conditioning SLT see social learning theory (SLT) Smith, Adam 183 Smith, J.A. 70, 71 Smith, P.B. 246, 291 Smith, P.K. 63 Smith, R. 117, 252–3, 255–7 Sneddon, L.U. 120 Snellius, Rudolph 113 social cognitive theory 152 Social Constructionism 41, 62, 77, 78–86, 87; defining 82–5; everyday remembering 166–9; and Feminist Psychology 78–9; and mental illness 313–15; micro- and macro- 86, 88; and postmodernism 80–1; and relativism 86–8; and the self 277, 287–9; sociological influences 81–2 social emotions 121 social interaction 81, 84, 86, 100, 101, 115 social learning theory (SLT) 152, 200 social level explanations of behaviour 43–4, 59 social phenomenology 313 Social Psychology 81–2, 246, 276, 285–7 Social Representation Theory (SRT) 86, 101 social roles 46 sociobiology 63, 182, 189, 191–7 sociology of science 60, 62 sociology of scientific knowledge (SSK) 81

soft AI 172 soft determinism 6, 44, 148, 229 Solms, M. 214 Solomon, S. 242 somatic nervous system 110, 114 Sophists 315 soul 113, 225 Soviet Psychology 138 Spanos, N.P. 298 Spearman, C. 48, 259 species selection 194 speciesism 122–3 species-specific behaviour 189 Sperry, Roger 118, 274 spindle cells 115 Spinoza, Baruch 73 split-brain studies 118, 274–5 spontaneous recovery of conditioning 138 Spurzheim, J.C. 117 Squire, L.R. 161 SRT see Social Representation Theory (SRT) stages of scientific development 60–3, 61, 64 Stainton Rogers, R. 68, 70 Standard Social Science Model (SSSM) 198 Stanford–Binet Scale 55, 259–60 statistical correlation 256 statistics 139 sterilization, forced 55 Stern, William 100, 259, 260 Stevens, R. 25 Stigler, J. 83 STM see short-term memory (STM) Stout, G.F. 111 Strachey, James 212 strong AI 172, 173, 174 Structuralism 2–4, 7, 8, 15, 64, 79 structure of intellect 48 successive association 32 suchness meaning 235 Sue, S. 267, 297 suffering, animals 120–3 suggestibility in psychological experiments 257–8 Sullivan, Harry Stack 302 Sulloway, F.J. 213 superconducting quantum imaging/interference device (SQUID) 125 superego 204, 208, 210–11, 219, 223, 276–7

Swazi herdsmen 166 symbol manipulation 173–4 symbolic immortality 242 symbolic interactionism 43–4, 81, 286 symbolic logic 17, 171 sympathetic nervous system 114 synapses 114 synaptic gaps 114 syphilis 300 systematic desensitization 142–3 Szasz, Thomas 304–5, 306–7, 314 tabula rasa 32, 35, 188, 198 Tallis, R. 113–14, 127 Tavris, C. 79 Taylor, R. 44 TCP see transcultural psychiatry (TCP) technology, and aggression 191 tectum 115, 116 tegmentum 115, 116 telencephalon 115, 116 telephone syndrome 272 temperament 113 temporal lobe 116 temporal self-awareness 121 Teo, T. 29, 70 Terman, Lewis 259, 260, 262, 264, 265 terror management theory (TMT) 241–3 thalamus 115, 116 Thomas, K. 101 Thompson, Clara 219 Thorndike, Edward L. 49, 139–40, 139, 145, 227 Thorne, B. 229, 234, 278–9 Thorngate, W. 51 Thornhill, R. 199 three stages, law of 38 thresholds, perceptual 206 Thurstone, L.L. 48 Tillich, Paul 310 Tinbergen, Nikolaas 188, 189 Titchener, Edward B. 49 TMT see terror management theory (TMT) Toates, F. 109–10, 112 token identity 42 Tolman, Edward C. 16, 151 Tooby, John 198, 200 trace conditioning 136 transcultural psychiatry (TCP) 309

Transcultural Psychology 84 transference 208, 310 translation equivalence 250, 251 trephining 295 triadic reciprocal causation 36 trial-and-error learning 140 truth, correspondence theory of 37, 55–6, 86, 91 Tuke, William 299, 300, 306 Tulving, E. 18, 161 Turing, Alan 173, 174 Turing test 174 twin study method 255 two-component task studies 161 two-factor theory 48 type identity 42 UCR see unconditioned response (UCR) UCS see unconditioned stimulus (UCS) unbroken lineage myth 20 unconditional positive regard 281 unconditioned response (UCR) 135, 136, 136 unconditioned stimulus (UCS) 135, 136, 136, 152 unconscious inferences 206–7 unconscious mind 35, 129, 204, 206–9, 212; cognitive 216; collective unconscious 215, 277, 278; Freud and Jung compared 214–15, 277; repression 208–9, 217; and the self 277–8 uniqueness, individual 282, 283–4 Universal Darwinism 195 universal truths 74 universalism 69, 71, 82, 83, 245 universes of discourse 9, 43, 55 utopia 10, 145, 149 V-bias 263–4 vaginal orgasm 219–20 Valentine, E.R. 25, 26, 64 value-free 54, 55, 58–9, 69, 71, 235 values 54–5, 58–9, 235 Van Langenhove, L. 69, 70, 72, 75 variable interval reinforcement 147 variable ratio reinforcement 147 variation hypothesis 187 verbatim recall 168 verification vs. refutation 53

Vernon, P.E. 48, 171 Verstehen 50, 50, 57, 73 vertical collaboration 83 Victorian era 282 Vietnam War 200 vigilant coma 272 Volta, Alessandro 113 voluntary behaviour 12 Wagner, D.A. 166 Walsh, R.T.G. 20 Ward, A.F. 165 Washburn, Margaret 25 Watergate scandal 167–8 Watkins, M.J. 162 Watson, John B. 4, 7–10, 7, 14, 15, 39, 45–6, 49, 56, 97, 133, 140–5, 185, 275–6 weak AI 172 Wearing, Clive 161 Weber, E.H. 3 Weber–Fechner law 3 Wegner, D.M. 165 Weiner, B. 102–3 Weisstein, N. 78–9 Wernicke’s area 116, 118 Wertheimer, Max 14, 15, 99, 227 West, C. 79 Wetherell, Margaret 77, 86, 88, 187, 290–1 Weyer, Johan 298 Whiggish history 19, 20, 59, 72 White, R. 308 Whitlow, D. 107 Wilhelm, K. 121, 122 Wilkinson, S. 92

Willig, C. 92 Willis, Thomas 113 Wilmsen-Thornhill, N. 199 Wilson, Edward O. 189, 191, 192 Wilson, G.T. 233–4 Wilson, T.D. 216 Winch, P.G. 94 Windelband, Wilhelm 48, 49, 50, 51 Winnicott, Donald 204, 310 Wise, Stephen 290 Wiseman, Richard 54 witchcraft 297, 298, 306 Wittgenstein, L. 80 Wolfe, K.H. 159–60 Wolpe, J. 143 women, pathologization of 77 women’s sexuality, Freud’s theories on 219–20 Woodworth, Robert S. 138, 139 working memory 161–2 Workman, L. 188–9, 191, 198 World Health Organization 302 Wundt, Wilhelm 2–4, 2, 8, 15, 19, 35, 49, 72, 130, 159–60, 254 XXP see Experimental Existential Psychology Yalom, Irvin D. 239, 240–1 Yerkes, R.M. 9–10 Yerkes, Robert 259, 260, 261, 262–3, 264, 265 Zimmerman, D.H. 79