

3-System Theory of the Cognitive Brain: A Post-Piagetian Approach to Cognitive Development puts forward Olivier Houdé’s 3-System theory of the cognitive brain, based on numerous post-Piagetian psychological and brain imaging data acquired from children and adults. This ground-breaking theory simultaneously anchors itself in a deep understanding of the history of psychology and fuels current debates on thinking, reasoning and cognitive development. Spanning the long-term history of psychology, from Plato and Aristotle to more current experimental psychology, this pioneering work goes beyond the approaches of Kahneman (i.e. System 1 theory) and Piaget (i.e. System 2 theory) to put forward a theory in which the inhibitory-control system (i.e. System 3) takes precedence. Houdé argues that the brain contains a third control system located in the prefrontal cortex which is dedicated to inhibiting Kahneman’s intuitive heuristics system and activating Piaget’s logical algorithms system anywhere in the brain on a case-by-case basis, depending on the goal and context of the task. 3-System Theory of the Cognitive Brain simultaneously explains the early logical abilities discovered in babies, the dynamic, strategic and non-linear process of cognitive development in children, and the fast heuristics and biases observed in adults. Houdé considers the exciting implications of this theory on neuroeducation using examples from the classroom.

This book is essential reading for students and researchers in cognitive development and education, child psychology, reasoning and the neurosciences.

Olivier Houdé is Professor of Psychology at the University of Paris and the honorary director of the Laboratory for the Psychology of Child Development and Education (LaPsyDÉ) at the Sorbonne. He is the Editor-in-Chief of the Dictionary of Cognitive Science (Routledge, 2004) and the co-Editor-in-Chief of The Cambridge Handbook of Cognitive Development (forthcoming).

Essays in Developmental Psychology

North American Editor: Henry Wellman, University of Michigan at Ann Arbor
UK Editors: Claire Hughes, University of Cambridge; Michelle Ellefson, University of Cambridge

Essays in Developmental Psychology is designed to meet the need for rapid publication of brief volumes in developmental psychology. The series defines developmental psychology in its broadest terms and covers such topics as social development, cognitive development, developmental neuropsychology and neuroscience, language development, learning difficulties, developmental psychopathology and applied issues. Each volume in the series will make a conceptual contribution to the topic by reviewing and synthesizing the existing research literature, by advancing theory in the area, or by some combination of these missions. The principal aim is that authors will provide an overview of their own highly successful research program in an area. It is also expected that volumes will, to some extent, include an assessment of current knowledge and identification of possible future trends in research. Each book will be a self-contained unit supplying the advanced reader with a well-structured review of the work described and evaluated.

Published
Goswami: Analogical Reasoning in Children
Cox: Children’s Drawings of the Human Figure
Harris: Language Experience and Early Language Development
Garton: Social Interaction and the Development of Language and Cognition
Bryant & Goswami: Phonological Skills and Learning to Read
Collins & Goodnow: Development According to Parents

For updated information about published and forthcoming titles in the Essays in Developmental Psychology series, please visit: www.routledge.com/series/SE0532

3-SYSTEM THEORY OF THE COGNITIVE BRAIN
A Post-Piagetian Approach to Cognitive Development

Olivier Houdé

First published in English 2019 by Routledge 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN and by Routledge 52 Vanderbilt Avenue, New York, NY 10017 Routledge is an imprint of the Taylor & Francis Group, an informa business © 2014 Presses Universitaires de France Le Raisonnement © 2019 Presses Universitaires de France L’histoire de la psychologie Translation from French: Dr Andrew Brown (Part I) and Louise Kerrigan (Part II), in association with First Edition Translations Ltd, Cambridge, UK. The right of Olivier Houdé to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers. Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloging-in-Publication Data Names: Houdé, Olivier, author. Title: 3-system theory of the cognitive brain : a post-Piagetian approach to cognitive development / Olivier Houdé. Other titles: Raisonnement. English | Three-system theory of the cognitive brain Description: First Edition. | New York : Routledge, 2019. | Includes bibliographical references. Identifiers: LCCN 2018045247 (print) | LCCN 2018047399 (ebook) | ISBN 9781315115535 (Ebook) | ISBN 9781138069695 (hardback) | ISBN 9781138069701 (pbk.) Subjects: LCSH: Cognition in children. | Inhibition. 
| Reasoning in children. Classification: LCC BF723.C5 (ebook) | LCC BF723.C5 H678 2019 (print) | DDC 155.4/13–dc23 LC record available at https://lccn.loc.gov/2018045247 ISBN: 978-1-138-06969-5 (hbk) ISBN: 978-1-138-06970-1 (pbk) ISBN: 978-1-315-11553-5 (ebk) Typeset in Bembo by Swales & Willis Ltd, Exeter, Devon, UK


Preface to the English edition: one, two or three systems of thinking in human beings?



History of theories about thinking: philosophy, biology and psychology



From Psyche to the logos: antiquity


Faith, truth and reasoning in the Middle Ages



The inconstancy of the human being: from the Renaissance to the Enlightenment


Towards a science of psychology: the nineteenth and twentieth centuries





3-system theory of thinking and reasoning



Jean Piaget’s theory or the logical system



The dual-system theories: System 1 (intuition) and System 2 (logic)





Inhibiting in order to reason: System 3 (executive)



The paradox of reasoning in infants


Conclusion

Index


PREFACE TO THE ENGLISH EDITION
One, two or three systems of thinking in human beings?

Richard H. Thaler received the Nobel Prize in Economics in 2017 for his work in behavioural economics. The award was for taking into account the cognitive biases that cause us to make decisions in ways that may seem illogical or irrational (Thaler, 2015; Thaler & Sunstein, 2008). This was not a new area. Daniel Kahneman had already been awarded the same prize in 2002 for his work on the irrational, often unconscious heuristics behind the way in which we reason and make decisions. He had thus questioned the postulate of the rationality and lucidity of economic agents. In an illuminating book, Thinking, Fast and Slow, published in 2011, Kahneman very eloquently summarized the situation. There are two systems in the human brain: ‘System 1’ is heuristic, approximate and fast, while ‘System 2’ is logical and algorithmic, analytical and exact, but slower. The problem, according to Kahneman, is that we very rarely use this latter system, preferring to rely on System 1, which dominates our thinking via automatic and unconscious cognitive processes, emotions and so on. Hence our illogical and sometimes absurd decisions.

Kahneman’s work did not pose a challenge to economics alone, but also to the psychology of cognitive development. In the twentieth century, the psychologist Jean Piaget (1896–1980) taught us that, little by little, stage by stage, children and adolescents become increasingly logical, conscious and rational beings through a slow and incremental process of abstraction (‘reflective abstraction’ in Piaget’s terminology). This meant that at the beginning of childhood the over-hasty intuitions of System 1 dominated (in the pre-logical, intuitive stages), but that the situation was later reversed and, ultimately, in the older adolescent and the adult, System 2 (logic, reason) would predominate. Thus, Piaget’s conclusions and observations were the complete opposite of Kahneman’s.



In addition, at the end of the twentieth and the beginning of the twenty-first century, many researchers throughout the world showed that babies are much more logical, conscious (lucid) and rational than was imagined. Their work is summarized by the psychologist and philosopher Alison Gopnik in her books The Scientist in the Crib (1999) and The Philosophical Baby (2009), as well as in an article in the journal Science published in 2012: ‘Scientific Thinking in Young Children’. Other experimental researchers have contributed significantly to revealing these early cognitive skills in babies, including Elizabeth Spelke and Renée Baillargeon in the case of physical cognition, and Karen Wynn for mathematical cognition. These works contradict Piaget’s view that the very young child is still too intuitive and illogical to grasp the concepts of physics and mathematics.

All this provides us with a set of paradoxes. Piaget describes the child as becoming more and more logical until adulthood. Kahneman and Thaler describe adults as not logical at all. Conversely, specialists in the psychology of babies describe forms of logical rationality as present at the very beginning of life. So what exactly happens in the human brain during cognitive development? This book is devoted to this issue: it is an attempt to resolve these paradoxes. Science often works like this, seeking to resolve paradoxes. I propose a new complementary theory: that of System 3. Systems 1 and 2 here are still those of Kahneman (and Piaget in the case of System 2). I add System 3, which depends on the maturation and good functioning of the neurons of the prefrontal cortex, at the front of our brain (this lobe is well known to neurologists for its superior functions of control and arbitration: see Fuster, 1997, 2003; Luria, 1962).
From the physiological point of view, the anterior part of the cortex contains neurons with long axons that can send inhibitory or activating orders throughout the rest of the brain and, if necessary, ‘silence’ entire assemblies of neurons in favour of others (Dehaene, 2014). The epicentre of inhibitory control is located in the inferior frontal gyrus, often in the right hemisphere (depending on the task). This has now been demonstrated by brain imaging (functional magnetic resonance imaging), both in adults (Aron et al., 2004, 2014) and in children during development (Houdé et al., 2010). There are, in our brains, very fast and intuitive heuristic processes or cognitive biases (System 1, Kahneman)1 and logical rules or exact algorithms (System 2, Piaget) that can at any time come into competition with one another, resulting in what are called ‘cognitive conflicts’ between assemblies of neurons. They can be observed in all the basic kinds of learning: reading, writing, counting, thinking (or reasoning) and respecting others. The capacity of the whole brain to overcome these conflicts – that is, its intelligence or flexibility – depends on the executive control capacity of the prefrontal cortex (System 3) and its ability to inhibit System 1 and activate System 2, case by case, depending on the goal and the context of the task. Metacognitive training in the lab or at school can help: it is useful for both children and adults, since even adults remain poor reasoners in many situations where their System 1 dominates, often unconsciously.



The potential role of emotions and feelings in this System 3, via the right ventromedial prefrontal cortex (Houdé et al., 2001; Houdé & Tzourio-Mazoyer, 2003), is very clearly documented by the neurologist Antonio Damasio’s work on homeostasis and the ‘valence’ of our decisions. The latter always varies, depending on the context, from positive (what we need to activate) to negative (what we need to inhibit), not only in our brain, but also throughout our body in its chemical make-up and viscera, its musculoskeletal system and sensory portals (Damasio & Carvalho, 2013). Homeostasis manages this vital process. Damasio, in his latest book, The Strange Order of Things: Life, Feeling, and the Making of Cultures (2018), argues that this process, which was chemical before it was neural, has been at the heart of life at all stages of evolutionary history, from the bacteria, which preceded us on Earth about 3.8 billion years ago, onwards. Homeostasis has caused biological organisms to sense what they need to do or not to do, and what is good or bad for their body, ever since the dawn of time. Damasio insists, however, that the origin of the feelings that map out these bodily emotions goes back to about 500 million years ago, with the appearance of the first nervous systems, followed by the brain. This was especially so in the development of human beings, as in the case of our ancestor Toumai, seven million years ago (the phylogeny of modern man properly speaking, Homo sapiens, dates back 300,000 years according to the latest calculations: see Hublin et al., 2017). Nevertheless, while he does discuss positive and negative valence, adaptive regulation and so on, Damasio does not talk explicitly enough about the key process of inhibition.
However, this is essential both physiologically and neurocognitively (our System 3), as has been known quite specifically in physiology ever since the nineteenth century, thanks to the work of Charles Sherrington, winner of the Nobel Prize for Physiology or Medicine in 1932 (see Smith, 1992 for a historical study of inhibition). Pavlov and Freud, for example, both used this key concept. In addition, Damasio does not tackle the more complex cognitive problem of the processes of logical thought itself and the ‘Kahneman–Piaget paradox’ of irrationality vs rationality. Yet this paradox lies at the heart of the problem of human life. Why is it that our brain – which (i) has developed through the whole history of natural selection (phylogeny) thanks to homeostasis ever since the origin of life billions of years ago, (ii) has become more complex thanks to the emotions and feelings it experiences, and (iii) already seems quite logical in the baby after birth (ontogeny) – is nevertheless still so often illogical and irrational in its later decisions, that is, in biological adaptation via thought or cognition over the course of a life? It is as if, in the human brain, thought and ideas have failed, whereas homeostasis has been perfectly successful, from bacteria onwards. Here, I attack this issue head on. It is so important, in my opinion, that it cannot be reduced to the contemporary scientific debates that began in psychology and the cognitive sciences with the work of Piaget. We first need to recall all the major stages in the history of philosophy and psychology over two millennia, that is, from Greek antiquity (a short time compared to the 300,000
years of Homo sapiens and the 3.8 billion years of the story of life). Since ancient times, all thinkers have posed, directly or indirectly, the same problem: where does our rationality come from and why are we so often irrational? This is the subject of Part I of this book. In its French edition, this part was a separate book, as the reader will probably sense. However, it was a very good idea on the part of Routledge to suggest that the two books be put together as one by adding Part II, which comprises another of my recent French books on reasoning. In Part II, I present all the experimental data accumulated over the last twenty years by my laboratory at the Sorbonne in Paris (Houdé & Borst, 2015) and by others throughout the world to provide support for the theory of System 3 and to resolve the paradox between the models of Kahneman (System 1) and Piaget (System 2).

In Greek antiquity, the philosopher Plato had already developed the idea of Homo triplex. According to him, this model involved the three systems of the soul: impetuous desire, reason and will. In today’s terms, desire corresponds to System 1, reason to System 2, and will to System 3. Plato had therefore understood one important thing: System 2, reason or pure logic, cannot do anything without the courage and ardour of System 3. I would now say that System 3 allows us to inhibit System 1 so as to activate System 2. System 3 is the will underpinned by the inhibitory action of the prefrontal cortex. Plato also anticipated the cerebrocentric approach: he had already located the rational part of the soul (System 2: mind, intellect, reason) in the brain, whereas his pupil Aristotle, at the School of Athens, still located it in the heart (cardiocentrism). Plato, however, placed desire and its impulses (System 1) in the lower abdomen, and the will (System 3) in the heart.
We now know that everything is in the brain, including the mapping of desires and emotions through feelings, even though the brain’s relations with the whole body are always intimate and continuous (Damasio, 2018). Aristotle believed in the omnipotence of logical reason (the logos) via the syllogisms (System 2) that he invented and formalized. These, he argued, were meant to combat the errors and malicious manipulations of reasoning prevalent in ancient Greece three centuries before Christ: errors such as paralogisms and sophisms, which today we call the cognitive biases inherent in our judgements (System 1). But Plato, his master, had already understood that System 2 did not have, in itself, in its bare formalism and ‘coldness’ (i.e. syllogisms), the necessary force or ardour to control System 1, that is, to inhibit it when necessary. Today, both Kahneman and all the proponents of dual process theory or Homo duplex (e.g. De Neys, 2018; Evans, 2003; Evans & Stanovich, 2013) make the same mistake as Aristotle. They place cognitive and inhibitory control within System 2. It is as if Plato had made the mistake of situating the will within reason. We must reread this wise ancient philosopher to avoid making this mistake in the current debates in cognitive psychology. Between antiquity and the Middle Ages, the Christian philosopher St Augustine introduced the notions of self-awareness and doubt (with its corollary, confidence). A common belief is that René Descartes introduced these notions
after the Renaissance, but it was actually Augustine who did so, long before. He wrote, ‘If I am wrong, I am’ (si fallor sum). These ideas were the roots of the Cartesian programme in the seventeenth century: ‘I think, therefore I am’. We will come back to this. They are also the roots of current work in experimental psychology and brain imaging on the detection of cognitive conflicts, supported by the anterior cingulate cortex (in the medial part of the frontal lobe), and on doubt, assessed by means of scales measuring confidence in reasoning and decision-making problems (De Neys et al., 2008, 2011; Shapira-Lichter et al., 2018). St Augustine also described, with considerable psychological flair, the illusions and perceptual biases of the human mind (System 1). It is the soul that creates falsity and error, he wrote, using the example of the stick that is plunged into water and seems to be bent.

Later, in the Middle Ages, the theologian St Thomas Aquinas – who was challenged, like the Catholic Church as a whole, by the return (the rediscovery) of Aristotle’s scientific logic (System 2) – suggested the existence of two systems of truth: faith (System 1) and logic (System 2). These were two different paths taken by the human mind towards the same goal (the truth). This marked the theological birth of the cognitive vicariance studied today in the neurosciences (Berthoz, 2016).

In the Renaissance, during the religious wars between Catholics and Protestants in the sixteenth century, Montaigne – the first real secular philosopher and psychologist – harshly denounced human egocentrism (or sociocentrism) and the hastiness of our judgements (System 1) that lead to intolerance. He therefore advocated an education that would help control the mind, starting in childhood, to guard against the illusion of our own egocentric perspective and our mistaken aims and judgements.
This is what is called in today’s developmental psychology ‘the training of the executive functions’ (System 3), particularly in the prefrontal cortex: inhibition and flexibility in working memory (Diamond et al., 2007; Diamond & Ling, 2016; Houdé, 2007). Montaigne, like Plato, had understood something essential: reasoning alone (System 2) is not enough. We cannot justify reasoning by reasoning, he said. We need something else: the will (Plato) or the control of the mind (Montaigne). Like Plato, Montaigne avoided the error of Aristotle who believed in the power of pure logic (the logos) alone. Then, in the seventeenth century, Descartes upgraded Aristotle’s logic (System 2) while simplifying the syllogisms that had become too complicated during the Middle Ages. He proposed a simpler Method, consisting of a series of rules to guarantee a logical mind. These rules require us to engage in the methodical doubt underlying the Cartesian cogito (Cogito, ergo sum), a reformulation of Aristotle’s logos enriched by the doubt of St Augustine. However, Pascal was quick to remind Descartes that logic (System 2) is not enough because there are two ways into the human soul: the geometrical spirit and the spirit of finesse. The latter is, in Pascal’s words, an intuitive and rapid judgement, with its flaws and its credulity, of which we are not necessarily aware – just like Kahneman’s System 1 today. The geometry of Pascal is, in turn, the logic (logos) of Aristotle or the method and cogito of Descartes (System 2).



Contemporary authors who talk in terms of the dual system or process of judgement have therefore invented nothing on the theoretical level. It was all there in Pascal – though the French philosopher is not mentioned in Kahneman’s (2011) masterful book.

In the Age of Enlightenment, the empiricist philosopher David Hume, strongly opposed to Descartes’s logical rationalism (System 2), insisted again on System 1, which in his view relied on the association of ideas from experiences in the environment: the spatial and temporal contiguities of objects and events, similarities, and so on. These powerful assembling mechanisms configure human memory and help produce our ideas and beliefs. Also prefiguring Kahneman (Thinking, Fast and Slow), Hume noted how incredibly fast this system of association is.

I could mention many other thinkers here, but we can already see how much the history of philosophy and psychology, for two millennia, has been dominated by the description of two systems of thought, one intuitive and rapid, with its flaws and its credulity (System 1), the other analytical, logical and rational, but slower (System 2). However, a few thinkers, including some of the greatest – Plato and Montaigne – had already recognized that these two systems were not enough to understand human nature and that a third system needed to be identified and educated: a form of cognitive resistance, of self-control and will, which does not arise from the workings of logic itself. This, indeed, is the difference between logic and psychology. There is no will (human emotion or feeling; Damasio, 2018) in pure rational and algorithmic logic (System 2). In the second part of the book, I will show that, thanks to the advent of experimental psychology at the end of the nineteenth century, these questions became truly scientific.
For example, Piaget demonstrated the construction of logic (System 2) stage by stage during cognitive development, based on new and very ingenious experiments with children and adolescents. Then, by means of other well-devised experiments in cognitive psychology, Kahneman demonstrated the persistence of the heuristics of judgement and decision-making which ensure that the fast, intuitive System 1 still predominates in the adult, often unconsciously (see also Evans, 2003). But, as I pointed out at the beginning of this preface, Piaget’s and Kahneman’s models are contradictory and do not suffice for the understanding of the dynamic and non-linear character (the ups and downs) of cognitive development, from the already rational baby (Gopnik, Spelke, Wynn, etc.) to the often irrational adult (Kahneman). To understand this, we must reintroduce, as Plato and Montaigne had suggested, a third system. This is my own post-Piagetian theory of cognitive development, supported by new experiments in psychology and brain imaging (see Houdé, 2000, for the first formulation of this theory in English). It allows us to go beyond those theories of the dual system or process that have been in fashion in the cognitive psychology of reasoning for almost twenty years (from Evans & Over, 1996 to De Neys, 2018). In addition, this reinvigorated theory of the three systems, following in the distant wake of Plato’s Homo triplex, means that we can integrate and explain both the experimental data of Piaget (System 2) and of Kahneman (System 1) obtained
in the twentieth century, while offering, in Montaigne’s educational spirit, new perspectives for the training of System 3. This is important in a twenty-first century marked yet again by religious wars and global terrorism and by a rampant irrationality conveyed in digital media (fake news, conspiracy theories, social and racist stereotypes, etc.; Lazer et al., 2018), sometimes reinforced by big data and the overly fast and intuitive statistical mechanisms of artificial intelligence. In all this, psychology and cognitive science have neglected the role played by the body, the emotions and feelings (in the brain) to preserve individual and collective homeostasis, as Damasio (2018) eloquently reminds us. It has been several years since he denounced Descartes’ Error (Damasio, 1994) and turned instead to the conatus of Spinoza (Damasio, 2003): we should speak of the valence of inhibition or activation for internal regulation and external adaptation to preserve the organism. These are not just negative or misleading emotions that interfere with System 1, as Kahneman (2011) suggests, but also positive and conscious emotions, in the form of one’s own intellectual feelings. One of them is the anticipation of regret, which allows us to inhibit – before it is too late – the cognitive responses prompted by System 1 when they are misleading and unsuited to the situation. These feelings correspond to metacognition: System 3 is metacognitive par excellence; it is a conscious examining and arbitration (Damasio, 1999) throughout the whole brain via short- and long-distance physiological mechanisms, in an overall neuronal workspace (Changeux, 2017; Dehaene, 2014). If System 3 is trained, however, it can become more automatic (Linzarini et al., 2017), and it can potentially become as fast as System 1, so as to stop it and ‘take the ball (the heuristic) on the fly’. This is the automation of metacognition.
It is undoubtedly a question of education and pedagogy; it would solve the problem of the speed of System 1 as identified by Kahneman (2011) – a problem that, in principle, is not psychologically insurmountable. If the System-3 cognitive control has a high level of training, as does a top athlete, it can become as fast as the heuristics of System 1. This book thus returns to the 2,000-year history of philosophical and psychological thought (Part I), too often ignored in the contemporary cognitive sciences. It brings a new vision of development after Piaget (Part II), via three interdependent cognitive systems: approximate heuristics (System 1), exact logical algorithms (System 2) and inhibitory control (System 3). It also gives hope for System-3 training, a hope that was absent from Kahneman’s theory, where the errors of System 1 predominated by definition in the face of a System 2 that was impotent, too slow and always lagging behind. The fact that it is slow, being more analytical and thoughtful, is perfectly normal and does not matter once System 1, which goes too fast, is blocked. My book therefore expresses hope for the science of education . . . but also, perhaps, for the science of economics and artificial intelligence with a human face.

Olivier Houdé
Paris, April 2018



Note
1. Heuristics are formally equivalent to Bayesian inference in the limit of infinitely strong priors (Parpart et al., 2018).

Bibliography Aron, A. et al. (2004). Inhibition and the right inferior frontal cortex. Trends in Cognitive Sciences, 8, 170–177. Aron, A. et al. (2014). Inhibition and the right inferior frontal cortex: One decade on. Trends in Cognitive Sciences, 18, 177–185. Berthoz, A. (2016). The Vicarious Brain, Creator of Worlds. Cambridge: Harvard University Press. Changeux, J.-P. (2017). Climbing brain levels of organisation from genes to consciousness. Trends in Cognitive Sciences, 21, 168–181. Damasio, A. (1994). Descartes’ Error: Emotion, Reason and the Human Brain. New York: Putnam. Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Orlando: Harcourt. Damasio, A. (2003). Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. Orlando: Harcourt. Damasio, A. (2018). The Strange Order of Things: Life, Feeling, and the Making of Cultures. New York: Pantheon Books. Damasio, A. and Carvalho, G. (2013). The nature of feelings: Evolutionary and neurobiological origins. Nature Reviews Neuroscience, 14, 143–152. De Neys, W. (Ed.) (2018). Dual Process Theory 2.0. Oxford: Routledge. De Neys, W. et al. (2008). Smarter than we think: When our brains detect that we are biased. Psychological Science, 19, 483–489. De Neys, W. et al. (2011). Biased but in doubt: Conflict and decision confidence. PLoS ONE, e15954. doi:10.1371/journal.pone.0015954. Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Viking. Diamond, A. et al. (2007). Preschool program improves cognitive control. Science, 318, 1387–1388. Diamond, A. and Ling, D. (2016). Conclusions about interventions, programs, and approaches for improving executive functions that appear justified and those that, despite much hype, do not. Developmental Cognitive Neuroscience, 18, 34–48. Evans, J. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7, 454–459. Evans, J. and Over, D. (1996). 
Rationality and Reasoning. Hove: Psychology Press. Evans, J. and Stanovich, K. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8, 223–241, 263–271. Fuster, J. (1997). The Prefrontal Cortex. New York: Raven Press. Fuster, J. (2003). Cortex and Mind. Oxford and New York: Oxford University Press. Gopnik, A. (2009). The Philosophical Baby. New York: Farrar Straus and Giroux. Gopnik, A. (2012). Scientific thinking in young children. Theoretical advances, empirical research and policy implications. Science, 337, 1623–1627. Gopnik, A. et al. (1999). The Scientist in the Crib. New York: HarperCollins. Houdé, O. (2000). Inhibition and cognitive development: Object, number, categorization, and reasoning. Cognitive Development, 15, 63–73. Houdé, O. (2007). First insights on neuropedagogy of reasoning. Thinking and Reasoning, 13, 81–89.



Houdé, O. et al. (2000). Shifting from the perceptual brain to the logical brain: The neural impact of cognitive inhibition training. Journal of Cognitive Neuroscience, 12, 721–728.
Houdé, O. et al. (2001). Access to deductive logic depends on a right ventromedial prefrontal area devoted to emotion and feeling: Evidence from a training paradigm. NeuroImage, 14, 1486–1492.
Houdé, O. et al. (2010). Mapping numerical processing, reading, and executive functions in the developing brain: An fMRI meta-analysis of 52 studies including 842 children. Developmental Science, 13, 876–885.
Houdé, O. and Borst, G. (2015). Evidence for an inhibitory-control theory of the reasoning brain. Frontiers in Human Neuroscience (Research Topic: The Reasoning Brain: The Interplay between Cognitive Neuroscience and Theories of Reasoning, edited by V. Goel, G. Navarrete, J. Prado and I. Noveck). doi:10.3389/fnhum.2015.00148.
Houdé, O. and Tzourio-Mazoyer, N. (2003). Neural foundations of logical and mathematical cognition. Nature Reviews Neuroscience, 4, 507–514.
Hublin, J.-J. et al. (2017). New fossils from Jebel Irhoud, Morocco and the pan-African origin of Homo sapiens. Nature, 546, 289–292.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Lazer, D. et al. (2018). The science of fake news. Science, 359, 1094–1096.
Linzarini, A. et al. (2017). Cognitive control outside of conscious awareness. Consciousness and Cognition, 53, 185–193.
Luria, A. (1962). Higher Cortical Functions in Man. Moscow: Moscow University Press.
Parpart, P. et al. (2018). Heuristics as Bayesian inference under extreme priors. Cognitive Psychology, 102, 127–144.
Shapira-Lichter, I. et al. (2018). Conflict monitoring mechanism at the single-neuron level in the human ventral anterior cingulate cortex. NeuroImage, 175, 45–55.
Smith, R. (1992). Inhibition: History and Meaning in the Sciences of Mind and Brain. London: Free Association Books.
Thaler, R. (2015). Misbehaving: The Making of Behavioural Economics. New York: W.W. Norton and Company.
Thaler, R. and Sunstein, C. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale University Press.


History of theories about thinking: Philosophy, biology and psychology

From Psyche to the logos


When we think of antiquity, what we tend to remember is Greek mythology, which was given a more concrete form in the eighth century BC by the poets Homer (in the Iliad and the Odyssey) and Hesiod (Works and Days), a tradition that was continued by the Roman poets Virgil and Ovid. This mythology describes the creation of the cosmos (first Chaos and then Gaia, the Earth, who bore the mountains, the sea and the sky) and the gods of Olympus; and there are numerous stories in which Olympians, Titans, Giants and even humans confront one another. One of the humans was the beautiful Psyche; her name means ‘soul’ – and from her name came ‘psychology’.

It is often said that myths lost their power to explain the world once Aristotle had promoted the logos, ‘reason’ or ‘rational speech’, the foundation of philosophy and science in ancient Greece. Rational thought, rather than myth, now came to be regarded as the road to indisputable truth. We can see this in the work of Plato, Aristotle, Herophilos and Galen, who were already starting to develop a rational science of the soul (or mind) and the brain, even if in Plato the mythological aspect persists. In fact, the two forms of thought coexisted among the Greeks.

1. Mythology: Psyche, psychopomp and Oedipus

Though ancient myths continued to inspire the arts over the centuries, in science they gradually came to be regarded as ‘primitive’. From the nineteenth century onwards, they were even seen as superfluous and incompatible with science. In the twentieth century, their status changed somewhat: they again became part of contemporary thought, especially in psychology, with Freud and the Oedipus complex, and also Jung and the Electra complex (the counterpart of the Oedipus complex). Even today, these myths continue to be handed down, with variations, and to stimulate the imagination. Almost all of them have a psychological dimension: we shall return more particularly to the myth of Oedipus. However, let us begin with Psyche and the psychopomp, who will act as our guides.

1.1 Psyche, both spiritual and material

In mythology, Psyche, or Soul, is a young mortal, the daughter of a king, and a woman of incomparable beauty. Eros (the god of love and creative power) falls in love with her, but Aphrodite, his mother, has always been jealous of Psyche and, in order to get rid of her, she subjects her to formidable trials. Psyche succeeds in overcoming these through her courage and tenacity, and Eros then leads her to Mount Olympus, where he obtains permission from Zeus, the king of the gods, to marry her. Psyche is thus deified. The Latin novel by Apuleius (123–170) known as the Metamorphoses or The Golden Ass includes the complete Roman version of this story (with Eros given his Roman name, Cupid).

A rich album of drawings, paintings and sculptures representing Eros and Psyche by the greatest artists, from the Middle Ages and the Renaissance until the nineteenth century, was published in the early 2000s.1 The cover of this album reproduces one of the most beautiful paintings of Psyche, by Édouard Picot (1817), which can be found in the Louvre in Paris, France. The book shows how this founding narrative has always fascinated Western culture.

Apart from the fact that the Greek word psyche means ‘soul’, so that the word ‘psychology’ literally means ‘the study of the soul’, there is in this myth an essential, highly illuminating vision. The most interesting aspect of this very old story is that Psyche has a spiritual and ethereal quality, through being raised to the rank of goddess, but is also (and indeed especially) a creature of flesh and blood thanks to her mortal origin. She is both spiritual and material, like psychology. This is at the very heart of psychology as a discipline. Throughout its history it has sought to achieve a delicate balance between the two aspects, spiritual and material, from the Ideas of Plato to the neurosciences (brain imaging); mind or soul on one side, matter, body and brain on the other. We find this special status even today in the epistemological and institutional position ascribed to psychology, seen as both a human and social science and as a science of life (i.e. a natural science).

1.2 The psychopomp: a guide and measure of souls

The psychopomp, another mythological figure, is as interesting as Psyche in the way he sheds light on psychology. The term ‘psychopomp’ derives from the Greek psykhopompós, meaning ‘guide of souls’. In Greek mythology, he guides the souls of the newly dead into the afterlife. There are many examples: Charon, Hermes, Hecate and Morpheus.2 In 1524, Joachim Patinir took Charon as the subject of a very beautiful oil painting on wood, Landscape with Charon Crossing the Styx, now in the Prado Museum in Madrid, Spain.

Psychopomps later reappeared in Christian beliefs: for example, the archangel St Michael, the best known of the Christian psychopomps, guides the dead and weighs the souls on the Day of Judgement. This is why he is often depicted holding scales. Two key elements that would play a part in psychology can thus be extracted from the myth of the psychopomp: the guidance and the measurement of souls.

1.3 Oedipus: family oracles and the solution of a cognitive riddle

The myth of Oedipus was reintroduced by Freud at the beginning of the twentieth century, when he gave its hero’s name to a psychological state (a complex): in psychoanalysis, Oedipus is a universal family fiction. According to Freud, the Oedipus complex is a fantasy born of the child’s desires for the parent of the opposite sex, desires that are then repressed: the boy is attracted to his mother and thus views his father as a rival. Freud drew these conclusions from clinical observations and his knowledge of Oedipus Rex, the tragedy by Sophocles (495–406 BC).

Laius, king of Thebes, learns from an oracle that if his wife Jocasta gives birth to a boy, the child will kill him. So he has the baby exposed out in the wild, with his feet bound so that he will die or be lost forever. Nevertheless, some shepherds find him, and since his bonds have given him ‘swollen feet’ (in Greek oidípous), they call him Oedipus. The child is presented and offered to the royal couple of Corinth. Oedipus, unsure of whose son he really is, goes to consult the oracle at Delphi. Instead of reassuring him, the oracle tells him that he will kill his father and marry his mother. Terrified, Oedipus decides to evade the oracle by not returning to Corinth. He flees, but on the way he encounters a traveller who provokes such anger in Oedipus that he kills him. This traveller is none other than his biological father, the king of Thebes.

On his way to Thebes, Oedipus encounters the Sphinx, a monster who poses to all who enter or leave the city a ‘deadly riddle’: as no one has thought of the answer, everyone so far has been devoured. When Oedipus takes his turn, the Sphinx asks him: ‘Which creature, with a single voice, has at first four legs, then two legs, and finally three?’ Oedipus immediately replies: ‘Man’3 (for in his early childhood man walks on his hands and feet, as an adult he stands on his legs, and in his old age he uses a stick to walk). Oedipus’s intelligence makes the Sphinx so furious that she kills herself. After this exploit, the inhabitants of Thebes offer Oedipus the vacant throne of the city: quite naturally, he marries the king’s widow, Jocasta (his biological mother). The oracle of Delphi is thus fulfilled, and Oedipus does not learn the truth until many years later.

With admirable insight, Freud took the fate of Oedipus and made it the cornerstone of the development of the human psyche: a psychosexual stage in the life of the child. In a letter to his disciple and friend Fliess, he explained this discovery – a discovery in which he firmly believed. For Freud, the horror which the story of Oedipus had aroused ever since ancient times is explained by the fact that each person, as he develops, imagines himself to be living through such a scenario with his own parents. This is undoubtedly the best-known example from the twentieth century of a belief from Greek mythology being recycled in the human and social sciences: Oedipus has become a subject of ‘science’, in the psychoanalytical sense of the word; and, surprisingly enough, this leads back to beliefs or, more precisely, to religion. Indeed, according to Freud, all human development involves going through the ‘Oedipus stage’ so that the individual can attain the heterosexual stage and the formation of the ‘superego’ (the psyche’s agency of control), which Freud sees as the source of morality and religion. Thus, the myths have continued to stimulate the scientific imagination, drawing here on clinical observations.

Finally, the myth of Oedipus can also illuminate cognitive psychology, for solving the riddle of the Sphinx is a matter of reasoning and quick thinking. In this sense, Oedipus is a cerebral hero. This was Hegel’s interpretation: for him, Oedipus facing the Sphinx is the incarnation of the power of human intelligence. Shortly afterwards, Nietzsche proposed a similar interpretation: in his eyes, the man who triumphed over the Sphinx was the founder of the Greek spirit, trusting to the power of his intelligence to solve problems which he had previously overcome by force. This leads quite naturally to Plato and Aristotle, who were philosophers and already psychologists – friends of wisdom whose new ambition was to try to construct a more rational and scientific vision of the world. These were the beginnings of a ‘science of the soul’ and – as will be seen later – of a science of the brain, as developed by the doctors Herophilos and Galen.

2. Plato: innate ideas and the will of the soul

In the centre of Raphael’s famous fresco, The School of Athens (1512), in the Vatican Museum in Rome, Italy, we see Plato (428–347 BC) and Aristotle (384–322 BC). Plato is pointing upwards to the ‘Heaven of Ideas’, while Aristotle, his pupil at the Academy, stretches his hand forward, symbolizing the earthly world. Indeed, for Aristotle (who does not believe in the Ideas as such), the general and the particular are to be found here below.

2.1 Eternal, immutable ideas: the good, the true and the beautiful

In a series of dialogues between his master Socrates (470–399 BC) and Socrates’ disciples and adversaries, Plato states that a demiurge (a god who created the universe) shaped the initial chaos into a cosmos given form by the eternal Ideas. Among these, the Good, a meta-Idea higher than all the others, forms the basis of ethics. It guides the individual and collective behaviour of men in the sciences and in society (the City). More precisely, Plato sees the cosmos as a five-tiered hierarchy. In descending order, these levels comprise: Ideas or intelligible archetypes (the Good, the True, the Beautiful, etc.); numbers; geometric solids; the elements (fire, air, water, earth, ether); and concrete things. With his intuitive grasp of simple geometric solids, Plato prefigures mathematical theories in physics and chemistry.



For the author of the Republic, souls, thought of as ‘immortal’, have already contemplated the world of Ideas during their prenatal period. Birth disturbs this process. That is why ‘the body is a tomb’, according to the philosopher’s expression. Nevertheless, thanks to the psychological phenomenon of reminiscence, triggered by the perception of concrete things in the sensible world (relations, numbers and qualities), we can rediscover the innate Ideas. For example, Plato states that we grasp the idea of perfect equality with pieces of wood that are almost equal, but that the ‘equal’ itself does not reside in the pieces of wood. It is we ourselves who, from sensible objects, infer their essence. In the same way, when we look at six knuckle bones, we cannot say that the number six is in any one of them, or in all of them together, since it is we ourselves who have made the connection between the Idea and the objects. Even Meno’s slave, who appears in the dialogue of the same name, is able to deduce, starting from a right-angled triangle, Pythagoras’ theorem: the square of the hypotenuse is equal to the sum of the squares of the other two sides. A century before Plato, the geometer Pythagoras (580–495 BC) had indeed demonstrated this theorem – a ‘discovery of the True’.

According to Plato, these immutable Ideas are latent knowledge: they are within us from birth, without our knowing it. In this already subtle psychology, it is not a question of ignorance but of latency: dormant truths which man seeks. It is at the cost of mental effort, often requiring a whole education, that the innate Ideas can (re)appear. They are recalled and reactivated. Hence the importance, from a Platonic perspective, of the maieutics of Socrates, the art of ‘giving birth to minds’ (Socrates’ mother was apparently a midwife) by causing doubt and astonishment in one’s interlocutor. Plato thus underlines the educational role of the social environment. However, learning is not a matter of filling the mind, seen as an empty tabula rasa (‘blank slate’), by means of mere sensation, as it is in Aristotle, and as the young Theaetetus, an empiricist, suggests in a Socratic dialogue narrated by Plato. On the contrary, the Ideas are already present from birth, as a pre-existing cognitive store, a capital sum of reason that can be reactivated. Here we identify the ancient root of the rationalist and innatist trend that stretched across two millennia, via Descartes, Kant, Chomsky and Fodor, right up to the ‘core knowledge’ of infant cognitive psychology described by Elizabeth S. Spelke (2000).

2.2 The allegory of the cave: approaching the Ideas

In this pedagogical allegory, Plato illustrates the intellectual approach which man, a prisoner in his cave, must follow in order to (re)ascend from perceptions to the Ideas, from this world to the beyond, since the concrete things that we perceive actually exist only as imitations or reproductions, reflections of the Ideas. The prisoners are chained and shackled in a deep cave with their backs to the entrance, and all they can see on the rear wall are the shadows of objects carried along by slaves in front of a fire. These objects are themselves reflections or representations of real things: the prisoners perceive only reflections of reflections. One prisoner is freed of his shackles and manages to reach the light of the world of Ideas outside, beyond the artificial objects, bypassing both the slaves who carry them and the fire that casts their shadows. Nevertheless, the prisoner has to be forced to go outside, for the light is dazzling. Plato thus expresses metaphorically the role of education and society, which must play an active part, even if this involves coercion. Once he has become accustomed to the outside world, the former prisoner becomes aware of his full status as a philosopher, since he has seen the light of the Sun, the Good that illuminates the Ideas. However, he still has to act according to the law of the City, for the good of others, and so he goes back down to the prisoners in their cave. Thus, according to Plato, only those who grasp the true reasons for things, the Ideas, can then explain these reasons, and educate and govern men in the City. This is the only way to govern a state properly.

As Jean Château (1978) pointed out in Les Grandes Psychologies dans l’Antiquité (The Major Psychological Theories of Antiquity), Plato already understood that intelligence is a detour that we are forced to take. This detour allows the human mind, through dialectics (argument, discussion and reasoning), to circumvent the contradictions of false knowledge, the world of the senses and appearances. It is a psychology of truth: the quest for a knowledge that will be stable in time, immutable, distinct from the variable and subjective opinion (doxa) of the sophists and politicians of the time.4 Plato had also perceived that intelligence must always seek support back in the world of the concrete. But for Plato, all this – one’s aptitude for following a detour and returning – depends very much on the differences between men, both in nature and in education. This was already a differential psychology, a psychology of the intellectual and moral aptitudes that were fostered, to a greater or lesser extent, by the social environment.

If Plato favours the immortality of the soul in which the timeless Ideas are found, this is because any access to the Ideas via this ‘beautiful detour’ is very limited during earthly life. Therefore, only death, delivering us from our bodily and temporal constraints, allows us to grasp the whole truth (God). It is in this sense that Platonic philosophy considers the human body as a tomb; but Plato also opposed the atomistic and medical circles of the time, who considered the soul as ‘the harmony of the lyre’, the organization of the living being. For Plato, this would mean forgetting God and the Ideas. He thus introduces a strong dualism between the soul and the body, which would be taken up again in the seventeenth century by Descartes but subsequently rejected by today’s neurosciences. In this regard, Jean-Pierre Changeux’s (2012) book, with its eloquent title On the True, the Beautiful and the Good: A New Neural Approach, is an anti-Platonic manifesto against the dualism of soul and body. However, the human soul in the course of its earthly life is the subject of a rather subtle description by Plato, who in this respect was a good psychologist.



2.3 The three systems of the soul: desire, reason and will

A common belief is that Plato saw the soul as separate from the body, but this is not quite true. According to Plato, our soul is governed by three systems, located in the lower abdomen, in the head and in the heart. The first, the épithymêticon, corresponds to quasi-animal desires (our ‘wild beast’) such as hunger, thirst and sexuality. The second system, the noûs, is the rational part of the soul (mind, intellect, reason), situated in the head: this is the brain, which gives us access to the Ideas. Finally, the third, the thymos, is the will, situated in the heart. These three components are compared, in the Republic and the Phaedrus, to a celestial team of horses: the ‘chariot of the soul’. According to this Platonic myth, a frail charioteer, the noûs (the ‘intellect’), drives his chariot as it is pulled by an impetuous black horse (the épithymêticon) and a white horse (the thymos) that has the courage and ardour to obey the charioteer. The white horse can pull the chariot in the right direction, that of reason and the Ideas. However, this is not at all easy.

By means of this Homo triplex, Plato had already clearly identified the fundamentals of clinical and pathological psychology as theorized by Freud, as well as the cognitive psychology of reasoning. The black horse corresponds to a perversion, a bad habit, an emotional bias, which comes not from the outside but from within the soul: the enemy is within. In addition, reason, however beautiful and pure it may be (like the Ideas), is always fragile. Without the help of the thymos, which governs the épithymêticon by its strength and its ardour (sometimes strong, sometimes less so, acting as a psychological control), reason is impotent. In the Gorgias, Plato also insists that a ‘wicked man’, if he is self-aware, must seek help from society by actually demanding punishment. We see here the idea of a psychological aid coming from outside, just like our contemporary psychotherapies (albeit in a less punitive guise).

By placing the noûs in the head, Plato had judiciously foreshadowed the cerebro-centrism of the neurosciences, while Aristotle still defended cardio-centrism. As I have pointed out elsewhere (Houdé et al., 2010), it is in Socrates’ final dialogue, recounted by Plato in the Phaedo, that this theory of cerebro-centrism is described: it is the brain that gives us the sensations of hearing, sight and smell. From these arise our stable memories and opinions, from which truth (sometimes going against opinion) and true knowledge emerge. For Plato, this cerebro-centrism, which corresponds to that of Democritus (460–370 BC),5 does not exclude a world of Ideas quite independent of us: a world that man (with his brain) can potentially discover and contemplate once he has come out from the cave. This feeling of an elsewhere – the ‘Heaven of Ideas’ – has been passed on over the centuries, whether or not associated with Plato’s belief in a creator God.

In a contemporary debate between professors at the Collège de France on cerebral constructivism (as proposed by Changeux) and realism (Changeux & Connes, 1995), Alain Connes strongly resists the thesis of a neuronal origin of mathematical reality (numbers, geometry, etc.). He thinks, as do many mathematicians, such as Cédric Villani in France, that this reality exists (like Plato’s Ideas) independently of any human investigation. Therefore, it is not constructed either by the brain or by cognitive development, as Piaget had proposed (Piaget, 2015). However, Connes adds: ‘we have access to it only by means of our brain – at the price, in Valéry’s memorable phrase, of a rare mixture of concentration and desire’ (Changeux & Connes, 1995, p. 26).

3. Aristotle: the science of syllogisms and empiricism

Let us return to Raphael’s fresco, The School of Athens, and Aristotle’s place in it. According to Aristotle, at birth we are a tabula rasa. Thus, it is only within things themselves that their essence is to be found, and not in a superior Idea that transcends them.6 What interests Aristotle is the discovery of what the earthly world is made of, the ‘genera of being’: quality, quantity, relation, place, time, action and so on, and their subdivisions. That is why his approach is encyclopaedic and known as ‘systematic’: he classifies diversity. Aristotle has thus influenced separate branches of knowledge – psychology, logic and zoology – whose very names include the logos, implying reason and language. He distinguishes between external objects, mental images and their communication by words; but in order to understand the world and to speak correctly of it, he wishes to establish a rigorous logical reasoning that connects objects to words: a science of sciences.7 He is therefore important to philosophy, certainly, but he also prefigures the psychology of reasoning. Indeed, we owe him the discovery of the ‘syllogism’.

3.1 The science of syllogisms

It was in the Organon (‘instrument’) that Aristotle introduced the syllogism. To do this, he used abstract patterns with variables. But a simpler example, one which would meet with more historical success, was formulated by William of Occam (1285–1347) in the Middle Ages, and it followed the spirit of Aristotle’s rigorous reasoning: if (a) all men are mortal and (b) Socrates is a man, then (c) Socrates is mortal. This form of reasoning a–b–c is defined as an inference and is formed of two premises (one major, the other minor) and one conclusion. Every syllogism involves three terms that appear twice (here: man, mortal and Socrates), either in a premise and in the conclusion (mortal and Socrates) or in each of the premises (man). Deductive inference is the cognitive operation that forges the link between the premises and the conclusion.

From the point of view of logic, the valid character of a deduction (the above example is valid) depends on the structure of the inference and not on the content of the sentences as such. This is why logic is described as ‘independent of the contents’. Thus, Aristotle discovered that certain conclusions could be verified and validated just by their form. This is the foundation of logic as a deductive instrument of science. Aristotle used logic to detect and refute the errors of reasoning, the sophisms and paralogisms, of his contemporaries in Greece.8 Here lies the ancient root of the study of the cognitive biases of reasoning (Evans, Kahneman) and of the psychological forms of syllogisms – the rules of mental logic (Braine) and visual-spatial models (Johnson-Laird).9

A fundamental problem nevertheless remained for Aristotle: if the science of syllogisms aims at the objective knowledge of the world, how do we obtain the initial true propositions, which do not themselves result from a prior deduction? Indeed, we can produce an argument whose conclusion (c) is logical (valid) but which has a premise – either (a) or (b) – that is false. For example: if (a) all men are immortal and (b) Socrates is a man, then (c) Socrates is immortal. However, Socrates, the Greek philosopher and master of Plato, was mortal. How can we ensure that the initial propositions are trustworthy? This is where induction and empiricism find their place in Aristotle’s theory. Not believing in the Ideas of Plato (as starting principles that are inside us without our knowledge), he appeals to a sure and certain faculty of recognition and judgement, which results from perception: there is no idea without such a prior impression. Psychological empiricism is therefore inevitable. The tabula rasa is a corollary of the science of syllogisms.
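Aristotle’s point that validity depends on the form of the inference, not on its content, can be made concrete in a modern proof assistant. The following Lean 4 sketch (the names Person, Man, Mortal and socrates are illustrative, not part of the original text) checks Occam’s syllogism purely from the shape of its premises:

```lean
-- Validity depends on the form of the inference, not on its content:
-- any predicates substituted for Man and Mortal yield the same proof.
variable (Person : Type)               -- the domain of discourse
variable (Man Mortal : Person → Prop)  -- the two predicate terms
variable (socrates : Person)           -- the singular term

-- Major premise: all men are mortal.
-- Minor premise: Socrates is a man.
-- Conclusion:    Socrates is mortal.
example (major : ∀ x, Man x → Mortal x) (minor : Man socrates) :
    Mortal socrates :=
  major socrates minor
```

Substituting any other predicates for Man and Mortal leaves the proof unchanged, which is exactly what ‘independent of the contents’ means; and, as with the ‘all men are immortal’ variant above, the proof would still be accepted under a false premise, since logical validity does not guarantee the truth of the premises.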

3.2 Tabula rasa, experience and the biological origin of the soul

According to Aristotle, the senses and induction lie at the basis of everything. From sensation arises memory, from memory comes experience, and from reasoned experience (the syllogism) comes the conception of the universal. It is through this inductive empirical process alone – without needing first to attain pre-existing Ideas – that we can grasp the indemonstrable elements or axioms that are seen as trustworthy, appropriate and self-evident. From these the necessary deductions will then follow. At birth, the intellect is similar to a tablet on which nothing is yet written, argues Aristotle in his Peri psychès (On the Soul) – arguably the first comprehensive treatise on psychology in history (just as the Organon was for logic). He describes in fine detail how a sensible object causes a specialized activity of the cognitive function. The latter does not receive the sensible form ‘from without’, but re-creates it ‘from within’, from its own powers, when it is affected from without. This nuanced empiricism (internal re-creation) already foreshadows the cognitive process of assimilation-accommodation described by Piaget (see Chapter 4).

According to Plato’s pupil, the powers of the cognitive function come from a general breath of life, ‘the soul’, shared by all animate beings: plants, endowed with faculties of nourishment and growth; animals, endowed with faculties of motion and perception in addition to the preceding ones; and humans, who also benefit from the cognitive faculties of thought and reasoning. In this way, three souls intertwined by epigenesis form a hierarchy in man: the vegetative, the sensitive and the intellective. Human intelligence is an extension of biological adaptation, once again prefiguring the work of Piaget. The living world of Aristotle, however, is fixed: it is a scala naturae (‘scale of beings’) created by God, in which man dominates an eternal universe (like that of Socrates and Plato), not a transformist world in evolution of the kind later discovered by Buffon, Lamarck and Darwin. Unable to imagine this revolution in biology, Aristotle, as a good logician, wished to avoid an infinite regress: so he formulated the hypothesis of a god, the cause of all, who is himself uncaused – in other words, the ‘first cause’. Aristotle, like Plato, therefore has a god. Nevertheless, we must immediately qualify this apparent similarity. The god of Plato is, as we have seen, consonant with the ancient root of rationalism (the Ideas) and innatism, while the god of Aristotle, who emphasized the logos and the tabula rasa, looks forward to the empiricism and associationism of Locke and Hume in the seventeenth and eighteenth centuries (see Chapter 3).

Aristotle also develops a psychology of happiness, which he finds in the felicity provided by the specifically human ability to exercise reason, thanks to the intellective soul. This self-reflexive idea is to be found throughout the whole history of psychology, both in literature and in science, from the Renaissance, with Montaigne and his Essays, to Ribot at the end of the nineteenth century, in his La Psychologie des sentiments (The Psychology of Feelings, a study of intellectual feelings). In the twentieth century, we find it in works from Valéry (Monsieur Teste and Le Bilan de l’Intelligence, or Survey of Intelligence) to Damasio (The Feeling of What Happens: Body and Emotion in the Making of Consciousness). Other observations made by Aristotle demonstrate what a psychological pioneer he was: he stressed the importance of imitation in the child, for example.

On the other hand, he was less of a pioneer in that he remained a cardio-centrist, making the heart (the ‘Acropolis of the body’) the main source of the soul’s movement because of its heat (the brain being too ‘cold’). In contrast, Plato was more specifically cerebro-centrist with his noûs, while also ascribing a key role to the thymos, located in the heart. With regard to the still complicated issue of the relationship between the soul (or thought, the mind) and the brain, psychology has long been seeking answers. In 1596, Shakespeare, in The Merchant of Venice, wondered whether the soul was in the heart or in the brain. Who was right, Aristotle or Plato? In this regard, two physicians of antiquity made a decisive contribution to the study of the brain’s role, one that lies at the roots of the neuroscientific approach to psychology: Herophilos and Galen.

From Psyche to the logos

4. Herophilos and Galen: the first mapping of the higher functions of the brain

4.1 The ventricular theory

The Greek physician Herophilos (330–260 BC) was the first to imagine a precise location for the superior functions of the brain in the cerebral ventricles (cavities inside the brain that are filled with 'cerebrospinal' liquid and communicate both with each other and with the sheath of the spinal cord). The presence of several intracerebral cavities, quite separate but able to exchange the 'spirits' (fluids) with which they are filled, constitutes for Herophilos the ideal basis for the first theory of the mapping of cognitive functions. Over the following centuries, ventricular theory, also known as 'cell theory', would develop to an extraordinary degree, especially in the Middle Ages, when three essential faculties were identified, each associated with a ventricular cavity: common sense, reason and memory. This theory was developed until the beginning of the seventeenth century, the position of the new faculties being defined each time in terms of the addition or division of cells (or ventricles).

4.2 Animal physiology and the 'wonderful network'

Claudius Galen (129–200), another celebrated Greek doctor of antiquity, proposed a synthesis of the cerebro-centrist theory of Plato and the views of Herophilos. His ideas drew on numerous clinical observations and on the first experiments in animal physiology. He advised people not to consult the gods to find out about the soul that guides us, but rather to learn from an anatomist. Galen built his theory on the rete mirabile ('wonderful network' or 'net'): a dense array of vessels, observable in some mammals, consisting of the trunks of the main arteries of the brain. By dissecting animals, he inferred its existence in man and ascribed to it the function of converting the vital principles, supposedly made by the left ventricle of the heart, into spiritual principles (the thymos and the noûs of Plato). According to Galen, these vital principles, called 'humours' or 'ethers', are diffused through the cerebral ventricles to the nerves so as to allow movement and transmit sensations.

The theory of the rete mirabile and the localization of brain functions in the cerebral ventricles were handed down through the centuries until the dissection of human brains in the Renaissance demonstrated the absence of such a network. There then appeared the first drawings of the cortex itself, with its convolutions (see the beautiful anatomical plates by Andreas Vesalius), but it was not until the beginning of the nineteenth century (notably with Gall) that a major role was ascribed to it. However, it was Herophilos and Galen in antiquity who launched the great scientific project in which medicine, philosophy and psychology all combined, and which in the twentieth century finally led to the neurosciences.

5. Caring for the diseases of the soul

Inspired by Stoicism, Galen also advocated practices of self-control.10 These consist of exercises of concentration and meditation intended to calm the emotions. (Meditation was at the heart of Indian psychology, to be described later.) Galen's aim was not just to map the brain with his animal experiments, but also to cure the diseases of the human soul.


History of theories about thinking

In antiquity, the ‘cure of souls’ was practised on Kos, the island of Hippocrates, the father of medicine (460–370 BC). Here, in order to regain mental health, the patient, after drinking the water of the ‘spring of memory’, had to lie down on a kliné (‘bed’ or ‘couch’ in Greek, hence the word ‘clinic’, meaning what is practised at the patient’s bedside) and then recount what he saw in a dream triggered by frescoes decorating his room. Parot and Richelle (1992) see this as an ancient source of clinical psychology (and, we might add, of projective techniques) and of what in the twentieth century would comprise Freud’s psychoanalytical treatment, intended to remind the patient of some latent element, some ‘pathogenic secret’. Plato already had this idea of a latent world and a necessary reminiscence – indeed, it was often found in antiquity – but less to reveal a pathogenic secret than to gain access to the Good, the True and the Beautiful. What is interesting for the history of psychology (Galen was the main example) is that a direct attempt was being made to develop two complementary approaches. The first one was the scientific and cognitive interest in understanding the way the intellective soul (thought) was connected to the brain. The second one was the clinical interest, first in the medical sense, in calming the emotions, and even in curing diseases of the soul. These two approaches still lie at the heart of contemporary psychology.

6. Non-Western antiquity: Indian psychology and Buddhism

Western thought, of course, is closely connected with Christianity – this was the case in late antiquity with St Augustine, as we shall see. Eastern thought, especially in India, owes much to Buddhism (Buddha is said to have lived in about the fifth century BC). Even in ancient times, deep differences between India and Greece were starting to emerge, indicating two opposite directions being followed by psychology. The lucidity of Buddhism is not part of Western understanding. On the one hand, in Buddhism, psychology is experienced through what goes on in the body, through breathing (meditation, for example); on the other hand, in the West, we try to conceive psychology from an intellectualist and scientific point of view (Ideas, logos, syllogisms), according to the tradition inaugurated by Plato and followed by Aristotle.

In the Indian approach, we look inside; in the Western world, we look outside, just as Aristotle did in Raphael's fresco, pointing to the earthly world and endeavouring to systematize it. By looking inside, Indian psychology aims to find in each person the cosmic and divine energy, another god or, at the very least, a way of accessing truth other than the Socratic dialectic and the 'Heaven of Ideas' dear to Plato. Moreover, Indian thought ignores the distinction between the soul and the body, which appears in Plato in the shape of independent and pre-existing Ideas, a distinction that was reinforced by Descartes. Finally, Buddhist thought emphasizes the illusory character of this world, and its approach consists in dissolving our 'mental formations'.



As Château (1978) has astutely remarked, seeking to conceive psychology

is the approach taken by our Western culture in which the modern scientific psychologist endeavours to draw a picture, a kind of 'blueprint', of the psyche [the brain, we might now add], to disassemble its cogs, to check their functioning and to create models for them. (p. 63)

Conversely, experiencing the psyche from inside (in the Indian and Buddhist tradition), without cognitive introspection, allows us to dissolve our mental formations (our cognitions, our emotions) and to attain a state of 'mental calm', clarity of mind and compassion.

This non-Western aspect of the history of ancient psychology should be particularly emphasized since the Dalai Lama has recently provided the impulse for the emergence of a completely new trend of experimental cognitive science (neuroscience, developmental psychology and pedagogy) on meditation. Within this field, questions as essential as awareness, perception, attention, executive functions and cerebral plasticity are being addressed, drawing on the most powerful technologies of the contemporary psychological sciences (Tang et al., 2015) and the benefit of 2,500 years of the practice of meditation or mindfulness in the Buddhist tradition. Many scientists wish to 'secularize' this research, but it is in this tradition that the historical origins of the field are rooted. There is also, in the same vein, a renewal of interest in the media in so-called 'positive' psychology or the psychology of happiness associated with meditation, with less of a scientific purpose and more of an emphasis on personal development and psychotherapeutic practices in psychiatry (André, 2014).

7. From antiquity to the Middle Ages: St Augustine

In this very rapid (too rapid) overview of ancient history, we have had to concentrate on the essentials: the founding myths, including Psyche, a figure both spiritual (soul) and material (body, brain); then the Greek philosophers, with Plato and Aristotle emphasizing the soul and the Greek physicians Herophilos and Galen emphasizing the brain, complemented by the different approach taken by Indian psychology. In the course of the long period of antiquity, many other Greek or Roman (Latin) thinkers and many other ideas could well be cited.11 For example, Stoicism appeared between the fourth and second centuries BC (already mentioned in connection with Galen). Zeno of Citium founded it; later practitioners included Posidonius (the master and friend of Cicero), Epictetus, Seneca and the Emperor Marcus Aurelius. Then there were the ideas of Epicurus and Lucretius, which Château (1978) has eloquently analysed. Thus, Western antiquity was full of ideas about man, the soul, God, the cosmos, numbers, syllogisms, the heart, the brain, biology and so on. Psychology sprang from these many roots.

However, the major change, or link, between this period and the Middle Ages came from St Augustine (354–430), the bishop of Hippo in the Roman province of Africa, now Annaba in Algeria. Christianity – the life and teachings of Jesus Christ – gave new impetus to this kind of thought at the beginning of our era. With St Augustine, psychology became a matter of personal commitment.

7.1 A Christian psychology: intimacy, faith and experience

Psychology was still very new, and it oscillated constantly between drawing on faith and on experience. St Augustine made use of Greek philosophy, especially Plato and Neoplatonism (via Plotinus, 205–270), rather than Aristotle. From Plato, he retained the Ideas, which are the basis of truth in mathematics (the True), beauty (the Beautiful) and justice (the Good). However, in St Augustine, these cannot be the remains of some external world of Ideas that we have 'contemplated' in our prenatal lives. On the contrary, Ideas can be revealed in the course of our lives by an inner light, deep within our souls. This conception of truth contrasts with that of the myth of the cave where, according to Plato, the light is outside.

In St Augustine, psychology is intimate, subjective and self-oriented (a feature it shares with Indian psychology). Through the faith and love of God (the heart), we can discover within us, according to St Augustine, an inner knowledge. This is the Divine Word, transcendent (above our soul), which a Master (God, Christ) imparts to us. If men can sometimes agree, he says, this is not because information passes from one to the other through language, but because they are listening to the same (inner) voice, the voice of God, acting as the prime mover. This voice tells men about the original Ideas or images thought by God, through which he created things. St Augustine thus proposes a synthesis between the Ideas of Plato and the Gospel story in which the Divine Word is at the beginning of all things. In connection with this, St Augustine formulates the already very psychological hypothesis of a 'thought without language': an internal reality, which precedes and animates language, a reality that, in his view, is neither Greek nor Latin, nor any other language.
This powerful idea of a thought without language (which is no longer divine thought, but simply a thought) reappeared in the twentieth century: in Piaget – operative thought is the result of action, which precedes and animates language – and in cognitive psychology – when considering the baby or the animal, which already possesses thought (e.g. number) but does not have language. In St Augustine, however, what is at stake is a specifically human and adult access to the truth in itself, that of Christ.

St Augustine again draws on Plato's psychology when he refers to the will and the distinction between ignorance and latency. Thus, in De Trinitate (On the Trinity) he states that man finds within himself not what he did not know but what he was not thinking of. The difference is a subtle one, like our contemporary distinction between competence (i.e. our cognitive, linguistic, etc. knowledge or potential) and performance (i.e. what we actually produce or say, including our errors). We have already seen that, in Plato, there was no question of ignorance but of latency when man sought the truth. In addition, for Plato this required mental effort and education. Similarly, for St Augustine, the discovery of truth requires an effort of will, defined here as the desire to know through the faith and love of God. This is a very pure, almost cognitive faith. Like the thymos of Plato (dissociated from the noûs), this will is naked. It comes before knowing; it is the desire to know. There is no understanding without will. But Grace is also needed.

This is a personal psychology, grounded in an individual development of understanding that moves towards God, whether through a particular thought or throughout a whole life, from childhood to old age. Here we can already see a genetic idea of psychology (in the sense of 'genesis' as found in Piaget; see Chapter 4): both microgenesis (the unfolding of a thought) and macrogenesis (the ages of life). Individual time is very important in St Augustine, reflecting the life of Christ and the Good News he brought: each individual works for his salvation and immortality (following the Christian sequence from the Creation to the Last Judgement, via the Incarnation and the Redemption). The deep connection with contemporary psychology, which is now scientific and no longer Christian, lies in the way that St Augustine focuses strongly on the possible individual trajectory of all men towards the truth. This truth is no longer the preserve of philosophers or scientists who approach it through the dialectic, as in Plato (for whom any hope of seeing the light outside the cave was very remote). For St Augustine, knowledge, which he associates with faith, is open to all: he advocates a 'popular Christianity'.
In each soul, there exists an inner truth that it can grasp or not grasp. This direct assimilation of knowledge to faith, where the rules of logic (the science of syllogisms) are not needed as intermediaries, becomes easier to understand if we remember that St Augustine preferred Plato to Aristotle. This development of each person's understanding (as he or she moves towards God) is closely related to memory: the word is formed from a thought, which the will connects to memory. It is an 'inner trinity' (a very Christian concept) of memory, understanding (thought) and will. For St Augustine, it is the memory of external things and the memory of God located within us (the truth of mathematics, of beauty, of justice). This twofold dimension stems from the ordering of perceptions and images, associated with judgement: in De Trinitate, St Augustine writes that if he likes the ramparts of Carthage, this is because of an aesthetic criterion, an awareness of the general Ideas (a law or Idea of the Beautiful), which he finds in himself (memory) and not from experience as such. Similarly, if an open space in the city, a stone or a picture are all deemed to be 'square', it is by virtue of the law of 'squareness' (likewise for the law of equality, etc.). In addition, as for the ramparts of Alexandria, which he has to imagine, since he has never seen the city (unlike Carthage), St Augustine is astonished that a soul can see in itself something that it has not seen anywhere else. This is how he describes mental imagery, associated in this case with the imagination. These are all psychological observations that attest to a process of an 'inner grasp of the truth' (Ideas, Word, God) linked to self-awareness: a form of inner light, or the introspection described by later psychologists.

7.2 Self-awareness: towards the Cartesian cogito

In very innovative terms, St Augustine thinks that self-awareness, as part of the soul, makes us perfectly certain of our existence, even if we are making a mistake. For, he says: 'if I am wrong, I am' (si fallor, sum).12 Here we find the roots of the Cartesian programme, both doubt and the cogito, which would be the cornerstone of the new philosophical and scientific trends that developed after the Renaissance: 'I think, therefore I am,' wrote Descartes in the Discourse on Method (1637).

In addition, St Augustine developed a psychology of what was obvious and what was erroneous. It is not reason in the soul that creates self-evident facts: it discovers them. On the other hand, the soul itself creates falsehood and error. These lie not in things themselves, but in the senses: St Augustine gives the example of the stick immersed in water that appears to be bent. Error is neither in the stick nor in the eyes that see it, but in the assent (a 'cognitive bias', as we would call it now) given by our soul to the illusion of the stick's bentness. This is already a psychology of perceptual and cognitive illusions.

If we leave aside (though this would be foolish) the religious and Christian complexion of the thought of St Augustine, many of the points in his psychology are very modern. However, by its fundamentally Christian character, it also leads directly to the theological psychology of the Middle Ages, dominated by the general question of the relations between God, faith and human reason. St Augustine proposes an intimate vision of these relations – a real psychology of the person.

Notes

1 See Sonia Cavicchioli, The Tale of Cupid and Psyche: An Illustrated History (New York: George Braziller, 2002).
2 There are traces of psychopomps in different mythologies and beliefs across the world, dating back to the most ancient times: for example, Anubis in Ancient Egypt (from the thirty-second century BC) is a man with the head of a wild dog, shown leaning over a mummy and responsible for the 'weighing of souls' or 'psychostasia' after death.
3 In the rest of the text I will continue to use the word 'man' (in lower case) to indicate both men and women.
4 The sophists were peripatetic teachers of eloquence in Ancient Greece who were prone to sacrifice logic to the task of merely persuading their audience.
5 Democritus, unlike Plato, was a materialist and atomist; he thought that only matter existed in a universe formed of atoms and void (thus excluding God and any spiritual substance).
6 In metaphysics, transcendence belongs to what has an absolutely higher nature, and lies outside the world.
7 His vision was political, too: the government of a virtuous city (politics comes from the Greek word polis, the 'city') must gain the assent of the citizens by rational argument in public debates, such as those that occurred in discussions at the foot of the Acropolis in Athens.



8 A sophism (a procedure stemming from the Sophists) is a form of reasoning that is erroneous even though valid in appearance, constructed with the aim of deceiving someone in order to convince them. A paralogism is also an erroneous form of reasoning, but it does not have the aim of deceiving anyone, and the author himself is its first victim.
9 These points are set out in detail in Part II.
10 Stoicism, a doctrine of temperance and detachment, was based on the acceptance of whatever does not depend on us.
11 Even before Socrates, there were thinkers whose conceptions already looked forward to cognitive and developmental psychology, such as Parmenides (sixth to fifth century BC), who thought that perception was an illusion, and indeed a fraud, compared to thought.
12 Quoted in Eleonore Stump and Norman Kretzmann (eds), The Cambridge Companion to Augustine (Cambridge: Cambridge University Press, 2001), p. 162, n. 7.

Bibliography

André, C. (2014). Mindfulness. London: Rider Books.
Changeux, J.-P. (2012). The Good, the True, and the Beautiful: A Neuronal Approach. New Haven, CT: Yale University Press.
Changeux, J.-P. and Connes, A. (1995). Conversations on Mind, Matter, and Mathematics. Princeton, NJ: Princeton University Press.
Château, J. (1978). Les Grandes Psychologies dans l'Antiquité. Paris: Vrin.
Houdé, O. et al. (Eds.) (2010). Cerveau et Psychologie. Paris: PUF.
Parot, F. and Richelle, M. (1992). Introduction à la Psychologie: Histoire et Méthodes. Paris: PUF.
Piaget, J. (2015). The Psychology of Intelligence. London: Routledge.
Spelke, E. (2000). Core knowledge. American Psychologist, 55, 1233–1243.
Tang, Y.-Y. et al. (2015). The neuroscience of mindfulness meditation. Nature Reviews Neuroscience, 16, 213–225.


The Middle Ages was a long period, stretching from the fifth century (St Augustine’s century) to the fifteenth. It corresponded to a ‘transfer of power and knowledge’ (in Latin, translatio imperii et studii), a movement from Greece to Rome, then from Rome to France and England, by way of the Arab world. Far from being an entirely dark and barbarous period, the mediaeval age saw a brilliant flowering of arts and letters (poems, novels, chronicles, songs, legends, illuminated manuscripts and miniatures, etc.).1 As Michel Zink (2015) writes, ‘everything seems different when you look at the Middle Ages through the eyes of poetry. Its castles and forests, its princesses, knights, monsters, wonders and adventures still nourish our imagination’ (p. 13). Everything also seems different when you look at the Middle Ages through the eyes of psychology. We will discover the many different modes of access to the truth, the experimental method, cognitive constructivism, the notion of the individual and so on. In order to understand fully the emergence of these notions, the reader, in particular the secular scientific psychologist of today, must, while reading this chapter, be open to the idea that the whole of life in those days revolved around faith – a faith that could be a bold, serious business, and sometimes quite intransigent.

Faith, truth and reasoning

1. Revealed truths, reasoned truths: faith and science in mediaeval psychology

'Revelation' means the knowledge directly obtained from God or through Jesus Christ in religious texts. In France in the Middle Ages these 'revealed truths' and the Christian fervour they nourished gradually became more firmly implanted. Both the real and the intellectual landscapes were shaped and magnified by the Church, in the countryside as in the cities: witness the churches, monasteries and abbeys where illuminated manuscripts were copied (such as Cluny in Burgundy); the sanctuary of Mont Saint-Michel, built in honour of St Michael, the most famous of the Christian psychopomps; and the Gothic cathedrals, especially Notre-Dame de Paris on the Île de la Cité (built between 1163 and 1345), which made the twelfth and thirteenth centuries into a Golden Age.

Our mediaeval ancestors were convinced – as were the ancient Greeks – that there were no separating walls between the real world and the supernatural world (with ways through, like the threshold of death), and they all aspired to achieving a superiority of the mind over a body which life so often maltreated. The Christian religion added to this by giving the hope and expectation of paradise to each generation. It promoted faith in a better destiny and fostered a fear of sin, which acted as a factor of social cohesion. Ever since Augustine, who was following Plato's lead ('pointing upwards to the "Heaven of Ideas"', as we have seen), ancient philosophy had been reformulated in these more explicitly religious terms. Nevertheless, the mediaeval return of Aristotle and his logic, a logic already scientific in character, would complicate things.

1.1 The scholastic revival

Until the twelfth century, the learned elite of the Middle Ages, educated in the monasteries, were trained in the theological – and psychological or introspective – doctrine of St Augustine. However, from that time on, various factors, including the growth of cities, those 'seed-beds of freedom', would change economic and social relations and, in particular, the kind of subjects studied. Thus the liberal arts developed and with them the teaching of logic. It was at this time that Aristotle's works returned to the forefront, via Muslim Spain (notably through the philosopher Averroës, 1126–1198); Arab civilization had preserved them, first in Syria, and then in Baghdad.2 The Arabic translations of the highly prized ancient Greek manuscripts of Aristotle were then retranslated into Latin. This marked the high point of scholasticism (from the Latin schola, 'school'), the theological and philosophical teaching given by the monastic schools of the European Middle Ages. Aristotle, however, was rather different from Plato, so dear to St Augustine, and a new, more Aristotelian interest in a logical and natural knowledge of the world developed. In the great cities, the first universities came into existence, the most famous being the Sorbonne in Paris, founded in the thirteenth century by Robert de Sorbon and situated in the Latin Quarter, so-called because the students there spoke Latin.

However, Franciscan monks (the order of St Francis of Assisi was founded in 1210) soon denounced the scholastic renewal, which reinforced the use of intelligence and reason (as Aristotle's logic required) in the task of justifying faith and religion within the Church. These critics included Roger Bacon (1214–1294), who, in opposition to Aristotle, preferred a return to the letter of the Bible (the revealed truths) without any logical reasoning or discussion. Aristotle's 'natural texts', like the Metaphysics, the De Anima (or On the Soul) and the Physics, were subsequently forbidden (by the Council of Sens in 1210).3 Nevertheless, the power of reasoning, even when it was not specifically Christian, was too great to be held at bay: Aristotle was definitively reinstated in the minds of the learned elite. After St Augustine and his introspective Platonic psychology, which avoided the logic of Aristotle, a new mediaeval synthesis between Christian faith on the one hand and logic on the other was now on the agenda.

1.2 St Thomas Aquinas: the alternative psychological paths of faith and reason

St Thomas Aquinas (1225–1274), an Italian theologian from the Dominican Order (founded by St Dominic in 1215), attempted to reconcile the teaching of the Church with Aristotle's conception of science. The common denominator, in Thomism, is a psychology of truth, or more exactly of the paths of truth. This psychology is a dual process: Christian revelation and (or) Aristotle's logic. It represents a kind of methodological separation between the two paths: faith (the basis of theology) and reason (the basis of philosophy and science), which both had the same goal, namely truth. These days we might say that man potentially has two brains in one: the religious brain and the scientific brain.

This was the subtle formula that Thomas Aquinas produced when asked by Pope Urban IV, who had summoned him to the Holy See in Rome in order to overcome the tensions provoked in the Church by Aristotle's texts.4 Aquinas told the Pope that since human reason, like nature, is of divine origin, it cannot contradict the dogmas of revelation. For the Pope as for all Christians, if what we believe is true, we have no need to be afraid. This very psychological use of the logical principle of non-contradiction did not completely convince the Pope, who replied that the Holy See was not anxious to oppose the progress of reason but preferred to keep it under strict observation.5 Thomas Aquinas, a Doctor of the Church, defended his thesis in the Sorbonne in 1257. From the point of view of psychology, what we need to take from Thomism is the affirmation of the plurality of ways, the alternative paths, by which we can gain access to the truth. The human being, which is a focus of psychology, lies at the centre of all Aquinas's work. Its nature is defined as being both material and spiritual, at the boundary between the visible and the invisible universe.
(This is not unlike the twofold nature of Psyche in the Greek founding myths.) For Thomas Aquinas, however, human reason cannot explain everything: for example, it cannot tell us why or how God created the world, since this was an act of pure freedom. In this case, the truth must be revealed. This last objection is a powerful one. In the twenty-first century, it has led the French writer Jean d’Ormesson (1925–2017) to speculate on the ‘nothing’ that came before everything. Responding to the question mark placed by contemporary physics over the origin of the universe, before the Big Bang (and beyond the physicist Max Planck’s ‘wall’), he writes:



God has [in pure freedom] separated everything from nothing. Moreover, he entrusted to man this everything drawn from nothing, so that man can make of it a world where, thanks to space and time, thanks to necessity and chance, absence is changed into presence and mystery into reason. With his senses and his thought [science], man creates the world for a second time.6

Like Thomas Aquinas, d'Ormesson loves science but thinks that it does not exclude God. When psychology became a science at the end of the nineteenth century, as part of the general progress of the other natural sciences, and after the revolutions of Copernicus, Galileo and Darwin in physics and biology, it considered the question of God to be finally resolved (i.e. ruled out). Man as studied by psychology emerged by chance, because of Darwinian biological evolution. At the beginning of the same century, Chateaubriand had published The Genius of Christianity (1802), which once again exalted the faith and soul of the Middle Ages; however, nascent scientific psychology preferred Darwin's Origin of Species (1859). This choice was logical in terms of scientific progress. And it did not prevent the question of God from returning in the works of a 'metaphysical detective' like Jean d'Ormesson, an admirer of Chateaubriand. This question is in fact interwoven, albeit in a hidden form, with the deep roots of psychology in antiquity and the Middle Ages. There are often traces of it to be found even where it is least expected: in Piaget, for example.7

1.3 Roger Bacon: experience against reasoning

In the thirteenth century, Roger Bacon, a Franciscan monk from Oxford whom we have already mentioned, powerfully emphasized the idea of experience as the sole source of certainty: the facts. He thus took a stand against the intellectualism of the Thomists and their reference to Aristotle. Bacon fought against the 'reasoners' of the monasteries, whose minds built castles in the air and sometimes led people into mistakes in their beliefs. From this point of view, he was not so different from Aristotle, who in antiquity had already denounced the sophisms and paralogisms of his contemporaries. For Bacon, however, Aristotle's work was too much like a permanent recourse to reason (as in Thomism) and to education, which was not in the spirit of the Franciscans (friars who could not read when they entered the order were forbidden to learn). The monks of this order were resolutely oriented towards action out in the world rather than remaining enclosed in their monasteries reading Aristotle's pagan texts and discussing them among themselves.

This exaggerated contrast between action and reasoning (a contrast which the Franciscans themselves would move beyond)8 actually led Bacon to become, in the Middle Ages, the defender, some even say the precursor, of the experimental method as a way of accessing knowledge (before Claude Bernard introduced it in medicine, in the nineteenth century). This method was of course already present 'in practice', both among artisans (from the Middle Ages or even in antiquity) and Greek scientists such as Archimedes (c. 287–212 BC) and the Roman engineers who laid the foundations of science and technology. Thus, Bacon writes that, when it comes to the perfect method,

we cannot give an answer without the practice of experience, for we have three ways of knowing: authority, experience, and reasoning; but authority . . . merely makes us believe . . ., and reasoning for its part cannot distinguish sophism from demonstration unless it is verified in its conclusions by the works of experience.9

Therefore, Bacon called for a programme of 'experimental science'. Even if the mediaeval theological zeitgeist contrasted Bacon with Thomas Aquinas and Aristotle, we can only point out, from the standpoint of today's psychology, how close Bacon's experimentalism was to Aristotle's empiricism, which was in ancient times already the necessary corollary of the science of syllogisms. It was Aristotle's empiricism, in this world (using conclusions obtained by induction), that allowed his so-called 'necessary' deductions to be founded on certainties: the evidence of experience. Thus, we see how, in the course of the history of psychology, the essential notions of empiricism, experimentation and reasoning were contrasted, but also gradually interwoven. They were essential both for what would later be the methods of scientific psychology and for its current objects of study, such as the development of knowledge (or cognition) in children and the processes of reasoning (logic, the bias due to belief, etc.). The lively debates in mediaeval theology about faith and reason were therefore an opportunity, thanks to Bacon, to reaffirm the importance of experimental science.
Nevertheless, as Parot and Richelle (1992) emphasize, the meaning of ‘experimentation’ in Bacon is not necessarily the same as ours, for it also covered astrology, magic and alchemy (disciplines he practised). It was not entirely the experimental method in the stricter sense as conceived by Claude Bernard in the nineteenth century and found in today’s psychology (i.e. a more controlled method – albeit one that would not be the polar opposite of reasoning; quite the contrary, in fact, since hypothetico-deductive reasoning lies at its core).10 Another Franciscan monk, Bacon’s pupil at Oxford, left his mark on the Middle Ages through his opposition to Thomas Aquinas. This was William of Occam, famous for his ‘nominalism’ and his linguistic prescriptions (as sharp as a razor) about the proper use of words. Occam gives us a lesson on the epistemology of singular things, that is those things that can be named.

2. Nominalism, the quarrel over universals and ‘Occam’s razor’ The quarrel over universals was a mediaeval debate that focused on the existence or non-existence of generalized things, that is concepts (universals) such as Man,

Faith, truth and reasoning


Animal and any other abstraction that does not directly correspond to the name (or word) referring to a single thing, in its peculiarity, its singularity. This debate was not separate from those preceding it, such as the debate between faith and science, or from the dissemination of ancient texts. Plato (with St Augustine) and Aristotle (with St Thomas Aquinas) were very attached to generalities freed from things, although Aristotle rejected the transcendence of the Ideas as proposed by Plato. We must remember that Plato had divided the cosmos into five hierarchical levels (the Ideas or intelligible archetypes; numbers; geometric bodies; the elements; and concrete things). Aristotle classified diversity in a systematic way so as to understand what this world is made of: the kinds of being (quality, quantity, relationship, place, time, action, etc., and their subdivisions). These were all cognitive tools from which generalities and concepts could be derived (according to Aristotle) or rediscovered (according to Plato). But these tools, which were gradually spread through the monasteries by the scholasticism of the Middle Ages, would raise a question, leading in turn to a quarrel, also connected to the psychology of categorization. The problem was this: general concepts (universals), the Ideas of Plato and the categories of Aristotle, do indeed ultimately refer to this or that object in reality (to concrete things), and they are the bearers of meaning – but what part of their generality truly exists in reality?

2.1 Realism, nominalism and psychological constructivism (Peter Abelard) From the theory of Plato’s Ideas (up to the previously mentioned mathematicians of today, such as Connes or Villani in France) the so-called ‘realistic’ answer consists in saying that these universals actually do exist in reality – beside concrete things or ‘through them’ – as original images, and philosophy and science have the task of discovering them. From this point of view, according to the conceptual realists, the general is regarded as a reality that exists before the individual. The so-called ‘nominalist’ position is the opposite. Its first and most radical formulation, by Roscellinus Compendiensis (1050–1121), states that the universals correspond to articulated sounds to which nothing in reality corresponds. They are therefore arbitrary. A little later, Peter Abelard (1079–1142) developed a more nuanced anti-realist conception in which the general does not really exist but is an abstraction introduced by man. Consequently, abstract concepts are not created arbitrarily, as Roscellinus asserted, but by a reasoned categorization stemming from singular things (here we find the notion of induction as in Aristotle). Contrary to the passive conception of man as a receptacle, which was common in the Middle Ages, Abelard identifies man’s creative part, his ability to abstract, that is to construct concepts from things. What Abelard grasped was the truly psychological dimension of the problem of universals. He thus heralded the cognitive constructivism of the child (Piaget) and the neurocognitive constructivism of the brain (Changeux) that would be proposed in the twentieth century.


History of theories about thinking

In particular, he anticipated the process of reflective abstraction dear to Piaget (see Chapter 4). William of Occam would make his own contribution to this debate between realism and nominalism.

2.2 William of Occam: singular things and the cognitive razor In the same Franciscan spirit as Roger Bacon, Occam reacted to the conceptual excesses of the Thomist ‘reasoners’ who, by dint of bringing everything back to the logic of Aristotle, seemed to deprive God of his powers. In a way, Occam would have preferred for nothing to exist between God and singular things (or persons). Against Aristotle’s science of the general, and thus against Thomas Aquinas, his nominalist epistemology converged with the real and experimental science of the Franciscans as embodied by his master Bacon. According to Occam, knowledge comes directly from the action of singular things on the intellect (through sensible intuition), hence a specific perceptual trace that can be named: the trace of an object, a person or an event. That is why Occam prescribes that we use (and keep) only the words that directly signify these distinct things and rid ourselves of all the confused, even erroneous, concepts that stem from empty reasoning. This is the meaning of the famous expression ‘Occam’s razor’. This is a cognitive razor that must be applied to the analyses of all reasoners: Aristotle, the Thomists and the others. Even today, we often think of Occam’s razor when we hear (at university, on radio or on television) specious reasoning or speeches containing certain concepts or words whose relation to reality is quite unclear to us. For all the sciences, but especially for psychology, Occam’s legacy is therefore epistemological: it is from him that we derive a purified science. This rigour or scientific hygiene dating back to the Middle Ages is essential. However, Occam’s psychology, limited to the action of singular things and their perceptual trace in the intellect (their receptacle), is not as rich as the more creative and cognitive psychology of Abelard, for whom a well-conducted process of abstraction allows man to bring out abstract concepts from singular things. 
Besides, if we follow Occam’s logic to its logical conclusion, even God must no longer be conceptually described in terms of his (divine) goodness, will or justice, since these words are abstractions without any direct perceptive correspondences in singular things. The Church soon realized this, and in 1324 Occam was summoned by Pope John XXII to Avignon to stand trial for heresy. The conclusion of this mediaeval case was that, even if our reasoning must be purified and properly controlled (this was already one of Aristotle’s concerns), science and religion cannot manage without abstraction. In this quarrel over universals, the nominalist Abelard, the most psychologically inclined of these thinkers, had clearly realized this. From a different angle, we can recognize another contribution made to psychology by Occam’s nominalism: the notion of the individual, the only

Faith, truth and reasoning


singular thing that exists in relation to the concept – too general and abstract in his view – of humanity. The ‘individualistic’11 psychology of Occam’s razor thus highlights the scale, the level, the ‘granular’ aspect of the individual thing being studied. In another way, from the outside, as it were, this complements St Augustine’s intimate psychology of the person, introduced in late antiquity (although the importance of Plato to Augustine sets the latter apart from Occam). During this ancient and mediaeval period, the person or the individual became a particularly important object of attention and analysis for thinkers, both in theology (Augustine) and epistemology (Occam).

3. From the Middle Ages to the Renaissance The main common aspect of the psychology both of antiquity and of the Middle Ages, an aspect which prefigured the Renaissance and the following centuries, was the search for truth and, correlatively, the permanent concern to correct the errors or strayings of the human soul, especially in others (one’s contemporaries) and, by self-examination, in oneself. On this significant aspect, Plato, Aristotle, Augustine, Aquinas, Bacon and Occam agree.12 The Middle Ages added their own powerful specific feature to this: the ubiquitous Christian God, his revealed truth and the ardent faith that he arouses. In the allegory of the cave, intelligence is defined by Plato as a forced detour, which allows the human mind, through argued (dialectical) discussion, to circumvent the contradictions of false knowledge, sensibility and mere appearance. It is a psychology of truth as against the variable and subjective opinion (doxa) of the sophists and politicians of ancient Greece. Later, for the same social reason, Aristotle produced his famous syllogisms. These were certainly the foundation of logic as a deductive instrument of science (truth), but they were also the best way for Aristotle to denounce and refute the errors of reasoning, the sophisms and paralogisms, of his contemporaries, in both the Greek city-state and elsewhere. With St Augustine, we discover that everyone can follow the path to inner truth (the Christian God, a divine Word that expresses Plato’s Ideas about the Good, the True and the Beautiful). His psychology of truth and errors, summarized, as we have seen, in the formula ‘If I am wrong, I am’, is accompanied by a double affirmation. Firstly, ‘it is not the reasoning of the soul that creates the truth; the soul finds that truth’ (as in Plato). Secondly, ‘it is the soul that creates error’ (the example of the stick half immersed in water, which appears to be broken). 
Augustine points out that error lies in the assent given by our soul to a falsity arising from the senses.13 St Thomas Aquinas, for his part, invented various ways of gaining access to truth, either by reason and science (as in Aristotle), or by faith. As for Roger Bacon, what he opposed in Thomas Aquinas and the Thomists were the excesses of the ‘reasoners’ – those monks whose minds could lead them down blind alleys. This is why, as a good Franciscan, he advocated a return to the only true path of actual experimentation out in the world. William of Occam, finally, reinforced this fight for the truth with his famous cognitive


History of theories about thinking

‘razor’, which aimed to rid us of all the confused and indeed erroneous concepts resulting from empty reasoning. It is interesting to note here that Plato and Aristotle, master and pupil of the School of Athens (the Academy), conceived their psychology of truth (for Plato, dialectic, and for Aristotle, logic) against the errors of their contemporaries, the sophists and politicians of ancient Greece. In the same way, Bacon and Occam, master and pupil at Oxford 1,500 years later, conceived their psychology of truth (experimentation and ‘Occam’s razor’) against the errors of their contemporaries, the Thomists of mediaeval Europe. Therefore, the same psychological struggle for the truth and against human error was involved, but the paradox is that Bacon and Occam fought against the excessive use of Aristotle’s logic, which seemed to them to be a source of error. Now, this very logic had been designed precisely to avoid such error. No doubt it was the omnipresence of God in the debates (and the presumption of the synthesis between faith and knowledge in Thomas Aquinas) that sowed confusion in the minds of the Middle Ages, to such an extent that people doubted the virtues of reasoning and scientific logic. The Renaissance would gradually cease, or almost cease, to doubt them; this new attitude would favour on the one hand harking back to ancient times and on the other hand the emergence of the Enlightenment and the modern scientific spirit. Psychology would continue to take shape, as in the Essays of Montaigne – a philosopher and moralist, certainly, but also a psychologist and a pedagogue. It was St Augustine who surreptitiously introduced us to the Middle Ages with his intense Christian faith. In the next chapter, Montaigne will open to us the doors of the Renaissance with his intense humanism.

Notes 1 A. Lagarde and L. Michard, Moyen Âge et XVIe siècle (Paris: Bordas, 2004). [This is a standard textbook on the literary history of France – Trans.] For example, The Song of Roland, dated to the beginning of the twelfth century, is a combination of the psychology of feelings and epic grandeur (Roland’s honour in battle, faith and fidelity to Charlemagne, his friendship with Olivier, his misfortune). 2 Also worth mentioning is Avicenna (980–1037), a Persian scholar known for his Canon of Medicine (1020), who explained how the human body transmits sensory data to the soul (though the soul has a spiritual and independent substance, illuminated by God). 3 However, Aristotle, like Plato, did have a god (an intellectualized god: the First Cause), albeit not the God of the Christians. Aristotle discussed the soul, including its divine part, with all its logical reasoning and its natural science; this differed from the mediaeval faith derived from St Augustine. 4 Similarly, since the nineteenth century, Neo-Thomism has sought to respond to the objections raised to Catholic Christianity by modernity. 5 Since 1603, there has been in the Vatican a Pontifical Academy of Sciences (this exact title dates from 1936), placed under the protection of the pope. Working freely, it brings a source of objective information (‘reasoned truths’) on which the Holy See can base its reflections. 6 J. d’Ormesson, Comme un chant d’espérance (Paris: Éditions Héloïse d’Ormesson, 2014), p. 111.

Faith, truth and reasoning


7 It is said that Jean Piaget (1896–1980), a non-materialist and reader of Henry Bergson in his youth, used his powers of thought to transform God into morality (hence, his first book on The Moral Judgement of the Child, 1932). Then, in his later, well-known work on the development of logico-mathematical intelligence, he transformed morality into the logic of action. This logical abstraction was totally purified of God – and Piaget, as a scientist, would never address this topic again. God is thus lurking in the apparently closed files of twentieth-century psychology. 8 The plot of Umberto Eco’s best-selling novel The Name of the Rose (English translation by William Weaver, 1983), from which a film was made in 1986, features as its main character William of Baskerville, based loosely on William of Occam. He discovers, during a criminal investigation, that the pages of a book by Aristotle in the monastery library have been poisoned, thereby killing the monks who, perverted by their love of pagan knowledge, have been consulting it in secret. 9 Excerpt from the Compendium studii philosophiae or Compendium of the Study of Philosophy, quoted in R. Carton, L’expérience physique chez Roger Bacon (Paris: Vrin, 1924), pp. 54–55. 10 These points are set out in detail in Part II. 11 To be understood not in the sense of ‘selfish’ but in the scientific sense of individual measures of performance (responses, reaction times) and observations of singular behaviours, as carried out by psychophysics and experimental psychology, differential psychology and clinical psychology. 12 This is true of the myth of Oedipus, too: the heart of this narrative is the search for a biological identity (truth), correlative with the fear of parricide and incest (a double moral error). Moreover, the riddle of the Sphinx requires finding the solution to the problem (a cognitive truth), without any right to error (which means death). 
13 This assent must be corrected by exercising control over the senses. In the Middle Ages, the renunciation of the five senses (sight, hearing, smell, taste, touch) in favour of a sixth sense was expressed in the motto found on the six tapestries of The Lady and the Unicorn in the Musée national du Moyen Âge in Paris: À mon seul désir, ‘according to my desire (or will) alone’. Interpretations of this exceptional mediaeval work vary, but all agree on the idea of an education of the senses by the heart and the intelligence.

Bibliography Parot, F. and Richelle, M. (1992). Introduction à la Psychologie. Histoire et Méthodes. Paris: PUF. Zink, M. (2015). Bienvenue au Moyen Âge. Paris: Des Équateurs and France Inter.

3 THE INCONSTANCY OF THE HUMAN BEING From the Renaissance to the Enlightenment

The psychology of the Renaissance is closely linked to the ‘moral crisis’ of the sixteenth century, when religion still lay at the heart of the debate and even triggered a civil war in France. As a reaction, Montaigne (1533–1595) proposed a psychology of wisdom. In contrast to the Wars of Religion, Montaigne embodied moderation and a horror of intolerance and fanaticism.

1. The Renaissance: Montaigne, a psychology of tolerance as opposed to religious fanaticism To understand the role played by this new humanistic psychology, we must return to the context: Francis I, Italy, looking back to Graeco-Roman antiquity, the invention of printing and the Reformation. The year 1515 (a once familiar date to all French schoolchildren) in France was not only the year of the famous military victory won by King Francis I at Marignan, near Milan, but also the year of the transition from the mediaeval French kingdom to the artistic Renaissance that spread from Italy. This new breath of life came mainly from Tuscany (especially the Republic of Florence), the homeland of Catherine de’ Medici, the king’s daughter-in-law. Francis I sought to bring to France the completely new and exceptional creativity that he had observed in Italy, by inviting artists such as Leonardo da Vinci to his court in Amboise. Leonardo was appointed ‘the king’s premier painter, engineer and architect’. Italy, with its light and its dolce vita, was a model both close and seductive. The Renaissance had flourished there for almost a century, as the Fall of Constantinople in 1453 had brought Greek scholars and ancient manuscripts to Italy. First in Italy and then in France, this movement would strengthen the search for inspiration from antiquity.1 After the Middle Ages, which ended

The inconstancy of the human being


with the turbulence of the Hundred Years War between France and England, and amid divisions within a kingdom severely weakened by conflicts and epidemics, Francis I saw to it that France would take the decisive step into the modern period. The Renaissance man sought by the king would need to (re)discover, to create, invent and beautify – like the artist-engineer Leonardo and also the great discoverers and travellers of the fifteenth and sixteenth centuries, who were able to explore the world thanks to the instruments of a new, more scientific geography. These included Christopher Columbus, who was searching for a passage to the East Indies but discovered America, Vasco da Gama and Magellan. In 1530, as the ‘Prince of the Arts, and the Father and Restorer of Letters’,2 Francis I created the Collège de France (initially the ‘College of Royal Readers’) in the Latin Quarter, near and in direct competition to the Sorbonne, which was at that time the Faculty of Theology. It was (and still is) a prestigious college whose professors would for the next few centuries teach Latin, Greek, Hebrew and the sciences – including, at the end of the nineteenth century, psychology, taught by Théodule Ribot (1839–1916). A chair in experimental cognitive psychology, presently occupied by Stanislas Dehaene, now represents this discipline. The architecture of the châteaux of the Loire (Amboise, Chenonceau, Chaumont-sur-Loire, Chambord, etc.), so different from that of the Middle Ages, was emblematic of the Renaissance; but the cultural revival also took a literary form, with Rabelais, Du Bellay and Ronsard (poets of the celebrated ‘Pléiade’ movement), d’Aubigné and Montaigne. At the same time, science was growing ever stronger: there was astronomy with Copernicus (1473–1543) and his revolutionary theory of heliocentrism (the Earth orbits around the Sun and not the other way round) and there was anatomy and surgery, of which Ambroise Paré (1510– 1590) was an eminent representative. 
Technologically, and so important to the spreading of works and ideas, the Renaissance was heralded by the revolution of printing. Johannes Gutenberg (1400–1468) used movable metallic type to produce the first ever printed Bible (the ‘Gutenberg Bible’) in Mainz, Germany in the 1450s. Printing (like the Internet today) became the organ for new ideas about religion. These ideas germinated in the minds of two reformers who shook the Catholic Church (and effectively destroyed the religious unity of the West): the German Martin Luther (1483–1546) and John Calvin (1509–1564), a Frenchman who had fled from Papist France and settled in Geneva.3 Both were directly influenced by Occam’s mediaeval nominalism. Like Luther, albeit with certain subtle differences, Calvin advocated that every Christian should return directly to the letter of the Bible, without unnecessary intermediaries (e.g. theologians, the hierarchical truth imposed by Rome, the clergy and Catholic rituals, and trafficking in indulgences). According to him, everyone must be his or her own confessor and spiritual master and establish a direct link with the Gospels. This envisaged school, education (through study and reading) and the moral control of


History of theories about thinking

all for the public good playing a primary role in self-improvement (through integrity and austerity) and social relations (through useful activity). In Geneva, Calvin broke away from Rome and launched the Protestant Reformation, which was copied to some extent in France and which then divided Catholics and Protestants once and for all. Protestants also started to question royal authority – as in the so-called affaire des placards, when posters protesting against the Mass were displayed in various French cities and even on the door of Francis I’s bedroom in Amboise. This led to the Wars of Religion, which broke out in 1562 after the massacre of the Huguenots (French Protestants) by the troops of the Duke of Guise in Wassy in Champagne. Henry IV ended the wars in 1598, thanks to the Edict of Nantes, an edict of tolerance, under which Protestants were allowed to practise their religion. Louis XIV would revoke this in 1685. The Renaissance was thus both a great humanist movement connected with Graeco-Roman culture and moral civility (the dual sense of humanitas in Latin), and a period of appalling massacres (including the St Bartholomew’s Day massacre on 24 August 1572) arising from the Reformation and the Wars of Religion. It was in this context of religious turmoil and violent ideological differences that the ‘prince of psychologists’ revealed himself as an exemplar of humanism and wisdom. For in the Renaissance, psychologists also had their own prince – Michel de Montaigne, made famous by the publication of his Essays (between 1580 and 1595).

1.1 The Essays: depicting oneself so as to instruct the reader There were several stages in the French sixteenth-century literary Renaissance: the verve, the appetite, the optimism and the gigantism of Rabelais in Gargantua;4 the artistry and more formal, refined and aristocratic perfection of the Pléiade (Du Bellay and Ronsard); the striking Baroque poetry of d’Aubigné; and, finally, the essays of Montaigne, who sought to express his ideas just in the way he thought them. This is what made him a philosopher, and a psychologist. The Wars of Religion had inspired great works by Ronsard and d’Aubigné, but Montaigne could sense the danger of propaganda lurking in literature. So, as a true Renaissance thinker, he sought a wisdom that would be at the level of human beings and could represent a factor of tolerance. It was in 1580, with the Essays, that modern psychology was born, according to Jean Château: ‘While Montaigne calls himself a philosopher, we need to understand this word as meaning a psychologist and a moralist’ (Château, 1977, p. 14). Château goes on to note that St Augustine, in spite of his genius and his already introspective psychology, preserved in his (Christian) religious ideology only the metaphysics – Plato’s Ideas – out of all the immense labours of antiquity. Because of the grave moral crisis of the Renaissance, Montaigne now rebelled against the dogmatic underpinnings of this philosophy: it seemed to him that the greatest evil in the world was to be found in ideology. From this point on, it

The inconstancy of the human being


became the responsibility of psychology, in association with pedagogy, to help human beings break free of ideologies. For this, we needed to subject ourselves to lucid scrutiny. The position adopted by Montaigne is important for the history of psychology, for he established the contemporary idea of ‘a secular psychologist who wants to be objective, but who opens himself to all the facts, however surprising, so long as they are not tainted with ideology’.5 His was not yet a scientific and experimental psychology, but was still in the form of personal confession (‘I am the one I depict’, he wrote) whose aim was to analyse relationships and general human feelings by focusing on this one very personal example. It is to depict oneself so as to instruct the reader, with knowledge as the primary aim, in accordance with a purely psychological curiosity. Montaigne observed himself, scrutinized himself, as Proust and Valéry would do in the twentieth century, both for themselves and for the sake of literature, and as Piaget would also do, observing in a more scientific way his own children and then children in general, with the sole purpose of acquiring knowledge: studying epistemology through its genesis. By following this approach, Montaigne was the first to identify the difficulty of method in psychology. To those who think that it is easy for us to study ourselves, since we are what is ‘nearest to us’ (or that we can spontaneously observe similar features in our friends or our children), Montaigne replies that, on the contrary, ‘it is a thorny undertaking, and more so than it seems, to follow such a wandering path as that of our mind; to penetrate the opaque depths of its internal folds’.6 He heralds both the experimental sciences of cognitive processes and the work of Freud and Janet on the unconscious and the subconscious. In particular, his method already involved meticulous observation combining introspection with the study of behaviour. 
He precisely noted and dated facts such as his states of consciousness after falling off his horse (passing from dream to lucid judgement – this prefigured twenty-first-century research into the ‘code of consciousness’ in the neurosciences; Dehaene, 2014), and general attitudes, habits and forms of behaviour (with respect to lies, fame, wine, women, etc.). This was a new genre: the psychological essay. Of course, Montaigne did not imagine the psychological experiments of the future. However, the quest for concrete and precise facts was already very important for him. Moreover, it was purified of something that Montaigne denounced: purely ‘ratiocinative reason’, the ideologies and intellectual systems of the pedants. Hence the humility of his famous motto, inscribed on a beam of his library: ‘What do I know?’ He was by nature experimentalist (he called himself a ‘naturalist’) avant la lettre. In his personal observations and the documents he consulted, what attracted him were human beings and their works, their modes of behaviour, their variations (depending on their country of origin, their biological idiosyncrasies, etc.)7 and even the crazy and deviant modes of human activity; in this respect his work prefigured psychopathology. He also heralded comparative ethology and psychology by observing a form of communication (and even, he thought, language) in animals.


History of theories about thinking

1.2 The inconstancy and diversity of the human being Through this interest in variations and local customs – which might have seemed trivial at the time – Montaigne’s conclusion was clear and, paradoxically, universal: the general law is the inconstancy and diversity of the human being. This vision, deliberately situated at the most variable, changeable and factual level of observation, contrasts with the psychologies of antiquity and the Middle Ages, which described large-scale systems (of a synthetic nature) encompassing the soul, the Ideas, the relationships between human beings and God, and so on. According to Montaigne, if we systematize, we leave reality behind. As he observes, in the whole world there have never been two identical opinions, any more than there have ever been two identical hairs or two identical grains. He prophetically discovered the principle of diversity (or variation) on which modern biology would be based, together with differential psychology (the study of differences between individuals); and he placed the conflict of opinions which had devastated his times within a psychological, almost scientific, framework. Regarding differential psychology, Montaigne had even (re)discovered, in his own way, the idea of alternative paths to the truth, an idea which, as we have seen, characterized the relationship between human beings and truth (faith versus science) in the Middle Ages. The first of his essays was entitled ‘Through various means, we arrive at the same goal’ (this same theme of different paths is still currently being explored in the neurosciences; Berthoz, 2016). Perhaps, for Montaigne, it was a way of solving, or at least understanding and analysing, the conflict between Catholics and Protestants, rather like St Thomas Aquinas had attempted in the Middle Ages to use the principle of different paths in order to reconcile the teachings of the Church with the philosophy of Aristotle. 
In this general framework, Montaigne at his ‘most psychological’ is the Montaigne who highlights human egocentrism and sociocentrism, rooted mainly in physiological or corporeal factors (‘our eyes see nothing behind [them]’).8 These factors would soon be seen as cognitive and moral factors; and later, in the twentieth century, Piaget would observe the egocentrism of children. But it was adult egocentrism which frightened Montaigne. According to him, the problem was rather that adults needed to avoid making over-hasty judgements (today we would speak of reducing the speed of thought, of the brain’s inhibition or cognitive resistance).9 Adults hold too easily to a single view of a problem, which often leads to error: an illusion of perspective, a mistaken aim. On the contrary, as Montaigne the educator suggested, we need to practise making good use of time (a kind of ‘microgenesis’) by suspending judgement and combining this with a strong imagination capable of simultaneously considering all perspectives and all data as potential hypotheses. This is the authentic psychological way to build up a sense of tolerance. Consequently, Montaigne calls for making good use of reason by ‘looking elsewhere’. It is good, he says, to frequent the world (a well-rounded person needs to ‘mix with’ others), to rub our brains with those of others, to be
acquainted with people far distant both in geography and in time so as not to have ‘our sight reduced to the length of our noses’.10 Montaigne’s interests thus focused on variation and inconstancy (in other words, on individual conditions), even if he simultaneously saw them – and this was his goal – as the features of a general human nature (or condition). The rule, not the exception, lies in our inconstancy, our haste, our egocentrism. However, can reasoning save us, as Aristotle would have said with his science of syllogisms? No, replies Montaigne, for reasoning cannot be justified by reasoning. We need something else, something more intimate: psychological self-knowledge, the identification of our shortcomings, our mental weaknesses (our cognitive and emotional biases), if we are to purify ourselves of them. There is certainly a power of truth and reasoning in each of us, but we must still learn how to use it. From the psychological point of view, everything still needs to be done: we need to learn how to think well; above all, we need to learn how to learn.

1.3 Montaigne, psychologist and educator

‘Rather a well-made head than a full one!’11 This was Montaigne’s wish as an educator: we need to rely less on raw memory (‘the funnel’ in our ears) and facile reasoning than on wise judgement. For, as Aristotle had glimpsed when he protested against sophisms and paralogisms, Montaigne believed that the apparent flexibility and the deceptive correctness of reasoning could be used to serve fanaticism and to prove everything, so long as we are sufficiently ‘subtle’ about it. According to him, reason is dishonoured in the process; the mind is ‘an outrageous sword for its very possessor’ (the individual human), it is a ‘dangerous and reckless vagabond’.12 Montaigne’s psychology, however, is not negative. On the contrary, it is lucid and positive. He firmly believes that learning self-discipline is necessary if we are to forge a critical mind from childhood onwards, and in this respect he was a forerunner of the contemporary psychology of executive functions (cognitive control, inhibition, etc.). It is against the dangerous fantasies of our mind that we must look for guidelines, ‘blinkers’, insofar as we have ‘more need of lead than of wings’.13 By disciplining and tightening itself, thought thus gains in strength and force. Here we recognize the man of law, the magistrate at Périgueux and later Bordeaux where he became mayor. In addition, Montaigne expresses these laws (which are now psychological laws) in many ways, introducing even the idea that we need to anticipate everything that may happen. We must avoid the disorders of the soul as it wanders through the pathless fields of the imagination. We need to tie down ideas, to bridle them. We need to erect the most restrictive barriers around the mind. We need strong safeguards. Finally, we need to prepare for every eventuality.
Montaigne’s pedagogic strategy, in those troubled times of the Wars of Religion, lay in educating children and adults so that they would acquire the wisest and most robust judgement. This meant that tutors must be wise rather
than knowledgeable, mainly able to shape their pupils’ judgement. In this respect, Montaigne’s hero from antiquity was Socrates. Like him, Montaigne believed in the virtues of education through social communication, dialogue in pairs or in small groups, among friends, with people discussing each other’s opinions, imitating the right models and imbuing themselves with those exemplars (while remaining distrustful of being ‘swept away’ by crowds). In conclusion, we can say of Montaigne that he takes up and reinforces the concern for the inner self found in Christian psychology (that of St Augustine: the introspective psychology of the individual person), but that he frees this gaze turned towards the subjective realm from its religious ideology. This psychological approach has thus become secular (even if Montaigne declared himself a Christian, simply in the sense of being French). The author of the Essays paved the way for the preoccupation with objectivity and neutrality, and for the emphasis that the modern scientific psychologist places on the observation and construction of experimental facts. Montaigne, a ‘naturalist’, heralded a psychology that would increasingly strive to become scientific.

2. The Grand Siècle: Descartes and the cogito, Pascal and persuasion

The humanist Renaissance, which began in the sixteenth century, continued into the ‘Grand Siècle’ (‘Great Century’), as the seventeenth century was called – the era of a more rational and stable political order associated with classicism. In France, this was the time of Louis XIV, the Sun King, an absolute monarch and the promoter of a grandiose style of architecture, as in the Palace of Versailles (Le Vau and Mansart being the architects) and its gardens designed by Le Nôtre, which are models of formal perfection and order, following the rules proper to ‘French gardens’. The whole of the century, indeed, was grandiose: in painting (Poussin, Champaigne), in music (Lully), in the theatre (Corneille, Racine, Molière), in other literary forms (La Fontaine, Bossuet), administration (Colbert), diplomacy (Richelieu, Mazarin) and even the art of fortifications (Vauban). The history of psychology continued to make its way in parallel with history at large. After Montaigne’s already modern psychology came the century of Cartesian rationalism (governed by notions of order and rules) and what Pascal called the esprit de finesse (‘spirit of finesse’). When it came to classical reason, Descartes was the defence counsel and Pascal the prosecutor. Fénelon also advocated, even before Rousseau, that the child’s nascent capacity for reasoning should be carefully fostered (Treatise on the Education of Daughters, 1687). For classicism, the honnête homme or ‘gentleman’ was the man who cultivated an aesthetic based on the quest for perfection, the keystone of which was reason. In the seventeenth century, science and reason imposed their authority across Europe (in France, England, Italy). This movement came largely from England, where Francis Bacon (1561–1626),14 a politician (the Lord Chancellor) and philosopher, developed an empiricist theory of knowledge based on experimental
practice. In particular, Bacon advocated experimentation along with the use of reason – real reason, not the ‘reason’ of authority, whatever it might be. As would later become the practice of scientific psychology right up to the present day, the crucial thing was always to associate theory with practice. This was very new in the society of that period because, over time, Aristotelian science had come to appear too theoretical and speculative, and had been given an almost authoritarian status by those who commented on it. Moreover, practical experimentation had, since the Middle Ages, been the preserve of the alchemists. To reconcile theory and practice for the new science was therefore a fundamental reform and Bacon, though not a scientist himself, had the audacity to turn it into a political programme (see his Novum Organum, 1620, and The New Atlantis, 1627). Science was to be useful to society – the same idea of social utility that was already present in the Reformation (in Calvin) and in literature (in Montaigne). To reinforce this link between science, politics and society, two academies were created in the seventeenth century, one in London in 1660 – the Royal Society, in explicit recognition of Bacon – the other in Paris in 1666 – the Royal Academy of Sciences, set up by Louis XIV and Colbert. These two academies (reminding some people, on both sides of the Channel, of the Academy of Plato in Athens) still exist today. The one in France later became the Academy of Sciences, which in 2016 celebrated 350 years of its existence. The philosophical, epistemological and political reflections developed by Bacon were contemporaneous with the birth of modern science. This was based on mechanistic thought, a concept according to which nature was moved, not by God, but by an autonomous mechanism (or mechanisms).
This so-called ‘Copernican revolution’ was characterized by the fact that, as we saw above, Copernicus introduced heliocentrism into astronomy as early as the sixteenth century: the Sun was at the centre of the universe and the Earth rotated on its own axis and around the Sun, not the other way around. This thesis was revolutionary, and contrary to the illusions of our senses (we see the Sun moving around the Earth every day). It thus imposed a new objective understanding of the general mechanism of planetary motion in which man, God and the Earth were all displaced from the centre. This did not preclude the belief in a Creator God at the beginning; however, in this new framework, God no longer intervened after the Creation. It can be said that in this period, in the minds of original thinkers, ‘God was retreating’. This objective analysis of nature needed to draw on mathematics and, of course, on the new measuring instruments of science. Galileo (1564–1642), an Italian mathematician, physicist and astronomer, continued Copernicus’s work using a technological invention: the astronomical telescope. In doing so, Galileo perfectly embodied the model of science advocated by Bacon, in that he combined theory (in this case, heliocentrism) and practice (the astronomical telescope), even though the quality of his measurements was already controversial – as it often is in the sciences, where measurements can always be made more accurate. Objectivity, as sought by these scientists, separated man, a mere observer
and discoverer, from Nature, now seen as an object to be measured. Newton (1643–1727), following on from Galileo, later imposed modern positivism (which Comte would develop further in the nineteenth century), by virtue of which science consists in formulating laws on the basis of measurements and quantifications. The Church was opposed to heliocentrism, and Galileo’s insistence on objectivity led to him being forced to explain himself in front of an ecclesiastical tribunal at a momentous trial. He was condemned in 1633 (and not rehabilitated definitively until 1992, by Pope John Paul II). Mechanistic science, however, continued to make progress: in medicine, William Harvey (1578–1657) identified the circulation of the blood, with the heart (the ‘acropolis’ of the body and the soul according to Aristotle) now recognized as a simple pump. From the point of view of a now viable scientific psychology, the key question was: is man also a machine whose cogs and wheels must be understood and measured – even including his soul? With Montaigne, we have already seen that the psychological gaze had become secular; this was a crucial fact, but was the seventeenth century ready to make it mechanistic in the sense that this term was understood in the new science? At this point, Descartes intervened.

2.1 Descartes: dualism of the soul and the body, rules for the direction of the mind, and innate ideas

According to Descartes (1596–1650), men are composed of two natures: the body and the soul, the latter being specifically defined as a ‘thinking substance’. He compares the body to a pipe organ, in which animal spirits act like the air between the ducts. Body and soul may well be joined together and united by the pineal gland in the brain (the epiphysis), but only the body is a machine. This difference in nature (or substance) between the soul and the body constitutes what is called ‘Cartesian dualism’, which left a permanent mark on psychology: the body was the domain of physiologists and doctors, the soul was the domain of psychologists. Posterity retained for Descartes his mechanistic concept of the human body and the way he applied it to an analysis of the triggering of movements by visual or auditory signals, based on patterns very similar to those accepted today as the reflex arc (i.e. the circuit from stimulation to response in the body via the central nervous system). Changeux mentions this in Neuronal Man: The Biology of Mind, in which he pays tribute to Descartes. The road ahead was therefore clear for Descartes’s concept of physiology; but what about his ideas on psychology? According to him, the soul is indivisible,15 intangible and immortal, of divine origin, and therefore not reducible to mechanistic thought. So how do we approach this? Descartes’s response, his discovery (through reflection), was a psychological process of ‘splitting’, of a consciousness of the self: the cogito.

Cogito, ergo sum, ‘I think, therefore I am’, was, according to Descartes, the starting point for any exploration of the real. In this respect, Cartesian psychology was close to Montaigne (with its focus on the importance of the inner self)16 and, before him, to St Augustine (‘If I am wrong, I am’). But it quickly became coldly logical and impersonal, based on the sole exercise of reason (rationalism) by an infallible method. In fact, Descartes sought a solid mental basis for modern science, using a process that was simpler and more efficient than the syllogisms of Aristotle, whose extravagant developments had, since the Middle Ages, led to the conceptual excesses of the ‘reasoners’ (whence ‘Occam’s razor’). This Cartesian process was designed to make it possible to divide each individual difficulty, to direct one’s thoughts in an ordered way, and so on. In 1628 Descartes formulated his Rules for the Direction of the Mind (finally published in 1701), followed by the Discourse on Method (1637). The Rules included the following. Rule 1: ‘The purpose of studies must be to give the mind a direction that enables it to make sound and true judgements on all that is presented to it.’ Rule 2: ‘The objects to be dealt with are those alone that our minds seem to be sufficient to know with certainty and without doubt.’17 There were twenty-one similar rules. On this basis, methodical doubt could lead to certainty. These were general rules – and so they were valid for everyone, and more particularly for René Descartes himself. Like Michel de Montaigne, Descartes depicted himself in order to instruct the reader. So he confessed that he spent nine years of his life exercising doubt and practising the correct use of the rules of method. It is not enough to know these rules; one must apply them through cognitive effort, cultivating one’s attention and desire to learn. It is an intellectual attitude.
Behind the Descartes seen as a pure logician, we find a Descartes who is a psychologist of effort. He even develops the idea that not all brains are disposed in the same way, since the activity of the pineal gland, which controls what we do, causes fear in one person and courage in another. (The role of the will here clarifies the psychology of Corneille’s theatre, which was contemporary with Descartes.) We can already discern the beginnings of a differential psychology here, as in Plato and Montaigne. In Descartes, however, the psychology of truth dominated, and this truth was at work in the rules and method he established, like Aristotle with his syllogisms. Therefore, the starting point of thought, namely the indemonstrable truths or axioms, had to be ascertained; Aristotle was an empiricist, supposing the mind to be initially a tabula rasa and giving credence to the evidence of sensations, while Descartes was an innatist, like Plato. Indeed, his thought drew on the rationalism and innatism of antiquity. The Cartesian method and rules were designed to discover the Ideas and properly develop them. In Descartes, human doubt stems from an imperfection in method, whereas God is perfection. Thus, to the question ‘From where do we get this precious treasure that is our intelligence?’, Descartes, in his Treatise on Man (1648), answers with the self-evident and apparently inescapable truth: God has deposited in our minds, from birth, clear and distinct logical and mathematical ideas, the core of human intelligence.


A baby is thus ‘potentially intelligent’ (a concept very common these days), but is intelligent thanks to God’s gift. This divine explanation would be shattered by nineteenth-century biology and Darwin’s description of a natural biological evolution of animal and human intelligence – excluding God from the explanation. In the same way, the Cartesian dualism between the mind (or soul) and the body would be shattered by the neurosciences of the twentieth century – prefigured in antiquity by Herophilus and Galen – through the mechanistic study of the brain as an organ of thought, especially using brain imaging techniques. The weakness of the Cartesian edifice was indeed the stumbling block represented by the pineal gland,18 a place of the deeply mysterious interaction between two components, one of which can be explained by mechanistic ideas (the body), the other not (the soul). Moreover, Descartes’s correlative theory of machine-animals was directly challenged by La Mettrie’s Man a Machine (1748), which, though written in the eighteenth century, already foreshadowed contemporary artificial intelligence (the brain-machine), a branch of computer science. Finally, even though Descartes wrote a Treatise on the Passions, it was the dualism between the soul and the body that Damasio would denounce in Descartes’ Error (1994), demonstrating, with the help of contemporary neuroscience and the ‘somatic marker theory’, that we think with our bodies and our emotions in a system of generalized equilibrium called homeostasis (Damasio & Carvalho, 2013). Damasio (2003, 2018) agreed more with Baruch Spinoza (1632–1677), who, somewhat later in the seventeenth century, came closer to modern neurobiology than had Descartes by bringing mind and body together, ascribing to the emotions a central role in human survival and culture.
In conclusion, we owe to Descartes all the rigour and exertion required in the psychology of reasoning (‘the Cartesian spirit’), which cannot be reduced to the syllogisms of Aristotle alone. This logical inheritance was handed on to science in general and psychology in particular, in both its method and its objects. Descartes was doubtless wrong to exaggerate the dualism between the soul and the body so greatly. Everything suggests that his relationship with God led him to exclude the possibility of a mechanistic approach to the human mind (even though he was already applying this approach to the body and its reflexes). We know, indeed, that he feared the judgement of the Church: on learning of the condemnation of Galileo in 1633, he declined to publish The World, a text in which he also asserted that the Earth revolved around the Sun. What, in fact, did he contribute to psychology? By following in the footsteps of St Augustine and Montaigne, he undoubtedly reinforced the idea of an inner self. This interiority was the precondition for the very exercise of reason. In this sense, he was indeed the child of a ‘Great Century’, in love with order and rules. Pascal, however, modified his forebear’s views to a significant degree, drawing on his own psychology of persuasion, something that was absent in Descartes.

2.2 Pascal: the two ways into the soul, geometry and finesse

Pascal (1623–1662) was a multifaceted genius: a mathematician of the first order, and the inventor of the calculating machine and urban transport. He was also a psychologist. Like Descartes, he appreciated logical and mathematical reasoning and rigorous demonstrations. Soon, however, they no longer sufficed. In his treatise On the Art of Persuasion (1658), he set out his conception of a soul that could be approached in two ways: through agreeableness and through understanding. The latter corresponded to Descartes’s methodical reason (the need to convince by logical arguments), which Pascal called the ‘geometrical spirit’, whereas agreeableness corresponded to intuition, to the heart (‘the heart has its reasons that reason does not know’).19 He called it ‘a spirit of finesse’. According to Pascal, finesse presupposed a rapid, intuitive reasoning of which we are unaware. It was exercised in psychology as well as in morality. Pascal became an ardent supporter of the movement known as Jansenism (a movement that advocated theological reform within the Catholic Church and was in the seventeenth century especially associated with the abbey of Port Royal); he sought to convince, to persuade unbelievers, especially his many readers who were comfortable in the society of their day. These were ‘decent, cultivated people’ (honnêtes gens), but they were indifferent to religious matters, and were sometimes even freethinkers. It was for them that Pascal planned a Defence of the Christian Religion, of which only fragments remain, published under the title Pensées (Thoughts). In the order of divine things, he considered that agreeableness was valid, whereas in the order of natural things understanding or reasoning was, a priori, the only legitimate proof. In this area, like Descartes, he advocated the progress of science rather than the authority of the ancients.
In the arts and in science, this opposition was known as ‘the Quarrel of Ancients and Moderns’. However, as a psychologist, Pascal remarked that men have corrupted the natural order by abusively taking the path of agreeableness: we believe hardly anything except what pleases us, and do not heed the advice of reasoning. When we address human beings, proof (the classical reason of Descartes) is often impotent, and the art of persuading consists as much in pleasing as in convincing by logical arguments. Therefore, we need both. Pascal sets an example with his own ‘sublime eloquence’ (in Voltaire’s words), and his dialectic, his art of seeing beyond illogicality and falsehood, his rigorous deductions and his impeccable dilemmas aimed at persuading the reader. One example is his celebrated ‘wager’ on the existence of God.20 As a psychologist of persuasion, Pascal identified and described the characters of the geometrical spirit, then those of the spirit of finesse (which can discern feelings and thoughts through almost imperceptible signs), as well as the sources of the errors arising from both these kinds of thinking. Following Plato and St Augustine, among others, he warned against ‘the powers of deception’: the senses can lead reason astray by their false appearances.21 Pascal also cites, as ‘principles of error’, the imagination, custom (already denounced by Montaigne) and self-love.


According to him, custom triggers the ‘automaton within us’, which influences the mind without the person being aware of it. Pascal’s theory of a soul with two aspects (one that relied on finesse, another on geometry), in other words a flawed and credulous soul, meant that he was not only a counsel for the prosecution of classical reason in the seventeenth century, but also a strikingly modern thinker. In fact, he anticipated the contemporary theory of Daniel Kahneman, a psychologist and the 2002 Nobel Prize winner in economics. Kahneman describes two systems of thought in each individual: system 1, which is fast, intuitive, emotional and partially unconscious (the spirit of finesse), and system 2, which is slow, reflective, controlled and logical (the geometric spirit).22 Current experimental cognitive psychology, combined with brain imaging, demonstrates that system 1 is prone to illusions of familiarity, halo effects, optimistic biases, anchoring effects and many other perceptual, cognitive and emotional biases in the brain. These are all elements to which Pascal’s spirit of finesse is sensitive. If we ignore the religious dimension, the psychologies of Pascal and Kahneman are very similar. We could almost say today that Piaget (the psychologist of the development of system 2, the logical and reflective system) is to Kahneman what Descartes was to Pascal: Piaget stated the case for reason, Kahneman stated the case against it.

3. The Enlightenment: empiricism, innatism and pure reason

The spirit of reason that arose in the Renaissance and the Grand Siècle (the sixteenth and seventeenth centuries), admittedly nuanced by Pascal, would be intensified in the eighteenth century by a powerful and even revolutionary philosophical movement: the Enlightenment, which promoted the ‘light’ of understanding as illuminating the world. In that century, the psychology of knowledge, which shaped reason either under the influence of the environment or not (empiricism or innatism), became an important preoccupation. In this respect, the great project of Diderot and d’Alembert, the Encyclopédie (1751–1772), was emblematic. The purpose of this Rational Dictionary of the Sciences, Arts and Crafts was to enlighten the world. A few statistics will help us gauge the extent of this gigantic undertaking (akin to Rabelais’s cognitive gigantism in the sixteenth century: ‘Let nothing be unknown to you!’): there are 17 volumes of text, 11 volumes of plates and 150 contributing authors. In this project, reason was placed at the centre of scientific research and was meant to lead through observation and experiment to the progress and happiness of humanity. This concept, promoted by the Encyclopédie, drove the whole century. The aim was political and educational: new knowledge would liberate reason from religion23 and monarchical absolutism, especially that of Louis XV. By emancipating reason, one would clearly emancipate man and his self-awareness. Thus, little by little, the revolutionary idea of the universality of reason would take root and, consequently, the idea of equality between men. In this vein, Montesquieu (On the Spirit of the Laws, 1748) and
Rousseau (On the Social Contract, 1762) suggested giving more power to the people. In his Émile (1762), Rousseau also formulated advice on the education of children, who in his view were perverted by society. Voltaire, paying tribute to Montaigne, fought tirelessly against the injustice and intolerance often inspired by religion; for example, he took up the defence of Jean Calas, a French Huguenot unjustly condemned to death because of his beliefs, and he composed an influential Treatise on Tolerance (1763). Finally, in order to illustrate (albeit not without a certain irony) the cognitive revolution that took place in the Enlightenment – a logical continuation of the revolution brought about by the Renaissance printing press – Voltaire published his On the Horrible Danger of Reading in 1765. Tensions gradually increased between the ‘reason’ promoted by the Enlightenment, in which philosophers, books and reading played an important role, and the established powers. Thanks to a momentum that had now, after two centuries of gradual progress, become unstoppable, the political idea that all men were equal by virtue of their common reason led to the French Revolution (1789) and the guillotining of Louis XVI (1793). After this radical break with the past, France had a new regime. Fine ideas sometimes lead to violence when people try to apply them: the rapid transition from the desire for democracy to the ensuing Terror, soon after the Revolution, was evidence of this. Indeed, it would be a long and winding political road to the France of the present day: its stages included the Empire under Napoleon, the Restoration under the constitutional monarchy – from Louis XVIII to Louis-Philippe, King of the French – the second Empire, and finally the parliamentary republic. Fine ideas also lead to fine principles, and these have not changed: we find them in the Declaration of the Rights of Man and the Citizen, for example. 
Article 1 (of 17) reads: ‘Men are born and remain free and equal in rights. Social distinctions can only be based on common utility.’ It is with these words that the history of the Enlightenment was concluded, a literal political expression of the original idea that the universality of reason entailed equality between men. But what, then, was the nature and exact origin of the ‘reason’ possessed by these men, a reason which had overthrown the status quo, decapitated a king and made the whole of Europe tremble? This is a very psychological question: what are the processes at the origin of the construction of knowledge in the human brain, or more exactly in the millions of ‘free and equal brains’ of the common people – as well as those of the philosophers, of course? Montaigne, Descartes and Pascal had already approached the question, each one voicing powerfully independent views: rationalism and innatism dominated in Descartes, as against the finesse and inconstancy of the mind in Pascal and Montaigne. It was initially against Descartes’s innatism that the Enlightenment produced the empiricism of Locke and Hume in England, and the more sensualist views of Condillac, a friend of Diderot, in France. After this, Kant in Germany again promoted innatism. We find here, in slightly different terms, the ancient debate between the innatism of Ideas in Plato on the one hand, and the empiricism of Aristotle that
was a necessary partner of the science of syllogisms. Nevertheless, it was on Newton, a follower of Copernicus and Galileo – in other words on the new mechanistic science of nature – that the empiricists of the Enlightenment based their psychological, anti-Cartesian approach: from the laws of space to the laws of the mind.

3.1 Locke: empiricism and the association of ideas; the role of education

Let us begin with ‘Molyneux’s problem’, named after an Irish scholar who, at the end of the seventeenth century, asked John Locke (1632–1704) whether a man who was born blind, then grew old and was cured, would be able to distinguish by sight, without touching them, a sphere and a cube placed on a table in front of him – assuming that he had previously learned to distinguish these two objects by touch alone.24 Molyneux himself believed that he would not. Locke was of the same opinion, for, according to him, visual ideas and tactile ideas were acquired independently by sense experience; and in this case, the necessary association between the two had not been made. It is now known that intermodal touch-vision transfer is possible at a central level in the brain, even in a baby, so the answer to Molyneux’s problem is ‘yes’ from the point of view of the contemporary psychology of development. But the problem fascinated the eighteenth century, which debated opposite points of view: Leibniz thought ‘yes’, for example, and even Diderot seized on the controversy and took an interest in the ‘world of the blind’.25 Today, researchers are still debating it in the framework of the cognitive sciences (Held et al., 2011; Jacomuzzi et al., 2003; Maurer et al., 2005; Proust, 1997). Molyneux’s problem, apparently technical and limited in scope, is historically important because it corresponded to a more general key question, which aroused lively debate in the eighteenth century: what is the role of the environment and experience in the construction of our knowledge? Locke, who inaugurated the Enlightenment in England, answered this question with his empirical philosophy: knowledge results directly from the experience of the reality of the senses. Our ideas are not of divine origin (as in Descartes) but come from perception. It is out in the world that they are found; they do not lie innate within us.
Otherwise, how can we explain the diversity of men, the ignorance of children, savages, idiots and so on? Locke was a physician and tutor to the children of an important English politician, Lord Ashley; what he was aiming at was the upbringing of a young English gentleman, but he also understood that each culture and each epoch has its own infancy. Hence his psychology of the child and of education, a corollary of his empiricism, which I shall describe later. Locke, like Montaigne, was struck by the weight of custom and habit, and by the role of circumstances. From his observations, his travels and the stories he heard, he deduced that human understanding was a matter of environment, of education – a 'reflection of nature'.

The inconstancy of the human being


Thus, in his Essay Concerning Human Understanding (1690), Locke refuted Descartes's innatist psychology (and all those that had preceded it ever since Plato). Inspired rather by Aristotle, he made the human mind a tabula rasa on which sensations are formed during childhood, shaping ideas that associate and combine, from the simple to the complex. He defined the idea as 'whatsoever is the Object of the Understanding when a Man thinks' (a representation). According to Locke, experience can be applied either to the external objects of the environment (association between sensation and idea) or to the internal operations of the mind (reflection through the association of ideas). Like Descartes, Locke thought that ideas could not be separated from consciousness.26 Becoming aware of ideas – being educated into them – is an awakening of consciousness. In Molyneux's problem, the man born blind possesses the conscious experience of an association between the tactile sensation and the idea of a sphere, but not between the visual sensation and that same sphere. Nothing in his previous experience allows him to associate the two in his mind (hence Locke's negative answer). The missing mechanism, very simply, was association. What Locke wanted to discover, in line with the Newtonian scientific spirit and in opposition to Descartes, was a simple psychological mechanism, a law regulating the functioning of the mind, the mental realm. This was the association between sensations and ideas, and the association of ideas among themselves. Thus association governed the 'world of ideas', that is, the world of psychology, just as the mechanism of gravitation theorized by Newton governed the fall of bodies and the relations between celestial bodies. Locke's work thereby answered a wish Newton had expressed in the Principia (1687): to find for the mind, as he had done for space, one, and only one, universal principle of operation.
However, this principle or mechanism of the association of ideas necessarily implies a psychology of education. This is what Locke proposes in Some Thoughts Concerning Education (1693), in which he outlines a psychology of the child. With a clean slate as his starting point, Locke understands that the powers of the mind will demand social incentives and models if they are to develop. The mechanism needs to be educated, and he insists on the role of imitation and play. A properly understood education must use games that are both free and challenging, and it must also reserve a place for spontaneous imitation. What fascinates Locke in children is their drive and enthusiasm, that childlike zest which bursts forth in action, play and even schoolwork. In action, as in play, this was, in his view, an expression of freedom. Locke had everything it took to be a child psychologist, but he did no more than touch on the essential problem: that of mental structures or frameworks. This was the main issue that Anglo-Saxon empiricism failed to address: beyond the mental content of knowledge (ideas), is there an active centre of the mind, a set of structures to which ideas cling? Locke came close to this when, in line with Aristotle, he described the child's powers of both feeling and reflecting. However, he said nothing about how these sentient and cognitive powers are structured (or structure themselves) in the mind. Returning to Descartes's innatism, Kant said that these powers were already in existence: for him, they were an a priori of understanding. In the twentieth century, Piaget discovered the logico-mathematical laws of construction of the structures (or frameworks) that operate during human development. Empiricism thus led on to constructivism. One common thread between Locke and Piaget, however, was the role played by children's action and zest for life (the élan vital that Piaget took from Bergson).

3.2 Hume: empiricism and imagination

Following on from Locke, David Hume (1711–1776), a Scottish philosopher and an acquaintance of Rousseau, completed the project of empiricism in the same scientific, mechanistic and Newtonian spirit. He insisted on our faculty of imagination, which works on the basis of sensations: we imagine ideas, and this involves a certain margin of uncertainty. In order to understand how the very powerful human imagination associates ideas with one another on the basis of experience, bonding them together (by 'attraction', as Newton would have put it), Hume described three sub-processes more precisely than Locke had done: (a) the spatial or temporal contiguity of objects in reality, a contiguity which configures our memory and the mental evocation of ideas; (b) the resemblance of each copy to the general idea; (c) the cause-and-effect relationship that underpins our belief system and our practical knowledge. According to Hume, these processes of 'assembly' can operate incredibly fast in the human mind. His empiricism was original, very cognitive in nature (memory, beliefs, etc.), and even prefigured Kahneman's System 1, which I have already mentioned in connection with Pascal (as against the pure logic of Cartesian reasoning).

3.3 Berkeley and Condillac: idealism and sensualism

George Berkeley (1685–1753), an Irish philosopher, extended Locke's empiricism more dogmatically into a so-called 'immaterialist' idealism, that is, an idealism in which everything in the world is simply a product of thought, a reconstruction of the mind. Berkeley's psychological studies focused on the visual perception of distance, abstraction and language. In France, Étienne Bonnot de Condillac (1714–1780) formulated a so-called 'sensualist' version of Anglo-Saxon empiricism in his Essay on the Origin of Human Knowledge (1746): all our understanding, from perception to judgement, is and must be derived wholly from sensations. This means that we need to reform our language – here again we must wield Occam's razor and purify our language so that it organizes sensations in such a way as to impose a precise correlation between words and things. That is what Condillac, a member of the French Academy, advocated to the scholars and philosophers of his time: they needed to express their knowledge in a correct, pure, clear language so that everyone would be able to grasp that knowledge. This was also the aim of Diderot in the Encyclopédie. Condillac wrote his Grammar, The Art of Writing, The Art of Thinking, The Art of Reasoning and other works with this aim. In his main work, the Treatise on Sensations (1754), he explained that the understanding was like an inert 'statue' that resided within us before existing in the outside world; contact with the world through the senses gave this statue life in the following order: smell, hearing, taste, sight, touch. The ideas derived from these five senses were simply sensations designated by words that represented things. These sensations developed and combined, thereby shaping the understanding. (This still leaves the problem of the mysterious inert statue from which it all began.) From Locke to Condillac, empiricism assumed various forms in order to promote the same general psychology, one that marked a revolt against the Cartesian cogito: for the empiricists, the environment shaped reason.

3.4 Kant (after Leibniz and Wolff): the a priori and the 'transcendental schema'

The question of the powers of the understanding touched on by Locke in his psychology of the child, and of the 'inert statue' within us in Condillac, reveals the difficulty empiricism encounters in giving an account of the frameworks of the mind, which can structure and perhaps even precede the experience of sensations. Without this, the associations of ideas through the empirical connections of contiguity, resemblance, or cause and effect, as in Hume, remain contingent, 'scattered around' by circumstances without any real active centre (the self, or cogito, in Descartes's phrase). This is where Leibniz, Wolff and Kant contributed to the debate. The German mathematician and philosopher Gottfried Leibniz (1646–1716)27 refuted point by point Locke's thesis on the non-innate nature of ideas (his critique promoted the view that there are Ideas that exist independently of us). According to Leibniz, the understanding is innate and allows us to process the data of experience through 'necessary ideas': that is why he answered 'yes' to Molyneux's problem. By definition, these ideas are not contingent; that is, they do not depend on environmental circumstances, contrary to what the empiricists believed. Moreover, Leibniz observed that when the understanding, endowed with its necessary ideas (the self, the cogito), meets the perceptual world, representations emerge which are not always very clear, mixed with an infinity of 'little perceptions' that can confuse consciousness and reflection and sometimes even elude them altogether. This view was quite different from that of Locke, who supposed that ideas derived from the senses could not be separated from consciousness. In the end, one has the impression that, for Leibniz, the role of the environment was not positive or, at least, not as capable of structuring mental life as the empiricists seemed to suggest.
A pupil of Leibniz, Christian Wolff (1678–1754), then published a two-part treatise on psychology, Psychologia empirica (1732) and Psychologia rationalis (1734), which forged a remarkable methodological synthesis, paving the way for Kant. As these titles indicate, Wolff distinguished between two types of psychology. One is empirical: it must be based on external or internal observation (through introspection) and lay bare the laws of human conduct and the faculties of the soul by measurement and calculation, as in physics. Wolff called this, very prophetically, 'psychometry', heralding Fechner's psychophysics in the next century (see Chapter 4). The other psychology is rational and belongs to pure reason: it enables us to determine a priori, by reasoning, as in algebra or geometry, what the faculties of the soul must be. Wolff advocated combining these two psychologies; the second kind is also able to feed on data known a posteriori. Thus almost all future psychology found itself already defined.

After Leibniz and Wolff, their disciple Kant formulated propositions that are still current in today's psychology, in line with a tradition that went back to Plato via Descartes. He said that pure concepts exist in us innately, as mental frameworks or 'categories of the understanding'; these do not come from the sensible world (the environment of the empiricists). Kant, following Wolff, gave the name 'pure reason' to so-called 'transcendental', metaphysical knowledge, which is superior to and outside of the world. This exists a priori, independently of our sensations. These necessary and universal cognitive principles relating to space, time, number and so on are in us from birth, but only sensible experience, from infancy to adulthood, can reveal them. Thus, according to Kant, neither reason alone (rationalism) nor sensations alone (empiricism) can make it possible for us to know the world. The intermediary here is the 'schema', which links pure, innate concepts with intuitions – for example, intuitions of space, time and number – that are tied to sensible experience and its representations.
Thus the schemas, or frameworks, of the mind allow us to make judgements about reality. This notion of a schema would be made famous in the twentieth century by Piaget, who made it the basic unit of his theory of intelligence in the child, though he mainly conceived of schemas of action, in a constructivist rather than a transcendental or innatist sense. For what Kant posited as the starting point, namely pure concepts, corresponded precisely to what the child must construct in Piaget's theory (see Chapter 4). Other current programmes in the non-Piagetian cognitive sciences are more innatist and follow Kant in considering the possibility of a priori mental frameworks which sensible experience reveals from infancy to adulthood. This is the case, as its explicit title indicates, with Dehaene and Brannon's (2010) 'Space, Time, and Number: A Kantian Research Program'. Kant's aim in the eighteenth century was to use these frameworks to define what reason can and cannot do (hence the expression 'critique of pure reason'). He excluded in particular the idea that reason could prove the existence of God, which he viewed as a matter of belief or dogma. Finally, it is to Kant, a man of the Enlightenment, that we owe a very psychological slogan that summarizes the whole century: 'Have the courage to use your own understanding!'



4. From the Enlightenment to the nineteenth century

The path travelled since the Renaissance shows that the history of psychology has been punctuated by three reforms: religious, cognitive and political. The first of these was the religious Reformation, which shook the sixteenth century: Montaigne's psychology of inconstancy was an avowed counterpart to it, denouncing fanaticism and intolerance. For the author of the Essays, the psychological gaze, turned in on itself, became secular. In the seventeenth century, cognitive reform began with Descartes, the discoverer of the cogito, and Pascal, the critic of reason, who promoted finesse against geometry; but both were still reckoning without a mechanistic science (that of Copernicus, Galileo and Newton) that had not yet affected psychology. We had to wait for empiricists like Locke and Hume to move on from the laws of space to those of the mind. Like nature, the mind itself could now be observed, analysed and understood from an external scientific perspective. The new explanation, however, still lacked 'frameworks', and Kant restored them: pure concepts and innatism, schemas for sensible experience. Kant thus saw rationalism and empiricism as two sides of the same coin. While his approach was metaphysical, the repercussions of the a priori on psychology were nonetheless great, forming a kind of tradition leading from Plato to Descartes and then on to the innatism of the current cognitive sciences. Political reform, with the French Revolution and the subsequent changes of regime, marked the moment of self-affirmation of the common people, but also of the individual human being. In accordance with the Enlightenment, it posited an individual endowed with reason but also, through the anti-Enlightenment reaction, an individual with states of mind, feelings and an introspective bent that could be both spiritual and religious.
This return to the self corresponded to the ‘Romantic’ attitude, to the exaltation of the ego by sensibility and imagination against the immense backdrop of nature: thus, the Romantics waxed lyrical over the Alps, the exoticism of America and so on. This trend, born in Germany and England at the end of the eighteenth century, was to spread in a vast cultural movement (Romantic literature, painting, music, etc.) throughout nineteenth-century Europe – beginning, in France, with Chateaubriand and Lamartine. The most interesting aspect for the history of psychology is that, ever since the eighteenth century, the conception of man has had ‘two faces’ (Parot & Richelle, 1992). The one face, luminous and rational, is the object of today’s experimental cognitive psychology; the other face, darker and more emotional, plunges into the depths of human singularity, and is the object of clinical psychology and psychoanalysis. This new view of the human mind thus has its own light and dark, its own chiaroscuro.

Notes

1 As we have seen, there had already been several 'little Renaissances' (in the sense of a return to antiquity) in the Middle Ages.



2 Francis I was an innovative king who issued the edict of Villers-Cotterêts (1539), which established French as the official language of law and administration (rather than Latin). The cultural and political roots of the psychology of the French language lie here.
3 The work of the Dutch writer Erasmus of Rotterdam (1467–1536), however, shows that humanism and the Reformation were originally linked. For him, the two sources of human wisdom were ancient literature and the Bible.
4 In this work, Rabelais expressed some eloquent views on the psychology of war, illustrated by the hasty decisions of the wicked king Picrochole, and on social psychology in connection with the mentality of the mob – witness the common French phrase les moutons de Panurge, used to lambast the 'sheep' who follow one another mindlessly.
5 See Château (1977).
6 Montaigne, Essais, ed. P. Villey and V. Saulnier, additional material M. Conche (Paris: PUF, 2004), p. 378.
7 In the twentieth century, Ignace Meyerson (1888–1983) followed the same path as Montaigne, studying man through his works; see I. Meyerson, Existe-t-il une nature humaine? Psychologie historique, objective, comparative (Paris: Éditions Sanofi-Synthélabo, 2000).
8 Montaigne, Essais, p. 929.
9 D. Kahneman, Thinking, Fast and Slow (London: Allen Lane, 2011); O. Houdé, Apprendre à résister (Paris: Le Pommier, 2014).
10 Montaigne, Essais, p. 157.
11 Ibid., p. 150.
12 Ibid., p. 559.
13 Ibid., p. 822.
14 Not to be confused with the Roger Bacon of the Middle Ages.
15 Descartes decided to place the junction between the soul and the body in the pineal gland, for, unlike the other parts of the brain, there is only one such gland, which is in this respect like the indivisible soul.
16 This focus on the I in the cogito (self-consciousness, interiority) was contemporary with the social individualism of Thomas Hobbes (1588–1679).
17 The Rules for the Direction of the Mind were not published in the author's lifetime; there are several English translations.
18 We now know that this gland, the epiphysis, produces hormonal secretions, especially melatonin, which play a role in the sleep/waking cycle, as well as in sexual behaviour. We are far from the soul envisaged by Descartes.
19 B. Pascal, Pensées, ed. L. Brunschvicg (Paris: Classiques Hachette, 1904–1914), p. 458.
20 In this wager, Pascal endeavoured to demonstrate that man, in his ignorance, has much to gain by betting on God's existence: we have nothing to lose and everything to gain. This involves a psychological calculation.
21 In that same period, La Fontaine, in his fable 'An Animal in the Moon', followed St Augustine in showing how reason must always correct the illusions of our senses (Fable VII, 18, in Fables, ed. J.-P. Collinet (Paris: Gallimard, 'Bibliothèque de la Pléiade', 1991)): 'When the water bends a stick, my reason straightens it,/Reason decides like a mistress,/Thanks to this help, my eyes/Never deceive me, though always lying to me.'
22 In contemporary economics, System 2 corresponds to 'rational choice' theory.
23 This emancipation was a corollary of the French materialism defended by Julien Offray de La Mettrie (1709–1751).
24 W. Molyneux, 'Letter to John Locke', 7 July 1688, in E. S. de Beer (ed.), The Correspondence of John Locke (Oxford: Clarendon Press, 1978), vol. 3, n. 1064.
25 D. Diderot, Lettre sur les aveugles à l'usage de ceux qui voient (Paris: Flammarion, GF, 2000 [1749]).



26 However, he turned the argument against Descartes by emphasizing, with this hypothesis, the paradox that we would then have innate ideas of which we are not conscious.
27 It was thanks to Leibniz that George Boole (1815–1864) came up with his ideas about algebra, which, a century later, transformed the 'logic of words' into calculations using abstract signs.

Bibliography

Berthoz, A. (2016). The Vicarious Brain, Creator of Worlds. Cambridge: Harvard University Press.
Château, J. (1977). Les grandes psychologies modernes. Liège: Mardaga.
Damasio, A. (2003). Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. Orlando: Harcourt.
Damasio, A. (2018). The Strange Order of Things: Life, Feeling, and the Making of Cultures. New York: Pantheon Books.
Damasio, A. and Carvalho, G. (2013). The nature of feelings: Evolutionary and neurobiological origins. Nature Reviews Neuroscience, 14, 143–152.
Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Viking.
Dehaene, S. and Brannon, E.M. (2010). Space, time, and number: A Kantian research program. Trends in Cognitive Sciences, 14, 517–519.
Held, R. et al. (2011). The newly sighted fail to match seen shape with felt. Nature Neuroscience, 14, 551–555.
Jacomuzzi, A. et al. (2003). Molyneux's question redux. Phenomenology and the Cognitive Sciences, 2, 255–280.
Maurer, D. et al. (2005). Missing sights: Consequences for visual development. Trends in Cognitive Sciences, 9, 144–151.
Parot, F. and Richelle, M. (1992). Introduction à la Psychologie. Histoire et Méthodes. Paris: PUF.
Proust, J. (ed.) (1997). Perception et Intermodalité. Approches Actuelles de la Question de Molyneux. Paris: PUF.

4 TOWARDS A SCIENCE OF PSYCHOLOGY

The nineteenth and twentieth centuries

The nineteenth and twentieth centuries witnessed a quickening of pace in the history of psychology: names, places, concepts and methods proliferated in the process of acceleration and profusion often associated with the birth of a new academic domain. This included the development of university teaching and the creation of research laboratories and other institutions all over the world: in France, Germany and Austria, Russia, Britain and the United States. From the Sorbonne and the Collège de France to Harvard University, the science of the soul (or, for some, the science of 'behaviour' alone) that had been taking shape since antiquity here completed its historical-philosophical phase and achieved the status of a true modern science.

In France, this was a period of political instability, disappointment and violence following the Revolution: the dream of 'Enlightenment' and reason was shattered by the Terror. From 1799, a new strong man came to power: Napoleon Bonaparte, the major figure in the rise and fall of a First French Empire that confronted an alarmed Europe and ended at Waterloo in 1815. There followed a succession of different regimes, until the consolidation of the democratic idea of the 'Enlightenment' with the Third Republic in 1875. The eighteenth and nineteenth centuries were also the period of the Industrial Revolution and the spread of railways, factories, electricity and automobiles. Britain was the first country to take advantage of this revolution, when James Watt (1736–1819) developed his steam engine. Meanwhile, the English colonists of the American continent declared independence in 1776, and the United States of America was created; however, it was not until the First World War that the United States became the world's leading power. Harvard University (so named in 1780), near Boston, had already been established in 1636.
Further scientific progress followed: as early as 1795, the Institut de France became the home of the Academy of Moral and Political Sciences, which still exists (after various vicissitudes). The term 'moral' in its name needs to be understood as having, among others, a psychological dimension. The physiologist Pierre Jean Georges Cabanis (1757–1808), a member of this new academy, stressed that psychology was as necessary to the moralist as to the physician (which he was), because the brain is an 'inner man' who inhabits the behaviour and affections of the 'outer man'. Auguste Comte (1798–1857), meanwhile, directed the thought of his age away from metaphysical speculation towards positivism (the observation of so-called 'positive' facts), which corresponds to what are now called the 'hard' or exact sciences, such as mathematics and physics. Psychology did not rank highly in his 'scale of the sciences' (where mathematics was queen, coming before astronomy, physics, chemistry and biology), but Comte did nevertheless give one example of a positive psychology, namely the 'physiological phrenology' of his contemporary Gall (discussed below). In biology, the revolution came from Britain, in the shape of Darwinism, the essential evolutionist root of contemporary psychology (man and ape had a common ancestor – a discovery which left Queen Victoria quite unamused). In France, Claude Bernard (1813–1878), a professor at the Collège de France and the Sorbonne and a member of the French Academy, developed a new experimental method for physicians and physiologists. This method involved analysing variables and formulating a hypothesis, constructing a set of experiments and developing a precise procedure for observing the facts before coming to a conclusion. As his academic positions testify, Bernard's hypothetico-deductive method of observation had an impact far beyond medicine and science – for example, on Émile Zola's The Experimental Novel, which Zola linked directly to Bernard's method.
At the beginning of the nineteenth century, Maine de Biran (1766–1824) noted in his diary that psychology – a term he popularized – was a science of internal facts of a particular kind. In the 1870s, however, it was Wundt's work in Leipzig, Germany, that made possible the measurement of 'reaction times' by precise experimental means; this technique would soon spread worldwide, and it is still used today. It fulfilled all the criteria of a true science: positive facts, as Comte had demanded, and the experimental method dear to Bernard. Francis Bacon's desire to associate theory with practice was finally realized. But which theory was involved here? Shortly afterwards, 'official psychology' gained recognition through the first International Congress of the discipline (then called 'physiological psychology'), held in Paris from 6 to 10 August 1889, in association with the Universal Exhibition and the building of the Eiffel Tower. Like that 'iron lady', official psychology was a construction that would prove enduring.

1. Towards the birth of 'official psychology'

Let us begin in the nineteenth century, in France, with the celebrated phrenology of Gall, before discussing the evolutionism of the French naturalist Lamarck and the English naturalist Darwin, author of The Origin of Species – a revolution comparable to that of physics in the preceding centuries with Copernicus, Galileo and Newton. I will then describe the birth of the German tradition of psychophysics and the experimental psychology of measurement, derived from physiology, with Fechner and Wundt and the creation in 1875 of the first ever psychology laboratory, in Leipzig. This was followed in 1889 by the founding of the first French laboratory at the Sorbonne in Paris under the impetus of Ribot, then at the Collège de France, a laboratory that Binet directed from 1895.1 In the United States in the same period, William James wrote his Principles of Psychology (1890) and established the laboratory at Harvard.

1.1 Gall and phrenology: cerebral mapping

Franz Joseph Gall (1758–1828), trained in medicine in Vienna, developed a theory called 'organology' or 'phrenology' that associated each mental function with a particular part of the cerebral cortex, which was considered to be the organ that carried out those functions. He thus established a model of the human brain in which twenty-seven faculties, mental and moral, were located on the cortex, as on a map; certain protuberances of the skull (detected by so-called 'cranioscopic' examination) were held to correspond to the degree of development of these faculties in particular individuals. These faculties included verbal memory, the meaning of words and consciousness, as well as love of authority, pride, metaphysical spirit, poetic talent and piety. This evidently fanciful model was the object of an immense infatuation (it was at the peak of its prestige in the 1830s) and gave birth to various charlatanisms: judgement by examination of facial features, predictions of the future, the search for the bump on the skull corresponding to the 'gift for maths', and so on. Phrenology became discredited. Nevertheless, Gall's scientific intuition was correct on two points: (a) there is no function without an organ (in accordance with the new biological theories); and (b) the cerebral and cognitive functions, in the broadest sense, can be localized – from the invisible to the visible. In spite of the vagaries of phrenology, this quest for cerebral localizations was pursued in neuropsychology by Paul Broca (1824–1880), a surgeon at the Bicêtre hospital, near Paris. In 1861, he provided the first scientific proof that the faculty of language is localized in the left inferior frontal gyrus, now known as 'Broca's area' – a discovery that he owed to post-mortem observation, in this case the autopsy of a patient who had lost the use of speech (he could only utter the syllable 'tan').
As early as 1848, another neuropsychological case had occurred which has since become famous: that of Phineas Gage, reported by the physician John Harlow. An explosion at a railway construction site in New England drove a large iron rod through Gage's skull and brain. It was only in the 1990s, a century and a half later, that Hanna and Antonio Damasio, working from Gage's skull preserved at Harvard University, were able to use new computer-assisted imaging techniques to reconstruct the trajectory of the rod in three dimensions. They located the area that had been destroyed, namely the right ventromedial prefrontal cortex, now known to be the centre of emotional guidance in processes of decision-making. The medical report of 1848 indicated that, although Gage had survived, he had indeed lost his ability to make decisions. Thus Gall's phrenology (if we set aside its fanciful side and the charlatanism associated with it) had been groundbreaking in the nineteenth century in its insistence on the localization of mental functions. This localization is now taken more seriously and studied more scientifically,2 thanks to in vivo cerebral imaging, which allows accurate neurocognitive mapping of the human brain, even in its emotional and social dimensions. Moreover, in The Modularity of Mind, Jerry Fodor (1983) referred back to Gall in his discussion of innate, anatomically localized peripheral modules, such as language and colour vision, that can be studied experimentally in cognitive psychology. But Gall was persecuted by the Church, because the soul and spirituality could not be localized. He left Vienna and went into exile in Paris, where he was detested by Napoleon I, according to whom individual functions needed to be under the control of a supreme authority rather than parcelled out anatomically.

1.2 Darwin and evolution: biology, heredity, psychometry and statistics For Aristotle the living world was fixed, a scala naturae (‘scale of beings’: see Chapter 1) created by God. At the summit of this scale stood man, in an eternal universe close to the concepts of Socrates and Plato, and not in a transformist world in constant evolution as would be discovered by Buffon, Lamarck and Darwin in the scientific revolution of the nineteenth century, in whose wake we are still living. At this time, the term ‘biology’ was forged from the Greek bios, ‘life’, and logos, ‘rational discourse’, as a twin of psychology. Georges-Louis de Buffon (1707–1788) had already introduced the idea of a natural history freed from religious beliefs. Then his heir Jean-Baptiste de Lamarck (1744–1829), a zoologist at the Museum of Natural History in Paris, proposed a transformist theory of the evolution of living beings, from a mechanistic and materialistic point of view. In his view, (a) the organization of living beings becomes more complex under the effect of an internal mechanism (the metabolism); and (b) these living beings diversify, become more individual, under the effect of an external mechanism: they adapt to the environment and to circumstances, for example the giraffe’s neck that stretches to reach high leaves. According to Lamarck, these modifications were transmitted to offspring by the inheritance of acquired characteristics. After Lamarck, the English naturalist Charles Darwin (1809–1882), who paid tribute to him, would radically revise the French naturalist’s theory. In The Origin of Species (1859), Darwin formulated the hypothesis – now accepted – of a general mechanism of natural selection that applies to populations and is based on two principles. The first principle is the variation of characteristics, generation after generation (the genetic origin of this random variation would be understood only


History of theories about thinking

after Darwin, thanks to Mendel’s laws). The second principle is the selection by survival and reproduction of those who have (by chance) the combination of characteristics best suited to their environment. The effect of the environment is therefore indirect, and there is no longer, as in Lamarck, a heredity of acquired characteristics (by daily habit). To this, Darwin adds a world-shattering new idea: all living things have a common ancestry, because life on earth has a unique origin, from bacteria and blue-green algae to man. Thus Darwin introduced into science and psychology the idea of a natural evolution of animal and human intelligence through phylogenesis or the evolution of species,3 a process lasting millions of years, in which matter, life and thought are intertwined – thus excluding God from scientific explanation (and, by the same token, ruling out the divine-innate ideas of Descartes and Plato). In The Expression of the Emotions in Man and Animals (1872), Darwin reported detailed studies of facial expressions and of the emergence of language in children. The baby he observed was his own son, Doddy Darwin. This was the century of the first monographs devoted to children: the French historian Hippolyte Taine (1828–1893) and the English physiologist William Preyer (1841–1897) both reported observations taken from their own children.4 In the twentieth century, this idea of an evolution of intelligence was taken up again in the study of ontogeny – the idea that, from babyhood to adulthood, body and mind evolve. Piaget developed this in the psychology of the child’s cognitive development and Changeux in neurobiology (his ‘neural-mental Darwinism’). According to the latter theory, Darwin’s variation-selection mechanisms also operate in the brain itself, affecting cognitive representations within neuronal populations – a theory that Changeux shares with the Nobel Laureate in Physiology or Medicine, Gerald Edelman (1929–2014).
In fact, all psychology has to consider two time scales: phylogenesis, or the evolution of species (Darwin), and ontogenesis, that is, development from baby to adult (including embryogenesis). There is also cognitive ‘microgenesis’, which corresponds to the much shorter time taken up by learning or by the brain when solving a task (months, days, hours, minutes, fractions of seconds) throughout life. Evolutionary dynamics operate within these time scales, which form the ‘Russian dolls’ of psychology. It should be added that even back in the nineteenth century these concepts of evolution prompted theories about society and human heredity that are now seen as questionable. ‘Social Darwinism’ (selection of the fittest) was promoted by the philosopher and sociologist Herbert Spencer (1820–1903), whom Darwin did not rate highly; and a highly dangerous pseudoscientific political ideology, eugenics (the artificial selection of geniuses to improve the race), was advocated by a cousin of Darwin, Francis Galton (1822–1911). Within this context, however, Galton went on to develop differential psychology. A research laboratory was set up in London in 1884 to develop various tests to measure differences in aptitude and thus identify ‘genius’ as well as statistical instruments for comparing individuals (inter-individual differences) and ranking them against a norm or average. While working in this anthropometric laboratory

Towards a science of psychology


over a number of years, Galton and then Karl Pearson (1857–1936) and Ronald Fisher (1890–1962) invented the methods of correlation (Galton, Pearson) and the analysis of variance (Fisher). This marked the simultaneous birth of modern psychometry and statistics. In the same vein, the discovery of factor analysis by Charles Edward Spearman (1863–1945) shortly afterwards, after observing that the results of various intelligence tests correlated with each other, prompted his theory of the ‘g factor’ or so-called ‘general’ intelligence.5 Thus, in addition to Darwin’s biological contribution to developmental psychology, nineteenth-century England fostered the differential and more broadly statistical (mathematical) dimension in modern psychology. At the same time, Germany saw the creation of the first experimental psychology laboratory.
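The statistical tools invented here can be illustrated in miniature. The sketch below (in Python, obviously anachronistic; the test scores are invented for illustration) computes the Galton–Pearson correlation coefficient and shows the kind of positive correlation between test scores from which Spearman inferred his ‘g factor’:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient (Galton, Pearson)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores of five people on two different aptitude tests:
verbal = [10, 12, 14, 16, 18]
numeric = [11, 13, 13, 17, 19]

r = pearson_r(verbal, numeric)  # close to +1: the tests 'go together'
```

A value of r near +1, repeated across many pairs of tests, is exactly the empirical pattern that prompted Spearman to posit a common ‘general’ factor behind them.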

2. The first psychology laboratories and the founding fathers of the twentieth century
Following developments in the study of physiology in nineteenth-century Germany, in particular those of Hermann Helmholtz (1821–1894), who specialized in the speed of conduction of nerve impulses and in auditory and chromatic perception, Gustav Fechner (1801–1887) and Wilhelm Wundt (1832–1920) created the first psychology laboratory, in Leipzig in 1875.

2.1 Fechner and Wundt: psychophysics and the Leipzig experimental psychology laboratory
It was Fechner who first had the intuition of a fundamental union between mind and matter, hence between psychology and physics, a union that was numerical and measurable. He founded ‘psychophysics’, and sought to discover laws that connected the intensity of a sensation subjectively perceived with the intensity of the stimulation that had caused it. Thus, the ‘Weber–Fechner law’ predicts that the subjective sensation is proportional to the logarithm of the intensity of the stimulus. Its unit of measurement is the smallest difference in perceived intensity (‘just noticeable’) between two excitations or stimuli: this gives the psychologist a relative differential threshold. A psychophysical law of this type has recently been rediscovered by Dehaene for the comparison of numbers: the difference seems to be greater between two small numbers like 2 and 3 than between 97 and 98.6 Wundt, an admirer of Fechner and the heir to his aim of measuring these phenomena, founded the Leipzig laboratory and, using the ideas of a Dutch ophthalmologist, developed the study of ‘reaction times’, hypothesizing that there was a parallelism between psychological facts and neural facts. He measured the exact time (in fractions of seconds) between the administration of a sensory stimulus, such as a sound or image, and the motor reaction of the subjects of the experiment, often lab colleagues or students. This measurement was carried out under controlled and standardized conditions using specialized instruments: chronometers, metronomes and so on. The ‘measurement of the soul’ that had been foreseen since ancient times (as with the



psychopomp) was thus being realized. Claude Bernard’s method could therefore be used to vary the characteristics of what was presented to subjects and their instructions (what they were asked to do). Experimental laboratory psychology was born. In addition, Wundt developed a so-called ‘experimental introspective’ method, in which subjects were trained in introspection and produced, sometimes after 10,000 tests, reports that could be considered reliable. Wundt’s object of study remained the apprehension of mental facts, consciousness, but he wanted their analysis to be supported by rigorous recording and the repeatability of the experiments, as in science. Using these two techniques, he explored very diverse fields of psychology, ranging from the simplest sensations to reasoning, via attention and affectivity. Young experimental psychologists from all over the world (especially the United States) came to be trained in Wundt’s laboratory in Leipzig. At the same time, another German psychologist, Hermann Ebbinghaus (1850–1909), also inspired by Fechner but working in more isolated circumstances, used himself as a subject and invented the experimental psychology of memory (several elements to be memorized, repetitions, methods of learning, etc.). He was the founder of this field of research.
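The Weber–Fechner law described above lends itself to a small numerical sketch. The Python code below is only an illustration, with arbitrary constants; the Weber fraction follows the D/n1 form given in note 6. It shows why 2 versus 3 is an ‘easier’ comparison than 97 versus 98:

```python
import math

def fechner_sensation(intensity, k=1.0, threshold=1.0):
    """Weber-Fechner law: subjective sensation grows as the logarithm
    of physical intensity (k and threshold are arbitrary constants)."""
    return k * math.log(intensity / threshold)

def weber_fraction(n1, n2):
    """Relative difference D/n1, with D = |n1 - n2| (note 6): the
    larger the fraction, the easier and faster the comparison."""
    return abs(n1 - n2) / min(n1, n2)

# The same physical step of 1 feels smaller at higher intensities ...
step_low = fechner_sensation(3) - fechner_sensation(2)
step_high = fechner_sensation(98) - fechner_sensation(97)

# ... which is why 2 vs 3 (fraction 1/2) is subjectively 'further
# apart' than 97 vs 98 (fraction 1/97):
easy, hard = weber_fraction(2, 3), weber_fraction(97, 98)
```

Response times in number-comparison experiments track this fraction: the smaller it is, the slower and more error-prone the judgement.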

2.2 Ribot and Binet: French scientific psychology and the Sorbonne laboratory
The new scientific logic from Germany was introduced into France by Théodule Ribot (1839–1916), the author of La Psychologie allemande contemporaine (Contemporary German Psychology, 1879), a follow-up to his La Psychologie anglaise contemporaine (Contemporary British Psychology, 1870), and the holder of the first chair of psychology at the Collège de France, in ‘experimental and comparative psychology’ (1888–1896). With the support of Louis Liard, Director of Higher Education at the Sorbonne, Ribot provided the impetus for the creation of the first French psychology laboratory at the university, led first by Henri Beaunis from 1889 to 1894 and then by Alfred Binet from 1895 to 1911. In addition to this pioneering role in French scientific psychology at the institutional level, at the end of the nineteenth century Ribot published a large number of studies on a wide range of themes (he was similar to Wundt in his eclecticism): these themes included memory, the will, personality, reasoning, attention, people with phenomenal calculating ability, and outstanding chess players. A particularly valuable book is his La Psychologie des sentiments (The Psychology of the Emotions, 1897), supplemented by La Logique des sentiments (The Logic of the Emotions, 1904), prefiguring the most innovative work in the neurosciences, in particular that of Antonio Damasio: The Feeling of What Happens: Body and Emotion in the Making of Consciousness (1999) and Self Comes to Mind: Constructing the Conscious Brain (2010). In his works, Ribot was already insisting on the observation and experimental scrutiny of evidence of consciousness in the case of pathologies both neurological and psychological.



After Henri Beaunis had equipped the Sorbonne’s laboratory with the most up-to-date instruments (such as those used by Wundt in Germany), his successor Alfred Binet (1857–1911) conducted research there on the measurement of reaction times, combining psychometry and psychophysics (with ‘delicate tools’ and a ‘special installation’, he said). However, Binet would achieve international fame a little later, and by another route. In 1881–1882, Education Minister Jules Ferry (1832–1893) instituted the free and compulsory French republican primary school, which meant that psychologists now had an urgent new task: could they detect in good time children with learning difficulties due to some intellectual disability? In 1905, with this in mind, Alfred Binet and Theodore Simon developed the first mental age test: a metric scale of intelligence called the Binet–Simon test. This was the basis for the invention, a few years later, of the intelligence quotient (IQ) by the German psychologist William Stern (1871– 1938) following the formula: IQ = mental age/real age × 100. In this calculation, emblematic of psychometry, normal intelligence is set at around 100 (plus or minus 15 or 30 depending on the criterion). This came close to the differentialist concerns found in British statistical psychology.
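Stern’s ratio formula is simple enough to compute directly; the ages below are hypothetical, chosen only to illustrate the calculation (a minimal Python sketch):

```python
def ratio_iq(mental_age, real_age):
    """William Stern's ratio IQ: mental age divided by real
    (chronological) age, multiplied by 100."""
    return mental_age / real_age * 100

# A hypothetical 8-year-old who succeeds on the Binet-Simon items
# typical of 10-year-olds has a mental age of 10:
advanced = ratio_iq(mental_age=10, real_age=8)  # 125.0
# When mental age equals real age, IQ is the 'normal' 100:
typical = ratio_iq(mental_age=9, real_age=9)    # 100.0
```

Note that the ratio formulation was later replaced by deviation scores (position relative to the mean of one’s age group), which is why modern tests still centre on 100.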

2.3 William James at Harvard
Working at the same time as Ribot in the United States, William James (1842–1910) founded American scientific psychology, creating the first psychology laboratory at Harvard (1891) just after publishing his Principles of Psychology (1890). He elaborated the theory of the stream of consciousness, emphasizing its continuous character, and the so-called ‘James–Lange’ theory of emotions, according to which the direct consciousness of physiological or bodily modifications produces emotions in us: I see a danger, I tremble, and this creates fear. Damasio would later show that it is not the perception of the body itself, but neural maps reconstructed in the brain that process this information coming from the body via ‘loops of simulation’. James paved the way in the United States for the exciting field of the experimental study of human consciousness and emotions, which current cognitive neuroscience has taken up with greater precision.7

2.4 Pavlov, Skinner and Watson: animal conditioning and behaviourism
However, not all of James’s students followed this path: Edward Thorndike (1874–1949), for example, studied the learning curves of caged animals, discovering the laws of exercise and effect – a minimal psychology of stimulus-response. Burrhus Skinner (1904–1990), who invented the conditioning of caged rats, further explored this law of reinforcement by reward: the rat now had to act (to press on a lever) in order to learn. This was type II (response) conditioning, the opposite (or complement) of type I, which the Russian psychologist Ivan Pavlov (1849–1936) had already discovered in experiments with dogs at the beginning of



the twentieth century. His dogs salivated at the mere sight of food or of the person who brought it, a phenomenon which he called ‘psychic secretion’, different from the secretion caused by hunger. Pavlov received the Nobel Prize in Physiology or Medicine in 1904 for his work on digestion. The discovery of these basic laws of animal learning – ever since Darwin, we have known that man is an animal – contributed to the formation, in psychology, of the so-called ‘behaviourist’ trend founded on the objective study of behaviour alone (mainly on the white rat used in laboratories), of which John Broadus Watson (1878–1948) was a powerful spokesman in the United States. This animal psychology in fact ignores language, and therefore introspective reports (even when controlled) and consciousness as well – although it does not contest the latter’s existence. It also had educational ambitions, as had been the case with the empiricist psychology of Locke in the seventeenth century: give me any child, says Watson, and with ‘my own specified world’ (in which he controls all the environmental conditioning), I will make him an expert in any field. Watson went too far, but behaviourism, often highly criticized in the history of psychology, did nevertheless point to the fundamental mechanisms of learning by conditioning (type I or II) that remain valid today, both for studies in cognitive psychology and work in the neurosciences.

2.5 Janet and Freud: clinical psychology, the subconscious and the unconscious
In France, things were quite different. Following Ribot, Pierre Janet (1859–1947) developed a psychology of what he called conduct. After training under Jean-Martin Charcot (1825–1893) at the Salpêtrière hospital in Paris – a prominent school for hypnosis and the study of hysteria – Janet founded French psychopathology. Like Ribot, his master, he felt that the clinical study of mental illnesses was a good method for understanding normal mental life. This ‘clinical method’ analysed ‘conduct’, a term he preferred to ‘behaviour’ as it was less reductive and more subjective, covering as it did all the acts of an individual, from the simplest (ideomotor phenomena and other movements) to the most complex (reasoning, moral judgement and logic), as well as the underlying personality structure. Janet also introduced the term ‘subconscious’ in his book L’Automatisme psychologique (Psychological Automatism, 1889), prefiguring Freud’s ‘unconscious’ and the study of cognitive automatisms in experimental psychology. These psychic processes were not accessible to the conscious subject. Janet raised the profile of French psychology considerably, for in 1902 he succeeded Ribot at the Collège de France, supported by the philosopher Henri Bergson (1859–1941), who opposed Binet’s candidature and shared Janet’s clinical vision of psychology and consciousness. As mentioned above, the Austrian doctor and neurologist Sigmund Freud (1856–1939), who was also inspired by Charcot, went on to found the parallel discipline of psychoanalysis, which met with great success throughout the world. This prescribed a treatment in which the patient must freely verbalize his



thoughts and associations of ideas in order to get round resistances and unblock what had lain repressed in his unconscious since childhood, creating intrapsychic conflicts (including neuroses such as obsessions, anxieties and phobias). The Oedipus complex (often left unresolved), as mentioned in the discussion of Greek mythology in Chapter 1, is one example.8 Other psychoanalysts pursued these investigations and intuitions, particularly Carl Gustav Jung (1875–1961) in Switzerland, who developed the notion of the collective unconscious, and in France, Jacques Lacan (1901–1981), with his view that the unconscious is structured like a language, and Didier Anzieu (1923–1999), with his notion of the ego as a kind of skin (‘the Skin-Ego’). They all contributed to a clinical, intuitive and resolutely ‘in-depth’ approach to the human psyche – complemented by the development of projective techniques (spontaneous responses to ambiguous stimuli) such as the test developed by and named after the Swiss psychiatrist Hermann Rorschach (1884–1922). This was all similar to the ‘cure of souls’ practised in Cos, the island of Hippocrates, in antiquity.

2.6 Piéron, Fraisse and Piaget: towards experimental cognitive psychology
The other influential branch of French experimental psychology in the Ribot heritage was that of Henri Piéron (1881–1964), also elected professor at the Collège de France in 1923, where he held the chair of ‘Physiology of Sensations’. He was particularly interested in the physiology of sleep and hypnotoxins, which he studied in dogs. Piéron’s career was also very political, in the institutional sense: Binet’s premature death in 1911 was Piéron’s chance to succeed him as director of the Sorbonne psychology laboratory, with the support of the rector Liard. In 1920, again with Liard’s support, he founded the Institut de Psychologie, the first university institute for psychology in France, which now bears his name, and in 1928 the Institut national d’orientation professionnelle (INOP, or National Institute for Vocational Guidance), which would become the current INETOP. Piéron undertook research, lectured at the Collège de France and published on a wide range of subjects: vision, hearing, space, skin sensitivity and pain, always viewed from a psychophysiological angle. In his laboratory for experimental psychology, of which Paul Fraisse (1911–1996) became director in 1952, a great number of psychology researchers and teachers were trained, from all across France. In a speech given in 1964 for the seventy-fifth anniversary of the Sorbonne laboratory, Piéron said of it, in tribute to its founders Beaunis and Binet (and a little bit in tribute to himself, too):

I can show that all the successes achieved by French psychology – and they are numerous – could only have been obtained thanks to the presence of the Sorbonne psychology laboratory, a prestigious name that has registered many successes in all fields of scientific psychology.9



It was also in this laboratory (or more exactly in a joint laboratory for pedagogy in Paris) that the young Swiss psychologist Jean Piaget (1896–1980) came to train with Binet, administering intelligence tests and very quickly discovering his vocation, with an approach that was very different from Binet’s psychometry. It was more fundamental, studying the paths of reasoning in children, the logic of their errors and, consequently, the structures and stages of cognitive development (a post-Darwinian ontogenesis). This new approach was the basis of Piaget’s genetic psychology and epistemology. Piaget’s contribution to the history of psychology, at the University of Geneva (and at the Collège de France in 1942 and at the Sorbonne from 1952 to 1963), was to produce a real synthesis between empiricism (from Aristotle to Locke and Hume) and innatism (from Plato to Descartes and Kant). He rejected both trends as too simplistic and proposed a third intermediate way: constructivism. In his experimental and clinical studies involving children, he demonstrated the construction of psychological structures, from the stage of sensorimotor schemes (as in Kant) in the baby to the stages of concrete operations in the child and then the formal (abstract) operations in the teenager. This comprised the genesis of logico-mathematical intelligence, a view now revised by post-Piaget thinkers. Others supplemented it by focusing on the more emotional, social and cultural origins of human cognition: Henri Wallon (1879–1962) and Lev Vygotski (1896–1934), opening the way to Jerome Bruner (1915–2016), Michael Tomasello (1950–) and Pierre Oléron (1915–1995). 
Piaget’s treatment of perception through his constructivism was quite different from German ‘form psychology’ (Gestalt psychology), which postulated the existence of innate structures (groupings or ‘good forms’) associated with the a priori laws of perception, in this sense marking a return to Kant.10 This was an influential research trend in Germany in the first half of the twentieth century, represented by Max Wertheimer (1880–1946), Wolfgang Köhler (1887–1967), Kurt Koffka (1886–1941) and Kurt Lewin (1890–1947), an American psychologist of German origin who created the notions of the perceptive field of individuals and of ‘group dynamics’ in social psychology. Another German trend that Piaget opposed was the ‘psychology of thought’ (Denkpsychologie) of Oswald Külpe (1862–1915), which reduced thought to a mere mirror of logic (logicism). One famous opponent of Piaget was the American linguist Noam Chomsky, whose theory postulates that the faculty for language is innate.11 The innate nature of the baby’s early skills was generalized as a capacity for cognition by the French psycholinguist Jacques Mehler, who (with Jean-Pierre Changeux) taught Stanislas Dehaene, and by American psychologists like Elizabeth Spelke of Harvard University, known for her ‘core knowledge theory’: for example, in her view, physical cognition in the baby involves the principles of contact, continuity and cohesion. This is the long lineage of innatism in the cognitive sciences, from Plato’s Ideas to Spelke’s core knowledge. Piaget also questioned Auguste Comte’s classification of the sciences, placing psychology at the foundation of mathematics and logic (as against the concept of realism that goes back to Plato), and even making it part of biology, chemistry and physics,12 in a veritable ‘circle of sciences’. This radical change of point of



view, epistemological in nature, has given psychology a new place at the very heart of the system of so-called ‘hard’ sciences, heralding the current interdisciplinary framework of the cognitive sciences in Europe.13 The cognitive revolution of the mid-twentieth century shifted a now globalized psychology beyond the study of behaviours and sensations alone towards the study of all mental processes, including consciousness, together with their localization by means of in vivo cerebral imaging in the historical wake of Herophilus, Galen, Gall and many others. In this vein, Part II of the book will focus on reasoning and decision-making processes. More precisely, we will analyse three main cognitive systems of the human brain: (1) fast intuitive heuristics, (2) slow logical algorithms (as in Piaget’s theory), and (3) inhibition of heuristics, case by case, within an executive control system.

Notes
1 This laboratory still exists on the fourth floor of the Sorbonne, 46, rue Saint-Jacques, Paris; I am its current director. It is a part of the Centre national de la recherche scientifique (CNRS or National Centre for Scientific Research), an institute set up in France in the middle of the twentieth century.
2 A book published by Stanislas Dehaene indicates this: the French title of his La Bosse des maths (Paris: Odile Jacob, 1996; revised and enlarged edition 2010) plays on the old idea of Gall. English translation: The Number Sense: How the Mind Creates Mathematics (New York: Oxford University Press, 2011; revised and updated edition).
3 The word phylogenesis comes from the Greek phylon (‘tribe’) and genesis (‘origin’): thus, the origin of men. Ontogenesis, from the Greek on, ontos (‘being, what is’) and genesis (‘origin’), refers to the development of an individual creature from fertilization via infancy to adulthood.
4 The Americans Stanley Hall and Arnold Gesell would later develop methods that were more systematic.
5 This theory was debated in the twentieth century by Louis Thurstone (1887–1955); he distinguished between various aptitudes (numerical, verbal, spatial, memory, reasoning, perceptual speed, etc.).
6 This is Weber’s fraction according to S. Dehaene (see The Number Sense), namely D/n1 where, in absolute values, D = n1 – n2, with n1 and n2 being two quantities (numbers) to be compared (in our example, 1/2 is indeed bigger than 1/97). This fraction can be measured by the response times for comparisons between numbers.
7 Damasio’s various works discuss this at greater length; see also Stanislas Dehaene, Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts (New York: Viking, 2014).
8 At the same time as Freud – though the two men were personally unacquainted – Marcel Proust was analysing latent memories in a more literary way, in his A la recherche du temps perdu (In Search of Lost Time). His father, Adrien Proust, was a doctor who had followed Charcot’s courses, as had Janet and Freud. Note that Proust also anticipated current work in cognitive psychology on memory, learning, attention and self-feeling.
9 This speech was published in L’Année psychologique, no. 65, 1965, pp. 6–15.
10 O. Houdé, ‘L’intelligence malgré tout’, introduction to the new edition of J. Piaget, La Psychologie de l’intelligence (Paris: Armand Colin, 2012), pp. 3–13 (first edn, 1947: lectures at the Collège de France in 1942). An English translation of Piaget’s work was published as The Psychology of Intelligence, translated by Malcolm Piercy and D. E. Berlyne (London: Routledge & Paul, 1950).



11 M. Piattelli-Palmarini (ed.), Théories du langage, théories de l’apprentissage. Le débat entre Jean Piaget et Noam Chomsky (Paris: Seuil, 1979). Chomsky also argued against Skinner, who believed that language (or verbal behaviour) stemmed essentially from environmental conditioning.
12 For example, these days a new trend in research is using the quantum physics discovered in the twentieth century as a means of predicting human cognitive phenomena such as reasoning and decision-making; see P. D. Bruza, Z. Wang and J. R. Busemeyer, ‘Quantum cognition: a new theoretical approach to psychology’, Trends in Cognitive Sciences, no. 19, 2015, pp. 383–393.
13 L. Nadel (ed.), The Encyclopedia of Cognitive Science (London: Nature Publishing Group-Macmillan, 2003). One article is devoted to Piaget and his impact.

Bibliography
Fodor, J. (1983). The Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA and London: MIT Press.


3-system theory of thinking and reasoning

Introduction
To start this second part of the book, we will trace a brief history of reasoning. An initial simple definition of reasoning is: ‘to know how to reason is to know how to reflect’. It means using reason to control rapid, impulsive responses: intuitions, beliefs and emotions. This is of course a strength, but it can also become a fault. In literary history, reasoners have been defined as people who think too much, that is, who reason and argue about everything. At first glance, reasoning could therefore seem a serious, almost boring term. It calls to mind logic and its associated rules (syllogisms, ‘if–then’, etc.), the rigour of formal thought. That is true. However, readers will understand the mechanisms of reasoning better and more deeply if it is viewed in the same way as one views poetry. Arthur Rimbaud (1854–1891) said poets should always seek out the new and arrive at the unknown.1 Similarly, when the brain reasons and learns to reflect, it formulates and tests hypotheses (if . . .), it infers, it deduces (then . . .) and it seeks solutions that are novel to it – and sometimes novel to everyone; in other words, it is creative. The psychologist Jean Piaget described, in the same spirit, the emergence of logical-mathematical reasoning in children: before adolescence, the possible is a special case of the real; after adolescence, the real becomes a special case of the possible (Inhelder & Piaget, 1958; Piaget & Inhelder, 1969). In a word, the brain becomes capable of abstraction, in the sense of its ability to imagine various scenarios (or hypotheses) through thought alone. From a historical perspective, the academic example of the syllogism is well known and rooted in antiquity. From Aristotle, we know that one must reason in order to understand that if (a) all men are mortal and (b) Socrates is a man, then (c) Socrates is mortal.



What counts is the validity of the syllogism, of the linking or inference, and not the credibility of the conclusion (it is well known that Socrates, a famous philosopher, was mortal). The conclusion could have been that Socrates is mortal even though the reasoning was false. For example: if (a) all men are mortal and (b) Aristotle is a man, then (c) Socrates is mortal. Nothing in logical reasoning leads to this conclusion from premises (a) and (b); the deduction could only have led to ‘Aristotle is mortal’. Impulsively one wants to say that it is correct (‘Yes, of course Socrates is mortal!’), whereas the deduction is not logical. Conversely, the immortality of Socrates could have been deduced from correct reasoning: if (a) all men are immortal and (b) Socrates is a man, then (c) Socrates is immortal. Deduction therefore gives the brain the power and the privilege to play with ideas as we have just done, with true or false propositions (whose truth is discussed and established), to link them and, in particular, to check their logic, both by oneself and with others, through their views and arguments.
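The contrast between validity and mere credibility can be made concrete with a toy check of the two syllogisms above. This Python sketch (the helper name is hypothetical; this is not a general theorem prover) encodes only the form used in the text: from ‘all men are mortal’ and ‘X is a man’, the only licensed conclusion is about X itself:

```python
def barbara_licenses(individual_in_premise, individual_in_conclusion):
    """From (a) 'all men are mortal' and (b) '<X> is a man', the only
    deducible conclusion is '<X> is mortal': the conclusion's subject
    must be the very individual the premises mention."""
    return individual_in_conclusion == individual_in_premise

# Valid: (b) Socrates is a man, therefore (c) Socrates is mortal
valid = barbara_licenses("Socrates", "Socrates")     # True
# Invalid: (b) Aristotle is a man does NOT yield (c) Socrates is
# mortal, however credible that conclusion is on its own
invalid = barbara_licenses("Aristotle", "Socrates")  # False
```

The point mirrors the text: ‘Socrates is mortal’ is credible either way, but only the first inference is logically licensed.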

1. Back to roots: myth, logos and science
Myths permanently lost their capacity to explain the world when Aristotle’s logos (reasoned discourse) was conceived along with philosophy and science in ancient Greece (see Part I). Emerging science found its niche here. Reasoning became a virtue. The remarkable discoveries of Greek thinkers from ancient times, such as Archimedes (287–212 BC), and of Roman engineers are the foundations of modern science and technology. The famous ‘Eureka! Eureka!’ (‘I have found it! I have found it!’) from Archimedes was the result of an observation, a happy coincidence (serendipity) and brilliant reasoning: ‘Any body plunged into a fluid is acted on by an upward force, from bottom to top, equal to the weight of the displaced volume of fluid.’ In addition, Archimedes discovered the mathematical rules regarding levers. However, in history, reasoning has, through its power, largely gone beyond the boundaries of science. The most significant example is the medieval theology (a Christian philosophy called ‘scholasticism’) of St Anselm of Canterbury (1033–1109), who sought ‘reasons of faith’, that is, proof of the existence of God through reasoning. This is referred to as an ‘ontological argument’, which resurfaced in the Renaissance with René Descartes. Reasoning continued to develop, in science as well as in faith, and even went on, through the introspection of philosophers, to become a psychological process as important as knowledge itself. An onward movement of reason was thus irrepressible. With considerable advances made in the science of measurement and in scientific subjects – mathematics, geometry, physics, astronomy (Descartes was a contemporary of the Italian scholar Galileo Galilei and his heliocentrism) – another triumphant century was to come: the Age of the Enlightenment. This was an age of accumulated, crystallized knowledge (in the form of L’Encyclopédie), and one of reason in particular: ‘Have the courage to use your own

3-system theory of thinking and reasoning


understanding’, claimed Immanuel Kant.2 Kant added that the intelligence of an individual is measured by the amount of uncertainty he or she is capable of bearing. Reasoning, like valour, thereby continued to assert itself and, combined with emerging scientific thought, found one of its most elegant expressions in the nineteenth century in the experimental method of Claude Bernard (1813–1878). This involved analysing variables and formulating a hypothesis, devising an experimental procedure, developing a clear process for observing the facts, and arriving at a conclusion. A continuous epistemological line connects this use of reasoning and observation with that of Archimedes and leads successfully to the sciences of the twentieth century. At the same time, the ‘logic of words’ was transformed into calculation with abstract signs in the algebra of George Boole, an idea to which Gottfried Leibniz had already alluded.3 Before Claude Bernard, Auguste Comte had already been steering the thought of the period towards fewer metaphysical speculations and more positivism (the observation of facts said to be capable of verification), in reference to what are today known as the exact or ‘hard’ sciences, such as mathematics and physics. The twentieth century inherited this move towards positive progress, but with epistemological nuances, as emerging experimental psychology allowed a true scientific study of reasoning itself as a biological function of the brain. This view of the human brain as one that reasons and experiments through its own actions was explored for the first time in Piaget’s work on children and then through the lightning-fast expansion, at the turn of the twenty-first century, of cognitive neuroscience and cerebral imaging. Many other aspects could have been raised in this chronological and all too brief summary of the great classic thinkers (e.g. the military reasoning and political trickery of Machiavelli).
Reasoning is so much at the heart of everything that its history is often conflated with that of thought and science; keeping them separate is an impossible exercise.

2. Reasoning today

If reasoning has asserted itself, as we have just seen, as a cognitive and even political value across history since antiquity, is this still the case in the society of the twenty-first century? Readers should be aware that the only contemporary psychologist with a Nobel prize, Daniel Kahneman (Nobel prize for economics in 2002), is a specialist in reasoning and decision-making (Kahneman, 2011).4 Beyond psychology, this distinction is testimony to the importance given, at the start of the twenty-first century, to understanding how the human brain reasons and decides in economics and, to a greater extent, in politics. In the same vein, Richard Thaler received the Nobel prize in 2017 for his contributions to the ‘understanding of the psychology of economics’ (Thaler, 2015; Thaler & Sunstein, 2008). Kahneman’s (2011)
famous book, to which we will return in Chapter 6, is Thinking, Fast and Slow, which contrasts intuitive, rapid and emotional thought with more reflective and logical thought. The spirit of finesse and the spirit of geometry, both so close to Pascal’s heart, can be seen here. Another example of the topicality of reasoning and of the cognitive processes associated with it (from the perception of objects and actions to decision-making) is that of major international research projects (involving billions of euros) with the ambition of building an artificial brain. In 2012, for the first time, an artificial brain called ‘Spaun’ was able to respond to a basic logic test: guessing a sequence of numbers (Eliasmith et al., 2012). The same test could have been given to a child in any primary school, but here it was a computer in a laboratory mimicking subtle behaviour – a basic form of reasoning – with 2.5 million virtual neurons functioning in a similar way to our own. The architecture of this simulator mimics our cerebral areas and their cognitive functions according to principles taken directly from the results of cerebral imaging in humans. The artificial brain sees, memorizes, deduces and communicates its response by drawing with a robotic arm. However, Spaun requires hours to simulate what happens in our brain in one second. The next step for research in this domain is to design tools capable of doing the same thing in a few thousandths of a second. This research links neuroscience, medicine (brain dysfunctions), experimental cognitive psychology and computing. The objective is both to improve the performance of computers and to understand our brain better. It is believed that a brain replicated in silicon can be used for experiments that will reveal, through a series of adjustments, how it functions – including how it reasons.
Two major international projects have the ambitious objective of producing a high-performing artificial brain: ‘SyNAPSE’ in the United States and, more recently, the ‘Human Brain Project’ (HBP) in Europe.5 The latter, with an estimated cost of €1.19 billion, is intended (over the period 2013–2023) to reconstitute current knowledge of the brain, piece by piece, in computer simulations and models – so-called ‘neuromorphic’ circuits with chips specializing in the simulation of neurons and their connections.6 These projects represent a general high-level approach involving the best researchers worldwide, prompted by a scientific, potentially achievable, dream: to assemble billions of artificial neurons made of a new kind of electronic chip in order to reproduce the actions of a brain that reasons, decides and creates.

3. Definitions: deduction and induction

To close this introduction, here is a clearer, more technical definition of deduction, with respect to syllogisms and conditional rules (‘if–then’), as well as the contrasting definition of induction. These slightly formal definitions, without being overly complicated, will be useful to the reader in properly understanding the examples given over the course of the next chapters.



3.1 Deduction: syllogism, ‘if–then’

Let us go back to the classic example of the syllogism: if (a) all men are mortal and (b) Socrates is a man, then (c) Socrates is mortal. This form of a–b–c reasoning is defined as an inference formed of two premises and a conclusion. The first two expressions, (a) and (b), are the premises. The third, (c), is the conclusion. Any syllogism involves three terms (in this case: man, mortal and Socrates). One is present in both premises (a) and (b) (here, man). Each premise also contains one of the other two terms (here, mortal in (a) and Socrates in (b)), and both of these appear in the conclusion (c). The deductive inference is the cognitive operation that makes the link between the premises and the conclusion. From the perspective of logic, a deduction is valid if the structure of the inference is logical. The example above is valid. Validity does not depend on the content of the phrases as such (the semantics), and a valid deduction can even contradict our beliefs or knowledge of the world. It is in this sense that logic is said to be ‘content independent’. It is generally considered that adolescence is the time when logical reasoning is permanently instilled in our brain along with formal deduction (see Chapter 5). The other major form of deduction is conditional reasoning, that is, statements (or rules) of the form if–then: the ‘if . . .’ part (the antecedent) is the hypothesis and the ‘then . . .’ part (the consequent) is the deduction. This is referred to as hypothetico-deductive reasoning.7 For example, the rule ‘If there is no red square on the left, then there is a yellow circle on the right’ can be made true (T) or false (F), in a logic test, by mentally manipulating a truth table applied to the antecedent (A) and to the consequent (C): TT, TF, FT or FF.
The response ‘Blue square on left, yellow circle on right’ (TT) makes the rule true, whereas the response ‘Blue square on left, green diamond on right’ (TF) makes the rule false (there are of course other possible TT and TF responses). The FT and FF cases (a false antecedent in both) do not make the rule false, because the rule is then simply not applicable (we will come back to this example in Chapter 6). Formal, abstract logic is not the only domain where this applies: it works in exactly the same way in daily life when rules are complied with or broken (their practical or legal effect always being restricted to the conditions under which they apply). Finally, let us point out that the two forms of deductive reasoning, syllogisms and conditional rules, are related. A syllogism like the one given as an example above (Socrates is mortal) can, in fact, be transformed into a single complex proposition of the form If all . . . and if . . . then . . ., a conditional proposition with controlled distribution of quantified terms (such as ‘all’) in which humans, mortals and Socrates are found. These elements of a verbal definition of deduction do not exclude the possibility that it operates, according to some psychologists, without the need to postulate the existence in the brain of a ‘mental logic’ (Braine & O’Brien, 1998) consisting of rules. Philip Johnson-Laird argued that all our deductive reasoning can be explained not by the use of verbal and logical rules (expressions of a–b–c syllogisms, conditional ‘if–then’ rules as above), but by a sort of production involving actors personifying the details of a problem in an internal theatre,
which he called a ‘mental model’ (Johnson-Laird, 2001). This is reasoning based on visual and spatial imagination (or at least something graphic). This is not the perspective of other psychologists who have worked on reasoning, foremost among them Piaget. He resolutely declared that a mental hypothetico-deductive logic does exist in the human brain, that it develops in children and adolescents (through concrete and then abstract rules), but that ‘it is logic which is the reflection of thought and not the opposite’ (Piaget, 1942, p. 51). Piaget therefore opposed the British philosopher and logician Bertrand Russell (1872–1970), who in the twentieth century championed the idea that logical laws have an ideal objective content, independent of psychology (the theory of ‘logicism’8).
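Read as material implication, the truth table for the conditional rule discussed above can be generated mechanically. A minimal illustration follows; the rule wording and the TT/TF/FT/FF labels follow the text’s example, while the function name is my own.

```python
# The conditional rule 'If there is no red square on the left, then there
# is a yellow circle on the right', read as material implication:
# it is false only when the antecedent is true and the consequent false (TF).

def rule_holds(antecedent, consequent):
    return (not antecedent) or consequent

cases = {
    "TT": (True, True),    # e.g. blue square on left, yellow circle on right
    "TF": (True, False),   # e.g. blue square on left, green diamond on right
    "FT": (False, True),   # red square on left, yellow circle on right
    "FF": (False, False),  # red square on left, green diamond on right
}

for label, (a, c) in cases.items():
    print(label, rule_holds(a, c))
# TT True, TF False, FT True, FF True:
# only TF makes the rule false; FT and FF leave it (vacuously) true.
```

The FT and FF rows make explicit why a false antecedent never falsifies the rule: the implication is then vacuously true, which is the formal counterpart of the rule being ‘not applicable’.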

3.2 Induction

Much research has been conducted into deduction – regarded, ever since Aristotle’s syllogism, as the most important thought process of all – but we will see over the course of the following chapters that another important reasoning process, induction, is also studied experimentally alongside deduction. Inductive reasoning is based on special cases. It is not directly based on general rules (syllogisms, conditional rules) as deduction is. The latter is by definition associated with logical necessity (obligatory deduction), whereas induction tolerates uncertainty: it is about the probable. The question here is whether human judgements involving the probable (e.g. the assessment of probabilities from specific information, or choices and risk-taking in medical or financial matters) comply with the classic mathematical formalization of uncertainty: the calculus of probabilities. There is much evidence that this is not the case, owing to intuitions and emotions, driven by rapid thought, that short-circuit reflective and logical judgements. This is the topic of the research that led to Kahneman’s Nobel prize in 2002.

***

Chapter 5 is devoted to Piaget’s theory, or the ‘logical system’, derived from Aristotle, Descartes and others. Chapter 6 concerns the theory of a ‘dual system’ in the human brain – System 1, which is rapid and intuitive, and System 2, which is logical – as defined by Kahneman and recalling Pascal. Chapter 7 deals with the synthesis that is possible today: the theory by which a third system, or metasystem, inhibits System 1 in order to activate System 2. Each of these theoretical approaches will be illustrated by experimental data and concrete examples of reasoning problems. Finally, Chapter 8 will discuss recent discoveries of the amazing reasoning capacities of babies and their theoretical implications for the understanding of human thinking.

Bibliography

Braine, M. and O’Brien, D. (Eds.) (1998). Mental Logic. Hove: Erlbaum.
Château, J. (1971). Montaigne Psychologue et Pédagogue. Paris: Vrin.
Eliasmith, C. et al. (2012). A large-scale model of the functioning brain. Science, 338, 1202–1205.
Inhelder, B. and Piaget, J. (1958). The Growth of Logical Thinking from Childhood to Adolescence. London: Routledge.
Johnson-Laird, P. (2001). Mental models and deduction. Trends in Cognitive Sciences, 5, 434–442.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Piaget, J. (2012 [1942]). La Psychologie de L’intelligence. Paris: Armand Colin.
Piaget, J. and Inhelder, B. (1969). The Psychology of the Child. New York: Basic Books.
Thaler, R. (2015). Misbehaving: The Making of Behavioural Economics. New York: W.W. Norton and Company.
Thaler, R. and Sunstein, C. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale University Press.


From antiquity to the present day, the logical system of reasoning (syllogisms, conditional rules) has been a clear common thread linking Aristotle, Descartes and Piaget. Aristotle first identified it 2,000 years ago when he referred to it as logos, which replaced mythology. Yet where did this logical system originate? How does it become incorporated in (or impressed upon) our minds, our brains? Divergent opinions exist, ranging from a primarily divine origin (Descartes) to a biological and psychological origin (Piaget).

1. Where does the logical system originate?

In the Renaissance, in his Treatise on Man, Descartes referred to the proof that seemed self-evident to him: God instils in our minds, from birth, clear and distinct logical and mathematical concepts, the core of human intelligence (our ‘soul’, according to Descartes). So a baby is ‘potentially intelligent and can reason’ (a very current idea: see Chapter 8), by way of a gift from God. Four centuries later, this is clearly not the answer that science suggests to this question. Between the time of Descartes and the present day, two key events marked scientific progress in this respect. First, Charles Darwin introduced the concept of the natural selection of animal and human intelligence (through phylogeny, the evolution of species), which links matter, life and thought – excluding God from the explanation. Then, in the twentieth century, the concept resurfaced in the study of ontogenesis (the development of intelligence from infancy to adulthood) conducted by Piaget in child psychology (Piaget, 2015; Piaget & Inhelder, 1969) and by Changeux (1985, 2002) in neurobiology, with the concept of ‘Neural Darwinism’. Piaget subsequently suggested the concept of stages of cognitive development to explain the origins of the logical system. According to him, the construction of
childhood intelligence is incremental because it is systematically linked, stage by stage, to acquisition and progress through a child’s individual actions, and through their coordination and mental internalization (representation). This can be referred to as the ‘staircase model’. Each step represents a major milestone, a well-defined stage or distinct mode of thinking (structure) in the genesis of logical and mathematical intelligence. This begins with the sensorimotor intelligence of a baby (zero to two years), based on senses and actions, before attaining conceptual intelligence (number, categorization, reasoning), which is initially concrete in children (at about the age of seven) and becomes abstract in adolescents (at about twelve to fourteen years) and adults. It is in this final stage that, according to Piaget, the logical system is permanently put in place in hypothetico-deductive reasoning involving concepts, propositions and rules. His reference work on this topic, published in 1955 with his associate Bärbel Inhelder (1913–1997), is The Growth of Logical Thinking from Childhood to Adolescence (Inhelder & Piaget, 1958). The reader can also refer to The Psychology of the Child, in which the issue of reasoning is discussed by Piaget and Inhelder (1969) with respect to other areas of cognition, such as number and categorization. Today, Piaget’s theory is the target of significant criticism, to which we will come back, but it remains a central historical reference point. The psychologist John Flavell, who introduced Piaget to the United States towards the end of the 1960s, wrote in a tribute to Piagetian theory thirty years later: almost everything that is thought by people today in this area [the development of logic in children] has some connection with the issues raised by Piaget.
It can therefore be said that Piaget’s role in the study of cognitive development is comparable to that of Noam Chomsky in the study of linguistic development: he created and formed a new field of investigation.9

2. The interest in childhood

Philosophers’ and scientists’ interest in childhood was clearly not new when Piaget proposed his theory at the start of the twentieth century. In the eighteenth century, Jean-Jacques Rousseau, in Émile, emphasized the ideas of the Enlightenment with respect to children, education and the influence of society. In the nineteenth century, Darwin, in The Expression of the Emotions in Man and Animals, devoted thorough studies to childhood (with respect to facial expressions and the emergence of language); the child he observed was his own baby, Doddy Darwin (see Chapter 4). What was radically new with Piaget was viewing childhood as the experimental field of epistemology,10 in the sense of the general mechanisms for the construction of knowledge and reasoning, whether in logic, mathematics or physics. With this ‘genetic epistemology’ (Piaget, 1970), defined in reference to the idea of genesis (ontogenesis), it was the view of the child that had changed. A child had become a ‘little intellectual’

Jean Piaget’s theory or the logical system


who questioned reality, explored, experimented and therefore (re)discovered the laws of the world: a ‘child mathematician’ (the construction of number), a ‘logician’ (reasoning). Studying children’s evolving behaviour, from baby to adult, thus became a sustained scientific enterprise – ‘the embryology of reason’, in Piaget’s expression, that is, mathematics, logic, physics and so on in the course of development. It is a form of the history of ideas and science in which the child is the primary player, and it all takes place in a strikingly short period (barely twenty years). This epistemological approach to childhood no doubt explains why Piaget’s theory was appealing beyond psychology itself. For example, at the start of his career (at a seminar in Davos in 1921), he had conversations with Albert Einstein (1879–1955), and in 1990 the astrophysicist Hubert Reeves paid homage to him: The Swiss psychologist Jean Piaget was one of the first to introduce an historical dimension to the study of the acquisition of knowledge. From the beginning, he recognized that logic was an evolving process. . . . The question asked resulted in the appearance of indisputable evidence: the problem of the origin of logic is a problem of a psychological and biological order.11

3. Against logicism: logic, reflection of thought

Early in the twentieth century, Piaget developed his concept of the psychological and biological origin of logic, opposing logicism, which played down, and even denied, the psychological dimension in favour of pure logic alone. On that issue, Piaget set out the foundations of his theory in a series of lectures given at the Collège de France in 1942, published in 1947 under the title The Psychology of Intelligence (see Piaget, 2015). In 1942, Piaget was halfway through his life, having been born in Neuchâtel in 1896; he died in Geneva in 1980. For him, then aged 46, it was the year of ‘intelligence despite everything’. In the midst of a world war (he had already experienced the tragedy of 1914–1918), the Collège de France in Paris invited him to give a series of lectures on the psychology of intelligence and its relationship with logic and reasoning. Piaget accepted. Despite the war and the breaking up of France, occupied since 1940, he continued (in Switzerland and in France) steadfastly to construct his intellectual contribution, just as, according to him, a child has to build its intelligence through the selection of its actions. If one tries to identify the intellectual forces of the time – the contributors and potential opponents from whom Piaget took great care to distance himself – one of his primary targets was the logicism of the British philosopher Bertrand Russell. Piaget firmly opposed Russell and his notion that the laws of logic have an ideal objective content, independent of psychology (logicism). He also denounced its influence on the contemporary ‘psychology of thought’ in Germany (Denkpsychologie), according to which thought is reduced to a simple
reflection of logic. For Piaget, as we have seen, it was logic that was the reflection of human thought and not the opposite. One gets a sense here of the balance of forces between Piaget and Russell, and of how strong Piaget’s counter-argument was: ‘Logic is an axiomatic of reason for which the psychology of intelligence is the corresponding experimental science.’12 In 2002, Changeux defended in The Physiology of Truth the thesis that logical and mathematical truths are products of the brain and therefore of human reasoning (see also his discussion with the mathematician Alain Connes: Changeux & Connes, 1995).13 So it can be seen that, sixty years after The Psychology of Intelligence, Piaget’s ideas, in their opposition to Russell’s, remain highly topical in science and cognitive neuroscience.

4. The ‘circle of the sciences’

Piaget initially countered Russell’s logicism by placing psychology at the foundation of mathematics and logic. Moreover, he went beyond this counter-argument, outlining from the middle of the twentieth century the ‘circle of the sciences’. In an audacious departure from Auguste Comte’s nineteenth-century hierarchy of the sciences, Piaget not only placed psychology at the foundation of mathematics and logic, but also placed it alongside biology, chemistry and physics, if one completes the circle. This radical change in perspective – completely original for the time, and still so today – gave an unrivalled role to child psychology and reasoning at the heart of the ‘hard’ sciences and heralded in Europe the current interdisciplinary context of cognitive science. That is why, in the Encyclopaedia of Cognitive Science (Nadel et al., 2003), Piaget figured among the prestigious precursors of the discipline. At the Collège de France in 1942, Piaget had already mapped out, calmly and brilliantly, the path that cognitive science would take much later with Jean-Pierre Changeux, Alain Berthoz and, today, Stanislas Dehaene. In the 1950s, Piaget was also a lecturer at the Sorbonne, and from 1952 to 1963 he occupied the chair of child psychology, which had previously been held by the philosopher Maurice Merleau-Ponty (1908–1961).

5. The stages of intelligence

Now we shall look in more detail at the stages of cognitive development outlined by Piaget (from baby to adolescent) and at how, in his account, the logical system of reasoning gradually emerges. Piaget was convinced of the cerebral (biological) basis of ‘logical-mathematical’ operations in the child, the adolescent and the adult (psychology). However, at the time he lacked the technological means of observing it in vivo: functional cerebral imaging.14 He was therefore limited, experimentally, to
inferring the psychological mechanisms of logical-mathematical operations from the observation of behaviour: both actions and verbal responses. Challenging both the empiricism of John Locke and David Hume, who believed everything derives from experience through association and practice, and innatism (the opposing theory), which explains everything through innate structures (see Plato and Descartes, but also Kant and his a priori forms of sensibility), Piaget proposed an intermediate theory called ‘constructivism’. This holds that intellectual structures, that is, our thoughts and mental operations, have a genesis specific to them (cognitive ontogenesis). From birth to adulthood, they are gradually constructed, stage by stage (like going upstairs one step at a time), through the interaction between an individual and his or her environment – or, in more biological terms, between a body and its environment. What is essential in this interaction, for Piaget, is the action of a child on the objects that surround it (exploring, handling, ‘experimentation’), a concept very different from the idea of ‘passive’ learning (association and practice) specific to empiricism. What counts for Piaget in the interaction of an individual with the environment is the dynamic of ‘assimilation and accommodation’. In psychology, as in biology, assimilation is the process by which an object in the environment is ‘directly absorbed’ by the structure of the body. Conversely, accommodation is the process by which the structure of the body changes to adjust to the environment. Piaget saw in this psychobiological dynamic, which governs a child’s actions, the driver of the development of intelligence, reasoning and logical abstraction, through balancing and continual (internal) self-regulation. He clearly distinguished the intelligence of a baby (zero to two years) from that of a child (two to twelve years), regarding them as separate stages of development.

5.1 The baby according to Piaget

Piaget identifies the period up to about the age of two as the sensorimotor stage. A baby interprets the world around it on the basis of its senses (sensori-) and its actions (-motor). From birth, starting from its initial reflexes (like sucking at its mother’s breast), it learns certain rules, which become increasingly sophisticated over the months, concerning the workings of the physical world and its ability to act on it. Piaget called these rules ‘action schemas’ (acquired through assimilation and accommodation). At about the age of eight months, for example, a baby discovers that when an object (say, its teddy bear) disappears from view (hidden behind a cushion on the sofa, for example), the object nevertheless continues to exist: the baby can, through its actions, either remove the obstruction (the cushion in this example) or reach the object and recover it. This is referred to as ‘object permanence’, a founding principle of the construction of reality (what applies to the teddy bear applies to all objects in the world). However, this form of sensorimotor intelligence (in our example, vision-action) makes a baby very dependent on the present moment.


3-system theory of thinking and reasoning

A new stage is reached at about the age of two, when a child becomes capable of detaching itself from immediate action. According to Piaget, its intelligence from that point on becomes ‘symbolic’ or ‘representative’ (it is endowed with mental representation). It is, however, difficult to believe that object permanence does not already require from a baby an elementary form of mental representation (to represent the disappeared object in memory) and even some reasoning about the situation, particularly in the case of the movement of objects. Nevertheless, it is from the age of two that symbolic thought is expressed most clearly in an infant: deferred imitation (evidence of the mental representation of an absent example), ‘symbolic’ play (e.g. a child may play telephone with a banana or a toy mobile phone), drawing and language. These last two symbolic activities, which have undergone an extraordinary evolution in humans relative to other animals (up to art and literature), enable a child to re-describe or re-present events that have been experienced. They also, like play, give free rein to the imagination. The logos of Aristotle involved both reason and language; this is the symbolic thought (or function) of Piaget. Thus a two-year-old child uses the action schemas it learnt in the sensorimotor stage, but this time detached from reality: it starts to internalize and combine them mentally. Through this fundamental cognitive process (internalization and combination), (real) actions become mental operations. During the pre-operational stage (two to seven years) and the concrete operational stage (seven to twelve years) – the essential period in which a child goes from toddlerhood to infant school and from there to primary school – a child gradually builds fundamental concepts of thought, such as number and the comprehension of categories.

5.2 The age of reason

At about six to seven years – ‘the age of reason’, so dear to classical philosophers – the intelligence of a child becomes flexible. This is what Piaget called ‘operative reversibility’: the capacity of a child to cancel, through thought alone, the effect of an action (by combining a mental operation and its opposite). The Piagetian task called the ‘conservation of discrete quantities’ is an example with respect to number, emblematic of the preparation and implementation of the concrete operations of reasoning. Two rows of counters, equal in number (six to eight depending on the situation) and equal in length (the space occupied on the table), are laid out on a table. At about four to five years, a nursery school child recognizes that there are the same number of counters in each row. However, if the adult conducting the experiment spreads out the counters in one of the two rows (so that the number stays the same but the length of the row increases), the child will think that the longer row contains more counters. This verbal response is an error of reasoning, based on the perceptual intuition ‘length equals number’, which

Jean Piaget’s theory or the logical system


shows, according to Piaget, that the child has not yet acquired the concept of number. From six to seven years (the age of a primary school child), thought becomes flexible: the action of spreading out the counters can be corrected, cancelled, by an opposing action, through the mental representation of moving the counters closer together. This time a verbal response of numerical equivalence is given (‘it is the same; the counters moved, but you can put them back as they were before’, or even compensation arguments concerning the dimensions of length and/or density). There is, therefore, in this case, operative reversibility, a conservation of quantities (and what applies to the counters applies to all objects in the world). Piaget invented other ingenious experimental tasks like this one. Even if this is not yet formal logic, it is already reasoning, in a minor key, about major concepts such as number, a building block of mathematics. In the concrete operational stage he therefore used conservation tasks (for number, substance, etc.), category inclusion and seriation, all combined with an original clinical method of interrogation (inspired by psychiatric diagnosis and investigation). He conversed freely with a child about guided topics (‘Are there more counters when they are moved away from one another?’; ‘Is there more plasticine when the ball is flattened?’; ‘Are there more daisies or more flowers?’ when presented with ten daisies and two roses), testing the soundness of the child’s verbal responses through requests for justification and counter-suggestions. Sometimes the child must simply (and this is not easy for the younger ones) sort sticks of varying sizes on a table, placing them in a logical series. The invention of these ‘Piagetian’ tasks owes a lot to the work of a team (at the École de Genève) and in particular to Inhelder, as previously mentioned.
Finally, in the last stage of intelligence, the formal operational stage (from twelve to sixteen years), the child, who has become an adolescent, acquires the capacity to reason directly about logical propositions, ideas and hypotheses (Inhelder & Piaget, 1958). This is ‘hypothetico-deductive’ reasoning. The loop (the circle of the sciences) is therefore closed: through this gradual psychological path, stage by stage, one arrives at adult logic.

3-system theory of thinking and reasoning

6. Establishment of hypothetico-deductive reasoning

Let us look more closely at this last stage, eleven to twelve years and older. Aside from puberty and its physical, sexual aspects, a sort of ‘unhooking’ of thought from the real occurs in adolescence. From then on, the quantitative (number) and qualitative (categorization) processes that the child performs involve concrete objects less and ideas more. This is reasoning in the full sense of the term. From this perspective, it is not surprising that children who have grown to adolescence start to want to ‘change the world’ by challenging the systems in place, beginning with those of their parents. It is the age of great ideas and initial ‘personal theories’ about politics, philosophy, literature or science. Piaget was himself a prodigious example of this, with his first scientific articles written between eleven and sixteen years of age. Rimbaud was another, with his poetry: at the age of sixteen, in his Lettre du voyant, he called for ‘a long, immense and reasoned unsettling of all the senses’.

6.1 The real, special case of the possible

Adolescents discover, for the first time, the extraordinary power of their brain when it is in ‘hypothetico-deductive mode’ (‘if–then’), even if this cognitive mode was already at work in a concrete manner. It is a priori as powerful on the intellectual level as the discovery of love is on the emotional level. Everything becomes possible – at least through thought. To differentiate this new stage (formal operations) from the previous one (concrete operations: number, categorization, series, etc.), Piaget used a fine description: before adolescence, the possible is a special case of the real; after adolescence, the real becomes a special case of the possible. This is the ‘unhooking’ of thought in relation to concrete objects (the counters, flowers, sticks, etc. of the previous examples). In his theory, Piaget describes quite a complex system of combined logical rules (the ‘logic of logicians’), a system which corresponds to the final stage of cognitive organization.15 To understand things in both a formalized and a simple way, here is an often-cited example of a hypothetico-deductive problem, that of the snail (see next section). This illustrates the nature of the complex formal structure which, according to Piaget, falls into place in the middle of adolescence. It is ‘the INRC group’, or group of two reversibilities: N and R denote the two reversibilities – by inversion or negation (N) and by reciprocity (R) – while I represents the identity (null) transformation and C, the correlative, the inverse of the reciprocal. This group of operations is the final synthesis of concrete operational ‘groupings’. In the previous stage (six to twelve years), reversibility by inversion (or negation) is found in groupings of logical categories (A + A’ = B and B – A’ = A: e.g.
10 daisies + 2 roses = 12 flowers and 12 flowers – 2 roses = 10 daisies).16 Reciprocal reversibility is found in groupings of relations of series where the child logically orders sticks of varying sizes (any element of the series is understood as being simultaneously bigger than the previous, E > D, C, B, A, and smaller than the next, E < F, G, etc.).17 This is a reversibility that takes the form of a reciprocal (in relation to sticks, E, D and F are reciprocated). However, no structure of the concrete stage in children (six to twelve years) is composed of these two reversibilities. Such a general system (called ‘group’ in the mathematical sense of the term) is, according to Piaget, the final cognitive synthesis of partial systems or ‘groupings’ built during the concrete operations stage; it subsequently comes together in a single total organization of inversions and reciprocals which previously had been separate.
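The group structure just described can be made concrete with a short computational sketch (this encoding is ours, for illustration only, and is not part of Piaget’s own formalization): the four transformations I, N, R and C form a Klein four-group, in which every element is its own inverse and any two distinct non-identity elements compose to give the third.

```python
from itertools import product

# Illustrative encoding of the INRC group as a Klein four-group:
# I (identity), N (negation/inversion), R (reciprocity), C (correlative).
ELEMENTS = ["I", "N", "R", "C"]

def compose(x, y):
    """Composition of two INRC transformations."""
    if x == "I":
        return y
    if y == "I":
        return x
    if x == y:  # each transformation is its own inverse
        return "I"
    # two distinct non-identity transformations compose to the third
    return ({"N", "R", "C"} - {x, y}).pop()

# Verify the group axioms: closure, associativity, identity, inverses.
for a, b in product(ELEMENTS, repeat=2):
    assert compose(a, b) in ELEMENTS
for a, b, c in product(ELEMENTS, repeat=3):
    assert compose(compose(a, b), c) == compose(a, compose(b, c))
for a in ELEMENTS:
    assert compose("I", a) == a and compose(a, a) == "I"

print(compose("N", "R"))  # negation followed by reciprocity gives the correlative: C
```

This is what ‘group in the mathematical sense of the term’ amounts to here: the inversions and reciprocals, which are separate in the concrete stage, become closed under a single law of composition.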



6.2 Coordination, synthesis and abstraction in adolescence

This new cognitive structuring of intelligence operations underlies hypothetico-deductive reasoning, which must lead to the correct result in the problem of the ‘snail on its board’. A snail (in this case an empty shell) is placed on a board, which is in turn placed on a table with a point of reference (as a reminder, in the INRC group, I = direct operation or special identity, N = negation, R = reciprocal and C = correlative). The snail on its board can make a track in one direction, I, or in the direction that cancels the first track, N. Nevertheless, the experimenter can move the board in the direction opposite to I: I is no longer cancelled by N, but by this reciprocal movement, R (in relation to the reference point fixed on the table). The inverse of R is C, the correlative operation of I, as the tracks I and C go in the same direction. In the concrete operations stage (six to twelve years), a child is capable of reasoning within each of these systems: the track of the snail on the board, I and N (a round trip, which is very simple), and the track of the board on the table, C and R. It is only at around eleven to twelve years, having reached adolescence, that a child can start to coordinate the two systems (involving the reversibility of one with that of the other) by ‘if–then’ and in relation to a reference point. Hypothetico-deductive reasoning thus gives intelligence operations a capacity for abstraction unrivalled up to this point, by structuring the possibles (what can formally happen; here, the movements of the snail and/or the board) of which the real becomes a special case. Piaget therefore arrived at his proof: logic (Boolean algebra, among others: formulae composed of letters and abstract signs) is the reflection of thought, a model of reason of which the psychology of intelligence is the scientific and experimental study.
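The coordination of the two systems in the snail problem can likewise be sketched numerically (an illustration of ours, not Piaget’s protocol; unit displacements are assumed):

```python
# Each INRC transformation modelled as a signed unit displacement:
#   I: snail moves +1 on the board     N: snail moves -1 (cancels I)
#   R: board moves -1 on the table     C: board moves +1 (inverse of R)
MOVES = {"I": ("snail", +1), "N": ("snail", -1),
         "R": ("board", -1), "C": ("board", +1)}

def position_on_table(transformations):
    """Net snail position relative to the table's reference point."""
    snail_on_board = 0
    board_on_table = 0
    for t in transformations:
        system, step = MOVES[t]
        if system == "snail":
            snail_on_board += step
        else:
            board_on_table += step
    return snail_on_board + board_on_table  # sum of the two systems

print(position_on_table(["I", "N"]))  # 0: N cancels I within a single system
print(position_on_table(["I", "R"]))  # 0: R compensates I across the two systems
print(position_on_table(["I", "C"]))  # 2: I and C cumulate in the same direction
```

The point of the problem is visible in the second case: the snail’s track (I) is cancelled not by its own inverse (N) but by a movement of the other system (R) – precisely the coordination of the two reversibilities that, for Piaget, only the adolescent achieves.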
This opening of the human spirit, through cognitive development, to new possibles or co-possibles to be imagined, inferred, related and structured mentally was the subject of Piaget’s research until the end of his life in 1980, as can be seen in his four posthumous publications Possibility and Necessity (two volumes, Piaget, 1987), Morphisms and Categories (Piaget et al., 1992) and Towards a Logic of Meanings (Piaget & Garcia, 2015).

7. Criticisms of Piaget’s theory

During the 1980s and 1990s, after Piaget’s death, various research trends drove the cognitive psychology of reasoning. An important debate arose between those who believed, like Piaget, in the existence of a mental logic (the ‘top step of the stairs’) and those who did not.

7.1 Mental logic or mental models?

The first point of view was championed by Martin Braine (Braine & O’Brien, 1998). According to Braine, there exists in the human brain, as the textbooks since Aristotle have held, a universal and natural deductive logic, limited however to elementary operations. The function of this logic is to facilitate verbal interactions through a set of formal rules automatically performed and acquired during childhood from language (the famous ‘if–then’, for example). This point of view is very Piagetian; Braine simply attributes to language, in childhood, the key role that Piaget attributed to actions and mental operations (Piaget nevertheless recognized the role of language through the symbolic function). Conversely, Philip Johnson-Laird maintained that mental logic, even elementary logic, does not exist (Johnson-Laird, 2001). According to him, all our reasoning can be explained not by the use of verbal and logical rules (Piaget, Braine) but by a sort of visuo-spatial (or graphic) production involving actors personifying the details of a problem in an internal theatre which he called a ‘mental model’ (itself made up of alternative models to be tested mentally). Several experimental psychological studies have been conducted to decide between these two theories – mental logic versus mental models – but no conclusion has been reached, with each ‘school’ providing behavioural data supporting its point of view and challenging the other. The most biologically plausible scenario is that, as Braine and Piaget thought (following Aristotle, Descartes and many others), our brain has at least a few logical rules, organized in a regulatory system relevant to our particular situation. This does not rule out the existence of other forms of reasoning, as described in the next chapter, which is devoted to the dual-system theory of the logical and the intuitive.

7.2 Non-linear development

The most fundamental criticism of Piaget today does not so much involve mental logic, of which it is difficult to doubt some minimal existence, but the very concept of cognitive development. The new child psychology questions the ‘staircase model’ of incremental progress stage by stage, as propounded by Piaget, or at least states that it is not the only possibility. On the one hand, there already exist in babies quite elaborate cognitive capacities (Gopnik, 2012; Teglas et al., 2011; Xu & Garcia, 2008), in particular probabilistic forms of reasoning for statistical purposes (but also diverse physical and logical-mathematical proto-knowledge: see Chapter 8), which were ignored by Piaget and are not reducible to strictly sensorimotor functioning (the ‘first step of the staircase’). On the other hand, the continuation of the development of intelligence up to adolescence and adulthood (the ‘top step’) is punctuated with errors of reasoning, perceptive and semantic biases (Evans, 1989, 2003; Kahneman, 2011; Kahneman et al., 1982), and unexpected gaps, including steps backwards or ‘regressions’ not predicted by Piagetian theory (see Chapter 7).18 So, rather than following an oblique line that leads without a hitch from the sensorimotor to the abstract (the stages of Piaget’s logic system), intelligence moves in a much more crooked and accidental way; that is, it is non-linear.



This new image of cognitive ontogenesis is consistent with current concepts – themselves dynamic and non-linear – of the development of knowledge in the history of science. So, for Michel Serres, time, over the centuries, has ‘stopping points, ruptures, deep wells, channels of extraordinary acceleration, gaps’.19 This historian and philosopher of science proposed the metaphor that time bends and twists, like a crumpled handkerchief at the bottom of a pocket, illustrating topology, the science of spatial relations, and not ‘metric geometry’, the science of well-defined and stable distances (which would here be represented by Piaget’s stages).

In Chapter 7 it will be seen that this dynamic imposes – for the expression of the logic system – mechanisms of regulation, of cognitive control, called an ‘executive function’ (inhibition, cognitive resistance), exercised by the prefrontal cortex, at the front of the brain. These are executive mechanisms that enable reasoning on a case-by-case basis. However, before discussing the regulation of reasoning systems by the brain, of the inhibition of one to activate the other, one must first understand which systems compete with logic, creating interference and cognitive conflicts with it – sources of reasoning biases and systematic errors, in children as in adults. How do these systems operate, in what order, and why?

A good way of understanding current experimental data is to distinguish, in an initial analysis, two major systems of reasoning, each constituted of multiple strategies (some automatic and heuristic), as Kahneman (2011) did, following other contemporary psychologists: an intuitive, rapid and emotional system (System 1) opposed to a more reflective and logical system (System 2).

Notes

1 A. Rimbaud, Lettre à Paul Demeny, 15 May 1871, in Œuvres complètes (Paris: Gallimard, 1979).
2 I. Kant, What Is Enlightenment? (Paris: Fayard, 2006 [1784]). L’Encyclopédie by Diderot and d’Alembert, issued between 1751 and 1772 (seventeen volumes of text, eleven volumes of plates, 150 authors), bore the subtitle Dictionnaire raisonné des sciences, des arts et des métiers (our emphasis). In this same era, Étienne de Condillac, a friend of Diderot, already hinted at a psychology of reasoning (or understanding) founded on the comparison of ideas – either related or separated by words – and on the linking of judgements (Essai sur l’origine des connaissances humaines, Paris: Vrin, 2002 [1746]).
3 At the time of Leibniz, we may also mention the publication in 1662 of ‘La logique de Port-Royal’ (named after the famous Jansenist abbey near Paris, a place of Catholic reform with which Pascal was associated): La Logique ou l’Art de penser, whose initial editions’ title heralded the Grammaire générale et raisonnée.
4 See ‘Note on the awarding of the title of doctor honoris causa of the Sorbonne to Nobel prizewinner Daniel Kahneman: homage by Olivier Houdé’, L’Année psychologique, no. 107, 2007, pp. 525–527.
5 See www.humanbrainproject.eu
6 In the months that followed the official announcement of the European HBP project in early 2013, President Obama made a competing project public: BRAIN (endowed with $100 million for its first year, 2014).
7 For the opposite search for a cause (then) from its effect (if), one speaks of ‘abductive reasoning’: this is the case in medical diagnosis or in a police investigation.



8 The logicist doctrine associated with ‘anti-psychologism’ (logical laws, objective and public, cannot be derived from subjective and private representations) was at the heart of the major project of the formalization of mathematics pursued by Gottlob Frege (1848–1925) at the end of the nineteenth century in Germany. For Piaget’s critical position in this respect in the 1940s (regarding the logicism reformulated by Russell), see O. Houdé, ‘L’intelligence malgré tout’, preface to the new French edition of J. Piaget, La Psychologie de l’intelligence (Paris: Armand Colin, 2012), pp. 3–13.
9 My translation; see O. Houdé and C. Meljac (eds.), L’Esprit Piagétien: Hommage International à Jean Piaget (Paris: Puf, 2000), p. 213.
10 The critical study of the sciences, intended to determine their logical origin, their value and their scope (philosophy in the sciences). It is also a general theory of knowledge: what is knowledge, and how is it acquired?
11 H. Reeves, Malicorne (Paris: Le Seuil, 1990), p. 49.
12 J. Piaget, La Psychologie de l’Intelligence, p. 51.
13 This discussion opposes cerebral constructivism (Changeux or Dehaene today) and realism (Connes) in mathematics. As Plato had already suggested (the Ideas), realists think that mathematical (or logical-mathematical) reality exists independently of any human investigation. It is therefore constructed neither by the brain nor by psychology. However, Connes added that ‘we only perceive it thanks to our brain, at the price, as Valéry said, of a rare mix of concentration and desire’ (Changeux & Connes, 1989, p. 48 of the French edition, Paris, Odile Jacob).
14 Observation in vivo is possible today in children from primary and nursery schools, as it is in young adults; see, for example, Houdé et al. (2000, 2011).
15 We will not venture here into the technical, sometimes excessive, aspects of Piagetian formalization, but the reader can find a detailed account in Inhelder and Piaget (1958).
16 With this reversible operation, the child manages to respond that there are inevitably more flowers (B) than daisies (A), irrespective of the spatial and numerical extensions of daisies and roses (A and A’, with A > A’). This is cognitive manipulation (a round trip of the reversible operation) in working memory.
17 When the composition of asymmetric relations is mastered, transitivity, which is a deductive composition, is also mastered: A < C if A < B and B < C (a list that is easily transformed into a conditional rule ‘if–then’).
18 These deductive and inductive biases are observed systematically, even though the INRC logic group is, according to Piaget, in place.
19 M. Serres, Éclaircissements (Paris: François Bourin, 1992).

Bibliography

Braine, M. and O’Brien, D. (Eds.) (1998). Mental Logic. Hove: Erlbaum.
Changeux, J.-P. (1985). Neuronal Man: The Biology of Mind. New York: Pantheon.
Changeux, J.-P. (2002). The Physiology of Truth. Cambridge: Harvard University Press.
Changeux, J.-P. and Connes, A. (1989). Matière à pensée. Paris: Odile Jacob.
Changeux, J.-P. and Connes, A. (1995). Conversations on Mind, Matter, and Mathematics. Princeton, NJ: Princeton University Press.
Evans, J. (1989). Bias in Human Reasoning. Hillsdale, NJ: Erlbaum.
Evans, J. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7, 454–459.
Gopnik, A. (2012). Scientific thinking in young children: Theoretical advances, empirical research, and policy implications. Science, 337, 1623–1627.
Houdé, O. and Meljac, C. (Eds.) (2000). L’Esprit Piagétien: Hommage International à Jean Piaget. Paris: Puf.
Houdé, O. et al. (2000). Shifting from the perceptual brain to the logical brain: The neural impact of cognitive inhibition training. Journal of Cognitive Neuroscience, 12, 721–728.
Houdé, O. et al. (2011). Functional MRI study of Piaget’s conservation-of-number task in preschool and school-age children: A neo-Piagetian approach. Journal of Experimental Child Psychology, 110, 332–346.
Inhelder, B. and Piaget, J. (1958). The Growth of Logical Thinking from Childhood to Adolescence. London: Routledge.
Johnson-Laird, P. (2001). Mental models and deduction. Trends in Cognitive Sciences, 5, 434–442.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kahneman, D. et al. (1982). Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Nadel, L. (Ed.) (2003). The Encyclopedia of Cognitive Science. London: Nature Publishing Group.
Piaget, J. (1970). Genetic Epistemology. New York: Columbia University Press.
Piaget, J. (1987). Possibility and Necessity (Vols. 1 and 2). Minneapolis: University of Minnesota Press.
Piaget, J. (2015). The Psychology of Intelligence. London: Routledge.
Piaget, J. and Garcia, R. (2015). Towards a Logic of Meanings. Hillsdale, NJ: Erlbaum.
Piaget, J. and Inhelder, B. (1969). The Psychology of the Child. New York: Basic Books.
Piaget, J. et al. (1992). Morphisms and Categories. Hillsdale, NJ: Erlbaum.
Teglas, E. et al. (2011). Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332, 1054–1059.
Xu, F. and Garcia, V. (2008). Intuitive statistics by 8-month-old infants. PNAS, 105, 5012–5015.

6 THE DUAL-SYSTEM THEORIES

System 1 (intuition) and System 2 (logic)

We shall begin this chapter with a concrete example of ‘if–then’ logical deduction: the example of shapes and colours already given, which we will now examine in detail with respect to the cognitive processes it involves. If you did not do so for the previous example of Piaget’s snail (which was not as simple as all that), get a pencil now – even colouring pencils if you have some to hand – and a note pad. Your Systems 1 and 2 will be mobilized! Maybe even your emotions!

1. Logic versus intuition: cognitive bias

Imagine you have been asked to read on a computer screen (or a tablet) the rule ‘If there is a red square on the left, then there is a yellow circle on the right’. Then, you are required to select with the mouse (or finger), from the various figures shown (simple geometric shapes of various colours), two shapes that conform to this rule. You will, like everyone, place a red square on the left and a yellow circle on the right, which is correct. The consequent (the ‘then . . .’ part) can also be put into the negative: ‘If there is a red square on the left, then there is no yellow circle on the right’. In this situation, too, everyone gives a correct response: for example, a red square on the left and a blue diamond on the right. Now for something a little more difficult: make the last rule false. This presents no problem either: everyone places a red square on the left and a yellow circle on the right. Such logic puzzles activate our logical reasoning, acquired in adolescence, and seem to support the work of Piaget and Braine (Chapter 5). However, if the antecedent (the ‘If . . .’ part) had been made negative – which from the perspective of logic is not technically more complicated – at least 90% of us, adolescents and adults alike, would have failed. So almost everyone.

The dual-system theories


1.1 ‘If–then’ and matching bias

Jonathan Evans discovered that if people are asked to make the rule ‘If there is no red square on the left, then there is a yellow circle on the right’ false, they answer (believing they are answering correctly): ‘a red square on the left and a yellow circle on the right’ (Evans, 1998). They intuitively think something like: ‘I must make the rule false; there is a negative at the beginning, so if I match my response exactly with the two shapes mentioned in the rule, that must be right’. Yet this is not the right answer; it is an error of logic called ‘matching bias’. This bias is caused by a heuristic process in which the presence of a negative automatically focuses our attention on what is denied. For example, it is known that if one tells a child that there is no (or no more) chocolate ice cream, he or she thinks about it all day. Another example is given by the winner of the 1972 Nobel Prize in Physiology or Medicine, Gerald Edelman, in his book Bright Air, Brilliant Fire: On the Matter of the Mind. If you are told not to think of a pink elephant, like most people you will instantly create a mental image of a pink elephant (Edelman, 1992). This matching bias (in this case, red square, yellow circle) in an ‘if–then’ task is therefore a mistake or a snag – a ‘delayed error’ for adolescents and adults (‘delayed’ in terms of cognitive development). It is a shift, a regression in relation to formal logic, and Piaget did not predict this shift. In the Piagetian model, we have reached the last cognitive, logical stage, the last step – and gone well beyond that, if we are considering adults. However, even senior engineers from well-known schools make matching-bias errors: we have tested it, in training seminars, with senior managers from a leading global company.
This does not exclude the fact that mental logic exists, but shows that one of the characteristics of the human brain is to deviate from it rather than to apply it – irrespective of the level of academic training. This is also part of the psychology of reasoning, ignored by the ‘official’ (very epistemological) Piagetian account of how we develop from a baby into a logical thinker.

1.2 Definition of reasoning bias

In 1989, using examples like this, Evans embarked on writing the ‘other account’ – ‘I think, therefore I am wrong’ – focused on deviations from logic. He wrote:

There is, in my view, sufficient evidence of widespread mistakes and biases in human reasoning – on very robust criteria of measurement – to justify a systematic attempt to classify such phenomena . . . I see this effort as complementary to, rather than in opposition to, the efforts of other psychologists [Piaget, Braine, etc.] to propose mechanisms underlying the competence which is also manifest in reasoning studies. (Evans, 1989, p. 10)

He defined reasoning bias as the systematic tendency to take irrelevant factors into account when fulfilling a task, while ignoring relevant ones.



Therefore, in the ‘if–then’ and coloured geometric shapes task (‘If there is no red square on the left, then there is a yellow circle on the right’), one should not allow oneself to be influenced, trapped by the perception of the elements cited in the rule (in this case the red square and yellow circle). Instead, one should reason according to logic by selecting a situation where the antecedent of the rule is true (not a red square) and the consequent of the rule is false (not a yellow circle). An example of this is a green square on the left and a blue diamond on the right. Other logical responses are possible; what needs to be selected is anything except a red square on the left and anything except a yellow circle on the right. This is what makes the rule false as required by the instruction for the problem. In this scenario, therefore, being logical involves going against the perception of the elements cited in the rule and detaching from it, that is by inhibiting the matching bias. In the next chapter, we shall come back to this executive component of inhibition, which is central to our ability to reason correctly.
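The logical analysis above can be checked mechanically (a sketch of ours, with an arbitrarily restricted set of shapes; Evans’s actual displays were richer): a conditional ‘if P then Q’ is false only when P is true and Q is false, so the falsifying responses are exactly those the matching bias leads us away from.

```python
from itertools import product

# Restricted, illustrative universe of shapes for each side of the display.
SHAPES = ["red square", "yellow circle", "green square", "blue diamond"]

def rule(left, right):
    """Truth of: 'If there is no red square on the left,
    then there is a yellow circle on the right' (P -> Q)."""
    antecedent = (left != "red square")      # P: no red square on the left
    consequent = (right == "yellow circle")  # Q: a yellow circle on the right
    return (not antecedent) or consequent    # a conditional P -> Q

# All situations that make the rule false: P true and Q false.
falsifiers = [(l, r) for l, r in product(SHAPES, repeat=2) if not rule(l, r)]

# The matching response does NOT falsify the rule...
print(("red square", "yellow circle") in falsifiers)   # False
# ...whereas the logically correct response does.
print(("green square", "blue diamond") in falsifiers)  # True
```

Enumerating `falsifiers` confirms the text’s criterion: anything except a red square on the left, combined with anything except a yellow circle on the right.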

1.3 Another bias: syllogisms and beliefs

The ‘if–then’ example we have just looked at (the matching of coloured shapes) can be described as a ‘perceptive’ type of bias, though the phenomenon of cognitive bias is so general that semantic examples can also easily be identified. These are related to reasoning errors where our general knowledge of the world short-circuits logic. In syllogisms, this is ‘belief’ bias. As a reminder, from the perspective of logic, the valid nature of a deduction depends on the structure of the inference and not on the content of the expressions as such – the semantics. It may even contradict our beliefs about the world. In this context, it is said that logic is ‘independent of content’. Indeed, Evans discovered the existence of a reasoning bias that results in the semantic strategy (credibility) being favoured over the logical strategy (validity): the belief bias, which exists in children, adolescents and adults (Evans et al., 1983; Evans & Perry, 1995). To illustrate this simply, children systematically tend to accept a non-valid yet credible conclusion: (a) elephants are hay eaters, (b) hay eaters are not heavy. Does that mean that (c) elephants are heavy? Children respond yes, yet nothing allows them to deduce this conclusion from the premises (the first two statements, which they are asked to hold as true). It has been shown that the difficulty with this type of task, during development, is to manage to inhibit (as in matching bias) the semantic content of the conclusion: the strong belief of children about the weight of elephants (De Neys & Van Gelder, 2008; Moutier, 2000). ‘Elephants are heavy’ is, as we know, very general semantic knowledge rooted in our brain from a very young age. Like matching bias (perceptive), this belief bias (semantic) in children, and its persistence in various problems in adolescents and adults, was not identified by Piaget.
To account for this, Evans proposed a model predicting that individuals – children, adolescents or adults – evaluate semantic credibility before logical validity (Evans & Over, 1996). In other words, if the conclusion is credible, they will accept it without evaluation – this is the heuristic strategy of belief; if it is not credible, they will then seek to ascertain whether it validly follows from the premises (by applying the algorithm of logical verification of the syllogism). These syllogistic errors related to semantic conflicts are therefore not specific to young children, as Piaget predicted; they are inscribed in our general cognitive operating mode, for children and adults alike. Thus, Kahneman (2011) gives the example (a) all roses are flowers, (b) some flowers wilt quickly, therefore (c) some roses wilt quickly. He stated that a large majority of university students, from Harvard, the Massachusetts Institute of Technology (MIT) and Princeton, hold the view that this syllogism is valid (Aristotle of Athens, a fictitious student on a European programme, would not have been of this view). This response is obviously biased because it may be that no rose is amongst the flowers that wilt quickly. However, the conclusion ‘some roses wilt quickly’ is credible, and as Kahneman (2011, p. 59) wrote, ‘One needs to work hard to eliminate it. The insistent idea that “it’s true, it’s true” makes logical verification difficult, and most people don’t bother reflecting on the problem’. Summarizing Evans’s concept (the credibility/validity model): if the conclusion is credible (some roses wilt quickly), people will accept it without question. They do not worry about (if they even notice) a conflict between the conclusion and their beliefs or usual knowledge. This lies at the heart of the economy, and even laziness, of human reasoning.
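The invalidity of the ‘roses’ syllogism can be exhibited with a counter-model (a set-based sketch of ours; the individual flowers are invented for illustration): a situation in which both premises hold while the credible conclusion fails.

```python
# Counter-model: two roses and one tulip; only the tulip wilts quickly.
roses = {"rose1", "rose2"}
flowers = roses | {"tulip1"}
wilts_quickly = {"tulip1"}

premise_1 = roses <= flowers                # all roses are flowers
premise_2 = bool(flowers & wilts_quickly)   # some flowers wilt quickly
conclusion = bool(roses & wilts_quickly)    # some roses wilt quickly?

print(premise_1 and premise_2)  # True: both premises hold in this model
print(conclusion)               # False: the conclusion fails, so the
                                # syllogism is invalid despite being credible
```

One counter-model suffices: since the premises can be true while the conclusion is false, the inference is invalid, whatever our beliefs about roses.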

2. Two systems: System 1 (intuitive) and System 2 (logical)

In his 1989 work Bias in Human Reasoning (a true bible of deduction biases), Evans analysed reasoning biases in terms of heuristic strategies. These are rapid, relatively automatic strategies (e.g. perceptive matching or semantic credibility), as opposed to analytical strategies, which are slower yet controlled and attentional, and correspond to deductive competence: logic and the exact algorithm (the Piaget system). Then in 1996, with David Over, Evans introduced the concept, compatible with the above distinction, of two forms of rationality, which they numbered simply 1 and 2 (Evans & Over, 1996). Rationality 1 is a form of ‘daily reasoning’ that people use to achieve their objectives without seeking to conform to logic. Rationality 2 is deductive competence as explored by Piaget (the formal operations stage) and Braine (mental logic).1 According to Evans and Over, reasoning bias is a feature of Rationality 1 rather than Rationality 2. This theoretical distinction between two systems S1 and S2 (or R1 and R2) – so straightforward that it borders on a dichotomy (for a cognitive function as complex as reasoning) – has been very successful in recent decades. The concept can be found in the writings of the psychologist Keith Stanovich (Stanovich & West, 2000). Evans picked it up again in 2003 in a fine article entitled ‘In Two Minds: Dual-Process Accounts of Reasoning’, in which he outlined how the two systems (or forms of mind) both operate within a single brain and are constrained by it. I and others have demonstrated this experimentally through the measurement of perceptive matching bias (S1), compared to logical responses (S2), in a seminal brain imaging experiment (Houdé et al., 2000). Finally, Kahneman used the S1/S2 dichotomy in 2011, nine years after he was awarded the Nobel Prize in economics. He used it to structure the publication of all his research, much of which was conducted with Amos Tversky (1937–1996).

Before looking in more detail at Kahneman’s contribution regarding System 1 – the intuitive and heuristic system with its multiple biases and judgement errors (errors that ‘people in general’ make) – we would like to highlight what Piaget wrote concerning the logical operations of adolescents and adults (operations described in Chapter 5: the INRC group in particular). He said that they are ‘potentialities that a normal subject can make use of even if each person does not achieve them all and even if their actualization is subject to accelerations or delays’ (Piaget & Inhelder, 1969, p. 104). He therefore thought that he had foreseen all the possible scenarios with respect to his theory. However, Piaget never imagined that errors of reasoning in elementary logical tasks (those of Evans, Kahneman, etc.) could be part of the system – in other words, that they are stable characteristics of all ‘normal’ adolescents and adults (between 80% and 100% depending on the bias), in principle as stable as what he considered to be logical structures (Piaget was a structuralist).2 As was so well highlighted by the neo-Piagetian psychologist Kurt Fischer, it is noted today that ‘shift [with respect to logic] is the rule of cognitive development’ and not the exception (Fischer & Farrar, 1988, p. 141). This is also true for absurd decisions in the world of technology and management (Morel, 2002, 2010). We will come back to this in the next chapter.

2.1 System 1 defined by Daniel Kahneman From the very start of Thinking, Fast and Slow, Kahneman (2011) unambiguously states that ‘the automatic System 1 is the hero of the book’ (p. 29). This is a marked difference from Piaget, who considered that (following on from Aristotle, Descartes, etc.) the hero, conversely, was logic or System 2, as it develops from the child to the adolescent. In this respect, as previously stated, Kahneman revived (without citing it) the work of Pascal, according to whom our thought must be twofold: the spirit of geometry (System 2) and the spirit of finesse, in the sense of unconscious, intuitive and rapid reasoning (System 1). In order to portray the psychological character of System 1 more clearly (it had already been well described by Evans in his definition of bias and Rationality 1), Kahneman (2011) listed a series of properties that, according to him, establish our System 1 reasoning. System 1 works automatically and rapidly, with little or no effort and without deliberate control. It produces impressions, feelings, inclinations, which is what gives it its ‘intuitive’ character. It can provide competent reactions and intuitions after specific training (expertise). It creates a coherent schema of ideas activated in associative memory (Hume and Condillac are relevant here, see Chapter 3). It attaches a sensation of cognitive ease to illusions

The dual-system theories


of truth, pleasant feelings and, consequently, reduced vigilance. It neglects ambiguity and removes doubt.3 It is biased towards belief and confirmation. It exaggerates emotional consistency (the ‘halo effect’4). It concentrates on existing evidence and ignores missing evidence. It is more sensitive to changes than to steady states. It overestimates weak probabilities. It is less and less sensitive to quantity (psychophysics). It reacts more strongly to losses than to gains (aversion to loss), and so on. The list is not exhaustive. Kahneman was original in being first to say strongly – and demonstrating experimentally – that these intuitive characteristics of System 1 are valid not only for the cognitive psychology of reasoning, in a laboratory and on a daily basis, but also for the economy; he very seriously doubted the premise of the rationality of individuals (or ‘agents’) in the dominant standard theory. This resulted in the new trend of ‘behavioural economy’ and also the ‘nudge’ approach (Thaler, 2015; Thaler & Sunstein, 2008) in social psychology regarding public policy (the ‘nudge’ metaphor is that of a mother duck who keeps her ducklings on the right track with a discreet little push). It involves incitements in terms of economy, health or safety, which increases political efficiency by adapting to the psychological bias of individuals (via System 1) using little pushes rather than by applying logical constraints (System 2).5 Thanks to this nudge theory, Richard Thaler received the Nobel Prize in Economics in 2017. Considering the number of properties in the System 1 list, it is clear that this intuitive system – and consequently our brain with its 86 to 100 billion neurons (and a million billion connections) – is often, despite its tremendous neurocognitive potential, a machine that draws hasty conclusions. 
That is how Kahneman explains the belief bias in syllogisms: our System 1 is ‘biased to believe’ that elephants are heavy or that some roses wilt quickly (its associative memory encourages it), because it works automatically and rapidly, with little or no effort. This is also valid for the deductive bias of matching in ‘if–then’ rules with a negative antecedent: the intuitive behaviour to make a rule false by using the two elements cited in the rule substitutes the easy perceptive solution of System 1 for the logical, more difficult, deduction of System 2. The examples given up to now have been related to deduction (syllogisms, conditional rules), but Kahneman’s most original experimental contribution was to demonstrate his theory when considering human judgements involving the probable, that is induction. The theoretical question to ask, in this case, is whether our judgements conform to the conventional mathematical formalization of uncertainty: the calculation of probabilities.

2.2 Biases of inductive reasoning: stereotypes, framing and so on The most famous example in the work of Kahneman and Tversky is that of a fictitious female character, Linda (Tversky & Kahneman, 1983). The story told to individuals who did the experiment (students in general) was about Linda, whose profile was that of a very left-wing student in the United States in the 1970s. The


3-system theory of thinking and reasoning

students then made a comparative evaluation of probabilities relating to whether this ex-student who was now thirty was (a) a bank teller or (b) a bank teller and active in the feminist movement. The results show that approximately 80% of individuals questioned considered the proposition ‘feminist bank teller’ more likely, thereby contravening the basic calculation of probabilities (p) according to which p (a&b) ≤ p (a), where a = bank teller and b = feminist. Think of or draw the situation in terms of a Venn diagram with two circles with values a and b, and their zone of intersection. Instead of basing it on the logical calculation of probability (System 2), most people reason according to a heuristic called ‘the bias of representativeness’, that is an instant resemblance with a social stereotype in associative memory (an intuition of System 1). Kahneman (2011) gave a very full description of this discovery (an entire chapter is dedicated to it): we found [with Tversky] that 89% of the undergraduates in our sample violated the logic of probability. We were convinced that statistically sophisticated respondents would do better, so we administered the same questionnaire to doctoral students in the decision-science program of the Stanford School of Business, all of whom had taken several advanced courses in probability, statistics, and decision theory. We were surprised again: 85% of these respondents also ranked feminist bank teller as more likely than bank teller. (p. 158) The Linda problem illustrates a basic error of the conjunction of probabilities. As we stated earlier with a simple formula, the co-occurrence of two events is still less probable than the occurrence of a single one. Everyone knows and nobody doubts this if searching on a dating website or in the small adverts in a newspaper for beauty, intelligence and wealth. 
This inductive bias is even more astonishing because, according to the theory of Piaget, from ‘the age of reason’ (six to seven years) children well know that there are more flowers than daisies or roses (and in adolescence, moreover, they manipulate the INRC group and its two reversibilities). Like bank tellers, flowers are the superordinate category or class and like feminist bank tellers, daisies (or roses) are the sub-category (this is the logical relation of inclusion). This illustrates well the non-linear nature of the development and use of the logical system of reasoning (System 2) as we highlighted at the end of the previous chapter. Thinking, Fast and Slow, Kahneman (2011) is full of examples of this type in its 500 pages (many, like the Linda example, are related to inductive reasoning and its special cases). In the first few pages of the book, Kahneman invites us to imagine the story of Steve: ‘Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail’ (p. 7). The question asked is: Is Steve more likely to be a librarian or a farmer? Your System 1 is

The dual-system theories


already working and your System 2 is being lazy. The resemblance between the profile of Steve and the stereotype of the librarian (System 1) instantly resonates, whereas statistical considerations (System 2), which are equally important, are ignored. Kahneman commented: Did it occur to you that there are more than 20 male farmers for each male librarian in the United States? Because there are so many more farmers, it is almost certain that more ‘meek and tidy’ souls will be found on tractors than at library information desks. (p. 7) As well as ‘cognitive illusions’, Kahneman gives the example of perceptive errors, like the famous Müller-Lyer illusion in which two parallel lines of exactly the same length seem very different due to the opposite-facing arrows that are added to them. In the field of visual perception, this is also an intuitive expression of System 1. Another example in the field of medical decision-making is ‘the frame effect’ or ‘framing’ (an example which can also apply to decisions or financial bets of the ‘wheel of fortune’ type). Tversky and his colleagues at the Harvard Medical School (McNeil et al., 1982; Tversky & Kahneman, 1981) conducted this experiment. Doctors participating in the study received statistics involving the results of two types of treatment for lung cancer: surgical intervention and radiotherapy. The five-year survival rates greatly tip the balance in favour of surgical intervention, but, in the short term, the latter is more risky than radiotherapy. Statistics were provided to study participants (divided into two groups): the first group were told that after surgical intervention (a) the survival rate at one month is 90% (the ‘survival frame’), and the second that (b) there is a 10% mortality rate in the first month (the ‘mortality frame’). From a logical perspective (System 2), (a) and (b) are strictly equivalent statistics. 
However, the results show that in the ‘survival frame’ of the presentation of data, 84% of doctors opted for surgical intervention, whereas in the ‘mortality frame’ only 50% did so (with 50% for radiotherapy).

2.3 From framing to emotions So, as Kahneman noted, we rarely notice how much our preferences are dependent on framing rather than as responses to reality. In the example just given, we can see that System 1 is very sensitive to strong words with an emotional charge: dying is not a good thing, surviving is a good thing, and a 90% survival rate seems encouraging to our brains, whereas a 10% mortality rate is terrifying. It is emotionally true (at least understandable) for System 1 but should not be so for System 2. Kahneman demonstrated that in medical or financial decisions we generally feel an excessive aversion to losses (via System 1) compared to potential gains (the


3-system theory of thinking and reasoning

exact calculations of System 2). This led to political recommendations regarding the importance of formulation (or framing) of information with people. If a potential result is presented as a loss, it will no doubt have more impact on the population than if it is presented as a gain. These observations give an idea of the extent to which emotions are involved in System 1, and in fact the frame effect can be ‘neutralized in a laboratory’, as demonstrated in our lab by Mathieu Cassotti with an experimental modification (called ‘incidental’) of the emotional context (Cassotti et al., 2012). A positive induction (pleasant image) was introduced just before the choice, and this made the frame effect disappear. To describe the relationship between these emotions and reasoning, Kahneman – with reference to the psychologist Paul Slovic – also spoke more generally of ‘the affect heuristic’ (the simple feeling of liking or not liking). This is an emotional substitution where the response to an easy question (What do I feel about such a subject? I like it, I hate it, etc.) acts as a response to a more difficult question, requiring reflection, analysis and cognitive deliberation (What do I think?).6 Kahneman, with Slovic, put forward the hypothesis that this influence of emotions could operate via System 1 without us necessarily being aware of it. Here he refers to the famous work of the neuroscientist Antonio Damasio, according to whom the emotional evaluation of results of a choice, our physical condition and our tendencies to attachment and avoidance that are associated with it play a central role in decision-making in our brain. This is the so-called theory of ‘somatic markers’, a theory both biological and psychological that explains how emotions can guide reasoning in our brain. This is done via the prefrontal cortex using executive functions (control functions: on/off). 
Damasio’s theory, fundamental in the field, enables us to introduce here the notion that Kahneman’s Systems 1 and 2 are not enough to understand the mechanisms of reasoning (Chapter 5). This is because, in the event of serious cognitive conflicts between intuition (which is very strong as we have seen) and logic, our brain needs another system of arbitration or inhibitory control. This could be System 3, called ‘executive’, which is involved in the stopping of System 1 and the switching to System 2. As I have outlined above, this system of ‘cognitive resistance’ (the stop signal or inhibition) can reveal neither the intuition (clearly) of Kahneman’s System 1 nor the pure logic of System 2 (Piaget). This is particularly so when the first, which is almost irrepressible, and the second are embroiled in repeated conflicts (where System 1, which is more rapid, is the winner – Kahneman’s ‘hero’). The acute need for executive control – which can justify targeted educational interventions (Diamond & Lee, 2011; Houdé, 2007) – is true in adults (the examples in this chapter illustrate this), but even more so in children due to incomplete maturation of the prefrontal cortex (Casey et al., 2005). The link between these conceptions of cognitive systems, System 1 (intuitive) versus System 2 (logic), and System 3 (executive), is the cognitive-emotional

The dual-system theories


brain in the sense that Damasio intended, that is a form of guidance towards reason. It is not as with Kahneman a misleading emotional and unconscious tendency (or a bias of System 1), but it would be by definition the positive opposite.

3. Emotional guidance in the brain The neuroscientific work of Damasio, in addition to his numerous specialist articles, is recorded in important books such as Descartes’ Error: Emotion, Reason, and the Human Brain (1994), The Feeling of What Happens: Body and Emotion in the Making of Consciousness (1999), Self Comes to Mind: Constructing the Conscious Brain (2010) and, the more recent one, The Strange Order of Things: Life, Feeling, and the Making of Cultures (2018).

3.1 Antonio Damasio’s ‘somatic markers hypothesis’ This theory of a relationship between emotion and cognition is placed – rightly or wrongly depending on the philosopher – in opposition to the Cartesian error of a mind (soul) considered ‘apart’ from the body (the mind–body dualism or ‘Descartes’s error’ according to Damasio). Around the middle of the twentieth century, this idea was the source of the incorrect metaphor of the human mind being similar to computer software (therefore independent of the machine, i.e. of brain and body). But the circuits of the brain are not those of a conventional computer (conversely, as we have seen, there is today an extraordinary project in progress in which researchers are trying to build an artificial brain of a new type, which is ‘biologically plausible’ with networks of virtual neurons similar to our own). The circuits of the human brain include regions related to emotion such as the amygdala, which is involved in the fear of danger, belonging to the limbic system in general, which is the location of multiple forms of emotions (a computer does not have fear). In the paralimbic cortex (near to the limbic system), in the front (prefrontal), low (ventral) and middle (medial), is an essential region called the ‘ventromedial prefrontal cortex’ (vmPFC). The experiments in Damasio’s laboratory conducted with patients with specific lesions in this region of the brain showed that they seemed not to feel emotions and were incapable of detecting them in others. One of the indicators used was the variation of cutaneous conductance, that is an emotional response of the skin. Everything indicated that their way of cold-blooded reasoning prevented them from attributing different weights to various solutions, which were offered to them. Therefore, the ‘landscape’ where their decision-making was operating had no contours. 
These neuropsychological observations in patients made by the Damasio team with respect to vmPFC were the origin of his ‘theory of somatic markers’. By deduction, in the healthy brain (as an example of neuropsychological inference), the ability to express and to feel emotions provides us with the correct direction and places us in the right location in the space where decision-making takes


3-system theory of thinking and reasoning

place. This is where we can properly implement the principles of logic (System 2). From this perspective, the prefrontal cortex plays a central role because it receives signals from all sensory regions of the brain where images, which are the origin of our reasoning processes, form. These include somatosensory areas where states of the body past and present are mapped continuously: as somatic markers. These are connections established between some categories of objects or events and pleasant or unpleasant bodily states. These markers are the product of multiple individual experiences – from babies to adults – regulated by the biological system of homeostasis. In 2013 Damasio published some remarkable illustrations of this ‘integrated homeostasis’ – the subtle maintenance of biological equilibrium at all levels, from body temperature and the concentration of substances to high level cognition – in an article entitled ‘The Nature of Feelings: Evolutionary and Neurobiological Origins’ (Damasio & Carvalho, 2013). In the detail of the human body (with multiple anatomical schemas providing support) we can understand how, from individual neurons to cortical networks, the deep anchoring of cognition in feelings, emotions and biology takes place. These are constant and interrelated sources of regulation and equilibrium. Here the Cartesian dualism of mind/body no longer has a place. During cognitive and emotional development in children, these markers become increasingly differentiated and are stored in the brain in the form of ‘simulation loops’ (feelings) which replace direct reference to real somatic states.7 These somatic markers, bearers of emotional values, are integrated in the vmPFC where they function as a sort of automatic guide – ‘the mind modulated by the body’ – which directs the choices of an individual and therefore his or her reasoning. They act while partially ‘hidden’ (i.e. 
without the subject necessarily being aware of them) to favour, through the bias of attention mechanisms, some elements with respect to others and to command the signals ‘on’, ‘off’ or ‘change of direction’ that are involved in decision-making. That is where the misleading emotions of Kahneman can be inserted (aversions, affect heuristic, etc.), though, conversely, this role of the emotions can also be positive (re-equilibrating), in favour of System 2. Houdé et al. (2001) have therefore demonstrated in cerebral imaging in healthy individuals (with no brain lesions) subjected to a task of logical reasoning that vmPFC is activated during inhibition of an incorrect or biased strategy (this is the ‘stop’ command of System 1) and during activation of an appropriate strategy (‘change of direction’ and the ‘on’ command of System 2). We will return to this experiment in the next chapter.

3.2 Self-consciousness, metacognition and reasoning When these selective attention mechanisms take over (against System 1) to change direction clearly (attentional control) by correcting an initial error of reasoning, then reflective (or self-) consciousness is operating. One is conscious of seriously reflecting on the task, which is sometimes difficult, of resolving it

The dual-system theories


(by deduction or induction); in addition one is conscious of the cognitive, executive and prefrontal effort that it requires; one may even feel pleasure. This is what Damasio (1999) called ‘the feeling of what happens’ in reasoning and which Théodule Ribot, the founder of French scientific psychology, had identified long ago as the ‘intellectual feeling’ in a very complete typology of feelings (Ribot, 1896). This consciousness of reasoning corresponds to what John Flavell (1979) defined as a ‘metacognitive experience’. Comparable phenomena exist for metamemory, metalanguage, metacommunication and so on. Flavell differentiated these from simple metaknowledge about variables of people, tasks or strategies likely to affect memory, language, communication or, in this case, reasoning (see also Houdé, 2004). We therefore rediscover the importance of the reflective experience, which is both cognitive and affective, related to the resolution of a particular problem. In this case, it is being aware of one’s own reasoning error (from System 1) and understanding, or at least feeling, the need to change strategy (use resources from System 2). This shifting, that is this flexibility, by cerebral inhibition and activation (System 1/System 2), comes from System 3: to inhibit for reasoning, due to the executive functions of the prefrontal cortex, the most developed part of the brain in humans compared with other primates, carnivores and rodents (Fuster, 1997, 2003). The challenge is to resist the biases of heuristics (matchings, beliefs, stereotypes, framings, etc. of System 1) triggered by other more impulsive parts of the brain. Kahneman saw the hero of reasoning in System 1 but I suggest that the inhibitor System 3 is also, and maybe above all, the hero of cognitive resistance. In the next chapter, we shall discover the details of this inhibition process and the role of System 3 between intuitions (System 1) and logic (System 2). 
The reference to Damasio already allows us to understand, at this stage of the book, how positive, constructive inhibition, as a factor of intelligence (stopping System 1 to release System 2), can be directed by emotional circuits of the brain. It is another epistemological way, one not conducted by Piaget, of establishing a deep, somatic connection (homeostasis, regulation and guidance) with biology.

Notes 1 For French language specialists, this dichotomy is reminiscent of those already established between 1970 and 1980 in differential psychology by Maurice Reuchlin (realization and formalization) and in cognitive developmental psychology by Pierre Oléron (direct circuits and long or indirect circuits), Pierre Gréco (sense and calculation), Jacqueline Bideaud and Jacques Lautrey (empirical and logical or analogical and propositional systems). See, for example, J. Bideaud, Logique et Bricolage chez l’Enfant (Lille: Pul, 1988). 2 This was the same period as the anthropologist Claude Lévi-Strauss. See J. Piaget, Structuralism (New York: Harper Colophon Books, 1971). 3 This is both contrary to the methodical doubt of Descartes and the definition of the measurement of intelligence by Kant: ability to tolerate (and not neglect) ambiguity and uncertainty.


3-system theory of thinking and reasoning

4 The tendency to like (or detest) everything outright in a person, including things that we have not observed. 5 In 2009, Sunstein was appointed an adviser in these matters to President Obama. The almost direct influence of Kahneman’s theory is measured – and of the psychology of reasoning – on the United States government. 6 The concept of easy/difficult substitution is at the heart of numerous experimental demonstrations on the laziness of System 2. For example: (a) a bat and a ball cost 1.10 dollars, (b) the bat costs 1 dollar more than the ball, (c) how much is the ball? The easy response for System 1 is 10 cents. That is wrong (10 cents + 1 dollar = 1.10 dollars, the cost of the bat, + 10 cents for the ball = 1.20 dollars in total). The correct response is 5 cents (the slower and more difficult calculation of System 2). Most students at Harvard, MIT and Princeton intuitively answer 10 cents (see Kahneman & Frederick, 2003). 7 The neurocognitive translation of bodily emotions in the mind in this case corresponds to feelings.

Bibliography Casey, B.J. et al. (2005). Imaging the developing brain: What have we learned about cognitive development? Trends in Cognitive Sciences, 9, 104–110. Cassotti, M. et al. (2012). Positive emotional context eliminates the framing effect in decision-making. Emotion, 2, 926–931. Damasio, A. (1994). Descartes’ Error: Emotion, Reason and the Human Brain. New York: Putnam. Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Orlando: Harcourt. Damasio, A. (2010). Self Comes to Mind: Constructing the Conscious Brain. New York: Pantheon Books. Damasio, A. (2018). The Strange Order of Things: Life, Feeling, and the Making of Cultures. New York: Pantheon Books. Damasio, A. and Carvalho, G. (2013). The nature of feelings: Evolutionary and neurobiological origins. Nature Reviews Neuroscience, 14, 143–152. De Neys, W. and Van Gelder, E. (2008). Logic and belief across the life span: The rise and fall of belief inhibition during syllogistic reasoning. Developmental Science, 12, 23–130. Diamond, A. and Lee, K. (2011). Interventions shown to aid executive function development in children 4 to 12 years old. Science, 333, 959–964. Edelman, G. (1992). Bright Air, Brilliant Fire: On the Matter of the Mind. New York: Basic Books. Evans, J. (1989). Bias in Human Reasoning. Hillsdale, NJ: Erlbaum. Evans, J. (1998). Matching bias in conditional reasoning. Thinking & Reasoning, 4, 45–82. Evans, J. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7, 454–459. Evans, J. and Over, D. (1996). Rationality and Reasoning. Hove: Psychology Press. Evans, J. and Perry, T. (1995). Belief bias in children’s reasoning. Current Psychology of Cognition, 14, 103–115. Evans, J. et al. (1983). On the conflict between logic and belief in syllogistic reasoning. Memory and Cognition, 11, 295–306. Fischer, K. and Farrar, M. (1988). 
Generalizations about generalization: How a theory of skill development explains both generality and specificity. In A. Demetriou (ed.), The NeoPiagetian Theories of Cognitive Development (pp. 137–171). Amsterdam: North-Holland.

The dual-system theories


Flavell, J. (1979). Metacognition and cognitive monitoring. American Psychologist, 34, 906–911. Fuster, J. (1997). The Prefrontal Cortex. New York: Raven Press. Fuster, J. (2003). Cortex and Mind: Unifying Cognition. Oxford: Oxford University Press. Houdé, O. (Ed.) (2004). Dictionary of Cognitive Science. New York and Hove: Psychology Press (entry: Metacognition, pp. 227–230). Houdé, O. (2007). First insights on neuropedagogy of reasoning. Thinking & Reasoning, 13, 81–89. Houdé, O. et al. (2000). Shifting from the perceptual brain to the logical brain: The neural impact of cognitive inhibition training. Journal of Cognitive Neuroscience, 12, 721–728. Houdé, O. et al. (2001). Access to deductive logic depends on a right ventromedial prefrontal area devoted to emotion and feeling: Evidence from a training paradigm. NeuroImage, 14, 1486–1492. Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. Kahneman, D. and Frederick, S. (2003). Representativeness revisited: Attribute substitution in intuitive judgement. In T. Gilovich et al. (eds.), Heuristics and Biases (pp. 49–81). New York: Cambridge University Press. McNeil, B. et al. (1982). On the elicitation of preferences for alternative therapies. New England Journal of Medicine, 306, 1259–1262. Morel, C. (2002–2012). Les Décisions absurdes. Paris: Gallimard. Moutier, S. (2000). Deductive competence and executive efficiency in school children. Current Psychology Letters, 3, 87–100. Piaget, J. and Inhelder, B. (1969). The Psychology of the Child. New York: Basic Books. Ribot, T. (1896). La Psychologie des Sentiments. Paris: Alcan. Stanovich, K. and West, R. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23, 645–665. Thaler, R. (2015). Misbehaving: The Making of Behavioural Economics. New York: W.W. Norton and Company. Thaler, R. and Sunstein, C. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. 
New Haven: Yale University Press. Tversky, A. and Kahneman, D. (1981). The framing of decisions and the psychology of choices. Science, 211, 453–458. Tversky, A. and Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgement. Psychological Review, 90, 293–315.

7 INHIBITING IN ORDER TO REASON System 3 (executive)

Towards the end of his monumental book, Kahneman (2011) posed an essential question for education: ‘What can be done about biases [System 1]? How can we improve judgments and decisions, both our own and those of institutions that we serve and that serve us?’ His answer: ‘As I know from experience, System 1 is not readily educable’ (p. 417).

1. Bias correction or debiasing Can reasoning be un-biased? Thirty years ago, Evans, co-discoverer with Kahneman and Tversky of multiple cognitive bias (matching bias, belief bias, etc.), answered negatively. Debiasing, or the suppression of bias, was, according to him, almost impossible due to the perceptive and/or cognitive power of automatic heuristics. In 1989, Evans wrote: ‘On the whole, there is very little evidence that deductive reasoning biases can be removed by verbal instructions relating to the underlying logical principles’ (pp. 116–117). What Evans wrote was accurate. The power of bias in System 1 (Pascal’s ‘misleading power’) is great. It is not only intuitive but is also very quick and is therefore always activated by our brain first in the sequence of reasoning. It is not enough to exercise System 2 by purely logical and even intense instruction; the intuition versus logic battle is unequal and the meteoric, lightning System 1 is systematically the winner. When people make mistakes because of naturally giving in to System 1, they may be partially aware of it (that is they have doubts). This is very clearly demonstrated today in the experimental work of Wim De Neys in our laboratory (De Neys, 2018; De Neys & Bonnefon, 2013; De Neys et al., 2008, 2011), and it is not necessarily that logic as such is lacking, in the sense that it is not stored in memory. It can simply be short-circuited.

Inhibiting in order to reason


This last point is important. It involves the hypothesis of the ‘presumption of rationality’ (presumption of System 2) which I made in 1995 in my book Rationality, Development, and Inhibition (Houdé, 1995; see also, Houdé, 2000). This hypothesis was formulated after studying the work of the philosopher Pascal Engel (1991): The Norm of Truth: An Introduction to the Philosophy of Logic. The idea is that an adult, like a child, can make a mistake in a task involving logic (such as is found in Piaget, Evans or Kahneman and Tversky) whilst being potentially logical. In developmental psychology, this challenges Piaget’s identification of stages and the associated ‘false negatives’.1 What is often a failure, in the event of a logical reasoning error, is the prefrontal capacity to inhibit a direct response circuit (System 1) automatically triggered by the more impulsive parts of our memory and our brain. Following the introduction in 1996 by Evans and Over of the distinction between two forms of Rationality, 1 and 2 (which went on to become the ‘dual processes’ theory or Systems 1 and 2, Evans & Stanovich, 2013), I proposed the cognitive principle of a necessary System 3 for inhibitory control (Houdé, 1997). This concept was further explored by Evans (2003) and since then by some other authors in the field, including De Neys, who developed ingenious experimental situations with or without ‘bias versus logic’ conflicts (the measurement of doubt arising from biased responses). These help to highlight, both in children and in adults, the failure in logic (the storage of rules in memory: the syllogism, ‘if–then’, etc.) with regard to the failure in System 1/System 2 conflict detection followed by the inhibition of System 1. 
The Cartesian doubt, finely measured here, therefore comes back to the centre of experimental preoccupations: the Cogito is unscathed, and the existence of reasoning bias does not necessarily mean ‘human irrationality’, which would be ontological (‘within the being’). Therefore, cognitive misers are not happy fools (De Neys et al., 2013)! In the context of inhibition as a positive factor for reasoning, the solution for the correction of bias, or debiasing – a critical educational issue raised by Kahneman – is to learn to use indirect circuits, alternative pathways in the brain: this is ‘cognitive vicariance’ (Berthoz, 2016). This tricky process, costly in terms of cognitive effort, can be the focus of targeted educational interventions in both adults and children (Diamond et al., 2007; Diamond & Lee, 2011; Houdé, 2007).

2. Vicariance: inhibiting System 1 to activate System 2

In order to restore equilibrium (picking up Damasio’s concept of homeostasis here again) between the very dominant System 1 (Kahneman) and the weaker System 2 (Piaget), a strong contrary tendency is required: psychological resistance. This serves, firstly, to stop, on a case-by-case basis,2 System 1, which is automatically pre-activated (inhibition of biases, heuristics, etc.), and, secondly, to set in motion the more reflective System 2 through intentional activation (logical rules).


3-system theory of thinking and reasoning

It then follows, with regard to the issue of education and the correction of bias, that if one wants an education of reasoning that is both effective and appropriate, in children as in adults, one needs to exercise or use something other than logic alone (or the instruction-repetition of the rules of System 2, as Evans suggested between 1980 and 1990). Education must include System 3 in its ‘executive’ role. The executive functions of the brain, associated with metacognition (see Chapter 6), have been thoroughly investigated over the past ten years by experimental psychology and cognitive neuroscience (Diamond, 2013). This is the control of the execution of reasoning behaviour: the choice of strategies, switching them on and off. This executive control, undertaken by the prefrontal cortex (Fuster, 1997, 2003; Smith et al., 1999), is a subtle operator of cognitive detour, of vicariance, which, properly executed, enables flexibility (or shifting) from one mode of reasoning to another: from System 1 to System 2.

2.1 Prefrontal cortex and flexibility of the brain

In 2000, my colleagues and I published the first experimental demonstration of this shifting from System 1 to System 2 in the brains of young adults, using brain imaging techniques that were new at the time (Houdé et al., 2000). The demonstration concerned the correction and elimination of Evans’s matching bias (the example of the red square and yellow circle: see p. 86). This much-cited article in the field of reasoning was entitled ‘Shifting from the Perceptual Brain to the Logical Brain: The Neural Impact of Cognitive Inhibition Training’. It showed that it had become possible to conceive theoretically, with respect to the actual learning rules of the human brain, of an executive System 3 likely to control, on a case-by-case basis, the competition between System 1 and System 2. Before describing this brain imaging experiment in detail, along with the role of the prefrontal cortex, it is important to view it, and reasoning itself, in terms of the general biological problem of competition between neural networks, something we already began to consider earlier with Damasio’s theory. It should be viewed as a true variation and selection phenomenon in the Darwinian sense (see Chapter 4). This is the contribution of Changeux’s current Neural Darwinism neurobiological model, briefly outlined at the start of Chapter 5, which complements Piaget’s theory. It will be seen that it applies specifically to the issue of ‘understanding’ and reasoning. This brings to mind Kant in the Age of Enlightenment: ‘Have the courage to use your own understanding.’

2.2 Neural Darwinism, ‘understanding’ and reasoning

How then, as Piaget advocated (and Damasio does today via the body and emotion), can the link be made between biology and psychology, that is, between genes, neuronal epigenesis3 and cognition or intelligence (reasoning)? This is the

challenge raised by the current Neural Darwinism model, championed in France by Changeux and in the United States by Gerald Edelman. In this way, Darwin’s evolutionist thinking is introduced into neuroscience and cognitive psychology (Changeux, 2002) with respect to fundamental issues such as the construction of logico-mathematical objects, aesthetic pleasure, moral rules, the search for truth, and consciousness. Changeux’s model starts from the observation that there are multiple organizational levels in the nervous system: the molecular and cellular levels; reflex arcs and local circuits; neural networks – referred to as ‘understanding’ – and, finally, the links between neural networks, or reasoning (the controlled exercise of understanding), where the flow of mental objects is orchestrated. In this neuronal architecture, ranging from molecular and cellular aspects to mental objects, every function, including cognitive ones, is assigned to a given organizational level without being in any way autonomous: it obeys lower-level laws while also depending markedly on higher-level ones. To take account of this dual dependency, Changeux, with Dehaene, proposed a Darwinian schema of generalized variation-selection (Dehaene & Changeux, 2011).4 There are two components to this schema: a generator of diversity (variation) and a system of selection through testing. At the most elaborate levels, understanding and reasoning, the dynamics of this schema are as follows. Firstly, the generator of diversity produces in the brain the spontaneous and transitory activation (dominated in this case by System 1, if one is referring to Kahneman) of neural networks or ‘pre-representations’. Secondly, the system of selection proceeds with a test activity that anticipates interaction with the environment.
Two scenarios are then possible: either there is ‘resonance’ (whether intuitive or reflective) between the internal state of the neuronal-mental system and the external state, or there is no resonance, depending on the adaptive value of the generated neuronal assemblies. In the first case (resonance), there is stabilization and storage in memory; in the second case (non-resonance), nothing is put into memory. This all leads one to believe that this is how heuristic or logical reasoning strategies are selected in our brain, on a case-by-case basis, in ‘working memory’.5 Systems 1 and 2 (Kahneman and Piaget) are then constructed through longer-term brain stabilization. These are themselves stabilized to varying degrees, and are either dominant or controlled depending on age and situation; they will contribute to new cognitive selections. We know that the variation-selection schema is conventional in the case of the evolution of species, and it has also been shown in the development of the immunological response, in the transition from the cellular level to multicellular organisms, as well as in the general morphogenesis of the brain. However, Changeux goes further: he generalizes the Darwinian schema to the interaction between the nervous system and the external world during post-natal development, from baby to adult, in the acquisition of higher cognitive functions such as reasoning. Evolution, however, occurs in this case inside the brain, without necessarily changing the genetic material – contrary to Piaget’s belief – and within the


limitations of short time scales: months, days, hours, minutes, down to tenths of seconds, for the reorganization of cognitive strategies. In this evolution, self-evaluation plays an important role; it is informed by complex reward systems. By postulating the existence of a common variation-selection mechanism, this model has the originality of combining two time scales: phylogeny, or the evolution of species (Darwin in the nineteenth century), and ontogenesis (Piaget and Changeux in the twentieth century), that is, neurocognitive development from baby to adult. Changeux has emphasized the involvement, in the functioning of this mechanism, of the prefrontal cortex, the source of thought and abstraction, and the limbic system, the source of emotions (Damasio). In his 2002 book The Physiology of Truth (nearly twenty years after his famous Neuronal Man, 1985), Changeux gives as an example the experiment my colleagues and I published in 2000, involving the observation, using brain imaging, of the variation-selection of reasoning strategies (matching bias versus logic) in the adult brain. Changeux notes that we thereby observe the sudden in vivo change which occurs at the neuronal level when, during the same reasoning task, there is a progression, within a matter of minutes, by variation-selection: from an easy perceptive mode, which is often wrong (Kahneman’s System 1), to a difficult and more critical logical mode (System 2). During this move to logical thought operations, that is, in the inhibition of the perceptive mode in order to access the ‘logical truth’ of the task, a very clear switch is seen from a distribution of neural networks located at the back of the brain (posterior cortex) to a distribution located at the front, in the prefrontal cortex. The new neuronal distribution, or assembly, includes in particular a paralimbic region of the right hemisphere dedicated to emotions and the fear of error. We will come back to this.
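The variation-selection schema described above can be caricatured in a few lines of code. This is purely an illustrative toy of our own – not Dehaene and Changeux’s actual model – in which a generator proposes random candidate ‘pre-representations’, a test checks ‘resonance’ against an environmental target, and only resonant candidates are stabilized in memory; all names and numerical values here are assumptions.

```python
import random

random.seed(0)  # reproducible toy run

def generator_of_diversity(n=10):
    """Variation: spontaneous, transitory candidate 'pre-representations'."""
    return [random.uniform(0, 1) for _ in range(n)]

def resonates(candidate, environment, tolerance=0.1):
    """Selection test: does the internal state match the external state?"""
    return abs(candidate - environment) < tolerance

environment = 0.5   # the 'external state' that the test anticipates
memory = []         # only resonant candidates are stabilized here

for candidate in generator_of_diversity():
    if resonates(candidate, environment):
        memory.append(candidate)  # resonance -> stabilization and storage
    # non-resonance -> nothing is put into memory
```

In this caricature, ‘development’ would consist of iterating the loop while the stabilized contents of `memory` bias the next round of generation – the sense in which earlier selections contribute to new cognitive selections.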
This brain imaging result demonstrates well the last level of Changeux’s neuronal architecture, that of the linking of neural networks: our ability to reason. According to Changeux (2002):

Any serious consciousness theory should explain the orchestration of this consistent flow of mental objects, which gives us access to rational validation of a proposal (logic), to this formal truth, which, for Kant, is found in agreement of knowledge with itself. (p. 166)

This is Piaget’s System 2, released from System 1 (bias or heuristic) by the inhibitory control of System 3 (prefrontal cortex). Let us now look at the details of the experiment to see how such an observation was made possible.

2.3 Brain imaging of the matching-bias inhibition

As a reminder, if people are asked (as Evans did) to make the rule ‘If there is no red square on the left, then there is a yellow circle on the right’ false, they very

often answer: ‘A red square on the left and a yellow circle on the right’ (Evans, 1998). This is a powerful perceptual matching bias (a heuristic), because the logical response, which is spontaneously very rare, is, for example, a green square on the left and a blue diamond on the right (see p. 86). In this case, the exact algorithm has a true antecedent, no red square, and a false consequent, no yellow circle: TF in the logical truth table applied to the ‘if–then’ rule. How can this matching bias be corrected? The ideal method is experimental metacognitive training – a sort of ‘laboratory teaching’: teaching the brain to correct its errors. Our working hypothesis was that the difficulty lay in two reasoning strategies being in competition and clashing in the brain: one perceptive (System 1) and the other logical (System 2). Confronted with this cognitive competition, everything seems to indicate that adolescents and adults fail to inhibit the dominant perceptive strategy, without it being a problem of logic as such (this is the ‘presumption of rationality’). To demonstrate this, we first tested, through psychological experiments, the efficiency of various training conditions (Houdé & Moutier, 1996, 1999). The first was the inhibition of the matching strategy using verbal executive alerts about the risk of error and the type of perceptive trap to avoid. The second was the logical explanation – strictly verbal instructions with respect to the logical principle of the truth table: TF and ‘if–then’. The third was the simple repetition of the task, this last type of training being a test-retest control corresponding to the effects of practice. Only the executive training of inhibition proved to be effective: the success rate, initially lower than 10% in the Evans task described previously, improved to more than 90%.
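The truth-table point can be made concrete in a few lines. This sketch is our own, not the experiment’s material, and the shape vocabulary is an illustrative assumption. Under material implication, the rule ‘if no red square on the left, then a yellow circle on the right’ is falsified only by the TF case – a non-red-square left paired with a non-yellow-circle right – while the matching-bias answer leaves the rule vacuously true.

```python
# Illustrative sketch (ours, not the experiment's code) of Evans's rule:
# "If there is NO red square on the left, then there is a yellow circle on the right."
LEFT = ["red square", "green square", "blue diamond"]
RIGHT = ["yellow circle", "blue diamond", "green square"]

def rule_holds(left, right):
    antecedent = left != "red square"      # "no red square on the left"
    consequent = right == "yellow circle"  # "a yellow circle on the right"
    # Material implication is false only in the TF case
    # (true antecedent, false consequent).
    return (not antecedent) or consequent

# The logically correct ways to make the rule false, e.g. green square + blue diamond:
falsifying = [(l, r) for l in LEFT for r in RIGHT if not rule_holds(l, r)]

# The matching-bias answer (red square + yellow circle) does NOT falsify the rule:
# with a red square on the left the antecedent is false, so the rule is vacuously true.
print(("green square", "blue diamond") in falsifying)  # True
print(rule_holds("red square", "yellow circle"))       # True: the biased answer fails
```

Every falsifying pair avoids both the red square and the yellow circle, which is exactly why the perceptually ‘matching’ shapes are a trap.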
This indicates that it is this executive blocking mechanism (the intervention of System 3) that the individuals questioned were lacking, and not practice or logic in itself (System 2; although in some cases a logical explanation can also be appropriate). In these last two conditions, the rate of error stayed comparable to the initial rate. In order to understand this, the reader needs to know that, as is always done in this type of ‘metacognitive’ experiment, the training does not itself involve the Evans task but another task, with other material – letters and numbers instead of coloured shapes – yet relating to the same logic and triggering the same type of perceptive bias.6 It will be seen later why a logical explanation alone is insufficient. We then used exactly the same experiment in brain imaging (Houdé et al., 2000), that is, making a three-dimensional computer reconstruction of digital images conveying the activity of neurons at all points of the brain. This was to observe what was happening in individuals before and after the learning of the inhibition of the perceptive strategy (System 3 training), that is, before and after the correction of the reasoning error: the shifting, or vicariance, from System 1 to System 2. The participants were therefore placed twice under the brain-imaging scanner, with the training performed outside it. The results showed a very clear reconfiguration – or plasticity – of cerebral networks, from the back part of the brain to its front part, the so-called ‘prefrontal’.7 It is therefore not enough to have reached, in adolescence, Piaget’s formal logical operations stage (the last step of the staircase for System 2) in order to be ‘prefrontal’ and logical. If logic is then, as Piaget thought, a form of biological


adaptation, the following can be noted (see Figure 1). Firstly, at any time while the brain is active, including in adults, several reasoning strategies can clash and enter into competition; perceptive responses (biases or heuristics) will then take precedence over logical responses (Kahneman’s observation of the dominance of System 1). Secondly, it is cognitive inhibition, triggered in this case by experimental training (System 3), which is the key to accessing logic (System 2). This experiment therefore illustrates dynamically (by competition/selection) how a process of abstraction can occur in the brain, from perception to logic.

2.4 Cold or hot training?

In another brain imaging experiment (Houdé et al., 2001), we compared the differentiated neuronal impact of inhibition learning – which involves warnings and ‘hot’ alarms against the hazard of the perceptual trap, the possible error – and strictly logical learning, described as ‘cold’ (this distinction between hot and cold being prevalent in the literature in the field). The results showed: (i) on a behavioural level, what was already known – that only the inhibition training is effective (because it was not logic as such that the individuals questioned were lacking); and (ii) that the greatest cerebral activation differentiating this type of training from logical training is that of the right ventromedial prefrontal cortex (vmPFC). This is very precisely the region, close to the limbic system, described by Damasio in his clinical neuropsychology research (see p. 95), which is located at the front (prefrontal), low down (ventro) and in the middle or internal (medial) part of the right brain. We have seen that this region is involved, according to Damasio, in a close relationship between emotion, self-consciousness and reasoning: in this case, the emotion arising from a feeling of being deceived, specifically triggered by the training to inhibit the perceptual bias. This is the region that was injured in Damasio’s patients and in Phineas Gage, the famous clinical case described by John Harlow in 1848 and recounted by Damasio in his research with Hanna Damasio (1994). Remember that Gage, a young site manager, was found to be completely socially and intellectually maladjusted because of a severing of the relationship between emotion and reasoning, after a 1.10-metre-long crowbar went through his skull and brain, destroying his right vmPFC.
In our data, at the level of individual analysis (that is, the difference in regional cerebral blood flow, individual by individual), it is this same region of the right hemisphere which is activated in those who access logic (System 2) after learning to inhibit the perceptual bias (therefore after correction of the System 1 error). Conversely, it is not activated in those who do not access logic and persist in the error after ‘cold’ logical learning. Given that Damasio’s theory is a theory of reflective consciousness, defined as emotion related to feeling the self in a cognitive activity, it is interesting to note that, before learning to inhibit the reasoning bias, the participants in our experiment were not aware that they were committing an error of logic (they all thought they were answering correctly), whereas afterwards they were. We did not, however, measure


[Figure 1 – image not reproduced. Panel B shows overlapping developmental curves of cognitive strategies 1 to 5 (y-axis: percent use; x-axis: age). The lower panel, ‘Three-systems theory (O. Houdé)’, shows three boxes: the heuristic system (automatic and intuitive thinking; speed), the inhibitory system (interrupting the heuristic system for activating the algorithmic system; metacognitive control) and the algorithmic system (logico-mathematical and analytical thinking; reliability).]

Figure 1 3-System theory of the cognitive brain. (A) Piaget’s theory, or the ‘staircase model’ of incremental progress stage by stage, from intuitions to logic. (B) Non-linear development of cognitive strategies that come either from the fast, heuristic (intuitive) system or from the slow, algorithmic (logical) system at any age. When these two systems compete (System 1 versus System 2), our brain needs a third system, located in the prefrontal cortex, to inhibit the too-fast heuristic system and activate the logical one. Used with permission. Originally published in L’école du cerveau: De Montessori, Freinet et Piaget aux sciences cognitives (Houdé, 2018).


in the way De Neys does today (De Neys et al., 2008), with an experimental confidence (reliability) scale, whether they already had ‘a slight doubt’ at the outset. This phenomenon of becoming aware of errors, whose cerebral trace has been uncovered here, is a metacognitive experience. These brain imaging results therefore suggest that emotion can help reasoning, unlike in the view introduced by Descartes, and still implicit in Piaget and even


Kahneman, of a necessary opposition between reason and emotion (‘Descartes’s error’, denounced by Damasio in 1995). From an evolutionary perspective, the role conventionally allocated to emotion in survival comes to mind here: the fear experienced when in danger that leads animals – and humans – to flee from it and therefore avoid it. It can therefore be argued, from a Darwinian perspective, that evolution (phylogeny) must have shaped a brain that feels the emotions necessary to inhibit inappropriate behaviour (via a variation-selection and testing system like the one proposed by Changeux), including when it is engaged in logical reasoning. It is perhaps this which is the optimal form of biological adaptation and not, as Piaget thought, logical intelligence per se (System 2). The human brain is not a cold, logical calculator like a conventional computer: learning to inhibit bias is ‘hot’ (emotional), even at the final stage of Piaget’s formal operations. The above neuroscientific demonstration of reasoning, with respect to the experimental example of perceptual matching bias in ‘if–then’ rules (Houdé et al., 2000), was later replicated by Prado and Noveck (2007) and generalized by Vinod Goel’s team in studies of the semantic belief bias in syllogisms (Goel, 2007).

2.5 Inhibiting beliefs, stereotypes and absurd decisions

As we have just suggested, System 3 and its function of inhibiting System 1 are clearly required to correct biases other than the perceptual matching bias, in particular belief bias and stereotypes (the ‘representativeness’ bias). This brings to mind the illustration of the syllogism where children systematically tend to accept a non-valid yet credible conclusion: (a) elephants are hay eaters; (b) hay eaters are not heavy. When asked if that means that (c) elephants are heavy, they answer yes, whereas nothing allows them to deduce this conclusion logically from the premises. In this example, new experimental data demonstrated that the executive capacities of System 3 could inhibit the bias (Moutier, 2000). Several studies, including those of De Neys, confirmed this point in both children and adults (De Neys & Van Gelder, 2008). In the same vein, we demonstrated experimentally that the all-powerful stereotype of the Linda problem invented by Kahneman and Tversky in the 1980s (the ‘representativeness’ bias, or conjunction error: ‘teller-feminist’) is not uncontrollable and can be inhibited after specific System 3 training, whereas simply re-explaining the rules of probability (System 2) is not enough (Moutier & Houdé, 2003). Many experimental educational elements remain to be developed, reinforced, re-tested and, above all, adapted on a case-by-case basis. However, they already give a clear indication of the type of response that can be made to Kahneman’s question: ‘How can we improve judgments and decisions, both our own and those of institutions that we serve and that serve us?’ (Kahneman, 2011, p. 417). In his everyday sociology book, Absurd Decisions, which has attracted a lot of attention in France, Christian Morel, a former manager in French companies, suggests applying our ideas on the positive and adaptive role of inhibition to the world of business (and institutions in general). He writes:

Among pilots, crews, engineers and managers who possess scientific skill and practise it, almost infantile reasoning processes sometimes seem to appear or reappear suddenly, as if they were lying in ambush in the mind, ready to leap out as soon as the inhibition which usually keeps them in check is suspended. (Morel, 2002–2010, p. 144)

Although looked at in a different way, this is a very clear summary of what we wanted to say about the almost heroic role that System 3 must play, not only in psychology and neuroscience laboratories (where the eyes of researchers are riveted on brain imaging screens and the micro-experimental correction of bias), but also out in society.
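Returning to the elephant syllogism above: one standard way to show that the conclusion does not follow is to build a counter-model, a possible world in which both premises hold and the credible conclusion fails. The tiny universe below is our illustrative assumption, not material from the cited studies.

```python
# Counter-model sketch (our illustration) for the syllogism:
# (a) elephants are hay eaters; (b) hay eaters are not heavy; (c)? elephants are heavy.
elephants = {"Dumbo", "Babar"}
hay_eaters = {"Dumbo", "Babar", "a horse"}
heavy = {"a truck"}  # a world in which nothing that eats hay is heavy

premise_a = elephants <= hay_eaters        # elephants are hay eaters
premise_b = hay_eaters.isdisjoint(heavy)   # hay eaters are not heavy
conclusion_c = elephants <= heavy          # "elephants are heavy"

# Both premises are true, yet the believable conclusion is false in this model,
# so (c) is not logically entailed by (a) and (b).
print(premise_a and premise_b and not conclusion_c)  # True
```

Accepting (c) here means letting the real-world belief that elephants are heavy (System 1) override the deductive check (System 2) – exactly the belief bias that System 3 must inhibit.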

2.6 Anchors: positive or negative priming

According to Kahneman (2011), ‘anchors’ are an ‘absurdity’ of human judgement and reasoning; absurd decisions do indeed have resilience and anchors. The experimental example of anchors, which is important in Kahneman’s theory, will enable us to illustrate from another angle our points regarding the mechanisms of plasticity and vicariance between Systems 1 and 2, thanks to the inhibitory control of System 3. One example was an experiment in which Tversky and Kahneman (1974) created a fake wheel of fortune. It was numbered from 0 to 100, but the only numbers it would stop on were 10 and 65. They then had the wheel turn and asked participants to note which number it stopped on (this could only be 10 or 65, although they did not know this). Then Kahneman and Tversky asked them two questions: (a) ‘Is the percentage of African nations among the United Nations (UN) members larger or smaller than the number you just wrote?’ and (b) ‘What is your best guess of the percentage of African nations in the UN?’ There is clearly no relationship between this wheel of fortune, whether fake or not, and the geopolitical questions asked but, nevertheless, the participants did not ignore this anchor. Those who had seen the number 10 answered 25% on average, as opposed to 45% for those who had seen the number 65. This ‘anchoring effect’ occurs when people are exposed to a particular value (the anchor, in this case) before estimating an unknown quantity. It is an extremely robust result in experimental psychology. One can imagine its multiple impacts in life and society: the announcement of a price for the purchase of a house or apartment, or any other form of negotiation (which, by the initial anchor, can be psychologically and skilfully drawn lower or higher). It is also a formidable weapon for ‘effective marketing’.
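The size of this effect can be summarized with Kahneman’s ‘anchoring index’: the difference between the mean estimates divided by the difference between the anchors, where 100% would mean estimates move one-for-one with the anchor and 0% would mean the anchor is ignored. Applying it to the figures just quoted (the computation below is our own sketch):

```python
# Anchoring index for the UN experiment (figures as reported in the text).
low_anchor, high_anchor = 10, 65   # the only numbers the rigged wheel stops on
mean_low, mean_high = 25, 45       # mean estimates (%) after each anchor

anchoring_index = (mean_high - mean_low) / (high_anchor - low_anchor)
print(f"anchoring index = {anchoring_index:.0%}")  # -> about 36%
```

An index of roughly a third, from a wheel the participants knew was irrelevant, is what makes the effect so striking.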
In order to understand this bias better (a further example of Pascal’s misleading powers), Kahneman’s cognitive interpretation of it appeals to what is known in experimental psychology as the ‘priming effect’, or facilitation: ‘positive’ priming in associative memory. It is positive in the sense of being a facilitatory reinforcement, even if it can be a bias with negative consequences for the factual or logical


accuracy of an estimation. Prior information (the prime), whether consciously or unconsciously perceived – or an action, attitude or anchoring strategy – reinforces (facilitates) the response or judgement that follows. Such experimental sequences are called ‘prime-probe’ sequences. To support his interpretation, Kahneman referred to the work of Thomas Mussweiler on the role of associative coherence in anchoring (Mussweiler, 2000). In this example, the question concerns temperature. Participants are asked either (a) ‘Is the annual mean temperature in Germany higher or lower than 20 °C?’ or (b) ‘Is the annual mean temperature in Germany higher or lower than 5 °C?’ Immediately afterwards, in another task, Mussweiler asks them simply to identify words (as opposed to non-words). The results show that the prime or anchor ‘20 °C’ facilitates (positive priming), to a statistically significant extent, the recognition of words related to summer (sun, beach, etc.), whereas the prime ‘5 °C’ encourages the recognition of words related to winter (frost, skiing, etc.). So high or low numbers activate, by association in memory, different series of ideas. These effects of cognitive suggestion are, for Kahneman, a further example – one of the most insidious – of the automatic operations of System 1. Anchoring is therefore often a potential generator of absurdity in the field of reasoning and decision-making. But does one have to be resigned to it? Does System 1 have to remain the hero? Whether in property valuation, negotiation or marketing of all kinds, is anchoring an incorrigible weakness of the mind? In 2011, Kahneman concluded with these words (which can be described as ‘metacognitive’ in the sense we have defined): ‘you should assume that any number that is on the table has had an anchoring effect on you, and if the stakes are high you should mobilize yourself (your System 2) to combat the effect’ (p. 128).
Nevertheless, Kahneman himself recognizes on the previous page of his book that ‘System 2 has no control over the anchoring effect’ (p. 127). How, then, can we mobilize the control of a system that has no control resources? That would not be reasonable, particularly if the stakes are high. This is the weak point of the dual System 1 and 2 theory, and we suggest that the inhibitory control of System 3 should be introduced precisely where Kahneman’s theory begins to fall apart. In this case, it is this executive system which is the cognitive hero, in the sense that it can provide the effort, the ‘executive cost’, needed to combat anchoring effects. It is the resistant one! It has the resources in the neurons of the prefrontal cortex, the site of executive functions – specific resources, inscribed very early in the structure of our brain, that increase with age and are receptive to learning (Cachia et al., 2014; Casey et al., 2005; Diamond & Lee, 2011; Houdé, 2007). Recourse to System 3 is not, in our analysis, a position of principle, a piece of theatre or a mere theoretical possibility, but a new experimental approach in the domain of logico-mathematical reasoning and decision-making that needs to be scientifically explored, tested, illustrated and generalized. From this perspective, the essential question of anchoring, or positive priming, covered by Kahneman is a technical example, revealing a ‘philosophy of intuitive thought’, which will enable us to explain our thesis of System 3 by demonstrating the opposite

effect to that described by Kahneman: the ‘negative priming’ of reasoning, an experimental measure of the efficiency of System 3’s inhibitory control. With Grégoire Borst, we have published a chapter which illustrates this approach: ‘Negative Priming in Logico-Mathematical Reasoning: The Cost of Blocking Your Intuition’ (Borst et al., 2013b).

2.7 The cost of blocking your intuition

We have already described, in the previous examples from Kahneman, Tversky and Mussweiler, what positive priming is: a facilitation effect that reinforces the intuition of System 1 and the laziness of System 2. Negative priming, created by System 3, is the opposite dynamic. It still concerns experimental sequences with anchors (primes) and consecutive responses (probes), but they are a little more complex and the aim is directly to ‘measure the cognitive effort’. It is a programme of research ‘against cognitive laziness’ – laziness which is certainly sometimes pleasant, just as intuitions are often very relevant (in experts, for example), but which is also dangerous both for oneself and for others. The very first study that we designed and published in this logic, in 2001 (the invention of the experimental paradigm applied to cognitive strategies; Houdé & Guichart, 2001), was related to the famous number conservation task used by Piaget – and later by many psychologists throughout the world – to test the stages of logico-mathematical development in children (see Chapter 5). We hypothesized that this task, which children succeed at about the age of seven (the ‘age of reason’), draws on System 3 (the executive capacity to inhibit the heuristic ‘length equals number’) more than on System 2 (the logic of number in itself, as Piaget thought). This hypothesis was supported by the fact that, since Piaget, many studies conducted by international teams of researchers had clearly demonstrated the existence of a logic of number in the young child (‘the gift for maths’, Dehaene, 2000) well before age seven. This finding is reinforced today by the discoveries about the reasoning capacities of babies, described in the next chapter. System 2 would therefore seem to be under-used up to the age of seven.
But this apparent cognitive laziness could be related, in our view, to a real and quite different cerebral limitation – the effort of inhibiting the heuristic ‘length equals number’ – which had never been directly measured in the conventional Piagetian task using the modern techniques of experimental psychology (computers and response times in milliseconds). This limitation must depend on the development of System 3, in this case the prefrontal cortex. It is known today, through anatomical brain imaging, that the maturation of this part of the brain is slow and delayed, from infancy to adolescence (Gogtay et al., 2004). In 2011, we even demonstrated for the first time, in functional brain imaging (the child solving Piaget’s task while undergoing MRI), that a specific part of the prefrontal cortex must be mobilized – the increase in its haemodynamic signal was measured in real time – in order to inhibit the intuitive heuristic ‘length equals number’ (Houdé et al., 2011).
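In chronometric terms, a negative priming score is a simple paired difference: the response time on a probe item that follows an inhibition trial, minus the response time on the same kind of item in a control sequence. The sketch below uses made-up illustrative response times (in ms), not data from the cited studies:

```python
from statistics import mean

# Made-up per-child mean response times (ms); the real paradigm averages many trials.
rt_after_inhibition = [1240, 1310, 1180, 1295, 1260]  # probe preceded by a Piaget-type prime
rt_control          = [1090, 1150, 1045, 1130, 1120]  # same probe, no prior inhibition trial

# Positive scores mean the child is slowed after having had to inhibit the heuristic.
np_scores = [a - c for a, c in zip(rt_after_inhibition, rt_control)]
print(f"negative priming = {mean(np_scores):.0f} ms")  # -> 150 ms with these toy values
```

A reliably positive difference of this kind is the behavioural signature of the inhibition effort: the heuristic was not merely unused but actively blocked, and unblocking it carries a measurable cost.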


3-system theory of thinking and reasoning

Here is a description of the negative priming paradigm we developed in 2001. As a reminder, in the Piagetian number conservation task (see p. 78), the response 'there are more counters where it is longer' is an error of reasoning founded on the perceptive intuition 'length equals number', which reveals, according to Piaget, that a child has not yet acquired the concept of number (and the corresponding concrete operations stage). In our computerized version of this task, designed to test negative priming, sequences with anchors and consecutive test responses were programmed on the computer in order to record response times in milliseconds. This is called 'mental chronometry', a technique fine-grained enough to capture the very rapid reorganization of strategies posited in Changeux's Darwinian neuronal-mental model. The principle was first to have a child solve a Piaget-type task (where, by hypothesis, he or she had to inhibit the heuristic strategy 'length equals number') and then to present to him or her, just afterwards, a situation where length and number co-vary (two lines of counters where the longer one also contains the most counters). In this second task, the child should reactivate the heuristic strategy that he or she inhibited in the first. The results show that, in this second case, a primary school child takes a little longer to respond (approximately 150 ms more) than in a control situation where he or she did not have to solve the Piaget-type task first. This slight difference in time, which is statistically significant, is what is called negative priming: an experimental demonstration of the fact that the child had to inhibit the strategy 'length equals number' (System 1) to be successful in the Piaget-type task. Hence, additional time is needed to unblock this strategy when the removal of inhibition becomes relevant. It is the kind of technical trick that experimental psychology makes possible. 
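The chronometric logic of the paradigm can be made concrete with a minimal simulation sketch. All numbers below are illustrative placeholders, not data from the actual study; only the approximate 150 ms effect size is taken from the text, and the function names are ours:

```python
import random
import statistics

random.seed(0)

def simulate_rt(n, mean_ms, sd_ms):
    """Draw n simulated response times (ms) from a normal distribution."""
    return [random.gauss(mean_ms, sd_ms) for _ in range(n)]

# Hypothetical parameters: in the 'related' condition the probe follows a
# Piaget-type prime, so unblocking the inhibited 'length equals number'
# strategy adds roughly 150 ms (figure taken from the text).
control = simulate_rt(30, mean_ms=1000, sd_ms=80)  # no prior inhibition
primed = simulate_rt(30, mean_ms=1150, sd_ms=80)   # after inhibition

negative_priming = statistics.mean(primed) - statistics.mean(control)
print(f"negative priming effect of about {negative_priming:.0f} ms")
```

In the real experiment, of course, the two conditions are measured in children rather than drawn from assumed distributions, and the difference is tested for statistical significance.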
It shows the executive cost (the effort) required for System 3 to block the perceptive intuition of System 1 and leave the child with the potential to express his or her actual logical capacities (System 2: in this case counting, or enumeration) beyond the conflict created by the visuo-spatial interference between number and length. One may well ask where this type of misleading System 1 heuristic comes from in children. Just like the rules of System 2, whose construction Piaget studied, the regularities of System 1 are constructed. They are culturally reinforced at certain points of development and become dominant in the brain. Let us come back for a moment to the definitions of the two kinds of strategies: heuristics (System 1) and logical algorithms (System 2). A heuristic is a very rapid, very effective strategy, and therefore economical for children and adults, which works very well, very often, but not always (unlike the exact algorithm, counting or enumeration in our example, which is slower but always leads to the correct solution). Where does the 'length equals number' heuristic come from? On the shelves of supermarkets, for example, it is generally true that length and number vary together (co-vary): faced with two lines of products of the same type, the longer one also contains more products. A child's brain detects this type of visual and spatial regularity very early. Both at school and at home, when one learns addition and subtraction with objects on a table, by adding one or more

Inhibiting in order to reason


objects (1 + 1 + 1 + 1 + . . .), the line becomes longer; by subtracting, the opposite happens. Thus, in elementary arithmetic as in the supermarket, length and number co-vary. This is also true in pre-school maths books, where one generally finds the sequence of numbers from 1 to 10 illustrated by lines of objects of increasing length (lines of animals or fruits, for example). Therefore, almost everywhere except in Piaget's task, length and number vary together. This gives rise to the perceptive, visuo-spatial intuition according to which 'length equals number'. The strength of this System 1 intuition, which is often useful and always ready to leap in (even in adults), consequently requires, when necessary, as in Piaget's task, a more powerful mechanism of cognitive resistance: inhibition by System 3 of the heuristic 'length equals number'. We can now better understand why, depending on the more or less conflictual System 1/System 2 situations encountered by a child during his or her development, there are failures and unexpected deviations (including steps backwards, or apparent 'regressions'). We referred to these in Chapter 5 as a quirky, accidental, non-linear development. This is a weakness of our brain, certainly, but also a real strength, because this source of error and variability of performance can, once it is diagnosed,8 be exploited in metacognitive experiences (Houdé, 2007) for targeted and differentiated teaching of so-called 'executive' or cognitive control, through which one can 'learn to resist' (System 3). In the chapter 'The Cost of Blocking Your Intuition', written with Borst, in addition to the Piagetian example of number conservation, we also showed how the principle of negative priming can be applied to measure the action of System 3 in multiple domains. 
This was done with prime-probe pairs of reasoning items described as experimentally 'related' (able to activate what had been inhibited) or 'non-related' (control pairs without this executive link, i.e. without any cost related to the removal of inhibition). This has now been demonstrated for logical categorization (the inclusion of categories A, A' and B; Piaget's example of ten daisies, two roses and twelve flowers – see Chapter 5), where the difficulty is inhibiting the direct perceptive comparison (A > A') (Borst et al., 2013a), and even in the domain of syllogisms as studied by Evans, Kahneman and Tversky, when it involves inhibiting beliefs (credibility) in order to activate logic (validity) (Moutier et al., 2006). These experimental demonstrations were carried out both with children and with adults, for whom the effort is still necessary even if the executive cost is lower thanks to the progressive maturation of the prefrontal cortex. These multi-domain effects show that the cognitive process involved is very general, as illustrated by Borst et al. (2012) with inter-domain transfers, and are in this respect a strong argument for a genuine cognitive system: System 3.

2.8 In the laboratory as in school
The negative priming test is not only applicable to laboratory situations. It also applies to a number of chronic, very classical difficulties encountered by children in class (and then retested in the laboratory so as to
understand fully the cognitive processes involved). It is known that schoolchildren often stumble over verbal formulations such as the following: Louise has twenty-five balls. She has five balls more than Leo. How many balls does Leo have? Frequently, a child is unable to inhibit the implicit heuristic 'there is the word "more", so therefore I add' (25 + 5 = 30) and to activate instead the simple and exact algorithm of subtraction (25 – 5 = 20). Here too, the procedure of negative priming has enabled measurement of the real executive cost incurred by System 3 (inhibiting 25 + 5 = 30) when children learn to overcome their logical difficulty (correct answer = 20) (Lubin et al., 2013). There is no point in repeating the rules of addition and subtraction (System 2) to such a child more than necessary; rather, it is System 3 that needs to be exercised more. There is then real hope in education of overcoming the doubt expressed by Kahneman (2011) after half a century of monumental work ('As I know from experience, System 1 is not readily educable', p. 417). If this were only an issue in childhood, it would already be very important, but it is also an issue for adults, as confirmed by all the examples of bias discovered and reported by Evans, Kahneman and others, which immerse us in the multiple short-circuits of intuitive and rapid System 1 thought. The example of the balls will almost certainly have brought to mind the bat-and-ball problem which most Harvard, MIT and Princeton students trip up on (see p. 98). It is known now, thanks to the work of De Neys et al. (2013), that these students get it wrong but nevertheless feel a doubt. All that remains is for this doubt to be transformed into a basis for teaching the inhibition of reasoning bias.
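The contrast between the misleading heuristic and the exact algorithm in this word problem can be spelled out in a few lines of code. This is a purely illustrative sketch; the function names are ours:

```python
def heuristic_answer(total, difference):
    # System 1 shortcut: the word 'more' in the problem triggers addition,
    # regardless of who actually has more balls.
    return total + difference

def algorithmic_answer(total, difference):
    # System 2: Louise has `difference` more balls than Leo,
    # so Leo has `difference` fewer: subtract.
    return total - difference

print(heuristic_answer(25, 5))    # 30, the typical classroom error
print(algorithmic_answer(25, 5))  # 20, the correct answer
```

The heuristic is not repaired by restating the rules of arithmetic (both functions are trivially correct as arithmetic); what the child must learn is to inhibit the first function and select the second.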

3. Emotion and the anticipation of regret in System 3
Now, to close this chapter, we return to emotion and Damasio's theory. We have seen that there are well-identified neuronal circuits in the brain that can (barring lesions) enable effective emotional guidance of logical reasoning, in particular via the right vmPFC. This capacity for guidance and reorientation in our cognitive landscape (off/on, or inhibition and activation, of System 1/System 2 strategies, i.e. heuristics and algorithms) is fragile, however; it is not acquired straightaway, nor do we keep it forever, as we have just seen in multiple examples. Hence the interest in targeted System 3 education. In 100 Psychology Terms (Houdé, 2008), we put forward the idea, which we later tested experimentally in our laboratory (Habib et al., 2012), of an executive role of regret and, more specifically, of the anticipation of regret, which we ventured to test in situations of reasoning (or of cognitive and social choices; decision-making in the broadest sense). This is a psychological element to add to Damasio's initial schema.9 In fact, by observing patients with lesions in the orbito-frontal cortex (the same paralimbic system as the vmPFC studied by Damasio's team), other researchers (Camille et al., 2004) demonstrated that to make decisions the brain needs not merely to call up previous pleasant or unpleasant experiences
(via memorized somatic markers). It must also be capable of imagining virtual, hypothetical scenarios (called 'counterfactual', i.e. going against the facts) and of anticipating regret with respect to them: in this particular study, the gains that an individual could have obtained, through other choices, in a money game. Regret is the distinctive feeling of this type of process (orbito-frontal patients no longer experience it) and, linked to the sense of responsibility (even the guilt that characterizes emotions in healthy individuals), it is essential to cognitive adaptation, that is, to making correct choices. The adaptive advantage of regret was shaped over a long period in the brains of our ancestors, those of the Pleistocene epoch, well before Aristotle. In a similar and complementary way, much research by Damasio and his collaborators has established a cerebral mapping of the 'moral sense', or the taking of moral decisions, the emergence of which is related to social emotions: for example, to an aversion to making others suffer (Koenigs et al., 2007). One can therefore potentially learn to inhibit either 'for oneself' or 'for others', two educationally related but distinct objectives depending, as one can well imagine, on the case and the challenges, right up to self-denial. Reasoning is more than ever a value related to subtle cognitive and social emotions. But in this case they are the emotions of System 3, to be studied as such. They are of a different type from the emotions of the intuitive and rapid thought of Kahneman's System 1 (such as 'I like it or I hate it', without an ounce of reflection). These are even, if a very simple mapping is needed, inverse emotions: cognitive emotions. They are much closer to the reflective, prefrontal self-evaluation described by Changeux with his Neural Darwinism: emotions related to the system of selection and testing which anticipates interaction with the environment. 
This is the best place to insert the anticipation of regret in order to guide the inhibition of impulsive System 1 responses. These emotions related to self-evaluation (the selection and testing of strategies), which contribute, according to Changeux, 'to the agreement of knowledge with itself' (Kant), must therefore not be confused with System 1 (intuition) or with System 2 (logic), particularly in the event of System 1/System 2 cognitive conflicts. To arbitrate in these conflicts, an emotion that is almost moral in the strongest sense of the term is needed (our idea of resistance). Piaget (1932) wrote, in his earliest work, that morality is the logic of action. However, he should not have reduced this morality to System 2 alone (logico-mathematical development) and to his overly cognitive and formal operations (up to the INRC group, Inhelder & Piaget, 1958), which are complex, but also discrete and ineffective in the case of serious cognitive conflicts, at all ages and regardless of academic level (all the examples reported by Kahneman, 2011). Morality here is rather the logic of inhibition and of the counterfactual emotions that guide it (System 3): to doubt oneself (De Neys et al., 2008), to regret a possible System 1 response by anticipation (that is, against the initial operation of System 1 in working memory), and from then on to inhibit this impulsive response (through flexibility, or vicariance) so as to respond differently (System 2).

That is how one can say with the French writer Jean d’Ormesson that ‘to think is to refuse, to say no, to think against oneself. . . . Thinking is always something else’.10 In the history of science, ‘the philosophy of no’ of Gaston Bachelard (1884–1962) combated and overcame epistemological obstacles. If it is well applied and positively reinforced by school and education, this executive doubt–regret–inhibition sequence can be stabilized in our brain, reinforced in long-term memory (Neural Darwinism) and become very rapid (therefore adaptive, and competing well with the rapidity of Kahneman’s System 1) when faced with new similar cognitive conflicts (Linzarini et al., 2017). It is through teaching children, as well as adults, this type of effective ‘metacognitive automation’ that education in reasoning can hope – beyond simple logical learning itself – to combat and correct bias, both deductive and inductive.

Notes 1 In this case, a false negative is the tendency to conclude wrongfully that children who fail a logical task are necessarily incompetent in relation to the notion being tested: see Gelman (1997). 2 Sometimes it is not necessary and the intuitions of System 1 are both relevant and effective. 3 Neuronal epigenesis (from the Greek epigignesthai, ‘to be born after, occur as a result of’) is the installation of connections in the brain during post-natal development, which are not rigid but are more the result of trial and error. 4 Associated today with global workspace theory. 5 Within the global workspace, Dehaene and Changeux (2011). See also Alan Baddeley (2003). 6 In this case, the classical Wason (1968) selection task. 7 The inferior prefrontal cortex, right or left depending on the nature of the task, is known to be involved in cognitive inhibition in children and in adults: Aron et al. (2004, 2014) and Houdé et al. (2010). 8 That is to say by avoiding false negatives. 9 The idea of anticipation was introduced by Damasio (2003) in Looking for Spinoza where he presents a model of two channels of reasoning strategies, one which anticipates options of actions and future results (channel A), the other less reflective which simply reactivates some previous emotions (channel B which Damasio associates with the observations of Kahneman, i.e. System 1). 10 J. d’Ormesson, Qu’ai-je donc fait? (Paris: Robert Laffont, 2008), p. 315.

Bibliography Aron, A. et al. (2004). Inhibition and the right inferior frontal cortex. Trends in Cognitive Sciences, 8, 170–177. Aron, A. et al. (2014). Inhibition and the right inferior frontal cortex: One decade on. Trends in Cognitive Sciences, 18, 177–185. Baddeley, A. (2003). Working memory. Nature Reviews Neuroscience, 4, 829–839. Berthoz, A. (2016). The Vicarious Brain, Creator of Worlds. Cambridge: Harvard University Press. Borst, G. et al. (2012). Inhibitory control in number-conservation and class-inclusion tasks: A neo-Piagetian intertasks priming study. Cognitive Development, 27, 283–298.
Borst, G. et al. (2013a). Inhibitory control efficiency in a Piaget-like class-inclusion task in school-age children and adults: A developmental negative priming study. Developmental Psychology, 49, 1366–1374. Borst, G. et al. (2013b). Negative Priming in Logico-Mathematical Reasoning: The Cost of Blocking Your Intuition. In W. De Neys and M. Osman (eds.), New Approaches in Reasoning Research (pp. 34–50). New York: Psychology Press. Cachia, A. et al. (2014). The shape of the anterior cingulate cortex contributes to cognitive control efficiency in preschoolers. Journal of Cognitive Neuroscience, 26, 96–106. Camille, N. et al. (2004). The involvement of the orbitofrontal cortex in the experience of regret. Science, 304, 1167–1170. Casey, B.J. et al. (2005). Imaging the developing brain: What have we learned about cognitive development? Trends in Cognitive Sciences, 9, 104–110. Changeux, J.-P. (1985). Neuronal Man: The Biology of Mind. Princeton: Princeton University Press (1983 French edition: L'Homme neuronal. Paris: Fayard). Changeux, J.-P. (2002). The Physiology of Truth. Cambridge: Harvard University Press. Damasio, A. (2003). Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. Orlando: Harcourt. Damasio, H. et al. (1994). The return of Phineas Gage. Science, 264, 1102–1105. Dehaene, S. (2000). The Number Sense. Oxford: Oxford University Press. Dehaene, S. and Changeux, J.-P. (2011). Experimental and theoretical approaches to conscious processing. Neuron, 70, 200–227. De Neys, W. (Ed.) (2018). Dual Process Theory 2.0. Oxford: Routledge. De Neys, W. and Bonnefon, J.-F. (2013). The whys and whens of individual differences in thinking biases. Trends in Cognitive Sciences, 17, 172–178. De Neys, W. and Van Gelder, E. (2008). Logic and belief across the life span: The rise and fall of belief inhibition during syllogistic reasoning. Developmental Science, 12, 123–130. De Neys, W. et al. (2008). 
Smarter than we think: When our brains detect that we are biased. Psychological Science, 19, 483–489. De Neys, W. et al. (2011). Biased but in doubt: Conflict and decision confidence. PLoS ONE, 6(1), e15954. De Neys, W. et al. (2013). Bats, balls, and substitution sensitivity: Cognitive misers are no happy fools. Psychonomic Bulletin & Review, 20, 269–273. Diamond, A. (2013). Executive functions. Annual Review of Psychology, 64, 135–168. Diamond, A. and Lee, K. (2011). Interventions shown to aid executive function development in children 4 to 12 years old. Science, 333, 959–964. Diamond, A. et al. (2007). Preschool program improves cognitive control. Science, 318, 1387–1388. Engel, P. (1991). The Norm of Truth: An Introduction to the Philosophy of Logic. Toronto: University of Toronto Press. Evans, J. (1989). Bias in Human Reasoning. Hillsdale, NJ: Erlbaum. Evans, J. (1998). Matching bias in conditional reasoning. Thinking & Reasoning, 4, 45–82. Evans, J. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7, 454–459. Evans, J. and Stanovich, K. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8, 223–241, 263–271. Fuster, J. (1997). The Prefrontal Cortex. New York: Raven Press. Fuster, J. (2003). Cortex and Mind. Oxford and New York: Oxford University Press. Gelman, R. (1997). Constructing and using conceptual competence. Cognitive Development, 2, 305–313.

Goel, V. (2007). Anatomy of deductive reasoning. Trends in Cognitive Sciences, 11, 435–441. Gogtay, N. et al. (2004). Dynamic mapping of human cortical development during childhood through early adulthood. PNAS, 101, 8174–8179. Habib, M. et al. (2012). Counterfactually mediated emotions: A developmental study of regret and relief in a probabilistic gambling task. Journal of Experimental Child Psychology, 112, 265–274. Houdé, O. (1995). Rationalité, Développement et Inhibition. Paris: PUF. Houdé, O. (1997). The problem of deductive competence and the inhibitory control of cognition. Current Psychology of Cognition, 16, 108–113. Houdé, O. (2000). Inhibition and cognitive development: Object, number, categorization, and reasoning. Cognitive Development, 15, 63–73. Houdé, O. (2007). First insights on neuropedagogy of reasoning. Thinking & Reasoning, 13, 81–89. Houdé, O. (2008). Les 100 Mots de la Psychologie. Paris: PUF. Houdé, O. (2018). L'école du cerveau: De Montessori, Freinet et Piaget aux sciences cognitives. Brussels: Éditions Mardaga. Houdé, O. and Guichart, E. (2001). Negative priming effect after inhibition of number/length interference in a Piaget-like task. Developmental Science, 4, 71–74. Houdé, O. and Moutier, S. (1996). Deductive reasoning and experimental inhibition training. Current Psychology of Cognition, 15, 409–434. Houdé, O. and Moutier, S. (1999). Deductive reasoning and experimental inhibition training. Current Psychology of Cognition, 18, 75–85. Houdé, O. et al. (2000). Shifting from the perceptual brain to the logical brain: The neural impact of cognitive inhibition training. Journal of Cognitive Neuroscience, 12, 721–728. Houdé, O. et al. (2001). Access to deductive logic depends on a right ventromedial prefrontal area devoted to emotion and feeling: Evidence from a training paradigm. NeuroImage, 14, 1486–1492. Houdé, O. et al. (2010). 
Mapping numerical processing, reading, and executive functions in the developing brain. Developmental Science, 13, 876–885. Houdé, O. et al. (2011). Functional MRI study of Piaget's conservation-of-number task in preschool and school-age children: A neo-Piagetian approach. Journal of Experimental Child Psychology, 110, 332–346. Inhelder, B. and Piaget, J. (1958). The Growth of Logical Thinking from Childhood to Adolescence. London: Routledge. Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. Koenigs, M. et al. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446, 908–911. Linzarini, A. et al. (2017). Cognitive control outside of conscious awareness. Consciousness and Cognition, 53, 185–193. Lubin, A. et al. (2013). Inhibitory control is needed for the resolution of arithmetic word problems: A developmental negative priming study. Journal of Educational Psychology, 105, 701–708. Morel, C. (2002–2010). Les Décisions Absurdes. Paris: Gallimard. Moutier, S. (2000). Deductive competence and executive efficiency in school children. Current Psychology Letters, 3, 87–100. Moutier, S. and Houdé, O. (2003). Judgment under uncertainty and conjunction-fallacy inhibition training. Thinking & Reasoning, 9, 185–201. Moutier, S. et al. (2006). Syllogistic reasoning and belief-bias inhibition in school children. Developmental Science, 9, 166–172.
Mussweiler, T. (2000). The use of category and exemplar knowledge in the solution of anchoring tasks. Journal of Personality and Social Psychology, 78, 1038–1052. Piaget, J. (1932). The Moral Judgment of the Child. London: Kegan Paul. Prado, J. and Noveck, I. (2007). Overcoming perceptual features in logical reasoning: A parametric functional magnetic resonance imaging study. Journal of Cognitive Neuroscience, 19, 642–657. Smith, E. et al. (1999). Storage and executive processes in the frontal lobes. Science, 283, 1657–1660. Tversky, A. and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131. Wason, P. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20, 273–281.


This final chapter is a very short one, reflecting the period of development that it describes: infancy, from zero to two years. It is important that this research into reasoning in infants comes at the end of the book: firstly, because it involves very recent research trends and discoveries, which definitively challenge Piaget's concept of a 'sensorimotor' infant (see Chapter 5); secondly, because it concerns the idea of the infant as already being a statistician. This creates a clear paradox in relation to Kahneman's theory and his adult hero, System 1, who is not much of a statistician (see Chapter 6). The need for a System 3 (Chapter 7) will be reinforced: a system to regulate, more or less efficiently during development, the variation between early competencies (those of the infant and young child) that are still being constructed and late incompetencies (those of adults). When reasoning is developing, just as in its adult form, gaps in performance and failures are the rule, not the exception, in the functioning of the brain.

1. Failures in statistics at university
As we may recall, advanced students in the decision-science programme at Stanford Business School, who had all followed high-level courses in probability, statistics and decision theory, were wrong over 80% of the time in the Linda problem conceived by Kahneman and Tversky (see p. 91). However, it is known today, and we will see this in detail, that when they were babies their brains were already capable of statistics. Is this a gift from God, to use Descartes's words? No. It comes instead from subtle cognitive learning mechanisms and inferences, which have been identified in recent years in developmental psychology. The contrast between infants who are already statisticians (System 2) and adult students who no longer are (System 1) will be the final demonstration of the

The paradox of reasoning in infants


non-linear nature of cognitive development, which conflicts with Piaget’s stages. It also shows the need to develop, step by step, from babies to adults, a System 3 of inhibitory prefrontal control (off/on of System 1/System 2) to resolve conflicts (see Chapter 3). Kahneman did not specifically address this issue of a paradox between the competencies of babies and those of adults, because his field is the psychology of reasoning and decision-making in adults, and not developmental psychology. However, when one seeks to understand cognitive development, this paradox is obvious.

2. Statistics in the cradle
Recent discoveries in cognitive science indicate that very early on, even before the appearance of language (i.e. before the age of two years), infants already use statistics to understand and anticipate the events that they perceive. For several decades, child psychologists have demonstrated that infants are a lot more intelligent in their first year of life than Piaget imagined. Well before the appearance of articulated language (at age two), it has now been established through the study of visual reactions in babies that they understand the elementary principles of the unity and permanence of objects, of number, and of physical or mental causality (Baillargeon et al., 2010; Baillargeon & Wang, 2002; Berger et al., 2006; Dehaene-Lambertz & Spelke, 2015; Spelke, 2000; Wynn, 1998; and others). Before the age of one, they even show early moral evaluation capacities (the distinction between nice and naughty, the expectation of social conformity, etc.), as tested by their visual reactions (using short videos) when faced with contrasting situations of social interaction (Hamlin et al., 2013; Powell & Spelke, 2013). Following on from these discoveries, a new trend has recently emerged which views the infant as a real little scientist who uses statistics to understand and anticipate the events that he or she observes. It would follow that infants begin to understand the world through statistics. And not just any statistics: 'Bayesian principles' (named after the British mathematician Thomas Bayes, 1702–1761, whose theorem determines the probability of causes from observed effects). The American psychologist Alison Gopnik is the forerunner of this new research trend. According to Gopnik (2012), infants and young children are Bayesian statisticians; in other words, they are little thinkers who already infer abstract hierarchical structures from perceptive data pertaining to their environment. 
Hence, infants detect statistical patterns and use them to test causal hypotheses relating to objects and individuals. For example, by using a visual reaction technique, Fei Xu showed in an experiment involving table tennis balls that eight-month-old infants are sensitive to statistical schemas (Xu & Garcia, 2008). She showed babies a large box filled with red and white balls, then closed her eyes and randomly took a few balls from the box and placed them in another smaller box, close by. If the sample removed
had been random, the distribution of the balls in the small box should have corresponded to that of the large box. After the drawing of samples, babies saw a sample of balls in the small box, which, depending on the experimental situation that was presented, either corresponded to the probabilistic distribution (statistically expected event) or did not correspond to it (an unexpected event). The results: babies were surprised and spent more time looking when the perceptive event was not compliant with probabilities. They had therefore perceived the error and detected the breaking of the statistical pattern. In a control condition, babies were shown exactly the same sequence of actions but this time Fei Xu took the balls from her pocket and not from the large box. In this case, there was no surprise reaction. Another group of researchers (Teglas et al., 2011) also published a study confirming that from the age of twelve months infants have astonishing probabilistic reasoning capacities when perceiving complex configurations of moving objects. The authors referred to ‘pure reasoning’. In this study, infants had very clear visual expectations with respect to future events. They predicted these events perfectly, just like little scientists, according to variables that they used systematically and rationally: the number of objects, their physical layout and their time of disappearance. The researchers concluded that the visual reactions of babies are strictly consistent with those of a Bayesian system capable of abstracting general principles relating to moving objects. Other studies, conducted in the same vein, revealed that infants already use statistical patterns to test causal hypotheses pertaining to series of images, spoken phrases and so on. It is with this ‘proto-mathematical’ brain, which is seemingly passive yet very active and lucid, that babies begin to understand the world.
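The Bayesian inversion attributed to infants, reasoning from an observed sample (the 'effect') back to the box it probably came from (the 'cause'), can be sketched in a few lines. The box proportions and sample sizes below are hypothetical, chosen only to illustrate the principle behind this kind of design, not Xu and Garcia's actual stimuli:

```python
from math import comb

def likelihood(p_red, k_red, n_draw):
    """Binomial probability of drawing k_red red balls in n_draw draws."""
    return comb(n_draw, k_red) * p_red**k_red * (1 - p_red)**(n_draw - k_red)

# Two candidate 'causes' (hypothetical 4:1 boxes):
#   H1: box is mostly red (80% red); H2: box is mostly white (20% red).
# Observed 'effect': a sample of 4 red and 1 white ball (5 draws).
prior_h1, prior_h2 = 0.5, 0.5
like_h1 = likelihood(0.8, 4, 5)  # probability of the sample under H1
like_h2 = likelihood(0.2, 4, 5)  # probability of the sample under H2

# Bayes's theorem: probability of the cause given the observed effect.
posterior_h1 = prior_h1 * like_h1 / (prior_h1 * like_h1 + prior_h2 * like_h2)
print(f"P(mostly-red box | sample) = {posterior_h1:.3f}")
```

A mostly-red sample is thus overwhelming evidence for the mostly-red box; conversely, a mostly-white sample drawn from a mostly-red box is improbable, which is exactly the kind of violation that makes the infants in these experiments look longer.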

3. What remains to be learned
Alongside these revelations, which have been solidly supported experimentally in infants who have been described as 'thinkers', 'scientists' or at the very least 'very successful' (as Gopnik put it), Kahneman (2011) describes cultured adults who are ignorant of Bayesian rules. He devotes an explicit section of his book to showing how System 1 ignores Bayesian statistics. There is therefore a paradox. If, from the first year of life, infants perceive the world in both an intelligent and a scientific way, already with a capacity for 'pure reasoning', then why do older children at school, and even adults, make so many systematic errors of logical reasoning? As Piaget clearly identified in children of pre-school and school age (errors in the conservation of number, the inclusion of categories, etc.), and later Kahneman and Evans in adults (systematic errors of deduction and induction), our brain often reasons the wrong way, obeying biases, heuristics or perceptive intuitions (System 1) more than logical rules and abstract mathematics (System 2). Explaining this paradox of early competencies and late incompetencies is the primary task of current cognitive developmental

The paradox of reasoning in infants


psychology. One of the ways of eliminating this paradox is to consider that the human brain, in particular its prefrontal part, must still learn to inhibit (System 3) certain perceptive or semantic heuristics that are acquired at an early stage, during childhood, and even later in adulthood. Consequently, if the perceptive world commands our attention very early in development, one must also learn to resist it in order to reason. That cannot be taken for granted.

Bibliography

Baillargeon, R. et al. (2010). False-belief understanding in infants. Trends in Cognitive Sciences, 14, 110–118.
Baillargeon, R. and Wang, S. (2002). Event categorization in infancy. Trends in Cognitive Sciences, 6, 85–93.
Berger, A. et al. (2006). Infant brains detect arithmetic errors. PNAS, 103, 12649–12653.
Dehaene-Lambertz, G. and Spelke, E. (2015). The infancy of the human brain. Neuron, 88, 93–109.
Gopnik, A. (2012). Scientific thinking in young children: Theoretical advances, empirical research and policy implications. Science, 337, 1623–1627.
Hamlin, J. et al. (2007). Social evaluation by preverbal infants. Nature, 450, 557–559.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Powell, L. and Spelke, E. (2013). Preverbal infants expect members of social groups to act alike. PNAS, 110, 3965–3972.
Spelke, E. (2000). Core knowledge. American Psychologist, 55, 1233–1243.
Teglas, E. et al. (2011). Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332, 1054–1059.
Wynn, K. (1998). Psychological foundations of number: Numerical competence in human infants. Trends in Cognitive Sciences, 2, 296–303.
Xu, F. and Garcia, V. (2008). Intuitive statistics by 8-month-old infants. PNAS, 105, 5012–5015.


Let us reflect for a moment on the contrasting feelings that reading this book might have provoked: hope, disappointment, hope. The initial hope over the centuries (Part I) had been to relive the unbelievable rise in the power of reasoning: from myth to logos in antiquity (Aristotle) and then onwards in a dazzling upward sequence, during which many entered the ‘theatre of reasoning’ – Archimedes, St Anselm of Canterbury, Montaigne, Descartes, Pascal, Kant and Claude Bernard – through to the sciences of the twentieth century. As we have seen, the culmination of this is the scientific study of reasoning itself as a biological function of the brain. This vertiginous self-reflection of the human brain that reasons and experiments on its own mechanisms was described, for the first time, by Piaget with his study of the logical system in children (Chapter 5), and then by the emergence of cognitive neuroscience and cerebral imaging. Even if Piaget moderated the enthusiasm of Auguste Comte (with his scale of the sciences) and the ‘logicians’ (Russell, etc.) by reminding the proponents of mathematics and logic of their psychological and biological origins (the circle of the sciences), the hope of previous centuries was consolidated: logic and reasoning as the heroes of thought, both in children and in adults. However, the fall was sudden. From a 2,000-year-old hope (even if sophisms, paralogisms and other misleading influences had been noted), there was a shift, during the second half of the twentieth century, to profound and systematic disillusionment. The hero was no longer reason and logic; it had been replaced by bias, heuristics, rapid intuition and errors of reasoning (Evans, Kahneman): this is the triumph of System 1 (Chapter 6).
This is not, as we have seen, an anecdotal point, a marginal gap – horizontal or vertical, as Piaget would have said (and why not oblique too?) – but a strong tendency (in over 80% of ‘reasoners’, both children and adults), supported by numerous experimental results. The elegant experimental method of Claude Bernard – the jewel of reasoning in the nineteenth century – was reused in the following century to demonstrate, through a now scientific psychology, the systematic errors of human reasoning. This tendency was so strong, and the fall so sudden, that even in economics the dominant premise was found to be cracked; the so-called rationality of individuals (or agents) in the standard theory was challenged. Hence Kahneman’s Nobel Prize in Economics in 2002. Therefore, after twenty centuries of quiet growth (even in the Middle Ages reason and theology were compatible) and the very promising encouragement of the Age of Enlightenment, by the end of the twentieth century reason was suddenly reduced to playing a secondary role, renamed ‘System 2’ by Kahneman, following many other authors. This could not be more explicit. Moreover, the final death blow came when child psychology, so dear to Piaget, simultaneously discovered a logical development that is not incremental (unlike the Piagetian stages) but irregular, uneven and non-linear – a disillusionment and disaster for reason. However, looked at more closely, for about twenty years the extraordinary progress of neuroscience and cerebral imaging in the exploration of the brain, as well as the reappraisal (following Piaget) of cognitive development processes in children and of education, has led to the prioritizing of new neurocognitive principles. These are at the heart of our very human way of reasoning and learning: a brain that makes mistakes, stops, feels the emotions (Damasio) necessary to correct its errors and reconfigure its neural networks; a brain that resists (System 3), takes different paths (vicariance), seeks the new, like a poet, by inhibiting the old or the familiar (see Chapter 7).
The prefrontal cortex, both powerful and fragile, can today be examined by cerebral imaging, allowing us to study scientifically, in real time, in both children and adults, the process of reasoning. So System 3, you see, is the hero of this book. With this renewed hope for reason – albeit a reason now rather fragile and lazy, whose defects have been well mapped scientifically (System 1) – one can reinvest in clear new approaches to the education of reasoning (System 3: vicariance between System 1 and System 2). This applied and experimental education must, in my view, serve contemporary cognitive and cultural values: reason in the world of digital screens, reason for the sciences in school – and why not construct an artificial brain that reasons and makes decisions? If it reasons at least like a child, like an infant (Chapter 8), and it has good perceptive and semantic mechanisms (its knowledge and beliefs about the world), and is capable of inhibiting them on a case-by-case basis, then this artificial brain could almost be smart. Besides this dream of creating an artificial intelligence, which is still a long way off, the most urgent challenge today is without doubt to educate the reasoning capacity of the human brain, with its Systems 1, 2 and 3, in the new world of digital screens. Computers, smartphones and digital tablets (but also smart watches and glasses, and digital toys) are now almost omnipresent, day and night, in the lives and private time of children, adolescents and adults. It is the ‘Tom Thumb’ generation described by the philosopher Michel Serres:1 young people (and the less young now) always with their thumbs on the touch screen, skilfully operating it, their eyes never moving from the computer, smartphone or tablet. However, do they still reason? Parents and educators, though delighted with the digital exploits of their children, do worry about their deeper capacities of reflection, of personal synthesis, of taking a step back (Carr, 2011). This fear is fuelled by recent scientific studies indicating that thought processes that are too rapid, superficial and excessively fluid can accompany the use of screens by young people: the culture of ‘zapping’ or multitasking (Ophir et al., 2009). Such people might thus end up retaining the method of access (the links on search engines) more than the content itself, at the expense of understanding it deeply (Sparrow et al., 2011). Is there memory without reasoning? Is System 2 – and with it the logos of Aristotle, the Cogito, ergo sum of Descartes, the logico-mathematical intelligence of Piaget – consigned to the attic of unnecessary accessories? Is the human brain in danger of a digital short circuit? No. This ‘organ of thought’ was shaped by a very long biological evolution, measured in millions of years (which puts into perspective the mere two millennia that separate us from antiquity), and it should be able to adapt to screens. So one can believe that our neurocultural networks will be functionally changed through experience, as in the past they were successfully changed for writing and reading. It is known, as Stanislas Dehaene demonstrated with respect to reading, that the neurons of the brain have this capacity to recycle themselves (Dehaene et al., 2007). Education must play a key role here.
The challenge is to ensure that, when faced with screens, neural networks can recycle themselves, preserving a good reasoning capacity in the sense defined in this book: intuition and rapidity (System 1), but also learning to inhibit (System 3) impulses, automatic reflexes and perceptual-motor and cognitive biases so as to allow the exercise of logical reasoning (System 2). Interestingly, the digital revolution suggests a ‘renewed use’ of reasoning, which stands today as the cognitive value to be cultivated and preserved in the face of a growing mass of information (the Web, big data), on screens or elsewhere, that our brain must manage to collect, organize and analyse – whilst avoiding (inhibiting) its traps.

Note

1 M. Serres, Petite Poucette (Paris: Le Pommier, 2012).

Bibliography

Carr, N. (2011). The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton & Company.
Dehaene, S. et al. (2007). Cultural recycling of cortical maps. Neuron, 56, 384–398.
Ophir, E. et al. (2009). Cognitive control in media multitaskers. PNAS, 106, 15583–15587.
Sparrow, B. et al. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333, 776–778.

