This handbook introduces the reader to thought-provoking research on the neural foundations of human intelligence.
English | 515 pages | 2021
Table of contents :
List of Figures page x
List of Tables xiii
List of Contributors xv
Preface xix
Part I Fundamental Issues 1
1 Defining and Measuring Intelligence: The Psychometrics and
Neuroscience of g
thomas r. coyle 3
2 Network Neuroscience Methods for Studying Intelligence
kirsten hilger and olaf sporns 26
3 Imaging the Intelligence of Humans
kenia martínez and roberto colom 44
4 Research Consortia and Large-Scale Data Repositories for
Studying Intelligence
budhachandra khundrakpam, jean-baptiste poline,
and alan c. evans 70
Part II Theories, Models, and Hypotheses 83
5 Evaluating the Weight of the Evidence: Cognitive
Neuroscience Theories of Intelligence
matthew j. euler and ty l. mckinney 85
6 Human Intelligence and Network Neuroscience
aron k. barbey 102
7 It’s about Time: Towards a Longitudinal Cognitive
Neuroscience of Intelligence
rogier a. kievit and ivan l. simpson-kent 123
8 A Lifespan Perspective on the Cognitive Neuroscience
of Intelligence
joseph p. hennessee and denise c. park 147
9 Predictive Intelligence for Learning and Optimization:
Multidisciplinary Perspectives from Social, Cognitive, and
Affective Neuroscience
christine ahrends, peter vuust, and morten
l. kringelbach 162
Part III Neuroimaging Methods and Findings 189
10 Diffusion-Weighted Imaging of Intelligence
erhan genç and christoph fraenz 191
11 Structural Brain Imaging of Intelligence
stefan drakulich and sherif karama 210
12 Functional Brain Imaging of Intelligence
ulrike basten and christian j. fiebach 235
13 An Integrated, Dynamic Functional Connectome
Underlies Intelligence
jessica r. cohen and mark d’esposito 261
14 Biochemical Correlates of Intelligence
rex e. jung and marwa o. chohan 282
15 Good Sense and Good Chemistry: Neurochemical Correlates of
Cognitive Performance Assessed In Vivo through Magnetic
Resonance Spectroscopy
naftali raz and jeffrey a. stanley 297
Part IV Predictive Modeling Approaches 325
16 Predicting Individual Differences in Cognitive Ability
from Brain Imaging and Genetics
kevin m. anderson and avram j. holmes 327
17 Predicting Cognitive-Ability Differences from Genetic
and Brain-Imaging Data
emily a. willoughby and james j. lee 349
Part V Translating Research on the Neuroscience
of Intelligence into Action 365
18 Enhancing Cognition
michael i. posner and mary k. rothbart 367
19 Patient-Based Approaches to Understanding Intelligence and
Problem-Solving
shira cohen-zimerman, carola salvi, and jordan
h. grafman 382
20 Implications of Biological Research on Intelligence for
Education and Public Policy
kathryn asbury and diana fields 399
21 Vertical and Horizontal Levels of Analysis in the Study of
Human Intelligence
robert j. sternberg 416
22 How Intelligence Research Can Inform Education and
Public Policy
jonathan wai and drew h. bailey 434
23 The Neural Representation of Concrete and Abstract Concepts
robert vargas and marcel adam just 448
Index 469
The Cambridge Handbook of Intelligence and Cognitive Neuroscience

Can the brain be manipulated to enhance intelligence? The answer depends on neuroscience progress in understanding how intelligence arises from the interplay of gene expression and experience in the developing brain, and how the mature brain processes information to solve complex reasoning problems. The bad news is that the issues are nightmarishly complex. The good news is that there is extraordinary progress from researchers around the world. This book is a comprehensive sampling of recent exciting results, especially from neuroimaging studies. Each chapter keeps jargon to a minimum, so an advanced technical background is not required to understand the issues, the data, or the interpretation of results. The prospects for future advances will whet the appetite of young researchers and fuel enthusiasm for researchers already working in these areas. Many intelligence researchers of the past dreamed about a day when neuroscience could be applied to understanding fundamental aspects of intelligence. As this book demonstrates, that day has arrived.

aron k. barbey is Professor of Psychology, Neuroscience, and Bioengineering at the University of Illinois at Urbana-Champaign. He directs the Intelligence, Learning, and Plasticity Initiative and the Decision Neuroscience Laboratory at the Beckman Institute for Advanced Science and Technology.

sherif karama is a psychiatrist with a PhD in neuroscience. He completed a five-year postdoctoral fellowship in Brain Imaging of Cognitive Ability Differences at the Montreal Neurological Institute. He is an assistant professor in the Department of Psychiatry of McGill University.

richard j. haier is Professor Emeritus in the School of Medicine, University of California, Irvine. His PhD in psychology is from Johns Hopkins University; he has been a staff fellow at NIMH and on the faculty of Brown University School of Medicine.
“This exciting book makes an elegant case that human intelligence is not the result of a test. It is the consequence of a brain. Drawing on state-of-the-art imaging methods, the reader is afforded a comprehensive view of the substrates enabling our most valued mental abilities.” Scott T. Grafton, Bedrosian-Coyne Presidential Chair in Neuroscience and Director of the Brain Imaging Center, University of California at Santa Barbara
“Our scientific understanding of human intelligence has advanced greatly over the past decade in terms of the measurement and modeling of intelligence in the human brain. This book provides an excellent analysis of current findings and theories written by top international authors. It should be recommended to students and professionals working in this field.” Sarah E. MacPherson, Senior Lecturer in Human Cognitive Neuroscience, University of Edinburgh
“This handbook focuses on the brain, but also integrates genetics and cognition. Come for a comprehensive brain survey and get the bonus of a panoramic foreshadowing of integrated intelligence research and applications.” Douglas K. Detterman, Louis D. Beaumont University Professor Emeritus of Psychological Sciences, Case Western Reserve University
“This handbook captures the conceptualization and measurement of intelligence, which is one of psychology’s greatest achievements. It shows how the advent of modern imaging techniques and large-scale data sets have added to our knowledge about brain–environmental– ability relationships and highlights the controversy in this rapidly expanding field.” Diane F. Halpern, Professor of Psychology, Emerita, Claremont McKenna College
“This handbook assembles an impressive group of pioneers and outstanding young researchers at the forefront of intelligence neuroscience. The chapters summarize the state of the field today and foreshadow what it might become.” Lars Penke, Professor of Psychology, Georg August University of Göttingen
“This book is a tribute to its topic. It is intelligently assembled, spanning all aspects of intelligence research and its applications. The authors are distinguished experts, masterfully summarizing the latest knowledge about intelligence obtained with cutting-edge methodology. If one wants to learn about intelligence, this is the book to read.” Yulia Kovas, Professor of Genetics and Psychology, Goldsmiths University of London
The Cambridge Handbook of Intelligence and Cognitive Neuroscience Edited by
Aron K. Barbey University of Illinois at Urbana-Champaign
Sherif Karama McGill University
Richard J. Haier University of California, Irvine
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06-04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108480543
DOI: 10.1017/9781108635462

© Cambridge University Press 2021

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2021

A catalogue record for this publication is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Names: Barbey, Aron K., editor. | Karama, Sherif, editor. | Haier, Richard J., editor.
Title: The Cambridge handbook of intelligence and cognitive neuroscience / edited by Aron K. Barbey, University of Illinois, Urbana-Champaign, Sherif Karama, McGill University, Montréal, Richard J. Haier, University of California, Irvine.
Description: 1 Edition. | New York : Cambridge University Press, 2020. | Series: Cambridge handbooks in psychology | Includes bibliographical references and index.
Identifiers: LCCN 2020033919 (print) | LCCN 2020033920 (ebook) | ISBN 9781108480543 (hardback) | ISBN 9781108727723 (paperback) | ISBN 9781108635462 (epub)
Subjects: LCSH: Intellect. | Cognitive neuroscience.
Classification: LCC BF431 .C268376 2020 (print) | LCC BF431 (ebook) | DDC 153.9–dc23
LC record available at https://lccn.loc.gov/2020033919
LC ebook record available at https://lccn.loc.gov/2020033920

ISBN 978-1-108-48054-3 Hardback
ISBN 978-1-108-72772-3 Paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
For all those who inspire us – without reference to success or failure – to understand the origins of intelligence and the diversity of talents that make us all equally human. And for Michelle, who inspires me, beyond compare.
Aron K. Barbey

To my son, Alexandre, who enriches my life. To my father, Adel, who has taught me not to let the dictates of my passions interfere with my assessments of the facts and of the weight of evidence. To the mentors that have shaped my career and approach to science.
Sherif Karama

Dedicated to all the scientists and students who follow intelligence data wherever they lead, especially into the vast uncharted recesses of the brain.
Richard J. Haier
Contents

List of Figures page x
List of Tables xiii
List of Contributors xv
Preface xix

Part I Fundamental Issues 1

1 Defining and Measuring Intelligence: The Psychometrics and Neuroscience of g
Thomas R. Coyle 3

2 Network Neuroscience Methods for Studying Intelligence
Kirsten Hilger and Olaf Sporns 26

3 Imaging the Intelligence of Humans
Kenia Martínez and Roberto Colom 44

4 Research Consortia and Large-Scale Data Repositories for Studying Intelligence
Budhachandra Khundrakpam, Jean-Baptiste Poline, and Alan C. Evans 70

Part II Theories, Models, and Hypotheses 83

5 Evaluating the Weight of the Evidence: Cognitive Neuroscience Theories of Intelligence
Matthew J. Euler and Ty L. McKinney 85

6 Human Intelligence and Network Neuroscience
Aron K. Barbey 102

7 It’s about Time: Towards a Longitudinal Cognitive Neuroscience of Intelligence
Rogier A. Kievit and Ivan L. Simpson-Kent 123

8 A Lifespan Perspective on the Cognitive Neuroscience of Intelligence
Joseph P. Hennessee and Denise C. Park 147

9 Predictive Intelligence for Learning and Optimization: Multidisciplinary Perspectives from Social, Cognitive, and Affective Neuroscience
Christine Ahrends, Peter Vuust, and Morten L. Kringelbach 162

Part III Neuroimaging Methods and Findings 189

10 Diffusion-Weighted Imaging of Intelligence
Erhan Genç and Christoph Fraenz 191

11 Structural Brain Imaging of Intelligence
Stefan Drakulich and Sherif Karama 210

12 Functional Brain Imaging of Intelligence
Ulrike Basten and Christian J. Fiebach 235

13 An Integrated, Dynamic Functional Connectome Underlies Intelligence
Jessica R. Cohen and Mark D’Esposito 261

14 Biochemical Correlates of Intelligence
Rex E. Jung and Marwa O. Chohan 282

15 Good Sense and Good Chemistry: Neurochemical Correlates of Cognitive Performance Assessed In Vivo through Magnetic Resonance Spectroscopy
Naftali Raz and Jeffrey A. Stanley 297

Part IV Predictive Modeling Approaches 325

16 Predicting Individual Differences in Cognitive Ability from Brain Imaging and Genetics
Kevin M. Anderson and Avram J. Holmes 327

17 Predicting Cognitive-Ability Differences from Genetic and Brain-Imaging Data
Emily A. Willoughby and James J. Lee 349

Part V Translating Research on the Neuroscience of Intelligence into Action 365

18 Enhancing Cognition
Michael I. Posner and Mary K. Rothbart 367

19 Patient-Based Approaches to Understanding Intelligence and Problem-Solving
Shira Cohen-Zimerman, Carola Salvi, and Jordan H. Grafman 382

20 Implications of Biological Research on Intelligence for Education and Public Policy
Kathryn Asbury and Diana Fields 399

21 Vertical and Horizontal Levels of Analysis in the Study of Human Intelligence
Robert J. Sternberg 416

22 How Intelligence Research Can Inform Education and Public Policy
Jonathan Wai and Drew H. Bailey 434

23 The Neural Representation of Concrete and Abstract Concepts
Robert Vargas and Marcel Adam Just 448

Index 469
Figures

2.1 Schematic illustration of structural and functional brain network construction and key network metrics. page 29
2.2 The brain bases of intelligence – from a network neuroscience perspective. 32
3.1 Regions identified by the Parieto-Frontal Integration Theory (P-FIT) as relevant for human intelligence. 46
3.2 Variability in the gray matter correlates of intelligence across the psychometric hierarchy as reported in one study by Román et al. (2014). 48
3.3 Workflow for voxel-based morphometry (VBM) and surface-based morphometry (SBM) analysis. 52
3.4 Top panel: three left hemisphere brain surfaces from different individuals. 53
3.5 (A and B) Distribution and variability of cortical thickness computed through different surface-based protocols: Cortical Pattern Matching (CPM), BrainSuite, and CIVET. 54
3.6 (A) Pearson’s correlations among cortical thickness (CT), cortical surface area (CSA), and cortical gray matter volume (CGMV) obtained from a subsample of 279 healthy children and adolescents of the Pediatric MRI Data Repository created for the National Institute of Mental Health MRI Study of Normal Brain Development (Evans and Brain Development Cooperative Group, 2006). (B) Topography of significant correlations (q < .05, false discovery rate (FDR) corrected) between IQ and cortical thickness (CT), cortical surface area (CSA), and cortical gray matter volume (CGMV). 55
3.7 Summary of basic analytic steps for connectome-based analyses (A). The analytic sequence for computing the structural and functional connectivity matrices (B). 58
3.8 Structural and functional correlates of human intelligence are not identified within the same brain regions: “the dissociation of functional vs. structural brain imaging correlates of intelligence is at odds with the principle assumption of the P-FIT that functional and structural studies on neural correlates of intelligence converge to imply the same set of brain regions” (Basten et al., 2015, p. 21). 59
3.9 Mean (A) and variability (B) of cortical thickness across the cortex in two groups of individuals (Sample A and Sample B) matched for sex, age, and cognitive performance. The regional maps are almost identical. Pearson’s correlations between visuospatial intelligence and cortical thickness differences in these two groups are also shown (C). 61
6.1 Small-world network. 105
6.2 Intrinsic connectivity networks and network flexibility. 107
6.3 Dynamic functional connectivity. 112
7.1 Simplified bivariate latent change score model illustrating the co-development of intelligence scores (top) and brain measures (bottom) across two waves. 125
7.2 An overview of longitudinal studies of brain structure, function, and intelligence. 130
8.1 Lifespan performance measures. 151
8.2 A conceptual model of the scaffolding theory of aging and cognition-revisited (STAC-r). 152
9.1 The pleasure cycle, interactions between experience and predictions, as well as how learning might occur. 164
9.2 The pleasure cycle, on its own, during circadian cycles, and over the lifespan. 166
9.3 Parameter optimization of learning models. 169
9.4 Hierarchical neuronal workspace architectures. 171
9.5 Reward-related signals in the orbitofrontal cortex (OFC) unfold dynamically across space and time. 173
10.1 The top half depicts ellipsoids (left side, A) and tensors (right side, B) that were yielded by means of diffusion-weighted imaging and projected onto a coronal slice of an MNI brain. 194
10.2 White matter fiber tracts whose microstructural properties were found to correlate with interindividual differences in intelligence. 196
12.1 Brain activation associated with the processing of intelligence-related tasks, showing the results of the meta-analysis conducted by Santarnecchi, Emmendorfer, and Pascual-Leone (2017). 238
12.2 Intelligence-related differences in brain activation during cognitive processing. 239
12.3 Brain activation as a function of task difficulty and intelligence. 248
13.1 Brain graph schematic. 265
14.1 Representative spectrum from human voxel obtained from parietal white matter. 284
14.2 Linear relationship between size of study (Y axis) and magnitude of NAA–intelligence, reasoning, general cognitive functioning relationship (X axis), with the overall relationship being inverse (R² = .20). 289
15.1 Examples of a quantified ¹H MRS spectrum and a quantified ³¹P MRS spectrum. 299
16.1 A graphical depiction of the Deep Boltzmann Machine (DBM) developed by Wang et al. (2018) to predict psychiatric case status. 336
19.1 Schematic drawing of brain areas associated with intelligence based on lesion mapping studies. 385
23.1 Conceptual schematic showing differences between GLM activation-based approaches and pattern-oriented MVPA, where the same number of voxels activate (shown as dark voxels) for two concepts but the spatial pattern of the activated voxels differs. 450
Tables

4.1 Details of large-scale datasets and research consortia with concurrent measures of neuroimaging and intelligence (and/or related) scores, and, in some cases, genetic data. page 72
6.1 Summary of cognitive neuroscience theories of human intelligence. 103
7.1 An overview of longitudinal studies of brain structure, function, and intelligence. 127
13.1 Definitions and descriptions of graph theory metrics. 266
14.1 Studies of NAA. 288
21.1 Vertical levels of analysis for the study of human intelligence. 418
21.2 Horizontal levels of analysis for the study of human intelligence. 424
22.1 Reverse and forward causal questions pertaining to intelligence. 438
Contributors

Christine Ahrends, PhD Candidate, Department of Psychiatry, University of Oxford, UK
Kevin M. Anderson, PhD Candidate, Department of Psychology and Psychiatry, Yale University, USA
Kathryn Asbury, PhD, Senior Lecturer, Department of Education, University of York, UK
Drew H. Bailey, PhD, Associate Professor, School of Education, University of California, Irvine, USA
Aron K. Barbey, PhD, Professor, Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign, USA
Ulrike Basten, PhD, Postdoctoral Researcher, Bernstein Center for Computational Neuroscience, Goethe University Frankfurt, Germany
Marwa O. Chohan, PhD Candidate, Albuquerque Academy, New Mexico, USA
Jessica R. Cohen, PhD, Assistant Professor, Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, USA
Shira Cohen-Zimerman, PhD, Postdoctoral Fellow, Shirley Ryan AbilityLab, USA
Roberto Colom, PhD, Professor, Department of Biological and Health Psychology, Autonomous University of Madrid, Spain
Thomas R. Coyle, PhD, Professor, Department of Psychology, University of Texas at San Antonio, USA
Mark D’Esposito, MD, Professor, Helen Wills Neuroscience Institute, University of California, Berkeley, USA
Stefan Drakulich, PhD Candidate, Integrated Program in Neuroscience, Montreal Neurological Institute, Canada
Matthew J. Euler, PhD, Assistant Professor, Department of Psychology, The University of Utah, USA
Alan C. Evans, PhD, James McGill Professor, Department of Neurology and Neurosurgery, McGill University, Canada
Christian J. Fiebach, PhD, Professor, Psychology Department, Goethe University Frankfurt, Germany
Diana Fields, PhD Candidate, Department of Education, University of York, UK
Christoph Fraenz, PhD, Postdoctoral Researcher, Department of Psychology, Ruhr-University Bochum, Germany
Erhan Genç, PhD, Principal Investigator, Department of Psychology, Ruhr-University Bochum, Germany
Jordan H. Grafman, PhD, Professor of Physical Medicine and Rehabilitation, Neurology – Ken and Ruth Davee Department, Northwestern University Feinberg School of Medicine, USA
Richard J. Haier, PhD, Professor Emeritus, School of Medicine, University of California, Irvine, USA
Joseph P. Hennessee, PhD, Postdoctoral Researcher, School of Behavior and Brain Sciences, The University of Texas at Dallas, USA
Kirsten Hilger, PhD, Postdoctoral Research Associate, Faculty of Human Sciences, University of Würzburg, Germany
Avram J. Holmes, PhD, Assistant Professor, Department of Psychology and Psychiatry, Yale University, USA
Rex E. Jung, PhD, Assistant Professor, Department of Neurosurgery, University of New Mexico, USA
Marcel Adam Just, PhD, D. O. Hebb Professor of Psychology, Psychology Department, Carnegie Mellon University, USA
Sherif Karama, MD, PhD, Assistant Professor, Department of Psychiatry, McGill University, Canada
Budhachandra Khundrakpam, PhD, Postdoctoral Fellow, Department of Neurology and Neurosurgery, McGill University, Canada
Rogier A. Kievit, PhD, Programme Leader, Department of Neuroscience, University of Cambridge, UK
Morten L. Kringelbach, DPhil, Professor, Department of Psychiatry, Aarhus University, Denmark
James J. Lee, PhD, Assistant Professor, Department of Psychology, University of Minnesota, USA
Kenia Martínez, PhD, Investigator, Biomedical Imaging and Instrumentation Group, Gregorio Marañón General University Hospital, Spain
Ty L. McKinney, PhD Candidate, Department of Psychology, University of Utah, USA
Denise C. Park, PhD, Professor, Center for Vital Longevity, The University of Texas at Dallas, USA
Jean-Baptiste Poline, PhD, Associate Professor, Department of Neurology and Neurosurgery, McGill University, Canada
Michael I. Posner, PhD, Professor Emeritus, Department of Psychology, University of Oregon, USA
Naftali Raz, PhD, Professor, Institute of Gerontology, Wayne State University, USA
Mary K. Rothbart, PhD, Professor Emeritus, Department of Psychology, University of Oregon, USA
Carola Salvi, PhD, Lecturer, Department of Psychiatry, University of Texas at Austin, USA
Ivan L. Simpson-Kent, PhD Candidate, MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
Olaf Sporns, PhD, Distinguished Professor, Department of Psychological and Brain Sciences, Indiana University Bloomington, USA
Jeffrey A. Stanley, PhD, Professor, Department of Psychiatry, Wayne State University, USA
Robert J. Sternberg, PhD, Professor of Human Development, College of Human Ecology, Cornell University, USA
Robert Vargas, PhD Candidate, Psychology Department, Carnegie Mellon University, USA
Peter Vuust, Professor, Department of Clinical Medicine, Aarhus University, Denmark
Jonathan Wai, PhD, Assistant Professor, Department of Education Reform, University of Arkansas, USA
Emily A. Willoughby, PhD Candidate, Department of Psychology, University of Minnesota, USA
Preface
This book introduces one of the greatest and most exciting scientific challenges of our time – explicating the neurobiological foundations of human intelligence. Written for students and for professionals in related fields, The Cambridge Handbook of Intelligence and Cognitive Neuroscience surveys research emerging from the rapidly developing neuroscience literature on human intelligence. Our emphasis is on theoretical innovation and recent advances in the measurement, modeling, and characterization of the neurobiology, especially from brain imaging studies. Scientific research on human intelligence is evolving from limitations of psychometric testing approaches to advanced neuroscience methods. Each chapter, written by experts, explains these developments in clear language. Together the chapters show how scientists are uncovering the rich constellation of brain elements and connections that give rise to the remarkable depth and complexity of human reasoning and personal expression. If you doubt that intelligence can be defined or measured sufficiently for scientific study, you are in for a surprise. Each chapter presents thought-provoking findings and conceptions to whet the appetite of students and researchers. Part I is an introduction to fundamental issues in the characterization and measurement of general intelligence (Coyle, Chapter 1), reviewing emerging methods from network neuroscience (Hilger and Sporns, Chapter 2), presenting a comparative analysis of structural and functional MRI methods (Martínez and Colom, Chapter 3), and surveying multidisciplinary research consortia and large-scale data repositories for the study of general intelligence (Khundrakpam, Poline, and Evans, Chapter 4). 
Part II reviews cognitive neuroscience theories of general intelligence, evaluating the weight of the neuroscience evidence (Euler and McKinney, Chapter 5), presenting an emerging approach from network neuroscience (Barbey, Chapter 6), and reviewing neuroscience research that investigates general intelligence within a developmental (Kievit and Simpson-Kent, Chapter 7) and lifespan framework (Hennessee and Park, Chapter 8), and that applies a social, cognitive, and affective neuroscience perspective (Ahrends, Vuust, and Kringelbach, Chapter 9). Due to a production issue, Chapter 23 (Vargas and Just) was omitted from Part II, where it was intended to appear. This chapter now appears as the final chapter.
Part III provides a systematic review of contemporary neuroimaging methods for studying intelligence, including structural and diffusion-weighted MRI techniques (Genç and Fraenz, Chapter 10; Drakulich and Karama, Chapter 11), functional MRI methods (Basten and Fiebach, Chapter 12; Cohen and D’Esposito, Chapter 13), and spectroscopic imaging of metabolic markers of intelligence (Jung and Chohan, Chapter 14; Raz and Stanley, Chapter 15). Part IV reviews predictive modeling approaches to the study of human intelligence, presenting research that enables the prediction of cognitive ability differences from brain imaging and genetics data (Anderson and Holmes, Chapter 16; Willoughby and Lee, Chapter 17). Finally, Part V addresses the need to translate findings from this burgeoning literature into potential action/policy, presenting research on cognitive enhancement (Posner and Rothbart, Chapter 18), clinical translation (Cohen-Zimerman, Salvi, and Grafman, Chapter 19), and education and public policy (Asbury and Fields, Chapter 20; Sternberg, Chapter 21; Wai and Bailey, Chapter 22). Research on cognitive neuroscience offers the profound possibility of enhancing intelligence, perhaps in combination with molecular biology, by manipulating genes and brain systems. Imagine what this might mean for education, life success, and for addressing fundamental social problems. You may even decide to pursue a career dedicated to these prospects. That would be an extra reward for us.
PART I
Fundamental Issues
1 Defining and Measuring Intelligence: The Psychometrics and Neuroscience of g

Thomas R. Coyle
Aims and Organization

The purpose of this chapter is to review key principles and findings of intelligence research, with special attention to psychometrics and neuroscience. Following Jensen (1998), the chapter focuses on intelligence defined as general intelligence (g). g represents variance common to mental tests and arises from ubiquitous positive correlations among tests (scaled so that higher scores indicate better performance). The positive correlations indicate that people who perform well on one test generally perform well on all others. The chapter reviews measures of g (e.g., IQ and reaction times), models of g (e.g., Spearman’s model and the Cattell-Horn-Carroll model), and the invariance of g across test batteries. The chapter relies heavily on articles published in the last few years, seminal research on intelligence (e.g., neural efficiency hypothesis), and meta-analyses of intelligence and g-loaded tests. Effect sizes are reported for individual studies and meta-analyses of the validity of g and its link to the brain.

The chapter is divided into five sections. The first section discusses historical definitions of intelligence, concluding with the decision to focus on g. The second section considers vehicles for measuring g (e.g., IQ tests), models for representing g (e.g., Cattell-Horn-Carroll), and the invariance of g. The next two sections discuss the predictive power of g-loaded tests, followed by a discussion of intelligence and the brain. The final section considers outstanding issues for future research. The issues include non-g factors, the development of intelligence, and recent research on genetic contributions to intelligence and the brain (e.g., Lee et al., 2018).
Defining Intelligence

Intelligence can be defined as a general cognitive ability related to solving problems efficiently and effectively. Historically, several definitions of intelligence have been proposed. Alfred Binet, who co-developed the precursor
to modern intelligence tests (i.e., Stanford-Binet Intelligence Scales), defined it as “judgment, otherwise called good sense, practical sense, initiative, the faculty of adapting one’s self to circumstances” (Binet & Simon, 1916/1973, pp. 42–43). David Wechsler, who developed the Wechsler Intelligence Scales, defined it as the “global capacity of the individual to act purposefully, to think rationally and to deal effectively with his environment” (Wechsler, 1944, p. 3). Howard Gardner, a proponent of the theory of multiple intelligences, defined it as “the ability to solve problems, or to create products, that are valued within one or more cultural settings” (Gardner, 1983/2003, p. x). Perhaps the best known contemporary definition of intelligence was reported in the statement “Mainstream Science on Intelligence” (Gottfredson, 1997). The statement was signed by 52 experts on intelligence and first published in the Wall Street Journal. It defines intelligence as: [A] very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings – catching on, making sense of things, or figuring out what to do. (Gottfredson, 1997, p. 13)
Two elements of the definition are noteworthy. The first is that intelligence represents a general ability, which influences performance on all mental tasks (e.g., verbal, math, spatial). The second is that intelligence involves the ability to learn quickly, meaning that intelligence is related to fast and efficient mental processing. Psychometric evidence strongly supports the view that intelligence, measured by cognitive tests, reflects a general ability that permeates all mental tasks, and that it is associated with efficient mental processing, notably on elementary cognitive tasks that measure reaction times (e.g., Jensen, 1998, 2006). Arthur Jensen, a titan in intelligence research, advised against the use of the term “intelligence” because of its vague meaning and questionable scientific utility, noting: “Largely because of its popular and literary usage, the word ‘intelligence’ has come to mean too many different things to many people (including psychologists). It has also become so fraught with value judgments, emotions, and prejudices as to render it useless in scientific discussion” (Jensen, 1998, p. 48). Rather than using the term “intelligence,” Jensen (1998) proposed defining mental ability as g, which represents variance common to mental tasks. g reflects the empirical reality that people who do well in one mental task generally do well on all other mental tasks, a finding supported by positive correlations among cognitive tests. Following Jensen (1998), the current chapter focuses on g and g-loaded measures. It discusses methods for measuring g, models for representing g, the validity of g-loaded tests, and the nexus of relations between g-loaded measures, the brain, and diverse criteria.
Measuring g

Jensen (1998, pp. 308–314) distinguished between constructs, vehicles, and measurements of g. The construct of g represents variance common to diverse mental tests. g is based on the positive manifold, which refers to positive correlations among tests given to representative samples. The positive correlations indicate that people who score high on one test generally score high on all others. g is a source of variance (i.e., individual differences) in test performance and therefore provides no information about an individual’s level of g, which can be measured using vehicles of g.

Vehicles of g refer to methods used to elicit an individual’s level of g. Common vehicles of g include IQ tests, academic aptitude tests (SAT, ACT, PSAT), and elementary cognitive tasks (ECTs) that measure reaction times. All mental tests are g loaded to some extent, a finding consistent with Spearman’s (1927, pp. 197–198) principle of the indifference of the indicator, which states that all mental tests are loaded with g, irrespective of their content. A test’s g loading represents its correlation with g. Tests with strong g loadings generally predict school and work criteria well, whereas tests with weak g loadings generally predict such criteria poorly (Jensen, 1998, pp. 270–294).

Measurements of g refer to the measurement scale of g-loaded tests (e.g., interval or ratio). IQ scores are based on an interval scale, which permits ranking individuals on a trait (from highest to lowest) and assumes equal intervals between units. IQ tests provide information about an individual’s performance on g-loaded tests (compared to other members in his or her cohort), which can be converted to a percentile. Unfortunately, IQ scores lack an absolute zero point and therefore do not permit proportional comparisons between individuals such as “individual A is twice as smart (in IQ points) as individual B,” which would require a ratio scale of measurement.
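The positive manifold and a test’s g loading can be made concrete with a small simulation. The sketch below is illustrative only (the loadings and examinees are hypothetical, not data from any real battery): when four test scores share a single latent factor, every pairwise correlation is positive, and each pair’s correlation is approximately the product of the two tests’ g loadings.

```python
import random
import statistics

random.seed(42)
n = 20000
loadings = [0.8, 0.7, 0.6, 0.5]   # hypothetical g loadings for four tests

# Each simulated examinee's score on test j = loading_j * g + specific noise,
# scaled so that every test has unit variance.
people = []
for _ in range(n):
    g = random.gauss(0, 1)
    people.append([lam * g + random.gauss(0, (1 - lam**2) ** 0.5)
                   for lam in loadings])

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = statistics.fmean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

tests = list(zip(*people))   # one sequence of scores per test

# Positive manifold: every pairwise correlation between tests is positive,
# and close to the product of the corresponding g loadings.
for i in range(4):
    for j in range(i + 1, 4):
        r = corr(tests[i], tests[j])
        print(f"r(test{i}, test{j}) = {r:.2f}  expected ~ {loadings[i] * loadings[j]:.2f}")
        assert r > 0
```

With one common factor removed, nothing systematic remains to correlate the tests, which is exactly the sense in which g is "variance common to mental tests."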
g-Loaded Tests

This section reviews common tests of g. The tests include IQ tests, aptitude tests (SAT, ACT, PSAT), tests of fluid and crystallized intelligence, elementary cognitive tasks (ECTs), and tests of executive functions.
IQ Tests

IQ tests include the Wechsler Intelligence Scales, which are among the most widely used IQ tests in the world. The Wechsler Scales are age-normed and define the average IQ at any age as 100 (with a standard deviation of 15). The scales yield four ability indexes: verbal comprehension, which measures verbal abilities (e.g., vocabulary knowledge); perceptual reasoning, which measures
non-verbal reasoning (e.g., building a model with blocks); processing speed, which measures psychomotor speed (e.g., completing a coding chart); and working memory, which measures the ability to manipulate information in immediate memory. Working memory is a strong correlate of g (e.g., Gignac & Watkins, 2015; see also, Colom, Rebollo, Palacios, Juan-Espinosa, & Kyllonen, 2004). Working memory can be measured using the Wechsler backward digit-span subtest, which measures the ability to repeat a series of digits in reverse order. The Wechsler Scales yield a strong g factor (Canivez & Watkins, 2010), which measures variance common to mental tests. g largely explains the predictive power of tests, which lose predictive power after (statistically) removing g from tests (Jensen, 1998, pp. 270–294; see also, Coyle, 2018a; Ree, Earles, & Teachout, 1994). The Wechsler vocabulary subtest has one of the strongest correlations with g (compared to other subtests), suggesting that vocabulary knowledge is a good proxy of g. A Wechsler subtest with a relatively weak g loading is coding, which measures the ability to quickly complete a coding chart. Coding partly measures handwriting speed, which involves a motor component that correlates weakly with g (e.g., Coyle, 2013). IQ scores are based on an interval (rather than ratio) scale, which estimates where an individual ranks relative to others in his or her age group. IQ scores can be converted to a percentile rank, which describes the percentage of scores that are equal to or lower than it. (For example, a person who scores at the 95th percentile performs better than 95% of people who take the test.) However, because IQ scores are not based on a ratio scale (and therefore have no real zero point), they cannot describe where a person stands in proportion to another person.
Therefore, IQ scores do not permit statements such as “a person with an IQ of 120 is twice as smart (in IQ points) as a person with an IQ of 60.” For such statements to be meaningful, cognitive performance must be measured on a ratio scale. Reaction times are based on a ratio scale and do permit proportional statements (for similar arguments, see Haier, 2017, pp. 41–42; Jensen, 2006, pp. 56–58).
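The percentile conversion described above is straightforward arithmetic: because IQ is normed to a mean of 100 and a standard deviation of 15, a score maps to a percentile through the cumulative distribution function of the normal distribution. A minimal sketch (the function name is ours, not from any testing manual):

```python
import math

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile rank of an IQ score under the normal model
    used to norm tests such as the Wechsler scales."""
    z = (iq - mean) / sd
    # Normal CDF expressed via the error function.
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

print(round(iq_percentile(115)))  # ~84: one SD above the mean
print(round(iq_percentile(130)))  # ~98: two SDs above the mean
```

Note that the function maps ranks, not ratios: an IQ of 130 is at roughly the 98th percentile while an IQ of 65 is near the 1st, but nothing in this calculation licenses the claim that the first person is "twice as smart" as the second.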
Aptitude Tests

Aptitude tests are designed to measure specific abilities (verbal or math) and predict performance in a particular domain (school or work). Aptitude tests include the SAT and ACT, two college admissions tests taken in high school; the PSAT, a college readiness test taken in junior high school; and the Armed Services Vocational Aptitude Battery (ASVAB), a selection test used by the US military. All of these tests produce scores based on an interval scale and provide percentiles to compare an examinee to others in his or her cohort. The SAT, ACT, PSAT, and ASVAB also yield a strong g factor, which accounts for about half of the variance in the tests. All of the tests correlate strongly with IQ tests and g factors based on other tests, suggesting that they are in fact
“intelligence” tests, even though “intelligence” is not mentioned in their names. Finally, all of the tests derive their predictive power for work and school criteria largely (though not exclusively) from g (e.g., Coyle & Pillow, 2008; see also Coyle, Purcell, Snyder, & Kochunov, 2013).
Tests of Fluid and Crystallized Intelligence

Cattell (1963; see also Brown, 2016; Horn & Cattell, 1966) distinguished between fluid intelligence, which measures general reasoning ability on novel problems, and crystallized intelligence, which measures culturally acquired knowledge. A widely used test of fluid intelligence is the Raven’s Progressive Matrices. Each Raven’s item depicts a 3 × 3 grid, with the lower right cell empty and the other cells filled with shapes that form a pattern. Participants must select the shape (from eight options) that completes the pattern. The Raven’s correlates strongly with a g based on diverse tests (g loading [λ] ≈ .70), making it a good measure of g, and it also loads moderately on a visuospatial factor (λ ≈ .30; Gignac, 2015). Crystallized intelligence is often measured using vocabulary and general knowledge tests. Both types of tests measure culturally acquired knowledge and typically have among the highest correlations with a g based on diverse tests (λ ≈ .80; Gignac, 2015). Fluid and crystallized intelligence show different developmental trajectories over the lifespan (20–80 years). Fluid intelligence begins to decline in early adulthood and shows rapid declines in middle and late adulthood. In contrast, crystallized intelligence shows slight gains until later adulthood, with modest declines thereafter (e.g., Tucker-Drob, 2009, p. 1107).
Elementary Cognitive Tasks (ECTs)

ECTs examine relations between g and mental speed using reaction times (RTs) to simple stimuli (e.g., lights or sounds) (for a review see Jensen, 2006, pp. 155–186). ECTs measure two types of RTs: simple RT (SRT), which measures the speed of responding to a single stimulus (with no distractors), and choice RT (CRT), which measures the speed of responding to a target stimulus paired with one or more distractors. In general, RTs increase (become slower) with the number of distractors, which increases the complexity of the ECT. Moreover, RT-IQ relations, and RT relations with other g-loaded measures (e.g., working memory), increase as a function of task complexity. RT-IQ relations are weakest for SRT and stronger for CRT, with RT-IQ relations increasing with the complexity of the ECT (e.g., Jensen, 2006, pp. 164–166). Such a pattern is consistent with the idea that intelligence involves the ability to handle complexity (Gottfredson, 1997). A similar pattern is found when RT is correlated with participants’ age in childhood (up to 20 years) or adulthood (20–80 years). Age correlates more strongly with CRT than with SRT, and
CRT relations with age generally increase with the number of distractors (e.g., Jensen, 2006, pp. 105–117). ECTs can separate the effects of RT, which measures how quickly participants initiate a response to a reaction stimulus (light or sound), from movement time (MT), which measures how quickly participants execute a response after initiating it. RT and MT can be measured with the Jensen box (Jensen, 2006, pp. 27–31). The Jensen box involves a home button surrounded by a semicircle of one-to-eight response buttons, which occasionally light up. The participant begins with a finger on the home button, waits for a response button to light up, and then has to release the home button and press the response button. RT is the interval between the lighting of the response button and the release of the home button. MT is the interval between the release of the home button and the press of the response button. RT generally correlates more strongly with IQ and task complexity than does MT (Jensen, 2006, p. 234). Such results suggest that IQ reflects the ability to evaluate options and initiate a response (i.e., RT) more than the ability to execute a motoric response after deciding to initiate it (cf. Coyle, 2013).
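The RT/MT decomposition is simply arithmetic on three timestamps from a trial. A toy sketch (the variable names and values are made up for illustration, not taken from any real Jensen-box apparatus):

```python
# Hypothetical timestamps (in seconds) from a single trial:
stimulus_on = 0.000    # a response button lights up
home_release = 0.312   # finger leaves the home button
button_press = 0.498   # finger presses the lit response button

# RT: time to evaluate options and initiate a response.
rt = home_release - stimulus_on
# MT: time to execute the movement after initiating it.
mt = button_press - home_release

print(f"RT = {rt * 1000:.0f} ms, MT = {mt * 1000:.0f} ms")
```

Separating the two intervals is what lets researchers show that the decision component (RT) carries most of the correlation with IQ, while the purely motoric component (MT) carries little.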
Executive Functions (EFs)

Executive functions are cognitive abilities used to plan, control, and coordinate behavior. EFs include three cognitive abilities: updating, which measures the ability to update information in working memory; shifting, which measures the ability to shift attention to different stimuli or goals; and inhibition, which measures the ability to suppress distractions (Miyake et al., 2000). Of the three EFs, updating and its analog of working memory correlate most strongly with g (e.g., Friedman et al., 2006; see also, Benedek, Jauk, Sommer, Arendasy, & Neubauer, 2014). The relation between working memory and g approaches unity in latent variable analysis (e.g., Colom et al., 2004; see also, Gignac & Watkins, 2015), with a mean meta-analytic correlation of .48 among manifest variables (Ackerman, Beier, & Boyle, 2005). The three major EFs (updating, shifting, inhibition) are related to each other, suggesting a general EF factor. Controlling for correlations among the three EFs indicates that updating (an analog of working memory) correlates most strongly with g, whereas shifting and inhibition correlate weakly with g (e.g., Friedman et al., 2006).
Models of Intelligence and g

Two prominent models of g are a Spearman model with no group factors, and a hierarchical model with group factors (Jensen, 1998, pp. 73–81). Group factors estimate specific abilities (e.g., verbal, math, spatial), whereas g estimates variance common to all abilities. Group factors (and the tests used to estimate them) almost always correlate positively, reflecting shared variance among the factors. The Spearman model estimates g using manifest variables
Defining and Measuring Intelligence
(e.g., test scores), with no intervening group factors. In contrast, the hierarchical model estimates g based on a pyramidal structure, with g at the apex, group factors (broad and narrow) in the middle, and manifest variables (individual tests) at the base. There are many hierarchical models of g with group factors. One of the most notable is the Cattell-Horn-Carroll (CHC) model (McGrew, 2009). The CHC model describes g as a third-order factor, followed by broad second-order group factors, each loading on g, and narrow first-order group factors, each loading on a broad factor. The broad factors (sample narrow factors in parentheses) include fluid intelligence (induction), crystallized intelligence (general knowledge), quantitative knowledge (math knowledge), processing speed (perceptual speed), and short- and long-term memory (working memory capacity). In practice, intelligence research often targets g and a small number of group factors relevant to a study's aims. It should be emphasized that all group factors (broad and narrow) are related to g. Therefore, the unique contribution of a group factor (e.g., math ability) to a criterion (e.g., school grades) can be examined only after statistically removing g from the factor, a point revisited in the section on non-g factors.
Invariance of g

Using hierarchical models of g, Johnson and colleagues (Johnson, Bouchard, Krueger, McGue, & Gottesman, 2004; Johnson, te Nijenhuis, & Bouchard, 2008) estimated correlations among g factors based on different batteries of cognitive tests. An initial study (Johnson et al., 2004; N = 436 adults) estimated g and diverse group factors using three test batteries: Comprehensive Ability Battery (14 tests estimating five group factors), Hawaii Battery (17 tests estimating five group factors), and Wechsler Adult Intelligence Scale (11 tests estimating three group factors). g factors for each battery were estimated as second-order factors in latent variable analyses. Although the three batteries differed on key dimensions (e.g., number of tests, content of tests, number of group factors), the g factors of the batteries correlated nearly perfectly (r ≈ 1.00). The near perfect correlations suggest that g is independent of specific tests and that g factors based on diverse test batteries are virtually interchangeable. Johnson et al.'s (2004) results were replicated in a subsequent study (Johnson et al., 2008), cleverly titled "Still just 1 g: Consistent results from five test batteries." The study involved Dutch seamen (N = 500) who received five test batteries. The batteries estimated g and different group factors (perceptual, spatial, mechanical, dexterity), with few verbally loaded factors. Consistent with Johnson et al.'s (2004) results, the g factors of the different batteries correlated .95 or higher, with one exception. The exception was a test battery composed entirely of matrix-type reasoning tests (Cattell Culture Fair Test), which yielded a g that correlated .77 or higher with the g factors of the other
9
10
t. r. coyle
tests. In Johnson et al.’s (2008) words, the results “provide evidence both for the existence of a general intelligence factor [i.e., g] and for the consistency and accuracy of its measurement” (p. 91). Johnson et al.’s (2004, 2008) results are consistent with Spearman’s (1927) principle of the indifference of the indicator. This principle is based on the idea that all cognitive tests are indicators of g and load on g (to some extent). The g loading of a test represents its correlation with g, which reflects how well it estimates g. The degree to which a test battery estimates g depends on the number and diversity of tests in the battery (e.g., Major, Johnson, & Bouchard, 2011). Larger and more diverse batteries like the ones used by Johnson et al. (2004, 2008) generally yield better estimates of g because such batteries are more likely to identify variance common to tests (i.e., g) and have test-specific variances cancel out. Johnson et al.’s (2004, 2008) research estimated g using samples from WEIRD countries (e.g., United States and Europe). WEIRD stands for Western, Educated, Industrialized, Rich, and Democratic. WEIRD countries have high levels of wealth and education (Henrich, Heine, & Norenzayan, 2010), which contribute to cognitive development, specific abilities (e.g., verbal, math, spatial), and g. Non-WEIRD countries have fewer resources, which may retard cognitive development and yield a poorly defined g factor (which explains limited variance among tests). Warne and Burningham (2019) examined g factors in non-WEIRD countries (e.g., Bangladesh, Papua New Guinea, Sudan). Cognitive test data were obtained for 97 samples from 31 non-WEIRD countries totaling 52,340 individuals. Exploratory factor analyses of the tests estimated g, defined as the first unrotated factor when only one factor was extracted, or a second-order factor when multiple factors were extracted. 
A single g factor was observed in 71 samples (73%), and a second-order g factor was observed in 23 of the remaining 26 samples (83%). The average variance explained by the first unrotated factor was 46%, which is consistent with results from WEIRD countries. In sum, a clearly identified g factor was observed in 94 of 97 samples from non-WEIRD countries, suggesting that g is a universal human trait, found in both WEIRD and non-WEIRD countries.
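The "first unrotated factor" idea can be sketched with power iteration on a correlation matrix. This is a PCA-style approximation rather than principal-axis factoring (real EFA would use dedicated software), and the correlation matrix below is invented:

```python
# Invented correlation matrix for four tests.
R = [
    [1.00, 0.56, 0.36, 0.32],
    [0.56, 1.00, 0.31, 0.28],
    [0.36, 0.31, 1.00, 0.56],
    [0.32, 0.28, 0.56, 1.00],
]

def first_factor(R, iters=200):
    """Return (loadings, eigenvalue) of the first component of R,
    found by power iteration."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eig = sum(v[i] * sum(R[i][j] * v[j] for j in range(n)) for i in range(n))
    return [vi * eig ** 0.5 for vi in v], eig

loadings, eigenvalue = first_factor(R)
share = eigenvalue / len(R)  # proportion of total variance explained
```

With a positive manifold, every loading comes out positive and the first factor explains a large share of total variance, paralleling the 46% figure reported above.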
Predictive Power of g and g-Loaded Tests

Intelligence tests are useful because they predict diverse criteria in everyday life. The current section reviews research on the predictive power of intelligence at school and work. The review focuses on recent and seminal studies of g-loaded tests. g-loaded tests include IQ tests (Wechsler Scales), college aptitude tests (SAT, ACT, PSAT), military selection tests (Armed Services Vocational Aptitude Battery), and other cognitive tests (e.g., ECTs). In general, any test that involves a mental challenge will be g-loaded (to some extent), with the degree of relatedness between a test and g increasing with task complexity.
Intelligence and School

Intelligence tests were developed to predict school performance and so it is no surprise that they predict school grades. Roth et al. (2015) examined the meta-analytic correlation between intelligence tests (verbal and nonverbal) and school grades with 240 samples and 105,185 students. The population correlation was .54 after correcting for artifacts (measurement error and range restriction). Moderator analyses indicated that the test-grade correlations increased from elementary to middle to high school (.45, .54, .58), and were stronger for math/science (.49) than for languages (.44), social sciences (.43), art/music (.31), and sports (.09). Roth et al. (2015) argued that the increases in effect sizes across grade levels could be attributed to increases in the complexity of course material, which would decrease the ability to compensate with practice and increase the contribution of intelligence. Are intelligence–grade correlations attributable to students' socioeconomic status (SES), which reflects parental wealth, education, and occupational status? The question is important because intelligence tests and college admissions tests (SAT) have been assumed to derive their predictive power from SES. To address this question, Sackett, Kuncel, Arneson, Cooper, and Waters (2009) meta-analyzed SAT-GPA correlations using college GPAs from 41 institutions and correcting for range restriction. (The SAT correlates strongly with a g based on diverse tests [r = .86, corrected for nonlinearity, Frey & Detterman, 2004].) The meta-analytic SAT-GPA correlation was .47, which dropped negligibly to .44 after controlling for SES (Sackett et al., 2009, p. 7). Contrary to the assumption that the SAT derives its predictive power from SES, the results suggest that SES has a negligible impact on SAT-GPA correlations. The predictive power of admissions tests is not limited to undergraduate criteria but also applies to graduate and professional school criteria.
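The range-restriction corrections mentioned above are typically computed with Thorndike's Case II formula for direct range restriction. A minimal sketch, with illustrative numbers rather than those of Roth et al. or Sackett et al.:

```python
import math

def correct_range_restriction(r, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction:
    r_c = r*u / sqrt(1 - r^2 + (r*u)^2), where u = SD ratio."""
    u = sd_unrestricted / sd_restricted
    return (r * u) / math.sqrt(1 - r * r + (r * u) ** 2)

# Illustrative: an observed correlation of .35 in a selected sample
# whose SD is half that of the applicant pool is corrected to about .60.
corrected = correct_range_restriction(0.35, 2.0, 1.0)
```

When u = 1 (no restriction), the formula returns r unchanged, which is a useful sanity check.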
Kuncel and Hezlett (2007) meta-analyzed correlations involving graduate admissions tests, correcting for range restriction and measurement error. The tests included the Graduate Record Examination (GRE), Law School Admission Test (LSAT), Pharmacy College Admission Test (PCAT), Miller Analogies Test (MAT), Graduate Management Admission Test (GMAT), and Medical College Admission Test (MCAT). The tests robustly predicted first-year graduate GPA (r > .40, all tests), overall graduate GPA (r > .40, all tests), and qualifying exams (r > .39, GRE and MAT). Moreover, the tests also predicted criteria other than grades, including publication citations (r = .23, GRE), faculty evaluations (r > .36, GRE and MAT), and licensing exams (r > .45, MCAT and PCAT). All correlations were positive, indicating better performance was associated with higher achievement. Are test–grade correlations attributable to g? The question is important because g is considered the "active ingredient" of tests, with the predictive power of a test increasing with its g loading. To address this question, Jensen
(1998, p. 280) correlated the g loadings of 11 subtests of the Wechsler Adult Intelligence Scale (WAIS) with their corresponding validity coefficients for college grades. The correlation between the g loadings and validity coefficients was r = .91, suggesting that a test’s predictive power is largely explained by g. Jensen (1998, p. 280) replicated the result in an analysis correlating the g loadings of the WAIS subtests with their validity coefficients for high-school class rank (r = .73). Separately, Thorndike (1984) found that 80–90% of the predictable variance in school grades was accounted for by g, with 10–20% explained by non-g factors measured by IQ and other tests. Together, the results suggest that the predictive power of cognitive tests is largely explained by g, with non-g factors explaining little variance.
Intelligence and Work

Intelligence tests and other g-loaded tests also predict work performance (e.g., productivity and supervisor ratings). In a classic meta-analysis, Schmidt and Hunter (1998) examined the predictive power of general cognitive ability tests for overall job performance, as well as the incremental validity of other predictors (beyond general ability). The meta-analytic correlation between general ability and job performance was r = .51. Incremental validity, obtained after accounting for general ability, was negligible for other predictors, including job experience (.03), years of education (.01), conscientiousness (.09), and job knowledge (.07). The negligible effects suggest that the other predictors contributed little to the prediction of job performance beyond general ability. In related research, Schmidt and Hunter (2004) showed that the relationship between general ability and job performance was largely mediated by job knowledge, with general ability leading to increases in job knowledge, which in turn improved job performance. Is the predictive power of cognitive tests attributable to non-g factors? Non-g factors include specific abilities measured by cognitive tests. Specific abilities include math, verbal, and spatial abilities, which might contribute to the validity of tests beyond g. Ree et al. (1994) examined the validity of g (variance common to tests) and specific abilities (s, variance unique to tests) using the Armed Services Vocational Aptitude Battery (ASVAB), which was given to US Air Force recruits (N = 1,036). g was correlated with job performance based on three criteria (hands-on, work sample, walk-through), and s was measured as the incremental validity coefficient (beyond g). The average meta-analytic correlation (across all criteria) between g and job performance was .42, whereas the average incremental validity coefficient for s was .02. Based on these results, Ree et al.
(1994) concluded that the predictive power of cognitive ability tests was attributable to “not much more than g” and that specific abilities (i.e., s) have negligible validity for job performance. A key question is whether g-loaded tests predict criteria at very high ability levels. The question is important because IQs and g-loaded tests have been
assumed to lose predictive power beyond an ability threshold, defined as IQs above 120 (e.g., Gladwell, 2008, p. 79). Data relevant to the question come from the Study of Mathematically Precocious Youth (SMPY; for a review, see Lubinski, 2016). The SMPY involves a sample of gifted subjects who took the SAT around age 12 years and scored in the top 1%. (The SAT correlates strongly with IQ and other g-loaded tests [e.g., Frey & Detterman, 2004].) The top 1% represents an IQ (M = 100, SD = 15) of around 135 or higher, which is well above 120; therefore, based on the threshold assumption, SAT scores of SMPY subjects should have negligible predictive power. Contrary to the assumption, higher SAT scores predicted higher levels of achievement around age 30 years in SMPY subjects (e.g., higher incomes, more patents, more scientific publications, more novels) (Lubinski, 2016, p. 913). The results indicate that higher levels of g are associated with higher achievement levels, even beyond the putative ability threshold of IQ 120.
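The incremental-validity logic used by Schmidt and Hunter (1998) can be sketched from pairwise correlations alone, using the standard formula for a two-predictor multiple correlation. The ability-conscientiousness correlation is assumed to be zero here for simplicity:

```python
import math

def multiple_R(r_y1, r_y2, r_12):
    """Multiple correlation of a criterion with two predictors,
    computed from the three pairwise correlations."""
    r_sq = (r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12 ** 2)
    return math.sqrt(r_sq)

# Reported validities: general ability .51, conscientiousness .31;
# the predictor intercorrelation of 0 is an assumption of this sketch.
R_both = multiple_R(0.51, 0.31, 0.0)
incremental = R_both - 0.51  # gain in validity beyond ability alone
```

Under these inputs the gain works out to roughly .09, matching the conscientiousness figure cited earlier.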
Intelligence and the Brain

The brain is the seat of intelligence. Early studies found that head size (an indirect measure of brain size) correlates positively with IQ test performance (e.g., r ≈ .20, Deary et al., 2007, p. 520). Contemporary studies of intelligence and the brain use modern neuroimaging technologies. These technologies include magnetic resonance imaging (MRI), which measures brain structure (e.g., brain volume and cortical thickness); functional MRI (fMRI), which measures neural activity at rest or during task performance (typically based on blood flow); and positron emission tomography (PET), which measures brain chemistry and metabolism (e.g., glucose metabolism). The current section selectively reviews research on intelligence and the brain. Two hypotheses are discussed. The first is the "bigger is better hypothesis," which predicts that intelligence correlates positively with larger quantities of brain tissue (gray matter or white matter), based on various brain measurements (e.g., volume, thickness, density). The second hypothesis is the "efficiency hypothesis," which predicts that more intelligent people have more efficient brains, based on functional (e.g., cortical glucose metabolism) and structural (e.g., length of white matter tracts) measurements.
Efficiency and Intelligence

In a seminal study of the efficiency hypothesis, Haier et al. (1988) had eight healthy males solve problems on the Raven's Advanced Progressive Matrices (RAPM) while measuring cortical glucose metabolism with PET. The RAPM is a non-verbal reasoning test with a strong g loading (r ≈ .70, Gignac, 2015). Cortical glucose metabolism is a measure of cortical energy consumption. The key result was a negative correlation between absolute glucose metabolic
rate and RAPM performance (whole-slice r = –.75, Haier et al., 1988, p. 208), indicating that glucose metabolism was lower for higher ability subjects. The result suggested that higher ability subjects were able to solve RAPM problems more easily and therefore processed the task more efficiently (and used less energy during problem solving). Haier et al.'s (1988) findings led to subsequent studies of the efficiency hypothesis (for a review, see Neubauer & Fink, 2009; see also, Haier, 2017, pp. 153–155). Consistent with the hypothesis, some studies found negative relations between intelligence and brain activity, measured as glucose metabolism or cerebral blood flow (Neubauer & Fink, 2009, pp. 1007–1008). Support for the hypothesis was strongest in frontal areas and for tasks of low-to-moderate difficulty. In contrast, other studies found mixed or contradictory evidence for the hypothesis (e.g., Neubauer & Fink, 2009, pp. 1010–1012). Basten, Stelzel, and Fiebach (2013) clarified the mixed results by distinguishing between two brain networks: the task-negative network (TNN), where brain activation decreases with task difficulty, and the task-positive network (TPN), where brain activation increases with task difficulty. When solving difficult RAPM problems, higher ability subjects showed lower efficiency (more activation) in the TPN, which is related to attentional control; but higher efficiency (less activation) in the TNN, which is related to mind wandering. According to Basten et al. (2013, pp. 523–524), the results suggest that "while high-intelligent individuals are more efficient in deactivating the TNN during task processing, they put more effort into cognitive control-related activity in the TPN." More generally, the results suggest that studies of the efficiency hypothesis should consider both brain networks (TNN and TPN) or risk drawing erroneous conclusions about brain activity and intelligence.
The neural efficiency hypothesis has also been examined using graph analysis, which measures efficiency based on brain regions (nodes) and connections between the regions (paths) (Li et al., 2009; Santarnecchi, Galli, Polizzotto, Rossi, & Rossi, 2014; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). Graph analysis examines the distance between brain regions (path length), based on structure (e.g., white matter tracts) or function (e.g., temporal activation between regions). Path length is assumed to measure the efficiency of neural communication, with shorter paths being related to quicker (and more efficient) neural processing between brain regions. Using structural MRI of white matter tracts, Li et al. (2009) correlated IQs based on the Wechsler Adult Intelligence Scale (N = 79 healthy subjects) with two measures of efficiency: mean shortest path length between nodes, and global efficiency (inverse of the mean shortest path length between nodes). Consistent with the efficiency hypothesis, IQ correlated negatively with mean shortest path length (r = –.36, weighted) and positively with global efficiency (r = .31) (Li et al., 2009, p. 9). van den Heuvel et al. (2009) found similar results using graph analysis and functional MRI, with measurements based on temporal activation between different brain regions. Together, the results suggest
that higher ability participants have more efficient connections between brain regions, which facilitate communication between regions and improve task performance.
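The graph measures described above, characteristic path length and global efficiency, can be computed with breadth-first search. The four-node graph below is a toy example, not a real connectome:

```python
from collections import deque

# Toy "connectome": four regions (nodes) and five connections (edges).
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
adj = {i: set() for i in range(n)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def bfs_distances(src):
    """Hop counts from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
dists = [bfs_distances(i)[j] for i, j in pairs]
char_path_length = sum(dists) / len(pairs)                  # mean shortest path
global_efficiency = sum(1 / d for d in dists) / len(pairs)  # mean inverse distance
```

Shorter mean paths yield higher global efficiency, which is the direction of the IQ correlations reported by Li et al. (2009).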
Brain Size and Intelligence

Intelligence and g-loaded measures are positively associated with various measures of brain size, including brain volume and cortical thickness, based on whole-brain and regional measurements (for a review, see Haier, 2017, pp. 101–186). Early studies found a small but positive relation between intelligence and external head size (head circumference), a crude and indirect indicator of brain size (e.g., r ≈ .20, Deary et al., 2007, p. 520). Contemporary studies have examined intelligence–brain size relations using modern neuroimaging technologies (e.g., structural MRI). Consistent with the earlier studies, MRI studies show positive relations between g-loaded measures (e.g., IQ, Raven's, Wechsler scales) and total brain volume, indicating that larger brains are associated with higher levels of g (e.g., Gignac & Bates, 2017). The correlations between g-loaded measures and brain volume vary with quality of intelligence measurements and with corrections (if any) for range restriction. Pietschnig, Penke, Wicherts, Zeiler, and Voracek (2015) found a meta-analytic correlation of r = .24, with no adjustments for quality or range restriction. Gignac, Vernon, and Wickett (2003) and McDaniel (2005) found meta-analytic correlations between intelligence and brain volume of .43 and .33, respectively, after correcting for range restriction, with McDaniel (2005) analyzing only healthy samples and total brain measurements. Finally, Gignac and Bates (2017) found a correlation between intelligence and brain size of r = .31, after correcting for range restriction, but the effect was moderated by quality. Intelligence measures were classified into three quality categories (fair, good, excellent), based on number of tests (1 to 9+), dimensions of tests (1 to 3+), testing time (3 to 40+ minutes), and correlation with g (< .50 to > .94).
The correlations between brain size and intelligence increased with quality of measure (.23, .32, .39), indicating that quality moderated the effects. A recent preregistered study (Nave, Jung, Linnér, Kable, & Koellinger, 2019) with a large UK sample (N = 13,608) found that total brain volume correlated positively with fluid intelligence (r = .19) and with educational attainment (r = .12), and that the correlations (for fluid intelligence and educational attainment, respectively) were primarily attributable to gray matter (.13, .06) rather than white matter (.06, .03). Are the correlations between brain size and intelligence attributable to g (variance common to cognitive abilities)? The question is important because meta-analyses of brain size and intelligence often rely on marker tests of g (e.g., Raven's Matrices) rather than using factor analysis (of multiple tests) to estimate g. (Marker tests of g may be loaded with substantial non-g variance, which may also correlate with brain size.) Using structural MRI, Colom, Jung,
and Haier (2006a; see also, Colom, Jung, & Haier, 2006b) addressed the question by correlating the subtest g-loadings of the Wechsler Adult Intelligence Scale (WAIS) with the subtest validity coefficients for cluster size. Cluster size was an aggregate measure of brain volume, based on gray matter and white matter, and was obtained using voxel-based morphometry. g loadings were obtained using hierarchical factor analysis of the WAIS subtests. The correlation between the subtest g loadings and the subtest validity coefficients for cluster size was r = .95 (Colom et al., 2006a, p. 565). The near perfect correlation suggests that g (rather than non-g factors) explains the effects and that larger brain volumes are associated with higher levels of g. Parallel analyses examined relations between g and cluster size across different brain regions (frontal, temporal, occipital, parietal, insula). Significant relations were concentrated in the frontal region but also included other regions (e.g., temporal, occipital, parietal, insula), suggesting that a distributed brain network mediates g (Colom et al., 2006a, p. 568; Colom et al., 2006b, p. 1361).
Parieto-Frontal Integration Theory (PFIT)

The parieto-frontal integration theory (PFIT) integrates brain–intelligence relationships (Haier & Jung, 2007; Jung & Haier, 2007). PFIT emphasizes the role of the frontal and parietal regions. These regions form a key network that contributes to intelligence, with other regions (e.g., sensory and motor) also playing a role. PFIT assumes that sensory information is initially integrated in posterior regions (occipital and temporal), followed by anterior regions (parietal) and associative regions (frontal). The theory also assumes that similar levels of g can emerge by engaging different brain regions. For example, one person may have high levels of verbal ability but low levels of math ability, while another may show the opposite pattern, with both of them showing similar levels of g. Basten, Hilger, and Fiebach (2015) examined PFIT in a meta-analysis of functional (n = 16 studies) and structural (n = 12 studies) brain imaging studies. The meta-analysis examined associations between intelligence and either (a) brain activation during cognitive performance in functional studies, or (b) amount of gray matter based on voxel-based morphometry in structural studies. Intelligence was based on established g-loaded tests such as the Raven's Advanced Progressive Matrices and the Wechsler Adult Intelligence Scale. All studies involved healthy adults and reported spatial coordinates in standard brain reference space (e.g., Talairach). The meta-analysis identified brain regions with significant clusters of activation or voxels linked to intelligence. The functional results yielded eight significant clusters, located in the lateral and medial frontal regions and the bilateral parietal and temporal regions.
The structural results yielded 12 significant clusters, located in the lateral and medial frontal, temporal, and occipital regions, and subcortical regions linked to the default mode network (which is related to brain activity at rest). Curiously, the functional and structural results showed limited
overlap, and the structural results showed no parietal effects. Despite these anomalies, both sets of results were broadly consistent with the PFIT, which predicts that intelligence involves a distributed network of brain regions. Additional support for PFIT was obtained by Colom et al. (2009), who found that different measures of intelligence (g, fluid, crystallized) correlated with gray matter volumes in parieto-frontal regions.
Network Neuroscience Theory of Intelligence

Network neuroscience theory of intelligence (Barbey, 2018; see Chapter 6) complements and extends PFIT. Whereas PFIT targets a specific brain network (i.e., frontoparietal), network neuroscience theory specifies three distinct networks that operate throughout the brain. The three networks have different connections between nodes (regions) and different relations with specific and general abilities. Regular networks consist of short connections between local nodes, which promote local efficiency and support specific abilities (e.g., test-specific abilities). Random networks consist of random connections between all types of nodes (local or distant), which promote global efficiency and support broad and general abilities (e.g., fluid intelligence). Small world networks balance the features of regular and random networks. Small world networks consist of short connections between local nodes and long connections between distant nodes. Such networks exist close to a phase transition between regular and random networks. A transition toward a regular network engages local regions and specific abilities, whereas a transition toward a random network engages distant regions and general abilities. According to network neuroscience theory, the flexible transition between network states is the foundation of g (Barbey, 2018, pp. 15–17), which is associated with all mental abilities.
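The regular/small-world/random continuum can be illustrated with Watts-Strogatz-style rewiring of a ring lattice; the network sizes and rewiring probabilities below are arbitrary:

```python
import random

random.seed(0)

def ring_lattice(n, k):
    """Regular network: each node links to its k nearest neighbors."""
    return {(i, (i + d) % n) for i in range(n) for d in range(1, k // 2 + 1)}

def rewire(edges, n, p):
    """Watts-Strogatz-style rewiring: with probability p, replace an
    edge's far endpoint with a random node. p = 0 keeps the regular
    lattice, p = 1 yields an essentially random network, and small p
    produces the small-world regime in between."""
    out = set()
    for a, b in edges:
        if random.random() < p:
            b = random.randrange(n)
            while b == a or (a, b) in out:
                b = random.randrange(n)
        out.add((a, b))
    return out

regular = ring_lattice(20, 4)            # short local connections only
small_world = rewire(regular, 20, 0.1)   # mostly local, a few long-range
random_net = rewire(regular, 20, 1.0)    # connections go anywhere
```

In the theory's terms, sliding p between 0 and 1 traces the transition between network states that is hypothesized to underlie g.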
Outstanding Issues and Future Directions

This last section considers three areas for future research: non-g factors, development and intelligence, and genes and intelligence.
Non-g Factors

g permeates all cognitive abilities, which correlate positively with each other, indicating that people who rank high on one ability generally rank high on all others. Although g has received much attention in intelligence research, non-g factors have received less attention. Non-g factors refer to specific abilities unrelated to g. Such abilities include verbal, math, and spatial abilities, obtained after (statistically) removing g from tests. The removal of g from tests produces non-g residuals, which can be correlated with criteria at work
(e.g., job or income) or school (e.g., college major or grades). Early studies found that non-g factors had negligible predictive power for overall performance at work (e.g., supervisor ratings) and school (e.g., grade point average) (e.g., Brown, Le, & Schmidt, 2006; Ree et al., 1994; see also, Coyle, Elpers, Gonzalez, Freeman, & Baggio, 2018). In contrast, more recent studies have found that non-g factors robustly predict domain-specific criteria at school and work (for a review, see Coyle, 2018a). Coyle (2018a, pp. 2–9; see also, Coyle, 2018b; Coyle & Pillow, 2008; Coyle, Snyder, Richmond, & Little, 2015; Coyle et al., 2013) found that non-g residuals of the SAT and ACT math and verbal subtests differentially predicted school grades and specific abilities (based on other tests), as well as college majors and jobs in two domains: science, technology, engineering, and math (STEM), and the humanities (e.g., history, English, arts, music, philosophy). Math residuals of the SAT and ACT correlated positively with STEM criteria (i.e., STEM grades, abilities, majors, jobs) and negatively with humanities criteria. In contrast, verbal residuals showed the opposite pattern. The contrasting patterns were confirmed by Coyle (2018b), who correlated math and verbal residuals of latent variables, based on several ASVAB tests, with criteria in STEM and the humanities. (Unlike single tests such as the SAT or ACT, latent variables based on multiple tests are more likely to accurately measure a specific ability and reduce measurement error.) Coyle (2018a; see also, Coyle, 2019) interpreted the results in terms of investment theories (e.g., Cattell, 1987, pp. 138–146). Investment theories predict that investment in a specific domain (e.g., math/STEM) boosts the development of similar abilities but retards the development of competing abilities (e.g., verbal/humanities). Math residuals presumably reflect investment in STEM, which boosts math abilities but retards verbal abilities.
Verbal residuals presumably reflect investment in humanities, which boosts verbal abilities but retards math abilities. The different patterns of investment may be related to early preferences, which increase engagement in complementary activities (e.g., math–STEM) and decrease engagement in non-complementary activities (e.g., math–humanities) (Bouchard, 1997; Scarr & McCartney, 1983). Future research should consider the role of non-g factors linking cognitive abilities with brain criteria. Consistent with investment theories, sustained investment in a specific domain (via sustained practice) may boost non-g factors, which in turn may affect brain morphology linked to the domain. In support of this hypothesis, sustained practice in motor skill learning has been linked to changes in brain morphology (e.g., increases in gray matter) in regions related to the practice. Such changes have been observed for golf (Bezzola, Mérillat, Gaser, & Jäncke, 2011), balancing skills (Taubert et al., 2010), and juggling (Gerber et al., 2014) (for a critical review, see Thomas & Baker, 2013). A similar pattern may be observed for sustained practice in learning specific, non-g skills (e.g., math, verbal, spatial), which in turn may affect brain morphology in regions related to the non-g skills.
Future research should also consider the impact of non-g factors at different levels of ability using Spearman's Law of Diminishing Returns (SLODR). SLODR is based on Spearman's (1927) observation that correlations among mental tests decrease at higher ability levels, presumably because tests become less loaded with g, and more loaded with non-g factors. Such a pattern has been confirmed by meta-analysis (e.g., Blum & Holling, 2017). The pattern suggests that non-g factors related to brain criteria (e.g., volume or activity) may be stronger predictors at higher ability levels, which are associated with stronger non-g effects.
Development and Intelligence

A variation of SLODR is the age dedifferentiation hypothesis. The hypothesis predicts that the influence of g increases, and the influence of non-g factors decreases, over the lifespan (20–80 years) (e.g., Deary et al., 1996; see also, Tucker-Drob, 2009). The hypothesis is based on the assumption that a general ability factor related to all other abilities (e.g., mental slowing) exerts increasing influence over the lifespan. Although the hypothesis is theoretically plausible, it has received limited support (e.g., Tucker-Drob, 2009). Indeed, contrary to the dedifferentiation hypothesis, Tucker-Drob (2009, p. 1113) found that the proportion of variance in broad abilities explained by g declined over the lifespan (20–90 years). The decline in g was significant for three broad abilities (crystallized, visual-spatial, short-term memory). The decline may reflect ability specialization (via sustained practice), which may increase the influence of non-g factors over the lifespan.
Genetics, Intelligence, and the Brain

The cutting edge of intelligence research considers genetic contributions to intelligence, the brain, and diverse criteria. This issue was examined by Lee et al. (2018), who analyzed polygenic scores related to educational attainment in a sample of 1.1 million people. Polygenic scores are derived from genome-wide association studies (GWAS) and are computed as the sum of a person's trait-associated alleles, each weighted by its regression coefficient (beta) from the GWAS. Lee et al. (2018) examined polygenic scores related to educational attainment, measured as the number of years of schooling completed by an individual. Educational attainment is strongly g-loaded (e.g., Jensen, 1998, pp. 277–280). Lee et al. (2018) identified 1,271 independent and significant single-nucleotide polymorphisms (SNPs; dubbed "lead SNPs") for educational attainment. The median effect size for the lead SNPs equated to 1.7 weeks of schooling, or 1.1 and 2.6 weeks of schooling at the 5th and 95th percentiles, respectively. Moreover, consistent with the idea that the brain is the seat of intelligence, genes near the lead SNPs were overexpressed in the central
nervous system (relative to genes in a random set of loci), notably in the cerebral cortex and hippocampus.

Lee et al. (2018) used genetic information in their discovery sample of 1.1 million people to predict educational attainment in two independent samples: the National Longitudinal Study of Adolescent to Adult Health (Add Health, n = 4,775), a representative sample of American adolescents; and the Health and Retirement Study (HRS, n = 8,609), a representative sample of American adults over age 50. The polygenic scores explained 10.6% and 12.7% of the variance in educational attainment in Add Health and HRS, respectively, with a weighted mean of 11%. Other analyses indicated that the polygenic scores explained 9.2% of the variance in overall grade point average (GPA) in Add Health. These percentages approximate the variance in first-year college GPA explained by the SAT (e.g., Coyle, 2015, p. 20). The pattern was not specific to educational attainment. Similar associations with polygenic scores were obtained for related criteria, including cognitive test performance, self-reported math ability, and hardest math class completed (Lee et al., 2018).

Future research should use polygenic scores to examine moderators and mediators of relations among g-loaded measures, brain variables, and other criteria. One possibility is to examine the Scarr-Rowe effect, originally described by Scarr-Salapatek (1971; see also Tucker-Drob & Bates, 2015; Woodley of Menie, Pallesen, & Sarraf, 2018). The Scarr-Rowe effect predicts a gene-environment interaction that reduces the heritability of cognitive ability at low socioeconomic status (SES) levels, perhaps because low-SES environments suppress genetic potential whereas high-SES environments promote it (e.g., Woodley of Menie et al., 2018). Scarr-Rowe effects have been observed for g-loaded measures (IQ) in twin pairs in the US but not in non-US countries (Tucker-Drob & Bates, 2015).
In addition, Scarr-Rowe effects have been obtained using polygenic scores based on diverse cognitive phenotypes (e.g., IQ, educational attainment, neuropsychological tests). Consistent with Scarr-Rowe effects, polygenic scores correlate more strongly with IQ at higher SES levels (Woodley of Menie et al., 2018). Scarr-Rowe effects may be amplified for non-g factors, which may be particularly sensitive to environmental variation in school quality, school quantity (days of schooling), parental income, and learning opportunities. In particular, more beneficial environments may allow low-SES individuals to reach their genetic potential and facilitate the development of specific abilities unrelated to g.
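Both quantitative ideas above can be sketched together on synthetic data: the polygenic score as a beta-weighted sum of effect-allele counts, and the Scarr-Rowe prediction as a positive PGS-by-SES interaction in an ordinary least-squares model. All genotypes, betas, and effect sizes below are simulated for illustration; no real GWAS values are used:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 5_000, 100

# Simulated genotypes (0/1/2 copies of the effect allele) and GWAS betas.
genotypes = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)
betas = rng.normal(0, 0.05, size=n_snps)

# Polygenic score: sum of effect-allele counts weighted by their betas.
pgs = genotypes @ betas
pgs = (pgs - pgs.mean()) / pgs.std()            # standardize for the regression

# Simulated SES and phenotype with a built-in Scarr-Rowe-style interaction:
# the polygenic score predicts the phenotype more strongly at higher SES.
ses = rng.standard_normal(n_people)
phenotype = 0.2 * pgs + 0.3 * ses + 0.15 * pgs * ses + rng.standard_normal(n_people)

# OLS with an interaction term: phenotype ~ 1 + pgs + ses + pgs:ses
X = np.column_stack([np.ones(n_people), pgs, ses, pgs * ses])
(b0, b_pgs, b_ses, b_inter), *_ = np.linalg.lstsq(X, phenotype, rcond=None)
print(f"interaction coefficient: {b_inter:.2f}")  # recovers roughly 0.15
```

A significantly positive interaction coefficient is the signature pattern reported by Woodley of Menie et al. (2018): the PGS-IQ association strengthens as SES increases.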
Acknowledgement

This research was supported by a grant from the National Science Foundation's Interdisciplinary Behavioral and Social Science Research Competition (IBSS-L 1620457).
References

Ackerman, P. L., Beier, M. E., & Boyle, M. O. (2005). Working memory and intelligence: The same or different constructs? Psychological Bulletin, 131(1), 30–60.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27.
Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528.
Benedek, M., Jauk, E., Sommer, M., Arendasy, M., & Neubauer, A. C. (2014). Intelligence, creativity, and cognitive control: The common and differential involvement of executive functions in intelligence and creativity. Intelligence, 46, 73–83.
Bezzola, L., Mérillat, S., Gaser, C., & Jäncke, L. (2011). Training-induced neural plasticity in golf novices. Journal of Neuroscience, 31(35), 12444–12448.
Binet, A., & Simon, T. (1916). The development of intelligence in children. Baltimore, MD: Williams & Wilkins (reprinted 1973, New York: Arno Press).
Blum, D., & Holling, H. (2017). Spearman's law of diminishing returns. A meta-analysis. Intelligence, 65, 60–66.
Bouchard, T. J. (1997). Experience producing drive theory: How genes drive experience and shape personality. Acta Paediatrica, 86(Suppl. 422), 60–64.
Brown, K. G., Le, H., & Schmidt, F. L. (2006). Specific aptitude theory revisited: Is there incremental validity for training performance? International Journal of Selection and Assessment, 14(2), 87–100.
Brown, R. E. (2016). Hebb and Cattell: The genesis of the theory of fluid and crystallized intelligence. Frontiers in Human Neuroscience, 10, 1–11.
Canivez, G. L., & Watkins, M. W. (2010). Exploratory and higher-order factor analyses of the Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV) adolescent subsample. School Psychology Quarterly, 25(4), 223–235.
Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22.
Cattell, R. B. (1987). Intelligence: Its structure, growth and action. New York: North-Holland.
Colom, R., Haier, R. J., Head, K., Álvarez-Linera, J., Ángeles Quiroga, M., Chun Shih, P., & Jung, R. E. (2009). Gray matter correlates of fluid, crystallized, and spatial intelligence: Testing the P-FIT model. Intelligence, 37(2), 124–135.
Colom, R., Jung, R. E., & Haier, R. J. (2006a). Finding the g-factor in brain structure using the method of correlated vectors. Intelligence, 34(6), 561–570.
Colom, R., Jung, R. E., & Haier, R. J. (2006b). Distributed brain sites for the g-factor of intelligence. Neuroimage, 31(3), 1359–1365.
Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., & Kyllonen, P. C. (2004). Working memory is (almost) perfectly predicted by g. Intelligence, 32(3), 277–296.
Coyle, T. R. (2013). Effects of processing speed on intelligence may be underestimated: Comment on Demetriou et al. (2013). Intelligence, 41(5), 732–734.
Coyle, T. R. (2015). Relations among general intelligence (g), aptitude tests, and GPA: Linear effects dominate. Intelligence, 53, 16–22.
Coyle, T. R. (2018a). Non-g factors predict educational and occupational criteria: More than g. Journal of Intelligence, 6(3), 1–15.
Coyle, T. R. (2018b). Non-g residuals of group factors predict ability tilt, college majors, and jobs: A non-g nexus. Intelligence, 67, 19–25.
Coyle, T. R. (2019). Tech tilt predicts jobs, college majors, and specific abilities: Support for investment theories. Intelligence, 75, 33–40.
Coyle, T. R., Elpers, K. E., Gonzalez, M. C., Freeman, J., & Baggio, J. A. (2018). General intelligence (g), ACT scores, and theory of mind: ACT(g) predicts limited variance among theory of mind tests. Intelligence, 71, 85–91.
Coyle, T. R., & Pillow, D. R. (2008). SAT and ACT predict college GPA after removing g. Intelligence, 36(6), 719–729.
Coyle, T. R., Purcell, J. M., Snyder, A. C., & Kochunov, P. (2013). Non-g residuals of the SAT and ACT predict specific abilities. Intelligence, 41(2), 114–120.
Coyle, T. R., Snyder, A. C., Richmond, M. C., & Little, M. (2015). SAT non-g residuals predict course specific GPAs: Support for investment theory. Intelligence, 51, 57–66.
Deary, I. J., Egan, V., Gibson, G. J., Brand, C. R., Austin, E., & Kellaghan, T. (1996). Intelligence and the differentiation hypothesis. Intelligence, 23(2), 105–132.
Deary, I. J., Ferguson, K. J., Bastin, M. E., Barrow, G. W. S., Reid, L. M., Seckl, J. R., . . . MacLullich, A. M. J. (2007). Skull size and intelligence, and King Robert Bruce's IQ. Intelligence, 35(6), 519–525.
Frey, M. C., & Detterman, D. K. (2004). Scholastic assessment or g? The relationship between the scholastic assessment test and general cognitive ability. Psychological Science, 15(6), 373–378.
Friedman, N. P., Miyake, A., Corley, R. P., Young, S. E., DeFries, J. C., & Hewitt, J. K. (2006). Not all executive functions are related to intelligence. Psychological Science, 17(2), 172–179.
Gardner, H. (1983/2003). Frames of mind. The theory of multiple intelligences. New York: Basic Books.
Gerber, P., Schlaffke, L., Heba, S., Greenlee, M. W., Schultz, T., & Schmidt-Wilcke, T. (2014). Juggling revisited – A voxel-based morphometry study with expert jugglers. Neuroimage, 95, 320–325.
Gignac, G. E. (2015). Raven's is not a pure measure of general intelligence: Implications for g factor theory and the brief measurement of g. Intelligence, 52, 71–79.
Gignac, G. E., & Bates, T. C. (2017). Brain volume and intelligence: The moderating role of intelligence measurement quality. Intelligence, 64, 18–29.
Gignac, G., Vernon, P. A., & Wickett, J. C. (2003). Factors influencing the relationship between brain size and intelligence. In H. Nyborg (ed.), The scientific study of general intelligence: Tribute to Arthur R. Jensen (pp. 93–106). New York: Pergamon.
Gignac, G. E., & Watkins, M. W. (2015). There may be nothing special about the association between working memory capacity and fluid intelligence. Intelligence, 52, 18–23.
Gladwell, M. (2008). Outliers: The story of success. New York: Little, Brown & Co.
Gottfredson, L. S. (1997). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence, 24(1), 13–23.
Haier, R. J. (2017). The neuroscience of intelligence. New York: Cambridge University Press.
Haier, R. J., & Jung, R. E. (2007). Beautiful minds (i.e., brains) and the neural basis of intelligence. Behavioral and Brain Sciences, 30(2), 174–178.
Haier, R. J., Siegel Jr., B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.
Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57(5), 253–270.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Jensen, A. R. (2006). Clocking the mind: Mental chronometry and individual differences. Amsterdam, The Netherlands: Elsevier.
Johnson, W., Bouchard Jr., T. J., Krueger, R. F., McGue, M., & Gottesman, I. I. (2004). Just one g: Consistent results from three test batteries. Intelligence, 32(1), 95–107.
Johnson, W., te Nijenhuis, J., & Bouchard, T. J., Jr. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36(1), 81–95.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Kuncel, N. R., & Hezlett, S. A. (2007). Standardized tests predict graduate students' success. Science, 315(5815), 1080–1081.
Lee, J. J., Wedow, R., Okbay, A., Kong, O., Maghzian, M., Zacher, M., . . . Cesarini, D. (2018). Gene discovery and polygenic prediction from a 1.1-million-person GWAS of educational attainment. Nature Genetics, 50(8), 1112–1121.
Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), 1–17.
Lubinski, D. (2016). From Terman to today: A century of findings on intellectual precocity. Review of Educational Research, 86(4), 900–944.
Major, J. T., Johnson, W., & Bouchard, T. J. (2011). The dependability of the general factor of intelligence: Why small, single-factor models do not adequately represent g. Intelligence, 39(5), 418–433.
McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346.
McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37(1), 1–10.
Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their contributions to complex "frontal lobe" tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49–100.
Nave, G., Jung, W. H., Linnér, R. K., Kable, J. W., & Koellinger, P. D. (2019). Are bigger brains smarter? Evidence from a large-scale preregistered study. Psychological Science, 30(1), 43–54.
Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience and Biobehavioral Reviews, 57, 411–432.
Ree, M. J., Earles, J. A., & Teachout, M. S. (1994). Predicting job performance: Not much more than g. Journal of Applied Psychology, 79(4), 518–524.
Roth, B., Becker, N., Romeyke, S., Schäfer, S., Domnick, F., & Spinath, F. M. (2015). Intelligence and school grades: A meta-analysis. Intelligence, 53, 118–137.
Sackett, P. R., Kuncel, N. R., Arneson, J. J., Cooper, S. R., & Waters, S. D. (2009). Does socioeconomic status explain the relationship between admissions tests and post-secondary academic performance? Psychological Bulletin, 135(1), 1–22.
Santarnecchi, E., Galli, G., Polizzotto, N. R., Rossi, A., & Rossi, S. (2014). Efficiency of weak brain connections support general cognitive functioning. Human Brain Mapping, 35(9), 4566–4582.
Scarr, S., & McCartney, K. (1983). How people make their own environments: A theory of genotype ➔ environment effects. Child Development, 54(2), 424–435.
Scarr-Salapatek, S. (1971). Race, social class, and IQ. Science, 174(4016), 1285–1295.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.
Schmidt, F. L., & Hunter, J. E. (2004). General mental ability in the world of work: Occupational attainment and job performance. Journal of Personality and Social Psychology, 86(1), 162–173.
Spearman, C. (1927). The abilities of man: Their nature and measurement. New York: The Macmillan Company.
Taubert, M., Draganski, B., Anwander, A., Müller, K., Horstmann, A., Villringer, A., & Ragert, P. (2010). Dynamic properties of human brain structure: Learning-related changes in cortical areas and associated fiber connections. Journal of Neuroscience, 30(35), 11670–11677.
Thomas, C., & Baker, C. I. (2013). Teaching an adult brain new tricks: A critical review of evidence for training-dependent structural plasticity in humans. Neuroimage, 73, 225–236.
Thorndike, R. L. (1984). Intelligence as information processing: The mind and the computer. Bloomington, IN: Center on Evaluation, Development, and Research.
Tucker-Drob, E. M. (2009). Differentiation of cognitive abilities across the life span. Developmental Psychology, 45(4), 1097–1118.
Tucker-Drob, E. M., & Bates, T. C. (2015). Large cross-national differences in gene × socioeconomic status interaction on intelligence. Psychological Science, 27(2), 138–149.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624.
Warne, R. T., & Burningham, C. (2019). Spearman's g found in 31 non-Western nations: Strong evidence that g is a universal phenomenon. Psychological Bulletin, 145(3), 237–272.
Wechsler, D. (1944). The measurement of adult intelligence (3rd ed.). Baltimore, MD: Williams & Wilkins.
Woodley of Menie, M. A., Pallesen, J., & Sarraf, M. A. (2018). Evidence for the Scarr-Rowe effect on genetic expressivity in a large U.S. sample. Twin Research and Human Genetics, 21(6), 495–501.
2 Network Neuroscience Methods for Studying Intelligence

Kirsten Hilger and Olaf Sporns
Introduction

The human brain is a complex network consisting of numerous functionally specialized brain regions and their inter-regional connections. In recent years, much research has focused on identifying principles of the anatomical and functional organization of brain networks (Bullmore & Sporns, 2009; Sporns, 2014) and their relation to spontaneous (resting-state; Buckner, Krienen, & Yeo, 2013; Fox et al., 2005) or task-related brain activity (Cole, Bassett, Power, Braver, & Petersen, 2014). Numerous studies have identified relationships between variations in network elements or features and individual differences in behavior and cognition. In the context of this monograph, studies of general cognitive ability (often indexed as general intelligence) are of special interest. In this chapter we survey some of the methodological aspects surrounding studies of human brain networks using noninvasive large-scale imaging and electrophysiological techniques and discuss the application of such network approaches in studies of human intelligence.
The Human Brain as a Complex Network

Structural and Functional Connectivity

A fundamental distinction in the study of human brain networks is that between structural and functional networks (Honey et al., 2009; Sporns, 2014). Structural networks represent anatomical connectivity, usually estimated from white-matter tracts that connect gray matter regions of the cerebral cortex and subcortex. The resulting networks appear to be relatively stable across time and define potential pathways for the flow of neural signals and information (Avena-Koenigsberger, Misic, & Sporns, 2018). In contrast, functional networks are usually derived from time series of neural activity and represent patterns of statistical relationships. These patterns fluctuate on fast time scales (on the order of seconds), both spontaneously (Hutchison, Womelsdorf, Gati, Everling, & Menon, 2013) as well as in response to changes
in stimulation or task (Cole et al., 2014; Gonzalez-Castillo et al., 2015). In humans, structural networks are most commonly constructed from diffusion tensor imaging (DTI) and tractography (Craddock et al., 2013; Hagmann et al., 2008), while functional networks are often estimated from functional magnetic resonance imaging (fMRI) obtained in the resting state (Buckner et al., 2013) or during ongoing tasks. An alternative approach to assessing structural connectivity involves the construction of networks from across-subject covariance of regionally resolved anatomical measures, e.g., cortical thickness (Lerch et al., 2006). Limitations of structural covariance networks are the uncertain mechanistic basis of the observed covariance patterns in structural anatomy and the need to consider covariance patterns across different subjects. Thus, while this approach may be promising as a proxy of inter-regional connectivity, it is difficult to apply when studying inter-subject variations (e.g., in the relation of connectivity to intelligence). Alternative strategies for functional connectivity include other recording modalities (e.g., electroencephalography [EEG]/magnetoencephalography [MEG]) and a large variety of time series measures designed to extract statistical similarities or causal dependence (e.g., the Phase Lag Index; Stam, Nolte, & Daffertshofer, 2007). In this chapter, we limit our survey to MRI approaches given their prominent role in the study of human intelligence.

The estimation of structural networks involves a process of inference, resulting in a model of likely anatomical tracts and connections that fits a given set of diffusion measurements.
This process is subject to numerous sources of error and statistical bias (Jbabdi, Sotiropoulos, Haber, Van Essen, & Behrens, 2015; Maier-Hein et al., 2017; Thomas et al., 2014) and requires careful control over model assumptions and parameters, as well as principled model selection and evaluation of model fit (Pestilli, Yeatman, Rokem, Kay, & Wandell, 2014). A number of investigations have attempted to validate DTI-derived networks against more direct anatomical techniques, such as tract tracing carried out in rodents or non-human primates. Another unresolved issue is the definition of connection weights. Weights are often expressed as numbers or densities of tractography streamlines, or through measures of tract coherence or integrity, such as fractional anisotropy (FA). Careful consideration of measurement accuracy and sensitivity is important for defining networks that are biologically valid, as these choices impact subsequent estimates of network measures.

The estimation of functional networks, usually from fMRI time series, encounters a different set of methodological limitations and biases. As in all fMRI-derived measurements, neural activity is captured only indirectly, through the brain's neurovascular response, whose underlying neural causes cannot be accessed directly. Furthermore, the common practice of computing simple Pearson correlations among time courses as a proxy for "functional connectivity" cannot disclose causal interactions or information flow – instead, such functional networks report mere similarities among temporal response profiles that may or may not be due to direct causal effects. Finally, these correlations
are sensitive to numerous sources of physiological and non-physiological noise, most importantly systematic biases due to small involuntary head motions. However, significant efforts have been made to correct for these unwanted sources of noise in fMRI recordings to improve the mapping of functional networks (Power, Schlaggar, & Petersen, 2015). Resting-state fMRI, despite the absence of an explicit task setting, yields functional networks that are consistent between subjects (Damoiseaux et al., 2006) as well as within subjects (Zuo & Xing, 2014), provided that signals are sampled over sufficient periods of time (Birn et al., 2013; Laumann et al., 2015). The analysis of resting-state functional networks has led to the definition of a set of component functional systems ("resting-state networks") that engage in coherent signal fluctuations and occupy specific cortical and subcortical territories (Power et al., 2011; Yeo et al., 2011). Meta-analyses have shown that resting-state networks strongly resemble fMRI co-activation patterns observed across large numbers of tasks (Crossley et al., 2013; Smith et al., 2009). When measured during task states, functional connectivity exhibits characteristic task-related modulation (Cole et al., 2014; Gonzalez-Castillo et al., 2015; Telesford et al., 2016). These observations suggest that, at any given time, functional networks represent a conjunction of "intrinsic connectivity" and "task-evoked connectivity" (Cole et al., 2014) and that switching between rest and task involves widespread reorganization of distributed functional connections (Amico, Arenas, & Goñi, 2019).
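The correlation-based "functional connectivity" described above reduces, in its simplest form, to a region-by-region Pearson correlation matrix. A minimal sketch on synthetic time series (two invented "modules" of three regions each, standing in for preprocessed BOLD data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_regions = 200, 6

# Synthetic "BOLD" time series: regions 0-2 share one latent signal,
# regions 3-5 another, mimicking two functional modules (toy data only).
latent = rng.standard_normal((n_timepoints, 2))
ts = np.empty((n_timepoints, n_regions))
for i in range(n_regions):
    ts[:, i] = latent[:, i // 3] + 0.5 * rng.standard_normal(n_timepoints)

# Functional connectivity: Pearson correlations between all region pairs.
fc = np.corrcoef(ts, rowvar=False)
print(fc.round(2))
# High within-module, low between-module correlations emerge. As noted in
# the text, high correlation indicates similar response profiles, not a
# demonstrated causal interaction.
```

Within-module entries come out near 0.8 here (each pair shares a unit-variance latent signal plus independent noise), while between-module entries hover near zero.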
Networks and Graphs

Both structural and functional networks can be represented as collections of nodes (brain regions) and their interconnecting edges (connections); see Figure 2.1. Such objects are also known as "graphs" and are the subject of graph theory, a branch of mathematics with many applications in modern network science (Barabási, 2016). Graph theory offers numerous quantitative measures that capture various aspects of a network's local and global organization (Rubinov & Sporns, 2010). Local measures include the number of connections per node (node degree) or the extent to which a node's neighbors are themselves interconnected (local clustering). Global measures express characteristics of an entire network, such as its clustering coefficient (the average over the clustering of its nodes) and its path length (the average length of the shortest paths linking all pairs of nodes). The conjunction of high clustering and short path length (compared to an appropriately designed random or randomized null model) is the hallmark of small-world architecture, a universal feature of many natural, social, and information networks (Watts & Strogatz, 1998). Globally, the shorter the path length of a network, the higher its global efficiency (Latora & Marchiori, 2001), a measure expressing a network's capacity for communication irrespective of the potential communication cost imposed by wiring or time delays. Several important points are worth noting. First, many local and global network metrics are mutually dependent (correlated), and many are powerfully influenced by the
Figure 2.1 Schematic illustration of structural and functional brain network construction and key network metrics. (A) Network construction. First (left), network nodes are defined based on, e.g., an anatomical brain atlas. Second (middle), edges are defined between pairs of nodes by measuring white matter fiber tracts (structural network, e.g., measured with DTI) or by estimating temporal relationships between time series of BOLD signals (functional network, e.g., measured with resting-state fMRI). Third (right), nodes and edges together define a graph (network) whose topological properties can be studied with global (whole-brain) and nodal (region-specific) graph-theoretical measures. (B) Key network metrics. Network efficiency (left) is derived from the lengths of shortest paths between node pairs. In this example, the path between nodes 1 and 2 has a length of three steps. Network modularity (right) partitions the network into communities or modules that are internally densely connected, with sparse connections between them. In this example, the network consists of four modules illustrated in different colors. Individual nodes differ in the way they connect to other nodes within their own module (within-module connectivity) and to nodes in other modules (diversity of between-module connectivity, nodal participation). Here, node 1 has low participation, while node 2 has high participation.
brain's geometry and spatial embedding (Bullmore & Sporns, 2012; Gollo et al., 2018). These aspects can be addressed through appropriately configured statistical and generative models designed to preserve specific features of the data in order to reveal their contribution to the global network architecture (Betzel & Bassett, 2017a; Rubinov, 2016). Second, while most graph metrics are suitable for structural networks, their application to functional networks (i.e., those derived from fMRI time series correlations) can be problematic (Sporns, 2014). This includes frequently used metrics such as path length or clustering. For example, network paths constructed from correlations between time series of neural activation have a much less obvious physical interpretation than, for example, paths that link a series of structural connections. One of the most useful and biologically meaningful approaches to characterizing both structural and functional brain networks involves the detection of network communities or modules (Sporns & Betzel, 2016). Most commonly, modules are defined as non-overlapping sets of nodes that are densely interconnected within, but only weakly connected between, sets. Computationally, many tools for data-driven detection of modules are available (Fortunato & Hric, 2016). Most widely used are approaches that rely on the maximization of a global modularity metric (Newman & Girvan, 2004). With appropriate modification, this approach can be applied to all classes of networks (binary, weighted, directed, positive, and negative links) that are encountered in structural and functional neuroimaging. Recent advances further allow the detection of modules on multiple spatial scales (Betzel & Bassett, 2017b; Jeub, Sporns, & Fortunato, 2018), in multilayer networks (Vaiana & Muldoon, 2018), and in dynamic connectivity estimated across time (Fukushima et al., 2018).
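The basic graph metrics introduced above (degree, clustering, characteristic path length, global efficiency) reduce to a few lines of code. A pure-Python sketch on an invented six-node binary network; real analyses would apply such measures, via dedicated toolboxes, to empirical connectomes:

```python
from collections import deque

# Toy undirected binary network as an adjacency list (hypothetical regions).
adj = {
    0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3},
    3: {0, 2, 4}, 4: {3, 5}, 5: {4},
}

def degree(n):
    return len(adj[n])

def clustering(n):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = list(adj[n])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

def shortest_path_lengths(src):
    """Breadth-first search; returns hop counts to all reachable nodes."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

nodes = list(adj)
all_d = [d for s in nodes for t, d in shortest_path_lengths(s).items() if t != s]
# Characteristic path length: mean shortest path over all node pairs.
path_length = sum(all_d) / len(all_d)
# Global efficiency: mean inverse shortest path (Latora & Marchiori, 2001).
efficiency = sum(1 / d for d in all_d) / len(all_d)
# Network clustering coefficient: average of the nodal values.
mean_clustering = sum(clustering(n) for n in nodes) / len(nodes)
print(path_length, efficiency, mean_clustering)
```

On weighted networks the same quantities are defined over weighted path lengths, and, as the text notes, their interpretation for correlation-derived functional networks requires caution.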
The definition of modules allows the identification of critical nodes or "hubs" that link modules to each other and hence promote global integration of structural and functional networks. Such nodes straddle the boundaries of modules and thus have uncertain modular affiliation (Rubinov & Sporns, 2011; Shinn et al., 2017) as well as connections spanning multiple modules (high participation). Modules and hubs have emerged as some of the most useful network attributes for studies of individual variability in phenotype and genotype, behavior, and cognition.

While network approaches have delivered many new insights into the organization of brain networks, they are also subject to significant challenges. An important issue shared across structural and functional networks involves the definition of network nodes. In practice, many studies are still carried out using nodes defined by anatomical brain atlases. However, such nodes generally do not correspond to anatomical or functional units derived from data-driven parcellation efforts, e.g., those based on boundary detection in resting-state data (Gordon et al., 2016) or on integration of data across multiple modalities (Glasser et al., 2016). The adoption of a particular parcellation strategy propagates into subsequent network analyses and may affect their reliability and robustness (Ryyppö, Glerean, Brattico, Saramäki, & Korhonen, 2018; Zalesky et al., 2010). For example, different parcellations may divide the brain into different numbers of nodes, resulting in networks that differ in size and density. In the future, more
standardized parcellation approaches may help to address this important methodological issue. Other limitations remain, e.g., the uncertain definition of edge weights, variations in pre-processing pipelines (global signal regression, motion correction, thresholding), the lack of data on directionality, and temporal delays in signal propagation. However, it is important to note that many of these limitations reflect constraints on the measurement process itself and can be overcome in principle through refined spatial and temporal resolution, as well as through the inference of causal relations in networks (Bielczyk et al., 2019). The application of network neuroscience tools and methods is facilitated by a number of in-depth scholarly surveys and computational resources. Fornito, Zalesky, and Bullmore (2016) provide a comprehensive introduction to network methods applied to brain data that goes far beyond the scope and level of detail offered in this brief overview. For practical use, several software packages (MATLAB and Python) are available. These combine various structural and functional graph metrics, null and generative models, and visualization tools (e.g., https://sites.google.com/site/bctnet/).
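As a final illustration of the module- and hub-related quantities discussed above, the modularity metric Q (Newman & Girvan, 2004) and the nodal participation coefficient can both be computed by hand. The six-node graph and its two planted modules below are invented for illustration; toolboxes such as the one linked above provide production implementations:

```python
# Toy binary network with two planted modules, {0,1,2} and {3,4,5},
# joined by a single bridge edge (2, 3). All values are illustrative.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
module = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

adj = {n: set() for n in module}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def modularity(partition):
    """Newman-Girvan Q: within-module edge fraction minus the fraction
    expected from module degrees alone."""
    m = len(edges)
    within, deg = {}, {}
    for u, v in edges:
        deg[partition[u]] = deg.get(partition[u], 0) + 1
        deg[partition[v]] = deg.get(partition[v], 0) + 1
        if partition[u] == partition[v]:
            within[partition[u]] = within.get(partition[u], 0) + 1
    return sum(within.get(s, 0) / m - (deg[s] / (2 * m)) ** 2 for s in deg)

def participation(node):
    """Participation coefficient: 1 - sum over modules of
    (links into that module / degree)^2. Zero means all links stay
    within one module; higher values mark hub-like, bridging nodes."""
    k = len(adj[node])
    per_mod = {}
    for nbr in adj[node]:
        per_mod[module[nbr]] = per_mod.get(module[nbr], 0) + 1
    return 1 - sum((ks / k) ** 2 for ks in per_mod.values())

scrambled = {0: "A", 1: "B", 2: "A", 3: "B", 4: "A", 5: "B"}
print(modularity(module), modularity(scrambled))  # planted split scores higher
print(participation(2), participation(1))         # bridge node 2 participates more
```

Modularity maximization searches over partitions for the highest Q; the sketch only evaluates Q for given partitions, which is the building block such algorithms optimize.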
Intelligence and Insights from Network Neuroscience Approaches

Structural Networks

One of the earliest and most popular neurocognitive models of intelligence, the Parieto-Frontal Integration Theory (P-FIT; Jung & Haier, 2007), already suggested that structural connections (white matter fiber tracts) are critical for human intelligence. Since then, many neuroimaging studies have addressed the question of whether and how specific structural brain connections may contribute to differences in intelligence. Most of them support the general finding that higher intelligence is associated with higher levels of brain-wide white matter integrity (as indexed by fractional anisotropy, FA; e.g., Chiang et al., 2009; Navas-Sánchez et al., 2013). Some studies further suggest that this relation may differ for men and women (positive correlation in women, negative correlation in men; Schmithorst, 2009; Tang et al., 2010) and that intelligence-related differences are most prominent in white matter tracts linking frontal to parietal regions (arcuate fasciculus, longitudinal fasciculi; Malpas et al., 2016; Schmithorst, 2009), frontal to occipital regions (fronto-occipital fasciculus; Chiang et al., 2009; Kievit et al., 2012; Kievit, Davis, Griffiths, Correia, & Henson, 2016; Malpas et al., 2016), different frontal regions to each other (uncinate fasciculus; Kievit et al., 2016; Malpas et al., 2016; Yu et al., 2008), and both hemispheres (corpus callosum; Chiang et al., 2009; Damiani, Pereira, Damiani, & Nascimento, 2017; Dunst, Benedek, Koschutnig, Jauk, & Neubauer, 2014; Kievit et al., 2012; Navas-Sánchez et al., 2013; Tang et al., 2010; Wolf et al., 2014; Yu et al., 2008). A schematic illustration of white matter tracts consistently associated with intelligence is depicted in Figure 2.2.

Figure 2.2 The brain bases of intelligence – from a network neuroscience perspective. Schematic illustration of selected structural and functional brain connections associated with intelligence across different studies.

Chiang et al. (2009) found that the relation between intelligence and white matter integrity is mediated by common genetic factors and proposed a common physiological mechanism. Beyond microstructural integrity, higher intelligence has also been related to higher membrane density (lower global mean diffusivity; Dunst et al., 2014; Haász et al., 2013; Malpas et al., 2016) and higher myelination of axonal fibers (lower radial diffusivity; Haász et al., 2013; Malpas et al., 2016) in a widely distributed network of white matter tracts. Graph-theoretical network approaches have also been applied to structural connections measured with DTI. While some studies suggest that higher intelligence is linked to a globally more efficient organization of structural brain networks (Kim et al., 2015; Koenis et al., 2015; Li et al., 2009; Ma et al., 2017; Zalesky et al., 2011), others could not replicate this finding and found support only for slightly different metrics (Yeo et al., 2016). Finally, an initial study tested whether intelligence relates to the global level of network segregation (global modularity), but found no support for this hypothesis (Yeo et al., 2016).
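The global modularity index tested in such studies is typically the Newman–Girvan Q (Newman & Girvan, 2004), which rewards partitions whose within-module edge density exceeds what node degrees alone would predict. A minimal pure-Python sketch on a toy graph, two triangles joined by a single bridge edge (illustrative only, not brain data):

```python
def modularity(A, communities):
    # Newman–Girvan modularity Q for a binary undirected adjacency
    # matrix A and one community label per node (Newman & Girvan, 2004).
    n = len(A)
    degree = [sum(row) for row in A]
    two_m = sum(degree)  # 2m = total degree = twice the edge count
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += A[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# Toy network: two triangles (nodes 0-2 and 3-5) joined by edge 2-3.
A = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
good_partition = [0, 0, 0, 1, 1, 1]  # one module per triangle
bad_partition = [0, 1, 0, 1, 0, 1]   # modules cut across the triangles
print(modularity(A, good_partition))  # 5/14 ≈ 0.357
print(modularity(A, bad_partition))   # negative: worse than chance
```

Community-detection algorithms search for the partition that maximizes Q; the "global modularity" tested against intelligence is the Q value of that best partition.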
Functional Networks

Additional insight into the neural bases of intelligence comes from research focused on functional connectivity. Most of these studies employed fMRI recordings during the resting state, reflecting the brain’s intrinsic functional architecture. Intrinsic connectivity has been shown to relate closely to the underlying anatomical connections (Greicius, Supekar, Menon, & Dougherty, 2009; Hagmann et al., 2008; Honey, Kötter, Breakspear, & Sporns, 2007), and to predict brain activity during cognitive task performance (Cole et al., 2014; Tavor et al., 2016). Early studies addressing the relation between intrinsic connectivity and intelligence used seed-based approaches focusing on connections between specific cortical regions. These studies suggested that higher connectivity between regions of the fronto-parietal control network (Dosenbach et al., 2007), together with lower connectivity between these fronto-parietal regions and regions of the default mode network (DMN; Greicius, Krasnow, Reiss, & Menon, 2003; Raichle et al., 2001), is related to higher intelligence (Langeslag et al., 2013; Sherman et al., 2014; Song et al., 2008). This effect is also illustrated schematically in Figure 2.2. Going beyond seed-based analyses, graph-theoretical approaches applied to whole-brain networks have revealed global principles of intrinsic brain network organization. A pioneering study suggested that a globally shorter characteristic path length (i.e., higher global efficiency) is associated with higher general intelligence (van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). More recent investigations with samples of up to 1,096 people (Kruschwitz, Waller, Daedelow, Walter, & Veer, 2018) could not replicate this intuitively plausible finding (Hilger, Ekman, Fiebach, & Basten, 2017a; Kruschwitz et al., 2018; Pamplona, Santos Neto, Rosset, Rogers, & Salmon, 2015).
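The functional connectivity estimates underlying all of these studies are, at their core, simple pairwise statistics: typically the Pearson correlation between regional BOLD time series. A minimal sketch with made-up regional time courses (the three "regions" and their values are hypothetical):

```python
import math

def pearson(x, y):
    # Pearson correlation between two equal-length time series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_matrix(series):
    # Region-by-region functional connectivity matrix: pairwise
    # correlations between regional time series; diagonal fixed at 1.
    n = len(series)
    return [[1.0 if i == j else pearson(series[i], series[j])
             for j in range(n)] for i in range(n)]

# Three hypothetical regional time courses (not real data).
ts = [
    [0.1, 0.5, 0.3, 0.9, 0.2, 0.7],
    [0.2, 0.6, 0.4, 1.0, 0.3, 0.8],  # region 1: region 0 shifted upward
    [0.9, 0.1, 0.8, 0.2, 0.7, 0.3],  # region 2: anti-correlates with region 0
]
fc = connectivity_matrix(ts)
print(round(fc[0][1], 3))  # ≈ 1.0: perfectly coupled
print(round(fc[0][2], 3))  # strongly negative
```

Such a correlation matrix, usually thresholded or weighted, is the adjacency matrix on which the seed-based and graph-theoretical analyses discussed here operate.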
This inconsistency may reflect differences in data acquisition and pre-processing, as well as sample sizes and subject cohorts. In contrast, the nodal efficiency of three specific brain regions that have previously been
k. hilger and o. sporns
associated with processes like salience detection and focused attention (dorsal anterior cingulate cortex, anterior insula, temporo-parietal junction area) appears to be associated with intelligence (Hilger et al., 2017a). A similar picture has recently been reported with respect to the concept of network modularity. Although intelligence is not correlated with the global level of functional network segregation (indexed by global modularity), higher intelligence is associated with distinct profiles of within- and between-module connectivity in a set of circumscribed cortical and subcortical brain regions (Hilger, Ekman, Fiebach, & Basten, 2017b). Interestingly, some of these brain regions are also featured in existing neurocognitive models of intelligence based on task activation profiles and morphological characteristics (e.g., the dorsal anterior cingulate cortex; P-FIT, Basten, Hilger, & Fiebach, 2015; Jung & Haier, 2007). In contrast, other regions are not implicated in these models (e.g., the anterior insula) and thus seem to differ solely in their pattern of connectivity, but not in task activation or structural properties (Hilger et al., 2017a; Smith et al., 2015). Further support for the relevance of functional connectivity for intelligence comes from prediction-based approaches (involving cross-validation; Yarkoni & Westfall, 2017), which demonstrate that individual intelligence scores can be significantly predicted from patterns of functional connectivity measured during rest (Dubois et al., 2018; Ferguson, Anderson, & Spreng, 2017; Finn et al., 2015) and during task performance (Greene, Gao, Scheinost, & Constable, 2018). Higher connectivity within the fronto-parietal network, together with lower connectivity within the DMN, was proposed as most critical for this prediction (Finn et al., 2015; see also Figure 2.2). However, the total amount of explained variance remains rather modest (20% in Dubois et al., 2018; 6% in Ferguson et al., 2017; 25% in Finn et al., 2015; 20% in Greene et al., 2018).
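The logic of these prediction-based approaches can be sketched in miniature: fit a model on all subjects but one, predict the held-out subject, and summarize accuracy across folds. The single synthetic "connectivity strength" feature, the univariate linear model, and the leave-one-out scheme below are deliberate simplifications of the published pipelines, which use many edges and more elaborate estimators:

```python
import random

def fit_line(xs, ys):
    # Ordinary least squares for y = a * x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loo_r2(xs, ys):
    # Leave-one-out cross-validated R^2: each subject's score is
    # predicted from a model fit on all remaining subjects.
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a * xs[i] + b)
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Synthetic cohort: one connectivity feature carrying partial signal
# about a hypothetical intelligence score, plus noise (not real data).
random.seed(0)
fc_strength = [random.gauss(0.5, 0.1) for _ in range(40)]
iq = [100 + 60 * (x - 0.5) + random.gauss(0, 5) for x in fc_strength]
print(round(loo_r2(fc_strength, iq), 2))  # well below 1: only part of the variance is explained
```

Because every prediction is made for a subject excluded from model fitting, the resulting R² is an honest estimate of out-of-sample accuracy, which is why it comes out well below the in-sample fit and why the explained-variance figures cited above are modest.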
Focusing in more detail on the comparison between rest and task, Santarnecchi et al. (2016) reported high similarity between a meta-analytically generated intelligence network (i.e., regions active during various intelligence tasks) and the (resting-state) dorsal attention network (Corbetta & Shulman, 2002; Corbetta, Patel, & Shulman, 2008). The lowest similarity was observed with structures of the DMN. Interestingly, it has further been found that higher intelligence is associated with less reconfiguration of functional connections when switching from rest to task (Schultz & Cole, 2016). This finding corresponds well to the observation that higher intelligence is linked to lower DMN de-activation during cognitive tasks (Basten, Stelzel, & Fiebach, 2013). Studies addressing other concepts of connectivity have yielded additional insights, observing, for instance, that greater robustness of brain networks to systematic insults is linked to higher intelligence (Santarnecchi, Rossi, & Rossi, 2015). One question that remains to be clarified by future research is whether or not higher intelligence relates to generally higher levels of functional connectivity. Whereas some studies suggest a positive association (Hearne, Mattingley, & Cocchi, 2016; Smith et al., 2015), others observed no such effect (Cole, Yarkoni, Repovs, Anticevic, & Braver, 2012; Hilger et al., 2017a). Instead, the latter studies suggest that associations between connectivity and intelligence are
region-specific and of both signs (positive and negative), thus canceling out at the global level (Hilger et al., 2017a). Another perspective on the relation between intelligence and functional connectivity is provided by studies using EEG. Here, functional connectivity has primarily been measured as coherence between time series of distant EEG channels (signal space) and assessed during the resting state. Positive (Anokhin, Lutzenberger, & Birbaumer, 1999; Lee, Wu, Yu, Wu, & Chen, 2012), negative (Cheung, Chan, Han, & Sze, 2014; Jahidin, Taib, Tahir, Megat Ali, & Lias, 2013), and no associations (Smit, Stam, Posthuma, Boomsma, & De Geus, 2008) between intelligence and different operationalizations of intrinsic connectivity (coherence, inter-hemispheric asymmetry of normalized energy spectral density, synchronization likelihood) have been observed. Two graph-theoretical studies reported a positive association between intelligence and the small-worldness of EEG-derived intrinsic brain networks (same sample; Langer, Pedroni, & Jäncke, 2013; Langer et al., 2012). In contrast to the fMRI-based evidence suggesting less reconfiguration of connectivity between rest and task in more intelligent subjects (Schultz & Cole, 2016), two EEG studies point to the opposite effect, i.e., more rest–task reconfiguration related to higher intelligence (Neubauer & Fink, 2009; Pahor & Jaušovec, 2014). Specifically, more intelligent subjects demonstrated greater changes in phase-locking values (Neubauer & Fink, 2009) and theta–gamma coupling patterns (Pahor & Jaušovec, 2014). Finally, one study also applied graph-theoretical network analyses to MEG data and found that higher intelligence in young children is linked to lower small-worldness of intrinsic brain networks (modeled on the basis of mutual information between time series of MEG channels; Duan et al., 2014).
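The phase-locking value used in such EEG analyses quantifies how consistent the phase difference between two signals is across samples or trials: 1.0 for a perfectly constant phase relation, near 0 for independent phases. A minimal sketch on synthetic phase series (the phase data below are randomly generated for illustration):

```python
import cmath
import math
import random

def phase_locking_value(phases_a, phases_b):
    # PLV: magnitude of the mean unit vector of phase differences.
    diffs = [cmath.exp(1j * (pa - pb)) for pa, pb in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))

random.seed(1)
n = 1000
base = [random.uniform(0, 2 * math.pi) for _ in range(n)]
locked = [p + 0.5 for p in base]  # constant phase lag of 0.5 rad
unrelated = [random.uniform(0, 2 * math.pi) for _ in range(n)]

print(phase_locking_value(base, locked))     # ≈ 1.0: constant lag is perfect locking
print(phase_locking_value(base, unrelated))  # near 0 for independent phases
```

In practice the instantaneous phases would be extracted from band-pass-filtered EEG via a Hilbert transform or wavelet decomposition; the rest–task comparisons cited above then ask how much such coupling values change between conditions.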
Open Questions and Future Directions

The application of network neuroscience approaches over the last decades has yielded novel insights into the neural bases of human intelligence. An overall higher level of white matter integrity appears to be related to higher intelligence, and intelligence-related differences are most visible in structural connections linking frontal to parietal and frontal to occipital regions. Results from fMRI studies highlight the relevance of region-specific intrinsic connectivity profiles. However, these region-specific intelligence-related differences do not necessarily carry over to the global scale. Connections of attention-related brain regions, as well as the proper segregation between task-positive and task-negative regions, seem to play a particularly important role. Such insights have also stimulated the formulation of new theoretical models. For example, the recently proposed Network Neuroscience Theory of Intelligence suggests that general intelligence depends on the ability to flexibly transition between “easy-to-reach” and “difficult-to-reach” network states (Barbey, 2018; Girn, Mills, & Christoff, 2019). While
there is initial support for the relevance of dynamic connectivity (time-varying connectivity; Zalesky, Fornito, Cocchi, Gollo, & Breakspear, 2014) for intelligence (higher stability of intrinsic brain networks was associated with higher intelligence; Hilger, Fukushima, Sporns, & Fiebach, 2020), specific hypotheses concerning intelligence and task-induced transitions between specific network states remain to be investigated. Evidence from EEG and MEG studies is quite heterogeneous, potentially owing to the large number of free parameters (e.g., connectivity measure, signal vs. source space), which makes it difficult to compare results across studies – a problem that can be overcome through the development of common standards. Importantly, the empirical evidence available so far does not allow for any inferences about directionality, i.e., whether region A influences region B or vice versa. Conclusions about directionality can only be derived from connectivity measures that account for the temporal lag between EEG or MEG signals stemming from different regions (e.g., the phase slope index; Ewald, Avarvand, & Nolte, 2013) or from specific approaches developed for fMRI (Bielczyk et al., 2019). These methods have not yet been applied to the study of human intelligence. Furthermore, the reported relationships between intelligence and various aspects of connectivity are only correlative; they do not allow us to infer whether variations in network characteristics causally contribute to variations in intelligence or vice versa. Some pioneering studies overcome this constraint by employing neuromodulatory interventions and suggest that even intelligence test performance can be experimentally influenced, most likely when baseline performance is low (Neubauer, Wammerl, Benedek, Jauk, & Jausovec, 2017; Santarnecchi et al., 2016).
These techniques represent promising candidates for future investigations. Together with new graph-theoretical concepts such as multiscale modularity and analyses of connectivity changes over time (dynamic connectivity), they hold great potential to enrich our understanding of the biological bases of human intelligence – from a network neuroscience perspective.
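As one example of the dynamic-connectivity analyses mentioned above, time-varying coupling is often estimated by recomputing correlations in short overlapping windows that slide along the time series. The window length, step size, and synthetic signals below are arbitrary choices for illustration, not a recommendation for real fMRI data:

```python
import math

def pearson(x, y):
    # Pearson correlation between two equal-length windows.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def sliding_window_fc(ts_a, ts_b, width, step=1):
    # Time-resolved connectivity: correlation recomputed in each window.
    return [pearson(ts_a[start:start + width], ts_b[start:start + width])
            for start in range(0, len(ts_a) - width + 1, step)]

# Two synthetic signals whose coupling reverses halfway through the "scan".
a = [math.sin(0.3 * t) for t in range(200)]
b = [a[t] if t < 100 else -a[t] for t in range(200)]
series = sliding_window_fc(a, b, width=30, step=10)
print(series[0], series[-1])  # near +1 early, near -1 late
```

The variability (or stability) of such windowed connectivity series over time is the quantity that studies like Hilger et al. (2020) relate to intelligence.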
Acknowledgment

KH received funding from the German Research Foundation (DFG grants FI 848/6-1 and HI 2185/1-1).
References

Amico, E., Arenas, A., & Goñi, J. (2019). Centralized and distributed cognitive task processing in the human connectome. Network Neuroscience, 3(2), 455–474. Anokhin, A. P., Lutzenberger, W., & Birbaumer, N. (1999). Spatiotemporal organization of brain dynamics and intelligence: An EEG study in adolescents. International Journal of Psychophysiology, 33(3), 259–273.
Avena-Koenigsberger, A., Misic, B., & Sporns, O. (2018). Communication dynamics in complex brain networks. Nature Reviews Neuroscience, 19(1), 17–33. Barabási, A. L. (2016). Network science. Cambridge University Press. Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 1–13. Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528. Betzel, R. F., & Bassett, D. S. (2017a). Generative models for network neuroscience: Prospects and promise. Journal of the Royal Society Interface, 14(136), 20170623. Betzel, R. F., & Bassett, D. S. (2017b). Multi-scale brain networks. Neuroimage, 160, 73–83. Bielczyk, N. Z., Uithol, S., van Mourik, T., Anderson, P., Glennon, J. C., & Buitelaar, J. K. (2019). Disentangling causal webs in the brain using functional magnetic resonance imaging: A review of current approaches. Network Neuroscience, 3(2), 237–273. Birn, R. M., Molloy, E. K., Patriat, R., Parker, T., Meier, T. B., Kirk, G. R., . . . Prabhakaran, V. (2013). The effect of scan length on the reliability of resting-state fMRI connectivity estimates. Neuroimage, 83, 550–558. Buckner, R. L., Krienen, F. M., & Yeo, B. T. (2013). Opportunities and limitations of intrinsic functional connectivity MRI. Nature Neuroscience, 16(7), 832–837. Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186–198. Bullmore, E., & Sporns, O. (2012). The economy of brain network organization. Nature Reviews Neuroscience, 13(5), 336–349. Cheung, M., Chan, A. S., Han, Y. M., & Sze, S. L. (2014).
Brain activity during resting state in relation to academic performance. Journal of Psychophysiology, 28(2), 47–53. Chiang, M.-C., Barysheva, M., Shattuck, D. W., Lee, A. D., Madsen, S. K., Avedissian, C., . . . Thompson, P. M. (2009). Genetics of brain fiber architecture and intellectual performance. Journal of Neuroscience, 29(7), 2212–2224. Cole, M. W., Bassett, D. S., Power, J. D., Braver, T. S., & Petersen, S. E. (2014). Intrinsic and task-evoked network architectures of the human brain. Neuron, 83(1), 238–251. Cole, M. W., Yarkoni, T., Repovs, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. Journal of Neuroscience, 32(26), 8988–8999. Corbetta, M., Patel, G., & Shulman, G. L. (2008). The reorienting system of the human brain: From environment to theory of mind. Neuron, 58(3), 306–324. Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3(3), 201–215. Craddock, R. C., Jbabdi, S., Yan, C. G., Vogelstein, J. T., Castellanos, F. X., Di Martino, A., . . . Milham, M. P. (2013). Imaging human connectomes at the macroscale. Nature Methods, 10(6), 524–539.
Crossley, N. A., Mechelli, A., Vértes, P. E., Winton-Brown, T. T., Patel, A. X., Ginestet, C. E., . . . Bullmore, E. T. (2013). Cognitive relevance of the community structure of the human brain functional coactivation network. Proceedings of the National Academy of Sciences USA, 110(28), 11583–11588. Damiani, D., Pereira, L. K., Damiani, D., & Nascimento, A. M. (2017). Intelligence neurocircuitry: Cortical and subcortical structures. Journal of Morphological Sciences, 34(3), 123–129. Damoiseaux, J. S., Rombouts, S. A. R. B., Barkhof, F., Scheltens, P., Stam, C. J., Smith, S. M., & Beckmann, C. F. (2006). Consistent resting-state networks across healthy subjects. Proceedings of the National Academy of Sciences USA, 103(37), 13848–13853. Dosenbach, N. U. F., Fair, D. A., Miezin, F. M., Cohen, A. L., Wenger, K. K., Dosenbach, R. A. T., . . . Petersen, S. E. (2007). Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences USA, 104(26), 11073–11078. Duan, F., Watanabe, K., Yoshimura, Y., Kikuchi, M., Minabe, Y., & Aihara, K. (2014). Relationship between brain network pattern and cognitive performance of children revealed by MEG signals during free viewing of video. Brain and Cognition, 86, 10–16. Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society of London B Biological Sciences, 373(1756), 20170284. Dunst, B., Benedek, M., Koschutnig, K., Jauk, E., & Neubauer, A. C. (2014). Sex differences in the IQ-white matter microstructure relationship: A DTI study. Brain and Cognition, 91, 71–78. Ewald, A., Avarvand, F. S., & Nolte, G. (2013). Identifying causal networks of neuronal sources from EEG/MEG data with the phase slope index: A simulation study. Biomedizinische Technik, 58(2), 165–178. Ferguson, M.
A., Anderson, J. S., & Spreng, R. N. (2017). Fluid and flexible minds: Intelligence reflects synchrony in the brain’s intrinsic network architecture. Network Neuroscience, 1(2), 192–207. Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., . . . Constable, R. T. (2015). Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18(11), 1664–1671. Fornito, A., Zalesky, A., & Bullmore, E. (2016). Fundamentals of brain network analysis. Cambridge, MA: Academic Press. Fortunato, S., & Hric, D. (2016). Community detection in networks: A user guide. Physics Reports, 659, 1–44. Fox, M. D., Snyder, A. Z., Vincent, J. L., Corbetta, M., Van Essen, D. C., & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences USA, 102(27), 9673–9678. Fukushima, M., Betzel, R. F., He, Y., de Reus, M. A., van den Heuvel, M. P., Zuo, X. N., & Sporns, O. (2018). Fluctuations between high- and low-modularity topology in time-resolved functional connectivity. NeuroImage, 180(Pt. B), 406–416.
Girn, M., Mills, C., & Christoff, K. (2019). Linking brain network reconfiguration and intelligence: Are we there yet? Trends in Neuroscience and Education, 15, 62–70. Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., . . . Smith, S. M. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536(7615), 171–178. Gollo, L. L., Roberts, J. A., Cropley, V. L., Di Biase, M. A., Pantelis, C., Zalesky, A., & Breakspear, M. (2018). Fragility and volatility of structural hubs in the human connectome. Nature Neuroscience, 21(8), 1107–1116. Gonzalez-Castillo, J., Hoy, C. W., Handwerker, D. A., Robinson, M. E., Buchanan, L. C., Saad, Z. S., & Bandettini, P. A. (2015). Tracking ongoing cognition in individuals using brief, whole-brain functional connectivity patterns. Proceedings of the National Academy of Sciences USA, 112(28), 8762–8767. Gordon, E. M., Laumann, T. O., Adeyemo, B., Huckins, J. F., Kelley, W. M., & Petersen, S. E. (2016). Generation and evaluation of a cortical area parcellation from resting-state correlations. Cerebral Cortex, 26(1), 288–303. Greene, A. S., Gao, S., Scheinost, D., & Constable, R. T. (2018). Task-induced brain state manipulation improves prediction of individual traits. Nature Communications, 9(1), 2807. Greicius, M. D., Krasnow, B., Reiss, A. L., & Menon, V. (2003). Functional connectivity in the resting brain: A network analysis of the default mode hypothesis. Proceedings of the National Academy of Sciences USA, 100(1), 253–258. Greicius, M. D., Supekar, K., Menon, V., & Dougherty, R. F. (2009). Resting-state functional connectivity reflects structural connectivity in the default mode network. Cerebral Cortex, 19(1), 72–78. Haász, J., Westlye, E. T., Fjær, S., Espeseth, T., Lundervold, A., & Lundervold, A. J. (2013). General fluid-type intelligence is related to indices of white matter structure in middle-aged and old adults. NeuroImage, 83, 372–383.
Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C. J., Wedeen, V. J., & Sporns, O. (2008). Mapping the structural core of human cerebral cortex. PLoS Biology, 6(7), e159. Hearne, L. J., Mattingley, J. B., & Cocchi, L. (2016). Functional brain networks related to individual differences in human intelligence at rest. Scientific Reports, 6, 32328. Hilger, K., Ekman, M., Fiebach, C. J. & Basten, U. (2017a). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25. Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017b). Intelligence is associated with the modular structure of intrinsic brain networks. Scientific Reports, 7(1), 16088. Hilger, K., Fukushima, M., Sporns, O., & Fiebach, C. J. (2020). Temporal stability of functional brain modules associated with human intelligence. Human Brain Mapping, 41(2), 362–372. Honey, C. J., Kötter, R., Breakspear, M., & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences USA, 104(24), 10240–10245. Honey, C. J., Sporns, O., Cammoun, L., Gigandet, X., Thiran, J. P., Meuli, R., & Hagmann, P. (2009). Predicting human resting-state functional connectivity from structural connectivity. Proceedings of the National Academy of Sciences USA, 106(6), 2035–2040.
Hutchison, R. M., Womelsdorf, T., Gati, J. S., Everling, S., & Menon, R. S. (2013). Resting-state networks show dynamic functional connectivity in awake humans and anesthetized macaques. Human Brain Mapping, 34(9), 2154–2177. Jahidin, A. H., Taib, M. N., Tahir, N. M., Megat Ali, M. S. A., & Lias, S. (2013). Asymmetry pattern of resting EEG for different IQ levels. Procedia – Social and Behavioral Sciences, 97, 246–251. Jbabdi, S., Sotiropoulos, S. N., Haber, S. N., Van Essen, D. C., & Behrens, T. E. (2015). Measuring macroscopic brain connections in vivo. Nature Neuroscience, 18(11), 1546. Jeub, L. G., Sporns, O., & Fortunato, S. (2018). Multiresolution consensus clustering in networks. Scientific Reports, 8(1), 3259. Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. Kievit, R. A., Davis, S. W., Griffiths, J. D., Correia, M. M., & Henson, R. N. A. (2016). A watershed model of individual differences in fluid intelligence. Neuropsychologia, 91, 186–198. Kievit, R. A., van Rooijen, H., Wicherts, J. M., Waldorp, L. J., Kan, K. J., Scholte, H. S., & Borsboom, D. (2012). Intelligence and the brain: A model-based approach. Cognitive Neuroscience, 3(2), 89–97. Kim, D.-J., Davis, E. P., Sandman, C. A., Sporns, O., O’Donnell, B. F., Buss, C., & Hetrick, W. P. (2015). Children’s intellectual ability is associated with structural network integrity. NeuroImage, 124(Pt. A), 550–556. Koenis, M. M. G., Brouwer, R. M., van den Heuvel, M. P., Mandl, R. C. W., van Soelen, I. L. C., Kahn, R. S., . . . Hulshoff Pol, H. E. (2015). Development of the brain’s structural network efficiency in early adolescence: A longitudinal DTI twin study. Human Brain Mapping, 36(12), 4938–4953. Kruschwitz, J. D., Waller, L., Daedelow, L. S., Walter, H., & Veer, I. M. (2018).
General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the human connectome project 1200 data set. Neuroimage, 171, 323–331. Langer, N., Pedroni, A., Gianotti, L. R. R., Hänggi, J., Knoch, D., & Jäncke, L. (2012). Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6), 1393–1406. Langer, N., Pedroni, A., & Jäncke, L. (2013). The problem of thresholding in small-world network analysis. PLoS One, 8(1), e53199. Langeslag, S. J. E., Schmidt, M., Ghassabian, A., Jaddoe, V. W., Hofman, A., van der Lugt, A., . . . White, T. J. H. (2013). Functional connectivity between parietal and frontal brain regions and intelligence in young children: The generation R study. Human Brain Mapping, 34(12), 3299–3307. Latora, V., & Marchiori, M. (2001). Efficient behavior of small-world networks. Physical Review Letters, 87(19), 198701. Laumann, T. O., Gordon, E. M., Adeyemo, B., Snyder, A. Z., Joo, S. J., Chen, M. Y., . . . Schlaggar, B. L. (2015). Functional system and areal organization of a highly sampled individual human brain. Neuron, 87(3), 657–670. Lee, T. W., Wu, Y. T., Yu, Y. W. Y., Wu, H. C., & Chen, T. J. (2012). A smarter brain is associated with stronger neural interaction in healthy young females: A resting EEG coherence study. Intelligence, 40(1), 38–48.
Lerch, J. P., Worsley, K., Shaw, W. P., Greenstein, D. K., Lenroot, R. K., Giedd, J., & Evans, A. C. (2006). Mapping anatomical correlations across cerebral cortex (MACACC) using cortical thickness from MRI. Neuroimage, 31(3), 993–1003. Li, Y. H., Liu, Y., Li, J., Qin, W., Li, K. C., Yu, C. S., & Jiang, T. Z. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395. Ma, J., Kang, H. J., Kim, J. Y., Jeong, H. S., Im, J. J., Namgung, E., . . . Yoon, S. (2017). Network attributes underlying intellectual giftedness in the developing brain. Scientific Reports, 7(1), 11321. Maier-Hein, K. H., Neher, P. F., Houde, J. C., Côté, M. A., Garyfallidis, E., Zhong, J., . . . Reddick, W. E. (2017). The challenge of mapping the human connectome based on diffusion tractography. Nature Communications, 8(1), 1349. Malpas, C. B., Genc, S., Saling, M. M., Velakoulis, D., Desmond, P. M., & O’Brien, T. J. (2016). MRI correlates of general intelligence in neurotypical adults. Journal of Clinical Neuroscience, 24, 128–134. Navas-Sánchez, F. J., Alemán-Gómez, Y., Sánchez-Gonzalez, J., Guzmán-De-Villoria, J. A., Franco, C., Robles, O., . . . Desco, M. (2013). White matter microstructure correlates of mathematical giftedness and intelligence quotient. Human Brain Mapping, 35(6), 2619–2631. Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023. Neubauer, A. C., Wammerl, M., Benedek, M., Jauk, E., & Jausovec, N. (2017). The influence of transcranial alternating current on fluid intelligence: An fMRI study. Personality and Individual Differences, 118, 50–55. Newman, M. E., & Girvan, M. (2004). Finding and evaluating community structure in networks. Physical Review E, 69(2), 026113. Pahor, A., & Jaušovec, N. (2014). Theta–gamma cross-frequency coupling relates to the level of human intelligence. Intelligence, 46, 283–290. Pamplona, G. S. P., Santos Neto, G. S., Rosset, S. R. E., Rogers, B.
P., & Salmon, C. E. G. (2015). Analyzing the association between functional connectivity of the brain and intellectual performance. Frontiers in Human Neuroscience, 9, 61. Pestilli, F., Yeatman, J. D., Rokem, A., Kay, K. N., & Wandell, B. A. (2014). Evaluation and statistical inference for human connectomes. Nature Methods, 11(10), 1058–1063. Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church, J. A., . . . Petersen, S. E. (2011). Functional network organization of the human brain. Neuron, 72(4), 665–678. Power, J. D., Schlaggar, B. L., & Petersen, S. E. (2015). Recent progress and outstanding issues in motion correction in resting state fMRI. Neuroimage, 105, 536–551. Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A., & Shulman, G. L. (2001). A default mode of brain function. Proceedings of the National Academy of Sciences, 98(2), 676–682. Rubinov, M. (2016). Constraints and spandrels of interareal connectomes. Nature Communications, 7(1), 13812. Rubinov, M., & Sporns, O. (2010). Complex network measures of brain connectivity: Uses and interpretations. Neuroimage, 52(3), 1059–1069. Rubinov, M., & Sporns, O. (2011). Weight-conserving characterization of complex functional brain networks. Neuroimage, 56(4), 2068–2079.
Ryyppö, E., Glerean, E., Brattico, E., Saramäki, J., & Korhonen, O. (2018). Regions of interest as nodes of dynamic functional brain networks. Network Neuroscience, 2(4), 513–535. Santarnecchi, E., Muller, T., Rossi, S., Sarkar, A., Polizzotto, N. R., Rossi, A., & Cohen Kadosh, R. (2016). Individual differences and specificity of prefrontal gamma frequency-tACS on fluid intelligence capabilities. Cortex, 75, 33–43. Santarnecchi, E., Rossi, S., & Rossi, A. (2015). The smarter, the stronger: Intelligence level correlates with brain resilience to systematic insults. Cortex, 64, 293–309. Schmithorst, V. J. (2009). Developmental sex differences in the relation of neuroanatomical connectivity to intelligence. Intelligence, 37(2), 164–173. Schultz, D. H., & Cole, M. W. (2016). Higher intelligence is associated with less task-related brain network reconfiguration. Journal of Neuroscience, 36(33), 8551–8561. Sherman, L. E., Rudie, J. D., Pfeifer, J. H., Masten, C. L., McNealy, K., & Dapretto, M. (2014). Development of the default mode and central executive networks across early adolescence: A longitudinal study. Developmental Cognitive Neuroscience, 10, 148–159. Shinn, M., Romero-Garcia, R., Seidlitz, J., Váša, F., Vértes, P. E., & Bullmore, E. (2017). Versatility of nodal affiliation to communities. Scientific Reports, 7(1), 4273. Smit, D. J. A., Stam, C. J., Posthuma, D., Boomsma, D. I., & De Geus, E. J. C. (2008). Heritability of “small-world” networks in the brain: A graph theoretical analysis of resting-state EEG functional connectivity. Human Brain Mapping, 29(12), 1368–1378. Smith, S. M., Fox, P. T., Miller, K. L., Glahn, D. C., Fox, P. M., Mackay, C. E., . . . Beckmann, C. F. (2009). Correspondence of the brain’s functional architecture during activation and rest. Proceedings of the National Academy of Sciences USA, 106(31), 13040–13045. Smith, S. M., Nichols, T. E., Vidaurre, D., Winkler, A. M., Behrens, T. E., Glasser, M. F., . . . Miller, K. L. (2015).
A positive-negative mode of population covariation links brain connectivity, demographics and behavior. Nature Neuroscience, 18(11), 1565–1567. Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. NeuroImage, 41(3), 1168–1176. Sporns, O. (2014). Contributions and challenges for network models in cognitive neuroscience. Nature Neuroscience, 17(5), 652–660. Sporns, O., & Betzel, R. F. (2016). Modular brain networks. Annual Review of Psychology, 67, 613–640. Stam, C. J., Nolte, G., & Daffertshofer, A. (2007). Phase lag index: Assessment of functional connectivity from multichannel EEG and MEG with diminished bias from common sources. Human Brain Mapping, 28(11), 1178–1193. Tang, C. Y., Eaves, E. L., Ng, J. C., Carpenter, D. M., Mai, X., Schroeder, D. H., . . . Haier, R. J. (2010). Brain networks for working memory and factors of intelligence assessed in males and females with fMRI and DTI. Intelligence, 38(3), 293–303. Tavor, I., Jones, O. P., Mars, R. B., Smith, S. M., Behrens, T. E., & Jbabdi, S. (2016). Task-free MRI predicts individual differences in brain activity during task performance. Science, 352(6282), 216–220.
Network Neuroscience Methods for Studying Intelligence
Telesford, Q. K., Lynall, M. E., Vettel, J., Miller, M. B., Grafton, S. T., & Bassett, D. S. (2016). Detection of functional brain network reconfiguration during task-driven cognitive states. NeuroImage, 142, 198–210. Thomas, C., Frank, Q. Y., Irfanoglu, M. O., Modi, P., Saleem, K. S., Leopold, D. A., & Pierpaoli, C. (2014). Anatomical accuracy of brain connections derived from diffusion MRI tractography is inherently limited. Proceedings of the National Academy of Sciences USA, 111(46), 16574–16579. Vaiana, M., & Muldoon, S. F. (2018). Multilayer brain networks. Journal of Nonlinear Science, 1–23. van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of “small-world” networks. Nature, 393(6684), 440–442. Wolf, D., Fischer, F. U., Fesenbeckh, J., Yakushev, I., Lelieveld, I. M., Scheurich, A., . . . Fellgiebel, A. (2014). Structural integrity of the corpus callosum predicts long-term transfer of fluid intelligence-related training gains in normal aging. Human Brain Mapping, 35(1), 309–318. Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122. Yeo, B. T., Krienen, F. M., Sepulcre, J., Sabuncu, M. R., Lashkari, D., Hollinshead, M., . . . Buckner, R. L. (2011). The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology, 106(3), 1125–1165. Yeo, R. A., Ryman, S. G., van den Heuvel, M. P., de Reus, M. A., Jung, R. E., Pommy, J., . . . Calhoun, V. D. (2016). Graph metrics of structural brain networks in individuals with schizophrenia and healthy controls: Group differences, relationships with intelligence, and genetics. Journal of the International Neuropsychological Society, 22(2), 240–249. Yu, C.
S., Li, J., Liu, Y., Qin, W., Li, Y. H., Shu, N., . . . Li, K. C. (2008). White matter tract integrity and intelligence in patients with mental retardation and healthy adults. NeuroImage, 40(4), 1533–1541. Zalesky, A., Fornito, A., Cocchi, L., Gollo, L. L., & Breakspear, M. (2014). Time-resolved resting-state brain networks. Proceedings of the National Academy of Sciences, 111(28), 10341–10346. Zalesky, A., Fornito, A., Harding, I. H., Cocchi, L., Yücel, M., Pantelis, C., & Bullmore, E. T. (2010). Whole-brain anatomical networks: Does the choice of nodes matter? NeuroImage, 50(3), 970–983. Zalesky, A., Fornito, A., Seal, M. L., Cocchi, L., Westin, C., Bullmore, E. T., . . . Pantelis, C. (2011). Disrupted axonal fiber connectivity in schizophrenia. Biological Psychiatry, 69(1), 80–89. Zuo, X. N., & Xing, X. X. (2014). Test–retest reliabilities of resting-state FMRI measurements in human brain functional connectomics: A systems neuroscience perspective. Neuroscience & Biobehavioral Reviews, 45, 100–118.
3 Imaging the Intelligence of Humans Kenia Martínez and Roberto Colom
Introduction

Most humans can perceive the world, store information in the short- and the long-term, recover the relevant information when required, comprehend and produce language, orient themselves in known and unknown environments, make calculations of high and low levels of sophistication, and so forth. These cognitive actions must be coordinated and integrated in some way, and “intelligence” is the psychological factor that takes the lead when humans pursue this goal. The manifestation of widespread individual differences in this factor is well documented in everyday life settings and has been addressed by scientific research from at least three complementary perspectives: psychometric models, cognitive/information-processing models, and biological models. Psychometric models of intelligence have identified the dimensions of variation in cognitive ability (Johnson & Bouchard, 2005; Schneider & McGrew, 2018). Information-processing models uncover the basic cognitive processes presumably relevant for the abilities comprising the psychometric models (Chuderski, 2019). Finally, at the biological level, individual differences in brain structure and function (Basten, Hilger, & Fiebach, 2015), along with genetic and non-genetic factors (Plomin, DeFries, Knopik, & Neiderhiser, 2016), are considered with respect to the behavioral variability at both the psychometric and cognitive levels. Most scientists acknowledge that the brain is the organ where the relevant biological processes take place for supporting every expression of intelligent behavior (Haier, 2017; Hunt, 2011). Nevertheless, the brain is also important for further psychological phenomena erroneously believed to be independent of intelligence (Grotzinger, Cheung, Patterson, Harden, & Tucker-Drob, 2019; Hill, Harris, & Deary, 2019).
Thus, for instance, as underscored by Caspi and Moffitt (2018), “all mental disorders are expressed through dysfunction of the same organ (brain), whereas physical diseases such as cirrhosis, emphysema, and diabetes are manifested through dysfunction of different organ systems. Viewed from this perspective, perhaps the search for non-specificity in psychiatry is not unreasonable (. . .) a usual way to think about the meaning of a general factor of psychopathology (p) is, by analogy, in relation to cognitive abilities (. . .) just as there is a general factor of cognitive ability (g), it is possible that there is also a p” (pp. 3–8). Mental disorders and mental
abilities share the same brain and, from this broad perspective, Colom, Chuderski, and Santarnecchi (2016) wrote the following regarding human intelligence: available evidence might depart from the view that there is a place in the brain for intelligence, and even that only places are relevant, as temporal properties of neural processing like synchronization and coordination might also play an important role, as well as network-level dynamics promoting the ability to evolve, robustness, and plasticity (. . .) maybe there is no place in the brain for intelligence because the brain itself is the place. And we only have a single brain. (p. 187, emphasis added)
Neuroimaging can likely help scientists find answers to the question of why some people are smarter than others in terms of differences at the brain level. However, despite the large number of studies analyzing the links between structural and functional brain properties and higher-order cognition, we still lack conclusive answers. Many early studies had small samples and often limited their analyses to a single measure of intelligence, in addition to sometimes having poor quality control of image data (see Drakulich and Karama, Chapter 11). Brain properties, regions, and networks presumably supporting cognitive performance differences may look unstable across research studies (Colom, Karama, Jung, & Haier, 2010; Deary, Penke, & Johnson, 2010; Haier, 2017; Jung & Haier, 2007). Considerable neuroimaging evidence is consistent with the idea that understanding human intelligence from a biological standpoint will not be achieved by focusing on specific brain regions, but on brain networks hierarchically organized to collaborate and compete in some way (Colom & Thompson, 2011; Colom, Jung, & Haier, 2006; Colom et al., 2010; Haier, 2017). Why does finding the brain features supporting human intelligence through neuroimaging approaches seem so difficult?
In 2007, Rex Jung and Richard Haier proposed the Parieto-Frontal Integration Theory of intelligence (P-FIT), based on the qualitative commonalities across 37 structural and functional neuroimaging studies published between 1988 and 2007. The framework identified several areas distributed across the brain as relevant for intelligence, with special emphasis on the dorsolateral prefrontal cortex and the parietal cortex (Figure 3.1). The model summarized (and combined) exploratory research available at the time using different structural and functional neuroimaging approaches for studying the brain support of intelligent behavior. Since then, most neuroimaging studies addressing the topic have evaluated and interpreted their results using the P-FIT as a frame of reference. However, two caveats should be highlighted: (1) the model is very generic and difficult to test against
Figure 3.1 Regions identified by the Parieto-Frontal Integration Theory (P-FIT) as relevant for human intelligence. These regions are thought to support different information-processing stages. First, occipital and temporal areas process sensory information (Stage 1): the extrastriate cortex (Brodmann areas (BA) 18 and 19) and the fusiform gyrus (BA 37), involved with recognition, imagery, and elaboration of visual inputs, as well as Wernicke’s area (BA 22) for analysis and elaboration of syntax of auditory information. In the second processing stage (Stage 2), integration and abstraction of the sensory information is carried out by parietal BAs 39 (angular gyrus), 40 (supramarginal gyrus), and 7 (superior parietal lobule). Next, in Stage 3, the parietal areas interact with the frontal regions (BAs 6, 9, 10, 45, 46, and 47) serving problem solving, evaluation, and hypothesis testing. Finally, in Stage 4, the anterior cingulate (BA 32) is implicated for response selection and inhibition of alternative responses, once the best solution is determined in the previous stage.
competing models (Dubois, Galdi, Paul, & Adolphs, 2018), and (2) the empirical evidence does not all necessarily converge to the same degree on the brain regions highlighted by the P-FIT model (Colom, Jung, & Haier, 2007; Martínez et al., 2015). There is remarkable variability among the studies summarized by Jung and Haier (2007), and only a small number of identified brain areas approached 50% convergence across published studies, even among those employing the same neuroimaging strategy. Thus, for instance, considering gray matter properties, 32 brain areas were initially nominated, but only BAs 39–40 and 10 showed 50% convergence. Nevertheless, subsequent studies showed that the evidence is roughly consistent with the P-FIT model, mainly because virtually the entire brain appears relevant for supporting intelligent behavior. Also, it should be noted that, because of the heterogeneity of approaches across studies, even the most consistently identified brain regions may show relatively low levels of convergence.
These are examples of relevant sources of variability across neuroimaging studies of intelligence potentially contributing to this heterogeneity:
1. How intelligence is defined and estimated (IQ vs. g; broad domains such as fluid, crystallized, or visuospatial intelligence; the specific measures administered across studies; and so forth).
2. How MRI images are processed (T1- and T2-weighted, diffusion-weighted, etc.).
3. Which brain feature (structure or function), tissue (white matter, gray matter), or property (gray matter volume, cortical thickness, cortical folding pattern, responsiveness to targeted stimulation, white matter microstructure, connectivity, etc.) is considered.
4. Which humans are analyzed (sex, age, healthy humans, patients with chronic or acute lesions, infants, the elderly, etc.).
The remainder of this chapter briefly addresses these key points because they are extremely important for pursuing a proper scientific investigation of the potential role of the brain in human intelligence.
What Intelligence?

The evidence suggests that human intelligence can be conceptualized as a high-order integrative mental ability. Measured cognitive performance differences on standardized tests organize people according to their general mental ability (g). Moreover, individual variations in g are related to performance differences in a set of lower-level cognitive processes. Psychometric models have shown that individual differences on intelligence tests can be grouped into a number of broad and narrow cognitive dimensions. These models are built on performance differences assessed by diverse speeded and non-speeded tests across domains (abstract, verbal, numerical, spatial, and so forth). Individual differences in measured performance are submitted to exploratory or confirmatory factor analysis to separate the sources of variance contributing to a general factor (g), cognitive abilities (group factors), and cognitive skills (test specificities). It is well known within the psychometric literature that g should be identified by three or more broad cognitive factors, and these latter factors must be identified by three or more tests varying in content and processing requirements (Haier et al., 2009). The outcomes derived from this psychometric framework support the view that intelligence has a hierarchical structure, with fluid-abstract, crystallized-verbal, and visuospatial abilities being the most frequently considered factors of intelligence (Schneider & McGrew, 2018). Studying samples representative of the general population, g factor scores usually account for no less than 50% of the performance differences assessed by standardized tests, and the obtained g estimates are extremely consistent across intelligence batteries
(Johnson, Bouchard, Krueger, McGue, & Gottesman, 2004; Johnson, te Nijenhuis, & Bouchard, 2008). It is imperative to consider the available psychometric evidence when planning brain imaging studies of human intelligence. Figure 3.2 depicts an
Figure 3.2 Variability in the gray matter correlates of intelligence across the psychometric hierarchy as reported in one study by Román et al. (2014). Results at the test level are widespread across the cortex and show only minor overlaps with their respective factors (Gf, Gc, and Gv). This highlights the distinction among constructs, vehicles, and measurements when trying to improve our understanding of the biological underpinnings of cognitive performance. As noted by Jensen (1998) in his seminal book (The g Factor. The Science of Mental Ability), a given psychological construct (e.g., fluid intelligence) can be represented by distinguishable vehicles (e.g., intelligence tests) yielding different measurements. Changes in the measurements may or may not involve changes in the construct. The former changes involve different sources of variance. Using single or omnibus cognitive measures provides largely different results when their biological bases are systematically inspected. Therefore, fine-grained psychometric approaches for defining the constructs of interest are strongly required (Colom & Thompson, 2011; Colom et al., 2010; Haier et al., 2009). Figure adapted from Román et al. (2014)
example of why this is extremely important. This example, which needs to be replicated in larger samples with more stringent statistical thresholds that account for multiple comparisons, suggests variability in the gray matter correlates of human intelligence differences considered at different levels of the intelligence hierarchy (Román et al., 2014). Relatedly, the same score obtained for a given intelligence dimension might result from different cognitive profiles in which the specific skills involved contribute to the observed performance to different degrees. Distinguishable engagement of brain regions is expected when different mental processes are involved. The same test scores might be achieved using different cognitive strategies and brain features (Price, 2018). However, brain imaging findings typically reflect group averages and more or less overlapping regions across individuals. Most research studies have considered standardized global IQ indices, single measures, or poorly defined measurement models [see Colom et al. (2009) for a discussion of this issue]. Moreover, many studies have used only one cognitive test/subtest to tap psychometric g, or one or two tests to tap cognitive ability domain factors (Cole, Yarkoni, Repovš, Anticevic, & Braver, 2012; Haier et al., 2009; Karama et al., 2011; Langer et al., 2012). Although Haier et al. (2009) stressed the need to follow certain required standards in neuroimaging research on human intelligence, these standards are usually neglected. It is worthwhile to recall them before moving ahead:
1. Use several diverse measures tapping abstract, verbal, numerical, and visuospatial content domains.
2. Use three or more measures to define each cognitive ability (group factor). These abilities should fit the main factors comprised in models such as the CHC (Schneider & McGrew, 2018) or the g-VPR (Johnson & Bouchard, 2005) psychometric models.
3.
Measures for each cognitive ability should not be based solely on speeded or non-speeded tests. Both types should be used. This recommendation is based on the fact that lower-level abilities comprise both level (non-speeded) and speed factors.
4. Use three or more lower-level cognitive abilities to define the higher-order factor representing g. Ideally, measurement models should reveal that nonverbal, abstract, or fluid reasoning is the group factor best predicted by g. Fluid reasoning (drawing inferences, concept formation, classification, generating and testing hypotheses, identifying relations, comprehending implications, problem solving, extrapolating, and transforming information) is the cognitive ability most closely related to g.
5. Find a way to separate the sources of variance contributing to participants’ performance on the administered measures. The influence of g is pervasive, but it changes for different lower-order cognitive abilities and individual measures. Participants’ scores result from g, cognitive abilities (group factors), cognitive skills (test specificities), and statistical noise. Brain
correlates for a given cognitive ability, like verbal ability or spatial ability, are influenced by all these sources of variance, and they must be distinguished. Neuroimaging studies of intelligence must be carefully designed using what we already know after a century of research on the psychometrics of human intelligence. Otherwise, scientific research efforts will be jeopardized.
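To make the variance-separation logic of the psychometric standards above concrete, here is a minimal sketch with entirely simulated data (all loadings and sample sizes are hypothetical, chosen only for illustration): test scores are generated from a toy hierarchical model — a general factor g feeding three group factors, each measured by three tests, as the standards require — and a crude g estimate is then recovered as the first principal component of the nine-test battery.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # simulated participants

# Toy hierarchical model (loadings hypothetical): a general factor g
# feeds three group factors (e.g., fluid, crystallized, visuospatial),
# and each group factor is measured by three tests.
g = rng.normal(size=n)
tests = []
for _ in range(3):
    group = 0.8 * g + 0.6 * rng.normal(size=n)  # group factor loads on g
    for _ in range(3):
        # test score = ability + test specificity / noise
        tests.append(0.7 * group + 0.7 * rng.normal(size=n))
X = np.column_stack(tests)  # 1000 participants x 9 tests

# Crude g estimate: first principal component of the battery's correlation matrix.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Xz, rowvar=False))
g_hat = Xz @ eigvecs[:, -1]  # eigh sorts ascending; last column = largest eigenvalue

# The recovered factor tracks the simulated g closely (the sign is arbitrary).
print(round(abs(np.corrcoef(g_hat, g)[0, 1]), 2))
```

This only illustrates the logic of separating general from group-factor variance; actual studies following the standards would fit confirmatory (hierarchical or bifactor) measurement models rather than a principal component.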
What Neuroimaging Approach and Brain Property?

In the last decade, the number of methods for processing structural and functional MRIs has increased, along with the number of potential neuro-markers relevant for human intelligence. Technical refinements address required improvements in the reliable and exhaustive description of brain structural and functional variations related to intelligence differences. However, there are methodological caveats eroding the goal of achieving sound reproducibility across research studies (Botvinik-Nezer et al., 2020). The most replicated brain correlate of intelligence is global brain size. The meta-analysis by Gignac and Bates (2017) addressed three shortcomings of the previously published meta-analysis by Pietschnig, Penke, Wicherts, Zeiler, and Voracek (2015). Their findings revealed a raw value of r = .29 (N = 1,758). Nevertheless, Gignac and Bates went one step further by classifying the considered studies according to their quality regarding the administered measures of intelligence (fair, good, and excellent). Interestingly, r values increased accordingly from .23 to .39. Based on these results, the authors confirmed that global brain volume is the largest neurophysiological correlate of human intelligence. There is, however, one more lesson to be derived from the Gignac and Bates meta-analysis. In their own words, all else being equal, researchers who administer more comprehensive cognitive ability test batteries require smaller sample sizes to achieve the same level of (statistical) power . . . an investigator who plans to administer 9 cognitive ability tests (40 minutes testing time) would require a sample size of 49 to achieve a power of 0.80, based on an expected correlation of 0.30 . . . it is more efficient to administer a 40-minute comprehensive measure of intelligence across 49 participants in comparison to a relatively brief 20-minute measure across 146 participants. (Gignac & Bates, 2017, p. 27)
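The sample-size arithmetic behind this trade-off can be sketched with the standard Fisher z approximation for detecting a correlation (two-tailed alpha = .05, 80% power; the two normal quantiles are hard-coded for those conventional settings). Under these assumptions, the required N for expected correlations of .39 and .23 — the values observed for excellent versus fair measurement above — lands within a participant or so of the 49 and 146 quoted by Gignac and Bates, whose exact figures come from their own computation.

```python
import math

def required_n(r, z_alpha=1.9600, z_power=0.8416):
    """Approximate N needed to detect a population correlation r,
    via the Fisher z approximation (quantiles hard-coded for
    two-tailed alpha = .05 and power = .80)."""
    return math.ceil(((z_alpha + z_power) / math.atanh(r)) ** 2 + 3)

# A more comprehensive battery -> larger expected correlation -> far smaller N.
for r in (0.23, 0.30, 0.39):
    print(f"expected r = {r:.2f}: n ~ {required_n(r)}")
```

The point of the sketch is the steep nonlinearity: moving the expected correlation from .23 to .39 roughly triples statistical efficiency.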
This conclusion cautions against the current widespread tendency to defer, by default, to studies with huge samples while ignoring small-scale research. As recently observed by Thompson et al. (2020) in their review of a decade of research within the ENIGMA consortium: “for effect sizes of d > .6, the reproducibility rate was higher than 90%
even when including the datasets with sample sizes as low as 15, while it was impossible to obtain 70% reproducibility for small effects of d < .2 even with a relatively large minimum sample size threshold of 500.” Therefore, bigger samples are not necessarily always better.
Beyond global brain measures, reported relationships between intelligence and regional structural brain features are rather unstable. The cortex varies widely among humans. Two popular approaches for studying these variations in macroscopic cortical anatomy for comparative analyses using high-resolution T1-weighted data are Voxel-Based Morphometry (VBM) and Surface-Based Morphometry (SBM) (Figure 3.3). VBM identifies differences in the local composition of brain tissue across individuals and groups by making voxel-by-voxel comparisons once large-scale differences in gross anatomy and position are discounted after registering the individual structural images to the same standard reference (Ashburner & Friston, 2000; Mechelli, Price, Friston, & Ashburner, 2005). Using SBM methods, on the other hand, surfaces representing the structural boundaries within the brain are created and analyzed on the basis of brain segmentation into white matter, gray matter, and cerebrospinal fluid. Surfaces representing each boundary are often generated by a meshing algorithm that codifies relationships among voxels on the boundary into relationships between polygonal or polyhedral surface elements. For more details on these methods, see Drakulich and Karama, Chapter 11.
VBM has been widely used because it requires minimal user intervention and can be completed relatively quickly by following well-documented and publicly available protocols. When considering the inconsistency of results for the neural correlates of cognitive measures, the answer may lie in the details of the various processing techniques.
Regarding VBM, substantial inter-subject variability in cortical anatomy can be problematic for most standard linear and nonlinear volumetric registration algorithms used by different pipelines (Frost & Goebel, 2012). Neglecting this inter-subject macro-anatomical variability may weaken the statistical power of group statistics, because different cortical regions are treated as the same region and hence erroneously compared across subjects (Figure 3.4). The substantial brain variability across humans complicates replicability of findings in independent, albeit comparable, samples. To alleviate this loss of power due to macro-anatomical misregistration of data across subjects, surface-based approaches create geometrical models of the cortex using parametric surfaces and build deformation maps on the geometric models that explicitly associate corresponding cortical regions across individuals (Thompson et al., 2004). Furthermore, SBM allows computing several gray matter tissue features at the local level, including surface complexity, thickness, surface area, and volume. There are several SBM approaches, and each protocol differs in algorithms, parameters, and required user intervention.
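The statistical step shared by these pipelines — a mass-univariate test at every voxel or vertex, followed by multiple-comparison control — can be sketched in a few lines. Everything below is simulated and hypothetical (sample size, voxel counts, effect size): standardized gray-matter values are correlated with IQ at each "voxel," p-values come from the Fisher z normal approximation, and a hand-rolled Benjamini–Hochberg procedure controls the false discovery rate, as in the q < .05 FDR thresholds used in this literature.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n_subj, n_vox = 120, 5000               # toy sample and toy "brain"
iq = rng.normal(100, 15, n_subj)

# Simulated gray-matter values: the first 200 voxels carry a true
# association with IQ; the remaining 4800 are pure noise.
gm = rng.normal(size=(n_subj, n_vox))
gm[:, :200] += 0.03 * (iq[:, None] - 100)

# Mass-univariate Pearson correlation at every voxel.
iq_z = (iq - iq.mean()) / iq.std()
gm_z = (gm - gm.mean(axis=0)) / gm.std(axis=0)
r = gm_z.T @ iq_z / n_subj

# Two-tailed p-values via the Fisher z normal approximation.
z = np.arctanh(r) * sqrt(n_subj - 3)
p = np.array([1.0 - erf(abs(v) / sqrt(2)) for v in z])

# Benjamini-Hochberg FDR at q = .05: keep the largest k with p_(k) <= k*q/m.
q = 0.05
order = np.argsort(p)
below = p[order] <= q * np.arange(1, n_vox + 1) / n_vox
k = below.nonzero()[0].max() + 1 if below.any() else 0
sig = np.zeros(n_vox, dtype=bool)
sig[order[:k]] = True

print("hit rate in true voxels:", sig[:200].mean())
print("false-positive rate elsewhere:", sig[200:].mean())
```

This shows only the statistical skeleton; real VBM/SBM analyses fit general linear models with covariates (age, sex, total brain volume), smooth the data first, and use correction schemes adapted to spatially correlated maps.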
Figure 3.3 Workflow for voxel-based morphometry (VBM) and surface-based morphometry (SBM) analysis. The analyses are based on high-resolution structural brain images. VBM and SBM approaches share processing steps, as illustrated in (A). These steps involve linear or nonlinear volumetric alignment of MRI data from all subjects to a standardized anatomical template; correction for intensity inhomogeneities; and classification and segmentation of the image into gray matter, white matter, and cerebrospinal fluid (CSF). In VBM analyses (B), the normalized gray matter segment is smoothed with an isotropic Gaussian kernel and the smoothed normalized gray matter segments are entered into a statistical model to conduct voxel-wise statistical tests and map significant effects (D, left). To create parametric models of the brain from the SBM framework (C), additional steps are required, including the extraction of cortical surface models from each image data set (surface generation), as well as the nonlinear matching of cortical features on these models to improve the alignment of individual data for group analysis. Finally, morphometric variables (e.g., volume, area, or thickness) obtained at the vertex level should be smoothed before group statistical analysis (D, right).
Figure 3.4 Top panel: three left hemisphere brain surfaces from different individuals. Although all major sulci and gyri are present, their spatial location and geometry vary widely across individuals. For instance, the contours and boundaries of the sulci and gyri highlighted in a ‘real’ brain (Bottom panel, www.studyblue.com/notes/note/n/2-gyri/deck/5877272) are hard to discern and vary widely across the individual brains in this specific example. In short: individual brains are unique.
Martínez et al. (2015) provide an elegant example of how extreme the influence of imaging protocols on the identified structural brain correlates of human intelligence can be when multiple issues converge (Figure 3.5). In this study, Martínez et al. used MRI data from 3T scanners to compare the outputs of different SBM pipelines on the same subjects. Three different SBM protocols for analyzing structural variations in regional cortical thickness (CT) were used. The distribution and variability of CT and of CT–cognition relationships were systematically compared across pipelines. The findings revealed that, when all issues converge, even using the same SBM approach, the outputs from different processing pipelines can be inconsistent and show what seems like considerable variation in the spatial pattern of observed CT–intelligence relationships. Importantly, when thresholded for multiple comparisons over the whole brain, no association between intelligence and cortical thickness emerged in any of the three pipelines (unpublished finding). This finding might stem from (a) low power given the inherently small effect size of the intelligence–cortical thickness associations, (b) relatively old versions of imaging pipelines unable to deal well with 3T MRI data, or (c) the fact that the sample was not large enough to detect the weak signal that may have been there. Available data suggest that, even when all the analyzed gray matter indices quantify the number and density of neuronal bodies and dendritic ramifications
Figure 3.5 Distribution and variability of cortical thickness computed through different surface-based protocols: Cortical Pattern Matching (CPM), BrainSuite, and CIVET. The figure depicts (A) mean values, (B) standard deviation values, and (C) Pearson’s correlations between cortical thickness and intelligence variations at the vertex level (Martínez et al., 2015). The results observed in the figure illustrate how the protocol applied for processing imaging data can influence the identified neural correlates of human intelligence in situations of low statistical power for detecting very small effect sizes. As discussed in the text, the relevance of sample size increases for reduced effect sizes.
supporting information processing capacity, the genetic etiology and cellular mechanisms supporting them can (and must) be distinguished (Panizzon et al., 2009; Winkler et al., 2010). Thus, the number of vertical columns drives the size of the cortical surface area reflecting, to a significant degree, the overall degree of cortical folding, whereas cortical thickness is mainly driven by the number of cells within a vertical column (Rakic, 1988), reflecting the packing
density of neurons, as well as the content of neuropil. Neuroimaging measurements of cortical surface area and cortical thickness have been found to be genetically independent (Chen et al., 2013). Figure 3.6 shows how the GM measurement considered is relevant as a potential source of variability across studies. In the Figure 3.6 example, correlations between cortical surface area (CSA) and cortical gray matter volume (CGMV) across the cortex are stronger
Figure 3.6 (A) Pearson’s correlations among cortical thickness (CT), cortical surface area (CSA), and cortical gray matter volume (CGMV) obtained from a subsample of 279 healthy children and adolescents of the Pediatric MRI Data Repository created for the National Institute of Mental Health MRI Study of Normal Brain Development (Evans & Brain Development Cooperative Group, 2006). (B) Topography of significant correlations (q < .05, false discovery rate (FDR) corrected) between IQ and cortical thickness (CT), cortical surface area (CSA), and cortical gray matter volume (CGMV). Percentages of overlap among maps for the three gray matter measurements are also shown. Note. To obtain cortical thickness measurement, 3D T1-weighted MR images were submitted to the CIVET processing environment (version 1.1.9) developed at the MNI, a fully automated pipeline to extract and co-register the cortical surfaces for each subject. (Ad-Dab’bagh et al., 2006; Kim et al., 2005; MacDonald, Kabani, Avis, & Evans, 2000)
compared to cortical thickness (CT)–CGMV correlations (Figure 3.6A). Also, there are low associations between CT and CSA. Figure 3.6B shows the spatial maps for significant (all positive) correlations with IQ scores: higher IQ scores were associated with greater gray matter values in several cortical regions. The highest percentage of significant vertices was found for the IQ–CGMV relationships (45.02%). The patterns of IQ–CT and IQ–CSA correlations were largely different (only 1.79% of significant vertices overlapped). In contrast, 50.79% of IQ–CGMV significant relationships were shared with IQ–CT (20.12%) or IQ–CSA (30.67%) associations. Therefore, findings based on different measures might not be directly comparable. This being said, one must keep in mind that there are different kinds of surfaces that could be used to calculate area: pial surface area, white matter surface area, and mid-surface area (surface placement at the midpoint of cortical thickness). Pial surface area and mid-surface area will correlate with thickness, as they are based, in part, on thickness, whereas white matter surface area will correlate much less (if at all) with thickness. The same goes, more or less, for correlations between CSA and CGMV. CT and CGMV will definitely correlate, as CGMV depends on thickness on top of also being associated with area. In other words, correlations between CSA and other metrics will depend strongly on which measure of CSA is used.
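The overlap percentages reported above are simple set arithmetic over thresholded significance maps. A toy sketch with simulated boolean vertex maps (mesh size and significance rates are hypothetical stand-ins for, e.g., CT and CSA maps) makes the computation explicit, and also shows why the measure is asymmetric: overlap relative to one map's significant vertices differs from overlap relative to the other's.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vert = 10_000  # toy cortical mesh

# Hypothetical maps of vertices significantly related to IQ for two
# gray-matter measures, generated independently here, so their overlap
# should hover near the chance level set by each map's base rate.
sig_ct = rng.random(n_vert) < 0.10   # ~10% of vertices "significant"
sig_csa = rng.random(n_vert) < 0.15  # ~15% of vertices "significant"

def pct_overlap(a, b):
    """Percentage of a's significant vertices that are also significant in b."""
    return 100.0 * (a & b).sum() / a.sum()

print(round(pct_overlap(sig_ct, sig_csa), 1))  # near 15% expected by chance
print(round(pct_overlap(sig_csa, sig_ct), 1))  # near 10% expected by chance
```

Because independent maps already overlap at chance level, reported overlap percentages are only informative relative to the base rates of significance in each map; symmetric indices such as the Dice coefficient are a common alternative.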
The former index did show a dramatic growth over the course of evolution, which may support differences in intelligent behavior across species (Roth & Dicke, 2005). This, in combination with the differences in the surfaces used to estimate surface area, might account for some reports of more prominent findings for CGMV and CSA than CT when considering their relationships with individual differences in intelligence (Colom et al., 2013; Fjell et al., 2015; Vuoksimaa et al., 2015). Similar arguments might apply to further studies conducted from the assumption that high intelligence probably requires undisrupted and reliable information transfer among brain regions along white matter fibers and functional connections. In this regard, diffusion tensor imaging (DTI) and functional MRI have been used to study which properties of interacting brain networks predict individual differences in intelligence. For more on functional brain imaging of intelligence, see Chapter 12 by Basten and Fiebach. These studies revealed significant correlations between water diffusion parameters that quantify white matter integrity (such as fractional anisotropy,
Imaging the Intelligence of Humans
mean diffusivity, radial diffusivity, and axial diffusivity) and intelligence measured by standardized tests. For more on white matter associations with intelligence, see Chapter 10 by Genç and Fraenz. These associations have been reported at voxel and tract levels. Again, there is a variety of processing pipelines available for obtaining the diffusion parameters and for computing the main bundles of white matter fibers by tractography algorithms. As a result, findings are also variable, and the same pattern is observed when functional studies (task-related and resting state) are considered. Typically, connectome-based studies rely on graph theory metrics (see Figure 3.7). For more on network analyses, see Hilger and Sporns, Chapter 2. Usually considered measures include connection strength and degree, global and local efficiency, clustering, and characteristic path length. While some studies find significant associations between intelligence and brain network efficiency (Dubois et al., 2018; Pineda-Pardo, Martínez, Román, & Colom, 2016; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009), others do not, even when using the same dataset (e.g., the study by Dubois et al. (2018) sharply contradicts the results obtained by Kruschwitz, Waller, Daedelow, Walter, and Veer (2018) using Human Connectome Project data). Nevertheless, structural and functional studies support four tentative general conclusions:

1. Intelligent behavior relies on distributed (and interacting) brain networks (Barbey, 2018). Regions involved in these networks overlap with frontal-parietal regions underscored by the P-FIT model (Pineda-Pardo et al., 2016), but also with other regions, such as those involved in the Default Mode Network (DMN) (Dubois et al., 2018).

2. Brains of humans with higher levels of intelligence seem to process information more efficiently (using fewer brain resources when performing demanding cognitive tasks) than brains of humans with lower levels of intelligence (Haier, 2017). 
For more on the efficiency hypothesis, see Basten and Fiebach, Chapter 12.

3. It is important to distinguish between "task" and "individual differences" approaches for exploring brain–intelligence relationships. The goal of the task approach is to detect those regions engaged in a group of individuals, ignoring inter-subject differences that might be associated with intelligence differences. The individual differences approach assesses whether differences in brain features are or are not correlated with individual differences in intelligence: "the fact that a brain region is commonly activated during a cognitive challenge does not yet imply that individual differences in this activation are linked to individual differences in cognitive ability" (Basten et al., 2015, p. 11). Using the individual differences approach, Basten et al.'s (2015) meta-analysis found that structural and functional correlates of intelligence are dissociated (Figure 3.8). For more on the individual differences approach, see Chapter 12 by Basten and Fiebach.
Figure 3.7 (A) Summary of basic analytic steps for connectome-based analyses. (B) The analytic sequence for computing the structural and functional connectivity matrices. The T1-weighted MRI images are used for cortical and subcortical parcellation, whereas diffusion images and BOLD (Blood Oxygen Level-Dependent) images are processed for computing diffusion tensor tractography and blood flow time courses, respectively. Structural connectivity networks are typically represented as symmetric matrices including normalized weights connecting each pair of nodes in the parcellation scheme. Functional connectivity is commonly calculated as a Pearson correlation between each pair of regions in the adopted parcellation; the correlations are subsequently Fisher-Z transformed. (C) Results from two published studies relating individual differences in intelligence to functional and structural connectivity. (Hearne, Mattingley, & Cocchi, 2016; Ponsoda et al., 2017)
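The functional connectivity computation described in the caption – a Pearson correlation per region pair, followed by a Fisher-Z transform – can be sketched in a few lines of Python. The array shapes and the synthetic data below are illustrative assumptions, not part of any published pipeline:

```python
import numpy as np

def functional_connectivity(timeseries):
    """Fisher-Z-transformed functional connectivity matrix.

    timeseries: array of shape (n_regions, n_timepoints), one BOLD
    time course per region of the adopted parcellation.
    """
    r = np.corrcoef(timeseries)   # Pearson correlation, region x region
    np.fill_diagonal(r, 0.0)      # zero the diagonal before atanh (atanh(1) is infinite)
    return np.arctanh(r)          # Fisher-Z transform: z = atanh(r)

# Toy example: 4 regions, 100 time points of synthetic "BOLD" signal
rng = np.random.default_rng(0)
fc = functional_connectivity(rng.standard_normal((4, 100)))
assert fc.shape == (4, 4) and np.allclose(fc, fc.T)  # symmetric matrix
```

Structural connectivity matrices derived from tractography enter downstream graph analyses in the same symmetric region-by-region form, differing only in how the edge weights are obtained.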
Figure 3.8 Structural and functional correlates of human intelligence are not identified within the same brain regions: “the dissociation of functional vs. structural brain imaging correlates of intelligence is at odds with the principle assumption of the P-FIT that functional and structural studies on neural correlates of intelligence converge to imply the same set of brain regions.” (Basten et al., 2015, p. 21)
In short, means and correlations might provide findings with substantially different theoretical implications.

4. Novel analytic strategies might help to find "new" brain properties or neuromarkers that better predict intelligent behavior (e.g., brain resilience and brain entropy). Thus, for instance, Santarnecchi, Rossi, and Rossi's (2015) findings suggest that the brains of humans with greater intelligence levels are more resilient to targeted and random attacks. They quantified the robustness of individual networks using graph-based metrics after the systematic loss of the most important nodes within the system, concluding that the higher the intelligence level, the greater the distributed processing capacity. This thought-provoking finding requires independent replication using functional and structural data. Another interesting study suggests that greater brain entropy (measured from resting state data) is associated with higher intelligence (Saxe, Calderone, & Morales, 2018). Within this framework, entropy is considered "an indicator of the brain's general readiness to process unpredictable stimuli from the environment, rather than the active use of brain states during a particular task" (p. 13).
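The robustness analysis just described – tracking a graph metric such as global efficiency while the most important nodes are systematically removed – can be sketched with a plain Python adjacency dictionary. This is a generic illustration of a targeted attack on an unweighted, undirected network, not a reproduction of Santarnecchi et al.'s (2015) actual procedure:

```python
from collections import deque

def global_efficiency(adj):
    """Mean of 1/d(i, j) over ordered node pairs; unreachable pairs count 0."""
    nodes = list(adj)
    total = 0.0
    for src in nodes:
        # BFS shortest path lengths from src
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    n = len(nodes)
    return total / (n * (n - 1))

def targeted_attack(adj):
    """Repeatedly delete the highest-degree node, recording efficiency."""
    adj = {u: set(vs) for u, vs in adj.items()}    # work on a copy
    curve = [global_efficiency(adj)]
    while len(adj) > 2:
        hub = max(adj, key=lambda u: len(adj[u]))  # most connected node
        for v in adj.pop(hub):
            adj[v].discard(hub)
        curve.append(global_efficiency(adj))
    return curve

# Toy network: a 5-node ring plus one shortcut edge (0-2)
ring = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
curve = targeted_attack(ring)  # efficiency drops as hubs are removed
```

Under the resilience hypothesis sketched above, a more robust network would show a shallower decline in this efficiency curve as hubs are deleted.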
In summary, scientists are now strongly inclined to apply multimodal approaches to exhaustively characterize individual brains and to submit the data to machine learning algorithms for selecting the pool of neuromarkers that maximizes predictive power.
What Humans?

Heterogeneity in sample characteristics across studies has traditionally been addressed by regressing out the influence of potential confounding variables from the relationships between intelligence and brain features. The most popular confounding variables are sex and age, because the brain changes with age and there are known systematic neuro-morphometric mean sex differences. These differences could add noise to a statistical analysis if not accounted for or, even worse in the case of age, be mistaken for substrates of cognitive change when they are age effects that covary with cognitive development but are somewhat independent of cognitive development per se. However, many studies do not examine "age by intelligence" and "sex by intelligence" interactions because of the assumption that similar brain regions and networks support intelligent performance in both sexes and across developmental stages. Yet men and women might have distinguishable neural substrates for intelligence (Chekroud, Ward, Rosenberg, & Holmes, 2016; Haier, Jung, Yeo, Head, & Alkire, 2005; Ingalhalikar et al., 2014; Ritchie et al., 2018), and brain correlates of intelligence might change across the life span (Estrada, Ferrer, Román, Karama, & Colom, 2019; Román et al., 2018; Viviano, Raz, Yuan, & Damoiseaux, 2017; Wendelken et al., 2017).

There is still another possibility: there might be distinguishable neural substrates of intelligence for different individuals (regardless of their sex and age). Different brains may achieve closely similar intelligence levels through varied hard-wired and soft-wired routes, and group analyses mainly tend to detect overlapping regions across the individuals in a sample. Figure 3.9 illustrates the point. Martínez et al. 
(2015) matched 100 pairs of independent samples of participants for sex, age, and cognitive performance (fluid intelligence, crystallized intelligence, visuospatial intelligence, working memory capacity, executive updating, controlled attention, and processing speed). Afterwards, the reproducibility of the brain–cognition correlations across these samples was assessed. Figure 3.9 depicts a randomly selected case example of the low convergence observed for the 100 pairs of matched samples. As shown in Figure 3.5, this meager convergence might follow, at least in part, from the fact that the effect size of the association between intelligence and cortical thickness is too small to be detected in a stable fashion with relatively small sample sizes. In a condition of low power for detecting very small effect sizes,
Figure 3.9 Mean (A) and variability (B) of cortical thickness across the cortex in two groups of individuals (Sample A and Sample B) matched for sex, age, and cognitive performance. The regional maps are almost identical. Pearson's correlations between visuospatial intelligence and cortical thickness in these two groups are also shown (C). The maps are remarkably different. This happens even when the distributions of cortical thickness and intelligence scores are identical in both groups. The results might illustrate the fact that not all brains work the same way. (Haier, 2017)
results can be affected by random error/noise. This issue is further compounded by not using whole-brain thresholds to control for multiple comparisons. In such situations, no definitive conclusions can be drawn. Nevertheless, we think Euler’s (2018) evaluation of this study raises one thought-provoking possibility that stimulates refined research: the key finding was that although the subsamples were essentially identical in terms of their anatomical distribution of mean cortical thickness and variability, they showed no significant overlap (and even opposite effects) in some of their brain–ability relationships. [These findings] suggest the deeper and more intriguing possibility that cognitive ability might be structured somewhat differently in different individuals . . . imaging approaches that strongly emphasize inter-subject consistency may be looking for convergence that does not ultimately exist. (p. 101)
The finding highlighted by Martínez et al. (2015) is far from surprising. As demonstrated by Gratton et al. (2018), brain network organization arises from stable factors (genetics, structural connections, and long-term histories of coactivation among regions). Ongoing cognition or day-to-day variations are much less relevant: “the large subject-level effects in functional networks highlight the importance of individualized approaches for studying properties of brain organization.” These researchers considered data from 10 individuals scanned across 10 fMRI sessions. Within each session five
runs were completed: rest, visual coherence, semantic, memory, and motor. Variability of functional networks was analyzed, and the key result showed that the clustering of functional networks was explained by participants' identity: "individual variability accounts for the majority of variation between functional networks, with substantially smaller effects due to task or session . . . task states modified functional networks, but these modifications largely varied by individual . . . networks formed from co-activations during tasks strongly resemble functional networks from spontaneous firing at rest." These results emphasize the relevance of approaches aimed at the individual level for researching brain structure and function: "neglect of individual differences in brain networks may cause researchers to miss substantial and relevant portions of variability in the data." Increasingly sophisticated technical developments invite a change of perspective from the group to the individual level, as underscored by Dubois and Adolphs (2016): "while the importance of a fully personalized investigation of brain function has been recognized for several years, only recent technological advances now make it possible." In this regard, recent neuroscientific intelligence research points to this personalized approach. Consistent with the framework provided by Colom and Román (2018), Daugherty et al.'s (2020) research identified individuals showing low, moderate, and high responses to targeted cognitive interventions: "acute interventions aimed to promote cognitive ability appear to not be 'one size fits all' and individuals vary widely in response to the intervention (. . .) the type of multi-modal intervention activity did not differentiate between performance subgroups." Intrinsic characteristics of humans win the game.
Concluding Remarks

Because many brain regions participate in superficially disparate cognitive functions, a selective correspondence is difficult to establish. Moreover, cognitive profiles are heterogeneous: humans might display the same performance, but the way in which different underlying mental processes are involved may vary between them. We would expect similar heterogeneity in the association between intelligence and brain features. A direct implication of this lack of consistency in brain properties–psychological factors relationships is the difficulty of choosing among competing neuroimaging protocols. Large samples are welcome (e.g., Human Connectome Project, ENIGMA, UK Biobank, and so forth), but this current generalized tendency might mask relevant problematic issues regarding the intelligence–brain relationships. A large sample is not synonymous with a high-quality study. Pursuing mere statistical brute force might divert our attention
from the main research goal of finding reliable brain correlates of individual differences in human intelligence and leave us with large blind spots. Because of the dynamic nature of the human brain and the complexities of cognition, replication needs carefully matched samples and strictly comparable psychological scores, neuroimaging methods, and brain properties. But even in such instances, replication may not be achieved. Imaging methods for processing structural and functional MR data are systematically refined to increase biological plausibility. Advances will help to resolve the observed inconsistencies, but for now it is strongly recommended to replicate findings using different protocols on the same dataset, along with a precise clarification of each processing step. Also, if the computed analyses produce null findings, they should be reported. If researchers choose to move on to exploring the data in further ways to obtain significant findings, they must explicitly acknowledge the move (Button et al., 2013). We think one of the most formidable challenges for intelligence research involves thinking again about the best way of studying complex psychological factors at the biological level. Statistical analyses for identifying distinguishable brain profiles may be very useful for a better understanding of the brain network properties that account for inter-subject variability in cognitive performance. Moving from the group to the individual level may shed new light (Horien, Shen, Scheinost, & Constable, 2019). Finn et al. (2015) demonstrated that brain functional connectivity profiles act as a "fingerprint" that accurately identifies unique individuals. Moreover, these individualized connectivity profiles predicted, to some degree, intelligence scores. 
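The fingerprinting logic reported by Finn et al. (2015) – matching a person's connectivity profile from one session against a database of profiles from another session – reduces to an argmax over correlations. The sketch below uses synthetic profiles (a stable per-subject component plus session noise) purely for illustration; it is not their actual pipeline:

```python
import numpy as np

def identify(database, targets):
    """For each target profile, return the index of the most similar
    database profile (Pearson correlation, argmax over subjects)."""
    return [int(np.argmax([np.corrcoef(t, d)[0, 1] for d in database]))
            for t in targets]

# Synthetic data: 3 "subjects", each with a stable connectivity profile
# that reappears (plus noise) in two scanning sessions
rng = np.random.default_rng(1)
stable = rng.standard_normal((3, 50))             # per-subject component
session1 = stable + 0.3 * rng.standard_normal((3, 50))
session2 = stable + 0.3 * rng.standard_normal((3, 50))
predicted = identify(session1, session2)
# each session-2 profile should best match the same subject's session-1 profile
```

Identification succeeds here because the within-subject correlation (driven by the stable component) dominates the between-subject correlations, which is exactly the pattern Gratton et al. (2018) describe for real functional networks.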
As we move into the third decade of the twenty-first century, we should change how we address the key question of how varied brain features support the core psychological factor we usually name with the word "intelligence." Because of the integrative nature of this factor, it is safe to predict that general properties of the brain will be of paramount relevance for enhancing our understanding of the role of this organ in intelligent behavior. This is what Valizadeh, Liem, Mérillat, Hänggi, and Jäncke (2018) found after analyzing 191 individuals (from the Longitudinal Healthy Aging Brain) scanned three times across two years. Using only 11 large brain regions, they were able to identify individual brains with high accuracy: "even the usage of composite anatomical measures representing relatively large brain areas (total brain volume, total brain area, or mean cortical thickness) are so individual that they can be used for individual subject identification." Because no two individuals on Earth share the same genome, no two individuals will share the same brain (Sella & Barton, 2019). This fact cannot be ignored from now on and should be properly weighed when seeking reliable answers to the question of why some people are smarter than others.
References

Ad-Dab’bagh, Y., Lyttelton, O., Muehlboeck, J. S., Lepage, C., Einarson, D., Mok, K., . . . Evans, A. C. (2006). The CIVET image-processing environment: A fully automated comprehensive pipeline for anatomical neuroimaging research. Proceedings of the 12th Annual Meeting of the Organization for Human Brain Mapping (Vol. 2266). Florence, Italy.
Ashburner, J., & Friston, K. J. (2000). Voxel-based morphometry – The methods. NeuroImage, 11(6), 805–821.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Basten, U., Hilger, K., & Fiebach, C. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27.
Botvinik-Nezer, R., Holzmeister, F., Camerer, C. F., Dreber, A., Huber, J., Johannesson, M., . . . Schonberg, T. (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582, 84–88.
Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.
Caspi, A., & Moffitt, T. E. (2018). All for one and one for all: Mental disorders in one dimension. American Journal of Psychiatry, 175(9), 831–844.
Chekroud, A. M., Ward, E. J., Rosenberg, M. D., & Holmes, A. J. (2016). Patterns in the human brain mosaic discriminate males from females. Proceedings of the National Academy of Sciences, 113(14), E1968–E1968.
Chen, C. H., Fiecas, M., Gutierrez, E. D., Panizzon, M. S., Eyler, L. T., Vuoksimaa, E., . . . Kremen, W. S. (2013). Genetic topography of brain morphology. Proceedings of the National Academy of Sciences, 110(42), 17089–17094.
Chuderski, A. (2019). Even a single trivial binding of information is critical for fluid intelligence. Intelligence, 77, 101396.
Cole, M. W., Yarkoni, T., Repovš, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. Journal of Neuroscience, 32(26), 8988–8999.
Colom, R., Burgaleta, M., Román, F. J., Karama, S., Álvarez-Linera, J., Abad, F. J., . . . Haier, R. J. (2013). Neuroanatomic overlap between intelligence and cognitive factors: Morphometry methods provide support for the key role of the frontal lobes. NeuroImage, 72, 143–152. doi: 10.1016/j.neuroimage.2013.01.032.
Colom, R., Chuderski, A., & Santarnecchi, E. (2016). Bridge over troubled water: Commenting on Kovacs and Conway’s process overlap theory. Psychological Inquiry, 27(3), 181–189.
Colom, R., Haier, R. J., Head, K., Álvarez-Linera, J., Quiroga, M. Á., Shih, P. C., & Jung, R. E. (2009). Gray matter correlates of fluid, crystallized, and spatial intelligence: Testing the P-FIT model. Intelligence, 37(2), 124–135.
Colom, R., Jung, R. E., & Haier, R. J. (2006). Distributed brain sites for the g-factor of intelligence. NeuroImage, 31(3), 1359–1365.
Colom, R., Jung, R. E., & Haier, R. J. (2007). General intelligence and memory span: Evidence for a common neuroanatomic framework. Cognitive Neuropsychology, 24(8), 867–878.
Colom, R., Karama, S., Jung, R. E., & Haier, R. J. (2010). Human intelligence and brain networks. Dialogues in Clinical Neuroscience, 12(4), 489–501.
Colom, R., & Román, F. (2018). Enhancing intelligence: From the group to the individual. Journal of Intelligence, 6(1), 11.
Colom, R., & Thompson, P. M. (2011). Understanding human intelligence by imaging the brain. In T. Chamorro-Premuzic, S. von Stumm, & A. Furnham (eds.), The Wiley-Blackwell handbook of individual differences (pp. 330–352). Hoboken, NJ: Wiley-Blackwell.
Daugherty, A. M., Sutton, B. P., Hillman, C., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2020). Individual differences in the neurobiology of fluid intelligence predict responsiveness to training: Evidence from a comprehensive cognitive, mindfulness meditation, and aerobic exercise intervention. Trends in Neuroscience and Education, 18, 100123.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211.
Dubois, J., & Adolphs, R. (2016). Building a science of individual differences from fMRI. Trends in Cognitive Sciences, 20(6), 425–443.
Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1756), 20170284.
Estrada, E., Ferrer, E., Román, F. J., Karama, S., & Colom, R. (2019). Time-lagged associations between cognitive and cortical development from childhood to early adulthood. Developmental Psychology, 55(6), 1338–1352.
Euler, M. J. (2018). Intelligence and uncertainty: Implications of hierarchical predictive processing for the neuroscience of cognitive ability. Neuroscience & Biobehavioral Reviews, 94, 93–112.
Evans, A. C., & Brain Development Cooperative Group (2006). The NIH MRI study of normal brain development. NeuroImage, 30(1), 184–202.
Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., . . . Constable, R. T. (2015). Functional connectome fingerprint: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18(11), 1664–1671.
Fjell, A. M., Westlye, L. T., Amlien, I., Tamnes, C. K., Grydeland, H., Engvig, A., . . . Walhovd, K. B. (2015). High-expanding cortical regions in human development and evolution are related to higher intellectual abilities. Cerebral Cortex, 25(1), 26–34.
Frost, M. A., & Goebel, R. (2012). Measuring structural–functional correspondence: Spatial variability of specialised brain regions after macro-anatomical alignment. NeuroImage, 59(2), 1369–1381.
Gignac, G. E., & Bates, T. C. (2017). Brain volume and intelligence: The moderating role of intelligence measurement quality. Intelligence, 64, 18–29. doi: 10.1016/j.intell.2017.06.004.
Gratton, C., Laumann, T. O., Nielsen, A. N., Greene, D. J., Gordon, E. M., Gilmore, A. W., . . . Petersen, S. E. (2018). Functional brain networks are dominated by stable group and individual factors, not cognitive or daily variation. Neuron, 98(2), 439–452.
Grotzinger, A. D., Cheung, A. K., Patterson, M. W., Harden, K. P., & Tucker-Drob, E. M. (2019). Genetic and environmental links between general factors of psychopathology and cognitive ability in early childhood. Clinical Psychological Science, 7(3), 430–444.
Haier, R. J. (2017). The neuroscience of intelligence. Cambridge University Press.
Haier, R. J., Colom, R., Schroeder, D., Condon, C., Tang, C., Eaves, E., & Head, K. (2009). Gray matter and intelligence factors: Is there a neuro-g? Intelligence, 37(2), 136–144.
Haier, R. J., Jung, R. E., Yeo, R. A., Head, K., & Alkire, M. T. (2005). The neuroanatomy of general intelligence: Sex matters. NeuroImage, 25(1), 320–327.
Hearne, L. J., Mattingley, J. B., & Cocchi, L. (2016). Functional brain networks related to individual differences in human intelligence at rest. Scientific Reports, 6, 32328. doi: 10.1038/srep32328.
Hill, W. D., Harris, S. E., & Deary, I. J. (2019). What genome-wide association studies reveal about the association between intelligence and mental health. Current Opinion in Psychology, 27, 25–30. doi: 10.1016/j.copsyc.2018.07.007.
Horien, C., Shen, X., Scheinost, D., & Constable, R. T. (2019). The individual functional connectome is unique and stable over months to years. NeuroImage, 189, 676–687. doi: 10.1016/j.neuroimage.2019.02.002.
Hunt, E. B. (2011). Human intelligence. Cambridge University Press.
Im, K., Lee, J. M., Lyttelton, O., Kim, S. H., Evans, A. C., & Kim, S. I. (2008). Brain size and cortical structure in the adult human brain. Cerebral Cortex, 18(9), 2181–2191.
Ingalhalikar, M., Smith, A., Parker, D., Satterthwaite, T. D., Elliott, M. A., Ruparel, K., . . . Verma, R. (2014). Sex differences in the structural connectome of the human brain. Proceedings of the National Academy of Sciences, 111(2), 823–828.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Johnson, W., & Bouchard, T. (2005). The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized. Intelligence, 33, 393–416.
Johnson, W., Bouchard Jr., T. J., Krueger, R. F., McGue, M., & Gottesman, I. I. (2004). Just one g: Consistent results from three batteries. Intelligence, 32(1), 95–107.
Johnson, W., te Nijenhuis, J., & Bouchard, T. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36(1), 81–95.
Jung, R. E., & Haier, R. J. (2007). The parieto-frontal integration theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–187.
Karama, S., Colom, R., Johnson, W., Deary, I. J., Haier, R., Waber, D. P., . . . Brain Development Cooperative Group (2011). Cortical thickness correlates of
specific cognitive performance accounted for by the general factor of intelligence in healthy children aged 6 to 18. NeuroImage, 55(4), 1443–1453.
Kim, J. S., Singh, V., Lee, J. K., Lerch, J., Ad-Dab’bagh, Y., MacDonald, D., . . . Evans, A. C. (2005). Automated 3-D extraction and evaluation of the inner and outer cortical surfaces using a Laplacian map and partial volume effect classification. NeuroImage, 27(1), 210–221.
Kruschwitz, J. D., Waller, L., Daedelow, L. S., Walter, H., & Veer, I. M. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the Human Connectome Project 1200 data set. NeuroImage, 171, 323–331. doi: 10.1016/j.neuroimage.2018.01.018.
Langer, N., Pedroni, A., Gianotti, L. R., Hänggi, J., Knoch, D., & Jäncke, L. (2012). Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6), 1393–1406.
MacDonald, D., Kabani, N., Avis, D., & Evans, A. C. (2000). Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI. NeuroImage, 12(3), 340–356.
Martínez, K., Madsen, S. K., Joshi, A. A., Joshi, S. H., Roman, F. J., Villalon-Reina, J., . . . Colom, R. (2015). Reproducibility of brain–cognition relationships using three cortical surface-based protocols: An exhaustive analysis based on cortical thickness. Human Brain Mapping, 36(8), 3227–3245.
Mechelli, A., Price, C. J., Friston, K. J., & Ashburner, J. (2005). Voxel-based morphometry of the human brain: Methods and applications. Current Medical Imaging Reviews, 1(2), 105–113.
Pakkenberg, B., & Gundersen, H. J. G. (1997). Neocortical neuron number in humans: Effect of sex and age. Journal of Comparative Neurology, 384(2), 312–320.
Panizzon, M. S., Fennema-Notestine, C., Eyler, L. T., Jernigan, T. L., Prom-Wormley, E., Neale, M., . . . Kremen, W. S. (2009). Distinct genetic influences on cortical surface area and cortical thickness. Cerebral Cortex, 19(11), 2728–2735.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews, 57, 411–432.
Pineda-Pardo, J. A., Martínez, K., Román, F. J., & Colom, R. (2016). Structural efficiency within a parieto-frontal network and cognitive differences. Intelligence, 54, 105–116. doi: 10.1016/j.intell.2015.12.002.
Plomin, R., DeFries, J. C., Knopik, V. S., & Neiderhiser, J. M. (2016). Top 10 replicated findings from behavioral genetics. Perspectives on Psychological Science, 11(1), 3–23.
Ponsoda, V., Martínez, K., Pineda-Pardo, J. A., Abad, F. J., Olea, J., Román, F. J., . . . Colom, R. (2017). Structural brain connectivity and cognitive ability differences: A multivariate distance matrix regression analysis. Human Brain Mapping, 38(2), 803–816.
Price, C. J. (2018). The evolution of cognitive models: From neuropsychology to neuroimaging and back. Cortex, 107, 37–49.
Rakic, P. (1988). Specification of cerebral cortical areas. Science, 241(4862), 170–176.
Ritchie, S. J., Cox, S. R., Shen, X., Lombardo, M. V., Reus, L. M., Alloza, C., . . . Deary, I. J. (2018). Sex differences in the adult human brain: Evidence from 5216 UK Biobank participants. Cerebral Cortex, 28(8), 2959–2975.
Román, F. J., Abad, F. J., Escorial, S., Burgaleta, M., Martínez, K., Álvarez-Linera, J., . . . Colom, R. (2014). Reversed hierarchy in the brain for general and specific cognitive abilities: A morphometric analysis. Human Brain Mapping, 35(8), 3805–3818.
Román, F. J., Morillo, D., Estrada, E., Escorial, S., Karama, S., & Colom, R. (2018). Brain–intelligence relationships across childhood and adolescence: A latent-variable approach. Intelligence, 68, 21–29. doi: 10.1016/j.intell.2018.02.006.
Roth, G., & Dicke, U. (2005). Evolution of the brain and intelligence. Trends in Cognitive Sciences, 9(5), 250–257.
Santarnecchi, E., Rossi, S., & Rossi, A. (2015). The smarter, the stronger: Intelligence level correlates with brain resilience to systematic insults. Cortex, 64, 293–309. doi: 10.1016/j.cortex.2014.11.005.
Saxe, G. N., Calderone, D., & Morales, L. J. (2018). Brain entropy and human intelligence: A resting-state fMRI study. PLoS One, 13(2), e0191582.
Schneider, W. J., & McGrew, K. S. (2018). The Cattell–Horn–Carroll theory of cognitive abilities. In D. P. Flanagan, & E. M. McDonough (eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 73–163). New York: The Guilford Press.
Sella, G., & Barton, N. H. (2019). Thinking about the evolution of complex traits in the era of genome-wide association studies. Annual Review of Genomics and Human Genetics, 20, 461–493.
Thompson, P. M., Hayashi, K. M., Sowell, E. R., Gogtay, N., Giedd, J. N., Rapoport, J. L., . . . Toga, A. W. (2004). Mapping cortical change in Alzheimer’s disease, brain development, and schizophrenia. NeuroImage, 23, S2–S18. doi: 10.1016/j.neuroimage.2004.07.071.
Thompson, P. M., Jahanshad, N., Ching, C. R., Salminen, L. E., Thomopoulos, S. I., Bright, J., . . . for the ENIGMA Consortium (2020). ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries. Translational Psychiatry, 10(1), 1–28.
Valizadeh, S. A., Liem, F., Mérillat, S., Hänggi, J., & Jäncke, L. (2018). Identification of individual subjects on the basis of their brain anatomical features. Scientific Reports, 8(1), 1–9.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624.
Viviano, R. P., Raz, N., Yuan, P., & Damoiseaux, J. S. (2017). Associations between dynamic functional connectivity and age, metabolic risk, and cognitive performance. Neurobiology of Aging, 59, 135–143. doi: 10.1016/j.neurobiolaging.2017.08.003.
Vuoksimaa, E., Panizzon, M. S., Chen, C. H., Fiecas, M., Eyler, L. T., Fennema-Notestine, C., . . . Kremen, W. S. (2015). The genetic association between neocortical volume and general cognitive ability is driven by global surface area rather than thickness. Cerebral Cortex, 25(8), 2127–2137.
Imaging the Intelligence of Humans
Wendelken, C., Ferrer, E., Ghetti, S., Bailey, S. K., Cutting, L., & Bunge, S. A. (2017). Frontoparietal structural connectivity in childhood predicts development of functional connectivity and reasoning ability: A large-scale longitudinal investigation. Journal of Neuroscience, 37(35), 8549–8558. Winkler, A. M., Kochunov, P., Blangero, J., Almasy, L., Zilles, K., Fox, P. T., . . . Glahn, D. C. (2010). Cortical thickness or grey matter volume? The importance of selecting the phenotype for imaging genetics studies. NeuroImage, 53(3), 1135–1146.
69
4 Research Consortia and Large-Scale Data Repositories for Studying Intelligence Budhachandra Khundrakpam, Jean-Baptiste Poline, and Alan C. Evans
Neuroimaging of Intelligence
The first neuroimaging studies of intelligence were conducted with positron emission tomography (PET) (Haier et al., 1988). PET was expensive and invasive; access to neuroimaging broadened considerably once Magnetic Resonance Imaging (MRI) became widely available around the year 2000. The advent of advanced MRI methods enabled researchers to investigate localized (region-level) associations between brain measures and measures of intelligence in healthy individuals (Gray & Thompson, 2004; Luders, Narr, Thompson, & Toga, 2009). At the whole-brain level, MRI-based studies have reported a positive association (r = .40 to .51) between some measures of intelligence and brain size (Andreasen et al., 1993; McDaniel, 2005). Several studies at the voxel and regional levels have also demonstrated positive correlations between morphometry and intelligence in brain regions that are especially relevant to higher cognitive functions, including frontal, temporal, and parietal cortices, the hippocampus, and the cerebellum (Andreasen et al., 1993; Burgaleta, Johnson, Waber, Colom, & Karama, 2014; Colom et al., 2009; Karama et al., 2011; Narr et al., 2007; Shaw et al., 2006). More recently, neuroimaging studies have revealed large-scale structural and functional brain networks as potential neural substrates of intelligence (see the review by Jung & Haier, 2007; Barbey et al., 2012; Barbey, Colom, Paul, & Grafman, 2014; Colom, Karama, Jung, & Haier, 2010; Khundrakpam et al., 2017; Li et al., 2009; Sripada, Angstadt, Rutherford, & Taxali, 2019). The use of neuroimaging for studying intelligence has increased tremendously in recent years (detailed in Haier (2017), The Neuroscience of Intelligence). From initial explorations comprising a handful of subjects, recent studies have been conducted with very large sample sizes.
For example, a recent study using the UK Biobank investigated the brain correlates of longitudinal changes in a measure of fluid intelligence across three time points in 185,317 subjects (Kievit, Fuhrmann, Borgeest, Simpson-Kent, & Henson, 2018). As the number of subjects increases, so does the statistical power of a study;
however, the required number of subjects depends partly on the effect size of the research question. Many of the large-scale data repositories and research consortia have been established partly because of inconsistencies among studies with small samples (Turner, Paul, Miller, & Barbey, 2018). Another prominent motivation for large-scale data repositories is the fact that studying many potential factors, such as the genetic effects on brain measures for a trait like intelligence, leads to multiple-comparison problems, thus requiring hundreds, and sometimes many more, subjects (Cox, Ritchie, Fawns-Ritchie, Tucker-Drob, & Deary, 2019).
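The dependence of required sample size on effect size can be made concrete with the standard Fisher z approximation for detecting a correlation. The sketch below uses illustrative values, not figures from any cited study: a brain-size-sized correlation of r ≈ .40 is detectable with a few dozen subjects, whereas an effect the size of a single genetic variant (here taken as r ≈ .05 for illustration) demands thousands.

```python
from math import atanh, ceil

# Normal quantiles for a two-sided alpha = .05 test at 80% power
Z_ALPHA, Z_BETA = 1.96, 0.84

def n_for_correlation(r):
    """Approximate N needed to detect a population correlation r,
    via the Fisher z transformation (var(z) ~ 1 / (N - 3))."""
    return ceil(((Z_ALPHA + Z_BETA) / atanh(r)) ** 2 + 3)

print(n_for_correlation(0.40))  # brain-size-sized effect -> 47 subjects
print(n_for_correlation(0.05))  # single-variant-sized effect -> 3134 subjects
```

With hundreds of tests run in parallel (voxels, SNPs), the significance threshold tightens further, pushing the required samples toward the consortium scale described here.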
Large-Scale Data Repositories and Study of Intelligence
The various datasets that have been used in the study of intelligence can be categorized by the level of planning with which they were acquired. The first category comprises planned datasets such as the NIH MRI Study of Normal Brain Development (NIHPD), Pediatric Imaging, Neurocognition and Genetics (PING), Philadelphia Neurodevelopmental Cohort (PNC), Healthy Brain Network (HBN), Human Connectome Project (HCP), Lothian Birth Cohort, IMAGEN, Adolescent Brain Cognitive Development (ABCD), and UK Biobank datasets (see Table 4.1 for details). These datasets arose from carefully planned studies with standardized protocols, in most cases across several sites. Although rarer, there are also single-site planned studies, examples being the Philadelphia Neurodevelopmental Cohort (PNC) dataset (Satterthwaite et al., 2016) and the IARPA SHARP Program INSIGHT dataset (Daugherty et al., 2020; Zwilling et al., 2019). Such planned datasets have resulted in major advances in understanding the neural correlates of intelligence. One prominent example is the NIHPD dataset, which was, at the time, the most representative imaging sample of typically developing children and adolescents in the US (ages 5–22 years) and spurred a series of findings related to intelligence. For example, using longitudinal MRI scans of 307 subjects (total number of scans = 629), Shaw et al. (2006) reported that children with higher intelligence demonstrated more dynamic cortical trajectories than children with lower intelligence. In another study, using MRI scans from 216 subjects, Karama et al. (2009) reported positive associations between a general cognitive ability factor (an estimate of g) and cortical thickness in several multimodal association areas, with a follow-up study demonstrating that the cortical thickness correlates of cognitive performance on complex tasks were well captured by g (Karama et al., 2011).
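Analyses of this kind can be sketched in miniature. The toy example below uses synthetic data, not the NIHPD pipeline: it correlates a simulated g score with cortical thickness at many "vertices" and applies a Benjamini–Hochberg false-discovery-rate correction, the kind of multiple-comparison control that vertex-wise thickness studies require.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subj, n_vert = 200, 50
g = rng.normal(size=n_subj)                # latent g score per subject
thick = rng.normal(size=(n_subj, n_vert))  # cortical thickness; most vertices are null
thick[:, :10] += 0.5 * g[:, None]          # first 10 vertices truly track g

# Per-vertex Pearson correlation of thickness with g
p = np.array([pearsonr(g, thick[:, v])[1] for v in range(n_vert)])

# Benjamini-Hochberg FDR correction at q = .05
q = 0.05
order = np.argsort(p)
thresh = q * np.arange(1, n_vert + 1) / n_vert
passed = p[order] <= thresh
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
sig = np.zeros(n_vert, bool)
sig[order[:k]] = True
print(f"significant vertices: {int(sig.sum())}")
```

With a strong simulated effect, nearly all of the ten true "signal" vertices survive correction while the null vertices are largely suppressed; real vertex-wise effects are far smaller, which is one reason sample size matters so much.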
Going beyond associations between intelligence and cortical thickness in localized brain regions, using 586 longitudinal MRI scans of children, Khundrakpam et al. (2017) showed distinct anatomical coupling among widely distributed cortical regions, possibly reflecting a more efficient organization in children with high verbal intelligence. In terms of functional connectivity, Sripada et al. (2019) utilized data from the
Table 4.1 Details of large-scale datasets and research consortia with concurrent measures of neuroimaging and intelligence (and/or related) scores and, in some cases, genetic data. Although rare, some large-scale datasets were collected at single sites, while the majority were collected from multiple sites. Note that the list is not exhaustive and mostly concentrates on developmental datasets.

ABCD. Sample size: ~12,000 (at visit 1); Sites: 21; Brain measures: sMRI, fMRI, dMRI; IQ and related tests/measures: Crystallized composite and fluid composite; Genetic data: Yes; Description: Longitudinal study of 9–10 year olds, to be scanned every year for the next 10 years (Casey et al., 2018).

NIHPD. Sample size: ~550 (~3 visits); Sites: 6; Brain measures: sMRI, DTI; IQ and related tests/measures: WASI, WISC-III; Genetic data: No; Description: Longitudinal study of brain development (ages 5–20 years) (Evans & Brain Development Cooperative Group, 2006).

PING. Sample size: ~1,400; Sites: 10; Brain measures: sMRI, fMRI, dMRI; IQ and related tests/measures: 8 NTCB subtests including Oral Reading Recognition test, Picture Vocabulary test; Genetic data: Yes; Description: Cross-sectional study of brain development (ages 3–20 years) (Jernigan et al., 2016).

PNC. Sample size: ~1,400; Sites: 1; Brain measures: sMRI, fMRI, DTI; IQ and related tests/measures: CNB tests for domains including executive control, complex cognition, social cognition; Genetic data: Yes; Description: Cross-sectional study of brain development (ages 8–21 years) (Satterthwaite et al., 2016).

HBN. Sample size: ~660; Sites: 3; Brain measures: sMRI, fMRI, dMRI, EEG; IQ and related tests/measures: WISC-V, WAIS-IV, WASI; Genetic data: Yes; Description: Creation of a biobank of 10,000 participants (ages 5–21 years) (Alexander et al., 2017).

ABIDE. Sample size: ~2,200; Sites: 24; Brain measures: sMRI, fMRI; IQ and related tests/measures: PIQ, VIQ, FSIQ; Genetic data: No; Description: Agglomerated dataset of individuals with ASD and healthy controls (ages 6–74 years) (Di Martino et al., 2014, 2017).

ENIGMA Consortium. Sample size: ~30,000; Sites: 200; Brain measures: sMRI, DTI; IQ and related tests/measures: PIQ, VIQ, FSIQ; Genetic data: Yes; Description: Consortium for large-scale collaborative analyses of neuroimaging and genetic data across the lifespan (Thompson et al., 2017).

UK Biobank. Sample size: ~500,000 total (100,000 with imaging); Sites: 22; Brain measures: sMRI, fMRI, dMRI; IQ and related tests/measures: 7 cognitive tests including Verbal Numerical Reasoning (fluid intelligence); Genetic data: Yes; Description: Creation of a biobank of ~500,000 participants (ages 40–69 years) (Sudlow et al., 2015).

Lothian Birth Cohort 1936. Sample size: ~1,091; Sites: 1; Brain measures: sMRI, DTI; IQ and related tests/measures: Moray House Test of general cognitive ability; Genetic data: Yes; Description: Follow-up cohort study of participants in both youth (~11 years) and older age (~70 years) (Deary et al., 2007).

IMAGEN Consortium. Sample size: ~2,000; Sites: 4; Brain measures: sMRI, fMRI; IQ and related tests/measures: PIQ, VIQ; Genetic data: Yes; Description: Longitudinal genetic-neuroimaging study of 14 year-old adolescents (follow-up at 16 years) (Schumann et al., 2010).

HCP. Sample size: ~1,200; Sites: 1; Brain measures: sMRI, fMRI, dMRI; IQ and related tests/measures: NTCB tests including Oral Reading Recognition, Picture Vocabulary; Genetic data: Yes; Description: Cross-sectional study of brain connectivity in young adults (ages 22–35 years) (Van Essen et al., 2013).

Abbreviations: ABCD = Adolescent Brain Cognitive Development, NIHPD = NIH MRI Study of Normal Brain Development, PING = Pediatric Imaging, Neurocognition and Genetics, PNC = Philadelphia Neurodevelopmental Cohort, HBN = Healthy Brain Network, ABIDE = Autism Brain Imaging Data Exchange, ENIGMA = Enhancing NeuroImaging Genetics through Meta-Analysis, HCP = Human Connectome Project, sMRI = Structural Magnetic Resonance Imaging, fMRI = Functional Magnetic Resonance Imaging, dMRI = Diffusion Magnetic Resonance Imaging, DTI = Diffusion Tensor Imaging, EEG = Electroencephalography, NTCB = NIH ToolBox Cognitive Battery, CNB = Computerized Neurocognitive Battery, PIQ = Performance IQ, VIQ = Verbal IQ, FSIQ = Full-Scale IQ, WISC = Wechsler Intelligence Scale for Children, WASI = Wechsler Abbreviated Scale of Intelligence, WAIS = Wechsler Adult Intelligence Scale.
HCP and ABCD datasets, and identified novel mechanisms of general intelligence involving widespread functional brain networks. In particular, they showed the separation of the fronto-parietal network and the default-mode network to be the major locus of individual variability in general intelligence. Several studies have also been conducted using other datasets, including Burgaleta et al. (2014), Cox et al. (2019), Dubois, Galdi, Paul, & Adolphs (2018), Karama et al. (2014), Kievit et al. (2018), Xiao, Stephen, Wilson, Calhoun, and Wang (2019), and Zhao, Klein, Castellanos, and Milham (2019). Finally, the IARPA INSIGHT dataset represents the largest intervention trial conducted to date investigating the efficacy of a comprehensive, 16-week intervention protocol designed to enhance fluid intelligence (n = 400), examining skill-based cognitive training, high-intensity interval fitness training, non-invasive brain stimulation (HD-tDCS), and mindfulness meditation. Recent findings from the INSIGHT project establish the importance of individual differences in structural brain volume for predicting training response and transfer to measures of fluid intelligence (Daugherty et al., 2020), and further demonstrate the potential of multi-modal interventions (incorporating physical fitness and cognitive training) to enhance fluid intelligence and decision-making (Zwilling et al., 2019). The next category comprises unplanned, agglomerative datasets such as the Autism Brain Imaging Data Exchange (ABIDE) dataset (Di Martino et al., 2014), the ADHD-200 Consortium (ADHD-200 Consortium, 2012), and the 1000 Functional Connectomes Project (Biswal et al., 2010), which comprise several datasets with compatible sample and imaging characteristics (see Table 4.1 for details). These datasets have revealed complex interactions between intelligence measures, brain measures, and clinical states.
For example, using resting-state fMRI data of 964 subjects from the ABIDE dataset, Nielsen et al. (2013) built models to classify autism from controls and showed that verbal IQ was significantly related to the classification score. In another study, Bedford et al. (2020) used structural MRI data from 1,327 subjects (including data from the ABIDE dataset) to demonstrate the influence of full-scale IQ (FSIQ) on neuroanatomical heterogeneity in autism. Interestingly, even without the use of neuroimaging, personal characteristic data (PCD) such as age, sex, handedness, and IQ from these large-scale datasets have facilitated the growth of machine learning models for automated diagnosis of autism (Parikh, Li, & He, 2019) and ADHD (Brown et al., 2012). Of particular interest is the study by Parikh et al. (2019), who showed that, for classification models of autism, full-scale IQ, followed by verbal IQ and performance IQ, had better predictive power than age, sex, and handedness. The last category comprises meta-analytic efforts such as the Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) consortium and BrainMap, in which data stay with the individual sites and are not collected at a single site. The usual approach is to pool findings from smaller studies in order to help reach
consensus for inconsistent findings. A prominent example is BrainMap's coordinate-based activation likelihood estimation meta-analysis of neuroimaging data (Eickhoff et al., 2009), which allows statistical combination of coordinate-based analyses across studies. Using this meta-analytic approach on 12 structural and 16 functional neuroimaging studies in humans, Basten, Hilger, and Fiebach (2015) performed an empirical test of the Parieto-Frontal Integration Theory of Intelligence (P-FIT) model (Jung & Haier, 2007), and suggested an updated P-FIT model of the neural bases of intelligence, extending earlier models to include the posterior cingulate cortex and subcortical structures. In another study, Santarnecchi, Emmendorfer, and Pascual-Leone (2017) performed a quantitative meta-analysis of 47 fMRI and PET studies of fluid intelligence in humans and demonstrated a network-centered perspective on problem-solving related to fluid intelligence. Another prominent example is the ENIGMA project, in which several datasets are analyzed (using standardized pipelines) at individual sites and then pooled for a meta-analysis of the shared results from each site (Thompson et al., 2014, 2017). By sharing data from 15,782 subjects, researchers of the ENIGMA project identified three genetic loci linked with intracranial volume (ICV). More interestingly, a variant in one of these genetic sequences (the HMGA2 gene) was associated with IQ scores (Stein et al., 2012).
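The statistical core of this site-level pooling can be illustrated in a few lines. The sketch below uses hypothetical site summaries and simple fixed-effect inverse-variance weighting on the Fisher-z scale (ENIGMA's actual pipelines are considerably more elaborate); the point is that an overall brain–intelligence correlation can be estimated without any raw data leaving the sites.

```python
import math

# Hypothetical site-level summaries: (correlation, sample size) per site;
# only these summaries are shared, never the raw data.
site_results = [(0.28, 120), (0.35, 80), (0.22, 300), (0.31, 150)]

# Fisher z transform; var(z) ~ 1 / (n - 3), so each site is weighted by n - 3
zw = [(math.atanh(r), n - 3) for r, n in site_results]
total_w = sum(w for _, w in zw)
z_pooled = sum(z * w for z, w in zw) / total_w
se = 1 / math.sqrt(total_w)

# Back-transform the pooled estimate and its 95% confidence interval
r_pooled = math.tanh(z_pooled)
ci = (math.tanh(z_pooled - 1.96 * se), math.tanh(z_pooled + 1.96 * se))
print(f"pooled r = {r_pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note how the largest site dominates the pooled estimate; random-effects variants add a between-site variance term when sites are heterogeneous.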
Opportunities and Challenges
The most significant benefit of these large-scale data repositories and consortia is the ability to study the imaging genetics of intelligence. Imaging genetics has always been difficult with small samples; however, the advent of big datasets (such as UK Biobank) and big consortia (such as ENIGMA) has enabled researchers to overcome these limitations. One of the early applications came from the ENIGMA project, which showed a link between the HMGA2 gene and an IQ measure (Stein et al., 2012). Another example of the utility of large-scale consortia came from a study by Huguet et al. (2018). Although copy number variants (CNVs) are present in ~15% of individuals with neurodevelopmental disorders, individual association studies have been difficult because any particular CNV occurs rarely. Huguet et al. used data from 2,090 adolescents from the IMAGEN study and 1,983 children and parents from the Saguenay Youth Study to investigate the effect sizes of recurrent and non-recurrent CNVs on IQ. They observed that, for rare deletions, the size and number of genes affected IQ, such that each deleted gene was linked to a decrease in PIQ of .67 ± .19 (mean ± standard error) points. Genome-wide association studies (GWAS) of intelligence have also become possible with the advent of large-scale datasets and consortia. In one such study, using data from 78,308 individuals, Sniekers et al. (2017) performed a genome-wide meta-analysis of intelligence, identifying 336 associated
single-nucleotide polymorphisms (SNPs) in 18 genomic loci. By including data from the UK Biobank, a follow-up study performed a GWAS meta-analysis on 269,867 individuals and identified 206 genome-wide significant regions (Savage et al., 2018). More interestingly, the GWAS analysis yielded genome-wide polygenic scores (GPS) of IQ that predicted ~4% of the variance in intelligence in independent samples (Savage et al., 2018). Going further, recent studies have explored the link between the genome-wide polygenic score (GPS) of IQ and brain structure. Using the ABCD dataset (N = 11,875), Loughnan et al. (2019) demonstrated a significant association between total cortical surface area and GPS-IQ, with total surface area explaining .3% of the variance in GPS-IQ. With increased sample sizes, future studies will likely reveal the neural correlates of GPS-IQ at regional and network levels, but will require deep phenotyping to enable informed interpretation. For a detailed discussion of this emerging new genetics of intelligence, the reader is referred to a review article by Plomin and Von Stumm (2018). There have also been major advances in the development of new analysis techniques and methods because of the availability of these large-scale datasets. For example, the ABCD consortium recently organized the ABCD Neurocognitive Prediction Challenge 2019, in which researchers were invited to develop methods for predicting fluid intelligence from T1-weighted MRI data. Out of a total of 8,500 subjects aged 9–10 years, data from 4,100 children were provided for training, and the accuracy of the models was then tested on the predicted fluid intelligence scores of 4,400 further children. The winning team developed several regression and deep learning methods to predict fluid intelligence and found kernel ridge regression to be the best prediction model, with a mean-squared error of 69.72 on the validation set and 92.12 on the test set (Mihalik et al., 2019).
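The shape of such a prediction pipeline can be sketched as follows. The snippet below fits a kernel ridge regression to synthetic "subjects × brain features" data with a linear kernel; the winning entry used probabilistic tissue segmentations and tuned kernels, and the error values here bear no relation to the challenge scores.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_subj, n_feat = 500, 100                 # subjects x simulated gray-matter features
X = rng.normal(size=(n_subj, n_feat))
w = rng.normal(size=n_feat)
# Simulated "fluid intelligence" score: linear brain signal plus noise
y = X @ w / np.sqrt(n_feat) + rng.normal(size=n_subj)

# Held-out evaluation, as in the challenge's train/test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
krr = KernelRidge(kernel="linear", alpha=1.0).fit(X_tr, y_tr)
mse = mean_squared_error(y_te, krr.predict(X_te))
print(f"held-out MSE: {mse:.2f}")
```

Kernel ridge regression scales with the number of subjects rather than the number of features, which is one reason it is attractive when imaging features vastly outnumber participants.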
Meanwhile, these large neuroimaging datasets, along with personal characteristic data (PCD) including IQ, have led to the development of methods for enhanced diagnosis of brain disorders such as autism and ADHD (Ghiassian, Greiner, Jin, & Brown, 2016; Parikh et al., 2019). Interestingly, studies have also reported the critical importance of personal characteristics such as measures of IQ (in addition to age, sex, and handedness) for enhanced diagnosis of brain disorders. An elegant example comes from the ADHD-200 Global Competition (http://fcon_1000.projects.nitrc.org/indi/adhd200/results.html), in which teams were invited to develop diagnostic classification tools for ADHD based on neuroimaging and PCD data from the ADHD-200 consortium (N = 973; subjects with ADHD and healthy controls). The winning team (Brown et al., 2012) showed that using subjects' PCD (including age, gender, handedness, site, performance IQ, verbal IQ, and full-scale IQ) without neuroimaging data as input features resulted in the diagnostic classifier with the highest accuracy. The study illustrates the critical importance of accounting for variability in personal characteristics (including IQ) in imaging-based diagnostic research.
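The flavor of a PCD-only classifier can be conveyed with simulated data. The sketch below invents a modest group IQ gap and uses plain logistic regression rather than Brown et al.'s actual pipeline; it simply shows that personal characteristics alone can carry diagnostic signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 600
# Hypothetical PCD: age, sex, handedness, and three IQ scores (PIQ, VIQ, FSIQ)
age = rng.uniform(7, 18, n)
sex = rng.integers(0, 2, n)
hand = rng.integers(0, 2, n)
dx = rng.integers(0, 2, n)                            # 1 = simulated patient group
iq = rng.normal(100, 15, (n, 3)) - 6 * dx[:, None]    # invented 6-point group gap
X = np.column_stack([age, sex, hand, iq])

# 5-fold cross-validated accuracy from PCD alone, no imaging features
acc = cross_val_score(LogisticRegression(max_iter=1000), X, dx, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Above-chance accuracy here comes entirely from the simulated IQ gap, which mirrors the competition's cautionary lesson: any classifier mixing imaging with PCD must show that its imaging features add signal beyond these demographics.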
These large-scale datasets and consortia come with several challenges. One prominent challenge is the increased variability in MRI data that arises when data are collected at multiple sites. This variability may be due to scanner and/or MRI protocol specifications. Such concerns have been raised in studies utilizing agglomerated datasets. For example, studies using the ABIDE dataset have shown that fMRI features predictive of autism have limited generalizability across sites (King et al., 2019; Nielsen et al., 2013). Similar concerns apply to studies of intelligence based on large-scale, multi-site datasets. Several research groups are currently working to address this issue. For instance, the ABCD consortium adapted the empirical Bayes approach "ComBat" (Fortin et al., 2018; Johnson, Li, & Rabinovic, 2007) to harmonize scanner-related differences (Nielson et al., 2018). Efforts are also underway to incorporate data-driven methods for quantifying dataset bias in agglomerated datasets (Wachinger, Becker, & Rieckmann, 2018). Such studies could also provide guidelines on how to merge data from different sources while limiting the introduction of unwanted variation. Note that these variations may also arise from different population structures. The datasets and consortia cited here are not exhaustive; the reader is referred to recent articles on large-scale datasets and questions of data sharing (Book, Stevens, Assaf, Glahn, & Pearlson, 2016; Craddock et al., 2013; Mennes, Biswal, Castellanos, & Milham, 2013; Poldrack & Gorgolewski, 2014; Turner, 2014).
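A stripped-down version of what such harmonization does is shown below: per-site location and scale adjustment toward the pooled statistics. ComBat proper additionally preserves biological covariates and shrinks the per-site estimates with an empirical Bayes prior; this sketch omits both for clarity.

```python
import numpy as np

def harmonize_location_scale(X, site):
    """Simplified site harmonization: remove each site's mean offset and
    rescale each feature to the pooled standard deviation. This is the
    location/scale core of ComBat, without the empirical Bayes shrinkage."""
    X = np.asarray(X, float)
    out = np.empty_like(X)
    grand_mean = X.mean(axis=0)
    pooled_sd = X.std(axis=0)
    for s in np.unique(site):
        m = site == s
        mu, sd = X[m].mean(axis=0), X[m].std(axis=0)
        out[m] = (X[m] - mu) / sd * pooled_sd + grand_mean
    return out

# Two simulated "scanners" with different offsets and scales
rng = np.random.default_rng(1)
site = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 5))
X[site == 1] = X[site == 1] * 1.5 + 2.0   # site 1: inflated scale, shifted mean
Xh = harmonize_location_scale(X, site)
```

The danger this naive version illustrates is also real: subtracting site means blindly would remove genuine group differences if diagnosis or ability were confounded with site, which is why ComBat models covariates explicitly.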
Conclusions
The advent of large-scale open data repositories and research consortia has led to rapid advances in the study of intelligence. The fact that large-scale datasets from research consortia are now available not just to consortium members but to any investigator has completely changed the research landscape. The most prominent advance has been in the genetics of intelligence, particularly genome-wide studies revealing new genetic loci associated with intelligence measures. This, in turn, has led to genome-wide polygenic scores of intelligence with potential implications for society (Plomin & Von Stumm, 2018). Additionally, the availability of large-scale datasets has spurred the development of innovative methods, such as combining personal characteristics (e.g., IQ) and neuroimaging data for enhanced diagnosis of brain disorders (Ghiassian et al., 2016; Parikh et al., 2019), and examining sources of inter-individual differences (Daugherty et al., 2020; Hammer et al., 2019; Talukdar, Roman, Operskalski, Zwilling, & Barbey, 2018). While high-quality, large, and deeply phenotyped open data are still scarce, one challenge will be to identify which research questions can best be examined with the available large-scale datasets. This will likely prompt the next generation of researchers to develop more innovative ideas for working with big data in the study of intelligence.
References
Alexander, L. M., Escalera, J., Ai, L., Andreotti, C., Febre, K., Mangone, A., . . . Milham, M. P. (2017). Data descriptor: An open resource for transdiagnostic research in pediatric mental health and learning disorders. Scientific Data, 4, 1–26. Andreasen, N. C., Flaum, M., Swayze, V., O'Leary, D. S., Alliger, R., Cohen, G., . . . Yuh, W. T. (1993). Intelligence and brain structure in normal individuals. American Journal of Psychiatry, 150(1), 130–134. Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure and Function, 219(2), 485–494. doi: 10.1007/s00429-013-0512-z. Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(Pt 4), 1154–1164. doi: 10.1093/brain/aws021. Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. Bedford, S. A., Park, M. T. M., Devenyi, G. A., Tullo, S., Germann, J., Patel, R., . . . Chakravarty, M. M. (2020). Large-scale analyses of the relationship between sex, age and intelligence quotient heterogeneity and cortical morphometry in autism spectrum disorder. Molecular Psychiatry, 25(3), 614–628. Biswal, B. B., Mennes, M., Zuo, X. N., Gohel, S., Kelly, C., Smith, S. M., . . . Milham, M. P. (2010). Toward discovery science of human brain function. Proceedings of the National Academy of Sciences USA, 107(10), 4734–4739. Book, G. A., Stevens, M. C., Assaf, M., Glahn, D. C., & Pearlson, G. D. (2016). Neuroimaging data sharing on the neuroinformatics database platform. Neuroimage, 124(Pt. B), 1089–1092. Brown, M. R. G., Sidhu, G. S., Greiner, R., Asgarian, N., Bastani, M., Silverstone, P. H., . . . Dursun, S. M. (2012).
ADHD-200 global competition: Diagnosing ADHD using personal characteristic data can outperform resting state fMRI measurements. Frontiers in Systems Neuroscience, 6, 1–22. Burgaleta, M., Johnson, W., Waber, D. P., Colom, R., & Karama, S. (2014). Cognitive ability changes and dynamics of cortical thickness development in healthy children and adolescents. Neuroimage, 84, 810–819. Casey, B. J., Cannonier, T., Conley, M. I., Cohen, A. O., Barch, D. M., Heitzeg, M. M., . . . Dale, A. M. (2018). The Adolescent Brain Cognitive Development (ABCD) study: Imaging acquisition across 21 sites. Developmental Cognitive Neuroscience, 32, 43–54. Colom, R., Haier, R. J., Head, K., Álvarez-Linera, J., Quiroga, M. Á., Shih, P. C., & Jung, R. E. (2009). Gray matter correlates of fluid, crystallized, and spatial intelligence: Testing the P-FIT model. Intelligence, 37(2), 124–135. Colom, R., Karama, S., Jung, R. E., & Haier, R. J. (2010). Human intelligence and brain networks. Dialogues in Clinical Neuroscience, 12(4), 489–501. Cox, S. R., Ritchie, S. J., Fawns-Ritchie, C., Tucker-Drob, E. M., & Deary, I. J. (2019). Structural brain imaging correlates of general intelligence in UK Biobank. Intelligence, 76, 101376.
Craddock, C., Benhajali, Y., Chu, C., Chouinard, F., Evans, A., Jakab, A., . . . Bellec, P. (2013). The Neuro Bureau Preprocessing Initiative: Open sharing of preprocessed neuroimaging data and derivatives. Frontiers in Neuroinformatics, 7. doi: 10.3389/conf.fninf.2013.09.00041. Daugherty, A., Sutton, B., Hillman, C. H., Kramer, A., Cohen, N., & Barbey, A. K. (2020). Individual differences in the neurobiology of fluid intelligence predict responsiveness to training: Evidence from a comprehensive cognitive, mindfulness meditation, and aerobic exercise intervention. Trends in Neuroscience and Education, 18, 100123. Deary, I. J., Gow, A. J., Taylor, M. D., Corley, J., Brett, C., Wilson, V., . . . Starr, J. M. (2007). The Lothian Birth Cohort 1936: A study to examine influences on cognitive ageing from age 11 to age 70 and beyond. BMC Geriatrics, 7, 28. Di Martino, A., O’Connor, D., Chen, B., Alaerts, K., Anderson, J. S., Assaf, M., . . . Milham, M. P. (2017). Enhancing studies of the connectome in autism using the autism brain imaging data exchange II. Scientific Data, 4, 170010. Di Martino, A., Yan, C. G., Li, Q., Denio, E., Castellanos, F. X., Alaerts, K., . . . Milham, M. P. (2014). The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry, 19(6), 659–667. Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B Biological Science, 373(1756), 20170284. Eickhoff, S. B., Laird, A. R., Grefkes, C., Wang, L. E., Zilles, K., & Fox, P. T. (2009). Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: A random-effects approach based on empirical estimates of spatial uncertainty. Human Brain Mapping, 30(9), 2907–2926. Evans, A. C., & Brain Development Cooperative Group. (2006). 
The NIH MRI study of normal brain development. Neuroimage, 30(1), 184–202. Fortin, J.-P., Cullen, N., Sheline, Y. I., Taylor, W. D., Aselcioglu, I., Cook, P. A., . . . Shinohara, R. T. (2018). Harmonization of cortical thickness measurements across scanners and sites. Neuroimage, 167, 104–120. Ghiassian, S., Greiner, R., Jin, P., & Brown, M. R. G. (2016). Using functional or structural magnetic resonance images and personal characteristic data to identify ADHD and autism. PLoS One, 11(12), e0166934. Gray, J. R., & Thompson, P. M. (2004). Neurobiology of intelligence: Science and ethics. Nature Reviews Neuroscience, 5(6), 471–482. Haier, R. J. (2017). The neuroscience of intelligence. Cambridge University Press. Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217. The ADHD-200 Consortium. (2012). The ADHD-200 Consortium: A model to advance the translational potential of neuroimaging in clinical neuroscience. Frontiers in Systems Neuroscience, 6, 1–5.
Hammer, R., Paul, E. J., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2019). Individual differences in analogical reasoning revealed by multivariate task-based functional brain imaging. Neuroimage, 184, 993–1004. doi: 10.1016/j.neuroimage.2018.09.011. Huguet, G., Schramm, C., Douard, E., Jiang, L., Labbe, A., Tihy, F., . . . Jacquemont, S. (2018). Measuring and estimating the effect sizes of copy number variants on general intelligence in community-based samples. JAMA Psychiatry, 75(5), 447–457. Jernigan, T. L., Brown, T. T., Hagler, D. J., Akshoomoff, N., Bartsch, H., Newman, E., . . . Pediatric Imaging, Neurocognition and Genetics Study. (2016). The Pediatric Imaging, Neurocognition, and Genetics (PING) data repository. Neuroimage, 124(Pt. B), 1149–1154. Johnson, W. E., Li, C., & Rabinovic, A. (2007). Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics, 8(1), 118–127. Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. Karama, S., Ad-Dab'bagh, Y., Haier, R. J., Deary, I. J., Lyttelton, O. C., Lepage, C., & Evans, A. C. (2009). Positive association between cognitive ability and cortical thickness in a representative US sample of healthy 6 to 18 year-olds. Intelligence, 37(2), 145–155. Karama, S., Bastin, M. E., Murray, C., Royle, N. A., Penke, L., Muñoz Maniega, S., . . . Deary, I. J. (2014). Childhood cognitive ability accounts for associations between cognitive ability and brain cortical thickness in old age. Molecular Psychiatry, 19(3), 555–559. Karama, S., Colom, R., Johnson, W., Deary, I. J., Haier, R., Waber, D. P., . . . Evans, A. C. (2011). Cortical thickness correlates of specific cognitive performance accounted for by the general factor of intelligence in healthy children aged 6 to 18. Neuroimage, 55(4), 1443–1453. Khundrakpam, B. S., Lewis, J.
D., Reid, A., Karama, S., Zhao, L., Chouinard-Decorte, F., . . . Brain Development Cooperative Group. (2017). Imaging structural covariance in the development of intelligence. Neuroimage, 144(Pt. A), 227–240. Kievit, R. A., Fuhrmann, D., Borgeest, G. S., Simpson-Kent, I. L., & Henson, R. N. A. (2018). The neural determinants of age-related changes in fluid intelligence: A pre-registered, longitudinal analysis in UK Biobank. Wellcome Open Research, 3, 38. King, J. B., Prigge, M. B. D., King, C. K., Morgan, J., Weathersby, F., Fox, J. C., . . . Anderson, J. S. (2019). Generalizability and reproducibility of functional connectivity in autism. Molecular Autism, 10, 27. Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395. Loughnan, R. J., Palmer, C. E., Thompson, W. K., Dale, A. M., Jernigan, T. L., & Fan, C. C. (2019). Polygenic score of intelligence is more predictive of crystallized than fluid performance among children. bioRxiv. 637512. doi: 10.1101/637512.
Luders, E., Narr, K. L., Thompson, P. M., & Toga, A. W. (2009). Neuroanatomical correlates of intelligence. Intelligence, 37(2), 156–163. McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346. Mennes, M., Biswal, B. B., Castellanos, F. X., & Milham, M. P. (2013). Making data sharing work: The FCP/INDI experience. Neuroimage, 82, 683–691. Mihalik, A., Brudfors, M., Robu, M., Ferreira, F. S., Lin, H., Rau, A., . . . Oxtoby, N. P. (2019). ABCD Neurocognitive Prediction Challenge 2019: Predicting individual fluid intelligence scores from structural MRI using probabilistic segmentation and kernel ridge regression. In K. Pohl, W. Thompson, E. Adeli, & M. Linguraru (eds.), Adolescent brain cognitive development neurocognitive prediction. ABCD-NP 2019. Lecture Notes in Computer Science, vol. 11791. Cham: Springer. doi: 10.1007/978-3-030-31901-4_16. Narr, K. L., Woods, R. P., Thompson, P. M., Szeszko, P., Robinson, D., Dimtcheva, T., . . . Bilder, R. M. (2007). Relationships between IQ and regional cortical gray matter thickness in healthy adults. Cerebral Cortex, 17(9), 2163–2171. Nielson, D. M., Pereira, F., Zheng, C. Y., Migineishvili, N., Lee, J. A., Thomas, A. G., & Bandettini, P. A. (2018). Detecting and harmonizing scanner differences in the ABCD study – Annual release 1.0. bioRxiv. 309260. doi: 10.1101/309260. Nielsen, J. A., Zielinski, B. A., Fletcher, P. T., Alexander, A. L., Lange, N., Bigler, E. D., . . . Anderson, J. S. (2013). Multisite functional connectivity MRI classification of autism: ABIDE results. Frontiers in Human Neuroscience, 7, 599. Parikh, M. N., Li, H., & He, L. (2019). Enhancing diagnosis of autism with optimized machine learning models and personal characteristic data. Frontiers in Computational Neuroscience, 13, 1–5. Plomin, R., & Von Stumm, S. (2018). The new genetics of intelligence. Nature Reviews Genetics, 19(3), 148–159. 
Poldrack, R. A., & Gorgolewski, K. J. (2014). Making big data open: Data sharing in neuroimaging. Nature Neuroscience, 17(11), 1510–1517. Santarnecchi, E., Emmendorfer, A., & Pascual-Leone, A. (2017). Dissecting the parieto-frontal correlates of fluid intelligence: A comprehensive ALE meta-analysis study. Intelligence, 63, 9–28. Satterthwaite, T. D., Connolly, J. J., Ruparel, K., Calkins, M. E., Jackson, C., Elliott, M. A., . . . Gur, R. E. (2016). The Philadelphia Neurodevelopmental Cohort: A publicly available resource for the study of normal and abnormal brain development in youth. Neuroimage, 124(Pt. B), 1115–1119. Savage, J. E., Jansen, P. R., Stringer, S., Watanabe, K., Bryois, J., de Leeuw, C. A., . . . Posthuma, D. (2018). Genome-wide association meta-analysis in 269,867 individuals identifies new genetic and functional links to intelligence. Nature Genetics, 50(7), 912–919. Schumann, G., Loth, E., Banaschewski, T., Barbot, A., Barker, G., Büchel, C., . . . Struve, M. (2010). The IMAGEN study: Reinforcement-related behaviour in normal brain function and psychopathology. Molecular Psychiatry, 15(12), 1128–1139.
Shaw, P., Greenstein, D., Lerch, J., Clasen, L., Lenroot, R., Gogtay, N., . . . Giedd, J. (2006). Intellectual ability and cortical development in children and adolescents. Nature, 440(7084), 676–679. Sniekers, S., Stringer, S., Watanabe, K., Jansen, P. R., Coleman, J. R. I., Krapohl, E., . . . Posthuma, D. (2017). Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence. Nature Genetics, 49(7), 1107–1112. Sripada, C., Angstadt, M., Rutherford, S., & Taxali, A. (2019). Brain network mechanisms of general intelligence. bioRxiv. 657205. doi: 10.1101/657205. Stein, J. L., Medland, S. E., Vasquez, A. A., Hibar, D. P., Senstad, R. E., Winkler, A. M., . . . Thompson, P. M. (2012). Identification of common variants associated with human hippocampal and intracranial volumes. Nature Genetics, 44(5), 552–561. Sudlow, C., Gallacher, J., Allen, N., Beral, V., Burton, P., Danesh, J., . . . Collins, R. (2015). UK Biobank: An open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Medicine, 12(3), e1001779. Talukdar, T., Roman, F. J., Operskalski, J. T., Zwilling, C. E., & Barbey, A. K. (2018). Individual differences in decision making competence revealed by multivariate fMRI. Human Brain Mapping, 39(6), 2664–2672. doi: 10.1002/ hbm.24032. Thompson, P. M., Dennis, E. L., Gutman, B. A., Hibar, D. P., Jahanshad, N., Kelly, S., . . . Ye, J. (2017). ENIGMA and the individual: Predicting factors that affect the brain in 35 countries worldwide. Neuroimage, 145(Pt. B), 389–408. Thompson, P. M., Stein, J. L., Medland, S. E., Hibar, D. P., Vasquez, A. A., Renteria, M. E., . . . Drevets, W. (2014). The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging Behavior, 8(2), 153–182. Turner, B. O., Paul, E. J., Miller, M. B., & Barbey, A. K. (2018). Small sample sizes reduce the replicability of task-based fMRI studies. 
Communications Biology, 1, 62. doi: 10.1038/s42003-018-0073-z. Turner, J. A. (2014). The rise of large-scale imaging studies in psychiatry. Gigascience, 3, 1–8. Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E. J., Yacoub, E., Ugurbil, K., & WU-Minn HCP Consortium. (2013). The WU-Minn human connectome project: An overview. Neuroimage, 80, 62–79. Wachinger, C., Becker, B. G., & Rieckmann, A. (2018). Detect, quantify, and incorporate dataset bias: A neuroimaging analysis on 12,207 individuals. arXiv:1804.10764. Xiao, L., Stephen, J. M., Wilson, T. W., Calhoun, V. D., & Wang, Y.-P. (2019). A manifold regularized multi-task learning model for IQ prediction from two fMRI paradigms. IEEE Transactions on Biomedical Engineering, 67(3), 796–806. Zhao, Y., Klein, A., Castellanos, F. X., & Milham, M. P. (2019). Brain age prediction: Cortical and subcortical shape covariation in the developing human brain. Neuroimage, 202, 116149. Zwilling, C. E., Daugherty, A. M., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2019). Enhanced decision-making through multimodal training. NPJ Science of Learning, 4, 11. doi: 10.1038/s41539-019-0049-x.
PART II
Theories, Models, and Hypotheses
5 Evaluating the Weight of the Evidence: Cognitive Neuroscience Theories of Intelligence

Matthew J. Euler and Ty L. McKinney
Introduction

The goal of this chapter is to provide an overview and critique of the major theories in the cognitive neuroscience of intelligence. In taking a broad view of this literature, two related themes emerge. First, as might be expected, theoretical developments have generally followed improvements in the methods available to acquire and analyze neural data. In turn, as a result of these developments, along with those in the psychometric and experimental literatures, cognitive neuroscience theories of intelligence have followed a general trajectory that runs from relatively global statements early on, to increasingly precise models and claims. As such, following Haier (2016), it is perhaps most instructive to divide the development of these models into early and later phases. The first group of theories consists of a small number of prominent, established models for which there is a large base of support and/or which are connected to large empirical literatures. The earliest of these grew out of electroencephalographic (EEG) event-related potential (ERP) studies, and, in line with the capacities of those methods, sought to link intelligence to seemingly universal properties of the brain, like neural speed or variability (Ertl & Schafer, 1969; Hendrickson & Hendrickson, 1980). As such, while various difficulties all but doomed the latter account (Euler, Weisend, Jung, Thoma, & Yeo, 2015; Mackintosh, 2011), neural speed has quietly accrued support, and is currently undergoing a resurgence (e.g., Schubert, Hagemann, & Frischkorn, 2017). Nevertheless, and again consistent with the technological theme, interest in these accounts had until recently largely given way to the Neural Efficiency Hypothesis (NEH; Haier et al., 1988). NEH emerged from the first functional neuroimaging studies of intelligence and, following other advances, has undergone its own shift in emphasis, from a focus on neural activation to that of connectivity (Neubauer & Fink, 2009a).
Finally, as structural and functional MRI (fMRI) eventually became widely available, the literature saw the development of the
more precise, anatomically-focused models of Parieto-Frontal Integration Theory (P-FIT; Jung & Haier, 2007) and the Multiple Demand system (MD; Duncan, 2010). These latter theories, perhaps along with NEH, still dominate the current literature. The second phase of theorizing essentially incorporates the advances brought about in the first phase, and seeks to revise, deepen, and integrate the previous accounts. These models have largely been developed in the last several years, and most prominently include Process Overlap Theory (POT; Kovacs & Conway, 2016), Network Neuroscience Theory (NNT; Barbey, 2018), Hierarchical Predictive Processing (Euler, 2018), and the Watershed Model of Fluid Intelligence (Kievit et al., 2016). Following then from that basic chronology, this chapter aims to provide an overview of the status and issues facing the current major theories in the neuroscience of intelligence, leading up to their culmination and extension in the most recent models. The chapter concludes with a summary of current challenges and proposed solutions for the field as a whole, as well as a preview of how resolving those issues might ultimately inform broader applications.
Established Cognitive Neuroscience Theories of Intelligence

Intelligence and Neural Speed

Although the neural speed view of intelligence is somewhat less prominent among neuroscientists, it is nevertheless important to consider, given its firm basis in one of the most central and best-replicated effects in the field – the moderate inverse relation between overall intelligence and speed of reaction time (Deary, Der, & Ford, 2001; Sheppard & Vernon, 2008). Moreover, while the relationship between ERP amplitudes and intelligence remains somewhat ambiguous, and typically violates neural efficiency, many studies of ERP latencies have shown the expected inverse relationship. As such, the key challenges concerning the theory of neural speed have less to do with establishing the basic effect than with understanding which factors affect its relationship with intelligence and why. A recent study by Schubert et al. (2017) made several important observations in this respect. First, the authors noted that whereas intelligence is conceived of as a stable trait, ERP latencies likely reflect variance due to both situational and stable factors. As such, they combined multiple recording sessions with a psychometric approach, which enabled them to separate out situational variance and to operationalize neural speed as a stable latent trait. Second, they evaluated two competing models – one that contained a single latent speed variable vs. one that distinguished between early (P1, N1) and later ERP components (P2, N2, and P3). Results indicated not only that the second model fit the data better, but also that the speed of later
components in particular accounted for nearly 90% of the variance in general intelligence (g). In contrast, the speed of early ERP components only showed a small, positive relationship. Interestingly, while that study highlighted the need to distinguish between early vs. late information processing phases, other studies suggest that neural speed effects also depend on task conditions. For example, recent studies found that the correlation between intelligence and the latency of the P3 ERP depends crucially on task demands, such that it may scale with increasing complexity (Kapanci, Merks, Rammsayer, & Troche, 2019; Troche, Merks, Houlihan, & Rammsayer, 2017). The same case can be made for ERP amplitudes (Euler, McKinney, Schryver, & Okabe, 2017; McKinney & Euler, 2019). While these findings await replication, as a group, they exemplify the movement in the field from broad toward narrower claims. That is, rather than relating intelligence to overall neural speed, the findings of Schubert et al. (2017) preferentially implicate speed of higher-order processing. Likewise, the argument for task-dependence seeks to shift the focus away from global claims about activity–ability relationships, toward the specific conditions that elicit those effects, and hence onto more discrete neural circuits.
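The measurement logic behind Schubert et al.'s approach, separating stable from situational variance, can be illustrated with a toy simulation. This is only a sketch of the attenuation principle, not their actual latent-variable model; all names and coefficients below are invented for illustration. Averaging latencies over repeated sessions removes situational noise and strengthens the observable speed–ability correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sessions = 500, 4

# Simulated latent structure: a stable "neural speed" trait correlating
# negatively with general ability g (coefficients are illustrative only).
g = rng.standard_normal(n_subjects)
stable_latency = -0.6 * g + 0.8 * rng.standard_normal(n_subjects)

# Observed single-session ERP latencies add large situational noise.
observed = stable_latency[:, None] + 1.5 * rng.standard_normal((n_subjects, n_sessions))

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_single = corr(observed[:, 0], g)            # one session: attenuated correlation
r_aggregate = corr(observed.mean(axis=1), g)  # averaging sessions removes situational variance

print(round(r_single, 2), round(r_aggregate, 2))
```

The aggregate correlation approaches the true trait-level correlation as sessions accumulate, which is the same reason a psychometric latent-trait model recovers stronger speed–ability relations than single recordings.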
The Neural Efficiency Hypothesis

Like neural speed, the neural efficiency hypothesis is undergoing a shift towards a more precise formulation. Namely, how should “efficiency” be operationalized in a way that allows for its systematic evaluation? A key challenge here concerns the sheer variety of methods used to assess NEH, because the various methodologies measure quite distinct physiological processes. For example, whereas the initial positron emission tomography (PET) studies showed that higher-ability people had lower overall glucose metabolism (Haier et al., 1988), fMRI research suggests a complex picture with various regional effects. Currently, the most definitive single statement on the activation-based account of NEH – the notion that neural activity inversely relates to intelligence – remains Neubauer and Fink’s (2009b) review, where they found that while most of the available literature supported NEH, the relationship seemed to primarily hold in frontal regions, at moderate task difficulties, with task type and sex also moderating the effect. Thus, like neural speed, recent studies on the NEH tend to make more nuanced claims about patterns of activation. For example, activation within the task-positive vs. task-negative networks shows opposite relationships with cognitive ability (Basten, Stelzel, & Fiebach, 2013), and efficiency effects may only emerge when higher and lower ability participants are matched for performance, but not subjective task difficulty (fMRI: Dunst et al., 2014; alpha ERD: Nussbaumer, Grabner, & Stern, 2015). Finally, as noted in the previous section, ERP
amplitudes typically correlate positively with intelligence, apparently contradicting activation-based NEH (Euler, 2018). The second major branch of neural efficiency research concerns whether higher-ability individuals are characterized by more efficient patterns of brain connectivity. Many of these studies use graph-theoretical approaches that can formally quantify aspects of network efficiency, such as the average distance between connections,1 and the importance of particular nodes. Here again, recent research has revealed that the relationship between intelligence and brain “efficiency” is apt to be nuanced. For example, while both structural and functional imaging studies initially pointed to a relationship between intelligence and global efficiency metrics (Li et al., 2009; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009), other studies have cast doubt on this picture (Kruschwitz, Waller, Daedelow, Walter, & Veer, 2018), favoring a role for more specific networks (Hilger, Ekman, Fiebach, & Basten, 2017; Pineda-Pardo, Martínez, Román, & Colom, 2016; Santarnecchi, Emmendorfer, Tadayon, et al., 2017), alternative framings of efficiency (Schultz & Cole, 2016), and moderating factors (Ryman et al., 2016). Indeed, efficiency might even manifest in minute, cellular features, with recent evidence linking intelligence to sparser dendritic arbors, which may promote more efficient signaling in particular networks (Genç et al., 2018). In summary, while neural efficiency remains an appealing concept in intelligence research, both the activation- and connectivity-based formulations require further specification in terms of which specific neural properties constitute “efficiency” (Poldrack, 2015), and precisely how these should relate to intelligence in an a priori way.
Given these various nuances, it appears that while efficiency does characterize many brain–ability relationships, it does not rise to the level of a general functional principle that uniformly applies across networks and situations.
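As a concrete illustration of the graph metrics at issue, the sketch below computes global efficiency, the average inverse shortest-path length over all node pairs, for three toy graphs of increasing integration. It uses the third-party networkx library (assumed available); the graphs are schematic stand-ins, not real connectomes.

```python
import networkx as nx

# Global efficiency: mean of 1/d(i, j) over all node pairs; a fully connected
# network is maximally efficient (efficiency = 1).
complete = nx.complete_graph(20)   # every node linked to every other
star = nx.star_graph(19)           # one highly central hub node
ring = nx.cycle_graph(20)          # only nearest-neighbour connections

eff_complete = nx.global_efficiency(complete)
eff_star = nx.global_efficiency(star)
eff_ring = nx.global_efficiency(ring)

print(eff_complete, eff_star, eff_ring)  # efficiency drops as paths lengthen
```

The same function applied to a thresholded structural or functional connectivity matrix yields the global efficiency values reported in the studies cited above; node-level variants (e.g., nodal or local efficiency) capture the "importance of particular nodes."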
Fronto-Parietal Models

Parieto-frontal integration theory (P-FIT) is arguably the most prominent and well-established cognitive neuroscience theory of intelligence. In short, P-FIT holds that a core network of largely lateral frontal and parietal areas (along with the anterior cingulate cortex, temporal and occipital association areas, and their connecting white matter tracts; Barbey et al., 2012; Jung & Haier, 2007) forms the basic substrate for individual differences in intelligence. Each set of regions is thought to play a particular role in a four-stage model of complex information processing (beginning with perceptual apprehension, through recollection, reasoning, etc., and ultimately response execution; Jung & Haier, 2007) that may not be strictly sequential and could take different forms across individuals (Haier, 2016, p. 94). Crucially, although P-FIT highlights the role of these particular regions in intelligence, it has always emphasized their collective status as an integrated network – a theme which has been heightened in recent years. Whereas P-FIT was proposed on the basis of a single large review of the available neuroimaging literature, the Multiple Demand (MD) theory evolved over time, from an experimental model that initially emphasized the role of the prefrontal cortex in cognitive differences (Owen & Duncan, 2000), until its present formulation as an explicitly fronto-parietal account (Duncan, 2010). The two accounts can also be contrasted in that while P-FIT defines a much more extensive network (involving sensory association areas and a larger area of the prefrontal cortex), the MD system is more circumscribed, being centered around the inferior frontal sulcus, intraparietal sulcus, and the anterior cingulate and pre-supplementary motor area. Further, while P-FIT has typically been discussed in anatomical terms, the consistent theme throughout the evolution of the MD account has been its longstanding functional emphasis, on trying to identify brain regions that are commonly activated across diverse types of tasks. Thus, whereas P-FIT facilitated a shift in the literature away from a focus on single regions to a more distributed view of intelligence in the brain, MD theory offers a contrasting account, with a unique functional emphasis and a more limited set of regions. Notwithstanding their differences, both P-FIT and MD theory clearly coalesce around a shared contention that a core set of lateral frontal and parietal and medial frontal regions are disproportionately related to intelligence.

1. In this context, distance typically refers to the number of intervening connections between two given nodes of interest, as opposed to their physical proximity in the brain.
On that basis, there is considerable support in the literature for that broad claim (including from lesion studies; Barbey et al., 2012; Gläscher et al., 2010), although precise replications tend to be elusive. Two recent meta-analyses brought these issues into sharper focus. The first meta-analysis found a general pattern of support for P-FIT across both structural and functional brain imaging modalities (Basten, Hilger, & Fiebach, 2015). Functional studies showed both positive and negative associations, across regions primarily in the same core set of lateral frontal and parietal and medial frontal areas, with additional clusters in the right temporal lobe and posterior insula. Structural studies, by contrast, showed only positive associations between gray matter volume and intelligence in lateral and medial frontal regions (along with a more distributed set of cortical and subcortical regions). Strikingly though, the structural findings showed no overlapping voxels with the functional results (Basten et al., 2015). The second meta-analysis focused more narrowly on fMRI correlates of fluid intelligence, and likewise revealed a primary pattern of convergence in lateral fronto-parietal regions, albeit with a left-hemisphere predominance, and additional foci in the occipital and insular cortex, as well as the basal ganglia and thalamus (Santarnecchi, Emmendorfer, & Pascual-Leone, 2017).
Overall, while these studies differ somewhat in their methods and focus, they converge on several points. Most notably, the total lack of convergence between structural and functional studies in the first meta-analysis points to ongoing difficulties in relating cognitive differences to discrete neural regions. As those authors explain, this may in part reflect that while P-FIT was originally formulated at the level of Brodmann areas, neuroimaging studies provide much finer resolution, allowing the possibility of no precise overlap between different studies or modalities (Basten et al., 2015). Second, while both meta-analyses largely affirm the importance of fronto-parietal networks to cognitive differences, they also suggest revisions to that basic story, particularly regarding the role of the insula and subcortical areas (Basten et al., 2015; Santarnecchi, Emmendorfer, & Pascual-Leone, 2017). Finally, while P-FIT always emphasized integration throughout that network, recent studies have supported this more directly by explicitly examining neural connectivity (Pineda-Pardo et al., 2016; Vakhtin, Ryman, Flores, & Jung, 2014; and see Network Neuroscience Theory below). In summary then, while studies over the last decade largely support a disproportionate role for fronto-parietal networks in intelligence, exact replications are limited, and many studies implicate broader networks. Thus, the overall progress in the neuroanatomical literature has brought a set of more detailed questions into focus. Namely, how extensive is the core neural network involved in intelligence, how consistent is it across individuals, and what is the precise role of those regions in supporting intelligence? As outlined in the following sections, the newest cognitive neuroscience theories of intelligence each seek to address one or more of these issues.
Recent Theoretical Developments

Process Overlap Theory

While not a cognitive neuroscience theory per se, Process Overlap Theory (POT; Kovacs & Conway, 2016) offers a functional account of why fronto-parietal networks have been so central to neuroscience theories of intelligence, and hence shares features of Multiple Demand theory. Specifically, POT seeks to explain the positive manifold phenomenon of g: how intelligence as a unitary capacity could arise from a diverse set of processes. In motivating their answer, the authors note several key findings: (1) the fact that g is highly correlated with fluid reasoning (Gf) skills in particular, (2) that working memory and other executive skills are crucial to Gf, and (3) that the slowest reaction times within a task (indicative of executive lapses) often highly predict g. In light of these findings, POT suggests that working memory and executive control processes constrain cognitive performance in a domain-general way. That is, despite the fact that many
tasks rely on domain-specific skills, they also all rely on a single set of executive processes. These function as a bottleneck, constraining performance regardless of domain. In turn, given the predominant role of parietal and especially frontal cortices in supporting working memory and executive processes, the former are strong candidate substrates for the physical constraints (Kovacs & Conway, 2016, p. 169).
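The worst-performance finding that POT builds on (slowest reaction times predicting g) can be sketched with simulated data. The generative model below, in which lapse probability is an assumed function of ability and lapses inflate only the slow tail of the RT distribution, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_trials = 400, 300

# Hypothetical model: lower-ability subjects suffer more executive lapses,
# which mainly lengthen their slowest trials (all parameters are invented).
ability = rng.standard_normal(n_subjects)
lapse_rate = 1 / (1 + np.exp(ability))  # higher for lower ability

base_rt = rng.lognormal(mean=6.0, sigma=0.2, size=(n_subjects, n_trials))  # ~400 ms median
lapses = rng.random((n_subjects, n_trials)) < (0.25 * lapse_rate[:, None])
rts = base_rt + lapses * rng.uniform(200, 600, size=(n_subjects, n_trials))

fast_band = np.percentile(rts, 10, axis=1)  # each subject's fastest decile
slow_band = np.percentile(rts, 90, axis=1)  # each subject's slowest decile

r_fast = np.corrcoef(fast_band, ability)[0, 1]
r_slow = np.corrcoef(slow_band, ability)[0, 1]

print(round(r_fast, 2), round(r_slow, 2))
```

Under this lapse account, the slowest RT band carries most of the ability-related signal, whereas the fastest band is nearly uninformative, mirroring the worst-performance rule the text describes.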
Network Neuroscience Theory

As suggested in the section on fronto-parietal models, advances in the field have facilitated a shift away from a focus on discrete, localized functions to an emphasis on integrated networks. In particular, many recent studies have drawn attention to intrinsic connectivity or fMRI “resting state” networks (ICNs), and their potential importance to intelligence. While several of these have affirmed the importance of fronto-parietal networks (Hearne et al., 2016; Santarnecchi, Emmendorfer, Tadayon, et al., 2017), they also raise novel issues. For example, results have prompted calls to more carefully delineate the term “fronto-parietal” as it applies to the dorsal attention network, vs. other ICNs involving those regions, such as executive control networks (Santarnecchi, Emmendorfer, Tadayon, et al., 2017). Further, several studies have highlighted the role of broader networks, and particularly the default mode and salience network, albeit with some mixed results (cf. Hearne et al., 2016; Hilger et al., 2017; Santarnecchi, Emmendorfer, Tadayon, et al., 2017). Overall, this pattern reinforces the importance of moderators, and highlights the theme that intelligence is unlikely to be a property of a single brain process, region, or even network. In that vein, Network Neuroscience Theory (NNT; Barbey, 2018) organizes the findings in this emerging frontier, and integrates the connectivity and psychometric literatures to arrive at a network-based account of human intelligence. Following from the hierarchical structure of the construct (i.e., g sits atop broad factors, followed by specific abilities), NNT applies formal graph-theoretical concepts to explain how intelligence as a global capacity arises from the dynamic connectivity patterns that characterize the brain.
In brief, NNT invokes at least four key premises: First, g is conceived of as a global network phenomenon, and as such cannot be understood by analyses of particular cognitive processes or tests. Second, the brain is held to be organized such that it balances modularity – relatively focal, densely inter-connected, functional centers – with select long-range connections that allow for more global integration across modules. This is formally known as “small-world” architecture, which characterizes the brain in general, as well as specific ICNs. Third, this small-world architecture underlies distinct broad capacities, such as fluid and crystallized intelligence; albeit as mediated by different networks. Crystallized intelligence, involving the retrieval of semantic and episodic knowledge, relies on easy-to-reach network states that require well-connected functional hubs,
especially within the default mode network. Fluid intelligence, on the other hand, seems to be supported by hard-to-reach states, and weak connectivity between the fronto-parietal and cingulo-opercular networks, that permit the flexible development of novel cognitive representations. Finally, g itself does not relate to static regions or even networks, but rather emerges through the dynamic reconfiguration of connectivity patterns among different ICNs (Barbey, 2018). Thus, NNT offers a view of intelligence based on the dynamic functional organization of the brain. On that basis, it may help address inconsistencies in the literature, by promoting a view of intelligence as a dynamic, emergent phenomenon, as opposed to one that “resides” in particular structures. Moreover, as noted later on in the section detailing Broader Applications, the view implied by NNT has important implications for issues beyond intelligence research itself.
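The small-world property invoked by NNT's second premise can be illustrated with networkx (assumed available; graph sizes and the rewiring probability are arbitrary illustrative choices). A Watts-Strogatz graph retains high local clustering (modularity's signature) while a few rewired long-range edges keep its global efficiency close to that of a density-matched random graph.

```python
import networkx as nx

n, k, p = 200, 8, 0.1  # nodes, neighbours per node, rewiring probability

# Watts-Strogatz: a clustered ring lattice with a fraction p of edges rewired
# into long-range "shortcuts"; compare against an equal-density random graph.
small_world = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
random_graph = nx.gnm_random_graph(n, small_world.number_of_edges(), seed=42)

cc_sw = nx.average_clustering(small_world)     # high: dense local modules
cc_rand = nx.average_clustering(random_graph)  # low: no local structure
eff_sw = nx.global_efficiency(small_world)     # near-random despite clustering
eff_rand = nx.global_efficiency(random_graph)

print(cc_sw, cc_rand, eff_sw, eff_rand)
```

The combination of clustering far above the random baseline with near-random efficiency is exactly the balance of segregation and integration that NNT takes to characterize brain networks.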
Hierarchical Predictive Processing

Predictive processing theories begin from the ambitious notion that the brain can be best understood as a statistical organ, designed to (1) enact a “model” of the world and (2) allow the organism to act in ways that minimize deviations from that model, in the form of unexpected, maladaptive exchanges with the environment (e.g., avoid being a literal “fish out of water”; Friston, 2010). This view has gained considerable traction within broader cognitive science, to the point of being under serious discussion as a potentially unified theory of the brain and cognition (Clark, 2015). From that jumping-off point, the predictive processing view of intelligence poses two questions: First, if correct, what implications does predictive processing have for intelligence research; and second, as a supposedly unifying account, how could it foster greater integration among the somewhat disparate lines of neuroscientific intelligence research (Euler, 2018)? Three main ideas seem to follow in response to those questions. First, in considering intelligence as a complex, hierarchical construct, predictive processing suggests that, rather than grouping tasks only according to the constructs they measure (fluid reasoning vs. processing speed), it is also useful to recall the fundamental property they share – inducing a form of uncertainty. Although this seems like a truism, it nevertheless may help integrate the current methodological extremes of chronometric ERP studies at one end, and fMRI studies of complex cognition at the other. This is because it provides a systematic way for thinking about the expected size of task–ability correlations, and the likely neural basis of the underlying effects. Specifically, it suggests that most chronometric effects are perhaps rightfully modest, owing to the more limited uncertainty typical of those tasks and the comparatively circumscribed neural networks recruited.
On the other hand, since reasoning and other complex cognitive processes entail much higher-order uncertainty, such tasks should recruit much more
distributed networks, thereby eliciting greater variability in brain functioning and intelligence. This in turn raises the second key aspect of predictive processing, which is the idea that neural hierarchies should be important in determining brain–ability effects. For example, if task-related uncertainty drives neural recruitment, then ERP–ability correlations should also scale as a function of the complexity of the tasks (and networks) involved. Likewise, tasks with the greatest uncertainty should require extended and iterative processing across multiple networks, and may be subject to domain-general bottlenecks that are implicated in higher-order cognition (e.g., fronto-parietal networks, as hypothesized by POT). Thus, as an explicitly hierarchical model of brain functioning (Clark, 2015), predictive processing provides a framework for relating cognitive hierarchies to neural ones, and potentially for better integrating the currently disparate subfields of ERP and fMRI research on intelligence. Third, because predictive processing holds that organisms must develop a model of the world, it could provide explicit, testable mechanisms for distinguishing neural activity that reflects momentary capacities from activity that reflects prior learning. Thus, it potentially provides a framework for testing developmental questions in the neuroscience of intelligence (Euler, 2018). Finally, it should be noted that the strength of predictive processing may also be its weakness, in that its ambitious scope may fail to heed the lessons of prior efforts to link intelligence to universal principles.
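The core computation posited by predictive processing, iteratively revising an estimate to minimize precision-weighted prediction errors, can be sketched in a one-level Gaussian example (all numbers are invented for illustration). Gradient descent on the summed squared errors converges to the precision-weighted average of prior expectation and sensory evidence.

```python
# One-level Gaussian example: sensory input y (variance var_y) vs. prior mean m
# (variance var_m). Minimising the precision-weighted prediction errors yields
# the precision-weighted average of the two information sources.
y, var_y = 2.0, 0.5   # sensory evidence (illustrative numbers)
m, var_m = 0.0, 2.0   # prior expectation

mu = 0.0              # current estimate of the hidden cause
lr = 0.05
for _ in range(2000):
    eps_sensory = (y - mu) / var_y        # precision-weighted sensory error
    eps_prior = (mu - m) / var_m          # precision-weighted prior error
    mu += lr * (eps_sensory - eps_prior)  # descend the error gradient

analytic = (y / var_y + m / var_m) / (1 / var_y + 1 / var_m)
print(round(mu, 3), round(analytic, 3))
```

In hierarchical versions of this scheme, each level's estimate serves as the prior for the level below, which is the formal sense in which the framework links cognitive hierarchies to neural ones.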
The Watershed Model of Fluid Intelligence

The Watershed Model begins from the observation that while the heritability of intelligence is well-established, it has been difficult to understand the functional mechanisms that link candidate genes to the complex behavioral phenotype known as intelligence. Borrowing from the behavioral genetics literature, Kievit et al.’s (2016) Watershed Model instead recasts intelligence as an endophenotype – an intermediate step between genes and phenotype. As an endophenotype, intelligence is not a behavior per se, but a series of indirect dispositions towards intelligent behavior (i.e., high IQ scores), arranged in a hierarchical manner. Thus, while many previous theories have emphasized single neural processes in explaining intelligence (e.g., neural speed, NEH), the Watershed Model tries to integrate the relative influence of lower level biological processes with more proximal cognitive correlates, while also capturing the variability in how these processes might manifest across people. In their paper, Kievit and colleagues explore the influences that white matter tracts and processing speed could both have on fluid intelligence as an example of how intelligence might operate as an endophenotype. Like a geographical watershed, the genes that govern brain structure each make
m. j. euler and t. l. mckinney
small, independent contributions to the ultimate behavioral phenotype. Hence, they function like smaller “tributaries” that funnel into more proximate behavioral causes (larger “waterways”), which include things like properties of neurons or glia, and broader neuroanatomical features. The authors argue that white matter structure is one such lower-level biological process, which could indirectly support intelligence through higher processing speed. Using sophisticated statistical models, they established that various processing speed measures each had partially independent contributions to fluid intelligence (i.e., multiple realizability) and could not be collapsed into a single factor. Similarly, white matter integrity across various tracts did correlate with fluid intelligence, but only indirectly, through processing speed. Furthermore, white matter integrity also had a higher dimensionality than processing speed (i.e., it comprised a greater number of factors), consistent with the predictions of the Watershed Model. Thus, the results provide evidence that various biological correlates of intelligence should be simultaneously considered within a hierarchy of influence, and that multiple constellations of these correlates could come together to achieve an equivalent level of ability.
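The watershed logic can be sketched with a toy simulation. To be clear, this is illustrative only – Kievit et al. (2016) fit structural equation models to empirical data, whereas the variable names, loadings, and sample size below are invented for demonstration. Many small, independent “tract” variables feed a few intermediate “speed” variables, which in turn feed fluid intelligence, so correlations with the end phenotype attenuate as one moves down the hierarchy:

```python
import random
import math

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(1)
n = 4000  # simulated "participants" (arbitrary)

# Eight independent lower-level "tributaries" (e.g., white matter tracts).
tracts = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(8)]

# Two intermediate "processing speed" variables, each drawing on four tracts
# plus its own noise (multiple realizability: different tract profiles can
# yield the same speed).
def speed(group):
    return [sum(t[i] for t in group) / 2.0 + rng.gauss(0, 1) for i in range(n)]

speed1, speed2 = speed(tracts[:4]), speed(tracts[4:])

# Fluid intelligence sits downstream of both speed variables.
gf = [(speed1[i] + speed2[i]) / 2.0 + rng.gauss(0, 1) for i in range(n)]

tract_r = sum(abs(pearson(t, gf)) for t in tracts) / 8
speed_r = (pearson(speed1, gf) + pearson(speed2, gf)) / 2
print(f"mean tract-gf correlation: {tract_r:.2f}")  # weaker (distal correlate)
print(f"mean speed-gf correlation: {speed_r:.2f}")  # stronger (proximal correlate)
```

Running the sketch shows the expected attenuation: the distal tract variables correlate with the phenotype far more weakly than the proximal speed variables do, even though the tracts are the phenotype's only ultimate causes in this toy model.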
Summary of Progress, Current Challenges, and Potential Ways Forward

The Emerging Synthesis

In reviewing the insights gained over the two phases of theory development, arguably five principles have emerged. First, contrary to some early suggestions in this literature, the neural correlates of intelligence are clearly distributed throughout the brain. Second, given the mounting complexity of findings, it seems likely that neither a single set of regions, nor a single functional relationship, will sufficiently describe “the” brain–ability relationship. Rather, the search for universal principles has given way to an appreciation that many brain–ability effects are apt to be regionally dependent and contingent on various moderators (Haier’s first law; Haier, 2016). Fortunately, the field seems to be embracing this complexity, and moving toward an emphasis on neural networks. This shift toward networks is the third consensus point, facilitating more sophisticated theories in two ways – by helping to unify activation-based and neuroanatomical approaches, and by drawing attention to the specific functional roles that various networks play in intelligence. Indeed, each of the newer theories reviewed here explicitly highlights the ways in which different neural networks are apt to play unique roles in various sub-factors of intelligence. That is, although P-FIT and some work on NEH clearly acknowledged the importance of networks, it took technological advances to begin to understand those implications in practice. Recently, the second phase of theorizing has brought forward the last two principles, which
Evaluating the Weight of the Evidence
concern the role of measurement and hierarchies in neural research on intelligence.2 Because these raise the next set of challenges for the field, we take them in turn here.
Current Challenges and Potential Solutions

The first challenge facing neuroscientific research on intelligence concerns measurement-related questions, and particularly difficulties in reproducing various brain–ability relationships (Basten et al., 2015; Martínez et al., 2015). These reflect both methodological and substantive issues, and, in light of the former, the first steps toward improving reproducibility include general recommendations like increasing sample sizes, distinguishing between within- and between-subjects variance with respect to neural correlates of task performance, reporting results of whole-brain analyses, and explicitly examining moderators (Basten et al., 2015). In addition, because intelligence is a hierarchical construct, it is critical that researchers carefully distinguish between specific tests, broad factors, and intelligence itself when testing brain–ability relationships, lest the results reflect unknown contributions from more specific capacities (Haier et al., 2009). Likewise, in functional studies, brain–ability correlations are likely also contingent on the tasks that are employed to operationalize constructs. Setting aside these methodological concerns, there are also several more conceptual issues facing studies in the area. The first of these is that, since intelligence is a hierarchical construct, various neural hierarchies may be operating to complicate research. For example, it has been shown that, contrary to intuitions, a “reversed hierarchy” may exist in the brain, such that there might actually be fewer reliable neural correlates of intelligence as one moves up the cognitive hierarchy (Román et al., 2014; see also Barbey et al., 2012; Gläscher et al., 2010). This is reminiscent of the concept of “Brunswik symmetry,” which holds that correlations are necessarily attenuated between constructs at different hierarchical levels (e.g., between g and a lower-order personality factor; Kretzschmar et al., 2018).
Speculatively, this could operate in the brain in two possible ways. First, it could be that, for functional studies, brain–ability correlations inherently scale as a function of the complexity of the networks involved. That is, more demanding tasks may elicit greater variability in brain functioning, thereby producing stronger brain–ability correlations (Euler, 2018). Second, it could also be the case that attempting to relate relatively discrete neural events (ERPs, average BOLD responses) to the aggregate construct of intelligence might systematically underestimate these statistical relationships – an idea that seems to accord with the recent success in relating intelligence to neural activity when both were modeled as latent factors (Schubert et al., 2017). In any case, the best way to evaluate these possibilities is for researchers to begin systematically assessing them by using formal measurement models and testing the generality of effects at different hierarchical levels. A second and related issue is the possibility that intelligence might be instantiated in different ways in different brains (Kievit et al., 2016). That is, one might interpret the evidence for a reversed hierarchy as suggesting either that g relates to a small set of brain regions, or that it relates to more distributed networks, but in a way that varies across people. Nearly all of the theories reviewed in this chapter, and certainly the most recent theories, entertain some version of this possibility. Here again, though, attending to hierarchies can facilitate testing this idea, in that as one moves down the cognitive hierarchy to more domain-specific tests, and presumably to more discrete neural circuits, between-subject variability should give way to greater consistency in brain structure and function. Going in the other direction, to the extent that higher-order brain–ability effects could not be reliably shown, it would provide evidence that intelligence is in fact multiply realized to some degree. Last, the notion of hierarchies arguably also provides a means to achieve greater integration within the field. Ultimately, cognitive neuroscience theories of intelligence should strive toward a complete account of why apparently quite disparate phenomena all predict cognitive ability, which factors affect the size of those relationships, and how reliable they are across people. In turn, the answers to those questions should greatly enhance our conception of what intelligence actually is.

2 Neural speed, NNT, and predictive processing also raise neural dynamics as an important factor in understanding the neural basis of intelligence. Insofar as fully assessing these effects will require the next phase of conceptual and technological advances (e.g., MEG; Haier, 2016), this likely represents an additional frontier in this area.
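A minimal simulation can illustrate why correlating single, noisy neural events with an aggregate construct underestimates the underlying relationship, and why composite or latent-variable modeling recovers it. The loadings, indicator count, and sample size below are arbitrary assumptions for illustration, not estimates from Schubert et al. (2017):

```python
import random
import math

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(7)
n = 3000  # simulated "participants" (arbitrary)

# Latent ability g for each simulated participant.
g = [rng.gauss(0, 1) for _ in range(n)]

# Ten noisy, discrete neural indicators (e.g., single-task ERP measures),
# each only weakly loaded on g (loading 0.3 is an arbitrary choice).
indicators = [[0.3 * g[i] + rng.gauss(0, 1) for i in range(n)]
              for _ in range(10)]

# A composite score - a crude stand-in for a latent factor - averages out
# the indicator-specific noise while retaining the shared g signal.
composite = [sum(ind[i] for ind in indicators) / 10 for i in range(n)]

single_r = pearson(indicators[0], g)
composite_r = pearson(composite, g)
print(f"single indicator vs. g: {single_r:.2f}")
print(f"composite vs. g:        {composite_r:.2f}")
```

Even though every indicator carries the same true loading on g, the single-indicator correlation is badly attenuated by measurement noise, while the composite correlation approaches the disattenuated value – the same logic that motivates formal measurement models in brain–ability research.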
Broader Applications

Although this chapter has largely focused on the basic science of intelligence, recent theoretical developments nevertheless have important implications for broader endeavors. Most clearly, these include applications to neurorehabilitation, to diagnosing and mitigating the effects of neurodegenerative conditions, and, potentially, to ultimately enhancing intelligence itself. The first major application of the ideas discussed here relates to treatment approaches following brain injuries. For example, NNT highlights how greater compartmentalization of domain-specific functions minimizes the consequences of neural injury (Barbey, 2018), while research on fronto-parietal networks, and especially bottleneck accounts, highlights the crucial importance of these latter systems to domain-general cognition. Given that intelligence, broadly speaking, is a protective factor for many different health conditions, including recovery from brain injury, this underscores the importance of rehabilitative efforts following lesions to these networks. Regained
functionality in these networks will not only directly help patients recover cognitive abilities, but also promote their capacity to adapt to other aspects of their injuries (e.g., motor dysfunction and difficulties with emotion regulation). As such, neuroscientific intelligence research provides clear targets for rehabilitation scientists in their efforts to improve outcomes following acquired brain injury. Next, the cognitive neuroscience of intelligence informs neurological diagnosis and prevention in two important ways. First, if we can better understand the factors that moderate brain–ability relationships, and particularly the role of subjective difficulty as illuminated by NEH (Dunst et al., 2014), this paves the way toward a science of mental exertion. In turn, if intelligence researchers can validate neural markers of mental exertion, especially using low-cost, highly portable methods like EEG, this potentially supports a revolution in neuropsychological assessment and diagnosis. For example, a valid marker of mental exertion would enable clinicians to observe when a patient, based on their IQ score, is exerting greater effort than expected to perform a given task, thereby potentially signaling cerebral compromise. In turn, this would conceivably allow earlier detection of incipient neurodegenerative diseases, because neural functioning is likely impaired in these conditions prior to the onset of behavioral deficits. Given the movement in Alzheimer’s research toward earlier detection of at-risk individuals (Fiandaca, Mapstone, Cheema, & Federoff, 2014), understanding these moderators, and especially how exertion affects activity–ability relationships, could considerably improve early diagnosis. The second way that the cognitive neuroscience of intelligence can impact brain health and care is through identifying the neural substrates of cognitive reserve.
In brief, cognitive reserve refers to those factors that provide functional resilience against the deleterious effects of neuropathology, whether due to dementias or acquired brain injuries (Barulli & Stern, 2013). Critically, cognitive reserve is understood to be shaped by an individual’s life experience, and especially education and exposure to intellectual stimulation. Further, reserve is routinely estimated using measures of crystallized intelligence, thereby placing it firmly within the purview of intelligence research. Thus, by refining the concept of crystallized intelligence, as well as its development and neural basis, research in this field could profoundly impact our understanding of cognitive reserve, and ultimately help provide strategies to lessen the effects of acquired brain injuries and neurodegenerative diseases. A final way in which cognitive neuroscience theories of intelligence could have broader impacts is by clarifying the neural basis of intellectual development, and particularly how intelligence might eventually be enhanced via environmental or biological interventions. While those interventions are likely far-off, several recent theories nevertheless provide testable claims that could improve understanding of how intelligence develops. In the first instance, NNT makes specific predictions about how learning, and hence intellectual
development, entails a transition from engaging hard-to-reach network states in initial learning phases (when problems are novel) to consolidating those skills via transfer to networks governed by easy-to-reach states (Barbey, 2018). Predictive processing complements NNT and provides an additional framework for potentially distinguishing neural activity related to previous learning (i.e., neural “priors”) from more novel processing (Euler, 2018). Most importantly, both accounts suggest ways to quantify discrete neural markers of learning and, hence, to make testable predictions about their role in intellectual development. Thus, progress in this area has the potential to help adjudicate questions related to the malleability of intelligence and, potentially, to inform education and other interventions designed to increase intelligence. Overall, the theories reviewed in this chapter, and especially the emerging lines of research, have considerable potential to increase understanding of this fundamental trait and, ultimately, to enhance human wellbeing.
References

Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20. doi: 10.1016/j.tics.2017.10.001.
Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(4), 1154–1164.
Barulli, D., & Stern, Y. (2013). Efficiency, capacity, compensation, maintenance, plasticity: Emerging concepts in cognitive reserve. Trends in Cognitive Sciences, 17(10), 502–509. doi: 10.1016/j.tics.2013.08.012.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. doi: 10.1016/j.intell.2015.04.009.
Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528.
Clark, A. (2015). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford University Press.
Deary, I. J., Der, G., & Ford, G. (2001). Reaction times and intelligence differences: A population-based cohort study. Intelligence, 29(5), 389–399. doi: 10.1016/S0160-2896(01)00062-9.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14(4), 172–179. doi: 10.1016/j.tics.2010.01.004.
Duncan, J., & Owen, A. M. (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences, 23(10), 475–483.
Dunst, B., Benedek, M., Jauk, E., Bergner, S., Koschutnig, K., Sommer, M., . . . Neubauer, A. C. (2014). Neural efficiency as a function of task demands. Intelligence, 42, 22–30. doi: 10.1016/j.intell.2013.09.005.
Ertl, J. P., & Schafer, E. W. P. (1969). Brain response correlates of psychometric intelligence. Nature, 223(5204), 421–422. doi: 10.1038/223421a0.
Euler, M. J. (2018). Intelligence and uncertainty: Implications of hierarchical predictive processing for the neuroscience of cognitive ability. Neuroscience & Biobehavioral Reviews, 94, 93–112. doi: 10.1016/j.neubiorev.2018.08.013.
Euler, M. J., McKinney, T. L., Schryver, H. M., & Okabe, H. (2017). ERP correlates of the decision time-IQ relationship: The role of complexity in task- and brain-IQ effects. Intelligence, 65, 1–10. doi: 10.1016/j.intell.2017.08.003.
Euler, M. J., Weisend, M. P., Jung, R. E., Thoma, R. J., & Yeo, R. A. (2015). Reliable activation to novel stimuli predicts higher fluid intelligence. NeuroImage, 114, 311–319. doi: 10.1016/j.neuroimage.2015.03.078.
Fiandaca, M. S., Mapstone, M. E., Cheema, A. K., & Federoff, H. J. (2014). The critical need for defining preclinical biomarkers in Alzheimer’s disease. Alzheimer’s & Dementia, 10(3), S196–S212. doi: 10.1016/j.jalz.2014.04.015.
Friston, K. J. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. doi: 10.1038/nrn2787.
Genç, E., Fraenz, C., Schlüter, C., Friedrich, P., Hossiep, R., Voelkle, M. C., . . . Jung, R. E. (2018). Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nature Communications, 9(1), 1905. doi: 10.1038/s41467-018-04268-8.
Gläscher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences of the United States of America, 107(10), 4705–4709. doi: 10.1073/pnas.0910397107.
Haier, R. J. (2016). The neuroscience of intelligence. Cambridge University Press.
Haier, R. J., Colom, R., Schroeder, D. H., Condon, C. A., Tang, C., Eaves, E., & Head, K. (2009). Gray matter and intelligence factors: Is there a neuro-g? Intelligence, 37(2), 136–144. doi: 10.1016/j.intell.2008.10.011.
Haier, R. J., Siegel Jr., B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217. doi: 10.1016/0160-2896(88)90016-5.
Hearne, L. J., Mattingley, J. B., & Cocchi, L. (2016). Functional brain networks related to individual differences in human intelligence at rest. Scientific Reports, 6, 32328. doi: 10.1038/srep32328.
Hendrickson, D. E., & Hendrickson, A. E. (1980). The biological basis of individual differences in intelligence. Personality and Individual Differences, 1(1), 3–33.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25. doi: 10.1016/j.intell.2016.11.001.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–187. doi: 10.1017/S0140525X07001185.
Kapanci, T., Merks, S., Rammsayer, T. H., & Troche, S. J. (2019). On the relationship between P3 latency and mental ability as a function of increasing demands in a selective attention task. Brain Sciences, 9(2), 28. doi: 10.3390/brainsci9020028.
Kievit, R. A., Davis, S. W., Griffiths, J., Correia, M. M., Cam-CAN, & Henson, R. N. (2016). A watershed model of individual differences in fluid intelligence. Neuropsychologia, 91, 186–198. doi: 10.1016/j.neuropsychologia.2016.08.008.
Kovacs, K., & Conway, A. R. A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27(3), 151–177. doi: 10.1080/1047840X.2016.1153946.
Kretzschmar, A., Spengler, M., Schubert, A.-L., Steinmayr, R., & Ziegler, M. (2018). The relation of personality and intelligence – What can the Brunswik symmetry principle tell us? Journal of Intelligence, 6(3), 30. doi: 10.3390/jintelligence6030030.
Kruschwitz, J. D., Waller, L., Daedelow, L. S., Walter, H., & Veer, I. M. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the Human Connectome Project 1200 data set. NeuroImage, 171, 323–331. doi: 10.1016/j.neuroimage.2018.01.018.
Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395. doi: 10.1371/journal.pcbi.1000395.
Mackintosh, N. J. (2011). IQ and human intelligence (2nd ed.). Oxford University Press.
Martínez, K., Madsen, S. K., Joshi, A. A., Joshi, S. H., Román, F. J., Villalon-Reina, J., . . . Colom, R. (2015). Reproducibility of brain-cognition relationships using three cortical surface-based protocols: An exhaustive analysis based on cortical thickness. Human Brain Mapping, 36(8), 3227–3245. doi: 10.1002/hbm.22843.
McKinney, T. L., & Euler, M. J. (2019). Neural anticipatory mechanisms predict faster reaction times and higher fluid intelligence. Psychophysiology, 56(10), e13426. doi: 10.1111/psyp.13426.
Neubauer, A. C., & Fink, A. (2009a). Intelligence and neural efficiency: Measures of brain activation versus measures of functional connectivity in the brain. Intelligence, 37(2), 223–229. doi: 10.1016/j.intell.2008.10.008.
Neubauer, A. C., & Fink, A. (2009b). Intelligence and neural efficiency. Neuroscience & Biobehavioral Reviews, 33(7), 1004–1023. doi: 10.1016/j.neubiorev.2009.04.001.
Nussbaumer, D., Grabner, R. H., & Stern, E. (2015). Neural efficiency in working memory tasks: The impact of task demand. Intelligence, 50, 196–208. doi: 10.1016/j.intell.2015.04.004.
Pineda-Pardo, J. A., Martínez, K., Román, F. J., & Colom, R. (2016). Structural efficiency within a parieto-frontal network and cognitive differences. Intelligence, 54, 105–116. doi: 10.1016/j.intell.2015.12.002.
Poldrack, R. A. (2015). Is “efficiency” a useful concept in cognitive neuroscience? Developmental Cognitive Neuroscience, 11, 12–17.
Román, F. J., Abad, F. J., Escorial, S., Burgaleta, M., Martínez, K., Álvarez-Linera, J., . . . Colom, R. (2014). Reversed hierarchy in the brain for general and specific cognitive abilities: A morphometric analysis. Human Brain Mapping, 35(8), 3805–3818. doi: 10.1002/hbm.22438.
Ryman, S. G., Yeo, R. A., Witkiewitz, K., Vakhtin, A. A., van den Heuvel, M., de Reus, M., . . . Jung, R. E. (2016). Fronto-parietal gray matter and white matter efficiency differentially predict intelligence in males and females. Human Brain Mapping, 37(11), 4006–4016. doi: 10.1002/hbm.23291.
Santarnecchi, E., Emmendorfer, A., & Pascual-Leone, A. (2017). Dissecting the parieto-frontal correlates of fluid intelligence: A comprehensive ALE meta-analysis study. Intelligence, 63, 9–28. doi: 10.1016/j.intell.2017.04.008.
Santarnecchi, E., Emmendorfer, A., Tadayon, S., Rossi, S., Rossi, A., & Pascual-Leone, A. (2017). Network connectivity correlates of variability in fluid intelligence performance. Intelligence, 65, 35–47. doi: 10.1016/j.intell.2017.10.002.
Schubert, A.-L., Hagemann, D., & Frischkorn, G. T. (2017). Is general intelligence little more than the speed of higher-order processing? Journal of Experimental Psychology: General, 146(10), 1498–1512. doi: 10.1037/xge0000325.
Schultz, D. H., & Cole, M. W. (2016). Higher intelligence is associated with less task-related brain network reconfiguration. The Journal of Neuroscience, 36(33), 8551–8561. doi: 10.1523/JNEUROSCI.0358-16.2016.
Sheppard, L. D., & Vernon, P. A. (2008). Intelligence and speed of information-processing: A review of 50 years of research. Personality and Individual Differences, 44(3), 535–551. doi: 10.1016/j.paid.2007.09.015.
Troche, S. J., Merks, S., Houlihan, M. E., & Rammsayer, T. H. (2017). On the relation between mental ability and speed of information processing in the Hick task: An analysis of behavioral and electrophysiological speed measures. Personality and Individual Differences, 118, 11–16. doi: 10.1016/j.paid.2017.02.027.
Vakhtin, A. A., Ryman, S. G., Flores, R. A., & Jung, R. E. (2014). Functional brain networks contributing to the Parieto-Frontal Integration Theory of Intelligence. NeuroImage, 103, 349–354. doi: 10.1016/j.neuroimage.2014.09.055.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. doi: 10.1523/JNEUROSCI.1443-09.2009.
6 Human Intelligence and Network Neuroscience
Aron K. Barbey
Introduction

Flexibility is central to human intelligence and is made possible by the brain’s remarkable capacity to reconfigure itself – to continually update prior knowledge on the basis of new information and to actively generate internal predictions that guide adaptive behavior and decision making. Rather than viewing the brain as lying dormant until stimulated, contemporary research conceives of it as a dynamic and active inference generator that anticipates incoming sensory inputs, forming hypotheses about the world that can be tested against sensory signals arriving in the brain (Clark, 2013; Friston, 2010). Plasticity is therefore critical for the emergence of human intelligence, providing a powerful mechanism for updating prior beliefs, generating dynamic predictions about the world, and adapting in response to ongoing changes in the environment (Barbey, 2018). This perspective provides a catalyst for contemporary research on human intelligence, breaking away from the classic view that general intelligence (g) originates from individual differences in a fixed set of cortical regions or a singular brain network (for reviews, see Haier, 2017; Posner & Barbey, 2020). Early studies investigating the neurobiology of g focused on the lateral prefrontal cortex (Barbey, Colom, & Grafman, 2013b; Duncan et al., 2000), motivating an influential theory based on the role of this region in cognitive control functions for intelligent behavior (Duncan & Owen, 2000). The later emergence of network-based theories reflected an effort to examine the neurobiology of intelligence through a wider lens, accounting for individual differences in g on the basis of broadly distributed networks. For example, the Parieto-Frontal Integration Theory (P-FIT) was the first to propose that “a discrete parieto-frontal network underlies intelligence” (Jung & Haier, 2007) and that g reflects the capacity of this network to evaluate and test hypotheses for problem-solving (see also Barbey et al., 2012).
Aron K. Barbey, Decision Neuroscience Laboratory, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 North Mathews Avenue, Urbana, IL 61801, USA. Email: [email protected]; Web: http://decisionneurosciencelab.org/

A central feature of the P-FIT model is the integration of knowledge between the frontal and parietal cortex, afforded by white-matter fiber tracts that enable efficient communication among regions. Evidence to support the fronto-parietal network’s role in a wide range of problem-solving tasks later motivated the Multiple-Demand (MD) Theory, which proposes that this network underlies attentional control mechanisms for goal-directed problem-solving (Duncan, 2010). Finally, the Process Overlap Theory represents a recent network approach that accounts for individual differences in g by appealing to the spatial overlap among specific brain networks, reflecting the shared cognitive processes underlying g (Kovacs & Conway, 2016). Thus, contemporary theories suggest that individual differences in g originate from functionally localized processes within specific brain regions or networks (Table 6.1; for a comprehensive review of cognitive neuroscience theories of intelligence, see Chapter 5, by Euler and McKinney). Network Neuroscience Theory adopts a new perspective, proposing that g originates from individual differences in the system-wide topology and dynamics of the human brain (Barbey, 2018). According to this approach, the small-world topology of brain networks enables the rapid reconfiguration of their modular community structure, creating globally-coordinated mental representations of a desired goal-state and the sequence of operations required to achieve it. This chapter surveys recent evidence within the rapidly developing field of network neuroscience that assesses the nature and mechanisms of general intelligence (Barbey, 2018; Girn, Mills, & Christoff, 2019) (for an introduction to modern methods in network neuroscience, see Chapter 2, by Hilger and Sporns). We identify directions for future research that aim to resolve prior methodological limitations and further investigate the hypothesis that general intelligence reflects individual differences in network mechanisms for (i) efficient and (ii) flexible information processing.

Table 6.1 Summary of cognitive neuroscience theories of human intelligence.

                              Functional Localization        System-Wide Topology and Dynamics
                              Primary  Primary  Multiple     Small-World  Network      Network
Theory                        Region   Network  Networks     Topology     Flexibility  Dynamics
Lateral PFC Theory              ✔        ✘        ✘             ✘            ✘            ✘
P-FIT Theory*                   ✘        ✔        ✘             ✘            ✘            ✘
MD Theory                       ✘        ✔        ✘             ✘            ✘            ✘
Process Overlap Theory          ✘        ✘        ✔             ✘            ✘            ✘
Network Neuroscience Theory     ✘        ✘        ✔             ✔            ✔            ✔

* The P-FIT theory was the first to propose that “a discrete parieto-frontal network underlies intelligence” (Jung & Haier, 2007).
Network Efficiency

Early research in the neurosciences revealed that the brain is designed for efficiency – to minimize the cost of information processing while maximizing the capacity for growth and adaptation (Bullmore & Sporns, 2012; Ramón y Cajal, Pasik, & Pasik, 1999). Minimization of cost is achieved by dividing the cortex into anatomically localized modules, composed of densely interconnected regions or nodes. The spatial proximity of nodes within each module reduces the average length of axonal projections (conservation of space and material), increasing the signal transmission speed (conservation of time) and promoting local efficiency (Latora & Marchiori, 2001). This compartmentalization of function enhances robustness to brain injury by limiting the likelihood of global system failure (Barbey et al., 2015). Indeed, the capacity of each module to function and modify its operations without adversely affecting other modules enables cognitive flexibility (Barbey, Colom, & Grafman, 2013a) and therefore confers an important adaptive advantage (Bassett & Gazzaniga, 2011; Simon, 1962). Critically, however, the deployment of modules for coordinated system-wide function requires a network architecture that also enables global information processing. Local efficiency is therefore complemented by global efficiency, which reflects the capacity to integrate information across the network as a whole and represents the efficiency of the system for information transfer between any two nodes. This complementary aim, however, creates a need for long-distance connections that incur a high wiring cost. Thus, an efficient design must balance competing constraints on brain organization: the pressure to decrease wiring cost through local specialization and an opposing need to increase connection distance to facilitate global, system-wide function.
These competing constraints are captured by formal models of network topology (Deco, Tononi, Boly, & Kringelbach, 2015) (Figure 6.1). Local efficiency is embodied by a regular network or lattice, in which each node is connected to an equal number of its nearest neighbors, supporting direct local communication in the absence of long-range connections. In contrast, global efficiency is exemplified by a random network, in which each node connects on average to any other node, including connections between physically distant regions.
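The contrast between these network regimes can be sketched computationally. The toy implementation below builds a ring lattice and applies Watts–Strogatz-style random rewiring; all parameters are illustrative, and global efficiency is computed as the mean inverse shortest-path length (after Latora & Marchiori, 2001). A few rewired long-range connections raise global efficiency while largely preserving the high clustering of the lattice:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Regular ring: each node linked to k//2 neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring: redirect each edge with probability p."""
    n = len(adj)
    for a in range(n):
        for b in sorted(adj[a]):  # snapshot; each edge handled once (b > a)
            if b > a and rng.random() < p:
                candidates = [c for c in range(n) if c != a and c not in adj[a]]
                if candidates:
                    c = rng.choice(candidates)
                    adj[a].remove(b); adj[b].remove(a)
                    adj[a].add(c); adj[c].add(a)
    return adj

def distances_from(adj, s):
    """Breadth-first search: hop distances from s to all reachable nodes."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def clustering(adj):
    """Mean local clustering coefficient (triangle density around each node)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for x in nbrs for y in nbrs if x < y and y in adj[x])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs."""
    n = len(adj)
    total = 0.0
    for s in adj:
        dist = distances_from(adj, s)
        total += sum(1.0 / d for v, d in dist.items() if v != s)
    return total / (n * (n - 1))

rng = random.Random(0)
n, k = 100, 4
lattice = ring_lattice(n, k)
c_lat, e_lat = clustering(lattice), global_efficiency(lattice)

small = rewire(ring_lattice(n, k), 0.1, rng)  # a few long-range short-cuts
c_sw, e_sw = clustering(small), global_efficiency(small)

print(f"lattice:     clustering={c_lat:.2f}  global efficiency={e_lat:.3f}")
print(f"small-world: clustering={c_sw:.2f}  global efficiency={e_sw:.3f}")
```

Setting the rewiring probability to 1 would produce the random-network extreme: low clustering but high global efficiency. The small-world regime in between captures why a sparse admixture of long-range connections yields most of the integrative benefit at a fraction of the wiring cost.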
Figure 6.1 Small-world network. Human brain networks exhibit a small-world topology that represents a parsimonious balance between a regular brain network, which promotes local efficiency, and a random brain network, which enables global efficiency. Figure modified with permission from Bullmore and Sporns (2012)
Recent discoveries in network neuroscience suggest that the human brain balances these competing constraints by incorporating elements of a regular and random network to create a small-world topology (Bassett & Bullmore, 2006, 2017; Watts & Strogatz, 1998). A small-world network embodies (i) short-distance connections that reduce the wiring cost (high local clustering), along with (ii) long-distance connections that provide direct topological links or short-cuts that promote global information processing (short path length). Together, these features enable high local and global efficiency at relatively low cost, providing a parsimonious architecture for human brain organization (Robinson, Henderson, Matar, Riley, & Gray, 2009; Sporns, Tononi, & Edelman, 2000a, b; van der Maas et al., 2006). Evidence further indicates that efficient network organization is based on routing strategies that combine local and global information about brain network topology in an effort to approximate a small-world architecture (Avena-Koenigsberger et al., 2019). Research in network neuroscience has consistently observed that the topology of human brain networks indeed exemplifies a small-world architecture, which has been demonstrated across multiple neuroimaging modalities, including structural (He, Chen, & Evans, 2007), functional (Achard & Bullmore, 2007; Achard, Salvador, Whitcher, Suckling, & Bullmore, 2006; Eguiluz, Chialvo, Cecchi, Baliki, & Apkarian, 2005), and diffusion tensor magnetic resonance imaging (MRI) (Hagmann et al., 2007). Alterations in the topology of a small-world network have also been linked to multiple disease states (Stam, 2014; Stam, Jones, Nolte, Breakspear, & Scheltens, 2007), stages of lifespan development (Zuo et al., 2017), and pharmacological interventions (Achard & Bullmore, 2007), establishing their importance for understanding human
a. k. barbey
health, aging, and disease (Bassett & Bullmore, 2009). Emerging neuroscience evidence further indicates that general intelligence is directly linked to characteristics of a small-world topology, demonstrating that individual differences in g are associated with network measures of global efficiency.
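The balance depicted in Figure 6.1 can be made concrete with the classic Watts and Strogatz (1998) observation: adding a few long-range shortcuts to a ring lattice sharply shortens the average path length while barely reducing clustering. A small illustrative sketch (network size and shortcut placement are arbitrary choices, not from the chapter):

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def avg_path_length(adj):
    """Mean shortest-path length over ordered node pairs (assumes a connected graph)."""
    nodes = list(adj)
    total = sum(d for u in nodes for v, d in bfs_distances(adj, u).items() if v != u)
    return total / (len(nodes) * (len(nodes) - 1))

def avg_clustering(adj):
    """Mean local clustering coefficient: fraction of neighbor pairs that are linked."""
    coeffs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(adj)

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors (k even)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

lattice = ring_lattice(20, 4)
small_world = ring_lattice(20, 4)
for a, b in [(0, 10), (5, 15)]:  # two long-range shortcuts
    small_world[a].add(b)
    small_world[b].add(a)

print(avg_clustering(lattice), avg_path_length(lattice))
print(avg_clustering(small_world), avg_path_length(small_world))
```

In this toy case clustering falls only from 0.50 to 0.46 while the average path length drops well below the lattice value of about 2.89: high local clustering with short path length is the small-world signature described above.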
Small-World Topology and General Intelligence

The functional topology and community structure of the human brain have been extensively studied through the application of resting-state functional MRI, which examines spontaneous low-frequency fluctuations of the blood-oxygen-level-dependent (BOLD) signal. This method demonstrates coherence in brain activity across spatially distributed regions to reveal a core set of intrinsic connectivity networks (ICNs; Figure 6.2a) (Achard et al., 2006; Biswal, Yetkin, Haughton, & Hyde, 1995; Buckner et al., 2009; Bullmore & Sporns, 2009; Power & Petersen, 2013; Power et al., 2011; Smith et al., 2013; Sporns, Chialvo, Kaiser, & Hilgetag, 2004; van den Heuvel, Mandl, Kahn, & Hulshoff Pol, 2009). Functional brain networks largely converge with the structural organization of networks measured using diffusion tensor MRI (Byrge, Sporns, & Smith, 2014; Hagmann et al., 2007; Park & Friston, 2013), together providing a window into the community structure from which global information processing emerges. The discovery that global brain network efficiency is associated with general intelligence was established by van den Heuvel, Stam, Kahn, and Hulshoff Pol (2009), who observed that g was positively correlated with global efficiency (as indexed by a globally shorter path length) (for earlier research on brain network efficiency using PET, see Haier et al., 1988). Santarnecchi, Galli, Polizzotto, Rossi, and Rossi (2014) further examined whether this finding reflects individual differences in connectivity strength, investigating the relationship between general intelligence and global network efficiency derived from weakly vs. strongly connected regions. Whereas strong connections provide the basis for densely connected modules, weak links index long-range connections that typically relay information between (rather than within) modules. The authors replicated van den Heuvel, Stam, et al.
(2009) and further demonstrated that weakly connected regions explain more variance in g than strongly connected regions (Santarnecchi et al., 2014), supporting the hypothesis that global efficiency and the formation of weak connections are central to general intelligence. Further support for the role of global efficiency in general intelligence is provided by EEG studies, which examine functional connectivity as coherence between time series of distant EEG channels measured at rest. For instance, Langer and colleagues provide evidence for a positive association between g and the small-world topology of intrinsic brain networks derived from EEG (Langer, Pedroni, & Jancke, 2013; Langer et al., 2012). Complementary research examining the global connectivity of regions within the prefrontal cortex also supports a positive association with measures
Figure 6.2 Intrinsic connectivity networks and network flexibility. (A) Functional networks drawn from a large-scale meta-analysis of peaks of brain activity for a wide range of cognitive, perceptual, and motor tasks. Upper left figure represents a graph theoretic embedding of the nodes. Similarity between nodes is represented by spatial distance, and nodes are assigned to their corresponding network by color. Middle and right sections present the nodal and voxel-wise network distribution in both hemispheres. Figure modified with permission from Power and Petersen (2013). (B) Left graph illustrates the percent of regions within each intrinsic connectivity network that can transition to many easy-to-reach network states, primarily within the default mode network. Right graph illustrates the percent of regions within each intrinsic connectivity network that can transition to many difficult-to-reach network states, primarily within cognitive control networks. Figure modified with permission from Gu et al. (2015)
of intelligence. For example, Cole, Ito, and Braver (2015) and Cole, Yarkoni, Repovš, Anticevic, and Braver (2012) observed that the global connectivity of the left lateral prefrontal cortex (as measured by the average connectivity of this region with every other region in the brain) demonstrates a positive association with fluid intelligence. Converging evidence is provided by Song et al. (2008), who found that the global connectivity of the bilateral dorsolateral prefrontal cortex was associated with general intelligence. To integrate the diversity of studies investigating the role of network efficiency in general intelligence – and to account for null findings (Kruschwitz, Waller, Daedelow, Walter, & Veer, 2018) – it will be important to examine differences among studies with respect to resting-state fMRI data acquisition, preprocessing, network analysis, and the study population. A central question concerns whether resting-state fMRI is sufficiently sensitive or whether task-based fMRI methods provide a more powerful lens to examine the role of network efficiency in general intelligence. Indeed, a growing body of evidence suggests that functional brain network organization measured during cognitive tasks is a stronger predictor of intelligence than when measured during resting-state fMRI (Greene, Gao, Scheinost, & Constable, 2018; Xiao, Stephen, Wilson, Calhoun, & Wang, 2019). This literature has primarily employed task-based fMRI paradigms investigating cognitive control, specifically within the domain of working memory (for a review, see Chapter 13, by Cohen and D'Esposito). For example, fMRI studies investigating global brain network organization have revealed that working memory task performance is associated with an increase in network integration and a decrease in network segregation (Cohen & D'Esposito, 2016; see also Gordon, Stollstorff, & Vaidya, 2012; Liang, Zou, He, & Yang, 2016).
Increased integration was found primarily within networks for cognitive control (e.g., the fronto-parietal and cingular-opercular networks) and for task-relevant sensory processing (e.g., the somatomotor network) (Cohen, Gallen, Jacobs, Lee, & D’Esposito, 2014). Thus, global brain network integration measured by task-based fMRI provides a powerful lens for further characterizing the role of network efficiency in high-level cognitive processes (e.g., cognitive control and working memory). Increasingly, scientists have proposed that high-level cognitive operations emerge from brain network dynamics (Breakspear, 2017; Cabral, Kringelbach, & Deco, 2017; Deco & Corbetta, 2011; Deco, Jirsa, & McIntosh, 2013), motivating an investigation of their role in general intelligence.
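Two of the summary measures used in the studies above have precise, simple forms: the global connectivity of a region (Cole et al., 2012) is its mean connectivity with every other region, and segregation vs. integration can be summarized as a within- vs. between-module contrast of mean connectivity (one of several such measures in the literature). A toy sketch with hand-picked, illustrative connectivity values, not real data:

```python
def global_connectivity(fc, i):
    """Mean connectivity of region i with every other region."""
    row = fc[i]
    return sum(w for j, w in enumerate(row) if j != i) / (len(row) - 1)

def segregation(fc, modules):
    """(mean within-module FC - mean between-module FC) / mean within-module FC."""
    label = {i: m for m, block in enumerate(modules) for i in block}
    within, between = [], []
    n = len(fc)
    for i in range(n):
        for j in range(i + 1, n):
            (within if label[i] == label[j] else between).append(fc[i][j])
    w = sum(within) / len(within)
    return (w - sum(between) / len(between)) / w

def make_fc(n, modules, w_in, w_out):
    """Block-structured toy connectivity matrix (1.0 on the diagonal)."""
    fc = [[1.0 if i == j else w_out for j in range(n)] for i in range(n)]
    for block in modules:
        for i in block:
            for j in block:
                if i != j:
                    fc[i][j] = w_in
    return fc

modules = [[0, 1, 2], [3, 4, 5]]
rest = make_fc(6, modules, 0.8, 0.2)  # rest: strong within-module coupling
task = make_fc(6, modules, 0.7, 0.4)  # task: between-module coupling rises

print(round(segregation(rest, modules), 3))       # 0.75
print(round(segregation(task, modules), 3))       # 0.429
print(round(global_connectivity(rest, 0), 3))     # 0.44
```

Here the shift from the "rest" to the "task" matrix lowers segregation (and so raises integration), mirroring the working-memory findings cited above in miniature.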
Network Flexibility and Dynamics

Recent discoveries in network neuroscience motivate a new perspective about the role of global network dynamics in general intelligence – marking an important point of departure from the standard view that
intelligence originates from individual differences in a fixed set of cortical regions (Duncan et al., 2000) or a singular brain network (Barbey et al., 2012; Duncan, 2010; Jung & Haier, 2007) (Table 6.1). Accumulating evidence instead suggests that network efficiency and dynamics are critical for the diverse range of mental abilities underlying general intelligence (for earlier research on brain network efficiency using PET; see Haier et al. (1988)).
Network Dynamics of Crystallized Intelligence

Global information processing is enabled by the hierarchical community structure of the human brain, with modules that are embedded within modules to form complex, interconnected networks (Betzel & Bassett, 2017; Meunier, Lambiotte, & Bullmore, 2010). This infrastructure is supported, in part, by nodes of high connectivity or hubs (Buckner et al., 2009; Hilger, Ekman, Fiebach, & Basten, 2017a, b; Power, Schlaggar, Lessov-Schlaggar, & Petersen, 2013; van den Heuvel & Sporns, 2013). These regions serve distinct roles either as provincial hubs, which primarily connect to nodes within the same module, or as connector hubs, which instead provide a link between distinct modules (Guimera & Nunes Amaral, 2005). Hubs are therefore essential for transferring information within and between ICNs and provide the basis for mutual interactions between cognitive processes (Bertolero, Yeo, & D'Esposito, 2015; van der Maas et al., 2006). Indeed, strongly connected hubs together comprise a rich club network that mediates almost 70% of the shortest paths throughout the brain and is therefore important for global network efficiency (van den Heuvel & Sporns, 2011). By applying engineering methods to network neuroscience, research from the field of network control theory further elucidates how brain network dynamics are shaped by the topology of strongly connected hubs, examining their capacity to act as drivers (network controllers) that move the system into specific network states (Gu et al., 2015). According to this approach, the hierarchical community structure of the brain may facilitate or constrain the transition from one network state to another, for example, by enabling a direct path that requires minimal transitions (an easy-to-reach network state) or a winding path that requires many transitions (a difficult-to-reach network state).
Thus, by investigating how the brain is organized to form topologically direct or indirect pathways (comprising short- and long-distance connections), powerful inferences about the flexibility and dynamics of ICNs can be drawn. Recent studies applying this approach demonstrate that strongly connected hubs enable a network to function within many easy-to-reach states (Gu et al., 2015), engaging highly accessible representations of prior knowledge and experience that are a hallmark of crystallized intelligence (Carroll, 1993; Cattell, 1971; McGrew & Wendling, 2010). Extensive neuroscience data indicate that the topology of brain networks is shaped by learning and prior
experience – reflecting the formation of new neurons, synapses, connections, and blood supply pathways that promote the accessibility of crystallized knowledge (Bassett et al., 2011; Buchel, Coull, & Friston, 1999; Pascual-Leone, Amedi, Fregni, & Merabet, 2005). The capacity to engage easy-to-reach network states – and therefore to access crystallized knowledge – is exhibited by multiple ICNs, most prominently for the default mode network (Betzel, Gu, Medaglia, Pasqualetti, & Bassett, 2016; Gu et al., 2015) (Figure 6.2b). This network is known to support semantic and episodic memory representations that are central to crystallized intelligence (Christoff, Irving, Fox, Spreng, & Andrews-Hanna, 2016; Kucyi, 2018; St Jacques, Kragel, & Rubin, 2011; Wirth et al., 2011) and to provide a baseline, resting state from which these representations can be readily accessed. Thus, according to this view, crystallized abilities depend on accessing prior knowledge and experience through the engagement of easily reachable network states, supported, for example, by strongly connected hubs within the default mode network (Betzel, Gu et al., 2016; Gu et al., 2015).
Network Dynamics of Fluid Intelligence

Although the utility of strongly connected hubs is well-recognized, a growing body of evidence suggests that they may not fully capture the higher-order structure of brain network organization and the flexibility of information processing that this global structure is known to afford (Schneidman, Berry, Segev, & Bialek, 2006). Research in network science has long appreciated that global information processing depends on the formation of weak ties, which comprise nodes with a small number of connections (Bassett & Bullmore, 2006, 2017; Granovetter, 1973). By analogy to a social network, a weak tie represents a mutual acquaintance that connects two groups of close friends, providing a weak link between multiple modules. In contrast to the intuition that strong connections are optimal for network function, the introduction of weak ties is known to produce a more globally efficient small-world topology (Gallos, Makse, & Sigman, 2012; Granovetter, 1973). Research investigating their role in brain network dynamics further indicates that weak connections enable the system to function within many difficult-to-reach states (Gu et al., 2015), reflecting a capacity to adapt to novel situations by engaging mechanisms for flexible, intelligent behavior. Unlike the easily reachable network states underlying crystallized intelligence, difficult-to-reach states rely on connections and pathways that are not well-established from prior experience – instead requiring the adaptive selection and assembly of new representations that introduce high cognitive demands. The capacity to access difficult-to-reach states is exhibited by multiple ICNs, most notably the fronto-parietal and cingulo-opercular networks (Gu et al., 2015) (Figure 6.2b). Together, these networks are known to support cognitive control, enabling the top-down regulation and control of mental operations (engaging the
fronto-parietal network) in response to environmental change and adaptive task goals (maintained by the cingulo-opercular network) (Dosenbach, Fair, Cohen, Schlaggar, & Petersen, 2008). Converging evidence from resting-state fMRI and human lesion studies strongly implicates the fronto-parietal network in cognitive control, demonstrating that this network accounts for individual differences in adaptive reasoning and problem-solving – assessed by fMRI measures of global efficiency (Cole et al., 2012; Santarnecchi et al., 2014; van den Heuvel, Stam, et al., 2009) and structural measures of brain integrity (Barbey, Colom, Paul, & Grafman, 2014; Barbey et al., 2012, 2013a; Glascher et al., 2010). From this perspective, the fronto-parietal network’s role in fluid intelligence reflects a global, system-wide capacity to adapt to novel environments, engaging cognitive control mechanisms that guide the dynamic selection and assembly of mental operations required for goal achievement (Duncan, Chylinski, Mitchell, & Bhandari, 2017). Thus, rather than attempt to localize individual differences in fluid intelligence to a specific brain network, this framework instead suggests that weak connections within the fronto-parietal and cingulo-opercular networks (Cole et al., 2012; Santarnecchi et al., 2014) drive global network dynamics – flexibly engaging difficult-to-reach states in the service of adaptive behavior and providing a window into the architecture of individual differences in general intelligence at a global level.
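The claim that weak, between-module links drive global efficiency can be illustrated directly: starting from two isolated, densely connected modules, adding just two cross-module ties sharply raises the mean inverse path length. A toy sketch (the graph and the edges chosen are illustrative, not from any of the cited studies):

```python
from collections import deque

def global_efficiency(adj):
    """Mean inverse shortest-path length over ordered node pairs.
    Unreachable pairs contribute zero, so disconnected graphs score low."""
    nodes = list(adj)
    total = 0.0
    for u in nodes:
        dist = {u: 0}
        queue = deque([u])
        while queue:
            a = queue.popleft()
            for b in adj[a]:
                if b not in dist:
                    dist[b] = dist[a] + 1
                    queue.append(b)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (len(nodes) * (len(nodes) - 1))

def build(edges, n=6):
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

strong = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]  # two isolated modules
weak = [(2, 3), (0, 5)]                                    # cross-module "weak ties"

print(round(global_efficiency(build(strong)), 3))         # 0.4: modules only
print(round(global_efficiency(build(strong + weak)), 3))  # 0.756: weak ties added
```

The strong within-module edges alone leave the two communities mutually unreachable; the two weak ties nearly double global efficiency, echoing the Santarnecchi et al. (2014) finding that weak links carry much of the variance relevant to g.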
Network Dynamics of General Intelligence

Recent discoveries in network neuroscience motivate a new perspective about the role of global network dynamics in general intelligence – breaking away from standard theories that account for individual differences in g on the basis of a single brain region (Duncan et al., 2000), a primary brain network (Barbey et al., 2012; Duncan, 2010; Jung & Haier, 2007), or the overlap among specific networks (Kovacs & Conway, 2016). Accumulating evidence instead suggests that network flexibility and dynamics are critical for the diverse range of mental abilities underlying general intelligence. According to Network Neuroscience Theory, the capacity of ICNs to transition between network states is supported by their small-world topology, which enables each network to operate in a critical state that is close to a phase transition between a regular and random network (Beggs, 2008; Petermann et al., 2009) (Figure 6.1). The transition toward a regular network configuration is associated with the engagement of specific cognitive abilities, whereas the transition toward a random network configuration is linked to the engagement of broad or general abilities (Figure 6.1). Rather than reflect a uniform topology of dynamic states, emerging evidence suggests that ICNs exhibit different degrees of variability (Betzel, Gu et al., 2016; Mattar, Betzel, & Bassett, 2016) – elucidating the network architecture that supports flexible, time-varying profiles of functional
connectivity. Connections between modules are known to fluctuate more than connections within modules, demonstrating greater dynamic variability for connector hubs relative to provincial hubs (Zalesky, Fornito, Cocchi, Gollo, & Breakspear, 2014; Zhang et al., 2016). Thus, the modular community structure of specific mental abilities provides a stable foundation upon which the more flexible, small-world topology of broad mental abilities is constructed (Hampshire, Highfield, Parkin, & Owen, 2012). The dynamic flexibility of ICNs underlying broad mental abilities (Figure 6.2b) is known to reflect their capacity to access easy- vs. difficult-to-reach states, with greatest dynamic flexibility exhibited by networks that are strongly associated with fluid intelligence, particularly the fronto-parietal network (Figure 6.3) (Braun et al., 2015; Cole et al., 2013; Shine et al., 2016).
Figure 6.3 Dynamic functional connectivity. (A) Standard deviation in resting-state BOLD fMRI reveals regions of low (blue), moderate (green), and high (red) variability. (B) Dynamic functional connectivity matrices are derived by windowing time series and estimating the functional connectivity between pairs of regions. Rather than remain static, functional connectivity matrices demonstrate changes over time, revealing dynamic variability in the connectivity profile of specific brain regions. (C) Dynamic functional connectivity matrices can be used to assess the network’s modular structure at each time point, revealing regions of low or high temporal dynamics. Figure modified with permission from Mattar et al. (2016)
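The windowing procedure described in Figure 6.3b reduces to computing a correlation within each sliding window of the regional time series. A minimal sketch with made-up time series for two regions (window width and signal values are arbitrary illustrations):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def sliding_window_fc(ts_a, ts_b, width, step=1):
    """Time-resolved functional connectivity: one correlation per window."""
    return [pearson(ts_a[s:s + width], ts_b[s:s + width])
            for s in range(0, len(ts_a) - width + 1, step)]

# Toy BOLD-like time series for two regions (illustrative values only).
region_a = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
region_b = [1, 2, 3, 2, 1, 0, 1, 2, 3, 2]

fc_series = sliding_window_fc(region_a, region_b, width=4)
print([round(r, 2) for r in fc_series])
```

The connectivity estimate fluctuates substantially across windows rather than remaining static, which is the dynamic variability the figure describes; in a full analysis this is done for every pair of regions to yield one connectivity matrix per window.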
Functional Brain Network Reconfiguration

A growing body of research examines the dynamic reconfiguration of brain networks in the service of goal-directed, intelligent behavior. Recent findings indicate that the functional reconfiguration of brain networks (i.e., greater network flexibility) is positively associated with learning and performance on tests of executive function. For example, Bassett et al. (2011) found that functional network flexibility (as measured by changes in the modular structure of brain networks) predicted future learning in a simple motor task. Converging evidence is provided by Braun et al. (2015), who examined functional brain network reconfiguration in a continuous recognition memory task (i.e., n-back) and observed that higher cognitive load was associated with greater network reorganization within frontal cortex. In addition, Jia, Hu, and Deshpande (2014) examined functional brain network dynamics in the context of resting-state fMRI, investigating the stability of connections over time. The authors found that performance on tests of executive function was associated with the average stability of connections examined at the whole brain level, with greater brain network reconfiguration (i.e., lower stability) predicting higher performance. Notably, the highest level of functional brain network reconfiguration was observed within the fronto-parietal network (Jia et al., 2014; see also, Hilger, Fukushima, Sporns, & Fiebach, 2020). Taken together, these findings support the role of flexible brain network reconfiguration in goal-directed, intelligent behavior. Additional evidence to support this conclusion is provided by studies that investigate the efficiency of functional brain network reconfiguration in the context of task performance. For example, Schultz and Cole (2016) examined the similarity between functional connectivity patterns observed at rest vs. during three task conditions (language, working memory, and reasoning).
The authors predicted that greater reconfiguration efficiency (as measured by the similarity between the resting-state and task-based connectomes) would be associated with better performance. Consistent with this prediction, the authors found that individuals with greater reconfiguration efficiency demonstrated better task performance and that this measure was positively associated with general intelligence. This finding emphasizes the importance of reconfiguration efficiency in task performance and supports the role of flexible, dynamic network mechanisms for general intelligence. Network Neuroscience Theory motivates new predictions about the role of network dynamics in learning, suggesting that the early stages of learning depend on adaptive behavior and the engagement of difficult-to-reach network states, followed by the transfer of skills to easily reachable network states as knowledge and experience are acquired to guide problem-solving. Indeed, recent findings suggest that the development of fluid abilities from childhood to young adulthood is associated with individual differences in the flexible
reconfiguration of brain networks for fluid intelligence (Chai et al., 2017). A recent study by Finc et al. (2020) examined the dynamic reconfiguration of functional brain networks during working memory training, providing evidence that early stages of learning engage cognitive control networks for adaptive behavior, followed by increasing reliance upon the default mode network as knowledge and skills are acquired, supporting the predictions of the Network Neuroscience Theory. A primary direction for future research is to further elucidate how the flexible reconfiguration of brain networks is related to general intelligence, with particular emphasis on mechanisms for cognitive control. Although brain networks underlying cognitive control have been extensively studied, their precise role in specific, broad, and general facets of intelligence remains to be well characterized (Mill, Ito, & Cole, 2017). Future research therefore aims to integrate the wealth of psychological and psychometric evidence on the cognitive processes underlying general intelligence (Carroll, 1993) and cognitive control (Friedman & Miyake, 2017) with research on the network mechanisms underlying these processes (Barbey, Koenigs, & Grafman, 2013c; Barbey et al., 2012, 2013b) in an effort to better characterize the cognitive and neurobiological foundations of general intelligence.
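The reconfiguration-efficiency measure of Schultz and Cole (2016) discussed above amounts to correlating the rest and task connectomes, i.e., the unique off-diagonal entries of the two functional connectivity matrices. A minimal sketch with hypothetical 3-region matrices (the values are invented for illustration):

```python
from math import sqrt

def upper_triangle(fc):
    """Vectorize the unique off-diagonal connections of a symmetric matrix."""
    n = len(fc)
    return [fc[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    """Pearson correlation of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical 3-region connectomes at rest and during a task (invented values).
rest = [[1.0, 0.8, 0.2],
        [0.8, 1.0, 0.3],
        [0.2, 0.3, 1.0]]
task = [[1.0, 0.7, 0.3],
        [0.7, 1.0, 0.35],
        [0.3, 0.35, 1.0]]

similarity = pearson(upper_triangle(rest), upper_triangle(task))
print(round(similarity, 3))
```

A similarity near 1 indicates that the task connectome is a small update of the resting-state configuration; Schultz and Cole found that individuals showing such efficient reconfiguration performed better and scored higher in general intelligence.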
Conclusion

Network Neuroscience Theory raises new possibilities for understanding the nature and mechanisms of human intelligence, suggesting that interdisciplinary research in the emerging field of network neuroscience can advance our understanding of one of the most profound problems of intellectual life: how individual differences in general intelligence – which give rise to the stunning diversity and uniqueness of human identity and personal expression – originate from the network organization of the human brain. The reviewed findings elucidate the global network architecture underlying individual differences in g, drawing upon recent studies investigating the small-world topology and dynamics of human brain networks. Rather than attribute individual differences in general intelligence to a single brain region (Duncan et al., 2000), a primary brain network (Barbey et al., 2012; Duncan, 2010; Jung & Haier, 2007), or the overlap among specific networks (Kovacs & Conway, 2016), the proposed theory instead suggests that general intelligence depends on the dynamic reorganization of ICNs – modifying their topology and community structure in the service of system-wide flexibility and adaptation (Table 6.1). This framework sets the stage for new approaches to understanding individual differences in general intelligence and motivates important questions for future research, namely:

• What are the neurobiological foundations of individual differences in g? Does the assumption that g originates from a primary brain region
or network remain tenable, or should theories broaden the scope of their analysis to incorporate evidence from network neuroscience on individual differences in the global topology and dynamics of the human brain?

• To what extent do brain network dynamics account for individual differences in specific, broad, and general facets of intelligence, and do mechanisms for cognitive control figure prominently? To gain a better understanding of this issue, a more fundamental characterization of network dynamics will be necessary.

• In what respects are ICNs dynamic? How do strong and weak connections enable specific network transformations, and what mental abilities do network dynamics support?

• How does the structural topology of ICNs shape their functional dynamics and the capacity to flexibly transition between network states? To what extent is our current understanding of network dynamics limited by an inability to measure more precise temporal profiles or to capture higher-order representations of network topology at a global level?

As the significance and scope of these issues would suggest, many fundamental questions about the nature and mechanisms of human intelligence remain to be investigated and provide a catalyst for contemporary research in network neuroscience. By investigating the foundations of general intelligence in global network dynamics, the burgeoning field of network neuroscience will continue to advance our understanding of the cognitive and neural architecture from which the remarkable constellation of individual differences in human intelligence emerges.
Acknowledgments

This work was supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract 2014-13121700004 to the University of Illinois at Urbana-Champaign (PI: Barbey) and the Department of Defense, Defense Advanced Research Projects Agency (DARPA), via Contract 2019HR00111990067 to the University of Illinois at Urbana-Champaign (PI: Barbey). The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, DARPA, or the US Government. The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Preparation of this chapter was based on and adapted from research investigating the Network Neuroscience Theory of human intelligence (Barbey, 2018).
References

Achard, S., & Bullmore, E. (2007). Efficiency and cost of economical brain functional networks. PLoS Computational Biology, 3, e17.
Achard, S., Salvador, R., Whitcher, B., Suckling, J., & Bullmore, E. (2006). A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. Journal of Neuroscience, 26(1), 63–72.
Avena-Koenigsberger, A., Yan, X., Kolchinsky, A., van den Heuvel, M. P., Hagmann, P., & Sporns, O. (2019). A spectrum of routing strategies for brain networks. PLoS Computational Biology, 15, e1006833.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Barbey, A. K., Belli, T., Logan, A., Rubin, R., Zamroziewicz, M., & Operskalski, T. (2015). Network topology and dynamics in traumatic brain injury. Current Opinion in Behavioral Sciences, 4, 92–102.
Barbey, A. K., Colom, R., & Grafman, J. (2013a). Architecture of cognitive flexibility revealed by lesion mapping. Neuroimage, 82, 547–554.
Barbey, A. K., Colom, R., & Grafman, J. (2013b). Dorsolateral prefrontal contributions to human intelligence. Neuropsychologia, 51(7), 1361–1369.
Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure and Function, 219(2), 485–494.
Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(4), 1154–1164.
Barbey, A. K., Koenigs, M., & Grafman, J. (2013c). Dorsolateral prefrontal contributions to human working memory. Cortex, 49(5), 1195–1205.
Bassett, D. S., & Bullmore, E. (2006). Small-world brain networks. Neuroscientist, 12(6), 512–523.
Bassett, D. S., & Bullmore, E. T. (2009). Human brain networks in health and disease. Current Opinion in Neurology, 22(4), 340–347.
Bassett, D. S., & Bullmore, E. T. (2017). Small-world brain networks revisited. Neuroscientist, 23(5), 499–516.
Bassett, D. S., & Gazzaniga, M. S. (2011). Understanding complexity in the human brain. Trends in Cognitive Sciences, 15(5), 200–209.
Bassett, D. S., Wymbs, N. F., Porter, M. A., Mucha, P. J., Carlson, J. M., & Grafton, S. T. (2011). Dynamic reconfiguration of human brain networks during learning. Proceedings of the National Academy of Sciences USA, 108(18), 7641–7646.
Beggs, J. M. (2008). The criticality hypothesis: How local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Science, 366(1864), 329–343.
Bertolero, M. A., Yeo, B. T., & D'Esposito, M. (2015). The modular and integrative functional architecture of the human brain. Proceedings of the National Academy of Sciences USA, 112(49), E6798–6807.
Betzel, R. F., & Bassett, D. S. (2017). Multi-scale brain networks. Neuroimage, 160, 73–83.
Betzel, R. F., Gu, S., Medaglia, J. D., Pasqualetti, F., & Bassett, D. S. (2016). Optimally controlling the human connectome: The role of network topology. Scientific Reports, 6, 30770.
Betzel, R. F., Satterthwaite, T. D., Gold, J. I., & Bassett, D. S. (2016). A positive mood, a flexible brain. arXiv preprint.
Biswal, B., Yetkin, F. Z., Haughton, V. M., & Hyde, J. S. (1995). Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance in Medicine, 34(4), 537–541.
Braun, U., Schäfer, A., Walter, H., Erk, S., Romanczuk-Seiferth, N., Haddad, L., . . . Bassett, D. S. (2015). Dynamic reconfiguration of frontal brain networks during executive cognition in humans. Proceedings of the National Academy of Sciences USA, 112(37), 11678–11683.
Breakspear, M. (2017). Dynamic models of large-scale brain activity. Nature Neuroscience, 20, 340–352.
Buchel, C., Coull, J. T., & Friston, K. J. (1999). The predictive value of changes in effective connectivity for human learning. Science, 283(5407), 1538–1541.
Buckner, R. L., Sepulcre, J., Talukdar, T., Krienen, F. M., Liu, H., Hedden, T., . . . Johnson, K. A. (2009). Cortical hubs revealed by intrinsic functional connectivity: Mapping, assessment of stability, and relation to Alzheimer's disease. Journal of Neuroscience, 29(6), 1860–1873.
Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10, 186–198.
Bullmore, E., & Sporns, O. (2012). The economy of brain network organization. Nature Reviews Neuroscience, 13, 336–349.
Byrge, L., Sporns, O., & Smith, L. B. (2014). Developmental process emerges from extended brain-body-behavior networks. Trends in Cognitive Sciences, 18(8), 395–403.
Cabral, J., Kringelbach, M. L., & Deco, G. (2017). Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms. Neuroimage, 160, 84–96.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press.
Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston: Houghton Mifflin.
Chai, L. R., Khambhati, A. N., Ciric, R., Moore, T. M., Gur, R. C., Gur, R. E., . . . Bassett, D. S. (2017). Evolution of brain network dynamics in neurodevelopment. Network Neuroscience, 1(1), 14–30.
Christoff, K., Irving, Z. C., Fox, K. C., Spreng, R. N., & Andrews-Hanna, J. R. (2016). Mind-wandering as spontaneous thought: A dynamic framework. Nature Reviews Neuroscience, 17, 718–731.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Cohen, J. R., & D'Esposito, M. (2016). The segregation and integration of distinct brain networks and their relationship to cognition. Journal of Neuroscience, 36, 12083–12094.
a. k. barbey
Cohen, J. R., Gallen, C. L., Jacobs, E. G., Lee, T. G., & D'Esposito, M. (2014). Quantifying the reconfiguration of intrinsic networks during working memory. PLoS One, 9, e106636.
Cole, M. W., Ito, T., & Braver, T. S. (2015). Lateral prefrontal cortex contributes to fluid intelligence through multinetwork connectivity. Brain Connectivity, 5(8), 497–504.
Cole, M. W., Reynolds, J. R., Power, J. D., Repovs, G., Anticevic, A., & Braver, T. S. (2013). Multi-task connectivity reveals flexible hubs for adaptive task control. Nature Neuroscience, 16(9), 1348–1355.
Cole, M. W., Yarkoni, T., Repovs, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. Journal of Neuroscience, 32(26), 8988–8999.
Deco, G., & Corbetta, M. (2011). The dynamical balance of the brain at rest. Neuroscientist, 17(1), 107–123.
Deco, G., Jirsa, V. K., & McIntosh, A. R. (2013). Resting brains never rest: Computational insights into potential cognitive architectures. Trends in Neurosciences, 36(5), 268–274.
Deco, G., Tononi, G., Boly, M., & Kringelbach, M. L. (2015). Rethinking segregation and integration: Contributions of whole-brain modelling. Nature Reviews Neuroscience, 16, 430–439.
Dosenbach, N. U., Fair, D. A., Cohen, A. L., Schlaggar, B. L., & Petersen, S. E. (2008). A dual-networks architecture of top-down control. Trends in Cognitive Sciences, 12(3), 99–105.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14(4), 172–179.
Duncan, J., Chylinski, D., Mitchell, D. J., & Bhandari, A. (2017). Complexity and compositionality in fluid intelligence. Proceedings of the National Academy of Sciences USA, 114(20), 5295–5299.
Duncan, J., & Owen, A. M. (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences, 23(10), 475–483.
Duncan, J., Seitz, R. J., Kolodny, J., Bor, D., Herzog, H., Ahmed, A., . . . Emslie, H. (2000). A neural basis for general intelligence. Science, 289(5478), 457–460.
Eguiluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M., & Apkarian, A. V. (2005). Scale-free brain functional networks. Physical Review Letters, 94, 018102.
Finc, K., Bonna, K., He, X., Lydon-Staley, D. M., Kühn, S., Duch, W., & Bassett, D. S. (2020). Dynamic reconfiguration of functional brain networks during working memory training. Nature Communications, 11, 2435.
Friedman, N. P., & Miyake, A. (2017). Unity and diversity of executive functions: Individual differences as a window on cognitive structure. Cortex, 86, 186–204.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138.
Gallos, L. K., Makse, H. A., & Sigman, M. (2012). A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks. Proceedings of the National Academy of Sciences USA, 109(8), 2825–2830.
Girn, M., Mills, C., & Christoff, K. (2019). Linking brain network reconfiguration and intelligence: Are we there yet? Trends in Neuroscience and Education, 15, 62–70.
Gläscher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences USA, 107(10), 4705–4709.
Gordon, E. M., Stollstorff, M., & Vaidya, C. J. (2012). Using spatial multiple regression to identify intrinsic connectivity networks involved in working memory performance. Human Brain Mapping, 33(7), 1536–1552.
Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.
Greene, A. S., Gao, S., Scheinost, D., & Constable, R. T. (2018). Task-induced brain state manipulation improves prediction of individual traits. Nature Communications, 9, 2807.
Gu, S., Pasqualetti, F., Cieslak, M., Telesford, Q. K., Yu, A. B., Kahn, A. E., . . . Bassett, D. S. (2015). Controllability of structural brain networks. Nature Communications, 6, 8414.
Guimerà, R., & Nunes Amaral, L. A. (2005). Functional cartography of complex metabolic networks. Nature, 433, 895–900.
Hagmann, P., Kurant, M., Gigandet, X., Thiran, P., Wedeen, V. J., Meuli, R., & Thiran, J. P. (2007). Mapping human whole-brain structural networks with diffusion MRI. PLoS One, 2, e597.
Haier, R. J. (2017). The neuroscience of intelligence. Cambridge University Press.
Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic-rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217.
Hampshire, A., Highfield, R. R., Parkin, B. L., & Owen, A. M. (2012). Fractionating human intelligence. Neuron, 76(6), 1225–1237.
He, Y., Chen, Z. J., & Evans, A. C. (2007). Small-world anatomical networks in the human brain revealed by cortical thickness from MRI. Cerebral Cortex, 17(10), 2407–2419.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017a). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017b). Intelligence is associated with the modular structure of intrinsic brain networks. Scientific Reports, 7(1), 16088.
Hilger, K., Fukushima, M., Sporns, O., & Fiebach, C. J. (2020). Temporal stability of functional brain modules associated with human intelligence. Human Brain Mapping, 41(2), 362–372.
Jia, H., Hu, X., & Deshpande, G. (2014). Behavioral relevance of the dynamics of the functional brain connectome. Brain Connectivity, 4(9), 741–759.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154; discussion 154–187.
Kovacs, K., & Conway, A. R. A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27(3), 151–177.
Kruschwitz, J., Waller, L., Daedelow, L., Walter, H., & Veer, I. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the Human Connectome Project 1200 data set. Neuroimage, 171, 323–331.
Kucyi, A. (2018). Just a thought: How mind-wandering is represented in dynamic brain connectivity. Neuroimage, 180(Pt B), 505–514.
Langer, N., Pedroni, A., Gianotti, L. R., Hänggi, J., Knoch, D., & Jäncke, L. (2012). Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6), 1393–1406.
Langer, N., Pedroni, A., & Jäncke, L. (2013). The problem of thresholding in small-world network analysis. PLoS One, 8, e53199.
Latora, V., & Marchiori, M. (2001). Efficient behavior of small-world networks. Physical Review Letters, 87(19), 198701.
Liang, X., Zou, Q., He, Y., & Yang, Y. (2016). Topologically reorganized connectivity architecture of default-mode, executive-control, and salience networks across working memory task loads. Cerebral Cortex, 26(4), 1501–1511.
Mattar, M. G., Betzel, R. F., & Bassett, D. S. (2016). The flexible brain. Brain, 139(8), 2110–2112.
McGrew, K. S., & Wendling, B. J. (2010). Cattell-Horn-Carroll cognitive-achievement relations: What we have learned from the past 20 years of research. Psychology in the Schools, 47(7), 651–675.
Meunier, D., Lambiotte, R., & Bullmore, E. T. (2010). Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience, 4, 200.
Mill, R. D., Ito, T., & Cole, M. W. (2017). From connectome to cognition: The search for mechanism in human functional brain networks. Neuroimage, 160, 124–139.
Park, H. J., & Friston, K. (2013). Structural and functional brain networks: From connections to cognition. Science, 342(6158), 1238411.
Pascual-Leone, A., Amedi, A., Fregni, F., & Merabet, L. B. (2005). The plastic human brain cortex. Annual Review of Neuroscience, 28, 377–401.
Petermann, T., Thiagarajan, T. C., Lebedev, M. A., Nicolelis, M. A., Chialvo, D. R., & Plenz, D. (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences USA, 106(37), 15921–15926.
Posner, M. I., & Barbey, A. K. (2020). General intelligence in the age of neuroimaging. Trends in Neuroscience and Education, 18, 100126.
Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church, J. A., . . . Petersen, S. E. (2011). Functional network organization of the human brain. Neuron, 72(4), 665–678.
Power, J. D., & Petersen, S. E. (2013). Control-related systems in the human brain. Current Opinion in Neurobiology, 23(2), 223–228.
Power, J. D., Schlaggar, B. L., Lessov-Schlaggar, C. N., & Petersen, S. E. (2013). Evidence for hubs in human functional brain networks. Neuron, 79(4), 798–813.
Ramón y Cajal, S., Pasik, P., & Pasik, T. (1999). Texture of the nervous system of man and the vertebrates. Wien: Springer.
Robinson, P. A., Henderson, J. A., Matar, E., Riley, P., & Gray, R. T. (2009). Dynamical reconnection and stability constraints on cortical network architecture. Physical Review Letters, 103, 108104.
Santarnecchi, E., Galli, G., Polizzotto, N. R., Rossi, A., & Rossi, S. (2014). Efficiency of weak brain connections support general cognitive functioning. Human Brain Mapping, 35(9), 4566–4582.
Schneidman, E., Berry, M. J., 2nd, Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440, 1007–1012.
Schultz, D. H., & Cole, M. W. (2016). Higher intelligence is associated with less task-related brain network reconfiguration. Journal of Neuroscience, 36(33), 8551–8561.
Shine, J. M., Bissett, P. G., Bell, P. T., Koyejo, O., Balsters, J. H., Gorgolewski, K. J., . . . Poldrack, R. A. (2016). The dynamics of functional brain networks: Integrated network states during cognitive task performance. Neuron, 92(2), 544–554.
Simon, H. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106(6), 467–482.
Smith, S. M., Beckmann, C. F., Andersson, J., Auerbach, E. J., Bijsterbosch, J., Douaud, G., . . . WU-Minn HCP Consortium (2013). Resting-state fMRI in the Human Connectome Project. Neuroimage, 80, 144–168.
Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. Neuroimage, 41(3), 1168–1176.
Sporns, O., Chialvo, D. R., Kaiser, M., & Hilgetag, C. C. (2004). Organization, development and function of complex brain networks. Trends in Cognitive Sciences, 8(9), 418–425.
Sporns, O., Tononi, G., & Edelman, G. M. (2000a). Connectivity and complexity: The relationship between neuroanatomy and brain dynamics. Neural Networks, 13(8–9), 909–922.
Sporns, O., Tononi, G., & Edelman, G. M. (2000b). Theoretical neuroanatomy: Relating anatomical and functional connectivity in graphs and cortical connection matrices. Cerebral Cortex, 10(2), 127–141.
St Jacques, P. L., Kragel, P. A., & Rubin, D. C. (2011). Dynamic neural networks supporting memory retrieval. Neuroimage, 57(2), 608–616.
Stam, C. J. (2014). Modern network science of neurological disorders. Nature Reviews Neuroscience, 15, 683–695.
Stam, C. J., Jones, B. F., Nolte, G., Breakspear, M., & Scheltens, P. (2007). Small-world networks and functional connectivity in Alzheimer's disease. Cerebral Cortex, 17(1), 92–99.
van den Heuvel, M. P., Mandl, R. C., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Functionally linked resting-state networks reflect the underlying structural connectivity architecture of the human brain. Human Brain Mapping, 30(10), 3127–3141.
van den Heuvel, M. P., & Sporns, O. (2011). Rich-club organization of the human connectome. Journal of Neuroscience, 31(44), 15775–15786.
van den Heuvel, M. P., & Sporns, O. (2013). Network hubs in the human brain. Trends in Cognitive Sciences, 17(12), 683–696.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624.
van der Maas, H. L., Dolan, C. V., Grasman, R. P., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. E. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113(4), 842–861.
Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of "small-world" networks. Nature, 393, 440–442.
Wirth, M., Jann, K., Dierks, T., Federspiel, A., Wiest, R., & Horn, H. (2011). Semantic memory involvement in the default mode network: A functional neuroimaging study using independent component analysis. Neuroimage, 54(4), 3057–3066.
Xiao, L., Stephen, J. M., Wilson, T. W., Calhoun, V. D., & Wang, Y. P. (2019). Alternating diffusion map based fusion of multimodal brain connectivity networks for IQ prediction. IEEE Transactions on Biomedical Engineering, 66(8), 2140–2151.
Zalesky, A., Fornito, A., Cocchi, L., Gollo, L. L., & Breakspear, M. (2014). Time-resolved resting-state brain networks. Proceedings of the National Academy of Sciences USA, 111(28), 10341–10346.
Zhang, J., Cheng, W., Liu, Z., Zhang, K., Lei, X., Yao, Y., . . . Feng, J. (2016). Neural, electrophysiological and anatomical basis of brain-network variability and its characteristic changes in mental disorders. Brain, 139(8), 2307–2321.
Zuo, X. N., He, Y., Betzel, R. F., Colcombe, S., Sporns, O., & Milham, M. P. (2017). Human connectomics across the life span. Trends in Cognitive Sciences, 21(1), 32–45.
7 It's about Time: Towards a Longitudinal Cognitive Neuroscience of Intelligence
Rogier A. Kievit and Ivan L. Simpson-Kent
Introduction
The search for the biological properties that underlie intelligent behavior has held the scientific imagination at least since the pre-Socratic philosophers. Early hypotheses posited a crucial role for the heart (Aristotle; Gross, 1995), the ventricles (Galen; Rocca, 2009), and the "Heat, Moisture, and Driness" of the brain (Huarte, 1594). The advent of neuroimaging technologies such as EEG, MEG, and MRI has provided more suitable tools to scientifically study the relationship between mind and brain. To date, many hundreds of studies have examined the association between brain structure and function on the one hand and individual differences in general cognitive abilities on the other. Both qualitative and quantitative reviews have summarized the cross-sectional associations between intelligence and brain volume (Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015), as well as more network- and imaging-specific hypotheses that suggest a key role for the frontoparietal system in supporting individual differences in intelligence (Basten, Hilger, & Fiebach, 2015; Deary, Penke, & Johnson, 2010; Jung & Haier, 2007). These findings are bolstered by converging evidence from lesion studies (Barbey, Colom, Paul, & Grafman, 2014), from cognitive abilities in disorders associated with physiological abnormalities (Kail, 1998), and from the neural signatures associated with the rapid acquisition of new skills (Bengtsson et al., 2005). These innovations in neuroimaging coincided with the emergence of more dynamic, longitudinal models of the development of intelligence. Where seminal works on intelligence such as Spearman (1904) and Jensen's (1998) The g Factor: The Science of Mental Ability barely discuss developmental change, newer theories have begun to address the role of development in understanding intelligence conceptually and empirically.
For instance, theories such as that of Dickens and Flynn (2001) suggest direct, reciprocal interactions between intelligence, genetic predisposition, and the environment over the lifespan. This model, where genetic predispositions lead to people self-stratifying to environments in line with their abilities, leads to amplification of initial differences, thus reconciling previously puzzling facts about heritability and environmental influences. Later, inspired by ecological models of predator–prey relationships,
van der Maas et al. (2006) proposed the mutualism model, which suggests that general cognitive ability emerges, at least in part, from positive reciprocal influences between lower cognitive faculties. In other words, greater ability in one domain, such as vocabulary, may facilitate faster growth in others (memory, reasoning) through a range of mechanisms. Recent empirical studies in longitudinal samples (Ferrer & McArdle, 2004; Kievit, Hofman, & Nation, 2019; Kievit et al., 2017), as well as meta-analytic and narrative reviews (Peng & Kievit, 2020; Peng, Wang, Wang, & Lin, 2019), find support for the mutualism model, suggesting a key role for developmental dynamics in understanding cognitive ability. Converging evidence for the plausibility of such dynamic models comes from atypical populations. For instance, Ferrer, Shaywitz, Holahan, Marchione, and Shaywitz (2010) demonstrated that a subpopulation with dyslexia was characterized not (just) by differences in absolute reading performance, but by an absence of the positive, reciprocal effects between IQ and reading seen in typical controls. In other words, the atypical group is best understood as having atypical dynamic processes, which may ultimately manifest as cross-sectional differences. However, there is very little work at the intersection of these two innovative strands of intelligence research: how changes in brain structure and function go hand in hand with changes in the cognitive abilities associated with intelligence. This is unfortunate because, to truly understand the nature of the relationship between emerging brains and minds, there is no substitute for longitudinal data (Raz & Lindenberger, 2011), in which the same individuals undergo repeated sessions of neuroimaging as well as repeated assessments of higher cognitive abilities closely associated with intelligence.
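The mutualism dynamics described earlier can be made concrete with a small simulation in the spirit of van der Maas et al.'s (2006) coupled-growth equations. All parameter values below (growth rate, capacities, coupling strength) are illustrative assumptions, not estimates from any study discussed in this chapter.

```python
import random

def grow_abilities(coupling, n_abilities=3, steps=2000, dt=0.01, seed=1):
    """Euler-integrate mutualistic logistic growth in the spirit of
    van der Maas et al. (2006):
        dx_i/dt = a*x_i*(1 - x_i/K_i) + a*x_i*(sum_j M*x_j)/K_i,  j != i
    with a uniform coupling M between every pair of abilities."""
    rng = random.Random(seed)
    a = 0.5                                                  # growth rate (illustrative)
    K = [rng.uniform(1.0, 2.0) for _ in range(n_abilities)]  # limit capacities
    x = [0.05] * n_abilities                                 # small starting abilities
    for _ in range(steps):
        dx = []
        for i in range(n_abilities):
            logistic = a * x[i] * (1 - x[i] / K[i])
            mutual = a * x[i] * sum(coupling * x[j]
                                    for j in range(n_abilities) if j != i) / K[i]
            dx.append((logistic + mutual) * dt)
        x = [xi + d for xi, d in zip(x, dx)]
    return x, K

# Same capacities (same seed), with and without mutualistic coupling.
mutual_x, K = grow_abilities(coupling=0.05)
solo_x, _ = grow_abilities(coupling=0.0)
```

Without coupling, each ability converges to its own capacity; with positive coupling, every ability ends up above its solo trajectory, which is the amplification mechanism the mutualism model uses to explain the positive manifold.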
Towards a Dynamic Cognitive Neuroscience of Intelligence
Our goal in this chapter is to examine longitudinal studies of change in both cognitive ability and brain structure in childhood, adolescence, and early adulthood, when change in both domains is rapid. A similarly exciting question is neurocognitive aging at the other end of the lifespan; however, as that literature has recently been comprehensively reviewed (Oschwald et al., 2019), this chapter will focus on the period from early childhood to early adulthood. We will focus (with some partial exceptions) on studies that measure both cognitive ability and brain structure at two or more time points. Such studies are sufficiently rare that we can survey them here comprehensively. Although we focus on intelligence, we do not limit ourselves to studies that use IQ scores, but rather include studies that use continuous measures of cognitive ability canonically considered closely related to intelligence (e.g., as defined by high standardized factor loadings in a hierarchical factor model). These include measures such as (working) memory, fluid reasoning, vocabulary, and processing speed, all of which
generally show steep developmental increases, and as such are likely most sensitive to contemporaneous brain changes. To understand the unfolding dynamics between cognitive and neural change requires longitudinal data as well as longitudinal methodology. As outlined in Kievit et al. (2018) (see Figure 7.1), we can conceptualize the relationship between brain change and cognitive change in terms of three key parameters which capture causal hypotheses: Brain structure driving cognitive change, cognitive ability leading to brain reorganization, and correlated change. First, current brain structure may be found to govern the rate of change in cognitive performance. This is what we would refer to as “structural scaffolding.” According to this hypothesis, the current state of the brain (most commonly captured by structural measures, but trait-like functional measures may serve a similar conceptual purpose) provides, in some sense, the preconditions that facilitate cognitive growth. Specifically, we would expect that individuals with “better” neural characteristics (e.g., high volume, greater
Figure 7.1 Simplified bivariate latent change score model illustrating the co-development of intelligence scores (top) and brain measures (bottom) across two waves. For more details, see Kievit et al. (2018)
white matter integrity, etc.) would show faster rates of cognitive gain. This effect can be quantified by means of a coupling parameter in a latent change score (LCS) model, a regression that quantifies the association between the current state of brain structure and the rate of change (delta) of the cognitive domain of interest (red, upward arrow in Figure 7.1). Alternatively, current cognitive performance may be associated with the rate of change of structural (or functional) brain metrics. This could be conceptualized as cognitive plasticity, or reorganization. For instance, achieving a greater level of cognitive ability may lead to more rapid reorganization of cortical structure to engrain, or solidify, these newly acquired abilities. Recent mechanistic proposals (Wenger, Brozzoli, Lindenberger, & Lövdén, 2017) have shown how a cascade of glial changes, dendritic branching, and axonal sprouting following rapid skill acquisition leads to volume expansion, followed by a period of renormalization. Such effects can be captured by a coupling parameter in an LCS model (blue, downward arrow in Figure 7.1). Both of these parameters can be further expanded such that the recent rate of change in one domain governs future changes in another domain (Estrada, Ferrer, Román, Karama, & Colom, 2019; Grimm, An, McArdle, Zonderman, & Resnick, 2012). Finally, changes in brain structure and cognitive function may simply be correlated (yellow double-headed arrow in Figure 7.1). Although not all analytical approaches allow for the investigation of all these parameters, and the papers we discuss in Table 7.1 show considerable methodological heterogeneity, we find the analytical framework sketched in Figure 7.1 a fruitful way to frame how associations unfold over time. We will discuss recent findings, grouped by imaging measure, that bear on these questions, and discuss avenues for future work.
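To make the coupling parameters concrete, the sketch below generates two-wave data from a bivariate latent change score model of the kind shown in Figure 7.1, and shows how a positive brain-to-cognition coupling surfaces as a correlation between baseline brain structure and subsequent cognitive gain ("structural scaffolding"). All parameter values (e.g., a coupling of 0.4) are illustrative assumptions, not estimates from any study discussed here.

```python
import random

def simulate_lcs(n=5000, gamma_cb=0.4, gamma_bc=0.0,
                 beta_c=-0.2, beta_b=-0.2, seed=7):
    """Two-wave bivariate latent change score model (illustrative values):
        delta_cog   = beta_c * cog1   + gamma_cb * brain1 + noise
        delta_brain = beta_b * brain1 + gamma_bc * cog1   + noise
    gamma_cb > 0 encodes structural scaffolding: a better brain measure
    at wave 1 predicts a faster rate of cognitive gain."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        brain1 = rng.gauss(0, 1)
        cog1 = rng.gauss(0, 1)
        delta_cog = beta_c * cog1 + gamma_cb * brain1 + rng.gauss(0, 0.5)
        delta_brain = beta_b * brain1 + gamma_bc * cog1 + rng.gauss(0, 0.5)
        rows.append((brain1, cog1, brain1 + delta_brain, cog1 + delta_cog))
    return rows  # (brain wave 1, cog wave 1, brain wave 2, cog wave 2)

def corr(xs, ys):
    """Pearson correlation, written out for self-containedness."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

rows = simulate_lcs()
scaffolding = corr([r[0] for r in rows],           # brain at wave 1
                   [r[3] - r[1] for r in rows])    # cognitive change
```

Fitting such a model to real data would of course be done with SEM software rather than this generative sketch; the point is only that each causal hypothesis in Figure 7.1 corresponds to one estimable parameter.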
The empirical papers we will discuss in the following section are described in more detail (sample sizes, age range, measures of interest) in Table 7.1, and an at-a-glance overview of the age ranges and number of occasions is provided in Figure 7.2.
Grey Matter
One of the earliest papers, Sowell et al. (2004), focused on mapping cortical changes across two years in 45 children aged 5–11 years. Greater cortical thinning was associated with more rapid gains in vocabulary, especially in the left hemisphere. A considerably larger study by Shaw et al. (2006) grouped 307 children into three "strata" of intelligence (high, medium, and low) and observed a complex pattern: Children with higher cognitive ability showed especially pronounced changes, with early steep increases in volume followed by steeper rates of cortical thinning afterwards. Notably, this process induced different brain–intelligence associations across development, with negative associations between cortical thickness and intelligence at early ages but positive associations later on in development.
Table 7.1 An overview of longitudinal studies of brain structure, function, and intelligence. For each study we show the abbreviated reference and details about the cognitive and imaging measures used. If papers were ambiguous (e.g., only report SD instead of range), numbers reflect an informed estimate. Each entry lists: sample size per wave; mean interval between waves (years); cognitive test(s); imaging metric; age range (years).

Beckwith and Parmelee (1986): 53/53/53/53/49; ~1; Gesell Developmental Scale (4, 9, 24 mo), Stanford-Binet Intelligence Scale (age 5–8), WISC (age 8); electroencephalogram (EEG); 0–8.
Sowell et al. (2004): 45/45; ~2; WISC (vocabulary & block design); cortical thickness, brain volume; 5–11.
Shaw et al. (2006): 307/178/92; ~2; WPPSI-III, WISC-III, WAIS-III; cortical thickness; 4–25.
Brans et al. (2010): 242/183; 5; WAIS-III; brain volume; 20–40.
Ramsden et al. (2011): 33/33; ~3.5; WISC-III (wave 1), WAIS-III (wave 2); functional & structural MRI; 12–20.
Tamnes, Walhovd, Grydeland et al. (2013): 79; –; verbal working memory; grey matter volume; 8–22.
Burgaleta et al. (2014): 188/188; ~2; WASI; cortical thickness, cortical surface area; 6–22.
Evans et al. (2015): 43/43/12/7; ~1; WASI, WIAT-II, digit recall, block recall, count recall, backward digit recall; brain volume, resting-state connectivity (fMRI); 8–14.
Koenis et al. (2015): 162/162; ~3; WISC-III; fractional anisotropy, streamline count; 9–15.
Schnack et al. (2015): 504/504; ~4; WISC-III, short-form WAIS & WAIS-III-NL; cortical thickness, cortical surface area; 9–60.
Deoni et al. (2016): 257/126/39/15/4; ~0.75; Mullen Scales of Early Learning; DWI myelin water fraction (MWF); 0–5.
Wendelken et al. (2017): 523/223; ~1.5; WASI Matrix Reasoning; fractional anisotropy, functional connectivity (fMRI); 6–22.
Khundrakpam et al. (2017): 306 (up to 3 waves); 2; WASI; cortical thickness; 6–18.
Young et al. (2017): 75/39/18/29; ~5; WPPSI-III; fractional anisotropy; 0–29.
Román et al. (2018): 132/132/132; ~2; WASI; cortical thickness, cortical surface area; 6–21.
Tamnes et al. (2018): 237/224/217; ~2; WISC & WASI; brain volume; 8–29.
Ferrer (2018): 201/121/71; ~1.5; WISC-R, Woodcock-Johnson Test of Achievement (WJ-R); fractional anisotropy, white matter volume; 5–21.
Koenis et al. (2018): 310/255/130; ~3/~5; WISC-III, WAIS-III; fractional anisotropy; 9–23.
Jaekel et al. (2019): 401; ~1/5; K-ABC (ages 6–8) & WAIS (age 26); head circumference; 0–26.
Estrada et al. (2019): 430/430/430; ~2; WASI; cortical thickness, cortical surface area; 6–22.
Schmitt et al. (2019): 813 (up to 8 waves, 1,748 scans in total); ~3; WPPSI-III, WISC-R, WAIS; cortical thickness; 3–34.
Borchers et al. (2019): 37/37; 2; WASI-II, language: CELF-4/CTOPP/GORT-5; fractional anisotropy; 6–8.
Ritchie et al. (2019, under review): 2,091/1,423; ~5; CANTAB, WISC, educational polygenic score; cortical thickness, cortical volume, surface area; 14–19.
Hahn et al. (2019): 36/36; 6; WISC, WAIS; EEG sleep spindles; 8–18.
Qi et al. (2019): 55/52/51; ~1; sentence comprehension test (Test zum Satzverstehen von Kindern, TSVK); cortical thickness; 5–7.
Dai et al. (2019): 210/not known; unclear (likely similar to Deoni et al., 2016); Mullen Scales of Early Learning; DWI myelin water fraction (MWF); 0.1–4.
Selmeczy et al. (2020): 90/83/75; 0.75–3.7; item–context association memory; memory task-based fMRI response; 8–14.
Judd et al. (2020): 551/551; ~5; CANTAB; cortical thickness, cortical surface area; 14–19.
Madsen et al. (2020): 79/85/78/72/67/63/5/51/26; 0.5; stop signal reaction time (SSRT); fractional anisotropy, mean diffusivity, radial diffusivity; 7–19.
Figure 7.2 An overview of longitudinal studies of brain structure, function, and intelligence. For each study, organized chronologically, we show the age range (lines), number of waves (number of dots), and mean age at each wave (location of dots). Lines that extend beyond the graph rightwards have more waves beyond early adulthood. More details per study are shown in Table 7.1.
A more recent follow-up study attempted to tease apart cognitive specificity for such associations. Ramsden et al. (2011) examined longitudinal changes in verbal and non-verbal intelligence and their associations with grey matter in a relatively small sample (N = 33) of healthy adolescent participants. Correlating change scores across two measurements, three years apart, the authors showed that changes in verbal intelligence (VIQ) co-occurred alongside changes in grey matter density in a region of the left motor cortex previously linked to articulation of speech (Ramsden et al., 2011, p. 114). In contrast, changes in non-verbal intelligence were positively correlated with grey matter density in the anterior cerebellum, which has previously been implicated in hand motor movements. Although preliminary, this work suggests potential specificity in neurodevelopmental patterns. Burgaleta, Johnson, Waber, Colom, and Karama (2014) examined changes in intelligence test scores across two waves and observed correlated change between cortical thickness (especially in frontoparietal areas) and changes in intelligence. The pattern of results showed
that those with greatest gains in FSIQ showed less rapid cortical thinning than those with smaller gains, or decreases, in FSIQ. Similar results were not observed for cortical surface area, suggesting greater sensitivity of thickness to cognitive changes. Schnack et al. (2015) expanded beyond only children to a wider, lifespan sample (9–60 years), although weighted towards children and adolescents. As observed in Shaw et al. (2006), they observed a developmentally heterogeneous set of associations, with thinner cortices being associated with better cognitive performance at age 10 years, and high IQ children showing more rapid thinning. However, in adulthood this pattern reversed, such that greater cortical thickness in middle age is associated with higher intelligence, possibly due to slower lifespan thinning, further emphasizing the importance of a truly longitudinal, developmental perspective. The majority of work focuses on thickness, volume, and area of cortical regions. In contrast, Tamnes, Bos, van de Kamp, Peters, and Crone (2018) studied longitudinal changes in the hippocampus and its subregions in 237 individuals scanned up to three times. They observed cross-sectional correlations between intelligence and hippocampal subregions, but the only significant longitudinal associations were a positive association between the rates of increase in the molecular layer of the hippocampus and cognitive performance. Interestingly, such correlated changes may be specific to certain subtests of intelligence, even those considered quite central to cognitive ability such as working memory: Tamnes, Walhovd, Grydeland et al. (2013) showed that greater volume reductions in the frontal and rostral middle frontal gyri were associated with greater gains in working memory, even after adjusting for IQ. Although most studies focus on properties of brain structure directly (e.g., thickness, volume), an emerging subfield focuses on the covariance between regions instead. 
Khundrakpam et al. (2017) used structural covariance to characterize the network of anatomical associations between regions. In 306 subjects scanned up to three times, they observed greater cortical thickness, higher global efficiency, but lower local efficiency in individuals with higher IQ, especially in frontal and temporal gyri. Although the study was based on longitudinal data, rates of change themselves were not directly modeled.

Whereas most studies use summary metrics of intelligence or cognitive performance, authors increasingly implement full (longitudinal) latent variable models. The benefits of doing so are many, including increased power, the establishment of measurement invariance, which allows for unbiased interpretation of change over time (Widaman, Ferrer, & Conger, 2010), and greater flexibility regarding missing data. One example of this approach is Román et al. (2018), who studied longitudinal changes in a measurement-invariant g factor alongside changes in cortical thickness and surface area in a sample of 132 children and adolescents (6–21 years) from the NIH Pediatric MRI Data Repository (Evans & Brain Development Cooperative Group, 2006). A general intelligence factor was estimated at three time points with an
average interval of two years. Changes in g scores correlated with changes in cortical thickness as well as surface area (r = .30/.37), in all regions for cortical thickness but mainly fronto-temporal regions for surface area. Moreover, the trajectories of cortical thinning depended on cognitive ability: significant cortical thinning was apparent at ages 10–14 for individuals with lower g scores, whereas for those with higher g scores, cortical thinning only became apparent around age 17.

A follow-up study on the same sample by a similar team of authors extended the analysis to what is likely the most advanced psychometric analysis in this field to date (Estrada et al., 2019). Using latent change score models across three waves, the authors were able to tease apart lead-lag relations between intelligence and brain structure. Notably, and unlike any other paper to date, they used estimates of current rates of change (at wave 2) to predict future rates of change in the other domain (see also Grimm et al., 2012, for a technical overview). They observed a complex but fascinating pattern of results: although changes in cognitive ability or cortical structure were not predicted by the level in the other domain one time point before, the recent rate of change did predict future changes. In other words, individuals who showed less thinning and less surface loss showed greater gains in general intelligence in the subsequent period. In contrast, individuals who increased more in g during the previous period showed greater subsequent thinning, possibly reflecting reorganization following the cognitive skill gains. Across all analyses, surface area showed less pronounced effects, suggesting, as in other studies, that cortical thickness is a more sensitive measure than surface area.
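For readers unfamiliar with this class of models, a bivariate latent change score model of the kind used in these studies can be sketched as follows (a schematic of the standard specification; exact parameterizations differ across the papers discussed):

```latex
\begin{aligned}
\Delta y_{t} &= y_{t} - y_{t-1} = \alpha_{y}\,s_{y} + \beta_{y}\,y_{t-1} + \gamma_{yx}\,x_{t-1},\\
\Delta x_{t} &= x_{t} - x_{t-1} = \alpha_{x}\,s_{x} + \beta_{x}\,x_{t-1} + \gamma_{xy}\,y_{t-1},
\end{aligned}
```

where \(y\) and \(x\) might be, for instance, latent g and a brain structural measure, \(s_{y}\) and \(s_{x}\) are constant-change (slope) factors, the \(\beta\) terms are self-feedback parameters, and the \(\gamma\) coupling parameters test whether the level in one domain predicts subsequent change in the other. The change-to-change extension used by Estrada et al. (2019) additionally allows the rate of change over one interval (e.g., \(\Delta x\) between waves 1 and 2) to predict the rate of change in the other domain over the next interval (\(\Delta y\) between waves 2 and 3; see Grimm et al., 2012).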
These findings offer intriguing insights into the true intricacies of the unfolding development of cognition and brain structure, and suggest that even higher temporal resolution, as well as greater numbers of measurement occasions, are needed to truly capture these processes. A similarly psychometrically sophisticated study was conducted by Ritchie et al. (under review), who examined a large sample (N = 2,316) of adolescents (14–19 years) from the IMAGEN study, tested and scanned in two waves approximately five years apart. Ritchie et al. examined the relationship between (changes in) a broad general factor of ability (extracted as the first component of a battery of CANTAB tasks) and (changes in) a global (cortical) summary of grey matter, indexed by volume, thickness, and surface area. Using a latent change score modeling strategy, they observed a constellation of interesting patterns. Cross-sectionally, higher cognitive ability was correlated with higher cortical volume and larger surface area, with weaker results for cortical thickness. Those with higher baseline ability tended to show more rapid cortical thinning and volume loss, although this was a relatively small effect. In contrast with some other findings, baseline brain structure was not associated with rates of cognitive change over time.

Qi, Schaadt, and Friederici (2019) sought to investigate how cortical alteration during early development contributes to later language acquisition. In 56 children aged five to six years, they measured left and right hemispheric
cortical thickness and administered a sentence comprehension test (Test zum Satzverstehen von Kindern, TSVK) at two separate time points (mean interval: ~1 year). They then acquired TSVK scores a third time about a year later (age seven years) to estimate the effect of brain lateralization on language ability. In addition to evidence of early lateralization, they found that greater cortical thinning in the left, compared to the right, inferior frontal gyrus between ages five and six years was associated with greater language ability at age seven years. Lastly, and most relevant to this chapter, in the same subset of children they found that changes in cortical thickness asymmetry were positively correlated with changes in language performance at age seven years, such that children with a greater increase in lateralization between five and six years improved more on the language test than those with less asymmetry.

The majority of studies focus on children, usually from age seven-to-eight years onwards, likely for practical and logistical reasons. However, exceptions exist. Jaekel, Sorg, Baeuml, Bartmann, and Wolke (2019) studied the association between head growth and intelligence in 411 very preterm, preterm, and term born infants. Doing so, they observed that greater perinatal head size, as well as faster head growth (especially in the first 20 months), was associated with better cognitive performance, together explaining up to 70% of the variance in adult IQ. Other infant studies focusing on white matter will be discussed in the next section.
White Matter

Much of the initial longitudinal work on intelligence focused on grey matter structure, especially volume and thickness. In recent years, the technology to capture and quantify white matter microstructure, especially quantitative models, has made considerable strides. Most of these innovations have focused on diffusion-weighted imaging (DWI), which allows researchers to capture metrics such as fractional anisotropy (FA), mean diffusivity (MD), and myelin water fraction. Although the mapping from such measures to the underlying physiology, such as axonal width and myelination, remains far from perfect (Jones, Knösche, & Turner, 2013; Wandell, 2016), DWI measures have provided a range of new insights into the development of intelligence.

Ferrer (2018) examined developmental changes in fluid reasoning in a three-wave sample (N = 201) of children, adolescents, and young adults aged 5–21 years (earlier work by the same group relies on smaller subsets of the same sample; Ferrer et al., 2013). Fluid reasoning was assessed using Matrix Reasoning, Block Design, Concept Formation, and Analysis Synthesis. Notably, Ferrer both incorporated a latent variable of fluid reasoning and established measurement invariance across waves. Ferrer observed that greater global white matter volume (in mm³) and greater
white matter microstructure (indexed as fractional anisotropy) were associated with more rapid improvements in fluid reasoning. However, white matter was only incorporated at baseline, precluding examination of cognitive performance driving white matter change. In a small cohort of children scanned on two occasions (N = 37, age range: six-to-eight years, two-year interval between scans), Borchers et al. (2019) examined the influence of white matter microstructure (mean tract-FA) at age six years on subsequent reading ability (Oral Reading Index) at age eight years. Pre-literate reading ability was assessed at age six years (considered by the authors as the onset of learning to read), with concurrent estimates of mean tract-FA of tracts known to be involved in reading-related abilities. They found that reading ability at age eight years was predicted by mean tract-FA of the left inferior cerebellar peduncle as well as the left and right superior longitudinal fasciculus, even after controlling for pre-literacy skills at age six years and demographic indicators (family history and sex).

One of the most recent, and most remarkable, projects is the Danish HUBU study. In this study, 95 children, aged 7 to 13 years, were scanned up to 12 times, six months apart, using diffusion-weighted imaging, alongside longitudinal assessments of cognitive tasks. Although the rich longitudinal findings of this project are still emerging, a recent paper reports initial findings. Madsen et al. (2020) examined the relationship between (changes in) tract-average fractional anisotropy and (changes in) a stop-signal reaction time (SSRT) task that captures the efficiency of executive inhibition in 88 children measured up to nine times (mean: 6.6 occasions).
They observed that children with higher fractional anisotropy in the pre-SMA tended to have better baseline SSRT performance but a shallower slope of improvement, suggesting that children with better white matter microstructure in key regions more rapidly reach a performance plateau in terms of speed. Although the sample size of the HUBU study is moderate, its temporal richness is unique and likely to yield distinctive insights moving forward.

Moving beyond summary metrics, Koenis et al. (2015) examined within-subject white matter networks using mean FA as well as streamline count, and computed graph theory metrics such as global and local efficiency (both network metrics, which quantify the ease with which two nodes in a network can reach each other through edges, or “connections”) to characterize the nature of these networks. They demonstrated that children who made the greatest gains in the efficiency of their structural network (not to be confused with “neural efficiency” during task performance; see Neubauer & Fink, 2009) were those who made the greatest gains in intelligence test scores. In contrast, individuals who showed no change or a decrease in network efficiency showed a decrease in intelligence test scores. The strongest nodal associations were present for the orbitofrontal cortex and the anterior cingulum. The associations for streamline-based, rather than FA-based, networks were largely non-significant or marginal, and sometimes inconsistent with FA-based efficiency.
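As an aside for readers unfamiliar with these network metrics: global and local efficiency can be computed directly from a binarized structural connectivity matrix. A minimal sketch using the networkx library (the adjacency matrix below is random toy data standing in for a tractography-derived connectome, not real data):

```python
import networkx as nx
import numpy as np

# Toy "structural connectome": a symmetric binary adjacency matrix for
# 10 regions. In practice this would be derived from tractography
# (FA-weighted or streamline-count-based edges, then thresholded).
rng = np.random.default_rng(0)
a = rng.random((10, 10))
adj = ((a + a.T) / 2 > 0.5).astype(int)
np.fill_diagonal(adj, 0)  # no self-connections

G = nx.from_numpy_array(adj)

# Global efficiency: average inverse shortest-path length over node pairs.
e_glob = nx.global_efficiency(G)
# Local efficiency: mean global efficiency of each node's neighborhood.
e_loc = nx.local_efficiency(G)
print(e_glob, e_loc)
```

Both metrics lie between 0 and 1 for binary graphs; in the longitudinal studies above it is the change in such values, not their level at any one wave, that tracked gains in intelligence test scores.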
Two studies examined the role of white matter microstructure in infants. Young et al. (2017) examined white matter trajectories (as quantified by FA, MD, AD, and RD) in 75 very preterm neonates across up to four waves of diffusion-weighted imaging. Doing so, they observed that a slower decrease in mean diffusivity was associated with lower full-scale IQ scores later in life. Moreover, Deoni et al. (2016) studied the myelination profiles of 257 healthy developing children. A subset (N = 126) was scanned and tested at least twice, with further subsamples being scanned up to five times to derive advanced myelination metrics such as myelin water fraction (MWF). In all regions studied, children of above-average cognitive ability showed distinct myelination trajectories: higher intelligence test scores were associated with a longer initial lag and slower growth period, followed by a longer overall growth phase and faster secondary growth rates, yielding the most pronounced cross-sectional differences at age three years.

A follow-up study using the same sample (Dai et al., 2019) examined a different question, namely whether the contemporaneous correlations between myelin water fraction and cognitive ability varied longitudinally, using non-parametric models. Doing so, they observed complex, non-linear patterns in the association between MWF and cognitive ability during development, with a peak of association around one year of age. Although this approach precludes modeling coupling effects, it illustrates that time-varying associations between brain and behavior are found even in the absence of cohort or selection confounds.
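The tensor-derived metrics recurring throughout this section have simple closed forms given the three eigenvalues of the fitted diffusion tensor. A minimal sketch (the eigenvalues here are illustrative toy values, not empirical data):

```python
import numpy as np

def md_fa(evals):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from the
    three eigenvalues of a fitted diffusion tensor."""
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()  # MD: average diffusivity across the three axes
    # FA: normalized dispersion of the eigenvalues, bounded in [0, 1]
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Isotropic diffusion (e.g., CSF-like): FA is exactly 0
print(md_fa([1.0, 1.0, 1.0]))
# Strongly directional diffusion (coherent fiber bundle): FA close to 1
print(md_fa([1.7, 0.2, 0.2]))
```

Axial diffusivity (AD) is simply the largest eigenvalue and radial diffusivity (RD) the mean of the two smallest, which is why papers such as Young et al. (2017) can report all four metrics from one tensor fit.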
The Role of Genetics

As shown in the “White Matter” and “Grey Matter” sections, an emerging body of work suggests key roles for cortical and subcortical maturation in supporting changes in intelligence. However, these observations leave open the question of the etiology of these processes. Are they driven largely by concurrent changes and differences in the environment? Or do genetic differences underlie most or all of the observed processes? To address these questions, Schmitt et al. (2019) used a twin design to examine the associations between (changes in) cortical thickness and baseline intelligence, as well as the extent to which they share genetic similarity, in 813 typically developing children with up to eight scans. Using a twin analysis, the authors showed that the phenotypic covariance of IQ with both cortical thickness and cortical thickness change was effectively entirely genetic.

Koenis et al. (2018) showed (in the same sample as Koenis et al., 2015) age-dependent correlations between brain measures and IQ: weak or absent correlations early in childhood (r = 0 at age 10 years) became pronounced by middle to late adolescence (r = .23 at age 18 years). Notably, the previously steep changes in efficiency between ages 10 and 13 years had leveled off by age 18 years, suggesting a slowing down of cortical development. Finally, a unique twin
design allowed the authors to compute genetic correlations: the extent to which brain network changes and intelligence changes may share an underlying genetic origin. They observed a steady increase in this genetic correlation, suggesting that developmental trajectories continue to unfold, an interpretation in line, at least in spirit, with Dickens–Flynn-type models of development, in which genetic predispositions lead to an increasingly close alignment between those predispositions and the environment (Dickens & Flynn, 2001).

Brans et al. (2010) investigated changes in cortical thickness in 66 twin pairs (total N = 132) in late adolescence and adulthood (20–40 years) scanned on two occasions. They demonstrated, as in the studies above, that individuals with higher IQs (full-scale WAIS-III) showed greater thickening and less thinning than those with lower IQs. Notably, the genetic factors affecting current thickness differed significantly from those affecting the rate of thinning, suggesting distinct etiological pathways for the state of the brain and its developmental trajectory. The genetic analysis showed moderate to strong, regionally specific, genetic covariance between the rate of thickness change and the level of intelligence.

A recent study in the IMAGEN sample (Judd et al., 2020; N = 551, ages 14 and 19) examined the role of SES and a polygenic score for educational attainment (summarized across thousands of genetic loci) in the development of brain structure and working memory. The core analysis found global effects of SES on the brain, but more regionally specific effects of the polygenic score on individual differences in brain structure. For our purposes, the key finding came from the bivariate latent change score model, which showed that individuals with higher baseline working memory ability showed a stronger decrease in global surface area during adolescence.
In contrast, differences in baseline surface area at age 14 years were not associated with differences in the rate of working memory improvement, although the authors note that a potential performance ceiling effect limits strong conclusions. Polygenic scores did not differentially predict the rate of surface area change.
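For reference, the genetic correlations reported in these twin studies have a standard definition: in a bivariate ACE decomposition, the covariance between two phenotypes (or between a phenotype and its rate of change) is partitioned into additive genetic (A), shared environmental (C), and unique environmental (E) components, and the genetic correlation standardizes the additive genetic covariance (a schematic of the standard behavior-genetic formulation, not the exact parameterization of any one paper):

```latex
r_{g}(x, y) \;=\; \frac{\operatorname{cov}_{A}(x, y)}{\sqrt{\operatorname{var}_{A}(x)\,\operatorname{var}_{A}(y)}}
```

A high \(r_{g}\) alongside a modest phenotypic correlation thus indicates that the genetic influences on the two traits overlap substantially, even if each trait is also shaped by distinct environmental factors.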
Other Imaging Measures

Although the majority of work on the development of intelligence relies on structural and functional MRI, some exceptions exist. A very early study (Beckwith & Parmelee, 1986) examined sleep-related EEG markers and intelligence in 53 infants. Preterm infants who displayed a particular EEG pattern (“trace alternant”) during (transitional) sleep showed better intelligence scores at age 8 years. Hahn et al. (2019) studied longitudinal changes in sleep spindles based on polysomnography (EEG recording during sleep) in 34 children across a seven-year interval, quantifying slow and fast sleep spindle power. They observed that individuals with higher cognitive ability showed a greater increase in frontal slow spindle activity. Together
these two studies suggest that neural activity during sleep is associated with both the level and the change of intelligence. Selmeczy, Fandakova, Grimm, Bunge, and Ghetti (2020) used fMRI rather than EEG to study the interplay between (changes in) pubertal status, (changes in) fMRI hippocampal activation during an episodic memory task, and memory performance in three waves of 8-to-14-year-olds (maximum N = 90), using a mixed modeling approach. They examined cross-sectional associations between hippocampal activity patterns and memory performance. In addition to observing a U-shaped developmental pattern of hippocampal activity (illustrating the importance of longitudinal studies), they found that greater baseline task-responsivity in the hippocampus was associated with a more rapid increase in memory performance.

Although most studies focus on a single neuroimaging metric, more recent work has incorporated both functional and structural connectivity. Evans et al. (2015) examined 79 children (43 with scans) alongside changes in the Wechsler Abbreviated Scale of Intelligence, with a special focus on numerical operations. They showed that greater grey matter volume at baseline, especially in prefrontal regions, was associated with greater gains in numerical operations over time. In contrast, functional connectivity in, and between, the same regions identified in the grey matter volume analysis was not associated with greater gains in numerical abilities. Similarly, Wendelken et al. (2017) measured fluid intelligence as well as functional connectivity (FC) and structural connectivity (SC, defined as mean fractional anisotropy in tracts connecting key regions) and related these to changes in reasoning from childhood to early adulthood (age range: 6–22 years). Interestingly, this study incorporated pooled data from three datasets (some included in other papers reported here) with reasoning, FC, and/or SC data for at least two time points.
The aggregate sample consisted of 523 participants. Cross-sectional analysis revealed differential age relations between FC, SC, and reasoning. Specifically, SC was strongly (and positively) related to reasoning ability in children but not in adolescents or adults. In adolescents and adults, FC was positively associated with reasoning, but this effect was not found in children. Longitudinal analyses revealed that fronto-parietal (RLPFC–IPL) SC at one time point positively predicted subsequent RLPFC–IPL FC, but not vice versa. Moreover, in the youngest participants (children), SC was positively associated with change in reasoning. Together, these findings suggest that brain structure, especially white matter, may be a stronger determinant of longitudinal cognitive change than functional connectivity.
Aging

For a more complete understanding of lifespan trajectories of cognitive abilities and brain structure, we must study, compare, and contrast findings from both ends of the lifespan (Tamnes, Walhovd, Dale, et al.,
2013). In other words, it is crucial that we investigate not just how brain measures are associated with changes in intelligence in childhood, adolescence, and early adulthood, but also the mirroring patterns of decline in later life. A recent review (Oschwald et al., 2019) provides a comprehensive overview of truly longitudinal investigations of age-related decline in cognitive ability and concurrent changes in brain structure. We summarize here the key findings in the realm of intelligence specifically, but refer the reader to that resource for a more in-depth discussion of the key papers in this field. Oschwald et al. identified 31 papers with comprehensive assessments of cognitive performance as well as neural measurements on multiple occasions. The emerging findings across measures of grey matter volume, white matter volume, white matter microstructure (e.g., FA/MD), and more global measures of brain structure (e.g., intra-cranial volume, total brain volume, or head size) converged on a series of findings.

The most common pattern was that of correlated change, and the overwhelming majority of such findings (exceptions include Bender, Prindle, Brandmaier, & Raz, 2015) were as expected: more rapid grey matter atrophy, white matter volume decline, and/or white matter microstructure loss were associated with more rapid cognitive decline. A second pattern of findings centered on level-change associations between domains: to what extent does cognitive decline depend on the current state of brain anatomy, and vice versa? Although not all studies used methodology that allows for such conclusions, many observed a pattern consistent with the hypothesis we term structural scaffolding: the current state of the brain predicts the rate of cognitive decline more strongly than vice versa. One of the earliest papers to observe this was McArdle et al.
(2004), who demonstrated that larger ventricle size was associated with a more rapid rate of memory decline, above and beyond contemporaneous measures of age and memory score. Both these findings (correlated change and structural scaffolding) are, in turn, in line with the notion of brain maintenance (Nyberg, Lövdén, Riklund, Lindenberger, & Bäckman, 2012): the main way to maintain current cognitive performance, or to decelerate cognitive decline, is to maintain, to the greatest extent possible, the current state of the brain. Future work should move towards the formal integration of theories of neurocognitive development throughout the lifespan.
Summary

In this chapter, we provide an overview of studies that investigate co-occurring changes in intelligence, brain structure, and brain function from early childhood to early adulthood. From this literature, a few clear conclusions can be drawn. First and foremost, there is a profound scarcity of truly longitudinal work in this field. Regardless of one’s precise inclusion criteria,
there are currently more studies on fMRI in dogs (Thompkins, Deshpande, Waggoner, & Katz, 2016; and several studies since) than there are longitudinal investigations of changes in intelligence and brain structure in childhood. This is, perhaps, not entirely surprising: large, longitudinal studies are demanding in time, personnel, and resources. Combined with several challenges unique to this work, such as mid-study updates to MRI scanners, this makes the studies that do exist all the more impressive. However, there has been a recent and rapid increase in large longitudinal studies – as Table 7.1 shows, almost half of all studies discussed date from just the last three years, suggesting many more will emerge in the near future. Future work in (very) large samples, such as the Adolescent Brain Cognitive Development (ABCD) study (Volkow et al., 2018), is uniquely positioned to examine the replicability and consistency of the exciting but preliminary findings discussed in this chapter. Taken together, although not plentiful in number, the studies so far allow several conceptual and substantive conclusions.
Timing Matters

First and foremost, timing matters. All metrics used in the studies described in Table 7.1, from grey matter volume and thickness to structural and functional connectivity, change rapidly, non-linearly, and in complex ways. One key consequence of these changes is that the cross-sectional associations between brain measures and intelligence will be heavily dependent on the age and age distribution of the population being studied. As the studies described in Table 7.1 show, measures such as cortical thickness can show positive, absent, or negative correlations with intelligence during development, even in the same cohort, in children of different ages. This alone should send a clear message to developmental cognitive neuroscientists: age, as much as gender, country of origin, or SES, affects the nature of the associations we might observe.

In studies that incorporate quantitative parameters of change between brain and behavior, two findings emerge across multiple studies. The first is correlated change between brain structure and intelligence: given the same interval, individuals who demonstrate greater gains in intelligence often show more rapid changes in structural development as well. Multiple, complementary explanations for such observations exist. One is that changes in both domains are governed by some third variable, such as the manifestation of a complex pattern of gene expression at a particular maturational period. A methodological explanation of the same statistical effect is that the temporal resolution of a given study is not suitable to tease apart lead-lag distinctions that may in fact exist – although in reality cognitive ability may precede brain change or vice versa, the actual intervals used in studies (often multiple years) will obscure the fine-grained temporal unfolding. The second pattern, observed across multiple cohorts, is what we referred to as “structural
scaffolding.” This is the finding that current brain states (as indexed by measures of brain structure) tend to govern, statistically, the rate of change in cognitive performance. This pattern is observed in multiple studies (e.g., Ferrer, 2018; Wendelken et al., 2017; McArdle et al., 2004), and is consistently in the same direction: individuals with “better” brain structure generally show greater gains (in children) or shallower declines (in older individuals) than individuals with lower scores on structural brain metrics. This pattern is highly intriguing and worth further study, as it may have profound implications for how we think about the relationship between the brain and intelligence. However, this parameter can only be observed by studies that implement methodology (such as the latent change score model) that allows this pattern to be quantified, which illustrates the importance of suitable quantitative methodology.
Methods Matter

It is impossible to provide a precise quantitative meta-analysis of the findings in this field, as analytical choices govern which parameters of interest are estimated and reported. Different models may provide somewhat, or even very, different conclusions when applied to different, or even the same, datasets (Oschwald et al., 2019). There is no single solution to this challenge: particular methods have strengths and weaknesses, making them more or less suitable for the particular question being studied. At the same time, it seems clear that often (but not always) the choice of methodology is governed as much by conventions of the (sub)field and software limitations as by considered analytical choices. A broader awareness of the range of suitable methods would likely improve this state of affairs. A recent special issue (Pfeifer, Allen, Byrne, & Mills, 2018) brings together a wide range of innovations, perspectives, and methodological challenges, providing an excellent starting point for researchers looking to expand their horizons.

Another line of improvement would be the strengthening of the explanatory theoretical frameworks used to conceptualize the development of intelligence. Studies are now of sufficient complexity and richness that we can, and should, move beyond the reporting of simple bivariate associations and develop testable theoretical frameworks that bring together the disparate yet fascinating findings from cognitive development, genetics, grey and white matter, and brain function. Most crucially, such theories should be guided by the promise of translation into tractable quantitative models, such that others may build on, refine, or replace them moving forward. The longitudinal neuroscience of intelligence in childhood and adolescence is only in its infancy, yet many exciting discoveries have already been made.
Many more await a field willing and able to work collaboratively towards a cumulative developmental cognitive neuroscience of intelligence across the entire lifespan.
References

Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure & Function, 219(2), 485–494. doi: 10.1007/s00429-013-0512-z.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. doi: 10.1016/j.intell.2015.04.009.
Beckwith, L., & Parmelee, A. H. (1986). EEG patterns of preterm infants, home environment, and later IQ. Child Development, 57(3), 777–789. doi: 10.2307/1130354.
Bender, A. R., Prindle, J. J., Brandmaier, A. M., & Raz, N. (2015). White matter and memory in healthy adults: Coupled changes over two years. NeuroImage, 131, 193–204. doi: 10.1016/j.neuroimage.2015.10.085.
Bengtsson, S. L., Nagy, Z., Skare, S., Forsman, L., Forssberg, H., & Ullén, F. (2005). Extensive piano practicing has regionally specific effects on white matter development. Nature Neuroscience, 8(9), 1148–1150. doi: 10.1038/nn1516.
Borchers, L. R., Bruckert, L., Dodson, C. K., Travis, K. E., Marchman, V. A., Ben-Shachar, M., & Feldman, H. M. (2019). Microstructural properties of white matter pathways in relation to subsequent reading abilities in children: A longitudinal analysis. Brain Structure and Function, 224(2), 891–905.
Brans, R. G. H., Kahn, R. S., Schnack, H. G., van Baal, G. C. M., Posthuma, D., van Haren, N. E. M., . . . Pol, H. E. H. (2010). Brain plasticity and intellectual ability are influenced by shared genes. Journal of Neuroscience, 30(16), 5519–5524. doi: 10.1523/JNEUROSCI.5841-09.2010.
Burgaleta, M., Johnson, W., Waber, D. P., Colom, R., & Karama, S. (2014). Cognitive ability changes and dynamics of cortical thickness development in healthy children and adolescents. NeuroImage, 84, 810–819. doi: 10.1016/j.neuroimage.2013.09.038.
Dai, X., Hadjipantelis, P., Wang, J. L., Deoni, S. C., & Müller, H. G. (2019). Longitudinal associations between white matter maturation and cognitive development across early childhood. Human Brain Mapping, 40(14), 4130–4145.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211. doi: 10.1038/nrn2793.
Deoni, S. C. L., O’Muircheartaigh, J., Elison, J. T., Walker, L., Doernberg, E., Waskiewicz, N., . . . Jumbe, N. L. (2016). White matter maturation profiles through early childhood predict general cognitive ability. Brain Structure & Function, 221, 1189–1203. doi: 10.1007/s00429-014-0947-x.
Dickens, W. T., & Flynn, J. R. (2001). Heritability estimates versus large environmental effects: The IQ paradox resolved. Psychological Review, 108(2), 346–369. doi: 10.1037/0033-295X.
Estrada, E., Ferrer, E., Román, F. J., Karama, S., & Colom, R. (2019). Time-lagged associations between cognitive and cortical development from childhood to early adulthood. Developmental Psychology, 55(6), 1338–1352. doi: 10.1037/dev0000716.
r. a. kievit and i. l. simpson-kent
Evans, A. C., & Brain Development Cooperative Group. (2006). The NIH MRI study of normal brain development. NeuroImage, 30(1), 184–202.
Evans, T. M., Kochalka, J., Ngoon, T. J., Wu, S. S., Qin, S., Battista, C., & Menon, V. (2015). Brain structural integrity and intrinsic functional connectivity forecast 6 year longitudinal growth in children's numerical abilities. Journal of Neuroscience, 35(33), 11743–11750. doi: 10.1523/JNEUROSCI.0216-15.2015.
Ferrer, E. (2018). Discrete- and semi-continuous time latent change score models of fluid reasoning development from childhood to adolescence. In S. M. Boker, K. J. Grimm, & E. Ferrer (eds.), Longitudinal multivariate psychology (pp. 38–60). New York: Routledge.
Ferrer, E., & McArdle, J. J. (2004). An experimental analysis of dynamic hypotheses about cognitive abilities and achievement from childhood to early adulthood. Developmental Psychology, 40(6), 935–952.
Ferrer, E., Shaywitz, B. A., Holahan, J. M., Marchione, K., & Shaywitz, S. E. (2010). Uncoupling of reading and IQ over time: Empirical evidence for a definition of dyslexia. Psychological Science, 21(1), 93–101. doi: 10.1177/0956797609354084.
Ferrer, E., Whitaker, K. J., Steele, J. S., Green, C. T., Wendelken, C., & Bunge, S. A. (2013). White matter maturation supports the development of reasoning ability through its influence on processing speed. Developmental Science, 16(6), 941–951. doi: 10.1111/desc.12088.
Grimm, K. J., An, Y., McArdle, J. J., Zonderman, A. B., & Resnick, S. M. (2012). Recent changes leading to subsequent changes: Extensions of multivariate latent difference score models. Structural Equation Modeling: A Multidisciplinary Journal, 19(2), 268–292. doi: 10.1080/10705511.2012.659627.
Gross, C. (1995). Aristotle on the brain. The Neuroscientist, 1(4), 245–250. doi: 10.1177/107385849500100408.
Hahn, M., Joechner, A., Roell, J., Schabus, M., Heib, D. P., Gruber, G., . . . Hoedlmoser, K. (2019). Developmental changes of sleep spindles and their impact on sleep-dependent memory consolidation and general cognitive abilities: A longitudinal approach. Developmental Science, 22(1), e12706. doi: 10.1111/desc.12706.
Huarte, J. (1594). Examen de ingenios [The examination of men's wits]. Trans. M. Camillo Camilli and R. C. Esquire. London: Adam Islip, for C. Hunt of Excester.
Jaekel, J., Sorg, C., Baeuml, J., Bartmann, P., & Wolke, D. (2019). Head growth and intelligence from birth to adulthood in very preterm and term born individuals. Journal of the International Neuropsychological Society, 25(1), 48–56. doi: 10.1017/S135561771800084X.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Jones, D. K., Knösche, T. R., & Turner, R. (2013). White matter integrity, fiber count, and other fallacies: The do's and don'ts of diffusion MRI. NeuroImage, 73, 239–254. doi: 10.1016/j.neuroimage.2012.06.081.
Judd, N., Sauce, B., Wiedenhoeft, J., Tromp, J., Chaarani, B., Schliep, A., . . . Becker, A. (2020). Cognitive and brain development is independently influenced by socioeconomic status and polygenic scores for educational attainment. Proceedings of the National Academy of Sciences, 117(22), 12411–12418.
Towards a Longitudinal Cognitive Neuroscience of Intelligence
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135. doi: 10.1017/S0140525X07001185.
Kail, R. V. (1998). Speed of information processing in patients with multiple sclerosis. Journal of Clinical and Experimental Neuropsychology, 20(1), 98–106. doi: 10.1076/jcen.20.1.98.1483.
Khundrakpam, B. S., Lewis, J. D., Reid, A., Karama, S., Zhao, L., Chouinard-Decorte, F., & Evans, A. C. (2017). Imaging structural covariance in the development of intelligence. NeuroImage, 144, 227–240. doi: 10.1016/j.neuroimage.2016.08.041.
Kievit, R. A., Brandmaier, A. M., Ziegler, G., Van Harmelen, A. L., de Mooij, S. M., Moutoussis, M., . . . Lindenberger, U. (2018). Developmental cognitive neuroscience using latent change score models: A tutorial and applications. Developmental Cognitive Neuroscience, 33, 99–117.
Kievit, R. A., Hofman, A. D., & Nation, K. (2019). Mutualistic coupling between vocabulary and reasoning in young children: A replication and extension of the study by Kievit et al. (2017). Psychological Science, 30(8), 1245–1252. doi: 10.1177/0956797619841265.
Kievit, R. A., Lindenberger, U., Goodyer, I. M., Jones, P. B., Fonagy, P., Bullmore, E. T., . . . Dolan, R. J. (2017). Mutualistic coupling between vocabulary and reasoning supports cognitive development during late adolescence and early adulthood. Psychological Science, 28(10), 1419–1431.
Koenis, M. M. G., Brouwer, R. M., Swagerman, S. C., van Soelen, I. L. C., Boomsma, D. I., & Pol, H. E. H. (2018). Association between structural brain network efficiency and intelligence increases during adolescence. Human Brain Mapping, 39(2), 822–836. doi: 10.1002/hbm.23885.
Koenis, M. M. G., Brouwer, R. M., van den Heuvel, M. P., Mandl, R. C. W., van Soelen, I. L. C., Kahn, R. S., . . . Pol, H. E. H. (2015). Development of the brain's structural network efficiency in early adolescence: A longitudinal DTI twin study. Human Brain Mapping, 36(12), 4938–4953. doi: 10.1002/hbm.22988.
Madsen, K. S., Johansen, L. B., Thompson, W. K., Siebner, H. R., Jernigan, T. L., & Baare, W. F. (2020). Maturational trajectories of white matter microstructure underlying the right presupplementary motor area reflect individual improvements in motor response cancellation in children and adolescents. NeuroImage, 220, 117105.
McArdle, J. J., Hamgami, F., Jones, K., Jolesz, F., Kikinis, R., Spiro, A., & Albert, M. S. (2004). Structural modeling of dynamic changes in memory and brain structure using longitudinal data from the normative aging study. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 59(6), P294–304. doi: 10.1093/GERONB/59.6.P294.
Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency: Measures of brain activation versus measures of functional connectivity in the brain. Intelligence, 37(2), 223–229. doi: 10.1016/j.intell.2008.10.008.
Nyberg, L., Lövdén, M., Riklund, K., Lindenberger, U., & Bäckman, L. (2012). Memory aging and brain maintenance. Trends in Cognitive Sciences, 16(5), 292–305. doi: 10.1016/j.tics.2012.04.005.
Oschwald, J., Guye, S., Liem, F., Rast, P., Willis, S., Röcke, C., . . . Mérillat, S. (2019). Brain structure and cognitive ability in healthy aging: A review on longitudinal correlated change. Reviews in the Neurosciences, 31(1), 1–57. doi: 10.1515/revneuro-2018-0096.
Peng, P., & Kievit, R. A. (2020). The development of academic achievement and cognitive abilities: A bidirectional perspective. Child Development Perspectives, 14(1), 15–20. doi: 10.31219/osf.io/9u86q.
Peng, P., Wang, T., Wang, C., & Lin, X. (2019). A meta-analysis on the relation between fluid intelligence and reading/mathematics: Effects of tasks, age, and social economics status. Psychological Bulletin, 145(2), 189–236. doi: 10.1037/bul0000182.
Pfeifer, J. H., Allen, N. B., Byrne, M. L., & Mills, K. L. (2018). Modeling developmental change: Contemporary approaches to key methodological challenges in developmental neuroimaging. Developmental Cognitive Neuroscience, 33, 1–4. doi: 10.1016/j.dcn.2018.10.001.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews, 57, 411–432. doi: 10.1016/j.neubiorev.2015.09.017.
Qi, T., Schaadt, G., & Friederici, A. D. (2019). Cortical thickness lateralization and its relation to language abilities in children. Developmental Cognitive Neuroscience, 39, 100704.
Ramsden, S., Richardson, F. M., Josse, G., Thomas, M. S. C., Ellis, C., Shakeshaft, C., . . . Price, C. J. (2011). Verbal and non-verbal intelligence changes in the teenage brain. Nature, 479(7371), 113–116. doi: 10.1038/nature10514.
Raz, N., & Lindenberger, U. (2011). Only time will tell: Cross-sectional studies offer no solution to the age–brain–cognition triangle: Comment on Salthouse (2011). Psychological Bulletin, 137(5), 790–795. doi: 10.1037/a0024503.
Ritchie, S. J., Quinlan, E. B., Banaschewski, T., Bokde, A. L., Desrivieres, S., Flor, H., . . . Ittermann, B. (under review). Neuroimaging and genetic correlates of cognitive ability and cognitive development in adolescence. PsyArXiv, https://psyarxiv.com/8pwd6/
Rocca, J. (2009). Galen and the ventricular system. Journal of the History of the Neurosciences, 6(3), 227–239. Retrieved from https://www.tandfonline.com/doi/abs/10.1080/09647049709525710
Román, F. J., Morillo, D., Estrada, E., Escorial, S., Karama, S., & Colom, R. (2018). Brain-intelligence relationships across childhood and adolescence: A latent-variable approach. Intelligence, 68, 21–29. doi: 10.1016/j.intell.2018.02.006.
Schmitt, J. E., Raznahan, A., Clasen, L. S., Wallace, G. L., Pritikin, J. N., Lee, N. R., . . . Neale, M. C. (2019). The dynamic associations between cortical thickness and general intelligence are genetically mediated. Cerebral Cortex, 29(11). doi: 10.1093/cercor/bhz007.
Schnack, H. G., van Haren, N. E. M., Brouwer, R. M., Evans, A., Durston, S., Boomsma, D. I., . . . Hulshoff Pol, H. E. (2015). Changes in thickness and surface area of the human cortex and their relationship with intelligence. Cerebral Cortex, 25(6), 1608–1617. doi: 10.1093/cercor/bht357.
Selmeczy, D., Fandakova, Y., Grimm, K. J., Bunge, S. A., & Ghetti, S. (2019). Longitudinal trajectories of hippocampal and prefrontal contributions to episodic retrieval: Effects of age and puberty. Developmental Cognitive Neuroscience, 36, 100599.
Shaw, P., Greenstein, D., Lerch, J., Clasen, L., Lenroot, R., Gogtay, N., . . . Giedd, J. (2006). Intellectual ability and cortical development in children and adolescents. Nature, 440(7084), 676–679. doi: 10.1038/nature04513.
Sowell, E. R., Thompson, P. M., Leonard, C. M., Welcome, S. E., Kan, E., & Toga, A. W. (2004). Longitudinal mapping of cortical thickness and brain growth in normal children. Journal of Neuroscience, 24(38), 8223–8231. doi: 10.1523/JNEUROSCI.1798-04.2004.
Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201–292. doi: 10.2307/1412107.
Tamnes, C. K., Bos, M. G. N., van de Kamp, F. C., Peters, S., & Crone, E. A. (2018). Longitudinal development of hippocampal subregions from childhood to adulthood. Developmental Cognitive Neuroscience, 30, 212–222. doi: 10.1016/j.dcn.2018.03.009.
Tamnes, C. K., Walhovd, K. B., Dale, A. M., Østby, Y., Grydeland, H., Richardson, G., . . . Fjell, A. M. (2013). Brain development and aging: Overlapping and unique patterns of change. NeuroImage, 68, 63–74. doi: 10.1016/j.neuroimage.2012.11.039.
Tamnes, C. K., Walhovd, K. B., Grydeland, H., Holland, D., Østby, Y., Dale, A. M., & Fjell, A. M. (2013). Longitudinal working memory development is related to structural maturation of frontal and parietal cortices. Journal of Cognitive Neuroscience, 25(10), 1611–1623. doi: 10.1162/jocn_a_00434.
Thompkins, A. M., Deshpande, G., Waggoner, P., & Katz, J. S. (2016). Functional magnetic resonance imaging of the domestic dog: Research, methodology, and conceptual issues. Comparative Cognition & Behavior Reviews, 11, 63–82. doi: 10.3819/ccbr.2016.110004.
Van Der Maas, H. L., Dolan, C. V., Grasman, R. P., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. E. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113(4), 842.
Volkow, N. D., Koob, G. F., Croyle, R. T., Bianchi, D. W., Gordon, J. A., Koroshetz, W. J., . . . Weiss, S. R. B. (2018). The conception of the ABCD study: From substance use to a broad NIH collaboration. Developmental Cognitive Neuroscience, 32, 4–7. doi: 10.1016/j.dcn.2017.10.002.
Wandell, B. A. (2016). Clarifying human white matter. Annual Review of Neuroscience, 39(1), 103–128.
Wendelken, C., Ferrer, E., Ghetti, S., Bailey, S. K., Cutting, L., & Bunge, S. A. (2017). Frontoparietal structural connectivity in childhood predicts development of functional connectivity and reasoning ability: A large-scale longitudinal investigation. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 37(35), 8549–8558. doi: 10.1523/JNEUROSCI.3726-16.2017.
Wenger, E., Brozzoli, C., Lindenberger, U., & Lövdén, M. (2017). Expansion and renormalization of human brain structure during skill acquisition. Trends in Cognitive Sciences, 21(12), 930–939. doi: 10.1016/j.tics.2017.09.008.
Widaman, K. F., Ferrer, E., & Conger, R. D. (2010). Factorial invariance within longitudinal structural equation models: Measuring the same construct across time. Child Development Perspectives, 4(1), 10–18. doi: 10.1111/j.1750-8606.2009.00110.x.
Young, J. M., Morgan, B. R., Whyte, H. E. A., Lee, W., Smith, M. L., Raybaud, C., . . . Taylor, M. J. (2017). Longitudinal study of white matter development and outcomes in children born very preterm. Cerebral Cortex, 27(8), 4094–4105. doi: 10.1093/cercor/bhw221.
8 A Lifespan Perspective on the Cognitive Neuroscience of Intelligence

Joseph P. Hennessee and Denise C. Park

Human intelligence is a multifaceted construct that has been defined in many different ways. In the present chapter, we consider intelligence to be a stable behavioral index that provides important predictive value for many complex and adaptive behaviors in life that require problem-solving and integration of multiple cognitive operations. The index is typically developed from measures of basic core cognitive abilities that include fluid reasoning, processing speed, working memory, episodic memory, and verbal ability.

Whether intelligence changes with advancing age turns out to be a question that is surprisingly difficult to answer. On one hand, both cross-sectional and longitudinal studies provide evidence that adult intelligence is characterized by predictable normative developmental change: a profile of decline across the adult lifespan on the core cognitive measures that comprise intelligence (Park et al., 2002; Salthouse, 2016). On the other hand, although a very long life guarantees some decline in these core measures, the age at which an individual begins to show decline, and how fast they decline, is quite variable (e.g., Salthouse, 2016). Adding to the puzzle of how intelligence is affected by aging, there is also astonishing evidence that intelligence within a given person is highly stable across the individual's lifespan. The Lothian Cohort Study reported that roughly 45% of the variance in intelligence at age 90 was accounted for by that individual's level of intelligence at age 11 (Deary, Pattie, & Starr, 2013). The Vietnam Era Twin Study of Aging recently observed a correlation of similar magnitude between intelligence scores taken at age 20 and those taken at age 62 (Kremen et al., 2019).
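One way to read these stability figures: for a single predictor, variance explained equals the squared correlation, so "45% of variance accounted for" implies a test–retest correlation of about √0.45 ≈ 0.67. A minimal simulation of that relationship (all parameters illustrative, not the actual cohort data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical sample, far larger than the real cohorts

# Generate "age-11" scores, then "age-90" scores sharing 45% of their variance.
r = np.sqrt(0.45)  # correlation implied by 45% variance explained
iq_11 = rng.standard_normal(n)
iq_90 = r * iq_11 + np.sqrt(1 - r**2) * rng.standard_normal(n)

obs_r = np.corrcoef(iq_11, iq_90)[0, 1]
var_explained = obs_r**2  # close to 0.45, i.e., obs_r close to 0.67
```

The same arithmetic works in reverse: a reported longitudinal correlation of about 0.67 and "45% of variance" are two statements of one result.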
In the present chapter, we use a cognitive aging framework to connect age-related differences in brain structure and function to the measures that comprise intelligence. We then characterize the brain mechanisms that underlie the classic profile of age-related change in human intellectual function. We focus on the importance of the distinction between fluid and crystallized intelligence for understanding aging, and discuss methodological issues that limit a full understanding of the lifespan trajectory of intelligence. Then, we turn our attention to individual differences. Finally, we consider whether "brain training" and other experiences can induce reliable improvements in
brain structures and functions underlying intelligence, and whether they open a door to maintaining or even improving intellectual ability with age.
Intelligence and the Cognitive Neuroscience of Aging

There is a wealth of evidence suggesting that change in the core measures of cognition is a universal aspect of human aging. These measures, on average, follow a rather prescribed trend, with most aspects of performance peaking in early adulthood (~20–30) and declining in later life (~50+) (Anstey, Sargent-Cox, Garde, Cherbuin, & Butterworth, 2014; Salthouse, 2016). Core functions that are most impacted by aging include the speed at which we process incoming information, episodic retrieval, working memory, and fluid reasoning, which describes our ability to solve novel problems. However, the above conclusions are all based on group means. Recent work examining individual differences in trajectories of aging highlights that, particularly in old age, cognitive function is highly variable, and some individuals, labeled "super agers," maintain strong cognition into their later years (for a review, see Nyberg & Pudas, 2019). Although individuals vary considerably in the age at which decline occurs, as well as in the rate of decline, some decrease in these functions is a universal signature of aging, as the dynamic interaction among biological systems in an ever-changing environment makes some change inevitable.
Patterns of Brain Aging

Healthy cognitive function relies on effective neural structure and function; thus, it is unsurprising that age-related cognitive declines coincide with profound changes in the brain (e.g., Hedden et al., 2016; MacPherson et al., 2017). Neuroimaging research has consistently shown that some brain structures shrink with age, with the most affected structures being the prefrontal cortex (PFC) and the medial temporal lobe (Pacheco, Goh, Kraut, Ferrucci, & Resnick, 2015; Raz, Ghisletta, Rodrigue, Kennedy, & Lindenberger, 2010; Storsve et al., 2014). There is a wealth of data showing that volumetric declines are related to selective decreases in component abilities comprising intelligence. For example, individual differences in hippocampal volume are predictive of memory function at every age (e.g., Harrison, Maass, Baker, & Jagust, 2018) and differences in thickness of lateral PFC and the parietal cortex are associated with working memory capacity (Østby, Tamnes, Fjell, & Walhovd, 2011), processing speed (MacPherson et al., 2017), and fluid reasoning (Yuan, Voelkle, & Raz, 2018). In each of these cases, greater volume is associated with higher intelligence. There are also changes in the integrity of the brain's white matter with age, evidenced by decreasing density and increasing porosity of the white matter as measured by diffusion tensor imaging (DTI).
This results in a “disconnection syndrome” (O’Sullivan et al., 2001), which slows or even limits transmission of neural signal from the white matter to the cerebral cortex. Functional magnetic resonance imaging (fMRI) data suggest that brain activity also changes markedly with age. There are three commonly observed patterns of activation that are hallmarks of the aging brain. First, numerous studies have consistently shown that older adults show heightened activation in both the right and left dorsal lateral PFC on cognitive tasks where young adults are primarily left-lateralized. There is considerable evidence that this increased functional activity in fronto-parietal regions is used to meet cognitive task demands in a compensatory fashion (Batista et al., 2019; Huang, Polk, Goh, & Park, 2012; Park & Reuter-Lorenz, 2009; Reuter-Lorenz & Cappell, 2008; Rieck, Rodrigue, Boylan, & Kennedy, 2017; Scheller et al., 2018). Other activation patterns characteristic of older adults are less adaptive and are evidence of degradation of healthy functional brain activity. For example, cognitive control tasks such as working memory or encoding tasks typically require activation of the fronto-parietal network and suppression of activity in the default network (brain regions that are associated with relaxation and daydreaming). Older adults consistently show an inability to suppress brain activity in the regions that comprise the default network and have more difficulty directing activation of the cognitive control network (e.g., Turner & Spreng, 2015). Another major difference in activation patterns between young and old is in the display of specific neural signatures to categories such as faces and places. 
Older adults show a less selective or “dedifferentiated” response in the ventral-visual cortex to category-specific regions, such as the fusiform gyrus, which activates in a highly selective manner to faces in young, but less so in old (Bernard & Seidler, 2012; Carp, Park, Hebrank, Park, & Polk, 2011; Park et al., 2004; Voss et al., 2008). Similarly, there are pronounced differences in how segregated specialized brain networks are from one another with age. Young adults show a high level of specificity and modularity in activation of functional brain networks with limited connectivity between networks. In contrast, with age, individual connections within networks are sparse, with greater connectivity between networks, which results in a more generalized neural signature to a range of tasks and is associated with decreased memory (Chan, Park, Savalia, Petersen, & Wig, 2014). Taken together, changes in neural structure and function with aging are consistent with the changing cognitive landscape that comes with age.
Aging and Theories of Human Intelligence

Theories of what intelligence is and how it is best measured underwent numerous changes throughout the twentieth century. Pioneering work by
Spearman (1904) demonstrated that, although cognitive tests are designed to tap into seemingly distinct cognitive processes, these measures tend to share a surprisingly large amount of variance. More recent estimates suggest that in a typical cognitive battery with at least 10 tests, around 40–50% of the variance generally overlaps (Carretta & Ree, 1995; Deary, Penke, & Johnson, 2010). Spearman considered this shared variance a proxy for general intelligence, labeled simply g, that is flexibly recruited across a wide range of situations and tasks. Developments in modern computing and factor analysis drove further investigation of the structure of intelligence in the second half of the twentieth century. Perhaps most notably, Cattell discovered that two independent factors described intelligence better than a single g factor: fluid intelligence and crystallized intelligence. The Gf-Gc theory of fluid and crystallized intelligence (Cattell, 1941) distinguishes one's ability to quickly and efficiently manipulate information to solve abstract, novel problems (fluid intelligence) from crystallized intelligence, a measure of one's accumulated knowledge and expertise. Fluid intelligence is quite similar to the construct of g and is traditionally derived from tasks that assess reasoning skills, with core cognitive measures such as processing speed and working memory underlying fluid intelligence, as they are fundamental to problem-solving (e.g., Kim & Park, 2018). In contrast, high crystallized intelligence is considered to be a result of enriching life experiences such as level of education, job complexity, and social and intellectual engagement throughout life. Crystallized intelligence is most commonly assessed using measures of vocabulary, with the assumption that vocabulary provides a good overall estimate of knowledge that may have been derived from enriching life experiences.
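The shared-variance logic behind g can be sketched in a toy simulation: give every simulated test a common dependence on one latent factor, and the first principal component of the resulting correlation matrix captures roughly the 40–50% overlap described above. The battery size and loading here are illustrative assumptions, not estimates from any real test battery:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_tests = 5_000, 10  # hypothetical battery of 10 tests

# Each simulated test loads 0.65 on a single latent factor g;
# the remaining variance is test-specific noise.
g = rng.standard_normal(n_people)
loading = 0.65
scores = (loading * g[:, None]
          + np.sqrt(1 - loading**2) * rng.standard_normal((n_people, n_tests)))

# The largest eigenvalue of the correlation matrix gives the variance
# captured by the first principal component, a rough stand-in for g.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)   # eigenvalues in ascending order
g_share = eigvals[-1] / eigvals.sum()  # fraction of total variance on g
```

With a loading of 0.65, each test's communality is about 0.42, so the first component accounts for roughly 45–50% of the total variance, in line with the overlap reported for real batteries; real analyses use factor models rather than raw principal components, but the intuition is the same.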
Importantly, Cattell and Horn noted that deficits in fluid intelligence were widely observed with increased age, but crystallized intelligence was preserved with aging (Horn & Cattell, 1967), and might even show improvement with age, as shown in Figure 8.1 (Park et al., 2002). This pattern of findings is supported by longitudinal data, but with the caveat that after around age 80, crystallized intelligence also declines (Salthouse, 2014a). There is some evidence that a late life decrease in crystallized intelligence may be evidence of latent pathology. Because crystallized intelligence is preserved well into late adulthood, it can be used to estimate overall intelligence at younger ages and, in fact, a large disparity between fluid and crystallized abilities in older adults has been related to higher levels of amyloid plaque (a protein associated with Alzheimer’s disease) deposited on the brain (McDonough et al., 2016). The distinction between fluid and crystallized intelligence has played an important role in evolving theories of neurocognitive aging. Most neuroimaging work on aging has focused on the basic component tasks that comprise fluid intelligence (i.e., reasoning, processing speed, and working memory). These studies suggest that fluid intelligence is supported by a predominantly frontoparietal network across tasks, as described in the Parieto-Frontal Integration
Figure 8.1 Lifespan performance measures. Cross-sectional cognitive performance for each construct (Z-scores) examined at each age decade from 20 to 80. Adapted from D. C. Park and G. N. Bischof (2013), The aging mind: Neuroplasticity in response to cognitive training. Dialogues in Clinical Neuroscience, 15(1), p. 111. Copyright 2013 by LLS and licensed under CC BY-NC-ND 3.0
Theory (P-FIT; for a review, see Jung & Haier, 2007). Congruent with P-FIT, there is a wealth of lesion data showing that damage to the fronto-parietal network results in deficits on fluid tasks (Barbey, Colom, Paul, & Grafman, 2014; Roca et al., 2010). The neural underpinnings of crystallized intelligence are more poorly understood, largely because of the difficulty of measuring the type and complexity of human experiences that occur across a lifetime and contribute to crystallized intelligence. There is some evidence that better grey matter structure – measured using grey matter thickness, volume, and surface area – in the inferior and middle frontal gyri is particularly important for crystallized intelligence (Colom et al., 2013), as the inferior frontal gyrus (esp. Broca's area) plays a critical role in both verbal comprehension (Gläscher et al., 2009) and semantic retrieval (Binder, Desai, Graves, & Conant, 2009). Much more work is needed in this area.
Maintaining Intellectual Function with Declining Brain Integrity

Just as there is considerable age-related variation in core behavioral measures of intelligence, there are also substantial individual differences in the
Figure 8.2 A conceptual model of the scaffolding theory of aging and cognition-revisited (STAC-r). Adapted from P. A. Reuter-Lorenz and D. C. Park (2014), How does it STAC up? Revisiting the scaffolding theory of aging and cognition. Neuropsychology Review, 24(3), p. 360. Copyright 2014 by The Authors
amount of brain degradation that older adults evidence. There is pervasive and puzzling evidence from neuroimaging research on aging and cognition that the magnitude of brain degradation observed does not always result in degraded cognition. For example, older adults with significantly degraded fronto-parietal structure may nevertheless perform very well on tasks requiring fronto-parietal resources. The Scaffolding Theory of Aging and Cognition (Reuter-Lorenz & Park, 2014; STAC, Park & Reuter-Lorenz, 2009) provides a theoretical account of how individuals can “outperform” the apparent capacity of their brains, as shown in Figure 8.2. The model proposes that biological aging combined with life experiences that deplete or enrich the brain predict brain structure and function, which in turn predicts both the absolute level as well as rate of decline in cognitive function. Importantly, increased activation of brain resources (mainly from fronto-parietal activity) may provide some compensation that offsets the effects of brain degradation and maintains cognitive performance. Thus, good cognitive performance is predicted to occur in adults who maintain youthful brains and have not manifested brain degradation, as well as in older adults who compensate for degradation by increased brain activity. Another way to account for cognitive performance that appears to exceed observable brain integrity is to conceptualize the existence of an additional pool of resources, typically referred to as a “reserve,” that can be drawn upon to
maintain intellectual function as the brain degrades. There are multiple versions of theories of reserve (e.g., Satz, 1993; Stern, 2002; Stern, Arenaza-Urquijo, et al., 2018), but all are highly intertwined with the notion that there are factors that make one resilient to the structural and functional brain insults that occur with age, thus delaying intellectual decline. The original theory of brain reserve was used to describe why patients with similar levels of Alzheimer's disease pathology (e.g., amyloid burden) or similar magnitudes of brain injury due to stroke can have vastly different cognitive outcomes. According to this theory, there may be a threshold amount of neural resources (i.e., number of neurons) needed to perform a given cognitive function, so those with greater neurological capital can afford to sustain considerable brain degradation and still maintain performance. It is also suggested that enriching and novel activities that involve cognitive challenge can enhance reserve, which has often been assessed using proxy measures such as educational attainment or occupational complexity. Stern, Gazes, Razlighi, Steffener, and Habeck (2018) attempted in a lifespan fMRI study (ages 20–80, N = 255) to identify a task-invariant cognitive reserve network. More specifically, they examined neural regions that shared activation across 12 cognitive tasks, and whose activation was correlated with National Adult Reading Test IQ, their proxy for cognitive reserve. The resulting cognitive reserve network included large portions of the fronto-parietal network, along with motor and visual areas. There is convincing evidence that well-educated individuals and those high in crystallized intelligence during late adulthood show some resilience to cognitive decline, but further work is needed to determine the locus of reserve.
Moreover, it is not at all clear how cognitive reserve differs conceptually from high measures of fluid and crystallized intelligence (Park, 2019).
Methodological Challenges Associated with Understanding Intelligence Across the Lifespan

It is important to recognize that research on lifespan changes in intelligence is inherently flawed, and that there is an uncertainty factor associated with almost any conclusion about intelligence and adult aging. Most research on lifespan aging examines cross-sectional differences between younger college students and older adults (usually age 60 and older), but interpretation of findings is confounded by cohort effects that result from the differential lifetime environments experienced when people of different ages are tested during the same period of time. For example, if younger adults show better eye–hand coordination than older adults, is it due to age differences or to the fact that young adults spent many more hours playing video games in their adolescence than older adults did? Adding to the issue, Flynn (1984, 1987) has shown that, across the past century, scores on intelligence tests have been rapidly rising. In the United States, this gain is estimated to be three IQ points per decade (Flynn,
j. p. hennessee and d. c. park
1984), which has been further supported by a meta-analysis of 271 international datasets from 1909–2013 (Pietschnig & Voracek, 2015). Factors driving the Flynn effect likely include improvements in education, increased use of technology, and reductions in family size (for a review, see Pietschnig & Voracek, 2015). We note that the Flynn effect for crystallized intelligence is much smaller than the effect for fluid intelligence (Flynn, 1987; Pietschnig & Voracek, 2015); this difference likely contributes to the different lifespan trajectories of these constructs when assessed in cross-sectional research. In sum, cross-sectional research makes it difficult to determine how much of the change in intelligence across the lifespan is truly due to aging as opposed to cohort differences. It would seem that studying longitudinal change within individuals over time would greatly diminish the influence of cohort effects. Indeed, it does, but unfortunately new limitations surface. Diminishing sample sizes over time are compounded by non-random participant attrition and practice effects. A dropout rate of 25–40% is common in a basic 3–4-year longitudinal study (Salthouse, 2014b). Because participants who drop out often differ substantially from those who remain, longitudinal studies tend to present an unduly optimistic picture of changes in cognition with age, as poorer-performing participants are the most likely to drop out. One solution is to model the impact of participant attrition on key findings (e.g., Hu & Sale, 2003). Longitudinal studies of intelligence are further confounded by practice effects, such that performance often improves over time.
In a meta-analysis of 50 studies on practice effects, Hausknecht, Halpert, Di Paolo, and Moriarty Gerrard (2007) estimated that participants performed approximately 0.25 standard deviations better the second time they completed a cognitive test. Practice effects vary considerably across different cognitive tasks (e.g., verbal ability; Hausknecht et al., 2007), and extended experience produces changes in task-related brain activation and in the organization of functional networks, as these are honed with practice (see Buschkuehl, Jaeggi, & Jonides, 2012, for a review). Of particular concern are findings suggesting that practice effects are more prevalent in the young: adults below age 40 showed improved performance at a second test, whereas adults over 60 failed to show improvement. Thus, longitudinal declines in later-life intelligence may be underestimated due to practice. Some strategies for minimizing, though not eliminating, these problems include: (1) running large cross-sectional studies that include equal representation of all ages, allowing for a more finely graded analysis of cohort effects than a young–old comparison; (2) conducting a 10- or 20-year longitudinal study of a specific lifespan phase that would be feasible within a single investigator's career, showing reproducibility of results across multiple samples and across studies conducted both cross-sectionally and longitudinally; and (3) in longitudinal studies, testing subjects twice at the beginning of the study over a period of
Lifespan Perspective on Cognitive Neuroscience
weeks or a few months, to assess and control for the magnitude of practice effects in younger compared with older adults.
Practical Implications: Can We Modify Intelligence and Combat Age-Related Decline?

At least as far back as Francis Galton (1869), many have believed that intelligence is strictly inherited and unmodifiable, though there is much evidence to the contrary. Estimates of the heritability of intelligence are around 30% in childhood but climb to as much as 70–80% by old age (Deary, Penke, & Johnson, 2010), based on evidence from the longitudinal Lothian Birth Cohort of 1921 (Deary, Pattie, & Starr, 2013). These data suggest that one's relative place "in line" with respect to peers is largely constant – that is, a high performer in childhood will tend to maintain high intelligence in old age. The Lothian Birth Cohort study also reported considerable decline in intelligence from age 79 to 90, consistent with cross-sectional examinations of crystallized intelligence. Overall, these important results provide convincing evidence that the impact of experience is relatively small with age, suggesting that it may be difficult to maintain or enhance cognition in late adulthood through experience alone. Nevertheless, the central goal of applied aging research has been to determine whether we can slow or even reverse age-related intellectual decline, either through lifestyle choices (e.g., exercise and mental engagement) or scientifically designed cognitive training programs. Initially, research on the efficacy of cognitive training produced mixed results, as improvements observed on trained tasks rarely transferred to related tasks (Redick et al., 2013; Simons et al., 2016); however, recent research looks somewhat more promising (Au et al., 2015). The N-back working memory (WM) task has been one of the most frequently used training tasks. In N-back training, participants are presented with a series of digits or letters and must indicate whether the current item matches the one presented N positions earlier.
Training programs usually require participants to perform this task for several hours daily over a period of 2–4 weeks, with a cognitive battery completed at the beginning and end of training. With practice, participants are typically able to hold and access more stimuli in working memory. The focus on WM training stems largely from the hypothesis that the ability to store, access, and update information in WM is fundamental to many intellectual activities (Martínez et al., 2011). In a meta-analysis of 20 studies of young to middle-aged adults, Au et al. (2015) found that WM training was significantly associated with improvements in fluid intelligence, equivalent to a gain of 3–4 IQ points. Comparable improvements were seen whether training was done at home or in a lab, which is practically important for older populations with limited mobility and those living far from research centers. In a recent meta-analysis of
251 studies of older adults (Basak, Qin, & O'Connell, 2020), cognitive training was found to benefit older adults as well, particularly on tasks similar to the trained cognitive ability ("near transfer"), although benefits on unrelated tasks ("far transfer") were much less common. Moreover, improvements for healthy older adults and for those diagnosed with mild cognitive impairment were comparable in size, suggesting that training can still improve function in a group that is at risk for developing dementia. Additionally, programs that target multiple aspects of cognition appear to be most effective, as they have been shown to produce both near and far transfer effects, as well as measurable improvement in everyday function. Training may thus support strong intellectual ability in later life, as well as improve older adults' ability to engage independently in everyday activities, such as balancing one's checkbook. Research has also focused on how intellectual gains from cognitive training are manifested in the brain. After WM training, reduced activation on the N-back task is observed in regions such as the prefrontal and parietal cortex (Buschkuehl et al., 2012), which are involved in goal-directed attention and fluid intelligence (Corbetta & Shulman, 2002; Jung & Haier, 2007). These activation reductions can last at least five weeks after WM training (Miró-Padilla et al., 2019), and Buschkuehl et al. (2012) proposed that such changes reflect increased neural efficiency, as participants achieve greater performance with fewer neural resources. Although the functional meaning of decreased activation is often ambiguous, this pattern has previously been linked to increased efficiency in the prefrontal cortex, as lower activation with good performance represents a sparing of resources that may be redirected to other tasks (Reuter-Lorenz & Cappell, 2008).
Furthermore, in a more naturalistic training regime, older adults who learned photography and quilting skills for three months showed enhanced ability to modulate activity in the fronto-parietal cortex to meet task demands (McDonough, Haber, Bischof, & Park, 2015). These training-related changes in neural activity likely stem from changes to underlying neurochemistry, including altered dopamine release (e.g., Bäckman et al., 2011), though more research is needed to understand this mechanism.
Conclusions and Future Directions

In this chapter, we have explored how human intelligence demonstrates both a degree of stability and remarkable change across the lifespan. This is powerfully illustrated by the common decline of fluid aspects of intelligence in old age, despite preservation of verbal ability and expertise. Important foci for future work include understanding the interrelationships between age-related network degradation and measures of fluid and crystallized abilities, learning more about the neural underpinnings of crystallized intelligence, and conducting high-quality, short-term longitudinal studies that thoroughly assess brain/behavioral trajectories of different age groups. A greater
integration of theories of intelligence with theories of cognitive aging would benefit both fields. Perhaps most importantly, a large brain/behavior study of the entire lifespan that included both children and middle-aged adults would go a very long way toward providing a more complete characterization of developmental trajectories of intelligence. Cognitive training and environmental enrichment have emerged as potential supports for maintaining intelligence. Longitudinal research must examine how cognitive training affects intelligence over longer periods to determine whether it can help delay age-related cognitive decline. If exposure to training or enriched environments can slow or even simply delay age-related decline, it may prove an invaluable way to ensure that people are not only living longer lives, but also have more quality years ahead of them.
References

Anstey, K. J., Sargent-Cox, K., Garde, E., Cherbuin, N., & Butterworth, P. (2014). Cognitive development over 8 years in midlife and its association with cardiovascular risk factors. Neuropsychology, 28(4), 653–665. Au, J., Sheehan, E., Tsai, N., Duncan, G. J., Buschkuehl, M., & Jaeggi, S. M. (2015). Improving fluid intelligence with training on working memory: A meta-analysis. Psychonomic Bulletin & Review, 22(2), 366–377. Bäckman, L., Nyberg, L., Soveri, A., Johansson, J., Andersson, M., Dahlin, E., . . . Rinne, J. O. (2011). Effects of working-memory training on striatal dopamine release. Science, 333(6043), 718. Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure and Function, 219(2), 485–494. Basak, C., Qin, S., & O'Connell, M. A. (2020). Differential effects of cognitive training modules in healthy aging and mild cognitive impairment: A comprehensive meta-analysis of randomized controlled trials. Psychology and Aging, 35(2), 220–249. Batista, A. X., Bazán, P. R., Conforto, A. B., Martins, M. da G. M., Hoshino, M., Simon, S. S., . . . Miotto, E. C. (2019). Resting state functional connectivity and neural correlates of face-name encoding in patients with ischemic vascular lesions with and without the involvement of the left inferior frontal gyrus. Cortex, 113, 15–28. Bernard, J. A., & Seidler, R. D. (2012). Evidence for motor cortex dedifferentiation in older adults. Neurobiology of Aging, 33(9), 1890–1899. Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19(12), 2767–2796. Buschkuehl, M., Jaeggi, S. M., & Jonides, J. (2012). Neuronal effects following working memory training. Developmental Cognitive Neuroscience, 2 (Supp 1), S167–S179.
Carp, J., Park, J., Hebrank, A., Park, D. C., & Polk, T. A. (2011). Age-related neural dedifferentiation in the motor system. PLoS One, 6(12), e29411. Carretta, T. R., & Ree, M. J. (1995). Near identity of cognitive structure in sex and ethnic groups. Personality and Individual Differences, 19(2), 149–155. Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychological Bulletin, 38(592), 10. Chan, M. Y., Park, D. C., Savalia, N. K., Petersen, S. E., & Wig, G. S. (2014). Decreased segregation of brain systems across the healthy adult lifespan. Proceedings of the National Academy of Sciences, 111(46), E4997–E5006. Colom, R., Burgaleta, M., Román, F. J., Karama, S., Álvarez-Linera, J., Abad, F. J., . . . Haier, R. J. (2013). Neuroanatomic overlap between intelligence and cognitive factors: Morphometry methods provide support for the key role of the frontal lobes. NeuroImage, 72, 143–152. Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3(3), 201–215. Deary, I. J., Pattie, A., & Starr, J. M. (2013). The stability of intelligence from age 11 to age 90 years: The Lothian Birth Cohort of 1921. Psychological Science, 24(12), 2361–2368. Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211. Flynn, J. R. (1984). The mean IQ of Americans: Massive gains 1932 to 1978. Psychological Bulletin, 95(1), 29–51. Flynn, J. R. (1987). Massive IQ gains in 14 nations: What IQ tests really measure. Psychological Bulletin, 101(2), 171–191. Galton, F. (1869). Hereditary genius: An inquiry into its laws and consequences. London: Macmillan and Co. Gläscher, J., Tranel, D., Paul, L. K., Rudrauf, D., Rorden, C., Hornaday, A., . . . Adolphs, R. (2009). Lesion mapping of cognitive abilities linked to intelligence. Neuron, 61(5), 681–691. Harrison, T. M., Maass, A., Baker, S. L., & Jagust, W. J. 
(2018). Brain morphology, cognition, and β-amyloid in older adults with superior memory performance. Neurobiology of Aging, 67, 162–170. Hausknecht, J. P., Halpert, J. A., Di Paolo, N. T., & Moriarty Gerrard, M. O. (2007). Retesting in selection: A meta-analysis of coaching and practice effects for tests of cognitive ability. Journal of Applied Psychology, 92(2), 373–385. Hedden, T., Schultz, A. P., Rieckmann, A., Mormino, E. C., Johnson, K. A., Sperling, R. A., & Buckner, R. L. (2016). Multiple brain markers are linked to agerelated variation in cognition. Cerebral Cortex, 26(4), 1388–1400. Horn, J., & Cattell, R. B. (1967). Age differences in fluid and crystallized intelligence. Acta Psychologica, 26, 107–129. Hu, C., & Sale, M. E. (2003). A joint model for nonlinear longitudinal data with informative dropout. Journal of Pharmacokinetics and Pharmacodynamics, 30(1), 83–103. Huang, C.-M., Polk, T. A., Goh, J. O., & Park, D. C. (2012). Both left and right posterior parietal activations contribute to compensatory processes in normal aging. Neuropsychologia, 50(1), 55–66.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. Kim, S.-J., & Park, E. H. (2018). Relationship of working memory, processing speed, and fluid reasoning in psychiatric patients. Psychiatry Investigation, 15(12), 1154–1161. Kremen, W. S., Beck, A., Elman, J. A., Gustavson, D. E., Reynolds, C. A., Tu, X. M., . . . Franz, C. E. (2019). Influence of young adult cognitive ability and additional education on later-life cognition. Proceedings of the National Academy of Sciences USA, 116(6), 2021–2026. MacPherson, S. E., Cox, S. R., Dickie, D. A., Karama, S., Starr, J. M., Evans, A. C., . . . Deary, I. J. (2017). Processing speed and the relationship between Trail Making Test-B performance, cortical thinning and white matter microstructure in older adults. Cortex, 95, 92–103. Martínez, K., Burgaleta, M., Román, F. J., Escorial, S., Shih, P. C., Quiroga, M. Á., & Colom, R. (2011). Can fluid intelligence be reduced to “simple” short-term storage? Intelligence, 39(6), 473–480. McDonough, I. M., Bischof, G. N., Kennedy, K. M., Rodrigue, K. M., Farrell, M. E., & Park, D. C. (2016). Discrepancies between fluid and crystallized ability in healthy adults: A behavioral marker of preclinical Alzheimer’s disease. Neurobiology of Aging, 46, 68–75. McDonough, I. M., Haber, S., Bischof, G. N., & Park, D. C. (2015). The Synapse Project: Engagement in mentally challenging activities enhances neural efficiency. Restorative Neurology and Neuroscience, 33(6), 865–882. Miró-Padilla, A., Bueichekú, E., Ventura-Campos, N., Flores-Compañ, M.-J., Parcet, M. A., & Ávila, C. (2019). Long-term brain effects of N-back training: An fMRI study. Brain Imaging and Behavior, 13(4), 1115–1127. Nyberg, L., & Pudas, S. (2019). Successful memory aging. Annual Review of Psychology, 70(1), 219–243. Østby, Y., Tamnes, C. K., Fjell, A. M., & Walhovd, K. B. (2011). 
Morphometry and connectivity of the fronto-parietal verbal working memory network in development. Neuropsychologia, 49(14), 3854–3862. O’Sullivan, M., Jones, D. K., Summers, P. E., Morris, R. G., Williams, S. C. R., & Markus, H. S. (2001). Evidence for cortical “disconnection” as a mechanism of age-related cognitive decline. Neurology, 57(4), 632–638. Pacheco, J., Goh, J. O., Kraut, M. A., Ferrucci, L., & Resnick, S. M. (2015). Greater cortical thinning in normal older adults predicts later cognitive impairment. Neurobiology of Aging, 36(2), 903–908. Park, D. C. (2019). Cognitive ability in old age is predetermined by age 20. Proceedings of the National Academy of Sciences USA, 116(6):1832–1833. Park, D. C., & Bischof, G. N. (2013). The aging mind: Neuroplasticity in response to cognitive training. Dialogues in Clinical Neuroscience, 15(1), 109–119. Park, D. C., Lautenschlager, G., Hedden, T., Davidson, N. S., Smith, A. D., & Smith, P. K. (2002). Models of visuospatial and verbal memory across the adult life span. Psychology and Aging, 17(2), 299–293.
Park, D. C., Polk, T. A., Park, P. R., Minear, M., Savage, A., & Smith, M. R. (2004). Aging reduces neural specialization in ventral visual cortex. Proceedings of the National Academy of Sciences USA, 101(35), 13091–13095. Park, D. C., & Reuter-Lorenz, P. (2009). The adaptive brain: Aging and neurocognitive scaffolding. Annual Review of Psychology, 60(1), 173–196. Pietschnig, J., & Voracek, M. (2015). One century of global IQ gains: A formal metaanalysis of the Flynn Effect (1909–2013). Perspectives on Psychological Science, 10(3), 282–306. Raz, N., Ghisletta, P., Rodrigue, K. M., Kennedy, K. M., & Lindenberger, U. (2010). Trajectories of brain aging in middle-aged and older adults: Regional and individual differences. NeuroImage, 51(2), 501–511. Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., . . . Engle, R. W. (2013). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General, 142(2), 359–379. Reuter-Lorenz, P. A., & Cappell, K. A. (2008). Neurocognitive aging and the compensation hypothesis. Current Directions in Psychological Science, 17(3), 177–182. Reuter-Lorenz, P. A., & Park, D. C. (2014). How does it STAC up? Revisiting the scaffolding theory of aging and cognition. Neuropsychology Review, 24(3), 355–370. Rieck, J. R., Rodrigue, K. M., Boylan, M. A., & Kennedy, K. M. (2017). Age-related reduction of BOLD modulation to cognitive difficulty predicts poorer task accuracy and poorer fluid reasoning ability. NeuroImage, 147, 262–271. Roca, M., Parr, A., Thompson, R., Woolgar, A., Torralva, T., Antoun, N., . . . Duncan, J. (2010). Executive function and fluid intelligence after frontal lobe lesions. Brain, 133(1), 234–247. Salthouse, T. A. (2014a). Correlates of cognitive change. Journal of Experimental Psychology: General, 143(3), 1026–1048. Salthouse, T. A. (2014b). 
Selectivity of attrition in longitudinal studies of cognitive functioning. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 69(4), 567–574. Salthouse, T. A. (2016). Continuity of cognitive change across adulthood. Psychonomic Bulletin & Review, 23(3), 932–939. Satz, P. (1993). Brain reserve capacity on symptom onset after brain injury: A formulation and review of evidence for threshold theory. Neuropsychology, 7(3), 273. Scheller, E., Schumacher, L. V., Peter, J., Lahr, J., Wehrle, J., Kaller, C. P., . . . Klöppel, S. (2018). Brain aging and APOE ε4 interact to reveal potential neuronal compensation in healthy older adults. Frontiers in Aging Neuroscience, 10, 1–11. Simons, D. J., Boot, W. R., Charness, N., Gathercole, S. E., Chabris, C. F., Hambrick, D. Z., & Stine-Morrow, E. A. L. (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17(3), 103–186. Spearman, C. (1904). “General Intelligence,” objectively determined and measured. The American Journal of Psychology, 15(2), 201–292.
Stern, Y. (2002). What is cognitive reserve? Theory and research application of the reserve concept. Journal of the International Neuropsychological Society, 8(3), 448–460. Stern, Y., Arenaza-Urquijo, E. M., Bartrés-Faz, D., Belleville, S., Cantilon, M., Chetelat, G., . . . Vuoksimaa, E. (2018). Whitepaper: Defining and investigating cognitive reserve, brain reserve, and brain maintenance. Alzheimer’s & Dementia, 16(9), 1305–1311. Stern, Y., Gazes, Y., Razlighi, Q., Steffener, J., & Habeck, C. (2018). A task-invariant cognitive reserve network. NeuroImage, 178, 36–45. Storsve, A. B., Fjell, A. M., Tamnes, C. K., Westlye, L. T., Overbye, K., Aasland, H. W., & Walhovd, K. B. (2014). Differential longitudinal changes in cortical thickness, surface area and volume across the adult life span: Regions of accelerating and decelerating change. Journal of Neuroscience, 34(25), 8488–8498. Turner, G. R., & Spreng, R. N. (2015). Prefrontal engagement and reduced default network suppression co-occur and are dynamically coupled in older adults: The default–executive coupling hypothesis of aging. Journal of Cognitive Neuroscience, 27(12), 2462–2476. Voss, M. W., Erickson, K. I., Chaddock, L., Prakash, R. S., Colcombe, S. J., Morris, K. S., . . . Kramer, A. F. (2008). Dedifferentiation in the visual cortex: An fMRI investigation of individual differences in older adults. Brain Research, 1244, 121–131. Yuan, P., Voelkle, M. C., & Raz, N. (2018). Fluid intelligence and gross structural properties of the cerebral cortex in middle-aged and older adults: A multioccasion longitudinal study. NeuroImage, 172, 21–30.
9 Predictive Intelligence for Learning and Optimization
Multidisciplinary Perspectives from Social, Cognitive, and Affective Neuroscience
Christine Ahrends, Peter Vuust, and Morten L. Kringelbach

Many different definitions of intelligence exist but, in the end, they all converge on the brain. In this chapter, we explore the implications of the simple idea that, ultimately, intelligence must help optimize the survival of the individual and of the species. Central to this evolutionary argument, intelligence must offer superior abilities to learn and flexibly adapt to new challenges in the environment. To enhance the possibility of survival, the brain must thus learn to make accurate predictions that optimize the amount of time and energy spent on choosing appropriate actions in a given situation. Such predictive models have a number of parameters, like speed, complexity, and flexibility, that ensure the correct balance and usefulness for solving a given problem (Deary, Penke, & Johnson, 2010; Friedman et al., 2008; Fuster, 2005; Houde, 2010; Johnson-Laird, 2001; Kringelbach & Rolls, 2004; Roth & Dicke, 2005). These parameters come from a variety of cognitive, affective, and social factors, but a main requirement is motivation to initiate and sustain the learning process. Finally, it is one thing to survive and another to flourish, and so we discuss whether the intelligent brain is also optimal in terms of wellbeing, given that spending too much time predicting something that may never come to pass could be counterproductive to flourishing. Thus, in this perspective, intelligence can be thought of as the process of balancing and optimizing the parameters that allow animals to survive as individuals and as a species, while still maintaining the motivation to do so. Improving the predictive, intelligent brain is a lifelong process, with important shifts throughout the lifespan in how different aspects and parameters are prioritized.
In this chapter, we first investigate the fundamental requirements for surviving in an intelligent manner. A central idea is that of the predictive brain, which has gained traction over the last few decades. Ever more precise models have described the model parameters that enable motivated learning to solve complex problems. The brain turns out to have a specific hierarchical architecture that forms the basis for these learning processes. Yet, it has also become clear
that we need to further our understanding of communication in the human brain, and in particular of the hierarchical, yet massively parallel, processing that allows for intelligent predictions within this architecture. To this end, we show how these processes can be integrated in models such as the Global Workspace, and how several concepts from the study of dynamical systems, such as metastability and criticality, have proven useful for describing the human brain. We then explore emerging evidence from social, cognitive, and affective neuroscience describing optimal and suboptimal states of the intelligent brain. Finally, we turn to the role of intelligence not just for surviving but for thriving.
Brain Requirements for Intelligent Survival

Learning is the process that enables intelligent survival and the optimization of exploitation vs. exploration in the human brain. Crucially, learning can only occur given the motivation and expectation of receiving a reward or pleasure (Berridge & Robinson, 2003; Kringelbach & Berridge, 2017). This can be conceptualized as a pleasure cycle with multiple distinct phases: an appetitive/"wanting" phase followed by a consummatory/"liking" phase, and finally a satiety phase (see Figure 9.1a), each of which has behavioral manifestations (hence the quotation marks around "wanting" and "liking") resulting from the underlying brain networks and mechanisms. Predictions are maximally instigated during the appetitive phase but are also present during the consummatory and satiety phases. During the latter, the outcome of the cycle is evaluated to learn from the experience; although, importantly, learning can occur in any phase of the cycle (Kringelbach & Rolls, 2004). Learning thus occurs continuously in pleasure cycles that seamlessly take place over much longer circadian cycles and over the lifespan (see Figure 9.2; for more information, see Berridge & Kringelbach, 2008).
Learning Models in the Human Brain

The theory of predicting from and updating a model through these pleasure cycles is essential to explaining learning and decision-making. Over the past 20 years, neuroscientific research has developed several theories of a predictive brain (Clark, 2013; Friston & Kiebel, 2009; Johnson-Laird, 2001; Schacter, Addis, & Buckner, 2007). This principle describes how the mind constantly makes predictions about its environment from a mental model built from previous experience, as illustrated in Figure 9.1b. The expectations generated from this model can then either be met or violated; the degree of mismatch constitutes a prediction error. This error signal is used to update the model and improve future predictions – driven by the brain's intrinsic drive to reduce the model's free energy (or minimize surprise) (Friston, 2010).
Figure 9.1 The pleasure cycle, interactions between experience and predictions, as well as how learning might occur. (A) The pleasure cycle illustrates how humans go through an appetitive, a consummatory, and a satiety phase when processing a reward. This can be any number of different rewards, including food or abstract monetary reward. (B) Zooming in to this process shows how, at any point in the pleasure cycle, the brain is taking into
Initially, research on perception demonstrated the involvement of feedforward and feedback loops that test predictions and update a mental model of the environment (for the visual domain, see Bar et al. (2006), Kanai, Komura, Shipp, and Friston (2015), and Rao and Ballard (1999); for the auditory domain, see Garrido, Kilner, Stephan, and Friston (2009), Näätänen, Gaillard, and Mäntysalo (1978), and Näätänen, Paavilainen, Rinne, and Alho (2007)). It has since been shown that many higher-level processes, like the understanding and enjoyment of music, rely on the same principle of generating and testing predictions from a model (Huron, 2006, 2016; Koelsch, Vuust, & Friston, 2019; Pearce & Wiggins, 2012; Rohrmeier & Koelsch, 2012; Vuust & Frith, 2008). How the model is updated based on the expectation of future reward has been explained, for example, using reinforcement learning models (Niv & Schoenbaum, 2008; Schultz, 2015; Schultz, Dayan, & Montague, 1997; Vuust & Kringelbach, 2010). In these theories, any possible action in a given situation is associated with an expected value. In every trial (or every instance where the action is an available option), this associated value is updated based on experience: the difference between the obtained and the expected outcome constitutes a prediction error, and this error signal is weighted by a learning rate that determines how much influence a given error term has on the model update (Gläscher & O'Doherty, 2010; Niv & Schoenbaum, 2008; O'Doherty, 2004). Extensive evidence from human and animal neuroimaging studies has made a strong case for the involvement of dopaminergic signaling, centered on the ventral putamen and orbitofrontal cortex in humans, in mediating this process (O'Doherty, 2004; Schultz, 2015; Schultz & Dickinson, 2000; Schultz et al., 1997). A schematic of a simple reinforcement learning cycle can be found in Figure 9.1c.
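For the computationally inclined reader, the value-update rule described above – an expected value nudged by a learning-rate-weighted prediction error – can be sketched in a few lines of Python. This is a generic delta-rule illustration using our own function names and numbers, not code from any of the cited models.

```python
def update_value(value, reward, learning_rate=0.1):
    """One delta-rule update: V <- V + alpha * (reward - V).

    The term (reward - V) is the prediction error; the learning
    rate alpha controls how strongly one error moves the model.
    """
    prediction_error = reward - value
    return value + learning_rate * prediction_error

# With a small learning rate, the expected value converges gradually
# toward the true reward over repeated trials.
v = 0.0
for trial in range(20):
    v = update_value(v, reward=1.0)
# After 20 rewarded trials, v has climbed to roughly 0.88.
```

Because each update moves the estimate only a fraction of the way toward the observed reward, surprising outcomes produce large corrections and expected outcomes barely change the model, mirroring the dopaminergic error signal described above.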
Figure 9.1 (cont.) account past experiences of relevant outcomes to form predictions about the future. (C) Within the brain, several stages are likely traversed to update the predictive model in a massively parallel manner, where learning biases the update. One iteration of this cycle can be thought of as making a decision on how to act in a certain situation. First, the tree of possible options is searched by simulating, or predicting, the consequences of each option. The probability that each of these sequences of actions leads to a reward is then evaluated. Suboptimal options (those unlikely to lead to a reward) are discarded and only the most promising path is translated into action. The outcome following this action is then evaluated based on the actual reward and compared to the expected reward associated with that action. Based on this comparison, the probabilities of the decision tree are updated: They are increased if the reward was larger than expected, making the option more likely to be chosen in a future similar situation, or decreased if the reward either did not occur or was lower than expected, making the option less likely to be chosen. This process is iterated many times to create an accurate predictive model.
c. ahrends, p. vuust, and m. l. kringelbach
Figure 9.2 The pleasure cycle, on its own, during circadian cycles, and over the lifespan. (A) The repeating pleasure cycle consists of different phases: “wanting” (motivation), “liking” (consummation), and satiety. Learning, and updating of prediction errors, occurs throughout the cycle but most strongly after consummation. (B) These cycles are constantly iterated throughout the circadian cycle, both during the awake phase and during sleep. (C) Over the lifespan (shown on a logarithmic scale), these pleasure cycles help enable an individual’s wellbeing. Evidence suggests that overall wellbeing is U-shaped over the lifespan (Stone et al., 2010).
Predictive Intelligence for Learning and Optimization
In everyday life, pleasure cycles occur continuously and seamlessly throughout the circadian cycle, both when awake and when asleep, helping to improve the predictive model (see Figure 9.2b). Problems with this pleasure cycle manifest as anhedonia, the lack of pleasure, which is a key characteristic of neuropsychiatric disorders. As such, the dynamics of these pleasure cycles interact closely with our experience of wellbeing or eudaimonia, a life well-lived. Interestingly, subjective ratings of wellbeing have been shown to follow a U-shape over the lifespan, with a significant dip around 50 years of age (Figure 9.2c) (Stone, Schwartz, Broderick, & Deaton, 2010).
Optimization Principles of Learning Models

Learning models of intelligence have made their way into computer models, where they have been particularly successful in the development of algorithms to solve complex problems. For instance, a major challenge of machine learning, the game of Go, was recently mastered by an algorithm that relies on reinforcement learning (Silver et al., 2016, 2017). Using deep neural networks trained both in a supervised way on human games and through reinforcement learning on simulations of self-play, this algorithm outperforms human experts in the game of Go (Silver et al., 2016). Notably, the performance could be improved even further by discarding the data from human games and training purely through reinforcement learning on self-play (Silver et al., 2017). In the case of computer systems, reinforcement learning implies that the algorithm simulates a large number of games, “rewards” itself every time the sequence of actions leads to a win, and “punishes” itself every time it leads to a loss. It uses this information to compute the probability of leading to a win at every step of the sequence. For every iteration, these probabilities are updated by weighting information from the simulations, making the model more flexible. When generating new sequences based on these probabilities, the algorithm needs to predict the outcome several steps ahead in the sequence and reiterate this at every step. How far the algorithm “looks into the future” is called the depth of the search space and is a major limiting factor in its performance. As a theory, these models can be generalized from computer science to human and animal decision-making in that they describe “how systems of any sort can choose their actions to maximize rewards or minimize punishments” (Dayan & Balleine, 2002: 258). They therefore offer a useful framework for understanding general optimization principles (i.e., learning) in the brain.
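The simulate-reward-backpropagate loop described above can be sketched on a toy game. This is a minimal caricature of the idea, not the Go architecture of Silver et al.: the one-dimensional "race" game, its arbitrary win rule, and all names here are invented for illustration.

```python
import random

random.seed(0)  # reproducible toy simulation

win_prob = {}   # state -> running estimate of P(win from this state)
visits = {}     # state -> number of simulated games visiting this state

def simulate_game(start=0, goal=5):
    """Random playout of a toy game: from each state move +1 or +2 until the
    goal is reached. Win rule (arbitrary): an even number of moves wins."""
    state, path, moves = start, [], 0
    while state < goal:
        path.append(state)
        state += random.choice([1, 2])
        moves += 1
    return path, 1.0 if moves % 2 == 0 else 0.0

def backpropagate(path, outcome):
    """'Reward' (1.0) or 'punish' (0.0) every visited state by updating its
    win-probability estimate as a running mean (learning rate 1/visits)."""
    for s in path:
        visits[s] = visits.get(s, 0) + 1
        old = win_prob.get(s, 0.0)
        win_prob[s] = old + (outcome - old) / visits[s]

for _ in range(2000):
    path, outcome = simulate_game()
    backpropagate(path, outcome)
```

After many simulated games, `win_prob` holds, for every state, an estimate of the probability of winning from there, which a move-selection policy could then exploit.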
In the context of intelligence, the way in which the model used to make decisions and solve problems is updated can increase the chances of survival. This optimization depends on three main parameters that can be seen as analogous to the criteria put forward for intelligence: (1) the time until a decision is made (cf. the speed of problem solving (Deary et al., 2010; Jung & Haier, 2007; Roth & Dicke, 2005)), (2) the depth of the search space (cf. the complexity of
problems that can be solved (Conway, Kane, & Engle, 2003; Duncan, 2013; Fuster, 2005; Johnson-Laird, 2001)), and (3) the model flexibility (cf. the ability to take new evidence into account (Fuster, 2005; Roth & Dicke, 2005)). Additionally, humans need to consider social aspects when trying to increase their chances of survival; i.e., a fourth parameter, group size, may change depending on the focus of survival. Intelligent behavior can be thought of as optimizing these four parameters – a task that persists over the lifespan (see Figure 9.3). Each optimization parameter will change throughout the different stages of development based on the individual’s priorities (e.g., a shift from individual to family values during early adulthood). The probability of survival depends on the balance of these interdependent parameters. For instance, increasing the depth of the search space negatively impacts the speed at which a decision can be made. Putative relationships over the lifespan are illustrated in Figure 9.3 (center panel).
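The tension between decision speed and search depth can be made concrete with a back-of-the-envelope calculation: with branching factor b, an exhaustive search to depth d must evaluate roughly b^d positions, so the time until a decision grows exponentially with depth. The numbers and function names below are purely illustrative assumptions.

```python
def leaves_to_evaluate(branching_factor, depth):
    """Size of an exhaustive game-tree search: b ** d leaf positions."""
    return branching_factor ** depth

def time_until_decision(branching_factor, depth, seconds_per_leaf=1e-6):
    """Decision time grows exponentially with search depth: adding one level
    of look-ahead multiplies the time by the branching factor."""
    return leaves_to_evaluate(branching_factor, depth) * seconds_per_leaf
```

With a branching factor of 3, deepening the search from 4 to 8 moves multiplies the number of evaluated positions (and hence the decision time) by 81, which is the sense in which depth and speed trade off against each other.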
Architecture and Communication Principles in the Human Brain

To enable complex processes such as learning, the whole brain needs to be organized in an appropriate structural and functional architecture that provides the necessary scaffold. Within this framework, the specialized regions involved need to communicate efficiently to implement the feedforward and feedback loops necessary for model updating. Recently, this network perspective has been shown to provide meaningful insights into the study of general intelligence, or the g-factor, in the brain (Barbey, 2018). Here we argue that brain architecture and dynamics can help us understand intelligence in a wider sense, considering the interplay between cognitive, affective, and social functions of the brain. Using graph theory, it has been found that in the healthy brain, the structural connections of brain networks are self-organized in a so-called “small-world” manner, i.e., they display dense local connectivity combined with a few long-range connections (Stam & van Straaten, 2012; van den Heuvel & Hulshoff Pol, 2010). There is growing evidence that a higher degree of small-worldness, reflecting the efficiency of local information processing, together with shorter path lengths for long-range connections, i.e., increased global communication efficiency, correlates with cognitive performance (Beaty, Benedek, Kaufman, & Silvia, 2015; Dimitriadis et al., 2013; Pamplona, Santos Neto, Rosset, Rogers, & Salmon, 2015; Santarnecchi, Galli, Polizzotto, Rossi, & Rossi, 2014; Song et al., 2008; Stern, 2009; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). Another perspective on the hierarchical architecture in which modular brain areas work together to solve problems is described by the Global Neuronal Workspace Theory (Baars, 1988; Baars & Franklin, 2007; Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006; Dehaene, Kerszberg, & Changeux, 1998;
Figure 9.3 Parameter optimization of learning models. Schematic of how the four optimization parameters of the predictive model change across the different stages of development. In general, this optimization aims toward maximizing the depth of the search space (simulate further into the future) and the group size, but minimizing the model flexibility (as a function of the model improving) and the time until decision. These general patterns are affected by developmental changes, such as during adolescence and early adulthood. Each of these parameters affects probabilities of survival: In theory, the deeper search space and faster time until decision improve chances of survival, but they
Dehaene & Naccache, 2001). This view posits that effortful cognitive processing is achieved through dynamic interaction between a single global workspace (i.e., a group of highly interconnected neurons or regions) and several specialized modular subsystems, such as perceptual, memory, or attentional processors (see Figure 9.4). This is crucial for the study of intelligence, particularly as it can explain the interplay between parallel mental operations and the splitting of complex problems into several “chunks” (Zylberberg, Dehaene, Roelfsema, & Sigman, 2011). There are qualitative differences in the direction of interaction between the hierarchical levels: The top-down direction allows for conscious control of subconscious processes (signals directed from the global workspace select submodule input), while the bottom-up direction is a subconscious stream that constantly and unselectively competes for access to the global workspace (Dehaene & Naccache, 2001). During effortful, conscious processes, the global workspace ignites and suppresses the necessary specialized processors (Dehaene et al., 1998). Parallel subconscious processes compete for access to the (conscious) global workspace, where their information can be integrated and eventually considered for action selection (Baars & Franklin, 2007; Baars, Franklin, & Ramsoy, 2013; Mesulam, 1998). An example of these interactions is the effortful guiding of attention to auditory or visual stimuli under specific task instructions, where the global workspace can select the relevant module (auditory or visual perceptual input, respectively), which then becomes conscious, while the other module does not enter the workspace and its information is thus “ignored” (Dehaene & Naccache, 2001).
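The competition-and-broadcast cycle just described can be caricatured in code. This is a deliberately toy sketch of the idea, not an implementation of the Global Neuronal Workspace model; the module names, salience values, and contents are invented.

```python
def workspace_cycle(module_outputs, capacity=1):
    """module_outputs: {module_name: (salience, content)}. The bottom-up
    stream is modeled as a competition ranked by salience; the capacity-
    limited workspace admits the winners and broadcasts their content."""
    ranked = sorted(module_outputs.items(), key=lambda kv: kv[1][0], reverse=True)
    return [content for _, (_, content) in ranked[:capacity]]

# Invented example: three modules compete; only the most salient "ignites".
outputs = {
    "auditory": (0.7, "tone sequence"),
    "visual": (0.4, "moving dot"),
    "memory": (0.2, "similar past episode"),
}
```

With capacity 1, only the auditory content enters the workspace and the visual and memory contents are "ignored", mirroring the attention example above; top-down selection could be modeled by biasing the salience of the task-relevant module before ranking.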
This type of architecture is essential for learning processes such as the ones described under “Learning Models in the Human Brain.” The global workspace can be thought of as keeping track of the model by integrating incoming information from the perceptual subsystems with information stored in memory systems, updating the probabilities associated with outcomes via evaluative subsystems, simulating paths, and finally passing on the generative information to action subsystems (see Figure 9.5). The speed, capacity, and flexibility with which the global workspace operates can be thought of as determining the efficiency of problem solving – a potential proxy for intelligence as a learning model.
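The two graph measures invoked earlier in this section (local clustering and characteristic path length, the ingredients of small-worldness) can be computed in a few lines. This is a minimal pure-Python sketch on a toy undirected graph given as `{node: set(neighbours)}`; it is not a pipeline for real connectome data, where weighted graphs and null-model normalization would be needed.

```python
from collections import deque

def clustering_coefficient(graph):
    """Mean local clustering: the fraction of each node's neighbour pairs
    that are themselves connected, averaged over all nodes."""
    total = 0.0
    for node, nbrs in graph.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering undefined; counts as 0
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in graph[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(graph)

def characteristic_path_length(graph):
    """Mean shortest-path length over all connected node pairs,
    via breadth-first search from every node."""
    total, pairs = 0, 0
    for source in graph:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != source:
                total += d
                pairs += 1
    return total / pairs
```

A small-world network is one with high clustering (like a lattice) but short path lengths (like a random graph); the correlates of cognitive performance cited above combine exactly these two quantities.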
Figure 9.3 (cont.) are positively correlated, making the optimization problem more complex (center panel). The model flexibility parameter is optimal at an intermediate level, where the model is neither so rigid that it cannot take new evidence into account and adapt to new challenges, nor so plastic that it would constantly be renewed. The group size parameter defines the focus of survival between the individual, the small group (e.g., family), or the species.
Figure 9.4 Hierarchical neuronal workspace architectures. Intelligence is perhaps best understood in a hierarchically organized network akin to the Global Neuronal
Oscillator Models

A remaining question in these theories, however, is how different nodes or networks within these brain architectures interact to not only enable but also optimize complex learning processes. Important insights into the communication between brain areas come from computational models. Here, the rhythmic firing of neurons and neuronal fields can be modeled as coupled oscillators (Cabral et al., 2014; Cabral, Hugues, Sporns, & Deco, 2011; Deco, Jirsa, McIntosh, Sporns, & Kotter, 2009; Werner, 2007). A simple analogy of coupled oscillators is a row of metronomes that are connected by standing on the same moving platform. Even if started at random time points and frequencies, the metronomes will spontaneously synchronize, resulting in phase coherence or phase-locking. This self-entrainment of oscillators was described by Kuramoto (1975). On the neural mechanistic level, the Communication through Coherence theory proposes that neuronal information transmission depends on phase coherence (Fries, 2005). This theory describes how the synchronization between two oscillators (in this case two local fields) opens temporal communication windows in which input and output can be exchanged at the same time. This suggests that, for information to be processed optimally, the involved brain areas need to be temporally synchronized. An apparent contradiction to this view is that weaker coupling between long-range connections also supports cognitive performance (Santarnecchi et al., 2014). Synchronization (i.e., stronger coupling between nodes) therefore has great potential to describe communication, but it cannot conclusively explain the kind of spatiotemporal flexibility necessary to update a model in the Global Workspace framework (Deco & Kringelbach, 2016). In fact, over-synchronization between certain areas has been suggested to hinder
Figure 9.4 (cont.) Workspace. (A) In the human brain, information is integrated in a hierarchical fashion (shown here as concentric rings). Sensory information is progressively processed and integrated; shown here is how visual (green) and auditory (blue) information is integrated in heteromodal regions (burgundy) (Mesulam, 1998). (B) This idea was further developed in the Global Workspace (Dehaene et al., 1998), based on Baars’ ideas of a cognitive workspace (Baars, 1988), where information is integrated in a hierarchical fashion from perceptual systems based on memory, attention, and evaluative systems. In the Global Workspace, metastable brain regions dynamically assemble to optimally integrate information in order to intelligently shape behavior to ensure survival of the individual and of the species, and potentially maintain the motivation throughout life to thrive (from Dehaene & Changeux, 2011). (C) It has since been shown that these spatial gradients are topologically organized in the human brain (Margulies et al., 2016).
Figure 9.5 Reward-related signals in the orbitofrontal cortex (OFC) unfold dynamically across space and time. (A) Representative schematic of hierarchical cortical processing demonstrating higher-order limbic cortical regions (e.g., OFC) sending prediction signals to and receiving prediction error signals from multimodal, exteroceptive, and interoceptive systems (A1, primary auditory cortex; G1, primary gustatory cortex; I1, primary interoceptive cortex; O1, primary olfactory cortex; S1, primary somatosensory cortex; V1, primary visual cortex). Each ring represents a different type of cortex, from less (interior circles) to greater (exterior circles) laminar
cognitive performance and to be related to pathologies (Anokhin, Muller, Lindenberger, Heath, & Myers, 2006; Singer, 2001; Voytek & Knight, 2015). Coupling strength might therefore be a major factor in enabling efficient workspace configuration. Using non-linear models, it has been shown that a system consisting of strongly coupled oscillators self-entrains, i.e., it quickly converges to a stable state in which all oscillators are synchronized – in the above example, metronomes that are strongly connected via a moving platform synchronize with each other, and it is difficult to disturb this pattern of synchronization (Deco & Jirsa, 2012; Honey, Kotter, Breakspear, & Sporns, 2007). This poses a problem for a dynamic system: The brain becomes a “prisoner to itself” (Tognoli & Kelso, 2014), i.e., it gets stuck in the stable state, eliminating the possibility of exploring any other state (Deco, Kringelbach, Jirsa, & Ritter, 2017). This can be thought of as a strong, simple attractor manifold: With strong coupling, no matter at which frequencies the oscillators – or metronomes in the above example – start, they will always converge to the same state of total synchrony and remain in that single equilibrium state (stability) (Deco & Jirsa, 2012; Deco, Jirsa, & McIntosh, 2011; Rolls, 2010; Tognoli & Kelso, 2014). Stability is equivalent to a perfectly orderly organization of the system. In the opposite case, a system consisting of a number of uncoupled oscillators (or metronomes with no connecting matter between them) does not synchronize; every oscillator stays independent (Deco & Jirsa, 2012; Deco et al., 2017; Honey et al., 2007; Tognoli & Kelso, 2014). This would mean no possible communication between the nodes or between subsystems and the global workspace (Fries, 2005).
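The metronome picture above corresponds to the classic Kuramoto model, which can be simulated in a few lines. This is a minimal sketch with invented parameters (20 oscillators, Gaussian natural frequencies), not a whole-brain model: it only demonstrates that strong all-to-all coupling drives the phases into coherence.

```python
import math
import random

def kuramoto(n=20, coupling=5.0, dt=0.01, steps=2000, seed=1):
    """Euler-integrate n all-to-all coupled Kuramoto oscillators:
    dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]  # natural frequencies
    for _ in range(steps):
        phases = [
            phases[i] + dt * (freqs[i] + coupling
                              * sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n)
            for i in range(n)
        ]
    return phases

def order_parameter(phases):
    """Kuramoto order parameter R in [0, 1]; R near 1 means phase coherence."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)
```

With strong coupling the order parameter approaches 1 (the "prisoner to itself" regime); with the coupling set to zero, each oscillator drifts at its own frequency and no stable coherence emerges.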
Assuming that brain activity within a node is not a perfect oscillator, but includes a certain amount of noise, or random activity, each oscillator would change its activity in a random manner but never converge with the others (Deco & Jirsa, 2012; Deco et al., 2009, 2014). In the dynamic systems perspective, this can be thought of as chaos (Friston, 1997; Tognoli & Kelso, 2014).
Figure 9.5 (cont.) differentiation. Adapted from Mesulam (1998) and Chanes and Barrett (2016). (B) Conceptual representation of reward space for a task with distinct phases of cue (prediction, red), anticipation (uncertainty, blue), and outcome (prediction error, green) with darker colors signaling more reward-related activity. (C) Changes in level of activity in the OFC during phases of prediction, uncertainty, and prediction error based on neural evidence in Li et al. (2016). (D) Changes in network dynamics as a function of the activity in the OFC. Hypothetical illustration of the OFC directing functional network configurations as a key part of the global workspace across multiple brain regions over time. From Kringelbach and Rapuano (2016)
Computational models have shown that a system in which oscillators have a medium coupling strength achieves the most efficient exploration of different states (Deco & Kringelbach, 2016; Deco et al., 2017; Honey et al., 2007). The typical behavior of this type of system is characterized by transient phases of synchronization and de-synchronization, i.e., locking and escape, or integration and segregation (Deco & Kringelbach, 2016; Tognoli & Kelso, 2014). This behavior, called metastability, has been shown to exhibit the highest degree of dynamic complexity, even in the absence of external noise – and thus to exist at the border between order and chaos (Friston, 1997). In this complex attractor manifold, configurations exist to which the system will be transiently attracted (Deco & Jirsa, 2012; Rolls, 2010). In light of the Global Workspace theory, different perceptual, attentional, memory, evaluative, or motor networks are subliminally available as ghost attractors in the state space and can easily be stabilized when necessary (Deco & Jirsa, 2012). In a metastable system, at the moment a fixed state loses its stability (bifurcation), the system is characterized by criticality (Deco & Jirsa, 2012; Deco et al., 2011; Friston, 1997; Tognoli & Kelso, 2014). In this critical zone, information processing and flexibility in the brain are optimal (Deco et al., 2017). It has been shown that the healthy waking brain self-organizes into a state close to criticality (Singer, 2001; Werner, 2007). However, a system at criticality is also particularly susceptible to perturbation (or noise) – an important consideration in the search for the intelligent brain (Tognoli & Kelso, 2014; Werner, 2007). Models of metastability and criticality have been applied to, and shown to closely match, empirical data on different time-scales (Hu, Huang, Jiang, & Yu, 2019; Kringelbach, McIntosh, Ritter, Jirsa, & Deco, 2015).
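A common operational index of metastability is the standard deviation over time of the Kuramoto order parameter R(t): a fully locked system has R high and nearly constant (index near zero), while the interesting transient locking-and-escape dynamics produce larger fluctuations. The sketch below uses invented parameters on a toy oscillator system; it illustrates the measure, not any fitted brain model.

```python
import math
import random

def order_parameter(phases):
    """Kuramoto order parameter R in [0, 1]."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

def metastability(coupling, n=20, dt=0.02, steps=2000, transient=1000, seed=2):
    """Std. deviation of R(t) after a transient: near zero for a strongly
    coupled (locked) system, larger in the metastable regime."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(1.0, 0.5) for _ in range(n)]  # natural frequencies
    r_values = []
    for step in range(steps):
        phases = [
            phases[i] + dt * (freqs[i] + coupling
                              * sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n)
            for i in range(n)
        ]
        if step >= transient:
            r_values.append(order_parameter(phases))
    mean = sum(r_values) / len(r_values)
    return (sum((r - mean) ** 2 for r in r_values) / len(r_values)) ** 0.5
```

Sweeping the coupling strength with this function is one way to locate the intermediate, metastable regime the text describes: very strong coupling yields an essentially constant R(t) and hence an index near zero.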
They can explain single-unit behavior (Friston, 1997; Rolls, 2010), phase relations in EEG and MEG recordings of the brain (Cabral et al., 2014), and spontaneous functional connectivity in resting-state fMRI (Cabral et al., 2011; Deco et al., 2009, 2017). The available repertoire of brain states can be thought of as an attractor manifold in which certain functional networks constitute stronger (i.e., more stable, like the Default Mode Network [DMN]) (Anticevic et al., 2012; Deco et al., 2014, 2017; Ghosh, Rho, McIntosh, Kotter, & Jirsa, 2008) or weaker attractors to which the system transiently converges (Deco et al., 2011). The trajectory by which the brain moves through this state space and explores the different network configurations is represented by transitions between functional networks in fMRI.
Optimal and Suboptimal States of the Intelligent Brain

In this chapter, we have considered intelligence in an evolutionary context as the promotion of survival of the individual and of the species. Going beyond survival, we have suggested that intelligence should be aimed
at wellbeing to sustain motivation necessary for learning. We have shown how intelligence can be further conceptualized as an optimization problem of a learning model on the parameters speed, depth of search space, and flexibility. We have described a hierarchical brain architecture akin to the Global Workspace Theory that is necessary for this process. We have also shown that in order to enable an optimal regime for such a framework to work efficiently, concepts of metastability and criticality are needed to ensure optimal communication across brain networks. We have developed this conceptualization in contrast to current concepts of intelligence in the fields of social, cognitive, and affective neuroscience, which have posed several challenges that have been difficult to resolve in a unified theory. One attempt has been to introduce independent subdomains of intelligence, such as social and emotional intelligence, and numerous subareas of cognition (Gardner, 1984; Thorndike, 1920). While these offer great specificity to explain observed phenomena, they strip the term “intelligence” of its ambition of universality and create a gap between the different fields. This has further affected the areas of neurology and neuropsychiatry where symptom classification and mechanistic explanations for some disorders are scattered into a diffuse array of social, emotional, or cognitive deficits and corresponding neural processes that lack a holistic understanding of the diseases. Understanding the intelligent brain in the way we suggest here could provide a new theoretical framework to re-evaluate traditional questions in the study of intelligence. 
For instance, in cognitive neuroscience, the general intelligence factor (g) has been reliably shown to be highly correlated with most cognitive subdomains, such as working memory, and their functional relationship is a question that has received great attention (Colom, Rebollo, Palacios, Juan-Espinosa, & Kyllonen, 2004; Conway et al., 2003; Kane & Engle, 2002). The Global Workspace conceptualization allows for illustrating the relationship between working memory and general intelligence: Working memory can be understood as the limited capacity of the global workspace, while general intelligence is conceptualized as the interplay between several model parameters, including the capacity (or depth) of the workspace. Despite promising past attempts, the field of affective neuroscience is still struggling to explain cognitive deficits in mood disorders, such as depression. Depressed patients have been shown to have lower IQ scores than matched healthy controls, as well as diffuse cognitive deficits (Landro, Stiles, & Sletvold, 2001; Ravnkilde et al., 2002; Sackeim et al., 1992; Veiel, 1997; Zakzanis, Leach, & Kaplan, 1998). While causal explanations in both directions have been suggested (lower IQ as a risk factor for depression, or depressive symptoms hindering cognitive performance), the relationship remains difficult to interpret (Koenen et al., 2009; Liang et al., 2018). As in cognitive neuroscience, the dissociation of state and trait effects is unclear in affective disorders (Hansenne & Bianchi, 2009). Researchers and clinicians therefore often default to describing
specific cognitive impairments, resulting in a disjointed list of cognitive and emotional symptoms. The reinforcement learning model can help to illustrate the dependency of learning processes on affective factors like motivation and reward evaluation. In this light, cognitive deficits in depression, and the general susceptibility of intelligence tests to mood, can more easily be understood within the unified hierarchical framework described in this chapter. Namely, we have shown how learning happens as a function of expectation, consumption, and satiation of pleasure, providing motivation and reward. With the absence of pleasure, or anhedonia, being a core symptom of depression, a main factor that enables learning is lacking in this disorder. Looking forward, the application of the concepts of metastability and criticality as principles of an optimally working brain is an emerging area in social, cognitive, and affective neuroscience that promises important causal-mechanistic insights into several central research questions within these fields.
Emerging Evidence from Social Neuroscience

The recent rise of oscillator models for simulating social interactions has already provided an interesting new perspective on the role of coherence in social behavior. For instance, studies on musical synchronization between dyads have demonstrated that a model of two coupled oscillators can simulate synchronized tapping (Konvalinka, Vuust, Roepstorff, & Frith, 2009) and that a complex pattern of phase-locking both within and between two brains is crucial for interpersonal synchronization (Heggli, Cabral, Konvalinka, Vuust, & Kringelbach, 2019; Sanger, Muller, & Lindenberger, 2012). These models also have the potential to explain and dissociate conflicts within individuals (e.g., between auditory and motor regions of the brain) and between individuals in interpersonal interactions, by modeling several oscillators within each brain and simulating different coupling strengths within a unit and across units (Heggli et al., 2019). In this way, oscillator models have proven to be useful tools for modeling social interaction on several different levels. Taking models of social interaction one step further, it has been shown that larger group dynamics rely on metastability: In order for a system of interacting units to converge to a consensus (or make a decision), the system needs to be close to criticality (De Vincenzo, Giannoccaro, Carbone, & Grigolini, 2017; Grigolini et al., 2015; Turalska, Geneston, West, Allegrini, & Grigolini, 2012). A metastable state of transient phases of consensus and de-stabilization can moreover be meaningfully applied to describe real-life social and political phenomena (Turalska, West, & Grigolini, 2013). This could be an important perspective for understanding social intelligence as a flexible, adaptive behavior (Parkinson & Wheatley, 2015).
A major goal of this line of research within social neuroscience is to model social groups wherein each brain consists of many nodes, which could provide an important step towards understanding the mechanisms underlying dynamics and conflicts between personal and
collective behavior. At the moment, however, these ambitions are still limited by computational power.
Emerging Evidence from Cognitive Neuroscience

The importance of inter-area synchronization for cognitive performance has been shown in large-scale resting-state neuroimaging studies (Ferguson, Anderson, & Spreng, 2017). Moreover, it has recently been found that increasing synchrony between individual brain network nodes can attenuate age-related working-memory deficits (Reinhart & Nguyen, 2019). Between-network coupling has also been evaluated in resting-state functional connectivity studies. A recent large-sample study (N = 3,950 subjects from the UK Biobank) found that, besides functional connectivity within a network, the specific coupling pattern between several canonical resting-state networks (the Default Mode Network [DMN], frontoparietal network, and cingulo-opercular network) can explain a large amount of variance in cognitive performance (Shen et al., 2018). Furthermore, the roles of efficient de-activation of networks and of flexible switching between networks for cognition are gaining acknowledgment (Anticevic et al., 2012; Leech & Sharp, 2014). Recent studies have used dynamic approaches to functional connectivity to describe the spatio-temporal dynamics of the brain during rest. For instance, Cabral et al. (2017) used Leading Eigenvector Dynamics Analysis (LEiDA) and found that cognitive performance in older adults can be explained by distinct patterns of flexible switching between brain states. This dynamical perspective also has the potential to contribute to the discussion of the “milk-and-jug” problem (Dennis et al., 2009). This debate revolves around the relationship between theoretical concepts of intelligence (the “jug,” or capacity) and actual performance (the “milk,” or content, which is limited by the capacity of the jug).
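The core step of a LEiDA-style analysis can be sketched compactly: from the instantaneous phases of N regions at one time point, build the phase-coherence matrix cos(theta_i - theta_j) and extract its leading eigenvector, which summarizes the dominant coherence pattern at that instant. The toy phases below are invented; the actual method of Cabral et al. (2017) estimates phases from BOLD signals via the Hilbert transform and then clusters the eigenvectors over time into recurrent brain states.

```python
import math

def coherence_matrix(phases):
    """Phase-coherence matrix cos(theta_i - theta_j). This matrix is
    positive semi-definite (rank <= 2), so power iteration converges."""
    n = len(phases)
    return [[math.cos(phases[i] - phases[j]) for j in range(n)] for i in range(n)]

def leading_eigenvector(matrix, iterations=200):
    """Power iteration for the leading eigenvector. The start vector must
    overlap it; the first basis vector suffices for the matrices above."""
    n = len(matrix)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

For two groups of regions locked in anti-phase, the leading eigenvector splits the regions by sign, which is how LEiDA reads off which communities of regions are coherent with each other at a given moment.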
Viewing the brain as a dynamical system could give theoretical insights towards understanding this relationship by dissociating between cognitive traits (like g) as the repertoire of available brain states, and cognitive states (performance in a given situation) as the exploration of that repertoire. Despite emerging evidence on graph-theoretical network configurations during effortful cognitive processing (Kitzbichler, Henson, Smith, Nathan, & Bullmore, 2011), a detailed description of the global workspace remains a major challenge; meeting it promises to link specific state-transition patterns with intelligence.
Emerging Evidence from Affective Neuroscience

Using the concept of metastability, it has been proposed that certain neurological and neuropsychiatric diseases, such as Parkinson’s disease, depression, anxiety, and obsessive-compulsive disorder, can be explained by overcoupling,
and others, such as age-related cognitive decline, autism spectrum disorder, and schizophrenia, can be explained by undercoupling (Voytek & Knight, 2015). Behaviorally, this could reflect symptoms like rumination in depression as “getting stuck” in a strong attractor network or disorganization in schizophrenia as a super-critical, chaotic regime or a surplus of noise. Empirical findings support this view: Resting-state functional connectivity studies in patients with major depressive disorder have shown increased connectivity in short-range connections, as well as disruption and dysfunction of the central executive network, the DMN, and the salience network (Fingelkurts et al., 2007; Menon, 2011). It also has been suggested that a dysfunction in state transitions, i.e., a lack of optimal metastability, can explain anhedonia in depressed patients (Kringelbach & Berridge, 2017). Additional evidence for the importance of coherence and metastability in psychiatric disorders comes from the study of schizophrenia. A recent study found that the functional cohesiveness, integration, and metastability of several resting-state networks could not only distinguish between patients and healthy controls, but also bore explanatory value for specific symptom severity, like anxious/depressive or disorganization symptoms (Lee, Doucet, Leibu, & Frangou, 2018). Using a modeling approach similar to the ones described above, Rolls, Loh, Deco, and Winterer (2008) had previously shown that a schizophrenia-like brain state can be achieved by introducing a larger amount of noise and variability into the system. Taking it even further, these principles are now being used to assess the potential of new therapies for affective disorders like depression. 
The entropic brain hypothesis suggests the possible beneficial effects of psychedelic therapy for depression by moving the brain closer towards criticality and thereby enhancing exploration of the state space (Carhart-Harris, 2018; Carhart-Harris et al., 2014). The potential effects of these novel treatment options can be simulated using computational models like the ones described under “Oscillator Models” (Deco & Kringelbach, 2014; Kringelbach & Berridge, 2017).
Conclusion

The quest for the optimal state of the intelligent brain has resulted in several promising theories with great relevance to the fields of social, cognitive, and affective neuroscience. We suggest that learning models and the optimization of their model parameters across the lifespan could be a useful tool for understanding intelligence from a multidisciplinary point of view. Specifically, we described how cognitive learning relies on affective processes like motivation and the evaluation of reward. The balance between individual and group intelligence, i.e., ensuring survival of the individual and of the species, depends on how the different model parameters are prioritized in the optimization. In order to understand the architecture that allows solving
reinforcement learning problems, we have suggested the necessity of a hierarchical system such as the Global Workspace, and demonstrated how it can be applied to research problems related to intelligence. Finally, we focused on flexible brain communication within this structure to enable efficient processing. We suggest that future research can shed new light on the physics of optimal and suboptimal states of the brain using the concept of metastability. We propose that the optimal brain is a brain at criticality that flexibly switches between a large repertoire of attractor states. Taking the case of depression as an example, the concepts delineated under "Oscillator Models" have the potential to explain not only certain symptoms (e.g., affective ones) but also the diffuse effects of the disease, including cognitive symptoms. It has been suggested that the depressed brain is sub-critical, which could affect many areas of brain function and behavior (Carhart-Harris, 2018; Kringelbach & Berridge, 2017). We have argued that whole-brain computational modeling can help establish a causal understanding of the brain in health and disease by simulating the effects of different parameters, such as system-wide or node-specific coupling strength, noise, or transmission delay, on whole-brain dynamics (Deco & Kringelbach, 2014). Interpreting the behavior of these models can contribute to major discussions in the different fields of neuroscience. For instance, simulations have shown that removing the anterior insula and cingulate – areas that are functionally affected (e.g., in schizophrenia, bipolar disorder, depression, and anxiety) – from a metastable brain model results in a strong reduction of the available state repertoire (Deco & Kringelbach, 2016). Speculatively, considering the system's susceptibility to perturbation and closeness to instability at criticality could provide a theoretical account for the popularly discussed link between genius and insanity.
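The reward-driven learning summarized in this conclusion can be made concrete with a minimal reward-prediction-error update, in the spirit of the temporal-difference accounts cited in this chapter (e.g., Schultz, Dayan, & Montague, 1997). The function name and learning rate below are illustrative, not the chapter's model:

```python
def learn_value(rewards, alpha=0.1):
    """Single-state value learning driven by reward prediction errors."""
    value = 0.0
    errors = []
    for r in rewards:
        delta = r - value       # prediction error: surprise about the reward
        value += alpha * delta  # value estimate moves toward what was observed
        errors.append(delta)
    return value, errors

# A reward of 1.0 delivered on every trial: the prediction error is large at
# first and decays toward zero as the reward becomes fully predicted.
value, errors = learn_value([1.0] * 100)
```

The decaying error signal is the computational counterpart of the dopaminergic prediction-error responses discussed in the reinforcement-learning literature: once the world is predicted, there is nothing left to learn from.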
The available models also open avenues for discovery and theoretical assessment of novel treatment options, like the mentioned psychedelic therapy approach for depression. Criticality of the brain as the optimal state of switching between attractor networks could be a concept that re-unifies social, cognitive, and emotional intelligence. While it offers a universal theory for a range of different neuronal and behavioral questions, the established computational models can help describe whole-brain dynamics in a continuum of healthy, diseased, suboptimal, and optimal brain states in great detail. We advocate the view that social, cognitive, and affective intelligence are in fact all inherently linked. While the human brain’s successful application of purely predictive intelligence has clearly helped survival, it is not necessarily conducive for a flourishing life if social and affective aspects are neglected. It would certainly seem true that spending too much time predicting something that may never come to pass might not necessarily be helpful for enjoying the “here and now” that is a major part of our state of wellbeing. As such, intelligence could be said to be a two-edged sword that might help us survive
and give us more time, but where we should not forget to enjoy this extra time and perhaps even flourish with our friends and family. Nobel-prize winning novelist John Steinbeck and marine biologist Ed Ricketts wrote presciently of the “tragic miracle of consciousness,” of how we are paradoxically bound by our “physical memories to a past of struggle and survival” and limited in our “futures by the uneasiness of thought and consciousness” (Steinbeck & Ricketts, 1941). Too narrowly focusing on prediction and not taking affective aspects such as motivation into account can easily create this paradox. This is exactly why in our proposed definition of intelligence, we have chosen to focus both on prediction and maintaining the motivational factors that allow us to flourish. One thing is to survive, but it is just as important to thrive.
References

Anokhin, A. P., Müller, V., Lindenberger, U., Heath, A. C., & Myers, E. (2006). Genetic influences on dynamic complexity of brain oscillations. Neuroscience Letters, 397(1–2), 93–98.
Anticevic, A., Cole, M. W., Murray, J. D., Corlett, P. R., Wang, X. J., & Krystal, J. H. (2012). The role of default network deactivation in cognition and disease. Trends in Cognitive Sciences, 16, 584–592.
Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge University Press.
Baars, B. J., & Franklin, S. (2007). An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA. Neural Networks, 20(9), 955–961.
Baars, B. J., Franklin, S., & Ramsøy, T. Z. (2013). Global workspace dynamics: Cortical "binding and propagation" enables conscious contents. Frontiers in Psychology, 4, 200.
Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., . . . Halgren, E. (2006). Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences USA, 103(2), 449–454.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Beaty, R. E., Benedek, M., Kaufman, S. B., & Silvia, P. J. (2015). Default and executive network coupling supports creative idea production. Scientific Reports, 5, 10964.
Berridge, K. C., & Kringelbach, M. L. (2008). Affective neuroscience of pleasure: Reward in humans and animals. Psychopharmacology, 199(3), 457–480.
Berridge, K. C., & Robinson, T. E. (2003). Parsing reward. Trends in Neurosciences, 26(9), 507–513.
Cabral, J., Hugues, E., Sporns, O., & Deco, G. (2011). Role of local network oscillations in resting-state functional connectivity. Neuroimage, 57(1), 130–139.
Cabral, J., Luckhoo, H., Woolrich, M., Joensson, M., Mohseni, H., Baker, A., . . . Deco, G. (2014). Exploring mechanisms of spontaneous functional connectivity in MEG: How delayed network interactions lead to structured amplitude envelopes of band-pass filtered oscillations. Neuroimage, 90, 423–435.
Cabral, J., Vidaurre, D., Marques, P., Magalhaes, R., Silva Moreira, P., Miguel Soares, J., . . . Kringelbach, M. L. (2017). Cognitive performance in healthy older adults relates to spontaneous switching between states of functional connectivity during rest. Scientific Reports, 7, 5135.
Carhart-Harris, R. L. (2018). The entropic brain – Revisited. Neuropharmacology, 142, 167–178.
Carhart-Harris, R. L., Leech, R., Hellyer, P., Shanahan, M., Feilding, A., Tagliazucchi, E., . . . Nutt, D. (2014). The entropic brain: A theory of conscious states informed by neuroimaging research with psychedelic drugs. Frontiers in Human Neuroscience, 8, 20.
Chanes, L., & Barrett, L. F. (2016). Redefining the role of limbic areas in cortical processing. Trends in Cognitive Sciences, 20(2), 96–106.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., & Kyllonen, P. C. (2004). Working memory is (almost) perfectly predicted by g. Intelligence, 32(3), 277–296.
Conway, A. R. A., Kane, M. J., & Engle, R. W. (2003). Working memory capacity and its relation to general intelligence. Trends in Cognitive Sciences, 7(12), 547–552.
Dayan, P., & Balleine, B. W. (2002). Reward, motivation, and reinforcement learning. Neuron, 36(2), 285–298.
De Vincenzo, I., Giannoccaro, I., Carbone, G., & Grigolini, P. (2017). Criticality triggers the emergence of collective intelligence in groups. Physical Review E, 96(2–1), 022309.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211.
Deco, G., & Jirsa, V. K. (2012). Ongoing cortical activity at rest: Criticality, multistability, and ghost attractors. Journal of Neuroscience, 32(10), 3366–3375.
Deco, G., Jirsa, V. K., & McIntosh, A. R. (2011). Emerging concepts for the dynamical organization of resting-state activity in the brain. Nature Reviews Neuroscience, 12(1), 43–56.
Deco, G., Jirsa, V. K., McIntosh, A. R., Sporns, O., & Kötter, R. (2009). Key role of coupling, delay, and noise in resting brain fluctuations. Proceedings of the National Academy of Sciences USA, 106(25), 10302–10307.
Deco, G., & Kringelbach, M. L. (2014). Great expectations: Using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron, 84(5), 892–905.
Deco, G., & Kringelbach, M. L. (2016). Metastability and coherence: Extending the communication through coherence hypothesis using a whole-brain computational perspective. Trends in Neurosciences, 39(3), 125–135.
Deco, G., Kringelbach, M. L., Jirsa, V. K., & Ritter, P. (2017). The dynamics of resting fluctuations in the brain: Metastability and its dynamical cortical core. Scientific Reports, 7(1), 3095.
Deco, G., Ponce-Alvarez, A., Hagmann, P., Romani, G. L., Mantini, D., & Corbetta, M. (2014). How local excitation-inhibition ratio impacts the whole brain dynamics. Journal of Neuroscience, 34(23), 7886–7898.
Dehaene, S., & Changeux, J.-P. (2011). Experimental and theoretical approaches to conscious processing. Neuron, 70(2), 200–227.
Dehaene, S., Changeux, J.-P., Naccache, L., Sackur, J., & Sergent, C. (2006). Conscious, preconscious, and subliminal processing: A testable taxonomy. Trends in Cognitive Sciences, 10(5), 204–211.
Dehaene, S., Kerszberg, M., & Changeux, J.-P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences USA, 95(24), 14529.
Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79(1), 1–37.
Dennis, M., Francis, D. J., Cirino, P. T., Schachar, R., Barnes, M. A., & Fletcher, J. M. (2009). Why IQ is not a covariate in cognitive studies of neurodevelopmental disorders. Journal of the International Neuropsychological Society, 15(3), 331–343.
Dimitriadis, S. I., Laskaris, N. A., Simos, P. G., Micheloyannis, S., Fletcher, J. M., Rezaie, R., & Papanicolaou, A. C. (2013). Altered temporal correlations in resting-state connectivity fluctuations in children with reading difficulties detected via MEG. Neuroimage, 83, 307–317.
Duncan, J. (2013). The structure of cognition: Attentional episodes in mind and brain. Neuron, 80(1), 35–50.
Ferguson, M. A., Anderson, J. S., & Spreng, R. N. (2017). Fluid and flexible minds: Intelligence reflects synchrony in the brain's intrinsic network architecture. Network Neuroscience, 1(2), 192–207.
Fingelkurts, A. A., Fingelkurts, A. A., Rytsälä, H., Suominen, K., Isometsä, E., & Kähkönen, S. (2007). Impaired functional connectivity at EEG alpha and theta frequency bands in major depression. Human Brain Mapping, 28(3), 247–261.
Friedman, N. P., Miyake, A., Young, S. E., DeFries, J. C., Corley, R. P., & Hewitt, J. K. (2008). Individual differences in executive functions are almost entirely genetic in origin. Journal of Experimental Psychology: General, 137(2), 201–225.
Fries, P. (2005). A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10), 474–480.
Friston, K. (1997). Transients, metastability, and neuronal dynamics. Neuroimage, 5(2), 164–171.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1521), 1211–1221.
Fuster, J. M. (2005). Cortex and mind: Unifying cognition. Oxford University Press.
Gardner, H. (1984). Frames of mind: The theory of multiple intelligences. London: Heinemann.
Garrido, M. I., Kilner, J. M., Stephan, K. E., & Friston, K. J. (2009). The mismatch negativity: A review of underlying mechanisms. Clinical Neurophysiology, 120(3), 453–463.
Ghosh, A., Rho, Y., McIntosh, A. R., Kötter, R., & Jirsa, V. K. (2008). Noise during rest enables the exploration of the brain's dynamic repertoire. PLoS Computational Biology, 4(10), e1000196.
Gläscher, J. P., & O'Doherty, J. P. (2010). Model-based approaches to neuroimaging: Combining reinforcement learning theory with fMRI data. Wiley Interdisciplinary Reviews: Cognitive Science, 1(4), 501–510.
Grigolini, P., Piccinini, N., Svenkeson, A., Pramukkul, P., Lambert, D., & West, B. J. (2015). From neural and social cooperation to the global emergence of cognition. Frontiers in Bioengineering and Biotechnology, 3, 78.
Hansenne, M., & Bianchi, J. (2009). Emotional intelligence and personality in major depression: Trait versus state effects. Psychiatry Research, 166(1), 63–68.
Heggli, O. A., Cabral, J., Konvalinka, I., Vuust, P., & Kringelbach, M. L. (2019). A Kuramoto model of self-other integration across interpersonal synchronization strategies. PLoS Computational Biology, 15(10), e1007422. doi: 10.1371/journal.pcbi.1007422.
Honey, C. J., Kötter, R., Breakspear, M., & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences USA, 104(24), 10240–10245.
Houdé, O. (2010). Beyond IQ comparisons: Intra-individual training differences. Nature Reviews Neuroscience, 11(5), 370.
Hu, G., Huang, X., Jiang, T., & Yu, S. (2019). Multi-scale expressions of one optimal state regulated by dopamine in the prefrontal cortex. Frontiers in Physiology, 10, 113.
Huron, D. (2006). Sweet anticipation: Music and the psychology of expectation. Cambridge, MA: MIT Press.
Huron, D. (2016). Voice leading: The science behind a musical art. Cambridge, MA: MIT Press.
Johnson-Laird, P. N. (2001). Mental models and deduction. Trends in Cognitive Sciences, 5(10), 434–442.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154; discussion 154–187.
Kanai, R., Komura, Y., Shipp, S., & Friston, K. (2015). Cerebral hierarchies: Predictive processing, precision and the pulvinar. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 370(1668), 20140169.
Kane, M. J., & Engle, R. W. (2002). The role of prefrontal cortex in working-memory capacity, executive attention, and general fluid intelligence: An individual-differences perspective. Psychonomic Bulletin & Review, 9(4), 637–671.
Kitzbichler, M. G., Henson, R. N. A., Smith, M. L., Nathan, P. J., & Bullmore, E. T. (2011). Cognitive effort drives workspace configuration of human brain functional networks. The Journal of Neuroscience, 31(22), 8259.
Koelsch, S., Vuust, P., & Friston, K. (2019). Predictive processes and the peculiar case of music. Trends in Cognitive Sciences, 23(1), 63–77.
Koenen, K. C., Moffitt, T. E., Roberts, A. L., Martin, L. T., Kubzansky, L., Harrington, H., . . . Caspi, A. (2009). Childhood IQ and adult mental disorders: A test of the cognitive reserve hypothesis. American Journal of Psychiatry, 166(1), 50–57.
Konvalinka, I., Vuust, P., Roepstorff, A., & Frith, C. (2009). A coupled oscillator model of interactive tapping. Proceedings of the 7th Triennial Conference of the European Society for the Cognitive Sciences of Music (ESCOM 2009), University of Jyväskylä, Jyväskylä, Finland, pp. 242–245.
Kringelbach, M. L., & Berridge, K. C. (2017). The affective core of emotion: Linking pleasure, subjective well-being, and optimal metastability in the brain. Emotion Review, 9(3), 191–199.
Kringelbach, M. L., McIntosh, A. R., Ritter, P., Jirsa, V. K., & Deco, G. (2015). The rediscovery of slowness: Exploring the timing of cognition. Trends in Cognitive Sciences, 19(10), 616–628.
Kringelbach, M. L., & Rapuano, K. M. (2016). Time in the orbitofrontal cortex. Brain, 139(4), 1010–1013.
Kringelbach, M. L., & Rolls, E. T. (2004). The functional neuroanatomy of the human orbitofrontal cortex: Evidence from neuroimaging and neuropsychology. Progress in Neurobiology, 72(5), 341–372.
Kuramoto, Y. (1975). Self-entrainment of a population of coupled non-linear oscillators. In H. Araki (ed.), International symposium on mathematical problems in theoretical physics. Lecture Notes in Physics, vol. 39 (pp. 420–422). Berlin, Heidelberg: Springer. doi: 10.1007/BFb0013365.
Landrø, N. I., Stiles, T. C., & Sletvold, H. (2001). Neuropsychological function in nonpsychotic unipolar major depression. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 14(4), 233–240.
Lee, W. H., Doucet, G. E., Leibu, E., & Frangou, S. (2018). Resting-state network connectivity and metastability predict clinical symptoms in schizophrenia. Schizophrenia Research, 201, 208–216.
Leech, R., & Sharp, D. J. (2014). The role of the posterior cingulate cortex in cognition and disease. Brain, 137(Pt. 1), 12–32.
Li, Y., Vanni-Mercier, G., Isnard, J., Mauguière, F., & Dreher, J.-C. (2016). The neural dynamics of reward value and risk coding in the human orbitofrontal cortex. Brain, 139(4), 1295–1309. doi: 10.1093/brain/awv409.
Liang, S., Brown, M. R. G., Deng, W., Wang, Q., Ma, X., Li, M., . . . Li, T. (2018). Convergence and divergence of neurocognitive patterns in schizophrenia and depression. Schizophrenia Research, 192, 327–334.
Margulies, D. S., Ghosh, S. S., Goulas, A., Falkiewicz, M., Huntenburg, J. M., Langs, G., . . . Smallwood, J. (2016). Situating the default-mode network along a principal gradient of macroscale cortical organization. Proceedings of the National Academy of Sciences USA, 113(44), 12574–12579.
Menon, V. (2011). Large-scale brain networks and psychopathology: A unifying triple network model. Trends in Cognitive Sciences, 15(10), 483–506.
Mesulam, M. M. (1998). From sensation to cognition. Brain: A Journal of Neurology, 121(6), 1013–1052.
Näätänen, R., Gaillard, A. W. K., & Mäntysalo, S. (1978). Early selective-attention effect on evoked potential reinterpreted. Acta Psychologica, 42(4), 313–329.
Näätänen, R., Paavilainen, P., Rinne, T., & Alho, K. (2007). The mismatch negativity (MMN) in basic research of central auditory processing: A review. Clinical Neurophysiology, 118(12), 2544–2590.
Niv, Y., & Schoenbaum, G. (2008). Dialogues on prediction errors. Trends in Cognitive Sciences, 12(7), 265–272.
O'Doherty, J. P. (2004). Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14(6), 769–776.
Pamplona, G. S., Santos Neto, G. S., Rosset, S. R., Rogers, B. P., & Salmon, C. E. (2015). Analyzing the association between functional connectivity of the brain and intellectual performance. Frontiers in Human Neuroscience, 9, 61.
Parkinson, C., & Wheatley, T. (2015). The repurposed social brain. Trends in Cognitive Sciences, 19(3), 133–141.
Pearce, M. T., & Wiggins, G. A. (2012). Auditory expectation: The information dynamics of music perception and cognition. Topics in Cognitive Science, 4(4), 625–652.
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87.
Ravnkilde, B., Videbech, P., Clemmensen, K., Egander, A., Rasmussen, N. A., & Rosenberg, R. (2002). Cognitive deficits in major depression. Scandinavian Journal of Psychology, 43(3), 239–251.
Reinhart, R. M. G., & Nguyen, J. A. (2019). Working memory revived in older adults by synchronizing rhythmic brain circuits. Nature Neuroscience, 22(5), 820–827.
Rohrmeier, M. A., & Koelsch, S. (2012). Predictive information processing in music cognition: A critical review. International Journal of Psychophysiology, 83(2), 164–175.
Rolls, E. T. (2010). Attractor networks. Wiley Interdisciplinary Reviews: Cognitive Science, 1(1), 119–134.
Rolls, E. T., Loh, M., Deco, G., & Winterer, G. (2008). Computational models of schizophrenia and dopamine modulation in the prefrontal cortex. Nature Reviews Neuroscience, 9(9), 696.
Roth, G., & Dicke, U. (2005). Evolution of the brain and intelligence. Trends in Cognitive Sciences, 9(5), 250–257.
Sackeim, H. A., Freeman, J., McElhiney, M., Coleman, E., Prudic, J., & Devanand, D. P. (1992). Effects of major depression on estimates of intelligence. Journal of Clinical and Experimental Neuropsychology, 14(2), 268–288.
Sänger, J., Müller, V., & Lindenberger, U. (2012). Intra- and interbrain synchronization and network properties when playing guitar in duets. Frontiers in Human Neuroscience, 6, 312.
Santarnecchi, E., Galli, G., Polizzotto, N. R., Rossi, A., & Rossi, S. (2014). Efficiency of weak brain connections support general cognitive functioning. Human Brain Mapping, 35(9), 4566–4582.
Schacter, D. L., Addis, D. R., & Buckner, R. L. (2007). Remembering the past to imagine the future: The prospective brain. Nature Reviews Neuroscience, 8(9), 657–661.
Schultz, W. (2015). Neuronal reward and decision signals: From theories to data. Physiological Reviews, 95(3), 853–951.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593–1599.
Schultz, W., & Dickinson, A. (2000). Neuronal coding of prediction errors. Annual Review of Neuroscience, 23(1), 473–500.
Shen, X., Cox, S. R., Adams, M. J., Howard, D. M., Lawrie, S. M., Ritchie, S. J., . . . Whalley, H. C. (2018). Resting-state connectivity and its association with cognitive performance, educational attainment, and household income in the UK Biobank. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 3(10), 878–886.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., . . . Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., . . . Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359.
Singer, W. (2001). Consciousness and the binding problem. Annals of the New York Academy of Sciences, 929, 123–146.
Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. Neuroimage, 41(3), 1168–1176.
Stam, C. J., & van Straaten, E. C. (2012). The organization of physiological brain networks. Clinical Neurophysiology, 123(6), 1067–1087.
Steinbeck, J., & Ricketts, E. F. (1941). The log from the Sea of Cortez. London: Penguin.
Stern, Y. (2009). Cognitive reserve. Neuropsychologia, 47(10), 2015–2028.
Stone, A. A., Schwartz, J. E., Broderick, J. E., & Deaton, A. (2010). A snapshot of the age distribution of psychological well-being in the United States. Proceedings of the National Academy of Sciences USA, 107(22), 9985–9990.
Thorndike, E. L. (1920). Intelligence and its uses. Harper's Magazine, 140, 227–235.
Tognoli, E., & Kelso, J. A. (2014). The metastable brain. Neuron, 81(1), 35–48.
Turalska, M., Geneston, E., West, B. J., Allegrini, P., & Grigolini, P. (2012). Cooperation-induced topological complexity: A promising road to fault tolerance and Hebbian learning. Frontiers in Physiology, 3, 52.
Turalska, M., West, B. J., & Grigolini, P. (2013). Role of committed minorities in times of crisis. Scientific Reports, 3, 1371.
van den Heuvel, M. P., & Hulshoff Pol, H. E. (2010). Exploring the brain network: A review on resting-state fMRI functional connectivity. European Neuropsychopharmacology, 20(8), 519–534.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624.
Veiel, H. O. (1997). A preliminary profile of neuropsychological deficits associated with major depression. Journal of Clinical and Experimental Neuropsychology, 19(4), 587–603.
Voytek, B., & Knight, R. T. (2015). Dynamic network communication as a unifying neural basis for cognition, development, aging, and disease. Biological Psychiatry, 77(12), 1089–1097.
Vuust, P., & Frith, C. D. (2008). Anticipation is the key to understanding music and the effects of music on emotion. Behavioral and Brain Sciences, 31(5), 599–600.
Vuust, P., & Kringelbach, M. L. (2010). The pleasure of making sense of music. Interdisciplinary Science Reviews, 35(2), 166–182.
Werner, G. (2007). Metastability, criticality and phase transitions in brain and its models. Biosystems, 90(2), 496–508.
Zakzanis, K. K., Leach, L., & Kaplan, E. (1998). On the nature and pattern of neurocognitive function in major depressive disorder. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 11(3), 111–119.
Zylberberg, A., Dehaene, S., Roelfsema, P. R., & Sigman, M. (2011). The human Turing machine: A neural framework for mental programs. Trends in Cognitive Sciences, 15(7), 293–300.
PART III
Neuroimaging Methods and Findings
10 Diffusion-Weighted Imaging of Intelligence

Erhan Genç and Christoph Fraenz

Since the dawn of intelligence research, it has been of considerable interest to establish a link between intellectual ability and the various properties of the brain. In the second half of the nineteenth century, scientists such as Broca and Galton were among the first to use craniometry to investigate relationships between different measures of head size and intellectual ability (Deary, Penke, & Johnson, 2010; Galton, 1888). However, since craniometry can at best provide a very coarse estimate of actual brain morphometry, and adequate methods for intelligence testing were not established at that time, these efforts were not particularly successful in producing insightful evidence. About 100 years later, technical developments in neuroscientific research, such as the introduction of magnetic resonance imaging (MRI), enabled scientists to assess a wide variety of the brain's structural properties in vivo and relate them to cognitive capacity. One of the most prominent and stable findings from this line of research is that bigger brains tend to perform better at intelligence-related tasks. Meta-analyses comprising several thousand individuals have reported correlation coefficients in the range of .24–.33 for the association between overall brain volume and intelligence (McDaniel, 2005; Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015). A common biological explanation for this association is that individuals with more cortical volume are likely to possess more neurons (Pakkenberg & Gundersen, 1997) and thus more computational power to engage in problem-solving and logical reasoning. The studies mentioned so far were mainly concerned with relationships between intelligence and different macrostructural properties of gray matter, leaving a large gap of knowledge at the cellular level.
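To give a concrete sense of an association in the reported .24–.33 range, the toy simulation below (variable names, sample size, and data are invented for illustration; this is not the meta-analytic procedure) generates two standardized variables sharing about 9% of their variance and computes their Pearson correlation:

```python
import math
import random
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

rng = random.Random(0)
n = 5000
# Simulated standardized brain volume, then simulated IQ scores constructed to
# share 9% of their variance with it (true r = .30, the top of the reported range).
volume = [rng.gauss(0.0, 1.0) for _ in range(n)]
iq = [0.30 * v + math.sqrt(1 - 0.30 ** 2) * rng.gauss(0.0, 1.0) for v in volume]
r = pearson_r(volume, iq)  # close to .30 in a sample of this size
```

An r of .30 corresponds to only about 9% shared variance, which is why brain size alone is far from a sufficient explanation of intelligence differences, motivating the finer-grained measures discussed next.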
A recent working hypothesis endorses the idea that interindividual differences in intelligence are not only manifested in the amount of brain tissue, e.g., cortical thickness or surface area, but also in its wiring properties, which comprise markers like circuit complexity or dendritic arborization (Neubauer & Fink, 2009). Until recently, little was known about the relationship between cortical microstructure and intelligence. The introduction of novel in vivo diffusion-based MRI techniques such as neurite orientation dispersion and density imaging (NODDI) (Zhang, Schneider, Wheeler-Kingshott, & Alexander, 2012) opened up new opportunities in this regard. The first study utilizing NODDI in order to shed light on possible microstructural correlates affecting intelligence was
conducted by Genç et al. (2018). The authors analyzed data from two large independent samples, which together comprised well over 700 individuals. Surprisingly, they found dendritic density and complexity, averaged across the whole cortex, to be negatively associated with matrix reasoning test scores. These results indicate that fluid intelligence is likely to benefit from cortical mantles with sparsely organized dendritic arbor. This kind of architecture might increase information processing speed and network efficiency within the cortex. Therefore, it could serve as a potential neuroanatomical foundation for the neural efficiency hypothesis of intelligence (Neubauer & Fink, 2009). More specific analyses at the level of single brain regions confirmed the pattern of results observed for the overall cortex. Statistically significant correlations were negative and almost exclusively localized within brain regions overlapping with areas from the P-FIT network. However, it has to be noted that a few cortical areas, some of them located in the middle temporal gyrus, also showed positive associations between dendritic complexity and intelligence but failed to reach statistical significance due to strict correction for multiple comparisons. Interestingly, a recent study by Goriounova et al. (2018) also observed positive associations between dendritic complexity and intelligence in the middle temporal gyrus. Here, the authors examined a sample of epilepsy patients who underwent neurosurgical treatment, which allowed them to directly extract non-pathological brain tissue from living human brains for histological and electrophysiological investigation. In combination with psychometric intelligence testing prior to surgery, the authors were able to show that dendritic size and complexity of pyramidal neurons in the middle temporal gyrus are positively associated with intelligence.
A computational model incorporating structural and electrophysiological data indicated that larger dendrites generate faster action potentials, which in turn increase processing of synaptic inputs with higher temporal precision and thus improve efficient information transfer (Goriounova & Mansvelder, 2019). Combined evidence from Genc et al. (2018) and Goriounova et al. (2018) suggests that high intelligence relies on efficient information processing that can be achieved by a proper differentiation between signal and noise. This might be realized by a dendritic circuitry that increases the signal within certain brain regions like the middle temporal gyrus (positive associations between dendritic complexity and intelligence) and decreases the noise within fronto-parietal regions (negative associations between dendritic complexity and intelligence). It is important to note that interindividual differences in intelligence are not only related to neuroanatomical properties exhibited by gray matter. Features like white matter volume also show relevant associations with intelligence (Narr et al., 2007). White matter is mainly comprised of myelinated axons transferring information from one brain region to another, making it crucial for human cognition and behavior (Filley, 2012). Consequently, information transfer within the P-FIT network is also accomplished through white matter
Diffusion-Weighted Imaging of Intelligence
fiber tracts (Jung & Haier, 2007). Brain regions constituting the P-FIT network are distributed across both cerebral hemispheres. This emphasizes the relevance of functional interaction between regions and across both hemispheres for cognitive performance in tasks demanding high intellectual ability. Given that the corpus callosum represents the most important connection between both hemispheres, it has long been suggested that the layout of the corpus callosum is likely to influence interhemispheric connectivity and thus intellectual performance (Hulshoff-Pol et al., 2006). One of the first studies investigating this relationship in a sample of healthy adults and epilepsy patients found a moderate positive association between midsagittal corpus callosum area and intelligence (Atkinson, Abou-Khalil, Charles, & Welch, 1996). In another study, Luders et al. (2007) quantified callosal morphology in healthy adults by measuring callosal thickness at 100 equidistant points across the whole structure. The authors observed significant positive correlations between various IQ measures and callosal thickness, which were exhibited in the posterior half of the corpus callosum. In a study comprising monozygotic and dizygotic twins, Hulshoff-Pol et al. (2006) were able to show that the associations between callosal morphology and intelligence are influenced by a common genetic factor. The authors employed a cross-trait/cross-twin approach, which is capable of quantifying the general extent to which structure–function relationships are driven either genetically or by environmental factors. Respective analyses do not operate on a molecular level and are not designed to identify specific genes contributing to the associations of interest. However, it is reasonable to assume that genetic factors underlying complex characteristics such as general intelligence are constituted by a multitude of genes rather than a single gene alone. 
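The cross-trait/cross-twin logic can be sketched numerically: one twin's brain measure is correlated with the other twin's cognitive score, and a markedly higher correlation in monozygotic than in dizygotic pairs implicates shared genes. A toy illustration with simulated data (the function name and all values are hypothetical, not taken from the studies cited above):

```python
import numpy as np

def cross_twin_cross_trait_r(brain_twin1, cognition_twin2):
    """Correlate twin 1's brain measure with twin 2's cognitive score.

    If this cross-twin/cross-trait correlation is clearly higher in
    monozygotic than in dizygotic pairs, a common genetic factor is
    implicated in the brain-behavior association.
    """
    return np.corrcoef(brain_twin1, cognition_twin2)[0, 1]

# Hypothetical example: simulate 100 MZ pairs in which a shared latent
# factor links callosal thickness (twin 1) to IQ (twin 2).
rng = np.random.default_rng(0)
shared = rng.normal(size=100)                     # latent familial factor
thickness_twin1 = shared + 0.5 * rng.normal(size=100)
iq_twin2 = shared + 0.5 * rng.normal(size=100)
r_mz = cross_twin_cross_trait_r(thickness_twin1, iq_twin2)
```

Repeating the computation in dizygotic pairs, who share on average only half of their segregating genes, and comparing the two correlations is the core of the approach.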
The aforementioned studies investigated the relationship between intelligence differences and white matter macrostructure. In order to quantify white matter in terms of its microstructure, a more elaborate approach combining standard MRI and diffusion-weighted imaging (DWI) can be used. DWI is based on the diffusion process of water molecules within biological tissue like the human brain (Le Bihan, 2003). Within fluid-filled spaces, such as ventricles, diffusion is nearly unbounded and thus non-directional (isotropic). In contrast, diffusion within white matter is more directional (anisotropic). Here, the membrane around nerve fibers constitutes a natural border that forces water molecules to move along the direction of axons, creating an anisotropic diffusion pattern. For each voxel of a DWI data set, water diffusion is represented as an ellipsoid shaped by the three orthogonally arranged diffusion vectors v1, v2, and v3, while the eigenvalues λ1, λ2, and λ3 describe the degree of diffusion along each vector (Figure 10.1a). There are various metrics to describe certain aspects of tissue-related water diffusion. The axial diffusivity (AD) corresponds to λ1 and represents water diffusion along the principal direction of axons within a voxel. The radial diffusivity (RD) is the mean value of λ2 and λ3 and represents water diffusion perpendicular to the principal direction of
E. Genç and C. Fraenz
Figure 10.1 The top half depicts ellipsoids (left side, A) and tensors (right side, B) that were yielded by means of diffusion-weighted imaging and projected onto a coronal slice of an MNI brain. Within each voxel, the main diffusion direction of water molecules is represented by the orientation of ellipsoids or tensors and also visualized in the form of RGB color coding (left–right axis = red, anterior–posterior axis = green, superior–inferior axis = blue). The bottom half shows enlarged images of the corpus callosum that provide a more detailed visualization of ellipsoids and tensors.
axons. The mean diffusivity (MD) is the average water diffusion across all eigenvalues λ1, λ2, and λ3. It is lower in structurally organized and higher in directionally disorganized tissue segments. The fractional anisotropy (FA) is a non-linear combination of λ1, λ2, and λ3 (Basser & Pierpaoli, 1996). It can take any value between 0 and 1, with 0 implying no directionality of water diffusion, e.g., in ventricles, and 1 representing the most extreme form of directed water diffusion, i.e., water moving along the principal direction exclusively with no diffusion perpendicular to that axis. FA can be considered the most commonly used measure to describe white matter microstructure in terms of “microstructural integrity.” Several morphological factors such as axon diameter, fiber density, myelin concentration, and the distribution of fiber orientation can influence the aforementioned microstructural measures (Beaulieu, 2002; Le Bihan, 2003). Myelin concentration was found to exert a comparatively small effect on the magnitude of FA values (Beaulieu, 2002), but more recent studies show contrasting results (Ocklenburg et al., 2018; Sampaio-Baptista et al., 2013). In a special application of DWI, known as diffusion tensor imaging (DTI), the three vectors and eigenvalues are quantified voxel by voxel and summarized in the form of so-called tensors (Figure 10.1b). Thus, a tensor contains information about the directional motion distribution of water
molecules within a given voxel and allows for conclusions to be drawn about the orientation of adjacent nerve fibers (Mori, 2007). The trajectories of fiber bundles can be virtually constructed by means of mathematical fiber tracking algorithms, which provides the opportunity to estimate structural connectivity between different brain regions (Mori, 2007). While there are different models for the purpose of reconstructing fiber bundles or streamlines, e.g., the q-ball model (Tuch, 2004) or the ball-and-stick model (Behrens et al., 2003), virtual fiber tractography can be categorized as either probabilistic or deterministic (Campbell & Pike, 2014). The main difference between the two methods is that probabilistic tractography computes the likelihood with which individual voxels constitute the connection between a given seed and target region (Behrens, Berg, Jbabdi, Rushworth, & Woolrich, 2007; Morris, Embleton, & Parker, 2008), whereas deterministic tractography provides only one definitive solution in this regard (Mori, 2007). Despite their differences, both types of tractography are capable of reconstructing the human brain’s connectome, which is constituted by white matter fiber bundles (Catani & Thiebaut de Schotten, 2008). Respective fiber bundles can be quantified by different markers such as microstructural integrity, which provides the opportunity to associate them with interindividual differences in cognitive performance. This procedure is called quantitative tractography. An illustration of major white matter pathways that were found to exhibit associations with interindividual intelligence differences is presented in Figure 10.2. One way of performing quantitative tractography is to follow a hypothesis-driven approach and segregate a fiber bundle into specific regions of interest (ROI) from which certain features such as FA can be extracted. Another way is to investigate white matter properties averaged across a whole fiber bundle. Deary et al. 
(2006) employed an ROI-based method in a sample of older adults from the Lothian Birth Cohort (LBC). They found general intelligence to be positively associated with FA exhibited by the centrum semiovale. This region is a conjunction area in the center of the brain and consists of cortical projection (corona radiata or corticospinal tract) as well as association fibers (superior longitudinal fasciculus). In another study comprising a sample of young adults, Tang et al. (2010) followed a similar approach and defined multiple ROIs within various intra- and interhemispheric fiber bundles. For the whole group, general intelligence was not related to FA in any of the tracts. This lack of replication might be due to the fact that Deary et al. (2006) examined a substantially older sample than Tang et al. (2010). Further, Deary et al. (2006) observed a positive association only in the centrum semiovale, a structure that was not examined by Tang et al. (2010). However, when conducting the same analysis separately for males and females, Tang et al. (2010) found bilateral ROIs located in anterior callosal fibers (genu) to exhibit positive associations in females and negative associations in males.
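The scalar metrics introduced earlier follow directly from the three tensor eigenvalues; a minimal sketch using the standard formulas (Basser & Pierpaoli, 1996):

```python
import numpy as np

def diffusion_metrics(l1, l2, l3):
    """Compute AD, RD, MD, and FA from the three tensor eigenvalues."""
    ad = l1                       # axial diffusivity: along the principal axis
    rd = (l2 + l3) / 2.0          # radial diffusivity: perpendicular to it
    md = (l1 + l2 + l3) / 3.0     # mean diffusivity: average over all axes
    # Fractional anisotropy, bounded between 0 (isotropic) and 1.
    den = l1**2 + l2**2 + l3**2
    num = (l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2
    fa = float(np.sqrt(1.5 * num / den)) if den > 0 else 0.0
    return ad, rd, md, fa

# Isotropic voxel (e.g., inside a ventricle): FA = 0.
iso = diffusion_metrics(1.0, 1.0, 1.0)
# Perfectly anisotropic voxel (diffusion along one axis only): FA = 1.
aniso = diffusion_metrics(1.0, 0.0, 0.0)
```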
Figure 10.2 White matter fiber tracts whose microstructural properties were found to correlate with interindividual differences in intelligence. Respective tracts were reconstructed by means of deterministic constrained spherical deconvolution fiber tractography. Each panel shows a single fiber bundle overlaid onto axial and sagittal brain slices from an MNI template (viewed from a posterior and left-hand side perspective). The number next to each fiber bundle depicts how many times the respective tract has been reported to exhibit significant associations between its microstructural properties and intelligence. Numbers are placed in circles that are color coded (0 = red, 15 = yellow). The first row depicts projection tracts that connect the thalamus to the frontal cortex (anterior thalamic radiation) and to the visual cortex (posterior thalamic radiation). The second row depicts projection tracts that connect the cortex to the brain stem (corona radiata) and the medial temporal lobe to the hypothalamic nuclei (fornix). The third row depicts different segments of the corpus callosum which is the largest commissural tract connecting both hemispheres. Respective panels show interhemispheric tracts connecting the prefrontal cortices (genu), the frontal cortices and temporal cortices (midbody), as well as the parietal cortices and visual cortices (splenium). The remaining rows depict fiber bundles that run within a hemisphere connecting distal cortical areas. 
Respective panels show tracts that connect medial frontal, parietal, occipital, temporal, and cingulate cortices (cingulum); tracts that connect the occipital and temporal cortex (inferior longitudinal fasciculus); tracts that connect the orbital and lateral frontal cortices to the occipital cortex (inferior fronto-occipital fasciculus); tracts that connect the orbitofrontal cortex to the anterior and medial temporal cortices (uncinate fasciculus), and tracts that connect the perisylvian frontal, parietal, and temporal cortices (superior longitudinal fasciculus and arcuate fasciculus). (Catani & Thiebaut de Schotten, 2008)
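Deterministic tractography of the kind used to reconstruct the tracts in Figure 10.2 can be caricatured as principal-direction following: from a seed, step repeatedly along the local principal eigenvector and stop when FA falls below a threshold or the streamline leaves the volume. A toy sketch with a synthetic direction field and hypothetical parameter values (real implementations add interpolation, curvature constraints, and bidirectional tracking):

```python
import numpy as np

def track_streamline(seed, directions, fa, step=0.5, fa_min=0.2, max_steps=1000):
    """Follow the principal diffusion direction voxel by voxel.

    directions: (X, Y, Z, 3) array of unit principal eigenvectors.
    fa: (X, Y, Z) array of fractional anisotropy values.
    Tracking stops at low anisotropy or at the volume boundary.
    """
    point = np.asarray(seed, dtype=float)
    path = [point.copy()]
    for _ in range(max_steps):
        vox = tuple(np.round(point).astype(int))
        if not all(0 <= v < s for v, s in zip(vox, fa.shape)):
            break                      # streamline left the image volume
        if fa[vox] < fa_min:
            break                      # entered low-anisotropy tissue
        point = point + step * directions[vox]
        path.append(point.copy())
    return np.array(path)

# Toy 10x3x3 volume with a coherent fiber population running along x.
dirs = np.zeros((10, 3, 3, 3))
dirs[..., 0] = 1.0                     # principal direction: +x everywhere
fa_map = np.full((10, 3, 3), 0.8)      # uniformly high anisotropy
streamline = track_streamline((0, 1, 1), dirs, fa_map)
```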
There are several studies that followed a tract-based approach in order to investigate the relationships between white matter properties and intelligence in older adults from the LBC. One of the first studies analyzing a pilot sample from the LBC was conducted by Penke et al. (2010). They reconstructed eight major fiber tracts using probabilistic tractography, namely cingulum, cingulate gyrus, uncinate fasciculus, and arcuate fasciculus in both hemispheres as well as genu and splenium of the corpus callosum. The authors found that processing speed was significantly associated with the average FA values of all eight major tracts. Interestingly, they were also able to extract a general factor of white matter FA by means of confirmatory principal component analysis (PCA). This factor explained about 40% of variance shown by the eight tracts and was significantly associated with processing speed but not with general intelligence. After the whole data set from the LBC had been made publicly available, Penke et al. (2012) conducted a follow-up study in order to further investigate the relationships between general intelligence, processing speed, and several white matter properties, namely FA, longitudinal relaxation time,
and magnetization transfer ratio (MTR). The latter is assumed to be a marker of myelin concentration (Wolff & Balaban, 1989) under certain conditions (MacKay & Laule, 2016). They employed probabilistic fiber tracking in order to reconstruct 12 major fiber tracts, namely cingulum, cingulate gyrus, uncinate fasciculus, arcuate fasciculus, inferior longitudinal fasciculus, and anterior thalamic radiation in both hemispheres as well as genu and splenium of the corpus callosum. Since all white matter properties (FA, longitudinal relaxation time, and MTR) were highly correlated across the 12 fiber bundles, the authors decided to extract three general factors, each representing one of the three biomarkers. These general factors were significantly correlated with general intelligence. However, respective associations were completely mediated by processing speed, which indicates that processing speed serves as a mediator between white matter properties and general intelligence in older adults (Penke et al., 2012). The same group of researchers investigated the relationships between general intelligence and the aforementioned white matter properties exhibited by each of the 12 individual tracts (Booth et al., 2013). Irrespective of processing speed, they found positive associations between general intelligence and average FA values extracted from bilateral uncinate fasciculus, inferior longitudinal fasciculus, anterior thalamic radiation, and cingulum. Cremers et al. (2016) conducted a similar quantitative tractography study using a very large sample of older adults. The authors extracted average FA values from 14 different white matter tracts that were reconstructed by means of probabilistic tractography. Further, they measured general cognitive performance by means of a test battery assessing memory, executive functions, word fluency, and motor speed. 
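The mediation pattern reported by Penke et al. (2012) can be illustrated with a simple regression decomposition: the total effect of FA on intelligence (c) shrinks to a direct effect (c′) once the mediator is added as a predictor, and complete mediation means c′ approaches zero. A bare-bones sketch of the logic with simulated data, not the authors' actual analysis (all effect sizes hypothetical):

```python
import numpy as np

def ols_coefs(y, X):
    """Least-squares coefficients of y on the columns of X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 500
fa = rng.normal(size=n)                        # predictor: tract-average FA
speed = 0.7 * fa + 0.5 * rng.normal(size=n)    # mediator: processing speed
iq = 0.8 * speed + 0.5 * rng.normal(size=n)    # outcome depends on speed only

c = ols_coefs(iq, fa[:, None])[1]                         # total effect of FA
c_prime = ols_coefs(iq, np.column_stack([fa, speed]))[1]  # direct effect of FA
indirect = c - c_prime    # portion of the effect routed through speed
```

In this simulation the outcome is built to depend on FA only through speed, so c is clearly positive while c′ is near zero, mirroring the "complete mediation" pattern described in the text.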
General cognitive performance was positively associated with average FA values exhibited within the anterior and posterior thalamic radiation, the superior and inferior longitudinal fasciculus, the inferior fronto-occipital fasciculus, the uncinate fasciculus, as well as the genu and the splenium of the corpus callosum. Importantly, when controlling for the relationship between cognitive ability and FA averaged across the overall white matter, only the associations observed for posterior thalamic radiation, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus remained statistically significant. Evidence from the aforementioned studies is based on older adults, but there are also some quantitative tractography studies in younger samples. Based on previous findings indicating that general intelligence is positively associated with corpus callosum size and thickness, Kontis et al. (2009) investigated whether microstructural abnormalities in the corpus callosum of preterm born adults may lead to a reduction of general intelligence. In a sample of young adults who were born preterm or term, the authors observed a positive association between general intelligence and average FA exhibited by the corpus callosum. Specifically, preterm born adults showed reduced FA values and thus lower general intelligence, whereas both measures were increased in adults born at term. In another sample comprising young adults, Yu et al.
(2008) performed deterministic tractography in order to relate general intelligence to average FA values within seven white matter tracts. They observed positive associations for the genu and midbody of the corpus callosum, the uncinate fasciculus, the posterior thalamic radiation and the corticospinal tract. Using a hypothesis-driven approach in a very large sample of adults, Kievit et al. (2014) were able to confirm previously reported results for the genu of the corpus callosum. They found this fiber bundle’s average FA to be positively correlated with fluid intelligence. Urger et al. (2015) observed positive associations between general intelligence and average FA exhibited by the arcuate fasciculus in a sample of adolescents. In a comparable sample, Ferrer et al. (2013) found fluid intelligence and processing speed to be positively correlated with average FA values extracted from the superior longitudinal fasciculus, the whole corpus callosum, and the corticospinal tract. Comparable to the findings reported by Penke et al. (2012) for older adults from the LBC, the correlations between average FA values and fluid intelligence were completely mediated by processing speed in the sample of adolescents as well. This indicates that processing speed can be regarded as a mediator in the relationship between white matter properties and intelligence, not only for older adults but also for adolescents (Ferrer et al., 2013). Muetzel et al. (2015) were the first to employ quantitative tractography in a large sample of children. They utilized probabilistic fiber tractography in order to reconstruct seven major white matter pathways, namely the genu and splenium of the corpus callosum, the cingulum, the uncinate fasciculus, the superior longitudinal fasciculus, the inferior longitudinal fasciculus, and the corticospinal tract. In accordance with previous findings (Penke et al., 2010), the average FA values of all tracts were highly correlated with each other. 
As a consequence, the authors extracted a general factor of white matter FA. This factor was positively correlated with general intelligence. In order to determine whether all or only some of the tracts contributed to this association, univariate regression analyses were used. After applying correction for multiple comparisons, average FA exhibited by the right uncinate fasciculus remained as the only significant predictor of general intelligence (Muetzel et al., 2015). In summary, it is apparent that interindividual differences in intelligence are associated with microstructural properties of many major pathways across different age ranges. It can also be noted that most of the respective fiber tracts represent critical links between brain regions constituting the P-FIT network. However, most of the aforementioned studies investigated white matter properties of a priori selected tracts averaged across whole fiber bundles. In comparison to voxel-wise analyses, this approach is very likely to overlook relevant relationships exhibited by fiber bundles that are not included in the set of white matter tracts being analyzed. With diffusion-weighted data there are two approaches towards computing voxel-wise
statistics. The first method is very similar to VBM. Here, individual FA maps are transformed to a common stereotactic space, smoothed, and subjected to voxel-wise statistical comparisons covering the whole white matter compartment. Schmithorst, Wilke, Dardzinski, and Holland (2005) and Schmithorst (2009) are among the first studies that used this technique in samples of children and adolescents. Both studies observed positive correlations between general intelligence and FA in voxels located in bilateral arcuate fasciculus, corona radiata, superior longitudinal fasciculus, and the splenium of the corpus callosum. Allin et al. (2011) employed the same technique in a sample of young adults born preterm or term. Similar to a previous study by Kontis et al. (2009), the authors investigated whether microstructural abnormalities exhibited by white matter voxels of preterm born adults are linked to a reduction in general intelligence. Their results show that a decrease in FA in voxels belonging to the corona radiata or the genu of the corpus callosum was associated with a reduction in general intelligence in preterm born adults. In order to investigate the extent to which the association between general intelligence and white matter properties is driven by genetic or environmental factors, Chiang et al. (2009) employed VBM in a sample comprising monozygotic and dizygotic twins. The authors followed a cross-trait/cross-twin approach and computed respective correlations for all voxels within white matter. Results showed positive associations between general intelligence and FA values exhibited by voxels located in the cingulum, the posterior corpus callosum, the posterior thalamic radiation, the corona radiata, and the superior longitudinal fasciculus. As with Hulshoff-Pol et al. (2006), respective associations were mainly mediated by a common genetic factor. 
Again, it is reasonable to assume that this common genetic factor was constituted by a multitude of genes rather than a single gene alone. Although the application of VBM in white matter has its advantages compared to quantitative tractography, this method also suffers from various shortcomings like partial volume effects, spatial smoothing, and the use of arbitrary thresholds (see Chapter 11 for a discussion of some of these issues). One technique that is able to overcome these limitations and combine the strengths of both VBM and quantitative tractography is called Tract-Based Spatial Statistics (TBSS) and was introduced by Smith et al. (2006). Similar to VBM, TBSS also transforms individual FA maps to a common stereotactic space. Subsequently, a mean FA map is created by averaging the spatially normalized images from all individuals. This FA map is thinned out to create a white matter “skeleton” that only includes those voxels at the center of fiber tracts. Finally, the “skeletonized” FA maps are subjected to voxel-wise statistical comparisons. Wang et al. (2012) were among the first to employ this technique in a small sample of adolescents. They found that general intelligence was positively correlated with FA values extracted from voxels located in the anterior–inferior fronto-occipital fasciculus. A similar study, examining mathematically gifted adolescents and a control sample, revealed that FA
values extracted from voxels predominantly located in the genu, midbody, and splenium of the corpus callosum were positively associated with general intelligence. Voxels belonging to the fornix and the anterior limb of the left internal capsule also exhibited this association. Nusbaum et al. (2017) conducted a study in which they compared children with very high and normal levels of general intelligence. They observed higher FA values in gifted children predominantly located in voxels corresponding to the genu, midbody, and splenium of the corpus callosum, the corona radiata, the internal and external capsules, the uncinate fasciculus, the fornix, and the cingulum. In order to investigate how maturation of white matter microstructure affects intellectual ability, Tamnes et al. (2010) conducted a TBSS study in a large cross-sectional sample of children and young adults. The authors did not report any associations between FA and general intelligence but focused on measures of verbal and non-verbal abilities. For both abilities they observed positive correlations with FA values exhibited by voxels located in the superior longitudinal fasciculus. Furthermore, verbal abilities were significantly associated with FA in voxels from the anterior thalamic radiation and the cingulum, whereas voxels belonging to the genu of the corpus callosum exhibited significant correlations between FA and non-verbal abilities. As of today, there are two studies that report TBSS findings in adults (Dunst, Benedek, Koschutnig, Jauk, & Neubauer, 2014; Malpas et al., 2016). Interestingly, both studies report substantially different results although they examined equally sized samples with comparable age ranges, used very similar test batteries in order to assess general intelligence, and performed the same statistical analyses including age and sex as covariates. Dunst et al. 
(2014) observed no significant association between general intelligence and FA in any of the white matter voxels covered by the TBSS skeleton. However, when performing the same analysis separately for both sexes, females still showed no effects, but males exhibited positive correlations in voxels belonging to the genu of the corpus callosum. In contrast, Malpas et al. (2016) demonstrated a strikingly different pattern of results. In the overall sample, they found general intelligence to be positively associated with FA in a widespread network of white matter voxels (about 30% of the skeleton). Respective voxels were located in the anterior thalamic radiation, the superior longitudinal fasciculus, the inferior fronto-occipital fasciculus, and the uncinate fasciculus. It is unclear why the results observed by Malpas et al. (2016) failed to replicate those reported by Dunst et al. (2014), even though both studies employed highly comparable methods and investigated seemingly large samples. Unfortunately, lack of replication is a common problem in neuroscientific intelligence research and other disciplines as well. An adequate way of dealing with such issues is to recruit even larger samples in collaborative efforts between multiple research sites. Sharing data and reporting only those findings which can be observed consistently across different samples represents a suitable approach towards producing reliable scientific evidence.
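The voxel-wise analyses discussed in this section boil down to correlating FA with an ability score at every voxel and then controlling the family-wise error across thousands of tests. A minimal sketch of one standard correction strategy, max-statistic permutation (the approach implemented, in far more elaborate form, by tools such as FSL's randomise); the data and all parameters here are simulated and hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_voxels = 60, 500

iq = rng.normal(size=n_subjects)
fa = rng.normal(size=(n_subjects, n_voxels))
fa[:, :10] += 0.8 * iq[:, None]      # only the first 10 voxels carry signal

def voxelwise_r(scores, maps):
    """Pearson r between one score vector and every voxel column at once."""
    zs = (scores - scores.mean()) / scores.std()
    zm = (maps - maps.mean(axis=0)) / maps.std(axis=0)
    return zs @ zm / len(scores)

r_obs = voxelwise_r(iq, fa)

# Family-wise correction: build the null distribution of the *largest*
# |r| across all voxels from permuted scores; the 95th percentile of
# that distribution is the corrected significance threshold.
null_max = np.array([np.abs(voxelwise_r(rng.permutation(iq), fa)).max()
                     for _ in range(500)])
threshold = np.quantile(null_max, 0.95)
significant = np.flatnonzero(np.abs(r_obs) > threshold)
```

Because the threshold is calibrated against the maximum statistic, any voxel exceeding it is significant at a family-wise error rate of 5%, which is why such corrections are strict and why weak but real associations can fail to survive them.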
Finally, there are two studies reporting TBSS results in individuals of older age. Haász et al. (2013) demonstrated that a general factor of fluid intelligence, which was extracted from a test battery measuring matrix reasoning, processing speed, and memory, was positively correlated with FA in a widespread network of white matter voxels (about 30% of the skeleton). The strongest effects were exhibited by voxels located in the inferior fronto-occipital fasciculus, the inferior longitudinal fasciculus, the superior longitudinal fasciculus, the arcuate fasciculus, the uncinate fasciculus, the anterior thalamic radiation, and the genu of the corpus callosum. Interestingly, when conducting separate analyses for the three components underlying the general factor of fluid intelligence, processing speed was the only component replicating the results observed for higher order fluid intelligence. Kuznetsova et al. (2016) examined a sample from the LBC comprising adults of older age. They found information processing to be positively associated with FA in voxels from all major white matter fiber pathways. In summary, it can be said that results from quantitative tractography and voxel-based analyses both demonstrate that microstructural integrity of several different fiber bundles is associated with interindividual differences in cognitive performance. It is conceivable that the integrity of respective fiber bundles is key to efficient communication between regions predominantly located in areas constituting the P-FIT network and thus fosters higher intelligence. Moreover, information exchange beneficial for intelligence is not only restricted to the interaction between single brain regions but also involves communication across whole brain networks (Barbey, 2018). In addition to the microstructural integrity of specific tracts, recent empirical studies show that general intelligence is also reflected in the efficiency with which a white matter network is organized. 
The efficiency or quality of brain networks can be quantified by analyzing data obtained via DWI with methods borrowed from graph theory. A widely used metric to describe the quality of information exchange within a brain network is called global efficiency. This metric is able to quantify the degree of efficient communication between all regions within a brain network. The extent to which a specific brain region contributes to efficient information exchange across a whole network is referred to as nodal efficiency. The first study to introduce the graph analytical approach to intelligence research was conducted by Li et al. (2009). The authors observed that higher general intelligence was associated with higher global efficiency in a sample of young adults. Li et al. (2009) also quantified the nodal efficiency of individual brain areas and investigated its relationship with general intelligence. In accordance with P-FIT, they demonstrated that the nodal efficiency exhibited by brain regions located in parietal, temporal, occipital, and frontal lobes as well as cingulate cortex and three subcortical structures was related to interindividual differences in general intelligence. Another study that employed the graph analytical approach in a sample of young females found that higher
global efficiency was associated with higher fluid intelligence and working memory capacity but not processing speed (Pineda-Pardo, Martínez, Román, & Colom, 2016). Genc et al. (2019) used DWI and graph theory to investigate the association between global efficiency and general knowledge, which is considered a vital marker of crystallized intelligence. In a large sample of over 300 young males and females, they found general knowledge to be positively related to global efficiency and observed this association to be driven by positive correlations between general knowledge and the nodal efficiency values of brain regions from the P-FIT network. Ryman et al. (2016) investigated how intellectual ability is related to morphometric properties and network characteristics in a large sample of young adults. They found that global efficiency and total gray matter volume were both associated with general intelligence differences in females, whereas the overall volume of parieto-frontal brain regions served as the only significant predictor of intelligence in males. Interestingly, Ma et al. (2017) were able to confirm the importance of integrated information processing for general intelligence in a sample of gifted young adults. In addition, this study also demonstrated that segregated information processing, quantified by another graph analytical metric known as the clustering coefficient, is advantageous for cognitive performance in very high intelligence ranges. Fischer, Wolf, Scheurich, and Fellgiebel (2014) showed that the positive relationship between general intelligence and global efficiency also applies to samples comprised of older adults. The results from studies investigating the associations between global efficiency and other cognitive abilities in older adults indicate that global efficiency is also positively correlated with executive functions, visuospatial reasoning, verbal abilities, and processing speed (Wen et al., 2011; Wiseman et al., 2018). 
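Global and nodal efficiency as used in these studies are defined over inverse shortest-path lengths in the structural network; a minimal sketch on a toy unweighted graph (real studies derive weighted connectomes from tractography):

```python
import numpy as np

def shortest_paths(adj):
    """All-pairs shortest path lengths (Floyd-Warshall, unweighted edges)."""
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(len(adj)):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def efficiency(adj):
    """Return (global efficiency, nodal efficiency per region)."""
    d = shortest_paths(adj)
    with np.errstate(divide="ignore"):
        inv = 1.0 / d                  # unreachable pairs contribute 0
    np.fill_diagonal(inv, 0.0)
    nodal = inv.sum(axis=1) / (len(adj) - 1)
    return nodal.mean(), nodal

# Toy network: a 4-node ring; adding a chord shortens paths and raises
# both global efficiency and the chord endpoints' nodal efficiency.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
e_ring, _ = efficiency(ring)
chord = ring.copy()
chord[0, 2] = chord[2, 0] = 1.0
e_chord, nodal_chord = efficiency(chord)
```

Adding the chord raises global efficiency above that of the plain ring, which captures the intuition behind the findings above: extra or stronger long-range connections shorten communication paths across the network.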
However, respective studies found that global efficiency was not related to memory function. Two recent studies investigated the relationship between intellectual ability and global efficiency in samples of younger age. In a sample of preadolescent children, Kim et al. (2016) were able to show that global efficiency was positively correlated with fluid intelligence as well as four different narrow abilities constituting fluid intelligence. Furthermore, they demonstrated that this relationship could predominantly be attributed to the nodal efficiency of specific areas belonging to the P-FIT network. Following a cross-trait/cross-twin approach, Koenis et al. (2018) conducted a well-conceived longitudinal study that examined the relationship between general intelligence and global efficiency in an adolescent cohort. They were able to demonstrate that global efficiency increased in a non-linear fashion from early adolescence to early adulthood. Furthermore, results indicated that the association between general intelligence and global/nodal efficiency also increased during adolescence. No significant associations were observed in early adolescence, whereas positive associations were present in early adulthood. Importantly, global efficiency was found to show significant heritability throughout adolescence. Moreover, the extent to which genetic factors
contributed to the correlation between general intelligence and global/nodal efficiency was shown to increase with age. In early adulthood, genetic factors explained up to 87% of the observed correlation between global efficiency and general intelligence (Koenis et al., 2018). Based on these findings, it is reasonable to assume that the association between global efficiency and general intelligence is strongly genetically mediated in older adults as well. Nevertheless, future studies examining monozygotic and dizygotic twins are needed to substantiate this assumption. Given the myriad of studies presented in this chapter, one might wonder about the essential findings that can be extracted from research on the relationships between neuroanatomical correlates and general intelligence. To summarize, one of the first hypotheses in neuroscientific intelligence research, namely that bigger brains contribute to higher intellectual ability, has been confirmed time and time again by modern in vivo imaging methods. Special attention has been paid to the brain's gray matter. Over the last decades, this research agenda has succeeded in relating general intelligence to various properties of gray matter, such as cortical thickness, surface area, and dendritic organization. The growing body of evidence created by these efforts led to the proposal of the Parieto-Frontal Integration Theory (P-FIT). The centerpiece of P-FIT is a brain network comprising multiple cortical and subcortical structures that have been identified as relevant correlates of intelligence. Importantly, the model also ascribes importance to the white matter fiber tracts connecting the respective brain regions. Accordingly, macrostructural properties of the corpus callosum and other major white matter fiber bundles, e.g., volume, thickness, or surface area, have been related to interindividual intelligence differences in the past.
The same applies to different measures of microstructural integrity, such as fractional anisotropy, which can be obtained by DWI. A recent addition to the ever-growing arsenal of methods employed by neuroscientific intelligence research is graph theory. This approach allows for the quantification of brain network organization. Its different metrics, such as global or nodal efficiency, have consistently been found to correlate with interindividual differences in intelligence. It is important to note that many of the associations between neuroanatomical properties and general intelligence are likely to undergo dynamic changes. As research has shown, intellectual ability might be negatively associated with a certain brain property in children yet exhibit a positive correlation in adults. Finally, compelling evidence from a multitude of studies examining monozygotic and dizygotic twin samples points towards a major role of genetics in mediating the relationships between general intelligence and brain structure. Neuroscientific research on intelligence has come a long way since its early beginnings. Scientists like Broca and Galton, who were highly restricted in their methodological arsenal, would be amazed by the possibilities with which the neural foundations of intellectual ability can be investigated today. Likewise, one has every reason to be excited about the technological
advancements the future will bring. The combination of psychometric intelligence testing and neuroimaging represents a relatively young discipline, and it is fair to say that the best is yet to come.
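The twin logic invoked throughout this chapter can be illustrated with the classic Falconer estimate. This is a deliberate simplification of the bivariate cross-trait/cross-twin modeling used by Koenis et al. (2018), and the twin correlations below are hypothetical:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability: h^2 = 2 * (r_MZ - r_DZ).

    MZ twins share ~100% of segregating genes and DZ twins ~50%, so doubling
    the excess MZ similarity gives a rough index of genetic influence.
    """
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations for a brain metric such as global efficiency:
print(round(falconer_h2(r_mz=0.60, r_dz=0.35), 2))  # → 0.5
```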
References

Allin, M. P. G., Kontis, D., Walshe, M., Wyatt, J., Barker, G. J., Kanaan, R. A. A., . . . Nosarti, C. (2011). White matter and cognition in adults who were born preterm. PLoS One, 6(10), e24525.
Atkinson, D. S., Abou-Khalil, B., Charles, P. D., & Welch, L. (1996). Midsagittal corpus callosum area, intelligence and language in epilepsy. Journal of Neuroimaging, 6(4), 235–239.
Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Basser, P. J., & Pierpaoli, C. (1996). Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. Journal of Magnetic Resonance, Series B, 111(3), 209–219.
Beaulieu, C. (2002). The basis of anisotropic water diffusion in the nervous system – A technical review. NMR in Biomedicine, 15(7–8), 435–455.
Behrens, T. E., Berg, H. J., Jbabdi, S., Rushworth, M. F. S., & Woolrich, M. W. (2007). Probabilistic diffusion tractography with multiple fibre orientations: What can we gain? NeuroImage, 34(1), 144–155.
Behrens, T. E., Woolrich, M. W., Jenkinson, M., Johansen-Berg, H., Nunes, R. G., Clare, S., . . . Smith, S. M. (2003). Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magnetic Resonance in Medicine, 50(5), 1077–1088.
Booth, T., Bastin, M. E., Penke, L., Maniega, S. M., Murray, C., Royle, N. A., . . . Hernández, M. (2013). Brain white matter tract integrity and cognitive abilities in community-dwelling older people: The Lothian Birth Cohort, 1936. Neuropsychology, 27(5), 595–607.
Campbell, J. S. W., & Pike, G. B. (2014). Potential and limitations of diffusion MRI tractography for the study of language. Brain and Language, 131, 65–73.
Catani, M., & Thiebaut de Schotten, M. (2008). A diffusion tensor imaging tractography atlas for virtual in vivo dissections. Cortex, 44(8), 1105–1132.
Chiang, M. C., Barysheva, M., Shattuck, D. W., Lee, A. D., Madsen, S. K., Avedissian, C., . . . Thompson, P. M. (2009). Genetics of brain fiber architecture and intellectual performance. Journal of Neuroscience, 29(7), 2212–2224.
Cremers, L. G. M., de Groot, M., Hofman, A., Krestin, G. P., van der Lugt, A., Niessen, W. J., . . . Ikram, M. A. (2016). Altered tract-specific white matter microstructure is related to poorer cognitive performance: The Rotterdam Study. Neurobiology of Aging, 39, 108–117.
Deary, I. J., Bastin, M. E., Pattie, A., Clayden, J. D., Whalley, L. J., Starr, J. M., & Wardlaw, J. M. (2006). White matter integrity and cognition in childhood and old age. Neurology, 66(4), 505–512.
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211.
Dunst, B., Benedek, M., Koschutnig, K., Jauk, E., & Neubauer, A. C. (2014). Sex differences in the IQ-white matter microstructure relationship: A DTI study. Brain and Cognition, 91, 71–78.
Ferrer, E., Whitaker, K. J., Steele, J. S., Green, C. T., Wendelken, C., & Bunge, S. A. (2013). White matter maturation supports the development of reasoning ability through its influence on processing speed. Developmental Science, 16(6), 941–951.
Filley, C. (2012). The behavioral neurology of white matter. New York: Oxford University Press.
Fischer, F. U., Wolf, D., Scheurich, A., & Fellgiebel, A. (2014). Association of structural global brain network properties with intelligence in normal aging. PLoS One, 9(1), e86258.
Galton, F. (1888). Head growth in students at the University of Cambridge. Nature, 38(996), 14–15.
Genç, E., Fraenz, C., Schlüter, C., Friedrich, P., Hossiep, R., Voelkle, M. C., . . . Jung, R. E. (2018). Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nature Communications, 9(1), 1905.
Genç, E., Fraenz, C., Schlüter, C., Friedrich, P., Voelkle, M. C., Hossiep, R., & Güntürkün, O. (2019). The neural architecture of general knowledge. European Journal of Personality, 33(5), 589–605.
Goriounova, N. A., Heyer, D. B., Wilbers, R., Verhoog, M. B., Giugliano, M., Verbist, C., . . . Verberne, M. (2018). Large and fast human pyramidal neurons associate with intelligence. eLife, 7(1), e41714.
Goriounova, N. A., & Mansvelder, H. D. (2019). Genes, cells and brain areas of intelligence. Frontiers in Human Neuroscience, 13, 14.
Haász, J., Westlye, E. T., Fjær, S., Espeseth, T., Lundervold, A., & Lundervold, A. J. (2013). General fluid-type intelligence is related to indices of white matter structure in middle-aged and old adults. NeuroImage, 83, 372–383.
Hulshoff-Pol, H. E., Schnack, H. G., Posthuma, D., Mandl, R. C. W., Baare, W. F., van Oel, C., . . . Kahn, R. S. (2006). Genetic contributions to human brain morphology and intelligence. Journal of Neuroscience, 26(40), 10235–10242.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Kievit, R. A., Davis, S. W., Mitchell, D. J., Taylor, J. R., Duncan, J., Tyler, L. K., . . . Cusack, R. (2014). Distinct aspects of frontal lobe structure mediate age-related differences in fluid intelligence and multitasking. Nature Communications, 5, 5658.
Kim, D. J., Davis, E. P., Sandman, C. A., Sporns, O., O'Donnell, B. F., Buss, C., & Hetrick, W. P. (2016). Children's intellectual ability is associated with structural network integrity. NeuroImage, 124, 550–556.
Koenis, M. M. G., Brouwer, R. M., Swagerman, S. C., van Soelen, I. L. C., Boomsma, D. I., & Hulshoff Pol, H. E. (2018). Association between structural brain network efficiency and intelligence increases during adolescence. Human Brain Mapping, 39(2), 822–836.
Kontis, D., Catani, M., Cuddy, M., Walshe, M., Nosarti, C., Jones, D., . . . Allin, M. (2009). Diffusion tensor MRI of the corpus callosum and cognitive function in adults born preterm. Neuroreport, 20(4), 424–428.
Kuznetsova, K. A., Maniega, S. M., Ritchie, S. J., Cox, S. R., Storkey, A. J., Starr, J. M., . . . Bastin, M. E. (2016). Brain white matter structure and information processing speed in healthy older age. Brain Structure and Function, 221(6), 3223–3235.
Le Bihan, D. (2003). Looking into the functional architecture of the brain with diffusion MRI. Nature Reviews Neuroscience, 4(6), 469–480.
Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395.
Luders, E., Narr, K. L., Bilder, R. M., Thompson, P. M., Szeszko, P. R., Hamilton, L., & Toga, A. W. (2007). Positive correlations between corpus callosum thickness and intelligence. NeuroImage, 37(4), 1457–1464.
Ma, J., Kang, H. J., Kim, J. Y., Jeong, H. S., Im, J. J., Namgung, E., . . . Oh, J. K. (2017). Network attributes underlying intellectual giftedness in the developing brain. Scientific Reports, 7(1), 11321.
MacKay, A. L., & Laule, C. (2016). Magnetic resonance of myelin water: An in vivo marker for myelin. Brain Plasticity, 2(1), 71–91.
Malpas, C. B., Genc, S., Saling, M. M., Velakoulis, D., Desmond, P. M., & O'Brien, T. J. (2016). MRI correlates of general intelligence in neurotypical adults. Journal of Clinical Neuroscience, 24, 128–134.
McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346.
Mori, S. (2007). Introduction to diffusion tensor imaging. Oxford: Elsevier.
Morris, D. M., Embleton, K. V., & Parker, G. J. M. (2008). Probabilistic fibre tracking: Differentiation of connections from chance events. NeuroImage, 42(4), 1329–1339.
Muetzel, R. L., Mous, S. E., van der Ende, J., Blanken, L. M. E., van der Lugt, A., Jaddoe, V. W. V., . . . White, T. (2015). White matter integrity and cognitive performance in school-age children: A population-based neuroimaging study. NeuroImage, 119, 119–128.
Narr, K. L., Woods, R. P., Thompson, P. M., Szeszko, P., Robinson, D., Dimtcheva, T., . . . Bilder, R. M. (2007). Relationships between IQ and regional cortical gray matter thickness in healthy adults. Cerebral Cortex, 17(9), 2163–2171.
Neubauer, A., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023.
Nusbaum, F., Hannoun, S., Kocevar, G., Stamile, C., Fourneret, P., Revol, O., & Sappey-Marinier, D. (2017). Hemispheric differences in white matter microstructure between two profiles of children with high intelligence quotient vs. controls: A tract-based spatial statistics study. Frontiers in Neuroscience, 11, 173.
Ocklenburg, S., Anderson, C., Gerding, W. M., Fraenz, C., Schlüter, C., Friedrich, P., . . . Genç, E. (2018). Myelin water fraction imaging reveals hemispheric asymmetries in human white matter that are associated with genetic variation in PLP1. Molecular Neurobiology, 56(6), 3999–4012.
Pakkenberg, B., & Gundersen, H. J. G. (1997). Neocortical neuron number in humans: Effect of sex and age. Journal of Comparative Neurology, 384(2), 312–320.
Penke, L., Maniega, S. M., Bastin, M. E., Hernandez, M. C. V., Murray, C., Royle, N. A., . . . Deary, I. J. (2012). Brain white matter tract integrity as a neural foundation for general intelligence. Molecular Psychiatry, 17(10), 1026–1030.
Penke, L., Maniega, S. M., Murray, C., Gow, A. J., Hernandez, M. C., Clayden, J. D., . . . Deary, I. J. (2010). A general factor of brain white matter integrity predicts information processing speed in healthy older people. Journal of Neuroscience, 30(22), 7569–7574.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience and Biobehavioral Reviews, 57, 411–432.
Pineda-Pardo, J. A., Martínez, K., Román, F. J., & Colom, R. (2016). Structural efficiency within a parieto-frontal network and cognitive differences. Intelligence, 54, 105–116.
Ryman, S. G., Yeo, R. A., Witkiewitz, K., Vakhtin, A. A., van den Heuvel, M., de Reus, M., . . . Jung, R. E. (2016). Fronto-parietal gray matter and white matter efficiency differentially predict intelligence in males and females. Human Brain Mapping, 37(11), 4006–4016.
Sampaio-Baptista, C., Khrapitchev, A. A., Foxley, S., Schlagheck, T., Scholz, J., Jbabdi, S., . . . Thomas, N. (2013). Motor skill learning induces changes in white matter microstructure and myelination. Journal of Neuroscience, 33(50), 19499–19503.
Schmithorst, V. J. (2009). Developmental sex differences in the relation of neuroanatomical connectivity to intelligence. Intelligence, 37(2), 164–173.
Schmithorst, V. J., Wilke, M., Dardzinski, B. J., & Holland, S. K. (2005). Cognitive functions correlate with white matter architecture in a normal pediatric population: A diffusion tensor MRI study. Human Brain Mapping, 26(2), 139–147.
Smith, S. M., Jenkinson, M., Johansen-Berg, H., Rueckert, D., Nichols, T. E., Mackay, C. E., . . . Matthews, P. M. (2006). Tract-based spatial statistics: Voxelwise analysis of multi-subject diffusion data. NeuroImage, 31(4), 1487–1505.
Tamnes, C. K., Østby, Y., Walhovd, K. B., Westlye, L. T., Due-Tønnessen, P., & Fjell, A. M. (2010). Intellectual abilities and white matter microstructure in development: A diffusion tensor imaging study. Human Brain Mapping, 31(10), 1609–1625.
Tang, C. Y., Eaves, E. L., Ng, J. C., Carpenter, D. M., Mai, X., Schroeder, D. H., . . . Haier, R. J. (2010). Brain networks for working memory and factors of intelligence assessed in males and females with fMRI and DTI. Intelligence, 38(3), 293–303.
Tuch, D. S. (2004). Q-ball imaging. Magnetic Resonance in Medicine, 52(6), 1358–1372.
Urger, S. E., De Bellis, M. D., Hooper, S. R., Woolley, D. P., Chen, S. D., & Provenzale, J. (2015). The superior longitudinal fasciculus in typically developing children and adolescents: Diffusion tensor imaging and neuropsychological correlates. Journal of Child Neurology, 30(1), 9–20.
Wang, Y., Adamson, C., Yuan, W., Altaye, M., Rajagopal, A., Byars, A. W., & Holland, S. K. (2012). Sex differences in white matter development during adolescence: A DTI study. Brain Research, 1478, 1–15.
Wen, W., Zhu, W., He, Y., Kochan, N. A., Reppermund, S., Slavin, M. J., . . . Sachdev, P. (2011). Discrete neuroanatomical networks are associated with specific cognitive abilities in old age. Journal of Neuroscience, 31(4), 1204–1212.
Wiseman, S. J., Booth, T., Ritchie, S. J., Cox, S. R., Muñoz Maniega, S., Valdés Hernández, M., . . . Deary, I. J. (2018). Cognitive abilities, brain white matter hyperintensity volume, and structural network connectivity in older age. Human Brain Mapping, 39(2), 622–632.
Wolff, S. D., & Balaban, R. S. (1989). Magnetization transfer contrast (MTC) and tissue water proton relaxation in vivo. Magnetic Resonance in Medicine, 10(1), 135–144.
Yu, C., Li, J., Liu, Y., Qin, W., Li, Y., Shu, N., . . . Li, K. (2008). White matter tract integrity and intelligence in patients with mental retardation and healthy adults. NeuroImage, 40(4), 1533–1541.
Zhang, H., Schneider, T., Wheeler-Kingshott, C. A., & Alexander, D. C. (2012). NODDI: Practical in vivo neurite orientation dispersion and density imaging of the human brain. NeuroImage, 61(4), 1000–1016.
11 Structural Brain Imaging of Intelligence Stefan Drakulich and Sherif Karama
Overview

The brain's remarkable inter-individual structural variability provides a wealth of information that is readily accessible via structural Magnetic Resonance Imaging (sMRI). sMRI enables various structural properties of the brain to be captured at a macroscale level – one that is quickly moving towards submillimeter resolution (Budde, Shajan, Scheffler, & Pohmann, 2014; Stucht et al., 2015). This constitutes a remarkable leap forward from historically crude brain measures, such as head circumference, aimed at understanding the neurobiology of intelligence differences. The work presented here, and that which continues today, is distinguished by years of incremental validation. sMRI-based global (e.g., total brain volume), regional (e.g., subcortical structural volumes), and local (e.g., local cortical thickness) brain measurements have all been examined for associations with cognitive ability at different time points throughout the lifespan (Luders, Narr, Thompson, & Toga, 2009). The growing body of research in this field suggests that many aspects of our cognitive abilities are, to various degrees, associated with aspects of neuroanatomy (Luders et al., 2009). However, likely due to limits of statistical power and methodological idiosyncrasies, there are still contradictions in the field, and unambiguous conclusions regarding associations between intelligence and brain structure cannot yet be drawn. This chapter summarizes what we know to date on the topic and seeks to provide suggestions (implicit and explicit) regarding the implementation of future structural neuroimaging studies of intelligence. We avoid advocating for any particular pipeline or software because of the inherent subjectivity of such choices, as well as the rapid development and refinement of the various approaches.
Total Brain Volume

The topic of a relationship between brain volume and intelligence has been debated since at least the 1930s (McDaniel, 2005). More recently, and using various brain imaging methods, the evidence points to quite a robust but
rather modest association between brain volume and intelligence (McDaniel, 2005). Indeed, most sMRI-based studies report correlations ranging from .3 to .4 between brain volume and measures of general intelligence (McDaniel, 2005; Wickett, Vernon, & Lee, 2000). A meta-analysis of over 8,000 individuals across 88 studies using various brain-imaging methods found significant positive associations between brain volume and IQ (r = .24), and found this association to generalize over age, sex, and cognitive subdomains (Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015). This may seem surprising given some historical findings suggesting no association between intellectual attainment and brain weight (which can be viewed as a proxy for size/volume). For instance, Einstein's brain was 1,230 grams at autopsy, more or less average for his age, while Anatole France, a Nobel laureate in literature, had a brain that weighed only 1,100 grams at autopsy (DeFelipe, 2011). However, there is no contradiction here, as a correlation of .4, the maximum value reported above, means that, at best, brain size accounts for only 16% (.4 squared) of the variance in general intelligence. In other words, at least 84% of intelligence differences are likely due to other factors (Andreasen et al., 1993; Rushton & Ankney, 2009). The corollary is that some extremely bright individuals may have relatively small brains, and vice versa. The respective contributions of genes, environment, and their interactions to the relationship between brain volume and intelligence are not yet well elucidated. An early study on the topic, which requires replication, reported no association between brain volume and intelligence within families (Schoenemann, Budinger, Sarich, & Wang, 2000).
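The variance-accounted-for arithmetic above can be checked directly: squaring a Pearson correlation gives the proportion of shared variance. The sketch below uses made-up brain-volume/IQ pairs purely for illustration, not data from any cited study:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A correlation of .4 caps the explained variance at 16%:
print(round(0.4 ** 2, 2))  # → 0.16

# Made-up volume (cm^3) / IQ pairs for illustration only:
vol = [1100, 1250, 1180, 1400, 1320, 1150, 1500, 1280]
iq = [95, 108, 90, 118, 105, 112, 122, 100]
r = pearson_r(vol, iq)
print(round(r ** 2, 2))  # → 0.56 (proportion of IQ variance shared with volume)
```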
Acknowledging that brain volume can change across the lifespan, another study reported an association between brain volume change and IQ, and showed that part of this association was of genetic origin (Brouwer et al., 2014). Finally, according to Pietschnig et al. (2015), the strength of the association between brain volume and intelligence might have been inflated by publication bias and be closer to correlation values of .2 than .4.
Voxel-based Morphometry Methods

sMRI analyses have initially and predominantly been performed using voxel-based morphometry (VBM), a method that aims to link a trait of interest to local amounts of gray matter, white matter, and/or cerebrospinal fluid (Ashburner & Friston, 2000). One of the advantages of VBM is its ease of implementation and its relatively low computational cost. A VBM analysis begins with a high-resolution MRI brain scan. This raw MRI then undergoes a series of linear and nonlinear transformations (standardization/normalization) to have individual brains match a predetermined template brain (registration template). The registration template is a
"reference brain," which is typically an average of many high-quality magnetic resonance images (Evans, Janke, Collins, & Baillet, 2012). The three-dimensional coordinate system on which the average/registration template is set is referred to as "stereotaxic space," whereas the coordinate system of the raw brain image, prior to its transformation, is referred to as "native space." Note that the normalization step does not lead to a perfect match between individual brains and the template, aiming instead only to correct for global brain-shape and size differences, because a perfect match would obfuscate any attempt to find VBM differences between subjects or groups of subjects when making measurements in stereotaxic space. Normalization aims to have a given set of three-dimensional coordinates represent the same brain region across subjects, hence facilitating inter-subject/inter-group comparisons (Collins, Neelin, Peters, & Evans, 1994; Riahi et al., 1998; Worsley et al., 1996). After the brain normalization procedure, each brain image passes through an algorithm that performs tissue classification of the voxels (voxels are somewhat analogous to three-dimensional pixels). This parcellates the brain into its various tissue components, including gray matter, white matter, and cerebrospinal fluid (CSF) (Cocosco, Zijdenbos, & Evans, 2003; Zijdenbos, Lerch, Bedell, & Evans, 2005). This step is usually followed by three-dimensional smoothing (blurring) of the data to improve the signal-to-noise ratio (Good et al., 2001). In addition, smoothing renders the data more normally distributed, thus increasing the validity of parametric statistical tests. Smoothing also reduces the effects of individual variation in sulcal/gyral anatomy that remain after standardization. The degree of smoothing depends on the size of the so-called smoothing kernel.
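In neuroimaging, the kernel size is conventionally quoted as a full width at half maximum (FWHM) in millimeters, which relates to the Gaussian's standard deviation by sigma = FWHM / (2 * sqrt(2 * ln 2)), roughly FWHM / 2.355. The following pure-Python 1-D sketch is a simplified illustration (real pipelines smooth in 3-D with dedicated imaging software; the helper names here are our own):

```python
from math import exp, log, sqrt

def fwhm_to_sigma(fwhm):
    """Convert a kernel's full width at half maximum to a Gaussian sigma."""
    return fwhm / (2.0 * sqrt(2.0 * log(2.0)))

def gaussian_kernel(fwhm, voxel_size=1.0, truncate=3.0):
    """Discrete, normalized 1-D Gaussian kernel sampled at voxel centers."""
    sigma = fwhm_to_sigma(fwhm) / voxel_size
    radius = int(truncate * sigma + 0.5)
    weights = [exp(-(i ** 2) / (2.0 * sigma ** 2))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def smooth(signal, kernel):
    """1-D convolution with edge truncation and per-position renormalization."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc, norm = 0.0, 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(signal):
                acc += w * signal[j]
                norm += w
        out.append(acc / norm)
    return out

# An 8 mm FWHM kernel applied to 2 mm voxels spreads a single spike out:
kernel = gaussian_kernel(fwhm=8.0, voxel_size=2.0)
spike = [0.0] * 10
spike[5] = 1.0
print(max(smooth(spike, kernel)))  # peak is well below the original 1.0
```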
While it is sometimes believed that there is a standard size for the smoothing kernel, the matched filter theorem (Turin, 1960) dictates otherwise. The theorem stipulates that the optimal size of the smoothing kernel depends on the size of the "real" underlying signal one wishes to detect. A smoothing kernel that is smaller or larger than the true signal will decrease sensitivity to detect that signal. In other words, if the size of a brain area linked to some trait or phenotype (e.g., intelligence) is 20 mm in diameter, then the optimal kernel size to detect that association is 20 mm in diameter. There are two main types of VBM: unmodulated and modulated (aka optimized). Both methods aim to quantify, at each voxel, the degree of association between the amount of a tissue of interest (e.g., gray matter) and a phenotype of interest (e.g., intelligence). Unmodulated VBM uses data directly in stereotaxic space, and so disregards the fact that regions have been morphed (i.e., stretched or compressed) to fit the registration template. This leads to inferences about associations between a given trait of interest and the relative amount of a tissue of interest at each voxel in a given region. This is often referred to as tissue density, which should not be taken to mean density in the sense of "concentration" of tissue at the voxel level. In other words, all else being equal, a voxel considered as having greater gray matter
density contains a greater amount of gray matter rather than gray matter that is more concentrated in the sense of being more compact. Modulated (aka optimized) VBM, on the other hand, adjusts the amount of a tissue of interest for the degree of morphing it has undergone. For instance, if a given voxel has been expanded to make a brain fit the registration template, the amount of tissue contained within that voxel will be reduced to compensate for the expansion. This leads to inferences about associations between a given trait of interest and the absolute amount/volume of a tissue of interest in native space. After processing, associations between a trait of interest and gray/white/CSF voxel values across the brain are typically analyzed using linear regression models. As thousands of voxels are examined independently for putative associations, appropriate corrections for multiple comparisons are required. For a figure summarizing the various VBM steps, see Figure 3.3 by Martínez and Colom.
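One common correction option is the Benjamini-Hochberg false-discovery-rate procedure; a minimal sketch applied to per-voxel p-values follows (illustrative only; neuroimaging packages implement this along with family-wise and cluster-based alternatives):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean 'significant' flag per p-value under FDR control.

    Sort the p-values, compare the rank-k p-value to alpha * k / m, and
    declare significant everything up to the largest rank passing the test.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha * rank / m:
            threshold_rank = rank
    for rank, idx in enumerate(order, start=1):
        if rank <= threshold_rank:
            significant[idx] = True
    return significant

# Toy p-values from five "voxels":
pvals = [0.001, 0.004, 0.030, 0.200, 0.700]
print(benjamini_hochberg(pvals))  # → [True, True, True, False, False]
```

A plain Bonferroni correction (dividing alpha by the number of voxels) is stricter; FDR procedures trade a controlled proportion of false positives for greater sensitivity, which matters when typical effect sizes are modest.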
Voxel-based Morphometry Findings

Most VBM studies on brain correlates of intelligence have focused on the cortex and produced predominantly positive associations in multiple brain areas. This should not be interpreted as meaning that other brain structures are not also very important for intelligence differences but, rather, likely reflects the fact that the large inter-individual variability of the human cortex and the known involvement of the cortex in higher-order cognitive processes have made it a good candidate for research on the neurobiology of intelligence (Eickhoff, Constable, & Yeo, 2018; Kennedy et al., 1998; Regis et al., 2005). The main findings from VBM methods are outlined in the Modulated VBM Cortical Findings, Unmodulated VBM Cortical Findings, and Modulated VBM White Matter Findings sections of this chapter. As a reminder, modulated VBM is used to estimate local gray matter volume, whereas unmodulated VBM is used to estimate the relative local amount of a tissue of interest, referred to as "density." VBM-based cortical and subcortical gray matter as well as white matter findings are discussed in this section. Briefly, gray matter is distinguished by the presence of neuronal cell bodies, which are found in the cerebral cortex and within subcortical structures/nuclei. White matter is distinguished by a predominance of myelinated axons and essentially represents most of the brain's wiring, with myelin serving as an insulating sheath that covers axons and allows for more efficient and rapid signal conduction. Important caveats: as suggested in the introduction of this chapter, the brain imaging literature of intelligence research is plagued with contradictory findings. Perhaps the most important reason for these contradictions is the prevalence of
low-powered studies using low thresholds for controlling for multiple comparisons. Unfortunately, using low thresholds in such a context, where the typical effect size of associations between brain metrics and measures of intelligence is rather modest, is not an adequate strategy for compensating for low power. Rather, it frequently leads to false positive findings that tend to be more reflective of noise than of real signals (Thompson, 2020). Another likely reason for the non-convergence of findings is poor measurement of intelligence. In some cases, a single subtest is used. Sometimes, only a few items from a given subtest are administered within a few minutes and considered a proxy for good estimates of general intelligence. For more on this, see Chapter 3, by Martínez and Colom. A further possible source of non-reproducibility is range restriction in convenience samples. University students, who are frequently solicited in research, tend to have a higher mean IQ than the general population, with the range of cognitive ability in these samples usually being smaller than that found in the general population. This is not trivial and can, at times, distort findings. Indeed, in cases of cognitive ability range restriction, the general factor of intelligence (g) will account for less of the variance in intelligence differences, to the benefit of stronger effects of specific abilities (Haier, Karama, Colom, Jung, & Johnson, 2014). Finally, one needs to be aware that scanner-specific effects, which can occur even between two identical scanners of the same brand using the exact same scanning sequence, can have an impact on findings (Stonnington et al., 2008). One must hence exercise care and account for this as much as possible when analyzing and interpreting results produced from different scanners. A common procedure includes calibrating the scanners by scanning an object and/or individual on all scanners.
Another helpful, albeit imperfect, procedure is to include scanner as a covariate in the analysis. While both procedures are helpful, residual scanner effects may still persist. Despite these caveats, when taken as a whole, the literature seems to be converging towards certain general findings on associations between intelligence and measures of brain structure. We attempt to provide the salient points here.
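The range-restriction problem discussed above is easy to demonstrate by simulation. The sketch below is a toy model with made-up parameters (a true population correlation of .5 and a "university sample" keeping the top third on ability), not a reanalysis of any cited study:

```python
import random
from math import sqrt

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Population: "ability" and a brain measure with a true correlation of .5.
n = 20000
ability = [random.gauss(0.0, 1.0) for _ in range(n)]
brain = [0.5 * a + sqrt(1.0 - 0.5 ** 2) * random.gauss(0.0, 1.0)
         for a in ability]

r_full = pearson(ability, brain)

# Range-restricted "university sample": keep only the top third on ability.
cutoff = sorted(ability)[2 * n // 3]
kept = [(a, b) for a, b in zip(ability, brain) if a >= cutoff]
r_restricted = pearson([a for a, _ in kept], [b for _, b in kept])

print(r_full > r_restricted)  # restriction attenuates the observed correlation
```

With these parameters, the restricted correlation drops to roughly the .3 range even though the population value is .5, which is the kind of distortion at issue when samples are drawn largely from university students.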
Modulated (aka Optimized) VBM Cortical Findings

Modulated VBM has a history of relatively robust and distributed associations between gray matter volumes and indices of cognitive ability. Associations between cortical gray-matter volume and cognitive ability are most consistently reported in the frontal and parietal lobes. This is in keeping with the Parieto-Frontal Integration Theory of Intelligence (P-FIT) proposed by Jung and Haier (2007) on the basis of a review of 37 neuroimaging studies of intelligence. It is important to clarify that the P-FIT is not based on a meta-analysis but on a thorough review of brain imaging findings.
A year prior to publishing the P-FIT review, Colom, Haier, and Jung reported that the cognitive tests with the highest g-loadings tended to exhibit more widespread positive gray matter volume associations than the less g-loaded cognitive tests. In keeping with the P-FIT, these areas involved frontal and parietal areas but also temporal and occipital regions (Colom, Jung, & Haier, 2006). More specifically, positive gray matter volume associations with the general intelligence factor (g) were reported in multiple histologically distinguishable Brodmann areas (BA) in frontal (BA 10, 46, and 9), parietal (BA 43 and 3), temporal (BA 21, 37, 22, and 42), and occipital regions (BA 19) (Haier, Jung, Yeo, Head, & Alkire, 2004; Jung & Haier, 2007). A subsequent study conducted by the same team in a sample of 100 healthy adult male and female university students, also found several positive associations between regional gray matter volume and g in frontal (BA 5, 6, 8, 9, 10, 11, 45, 46, and 47), parietal (BA 3, 7, 20, 21, 22, 36, 39, and 42), and occipital regions (BA 18 and 19); no negative associations were found (Colom et al., 2009). They summarized their findings by stating that their study corroborates the P-FIT model given the distribution and arrangement of volumetric correlates of intelligence factors (Colom et al., 2009; Jung & Haier, 2007). In keeping with this, findings from an independent meta-analysis of mostly modulated VBM studies generally supported the P-FIT (Jung & Haier, 2007) but also reported some cortical gray matter associations that extended beyond those of the P-FIT (Basten, Hilger, & Fiebach, 2015) by adding the precuneus, the presupplementary motor area, and the lateral middle temporal gyrus.
Unmodulated VBM Cortical Findings

Studies opting for the non-optimized (i.e., unmodulated) VBM protocol to relate cognitive ability to gray matter "density" are rather scarce. Modroño et al. (2019) examined the development of transitive reasoning and found that improved performance on the reasoning task was negatively associated with gray matter density in parietal regions in adolescents, but not in the adult group. In a small sample of gifted mathematicians with age- and sex-matched controls, Aydin et al. (2007) found greater gray matter density in the bilateral inferior parietal lobules and left inferior frontal gyrus for the group of mathematicians. A modestly sized study of young adolescents by Frangou, Chitins, and Williams (2004) found a significant positive association between IQ and gray matter density in the orbitofrontal cortex and cingulate gyrus. The unmodulated VBM protocol has also been used to estimate subcortical gray matter density associations with intelligence. Frangou et al. (2004) found positive associations between IQ and gray matter density in the thalamus and a negative correlation in the caudate nucleus. They also found positive associations between cerebellar gray matter density and IQ, partially echoing previous work that used manual segmentation and reported,
s. drakulich and s. karama
among other regions, an association between IQ and total cerebellar volume (Andreasen et al., 1993). Associations between intelligence and subcortical nuclei have been reproduced using volume estimates based on various methods and are discussed in the Other Noteworthy Findings Using Various Methods section of this chapter.
Modulated VBM White Matter Findings

Total white matter volume increases during childhood and adolescence, likely reflecting increased connectivity between disparate brain regions and functional connectivity pathways (Lenroot et al., 2007). These changes are thought to contribute in part to the development and refinement of cognitive ability (Paus et al., 1999). The increase in total white matter volume has been reported to slow in late adolescence and early adulthood, with volume then decreasing in late adulthood (Westlye et al., 2009). Several independent studies have found significant positive associations between total or regional white matter and intelligence using various methods to estimate white matter volume (Andreasen et al., 1993; Gur et al., 1999; Narr et al., 2007). These findings have been corroborated in at least two modulated VBM studies examining white matter correlates of intelligence (Haier et al., 2004; Haier, Jung, Yeo, Head, & Alkire, 2005). In the first study, Haier et al. (2004) reported positive associations between white matter volume and intelligence in two independent samples of adults of different mean ages (27 and 59 years). Although overlap between the two samples in the regions associated with intelligence was weak, both samples exhibited distributed local white matter volume associations with intelligence. The reason for this limited overlap is not entirely clear but is likely due to low power and small sample sizes (23 and 24). Interestingly, positive associations between regional white matter volumes and IQ coincided, in large part, with corresponding regions in gray matter, suggesting that the observed structural covariance may reflect functional connectivity between these regions (Haier et al., 2004). For support for this hypothesis, see Chapter 10, by Genç and Fraenz. In a second study from the same group, sex differences in associations between intelligence and white and gray matter volume were assessed.
They reported several notable findings: men had roughly 6.5 times as many gray matter (GM) voxels identified as related to intellectual functioning as women, whereas women had roughly 9 times more white matter (WM) voxels than men; additionally, men had no WM voxels identified in the frontal lobes, while women had 86% of their identified WM voxels in the frontal lobes (Haier et al., 2005). The authors interpreted their findings as suggesting that men and women achieve similar levels of cognitive functioning using different brain regions. Although this might be the case, a formal statistical test for gender differences in the associations between intelligence and white (or gray) matter volume was not conducted (likely owing to low power). Yet such a test is required to formally draw a conclusion of
gender differences. Nonetheless, the findings pointed Haier et al. (2005) towards a very interesting (and likely correct) hypothesis “that different types of brain designs may manifest equivalent intellectual performances” (p. 320).
Surface-based Morphometry Methods

Surface-based morphometry (SBM) seeks to produce brain measures in a different manner than VBM but makes use of segmentation and normalization methods similar to those used in VBM. In SBM, brain surfaces (i.e., sheet-like renderings of certain brain structures) are produced. Typically, cortical white and gray matter surfaces are produced, but surfaces of subcortical nuclei can also be generated. The exact methods used to produce these surfaces go beyond the scope of this chapter and can vary between pipelines (e.g., CIVET, FreeSurfer) and even between versions/updates of the same pipeline. For more details on some surface-based morphometry methods, see www.bic.mni.mcgill.ca/ServicesSoftware/CIVET and https://surfer.nmr.mgh.harvard.edu/. One of the motivations for developing SBM was to overcome an important limitation of VBM: its inability to distinguish between area- and thickness-driven cortical volume differences (Sanabria-Diaz et al., 2010; Winkler et al., 2010). Indeed, by producing cortical white and gray matter surfaces, one can readily estimate the distance between these surfaces across the cortex and hence produce estimates of cortical thickness at multiple points on the cortical mantle. Local surface area can also be assessed across the cortex using SBM. Surface area has been used to estimate cortical area for the whole cortex or within specific regions of interest and has also been used to calculate areas of subcortical regions (Winkler et al., 2012). One has a choice of which surface to sample from when calculating a cortical areal quantity in SBM. The white surface (the interface between gray and white matter) is sometimes used to calculate surface area. The benefits of using the white surface are that it corresponds directly to a morphological feature and is insensitive to variations in cortical thickness.
The other possible choice is to sample from the middle surface, which runs at mid-distance between the white and pial surfaces; although the middle surface does not correspond to any specific cortical layer, it represents gyri and sulci in a relatively unbiased fashion (Van Essen, 2005). Finally, local cortical volume can also be estimated with SBM by considering both cortical area and thickness. It is noteworthy that SBM metrics are typically produced in native space. As with VBM, associations between a trait of interest and SBM metrics are frequently analyzed by fitting linear regression models independently at thousands of points and correcting for the resulting multiple comparisons. For a figure summarizing the various SBM steps, see Figure 3.3.
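The mass-univariate strategy just described can be sketched in a few lines. The following Python example is purely illustrative and is not from any study discussed here: it uses simulated data, a normal approximation to the t distribution, and a hand-rolled Benjamini-Hochberg false discovery rate correction (real pipelines typically use permutation or random-field methods, but the logic is the same). It fits a separate thickness ~ IQ regression at each of several thousand simulated vertices:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_vertices = 200, 5000

iq = rng.normal(100, 15, n_subjects)

# Simulated cortical thickness: noise everywhere, plus a true positive
# IQ effect at the first 100 vertices (all effect sizes are invented).
thickness = rng.normal(2.5, 0.3, (n_subjects, n_vertices))
thickness[:, :100] += 0.008 * (iq - 100)[:, None]

# One linear model per vertex: thickness ~ intercept + IQ.
X = np.column_stack([np.ones(n_subjects), iq])
beta = np.linalg.lstsq(X, thickness, rcond=None)[0]        # shape (2, n_vertices)
resid = thickness - X @ beta
dof = n_subjects - 2
se = np.sqrt((resid ** 2).sum(axis=0) / dof * np.linalg.inv(X.T @ X)[1, 1])
t = beta[1] / se

# Two-sided p-values (normal approximation; adequate for dof ~ 200).
p = np.array([math.erfc(abs(ti) / math.sqrt(2)) for ti in t])

def bh_fdr(p, q=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the
    largest index with p_(k) <= (k/m) * q."""
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        reject[order[: np.nonzero(below)[0].max() + 1]] = True
    return reject

significant = bh_fdr(p)
```

In this simulation, most of the 100 "true" vertices survive correction while false positives remain near the nominal rate, which is the behavior the vertex-wise approach relies on.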
Surface-based Morphometry Findings

The caveats about VBM findings also apply to SBM findings. Regarding sample size, in the experience of this chapter's authors, a sample of at least 200 subjects is required for a certain degree of stability in cortical thickness findings. However, because SBM pipelines tend to be exquisitely sensitive to small movement artefacts, differences between studies in the quality control of SBM protocol outputs very likely also account for some differences in findings. For more on this specific issue, see the Cortical Thickness Findings section of this chapter. Another likely source of contradiction in the literature is the use of multiple statistical models where only the models yielding "significant" findings are reported. On top of the obvious issue of cherry-picking, such a strategy sometimes leads to reporting results only from a very complex regression model with multiple interaction terms, in which the main effect of interest no longer reflects what the authors originally intended to examine. For instance, in a hypothetical situation where one wanted to examine the association between IQ and height while correcting for age (i.e., height ~ Intercept + IQ + Age), the estimated main effect of IQ would reflect the estimated change in height for each point of IQ gain/loss. As soon as one introduces an "Age by IQ" interaction, the main effect no longer holds the same meaning. Rather, it provides the change in height linked to each point of IQ gain/loss when Age equals 0 (and not across all ages). For more on this, we strongly suggest Aiken and West's (1991) book on interpreting interactions in multiple regression. Another important source of differences in SBM reports (and VBM reports) is whether a study has controlled for total brain volume.
Whether one should control for total brain volume is not entirely clear, and the following thought experiment involving cortical thickness is useful for understanding the issue at hand. Imagine the following absurd hypothetical situation: (1) every human being has exactly the same brain shape and size except for the thickness of the prefrontal cortex; and (2) there is a positive association between the thickness of the prefrontal cortex and intelligence. If this were the case, a study examining associations between cortical thickness and intelligence should find positive associations only in the prefrontal cortex. However, if one were to control for total brain volume, this association would disappear because the only source of variance in volume is the thickness of the prefrontal cortex. Not knowing that the rest of the brain is identical between humans, one would then conclude that the association is not really linked exclusively to the prefrontal cortex but stems from a global factor that affects the whole brain. However, based on the two premises, we know this not to be the case. Now, putting the thought experiment aside, we know that brain volume is, to some degree, related to thickness and to any other measure of size in the brain. What is not entirely clear is whether brain volume stems from
the sum of local effects or from global factors that simultaneously affect multiple brain regions. If brain volume differences stem mainly from small local effects, then controlling for total brain volume (which is dependent on these local differences) would likely eradicate real local findings and should not be done. On the other hand, if there are global factors that affect thickness across the cortex, then controlling for total brain volume is the strategy that should be used. As the truth likely lies somewhere between these two extremes, the need to control for total brain volume is unclear, and presenting both results (with and without control for total brain volume) should arguably be encouraged. Whatever may be the case, controlling for total brain volume will, in most cases, affect associations between neural morphometrics and IQ (or any other behavior of interest) and should be taken into account when comparing findings.
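The thought experiment above can be run numerically. In this sketch (all numbers invented for illustration), prefrontal thickness is by construction the only source of variation in total brain volume, yet controlling for TBV still all but erases the genuine local association:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# By construction, prefrontal thickness is the ONLY thing that varies
# between "brains", and it drives IQ (all coefficients are invented).
pfc_thickness = rng.normal(2.8, 0.2, n)
iq = 100 + 40 * (pfc_thickness - 2.8) + rng.normal(0, 5, n)

# TBV varies only through prefrontal thickness (plus tiny measurement
# noise so the residuals below are not exactly degenerate).
tbv = 1_200_000 + 30_000 * pfc_thickness + rng.normal(0, 200, n)

def residualize(y, covariate):
    """Remove the linear effect of a covariate: what 'controlling for' does."""
    X = np.column_stack([np.ones(n), covariate])
    return y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

r_raw = np.corrcoef(pfc_thickness, iq)[0, 1]
r_controlled = np.corrcoef(residualize(pfc_thickness, tbv),
                           residualize(iq, tbv))[0, 1]
# r_raw is strong; r_controlled collapses toward zero even though the
# prefrontal effect is, by construction, entirely real and local.
```

The point is not that controlling for TBV is wrong, but that it can mask a genuinely local effect whenever that effect is itself a source of TBV variance.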
Cortical Thickness Findings

Cortical thickness has been a primary metric of interest in structural imaging over the last decade or so, primarily because it can be mapped across the cortical sheet at relatively high resolution in an automated or semi-automated manner (Kabani, Le Goualher, MacDonald, & Evans, 2001; Kim et al., 2005; Lerch & Evans, 2005; MacDonald, Kabani, Avis, & Evans, 2000). While some studies had initially suggested curvilinear relationships between cortical thickness and development, more recent studies using stringent quality control procedures have shown that cortical thickness, as derived from MRI, tends to thin progressively after early childhood (Ducharme et al., 2016; Raznahan et al., 2011). This early thinning has been proposed to reflect gradual synaptic pruning of inefficient connections or other developmentally driven changes in neuronal size, glial cell density, and vasculature (Bourgeois, Goldman-Rakic, & Rakic, 1994; Huttenlocher, 1990; Zatorre, Fields, & Johansen-Berg, 2012). Alternatively, apparent MRI-based cortical thinning may simply reflect gradual invasion of the lower cortical layers by white-matter fibers, which leads to the erroneous classification of lower cortical layers as white matter (Aleman-Gomez et al., 2013; Sowell et al., 2004). In such a scenario, cortical thickness could theoretically remain identical during development but be artefactually perceived as thinner on MRI. Likely, both artefactual and real thinning contribute to the observed MRI-based thinning during development. Whether there is a causal link between early cortical and cognitive development is not entirely clear, as the co-occurrence of both phenomena could lead to spurious correlations. As for the elderly, MRI-based thinning probably mostly reflects genuine loss of tissue. One of the most cited studies on the relationship between cortical thickness and intelligence is the seminal work by Shaw et al. (2006).
They reported that high-IQ individuals have a thinner cortex in young childhood, followed by a rapid increase in cortical thickness that results in a positive association
between cortical thickness and IQ by adolescence. This has been interpreted to indicate that high-IQ individuals have greater plasticity than lower-IQ individuals. While compelling, this finding still needs to be reproduced. Indeed, a study examining a large set of individuals of a similar age range to that of the Shaw et al. paper found that cortical thickness associations remained positive in both young childhood and adolescence (Karama et al., 2009). One possible explanation for this discrepancy is that different versions of the automated CIVET pipeline were used, the version used by Karama et al. being the more recent of the two. Another plausible explanation is that Karama et al. applied a stringent quality control procedure (Ducharme et al., 2016) that had not yet been developed when Shaw et al. (2006) published their work. This quality control procedure tended to detect, and remove from analysis, subjects who had moved in the scanner. Indeed, in the presence of motion, the algorithm is likely to place the cortical gray–white matter boundary away from the white surface, in effect underestimating cortical thickness (Ducharme et al., 2016; Reuter et al., 2015). It is possible that the youngest children with higher IQs tended to move more in the scanner and that this movement led to artefactually thinner cortical thickness estimates. As these children aged and stopped moving as much, their cortex appeared to thicken. More work remains to be done to elucidate the source of these discrepancies. Generally, associations between cortical thickness and intelligence are often reported to be both positive and distributed and seem to hold across the lifespan (Bajaj et al., 2018; Bedford et al., 2020; Bjuland, Løhaugen, Martinussen, & Skranes, 2013; Choi et al., 2008; Karama et al., 2009, 2011, 2014; Menary et al., 2013; Narr et al., 2007; Schmitt, Raznahan et al., 2019).
However, the proportion of variance in cognitive ability accounted for by local cortical thickness rarely exceeds 15%. In keeping with this, it is worth noting that a relatively large study of elderly individuals (N = 672) found that total brain volume (TBV) accounted for more of the variance in intelligence than cortical thickness did (Ritchie et al., 2015). While results tend to be compatible with the P-FIT, they appear to extend beyond P-FIT areas, including medial brain regions such as the precuneus. This is in keeping with Basten et al.'s (2015) meta-analysis showing that the volume of some cortical regions not included in the P-FIT also seems associated with intelligence. A few studies have examined, longitudinally, associations between changes in cortical thickness and changes in cognitive ability. Using data from the NIH Study of Normal Brain Development (Evans & Brain Development Cooperative Group, 2006), in which children were scanned and cognitively tested 2 years apart, Burgaleta, Johnson, Waber, Colom, and Karama (2014) examined how changes in cortical thickness were associated with changes in IQ. Individuals showing no significant change in IQ exhibited the standard pattern of cortical thinning over the 2-year period examined. In contrast, children whose IQ increased did not exhibit thinning of their cortex, whereas children that showed a decrease in their IQ
exhibited the steepest thinning of their cortex. Data to elucidate the cause of these IQ changes were not available. Nonetheless, this study suggests that the significant changes in IQ classically reported for about 5% of children in test–retest situations (Breslau et al., 2001; Moffitt, Caspi, Harkness, & Silva, 1993) are not merely artifacts of potential differences in testing conditions but reflect, at least in part, genuine changes in cognitive ability. Given that change scores over two timepoints were used, and that such scores are susceptible to regression-to-the-mean effects and spurious findings, the finding needed to be replicated. Using a latent variable approach rather than a simple IQ measure, Román et al. (2018) confirmed the Burgaleta, Johnson, et al. (2014) findings in a subsample of the NIH Study of Normal Brain Development with brain and IQ measurements at three timepoints. Another study, also conducted on a subsample from the NIH Study of Normal Brain Development, pushed the analysis further and showed that cognitive and cortical thickness changes at any given timepoint predicted, respectively, cortical thickness and cognitive change at a later point in time (Estrada, Ferrer, Román, Karama, & Colom, 2019). Another group confirmed, in another large dataset, the observed associations between changes in cognitive ability and changes in cortical thickness (Schmitt, Raznahan et al., 2019). In an elegant analysis, they further showed that these dynamic associations were mainly genetically mediated (Schmitt, Neale et al., 2019). Overall, while results from multiple groups tend to suggest positive, distributed associations between cortical thickness and intelligence from childhood to old age, some studies found almost no associations and even some negative associations (Tadayon, Pascual-Leone, & Santarnecchi, 2019; Tamnes et al., 2011). In some cases, potential explanations for this discrepancy are difficult to identify.
In others, small sample sizes and low statistical thresholds may account for the differing findings (Colom et al., 2013; Escorial et al., 2015). In at least one study, only a complex regression model with multiple two-way interactions was used, making the main effect of IQ difficult to interpret (Goh et al., 2011).
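The interpretability problem raised here, and in the earlier height ~ Intercept + IQ + Age example, can be demonstrated directly. In the sketch below (simulated data, invented coefficients), the "main effect" of IQ in an uncentered interaction model is the IQ slope at Age = 0, an age outside the sample, whereas centering age recovers the slope at the mean age:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
age = rng.uniform(8, 18, n)
iq = rng.normal(100, 15, n)

# Simulated outcome with a genuine Age-by-IQ interaction (coefficients invented).
y = 1.0 + 0.05 * iq + 0.3 * age + 0.002 * iq * age + rng.normal(0, 1, n)

def ols(columns, y):
    """Ordinary least squares over a list of predictor columns."""
    return np.linalg.lstsq(np.column_stack(columns), y, rcond=None)[0]

# Uncentered: the IQ coefficient estimates the IQ slope at Age = 0.
b_uncentered = ols([np.ones(n), iq, age, iq * age], y)

# Age-centered: the IQ coefficient estimates the IQ slope at the mean age.
age_c = age - age.mean()
b_centered = ols([np.ones(n), iq, age_c, iq * age_c], y)

# b_uncentered[1] recovers ~0.05 (slope at age 0)
# b_centered[1] recovers ~0.05 + 0.002 * mean(age), i.e. ~0.076 (slope at ~age 13)
```

Both models fit the data equally well; only the meaning of the "main effect" coefficient changes, which is exactly why such coefficients are hard to compare across studies that parameterize their models differently.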
Cortical Surface Area and Volume Findings

Although there has been less work on cortical surface area (CSA) than on thickness, CSA has also been a neuroimaging measure of interest for the study of cognitive ability. Interest in CSA is bolstered by the fact that it has greatly increased over the course of evolution and is believed to possibly account for intelligence differences between species (Roth & Dicke, 2005). Whether it is a main driver of intelligence differences within species is still not entirely clear. Whereas cortical thickness is thought to reflect the number of neurons in a given cortical column, alongside glia and dendritic arborization, CSA is thought to be related to the number and spacing of mini-columnar units of cells in the cerebral cortex (Chance, Casanova, Switala, & Crow,
2008; Chklovskii, Mel, & Svoboda, 2004; la Fougere et al., 2011; Lyttelton et al., 2009; Rakic, 1988; Sur & Rubenstein, 2005; Thompson et al., 2007). Additionally, CSA and cortical thickness have been reported to be at least partially independent, both globally and regionally, and genetically uncorrelated in adults (Panizzon et al., 2009; Winkler et al., 2010). However, recent extensive work does not support this statement for children, in whom a substantial genetic correlation (rG = .63) was shown between measures of surface area and cortical thickness; the authors interpreted this as "suggestive of overlapping genetic influences between these phenotypes early in life" (Schmitt, Neale et al., 2019, p. 3028). It is important to keep in mind that the level of independence between cortical thickness and surface area will depend, of course, on which exact surface is used to measure CSA (white, mid, or pial surface). This may account for the apparently contradictory findings between these studies. For more discussion of this issue, see Chapter 3, by Martínez and Colom. For a review of some of the intricacies of CSA measurement, see Winkler et al. (2012). Like cortical thickness, CSA has its own developmental trajectory, and the two have been compared extensively (Hogstrom, Westlye, Walhovd, & Fjell, 2013; Lemaitre et al., 2012; Storsve et al., 2014). Most, if not all, published studies of cortical surface area associations with measures of intelligence have reported positive associations (Colom et al., 2013; Fjell et al., 2015; Schmitt, Neale et al., 2019; Vuoksimaa et al., 2015). Vuoksimaa et al. (2015) reported, in a large sample of middle-aged men from the Vietnam Era Twin Study of Aging, a positive association between total cortical surface area and intelligence that was of greater magnitude than the correlation between cortical thickness and intelligence. Fjell et al.
(2015) reported positive associations between local cortical surface area and intelligence in a large sample of 8- to 89-year-old subjects (mean age 45.9 years). Associations were distributed across the brain and included, among others, clear frontal and cingulate regional associations (Fjell et al., 2015). Schmitt, Neale et al. (2019) also reported associations between cortical surface area and intelligence in a large sample of children, adolescents, and young adults (mean age 12.72 years). This association, which was genetically mediated, was somewhat distributed, including the precuneus, the anterior cingulate, as well as frontal and temporal regions. However, the strongest association was in the perisylvian area, a region known for its importance as a receptive language center. Finally, examining a much smaller sample than the above studies, Colom et al. (2013) reported very local associations between surface area and intelligence in frontal regions. Few reports have been published on associations between SBM-based cortical volume and intelligence, but those that have been published report positive associations with measures of intelligence (Bajaj et al., 2018; Vuoksimaa et al., 2015). Vuoksimaa et al. (2015) reported a positive association between total cortical volume and a measure of general cognitive ability in a sample of 515
middle-aged twins but no association with thickness. In contrast, Bajaj et al. (2018), in a very small sample of 56 healthy adults, reported associations between measures of cognitive ability and cortical thickness that were of greater magnitude and more widely distributed than those with cortical volume.
Indices of Cortical Complexity

Another gross anatomical measure of interest, also derived from surface-based morphometry, is the degree of cortical convolution. The evolution of cortical convolution has served to increase the surface area of the cortex and, possibly by extension, cognitive ability (Rilling & Insel, 1999; Zilles, Armstrong, Schleicher, & Kretschmann, 1988). The gyrification index (GI), a measure of cortical convolution, is calculated as the ratio of total sulcal surface area to total exposed cortical surface area; its estimation is limited by the reliability of surface classification, especially at sulcal boundaries (Kim et al., 2005; MacDonald et al., 2000). Positive associations have been found between GI and cognitive ability; one study found these associations to persist across age and sex, with the strongest associations in frontoparietal regions (Gregory et al., 2016). Another study found that increased gyrification was positively related to TBV and cognitive function, but not to cortical thickness (Gautam, Anstey, Wen, Sachdev, & Cherbuin, 2015). Much like other brain measures, the degree of cortical convolution can only provide so much insight into the underlying biology of intelligence. However, examining gyrification is justified by research indicating that the regional degree and patterning of cortical convolution are likely associated with underlying neuronal circuitry and/or regional interconnectivity (Rakic, 1988; Richman, Stewart, Hutchinson, & Caviness, 1975). Increased gyrification of the parietofrontal region has been positively associated with better working memory, even after controlling for cortical surface area (Green et al., 2018).
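As a toy illustration of the ratio principle behind the GI (not the actual measurement pipeline, which operates on real cortical surfaces or contours), the sketch below compares the area of a "folded" triangulated surface with that of its flat outer hull; all mesh sizes and fold amplitudes are invented:

```python
import numpy as np

def mesh_area(verts, faces):
    """Total area of a triangle mesh: half the cross-product norm per face."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

def grid_mesh(z):
    """Triangulate a height field z over the unit square (toy surface)."""
    n = z.shape[0]
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    verts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
    idx = np.arange(n * n).reshape(n, n)
    quads = np.column_stack([idx[:-1, :-1].ravel(), idx[1:, :-1].ravel(),
                             idx[1:, 1:].ravel(), idx[:-1, 1:].ravel()])
    faces = np.vstack([quads[:, [0, 1, 2]], quads[:, [0, 2, 3]]])
    return verts, faces

n = 80
x = np.linspace(0, 1, n)
# "Folded" surface (ripples standing in for gyri/sulci) vs. its flat hull.
folded_z = 0.1 * np.sin(20 * np.pi * x)[:, None] * np.ones((1, n))
flat_z = np.zeros((n, n))

folded_area = mesh_area(*grid_mesh(folded_z))
hull_area = mesh_area(*grid_mesh(flat_z))
gi = folded_area / hull_area   # > 1: folding packs more surface under the hull
```

The ratio exceeds 1 as soon as any folding is present, which is the sense in which gyrification "buys" cortical surface area without enlarging the outer envelope.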
Other Noteworthy Findings Using Various Methods

Corpus Callosum

The corpus callosum (CC) is a densely packed bundle of white-matter fibers; as the brain's central commissure, it is thought to contribute to the integration of cognitive processing across the hemispheres (Schulte & Muller-Oehring, 2010). Luders et al. (2007) reported positive associations between CC thickness and cognitive ability in adult men and women. However, two studies conducted on children and adolescents reported mainly negative correlations between total CC midsagittal area and IQ (Ganjavi et al., 2011; Luders et al., 2011). Both studies reported that males mainly drove the negative association, with one of the studies formally showing a significant gender difference in the
association between corpus callosum size and IQ (Ganjavi et al., 2011). These studies used typical age-standardized deviation IQ scores to evaluate the association with the corpus callosum. However, a large, well-conducted multi-site study found no associations between measures of regional CC thickness and intelligence using raw cognitive scores after adjusting for participants' age and intracranial volume (Westerhausen et al., 2018). That said, figures using typical deviation IQ scores were also provided, and no negative associations were noted in children or adolescents. Potential reasons for the discrepancy require further exploration and include important differences between the studies in regression procedures as well as differences in samples. Whatever may be the case, this study casts doubt on reported associations between corpus callosum size and cognitive ability after controlling for age and TBV or intracranial volume.
Cerebellum

The cerebellum's role in higher-order cognitive and emotional processes, and not just motor functions, has become increasingly apparent (Riva & Giorgi, 2000; Schmahmann, 2004). Several positive associations between cerebellar volume and IQ have been found and, although they are relatively weak, they suggest a role for the cerebellum in cognitive ability (Flashman, Andreasen, Flaum, & Swayze, 1997; Frangou et al., 2004; Paradiso, Andreasen, O'Leary, Arndt, & Robinson, 1997).
Subcortical Nuclei

Positive associations between IQ and subcortical gray-matter volume, "density," and shape have also been found using various methods. Subcortical structures that have piqued interest include the thalamus and basal ganglia. Thalamus volume has been positively associated with IQ in a sample of 122 healthy children and adolescents (Xie, Chen, & De Bellis, 2012). Using unmodulated VBM, and hence looking at "density," one study reported a negative association between IQ and caudate density but a positive association with thalamus density in a sample of 40 children and young adults (mean age ~15 years) (Frangou et al., 2004). Another study, manually counting the number of voxels within structures in native space, reported a positive association between caudate volume and IQ in a sample of 64 female and 21 male children and adolescents (mean age ~10.6 years) (Reiss, Abrams, Singer, Ross, & Denckla, 1996). A more recent study, conducted on a subsample of 303 children and adolescents from the NIH Study of Normal Brain Development (mean age 11.4 years) and using automated image intensity features with subsequent volume estimations, reported positive associations between IQ and the volume of the striatum, a subcortical nucleus comprising both the caudate and putamen
(MacDonald, Ganjavi, Collins, Evans, & Karama, 2014). In keeping with the involvement of the basal ganglia in intelligence, Burgaleta, MacDonald, et al. (2014) administered nine cognitive tests to 104 healthy adults (mean age 19.8 years). They then estimated fluid, crystallized, and spatial intelligence via confirmatory factor analysis and regressed these latent scores, vertex-wise, against subcortical shape while controlling for age, sex, and total brain volume. Fluid and spatial ability (but not crystallized ability) were positively associated with the shape of the basal ganglia, the strongest effect being a positive association between rostral putamen enlargement and cognitive performance (Burgaleta, MacDonald, et al., 2014). As only a few studies have examined associations between subcortical nuclei and IQ in large samples, further work is required before definitive statements can be made.
Structural Covariance-Based Network Neuroscience

The covariance of various brain structure volumes is amenable to graph-theoretic approaches, thus entering the domain of network neuroscience. Briefly, this involves calculating interregional or vertex-wise correlations across subjects. Various graph-theoretic measures can then be derived from the assembled networks, yielding several proxy measures of network organization and efficiency. Surface-based morphometric data have been used in this way by applying clustering approaches to structural covariance in cortical thickness, identifying functional modules from longitudinal scan data (Chen, He, Rosa-Neto, Germann, & Evans, 2008; He, Chen, & Evans, 2007; Khundrakpam et al., 2013; Lerch et al., 2006; Lo, He, & Lin, 2011). Other graph-theoretic analyses of human brain structural metrics have been applied to cortical surface area (Bassett et al., 2008; Li et al., 2017). To date, however, few studies have examined relationships between structural covariance and intelligence. Nonetheless, work on intelligence has been conducted using functional covariance-based and white matter connectivity-based networks. For instance, Dubois, Galdi, Paul, and Adolphs (2018) used a cross-validated predictive framework and were able to predict about 20% of the variance in intelligence from resting-state connectivity matrices. For more on network neuroscience in general, see Chapter 6, by Barbey, as well as Chapter 2, by Hilger and Sporns. For more on functional imaging of intelligence differences, see Chapter 12, by Basten and Fiebach. For more on white-matter networks, see Chapter 10, by Genç and Fraenz.
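A minimal sketch of the structural covariance approach (simulated data; the module structure, correlation threshold, and graph measures are all chosen purely for illustration) builds a region-by-region correlation matrix across subjects and reads off simple graph metrics:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_rois = 300, 20

# Two latent factors induce covariance between regional thickness values
# across subjects: ROIs 0-9 load on one "module", ROIs 10-19 on the other.
latents = rng.normal(size=(n_subjects, 2))
loadings = np.zeros((2, n_rois))
loadings[0, :10] = 0.8
loadings[1, 10:] = 0.8
thickness = latents @ loadings + 0.5 * rng.normal(size=(n_subjects, n_rois))

# Structural covariance network: interregional correlations, thresholded
# into a binary adjacency matrix (self-connections excluded).
corr = np.corrcoef(thickness, rowvar=False)
adjacency = (np.abs(corr) > 0.5) & ~np.eye(n_rois, dtype=bool)

degree = adjacency.sum(axis=0)                         # edges per region
density = adjacency.sum() / (n_rois * (n_rois - 1))    # fraction of possible edges
# The two simulated modules emerge as two dense blocks in the adjacency
# matrix, with few edges between them.
```

Real analyses use many more regions (or vertices), sparsity-matched thresholds, and richer metrics (clustering, path length, modularity), but the construction is the same: covariance across subjects becomes the edge weight of a network.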
Multi-metric Approaches

This chapter has detailed, one metric at a time, structural associations with intelligence. It appears most likely that no single structural metric will be able to account for intelligence differences on its own, and
225
226
s. drakulich and s. karama
approaches combining multiple metrics and modalities are much more promising. For more on this, we suggest Paul et al. (2016), Ritchie et al. (2015), and Watson et al. (2016).
Conclusion

Various avenues exist for structural MRI-based studies of intelligence. It is essential to restate the importance of methodologically appropriate preprocessing, analyses, and study design, as well as of a sufficient and appropriate sample (Button et al., 2013). The devil is in the details here. Advances in structural MRI methods are happening constantly, and with the rising availability of larger datasets acquired at higher resolutions, the methods can once again rise to the occasion. Moreover, methodological refinement enables more appropriately interpretable results, further strengthening the power of neuroimaging to probe the underlying biology and pathology. Examined as a whole, the findings converge strongly on the view that there are significant associations between brain structure and intelligence. However, considering the many contradictions in the field, few definitive statements can be made. In keeping with this, Richard Haier's three laws are apropos: (1) No story about the brain is simple; (2) No one study is definitive; and (3) It takes many years to sort out conflicting and inconsistent findings and establish a compelling weight of evidence (Haier, 2016).
Structural Brain Imaging of Intelligence

References

Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Thousand Oaks, CA: Sage Publications, Inc.
Aleman-Gomez, Y., Janssen, J., Schnack, H., Balaban, E., Pina-Camacho, L., Alfaro-Almagro, F., . . . Desco, M. (2013). The human cerebral cortex flattens during adolescence. Journal of Neuroscience, 33(38), 15004–15010.
Andreasen, N. C., Flaum, M., Swayze, V., 2nd, O'Leary, D. S., Alliger, R., Cohen, G., . . . Yuh, W. T. (1993). Intelligence and brain structure in normal individuals. American Journal of Psychiatry, 150(1), 130–134.
Ashburner, J., & Friston, K. J. (2000). Voxel-based morphometry – The methods. Neuroimage, 11(6 Pt 1), 805–821.
Aydin, K., Ucar, A., Oguz, K. K., Okur, O. O., Agayev, A., Unal, Z., Yilmaz, S., & Ozturk, C. (2007). Increased gray matter density in the parietal cortex of mathematicians: A voxel-based morphometry study. AJNR American Journal of Neuroradiology, 28(10), 1859–1864.
Bajaj, S., Raikes, A., Smith, R., Dailey, N. S., Alkozei, A., Vanuk, J. R., & Killgore, W. D. S. (2018). The relationship between general intelligence and cortical structure in healthy individuals. Neuroscience, 388, 36–44.
Bassett, D. S., Bullmore, E., Verchinski, B. A., Mattay, V. S., Weinberger, D. R., & Meyer-Lindenberg, A. (2008). Hierarchical organization of human cortical networks in health and schizophrenia. Journal of Neuroscience, 28(37), 9239–9248.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27.
Bedford, S. A., Park, M. T. M., Devenyi, G. A., Tullo, S., Germann, J., Patel, R., . . . MRC AIMS Consortium (2020). Large-scale analyses of the relationship between sex, age and intelligence quotient heterogeneity and cortical morphometry in autism spectrum disorder. Molecular Psychiatry, 25(3), 614–628.
Bjuland, K. J., Løhaugen, G. C., Martinussen, M., & Skranes, J. (2013). Cortical thickness and cognition in very-low-birth-weight late teenagers. Early Human Development, 89(6), 371–380.
Bourgeois, J. P., Goldman-Rakic, P. S., & Rakic, P. (1994). Synaptogenesis in the prefrontal cortex of rhesus monkeys. Cerebral Cortex, 4(1), 78–96.
Breslau, N., Chilcoat, H. D., Susser, E. S., Matte, T., Liang, K.-Y., & Peterson, E. L. (2001). Stability and change in children's intelligence quotient scores: A comparison of two socioeconomically disparate communities. American Journal of Epidemiology, 154(8), 711–717.
Brouwer, R. M., Hedman, A. M., van Haren, N. E. M., Schnack, H. G., Brans, R. G. H., Smit, D. J. A., . . . Hulshoff Pol, H. E. (2014). Heritability of brain volume change and its relation to intelligence. Neuroimage, 100, 676–683.
Budde, J., Shajan, G., Scheffler, K., & Pohmann, R. (2014). Ultra-high resolution imaging of the human brain using acquisition-weighted imaging at 9.4T. Neuroimage, 86, 592–598.
Burgaleta, M., Johnson, W., Waber, D. P., Colom, R., & Karama, S. (2014). Cognitive ability changes and dynamics of cortical thickness development in healthy children and adolescents. Neuroimage, 84, 810–819.
Burgaleta, M., MacDonald, P. A., Martínez, K., Román, F. J., Álvarez-Linera, J., Ramos González, A., . . . Colom, R. (2014). Subcortical regional morphology correlates with fluid and spatial intelligence. Human Brain Mapping, 35(5), 1957–1968.
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.
Chance, S. A., Casanova, M. F., Switala, A. E., & Crow, T. J. (2008). Auditory cortex asymmetry, altered minicolumn spacing and absence of ageing effects in schizophrenia. Brain, 131(Pt 12), 3178–3192.
Chen, Z. J., He, Y., Rosa-Neto, P., Germann, J., & Evans, A. C. (2008). Revealing modular architecture of human brain structural networks by using cortical thickness from MRI. Cerebral Cortex, 18(10), 2374–2381.
Chklovskii, D. B., Mel, B. W., & Svoboda, K. (2004). Cortical rewiring and information storage. Nature, 431(7010), 782–788.
Choi, Y. Y., Shamosh, N. A., Cho, S. H., DeYoung, C. G., Lee, M. J., Lee, J. M., . . . Lee, K. H. (2008). Multiple bases of human intelligence revealed by cortical thickness and neural activation. Journal of Neuroscience, 28(41), 10323–10329.
Cocosco, C. A., Zijdenbos, A. P., & Evans, A. C. (2003). A fully automatic and robust brain MRI tissue classification method. Medical Image Analysis, 7(4), 513–527.
Collins, D. L., Neelin, P., Peters, T. M., & Evans, A. C. (1994). Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. Journal of Computer Assisted Tomography, 18(2), 192–205.
Colom, R., Burgaleta, M., Román, F. J., Karama, S., Alvarez-Linera, J., Abad, F. J., . . . Haier, R. J. (2013). Neuroanatomic overlap between intelligence and cognitive factors: Morphometry methods provide support for the key role of the frontal lobes. Neuroimage, 72, 143–152.
Colom, R., Haier, R. J., Head, K., Álvarez-Linera, J., Quiroga, M. Á., Shih, P. C., & Jung, R. E. (2009). Gray matter correlates of fluid, crystallized, and spatial intelligence: Testing the P-FIT model. Intelligence, 37(2), 124–135.
Colom, R., Jung, R. E., & Haier, R. J. (2006). Distributed brain sites for the g-factor of intelligence. Neuroimage, 31(3), 1359–1365.
DeFelipe, J. (2011). The evolution of the brain, the human nature of cortical circuits, and intellectual creativity. Frontiers in Neuroanatomy, 5, 29.
Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1756), 20170284.
Ducharme, S., Albaugh, M. D., Nguyen, T. V., Hudziak, J. J., Mateos-Perez, J. M., Labbe, A., . . . Brain Development Cooperative Group (2016). Trajectories of cortical thickness maturation in normal brain development – The importance of quality control procedures. Neuroimage, 125, 267–279.
Eickhoff, S. B., Constable, R. T., & Yeo, B. T. T. (2018). Topographic organization of the cerebral cortex and brain cartography. Neuroimage, 170, 332–347.
Escorial, S., Román, F. J., Martínez, K., Burgaleta, M., Karama, S., & Colom, R. (2015). Sex differences in neocortical structure and cognitive performance: A surface-based morphometry study. Neuroimage, 104, 355–365.
Estrada, E., Ferrer, E., Román, F. J., Karama, S., & Colom, R. (2019). Time-lagged associations between cognitive and cortical development from childhood to early adulthood. Developmental Psychology, 55(6), 1338–1352.
Evans, A. C., & Brain Development Cooperative Group (2006). The NIH MRI study of normal brain development. Neuroimage, 30(1), 184–202.
Evans, A. C., Janke, A. L., Collins, D. L., & Baillet, S. (2012). Brain templates and atlases. Neuroimage, 62(2), 911–922.
Fjell, A. M., Westlye, L. T., Amlien, I., Tamnes, C. K., Grydeland, H., Engvig, A., . . . Walhovd, K. B. (2015). High-expanding cortical regions in human development and evolution are related to higher intellectual abilities. Cerebral Cortex, 25(1), 26–34.
Flashman, L. A., Andreasen, N. C., Flaum, M., & Swayze, V. W. (1997). Intelligence and regional brain volumes in normal controls. Intelligence, 25(3), 149–160.
Frangou, S., Chitins, X., & Williams, S. C. (2004). Mapping IQ and gray matter density in healthy young people. Neuroimage, 23(3), 800–805.
Ganjavi, H., Lewis, J. D., Bellec, P., MacDonald, P. A., Waber, D. P., Evans, A. C., . . . Brain Development Cooperative Group (2011). Negative associations between corpus callosum midsagittal area and IQ in a representative sample of healthy children and adolescents. PLoS One, 6(5), e19698.
Gautam, P., Anstey, K. J., Wen, W., Sachdev, P. S., & Cherbuin, N. (2015). Cortical gyrification and its relationships with cortical volume, cortical thickness, and cognitive performance in healthy mid-life adults. Behavioural Brain Research, 287, 331–339.
Goh, S., Bansal, R., Xu, D., Hao, X., Liu, J., & Peterson, B. S. (2011). Neuroanatomical correlates of intellectual ability across the life span. Developmental Cognitive Neuroscience, 1(3), 305–312.
Good, C. D., Johnsrude, I. S., Ashburner, J., Henson, R. N., Friston, K. J., & Frackowiak, R. S. (2001). A voxel-based morphometric study of ageing in 465 normal adult human brains. Neuroimage, 14(1 Pt 1), 21–36.
Green, S., Blackmon, K., Thesen, T., DuBois, J., Wang, X., Halgren, E., & Devinsky, O. (2018). Parieto-frontal gyrification and working memory in healthy adults. Brain Imaging and Behavior, 12(2), 303–308.
Gregory, M. D., Kippenhan, J. S., Dickinson, D., Carrasco, J., Mattay, V. S., Weinberger, D. R., & Berman, K. F. (2016). Regional variations in brain gyrification are associated with general cognitive ability in humans. Current Biology, 26(10), 1301–1305.
Gur, R. C., Turetsky, B. I., Matsui, M., Yan, M., Bilker, W., Hughett, P., & Gur, R. E. (1999). Sex differences in brain gray and white matter in healthy young adults: Correlations with cognitive performance. Journal of Neuroscience, 19(10), 4065–4072.
Haier, R. J. (2016). The neuroscience of intelligence. Cambridge University Press.
Haier, R. J., Jung, R. E., Yeo, R. A., Head, K., & Alkire, M. T. (2004). Structural brain variation and general intelligence. Neuroimage, 23(1), 425–433.
Haier, R. J., Jung, R. E., Yeo, R. A., Head, K., & Alkire, M. T. (2005). The neuroanatomy of general intelligence: Sex matters. Neuroimage, 25(1), 320–327.
Haier, R. J., Karama, S., Colom, R., Jung, R., & Johnson, W. (2014). Yes, but flaws remain. Intelligence, 46, 341–344.
He, Y., Chen, Z. J., & Evans, A. C. (2007). Small-world anatomical networks in the human brain revealed by cortical thickness from MRI. Cerebral Cortex, 17(10), 2407–2419.
Hogstrom, L. J., Westlye, L. T., Walhovd, K. B., & Fjell, A. M. (2013). The structure of the cerebral cortex across adult life: Age-related patterns of surface area, thickness, and gyrification. Cerebral Cortex, 23(11), 2521–2530.
Huttenlocher, P. R. (1990). Morphometric study of human cerebral cortex development. Neuropsychologia, 28(6), 517–527.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Kabani, N., Le Goualher, G., MacDonald, D., & Evans, A. C. (2001). Measurement of cortical thickness using an automated 3-D algorithm: A validation study. Neuroimage, 13(2), 375–380.
Karama, S., Ad-Dab'bagh, Y., Haier, R. J., Deary, I. J., Lyttelton, O. C., Lepage, C., . . . Brain Development Cooperative Group (2009). Positive association between cognitive ability and cortical thickness in a representative US sample of healthy 6 to 18 year-olds. Intelligence, 37(2), 145–155.
Karama, S., Bastin, M. E., Murray, C., Royle, N. A., Penke, L., Muñoz Maniega, S., . . . Deary, I. J. (2014). Childhood cognitive ability accounts for associations between cognitive ability and brain cortical thickness in old age. Molecular Psychiatry, 19(5), 555–559.
Karama, S., Colom, R., Johnson, W., Deary, I. J., Haier, R., Waber, D. P., . . . Brain Development Cooperative Group (2011). Cortical thickness correlates of specific cognitive performance accounted for by the general factor of intelligence in healthy children aged 6 to 18. Neuroimage, 55(4), 1443–1453.
Kennedy, D. N., Lange, N., Makris, N., Bates, J., Meyer, J., & Caviness, V. S., Jr. (1998). Gyri of the human neocortex: An MRI-based analysis of volume and variance. Cerebral Cortex, 8(4), 372–384.
Khundrakpam, B. S., Reid, A., Brauer, J., Carbonell, F., Lewis, J., Ameis, S., . . . Brain Development Cooperative Group (2013). Developmental changes in organization of structural brain networks. Cerebral Cortex, 23(9), 2072–2085.
Kim, J. S., Singh, V., Lee, J. K., Lerch, J., Ad-Dab'bagh, Y., MacDonald, D., . . . Evans, A. C. (2005). Automated 3-D extraction and evaluation of the inner and outer cortical surfaces using a Laplacian map and partial volume effect classification. Neuroimage, 27(1), 210–221.
la Fougere, C., Grant, S., Kostikov, A., Schirrmacher, R., Gravel, P., Schipper, H. M., . . . Thiel, A. (2011). Where in-vivo imaging meets cytoarchitectonics: The relationship between cortical thickness and neuronal density measured with high-resolution [18F]flumazenil-PET. Neuroimage, 56(3), 951–960.
Lemaitre, H., Goldman, A. L., Sambataro, F., Verchinski, B. A., Meyer-Lindenberg, A., Weinberger, D. R., & Mattay, V. S. (2012). Normal age-related brain morphometric changes: Nonuniformity across cortical thickness, surface area and gray matter volume? Neurobiology of Aging, 33(3), 617.e1–617.e9.
Lenroot, R. K., Gogtay, N., Greenstein, D. K., Wells, E. M., Wallace, G. L., Clasen, L. S., . . . Giedd, J. N. (2007). Sexual dimorphism of brain developmental trajectories during childhood and adolescence. Neuroimage, 36(4), 1065–1073.
Lerch, J. P., & Evans, A. C. (2005). Cortical thickness analysis examined through power analysis and a population simulation. Neuroimage, 24(1), 163–173.
Lerch, J. P., Worsley, K., Shaw, W. P., Greenstein, D. K., Lenroot, R. K., Giedd, J., & Evans, A. C. (2006). Mapping anatomical correlations across cerebral cortex (MACACC) using cortical thickness from MRI. Neuroimage, 31(3), 993–1003.
Li, W., Yang, C., Shi, F., Wu, S., Wang, Q., Nie, Y., & Zhang, X. (2017). Construction of individual morphological brain networks with multiple morphometric features. Frontiers in Neuroanatomy, 11, 34.
Lo, C. Y., He, Y., & Lin, C. P. (2011). Graph theoretical analysis of human brain structural networks. Reviews in the Neurosciences, 22(5), 551–563.
Luders, E., Narr, K. L., Bilder, R. M., Thompson, P. M., Szeszko, P. R., Hamilton, L., & Toga, A. W. (2007). Positive correlations between corpus callosum thickness and intelligence. Neuroimage, 37(4), 1457–1464.
Luders, E., Narr, K. L., Thompson, P. M., & Toga, A. W. (2009). Neuroanatomical correlates of intelligence. Intelligence, 37(2), 156–163.
Luders, E., Thompson, P. M., Narr, K. L., Zamanyan, A., Chou, Y. Y., Gutman, B., . . . Toga, A. W. (2011). The link between callosal thickness and intelligence in healthy children and adolescents. Neuroimage, 54(3), 1823–1830.
Lyttelton, O. C., Karama, S., Ad-Dab'bagh, Y., Zatorre, R. J., Carbonell, F., Worsley, K., & Evans, A. C. (2009). Positional and surface area asymmetry of the human cerebral cortex. Neuroimage, 46(4), 895–903.
MacDonald, D., Kabani, N., Avis, D., & Evans, A. C. (2000). Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI. Neuroimage, 12(3), 340–356.
MacDonald, P. A., Ganjavi, H., Collins, D. L., Evans, A. C., & Karama, S. (2014). Investigating the relation between striatal volume and IQ. Brain Imaging and Behavior, 8(1), 52–59.
McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346.
Menary, K., Collins, P. F., Porter, J. N., Muetzel, R., Olson, E. A., Kumar, V., . . . Luciana, M. (2013). Associations between cortical thickness and general intelligence in children, adolescents and young adults. Intelligence, 41(5), 597–606.
Modroño, C., Navarrete, G., Nicolle, A., González-Mora, J. L., Smith, K. W., Marling, M., & Goel, V. (2019). Developmental grey matter changes in superior parietal cortex accompany improved transitive reasoning. Thinking & Reasoning, 25(2), 151–170.
Moffitt, T. E., Caspi, A., Harkness, A. R., & Silva, P. A. (1993). The natural history of change in intellectual performance: Who changes? How much? Is it meaningful? Journal of Child Psychology and Psychiatry, 34(4), 455–506.
Narr, K. L., Woods, R. P., Thompson, P. M., Szeszko, P., Robinson, D., Dimtcheva, T., . . . Bilder, R. M. (2007). Relationships between IQ and regional cortical gray matter thickness in healthy adults. Cerebral Cortex, 17(9), 2163–2171.
Panizzon, M. S., Fennema-Notestine, C., Eyler, L. T., Jernigan, T. L., Prom-Wormley, E., Neale, M., . . . Kremen, W. S. (2009). Distinct genetic influences on cortical surface area and cortical thickness. Cerebral Cortex, 19(11), 2728–2735.
Paradiso, S., Andreasen, N. C., O'Leary, D. S., Arndt, S., & Robinson, R. G. (1997). Cerebellar size and cognition: Correlations with IQ, verbal memory and motor dexterity. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 10(1), 1–8.
Paul, E. J., Larsen, R. J., Nikolaidis, A., Ward, N., Hillman, C. H., Cohen, N. J., . . . Barbey, A. K. (2016). Dissociable brain biomarkers of fluid intelligence. Neuroimage, 137, 201–211.
Paus, T., Zijdenbos, A., Worsley, K., Collins, D. L., Blumenthal, J., Giedd, J. N., . . . Evans, A. C. (1999). Structural maturation of neural pathways in children and adolescents: In vivo study. Science, 283(5409), 1908–1911.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews, 57, 411–432.
Rakic, P. (1988). Specification of cerebral cortical areas. Science, 241(4862), 170–176.
Raznahan, A., Shaw, P., Lalonde, F., Stockman, M., Wallace, G. L., Greenstein, D., . . . Giedd, J. N. (2011). How does your cortex grow? Journal of Neuroscience, 31(19), 7174–7177.
Regis, J., Mangin, J. F., Ochiai, T., Frouin, V., Riviere, D., Cachia, A., . . . Samson, Y. (2005). "Sulcal root" generic model: A hypothesis to overcome the variability of the human cortex folding patterns. Neurologia Medico-Chirurgica (Tokyo), 45(1), 1–17.
Reiss, A. L., Abrams, M. T., Singer, H. S., Ross, J. L., & Denckla, M. B. (1996). Brain development, gender and IQ in children: A volumetric imaging study. Brain, 119(Pt 5), 1763–1774.
Reuter, M., Tisdall, M. D., Qureshi, A., Buckner, R. L., van der Kouwe, A. J. W., & Fischl, B. (2015). Head motion during MRI acquisition reduces gray matter volume and thickness estimates. Neuroimage, 107, 107–115.
Riahi, F., Zijdenbos, A., Narayanan, S., Arnold, D., Francis, G., Antel, J., & Evans, A. C. (1998). Improved correlation between scores on the expanded disability status scale and cerebral lesion load in relapsing-remitting multiple sclerosis: Results of the application of new imaging methods. Brain, 121(Pt 7), 1305–1312.
Richman, D. P., Stewart, R. M., Hutchinson, J. W., & Caviness, V. S., Jr. (1975). Mechanical model of brain convolutional development. Science, 189(4196), 18–21.
Rilling, J. K., & Insel, T. R. (1999). The primate neocortex in comparative perspective using magnetic resonance imaging. Journal of Human Evolution, 37(2), 191–223.
Ritchie, S. J., Booth, T., Valdes Hernandez, M. D., Corley, J., Maniega, S. M., Gow, A. J., . . . Deary, I. J. (2015). Beyond a bigger brain: Multivariable structural brain imaging and intelligence. Intelligence, 51, 47–56.
Riva, D., & Giorgi, C. (2000). The cerebellum contributes to higher functions during development: Evidence from a series of children surgically treated for posterior fossa tumours. Brain, 123(5), 1051–1061.
Román, F. J., Morillo, D., Estrada, E., Escorial, S., Karama, S., & Colom, R. (2018). Brain-intelligence relationships across childhood and adolescence: A latent-variable approach. Intelligence, 68, 21–29.
Roth, G., & Dicke, U. (2005). Evolution of the brain and intelligence. Trends in Cognitive Sciences, 9(5), 250–257.
Rushton, J. P., & Ankney, C. D. (2009). Whole brain size and general mental ability: A review. International Journal of Neuroscience, 119(5), 691–731.
Sanabria-Diaz, G., Melie-Garcia, L., Iturria-Medina, Y., Aleman-Gomez, Y., Hernandez-Gonzalez, G., Valdes-Urrutia, L., . . . Valdes-Sosa, P. (2010). Surface area and cortical thickness descriptors reveal different attributes of the structural human brain networks. Neuroimage, 50(4), 1497–1510.
Schmahmann, J. D. (2004). Disorders of the cerebellum: Ataxia, dysmetria of thought, and the cerebellar cognitive affective syndrome. The Journal of Neuropsychiatry and Clinical Neurosciences, 16(3), 367–378.
Schmitt, J. E., Neale, M. C., Clasen, L. S., Liu, S., Seidlitz, J., Pritikin, J. N., . . . Raznahan, A. (2019). A comprehensive quantitative genetic analysis of cerebral surface area in youth. Journal of Neuroscience, 39(16), 3028–3040.
Schmitt, J. E., Raznahan, A., Clasen, L. S., Wallace, G. L., Pritikin, J. N., Lee, N. R., . . . Neale, M. C. (2019). The dynamic associations between cortical thickness and general intelligence are genetically mediated. Cerebral Cortex, 29(11), 4743–4752.
Schoenemann, P. T., Budinger, T. F., Sarich, V. M., & Wang, W. S. Y. (2000). Brain size does not predict general cognitive ability within families. Proceedings of the National Academy of Sciences, 97(9), 4932–4937.
Schulte, T., & Muller-Oehring, E. M. (2010). Contribution of callosal connections to the interhemispheric integration of visuomotor and cognitive processes. Neuropsychology Review, 20(2), 174–190.
Shaw, P., Greenstein, D., Lerch, J., Clasen, L., Lenroot, R., Gogtay, N., . . . Giedd, J. (2006). Intellectual ability and cortical development in children and adolescents. Nature, 440(7084), 676–679.
Sowell, E. R., Thompson, P. M., Leonard, C. M., Welcome, S. E., Kan, E., & Toga, A. W. (2004). Longitudinal mapping of cortical thickness and brain growth in normal children. Journal of Neuroscience, 24(38), 8223.
Stonnington, C. M., Tan, G., Klöppel, S., Chu, C., Draganski, B., Jack, C. R., Jr., . . . Frackowiak, R. S. (2008). Interpreting scan data acquired from multiple scanners: A study with Alzheimer's disease. Neuroimage, 39(3), 1180–1185.
Storsve, A. B., Fjell, A. M., Tamnes, C. K., Westlye, L. T., Overbye, K., Aasland, H. W., & Walhovd, K. B. (2014). Differential longitudinal changes in cortical thickness, surface area and volume across the adult life span: Regions of accelerating and decelerating change. Journal of Neuroscience, 34(25), 8488–8498.
Stucht, D., Danishad, K. A., Schulze, P., Godenschweger, F., Zaitsev, M., & Speck, O. (2015). Highest resolution in vivo human brain MRI using prospective motion correction. PLoS One, 10(7), e0133921.
Sur, M., & Rubenstein, J. L. (2005). Patterning and plasticity of the cerebral cortex. Science, 310(5749), 805–810.
Tadayon, E., Pascual-Leone, A., & Santarnecchi, E. (2019). Differential contribution of cortical thickness, surface area, and gyrification to fluid and crystallized intelligence. Cerebral Cortex, 30(1).
Tamnes, C. K., Fjell, A. M., Østby, Y., Westlye, L. T., Due-Tønnessen, P., Bjørnerud, A., & Walhovd, K. B. (2011). The brain dynamics of intellectual development: Waxing and waning white and gray matter. Neuropsychologia, 49(13), 3605–3611.
Thompson, P. M., Hayashi, K. M., Dutton, R. A., Chiang, M.-C., Leow, A. D., Sowell, E. R., . . . Toga, A. W. (2007). Tracking Alzheimer's disease. Annals of the New York Academy of Sciences, 1097, 183–214.
Thompson, P. (2020). ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries. Biological Psychiatry, 87(9, Suppl), S56.
Turin, G. (1960). An introduction to matched filters. IRE Transactions on Information Theory, 6(3), 311–329.
Van Essen, D. C. (2005). A population-average, landmark- and surface-based (PALS) atlas of human cerebral cortex. Neuroimage, 28(3), 635–662.
Vuoksimaa, E., Panizzon, M. S., Chen, C.-H., Fiecas, M., Eyler, L. T., Fennema-Notestine, C., . . . Kremen, W. S. (2015). The genetic association between neocortical volume and general cognitive ability is driven by global surface area rather than thickness. Cerebral Cortex, 25(8), 2127–2137.
Watson, P. D., Paul, E. J., Cooke, G. E., Ward, N., Monti, J. M., Horecka, K. M., . . . Barbey, A. K. (2016). Underlying sources of cognitive-anatomical variation in multi-modal neuroimaging and cognitive testing. Neuroimage, 129, 439–449.
Westerhausen, R., Friesen, C. M., Rohani, D. A., Krogsrud, S. K., Tamnes, C. K., Skranes, J. S., . . . Walhovd, K. B. (2018). The corpus callosum as anatomical marker of intelligence? A critical examination in a large-scale developmental study. Brain Structure and Function, 223(1), 285–296.
Westlye, L. T., Walhovd, K. B., Dale, A. M., Bjørnerud, A., Due-Tønnessen, P., Engvig, A., . . . Fjell, A. M. (2009). Life-span changes of the human brain white matter: Diffusion tensor imaging (DTI) and volumetry. Cerebral Cortex, 20(9), 2055–2068.
Wickett, J. C., Vernon, P. A., & Lee, D. H. (2000). Relationships between factors of intelligence and brain volume. Personality and Individual Differences, 29(6), 1095–1122.
Winkler, A. M., Kochunov, P., Blangero, J., Almasy, L., Zilles, K., Fox, P. T., . . . Glahn, D. C. (2010). Cortical thickness or grey matter volume? The importance of selecting the phenotype for imaging genetics studies. Neuroimage, 53(3), 1135–1146.
Winkler, A. M., Sabuncu, M. R., Yeo, B. T., Fischl, B., Greve, D. N., Kochunov, P., . . . Glahn, D. C. (2012). Measuring and comparing brain cortical surface area and other areal quantities. Neuroimage, 61(4), 1428–1443.
Worsley, K. J., Marrett, S., Neelin, P., Vandal, A. C., Friston, K. J., & Evans, A. C. (1996). A unified statistical approach for determining significant signals in images of cerebral activation. Human Brain Mapping, 4(1), 58–73.
Xie, Y., Chen, Y. A., & De Bellis, M. D. (2012). The relationship of age, gender, and IQ with the brainstem and thalamus in healthy children and adolescents: A magnetic resonance imaging volumetric study. Journal of Child Neurology, 27(3), 325–331.
Zatorre, R. J., Fields, R. D., & Johansen-Berg, H. (2012). Plasticity in gray and white: Neuroimaging changes in brain structure during learning. Nature Neuroscience, 15(4), 528–536.
Zijdenbos, A. P., Lerch, J. P., Bedell, B. J., & Evans, A. C. (2005). Brain imaging in drug R&D. Biomarkers, 10(Suppl 1), S58–S68.
Zilles, K., Armstrong, E., Schleicher, A., & Kretschmann, H. J. (1988). The human pattern of gyrification in the cerebral cortex. Anatomy and Embryology (Berlin), 179(2), 173–179.
12 Functional Brain Imaging of Intelligence

Ulrike Basten and Christian J. Fiebach

Functional brain imaging studies of intelligence have tackled the following questions: What happens in our brains when we solve tasks from an intelligence test? And are there differences between people? Do people with higher scores on an intelligence test show different patterns of brain activation while working on cognitive tasks than people with lower scores? Answering these questions can contribute to improving our understanding of the biological bases of intelligence. To investigate these questions, researchers have used different methods for quantifying patterns of brain activation changes and their association with cognitive processing – including electroencephalography (EEG), positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). The results of this research allow us to delineate those parts of the brain that are important for intelligence – either in the sense that they are activated when people solve tasks commonly used to test intelligence or in the sense that functional differences in these regions are associated with individual differences in intelligence.

From the fact that some of our abilities – like our abilities to see, hear, feel, and move – can quite specifically be traced back to the contributions of distinct brain regions (namely the visual, auditory, somatosensory, and motor cortex), one might derive the expectation that there must be another part of the brain responsible for higher cognitive functioning and intelligence. But, as the following review will show, there is no single "seat" of intelligence in our brain. Instead, intelligence is associated with a distributed set of brain regions.
To study the neural basis of human intelligence, functional neuroimaging studies have used two different approaches, which we have previously described as the task approach and the individual differences approach (see Basten, Hilger, & Fiebach, 2015). The task approach seeks to identify brain regions activated when people work on intelligence-related tasks like those used in psychometric tests of intelligence. These tasks may actually be taken from established intelligence tests or be designed to closely resemble such tasks. Typically, studies following the task approach report the mean task-induced brain activation for a whole group of study participants, ignoring individual differences in brain activation and intelligence. The individual differences approach, on the other hand, explores which regions of the brain show differences in activation between persons with different degrees of general cognitive ability as assessed
with an intelligence test. It identifies brain regions in which activation strength covaries with intelligence.

In this chapter, we present central findings from both approaches with a focus on recent evidence and the current state of knowledge. We also discuss factors that may moderate the association between intelligence and brain activation as studied in the individual differences approach. Finally, we compare the results of the two approaches, critically reflect on the insights that have so far been gained with functional neuroimaging, and outline important topics for future research.

Most neuroimaging studies focused their investigation on a general factor of intelligence g (sensu Spearman, 1904) or fluid intelligence gf (sensu Cattell, 1963) and used, as their measure of intelligence, established tests of reasoning (e.g., matrix reasoning tests like Raven's Progressive Matrices, RPM) or sum scores from tests with many different cognitive tasks (i.e., full-scale intelligence tests like the Wechsler Adult Intelligence Scales, WAIS). Where we use the term intelligence without further specification in this chapter, we refer to this broad conceptualization of fluid general intelligence.

Positron emission tomography (PET) was one of the first methods capable of in vivo localization of brain activation during the performance of cognitive tasks. It visualizes changes in regional cerebral blood flow resulting from localized neuronal activity, by means of a weak radioactive tracer injected into the bloodstream. Functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS) measure a closely related biological signal, namely changes in the oxygenation of the blood supplied to the brain following localized neuronal activity, which fMRI captures as the blood oxygenation level dependent (BOLD) contrast. However, fMRI allows for a better spatial localization of brain activation differences than fNIRS.
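The difference between the two approaches can be made concrete with a toy simulation: the task approach averages activation over subjects, whereas the individual differences approach correlates activation with intelligence scores across subjects, voxel by voxel. All numbers below are invented for illustration and stand in for group-level statistical maps, not for a real preprocessing pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_voxels = 60, 1000
iq = rng.normal(100, 15, size=n_subj)  # simulated intelligence test scores

# Simulated task activation: voxels 0-99 activate in everyone (pure task effect),
# voxels 100-199 activate more strongly the higher a subject's IQ
activation = rng.normal(size=(n_subj, n_voxels))
activation[:, :100] += 1.0
activation[:, 100:200] += 0.05 * (iq[:, None] - 100)

# Task approach: average over subjects, ignoring individual differences
group_mean = activation.mean(axis=0)
task_voxels = np.flatnonzero(group_mean > 0.5)

# Individual-differences approach: per-voxel Pearson correlation with IQ across subjects
z_act = (activation - activation.mean(axis=0)) / activation.std(axis=0)
z_iq = (iq - iq.mean()) / iq.std()
r = z_act.T @ z_iq / n_subj
idiff_voxels = np.flatnonzero(np.abs(r) > 0.5)

print(len(task_voxels), len(idiff_voxels))
```

Note that the two analyses pick out largely non-overlapping voxel sets in this simulation: a region can be strongly activated on average without its activation tracking intelligence, and vice versa, which is why the two approaches can yield different maps.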
The electroencephalogram (EEG) provides a much better temporal resolution, which makes it attractive for cognitive studies despite its lack of high spatial resolution. Most EEG work in the field of intelligence research has measured the event-related desynchronization (ERD) of brain activity, usually in the EEG alpha frequency band (approximately 8–13 Hz), which is typically observed when people concentrate on solving a cognitively demanding task (e.g., Neuper, Grabner, Fink, & Neubauer, 2005; Pfurtscheller & Aranibar, 1977).
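As an illustration, ERD is conventionally expressed as the percentage decrease in band power during a task interval relative to a preceding reference (rest) interval. The following sketch is purely illustrative; the function name and the toy values are our own:

```python
import numpy as np

def erd_percent(reference_power, task_power):
    """Event-related desynchronization (ERD%): the percentage
    decrease in band power during a task interval relative to a
    preceding reference (rest) interval.  Positive values indicate
    desynchronization, i.e., a task-related power decrease."""
    reference_power = np.asarray(reference_power, dtype=float)
    task_power = np.asarray(task_power, dtype=float)
    return (reference_power - task_power) / reference_power * 100.0

# Toy example (invented values): alpha-band power drops from
# 20 to 12 (arbitrary units) while a participant solves a task.
print(float(erd_percent(20.0, 12.0)))  # -> 40.0
```

In practice, band power would first be estimated from the EEG signal (e.g., per trial and channel) before entering such a comparison.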
Brain Regions Involved in Processing Intelligence-related Tasks: Can We Track Down Intelligence in the Brain?

Many early studies using functional imaging to identify brain regions relevant for intelligence used the task approach, with the aim of identifying brain regions activated while study participants were solving cognitive tasks like those used in intelligence tests (e.g., Ghatan et al., 1995; Goel, Gold, Kapur, & Houle, 1998; Haier et al., 1988; Prabhakaran, Rypma, & Gabrieli,
Functional Brain Imaging of Intelligence
2001; Prabhakaran, Smith, Desmond, Glover, & Gabrieli, 1997). In these studies, researchers observed increased activation in brain regions known to be activated also during other cognitive demands (such as attention or working memory), including the lateral and medial frontal cortex as well as the parietal and insular cortex. To isolate a neural correlate of general cognitive ability in the sense of Spearman’s general intelligence factor g, which is assumed to be involved in all cognitive tasks independent of task-specific factors, Duncan et al. (2000) used PET to measure brain activation while participants performed three different tasks strongly depending on the g factor – a spatial, a verbal, and a perceptuo-motor task. Overlapping activation was found in three prefrontal brain regions, i.e., dorsolateral prefrontal cortex (DLPFC), ventrolateral prefrontal cortex (VLPFC; extending along the frontal operculum to the anterior insula), and the dorsal anterior cingulate cortex (ACC). Based on this PET study as well as on brain lesion studies also pointing to a pivotal role of the prefrontal cortex (PFC) for higher-order cognitive functioning (e.g., Duncan, Burgess, & Emslie, 1995), Duncan argued that functions of the PFC may be particularly central to general intelligence (Duncan, 1995, 2005; Duncan, Emslie, Williams, Johnson, & Freer, 1996). Yet, with evidence from other functional neuroimaging studies on intelligence, it became apparent that, while the PFC is indeed important for higher cognitive functions, intelligence-related tasks often also activate parts of the parietal cortex as well as sensory cortices in the occipital and temporal lobes (e.g., Esposito, Kirkby, Van Horn, Ellmore, & Berman, 1999; Ghatan et al., 1995; Goel & Dolan, 2001; Knauff, Mulack, Kassubek, Salih, & Greenlee, 2002). 
A systematic review of the brain imaging literature available in 2007 led to the formulation of the parieto-frontal integration theory of intelligence (P-FIT; Jung & Haier, 2007). This influential model conceptualizes intelligence as the product of the interaction among a set of distributed brain regions, primarily comprising parts of the frontal and parietal cortices. Duncan (2010) subsumes roughly the same prefrontal and parietal regions under the term multiple-demand (MD) system, which he conceptualizes as a system of general-purpose brain regions recruited by a variety of cognitive demands and which is proposed to interact flexibly with more specialized perceptual and cognitive systems (for support from lesion studies, see Barbey et al., 2012; Barbey, Colom, & Grafman, 2013; Barbey, Colom, Paul, & Grafman, 2014; Gläscher et al., 2010; Woolgar et al., 2010; Woolgar, Duncan, Manes, & Fedorenko, 2018). Furthermore, the P-FIT and MD regions largely resemble what in other, not intelligence-related contexts is also referred to as the (attention and) working memory system (Cabeza & Nyberg, 2000), the cognitive control network (Cole & Schneider, 2007), or – most generally – the task-positive network (Fox et al., 2005). Intelligence-related cognitive tasks thus activate a relatively broad and rather unspecific brain network involved in the processing of a number of higher cognitive challenges, ranging from working
Figure 12.1 Brain activation associated with the processing of intelligence-related tasks, showing the results of the meta-analysis conducted by Santarnecchi, Emmendorfer, and Pascual-Leone (2017). The brain regions marked with color were consistently activated across studies while study participants were solving tasks as they are used in common tests of intelligence. Produced using the brain map resulting from the ALE meta-analysis conducted by Santarnecchi, Emmendorfer, and Pascual-Leone (2017) and made available at www.tmslab.com/santalab.php. Figure available at: https://github.com/fiebachlab/figures under a CC-BY license
memory maintenance and manipulation (e.g., see figure 1A in Basten, Stelzel, & Fiebach, 2012) to inhibitory control, as reflected for example in the Stroop task (e.g., see figure 2 in Basten, Stelzel, & Fiebach, 2011); see also Niendam et al. (2012) for meta-analytic evidence on common activation patterns across cognitive control demands.

The current state of findings from the task approach in the functional neuroimaging of intelligence is comprehensively summarized by a recent meta-analysis of 35 fMRI and PET studies (Santarnecchi, Emmendorfer, & Pascual-Leone, 2017; see Figure 12.1). This meta-analysis quantitatively establishes the previously proposed convergence across studies for the frontal cortex (where 74% of the brain sites consistently activated across studies were
Figure 12.2 Intelligence-related differences in brain activation during cognitive processing. (A) Results of the meta-analysis conducted by Basten et al. (2015). The brain regions marked with color consistently showed intelligence-related differences in brain activation across studies. Blue–green: negative associations; red–yellow: positive associations. IFG, inferior frontal gyrus; IFJ, inferior frontal junction; IPL, inferior parietal lobule; IPS, intraparietal sulcus; MFG, middle frontal gyrus; MTG, middle temporal gyrus; SFS, superior frontal sulcus. Reproduced and adapted from another illustration of the same results published in Basten et al. (2015). (B) Graphic summary of where original studies found negative (–) or positive (+) associations between intelligence and brain activation. ACC, anterior cingulate cortex; PCC, posterior cingulate cortex; PFC, prefrontal cortex; Precun, precuneus; (pre)SMA, pre-supplementary motor area. Figure available at: https://github.com/fiebachlab/figures/ under a CC-BY license
located) and the parietal lobes (13%). To a much lesser extent, convergence across activation studies was also observed in occipital regions (3%). Notably, this meta-analysis extends previous reviews – like the one resulting in the P-FIT model (Jung & Haier, 2007) – by also linking the insula and subcortical structures like the thalamus and basal ganglia (globus pallidus, putamen) to
the processing of intelligence-related tasks. While activation was in principle bilateral, left hemisphere activation was more dominant (63% as compared to 37% of brain sites activated across studies). This left-dominance was mainly due to a left-lateralization of inferior and middle frontal activation. A subset of regions from this distributed network, i.e., the left inferior frontal lobe and the left frontal eye fields, the bilateral anterior cingulate cortex, and the bilateral temporo-occipital cortex, was more strongly activated the more difficult the tasks were. Santarnecchi, Emmendorfer, and Pascual-Leone (2017) further investigated whether there were dissociable correlates for different component processes of intelligent performance or for different task materials. Clear differences were observed for the component processes “rule inference” and “rule application”: While inferring rules (sub-analysis of eight studies) recruited left prefrontal and bilateral parietal regions, applying known rules (sub-analysis of six studies) relied on activity in subcortical structures (thalamus and caudate nuclei) as well as right prefrontal and temporal cortices. Furthermore, verbal tasks (sub-analysis of 22 studies) were associated with more left-lateralized activation involving inferior frontal and anterior cingulate areas, whereas visuospatial tasks (14 studies) were characterized by stronger activation of the bilateral frontal eye fields. To substantiate the functional interpretation of their meta-analysis, Santarnecchi, Emmendorfer, and Pascual-Leone (2017) compared the resulting meta-analytic maps to maps of well-established functional brain networks.
Such networks are identifiable in the patterns of intrinsic connectivity of the brain measured with fMRI in a so-called resting state (during which participants are not engaged in a particular cognitive task) and have been associated with specific sensory and cognitive functions, such as attention, executive control, language, sensorimotor, visual, or auditory processing (e.g., Dosenbach et al., 2007; Yeo et al., 2011). Santarnecchi and colleagues found that multiple functional networks were involved, but primarily those associated with attention, salience, and cognitive control. Specifically, the highest overlap (27%) was observed between the results of the meta-analysis and the dorsal and ventral attention networks (Corbetta, Patel, & Shulman, 2008), followed by the anterior salience network (9%; also known as the cingulo-opercular network; Dosenbach et al., 2007), and the left-hemispheric executive control network (7%; also known as the fronto-parietal control network; Dosenbach et al., 2007). Notably, brain activation elicited by more difficult tasks showed relatively more overlap with the left executive control network and the language network.

Summarizing the results from functional neuroimaging studies using the task approach to investigate the brain basis of intelligence, we can conclude that the processing of tasks commonly used in intelligence tests is associated with the activation of prefrontal and parietal brain regions that are generally involved in solving cognitive challenges. While in other research contexts the same brain regions are referred to as the task-positive, the attention and working memory, or the cognitive control network, intelligence researchers
often refer to those networks as the “P-FIT areas” or the “multiple demand (MD) system.” The evidence available at present does not allow us to decide whether this pattern of brain activation reflects the involvement of a unitary superordinate control system (as suggested for instance by Niendam et al., 2012), or whether it is the result of the co-activation of a diverse set of cognitive component processes that all contribute to solving intelligence-related tasks. Importantly, the task approach ignores individual differences in brain activation. If, however, we want to understand how differences in brain function may explain individual differences in intelligence, we have to turn to studies using the individual differences approach.
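The network-overlap figures reported above (e.g., 27% for the attention networks) boil down to a simple computation on binary maps. A minimal, purely illustrative sketch; the toy "maps" below are invented, not real data:

```python
import numpy as np

def percent_overlap(meta_map, network_mask):
    """Share of a meta-analytic activation map that falls inside a
    network mask (both binary arrays of the same shape).  A much
    simplified stand-in for the overlap scores discussed in the text."""
    meta = np.asarray(meta_map, dtype=bool)
    net = np.asarray(network_mask, dtype=bool)
    return 100.0 * np.logical_and(meta, net).sum() / meta.sum()

# Toy 1-D "brains" (invented): 4 of 10 voxels are active in the
# meta-map, 2 of which lie inside a hypothetical attention-network mask.
meta = [0, 1, 1, 0, 1, 1, 0, 0, 0, 0]
attention = [0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
print(percent_overlap(meta, attention))  # -> 50.0
```

Real analyses operate on 3-D voxel maps in a common reference space, but the principle is the same.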
Individual Differences in Brain Activation Associated with Intelligence: Do More Intelligent People Have More Efficient Brains?

The first study on intelligence-related differences in brain activation during a cognitive challenge in healthy participants was conducted in the 1980s by Haier et al. (1988). In a sample of eight participants, these authors used PET to measure changes in the brain’s energy consumption (glucose metabolic rate, GMR) while participants completed matrix reasoning tasks from an established test of intelligence. The authors of this pioneering study had expected that participants with better performance in the reasoning task would show stronger brain activation, i.e., would recruit task-relevant brain regions to a greater degree. To their surprise, they observed the opposite: The brains of participants who solved more items correctly consumed less energy during task processing. In the same year, two other studies were published that reported concordant results (Berent et al., 1988; Parks et al., 1988). Haier et al. (1988) coined the term “efficiency” (also “brain efficiency” or “neural efficiency”) for the observed pattern of an inverse relationship between intelligence and the strength of brain activation during a cognitive challenge. In a nutshell, the resulting neural efficiency hypothesis of intelligence states that more intelligent people can achieve the same level of performance with smaller increases in brain activation as compared to less intelligent people. Haier, Siegel, Tang, Abel, and Buchsbaum (1992) summarized: “Intelligence is not a function of how hard the brain works but rather how efficiently it works” (p. 415 f.). It has been criticized that the term “neural efficiency” simply re-describes the observed pattern of less activation for a defined level of performance without explaining it (Poldrack, 2015).
The potential reasons for activation differences are manifold, including the possibility that the same neural computations are indeed performed more efficiently, i.e., with lower metabolic expenditure. However, activation differences can also result from qualitative differences in neural computations and/or cognitive processes. In the latter case, differences in activation would be attributable to “people doing different things” – and not
to “people doing the same thing with different efficiency”. The neuroimaging studies we describe in this chapter can detect differences in activation strength – they do not, however, provide an explanation for the observed differences in terms of neural metabolism. Such explanations are the subject of interpretation and theoretical models. Higher neural efficiency during cognitive task performance could result from a more selective use of task-relevant neural networks or neurons (Haier et al., 1988, 1992), possibly due to anatomically sparser neural architectures following more extensive neural pruning in the course of brain development (for recent evidence on lower dendritic density and arborization in people with higher intelligence scores, see Genç et al., 2018) or faster information processing due to better myelination (Miller, 1994; for evidence on higher white matter integrity in more intelligent individuals, see Kievit et al., 2016; Penke et al., 2012; and Chapter 10, by Genç and Fraenz). Furthermore, activation efficiency may also be due to a more efficient organization of intrinsic functional networks, in the sense of generally shorter paths from any one point in the brain to any other, as studied in graph-theoretic models of brain networks (van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009; but see Hilger, Ekman, Fiebach, & Basten, 2017a; Kruschwitz, Waller, Daedelow, Walter, & Veer, 2018; see also Barbey, 2018).

The assumption that more intelligent people have more efficient brains represents a compelling idea with high face validity. But is there consistent evidence in support of the neural efficiency hypothesis? Since it was proposed about three decades ago, the neural efficiency hypothesis of intelligence has been studied repeatedly with different methods, including PET, EEG, fMRI, and fNIRS.
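Before turning to these studies, the graph-theoretic notion of "shorter paths from one point in the brain to any other" mentioned above can be made concrete. The sketch below computes global efficiency (the mean inverse shortest path length) for two toy networks; the node layouts and values are illustrative only:

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src in an unweighted graph given as an
    adjacency dict {node: set(neighbors)}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean inverse shortest path length over all ordered node pairs:
    higher values mean shorter average paths between regions."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for u in nodes:
        for v, d in bfs_distances(adj, u).items():
            if v != u:
                total += 1.0 / d
    return total / (n * (n - 1))

# Toy 4-node networks (invented): a chain versus full connectivity.
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
full = {i: {j for j in range(4) if j != i} for i in range(4)}
print(global_efficiency(chain))  # ~0.72: longer average paths
print(global_efficiency(full))   # 1.0: every node one step away
```

In connectome studies, the nodes would be brain regions and the edges thresholded functional or structural connections; dedicated libraries provide the same measure for weighted graphs.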
All these studies have in common that they relate (by means of correlation or group comparisons) individual differences in performance on psychometric tests of intelligence (e.g., WAIS, RPM) to differences in some indicator of brain activation elicited during a cognitive challenge (such as working memory or reasoning tasks). Roughly summarized, earlier PET and EEG studies often reported negative correlations between intelligence and brain activation, which explains the early popularity of the neural efficiency hypothesis of intelligence (e.g., Haier et al., 1988, 1992; Jaušovec, 2000; Neubauer, Fink, & Schrausser, 2002; Neubauer, Freudenthaler, & Pfurtscheller, 1995; Parks et al., 1988). However, it soon became clear that a substantial number of studies also found contradictory evidence: Further EEG evidence produced mixed findings (for review, see Neubauer & Fink, 2009), and more recent fMRI studies often reported positive correlations between task-elicited brain activity and intelligence, thus contradicting the idea of higher neural efficiency in more intelligent people (e.g., Basten, Stelzel, & Fiebach, 2013; Burgess, Gray, Conway, & Braver, 2011; Choi et al., 2008; DeYoung, Shamosh, Green, Braver, & Gray, 2009; Ebisch et al., 2012; Geake & Hansen, 2005; Lee et al., 2006; O’Boyle et al., 2005). In 2015, we conducted a quantitative meta-analysis of the available evidence on intelligence-related differences in brain activation during a
cognitive challenge. We included only studies that used the individual differences approach and that reported their results in standard brain reference space (Basten et al., 2015). These criteria were met by 16 studies (from 14 independent samples), comprising one PET study and 15 fMRI studies with a total of 464 participants, in sum reporting 151 foci for which intelligence-related differences in brain activation had been observed. Our meta-analysis provided only limited support for the neural efficiency hypothesis of intelligence. As Figure 12.2A illustrates, we identified two brain regions for which evidence across studies suggested weaker activation in more intelligent participants, i.e., the right inferior frontal junction area (IFJ, located at the junction of the inferior frontal sulcus and the inferior precentral sulcus, cf. Derrfuss, Vogt, Fiebach, von Cramon, & Tittgemeyer, 2012) and the right posterior insula. On the other hand, six brain regions were identified in which brain activation increases during solving of cognitive tasks were consistently greater for more intelligent people, i.e., the left inferior frontal junction area (IFJ), the right inferior frontal sulcus (IFS) extending into the inferior and middle frontal gyrus (IFG/MFG), the right superior frontal sulcus (SFS), the left inferior parietal lobe (IPL) and intraparietal sulcus (IPS), and the right posterior middle temporal gyrus (MTG). Figure 12.2B schematically summarizes the direction of effects reported in the original studies that were included in the meta-analysis. When comparing this visual summary to the results of the meta-analysis, it becomes clear that the lack of meta-effects in midline structures of the brain is attributable to mixed evidence across studies for anterior and posterior cingulate cortex, (pre)SMA, and the precuneus.
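In its simplest form, the statistic at the heart of the individual differences approach is a correlation, computed across participants, between an intelligence score and a regional activation estimate. A deliberately simplified sketch with invented numbers; real analyses are voxel- or region-wise and statistically corrected:

```python
import numpy as np

def intelligence_activation_correlation(iq_scores, activation):
    """Pearson correlation, across participants, between intelligence
    test scores and a per-participant activation estimate (e.g., a
    contrast value extracted from one brain region)."""
    iq = np.asarray(iq_scores, dtype=float)
    act = np.asarray(activation, dtype=float)
    return float(np.corrcoef(iq, act)[0, 1])

# Invented data for six participants: higher scores paired with
# stronger activation yield a positive correlation, the direction
# most often found for frontal-parietal regions in the meta-analysis.
iq = [95, 100, 105, 110, 118, 125]
act = [0.2, 0.3, 0.25, 0.5, 0.6, 0.7]
print(round(intelligence_activation_correlation(iq, act), 2))  # -> 0.96
```

A negative coefficient in such an analysis is what the neural efficiency hypothesis would predict; a positive one contradicts it.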
On the other hand, the mixed evidence across studies (i.e., positive and negative correlations with intelligence) for the lateral prefrontal cortex was accompanied by meta-analytic convergence of positive and negative effects in spatially dissociated parts of the LPFC. Taken together, empirical evidence concerning the localization of correlations between brain activation and intelligence stems primarily from fMRI studies, and at present these studies provide more evidence for positive associations between intelligence and brain activity than for negative associations (which would be in accordance with the neural efficiency hypothesis). Of the two brain regions showing individual differences in activation in accordance with the neural efficiency hypothesis, the right IFJ is located within a brain region that has also been associated with intelligence in task approach studies (see the section Brain Regions Involved in Processing Intelligence-related Tasks: Can We Track Down Intelligence in the Brain? above). The insular cortex, however, has not only been implicated in cognitive processing but is more generally understood as an interface between the autonomic system, emotion, cognition, and action (e.g., Chang, Yarkoni, Khaw, & Sanfey, 2013). Overall, we have to acknowledge mixed evidence in a dual sense: First, there is mixed evidence across studies for some brain regions (e.g., dorsolateral PFC) concerning whether intelligence is associated with higher or
lower activity during task performance. Second, there is also mixed evidence regarding neural efficiency when comparing different regions of the brain (e.g., insula and parietal cortex), meaning that functional imaging studies do not generally support the conclusion of an “overall” more efficient brain. Very much in line with our conclusions concerning the fMRI evidence, Neubauer and Fink (2009) summarized the available EEG evidence on the neural efficiency hypothesis as being mixed. These authors state that a large body of early evidence in favor of the neural efficiency hypothesis was followed by a significant number of later studies providing only partial support or even contradictory evidence. Taking together all evidence from functional activation studies (EEG, PET, and fMRI), we must conclude that, with regard to the level of brain activation elicited by cognitive demands, the brains of more intelligent people are not generally more efficient. In their review, Neubauer and Fink (2009) discuss a number of factors potentially moderating the association between intelligence and brain activation – which could explain why some studies found negative associations supporting the neural efficiency hypothesis, while others found positive associations contradicting it, and yet others found no association at all. The suggested moderators are introduced and discussed in the next section; they may be understood as a refinement of the neural efficiency hypothesis in the sense that they define under which conditions to expect higher efficiency in more intelligent people. In a nutshell, Neubauer and Fink (2009) define these conditions as follows: “Neural efficiency might arise as a phenomenon when individuals are confronted with tasks of (subjectively) low to moderate task difficulty and it is mostly observable for frontal brain areas.” (p. 1018). 
When we compare the meta-analytic findings for the individual differences approach (Basten et al., 2015) to those for the task approach (Santarnecchi, Emmendorfer, & Pascual-Leone, 2017) and to the P-FIT model (which was also for the most part based on task approach studies: 7 of 10 PET studies and 10 of 17 fMRI studies; Jung & Haier, 2007), there are two striking differences in findings: First, in contrast to the task approach, the individual differences approach provides less evidence for a role of temporal and occipital cortices in intelligence. While these regions are without doubt involved in the perceptual stages of cognitive tasks, their activation does not vary with intelligence and can thus be excluded as a neural phenomenon explaining individual differences in intelligence. The frontal and parietal cortex, on the other hand, show activation across people (in the task approach studies) along with intelligence-related differences (in the individual differences approach) that qualify them as candidates for brain regions in which functional differences may contribute to differences in intelligence. Second, the individual differences approach provides less evidence for a left-lateralization of brain systems involved in intelligence than the task approach. In our meta-analysis, we observed four clusters of intelligence-related activation differences in each
hemisphere. One obvious explanation is that the relative left-dominance of task activation studies reflects a common characteristic of the majority of studies (like a strong dependency on verbal processes for solving the tasks) that, however, does not covary with individual differences in intelligence.
It Depends: Factors Moderating the Association between Intelligence and Brain Activation

In the face of the mixed findings concerning the neural efficiency hypothesis, researchers have begun to ask under which specific conditions or circumstances we can expect to observe neural efficiency – and under which we cannot. A set of potential moderator variables has been discussed, including (a) the sex of the participants, (b) the task content, (c) the brain areas under study, (d) the state of learning and training, and (e) the difficulty of the task (for an extensive review and excellent discussion of these moderators, see Neubauer & Fink, 2009). Empirical evidence supporting the neural efficiency hypothesis has more often been reported for men than for women (see Neubauer & Fink, 2009). This finding may be closely associated with an interaction between sex and task content: Neubauer et al. (2002), for instance, found brain activation patterns supporting the neural efficiency hypothesis during performance of visuo-spatial tasks only for men and during verbal tasks only for women (see also Neubauer, Grabner, Fink, & Neuper, 2005). Thus, it seems that the phenomenon of neural efficiency is more likely to be observed in the cognitive domain in which the respective sex typically shows slight performance advantages (Miller & Halpern, 2014). To explain this moderating effect, researchers have speculated that some intelligence-relevant processes may be differently implemented in the brains of men and women. Neubauer and Fink (2009) also came to the conclusion that frontal brain areas are more likely to show activation patterns in line with the neural efficiency hypothesis than other brain areas (like the parietal cortex).
A representative example of this comes from a study by Jaušovec and Jaušovec (2004), who observed that less intelligent participants showed greater activation over frontal brain areas (as inferred from event-related desynchronization in the upper alpha band of the EEG signal) during completion of a figural learning task, whereas more intelligent participants had stronger activation over parieto-occipital brain areas. It seems that under certain circumstances more intelligent people can solve a task without much frontal involvement, instead relying on parietal activity. This may reflect a relative shift from controlled to automatized processing, possibly due to more effective and better-trained cognitive routines. The assumption of neural efficiency being more likely in frontal than in parietal cortices was partly supported by
the findings of our meta-analysis some years later (Basten et al., 2015), which was based on a larger set of fMRI studies. This meta-analysis suggested a tendency for more intelligent people to show stronger parietal activation, while evidence was mixed for the frontal cortex. We thus conclude that there is still relatively more evidence in support of the neural efficiency hypothesis of intelligence for frontal than for parietal cortices – even though absolute evidence within the prefrontal cortex remains ambiguous. For fMRI studies, as we have pointed out previously (Basten et al., 2013), it is crucially important to consider in which functional network of the brain an association between intelligence and brain activation is observed. As discussed, a set of prefrontal and parietal brain regions shows increases in activation when task demands increase. These regions have been described as the task-positive network (TPN). A second network, the task-negative or default mode network (TNN or DMN), comprising ventromedial PFC, posterior cingulate cortex, superior frontal gyrus, and the region of the temporo-parietal junction, shows the opposite pattern of de-activation with increasing task demands (relative to task-free states; Fox et al., 2005). This task-related reduction of non-task-related brain activity is often interpreted as reflecting the suppression of task-unrelated cognitive processes to concentrate on the task at hand.
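The coupling between such networks is typically quantified as the correlation between the two networks' average time series, with anti-phase signals yielding the negative coupling discussed in the text. A sketch on synthetic data; all signals below are simulated, no real networks are involved:

```python
import numpy as np

# Synthetic BOLD-like data: five TPN regions and five TNN regions
# share one slow fluctuation in anti-phase, plus independent noise.
rng = np.random.default_rng(0)
t = np.arange(200)
drive = np.sin(t / 10.0)                                    # shared fluctuation
tpn_regions = drive + 0.3 * rng.standard_normal((5, 200))
tnn_regions = -drive + 0.3 * rng.standard_normal((5, 200))  # anti-phase

# Coupling = correlation of the two network-average time series.
tpn_mean = tpn_regions.mean(axis=0)
tnn_mean = tnn_regions.mean(axis=0)
coupling = float(np.corrcoef(tpn_mean, tnn_mean)[0, 1])
print(coupling)  # strongly negative for these synthetic signals
```

In empirical resting-state data, the same correlation would be computed after standard preprocessing, and its strength could then be related to intelligence scores across participants.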
For the interpretation of activation differences in terms of efficiency, it obviously makes a difference whether intelligence-related activation differences are observed in task-positive or task-negative regions: While a positive correlation between intelligence and fMRI BOLD signal changes may be interpreted as reflecting more activation in more intelligent individuals when it involves task-positive brain regions, the same correlation must be interpreted in exactly the opposite way when localized in task-negative regions, i.e., as reflecting less de-activation in more intelligent subjects, likely due to less rather than more cognitive effort exerted during task processing (McKiernan, Kaufman, Kucera-Thompson, & Binder, 2003). The neglect of this important general distinction between task-activated and de-activated brain regions in fMRI studies may have led to incorrect interpretations of activation–intelligence associations in some of the earlier studies. Initial evidence suggests that individual differences in the de-activation of task-negative networks are indeed a reliable correlate of intelligence in functional brain imaging (Basten et al., 2013; Hammer et al., 2019; Lipp et al., 2012). A closely linked factor seems to be the effectiveness of the interplay between the task-positive and task-negative networks, which is reflected in a negative coupling of the two. Here, higher intelligence was reported to be characterized by a stronger negative coupling between the TPN and the TNN under task-free conditions (Santarnecchi, Emmendorfer, Tadayon, et al., 2017; see also Chapter 6, by Barbey). It remains to be explored whether these findings can be replicated for brains engaged in cognitive processing. The relationship between intelligence and brain activation further seems to change with learning and practice on specific tasks. Studies in which
participants received short-term training (from a single occasion to several weeks) on a cognitive task (e.g., a visuo-spatial task like the computer game Tetris or a complex reasoning task) reported stronger pre- to post-training decreases in task-associated brain activation for more intelligent participants (e.g., glucose metabolic rate – Haier et al., 1992; event-related EEG desynchronization – Neubauer, Grabner, Freudenthaler, Beckmann, & Guthke, 2004). In part, such training-related changes in brain activity may also be due to changes in the use of cognitive strategies (Toffanin, Johnson, de Jong, & Martens, 2007) – which could of course vary depending on intelligence. Other studies have investigated the roles that long-term training and the acquisition of expertise over years play for neural efficiency. Two studies in taxi drivers (Grabner, Stern, & Neubauer, 2003) and chess players (Grabner, Neubauer, & Stern, 2006) suggest that, above and beyond individual differences in general cognitive ability, higher expertise seems to make an efficient use of the brain more likely. This is often reflected in decreased frontal involvement along with an increased reliance on posterior/parietal brain systems (e.g., Grabner et al., 2006). Combined, the available evidence suggests that in the short term (days to weeks), more intelligent people seem to profit more from practice in terms of gains in neural efficiency. Long-term training (over several years), however, can even out intelligence-related differences in neural efficiency, as the acquisition of expertise in a specific task through extensive practice can lead to task-specific efficiency independent of general cognitive ability. Future research will have to specify in more detail the conditions under which these conclusions hold.
The effects of training and practice on the development of neural efficiency suggest that intelligent people may be thought of as “experts in thinking” who are more likely to manage cognitive challenges with neural efficiency due to habitual practice in cognitive activity. Intelligent people may have had dispositional advantages for developing an efficient use of their brains in the first place. In addition, the constant challenging of their brains by cognitively demanding mental activity may further promote the development of neural efficiency in general, as well as the potential to more quickly develop task-specific efficiency when faced with new challenges. In other words, as a result of previous learning and daily routine in dealing with cognitive challenges, more intelligent people may have acquired skills and strategies that are cognitively effective and neurally efficient. From this perspective, neural efficiency may not so much refer to an overall reduced activity of the brain but rather reflect a tendency to solve a cognitive task with less control-related prefrontal involvement, or the ability to quickly redistribute activity from the frontal cortex to a smaller set of brain regions that are essentially necessary for task processing. Finally, the difficulty of a task also seems to play an important role for understanding whether and under which conditions more intelligent people show more or less brain activation. Neubauer and Fink (2009), in their review, suggested that individual differences in neural efficiency are most likely to be
u. basten and c. j. fiebach
Figure 12.3 Brain activation as a function of task difficulty and intelligence. Adapted from figure 2 in Neubauer and Fink (2009). Figure available at: https://github.com/fiebachlab/figures/ under a CC-BY license
observed in tasks of low-to-medium but not high complexity, implying that the individual strength of brain activation during cognitive processing is interactively determined by the difficulty of the task and the intelligence of the individual. As illustrated in Figure 12.3, this model assumes that less intelligent people need to exert more brain activation for successful performance even in relatively easy tasks, consistent with the neural efficiency hypothesis. At some point, no further resources can be recruited to meet increasing task demands, so that brain activation reaches a plateau where it cannot be increased further – or may even drop if a task is too difficult and the participant “gives up.” More intelligent people will reach this point later, i.e., at higher levels of objective task difficulty, so that for more difficult tasks they will show greater activation than less intelligent people (Figure 12.3). This interaction between task difficulty and intelligence is equivalent to a moderation of the association between intelligence and brain activation by task difficulty. The model has often been used for post-hoc interpretations of associations found between intelligence and brain activation. Especially when observing a positive association that is not compatible with the neural efficiency hypothesis, i.e., stronger activation in more intelligent study participants, researchers tend to speculate that the positive association was observed because the task used in their study was particularly difficult (e.g., Gray, Chabris, & Braver, 2003). Uncertainty about the level of difficulty addressed by a specific investigation may remain even when more than one level of task difficulty was studied, as was the case in one of our own studies for a working memory task with three levels of difficulty (Basten et al., 2013). In conclusion, there exist a number of plausible moderators of the relationship between individual differences in intelligence and brain activation.
A better understanding of these moderators may help in clarifying why evidence has so far not conclusively supported or falsified the still popular neural efficiency hypothesis.
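The moderation model described above can be made concrete with a toy simulation (a minimal illustrative sketch, not a fitted model from Neubauer and Fink, 2009; the saturating-exponential form, the resource ceiling, and the “give-up” threshold are our own illustrative assumptions):

```python
import numpy as np

def predicted_activation(difficulty, ability, ceiling=1.0, give_up=3.0):
    """Toy activation curve: activation rises with demand (difficulty relative
    to ability), saturates at a resource ceiling, and collapses past an
    individual give-up point. All functional forms are illustrative."""
    demand = np.asarray(difficulty, dtype=float) / ability
    activation = ceiling * (1.0 - np.exp(-demand))
    # Past the give-up point, activation declines again ("giving up").
    return np.where(demand > give_up,
                    activation * np.exp(-(demand - give_up)),
                    activation)

difficulty = np.linspace(0.5, 6.0, 12)
lower_iq = predicted_activation(difficulty, ability=1.0)
higher_iq = predicted_activation(difficulty, ability=2.0)
```

Plotting the two curves against difficulty reproduces the qualitative crossover of Figure 12.3: the higher-ability curve lies below the lower-ability curve for easy tasks (neural efficiency), but above it once the lower-ability curve has plateaued and collapsed at high difficulty.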
Functional Brain Imaging of Intelligence
Limitations of Available Studies: Why There May Still Be More to Learn about Brain Function and Intelligence

What we know today about the association between intelligence and individual differences in patterns of brain activation during cognitive tasks is based on a body of studies that partly suffer from methodological limitations: many studies were based on small samples, some restricted their analyses to predefined brain regions (i.e., “regions of interest”), and most did not systematically investigate the roles of potential moderators. With respect to sample sizes, we observe wide variation across studies, ranging from only eight subjects in the very first study of intelligence-related brain activation differences (Haier et al., 1988) to 1,235 participants in a recent study (Takeuchi et al., 2018). The majority of studies, however, were based on data from no more than 20 to 40 participants. For example, in our meta-analysis (Basten et al., 2015), sample sizes varied between 12 and 104 participants, with a mean of 33.21 and a standard deviation of 24.41; half of the 14 samples included in the meta-analysis comprised fewer than 25 participants. Such small sample sizes result in a lack of statistical power, which means that, by design, studies have a low probability of detecting the effects they study, even when a true effect exists (see, e.g., Yarkoni, 2009, for a discussion in the context of individual differences in fMRI research). On the one hand, a lack of statistical power may have led to overestimation of effect sizes or even false positives (for details, see Button et al., 2013; Cremers, Wager, & Yarkoni, 2017; Turner, Paul, Miller, & Barbey, 2018; Yarkoni, 2009). This problem, however, should effectively be taken care of in meta-analyses, because random false positives have little chance of being confirmed by evidence from other studies.
On the other hand, low-powered studies will miss true effects when these are not very strong (type II error), a problem that meta-analyses cannot take care of: effects that were missed in the first place by the original studies cannot influence the results of a meta-analysis. If the neurofunctional differences underlying individual differences in intelligence are in fact rather weak and widely distributed throughout the brain, the partial inconsistency of the highly localized findings reported so far may well be due to most studies being seriously underpowered (Cremers et al., 2017). This may even imply that the search for moderators (see above) is not the adequate response to the mixed findings and may in part be a futile and misleading endeavor. The likelihood that part of the neural underpinnings of individual differences in intelligence have gone unnoticed until now is further increased by the fact that some of the original studies did not search the whole brain for intelligence-related differences in activation but restricted their analyses to pre-defined regions of interest, for example to brain regions activated across all participants for the task under study (e.g., Lee et al., 2006). With such an
approach, studies will miss effects in brain regions that may not show activation across participants exactly because of high inter-individual variation in activity or in brain regions commonly de-activated during task processing (e.g., Basten et al., 2013; Lipp et al., 2012). Finally, if the association between intelligence and brain activation is indeed moderated by factors like sex, task content, or task difficulty, the search for a general pattern across these factors will lead to only the most general mechanisms being identified and other, specific aspects (e.g., different functional implementations of intelligence in men and women) being missed. In a nutshell, these limitations make our current state of knowledge – as derived from meta-analyses – a rather conservative estimate of the neurofunctional basis of intelligence in the brain: We can be quite sure that the regions we identified in meta-analyses as replicating across studies are indeed relevant for intelligence. There may, however, be further brain regions where differences in function also contribute to individual differences in intelligence but which have been missed by neuroimaging studies so far. Given the low power of many studies available so far, and the resulting overestimation of localized effects (Cremers et al., 2017), future research may show that the neural systems associated with individual differences in intelligence are in fact much more widely distributed across the brain than currently assumed.
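The power problem can be quantified with the standard Fisher z approximation for correlation tests (a self-contained sketch; the effect size r = .30 and the sample sizes below are illustrative choices, not taken from any specific study):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (no external dependencies)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlation_power(r, n):
    """Approximate power of a two-sided test (alpha = .05) for a population
    correlation r at sample size n, using the Fisher z approximation:
    atanh(r) is approximately normal with standard error 1/sqrt(n - 3)."""
    z_effect = math.atanh(r) * math.sqrt(n - 3)  # expected test statistic
    z_crit = 1.959964                            # two-sided .05 critical value
    return (1 - norm_cdf(z_crit - z_effect)) + norm_cdf(-z_crit - z_effect)

# With n = 25 (typical of early studies), power to detect r = .30 is only
# about .31; with n = 250 it approaches 1.
print(round(correlation_power(0.30, 25), 2), round(correlation_power(0.30, 250), 2))
```

In other words, a typical early study had less than a one-in-three chance of detecting a true brain–intelligence correlation of .30, which is consistent with the mixed pattern of published findings.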
Trends and Perspectives for the Functional Imaging of Intelligence: What Researchers Are Working On Now and Should Tackle in the Future

Most importantly, of course, researchers should overcome the limitations of previous studies. One important need for future research is the study of larger samples (e.g., Turner et al., 2018; Yarkoni, 2009). There has been a substantial increase in sample sizes over the years that gives reason for an optimistic outlook: researchers are increasingly using larger samples (e.g., Takeuchi et al., 2018), including open-access data from large-scale collaborative projects like the Human Connectome Project (HCP; Van Essen et al., 2013), the Functional Connectomes Project (Mennes, Biswal, Castellanos, & Milham, 2013), or the UK Biobank (Miller et al., 2016; Sudlow et al., 2015). The first functional MRI study in a much larger sample than available before, i.e., with 1,235 subjects, suggests that effects are in fact smaller than previously assumed (Takeuchi et al., 2018) – which is in line with what one would theoretically expect for effect size estimates from larger as compared to smaller samples (e.g., Cremers et al., 2017; Ioannidis, 2008; Yarkoni, 2009). This study identified only two brain regions in which intelligence showed robust positive correlations with activation as elicited by a challenging working memory task (i.e., the 2-back task as compared to a no-memory, 0-back control condition). These were not located in the lateral
prefrontal or parietal cortex, but in the right hippocampus and the pre-supplementary motor area. Only when evaluating brain activation elicited by the 2-back task without correcting for the no-memory control condition were significant positive correlations also found in the dorsomedial PFC and the precuneus, and a significant negative correlation in the right intraparietal sulcus. These first task-based fMRI results from a large-scale study are not easy to integrate with previous results – including those of the meta-analyses. On the other hand, they also await replication in further large-scale datasets. For now, we must concede that the models developed so far may have to be called into question again. More studies using larger samples – ideally combined with integration across studies in regular meta-analyses (Yarkoni, Poldrack, Van Essen, & Wager, 2010) – will provide more reliable results and may well lead to a more refined understanding of the neural underpinnings of intelligence in the coming years. A further important trend that will aid in clarifying the role of functional brain differences for intelligence is the use of cross-validated prediction approaches, which allow for quantifying the generalizability of findings from one sample to others. Localization and explanation of the mechanisms by which individual differences in brain activation contribute to cognitive ability are not of primary interest here. Instead, predictive approaches test to what extent knowing patterns of brain activation during a cognitive challenge allows predicting individual intelligence test scores: Can we calculate the score that a person would obtain in a traditional paper-and-pencil intelligence test from brain activation measured for this person, given a statistical model established on the basis of independent data?
If this were possible, it would provide the most stringent evidence for a reliable association between intelligence and brain activation, as a successful prediction ensures that the underlying model relating brain activation and intelligence is not merely capturing random characteristics of a specific sample at hand (a common problem of simple association studies, known as overfitting) but rather features of brain function that are predictive of intelligence across different independent samples (Yarkoni & Westfall, 2017). Using a cross-validated predictive approach, Sripada, Angstadt, and Rutherford (2018) recently reported that they were indeed able to predict intelligence from brain activation data. They used data from 944 participants of the Human Connectome Project (i.e., from the same dataset also used by Takeuchi et al., 2018) to build and train a prediction model, which was then applied to the data from an independent test sample of another 100 participants to predict individual intelligence scores and thereby test the model. The brain activation data had been acquired with fMRI during the processing of seven different cognitive tasks. Sripada et al. (2018) report a high correlation (r = .68) between the individual intelligence scores predicted by their model and those resulting from behavioural testing. They further note that tasks tapping executive functions and particularly demanding cognitive tasks (i.e., tasks
showing relatively stronger activation of the TPN and deactivation of the TNN) were especially effective in this prediction. Whether the prediction of intelligence scores is also possible with low absolute error is an important open question for future studies. A major advantage of adopting a predictive approach in the functional imaging of intelligence lies in the fact that one does not have to decide about the statistical significance of single associations between activation in specific brain regions and intelligence. The models take all available data into account and use whatever helps with prediction. This is particularly suitable if intelligence is related to rather weak differences in activation distributed across many parts of the brain. The study of Sripada et al. (2018) suggests that activation differences in the frontoparietal network that previous studies have associated with intelligence are indeed most predictive of intelligence. However, these authors also reported that activation differences in many other parts of the brain further contributed to the prediction, which supports the proposal that the neural underpinnings of intelligence may best be thought of as a distributed set of regions that are rather weakly associated with intelligence – just as the simulation study by Cremers et al. (2017) concluded for many aspects of personality and brain function. This would also reconcile the report of a successful prediction of intelligence by Sripada et al. (2018) with the report of rather small effect sizes in specific brain regions in the high-powered study by Takeuchi et al. (2018). Finally, an important open question is how the functional brain correlates of intelligence described in this chapter relate to other characteristics of the brain that also vary with intelligence, such as morphological differences (see Chapters 10 and 11) or differences in connectivity and network topology (see Chapters 2 and 6).
Up to now, these different aspects have been studied separately from each other. Theoretically, however, we have to expect that they are not independent. Morphological differences, e.g., in the structure of dendrites (Genç et al., 2018) or the gyrification of the brain (Gregory et al., 2016), might be the basis for differences in the organization of functional brain networks (e.g., Hilger et al., 2017a; Hilger, Ekman, Fiebach, & Basten, 2017b), which will in turn affect energy consumption and activation associated with cognitive processing as measured by functional neuroimaging (for studies on metabolic and biochemical correlates of intelligence using magnetic resonance spectroscopy, see, e.g., Paul et al., 2016). However, these assumptions remain speculation until research directly relates the different brain correlates of intelligence to each other (cf. Poldrack, 2015). Notably, regarding individual differences in brain activation, we may not only have to consider the degree of activation as a potential underpinning of differences in intelligence, but also other aspects, like the variability of brain responses to different kinds of stimuli (e.g., Euler, Weisend, Jung, Thoma, & Yeo, 2015).
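The logic of the cross-validated prediction approach discussed above can be sketched in a few lines of Python (simulated data standing in for task-fMRI features; the ridge penalty, feature count, and fold number are arbitrary illustrative choices, not the pipeline of Sripada et al., 2018):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many brain features, each only weakly related to the criterion
# (the "weak, distributed effects" scenario), plus a noisy "IQ" score.
n_subjects, n_features = 300, 500
true_w = rng.normal(0.0, 0.05, n_features)         # weak, distributed weights
X = rng.normal(size=(n_subjects, n_features))      # per-subject activation features
y = X @ true_w + rng.normal(0.0, 1.0, n_subjects)  # observed scores

def ridge_fit(X, y, lam=10.0):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cross_validated_r(X, y, k=5, lam=10.0):
    """Correlation between observed scores and out-of-fold predictions:
    the model is always evaluated on subjects it was not trained on."""
    preds = np.empty_like(y)
    for test_idx in np.array_split(np.arange(len(y)), k):
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        w = ridge_fit(X[train_idx], y[train_idx], lam)
        preds[test_idx] = X[test_idx] @ w
    return np.corrcoef(y, preds)[0, 1]

print(f"cross-validated r = {cross_validated_r(X, y):.2f}")
```

Because the correlation is computed on held-out subjects only, it cannot be inflated by overfitting; reporting out-of-sample rather than in-sample fit is what distinguishes predictive approaches from simple association studies.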
In sum, there is considerable evidence to suggest that intelligence-related cognitive tasks activate a distributed set of brain regions predominantly located in the prefrontal and parietal cortex, and to a lesser extent also in the temporal and occipital cortex as well as subcortical structures such as the thalamus and the putamen. Furthermore, individual differences in intelligence are associated with differences in task-elicited brain activation. Uncertainty regarding the exact localizations and directions of associations between brain activation and individual differences in intelligence is most likely attributable to the low statistical power of many studies and the existence of several potential moderating factors, including sex, task content, and task difficulty. Future research should (a) strive to use sufficiently large samples to achieve the statistical power necessary to detect the effect sizes of interest, (b) further clarify the effects (including interactions) of the postulated moderating variables, and (c) ensure generalizability of results by adopting predictive analysis approaches. A good understanding of the functional brain mechanisms underlying intelligence may ultimately serve the development of interventions designed to enhance cognitive performance (e.g., Daugherty et al., 2018, 2020; for a discussion, see Haier, 2016), but may also prove valuable for informing clinical work, such as the rehabilitation of neurological patients with acquired cognitive deficits.
References

Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20.
Barbey, A. K., Colom, R., & Grafman, J. (2013). Dorsolateral prefrontal contributions to human intelligence. Neuropsychologia, 51(7), 1361–1369.
Barbey, A. K., Colom, R., Paul, E. J., & Grafman, J. (2014). Architecture of fluid intelligence and working memory revealed by lesion mapping. Brain Structure and Function, 219, 485–494.
Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(4), 1154–1164. doi: 10.1093/brain/aws021.
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. doi: 10.1016/j.intell.2015.04.009.
Basten, U., Stelzel, C., & Fiebach, C. J. (2011). Trait anxiety modulates the neural efficiency of inhibitory control. Journal of Cognitive Neuroscience, 23(10), 3132–3145. doi: 10.1162/jocn_a_00003.
Basten, U., Stelzel, C., & Fiebach, C. J. (2012). Trait anxiety and the neural efficiency of manipulation in working memory. Cognitive, Affective, & Behavioral Neuroscience, 12(3), 571–588. doi: 10.3758/s13415-012-0100-3.
Basten, U., Stelzel, C., & Fiebach, C. J. (2013). Intelligence is differentially related to neural effort in the task-positive and the task-negative brain network. Intelligence, 41(5), 517–528. doi: 10.1016/j.intell.2013.07.006.
Berent, S., Giordani, B., Lehtinen, S., Markel, D., Penney, J. B., Buchtel, H. A., . . . Young, A. B. (1988). Positron emission tomographic scan investigations of Huntington’s disease: Cerebral metabolic correlates of cognitive function. Annals of Neurology, 23(6), 541–546. doi: 10.1002/ana.410230603.
Burgess, G. C., Gray, J. R., Conway, A. R. A., & Braver, T. S. (2011). Neural mechanisms of interference control underlie the relationship between fluid intelligence and working memory span. Journal of Experimental Psychology: General, 140(4), 674–692. doi: 10.1037/a0024695.
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. doi: 10.1038/nrn3475.
Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1–47. doi: 10.1162/08989290051137585.
Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22. doi: 10.1037/h0046743.
Chang, L. J., Yarkoni, T., Khaw, M. W., & Sanfey, A. G. (2013). Decoding the role of the insula in human cognition: Functional parcellation and large-scale reverse inference. Cerebral Cortex, 23(3), 739–749. doi: 10.1093/cercor/bhs065.
Choi, Y. Y., Shamosh, N. A., Cho, S. H., DeYoung, C. G., Lee, M. J., Lee, J.-M., . . . Lee, K. H. (2008). Multiple bases of human intelligence revealed by cortical thickness and neural activation. Journal of Neuroscience, 28(41), 10323–10329. doi: 10.1523/JNEUROSCI.3259-08.2008.
Cole, M. W., & Schneider, W. (2007).
The cognitive control network: Integrated cortical regions with dissociable functions. NeuroImage, 37(1), 343–360. doi: 10.1016/j.neuroimage.2007.03.071.
Corbetta, M., Patel, G., & Shulman, G. L. (2008). The reorienting system of the human brain: From environment to theory of mind. Neuron, 58(3), 306–324. doi: 10.1016/j.neuron.2008.04.017.
Cremers, H. R., Wager, T. D., & Yarkoni, T. (2017). The relation between statistical power and inference in fMRI. PLoS One, 12(11), e0184923. doi: 10.1371/journal.pone.0184923.
Daugherty, A. M., Sutton, B. P., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2020). Individual differences in the neurobiology of fluid intelligence predict responsiveness to training: Evidence from a comprehensive cognitive, mindfulness meditation, and aerobic fitness intervention. Trends in Neuroscience and Education, 18, 100123. doi: 10.1016/j.tine.2019.100123.
Daugherty, A. M., Zwilling, C., Paul, E. J., Sherepa, N., Allen, C., Kramer, A. F., . . . Barbey, A. K. (2018). Multi-modal fitness and cognitive training to enhance fluid intelligence. Intelligence, 66, 32–43.
Derrfuss, J., Vogt, V. L., Fiebach, C. J., von Cramon, D. Y., & Tittgemeyer, M. (2012). Functional organization of the left inferior precentral sulcus:
Dissociating the inferior frontal eye field and the inferior frontal junction. NeuroImage, 59(4), 3829–3837. doi: 10.1016/j.neuroimage.2011.11.051.
DeYoung, C. G., Shamosh, N. A., Green, A. E., Braver, T. S., & Gray, J. R. (2009). Intellect as distinct from openness: Differences revealed by fMRI of working memory. Journal of Personality and Social Psychology, 97(5), 883–892. doi: 10.1037/a0016615.
Dosenbach, N. U. F., Fair, D. A., Miezin, F. M., Cohen, A. L., Wenger, K. K., Dosenbach, R. A. T., . . . Petersen, S. E. (2007). Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences, 104(26), 11073–11078. doi: 10.1073/pnas.0704320104.
Duncan, J. (1995). Attention, intelligence, and the frontal lobes. In M. S. Gazzaniga (ed.), The cognitive neurosciences (pp. 721–733). Cambridge, MA: The MIT Press.
Duncan, J., Seitz, R. J., Kolodny, J., Bor, D., Herzog, H., Ahmed, A., . . . Emslie, H. (2000). A neural basis for general intelligence. Science, 289(5478), 457–460. doi: 10.1126/science.289.5478.457.
Duncan, J. (2005). Frontal lobe function and general intelligence: Why it matters. Cortex, 41(2), 215–217. doi: 10.1016/S0010-9452(08)70896-7.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14(4), 172–179. doi: 10.1016/j.tics.2010.01.004.
Duncan, J., Burgess, P., & Emslie, H. (1995). Fluid intelligence after frontal lobe lesions. Neuropsychologia, 33(3), 261–268. doi: 10.1016/0028-3932(94)00124-8.
Duncan, J., Emslie, H., Williams, P., Johnson, R., & Freer, C. (1996). Intelligence and the frontal lobe: The organization of goal-directed behavior. Cognitive Psychology, 30(3), 257–303. doi: 10.1006/cogp.1996.0008.
Ebisch, S. J., Perrucci, M. G., Mercuri, P., Romanelli, R., Mantini, D., Romani, G. L., . . . Saggino, A. (2012).
Common and unique neuro-functional basis of induction, visualization, and spatial relationships as cognitive components of fluid intelligence. NeuroImage, 62(1), 331–342. doi: 10.1016/j.neuroimage.2012.04.053.
Esposito, G., Kirkby, B. S., Van Horn, J. D., Ellmore, T. M., & Berman, K. F. (1999). Context-dependent, neural system-specific neurophysiological concomitants of ageing: Mapping PET correlates during cognitive activation. Brain: A Journal of Neurology, 122(Pt 5), 963–979. doi: 10.1093/brain/122.5.963.
Euler, M. J., Weisend, M. P., Jung, R. E., Thoma, R. J., & Yeo, R. A. (2015). Reliable activation to novel stimuli predicts higher fluid intelligence. NeuroImage, 114, 311–319. doi: 10.1016/j.neuroimage.2015.03.078.
Fox, M. D., Snyder, A. Z., Vincent, J. L., Corbetta, M., Van Essen, D. C., & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences of the United States of America, 102(27), 9673. doi: 10.1073/pnas.0504136102.
Geake, J. G., & Hansen, P. C. (2005). Neural correlates of intelligence as revealed by fMRI of fluid analogies. NeuroImage, 26(2), 555–564. doi: 10.1016/j.neuroimage.2005.01.035.
Genç, E., Fraenz, C., Schlüter, C., Friedrich, P., Hossiep, R., Voelkle, M. C., . . . Jung, R. E. (2018). Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nature Communications, 9(1), 1905. doi: 10.1038/s41467-018-04268-8.
Ghatan, P. H., Hsieh, J. C., Wirsén-Meurling, A., Wredling, R., Eriksson, L., Stone-Elander, S., . . . Ingvar, M. (1995). Brain activation induced by the perceptual maze test: A PET study of cognitive performance. NeuroImage, 2(2), 112–124.
Gläscher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences, 107(10), 4705–4709. doi: 10.1073/pnas.0910397107.
Goel, V., & Dolan, R. J. (2001). Functional neuroanatomy of three-term relational reasoning. Neuropsychologia, 39(9), 901–909.
Goel, V., Gold, B., Kapur, S., & Houle, S. (1998). Neuroanatomical correlates of human reasoning. Journal of Cognitive Neuroscience, 10(3), 293–302. doi: 10.1162/089892998562744.
Grabner, R. H., Neubauer, A. C., & Stern, E. (2006). Superior performance and neural efficiency: The impact of intelligence and expertise. Brain Research Bulletin, 69(4), 422–439. doi: 10.1016/j.brainresbull.2006.02.009.
Grabner, R. H., Stern, E., & Neubauer, A. C. (2003). When intelligence loses its impact: Neural efficiency during reasoning in a familiar area. International Journal of Psychophysiology, 49(2), 89–98. doi: 10.1016/S0167-8760(03)00095-3.
Gray, J. R., Chabris, C. F., & Braver, T. S. (2003). Neural mechanisms of general fluid intelligence. Nature Neuroscience, 6(3), 316–322. doi: 10.1038/nn1014.
Gregory, M. D., Kippenhan, J. S., Dickinson, D., Carrasco, J., Mattay, V. S., Weinberger, D. R., & Berman, K. F. (2016). Regional variations in brain gyrification are associated with general cognitive ability in humans. Current Biology, 26(10), 1301–1305.
doi: 10.1016/j.cub.2016.03.021.
Haier, R. (2016). The neuroscience of intelligence (Cambridge fundamentals of neuroscience in psychology). Cambridge University Press. doi: 10.1017/9781316105771.
Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12(2), 199–217. doi: 10.1016/0160-2896(88)90016-5.
Haier, R. J., Siegel, B., Tang, C., Abel, L., & Buchsbaum, M. S. (1992). Intelligence and changes in regional cerebral glucose metabolic rate following learning. Intelligence, 16(3–4), 415–426. doi: 10.1016/0160-2896(92)90018-M.
Hammer, R., Paul, E. J., Hillman, C. H., Kramer, A. F., Cohen, N. J., & Barbey, A. K. (2019). Individual differences in analogical reasoning revealed by multivariate task-based functional brain imaging. Neuroimage, 184, 993–1004.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017a). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25. doi: 10.1016/j.intell.2016.11.001.
Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017b). Intelligence is associated with the modular structure of intrinsic brain networks. Scientific Reports, 7(1), 16088. doi: 10.1038/s41598-017-15795-7.
Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19(5), 640–648. doi: 10.1097/EDE.0b013e31818131e7.
Jaušovec, N. (2000). Differences in cognitive processes between gifted, intelligent, creative, and average individuals while solving complex problems: An EEG study. Intelligence, 28(3), 213–237. doi: 10.1016/S0160-2896(00)00037-4.
Jaušovec, N., & Jaušovec, K. (2004). Differences in induced brain activity during the performance of learning and working-memory tasks related to intelligence. Brain and Cognition, 54(1), 65–74. doi: 10.1016/S0278-2626(03)00263-X.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. doi: 10.1017/S0140525X07001185.
Kievit, R. A., Davis, S. W., Griffiths, J., Correia, M. M., Cam-CAN, & Henson, R. N. (2016). A watershed model of individual differences in fluid intelligence. Neuropsychologia, 91, 186–198. doi: 10.1016/j.neuropsychologia.2016.08.008.
Knauff, M., Mulack, T., Kassubek, J., Salih, H. R., & Greenlee, M. W. (2002). Spatial imagery in deductive reasoning: A functional MRI study. Brain Research. Cognitive Brain Research, 13(2), 203–212.
Kruschwitz, J. D., Waller, L., Daedelow, L. S., Walter, H., & Veer, I. M. (2018). General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the Human Connectome Project 1200 data set. NeuroImage, 171, 323–331. doi: 10.1016/j.neuroimage.2018.01.018.
Lee, K. H., Choi, Y. Y., Gray, J. R., Cho, S. H., Chae, J.-H., Lee, S., & Kim, K. (2006). Neural correlates of superior intelligence: Stronger recruitment of posterior parietal cortex.
NeuroImage, 29(2), 578–586. doi: 10.1016/j.neuroimage.2005.07.036.
Lipp, I., Benedek, M., Fink, A., Koschutnig, K., Reishofer, G., Bergner, S., . . . Neubauer, A. C. (2012). Investigating neural efficiency in the visuo-spatial domain: An fMRI study. PLoS One, 7(12), e51316. doi: 10.1371/journal.pone.0051316.
McKiernan, K. A., Kaufman, J. N., Kucera-Thompson, J., & Binder, J. R. (2003). A parametric manipulation of factors affecting task-induced deactivation in functional neuroimaging. Journal of Cognitive Neuroscience, 15(3), 394–408. doi: 10.1162/089892903321593117.
Mennes, M., Biswal, B. B., Castellanos, F. X., & Milham, M. P. (2013). Making data sharing work: The FCP/INDI experience. NeuroImage, 82, 683–691. doi: 10.1016/j.neuroimage.2012.10.064.
Miller, D. I., & Halpern, D. F. (2014). The new science of cognitive sex differences. Trends in Cognitive Sciences, 18(1), 37–45. doi: 10.1016/j.tics.2013.10.011.
Miller, E. M. (1994). Intelligence and brain myelination: A hypothesis. Personality and Individual Differences, 17(6), 803–832. doi: 10.1016/0191-8869(94)90049-3.
Miller, K. L., Alfaro-Almagro, F., Bangerter, N. K., Thomas, D. L., Yacoub, E., Xu, J., . . . Smith, S. M. (2016). Multimodal population brain imaging in the
UK Biobank prospective epidemiological study. Nature Neuroscience, 19(11), 1523–1536. doi: 10.1038/nn.4393.
Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience & Biobehavioral Reviews, 33(7), 1004–1023. doi: 10.1016/j.neubiorev.2009.04.001.
Neubauer, A. C., Fink, A., & Schrausser, D. G. (2002). Intelligence and neural efficiency: The influence of task content and sex on the brain–IQ relationship. Intelligence, 30(6), 515–536. doi: 10.1016/S0160-2896(02)00091-0.
Neubauer, A. C., Freudenthaler, H. H., & Pfurtscheller, G. (1995). Intelligence and spatiotemporal patterns of event-related desynchronization (ERD). Intelligence, 20(3), 249–266. doi: 10.1016/0160-2896(95)90010-1.
Neubauer, A. C., Grabner, R. H., Fink, A., & Neuper, C. (2005). Intelligence and neural efficiency: Further evidence of the influence of task content and sex on the brain–IQ relationship. Cognitive Brain Research, 25(1), 217–225. doi: 10.1016/j.cogbrainres.2005.05.011.
Neubauer, A. C., Grabner, R. H., Freudenthaler, H. H., Beckmann, J. F., & Guthke, J. (2004). Intelligence and individual differences in becoming neurally efficient. Acta Psychologica, 116(1), 55–74. doi: 10.1016/j.actpsy.2003.11.005.
Neuper, C., Grabner, R. H., Fink, A., & Neubauer, A. C. (2005). Long-term stability and consistency of EEG event-related (de-)synchronization across different cognitive tasks. Clinical Neurophysiology, 116(7), 1681–1694. doi: 10.1016/j.clinph.2005.03.013.
Niendam, T. A., Laird, A. R., Ray, K. L., Dean, Y. M., Glahn, D. C., & Carter, C. S. (2012). Meta-analytic evidence for a superordinate cognitive control network subserving diverse executive functions. Cognitive, Affective, & Behavioral Neuroscience, 12(2), 241–268. doi: 10.3758/s13415-011-0083-5.
O’Boyle, M. W., Cunnington, R., Silk, T. J., Vaughan, D., Jackson, G., Syngeniotis, A., & Egan, G. F. (2005). Mathematically gifted male adolescents activate a unique brain network during mental rotation.
Cognitive Brain Research, 25(2), 583–587. doi: 10.1016/j.cogbrainres.2005.08.004. Parks, R. W., Loewenstein, D. A., Dodrill, K. L., Barker, W. W., Yoshii, F., Chang, J. Y., . . . Duara, R. (1988). Cerebral metabolic effects of a verbal fluency test: A PET scan study. Journal of Clinical and Experimental Neuropsychology, 10(5), 565–575. doi: 10.1080/01688638808402795. Paul, E. J., Larsen, R. J., Nikolaidis, A., Ward, N., Hillman, C. H., Cohen, N. J., . . . Barbey, A. K. (2016). Dissociable brain biomarkers of fluid intelligence. Neuroimage, 137, 201–211. Penke, L., Maniega, S. M., Bastin, M. E., Valdés Hernández, M. C., Murray, C., Royle, N. A., . . . Deary, I. J. (2012). Brain white matter tract integrity as a neural foundation for general intelligence. Molecular Psychiatry, 17(10), 1026–1030. doi: 10.1038/mp.2012.66. Pfurtscheller, G., & Aranibar, A. (1977). Event-related cortical desynchronization detected by power measurements of scalp EEG. Electroencephalography and Clinical Neurophysiology, 42(6), 817–826. doi: 10.1016/0013-4694(77) 90235-8. Poldrack, R.A. (2015). Is “efficiency” a useful concept in cognitive neuroscience? Developments in Cognitive Neuroscience, 11, 12–17.
Functional Brain Imaging of Intelligence
Prabhakaran, V., Rypma, B., & Gabrieli, J. D. E. (2001). Neural substrates of mathematical reasoning: A functional magnetic resonance imaging study of neocortical activation during performance of the necessary arithmetic operations test. Neuropsychology, 15(1), 115–127. doi: 10.1037/0894-4105.15.1.115.
Prabhakaran, V., Smith, J. A. L., Desmond, J. E., Glover, G. H., & Gabrieli, J. D. E. (1997). Neural substrates of fluid reasoning: An fMRI study of neocortical activation during performance of the Raven's Progressive Matrices Test. Cognitive Psychology, 33(1), 43–63. doi: 10.1006/cogp.1997.0659.
Santarnecchi, E., Emmendorfer, A., & Pascual-Leone, A. (2017). Dissecting the parieto-frontal correlates of fluid intelligence: A comprehensive ALE meta-analysis study. Intelligence, 63, 9–28. doi: 10.1016/j.intell.2017.04.008.
Santarnecchi, E., Emmendorfer, A., Tadayon, S., Rossi, S., Rossi, A., & Pascual-Leone, A. (2017). Network connectivity correlates of variability in fluid intelligence performance. Intelligence, 65, 35–47. doi: 10.1016/j.intell.2017.10.002.
Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201–293. doi: 10.2307/1412107.
Sripada, C., Angstadt, M., & Rutherford, S. (2018). Towards a "treadmill test" for cognition: Reliable prediction of intelligence from whole-brain task activation patterns. bioRxiv, 412056. doi: 10.1101/412056.
Sudlow, C., Gallacher, J., Allen, N., Beral, V., Burton, P., Danesh, J., . . . Collins, R. (2015). UK Biobank: An open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Medicine, 12(3), e1001779. doi: 10.1371/journal.pmed.1001779.
Takeuchi, H., Taki, Y., Nouchi, R., Yokoyama, R., Kotozaki, Y., Nakagawa, S., . . . Kawashima, R. (2018). General intelligence is associated with working memory-related brain activity: New evidence from a large sample study. Brain Structure and Function, 223(9), 4243–4258. doi: 10.1007/s00429-018-1747-5.
Toffanin, P., Johnson, A., de Jong, R., & Martens, S. (2007). Rethinking neural efficiency: Effects of controlling for strategy use. Behavioral Neuroscience, 121(5), 854–870. doi: 10.1037/0735-7044.121.5.854.
Turner, B. O., Paul, E. J., Miller, M. B., & Barbey, A. K. (2018). Small sample sizes reduce the replicability of task-based fMRI studies. Communications Biology, 1, 62. doi: 10.1038/s42003-018-0073-z.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. doi: 10.1523/JNEUROSCI.1443-09.2009.
Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E. J., Yacoub, E., & Ugurbil, K. (2013). The WU-Minn Human Connectome Project: An overview. NeuroImage, 80, 62–79. doi: 10.1016/j.neuroimage.2013.05.041.
Woolgar, A., Duncan, J., Manes, F., & Fedorenko, E. (2018). Fluid intelligence is supported by the multiple-demand system not the language system. Nature Human Behaviour, 2(3), 200–204. doi: 10.1038/s41562-017-0282-3.
Woolgar, A., Parr, A., Cusack, R., Thompson, R., Nimmo-Smith, I., Torralva, T., . . . Duncan, J. (2010). Fluid intelligence loss linked to restricted regions of damage within frontal and parietal cortex. Proceedings of the National Academy of Sciences, 107(33), 14899–14902. doi: 10.1073/pnas.1007928107.
Yarkoni, T. (2009). Big correlations in little studies: Inflated fMRI correlations reflect low statistical power – Commentary on Vul et al. (2009). Perspectives on Psychological Science, 4(3), 294–298. doi: 10.1111/j.1745-6924.2009.01127.x.
Yarkoni, T., Poldrack, R. A., Van Essen, D. C., & Wager, T. D. (2010). Cognitive neuroscience 2.0: Building a cumulative science of human brain function. Trends in Cognitive Sciences, 14(11), 489–496. doi: 10.1016/j.tics.2010.08.004.
Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122. doi: 10.1177/1745691617693393.
Yeo, B. T., Krienen, F. M., Sepulcre, J., Sabuncu, M. R., Lashkari, D., Hollinshead, M., . . . Buckner, R. L. (2011). The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology, 106(3), 1125–1165. doi: 10.1152/jn.00338.2011.
13 An Integrated, Dynamic Functional Connectome Underlies Intelligence

Jessica R. Cohen and Mark D'Esposito

Intelligence is an elusive concept. For well over a century, what exactly intelligence is and how best to measure it have been debated (see Sternberg & Kaufman, 2011). One predominant factorization of the components of intelligence separates it into fluid and crystallized categories, with fluid intelligence measuring one's reasoning and problem-solving ability, and crystallized intelligence measuring lifetime knowledge (Cattell, 1971). Influential theories of intelligence, particularly fluid intelligence, have proposed that aspects of cognitive control, most notably working memory, are the drivers of intelligent behavior (Conway, Getz, Macnamara, & Engel de Abreu, 2011; Conway, Kane, & Engle, 2003; Kane & Engle, 2002; Kovacs & Conway, 2016). More specifically, it is thought that the control aspect of working memory, the central executive proposed by Baddeley and Hitch (1974), is the basis for the types of cognitive processes tapped by intelligence assessments (Conway et al., 2003; Kane & Engle, 2002). It has further been proposed that the control process underlying intelligence may not be a single process, but instead a cluster of domain-general control processes, including attentional control, interference resolution, updating of relevant information, and others (Conway et al., 2011; Kovacs & Conway, 2016). Here, we focus on what we have learned about how intelligence emerges from brain function, taking the perspective that cognitive control ability and intelligence are supported by similar brain mechanisms, namely integration, efficiency, and plasticity. These mechanisms are best investigated using brain network methodology.
From a network neuroscience perspective, integration refers to interactions across distinct brain networks; efficiency refers to the speed at which information can be transferred across the brain; and plasticity refers to the ability of brain networks to reconfigure, or rearrange, into an organization that is optimal for the current context. Therefore, we review relevant literature relating brain network function to both intelligence and cognitive control, as well as literature relating intelligence to cognitive control. Given the strong link between fluid intelligence, in particular, and cognitive control, we focus mainly on literature probing fluid intelligence in this chapter.
j. r. cohen and m. d’esposito
Correspondence between Neural Models of Intelligence and Neural Models of Cognitive Control

Neural models of intelligence propose that a distributed network of brain regions underlies intelligence, and that the efficiency with which information can be transferred across this network, as well as its ability to adapt to changing environmental demands, determines the level of intelligence of an individual (Barbey, 2018; Euler, 2018; Garlick, 2002; Haier et al., 1988; Jung & Haier, 2007; Mercado, 2008). As an example, the parieto-frontal integration theory (P-FIT) model of intelligence proposes that parietal and prefrontal brain regions work together as a network, connected by white matter tracts, to produce intelligent behavior (Jung & Haier, 2007). Later work has additionally proposed that the insula and subcortical regions, as well as white matter tracts connecting frontal cortices to the rest of the brain, are related to intelligence (Basten, Hilger, & Fiebach, 2015; Gläscher et al., 2010). While these particular theories focus on the relevance of specific brain regions and connections, a notable conclusion that emerges from this literature is that widespread brain regions distributed across the entire brain, with a variety of functional roles, work in concert to produce intelligent behavior. Other neural models of intelligence focus on the mechanisms through which intelligent behavior can emerge, without proposing that certain brain regions or connections are more important than others. For example, it has been proposed that whole-brain neural efficiency (i.e., speed of information transfer) is a key feature of intelligence, as is cortical plasticity, the brain's ability to adjust patterns of communication based on current demands (Garlick, 2002; Mercado, 2008). These ideas have recently been reframed in terms of network neuroscience (Barbey, 2018).
The large-scale network organization of the brain, as well as its ability to dynamically reconfigure based on current demands, is purported to be more important for intelligence than the specific brain regions or networks that are engaged at any given moment. It has further been proposed, however, that the reconfiguration and interaction of fronto-parietal networks in particular with downstream brain networks may drive fluid intelligence (Barbey, 2018). Finally, it has also been proposed that the ability of the brain to efficiently adapt its function through prediction error learning and the reduction of uncertainty (“predictive processing”) may underlie intelligence (Euler, 2018). Supporting our proposition that cognitive control and intelligence rely on highly overlapping neural mechanisms, models of the neural basis of cognitive control are strikingly similar to models of the neural basis of intelligence. In fact, they incorporate the same criteria: integration across distinct brain regions (or networks) to increase efficiency, as well as the ability to dynamically reconfigure communication patterns in response to current cognitive demands. Early research focusing on brain network organization underlying cognitive control focused on interactions across specific networks thought to
be related to dissociable cognitive control processes. In this literature, brain networks are often divided into “processors”, or groups of brain regions whose role is specialized for a particular operation (i.e., sensory input or motor output), and “controllers”, or groups of brain regions whose role is to integrate across multiple processors to affect their operations (Dehaene, Kerszberg, & Changeux, 1998; Power & Petersen, 2013). Network neuroscience has made it clear that controllers, which were originally thought to lie predominantly in the prefrontal cortex (Duncan, 2001; Miller & Cohen, 2001), are in fact distributed throughout the brain (Betzel, Gu, Medaglia, Pasqualetti, & Bassett, 2016; Gratton, Sun, & Petersen, 2018). Two candidates for controller networks that are critical for distinct aspects of cognitive control are the fronto-parietal (FP) and cingulo-opercular (CO) networks (Dosenbach, Fair, Cohen, Schlaggar, & Petersen, 2008; Power & Petersen, 2013). The FP network is engaged during tasks that require updating of information to be manipulated. The CO network is engaged during tasks that require the maintenance of task goals, error monitoring, and attention to salient stimuli. Some regions of these networks, such as the anterior cingulate cortex and the insula, are important aspects of neural models of intelligence (Basten et al., 2015; Jung & Haier, 2007). Brain regions critical for both cognitive control and intelligence have been directly compared in patients with brain lesions using voxel-based lesion-symptom mapping. Regions of the FP network, as well as white matter tracts connecting these regions, were found to be critical for performance on both tasks of intelligence and tasks probing cognitive control (Barbey et al., 2012). 
Additionally, the multiple-demand (MD) system (Duncan, 2010), which comprises regions from both the FP and CO networks and has high overlap with the regions emphasized in the P-FIT model of intelligence, has been explicitly linked to intelligence. A key aspect of theories regarding specific networks that underlie cognitive control is that for successful cognitive control to be exerted, the networks must interact with each other as well as with task-specific networks, such as sensory networks, motor networks, or networks that underlie task-relevant cognitive processes (language, memory, etc.). More broadly, the theory that integration across distinct brain networks is critical for complex cognition has been asserted (Mesulam, 1990; Sporns, 2013) and is supported by empirical literature (for a review, see Shine & Poldrack, 2018). In light of these theories, this chapter provides a focused review of literature probing how functional connectivity and brain network organization underlie cognitive control and intelligence to shed light on how intelligence may emerge from large-scale brain organization and dynamics. We begin by discussing functional connectivity and graph theory methods. Next, we review literature directly relating brain network organization to measures of intelligence, followed by literature relating brain network organization to cognitive control. We then discuss translational applications, and end by suggesting promising future directions this field could take to elucidate the brain network mechanisms underlying
cognitive control and intelligence. We draw on important theories and empirical evidence regarding the brain network basis of intelligence and of cognitive control to assert that both cognitive control and intelligence emerge from similar brain mechanisms, namely integration across distinct brain networks to increase efficiency, and dynamic reconfiguration in the service of current goals.
The Brain as a Network

It is well-known that individual brain regions do not act in isolation but are instead embedded within a larger system (see also Chapter 6, by Barbey). One approach toward studying the brain as a network is the use of mathematical tools, such as graph theory, which have a long history of being used to describe interactions across elements of large-scale, interconnected systems, such as social networks or airline route maps (Guimerà, Mossa, Turtschi, & Amaral, 2005; Newman & Girvan, 2004; see Chapter 2, by Hilger and Sporns, for a more detailed discussion of methods for studying networks). There is a growing body of literature that implements graph theory to describe the brain as a large-scale network, also referred to as a connectome, in which individual brain regions are graph nodes, and connections across brain regions are graph edges (Bullmore & Sporns, 2009; Sporns, 2010). Brain graph nodes can be defined using structural boundaries (e.g., individual gyri or subcortical structures), functional boundaries (e.g., voxels within dorsolateral prefrontal cortex that are engaged during working memory), or in a voxel-wise manner (i.e., each voxel is considered to be a graph node). Brain graph edges can be defined using physical structures (e.g., white matter tracts measured with diffusion MRI [dMRI]) or functional connections (e.g., coherent fluctuations in blood oxygen level-dependent [BOLD] signal measured with functional MRI [fMRI]). Structural brain graphs are often defined by counting the number of white matter tracts between each pair of brain regions, with a greater number of tracts reflecting stronger structural connectivity. Measures of white matter integrity, such as fractional anisotropy, can also be used to quantify structural connectivity strength.
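To make the node-and-edge construction concrete, a structural brain graph can be represented as a weighted adjacency matrix whose entry (i, j) counts the tracts between regions i and j; node degree (see Table 13.1) is then a row sum of the binarized matrix. The sketch below uses a made-up streamline-count matrix standing in for real tractography output:

```python
import numpy as np

# Hypothetical streamline counts between 4 regions (symmetric, zero diagonal);
# real values would come from dMRI tractography.
streamlines = np.array([
    [  0, 120,  15,   0],
    [120,   0,  80,   5],
    [ 15,  80,   0,  60],
    [  0,   5,  60,   0],
])

adjacency = (streamlines > 0).astype(int)  # binarize: any tract counts as an edge
degree = adjacency.sum(axis=1)             # number of edges per node (Table 13.1)
print(degree.tolist())                     # [2, 3, 3, 2]
```

Weighted analyses would instead keep the raw counts (or a fractional anisotropy value per connection) as edge weights.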
Functional brain graphs, on the other hand, are often defined by quantifying the strength of the correlation between temporal fluctuations in BOLD magnitude across two brain regions, with stronger correlations indicating greater functional connectivity. Other functional measures, such as coherence or covariance measures, as well as directed connectivity measures using techniques such as dynamic causal modeling or structural equation modeling, are alternate methods for estimating functional edge strength. Once nodes are defined and edges between all pairs of nodes estimated, the overall topological organization of
Figure 13.1 Brain graph schematic. Gray ovals are networks of the graph, circles are nodes, dashed lines are within-network edges, and solid lines are between-network edges. Figure adapted from Cohen and D’Esposito (2016). See also Table 13.1.
a brain graph can be quantified using whole-brain summary measures without losing information about individual nodes or individual edges. Figure 13.1 and Table 13.1 describe a subset of whole-brain summary measures and nodal measures that are commonly used in network neuroscience. Using a network neuroscience approach, the brain has been found to have characteristics of both small-world and modular networks. A small-world network is one in which groups of tightly interconnected regions have sparse, long-range connections across them, and is defined as a combination of high clustering coefficient and relatively short path length (Bassett & Bullmore, 2006). Small-world networks allow for both specialized information processing (within tightly connected clusters of regions) and integrated, distributed information processing (long-range connections that cut across clusters) (Bassett & Bullmore, 2006). They further minimize wiring costs (only a small subset of connections are long, metabolically costly connections), and thus have an efficient network structure (Bullmore & Sporns, 2012). Modular networks are small-world networks in which the tightly-interconnected
Table 13.1 Definitions and descriptions of graph theory metrics.

Metric | Definition | Description | Depiction in Figure 13.1
Degree | The number of edges of a node | How highly connected a node is | The orange node has a degree of 6
System Segregation | Relative strength of within-network connectivity as compared to between-network connectivity | Network segregation into distinct networks with stronger within-network connectivity and weaker between-network connectivity | Edges within gray ovals vs. edges spanning different ovals
Modularity | Degree to which within-network edges are stronger than expected at random | Network segregation into distinct networks with strong within-network connectivity | Gray ovals
Within-module Degree | Number of within-network connections of a node relative to average number of connections | Nodes important for within-network communication (provincial hub nodes) | Orange node
Participation Coefficient | Number of inter-network connections relative to all connections of a node | Nodes important for across-network integration (connector hub nodes) | Green node
Path Length | Shortest distance (number of edges) between a pair of nodes | Efficiency of information transfer between two nodes | Blue edges
Clustering Coefficient | Probability that two nodes connected to a third are also connected to each other | Description of interconnectedness (clique-ishness) of groups of neighboring nodes | Red nodes/edges
Global Efficiency | Inverse of average shortest path length across the entire system | Efficient global, integrative communication | Blue edges
Local Efficiency | Inverse of average shortest path length across a system consisting only of a node's immediate neighbors | Efficient local, within-network communication | Red nodes/edges
regions are organized into communities (also referred to as individual networks or modules). In addition to the aforementioned features of small-world networks, modular networks are resilient to brain damage (Alstott, Breakspear, Hagmann, Cammoun, & Sporns, 2009; Gratton, Nomura, Perez, & D'Esposito, 2012) and are highly adaptable (both on short timescales, such as when engaged in a cognitive task, and on long timescales, such as across evolution) (Meunier, Lambiotte, & Bullmore, 2010). A small-world,
modular brain organization has long been proposed to be critical for complex cognition (Dehaene et al., 1998), including intelligence (Jung & Haier, 2007; Mercado, 2008). Finally, brain networks can be characterized at different timescales. Static connectivity, or functional connectivity assessed across several minutes, is useful for understanding dominant patterns of brain network organization during a specific cognitive context (i.e., during a resting state scan or during a working memory task). It has been proposed that static connectivity, in particular during a resting state, may reflect a trait-level marker of one’s brain network organization (Finn et al., 2015; Gratton, Laumann et al., 2018). Dynamic, or time-varying, functional connectivity is assessed over shorter timescales and may reflect rapid, momentary changes in cognitive demands or in internal state (Cohen, 2018; Kucyi, Tambini, Sadaghiani, Keilholz, & Cohen, 2018). While some time-varying functional connectivity methods may be particularly sensitive to artifact, model-based approaches have been shown to more accurately reflect true underlying changes in patterns of functional connectivity (see Cohen, 2018 for a discussion of methodological challenges related to time-varying functional connectivity). Dynamic changes in network organization are a key part of neural models of intelligent behavior (Barbey, 2018; Garlick, 2002; Mercado, 2008), thus both static and dynamic functional connectivity will be discussed here.
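The basic pipeline described in this section (edges from correlated BOLD signal, then whole-brain summary measures such as those in Table 13.1) can be sketched in a few lines. The example below uses simulated time series in place of real fMRI data, and the 0.3 correlation threshold is an arbitrary illustrative choice, not a recommended value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated BOLD time series for 6 regions (200 time points): two "networks"
# of 3 regions each, built around a shared signal, stand in for real data.
n_time = 200
shared_a, shared_b = rng.standard_normal((2, n_time))
bold = np.vstack(
    [shared_a + 0.5 * rng.standard_normal(n_time) for _ in range(3)]
    + [shared_b + 0.5 * rng.standard_normal(n_time) for _ in range(3)]
)

# Functional connectivity: Pearson correlation between every pair of regions.
fc = np.corrcoef(bold)

# Binarize into a graph: an edge exists where correlation exceeds 0.3.
adj = (fc > 0.3).astype(int)
np.fill_diagonal(adj, 0)

def shortest_paths_from(adj, src):
    """Breadth-first search: shortest path length (in edges) from src to every node."""
    dist = [np.inf] * len(adj)
    dist[src] = 0
    queue = [src]
    while queue:
        u = queue.pop(0)
        for v in range(len(adj)):
            if adj[u][v] and dist[v] == np.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Global efficiency (Table 13.1): mean inverse shortest path length over all
# node pairs; disconnected pairs contribute 0 (1/inf).
n = len(adj)
inv_d = [1.0 / shortest_paths_from(adj, i)[j] for i in range(n) for j in range(i + 1, n)]
global_eff = sum(inv_d) / (n * (n - 1) / 2)
print(global_eff)
```

With two tightly correlated blocks and near-zero cross-block correlations, the thresholded graph segregates into two modules and global efficiency stays well below 1, reflecting the absence of integrative (between-network) edges.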
Intelligence and the Functional Connectome

Some extant literature has related brain network organization to intelligence as measured with normed and psychometrically validated tests, such as estimated full-scale IQ (FSIQ) or IQ subscores using the Wechsler Adult Intelligence Scale (Wechsler, 2008) or the Wechsler Abbreviated Scale of Intelligence (Wechsler, 2011), or measures focused on fluid intelligence such as the Perceptual Reasoning Index of the Wechsler scales, Raven's Progressive Matrices (Raven, 2000), or the Cattell Culture Fair Test (Cattell & Horn, 1978). This literature consistently finds that the degree of network integration, or interactions across widely distributed brain regions, underlies intelligence. This is observed regardless of data modality (structural white matter networks, resting state functional networks, or task-based functional networks). For example, studies of structural brain network organization characterized using dMRI have found that a greater number of white matter tracts, resulting in greater network integration, is related to higher FSIQ (Bohlken et al., 2016), as are the existence of tracts with properties that result in more efficient information flow (i.e., higher fractional anisotropy or reduced radial diffusivity or mean diffusivity) (Malpas et al., 2016). Additionally, global efficiency of white matter tracts has been found to be related to FSIQ, as well as performance IQ and verbal IQ subscales (Li et al., 2009).
Studies of brain network organization measured using resting state fMRI (rs-fMRI) have generally found that greater integration (stronger functional connectivity distributed across the brain and lower path length) is related to FSIQ (Langer et al., 2012; Malpas et al., 2016; Song et al., 2008; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009; Wang, Song, Jiang, Zhang, & Yu, 2011). Notably, greater global and local integration (i.e., both across and within distinct brain networks) in frontal, parietal, and temporal regions may be particularly important for intelligence (Cole, Ito, & Braver, 2015; Cole, Yarkoni, Repovš, Anticevic, & Braver, 2012; Hearne, Mattingley, & Cocchi, 2016; Hilger, Ekman, Fiebach, & Basten, 2017a, 2017b; Malpas et al., 2016; Santarnecchi et al., 2017; Song et al., 2008; van den Heuvel et al., 2009; Wang et al., 2011). Using machine learning approaches, patterns of whole-brain rs-fMRI functional connectivity have been found to successfully predict intelligence scores on fluid intelligence measures (Dubois, Galdi, Paul, & Adolphs, 2018; Finn et al., 2015; Greene, Gao, Scheinost, & Constable, 2018). While some studies have found that connections within and across networks encompassing frontal, parietal, and temporal regions can more successfully predict intelligence scores than whole-brain connectivity matrices (Finn et al., 2015), other studies have found that focusing on specific networks or pairs of networks reduces success as compared to utilizing whole-brain connectivity patterns (Dubois et al., 2018). A few studies have assessed functional brain network organization during cognitive tasks as compared to during rs-fMRI and related that to fluid intelligence. In a study directly quantifying how network reconfiguration between rs-fMRI and cognitive tasks relates to fluid intelligence, it was observed that a more similar network organization between rest and the tasks was indicative of higher intelligence (Schultz & Cole, 2016). 
This held for both cognitive control tasks (i.e., working memory or relational reasoning) and other cognitive tasks (i.e., language comprehension). Critically, when comparing the ability to predict intelligence scores using either rs-fMRI or task-based functional connectivity patterns, it has been found that functional brain network organization measured during working memory tasks is able to predict intelligence scores more successfully than during rs-fMRI (Greene et al., 2018; Xiao, Stephen, Wilson, Calhoun, & Wang, 2018). Together, this line of research indicates that it is critical to investigate task-based network organization, and particularly that of cognitive control tasks, to understand how brain function underlies intelligence.
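The prediction logic behind these machine-learning studies can be illustrated schematically. The toy sketch below is not any cited study's actual pipeline: synthetic "participants" each contribute a vector of functional connectivity edge strengths, and leave-one-out cross-validated ridge regression predicts a held-out participant's score, summarized (as is common in this literature) by the correlation between predicted and observed scores:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for real data: 80 "participants", each with 45 FC edges
# (upper triangle of a 10-region matrix), and a score that truly depends on
# a few edges plus noise.
n_subj, n_edges = 80, 45
fc_edges = rng.standard_normal((n_subj, n_edges))
true_w = np.zeros(n_edges)
true_w[:5] = 1.0                                   # only 5 edges carry signal
score = fc_edges @ true_w + 0.5 * rng.standard_normal(n_subj)

def ridge_fit(X, y, lam=10.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Leave-one-out cross-validation: each participant is predicted from a model
# trained on everyone else.
preds = np.empty(n_subj)
for i in range(n_subj):
    train = np.delete(np.arange(n_subj), i)
    w = ridge_fit(fc_edges[train], score[train])
    preds[i] = fc_edges[i] @ w

r = np.corrcoef(preds, score)[0, 1]   # predicted vs. observed correlation
print(r)
```

Published pipelines differ in feature selection, model, and validation scheme, but the held-out-prediction structure is the common core.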
Cognitive Control and the Functional Connectome

One of the most well-studied forms of cognitive control from a network perspective is working memory. Similar to studies relating network organization to intelligence, we have found that, when engaged in a working
memory task, global measures of network integration increase, while global measures of network segregation decrease (Cohen & D’Esposito, 2016). This increased integration was observed between networks important for cognitive control (FP, CO) and task-relevant sensory processes (the somatomotor network) in particular (Cohen, Gallen, Jacobs, Lee, & D’Esposito, 2014). Importantly, individuals with higher network integration during the performance of tasks probing working memory performed better on the tasks (Cohen & D’Esposito, 2016; Cohen et al., 2014). These findings have held in both young adults (Cohen & D’Esposito, 2016; Cohen et al., 2014) and in older adults (Gallen, Turner, Adnan, & D’Esposito, 2016). Increased integration between cognitive control networks during working memory has been observed by others as well (Gordon, Stollstorff, & Vaidya, 2012; Liang, Zou, He, & Yang, 2016), with a parametric relationship between increased integration across networks and increasing working memory load (Finc et al., 2017; Kitzbichler, Henson, Smith, Nathan, & Bullmore, 2011; Vatansever, Menon, Manktelow, Sahakian, & Stamatakis, 2015; Zippo, Della Rosa, Castiglioni, & Biella, 2018). In addition, the strength and density of functional connectivity in frontal, parietal, and occipital regions related to cognitive control and task-relevant sensory processes has also been shown to increase with increasing working memory load (Liu et al., 2017). Importantly, this increased integration has been related to improved working memory performance (Bassett et al., 2009; Finc et al., 2017; Stanley, Dagenbach, Lyday, Burdette, & Laurienti, 2014; Vatansever et al., 2015). 
Time-varying functional connectivity analyses during working memory tasks have indicated that levels of network integration and segregation fluctuate even within trials (Zippo et al., 2018), that more time is spent overall in an integrated state during a working memory task as compared to during rest or during non-cognitive control tasks (Shine et al., 2016), and that more stable functional connectivity during working memory is related to increased performance (Chen, Chang, Greicius, & Glover, 2015). Studies that have probed other aspects of cognitive control, such as task-switching (Chen, Cai, Ryali, Supekar, & Menon, 2016; Cole, Laurent, & Stocco, 2013; Yin, Wang, Pan, Liu, & Chen, 2015), interference resolution (Elton & Gao, 2014, 2015; Hutchison & Morton, 2015), and sustained attention (Ekman, Derrfuss, Tittgemeyer, & Fiebach, 2012; Godwin, Barry, & Marois, 2015; Spadone et al., 2015), have reported similar findings with regard to network integration and time-varying functional connectivity. Together, the findings from studies that examine functional brain network organization during the performance of cognitive control tasks are strikingly similar to those relating functional brain network organization to standard measures of intelligence (FSIQ or fluid intelligence). Specifically, both point to increased network integration, particularly involving FP and CO networks and their connections to task-relevant sensory and motor networks. Moreover, a greater degree of network reconfiguration during rest, in combination with
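The integration and segregation measures used throughout this literature can be made concrete with a toy example. The sketch below uses illustrative values rather than real data, and the segregation formula follows the common within-minus-between form of the system segregation metric in Table 13.1:

```python
import numpy as np

# Toy functional connectivity matrix for 6 regions: regions 0-2 form one
# network, regions 3-5 another (values are illustrative, not real data).
fc = np.array([
    [1.0, 0.8, 0.7, 0.2, 0.1, 0.2],
    [0.8, 1.0, 0.6, 0.1, 0.2, 0.1],
    [0.7, 0.6, 1.0, 0.2, 0.2, 0.3],
    [0.2, 0.1, 0.2, 1.0, 0.7, 0.6],
    [0.1, 0.2, 0.2, 0.7, 1.0, 0.8],
    [0.2, 0.1, 0.3, 0.6, 0.8, 1.0],
])
labels = np.array([0, 0, 0, 1, 1, 1])   # network assignment per region

i, j = np.triu_indices_from(fc, k=1)    # unique region pairs (no diagonal)
within = fc[i, j][labels[i] == labels[j]].mean()
between = fc[i, j][labels[i] != labels[j]].mean()

# Segregation = (within - between) / within; higher values mean stronger
# within-network than between-network coupling. During demanding tasks,
# increased integration would show up as a rise in `between` (and a drop here).
segregation = (within - between) / within
print(round(segregation, 2))            # 0.75
```

Comparing this quantity between rest and task scans is one simple way to operationalize the integration increases described above.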
more stable network organization during task, is indicative of increased cognitive control ability. This supports our claim that cognitive control and intelligence rely on similar neural mechanisms, namely network integration and reconfiguration (i.e., plasticity) (Barbey, 2018; Dehaene et al., 1998; Garlick, 2002; Jung & Haier, 2007; Mercado, 2008; Power & Petersen, 2013). We propose that, because the brain is organized in a modular structure, integration is crucial for efficient communication across distinct modules, each of which implements a specific function (e.g., sensation, motor execution, attention, memory). The capacity for plasticity allows for rapid changes in the patterns of communication within and across modules based on current demands. Together, these features of brain organization allow for all aspects of goal-directed higher-order cognition, including cognitive control and intelligence.
Translational Applications

Literature probing the translational relevance of the relationship between network organization/plasticity and intelligence is scant; thus, here we highlight findings from the cognitive control literature. Impairment in cognitive control is hypothesized to be a transdiagnostic marker for brain disorders (Goschke, 2014; McTeague, Goodkind, & Etkin, 2016; Snyder, Miyake, & Hankin, 2015). Notably, transdiagnostic alterations in brain structure and function, particularly as related to regions of the CO and FP networks, are related to these impairments in cognitive control (Goodkind et al., 2015; McTeague et al., 2017). These findings are not specific to adults; in a large sample of children and adolescents with symptoms from multiple disorders, CO dysfunction during a working memory task was related to a general psychopathology trait (Shanmugan et al., 2016). From a brain network perspective, consistent, reliable disruptions to patterns of brain network organization and dynamics have also been observed transdiagnostically (Calhoun, Miller, Pearlson, & Adalı, 2014; Cao, Wang, & He, 2015; Fornito, Zalesky, & Breakspear, 2015; Xia & He, 2011). Better characterizing network topology and dynamics will allow predictions to be made about brain dysfunction and the cognitive processes and behavioral symptoms that arise from such dysfunction (Deco & Kringelbach, 2014; Fornito et al., 2015). As an example, we have shown that damage to nodes that integrate across distinct brain networks causes more extensive brain network disruption than damage to nodes whose connections are contained within specific networks (Gratton et al., 2012). By leveraging information about dysfunctional resting state network organization and the roles of specific nodes in psychiatric and neurologic disorders, it is possible to identify promising neural sites to target with brain stimulation as a means of treatment (Dubin, 2017; Hart, Ypma, Romero-Garcia, Price, & Suckling, 2016).
For example, it has been demonstrated that by combining information about canonical resting state network organization with knowledge of individual differences in nuances of connectivity patterns, it is possible to predict which target stimulation sites are more likely to be effective at treating brain disorders (Fox et al., 2014; Opitz, Fox, Craddock, Colcombe, & Milham, 2016). Future research should combine modeling of network mechanisms with empirical research relating these changes in connectivity to both disease-specific and transdiagnostic improvements in cognition and symptom profiles. Finally, we have recently proposed that modularity of intrinsic functional network organization, or the degree to which the brain segregates into distinct networks, may be a biomarker indicating potential for cognitive plasticity (Gallen & D’Esposito, 2019). Thus, individuals who have cognitive control difficulties due to normal aging or to neurological or psychiatric disorders, and who have particularly modular brain organization, may be good candidates for cognitive control training. This is thought to be because higher modularity, which leads to greater information encapsulation, allows for more efficient processing. As previously stated, these features of brain function have been proposed to be related to higher intelligence as well (Garlick, 2002; Mercado, 2008).
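The modularity referenced here, the degree to which a network segregates into distinct modules, is typically quantified with the Newman-Girvan Q statistic (Newman & Girvan, 2004): the fraction of edge weight falling within modules minus the fraction expected by chance. A minimal illustrative sketch, using a hypothetical toy network rather than real connectivity data:

```python
import numpy as np

def modularity(adj, modules):
    """Newman-Girvan modularity Q: the fraction of edges falling within
    modules minus the fraction expected by chance. Higher Q indicates a
    more segregated, modular network."""
    adj = np.asarray(adj, dtype=float)
    modules = np.asarray(modules)
    two_m = adj.sum()            # total edge weight, each edge counted twice
    k = adj.sum(axis=1)          # node degrees
    same = modules[:, None] == modules[None, :]   # same-module indicator
    return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m

# Two tightly connected triangles joined by a single bridge edge
adj = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
q_modular = modularity(adj, np.array([0, 0, 0, 1, 1, 1]))  # true partition
q_random = modularity(adj, np.array([0, 1, 0, 1, 0, 1]))   # shuffled partition
```

The partition matching the two triangles yields a clearly positive Q, while the shuffled partition yields a negative one; in empirical work, Q is computed over partitions found by community-detection algorithms rather than assumed in advance.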
Promising Next Steps: Uncovering Mechanisms Underlying the Emergence of Cognitive Control and Intelligence from the Functional Connectome

While we have learned which brain networks are involved in cognitive control, we do not yet understand the particular roles of these networks in different aspects of cognition, or the specific mechanisms through which they act (Mill, Ito, & Cole, 2017). There is a rich behavioral literature distinguishing among components of cognitive control (Friedman & Miyake, 2017) or components of intelligence (Conway et al., 2003), as well as how they relate to each other (Chen et al., 2019; Friedman et al., 2006). An important next step will be to integrate these cognitive theories with empirical data investigating network mechanisms (Barbey, 2018; Girn, Mills, & Christoff, 2019; Kovacs & Conway, 2016; Mill et al., 2017; see also Chapter 6, by Barbey, and Chapter 2, by Hilger and Sporns). As an example, it has recently been proposed that cognition in general emerges from interactions between stable components of network organization (mainly unimodal sensory and motor networks) and flexible components of network organization (higher-order networks associated with cognitive control, such as the FP and CO networks) (Mill et al., 2017). Future research could test this hypothesis and specify predictions for distinct aspects of cognitive control. For example, based on the hypothesized roles of the FP and CO networks (trialwise updating and task set maintenance; Dosenbach et al., 2008; Power & Petersen, 2013), perhaps the CO network drives the maintenance component of working memory, while FP network
interactions drive the updating component. Characterizing network organization and dynamics during a working memory task that can separate these two components would address this prediction. A more direct assessment of how network flexibility relates to cognitive control can be achieved using time-varying functional connectivity measures (Cohen, 2018; Gonzalez-Castillo & Bandettini, 2018; Shine & Poldrack, 2018). Using these tools, it has been found that brain network dynamics become more stable when participants are successfully focused on a challenging cognitive task (Chen et al., 2015; Elton & Gao, 2015; Hutchison & Morton, 2015), that changes in brain network organization related to attention predict whether or not a subject will detect a stimulus (Ekman et al., 2012; Sadaghiani, Poline, Kleinschmidt, & D’Esposito, 2015; Thompson et al., 2013; Wang, Ong, Patanaik, Zhou, & Chee, 2016), and that more variable network organization during rest, especially as related to the salience network (which overlaps substantially with the CO network), is related to greater cognitive flexibility (Chen et al., 2016). We may be able to better understand how these dynamics relate to specific cognitive demands by focusing on different time courses of stability vs. flexibility across different networks. For example, perhaps more stable CO network connectivity, in combination with more dynamic FP network connectivity, is optimal for cognitive control. Implementing these methods will allow us to better characterize how the dynamics of brain network organization underlie successful cognitive control, both across distinct brain networks and across timescales.

Much of the evidence we have reviewed thus far has been correlational. Causal manipulation of brain function with transcranial magnetic stimulation (TMS) can be used to probe neural mechanisms in humans.
For example, we demonstrated that disruption of function in key nodes of the FP or CO networks with TMS induced widespread changes in functional connectivity distributed across the entire brain, while TMS to the primary somatosensory cortex did not (Gratton, Lee, Nomura, & D’Esposito, 2013). These findings add to correlational studies that have shown that cognitive control networks are more highly integrated than primary sensory networks (Cole, Pathak, & Schneider, 2010; van den Heuvel & Sporns, 2011), and that the dynamics of FP nodes (Cole et al., 2013; Yin et al., 2015) and of CO nodes (Chen et al., 2016) are critical for cognitive control. Future research could further probe the impact of causal manipulation of functional brain network organization on changes in behavioral performance during tasks assessing various aspects of intelligence. Finally, computational models based on theories of how brain network function underlies specific aspects of cognitive control will allow us to move beyond descriptive measures and explore the mechanisms that cause cognitive control and intelligence to emerge from brain network organization. Early models of cognitive control that focused primarily on functions of individual regions, particularly of the prefrontal cortex, uncovered important information about various aspects of cognitive control (for a review, see O’Reilly,
Herd, & Pauli, 2010). Network-based computational models thus far have focused mainly on overall brain function – how underlying brain structure constrains brain function (Deco, Jirsa, McIntosh, Sporns, & Kötter, 2009; Gu et al., 2015; Honey, Kötter, Breakspear, & Sporns, 2007), how information flows through the system (Cole, Ito, Bassett, & Schultz, 2016; Mitra, Snyder, Blazey, & Raichle, 2015), and how network dynamics emerge (Breakspear, 2017; Cabral, Kringelbach, & Deco, 2017; Deco & Corbetta, 2011; Deco, Jirsa, & McIntosh, 2013). These models are successfully able to predict arousal state (Deco, Tononi, Boly, & Kringelbach, 2015) and inform our understanding of neuropsychiatric disorders (Deco & Kringelbach, 2014), but they have yet to be used to model specific aspects of cognition, such as cognitive control or intelligence. A promising model based on network control theory has identified brain regions that, based on their structural connectivity, have key roles in directing the brain into different “states”, or whole-brain patterns of functional connectivity (Gu et al., 2015). Different brain states are thought to underlie different cognitive and affective states (Cohen, 2018), and a particularly integrated state is thought to underlie cognitive control and intelligence (Barbey, 2018; Dehaene et al., 1998; see also Chapter 6, by Barbey). It has been found that regions within the FP and CO networks were most able to initiate transitions across brain states that are thought to underlie different facets of higher-order cognition (Gu et al., 2015). Further, the ability of the parietal cortex to initiate state transitions has been related to intelligence (Kenett et al., 2018). It has been proposed that fluid intelligence may emerge from a brain network structure that is optimally organized to reconfigure brain network organization into a variety of states that are effortful and difficult to reach (Barbey, 2018; see also Chapter 6, by Barbey). 
Thus, intelligence may be an emergent property of large-scale topology and dynamics. While compelling, further research is needed to directly address this hypothesis (Girn et al., 2019).
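The network control theory approach mentioned above (Gu et al., 2015) rests on a simple linear dynamics model, x(t+1) = A x(t) + B u(t), where A is the structural connectome and B selects where control input enters. One commonly reported quantity, a node's "average controllability," can be approximated as the trace of its controllability Gramian. The sketch below is a rough illustration under these stated assumptions, not the authors' implementation, and the star-shaped "connectome" is hypothetical.

```python
import numpy as np

def average_controllability(A, horizon=50):
    """Per-node average controllability under linear dynamics
    x(t+1) = A x(t) + B u(t): trace of the controllability Gramian
    W = sum_t A^t B B' (A')^t with control input injected at one node.
    A is first rescaled by 1 + its largest absolute eigenvalue so the
    dynamics are stable."""
    A = np.asarray(A, dtype=float)
    A = A / (1 + np.max(np.abs(np.linalg.eigvals(A))))
    n = len(A)
    scores = np.zeros(n)
    for node in range(n):
        B = np.zeros((n, 1))
        B[node, 0] = 1.0          # inject input at this node only
        W = np.zeros((n, n))
        A_t = np.eye(n)           # accumulates A^t
        for _ in range(horizon):
            W += A_t @ B @ B.T @ A_t.T
            A_t = A_t @ A
        scores[node] = np.trace(W)
    return scores

# Hypothetical toy "structural connectome": a star with node 0 as hub
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1.0
scores = average_controllability(A)
```

In this toy network the hub receives the highest score, consistent with the finding that highly connected regions tend to be effective drivers of easy-to-reach state transitions.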
Conclusion

In conclusion, we can learn more about how intelligence arises from brain function by considering studies of cognitive control and brain network organization. This literature has found that greater integration within and across brain networks, in combination with task-relevant dynamic reconfiguration, underlies successful cognitive control across a variety of domains (i.e., working memory updating, task-switching, interference resolution, controlled attention). These cognitive processes are thought to be critical for intelligence, a theory that is supported by the finding that a similar pattern of integrated brain organization and dynamics is crucial for both successful cognitive control and higher intelligence. These observations hold important implications for understanding the cognitive deficits observed across a wide range
of neurological and psychiatric disorders, as well as for targeting promising methods of treatment for these disorders. To continue to make progress in understanding how cognitive control and intelligence emerge from brain function, future work aimed at understanding how cognition emerges from dynamic interactions across brain regions is critical.
References

Alstott, J., Breakspear, M., Hagmann, P., Cammoun, L., & Sporns, O. (2009). Modeling the impact of lesions in the human brain. PLoS Computational Biology, 5(6), e1000408. Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (ed.), Psychology of learning and motivation, Vol. 8 (pp. 47–89). New York: Academic Press. Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20. Barbey, A. K., Colom, R., Solomon, J., Krueger, F., Forbes, C., & Grafman, J. (2012). An integrative architecture for general intelligence and executive function revealed by lesion mapping. Brain, 135(Pt 4), 1154–1164. Bassett, D. S., & Bullmore, E. (2006). Small-world brain networks. The Neuroscientist, 12(6), 512–523. Bassett, D. S., Bullmore, E. T., Meyer-Lindenberg, A., Apud, J. A., Weinberger, D. R., & Coppola, R. (2009). Cognitive fitness of cost-efficient brain functional networks. Proceedings of the National Academy of Sciences of the United States of America, 106(28), 11747–11752. Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. Betzel, R. F., Gu, S., Medaglia, J. D., Pasqualetti, F., & Bassett, D. S. (2016). Optimally controlling the human connectome: The role of network topology. Scientific Reports, 6, 30770. Bohlken, M. M., Brouwer, R. M., Mandl, R. C. W., Hedman, A. M., van den Heuvel, M. P., van Haren, N. E. M., . . . Hulshoff Pol, H. E. (2016). Topology of genetic associations between regional gray matter volume and intellectual ability: Evidence for a high capacity network. Neuroimage, 124(Pt A), 1044–1053. Breakspear, M. (2017). Dynamic models of large-scale brain activity. Nature Neuroscience, 20(3), 340–352. Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems.
Nature Reviews Neuroscience, 10(3), 186–198. Bullmore, E., & Sporns, O. (2012). The economy of brain network organization. Nature Reviews Neuroscience, 13(5), 336–349. Cabral, J., Kringelbach, M. L., & Deco, G. (2017). Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms. Neuroimage, 160, 84–96.
Calhoun, V. D., Miller, R., Pearlson, G., & Adalı, T. (2014). The chronnectome: Time-varying connectivity networks as the next frontier in fMRI data discovery. Neuron, 84(2), 262–274. Cao, M., Wang, Z., & He, Y. (2015). Connectomics in psychiatric research: Advances and applications. Neuropsychiatric Disease and Treatment, 11, 2801–2810. Cattell, R. B. (1971). Abilities: Their structure, growth and action. Boston, MA: Houghton Mifflin. Cattell, R. B., & Horn, J. L. (1978). A check on the theory of fluid and crystallized intelligence with description of new subtest designs. Journal of Educational Measurement, 15(3), 139–164. Chen, T., Cai, W., Ryali, S., Supekar, K., & Menon, V. (2016). Distinct global brain dynamics and spatiotemporal organization of the salience network. PLoS Biology, 14(6), e1002469. Chen, J. E., Chang, C., Greicius, M. D., & Glover, G. H. (2015). Introducing co-activation pattern metrics to quantify spontaneous brain network dynamics. Neuroimage, 111, 476–488. Chen, Y., Spagna, A., Wu, T., Kim, T. H., Wu, Q., Chen, C., . . . Fan, J. (2019). Testing a cognitive control model of human intelligence. Scientific Reports, 9, 2898. Cohen, J. R. (2018). The behavioral and cognitive relevance of time-varying, dynamic changes in functional connectivity. Neuroimage, 180(Pt B), 515–525. Cohen, J. R., & D’Esposito, M. (2016). The segregation and integration of distinct brain networks and their relationship to cognition. The Journal of Neuroscience, 36(48), 12083–12094. Cohen, J. R., Gallen, C. L., Jacobs, E. G., Lee, T. G., & D’Esposito, M. (2014). Quantifying the reconfiguration of intrinsic networks during working memory. PLoS One, 9(9), e106636. Cole, M. W., Ito, T., Bassett, D. S., & Schultz, D. H. (2016). Activity flow over resting-state networks shapes cognitive task activations. Nature Neuroscience, 19(12), 1718–1726. Cole, M. W., Ito, T., & Braver, T. S. (2015).
Lateral prefrontal cortex contributes to fluid intelligence through multinetwork connectivity. Brain Connectivity, 5(8), 497–504. Cole, M. W., Laurent, P., & Stocco, A. (2013). Rapid instructed task learning: A new window into the human brain’s unique capacity for flexible cognitive control. Cognitive, Affective & Behavioral Neuroscience, 13(1), 1–22. Cole, M. W., Pathak, S., & Schneider, W. (2010). Identifying the brain’s most globally connected regions. Neuroimage, 49(4), 3132–3148. Cole, M. W., Yarkoni, T., Repovš, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. The Journal of Neuroscience, 32(26), 8988–8999. Conway, A. R. A., Getz, S. J., Macnamara, B., & Engel de Abreu, P. M. J. (2011). Working memory and intelligence. In R. J. Sternberg, & S. B. Kaufman (eds.), The Cambridge handbook of intelligence (pp. 394–418). New York: Cambridge University Press. Conway, A. R. A., Kane, M. J., & Engle, R. W. (2003). Working memory capacity and its relation to general intelligence. Trends in Cognitive Sciences, 7(12), 547–552.
Deco, G., & Corbetta, M. (2011). The dynamical balance of the brain at rest. The Neuroscientist, 17(1), 107–123. Deco, G., Jirsa, V. K., & McIntosh, A. R. (2013). Resting brains never rest: Computational insights into potential cognitive architectures. Trends in Neurosciences, 36(5), 268–274. Deco, G., Jirsa, V., McIntosh, A. R., Sporns, O., & Kötter, R. (2009). Key role of coupling, delay, and noise in resting brain fluctuations. Proceedings of the National Academy of Sciences of the United States of America, 106(25), 10302–10307. Deco, G., & Kringelbach, M. L. (2014). Great expectations: Using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron, 84(5), 892–905. Deco, G., Tononi, G., Boly, M., & Kringelbach, M. L. (2015). Rethinking segregation and integration: Contributions of whole-brain modelling. Nature Reviews Neuroscience, 16(7), 430–439. Dehaene, S., Kerszberg, M., & Changeux, J. P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences of the United States of America, 95(24), 14529–14534. Dosenbach, N. U. F., Fair, D. A., Cohen, A. L., Schlaggar, B. L., & Petersen, S. E. (2008). A dual-networks architecture of top-down control. Trends in Cognitive Sciences, 12(3), 99–105. Dubin, M. (2017). Imaging TMS: Antidepressant mechanisms and treatment optimization. International Review of Psychiatry, 29(2), 89–97. Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373, 20170284. Duncan, J. (2001). An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2(11), 820–829. Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14(4), 172–179. 
Ekman, M., Derrfuss, J., Tittgemeyer, M., & Fiebach, C. J. (2012). Predicting errors from reconfiguration patterns in human brain networks. Proceedings of the National Academy of Sciences of the United States of America, 109(41), 16714–16719. Elton, A., & Gao, W. (2014). Divergent task-dependent functional connectivity of executive control and salience networks. Cortex, 51, 56–66. Elton, A., & Gao, W. (2015). Task-related modulation of functional connectivity variability and its behavioral correlations. Human Brain Mapping, 36(8), 3260–3272. Euler, M. J. (2018). Intelligence and uncertainty: Implications of hierarchical predictive processing for the neuroscience of cognitive ability. Neuroscience and Biobehavioral Reviews, 94, 93–112. Finc, K., Bonna, K., Lewandowska, M., Wolak, T., Nikadon, J., Dreszer, J., . . . Kühn, S. (2017). Transition of the functional brain network related to increasing cognitive demands. Human Brain Mapping, 38(7), 3659–3674.
Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., . . . Constable, R. T. (2015). Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18(11), 1664–1671. Fornito, A., Zalesky, A., & Breakspear, M. (2015). The connectomics of brain disorders. Nature Reviews Neuroscience, 16(3), 159–172. Fox, M. D., Buckner, R. L., Liu, H., Chakravarty, M. M., Lozano, A. M., & Pascual-Leone, A. (2014). Resting-state networks link invasive and noninvasive brain stimulation across diverse psychiatric and neurological diseases. Proceedings of the National Academy of Sciences of the United States of America, 111(41), E4367–E4375. Friedman, N. P., & Miyake, A. (2017). Unity and diversity of executive functions: Individual differences as a window on cognitive structure. Cortex, 86, 186–204. Friedman, N. P., Miyake, A., Corley, R. P., Young, S. E., Defries, J. C., & Hewitt, J. K. (2006). Not all executive functions are related to intelligence. Psychological Science, 17(2), 172–179. Gallen, C. L., & D’Esposito, M. (2019). Modular brain network organization: A biomarker of cognitive plasticity. Trends in Cognitive Sciences, 23(4), 293–304. Gallen, C. L., Turner, G. R., Adnan, A., & D’Esposito, M. (2016). Reconfiguration of brain network architecture to support executive control in aging. Neurobiology of Aging, 44, 42–52. Garlick, D. (2002). Understanding the nature of the general factor of intelligence: The role of individual differences in neural plasticity as an explanatory mechanism. Psychological Review, 109(1), 116–136. Girn, M., Mills, C., & Christoff, K. (2019). Linking brain network reconfiguration and intelligence: Are we there yet? Trends in Neuroscience and Education, 15, 62–70. Gläscher, J., Rudrauf, D., Colom, R., Paul, L. K., Tranel, D., Damasio, H., & Adolphs, R. (2010). Distributed neural system for general intelligence revealed by lesion mapping.
Proceedings of the National Academy of Sciences of the United States of America, 107(10), 4705–4709. Godwin, D., Barry, R. L., & Marois, R. (2015). Breakdown of the brain’s functional network modularity with awareness. Proceedings of the National Academy of Sciences of the United States of America, 112(12), 3799–3804. Gonzalez-Castillo, J., & Bandettini, P. A. (2018). Task-based dynamic functional connectivity: Recent findings and open questions. Neuroimage, 180(Pt B), 526–533. Goodkind, M., Eickhoff, S. B., Oathes, D. J., Jiang, Y., Chang, A., Jones-Hagata, L. B., . . . Etkin, A. (2015). Identification of a common neurobiological substrate for mental illness. JAMA Psychiatry, 72(4), 305–315. Gordon, E. M., Stollstorff, M., & Vaidya, C. J. (2012). Using spatial multiple regression to identify intrinsic connectivity networks involved in working memory performance. Human Brain Mapping, 33(7), 1536–1552. Goschke, T. (2014). Dysfunctions of decision-making and cognitive control as transdiagnostic mechanisms of mental disorders: Advances, gaps, and needs
in current research. International Journal of Methods in Psychiatric Research, 23(Suppl 1), 41–57. Gratton, C., Laumann, T. O., Nielsen, A. N., Greene, D. J., Gordon, E. M., Gilmore, A. W., . . . Petersen, S. E. (2018). Functional brain networks are dominated by stable group and individual factors, not cognitive or daily variation. Neuron, 98(2), 439–452.e5. Gratton, C., Lee, T. G., Nomura, E. M., & D’Esposito, M. (2013). The effect of theta-burst TMS on cognitive control networks measured with resting state fMRI. Frontiers in Systems Neuroscience, 7, 124. Gratton, C., Nomura, E. M., Perez, F., & D’Esposito, M. (2012). Focal brain lesions to critical locations cause widespread disruption of the modular organization of the brain. Journal of Cognitive Neuroscience, 24(6), 1275–1285. Gratton, C., Sun, H., & Petersen, S. E. (2018). Control networks and hubs. Psychophysiology, 55(3), e13032. Greene, A. S., Gao, S., Scheinost, D., & Constable, R. T. (2018). Task-induced brain state manipulation improves prediction of individual traits. Nature Communications, 9(1), 2807. Gu, S., Pasqualetti, F., Cieslak, M., Telesford, Q. K., Yu, A. B., Kahn, A. E., . . . Bassett, D. S. (2015). Controllability of structural brain networks. Nature Communications, 6, 8414. Guimerà, R., Mossa, S., Turtschi, A., & Amaral, L. A. N. (2005). The worldwide air transportation network: Anomalous centrality, community structure, and cities’ global roles. Proceedings of the National Academy of Sciences of the United States of America, 102(22), 7794–7799. Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, J. C., Paek, J., . . . Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12, 199–217. Hart, M. G., Ypma, R. J. F., Romero-Garcia, R., Price, S. J., & Suckling, J. (2016). Graph theory analysis of complex brain networks: New concepts in brain mapping applied to neurosurgery. 
Journal of Neurosurgery, 124(6), 1665–1678. Hearne, L. J., Mattingley, J. B., & Cocchi, L. (2016). Functional brain networks related to individual differences in human intelligence at rest. Scientific Reports, 6, 32328. Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017a). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60, 10–25. Hilger, K., Ekman, M., Fiebach, C. J., & Basten, U. (2017b). Intelligence is associated with the modular structure of intrinsic brain networks. Scientific Reports, 7(1), 16088. Honey, C. J., Kötter, R., Breakspear, M., & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences of the United States of America, 104(24), 10240–10245. Hutchison, R. M., & Morton, J. B. (2015). Tracking the brain’s functional coupling dynamics over development. The Journal of Neuroscience, 35(17), 6849–6859.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154, discussion 154–187. Kane, M. J., & Engle, R. W. (2002). The role of prefrontal cortex in working-memory capacity, executive attention, and general fluid intelligence: An individual-differences perspective. Psychonomic Bulletin & Review, 9(4), 637–671. Kenett, Y. N., Medaglia, J. D., Beaty, R. E., Chen, Q., Betzel, R. F., Thompson-Schill, S. L., & Qiu, J. (2018). Driving the brain towards creativity and intelligence: A network control theory analysis. Neuropsychologia, 118(Pt A), 79–90. Kitzbichler, M. G., Henson, R. N. A., Smith, M. L., Nathan, P. J., & Bullmore, E. T. (2011). Cognitive effort drives workspace configuration of human brain functional networks. The Journal of Neuroscience, 31(22), 8259–8270. Kovacs, K., & Conway, A. R. A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27(3), 151–177. Kucyi, A., Tambini, A., Sadaghiani, S., Keilholz, S., & Cohen, J. R. (2018). Spontaneous cognitive processes and the behavioral validation of time-varying brain connectivity. Network Neuroscience, 2(4), 397–417. Langer, N., Pedroni, A., Gianotti, L. R. R., Hänggi, J., Knoch, D., & Jäncke, L. (2012). Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6), 1393–1406. Li, Y., Liu, Y., Li, J., Qin, W., Li, K., Yu, C., & Jiang, T. (2009). Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), e1000395. Liang, X., Zou, Q., He, Y., & Yang, Y. (2016). Topologically reorganized connectivity architecture of default-mode, executive-control, and salience networks across working memory task loads. Cerebral Cortex, 26(4), 1501–1511. Liu, H., Yu, H., Li, Y., Qin, W., Xu, L., Yu, C., & Liang, M. (2017).
An energy-efficient intrinsic functional organization of human working memory: A resting-state functional connectivity study. Behavioural Brain Research, 316, 66–73. Malpas, C. B., Genc, S., Saling, M. M., Velakoulis, D., Desmond, P. M., & O’Brien, T. J. (2016). MRI correlates of general intelligence in neurotypical adults. Journal of Clinical Neuroscience, 24, 128–134. McTeague, L. M., Goodkind, M. S., & Etkin, A. (2016). Transdiagnostic impairment of cognitive control in mental illness. Journal of Psychiatric Research, 83, 37–46. McTeague, L. M., Huemer, J., Carreon, D. M., Jiang, Y., Eickhoff, S. B., & Etkin, A. (2017). Identification of common neural circuit disruptions in cognitive control across psychiatric disorders. American Journal of Psychiatry, 174(7), 676–685. Mercado, E., III. (2008). Neural and cognitive plasticity: From maps to minds. Psychological Bulletin, 134(1), 109–137. Mesulam, M.-M. (1990). Large-scale neurocognitive networks and distributed processing for attention, language, and memory. Annals of Neurology, 28(5), 597–613. Meunier, D., Lambiotte, R., & Bullmore, E. T. (2010). Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience, 4, 200.
Mill, R. D., Ito, T., & Cole, M. W. (2017). From connectome to cognition: The search for mechanism in human functional brain networks. Neuroimage, 160, 124–139. Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167–202. Mitra, A., Snyder, A. Z., Blazey, T., & Raichle, M. E. (2015). Lag threads organize the brain’s intrinsic activity. Proceedings of the National Academy of Sciences of the United States of America, 112(17), E2235–E2244. Newman, M. E. J., & Girvan, M. (2004). Finding and evaluating community structure in networks. Physical Review E, 69(2), 026113. O’Reilly, R. C., Herd, S. A., & Pauli, W. M. (2010). Computational models of cognitive control. Current Opinion in Neurobiology, 20(2), 257–261. Opitz, A., Fox, M. D., Craddock, R. C., Colcombe, S., & Milham, M. P. (2016). An integrated framework for targeting functional networks via transcranial magnetic stimulation. Neuroimage, 127, 86–96. Power, J. D., & Petersen, S. E. (2013). Control-related systems in the human brain. Current Opinion in Neurobiology, 23(2), 223–228. Raven, J. (2000). The Raven’s progressive matrices: Change and stability over culture and time. Cognitive Psychology, 41(1), 1–48. Sadaghiani, S., Poline, J. B., Kleinschmidt, A., & D’Esposito, M. (2015). Ongoing dynamics in large-scale functional connectivity predict perception. Proceedings of the National Academy of Sciences of the United States of America, 112(27), 8463–8468. Santarnecchi, E., Emmendorfer, A., Tadayon, S., Rossi, S., Rossi, A., Pascual-Leone, A., & Honeywell SHARP Team Authors. (2017). Network connectivity correlates of variability in fluid intelligence performance. Intelligence, 65, 35–47. Schultz, D. H., & Cole, M. W. (2016). Higher intelligence is associated with less task-related brain network reconfiguration. The Journal of Neuroscience, 36(33), 8551–8561. Shanmugan, S., Wolf, D. H., Calkins, M. E., Moore, T. M., Ruparel, K., Hopson, R. D., .
. . Satterthwaite, T. D. (2016). Common and dissociable mechanisms of executive system dysfunction across psychiatric disorders in youth. American Journal of Psychiatry, 173(5), 517–526. Shine, J. M., Bissett, P. G., Bell, P. T., Koyejo, O., Balsters, J. H., Gorgolewski, K. J., . . . Poldrack, R. A. (2016). The dynamics of functional brain networks: Integrated network states during cognitive task performance. Neuron, 92(2), 544–554. Shine, J. M., & Poldrack, R. A. (2018). Principles of dynamic network reconfiguration across diverse brain states. Neuroimage, 180(Pt B), 396–405. Snyder, H. R., Miyake, A., & Hankin, B. L. (2015). Advancing understanding of executive function impairments and psychopathology: Bridging the gap between clinical and cognitive approaches. Frontiers in Psychology, 6, 328. Song, M., Zhou, Y., Li, J., Liu, Y., Tian, L., Yu, C., & Jiang, T. (2008). Brain spontaneous functional connectivity and intelligence. Neuroimage, 41(3), 1168–1176. Spadone, S., Della Penna, S., Sestieri, C., Betti, V., Tosoni, A., Perrucci, M. G., . . . Corbetta, M. (2015). Dynamic reorganization of human resting-state
networks during visuospatial attention. Proceedings of the National Academy of Sciences of the United States of America, 112(26), 8112–8117. Sporns, O. (2010). Networks of the brain. Cambridge, MA: MIT Press. Sporns, O. (2013). Network attributes for segregation and integration in the human brain. Current Opinion in Neurobiology, 23(2), 162–171. Stanley, M. L., Dagenbach, D., Lyday, R. G., Burdette, J. H., & Laurienti, P. J. (2014). Changes in global and regional modularity associated with increasing working memory load. Frontiers in Human Neuroscience, 8, 954. Sternberg, R. J., & Kaufman, S. B. (eds.) (2011). The Cambridge handbook of intelligence. New York: Cambridge University Press. Thompson, G. J., Magnuson, M. E., Merritt, M. D., Schwarb, H., Pan, W.-J., McKinley, A., . . . Keilholz, S. D. (2013). Short-time windows of correlation between large-scale functional brain networks predict vigilance intraindividually and interindividually. Human Brain Mapping, 34(12), 3280–3298. van den Heuvel, M. P., & Sporns, O. (2011). Rich-club organization of the human connectome. The Journal of Neuroscience, 31(44), 15775–15786. van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. The Journal of Neuroscience, 29(23), 7619–7624. Vatansever, D., Menon, D. K., Manktelow, A. E., Sahakian, B. J., & Stamatakis, E. A. (2015). Default mode dynamics for global functional integration. The Journal of Neuroscience, 35(46), 15254–15262. Wang, C., Ong, J. L., Patanaik, A., Zhou, J., & Chee, M. W. L. (2016). Spontaneous eyelid closures link vigilance fluctuation with fMRI dynamic connectivity states. Proceedings of the National Academy of Sciences of the United States of America, 113(34), 9653–9658. Wang, L., Song, M., Jiang, T., Zhang, Y., & Yu, C. (2011). Regional homogeneity of the resting-state brain activity correlates with individual intelligence. Neuroscience Letters, 488(3), 275–278. 
Wechsler, D. (2008). Wechsler Adult Intelligence Scale – Fourth edition (WAIS-IV). San Antonio, TX: Pearson. Wechsler, D. (2011). Wechsler Abbreviated Scale of Intelligence – Second edition (WASI-II). San Antonio, TX: Pearson. Xia, M., & He, Y. (2011). Magnetic resonance imaging and graph theoretical analysis of complex brain networks in neuropsychiatric disorders. Brain Connectivity, 1(5), 349–365. Xiao, L., Stephen, J. M., Wilson, T. W., Calhoun, V. D., & Wang, Y. (2019). Alternating diffusion map based fusion of multimodal brain connectivity networks for IQ prediction. IEEE Transactions on Biomedical Engineering, 68(8), 2140–2151. Yin, S., Wang, T., Pan, W., Liu, Y., & Chen, A. (2015). Task-switching cost and intrinsic functional connectivity in the human brain: Toward understanding individual differences in cognitive flexibility. PLoS One, 10(12), e0145826. Zippo, A. G., Della Rosa, P. A., Castiglioni, I., & Biella, G. E. M. (2018). Alternating dynamics of segregation and integration in human EEG functional networks during working-memory task. Neuroscience, 371, 191–206.
281
14 Biochemical Correlates of Intelligence

Rex E. Jung and Marwa O. Chohan

The search for physiological correlates of intelligence, prior to the 1990s, largely revolved around well-established correlates found across species, particularly nerve conduction velocity and overall brain size. Human studies arose naturally from the psychometric literature noting that individuals with higher IQ had both faster reaction times and less variability in their responses (Jensen, 1982). These reaction time studies implied that there was something about intelligence, beyond acquisition of knowledge, learning, and skill development, which (1) could be measured with a high degree of accuracy, (2) could be obtained with minimal bias, (3) had a developmental trajectory from childhood through the teen years, and (4) (presumably) had something to do with neuronal structure and/or functional capacity. However, the tools of the neuroscientist were rather few, and the world of functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and the human connectome was only a distant dream. Several MRI studies of brain size emerged in the 1990s, however, with the first showing an r = .51 in 40 college students selected for high vs. low SAT scores (Willerman, Schultz, Rutledge, & Bigler, 1991), followed by a second in 67 normal subjects showing r = .38 between total brain volume and Full Scale IQ (FSIQ) (Andreasen et al., 1993). Interestingly, the pattern of correlation held for gray matter volumes (r = .35), but not for white matter volume (r = .14), in this later sample. Similar magnitudes of positive correlations were reported subsequently by others, ranging from r = .35 to r = .69 (Gur et al., 1999; Harvey, Persaud, Ron, Baker, & Murray, 1994; Reiss, Abrams, Singer, Ross, & Denckla, 1996; Wickett, Vernon, & Lee, 1994).
In a recent meta-analysis comprising 88 studies and more than 8,000 individuals, the relationship between brain volume and IQ was found to be r = .24, generalizing across age, IQ domain (full scale, verbal, performance), and sex (Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015). These authors conclude that: "brain size, likely a proxy for neuron number, is one of many neuronal factors associated with individual differences in intelligence, alongside parieto-frontal neuronal networks, neuronal efficiency, white matter integrity, cortical gyrification, overall developmental stability, and probably others" (p. 429). It is to one of these "other factors," often lost in conversations regarding brain correlates of intellectual functioning, that we now turn our attention, moving from the macroscopic to the microscopic.
Magnetic Resonance Spectroscopy (MRS)

Spectroscopy was introduced by Sir Isaac Newton (in 1666) when he separated a "spectrum" of colors by shining white light through a glass prism, a process mediated by the electrons of the refractive material. Magnetic resonance spectroscopic techniques, however, are concerned with the degree to which electromagnetic radiation is absorbed and emitted by nuclei – the functional display of which is called a spectrum (Gadian, 1995). Thus, the chemical composition of a given sample (living or inorganic) can be determined through a non-invasive technique exploiting electromagnetic and quantum theory. Specifically, atomic nuclei with non-zero angular momentum (i.e., "spin"; e.g., hydrogen, with spin quantum number I = ½) produce a magnetic field that can be manipulated with radiofrequency pulses and subsequently recorded via an MRI machine. The resonance frequency of a nucleus in an MRS experiment is proportional to the strength of the magnetic field it experiences. This field is composed of the large "static" field of the MR spectrometer or MRI scanner (B0) and the much smaller field produced by the circulating electrons of the molecule (Be). The resultant spectrum can thus be explained by the electron density of a given chemical sample, summarized by the expression ν = γ(B0 + Be), where γ is the gyromagnetic ratio of the nucleus. These frequencies are known as "chemical shifts" when they are referenced to a "zero" point (i.e., tetramethylsilane), and are expressed in parts per million (ppm) (Gadian, 1995). The intensity of a given chemical signal is the area under the peak produced at a given frequency, and is proportional to the number of nuclei that contribute to that signal (Figure 14.1). Spectra from living systems reveal narrow linewidths for metabolites with high molecular mobility (such as N-acetylaspartate) and broad linewidths for macromolecules such as proteins, DNA, and membrane lipids.
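The ppm convention above can be made concrete with a short sketch. The formula is the standard chemical-shift definition; the 3 T reference frequency and the 258 Hz offset are illustrative values, not taken from the chapter:

```python
# Chemical shift in parts per million (ppm): the frequency offset of a peak
# relative to a reference ("zero" point, tetramethylsilane by convention),
# normalized by the reference frequency.
def chemical_shift_ppm(f_sample_hz: float, f_ref_hz: float) -> float:
    return (f_sample_hz - f_ref_hz) / f_ref_hz * 1e6

# Illustrative (assumed) numbers: at 3 T the 1H resonance is ~127.74 MHz,
# so a peak offset by ~258 Hz sits near 2.02 ppm, where NAA appears.
f_ref = 127.74e6
delta = chemical_shift_ppm(f_ref + 258.0, f_ref)  # ~2.02 ppm
```

Because the shift is a ratio, it is independent of field strength, which is why peak positions are quoted in ppm rather than Hz.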
Additionally, such factors as chemical exchange, magnetic field inhomogeneities, and paramagnetic interactions can influence linewidths and lineshapes. The overall sensitivity of MRS to subtle variations in the concentration of specific metabolites also depends on these factors, since they bear on the signal-to-noise ratio (S/N) of the MR data. Indeed, S/N can be influenced by such diverse factors as (1) the voxel size of the region of interest, (2) the time (or number) of acquisitions, (3) regional magnetic inhomogeneities, and (4) regional movement and tissue-interface artifacts (Gadian, 1995). Magnetic resonance spectroscopy has a long history within the physical sciences (Edward Purcell and Felix Bloch shared the 1952 Nobel Prize in Physics for the discovery of nuclear magnetic resonance), and was later applied to biomolecular structures such as proteins, nucleic acids, and carbohydrates (Gadian, 1995). MRS then became increasingly useful in human medicine, as it allows a non-invasive bioassay of the chemical composition of tissue by use of an MRI machine, which can excite hydrogen atoms (i.e., protons) residing in a wide range of chemical environments. One of the first studies to utilize
Figure 14.1 Representative spectrum from a human voxel obtained from parietal white matter. The tallest peak (right) is N-acetylaspartate (2.02 ppm), with the other major peaks (right to left) being Creatine (3.03 ppm) and Choline (3.2 ppm). A second Creatine peak is present at 3.93 ppm.

1H-MRS in the brain compared relaxation rates of water in normal and brain tumor samples, showing the potential for this technique both to distinguish normal from neoplastic tissue and to diagnose different types of brain tumors in vivo (Parrish, Kurland, Janese, & Bakay, 1974). Currently, proton magnetic resonance spectroscopy (1H-MRS) is a major technique used in humans to diagnose disease entities ranging from cancer (Kumar, Sharma, & Jagannathan, 2012) and Alzheimer's disease (Graff-Radford & Kantarci, 2013) to traumatic brain injury (Friedman, Brooks, Jung, Hart, & Yeo, 1998) and even rare neurological syndromes in systemic lupus erythematosus (Jung et al., 2001), to name a few. We will describe its use in studies of human intelligence.
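A toy model can illustrate the signal-to-noise factors listed above: signal grows with voxel volume, while averaging repeated acquisitions beats noise down by the square root of their number. The proportionality and the neglect of relaxation, shim, and artifact effects are simplifying assumptions, not part of the chapter:

```python
import math

# Toy S/N model for single-voxel MRS: signal scales with voxel volume,
# and averaging n acquisitions improves S/N by sqrt(n).
def relative_snr(voxel_volume_cm3: float, n_averages: int) -> float:
    return voxel_volume_cm3 * math.sqrt(n_averages)

# Halving the voxel requires four times the averages (i.e., four times the
# scan time) to recover the same S/N.
base = relative_snr(8.0, 128)      # e.g., a 2 x 2 x 2 cm voxel, 128 averages
smaller = relative_snr(4.0, 512)   # half the volume, 4x the averages
```

This trade-off is one reason MRS voxels are large relative to fMRI voxels, and why voxel placement is such a consequential design decision.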
Key Neurochemical Variables Assessed by MRS

N-acetylaspartate (NAA) is located almost entirely in neurons and is the strongest peak in the proton MR brain spectrum in adults (Moffett, Ross, Arun, Madhavarao, & Namboodiri, 2007). Although the exact mechanism by which NAA is related to neuronal functioning, and hence cognition, is unknown, it has been demonstrated that NAA contributes to lipid synthesis
for myelination during development in rats (Taylor et al., 2002). NAA is a metabolic precursor of N-acetyl-aspartyl-glutamate, a neuromodulator (Blakely & Coyle, 1988), which may protect neurons from osmotic stress (Taylor et al., 2002), and may be a marker of neuronal oxidative phosphorylation and mitochondrial health (Bates et al., 1996). The Creatine (Cre) peak represents the sum of intracellular creatine and phosphocreatine, reflecting tissue energetics. The Choline (Cho) peak reflects the sum of all MRS-visible choline moieties, and can be elevated in stroke, multiple sclerosis, and traumatic brain injury (among other diseases), due to membrane breakdown, inflammation, and/or demyelination (Brooks, Friedman, & Gasparovic, 2001). Given that MRS assays of neurometabolites are often conducted in white matter regions, it is of interest to what extent higher levels of NAA may reflect some aspects of axonal functioning. For example, increased levels of NAA may confer more rapid neural transmission through its demonstrated role as an acetyl group donor for myelination (D'Adamo & Yatsu, 1966). Indeed, two known predictors of neural transmission speed are myelin thickness and axonal diameter (Aboitiz, Scheibel, Fisher, & Zaidel, 1992). A recent study in rats found that NAA was more concentrated in myelin than in neuronal dendrites, nerve terminals, and cell bodies, favoring a role in myelin synthesis for this metabolite (Nordengen, Heuser, Rinholm, Matalon, & Gundersen, 2015). This research also supports the notion that the NAA signal represents intact myelin/oligodendrocytes, as opposed to purely viable neurons/axons (Schuff et al., 2001; Soher, van Zijl, Duyn, & Barker, 1996; Tedeschi et al., 1995). Regardless of the precise role of this neurometabolite, it appears that NAA is an important marker of cognitive disability across numerous clinical groups (Ross & Sachdev, 2004).
Early MRS Studies of Cognition

Two studies led to our early interest in applying spectroscopic measures, particularly NAA, to studies of normal human intelligence. The first used 1H-MRS in 28 patients diagnosed with mental retardation (MR; IQ range 20–79), compared to 25 age-matched healthy children, with ages ranging from 2 to 13 years. Spectra were obtained from a voxel placed within the right parietal lobe, largely consisting of white matter. While the NAA/Cho ratio was observed to increase with advancing age in both groups (p = .01), this ratio was consistently lower in the MR group (p = .0029). Importantly, while these authors used ratios (as opposed to absolute concentrations), the relationships between metabolites and MR appeared to be driven by NAA as opposed to either the choline or creatine resonances. The authors interpreted these differences to reflect "maldevelopment of the apical dendrites and abnormalities of synapse formation and spine morphology" reflective of low neuronal activity in MR (Hashimoto et al., 1995). This study was the first to establish NAA as a valid marker of extreme intellectual differences, with mechanistic
relationships likely to be present at the level of neuronal axons within large white matter populations. The second study was conducted with 42 boys, and involved obtaining phosphorus (31P) spectroscopic measures of pH from a region comprising the top of each subject's head (i.e., the bilateral frontoparietal region) (Rae et al., 1996). These were normal control boys (age range 6–13), who were administered the Wechsler Intelligence Scale for Children – III, and their scores were in the average range (Mean = 102.3 ± 19.6). These authors found a positive relationship between brain pH and FSIQ (r = .52), with a stronger relationship for crystallized intelligence (r = .60) than for fluid intelligence (r = .44). They noted that previous studies had shown relationships between IQ and averaged evoked potentials (Callaway, 1973; Ellis, 1969; Ertl & Schafer, 1969), and that increased cellular pH is associated with increased amplitude of nerve action potentials (Lehmann, 1937) and decreased conduction times (Ellis, 1969). Thus, spectroscopic measures of pH were here first shown to be associated with likely efficiency of nerve conduction capabilities, as manifested by increased performance on standardized IQ tests in a normal cohort (Rae et al., 1996). Could it be that spectroscopic measures were predictive of intelligence in normal healthy adult subjects as well? Surprisingly, no one had looked at these relationships, in spite of vigorous research regarding the association of metabolites, including NAA, with neuropsychological functioning in disease states including: abstinent alcoholics (Martin et al., 1995), HIV/AIDS (López-Villegas, Lenkinski, & Frank, 1997), adrenoleukodystrophy (Rajanayagam et al., 1997), and traumatic brain injury (Friedman et al., 1998).
Given previous differentiation between high and low IQ subjects with respect to measures of NAA (Hashimoto et al., 1995), the establishment of a strong, linear relationship between spectroscopic measures and IQ in a normal cohort (Rae et al., 1996), and several studies showing decrements of NAA to be correlated with neuropsychological functioning across a wide range of disease entities, it seemed entirely plausible to hypothesize a relationship between IQ, the most sensitive and reliable measure of human cognitive functioning, and NAA, a measure sensitive to neuronal integrity. But where to put the voxel of interest?
Our MRS Studies of Intelligence

One of us (RJ) became interested in MRS during graduate school, looking at studies of traumatic brain injury (Brooks et al., 2001) and systemic lupus erythematosus (Sibbitt Jr, Sibbitt, & Brooks, 1999). Spectroscopic voxels were placed within the occipito-parietal white matter, because we could get very high quality spectra from these locations without significant artifacts, as was a problem with voxels within the frontal lobes (due to air/tissue
interface associated with the nasal conchae, and to proximity to dental work, which at that time was associated with metallic artifacts creeping into the images from fillings, implants, and the like). Our spectra were beautiful, with sharp, thin peaks, although they had little to do with most aspects of higher cognitive functioning associated with the massive frontal lobes of the human brain. Indeed, nascent neuroimaging research at the time regarding neural correlates of higher cognitive functioning (e.g., abstract reasoning, working memory, language, attention) clearly pointed to frontal lobe involvement (Cabeza & Nyberg, 2000; Posner & Raichle, 1998). RJ's dissertation project became the first MRS study of the biochemical correlates of intelligence in a normal adult cohort (Jung et al., 1999). We studied 27 participants (17 females, 10 males), placed spectroscopic voxels within bilateral frontal lobe regions and one "control" voxel within the left occipito-parietal lobe, and hypothesized that frontal NAA would be significantly associated with IQ, while the control region would be unassociated (or only weakly associated) with IQ. We administered the Wechsler Adult Intelligence Scale – 3 (Mean = 111 ± 11.4; range = 91–135) to all participants, who were screened to exclude any neurological or psychiatric disease or disorder. Absolute quantification of NAA, Cre, and Cho was obtained, as a separate water scan was acquired independently as a reference, allowing for "absolute" quantification of these metabolites at a millimolar level. We found a significant, moderate correlation between NAA (and a somewhat lower one for Cho) and IQ across the sample (r = .52). The only problem was that this was found in our "control" voxel within the occipito-parietal region; indeed, there was no association between frontal NAA, Cho, or Cre and IQ whatsoever! We reported the significant results in the Proceedings of the Royal Society of London.
With regard to the voxel location, we noted that: The main association pathways sampled by our experimental paradigm included axonal fibers from the posterior aspects of the superior and inferior longitudinal, occipitofrontal and arcuate fasciculi, as well as the splenium of the corpus callosum. As this voxel location sampled numerous association pathways connecting many brain regions, metabolic concentrations in this voxel may widely influence cognitive processing. (p. 1378)
Several studies of NAA have emerged in normal subjects, showing rather consistent, low, positive correlations between this metabolite and various measures of cognitive functioning, most particularly intelligence (Table 14.1).
Limitations of MRS/Intelligence Studies so Far

Several comments can be made with regard to the variability of these R2 values across studies. First, there is a general trend towards smaller studies having higher NAA–IQ relationships than larger-N studies (Figure 14.2), a characteristic well established within the brain–behavior literature (Button et al.,
Table 14.1 Studies of NAA.

Author                                            | N   | IQ r2      | Location        | Method        | Age          | Gender
(Jung et al., 1999)                               | 26  | 0.27 [1]   | L Parietal      | STEAM         | 22 (4.6)     | 17F/10M
(Ferguson et al., 2002)                           | 88  | 0.04 [2]   | L Parietal      | PRESS NAA/Cr  | 65–70        | —
(Pfleiderer et al., 2004)                         | 62  | 0.31 [1]   | L DLPFC, L ACC  | STEAM         | 38.5 (15.4)  | 22F/40M
(Giménez et al., 2004)                            | 21  | 0.0003 [1] | L Temporal      | PRESS NAA/Cho | 14.05 (2.46) | 11F/10M
(Jung et al., 2005)                               | 27  | 0.26 [1]   | L Parietal      | PRESS         | 24.8 (5.9)   | 10F/17M
(Charlton, McIntyre, Howe, Morris, & Markus, 2007)| 106 | 0.03 [3]   | B Centrum S.    | PRESS CSI     | 50–89        | 51F/55M
(Jung et al., 2009)                               | 63  | 0.12 [1]   | R Posterior     | PRESS CSI     | 23.7 (4.2)   | 29F/34M
(Aydin, Uysal, Yakut, Emiroglu, & Yilmaz, 2012)   | 30  | 0.32 [1]   | CC Posterior    | STEAM         | 15.1 (.75)   | 30M
(Patel & Talcott, 2014)                           | 40  | 0.02 [1]   | L Frontal       | STEAM NAA/Cr  | 21.1 (3.5)   | 29F/11M
(Paul et al., 2016)                               | 211 | 0.02 [4]   | Post. Cingulate | PRESS CSI     | 24.6 (18–44) | 90F/121M
(Nikolaidis et al., 2017)                         | 71  | 0.11 [5]   | Multiple L F/P  | CSI           | 21.15 (2.56) | 47F/24M
Average (12)                                      |     | .14        |                 |               |              |

[1] Wechsler Intelligence Scales; [2] Raven's Progressive Matrices Test; [3] Wechsler Abbreviated Scale of Intelligence (Matrix Reasoning, Block Design); [4] G Fluid: BOMAT, Number Series, Letter Sets; [5] G Fluid: Matrix Reasoning, Shipley Abstraction, Letter Sets, Spatial Relations Task, Paper Folding Task, Form Boards Task.
2013), and associated with a move toward N > 100 in the neurosciences (Dubois & Adolphs, 2016). Second, there has been an increase in the sophistication of spectral measurement, from a single voxel (Jung et al., 1999), to multiple voxels within frontal and posterior brain regions (Jung, Gasparovic, Chavez, Caprihan, et al., 2009), to very elegant Chemical Shift Imaging (CSI) studies assessing multiple brain regions, with both metabolic and volume comparisons by region being made against multiple behavioral measures, including measures of reasoning (Nikolaidis et al., 2017). Measures of absolute quantitation clearly show stronger relationships to measures of IQ and reasoning (average R2 = .16) than do measures of NAA as a ratio to other metabolites (e.g., Cre or Cho), with an average R2 = .02. This is likely due both to the increased variance associated with denominator metabolites and to the lack of tissue correction (gray, white, CSF) associated with quantification of NAA in millimolar units against an underlying water scan.

Figure 14.2 Linear relationship between size of study (Y axis) and magnitude of the NAA–intelligence/reasoning/general cognitive functioning relationship (X axis), with the overall relationship being inverse (R2 = .20).

Third, a few studies have "de-standardized" the IQ measure in ways that make interpretation of the findings difficult and sometimes impossible. The most baffling of these was from Charlton et al. (2007), who converted two non-verbal subtests from the WASI (Block Design and Matrix Reasoning) to an "optimal range: 25–29," then controlled their regression analysis for numerous factors, including estimated intelligence from the National Adult Reading Test, thus ensuring that NAA–IQ relationships would be nil. We are not aware of any other neuroimaging study of brain–IQ relationships that uses premorbid intelligence as a covariate. The results of such an analysis are self-evident: if you control for IQ (i.e., premorbid IQ, which correlates r = .72–.81 with current IQ in healthy adults; Lezak, Howieson, & Loring, 2004), the relationship between any variable and IQ will approach zero. Finally, we have picked the highest R2 values for each study, although it should be noted that many of the studies show other regions or region-by-sex relationships that are much lower, inverse, or negligible. The focus of this chapter is to determine whether some consensus can be found, and whether any recommendations can be made towards better spectroscopic–IQ studies in the future.
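The inverse size/effect trend shown in Figure 14.2 can be checked directly from the values in Table 14.1. The sketch below transcribes the sample sizes and best-reported r2 values and hand-rolls a Pearson correlation; squaring it recovers an R2 of roughly .20, consistent with the figure:

```python
# Sample sizes (N) and highest reported NAA-IQ r^2 values from Table 14.1.
n_vals = [26, 88, 62, 21, 27, 106, 63, 30, 40, 211, 71]
r2_vals = [0.27, 0.04, 0.31, 0.0003, 0.26, 0.03, 0.12, 0.32, 0.02, 0.02, 0.11]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson(n_vals, r2_vals)  # ~ -0.45: larger studies report smaller effects
```

Note that with only eleven studies this is descriptive, not inferential; it simply documents the small-study bias discussed in the text.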
Recommendations for Future MRS/Intelligence Studies

One major recommendation when studying the construct of intelligence, reasoning, or general cognitive functioning is to choose a measure with the highest reliability and validity. The construct of intelligence has been studied for over 100 years, and a dedicated journal (Intelligence) has chronicled research on this construct for some 40-plus years. There are many ways to measure intelligence, including the Shipley, Raven's, BOMAT, and Wechsler Scales; however, none possess higher reliability and validity than the Wechsler Scales with respect to measuring this important human construct (Anon, 2002). Split-half reliability ranges from .97 (16–17 years old) to .98 (80–84 years old), with an average reliability of .98 for the Full Scale Intelligence Quotient (FSIQ). Test–retest reliability (across 2–12 week intervals) ranges from .95 (16–29 years old) to .96 (75–89 years old). Inter-rater reliabilities across verbal subtests requiring some subjective scoring range from .91 (Comprehension) to .95 (Vocabulary). Both convergent validity (e.g., Standard Progressive Matrices, Stanford–Binet) and discriminant validity (e.g., attention, memory, language) have been demonstrated for the WAIS FSIQ. The Raven's Matrices test is a good substitute for a general measure of intelligence, with a moderate correlation with the WAIS FSIQ (r = .64 for Standard Progressive Matrices), although for normal samples both the Standard and Advanced Matrices must be used to provide adequate variance of the measure (i.e., high ceiling, low floor). The BOMAT provides a high ceiling, but should never be mixed with other measures possessing unknown reliability/validity. Mixing together various measures of "reasoning" into an average score comprising "General Fluid Ability" creates a new measure of unknown reliability and validity, with unknown relationships to well-established measures of intelligence.
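The split-half figures quoted above rest on the Spearman–Brown correction, which projects the correlation between two test halves onto the full-length test; the worked value below is an illustration of the formula, not a figure from the Wechsler manual:

```python
# Spearman-Brown prophecy formula: reliability of the full-length test
# predicted from the correlation r_half between its two halves.
def spearman_brown(r_half: float) -> float:
    return 2 * r_half / (1 + r_half)

# Illustrative: halves correlating .96 imply full-test reliability of ~.98,
# which is why split-half reliabilities can exceed the half-test correlation.
full_length = spearman_brown(0.96)  # 1.92 / 1.96, about 0.98
```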
So, this would be general advice for all readers of this handbook: use well-validated measures of intelligence with high reliability in order to reduce the likelihood of behavioral measurement error. A second major recommendation to spectroscopic researchers (and to neuroimaging researchers more broadly) is to interrogate regions of the brain that have been demonstrated to have high relationships with intelligence in prior research. What might these brain regions be? One of the most firmly established and well-replicated findings in all of intelligence research is the modest correlation between overall brain size and IQ of around r = .24 (Pietschnig et al., 2015). Thus, any large or gross measure of brain structure or function is likely to produce measures comparable to the magnitude found in overall cortical volume (e.g., R2 = .06). Neurosurgeons have long known (and demonstrated through their surgical prowess) that the left hemisphere is more “eloquent” than the non-dominant right when it comes to higher cognitive functioning, although this dogma has recently been challenged (Vilasboas, Herbet, & Duffau, 2017). Certain subcortical structures have been demonstrated to be critical to higher cognitive functioning, and the volume of one in
particular – the caudate nucleus – has been linked to IQ (Grazioplene et al., 2015). Neither the mesial temporal lobe (including hippocampus), nor the occipital lobe (other than BA 19), nor the sensory/motor cortex has ever consistently been demonstrated to be related to intelligence (Basten, Hilger, & Fiebach, 2015). This leaves the frontal and parietal lobes, the focus of a major theory of intelligence relating parietal and frontal lobe integrity to the structure and function of intelligence in the human brain (Jung & Haier, 2007). The vast majority of research in both structural and functional domains has supported the importance of parietal and frontal regions, particularly regions overlapping the Task Positive Network (Glasser et al., 2016), as well as white matter tracts including the corpus callosum and other central tracts both connecting the two hemispheres (Kocevar et al., 2019; Navas-Sánchez et al., 2014; Nusbaum et al., 2017) and connecting the frontal to more posterior lobes, such as the inferior frontal occipital fasciculus (IFOF) (Haász et al., 2013). Thus, there are areas of theoretical and empirical interest in which voxel placement is more or less likely to yield results. We have noted that MRS techniques are more amenable to studying white matter volumes, given that voxels must be placed away from the skull (to avoid lipid contamination from the skull and scalp) and from air/tissue interfaces. These voxels are also generally placed to avoid overlap with the ventricles, to avoid contributions from samples containing minimal to no spectra. This leaves deep white matter and subcortical gray matter structures to interrogate with either single-voxel MRS or CSI. CSI, almost invariably, is placed just dorsal to the lateral ventricles, leaving little room for choice, but maximal brain coverage of both gray and white volumes without contamination of either scalp/skull or ventricles.
Placement of single voxels should be carried out with some knowledge of the underlying brain anatomy being interrogated, with white matter tracts (e.g., the Inferior Frontal Occipital Fasciculus – IFOF, and the Arcuate Fasciculus – AF) of particular interest given both theoretical and meta-analytic research regarding white matter contributions to intelligence (Basten et al., 2015; Jung & Haier, 2007). Given the various techniques and findings reviewed here, final recommendations regarding spectroscopic inquiries of intelligence can be summarized as follows:

1. Use a reliable and valid measure of intelligence – Wechsler is best (the WASI subtests of Vocabulary and Matrix Reasoning yield FSIQ and can be administered in 20 minutes). Use of ad hoc and/or home-grown measures of fluid intelligence, reasoning, or the like does not allow for comparison across studies. Moreover, the reliability, validity, age range of the normative sample (6–89), and updates due to so-called Flynn effects (Flynn, 1987) of such measures are highly likely to be inferior to the gold standard established with the Wechsler Scales.

2. Spectroscopic voxels need to be placed so as to yield high quality spectra; however, consideration of underlying anatomy is vital to interpretation of findings. Nikolaidis et al. (2017) provide the most elegant example for future researchers to follow, with combined CSI imaging overlaid on particular cortical regions, which are then combined statistically using Principal Component Analysis.

3. Posterior brain regions are particularly fruitful with regard to NAA–IQ relationships, with parietal-frontal white and gray matter voxels being most likely to produce moderate positive associations across studies.

4. Absolute quantification of metabolic concentration is critical. Convolving metabolites through use of ratios (e.g., NAA/Cre), and ignoring and/or minimizing effects of tissue concentration and/or water content within voxels, does not move the science forward.

5. Sample sizes should be N > 100, with roughly equal sampling of males and females, to have sufficient power to detect reliable NAA–IQ relationships, as well as to determine whether any significant sex differences exist within the sample.
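The absolute-quantification recommendation can be sketched in simplified form: the metabolite peak area is scaled to an unsuppressed water signal from the same voxel to yield a millimolar estimate. The tissue water fraction, proton counts, and the omission of relaxation and partial-volume (gray/white/CSF) corrections below are simplifying assumptions, not the full procedure:

```python
# Simplified water-referenced quantification for MRS (toy model).
PURE_WATER_MM = 55510.0  # concentration of pure water, mmol/L

def metabolite_mm(s_met: float, s_water: float, water_fraction: float = 0.7,
                  protons_met: int = 3, protons_water: int = 2) -> float:
    """Millimolar metabolite estimate from metabolite and water peak areas.

    Signal per mole scales with the number of contributing protons, so the
    area ratio is corrected by protons_water / protons_met before scaling
    to the (assumed) tissue water concentration.
    """
    return (s_met / s_water) * (protons_water / protons_met) \
        * water_fraction * PURE_WATER_MM

# Example: an NAA acetyl (CH3, 3 protons) peak with 1/2000 the water peak
# area in ~70% water tissue comes out near physiological NAA levels (~13 mM).
naa_mm = metabolite_mm(1.0, 2000.0)
```

The contrast with metabolite ratios is clear from the formula: a ratio such as NAA/Cre folds the variance of two peaks into one number, whereas the water reference ties the estimate to a stable, abundant internal standard.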
References

Aboitiz, F., Scheibel, A. B., Fisher, R. S., & Zaidel, E. (1992). Fiber composition of the human corpus callosum. Brain Research, 598(1–2), 143–153. Andreasen, N. C., Flaum, M., Swayze, V. D., O'Leary, D. S., Alliger, R., Cohen, G., . . . Yuh, W. T. (1993). Intelligence and brain structure in normal individuals. American Journal of Psychiatry, 150(1), 130–134. Anon. (2002). WAIS-III WMS-III technical manual. New York: The Psychological Corporation. Aydin, K., Uysal, S., Yakut, A., Emiroglu, B., & Yilmaz, F. (2012). N-Acetylaspartate concentration in corpus callosum is positively correlated with intelligence in adolescents. NeuroImage, 59(2), 1058–1064. Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51(1), 10–27. Bates, T. E., Strangward, M., Keelan, J., Davey, G. P., Munro, P. M. G., & Clark, J. B. (1996). Inhibition of N-acetylaspartate production: Implications for 1H MRS studies in vivo. Neuroreport, 7(8), 1397–1400. Blakely, R. D., & Coyle, J. T. (1988). The neurobiology of N-acetylaspartylglutamate. International Review of Neurobiology, 30, 39–100. Brooks, W. M., Friedman, S. D., & Gasparovic, C. (2001). Magnetic resonance spectroscopy in traumatic brain injury. Journal of Head Trauma Rehabilitation, 16(2), 149–164. Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376. Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1–47.
Callaway, E. (1973). Correlations between averaged evoked potentials and measures of intelligence: An overview. Archives of General Psychiatry, 29(4), 553–558. Charlton, R. A., McIntyre, D. J. O., Howe, F. A., Morris, R. G., & Markus, H. S. (2007). The relationship between white matter brain metabolites and cognition in normal aging: The GENIE study. Brain Research, 1164, 108–116. D'Adamo, A. F., & Yatsu, F. M. (1966). Acetate metabolism in the nervous system. N-acetyl-l-aspartic acid and the biosynthesis of brain lipids. Journal of Neurochemistry, 13(10), 961–965. Dubois, J., & Adolphs, R. (2016). Building a science of individual differences from fMRI. Trends in Cognitive Sciences, 20(6), 425–443. Ellis, F. R. (1969). Some effects of PCO2 and pH on nerve tissue. British Journal of Pharmacology, 35(1), 197–201. Ertl, J. P., & Schafer, E. W. P. (1969). Brain response correlates of psychometric intelligence. Nature, 223, 421–422. Ferguson, K. J., MacLullich, A. M. J., Marshall, I., Deary, I. J., Starr, J. M., Seckl, J. R., & Wardlaw, J. M. (2002). Magnetic resonance spectroscopy and cognitive function in healthy elderly men. Brain, 125(Pt. 12), 2743–2749. Flynn, J. R. (1987). Massive IQ gains in 14 nations: What IQ tests really measure. Psychological Bulletin, 101(2), 171–191. Friedman, S. D., Brooks, W. M., Jung, R. E., Hart, B. L., & Yeo, R. A. (1998). Proton MR spectroscopic findings correspond to neuropsychological function in traumatic brain injury. American Journal of Neuroradiology, 19(10), 1879–1885. Gadian, D. G. (1995). NMR and its applications to living systems. Oxford: Oxford University Press. Giménez, M., Junqué, C., Narberhaus, A., Caldú, X., Segarra, D., Vendrell, P., . . . Mercader, J. M. (2004). Medial temporal MR spectroscopy is related to memory performance in normal adolescent subjects. Neuroreport, 15(4), 703–707. Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., . . . Van Essen, D. C. (2016).
A multi-modal parcellation of human cerebral cortex. Nature, 536, 171–178. Graff-Radford, J., & Kantarci, K. (2013). Magnetic resonance spectroscopy in Alzheimer’s disease. Neuropsychiatric Disease and Treatment, 9, 687–696. Grazioplene, R. G., Rachael, G., Ryman, S. G., Gray, J. R., Rustichini, A., Jung, R. E., & DeYoung, C. G. (2015). Subcortical intelligence: Caudate volume predicts IQ in healthy adults. Human Brain Mapping, 36(4), 1407–1416. Gur, R. C., Turetsky, B. I., Matsui, M., Yan, M., Bilker, W., Hughett, P., & Gur, R. E. (1999). Sex differences in brain gray and white matter in healthy young adults: Correlations with cognitive performance. Journal of Neuroscience, 19(10), 4065–4072. Haász, J., Westlye, E. T., Fjær, S., Espeseth, T., Lundervold, A., & Lundervold, A. J. (2013). General fluid-type intelligence is related to indices of white matter structure in middle-aged and old adults. NeuroImage, 83, 372–383.
r. e. jung and m. o. chohan
Harvey, I., Persaud, R., Ron, M. A., Baker, G., & Murray, R. M. (1994). Volumetric MRI measurements in bipolars compared with schizophrenics and healthy controls. Psychological Medicine, 24(3), 689–699.
Hashimoto, T., Tayama, M., Miyazaki, M., Yoneda, Y., Yoshimoto, T., Harada, M., . . . Kuroda, Y. (1995). Reduced N-acetylaspartate in the brain observed on in vivo proton magnetic resonance spectroscopy in patients with mental retardation. Pediatric Neurology, 13(3), 205–208.
Jensen, A. R. (1982). Reaction time and psychometric g. In H. J. Eysenck (ed.), A model for intelligence (pp. 93–132). Berlin: Springer-Verlag.
Jung, R. E., Brooks, W. M., Yeo, R. A., Chiulli, S. J., Weers, D. C., & Sibbitt Jr., W. L. (1999). Biochemical markers of intelligence: A proton MR spectroscopy study of normal human brain. Proceedings of the Royal Society B: Biological Sciences, 266(1426), 1375–1379.
Jung, R. E., Gasparovic, C., Robert, R. S., Chavez, S., Caprihan, A., Barrow, R., & Yeo, R. A. (2009). Imaging intelligence with proton magnetic resonance spectroscopy. Intelligence, 37(2), 192–198.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154.
Jung, R. E., Haier, R. J., Yeo, R. A., Rowland, L. M., Petropoulos, H., Levine, A. S., . . . Brooks, W. M. (2005). Sex differences in N-acetylaspartate correlates of general intelligence: An ¹H-MRS study of normal human brain. NeuroImage, 26(3), 965–972.
Jung, R. E., Yeo, R. A., Sibbitt Jr., W. L., Ford, C. C., Hart, B. L., & Brooks, W. M. (2001). Gerstmann syndrome in systemic lupus erythematosus: Neuropsychological, neuroimaging and spectroscopic findings. Neurocase, 7(6), 515–521.
Kocevar, G., Suprano, I., Stamile, C., Hannoun, S., Fourneret, P., Revol, O., . . . Sappey-Marinier, D. (2019). Brain structural connectivity correlates with fluid intelligence in children: A DTI graph analysis. Intelligence, 72, 67–75.
Kumar, V., Sharma, U., & Jagannathan, N. R. (2012). In vivo magnetic resonance spectroscopy of cancer. Biomedical Spectroscopy and Imaging, 1(1), 89–100.
Lehmann, J. E. (1937). The effect of changes in pH on the action of mammalian A nerve fibres. American Journal of Physiology, 118(3), 600–612.
Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment, 4th ed. New York: Oxford University Press.
López-Villegas, D., Lenkinski, R. E., & Frank, I. (1997). Biochemical changes in the frontal lobe of HIV-infected individuals detected by magnetic resonance spectroscopy. Proceedings of the National Academy of Sciences of the United States of America, 94(18), 9854–9859.
Martin, P. R., Gibbs, S. J., Nimmerrichter, A. A., Riddle, W. R., Welch, L. W., & Willcott, M. R. (1995). Brain proton magnetic resonance spectroscopy studies in recently abstinent alcoholics. Alcoholism: Clinical and Experimental Research, 19(4), 1078–1082.
Moffett, J. R., Ross, B. D., Arun, P., Madhavarao, C. N., & Namboodiri, A. (2007). N-Acetylaspartate in the CNS: From neurodiagnostics to neurobiology. Progress in Neurobiology, 81(2), 89–131.
Navas-Sánchez, F. J., Alemán-Gómez, Y., Sánchez-Gonzalez, J., Guzmán-DeVilloria, J. A., Franco, C., Robles, O., . . . Desco, M. (2014). White matter microstructure correlates of mathematical giftedness and intelligence quotient. Human Brain Mapping, 35(6), 2619–2631.
Nikolaidis, A., Baniqued, P. L., Kranz, M. B., Scavuzzo, C. J., Barbey, A. K., Kramer, A. F., & Larsen, R. J. (2017). Multivariate associations of fluid intelligence and NAA. Cerebral Cortex, 27(4), 2607–2616.
Nordengen, K., Heuser, C., Rinholm, J. E., Matalon, R., & Gundersen, V. (2015). Localisation of N-acetylaspartate in oligodendrocytes/myelin. Brain Structure and Function, 220(2), 899–917.
Nusbaum, F., Hannoun, S., Kocevar, G., Stamile, C., Fourneret, P., Revol, O., & Sappey-Marinier, D. (2017). Hemispheric differences in white matter microstructure between two profiles of children with high intelligence quotient vs. controls: A tract-based spatial statistics study. Frontiers in Neuroscience, 11, 173. doi: 10.3389/fnins.2017.00173
Parrish, R. G., Kurland, R. J., Janese, W. W., & Bakay, L. (1974). Proton relaxation rates of water in brain and brain tumors. Science, 183(4123), 438–439.
Patel, T., & Talcott, J. B. (2014). Moderate relationships between NAA and cognitive ability in healthy adults: Implications for cognitive spectroscopy. Frontiers in Human Neuroscience, 8, 39. doi: 10.3389/fnhum.2014.00039
Paul, E. J., Larsen, R. J., Nikolaidis, A., Ward, N., Hillman, C. H., Cohen, N. J., . . . Barbey, A. K. (2016). Dissociable brain biomarkers of fluid intelligence. NeuroImage, 137, 201–211.
Pfleiderer, B., Ohrmann, P., Suslow, T., Wolgast, M., Gerlach, A. L., Heindel, W., & Michael, N. (2004). N-Acetylaspartate levels of left frontal cortex are associated with verbal intelligence in women but not in men: A proton magnetic resonance spectroscopy study. Neuroscience, 123(4), 1053–1058.
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience and Biobehavioral Reviews, 57, 411–432.
Posner, M. I., & Raichle, M. E. (1998). The neuroimaging of human brain function. Proceedings of the National Academy of Sciences of the United States of America, 95(3), 763–764.
Rae, C., Scott, R. B., Thompson, C. H., Kemp, G. J., Dumughn, I., Styles, P., . . . Radda, G. K. (1996). Is pH a biochemical marker of IQ? Proceedings of the Royal Society B: Biological Sciences, 263(1373), 1061–1064.
Rajanayagam, V., Balthazor, M., Shapiro, E. G., Krivit, W., Lockman, L., & Stillman, A. E. (1997). Proton MR spectroscopy and neuropsychological testing in adrenoleukodystrophy. American Journal of Neuroradiology, 18(10), 1909–1914.
Reiss, A. L., Abrams, M. T., Singer, H. S., Ross, J. L., & Denckla, M. B. (1996). Brain development, gender and IQ in children: A volumetric imaging study. Brain, 119(Pt. 5), 1763–1774. doi: 10.1093/brain/119.5.1763
Ross, A. J., & Sachdev, P. S. (2004). Magnetic resonance spectroscopy in cognitive research. Brain Research Reviews, 44(2–3), 83–102.
Schuff, N., Ezekiel, F., Gamst, A. C., Amend, D. L., Capizzano, A. A., Maudsley, A. A., & Weiner, M. W. (2001). Region and tissue differences of metabolites in normally aged brain using multislice ¹H magnetic resonance spectroscopic imaging. Magnetic Resonance in Medicine, 45(5), 899–907.
Sibbitt Jr., W. L., Sibbitt, R. R., & Brooks, W. M. (1999). Neuroimaging in neuropsychiatric systemic lupus erythematosus. Arthritis & Rheumatism, 42(10), 2026–2038.
Soher, B. J., van Zijl, P. C., Duyn, J. H., & Barker, P. B. (1996). Quantitative proton MR spectroscopic imaging of the human brain. Magnetic Resonance in Medicine, 35(3), 356–363.
Taylor, D. L., Davies, S. E. C., Obrenovitch, T. P., Doheny, M. H., Patsalos, P. N., Clark, J. B., & Symon, L. (2002). Investigation into the role of N-acetylaspartate in cerebral osmoregulation. Journal of Neurochemistry, 65(1), 275–281.
Tedeschi, G., Bertolino, A., Righini, A., Campbell, G., Raman, R., Duyn, J. H., . . . Di Chiro, G. (1995). Brain regional distribution pattern of metabolite signal intensities in young adults by proton magnetic resonance spectroscopic imaging. Neurology, 45(7), 1384–1391.
Vilasboas, T., Herbet, G., & Duffau, H. (2017). Challenging the myth of right nondominant hemisphere: Lessons from corticosubcortical stimulation mapping in awake surgery and surgical implications. World Neurosurgery, 103, 449–456.
Wickett, J. C., Vernon, P. A., & Lee, D. H. (1994). In vivo brain size, head perimeter, and intelligence in a sample of healthy adult females. Personality and Individual Differences, 16(6), 831–838.
Willerman, L., Schultz, R., Rutledge, J. N., & Bigler, E. D. (1991). In vivo brain size and intelligence. Intelligence, 15(2), 223–228.
15 Good Sense and Good Chemistry: Neurochemical Correlates of Cognitive Performance Assessed In Vivo through Magnetic Resonance Spectroscopy

Naftali Raz and Jeffrey A. Stanley

Intelligence is an extensively researched and psychometrically robust construct, but its biological validity remains insufficiently elucidated. Extant theorizing about the neural mechanisms of intelligence links better reasoning abilities to the efficiency of information processing by the brain as a system (Neubauer & Fink, 2009), to the structural and functional integrity of the network connecting critically important brain hubs (the Parieto-Frontal Integration Theory, or P-FIT; Jung & Haier, 2007), and to the properties of specific brain regions, such as the prefrontal cortices (Duncan, Emslie, Williams, Johnson, & Freer, 1996). Gathering data to test these theories is a complicated enterprise that involves interrogating the brain from multiple perspectives. Despite recent promising work on multimodal imaging (Sui, Huster, Yu, Segall, & Calhoun, 2014), it is still unrealistic to assess all relevant aspects of the brain at once, and investigators are compelled to evaluate specific salient features of the brain's structure and function. In this chapter, we review the application of magnetic resonance spectroscopy (MRS) to investigating the neurochemistry of energy metabolism and neurotransmission underlying cognitive operations, including the complex reasoning abilities that we take to be components or expressions of intelligence. Specifically, we focus on two important sets of characteristics that underpin information processing and transfer within the brain as a system: brain energy metabolism and neurotransmission, the main consumer of the brain's energetic resources. We restrict our discussion to specific methods of assessing the brain's neurochemical and metabolic properties in vivo: ¹H and ³¹P MRS (Stanley, 2002; Stanley, Pettegrew, & Keshavan, 2000; Stanley & Raz, 2018).
After highlighting the key aspects of cerebral energy metabolism and neurotransmission, we describe the physical foundations of MRS, its capabilities in estimating the brain's metabolites, the spatial and temporal resolution constraints on MRS-generated estimates, the distinct advantages and disadvantages of ¹H and ³¹P MRS, and the cognitive correlates of MRS-derived indices described in the extant literature. Finally, we present a road map to maximizing the advantages and overcoming the limitations of MRS for future studies of the energetic and neurotransmission mechanisms that may underlie the implementation of simple and complex cognitive abilities.
Magnetic Resonance Spectroscopy: A Brief Introduction

The Fundamentals: Physical and Chemical Foundations of MRS

Although MRS and magnetic resonance imaging (MRI) are both based on the same phenomenon of nuclear magnetic resonance (NMR), the two differ in important ways. MRI focuses on capturing the signal of a single chemical species, water, by targeting the two ¹H nuclei of its molecule. Therefore, the data collected in all MRI studies, regardless of the specific technique (structural, diffusion, or susceptibility-weighted), are based on the strong signal from brain water, whose concentration ranges between 70% and 95% of the 55.5 mol concentration of pure water. In contrast, MRS focuses on acquiring the signal of multiple chemical species simultaneously. It can target, for example, any molecule containing ¹H nuclei, such as glutamate or γ-aminobutyric acid (GABA), or any phosphorus (³¹P)-containing molecule, such as phosphocreatine (PCr) or adenosine triphosphate (ATP). The chemical shift interaction is the primary mechanism that enables MRS to discern multiple chemical species. It is based on the principle that the "resonant" conditions, or precessional frequencies, of the targeted nuclei are directly proportional to the static magnetic field strength, B0, via a nucleus-specific gyromagnetic ratio constant. In biological systems, molecules are typically composed of CH, CH2, CH3, NH3, and PO3 groups, among others, which are referred to as "spin groups." The magnetic field at each of the spin groups of a molecule differs slightly from the others owing to the physical interaction of local magnetic fields. Because the local magnetic field differs, so does the resonant frequency, or "chemical shift," of each spin group. The NMR signal from each group of the targeted nuclei is acquired in the time domain and converted through the Fourier transform to a frequency-domain representation.
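The proportionality between precessional frequency and B0 can be made concrete with a small numerical sketch. This is not from the chapter; it simply applies the standard gyromagnetic ratios for ¹H and ³¹P (42.577 and 17.235 MHz per tesla) to show how a chemical shift quoted in ppm translates into a frequency offset in Hz at a given field strength:

```python
# Worked example: Larmor frequencies and the ppm-to-Hz conversion that
# underlies the chemical shift axis of an MRS spectrum.
GAMMA_OVER_2PI = {"1H": 42.577, "31P": 17.235}  # gyromagnetic ratio / 2*pi, MHz per tesla

def larmor_mhz(nucleus, b0_tesla):
    """Precessional (Larmor) frequency in MHz: f0 = (gamma / 2*pi) * B0."""
    return GAMMA_OVER_2PI[nucleus] * b0_tesla

def ppm_to_hz(shift_ppm, nucleus, b0_tesla):
    """A chemical shift of `shift_ppm` corresponds to shift_ppm * 1e-6 * f0 in Hz."""
    return shift_ppm * larmor_mhz(nucleus, b0_tesla)  # MHz * ppm = Hz

f_h = larmor_mhz("1H", 3.0)    # ~127.7 MHz for protons at 3 Tesla
f_p = larmor_mhz("31P", 3.0)   # ~51.7 MHz for phosphorus at 3 Tesla
offset_3t = ppm_to_hz(2.01, "1H", 3.0)   # the 2.01 ppm NAA peak: ~257 Hz offset at 3 T
offset_7t = ppm_to_hz(2.01, "1H", 7.0)   # the same peak: ~599 Hz offset at 7 T
```

Because a fixed ppm separation corresponds to more Hz at higher field, peaks spread apart as B0 grows, which is the chemical shift dispersion advantage discussed later in the chapter.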
The latter allows chemical shift frequencies to be displayed as an MRS spectrum, which consists of a series of uniquely positioned spectral peaks (or chemical shifts) expressed in parts per million (ppm), as illustrated in Figure 15.1.

Figure 15.1 Examples of a quantified ¹H MRS spectrum and a quantified ³¹P MRS spectrum. (A) An example of a quantified ¹H MRS spectrum derived with the LCModel approach (Provencher, 1993). The MRS data were acquired from the dorsal anterior cingulate cortex at 3 Tesla. The modeled spectrum (red line) is superimposed on the acquired spectrum (black line). The residual and the modeled spectrum of glutamate are shown below. On the right is a single-voxel location marked by a box superimposed on the MRI images.

In addition to the chemical shift interactions, which depend on the B0 field strength, there is a through-bond interaction between adjacent spin groups within the same molecule, referred to as the scalar J-coupling interaction. That interaction may split the chemical shift from a singlet into multiple subpeaks or multiplets – e.g., doublets, triplets, or quartets. The peak separation of multiplets, which is field independent, hinges on the J-coupling strength and is expressed in Hertz. Thus, the combination of unique chemical shift and J-coupling interactions of spin groups within and between different neurochemicals enables identification of the neurochemical composition in a given brain location, in vivo (Govindaraju, Young, & Maudsley, 2000). Greater detail on the basic principles and applications relevant to MRS is available elsewhere (Fukushima & Roeder, 1981; McRobbie, Moore, Graves, & Prince, 2006). Lastly, the signal intensity of a spectral peak (or, more precisely, the area under the peak, which equals the signal amplitude at time zero in the time domain) is proportional to the concentration of the molecule associated with that chemical shift. This relationship implies that the signal amplitude of each peak is directly related to the number of ¹H or ³¹P nuclei of the spin group associated with the targeted chemical compound within a sampled voxel. Therefore, MRS possesses the unique ability to identify the neurochemical composition as well as to quantify the absolute in vivo concentration of multiple neurochemical compounds from a localized volume of interest. The MRS outcome can be presented in the form of a spectrum, as described above, or, like an MRI, as a set of images with the signal intensity at each pixel representing the concentration of a specific neurochemical such as glutamate or GABA. The latter representation is referred to as spectroscopic imaging, or MRSI. Despite its advantage in assessing the composition of various neurochemical compounds, rather than only the MR signal of water molecules as with MRI, MRS has some limitations. Because the signal emanating from water, the most abundant source of ¹H nuclei in the brain, is stronger by several orders of magnitude than that of the other chemical components of brain tissue, MRI has substantially greater temporal and spatial resolution than MRS.
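The time-domain acquisition, Fourier transformation, and concentration-proportional peak areas described above can be illustrated with a synthetic toy signal. This sketch is not an acquisition protocol: it fabricates a free-induction-decay-like signal from two damped complex sinusoids whose amplitudes stand in a 2:1 "concentration" ratio, and shows that the integrated spectral peak areas recover that ratio after the FFT:

```python
import numpy as np

# Synthetic sketch: an NMR signal is recorded in the time domain as a sum of
# exponentially damped sinusoids (one per spin group); the Fourier transform
# converts it to a spectrum whose peak areas scale with concentration.
fs, n = 2000.0, 4096                       # sampling rate (Hz) and number of points
t = np.arange(n) / fs
peaks = [(100.0, 10.0), (250.0, 5.0)]      # (frequency offset in Hz, relative concentration)
fid = sum(c * np.exp(2j * np.pi * f * t) * np.exp(-t / 0.08) for f, c in peaks)
spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))

def area(f0, half_width=30.0):
    """Integrated magnitude of the spectrum in a window around one peak."""
    sel = (freqs > f0 - half_width) & (freqs < f0 + half_width)
    return np.abs(spectrum[sel]).sum()

ratio = area(100.0) / area(250.0)          # close to the 2:1 concentration ratio
```

In real MRS quantification the peaks overlap and are fitted with modeled basis spectra (as in the LCModel approach shown in Figure 15.1) rather than integrated in fixed windows, but the underlying principle is the same: peak area tracks the number of contributing nuclei.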
For example, the brain concentrations of glutamate and GABA are ~8–12 mmol and 1.5–3 mmol, respectively, whereas the water signal is stronger by a factor of approximately 10⁴. The consequence of the weaker signal produced in MRS is that the voxel size for meaningful data collection must be larger to achieve an adequate signal-to-noise ratio (S/N) for reliable quantification. Thus, on a 3 Tesla system, a typical voxel size for ¹H MRS is 1–27 cm³, compared to 0.5–8 mm³ for routine MRI. In the case of ³¹P MRS, the inherent sensitivity is approximately 1/15th that of the ¹H nucleus, and the spatial resolution of ³¹P MRS is therefore even poorer than that of ¹H MRS, with typical voxels in excess of 20 cm³ at 3 Tesla. Conducting MRS at higher B0 field strengths has many advantages. One, the S/N scales approximately with the B0 field strength, so increasing the field brings significant gains in the spatial and temporal resolution of MRS. The enhanced S/N at higher field strengths, such as 7 Tesla, can boost the spatial resolution by at least a factor of two and reduce the acquisition to under a minute, thus bringing the temporal resolution in line with the typical duration of a cognitive process in task-based fMRI paradigms. Two, higher field strengths also increase the chemical shift dispersion, which leads to greater separation of the chemical shifts within and between chemical species (Ugurbil et al., 2003) and, hence, greatly improves the differentiation of coupled spin systems, such as glutamate and glutamine (Tkac et al., 2001). In all, conducting MRS experiments at higher B0 fields improves the accuracy and precision of quantification (Pradhan et al., 2015; Yang, Hu, Kou, & Yang, 2008), minimizes the partial volume effects that impede precise voxel placement in functionally relevant brain areas, and boosts the temporal resolution required for capturing neurochemical modulations on the time scale of the epochs often used in task-based fMRI paradigms (Stanley & Raz, 2018). With respect to hardware, collecting ¹H MRS data requires no additional hardware except for specialized acquisition sequences suited for MRS, which makes it a popular choice in research facilities where only clinical scanners are available.

Figure 15.1 (cont.) Abbreviations: NAA, N-acetylaspartate; PCr+Cr, phosphocreatine plus creatine; GPC+PC, glycerophosphocholine plus phosphocholine; Glu, glutamate. (B) An example of a quantified ³¹P MRS spectrum from a left anterior white matter voxel extracted from a 3D CSI acquisition with ¹H decoupling at 3 Tesla. The modeled spectrum (red line) is superimposed on the acquired spectrum (black line), with the residual and the individual modeled spectra shown below. On the right is the voxel selection box superimposed on the 3D CSI grid and the MRI images. Abbreviations: PCr, phosphocreatine; Pi, inorganic orthophosphate; GPC, glycerophosphocholine; PC, phosphocholine; PE, phosphoethanolamine; GPE, glycerophosphoethanolamine; DN, dinucleotides; ATP, adenosine triphosphate.
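The voxel-size, field-strength, and averaging tradeoffs above can be captured in a back-of-envelope sketch. The proportionalities used here (S/N roughly linear in voxel volume and B0, and growing with the square root of the number of averages) are deliberate simplifications for illustration, not a scanner model:

```python
import math

# Simplified S/N budget: halving voxel volume requires ~4x the averages
# (hence ~4x the scan time) to recover the same S/N at a fixed field strength.
def relative_snr(voxel_cm3, b0_tesla, n_averages, nucleus_sensitivity=1.0):
    """Relative S/N under the rough scaling S/N ~ sensitivity * V * B0 * sqrt(N)."""
    return nucleus_sensitivity * voxel_cm3 * b0_tesla * math.sqrt(n_averages)

ref = relative_snr(8.0, 3.0, 64)          # a 2 x 2 x 2 cm 1H voxel at 3 T
small = relative_snr(4.0, 3.0, 256)       # half the volume, four times the averages
high_field = relative_snr(4.0, 7.0, 64)   # at 7 T the smaller voxel needs no extra time
# A 31P nucleus at ~1/15th the intrinsic sensitivity pushes voxels past 20 cm3:
p31 = relative_snr(20.0, 3.0, 64, nucleus_sensitivity=1.0 / 15.0)
```

Under these assumptions `ref == small`, which is why shrinking ¹H MRS voxels at 3 T quickly becomes prohibitively slow, and why the linear S/N gain from moving to 7 T can instead be spent on smaller voxels or shorter acquisitions.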
In the case of ³¹P MRS, additional hardware – a multi-nuclei capability package and a specialized transmit-receive radio frequency (RF) coil – is required, but it is readily available from major manufacturers and third-party vendors. Acquisition schemes for localizing the MRS signal fall into two main categories: single- and multi-voxel MRS. Multi-voxel MRS acquisition has the significant advantage of characterizing the neurochemistry of multiple brain areas or voxels in a single cross-sectional slice, or in multiple slices, in a single measurement; this approach is also known as chemical shift imaging (CSI). The spatial resolution is typically much better than in single-voxel MRS, because CSI methods are more efficient with signal averaging per unit of time. However, the spectral quality tends to be poorer than in single-voxel MRS. For example, if the goal of a study is to measure glutamate with the greatest precision, a single-voxel ¹H MRS approach is preferred. In the past three decades, the stimulated echo acquisition mode (STEAM) and point-resolved spectroscopy (PRESS) sequences have been the two most commonly used approaches in both modes, single- and multi-voxel, for ¹H MRS. More recent innovative approaches include Localization by Adiabatic SElective Refocusing (LASER) (Garwood & DelaBarre, 2001), semi-LASER (Scheenen, Klomp, Wijnen, & Heerschap, 2008), and SPin ECho, full Intensity Acquired Localized (SPECIAL) (Mlynárik, Gambarota, Frenkel, & Gruetter, 2006). Adiabatic pulses are highly effective for outer volume suppression (OVS), which is a key component of the acquisition sequence (Tkác, Starcuk, Choi, & Gruetter, 1999). Choices of localization for in vivo ³¹P MRS are limited by the relatively shorter spin–spin (T2) relaxation of ³¹P metabolites. Common methods include image-selected in vivo spectroscopy (ISIS), applied as a single- or multiple-voxel technique, and CSI. An example of ¹H MRS tailored to evaluate glutamate is presented in Figure 15.1a.
Brain Energy Metabolism

The brain's share of the body weight is only 2%, yet it consumes about 20% of the total energy generated by the body's cellular machinery (Attwell & Laughlin, 2001). Most of that energy is invested in the neurotransmission that underpins the brain's core function: information processing (Howarth, Gleeson, & Attwell, 2012; Sokoloff, 1991, 1993). The bulk of the brain's energy is generated by the mitochondria, which produce daily about 6 kg of the main energy substrate, ATP, via oxidative phosphorylation (OXPHOS) through the ATPase pathway: inorganic orthophosphate (Pi) + adenosine diphosphate (ADP) → ATP. In addition, the high-energy phosphate store, PCr, can be converted into ATP through the creatine kinase (CK) reaction: PCr + ADP → ATP + Cr. The latter route is considerably more efficient than the mitochondrial ATPase-mediated one (Andres, Ducray, Schlattner, Wallimann, & Widmer, 2008; Schlattner, Tokarska-Schlattner, & Wallimann, 2006; Wallimann, Wyss, Brdiczka, Nicolay, & Eppenberger, 1992). CK is also involved in shuttling PCr out of mitochondria to sites utilizing ATP. Physiological ATP production and utilization via the ATPase and CK pathways can be estimated using ³¹P MRS combined with magnetization transfer (MT), which allows quantifying the exchange between PCr and the saturated signal of Pi or γ-ATP (Du, Cooper, Lukas, Cohen, & Ongur, 2013; Shoubridge, Briggs, & Radda, 1982; Zhu et al., 2012). As noted, the state of brain energy metabolites, and changes therein, can be estimated in vivo using ³¹P MRS by quantifying PCr, ATP, and Pi (Goldstein et al., 2009; Kemp, 2000). The brain tissue concentration of ATP is approximately 3 μmole/g, which is well buffered by PCr, with a tissue concentration of 5 μmole/g (McIlwain & Bachelard, 1985).
On a ³¹P MRS spectrum, ATP is represented by three chemical shifts, one per phosphate spin group, in which the β- and α-ADP resonances reside on the shoulders of the γ- and α-ATP peaks, respectively (Figure 15.1b); the β-ATP resonance is therefore the preferred chemical shift for quantifying ATP. The interpretation of basal PCr levels in vivo is complicated, however, because they may reflect not only energy consumption but also high-energy storage capacity. That is, lower levels of PCr will be observed under increased utilization of energy or under reduced PCr storage, and it is therefore impossible to identify which specific mechanism drives a lower PCr level. Methods such as ³¹P MRS with MT are better suited for assessing mechanisms directly related to CK utilization or ATP production. Lastly, one must also be mindful that PCr can be measured with ¹H MRS, where it is commonly referred to as "creatine" or Cr, a label that can be misleading. Because the PCr and Cr spectral peaks are indistinguishable on a ¹H MRS spectrum, and PCr and Cr are both reactants in the CK reaction, a shift in CK utilization would not result in a net change in the combined PCr+Cr measurement by ¹H MRS. Thus, ¹H MRS is poorly suited to assessing differences in the utilization of energy metabolism (Du et al., 2013; Shoubridge et al., 1982; Zhu et al., 2012).
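The insensitivity of the combined ¹H "creatine" peak to CK activity follows from simple conservation, which a minimal numerical illustration (arbitrary example concentrations, not measured values) makes explicit: CK interconverts PCr and Cr, so their sum never changes.

```python
# Minimal illustration of why 1H MRS cannot detect a creatine kinase (CK)
# shift: CK converts PCr to Cr (PCr + ADP -> ATP + Cr), so PCr + Cr is conserved.
def ck_shift(pcr, cr, delta):
    """Move `delta` concentration units through the CK reaction."""
    return pcr - delta, cr + delta

pcr0, cr0 = 5.0, 4.0                   # illustrative concentrations, micromole/g
pcr1, cr1 = ck_shift(pcr0, cr0, 1.5)   # energy demand draws down the PCr store
# 31P MRS sees PCr fall from 5.0 to 3.5; the 1H MRS PCr+Cr peak stays at 9.0.
```

This is the quantitative content of the point above: only ³¹P MRS, which resolves PCr as its own peak, can observe the drawdown.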
Markers of Neuropil Expansion and Contraction Derived from ³¹P MRS

In addition to assessing energy-related metabolites, ³¹P MRS provides an opportunity to access molecular markers of cortical neuropil expansion and contraction, which may be particularly important in evaluating changes in brain–cognition relationships during development, aging, and treatment. These changes can be inferred from measuring the precursors of membrane phospholipids (MPLs) – phosphocholine (PC) and phosphoethanolamine (PE) – as well as the breakdown products of MPLs – glycerophosphocholine (GPC) and glycerophosphoethanolamine (GPE) (Pettegrew, Klunk, Panchalingam, McClure, & Stanley, 2000). In brain tissue, MPLs form the membrane bilayers that physically separate the intracellular components from the extracellular environment in neurons, astrocytes, oligodendrocytes, and microglia, as well as the different organelles within cells (Stanley & Pettegrew, 2001). Early in postnatal brain development, in vivo human and ex vivo rat brain ³¹P MRS studies have consistently shown high MPL precursor levels, mainly of PE, and low levels of the MPL breakdown products GPC and GPE (Pettegrew, Panchalingam, Withers, McKeag, & Strychor, 1990; Pettegrew et al., 2000; Stanley & Pettegrew, 2001). This reflects the high demand for active MPL synthesis during the development of the cell membrane structures required for the proliferation of dendritic and synaptic connections (i.e., neuropil expansion). The expansion of neuropil is followed by decreases in precursor levels and increases in breakdown products coinciding with maturation (i.e., pruning or neuropil contraction) (Goldstein et al., 2009; Stanley et al., 2008). In the context of rapidly proliferating tissue, elevated levels of MPL precursors, specifically PC, have been reported at the time and site of neuritic sprouting in the hippocampus following unilateral lesions of the entorhinal cortex in rats (Geddes, Panchalingam, Keller, & Pettegrew, 1997).
Collectively, this supports the quantification of the MPL precursors PE and PC with ³¹P MRS as a sensitive measure of active MPL synthesis in the neuropil. ¹H MRS can also capture MPL metabolites by quantifying the trimethylamine chemical shift at approximately 3.2 ppm (Govindaraju et al., 2000), with peaks representing GPC plus PC, which are indistinguishable. In the literature, the trimethylamine ¹H peak, GPC+PC, is typically referred to as the "choline" peak or the "choline-containing" peak, which can be misleading because the contribution of choline itself is below the detection limit (Miller, 1991). Moreover, the interpretation of GPC+PC measured with ¹H MRS is ambiguous, because it cannot specifically implicate either the precursors or the breakdown products of MPLs (Stanley & Pettegrew, 2001; Stanley et al., 2000).
Specificity of Markers Derived from ¹H MRS: NAA and Myo-Inositol

On a typical ¹H MRS spectrum of the brain, the prominent chemical shift at 2.01 ppm is attributed to the CH3 spin group of N-acetylaspartate (NAA) (Govindaraju et al., 2000), which, next to glutamate, is the second most abundant free amino acid in the brain, with a concentration of approximately 10 mmol (Tallan, 1957). NAA is synthesized in the mitochondria of neurons from acetyl-CoA and aspartate with the help of the membrane-bound enzyme L-aspartate N-acetyltransferase, and catabolized by the principal enzyme N-acetyl-L-aspartate aminohydrolase II (aspartoacylase), whose activity is highest in oligodendrocytes (Baslow, 2003). Based on monoclonal antibody studies, NAA is localized to neurons, with greater staining in the perikarya, dendrites, and axons (Simmons, Frondoza, & Coyle, 1991). Cell culture studies have shown that NAA is also localized in neurons, immature oligodendrocytes, and O-2A progenitor cells (Urenjak, Williams, Gadian, & Noble, 1993). Thus, historically, NAA has been viewed exclusively as a marker of mature neurons (De Stefano et al., 1998). However, several more recent investigations have revealed that NAA is present in mature oligodendrocytes (Bhakoo & Pearce, 2000) and have provided evidence of inter-compartmental cycling of NAA between neurons and oligodendrocytes (Baslow, 2000, 2003; Bhakoo & Pearce, 2000; Chakraborty, Mekala, Yahya, Wu, & Ledeen, 2001). More precisely, therefore, NAA should be viewed as a marker of functioning neuroaxonal tissue, including functional aspects of the formation and/or maintenance of myelin (Chakraborty et al., 2001). Thus, NAA is a rather nonspecific biomarker of neuronal and axonal viability, maturity, or maintenance. Early in postnatal brain development, levels of NAA (or NAA/PCr+Cr ratios) are low but increase progressively and plateau as the brain reaches maturation (van der Knaap et al., 1992).
Of all brain compartments, the cortical grey matter, cerebellum, and thalamus show the greatest NAA elevations during development (Pouwels et al., 1999), which indicates that NAA does not merely reflect the number of neurons but is a marker of functioning neurons or, in the context of development, a marker of cortical expansion (Pouwels et al., 1999). As with glutamate, ¹H MRS acquired using a short echo time (TE) scheme can reliably measure myo-inositol, which has a prominent chemical shift at 3.56 ppm (Govindaraju et al., 2000). Myo-inositol, which is generally viewed as a cerebral osmolyte, is an intermediate of several important pathways involving the inositol-polyphosphate messengers, inositol-1-phosphate, phosphatidylinositol, glucose-6-phosphate, and glucuronic acid (Ross & Bluml, 2001). More importantly, myo-inositol is almost exclusively localized in astrocytes (Coupland et al., 2005; Kim, McGrath, & Silverstone, 2005) and is therefore viewed as a marker of glia (Ross & Bluml, 2001).
Neurotransmission and Information Processing in the Brain

The neural mechanisms of intelligence, and of information processing in general, are unquestionably complex and not fully understood. Blood Oxygen Level Dependent (BOLD) fMRI studies, which can assess task-related functional activity as well as the correlation, or functional connectivity, between brain areas, have contributed significantly to the fundamental understanding of how circuits and networks are engaged in intelligence and information processing (e.g., van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). BOLD fMRI provides temporal and spatial resolution that compare favorably with MRS. The BOLD signal, however, is a remote indicator of neural activity, and its surrogate nature does not allow direct probing of neuronal processes. Moreover, BOLD fMRI is influenced by major determinants of vascular tone, such as dopamine (Lauritzen, Mathiesen, Schaefer, & Thomsen, 2012), that depend on age (Bäckman, Lindenberger, Li, & Nyberg, 2010) and genetic makeup (Damoiseaux, Viviano, Yuan, & Raz, 2016). Because of these limitations, BOLD fMRI, whether task-dependent or resting-state, is an inadequate tool for elucidating the functioning of the highly integrated glutamatergic and GABAergic neuronal ensembles, within local and long-range circuits, that drive the dynamic shifts in the excitatory/inhibitory (E/I) balance necessary for synaptic plasticity and the reorganization of neuronal processes (Isaacson & Scanziani, 2011; Lauritzen et al., 2012; Maffei, 2017; Tatti, Haley, Swanson, Tselha, & Maffei, 2017). Quantifying the net shift in the E/I dynamics of microcircuits driven by directed cognitive engagement, via in vivo measurements of glutamate and GABA, is paramount for understanding the brain underpinnings of cognition (Stanley & Raz, 2018).
After all, excitatory glutamatergic neurons comprise about 80% of cortical processing units, with the remaining 20% being GABAergic inhibitory neurons (Somogyi, Tamás, Lujan, & Buhl, 1998). Coherent cerebral networks depend on a delicate and ever-shifting balance of glutamate and GABA neurotransmission (Isaacson & Scanziani, 2011; Lauritzen et al., 2012; Maffei, 2017; Tatti et al., 2017), and a tool that allows tracking the fluctuation of these neurotransmitters in vivo is necessary for making progress in understanding the brain mechanisms of intelligent activity. For example, it is unclear what underlies the significant mismatch between stimulus-driven non-oxidative glucose utilization and oxygen consumption reported by Dienel (2012) and by Mergenthaler, Lindauer, Dienel, and Meisel (2013). This fundamental phenomenon in brain information processing circuitry cannot be assessed by BOLD MRI. Recent proton functional MRS
n. raz and j. a. stanley
(¹H fMRS) studies suggest that this mismatch is a transient phenomenon needed for transition between metabolic states in glutamatergic neurons during neurotransmission (Stanley & Raz, 2018). The task-related shifts in dynamics of neuronal activity may be closely associated with synaptic plasticity (McEwen & Morrison, 2013), and ¹H fMRS may be a highly promising tool for studying brain efficiency in handling neurochemical and microstructural changes induced by cognitive activity. Fulfillment of these promises, however, depends on resolving several key methodological issues.
Neurotransmitters: Glutamate, GABA, and Their Interaction
As noted, ¹H MRS, with its ability to measure local levels of glutamate and GABA in vivo, is well suited for investigating the conceptual framework that emphasizes the temporal dynamics of the E/I equilibrium in cortical and subcortical circuits (Stanley & Raz, 2018). Unfortunately, the dynamic aspect of glutamate and GABA activity is absent from the majority of the ¹H MRS literature, with measurements primarily reflecting static neurotransmitter levels under quasi-resting conditions. Typically, ¹H MRS is acquired without any specific instructions or behavioral constraints aside from asking the participants to relax and keep the head still during acquisition. Thus, the measured neurotransmitter levels may not be static but rather reflect a level integrated over a time window spanning several minutes. Such coarse data structure limits the interpretation of findings with respect to neural correlates of neurotransmission and synaptic plasticity. The need for capturing the temporal dynamics of glutamate and GABA in vivo is being met by the emerging paradigm of ¹H fMRS. This “new” ¹H MRS promises exciting contributions to the understanding of neural mechanisms relevant to cognitive neuroscience and psychiatry research (Stanley & Raz, 2018). The brain concentration of glutamate (which contributes eight protons in total) is similar to that of NAA (nine protons). However, the reliability of quantifying these two compounds differs greatly (Bartha, Drost, Menon, & Williamson, 2000; de Graaf & Bovee, 1990; Provencher, 1993), owing to differences in the chemical shift patterns of their peaks. The CH3 group of NAA gives rise to an uncoupled singlet with relatively high S/N at 2.01 ppm. In contrast, the β- and γ-CH2 groups of glutamate (two protons each) are strongly coupled at low fields, leading to complex multiplets with poorer S/N and, hence, less reliable measurements.
Quantification of glutamate is further hampered by overlap with other metabolites of similar chemical shift, such as glutamine, GABA, and macromolecule signals. The crucial conditions for reliably quantifying glutamate are a relatively short echo time at acquisition and the use of appropriate a priori knowledge in the spectral fitting. As for GABA, establishing reliable measurements is challenging due to its weaker ¹H MRS signal and its complex multiplet peaks within the
Neurochemical Correlates of Intelligence
2.5–2.0 ppm spectral region, as well as a triplet approximately at the PCr+Cr chemical shift. Consequently, in vivo GABA quantification is typically attained using a spectral-editing sequence or two-dimensional J-resolved spectroscopy, which can isolate particular chemical shifts of GABA (Harris, Saleh, & Edden, 2017; Keltner, Wald, Frederick, & Renshaw, 1997; Rothman, Behar, Hyder, & Shulman, 1993; Wilman & Allen, 1995).
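The reliability gap between the uncoupled NAA singlet and the coupled glutamate multiplets can be illustrated with a toy simulated spectrum. The sketch below uses Lorentzian lineshapes; only the NAA CH3 shift of 2.01 ppm is taken from the text, while all other positions, amplitudes, and linewidths are illustrative assumptions rather than a model of a real acquisition.

```python
import numpy as np

def lorentzian(ppm, center, amplitude, fwhm):
    """Lorentzian lineshape; `amplitude` is the peak height at `center`."""
    hwhm = fwhm / 2.0
    return amplitude * hwhm**2 / ((ppm - center)**2 + hwhm**2)

# Chemical-shift axis (ppm), covering part of a short-TE 1H spectrum
ppm = np.linspace(1.5, 4.0, 2500)

# NAA CH3: a single uncoupled resonance at 2.01 ppm (illustrative height/width)
naa = lorentzian(ppm, 2.01, amplitude=9.0, fwhm=0.04)

# Glutamate CH2 protons: coupling spreads the signal over several smaller
# multiplet lines in the 2.0-2.45 ppm region (positions/heights illustrative)
glu_lines = [(2.05, 1.8), (2.12, 2.4), (2.35, 2.8), (2.42, 2.0)]
glu = sum(lorentzian(ppm, c, a, 0.04) for c, a in glu_lines)

# Glutamine overlaps the same region, compounding the fitting problem
gln_lines = [(2.14, 1.0), (2.45, 1.2)]
gln = sum(lorentzian(ppm, c, a, 0.04) for c, a in gln_lines)

# The tallest glutamate line sits far below the NAA singlet, so its peak
# S/N (and hence fit reliability) is much worse for the same noise floor.
print(naa.max() / glu.max())
```

Spectral fitting with a short echo time and strong prior knowledge, as the text describes, is essentially an attempt to disentangle these overlapping low-amplitude multiplets from one another and from the macromolecule baseline.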
Investigating Relationships between Brain Neurochemistry and Cognition
As a preamble to this brief survey of the extant literature on MRS correlates of cognition, it is important to note that, unlike the MRI-based body of work, it includes very few studies of intelligence as a latent construct, g, or of its subspecies, crystallized (gc) and fluid (gf) intelligence. To date, the exploration of in vivo neurochemistry and brain energy metabolism has been devoted almost entirely to individual indicators that measure properties of the sensory and motor systems, which are necessary for processing data about the environment and expressing putative cognitive transformations in intelligent action, or to scores on specific tasks of memory and executive functions.
Cognitive Performance and Energy Metabolism Indicators Estimated from ³¹P MRS
Extant ³¹P MRS studies of cognitive processes are rare and heterogeneous in their selection of participants, instruments, and cognitive indicators, and thus are difficult to summarize. One study examined a sizable sample of healthy and carefully screened children and adolescents (ages 6–17), who underwent both ¹H and ³¹P MRS on a 1.5 Tesla system and completed an extensive battery of cognitive tasks (Goldstein et al., 2009). The study reported multiple findings, but after Bonferroni correction for multiple comparisons, which set the effective Type I error rate at .004, two cognitive scores showed significant associations with PCr measured by ³¹P MRS: a verbal intelligence (“Language”) composite score and a Memory composite that included five verbal and nonverbal memory tests. An Executive Functions composite that included Similarities, Matrix Reasoning, and perseverative errors on the Wisconsin Card Sorting Test showed no significant ¹H or ³¹P MRS correlates even before p-value adjustment. Because PCr and the cognitive indicators correlated positively with age, it was imperative to examine the associations between them with age controlled for. Unfortunately, the discrepancy in N among the reported statistical tests precluded computation of partial correlations, and it is unclear whether either of the two significant correlations would survive age correction. In older adults, higher whole-brain gray matter PCr has been linked to better performance on an age-sensitive response inhibition task in healthy
older adults (Harper, Joe, Jensen, Ravichandran, & Forester, 2016). Studying brain energy metabolites in diseases in which cognitive impairment is the primary symptom, such as Alzheimer’s dementia (AD), can provide some clues to the role of brain energetics in cognition. In comparisons of AD patients with matched healthy controls, an increase in PCr as measured by ³¹P MRS has been reported in regions that are important for executing the componential processes of intelligence, namely the hippocampi, but not in the anterior cingulate cortex (Rijpma, van der Graaf, Meulenbroek, Olde Rikkert, & Heerschap, 2018). While PCr/Pi ratios and pH were also increased in AD, no changes were found for precursors or breakdown products of MPLs. In sum, in vivo assessment of PCr may be a useful way of tapping into the energetic correlates of intelligence and its components, and further exploration of this imaging modality is warranted, despite the meager current evidence. The ability of ³¹P MRS to quantify the MPL precursors PE and PC has rarely been used in cognitive neuroscience. To date, two relevant studies have been conducted: one reported associations of these indirect quantifiers of neuropil with cognitive performance in healthy children (Goldstein et al., 2009), and the other demonstrated early alterations preceding cognitive decline in older adults (Pettegrew et al., 2000).
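The multiple-comparison and age-adjustment issues raised for the Goldstein et al. (2009) findings can be made concrete with a toy example: a Bonferroni-adjusted threshold, and a partial correlation that regresses age out of both a metabolite level and a cognitive score. All data below are simulated, and the count of 12 tests is an assumption chosen only to reproduce the ~.004 threshold quoted above.

```python
import numpy as np

def bonferroni_alpha(alpha, n_tests):
    """Per-test significance threshold under Bonferroni correction."""
    return alpha / n_tests

def partial_corr(x, y, z):
    """Correlation of x and y after a covariate z (e.g., age) is
    regressed out of both variables."""
    design = np.column_stack([np.ones_like(z), z])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# ~12 composites tested at alpha = .05 gives the ~.004 effective threshold
print(round(bonferroni_alpha(0.05, 12), 4))

# Toy data: PCr and a cognitive score that are linked ONLY through age
rng = np.random.default_rng(0)
age = rng.uniform(6, 17, 200)
pcr = 0.5 * age + rng.normal(0, 1, 200)
score = 0.5 * age + rng.normal(0, 1, 200)
r_raw = np.corrcoef(pcr, score)[0, 1]
r_partial = partial_corr(pcr, score, age)
print(r_raw, r_partial)  # raw r is sizable; partial r shrinks toward zero
```

The toy run shows why the inability to compute partial correlations matters: an apparently robust zero-order correlation can vanish once the shared age trend is removed.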
¹H MRS and Studies of Cognition
N-acetyl Aspartate (NAA)
Being the most prominent peak in a ¹H MRS spectrum, clearly detectable even at 1.5 Tesla, NAA is by far the most widely studied MRS-derived index and the most frequently reported correlate of cognitive performance. That said, the extant literature on cognitive correlates of basal NAA levels is relatively sparse. Moreover, these studies use diverse MRS methods and vary substantially in their selection of brain regions and cognitive performance measures. The latter range from global and relatively coarse indices to indicators of specific cognitive operations that contribute to intelligence but pertain only to limited aspects of that construct. In a small sample of healthy children studied on a 1.5 Tesla system, prefrontal PCr+Cr and NAA levels correlated with working memory performance, although correction for multiple comparisons would have rendered the associations non-significant (Yeo, Hill, Campbell, Vigil, & Brooks, 2000). No correlations with IQ were found in that study. A single-voxel ¹H MRS study with voxel placement in occipitoparietal and frontal cortices revealed modest correlations between NAA and IQ, with the authors noting some undue influence of extreme NAA values (Patel, Blyth, Griffiths, Kelly, & Talcott, 2014). In young adults, a large slab of tissue above the ventricles yielded significant but weak associations between NAA and IQ, both in the right hemisphere, with lower right anterior gray matter NAA predicting higher VIQ and higher posterior NAA
linked to higher PIQ (Jung et al., 2009). Higher hippocampal NAA levels were weakly associated with better performance on a global cognitive measure (Kroll et al., 2018). In children, faster cross-modal word matching was related to higher NAA (Del Tufo et al., 2018). No associations between NAA and cognition were observed in another study of healthy children (Goldstein et al., 2009). Given the interpretation of NAA levels as indicators of cell viability, one would expect more studies exploring the mediating or moderating influence of NAA on the relationship of cognitive performance with structural and functional properties of the brain assessed by other MRI modalities. Surprisingly, investigations of that type are common in various pathological conditions but rare in healthy adults. In one such study, NAA level was positively associated with a non-specific indicator of white matter organization and integrity (fractional anisotropy, FA) in selected commissural tracts using diffusion-tensor imaging (Wijtenburg et al., 2013). Because of the ill-defined nature of NAA, its use as an indicator of brain function is limited. Targeting specific neurotransmitters in evaluating the relationship between the brain and cognition seems more promising. These studies, however, are quite challenging because of the significantly lower signals of neurotransmitters in the ¹H MRS spectrum compared to that of NAA. Technical limitations inherent to generating ¹H MRS spectra on a typical MRI scanner further limit simultaneous assessment of multiple neurotransmitters in multiple brain regions.
Specific Neurotransmitters: GABA
The main inhibitory neurotransmitter of the brain, GABA, has been the focus of several investigations of brain–cognition associations. During task performance and presumably elevated neural activity, the regional concentration of GABA inferred from ¹H MRS drops, and the glutamate concentration rises (Duncan, Wiebking, & Northoff, 2014). The cellular mechanisms reflected in these ¹H MRS-derived measures remain to be elucidated, but the evidence seems to favor an energetic explanation, with changes in the traffic of reactive oxygen species (ROS) and endogenous antioxidants, rather than a neurotransmitter-release explanation (Lin, Stephenson, Xin, Napolitano, & Morris, 2012). In one small sample, GABA+ levels correlated with expression of the microglia-associated protein TSPO in the mPFC, but no links between GABA+ levels and cognitive performance were revealed (Da Silva et al., 2019). In right-handed young and older adults, poorer performance on a visuomotor bimanual coordination task was associated with higher GABA+ levels in the left sensorimotor but not the bilateral occipital cortex (Chalavi et al., 2018). In children, faster cross-modal word matching was related to low basal levels of GABA (Del Tufo et al., 2018). In general, these limited findings suggest that better cognitive performance may be associated with (temporarily) reduced levels of
GABA, which may indicate the importance of suppressing inhibitory activity in the service of cognitive effort and efficiency.
Specific Neurotransmitters: Glutamate
¹H MRS has been used very infrequently in animal models of cognition. In a study of middle-aged marmosets (Callithrix jacchus) scanned within 3 months of a serial reversal learning task (Lacreuse, Moore, LaClair, Payne, & King, 2018), a higher prefrontal cortex Glx (glutamate + glutamine) level was associated with faster acquisition of the reversals, but only in males, not in females. In younger adults suffering from asthma, as well as in healthy controls, poorer cognitive function assessed by the Montreal Cognitive Assessment (MoCA) was associated with reduced resting glutamate levels (Kroll et al., 2018).
Combined GABA and Glutamate Studies
PCr+Cr-referenced glutamate levels in the posterior medial cortex and associated white matter correlated positively, and GABA levels, also referenced to PCr+Cr, correlated negatively with functional connectivity in the default network (Kapogiannis, Reiter, Willette, & Mattson, 2013).
Functional MRS (fMRS)
Sensory-Motor Tasks
Flashing checkerboard stimuli induce modest but consistent stimulus-related increases in steady-state glutamate levels (Bednařík et al., 2015; Lin et al., 2012; Mangia et al., 2007; Schaller, Mekle, Xin, Kunz, & Gruetter, 2013), with the magnitude of the increase depending on task duration and cognitive processing demands. Notably, novel stimuli, even when viewed passively, induce a significant elevation in glutamate, whereas frequently repeated familiar pictures do not (Apsvalka, Gadie, Clemence, & Mullins, 2015). In a combined ¹H fMRS and fMRI study of healthy young adults on a 7 Tesla system, BOLD and glutamate changes in response to short (64 s) repeated flickering checkerboard stimulation showed a moderate correlation, which strengthened once the initial block with a counterphase glutamate–BOLD time series was eliminated (Ip et al., 2017). Of note, the rise in the BOLD fMRI signal during visual stimulation was mirrored by a concomitant elevation in glutamate, whereas none of these changes and associations were noted at rest (i.e., the control comparison condition). A motor task such as periodic finger tapping induces a modest glutamate increase in the sensorimotor cortical regions, and these increases are co-localized with BOLD activation (Schaller, Xin, O’Brien, Magill, & Gruetter, 2014). These findings, albeit limited in scope, link an increase in sensory and motor processing to a temporary elevation in glutamate levels. Thus, proportionate glutamate recruitment
appears to underlie basic information processing that underpins complex cognitive activity and may impose an important constraint on success in accomplishing multifarious tasks that are commonly used for gauging intelligence. If further replicated and tied to cognitive performance, this task-dependent glutamate surge may reflect the efficiency of information gathering via what Galton (1883) called “the avenues of senses,” and contribute significantly to our understanding of individual differences in intellectual prowess.
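The percent-change metric on which these fMRS reports rest can be sketched with simulated data. The block structure, noise level, and injected 3% effect below are illustrative assumptions, not parameters taken from the cited studies.

```python
import numpy as np

# Simulated single-voxel glutamate time course (arbitrary units),
# alternating 64 s rest / task blocks as in checkerboard fMRS paradigms.
rng = np.random.default_rng(1)
n_per_block, n_cycles = 8, 4          # 8 spectra of 8 s per 64 s block
rest_level = 10.0
glu, labels = [], []
for _ in range(n_cycles):
    glu += list(rest_level + rng.normal(0, 0.05, n_per_block))          # rest
    glu += list(rest_level * 1.03 + rng.normal(0, 0.05, n_per_block))   # task
    labels += [0] * n_per_block + [1] * n_per_block
glu, labels = np.array(glu), np.array(labels)

# Percent signal change relative to rest: the quantity typically reported
rest_mean = glu[labels == 0].mean()
task_mean = glu[labels == 1].mean()
pct_change = 100 * (task_mean - rest_mean) / rest_mean
print(f"task-related glutamate change: {pct_change:.1f}%")
```

Averaging within blocks, as here, trades away the within-block temporal detail that event-related designs try to preserve; that trade-off is exactly the limitation the text raises for block-design fMRS.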
Cognitive Tasks
The ¹H fMRS literature on higher-level cognitive activity is, alas, even sparser than the body of research summarized in the previous section. In one study, Glx levels in the mPFC were compared between resting-state and mental imagery task conditions, with the auditory cortex used as a control region (Huang et al., 2015). The block design of the fMRS implemented in that study falls short of a true event-related approach that would allow tracking within-task changes in glutamate level. Moreover, using glutamate + glutamine (i.e., Glx) as the key outcome measurement hampers the interpretation of the results. Nonetheless, this investigation is a step forward in comparison to ¹H MRS studies of glutamate (or Glx) assessed without behavioral constraints during the acquisition. In addition, fMRI data were collected on the same day, although not within the same session. Mental imagery, investigated in this study, is heavily dependent on working memory and is thus an important component of many (primarily but not exclusively nonverbal) reasoning tasks that fall under the rubric of fluid intelligence (Kyllonen & Christal, 1990). Collecting data during imagined swimming in experienced practitioners of that sport had an important advantage: the absence of the individual differences and learning-related changes that are inevitable in all laboratory cognitive tasks performed in the scanner. The disadvantage, however, was the propensity of that task to engage regions within the default-mode network, which is activated at “rest,” i.e., during unstructured mental activity that is likely to include mental imagery (Mazoyer et al., 2001).
Notably, unlike resting-state BOLD, for which extensive investigations of temporal fluctuations in the brain’s hubs and networks constitute a voluminous literature, little is known about ¹H MRS changes during rest over comparable time windows and, to the best of our knowledge, nothing is known about the synchronization and desynchronization patterns of glutamate and GABA in the healthy brain. What physiological mechanisms underlie the observed task-related increase in Glx? The possibilities are twofold. First, the task-related demands may be of energetic, metabolic origin: they may increase oxidative metabolism and, accordingly, glutamate–glutamine cycling, making a higher concentration of glutamate available at the synaptic cleft. Second, a task-related glutamate increase may be of neuronal origin and related to an increase in synaptic glutamate release. It is unclear, however, how “cycling” can be inferred when only glutamate + glutamine is observed.
Thielen et al. (2018) used ¹H MRS and fMRI (psychophysiological interaction analysis) to investigate the hypothesized contribution of the mPFC to performance on a face–name association task in young adults. They measured, in a single mPFC voxel, both GABA and Glx levels, referenced to NAA (i.e., as ratios), before and after volunteers memorized face–name associations. Although this study’s block design cannot capture the dynamics of the encoding–retrieval cycle, its results are nonetheless intriguing. Higher scores on an out-of-scanner memory test were associated with elevated ratios of mPFC Glx/NAA but were unrelated to GABA-to-NAA ratios. In the fMRI study using the subsequent-memory paradigm, carried out between the two ¹H MRS acquisitions, a positive correlation between the Glx/NAA increase and mPFC connectivity to the thalamus and hippocampus was observed. This correlation was noted only for associations subsequently recognized with high confidence and not for those recognized with low confidence or forgotten altogether. Mediation analyses showed that the relationship between the Glx/NAA change and memory performance (the difference in recall of high- vs. low-confidence items) was mediated by functional connectivity between the mPFC and the hippocampus, with the magnitude of connectivity correlated with memory scores. The role of mPFC–thalamus connectivity in a similar mediation pattern could not be established with a conventional level of confidence, however, although the effect was in the same direction (Thielen et al., 2018). In a recent task-based ¹H fMRS study with a single voxel placed in the left dlPFC, a significant 2.7% increase in glutamate was observed during a standard 2-back working memory task, compared to continuous visual crosshair fixation, in healthy young adults (Woodcock, Anand, Khatib, Diwadkar, & Stanley, 2018). Notably, the glutamate increase was more pronounced during the initial moments of task performance in each task block.
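A mediation analysis of the kind reported by Thielen et al. (2018) can be sketched with a simple bootstrap of the indirect effect. The variable roles below mirror that study's design (predictor = Glx/NAA change, mediator = mPFC–hippocampus connectivity, outcome = memory score), but the data, effect sizes, and the bootstrap approach itself are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def mediation_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Bootstrap the indirect (a*b) path of a simple X -> M -> Y mediation.

    Returns the point estimate of a*b and a 95% percentile CI.
    """
    rng = np.random.default_rng(seed)
    n = len(x)

    def ab(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]               # a path: M ~ X
        design = np.column_stack([np.ones(n), ms, xs])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][1]  # b path: Y ~ M | X
        return a * b

    est = ab(np.arange(n))
    boots = [ab(rng.integers(0, n, n)) for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return est, (lo, hi)

# Toy data with a genuine indirect path (all effect sizes hypothetical)
rng = np.random.default_rng(42)
x = rng.normal(0, 1, 120)            # e.g., Glx/NAA change
m = 0.6 * x + rng.normal(0, 1, 120)  # e.g., mPFC-hippocampus connectivity
y = 0.5 * m + rng.normal(0, 1, 120)  # e.g., memory score
est, (lo, hi) = mediation_indirect_effect(x, m, y)
print(est, (lo, hi))  # a CI excluding zero is taken as evidence of mediation
```

The logic matches the verbal summary above: mediation is supported when the indirect path through connectivity remains reliably non-zero, and fails to reach a conventional level of confidence when the bootstrap interval spans zero, as reportedly happened for the thalamic route.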
In another study from the same group, healthy adults performing an associative learning task with object–location pairs displayed distinctive temporal dynamics of glutamate modulation in the right hippocampus (Stanley et al., 2017). Notably, the differences in the time course of glutamate modulation were associated with learning proficiency: faster learners demonstrated up to an 11% increase in glutamate during the early trials, whereas a significant but smaller and later increase of 8% was observed in slower learners. Taylor et al. (2015) investigated glutamate modulation during a classic Stroop task, which includes a mixture of congruent and incongruent conditions as well as trials with words only (no color) and color only (no words), and which is frequently used to assess executive functions. In this study, conducted on a 7 Tesla system, the authors examined glutamate level changes in the dorsal anterior cingulate gyrus (ACG) of healthy adults. They found that, compared to the rest condition, ACG glutamate increased by 2.6% during Stroop task performance. However, differences in dorsal ACG glutamate modulation between trial conditions within the Stroop task were not reported.
Visuospatial Cognition
Glutamate modulation during tasks engaging the visuospatial attention and memory system was recently investigated using ¹H fMRS at 3 Tesla. In healthy individuals, a non-significant modulation of glutamate was observed in the parietal-occipital cortex during a visuospatial attention task compared to the control condition (Lindner, Bell, Iqbal, Mullins, & Christakou, 2017). In another study, no significant task-related glutamate modulation was observed in the parietal-posterior cingulate cortex of healthy adults, patients with Alzheimer’s disease (AD), and individuals with amnestic mild cognitive impairment, who performed a face–name associative memory task compared to the rest control condition (Jahng et al., 2016). In both studies, details on the variability of the glutamate measurements were omitted, and it therefore remains unclear whether the method afforded detection of a task-related change in glutamate on the order of 10% or less.

³¹P fMRS
Several attempts to capture the brain’s energetic response to sensory stimulation via ³¹P MRS have been made over the past two decades, with mixed results. Although some studies found no changes in energy-related metabolites (Barreto, Costa, Landim, Castellano, & Salmon, 2014; Chen, Zhu, Adriany, & Ugurbil, 1997; van de Bank, Maas, Bains, Heerschap, & Scheenen, 2018), most published reports identified significant decreases in PCr (Barreto et al., 2014; Kato, Murashita, Shioiri, Hamakawa, & Inubushi, 1996; Murashita, Kato, Shioiri, Inubushi, & Kato, 1999; Rango, Castelli, & Scarlato, 1997; Sappey-Marinier et al., 1992) that appear, at least in some samples, age-dependent (Murashita et al., 1999). Some findings hint at the dynamic nature of the PCr changes (Rango, Bonifati, & Bresolin, 2006; Yuksel et al., 2015), which could have been obscured by integration over a wide time window and across a very large voxel. With respect to other metabolites detectable by ³¹P MRS, two studies reported elevation of inorganic phosphate (Pi) levels during visual stimulation (Barreto et al., 2014; Mochel et al., 2012), while others reported null results (van de Bank et al., 2018).
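The detectability concern raised above can be made concrete with a back-of-the-envelope power calculation. The sketch below uses a standard z-based formula for a paired task-vs-rest comparison; the coefficient-of-variation values and sample size are assumptions for illustration, not figures from the studies cited.

```python
import math

def min_detectable_pct_change(cv_pct, n):
    """Smallest percent change detectable with ~80% power at two-sided
    alpha = .05 in a paired task-vs-rest design (normal approximation).

    cv_pct: assumed within-subject coefficient of variation of the
    metabolite estimate (percent); n: number of participants.
    """
    z_alpha, z_beta = 1.96, 0.84
    # sqrt(2) because the task-rest difference combines two noisy estimates
    return (z_alpha + z_beta) * math.sqrt(2) * cv_pct / math.sqrt(n)

# With an assumed 6% within-subject CV and 20 participants, changes of
# roughly 5% should be resolvable; an assumed 15% CV pushes the threshold
# above the ~10% effects discussed in the text.
print(round(min_detectable_pct_change(6.0, 20), 1))
print(round(min_detectable_pct_change(15.0, 20), 1))
```

This is why omitting measurement variability makes null fMRS results hard to interpret: without the CV, a reader cannot tell whether a 10% modulation fell within or beyond the study's detection limit.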
Summary, Conclusions, and Future Directions
This survey of the extant literature on the brain underpinnings of intelligence revealed by ¹H and ³¹P MRS underscores the discrepancy between the great potential of a technique that opens a window onto in vivo assessment of brain chemistry and findings that reveal small effects and show little consistency among studies. It seems that the curse of the “10%-of-the-variance barrier” hovers over this field of inquiry just as it does over the large body of studies that have attempted to relate intelligence to multiple indices of information processing; that is, correlations between any measure of intelligence and any measure of a physiological or neural property of the brain tend to congregate
around the .20–.40 values (Hunt, 1980). Nonetheless, we espouse an optimistic view of the future research that can realize the promise of MRS. Here we outline the steps that, in our opinion, can improve the understanding of the neurochemical and energetic mechanisms of human reasoning and of individual differences therein.
1. Thus far, MRS studies of the roles of neurotransmitters and energy metabolites in intelligence have been conducted almost exclusively on 1.5 Tesla and 3 Tesla systems. With the increasing availability of stronger (e.g., 7 Tesla) devices, greater temporal and spatial resolution can be attained. Typical 1.5 Tesla (and even some 3 Tesla) MRS studies face significant challenges in resolving glutamate and glutamine in the ¹H MRS spectrum. Because the latter is both a precursor and a breakdown product of the former, using Glx as an outcome variable hampers the mechanistic understanding of glutamate’s role in cognition, as it confounds differences and changes in glutamate release with variations in turnover and synthesis.
2. By their very nature, neurochemical changes in the brain’s neurons are fleeting. Measuring “stationary” levels of glutamate and GABA, integrated over a wide time window, may not be useful in investigations that target normal brain–cognition relationships. Comparing neurotransmitter or energy metabolite levels between task-active conditions reflecting defined cognitive processes and non-task-active conditions over shorter time windows, however, is crucial for advancing the field (Stanley & Raz, 2018). In advancing that goal, high-field systems are of critical importance, as they allow collecting refined spectra with much improved temporal resolution. Significant progress in studying glutamate modulation within the encoding–retrieval cycles of memory tasks, obtained on clinical 3 Tesla systems, gives a taste of what can be achieved on high-field devices (Stanley et al., 2017).
3. The brain is a structurally and functionally heterogeneous organ.
Therefore, the ability to collect data from multiple regions is critical for understanding the mechanisms of intelligent behavior and for bringing MRS studies into agreement with the decades of localization findings generated by lesion, structural MRI, and PET studies. Here again, an increase in magnetic field strength is necessary, for, although reasonably small voxels can be targeted with 3 Tesla systems, collecting data from multiple locations simultaneously is unrealistic under the constraints of human subjects’ tolerance of lengthy in-scanner procedures.
4. The same logic applies to targeting more than one chemical compound, such as glutamate and GABA, with comparable precision, especially in ¹H fMRS studies. At the currently standard 3 Tesla field strength, this is an
unrealistic aim. The increasing availability of 7 Tesla systems is expected to give a significant boost to such double-target studies and to allow evaluation of the complex dynamics of excitation and inhibition in a living human performing cognitive tasks (An, Araneta, Johnson, & Shen, 2018).
5. On the cognitive side of things, a more systematic approach is in order. Intelligence is a complex construct defined by multiple indicators, some of which have a stronger association with it than others. Moving from reliance on arbitrary indices of memory, executive functions, and speed of processing – all of which are related to general intelligence – to deliberately selected tasks that produce indicators with the highest g-loading will advance understanding of the associations between brain function and intellectual performance.
6. The hope is that, with advancement in instrumentation and data processing, it will become possible to gauge neurochemical and metabolic changes in the course of performing multiple g-related tasks in the same individuals.
7. Understanding of the role played by modulation of the brain’s energy substrates and its key neurotransmitters in cognitive processes can be advanced by examining their change, within multiple time windows, over the course of life-span development. Multi-occasion longitudinal studies can also clarify the role of rapid neurotransmitter fluctuations vs. long-term plastic changes of neuropil in supporting the development, maintenance, and decline of cognitive abilities.
8. Success in interpreting the results of noninvasive MRS studies in humans hinges on validating the indices produced by these techniques in relevant animal models. With respect to higher cognitive functions and the gathering of sensory information in support of the latter, it is imperative to develop neuroimaging experimental paradigms for primates.
To date, only one primate study of this kind has been reported (Lacreuse et al., 2018), but the model employed in that investigation, the common marmoset, appears very promising for future developments, especially if the MRS experiments are accompanied by more invasive and precise probes of neurotransmitter changes within cognitive processing cycles.
In summary, further advancement in understanding the neural underpinnings of intelligence hinges on several developments. In no particular order, these are: improvement of instruments, with an emphasis on higher magnetic field strengths; greater temporal and spatial resolution of fMRS; expansion of animal modeling studies harmonized with investigations in humans; integration of MRS findings with structural and BOLD imaging; and simultaneous within-person assessment of multiple indicators of intelligence as a construct. With such capacious room to grow, both ¹H and ³¹P MRS should be able to deliver on their promise of revealing important neurochemical and metabolic underpinnings of individual differences in intelligence.
Acknowledgment
This work was supported by National Institutes of Health grants R01-AG011230 (NR) and R21-AG059160 (JAS and NR).
References
An, L., Araneta, M. F., Johnson, C., & Shen, J. (2018). Simultaneous measurement of glutamate, glutamine, GABA, and glutathione by spectral editing without subtraction. Magnetic Resonance in Medicine, 80(5), 1776–1717. doi: 10.1002/mrm.27172
Andres, R. H., Ducray, A. D., Schlattner, U., Wallimann, T., & Widmer, H. R. (2008). Functions and effects of creatine in the central nervous system. Brain Research Bulletin, 76(4), 329–343. doi: 10.1016/j.brainresbull.2008.02.035
Apsvalka, D., Gadie, A., Clemence, M., & Mullins, P. G. (2015). Event-related dynamics of glutamate and BOLD effects measured using functional magnetic resonance spectroscopy (fMRS) at 3T in a repetition suppression paradigm. Neuroimage, 118, 292–300.
Attwell, D., & Laughlin, S. B. (2001). An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow and Metabolism, 21(10), 1133–1145. doi: 10.1097/00004647-200110000-00001
Bäckman, L., Lindenberger, U., Li, S. C., & Nyberg, L. (2010). Linking cognitive aging to alterations in dopamine neurotransmitter functioning: Recent data and future avenues. Neuroscience and Biobehavioral Reviews, 34(5), 670–677. doi: 10.1016/j.neubiorev.2009.12.008
Barreto, F. R., Costa, T. B., Landim, R. C., Castellano, G., & Salmon, C. E. (2014). (31)P-MRS using visual stimulation protocols with different durations in healthy young adult subjects. Neurochemical Research, 39(12), 2343–2350. doi: 10.1007/s11064-014-1433-9
Bartha, R., Drost, D. J., Menon, R. S., & Williamson, P. C. (2000). Comparison of the quantification precision of human short echo time (1)H spectroscopy at 1.5 and 4.0 Tesla. Magnetic Resonance in Medicine, 44(2), 185–192.
Baslow, M. H. (2000). Functions of N-acetyl-L-aspartate and N-acetyl-L-aspartylglutamate in the vertebrate brain: Role in glial cell-specific signaling. Journal of Neurochemistry, 75(2), 453–459.
Baslow, M. H.
(2003). Brain N-acetylaspartate as a molecular water pump and its role in the etiology of Canavan disease: A mechanistic explanation. Journal of Molecular Neuroscience, 21(3), 185–190. Bednarik, P., Tkac, I., Giove, F., DiNuzzo, M., Deelchand, D. K., Emir, U. E., . . . Mangia, S. (2015). Neurochemical and BOLD responses during neuronal activation measured in the human visual cortex at 7 T. Journal of Cerebral Blood Flow and Metabolism, 35(4), 601–610. Bhakoo, K., & Pearce, D. (2000). In vitro expression of N-acetyl aspartate by oligodendrocytes: Implications for proton magnetic resonance spectroscopy signal in vivo. Journal of Neurochemistry, 74(1), 254–262.
Neurochemical Correlates of Intelligence
Chakraborty, G., Mekala, P., Yahya, D., Wu, G., & Ledeen, R. W. (2001). Intraneuronal N-acetylaspartate supplies acetyl groups for myelin lipid synthesis: Evidence for myelin-associated aspartoacylase. Journal of Neurochemistry, 78(4), 736–745. Chalavi, S., Pauwels, L., Heise, K.-F., Zivari Adab, H., Maes, C., Puts, N. A. J., . . . Swinnen, S. P. (2018). The neurochemical basis of the contextual interference effect. Neurobiology of Aging, 66, 85–96. doi: 10.1016/j. neurobiolaging.2018.02.014. Chen, W., Zhu, X. H., Adriany, G., & Ugurbil, K. (1997). Increase of creatine kinase activity in the visual cortex of human brain during visual stimulation: A 31P magnetization transfer study. Magnetic Resonance in Medicine, 38(4), 551–557. Coupland, N. J., Ogilvie, C. J., Hegadoren, K. M., Seres, P., Hanstock, C. C., & Allen, P. S. (2005). Decreased prefrontal myo-inositol in major depressive disorder. Biological Psychiatry, 57(12), 1526–1534. Da Silva, T., Hafizi, S., Rusjan, P. M., Houle, S., Wilson, A. A., Price, I., . . . Mizrahi, R. (2019). GABA levels and TSPO expression in people at clinical high risk for psychosis and healthy volunteers: A PET-MRS study. Journal of Psychiatry and Neuroscience, 44(2), 111–119. doi: 10.1503/ jpn.170201. Damoiseaux, J. S., Viviano, R. P., Yuan, P., & Raz, N. (2016). Differential effect of age on posterior and anterior hippocampal functional connectivity. NeuroImage, 133, 468–476. doi: 10.1016/j.neuroimage.2016.03.047. de Graaf, A. A., & Bovee, W. M. (1990). Improved quantification of in vivo 1H NMR spectra by optimization of signal acquisition and processing and by incorporation of prior knowledge into the spectral fitting. Magnetic Resonance in Medicine, 15(2), 305–319. De Stefano, N., Matthews, P. M., Fu, L., Narayanan, S., Stanley, J., Francis, G. S., . . . Arnold, D. L. (1998). Axonal damage correlates with disability in patients with relapsing-remitting multiple sclerosis. 
Results of a longitudinal magnetic resonance spectroscopy study. Brain, 121(Pt 8), 1469–1477. Del Tufo, S. N., Frost, S. J., Hoeft, F., Cutting, L. E., Molfese, P. J., Mason, G. F., . . . Pugh, K. R. (2018). Neurochemistry predicts convergence of written and spoken language: A proton magnetic resonance spectroscopy study of crossmodal language integration. Frontiers in Psychology, 9, 1507. doi: 10.3389/ fpsyg.2018.01507. eCollection 2018. Dienel, G. A. (2012). Fueling and imaging brain activation. ASN Neuro, 4(5), 267–321. doi: 10.1042/AN20120021. Du, F., Cooper, A., Lukas, S. E., Cohen, B. M., & Ongur, D. (2013). Creatine kinase and ATP synthase reaction rates in human frontal lobe measured by (31)P magnetization transfer spectroscopy at 4T. Magnetic Resonance Imaging, 31(1), 102–108. doi: 10.1016/j.mri.2012.06.018 Duncan, J., Emslie, H., Williams, P., Johnson, R., & Freer, C. (1996). Intelligence and the frontal lobe: The organization of goal-directed behavior. Cognitive Psychology, 30(3), 257–303. Duncan, N. W., Wiebking, C., & Northoff, G. (2014). Associations of regional GABA and glutamate with intrinsic and extrinsic neural activity in
n. raz and j. a. stanley
humans – A review of multimodal imaging studies. Neuroscience and Biobehavioral Reviews, 47, 36–52. doi: 10.1016/j.neubiorev.2014.07.016. Fukushima, E., & Roeder, S. B. W. (1981). Experimental pulse NMR: A nuts and bolts approach. Reading, MA: Addison-Wesley. Galton, F. (1883). Inquiries into human faculty. London: Macmillan. Garwood, M., & DelaBarre, L. (2001). The return of the frequency sweep: Designing adiabatic pulses for contemporary NMR. Journal of Magnetic Resonance, 153(2), 155–177. doi: 10.1006/jmre.2001.2340. Geddes, J. W., Panchalingam, K., Keller, J. N., & Pettegrew, J. W. (1997). Elevated phosphocholine and phosphatidylcholine following rat entorhinal cortex lesions. Neurobiology of Aging, 18(3), 305–308. Goldstein, G., Panchalingam, K., McClure, R. J., Stanley, J. A., Calhoun, V. D., Pearlson, G. D., & Pettegrew, J. W. (2009). Molecular neurodevelopment: An in vivo 31P-1H MRSI study. Journal of the International Neuropsychological Society, 15(5), 671–683. Govindaraju, V., Young, K., & Maudsley, A. A. (2000). Proton NMR chemical shifts and coupling constants for brain metabolites. NMR in Biomedicine, 13(3), 129–153. Harper, D. G., Joe, E. B., Jensen, J. E., Ravichandran, C., & Forester, B. P. (2016). Brain levels of high-energy phosphate metabolites and executive function in geriatric depression. International Journal of Geriatric Psychiatry, 31(11), 1241–1249. doi: 10.1002/gps.4439. Harris, A. D., Saleh, M. G., & Edden, R. A. E. (2017). Edited 1H magnetic resonance spectroscopy in vivo: Methods and metabolites. Magnetic Resonance in Medicine, 77(4), 1377–1389. doi: 10.1002/mrm.26619. Howarth, C., Gleeson, P., & Attwell, D. (2012). Updated energy budgets for neural computation in the neocortex and cerebellum. Journal of Cerebral Blood Flow and Metabolism, 32(7), 1222–1232. doi: 10.1038/jcbfm.2012.35. Huang, Z., Davis, H. H., Yue, Q., Wiebking, C., Duncan, N.
W., Zhang, J., . . . Northoff, G. (2015). Increase in glutamate/glutamine concentration in the medial prefrontal cortex during mental imagery: A combined functional MRS and fMRI study. Human Brain Mapping, 36(8), 3204–3212. doi: 10.1002/hbm.22841 Hunt, E. (1980). Intelligence as an information-processing concept. British Journal of Psychology, 71(4), 449–474. Ip, B., Berrington, A., Hess, A. T., Parker, A. J., Emir, U. E., & Bridge, H. (2017). Combined fMRI-MRS acquires simultaneous glutamate and BOLD-fMRI signals in the human brain. NeuroImage, 155, 113–119. Isaacson, J. S., & Scanziani, M. (2011). How inhibition shapes cortical activity. Neuron, 72(2), 231–243. doi: 10.1016/j.neuron.2011.09.027. Jahng, G. H., Oh, J., Lee, D. W., Kim, H. G., Rhee, H. Y., Shin, W., . . . Ryu, C. W. (2016). Glutamine and glutamate complex, as measured by functional magnetic resonance spectroscopy, alters during face-name association task in patients with mild cognitive impairment and Alzheimer’s disease. Journal of Alzheimers Disease, 53(2), 745. doi: 10.3233/JAD-169004.
Jung, R. E., Gasparovic, C., Chavez, R. S., Caprihan, A., Barrow, R., & Yeo, R. A. (2009). Imaging intelligence with proton magnetic resonance spectroscopy. Intelligence, 37(2), 192–198. doi: 10.1016/j.intell.2008.10.009. Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Science, 30(2), 135–154; discussion 154–187. Kapogiannis, D., Reiter, D. A., Willette, A. A., & Mattson, M. P. (2013). Posteromedial cortex glutamate and GABA predict intrinsic functional connectivity of the default mode network. Neuroimage, 64, 112–119. doi: 10.1016/j.neuroimage.2012.09.029. Kato, T., Murashita, J., Shioiri, T., Hamakawa, H., & Inubushi, T. (1996) Effect of photic stimulation on energy metabolism in the human brain measured by 31P-MR spectroscopy. Journal of Neuropsychiatry and Clinical Neuroscience, 8(4), 417–422. Keltner, J. R., Wald, L. L., Frederick, B. D., & Renshaw, P. F. (1997): In vivo detection of GABA in human brain using a localized double-quantum filter technique. Magnetic Resonance in Medicine, 37(3), 366–371. Kemp, G. J. (2000). Non-invasive methods for studying brain energy metabolism: What they show and what it means. Developmental Neuroscience, 22(5–6), 418–428. doi: 10.1159/000017471. Kim, H., McGrath, B. M., & Silverstone, P. H. (2005). A review of the possible relevance of inositol and the phosphatidylinositol second messenger system (PI-cycle) to psychiatric disorders – Focus on magnetic resonance spectroscopy (MRS) studies. Human Psychopharmacology, 20(5), 309–326. Kroll, J. L., Steele, A. M., Pinkham, A. E., Choi, C., Khan, D. A., Patel, S. V., . . . Ritz, T. (2018). Hippocampal metabolites in asthma and their implications for cognitive function. Neuroimage Clinical, 19, 213–221. doi: 10.1016/j. nicl.2018.04.012. eCollection 2018. Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working-memory capacity?! 
Intelligence, 14(4), 389–433. Lacreuse, A., Moore, C. M., LaClair, M., Payne, L., & King, J. A. (2018). Glutamine/ glutamate (Glx) concentration in prefrontal cortex predicts reversal learning performance in the marmoset. Behavioral Brain Research, 346, 11–15. doi: 10.1016/j.bbr.2018.01.025. Lauritzen, M., Mathiesen, C., Schaefer, K., & Thomsen, K. J. (2012). Neuronal inhibition and excitation, and the dichotomic control of brain hemodynamic and oxygen responses. NeuroImage, 62(2), 1040–1050. doi: 10.1016/j. neuroimage.2012.01.040 Lin, Y., Stephenson, M. C., Xin, L., Napolitano, A., & Morris, P. G. (2012). Investigating the metabolic changes due to visual stimulation using functional proton magnetic resonance spectroscopy at 7 T. Journal of Cerebral Blood Flow and Metabolism, 32(8), 1484–1495. doi: 10.1038/ jcbfm.2012.33. Lindner, M., Bell, T., Iqbal, S., Mullins, P. G., & Christakou, A. (2017). In vivo functional neurochemistry of human cortical cholinergic function during visuospatial attention. PLoS One, 12(2), e0171338. doi: 10.1371/journal. pone.0171338.
Maffei, A. (2017). Fifty shades of inhibition. Current Opinion in Neurobiology, 43, 43–47. doi: 10.1016/j.conb.2016.12.003. Mangia, S., Tkac, I., Gruetter, R., Van de Moortele, P. F., Maraviglia, B., & Ugurbil, K. (2007). Sustained neuronal activation raises oxidative metabolism to a new steady-state level: Evidence from 1H NMR spectroscopy in the human visual cortex. Journal of Cerebral Blood Flow and Metabolism, 27(5), 1055–1063. doi: 10.1038/sj.jcbfm.9600401. Mazoyer, B., Zago, L., Mellet, E., Bricogne, S., Etard, O., Houdé, O., . . . Tzourio-Mazoyer, N. (2001). Cortical networks for working memory and executive functions sustain the conscious resting state in man. Brain Research Bulletin, 54(3), 287–298. McEwen, B. S., & Morrison, J. H. (2013). The brain on stress: Vulnerability and plasticity of the prefrontal cortex over the life course. Neuron, 79(1), 16–29. doi: 10.1016/j.neuron.2013.06.028. McIlwain, H., & Bachelard, H. S. (1985). Biochemistry and the central nervous system, vol. 5. Edinburgh: Churchill Livingstone. McRobbie, D., Moore, E., Graves, M., & Prince, M. (2006). MRI from picture to proton. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511545405. Mergenthaler, P., Lindauer, U., Dienel, G. A., & Meisel, A. (2013). Sugar for the brain: The role of glucose in physiological and pathological brain function. Trends in Neurosciences, 36(10), 587–597. doi: 10.1016/j.tins.2013.07.001. Miller, B. L. (1991). A review of chemical issues in 1H NMR spectroscopy: N-acetyl-L-aspartate, creatine and choline. NMR in Biomedicine, 4(2), 47–52. Mlynárik, V., Gambarota, G., Frenkel, H., & Gruetter, R. (2006). Localized short-echo-time proton MR spectroscopy with full signal-intensity acquisition. Magnetic Resonance in Medicine, 56(5), 965–970. doi: 10.1002/mrm.21043. Mochel, F., N'Guyen, T. M., Deelchand, D., Rinaldi, D., Valabregue, R., Wary, C., . . . Henry, P. G. (2012). Abnormal response to cortical activation in early stages of Huntington disease.
Movement Disorders, 27(7), 907–910. doi: 10.1002/mds.25009. Murashita, J., Kato, T., Shioiri, T., Inubushi, T., & Kato, N. (1999). Age dependent alteration of metabolic response to photic stimulation in the human brain measured by 31P MR-spectroscopy. Brain Research, 818(1), 72–76. Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33(7), 1004–1023. doi: 10.1016/j.neubiorev.2009.04.001. Patel, T., Blyth, J. C., Griffiths, G., Kelly, D., & Talcott, J. B. (2014). Moderate relationships between NAA and cognitive ability in healthy adults: Implications for cognitive spectroscopy. Frontiers in Human Neuroscience, 14(8), 39. doi: 10.3389/fnhum.2014.00039. eCollection 2014. Pettegrew, J. W., Klunk, W. E., Panchalingam, K., McClure, R. J., & Stanley, J. A. (2000). Molecular insights into neurodevelopmental and neurodegenerative diseases. Brain Research Bulletin, 53(4), 455–469. doi: S0361-9230(00)00376-2 [pii]. Pettegrew, J. W., Panchalingam, K., Withers, G., McKeag, D., & Strychor, S. (1990). Changes in brain energy and phospholipid metabolism during development and aging in the Fischer 344 rat. Journal of Neuropathology and Experimental Neurology, 49(3), 237–249.
Pouwels, P. J., Brockmann, K., Kruse, B., Wilken, B., Wick, M., Hanefeld, F., & Frahm, J. (1999). Regional age dependence of human brain metabolites from infancy to adulthood as detected by quantitative localized proton MRS. Pediatric Research, 46(4), 474–485. Pradhan, S., Bonekamp, S., Gillen, J. S., Rowland, L. M., Wijtenburg, S. A., Edden, R. A. E., & Barker, P. B. (2015). Comparison of single voxel brain MRS at 3T and 7T using 32-channel head coils. Magnetic Resonance Imaging, 33(8), 1013–1018. doi: 10.1016/j.mri.2015.06.003. Provencher, S. W. (1993). Estimation of metabolite concentrations from localized in vivo proton NMR spectra. Magnetic Resonance in Medicine, 30(6), 672–679. Rango, M., Bonifati, C., & Bresolin, N. (2006). Parkinson's disease and brain mitochondrial dysfunction: A functional phosphorus magnetic resonance spectroscopy study. Journal of Cerebral Blood Flow and Metabolism, 26(2), 283–290. doi: 10.1038/sj.jcbfm.9600192. Rango, M., Castelli, A., & Scarlato, G. (1997). Energetics of 3.5 s neural activation in humans: A 31P MR spectroscopy study. Magnetic Resonance in Medicine, 38(6), 878–883. Rijpma, A., van der Graaf, M., Meulenbroek, O., Olde Rikkert, M. G. M., & Heerschap, A. (2018). Altered brain high-energy phosphate metabolism in mild Alzheimer's disease: A 3-dimensional 31P MR spectroscopic imaging study. NeuroImage: Clinical, 18, 254–261. doi: 10.1016/j.nicl.2018.01.031. Ross, B., & Bluml, S. (2001). Magnetic resonance spectroscopy of the human brain. The Anatomical Record, 265(2), 54–84. Rothman, D. L., Petroff, O. A., Behar, K. L., & Mattson, R. H. (1993). Localized 1H NMR measurements of gamma-aminobutyric acid in human brain in vivo. Proceedings of the National Academy of Sciences USA, 90(12), 5662–5666. Sappey-Marinier, D., Calabrese, G., Fein, G., Hugg, J. W., Biggins, C., & Weiner, M. W. (1992).
Effect of photic stimulation on human visual cortex lactate and phosphates using 1H and 31P magnetic resonance spectroscopy. Journal of Cerebral Blood Flow and Metabolism, 12(4), 584–592. doi: 10.1038/jcbfm.1992.82. Schaller, B., Mekle, R., Xin, L., Kunz, N., & Gruetter, R. (2013). Net increase of lactate and glutamate concentration in activated human visual cortex detected with magnetic resonance spectroscopy at 7 tesla. Journal of Neuroscience Research, 91(8), 1076–1083. doi: 10.1002/jnr.23194. Schaller, B., Xin, L., O'Brien, K., Magill, A. W., & Gruetter, R. (2014). Are glutamate and lactate increases ubiquitous to physiological activation? A (1)H functional MR spectroscopy study during motor activation in human brain at 7 Tesla. NeuroImage, 93(Pt 1), 138–145. doi: 10.1016/j.neuroimage.2014.02.016. Scheenen, T. W. J., Klomp, D. W. J., Wijnen, J. P., & Heerschap, A. (2008). Short echo time 1H-MRSI of the human brain at 3T with minimal chemical shift displacement errors using adiabatic refocusing pulses. Magnetic Resonance in Medicine, 59(1), 1–6. doi: 10.1002/mrm.21302. Schlattner, U., Tokarska-Schlattner, M., & Wallimann, T. (2006). Mitochondrial creatine kinase in human health and disease. Biochimica et Biophysica Acta – Molecular Basis of Disease, 1762(2), 164–180. doi: 10.1016/j.bbadis.2005.09.004.
Shoubridge, E. A., Briggs, R. W., & Radda, G. K. (1982). 31P NMR saturation transfer measurements of the steady state rates of creatine kinase and ATP synthetase in the rat brain. FEBS Letters, 140(2), 289–292. doi: 10.1016/0014-5793(82)80916-2. Simmons, M. L., Frondoza, C. G., & Coyle, J. T. (1991). Immunocytochemical localization of N-acetyl-aspartate with monoclonal antibodies. Neuroscience, 45(1), 37–45. doi: 10.1016/0306-4522(91)90101-s. Sokoloff, L. (1991). Measurement of local cerebral glucose utilization and its relation to local functional activity in the brain. Advances in Experimental Medicine and Biology, 291, 21–42. doi: 10.1007/978-1-4684-5931-5994. Sokoloff, L. (1993). Function-related changes in energy metabolism in the nervous system: Localization and mechanisms. Keio Journal of Medicine, 42(3), 95–103. Somogyi, P., Tamás, G., Lujan, R., & Buhl, E. H. (1998). Salient features of synaptic organisation in the cerebral cortex. Brain Research Reviews, 26(2–3), 113–135. Stagg, C. J. (2014). Magnetic resonance spectroscopy as a tool to study the role of GABA in motor-cortical plasticity. Neuroimage, 86, 19–27. Stanley, J. A. (2002). In vivo magnetic resonance spectroscopy and its application to neuropsychiatric disorders. Canadian Journal of Psychiatry, 47(4), 315–326. Stanley, J., Burgess, A., Khatib, D., Ramaseshan, K., Arshad, M., Wu, H., & Diwadkar, V. (2017). Functional dynamics of hippocampal glutamate during associative learning assessed with in vivo 1H functional magnetic resonance spectroscopy. NeuroImage, 153, 189–197. doi: 10.1016/j.neuroimage.2017.03.051. Stanley, J. A., Kipp, H., Greisenegger, E., MacMaster, F. P., Panchalingam, K., Keshavan, M. S., . . . Pettegrew, J. W. (2008). Evidence of developmental alterations in cortical and subcortical regions of children with attention-deficit/hyperactivity disorder: A multivoxel in vivo phosphorus 31 spectroscopy study. Archives of General Psychiatry, 65(12), 1419–1428.
doi: 10.1001/archgenpsychiatry.2008.503. Stanley, J. A., & Pettegrew, J. W. (2001). A post-processing method to segregate and quantify the broad components underlying the phosphodiester spectral region of in vivo 31P brain spectra. Magnetic Resonance in Medicine, 45(3), 390–396. Stanley, J. A., Pettegrew, J. W., & Keshavan, M. S. (2000). Magnetic resonance spectroscopy in schizophrenia: Methodological issues and findings – Part I. Biological Psychiatry, 48(5), 357–368. doi: 10.1016/S0006-3223(00)00949-5. Stanley, J. A., & Raz, N. (2018). Functional magnetic resonance spectroscopy: The "new" MRS for cognitive neuroscience and psychiatry research. Frontiers in Psychiatry – Neuroimaging and Stimulation, 9, 76. doi: 10.3389/fpsyt.2018.00076. Sui, J., Huster, R., Yu, Q., Segall, J. M., & Calhoun, V. D. (2014). Function-structure associations of the brain: Evidence from multimodal connectivity and covariance studies. Neuroimage, 102(Pt 1), 11–23. doi: 10.1016/j.neuroimage.2013.09.044. Tallan, H. (1957). Studies on the distribution of N-acetyl-L-aspartic acid in brain. Journal of Biological Chemistry, 224(1), 41–45.
Tatti, R., Haley, M. S., Swanson, O. K., Tselha, T., & Maffei, A. (2017). Neurophysiology and regulation of the balance between excitation and inhibition in neocortical circuits. Biological Psychiatry, 81(10), 821–831. doi: 10.1016/j.biopsych.2016.09.017. Taylor, R., Schaefer, B., Densmore, M., Neufeld, R. W. J., Rajakumar, N., Williamson, P. C., & Théberge, J. (2015). Increased glutamate levels observed upon functional activation in the anterior cingulate cortex using the Stroop Task and functional spectroscopy. Neuroreport, 26(3), 107–112. doi: 10.1097/WNR.0000000000000309. Thielen, J. W., Hong, D., Rohani Rankouhi, S., Wiltfang, J., Fernández, G., Norris, D. G., & Tendolkar, I. (2018). The increase in medial prefrontal glutamate/glutamine concentration during memory encoding is associated with better memory performance and stronger functional connectivity in the human medial prefrontal-thalamus-hippocampus network. Human Brain Mapping, 39(6), 2381–2390. doi: 10.1002/hbm.24008. Tkac, I., Andersen, P., Adriany, G., Merkle, H., Ugurbil, K., & Gruetter, R. (2001). In vivo 1H NMR spectroscopy of the human brain at 7 T. Magnetic Resonance in Medicine, 46(3), 451–456. Tkác, I., Starcuk, Z., Choi, I. Y., & Gruetter, R. (1999). In vivo 1H NMR spectroscopy of rat brain at 1 ms echo time. Magnetic Resonance in Medicine, 41(4), 649–656. Ugurbil, K., Adriany, G., Andersen, P., Chen, W., Garwood, M., Gruetter, R., . . . Zhu, X. H. (2003). Ultrahigh field magnetic resonance imaging and spectroscopy. Magnetic Resonance Imaging, 21(10), 1263–1281. Urenjak, J., Williams, S. R., Gadian, D. G., & Noble, M. (1993). Proton nuclear magnetic resonance spectroscopy unambiguously identifies different neural cell types. Journal of Neuroscience, 13(3), 981–989. van de Bank, B. L., Maas, M. C., Bains, L. J., Heerschap, A., & Scheenen, T. W. J. (2018). Is visual activation associated with changes in cerebral high-energy phosphate levels? Brain Structure and Function, 223, 2721–2731.
van den Heuvel, M. P., Stam, C. J., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. Journal of Neuroscience, 29(23), 7619–7624. doi: 10.1523/JNEUROSCI.1443-09.2009. van der Knaap, M. S., van der Grond, J., Luyten, P. R., den Hollander, J. A., Nauta, J. J., & Valk, J. (1992). 1H and 31P magnetic resonance spectroscopy of the brain in degenerative cerebral disorders. Annals of Neurology, 31(2), 202–211. Wallimann, T., Wyss, M., Brdiczka, D., Nicolay, K., & Eppenberger, H. M. (1992). Intracellular compartmentation, structure and function of creatine kinase isoenzymes in tissues with high and fluctuating energy demands: The “phosphocreatine circuit” for cellular energy homeostasis. Biochemical Journal, 281(Pt 1), 21–40. doi: 10.1042/bj2810021. Wijtenburg, S. A., McGuire, S. A., Rowland, L. M., Sherman, P. M., Lancaster, J. L., Tate, D. F., . . . Kochunov, P. (2013). Relationship between fractional anisotropy of cerebral white matter and metabolite concentrations measured using (1)H magnetic resonance spectroscopy in healthy adults. Neuroimage, 66, 161–168. doi: 10.1016/j.neuroimage.2012.10.014.
Wilman, A. H., & Allen, P. S. (1995). Yield enhancement of a double-quantum filter sequence designed for the edited detection of GABA. Journal of Magnetic Resonance B, 109(2), 169–174. Woodcock, E. A., Anand, C., Khatib, D., Diwadkar, V. A., & Stanley, J. A. (2018). Working memory modulates glutamate levels in the dorsolateral prefrontal cortex during (1)H fMRS. Frontiers in Psychiatry, 9, 66. Epub 2018/03/22. doi: 10.3389/fpsyt.2018.00066. Yang, S., Hu, J., Kou, Z., & Yang, Y. (2008). Spectral simplification for resolved glutamate and glutamine measurement using a standard STEAM sequence with optimized timing parameters at 3, 4, 4.7, 7, and 9.4T. Magnetic Resonance in Medicine, 59(2), 236–244. doi: 10.1002/mrm.21463. Yeo, R. A., Hill, D., Campbell, R., Vigil, J., & Brooks, W. M. (2000). Developmental instability and working memory ability in children: A magnetic resonance spectroscopy investigation. Developmental Neuropsychology, 17(2), 143–159. Yuksel, C., Du, F., Ravichandran, C., Goldbach, J. R., Thida, T., Lin, P., . . . Cohen, B. M. (2015). Abnormal high-energy phosphate molecule metabolism during regional brain activation in patients with bipolar disorder. Molecular Psychiatry, 20(9), 1079–1084. doi: 10.1038/mp.2015.13. Zhu, X.-H., Qiao, H., Du, F., Xiong, Q., Liu, X., Zhang, X., . . . Chen, W. (2012). Quantitative imaging of energy expenditure in human brain. NeuroImage, 60(4), 2107–2117. doi: 10.1016/j.neuroimage.2012.02.013.
PART IV
Predictive Modeling Approaches
16 Predicting Individual Differences in Cognitive Ability from Brain Imaging and Genetics Kevin M. Anderson and Avram J. Holmes
Introduction
The study of intelligence, or general cognitive ability, is one of the earliest avenues of modern psychological enquiry (Spearman, 1904). A consistent goal of this field is the development of cognitive measures that predict real-world outcomes, ranging from academic performance, health (Calvin et al., 2017), and psychopathology (Woodberry, Giuliano, & Seidman, 2008) to mortality and morbidity rates (Batty, Deary, & Gottfredson, 2007). Despite evidence linking intelligence with a host of important life outcomes, we remain far from a mechanistic understanding of how neurobiological processes contribute to individual differences in general cognitive ability. Excitingly for researchers, advances in predictive statistical modeling, the emergence of well-powered imaging and genetic datasets, and a cultural shift toward open-access data may allow for behavioral prediction at the level of a single individual (Miller et al., 2016; Poldrack & Gorgolewski, 2014). There is growing interest in generalizable brain- and genetic-based predictive models of intelligence (Finn et al., 2015; Lee et al., 2018; see also Chapter 17, by Willoughby and Lee). In the short term, statistical models predicting cognitive ability may yield insight into underlying neurobiology; in the long term, they may inform empirically driven and individualized medical and educational interventions. Here, we take a critical look at how recent advances in genetic and brain imaging methods can be used for the prediction of individual differences in cognitive ability. In doing so, we cover prior work defining cognitive ability and mapping biological correlates of intelligence. We discuss the importance of prioritizing statistical models that are both predictive and interpretable before highlighting recent progress and future directions in the genetic and neuroimaging prediction of cognitive ability.
We conclude with a brief discussion of the ethical implications and limitations associated with the creation of predictive models.
k. m. anderson and a. j. holmes
What Are Cognitive Abilities?
Traditionally in neuroscience, broad cognitive ability is quantified using one or more standardized tests, including the Wechsler Adult Intelligence Scale (WAIS), Raven's Progressive Matrices, or related measures of fluid intelligence or reasoning capacity (Deary, Penke, & Johnson, 2010). Through these, researchers hope to estimate an individual's learning ability, comprehension, and capacity for reasoning and abstraction. This broad description of cognitive ability is sometimes called g, standing for general intelligence, which reflects the observation that individuals who do well on one test also tend to do well on others (Haier, 2017). Measures of general cognitive ability have been criticized as overly reified – that is, as translating an abstraction or hypothetical construct into a discrete biological entity (Nisbett et al., 2012). Although there is consensus on the stability and predictive utility of psychometrically defined general cognitive ability, this abstract factor has been subject to debate regarding its interpretation (Gray & Thompson, 2004) and construct validity – the degree to which a test unambiguously reflects what it aims to measure. Estimates of general cognitive ability show reliable associations with important real-world outcomes, but the biological mechanisms underlying intelligence remain a topic of focused study (Genç et al., 2018; Goriounova & Mansvelder, 2019). No single measure of cognitive ability is perfect, free from cultural bias, or immune to misuse (Sternberg, 2004). Indeed, perhaps no other subject in psychology has provoked more debate and controversy than the study of human intelligence and the associated concept of a unitary factor that broadly supports behavior and cognition.
For instance, researchers have highlighted complementary abilities and behaviors not explicitly assessed through standard batteries (e.g., emotional intelligence; Salovey & Mayer, 1990), although there is evidence for shared variance across emotional and cognitive domains (Barbey, Colom, & Grafman, 2014). Because space does not permit a detailed discussion of this literature, general cognitive ability will be used to illustrate the potential of genetic and brain imaging methods for predicting behavior and individual differences in cognition. Readers should note that these approaches can be leveraged to generate predictions for a range of cognitive abilities and other complex behaviors. Critically, analyses linking the genome and brain biology to cognitive ability should not be taken to imply biological determinism or essentialism. The expression of the genome and the development of the brain are influenced both by stochastic processes as well as complex and bidirectional interactions with the environment (Dor & Cedar, 2018). For instance, even a “simple” genetic measure like heritability – which is the amount of variance in a trait explained by structural genetics – can vary across developmental stages and environments (Kendler & Baker, 2007; Visscher, Hill, & Wray, 2008).
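The heritability concept just defined can be made concrete with a toy calculation. The sketch below uses Falconer's classic twin-based formula, h² = 2(r_MZ − r_DZ), which the chapter does not itself present; it is shown only as an illustration, and the twin correlations used here are hypothetical, not empirical values.

```python
# Toy illustration (not from this chapter): Falconer's formula estimates
# heritability from the difference between monozygotic (MZ) and dizygotic (DZ)
# twin correlations, exploiting the fact that MZ twins share ~100% and DZ
# twins ~50% of segregating genetic variation.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

def shared_env_c2(r_mz: float, r_dz: float) -> float:
    """Shared-environment estimate under the same decomposition: c^2 = 2*r_DZ - r_MZ."""
    return 2.0 * r_dz - r_mz

# Hypothetical twin correlations for a cognitive test score:
h2 = falconer_h2(r_mz=0.75, r_dz=0.45)    # ~0.60
c2 = shared_env_c2(r_mz=0.75, r_dz=0.45)  # ~0.15
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}")
```

Note that such estimates are population- and context-specific, which is exactly the point made above: the same trait can yield different h² values across developmental stages and environments.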
Predicting Cognitive Ability: Brain Imaging and Genetics
Explanation Versus Prediction
We endorse the terminology of Gabrieli, Ghosh, and Whitfield-Gabrieli (2015), who differentiate between in-sample correlation and out-of-sample prediction across three types of models. First, a study is often said to "predict" an outcome by showing within-sample correlations. For instance, individual differences in intelligence may correlate with aspects of concurrently measured brain anatomy (e.g., cortical thickness). This type of correlation is critical for theory-building and for nominating potentially important variables. Second, a study may demonstrate longitudinal correlation within a given sample, for instance by establishing that a brain measure at time 1 correlates with subsequent behavior at time 2. While this approach meets the temporal requirement of predictive forecasting, it does not necessarily satisfy the criterion of generalizability that distinguishes the third class of prediction. That is, a crucial test of a predictive model is whether it explains behavioral variance in an unseen, out-of-sample set of individuals (Gabrieli et al., 2015). This last class of prediction aims for external validity and is arguably the most important for translating genetic or cognitive neuroscientific data into insights suitable for clinical or public health applications. However, we propose an addendum to the three-part conceptualization of Gabrieli et al. (2015), and argue for the prioritization of statistical models that are both predictive and interpretable. The decision to apply predictive and/or inferential methods is one routinely faced by researchers, although the distinction between traditional statistical approaches and machine learning is in many ways arbitrary (Bzdok, Altman, & Krzywinski, 2018; Bzdok & Ioannidis, 2019). We emphasize that these broad categories of predictive and correlational models are mutually informative, and the selection of one method over another is usually based on the inferential vs.
predictive goals of the researcher (Bzdok & Ioannidis, 2019; Yarkoni & Westfall, 2017). In practice, model transparency and biological interpretability are often sacrificed for predictive performance, due in large part to the multivariate nature of the data (Bzdok & Ioannidis, 2019). An average SNP array, for instance, provides information on roughly 500,000–800,000 genomic variants, which can be expanded to upwards of 70 million genomic features using modern SNP imputation techniques. With brain imaging data, even a conservative parcellation consisting of 200 areas would produce 19,900 unique functional relationships. Accordingly, many successful predictive models of behavior employ dimensionality reduction, feature selection, machine learning (e.g., random forests, support vector machines), or specialized forms of regression (e.g., partial least squares, canonical correlation, elastic net) to reduce the number of comparisons and stabilize signal estimates (for a review, see Bzdok & Yeo, 2017). These approaches capture complex multivariate interactions among predictors, although often at the expense of mechanism
k. m. anderson and a. j. holmes
or interpretation. At the end of this chapter, we review the promise of interpretable forms of machine- and deep-learning techniques.

Predictive models of cognitive ability are perhaps most important for the study of human development (Rosenberg, Casey, & Holmes, 2018). Core psychological functions emerge through neurodevelopmental processes and concurrent molecular and genetic cascades that are influenced by the environment (e.g., resource availability, early life stress). Correspondingly, the heritability of general cognitive ability increases across childhood, adolescence, and into adulthood, due in part to amplification processes, sometimes called genotype–environment covariance (Briley & Tucker-Drob, 2013; Haworth et al., 2019). That is, initially small heritable differences in cognitive ability may lead to self-amplifying environmental selection (e.g., parent or teacher investment, self-sought intellectual challenges; Tucker-Drob & Harden, 2012). Although genes are far from deterministic or immutable (Kendler, Turkheimer, Ohlsson, Sundquist, & Sundquist, 2015), genetic variation is fixed at conception and may one day be a useful guide for early interventions or for improving educational outcomes, particularly during developmental periods when behavioral measurement is difficult. Overall prediction of intelligence may even prove useful for individualized psychiatric medicine (Lam et al., 2017), given that general cognitive ability is genetically associated with schizophrenia, bipolar disorder, and other forms of mental illness (Bulik-Sullivan et al., 2015; Hagenaars et al., 2016; Hill, Davies, Liewald, McIntosh, & Deary, 2016). However, biological predictive tools are far from mature and remain subject to serious ethical and technical challenges, which are addressed at the end of this chapter.
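Before turning to specific modalities, the statistical point above can be made concrete: with many predictors and few subjects, a model can achieve a sizable within-sample fit even when the features carry no signal at all, while held-out prediction collapses. The following is a minimal NumPy sketch on purely synthetic data (not an analysis from any study cited here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, p = 100, 100, 50

# Pure-noise "brain features" and an unrelated outcome: by construction,
# there is nothing real to predict.
X_train = rng.standard_normal((n_train, p))
y_train = rng.standard_normal(n_train)
X_test = rng.standard_normal((n_test, p))
y_test = rng.standard_normal(n_test)

# Ordinary least squares fit on the training sample.
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r2(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

in_sample_r2 = r2(y_train, X_train @ beta)    # optimistic, roughly p/n
out_of_sample_r2 = r2(y_test, X_test @ beta)  # near zero or negative
print(round(in_sample_r2, 2), round(out_of_sample_r2, 2))
```

With 50 free parameters and 100 subjects, the in-sample fit is substantial purely through overfitting, while the out-of-sample fit is not, which is why generalization to unseen individuals is the decisive test of a predictive model.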
Neuroimaging Prediction of Cognitive Ability

Identifying the neural correlates of generalized cognitive ability is of great importance, since molecular and biologically mechanistic explanations of intelligence remain largely theoretical (Barbey, 2018; Deary et al., 2010; Jung & Haier, 2007). To date, most research in this area prioritizes inferential hypothesis testing, for instance, to identify features of brain biology associated with individual differences in cognitive ability (Cole et al., 2013; Smith et al., 2015). These studies generally implicate the heteromodal association cortex as important for cognitive ability (Cole, Yarkoni, Repovs, Anticevic, & Braver, 2012; Jung & Haier, 2007). However, across populations, intellectual ability shows generalized and diffuse correlations with brain size (Deary et al., 2010), white matter tracts (Penke et al., 2012), regional brain anatomy (Luders, Narr, Thompson, & Toga, 2009; Tadayon, Pascual-Leone, & Santarnecchi, 2019), brain connectivity (Barbey, 2018; Smith et al., 2015; Song et al., 2008), and functional dynamics (Liégeois et al., 2019; Shine et al., 2019). Taken together, these data suggest that general cognitive ability
Predicting Cognitive Ability: Brain Imaging and Genetics
correlates widely with diverse anatomical and functional brain features measured across multiple modalities.

Investigators have increasingly adopted predictive modeling approaches to simultaneously maximize variance explained and contend with highly multivariate imaging feature sets (e.g., functional connections) and behaviors (e.g., fluid intelligence). In an example from a series of landmark studies, Rosenberg et al. (2015) trained a statistical model to predict attention based on the correlation of blood oxygenation level-dependent (BOLD) time courses (i.e., functional connections). The statistical model predicted attention in the initial group of participants and was also externally predictive of attentional deficits in an out-of-sample cohort of individuals with ADHD. Further work in this domain has revealed that models built on task-based fMRI data yield more accurate predictions of fluid intelligence (20%) than resting-state-only models (< 6%), indicating that individual differences in functional neurocircuitry are accentuated by task-based perturbations of the system (Greene, Gao, Scheinost, & Constable, 2018).

The accuracy of brain-based predictive models will continue to increase as methods are refined (Scheinost et al., 2019) and sample sizes grow (Miller et al., 2016). However, a consensus is emerging that no single imaging feature can explain a large proportion of the variance in any complex behavioral or cognitive trait (Cremers, Wager, & Yarkoni, 2017; Smith & Nichols, 2018), motivating the continued use of multivariate predictive models. At the same time, establishing the biological mechanisms that underlie predictive models will be critical, particularly for disambiguating true signal from artifactual confounds (e.g., head motion; Siegel et al., 2017). Given the cost and expertise required to obtain structural and functional brain imaging data – and the potential for disparate sampling across populations – their use as predictive tools must be justified.
That is, why predict cognitive ability with brain data when it can be measured directly from behavioral assessments? Here, we provide a partial list of potential uses for brain-based predictive models:

1. Neurobiological Inference: Predictive models may identify brain imaging features (e.g., connectivity patterns, cortical thickness) that are most tied to variance in a cognitive process.

2. Outcome Prediction: A brain-based model of cognitive ability may yield unique predictions for personalized health, education, and psychiatric illness. Imaging-based models must demonstrate generalizability, and may benefit from benchmarking against behavior and focusing on developmental periods or populations where psychological assessment is difficult (Woo, Chang, Lindquist, & Wager, 2017).

3. Define Predictive Boundaries: Researchers may survey where predictions work and fail to reveal areas of shared and unique variance – for instance, to reveal trajectories of model accuracy across development (Rosenberg et al., 2018).
4. Multivariate Integration: Machine-learning methods are particularly suited for dealing with high-dimensional and disparate types of data.

5. Measurement Inference: A validated predictive model could be applied to imaging data to impute a trait or variable that was not originally measured.
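Several of these uses hinge on the same basic pipeline. The following is a schematic sketch, on synthetic data, of connectome-based prediction in the spirit of Rosenberg et al. (2015); the edge-selection rule and sign-weighted summary here are illustrative simplifications, not the published method:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sub, n_parcels = 250, 200
n_edges = n_parcels * (n_parcels - 1) // 2  # 19,900 unique connections

# Simulated edge strengths per subject; behavior is driven by a small,
# hidden subset of edges (purely synthetic data).
edges = rng.standard_normal((n_sub, n_edges))
true_edges = rng.choice(n_edges, size=5, replace=False)
behavior = 3 * edges[:, true_edges].sum(axis=1) + rng.standard_normal(n_sub)

train, test = slice(0, 200), slice(200, 250)

# Edge selection on TRAINING subjects only: correlate each edge
# with behavior, keep the strongest.
Xc = edges[train] - edges[train].mean(axis=0)
yc = behavior[train] - behavior[train].mean()
r = (Xc * yc[:, None]).sum(axis=0) / (
    np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
selected = np.argsort(-np.abs(r))[:20]

# Summarize each subject by a sign-weighted sum over selected edges,
# then fit a one-parameter linear model on the training set.
signs = np.sign(r[selected])
x_train = edges[train][:, selected] @ signs
slope, intercept = np.polyfit(x_train, behavior[train], 1)

# The crucial step: evaluate on held-out subjects.
x_test = edges[test][:, selected] @ signs
pred = slope * x_test + intercept
test_r = np.corrcoef(pred, behavior[test])[0, 1]
print(round(test_r, 2))
```

The design choice that matters is that feature selection and model fitting touch only the training subjects; the held-out correlation is the quantity that speaks to out-of-sample prediction.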
Genetic Prediction of Cognitive Ability

Quantitative family and twin studies establish a genetic component to general intelligence (30–50%; Deary, Johnson, & Houlihan, 2009), educational attainment (40%; Branigan, McCallum, & Freese, 2013), and working memory (15–59%; Karlsgodt et al., 2010). Briefly, twin designs separate genetic and environmental variance by comparing monozygotic twins, who have nearly identical genomes, to dizygotic twins, who on average share 50% of their segregating genetic variation (Boomsma, Busjahn, & Peltonen, 2002). Evidence for genetic effects emerges if a trait is more correlated among monozygotic than dizygotic twins (Boomsma et al., 2002). Heritability provides an invaluable "upper bound" estimate of the total variance that can be explained by genetics, but heritability does not imply immutability. Rather, it reflects a point estimate for a given sample within a set environment (Neisser et al., 1996). Although indispensable, heritability estimates do not provide predictive inferences at the level of a specific individual.

How can general cognitive ability be predicted from individual differences in the nearly 3 billion base pairs that comprise the human genome? The most reliable and widely used method for linking genotypes to phenotypes is the genome-wide association study (GWAS). GWAS examines genomic locations that differ between individuals – termed single-nucleotide polymorphisms (SNPs) – and tests whether a phenotype is correlated with certain combinations of SNPs. By conducting linear or logistic regressions across millions of SNPs, GWAS identifies genomic locations that are associated with binary traits (e.g., disease status) or continuous measures, like height or cognitive ability. A virtue of the GWAS approach is that it allows for the identification of genetic predictors of complex traits in a data-driven manner, which contrasts with targeted investigations of candidate genes.
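The core GWAS computation, one simple regression per variant, can be sketched on simulated genotypes (a toy illustration; real GWAS additionally adjust for covariates such as age, sex, and ancestry principal components):

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_snps = 2000, 500

# Simulated genotypes: allele counts 0/1/2 at each SNP (toy data).
maf = rng.uniform(0.1, 0.5, n_snps)
geno = rng.binomial(2, maf, size=(n_people, n_snps))

# A polygenic toy trait: tiny effects spread over many SNPs plus noise.
true_beta = rng.normal(0, 0.05, n_snps)
trait = geno @ true_beta + rng.standard_normal(n_people)

# GWAS core loop: one marginal linear regression per SNP.
betas = np.empty(n_snps)
for j in range(n_snps):
    g = geno[:, j] - geno[:, j].mean()
    betas[j] = g @ (trait - trait.mean()) / (g @ g)

# Estimated marginal effects track the simulated ones.
est_quality = np.corrcoef(betas, true_beta)[0, 1]
print(round(est_quality, 2))
```

Even in this clean simulation, each per-SNP estimate is noisy relative to its tiny true effect, which is why GWAS of complex traits require very large samples.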
Early candidate approaches focused on specific genes with hypothesized relevance to cognitive ability, often informed by animal models or genes related to specific neurotransmitter systems. However, most reported associations between individual genes and cognitive ability have been found either to not replicate or to reflect overestimates of the true effect size (Chabris et al., 2012). Advances in statistical genetics reveal that the genetic architecture of most complex (i.e., non-Mendelian) traits is extremely polygenic and determined by variation that is distributed across the entire genome. It is rare to find highly penetrant individual genes or SNPs that explain more than 0.1–0.2% of the variance in a
trait, requiring researchers to adopt polygenic predictive approaches (Barton, Etheridge, & Véber, 2017; Boyle, Li, & Pritchard, 2017; Wray, Wijmenga, Sullivan, Yang, & Visscher, 2018) in extremely large samples. The most common form of genetic prediction utilizes polygenic scores (or polygenic risk scores), which aggregate genetic associations for a particular trait across many genetic variants, each weighted by its effect size estimated in a GWAS (Torkamani, Wineinger, & Topol, 2018). Polygenic scores are easy to understand, since they are built from the sum or average of many thousands or millions of linear predictors, and they hold eventual promise for shaping early health interventions (Torkamani et al., 2018).

Well-powered GWAS have recently been conducted for neurocognitive measures of intelligence (N = 269,867; Savage et al., 2018), mathematical ability (N = 564,698; Lee et al., 2018), and cognitive ability (N = 300,486; Davies et al., 2018; N = 107,207; Lam et al., 2017). These studies reveal SNPs that are associated with neurocognitive measures; however, the variance explained by any individual variant is exceedingly small, and current polygenic scores explain about 3–6% of the variance in cognitive ability in independent samples (Hill et al., 2019; Savage et al., 2018). Future polygenic scores based on increasingly large GWAS samples will likely explain more of the heritable variance in cognitive ability, but the endeavor has been complicated by the cost of collecting standardized cognitive batteries on hundreds of thousands of individuals. A major turning point in the genomic study of intelligence occurred when researchers focused on the measure of years of education (Plomin & von Stumm, 2018). Because this demographic variable is so commonly collected by large-scale genetic consortia, investigators were able to achieve dramatic increases in sample size and power (Okbay et al., 2016; Rietveld et al., 2013).
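The polygenic-score construction itself is straightforward, as a simulated example shows (synthetic genotypes and effect sizes; real pipelines add linkage-disequilibrium clumping, p-value thresholding, and covariate control):

```python
import numpy as np

rng = np.random.default_rng(7)
n_disc, n_target, n_snps = 5000, 1000, 300

maf = rng.uniform(0.1, 0.5, n_snps)      # minor allele frequencies
true_beta = rng.normal(0, 0.05, n_snps)  # small, polygenic effects

def simulate(n):
    geno = rng.binomial(2, maf, size=(n, n_snps))  # allele counts 0/1/2
    trait = geno @ true_beta + rng.standard_normal(n)
    return geno, trait

g_disc, y_disc = simulate(n_disc)        # discovery GWAS sample
g_target, y_target = simulate(n_target)  # independent target sample

# Per-SNP marginal weights estimated in the discovery sample only.
w = np.empty(n_snps)
for j in range(n_snps):
    g = g_disc[:, j] - g_disc[:, j].mean()
    w[j] = g @ (y_disc - y_disc.mean()) / (g @ g)

# The polygenic score: a weighted sum of allele counts per person.
prs = g_target @ w
r = np.corrcoef(prs, y_target)[0, 1]
print(round(r ** 2, 2))  # out-of-sample variance explained
```

As in real data, the score explains only part of the simulated heritable variance, because each discovery-sample weight carries estimation noise; larger discovery samples shrink that noise.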
The largest GWAS of educational attainment included approximately 1.1 million individuals and explained 7–10% of the variance in cognitive measures from an independent test cohort (Lee et al., 2018), performance that may be further improved by leveraging cross-trait pleiotropy and correlation (Allegrini et al., 2019; Krapohl et al., 2018). Although the downstream effect of a particular variant is not always directly inferable from its genomic location (Tam et al., 2019), the biological relevance of GWAS-nominated SNPs can be approximated using gene-set and cell-type enrichment methods. For instance, genetic associations with educational attainment are greater in coding regions of genes expressed in brain tissue and neurons (Watanabe, Umicevic Mirkov, de Leeuw, van den Heuvel, & Posthuma, 2019), and in gene sets tied to myelination and neurogenesis (Hill et al., 2019). Well-powered GWAS of educational attainment, which is highly genetically correlated with cognitive ability, provide a route for reliable cross-modal integration of genetic and neuroimaging measures. For instance, polygenic scores of cognitive ability can be correlated with imaging features, such as brain activation in a working memory task (Heck et al., 2014) or brain size (Elliott et al., 2018).
Joint Heritability of Brain and Cognitive Ability

The influence of genetics on cognitive ability is likely mediated by structural and functional features of the brain. Twin studies have shown that both cognitive abilities and brain structure and function are heritable (Gray & Thompson, 2004). That is, about 50% of variation in cognitive ability is attributable to genetic factors (Deary et al., 2009), and MRI-based measures of brain anatomy are also heritable, including total brain volume, cortical thickness (Ge et al., 2019), and the size and shape of subcortical structures (Hibar et al., 2017; Roshchupkin et al., 2016). These findings demonstrate that individual differences in brain and behavior are shaped by genetics, but they do not indicate whether cognitive ability and brain phenotypes are influenced by the same underlying features of the genome, nor do they reveal the relevant biological pathways that contribute to the observed heritability.

Using a method called genetic correlation, researchers are able to quantify whether the same genetic factors influence both general cognitive ability and neural phenotypes (Neale & Maes, 1992). Recent imaging-genetic analyses of 7,818 older (45–79 years) white British adults demonstrated moderate levels of genetic correlation (.10 < r < .30) between cognitive ability and cortical thickness in the somato/motor cortex and anterior temporal cortex (Ge et al., 2019). Convergent evidence indicates shared genetic relationships between cognitive ability and cortical surface area (Vuoksimaa et al., 2015), and implicates biological pathways tied to cell growth as a driver of shared genetic variance between brain morphology and intelligence (Jansen et al., 2019).
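Conceptually, a genetic correlation asks whether the per-variant effects on two traits line up. A deliberately stylized illustration (simulated true effect sizes; actual estimators such as LD score regression or GREML work from noisy GWAS data rather than known effects):

```python
import numpy as np

rng = np.random.default_rng(3)
n_snps = 1000

# Two traits built from a partially shared genetic basis (all simulated).
shared = rng.standard_normal(n_snps)
beta_cognition = 0.7 * shared + 0.70 * rng.standard_normal(n_snps)
beta_thickness = 0.3 * shared + 0.95 * rng.standard_normal(n_snps)

# Genetic correlation: correlation of per-variant effects across traits.
r_g = np.corrcoef(beta_cognition, beta_thickness)[0, 1]
print(round(r_g, 2))
```

Here the construction yields a modest genetic correlation of roughly .2, in the range reported by Ge et al. (2019), even though each trait also carries substantial trait-specific genetic variance.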
However, genetic correlations are subject to genetic confounding: for instance, the genetic relationship between one trait (e.g., cholesterol) and another (e.g., heart disease) could be mediated by pleiotropic effects of a third variable (e.g., triglycerides; Bulik-Sullivan et al., 2015). Twin-based designs reveal a similar shared genetic basis of cognitive ability with brain morphology (Hulshoff Pol et al., 2006; Pennington et al., 2000; Posthuma et al., 2002; Thompson et al., 2001) and white matter structure (Chiang et al., 2011; Penke et al., 2012), although more research is needed to test for shared genetic relationships with functional connectivity.
Integrative Imaging-Genetic Approaches

Investigators have identified replicable genetic and neuroimaging correlates of cognitive ability, but combining these levels of analysis to establish associated molecular mechanisms remains an outstanding challenge. If a researcher's sole priority is to maximize predictive accuracy, then biological mechanism is largely irrelevant so long as the model is generalizable and performs well. With enough data, many of the current approaches may independently reach the "upper bound" of predictive accuracy. For instance,
polygenic scores derived from GWAS of height now predict nearly all of the SNP-based heritable variance in the trait (r² = .40; Lello et al., 2018), and polygenic scores of educational attainment already explain about 7–10% of the variance in cognitive ability (Lee et al., 2018). With regard to brain imaging data, about 6–20% of the variance in general intelligence can be predicted from resting-state functional connectivity (Dubois, Galdi, Paul, & Adolphs, 2018; He et al., 2020).

Integrating genomic and neural data into generative predictive models of behavior is a herculean task, in part because the two data types are separated in scale by orders of magnitude. A single base pair is measured in picometers (1/1,000,000,000,000 m), while a high-quality MRI scan provides information at millimeter resolution (1/1,000 m). In between these two levels is an interdependent hierarchy of gene transcription, genomic regulation, protein synthesis, cellular-molecular processes, and complex patterns of brain cytoarchitecture and connectivity. The majority of this rich functional genomic data can only be measured in post-mortem brain tissue and is largely inaccessible to human neuroimaging approaches. How, then, can information about the brain's molecular pathways, gene coregulation, and cell architecture be incorporated into existing imaging-genetic frameworks, and would this multi-scale approach provide more accurate, mechanistically informative models of complex phenotypes like cognitive ability?

The daunting task of linking these data may yield, slightly, to the flood of openly available functional genomic data and the recent development of interpretable forms of machine learning (Eraslan, Avsec, Gagneur, & Theis, 2019). In a landmark series of publications, which serves as an example of work in this domain, the PsychENCODE consortium characterized the functional genomic landscape of the human brain with unprecedented precision and scale.
This collaborative endeavor provides data on gene expression, brain-active transcriptional enhancers, chromatin accessibility, methylation, and gene regulatory networks in single-cell and bulk tissue data from nearly 2,000 individuals (Wang et al., 2018). Sometimes called the functional genome, these features (e.g., gene expression, methylation, chromatin folding, cell-specific interactions) refer to molecular processes and interactions that encompass the activity of the genome (e.g., expression), as opposed to its structure (e.g., SNPs, copy number variants). A large fraction of the PsychENCODE data were obtained from individuals with schizophrenia, bipolar disorder, and autism spectrum disorder, allowing investigators to build functional genomic models to predict psychiatric disease. Wang et al. (2018) trained a generative, relatively shallow deep-learning model, called a Deep Boltzmann Machine (DBM; Salakhutdinov & Hinton, 2009). Deep learning is a subset of machine-learning techniques that allows information to be structured into a hierarchy, and for progressively more complex and combinatorial features of the data to be extracted across levels
Figure 16.1 A graphical depiction of the Deep Boltzmann Machine (DBM) developed by Wang et al. (2018) to predict psychiatric case status. Functional genomic information (e.g., gene expression, gene enhancer activity, and co-expression networks) is embedded in the structure of the model. For instance, empirically mapped quantitative trait loci (QTL; green dashed lines) reflect relationships between individual SNPs and downstream layers (e.g., gene expression). Lateral connections (purple solid lines) reflect gene-regulatory mechanisms and interactions (e.g., enhancers, transcription factors) also embedded in the DBM. Learned higher-order features of the model may reflect integrative and biologically plausible pathways (e.g., glutamatergic synapses) that can be deconstructed using feature-interpretation techniques. The learned cross-level structure embedded in the model can be adapted to include brain imaging features or to predict cognitive phenotypes in datasets where functional genomic "Imputed Layers" are not observed (e.g., UK Biobank).
(Figure 16.1). Generative deep-learning techniques are distinguishable from discriminative models. While discriminative techniques aim to maximize prediction accuracy for categorical (e.g., case vs. control) or continuous values (e.g., gene expression), generative models are trained to capture a realistic joint distribution of observed and higher-order latent predictive features to produce a full model of the trait (Libbrecht & Noble, 2015). Critically, deep-learning approaches are flexible and allow for realistic biological structure to be embedded within the architecture of such generative models (Gazestani & Lewis, 2019). Typically in population genetics, a model is trained to predict a phenotype (e.g., educational attainment) directly from structural genetic variation (e.g., SNPs). However, by incorporating intermediate functional genomic measures into the model framework, Wang et al. (2018) demonstrated a six-fold improvement in the prediction of psychiatric disease relative to a genotype-only model. A feature of this approach is the flexible inclusion of domain knowledge. For instance, PsychENCODE data were
analyzed to find associations between SNPs and gene expression – known as quantitative trait loci (QTL) – to constrain potential links among levels of the DBM (Figure 16.1; green lines). Imposing biological reality onto the structure of the model reduces the untenably large number of possible connections between layers (e.g., all SNPs to all genes), and facilitates interpretation of higher-order latent features of the model. For instance, Wang et al. found that biological pathways tied to synaptic activity and immune function were most predictive of schizophrenia status, providing targets for follow-up research and experimental validation. Although speculative, deep models may also provide a means for individualized inference. For traditional methods that collapse genome-wide polygenic load into a single predicted value (e.g., schizophrenia risk), two theoretical individuals could have an identical polygenic score but totally non-overlapping profiles of risk alleles (Wray et al., 2018). Structured deep models, however, would still yield a predicted outcome (e.g., schizophrenia risk) while also mapping the features that are most relevant in one individual (e.g., synaptic genes) vs. another (e.g., immune genes).

However, it is unlikely that an investigator will possess functional genomic, imaging, and behavioral data on the same individuals. How, then, can rich data, like those described by Wang et al. (2018), be integrated into predictive models to reveal biological insights? One possibility comes from transfer learning (Pan & Yang, 2010), which incorporates knowledge from an already-trained model as the starting point for a new one. This approach is particularly useful when training data are expensive or difficult to obtain, and it has been successfully applied in biology and medicine.
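A stripped-down NumPy caricature of the transfer-learning idea, in which a fixed random projection stands in for layers pretrained on a large source task, and only a small new "head" is fit to the target data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for pretrained layers: fixed weights that are NOT retrained.
# (In practice these would come from a network trained on a large task.)
W_pretrained = rng.standard_normal((64, 32)) / 8.0

def extract_features(x):
    return np.tanh(x @ W_pretrained)  # frozen feature extractor

# A small labeled target dataset, e.g., a new phenotype with few samples.
x_small = rng.standard_normal((40, 64))
y_small = np.sin(x_small[:, 0]) + 0.1 * rng.standard_normal(40)

# Transfer step: fit ONLY a new linear head on the frozen features.
features = extract_features(x_small)
head, *_ = np.linalg.lstsq(features, y_small, rcond=None)

# The adapted model is immediately usable on new target-domain data.
x_new = rng.standard_normal((10, 64))
predictions = extract_features(x_new) @ head
```

The point is the division of labor: the expensive representation is learned once on abundant source data, and only a far smaller set of parameters must be estimated from the scarce target data.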
For instance, a Google deep model (i.e., a convolutional neural network) trained to classify images into categories (e.g., building, cat, dog) has been adapted, or transferred, to detect skin cancer from clinical images using a much smaller training set (Esteva et al., 2017). A hypothetical investigator studying the genetic basis of cognitive ability will likely possess data on structural nucleotide polymorphisms, possibly some brain phenotypes, and one or several measures of behavior or cognition. Rather than calculating a polygenic score or directly running a linear regression between millions of SNPs and a brain or cognitive phenotype, this researcher could leverage already-trained deep neural networks (e.g., Wang et al., 2018) or embed empirically defined biological information (e.g., QTLs, gene co-expression networks) into the structure of an integrative predictive model. Although imaging-genetic deep models have not yet been used to predict cognitive ability, incorporating biological information into generative machine-learning models has been shown to increase both predictive performance and interpretability across disciplines and applications (Eraslan et al., 2019; Libbrecht & Noble, 2015), particularly when training data are limited. For instance, deep neural networks have revealed regulatory motifs of non-coding segments of DNA (Zou et al., 2019). In biology, similar techniques that incorporate gene pathway information were best able to
predict drug sensitivity (Costello et al., 2014), identify mechanisms of bactericidal antibiotics (Yang et al., 2019), and nominate drug targets for immune-related disorders (Fange et al., 2019). In one noteworthy example, Ma et al. (2018) modeled the architecture of a deep network to reflect the biological hierarchy and interconnected processes of a eukaryotic cell. That is, genes were clustered into expert-defined groupings that reflect biological processes, cell features, or functions (e.g., DNA repair, cell-wall structure). Extensive prior knowledge was then used to define the hierarchical structure between groupings, and cell growth was predicted from the genotypes of millions of training cell observations. The biological realism of the model allowed highly predictive features to be linked to specific pathways or processes, and even permitted in silico simulations of the effect of gene deletions. In such simulations, inputs to the model (e.g., SNPs, genes) can be selectively removed and their downstream effects on learned higher-order features can be estimated. For instance, deletion of two growth genes by Ma et al. (2018) led to predicted disruption in a select biological process (e.g., DNA repair) that was later validated through experimental gene knockouts.

Such "white-box" or "visible" forms of deep learning address the common criticism that multivariate predictive models are a "black box" that fails to explain underlying mechanisms (Ching et al., 2018). Unconstrained deep models learn relationships between an input and output, but the internal structure of the trained model is not obligated to be understandable by humans or to reflect biological reality. We have highlighted examples of biologically inspired machine-learning models to show how interpretability can be aided by the incorporation of empirical and mechanistic relationships (e.g., gene networks).
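The "visible" architecture and the in silico deletion experiments described for Ma et al. (2018) can be caricatured in a few lines (gene and pathway names are invented placeholders, and the fixed weights stand in for learned, biologically masked parameters):

```python
import numpy as np

# Genes feed only into the pathways they are annotated to, so every
# hidden unit carries a biological label (toy annotations below).
genes = ["g1", "g2", "g3", "g4", "g5"]
pathways = ["dna_repair", "cell_growth"]

# Structure mask (pathway x gene): 1 only where annotation allows a link.
mask = np.array([[1, 1, 0, 0, 0],    # dna_repair <- g1, g2
                 [0, 0, 1, 1, 1]],   # cell_growth <- g3, g4, g5
                dtype=float)
weights = 0.5 * mask  # trained weights would be learned under this mask

def pathway_activity(genotype):
    return weights @ genotype

baseline = pathway_activity(np.ones(len(genes)))

# In-silico deletion: knock out g1 and trace the downstream disruption.
knockout = np.ones(len(genes))
knockout[genes.index("g1")] = 0.0
effect = baseline - pathway_activity(knockout)
print(dict(zip(pathways, effect)))
```

Because connectivity is masked by annotation, the deletion's effect localizes to the dna_repair unit; an unconstrained dense network offers no such guarantee of attributable structure.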
Rapid progress in these transparent forms of machine learning opens the door to understanding how predictions are produced, and could provide a means to integrate explanation and prediction in previously unimagined ways (Camacho, Collins, Powers, Costello, & Collins, 2018; Yang et al., 2019; Yu et al., 2018). If such methods are to be applied to the study of human behavior, researchers must carefully consider which neuroimaging features best represent the fundamental "units" of psychological processes, and how to structure brain hierarchies (e.g., region to network) and cross-modal relationships (e.g., brain structure to function; Poldrack & Yarkoni, 2016).
Ethics and Limitations

Genetic and brain-based prediction of complex traits may one day allow for the development of preventative interventions and clinical biomarkers to assess risk and improve cognitive outcomes (Ashley, 2016; Gabrieli et al., 2015). However, the potential for misuse or misapplication of biological predictors of cognitive ability raises moral, societal, and practical
concerns that must be directly addressed by both scientists and policy makers. Currently, GWAS samples almost exclusively include white European populations, in large part due to data availability (Popejoy & Fullerton, 2016). This systemic bias is problematic given evidence that polygenic scores from GWAS derived in one ancestral population give inaccurate predictions when applied to other groups (Martin et al., 2017; Wojcik et al., 2019). Even the genotyping arrays used to measure individual allelic frequencies may be most sensitive to variation occurring in Europeans, leading to measurement biases in other populations (Kim, Patel, Teng, Berens, & Lachance, 2018). This confound exacerbates existing issues associated with the development and application of standardized assessment batteries for cognitive ability. Left unaddressed, the convergence of these factors may exacerbate societal health and educational inequalities (Martin et al., 2019). For instance, application of current polygenic scores as clinical tools would be more accurate, and thus of greater utility, in individuals of European descent, and would thereby perpetuate existing imbalances in the provision of healthcare.

Dissemination of imaging-genetic predictions and related information also poses a major challenge, especially given the existence of direct-to-consumer genetic testing services. Misinterpretation of prediction accuracies may lead to mistaken reductive biological determinism, the wholesale discounting of social and environmental factors, and unwarranted, harmful stigmatization (Palk, Dalvie, de Vries, Martin, & Stein, 2019). Emerging evidence indicates that simply learning about genetic risk may even lead to self-fulfilling behavioral predispositions, suggesting that access to genetic knowledge may alter individual outcomes in unintended ways (Turnwald et al., 2019).
Further, individuals tend to overweight neuroscientific explanations, even when they may be completely erroneous (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008). This could cause harm by de-emphasizing behavioral or environmental factors that are more amenable to intervention (e.g., reducing chronic environmental stress, dietary supplementation, early attachment to parents and caregivers, access to educational resources). Moreover, the presence of a "genetic" signal, either in the form of GWAS or heritability estimates, must be interpreted with caution given the potential for gene–environment correlations (Haworth et al., 2019), incomplete correction for genetic population stratification, and genetic confounding due to intergenerational transfer of risk (e.g., maternal smoking and prenatal development; Leppert et al., 2019).
Conclusion

Investigators are increasingly able to predict individual differences in cognitive abilities using neuroimaging and genetic data. Brain-based predictive models of intelligence have leveraged large-scale and open-access datasets (e.g., UK Biobank, Human Connectome Project) to train multivariate statistical
models based on brain anatomy and function. Genomic predictions most commonly use polygenic scores, derived from GWAS on hundreds of thousands of individuals, to predict individual differences in cognition. In both domains, biological interpretation remains a challenge, and there is a pressing need for integrative predictive methods that combine genomic, neuroimaging, and behavioral observations. Recent advances in interpretable "white-box" forms of deep learning are a promising approach for cross-modal data integration and the flexible incorporation of prior biological knowledge. In the short term, genomic and brain-based predictive models promise to yield deep insights into the neurobiology of behavior. In the long term, some hope to use biology to guide early preventative interventions or inform individualized precision medicine. However, this raises serious ethical, societal, and pragmatic concerns that must be addressed by the scientific community and the larger public as these methods continue to develop.
References

Allegrini, A. G., Selzam, S., Rimfeld, K., von Stumm, S., Pingault, J. B., & Plomin, R. (2019). Genomic prediction of cognitive traits in childhood and adolescence. Molecular Psychiatry, 24(6), 819–827. doi: 10.1038/s41380-019-0394-4.

Ashley, E. A. (2016). Towards precision medicine. Nature Reviews Genetics, 17(9), 507–522. doi: 10.1038/nrg.2016.86.

Barbey, A. K. (2018). Network neuroscience theory of human intelligence. Trends in Cognitive Sciences, 22(1), 8–20. doi: 10.1016/j.tics.2017.10.001.

Barbey, A. K., Colom, R., & Grafman, J. (2014). Distributed neural system for emotional intelligence revealed by lesion mapping. Social Cognitive and Affective Neuroscience, 9(3), 265–272. doi: 10.1093/scan/nss124.

Barton, N. H., Etheridge, A. M., & Véber, A. (2017). The infinitesimal model: Definition, derivation, and implications. Theoretical Population Biology, 118, 50–73. doi: 10.1016/j.tpb.2017.06.001.

Batty, G. D., Deary, I. J., & Gottfredson, L. S. (2007). Premorbid (early life) IQ and later mortality risk: Systematic review. Annals of Epidemiology, 17(4), 278–288. doi: 10.1016/j.annepidem.2006.07.010.

Boomsma, D., Busjahn, A., & Peltonen, L. (2002). Classical twin studies and beyond. Nature Reviews Genetics, 3(11), 872–882. doi: 10.1038/nrg932.

Boyle, E. A., Li, Y. I., & Pritchard, J. K. (2017). An expanded view of complex traits: From polygenic to omnigenic. Cell, 169(7), 1177–1186. doi: 10.1016/j.cell.2017.05.038.

Branigan, A. R., McCallum, K. J., & Freese, J. (2013). Variation in the heritability of educational attainment: An international meta-analysis. Social Forces, 92(1), 109–140. doi: 10.1093/sf/sot076.

Briley, D. A., & Tucker-Drob, E. M. (2013). Explaining the increasing heritability of cognitive ability across development: A meta-analysis of longitudinal twin and adoption studies. Psychological Science, 24(9), 1704–1713. doi: 10.1177/0956797613478618.
Predicting Cognitive Ability: Brain Imaging and Genetics
k. m. anderson and a. j. holmes