MIT Technology Review
The Mind Issue
Volume 124, Number 5 · Sep/Oct 2021
USD $9.99 / CAD $10.99



From the editor

Our shared hallucination


Inside the three-pound lumps of mostly fat and water inside our heads we can, in a very real sense, find the root of everything we know and ever will know. Sure, the universe gave rise to our brains. But what good is the cosmos without brains and, more specifically, minds? Without them, there’d be no understanding, no appreciation, no probing of great mysteries. Which is what this issue is all about: our quest to understand what’s between our ears, and in so doing, better understand ourselves.

A friendly warning: you are in for some mind-bending stuff. As Lisa Feldman Barrett notes in our opening essay (page 8), our brains create our minds specifically to preserve our bodies and pilot them through our environment. “Your brain did not evolve to think, feel, and see,” she writes. “It evolved to regulate your body. Your thoughts, feelings, senses, and other mental capacities are consequences of that regulation.” Basically, our minds create a fiction for us to live in.

Nathan McGee knows a thing or two about having his mind bent. After suffering from PTSD since early childhood, he enrolled in a clinical trial in his 40s to test whether the psychedelic drug MDMA could help him. The result was nothing short of transformative. “I’m seeing life as a thing to be explored and appreciated rather than something to be endured,” he told Charlotte Jee in an intimate interview about his experience (page 73).

Similarly, for those of us experiencing pandemic fatigue, Dana Smith has some good news: our brains definitely took a hit as we social-distanced and Zoomed ourselves into oblivion, but they’re also really, really good at bouncing back. Your pandemic brain will heal; just give it time (page 30).

Messing with our heads can also be fun, as Neel Patel tells us. He writes about a talent he developed as a teenager: lucid dreaming. The science behind it is still being worked out, but it’s proving useful for helping people unlock their creativity and deal with fears and traumatic memories (page 62).

It is perhaps in dreams where the power of our minds to hold sway over what we believe is “real” is most clearly on display. In a roundup of three fascinating new books on human perception, writer Matthew Hutson quotes one author: “You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality” (page 76).

There’s still the question of what it means to be conscious. For a long time, we humans clung to the idea that we were the only conscious animals. It’s one of several misunderstandings about brains that David Robson and David Biskup put the lie to in comic-strip form (page 28). Not only is consciousness hard to define, but it has been extremely difficult to measure. Yet there is now a consciousness meter to detect it in people, as Russ Juskalian finds out (page 40).

Consciousness in silicon form is on Will Douglas Heaven’s brain these days; he ponders whether we’d know it if we managed to build a conscious machine (page 66). Dan Falk asks researchers whether they think a brain is a computer in the first place (page 23). And Emily Mullin takes a look at two multibillion-dollar efforts to study the human brain in unprecedented detail—one of which involved trying to simulate one from scratch (page 79).

No issue on the mind would be complete without a chance to gaze upon the gray matter itself, and there are brains aplenty in our haunting photo essay documenting a library of malformed specimens (page 54). If that’s too much, zoom in on our infographic that depicts what happens in Tate Ryan-Mosley’s brain when she sees her boyfriend’s face (page 14). And finally, we’ve included a rare treat indeed: a selection of poetry curated by our news editor, Niall Firth (page 84). It’s guaranteed to jangle your neurons into a new way of viewing this thing we call “reality.”

Michael Reilly is executive editor of MIT Technology Review.


Contents

THE MIND ISSUE

Introduction

8 How your mind is made
Your brain creates you from three core ingredients. And you know nothing about it. By Lisa Feldman Barrett

Report

14 When I see your face
The moment we recognize someone we love, a lot happens all at once. By Tate Ryan-Mosley

17 From memories to brains grown in the lab
Neuroscientists are unwrapping the mysteries of the human brain. By Hannah Thomasy

23 Is your brain a computer?
We asked experts for their best arguments in this long-standing debate. By Dan Falk

26 Making memories
How technology helps us understand and even manipulate memories. By Joshua Sariñana

28 The brain, misunderstood
Five big mistakes about the brain. Text by David Robson, comics by David Biskup

Features

30 How to mend your pandemic brain
Life under covid has messed with our brains. Luckily, they were designed to bounce back. By Dana Smith

34 The miracle molecule
Drugs based on the molecule boost memory formation and could help treat everything from Alzheimer’s to brain injuries. By Adam Piore

40 The consciousness meter
A neuroscientist has found a way to measure hidden signs of consciousness in unreachable patients. By Russ Juskalian

46 Change of mind
Our brain cells acquire mutations as we develop and age—and scientists want to know if this affects our mental health. By Roxanne Khamsi

54 Malformed
Get up close with the world’s largest collection of preserved abnormal human brains. Images by Adam Voorhes and Robin Finlay

62 Adventures in lucid dreaming
Some lucky people can control their dreams. It could teach us more about how the brain works. By Neel Patel

66 A mind of its own
If we build machines that can think and feel, what will they be like—and how will we know? By Will Douglas Heaven

73 “I understand what joy is now”
MDMA-assisted therapy has had amazing results. One participant tells his story. By Charlotte Jee

Reviews

76 Believing is seeing
Three books probe the relationship between what we perceive and who we are. By Matthew Hutson

79 The failed promises of the brain map projects
The expensive efforts to map the brain have largely fallen short. By Emily Mullin

82 The magic number
Could plants, bacteria, and our body’s cells have their own sort of consciousness? By Christof Koch

Poetry

84 Works by Cynthia Miller, Paula Bohince, Anthony Anaxagorou, Tishani Doshi, and Zeina Hashem Beck

The back page

88 A piece of our mind

Cover illustration by Mike Perry

LISTEN ALONG
The team compiled this Spotify playlist to accompany the issue. Just scan the code with your smartphone camera to hear what we selected.

Roundup

You’re already a subscriber*
Go beyond this issue with five of our top picks.

1 Event: EMTECH MIT, September 28-30, 2021. Our annual flagship event will unpack the state of digital currencies, ransomware prevention and response, 5G and edge computing, and climate management in a power-hungry world. technologyreview.com/emtech

2 Newsletter: WEEKEND READS. Technology in perspective, every Saturday. technologyreview.com/newsletters

3 Report: SECURE BY DESIGN. From MIT Technology Review Insights, learn how organizations around the globe are enhancing privacy through security. technologyreview.com/secure-by-design

4 Podcast: IN MACHINES WE TRUST. Check out Season 2 (AI hiring tools) while gearing up for the new season, where we unpack AI in gaming and the hidden power of voice-assisted devices. technologyreview.com/in-machines-we-trust

5 Big Story: AI IS LEARNING HOW TO CREATE ITSELF. Humans have struggled to make truly intelligent machines. Maybe we need to let them get on with it themselves. technologyreview.com/collection/the-big-story

FOLLOW MIT TECHNOLOGY REVIEW
Facebook @technologyreview | Twitter @techreview | Instagram @technologyreview | LinkedIn MIT Technology Review

* Not a subscriber? We can fix that. Go to technologyreview.com/subscribe


Masthead

Editorial

Corporate

Consumer marketing

Editor in chief

Chief executive officer and publisher

Mat Honan

Elizabeth Bramson-Boudreau

Senior vice president, marketing and consumer revenue

Doreen Adger

MIT Technology Review Insights and international Vice president, Insights and international

Nicola Crepaldi

Executive editor

Assistant to the CEO

Director of digital marketing

Michael Reilly

Katie McLean

Emily Baillieu

Editor at large

Human resources manager

Director of event marketing

David Rotman

James Wall

Brenda Noiseux

News editor

Manager of information technology

Niall Firth

Colby Wheeler

Email marketing manager Tuong-Chau Cai

Content manager

Managing editor

Office manager

Timothy Maher

Linda Cardinal

Growth marketing manager Em Okrepkie

Senior manager of licensing

Assistant consumer marketing manager

Director of custom content, international

Commissioning editors

Bobbie Johnson Konstantin Kakaes Amy Nordrum Senior editor, MIT News

Alice Dragoon Senior editor, biomedicine

Antonio Regalado Senior editor, climate and energy

James Temple Senior editor, digital culture

Abby Ohlheiser Senior editor, cybersecurity

Patrick Howell O’Neill Senior editors, AI

Karen Hao Will Douglas Heaven Senior editor, computing

Siobhan Roberts Senior editor, podcasts and live journalism

Jennifer Strong Podcast producer

Anthony Green Editor, Pandemic Technology Project

Lindsay Muscato Senior reporters

Product development Chief technology officer

Circulation and print production manager

Drake Martinet

Tim Borton

Director of software engineering

Associate product manager

Allison Chase Product designer

Rachel Stein Software engineer

Jack Burns

Events Senior vice president, events and strategic partnerships

Barbara Wallraff

Chief creative officer

Eric Mongeon Art director

Emily Luong Marketing and events designer

Kyle Thomas Hemingway Photo editor

Stephanie Arnett

Marcus Ulvne

Andrew Hendler [email protected] 646-520-6981 Executive director, integrated marketing

Caitlin Bergmann [email protected] Executive director, brand partnerships

Marii Sebahar [email protected] 415-416-9140 Executive director, brand partnerships

Customer service and subscription inquiries National

877-479-6505 International

Director of event content and experiences

Brian Bryson

Senior director, brand partnerships

[email protected]

Head of international and custom events

Debbie Hanley [email protected] 214-282-2727

Web

Senior director, brand partnerships

MIT Records (alums only)

Ian Keller [email protected] 203-858-3396

617-253-8270

Marcy Rizzo Senior event content producer

Kristen Kavanaugh Erin Underwood Associate director of events

Senior director, brand partnerships

Elana Wilner

Miles Weiner [email protected] 617-475-8078

Event partnership coordinator

Digital sales strategy manager

Madeleine Frasca

Casey Sullivan [email protected] 617-475-8066

Events associate

Bo Richardson

Media kit

Finance Vice president, finance

Enejda Xheblati General ledger manager

Olivia Male Accountant

Design

Director of business development, Asia

Kristin Ingram [email protected] 415-509-1910

Amy Lammers

Copy chief

Proofreader

Francesca Fanshawe

Martin A. Schmidt, Chair Peter J. Caruso II, Esq. Whitney Espich Jerome I. Friedman David Schmittlein Glen Shor Alan Spoon

Mariya Sitnova

Event operations manager

Madison Umina

Ted Hu

Board of directors

Nicole Silva

Audience engagement associate

Jason Sparapani

Senior vice president, sales and brand partnerships

Reporters

Abby Ivory-Ganja

Martha Leibs

Advertising sales

Charlotte Jee (news) Neel Patel (space) Tate Ryan-Mosley (data and audio)

Engagement editor

Senior project manager

Head of product

Event content producer

Social media editor Benji Rosen

Laurel Ruma

Molly Frey

Tanya Basu (humans and technology) Eileen Guo (technology policy and ethics)

Linda Lowenthal

Caroline da Cunha

Director of custom content, US

Anduela Tabaku

www.technologyreview.com/media

847-559-7313 Email

www.technologyreview.com/ customerservice

Reprints

[email protected] 877-652-5295 Licensing and permissions

[email protected]

MIT Technology Review One Main Street 13th Floor Cambridge, MA 02142 617-475-8000 The mission of MIT Technology Review is to make technology a greater force for good by bringing about better-informed, more conscious technology decisions through authoritative, influential, and trustworthy journalism. Technology Review, Inc., is an independent nonprofit 501(c)(3) corporation wholly owned by MIT; the views expressed in our publications and at our events are not always shared by the Institute.

Don’t believe everything you hear. The award-winning podcast In Machines We Trust thoughtfully examines the far-reaching impact of artificial intelligence on our daily lives. Download it wherever you listen.

HOW YOUR MIND IS MADE

Your brain creates you from three core ingredients. And you know nothing about it. By Lisa Feldman Barrett

What is your mind? It’s a strange question, perhaps, but if pressed, you might describe it as the part of yourself that makes you who you are—your consciousness, dreams, emotions, and memories. Scientists believed for a long time that such aspects of the mind had specific brain locations, like a circuit for fear, a region for memory, and so on.


But in recent years we’ve learned that the human brain is actually a master of deception, and your experiences and actions do not reveal its inner workings. Your mind is in fact an ongoing construction of your brain, your body, and the surrounding world. In every moment, as you see, think, feel, and navigate the world around you, your perception of these things is built from three ingredients.

One is the signals we receive from the outside world, called sense data. Light waves enter your retinas to be experienced as blooming gardens and starry skies. Changes in pressure reach your cochlea and skin and become the voices and hugs of loved ones. Chemicals arrive in your nose and mouth and are transformed into sweetness and spice.

A second ingredient of your experience is sense data from events inside your body, like the blood rushing through your veins and arteries, your lungs expanding and contracting, and your stomach gurgling. Much of this symphony is


silent and outside your awareness, thank goodness. If you could feel every inner tug and rumble directly, you’d never pay attention to anything outside your skin.

Finally, a third ingredient is past experience. Without this, the sense data around and inside you would be meaningless noise. It would be like being bombarded by the sounds of a language that you don’t speak, so you can’t even tell where one word ends and the next begins. Your brain uses what you’ve seen, done, and learned in the past to explain sense data in the present, plan your next action, and predict what’s coming next. This all happens automatically and invisibly, faster than you can snap your fingers.

These three ingredients might not be the whole story, and there may be other routes to create other kinds of minds—say, in a futuristic machine. But a human mind is constructed by a brain in constant conversation, moment by unique moment, with a body and the outside world.

When your brain remembers, it re-creates bits and pieces of the past and seamlessly combines them. We call this process “remembering,” but it’s really assembling. In fact, your brain may construct the same memory (or, more accurately, what you experience as the same memory) in different ways each time. I’m not speaking here of the conscious experience of remembering something, like recalling your best friend’s face or yesterday’s dinner. I’m speaking of the automatic, unconscious process of looking at an object or a word and instantly knowing what it is. Every act of recognition is a construction. You don’t see with your eyes; you see with your brain. Likewise for all your other senses. Your brain compares the sense data coming in now with things you’ve sensed before in a similar situation


where you had a similar goal. These comparisons incorporate all your senses at once, because your brain constructs all sensations at once and represents them as grand patterns of neural activity that enable you to experience and understand the world around you.

Brains also have an amazing capacity to combine pieces of the past in novel ways. They don’t merely reinstate old content; they generate new content. For example, you can recognize things you’ve never encountered before, like a picture of a horse with feathery wings. You’ve probably never seen Pegasus in real life, but like the ancient Greeks, you can view a painting of Pegasus for the first time and instantly comprehend what it is, because—miraculously—your brain can assemble familiar ideas like “horse” and “bird” and “flight” into a coherent mental image.

Your brain can even impose on a familiar object new functions that are not part of the object’s physical nature. Look at the photograph in Figure 1. Computers today can use machine learning to easily classify this object as a feather. But that’s not what human brains do. If you find this object on the ground in the woods, then sure, it’s a feather. But to an author in the 18th century, it’s a pen. To a warrior of the Cheyenne tribe, it’s a symbol of honor. To a child pretending to be a secret agent, it’s a handy fake mustache.

Your brain classifies objects not solely on the basis of their physical attributes, but also according to their function—how the object is used. It goes through this process every time you look at a slip of paper with a dead leader’s face on it and see currency that can be exchanged for material goods. This incredible ability is called ad hoc category construction. In a flash, your brain employs past experience to construct a category such as “symbols of honor,” with that feather as a member. Category membership is based not on physical similarities but on functional ones—how you’d use the object in a specific situation. Such categories are called abstract. A computer cannot “recognize” a feather as a reward for bravery because that information isn’t in the feather. It’s an abstract category constructed in the perceiver’s brain.

Computers can’t do this. Not yet, anyway. They can assign objects to preexisting categories based on previous examples (a process called supervised machine learning), and they can cluster objects into new categories based on predefined features, usually physical ones (unsupervised machine learning). But machines don’t construct abstract categories like “facial hair for pretend spies” on the fly. And they certainly don’t do it many times per second to comprehend and act in a massively complex social world.

Just as your memory is a construction, so are your senses. Everything you see, hear, smell, taste, and feel is the result of some combination of stuff outside and inside your head. When you see a dandelion, for example, it has features like a long stem, yellow petals, and a soft, squishy texture. These features are reflected in the sense data streaming in. Other features are more abstract, like whether the dandelion is a flower to be placed in a bouquet or a weed to be ripped from the ground.

Brains also have to decide which sense data is relevant and which is not, separating signal from noise. Economists and other scientists call this decision the problem of “value.” Value itself is another abstract, constructed feature. It’s not intrinsic to the sense data emanating from the world, so it’s not detectable in the world. Value is a property of that information in relation to the state

Figure 1: Human brains can categorize this object by how it will be used. Computers using machine learning will only see a feather.

of the organism that’s doing the sensing—you.

The importance of value is best seen in an ecological context. Suppose you are an animal roaming the forest and you see a blurry shape in the distance. Does it have value for you as food, or can you ignore it? Is it worth spending energy to pursue it? The answer depends partly on the state of your body: if you’re not hungry, the blurry shape has less value. It also depends on whether your brain predicts that the shape wants to eat you.

Many humans don’t hunt for food on a regular basis, apart from browsing in markets. But the same process of estimating value applies to everything you do in life. Is the person approaching you friend or foe? Is that new movie worth seeing? Should you work an extra hour or go bar-hopping with your friends, or maybe just get some sleep? Each alternative is a plan for action, and each plan is itself an estimation of value.
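The contrast Barrett draws between what machines do (assign or cluster objects by features) and what brains do (construct ad hoc, function-based categories) can be sketched in a few lines of Python. Everything below is invented for illustration: the feature vectors, object labels, and situation names are toys, not data from the essay.

```python
# Supervised learning in miniature: assign a new object to a preexisting
# category from labeled examples (1-nearest-neighbor on physical features).
def classify(new_obj, labeled_examples):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_examples, key=lambda ex: dist(ex["features"], new_obj))["label"]

# Physical features: (length_cm, weight_g, stiffness) -- made-up numbers.
examples = [
    {"features": (10.0, 1.0, 0.1), "label": "feather"},
    {"features": (15.0, 5.0, 0.9), "label": "twig"},
]

print(classify((11.0, 1.2, 0.15), examples))  # -> feather

# An ad hoc category, by contrast, depends on the situation, which is not
# in the feature vector at all. The perceiver supplies it (invented table):
def ad_hoc_category(obj_label, situation):
    uses = {
        ("feather", "18th-century desk"): "pen",
        ("feather", "Cheyenne ceremony"): "symbol of honor",
        ("feather", "child's spy game"): "fake mustache",
    }
    return uses.get((obj_label, situation), obj_label)

print(ad_hoc_category("feather", "child's spy game"))  # -> fake mustache
```

The point of the sketch is the asymmetry: `classify` only ever consults the feature vectors it was given, while the function-based label comes from knowledge the perceiver brings to the situation, not from the object itself.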

The same brain circuitry involved in estimating value also gives us our most basic sense of feeling, which you know as your mood and which scientists call affect. Affective feelings are simple: feeling pleasant, feeling unpleasant, feeling worked up, feeling calm. Affective feelings are not emotions. (Emotions are more complex category constructions.) Affect is just a quick summary of your brain’s beliefs about the metabolic state of your body, like a barometer reading of sorts. People trust their affect to indicate whether something is relevant to them or not—that is, whether the thing has value or not. For example, if you feel that this article is absolutely brilliant, or that the author is off her rocker, or even if you’ve spent the energy to read this far, then it has value for you.

Brains evolved to control bodies. Over evolutionary time, many animals evolved larger bodies with complex internal systems that needed coordination and control. A brain is sort of like a command center to integrate and coordinate those systems. It shuttles necessary resources like water, salt, glucose, and oxygen where and when they are needed. This regulation is called allostasis; it involves anticipating the body’s needs and attempting to meet them before they arise. If your brain does its job well, then through allostasis, the systems of your body get what they need most of the time.

To accomplish this critical metabolic balancing act, your brain maintains a model of your body in the world. The model includes conscious stuff, like what you see, think, and feel; actions you perform without thought, like walking; and unconscious stuff outside your awareness. For example, your brain models your body temperature. This model governs your awareness of being warm or cold, automatic acts


like wandering into the shade, and unconscious processes like changing your blood flow and opening your pores. In every moment, your brain guesses (on the basis of past experience and sense data) what might happen next inside and outside your body, moves resources around, launches your actions, creates your sensations, and updates its model. This model is your mind, and allostasis is at its core.

Your brain did not evolve to think, feel, and see. It evolved to regulate your body. Your thoughts, feelings, senses, and other mental capacities are consequences of that regulation.

Since allostasis is fundamental to everything you do and sense, consider what would happen if you didn’t have a body. A brain born in a vat would have no bodily systems to regulate. It would have no bodily sensations to make sense of. It could not construct value or affect. A disembodied brain would therefore not have a mind. I’m not saying that a mind requires an actual flesh-and-blood body, but I am suggesting that it requires something like a body, full of systems to coordinate efficiently in an ever-changing world. Your body is part of your mind—not in some gauzy, metaphorical way, but in a very real brain-wiring way.

Your thoughts and dreams, your emotions, even your experience right now as you read these words, are consequences of a central mission to keep you alive, regulating your body by constructing ad hoc categories. Most likely, you don’t experience your mind in this way, but under the hood (inside the skull), that’s what is happening.

Lisa Feldman Barrett (@LFeldmanBarrett) is a professor of psychology at Northeastern University and the author of Seven and a Half Lessons About the Brain and How Emotions Are Made: The Secret Life of the Brain. Learn more at LisaFeldmanBarrett.com.

MIT Technology Review’s flagship event on emerging technology and trends

JOIN US LIVE ONLINE

September 28-30, 2021

EmTech brings together global innovators, change makers, and industry experts to guide decision makers through what’s probable, plausible, and possible with the most significant technology trends.

KEYNOTE: Kevin Scott, Microsoft
AI: Timnit Gebru, formerly Google
BIOTECH: Christina Rudzinski, MIT Lincoln Laboratory
CRYPTOCURRENCY: Charles Hoskinson, Cardano
CYBERSECURITY: Wendy Nather, Cisco
INNOVATION: Aicha Evans, Zoox
ENERGY: Christoph Noeres, thyssenkrupp

SUBSCRIBERS SAVE 10% WITH CODE PRINTSO21 AT

EmTechMIT.com/RegisterToday

WHEN I SEE YOUR FACE


The moment we recognize someone, a lot happens all at once. We aren’t aware of any of it. By Tate Ryan-Mosley

Adam is on his way over. My apartment doesn’t have a door buzzer, so Adam always calls when he’s two minutes away. He never says he’s two minutes away; he says he’s already at my door, because he knows I’m always trying to finish something before I open up. Over the noise of the shower, I hear my phone buzz. I reach around the plastic curtain. It’s 6:31 p.m. “Hi, I’m here,” he says. Shit. I bound down the stairs holding the towel nest on top of my head. I can see the shape of his face through the window. Adam resembles a Viking who works in finance. I see the beginnings of a smile. (0 MILLISECONDS) I see a tan, scruffiness and boyishness. (40 MILLISECONDS) I register the shape of his face, his small bright almond eyes, his overbite (which I find darling), and his hairline. (50 MILLISECONDS) The skin at the edge of his eyes is starting to wrinkle into little creases, and his strong forehead suggests an aggressive masculinity that is at odds with his personality. (70 MILLISECONDS) I know it’s Adam from a flight away. (90 MILLISECONDS) I know his hair is starting to thin because I remember our very first fight when I asked whether he was balding. I can almost smell the patchouli of his beard oil—which he leaves at my apartment every other week—through the door. It reminds me of our mornings together before heading to our offices from a different lifetime. (400 MILLISECONDS) We exchange smiles while I unlock the pair of doors between us. We kiss on the cheek. We hug. “How long have you been waiting?” I ask. “Oh, just got here. I called from two blocks away.”


0 MILLISECONDS

Light reflected from Adam’s face is absorbed by my RETINA , which sends signals down the optic nerve toward a relay center called the lateral geniculate nucleus (LGN ). Here, visual information is passed on to other parts of the brain. The LGN is housed in the THALAMUS , a small region above the brain stem that sends sensory information to the CEREBRAL CORTEX , the brain’s main control center.

40 MILLISECONDS

The LGN starts to build a representation of what I am looking at for the VISUAL CORTEX by combining information from both eyes.

50 MILLISECONDS

The VISUAL CORTEX registers what I am seeing outside my door.

70 MILLISECONDS

Structures in the back of my temporal lobe, called the face patches—the occipital face area (OFA ), the fusiform face area (FFA ), and the superior temporal sulcus (STS )—tell me that I am looking at a face and start to categorize its gender and age.

90 MILLISECONDS

The face patches tell me this face belongs to a single person and then compare it with faces I have seen before.



400 MILLISECONDS


By now, my brain has recruited portions of the FRONTAL , PARIETAL , and TEMPORAL lobes that store memory and emotion to discern whether Adam’s face is familiar. The AMYGDALA , where most of my emotion controls are, is also involved. Together, these areas help me recall core information about him.

[Brain diagram: the route runs from the retina to the LGN, which sits inside the thalamus, then to the visual cortex and the face patches (OFA, FFA, STS), alongside the amygdala and the frontal, parietal, and temporal lobes of the cerebral cortex, the brain's thin exterior lining.]

Machine recognition Machines that recognize faces work in a similar way. A computer-vision AI breaks an input image down into pixels. Then it identifies patterns in the pixel contrast, recognizing curves, lines, and shadows, until it detects a face. It checks the face against a database and attempts to identify a match, taking into account changes in lighting and angles.
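The database-matching step described above is commonly implemented by comparing embedding vectors: a network reduces each face to a vector, and recognition means finding the closest stored vector under a similarity threshold. The sketch below is a hedged illustration, not any specific product's pipeline: the 4-dimensional embeddings and the 0.9 threshold are made up (real systems typically use vectors of 128 or more dimensions and tuned thresholds).

```python
import math

def cosine_similarity(a, b):
    # Angle-based similarity between two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(embedding, database, threshold=0.9):
    # Compare the new face against every enrolled face; accept the best
    # match only if it clears the similarity threshold.
    name, best = None, -1.0
    for person, stored in database.items():
        sim = cosine_similarity(embedding, stored)
        if sim > best:
            name, best = person, sim
    return name if best >= threshold else "unknown"

# Hypothetical enrolled embeddings (toy 4-D vectors, invented here).
db = {"Adam": [0.9, 0.1, 0.3, 0.4], "Stranger": [0.1, 0.8, 0.2, 0.5]}

print(identify([0.88, 0.12, 0.28, 0.41], db))  # close to Adam's vector -> Adam
```

Handling "changes in lighting and angles," as the text puts it, is the embedding network's job: a good model maps the same face under different conditions to nearby vectors, so this simple distance comparison still finds the match.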

This graphic is based on work from the lab of Nancy Kanwisher and Katharina Dobs of the McGovern Institute at MIT, who have put forth one of the current leading theories on how we recognize faces.


Artificial intelligence, demystified The Algorithm newsletter

Exclusively for MIT Technology Review subscribers.

technologyreview.com/algorithm

A better understanding of how memory is formed may help us treat Alzheimer’s and learn more about the neurological risks leading to addiction and depression.

REPORT: Get up to speed on the key themes, big ideas, and players discussed in this issue.
REVIEW: Books, arts, and culture in perspective.

HOW DOES A COLLECTION OF CELLS CREATE THOUGHTS AND BEHAVIORS? WHAT IT IS:

THE

LATEST

SCIENCE

FROM MEMORIES TO BRAINS GROWN IN THE LAB We are only beginning to unwrap the mysteries of the human brain. Here’s how scientists are doing it, writes HANNAH THOMASY .

Cognitive and behavioral neuroscientists study how proteins, genes, and the structures of our brains give rise to behaviors and mental processes. How does the brain learn and remember things? How does it make decisions? How does it process and respond to the world? WHY IT MATTERS:

Understanding memory may help us treat Alzheimer’s; understanding reward-seeking may help address addiction; understanding emotions may provide new clues about preventing depression.

NHUNG LE

THE CUTTING EDGE:

There is no greater scientific mystery than the brain. It’s made mostly of water; much of the rest is largely fat. Yet this roughly three-pound blob of material produces our thoughts, memories, and emotions. It governs how we interact with the world, and it runs our body. Increasingly, scientists are beginning to unravel the complexities of how it works and understand how the 86 billion neurons in the human brain form the connections that produce ideas and feelings, as well as the ability to communicate and react. Here’s our whistle-stop tour of some of the most cutting-edge research— and why it’s important.

Sheena Josselyn, a neuroscientist at the Hospital for Sick Children in Toronto, studies how and where the brain stores memories. She says identifying the neural circuits— interconnected groups of neurons— responsible for storing specific memories could be key for treating memory disorders, because it’s not optimal to simply give someone a drug that affects the whole brain.


"We can't treat the brain like a bowl of soup—if we add in a little bit of oregano, everything will come out better," Josselyn says. "We need to understand exactly where we want to target things." To create more precisely targeted treatments, she wants to better understand the neurons and neural circuits that are important "in forming and housing and recalling a memory."

Recently, Josselyn's lab identified a new pathway that's important for retrieving older memories. This pathway leads from the hippocampus—a brain region that controls learning and memory—to the thalamus, which acts as a sort of sensory information relay station in the brain. When the researchers turned off this pathway in mice, the animals could remember an experience from the day before but not one from the previous month.

Kay Tye, a neuroscience professor at the Salk Institute, studies the neural pathways involved in learning and in emotions such as loneliness to shed light on substance abuse and anxiety. Tye's lab has identified a neural pathway that helps guide behavior when simultaneous cues signal positive and negative outcomes.

THE NEXT FRONTIER:

Once we better understand the brain regions, pathways, and neurotransmitters involved in memory, anxiety, and fear—and how these can be altered—we can develop more precise strategies to treat diseases. Q

IT'S IN YOUR GENES

WHAT IT IS:

The field of neurogenetics explores how genes affect the structure and function of the nervous system.

Mind

WHY IT MATTERS:

If we can identify the role of genes, we might be able to diagnose brain disorders more precisely and accurately, or even intervene to halt their progress. The World Health Organization reports that 450 million people currently have mental or neurological disorders.

THE CUTTING EDGE:

Steven McCarroll, director of genomic neurobiology for the Broad Institute's Stanley Center for Psychiatric Research, studies genes related to schizophrenia. In collaboration with a team of researchers, he has identified variants in a gene associated with the disorder; these variants generated more of a protein involved in tagging synapses (connections between neurons) for removal. When McCarroll and his colleagues increased the expression of the gene in mice, the mice ended up with fewer synapses. Their working memory was impaired, and their social behavior changed. Researchers think these genetic variations may be related to the synapse losses and behavioral changes observed in people with schizophrenia.

Ying-Hui Fu, a professor of neurology at the University of California, San Francisco, has identified three different gene mutations that reduce the amount of sleep people need. One of them even protects against the memory problems normally associated with sleep deprivation. Other researchers are searching for genes that keep people relatively healthy even when they carry other genes that put them at risk for early-onset Alzheimer's disease.




Report

THE NEXT FRONTIER:

By identifying how genes contribute to diseases, scientists may be able to develop treatments, perhaps by using drugs to block the action of a protein produced by a disease-causing gene or to mimic the actions of a protective one. Gene therapies are also being explored to silence the harmful genes. Such a treatment for the neurological disease amyotrophic lateral sclerosis (ALS) has been cleared for trials in the US; a trial of gene therapy for Huntington’s disease is underway. Q

ENGINEERING THE BRAIN

WHAT IT IS:

Neuroengineers are looking for ways to connect the nervous system, including the brain, to machines. Experimental devices can translate neuronal activity into text or make it move an artificial limb; some convert information from artificial sensors into nerve stimulation that the brain can understand.

WHY IT MATTERS:

Technology can now help restore the ability to communicate, feel sensations, and move in people who are paralyzed or have undergone amputations. Brain-stimulating implants may also offer new ways to treat epilepsy, chronic pain, and blindness.

THE CUTTING EDGE:

Neuroengineers at Stanford are using measurements of brain activity to help restore function in people who are paralyzed. Recently, working with a man paralyzed from the neck down, the researchers implanted two arrays of tiny electrodes in a part of his brain responsible for hand movement. As the man imagined writing letters, the scientists used machine learning to translate his brain activity into letters on a screen. Using this system, the man could write 90 letters per minute—more than doubling the previous record for typing via brain activity.

Other neuroengineers are working on prosthetics that can transmit sensory information back to the user. Luke Osborn, a neuroengineer at Johns Hopkins University, is working on ways to transmit different types of sensations in people who have undergone amputations by stimulating nerves in the limb above the amputation site. So far, the devices can transmit sensations of pressure and even mild pain. Pain sensations are a critical information source, says Osborn, letting us know when we might be doing something unsafe.

THE NEXT FRONTIER:

Devices that connect brains and computers could potentially be used not just to restore functions that have been lost but also to enhance our brains' abilities. In the future, these devices could enhance cognition, allow us to communicate brain to brain, or create ultra-realistic virtual-reality experiences incorporating all of our senses. Q

HOW TO MAKE A BRAIN

WHAT IT IS:

Developmental neuroscience explores how the structure and function of the brain change over time as an organism matures. How do individual neurons find their way to the proper place in the brain?

WHY IT MATTERS:

Understanding brain development—and what causes it to go awry—could help us address conditions like microcephaly, autism, and ADHD. And if we know how events before birth and during childhood affect the structure and function of the developing brain, we will be better able to give children the best shot at healthy development.

THE CUTTING EDGE:

Madeline Lancaster, at the Medical Research Council Laboratory of Molecular Biology in the UK, studies brain development using organoids, three-dimensional cell clusters derived from human stem cells that self-organize into a miniature, simplified—but still brainlike—organ. To more accurately model the human brain, she's creating organoids that live longer and mimic different types of brain structures. Using this approach, Lancaster has discovered that a protein called ZEB2 is critical for regulating the remarkable developmental expansion that makes human brains so much larger than ape brains. Understanding processes that govern brain size could help us better understand the causes of microcephaly and other disorders in which the fetal brain fails to develop properly.

Brain development that occurs after birth is also important. Rebecca Saxe at MIT is working to understand the brain structures and activities responsible for social cognition, which allows us to consider the mental states of other people. Saxe has discovered a particular brain region that is key; by studying how activity in this region and others changes over the course of childhood, she may be able to understand how social abilities develop. She has also found that these brain activity patterns are altered in people with autism spectrum disorders.

THE NEXT FRONTIER:

Even though researchers are starting to understand some of the processes that govern development and have identified things that can derail it, we're far from being able to intervene when such problems occur. But as we gain insights, we could someday test therapies or other ways to address these developmental issues. Q

COMPUTERS THAT IMITATE THE BRAIN

WHAT IT IS:

Computational neuroscientists use mathematical models to better understand how networks of brain cells help us interpret what we see and hear, integrate new information, create and store memories, and make decisions.

WHY IT'S IMPORTANT:

Understanding how the activity of neurons governs cognition and behavior could lead to ways to improve memory or understand disease processes.

THE CUTTING EDGE:

Terry Sejnowski, a computational neurobiologist at the Salk Institute, has built a computer model of the prefrontal cortex and analyzed its performance on a task in which a person (or machine) has to sort cards according to a rule that's always changing. While humans are great at adapting, machines generally struggle. But Sejnowski's computer, which imitates information flow patterns observed in the brain, performed well on this task. This research could help machines "think" more like humans and adapt more quickly to new conditions.

Aude Oliva, the MIT director of the MIT-IBM Watson AI Lab, uses computational tools to model and predict how brains perceive and remember visual information. Her research shows that different images result in certain patterns of activity both in the monkey cortex and in neural network models, and that these patterns predict how memorable a certain image will be.

THE NEXT FRONTIER:

Research like Sejnowski's may inspire "smarter" machines, but it could also help us understand disorders in which the function of the prefrontal cortex is altered, including schizophrenia, dementia, and the effects of head trauma.

WHY DO THINGS FALL APART?

According to the Alzheimer's Association, as many as 6.2 million Americans have the disease.

WHAT IT IS:

Researchers are trying to determine the genetic and environmental risk factors for neurodegenerative diseases, as well as the diseases’ underlying mechanisms. WHY IT’S IMPORTANT:

Improving prevention, early detection, and treatment for diseases like Alzheimer's, Parkinson's, Huntington's, chronic traumatic encephalopathy, and ALS would benefit millions of people around the world.

THE CUTTING EDGE:

Yakeel Quiroz, at Massachusetts General Hospital, studies changes in brain structure and function that occur before the onset of Alzheimer's symptoms. She's looking for biomarkers that could be used for early detection of the disease and trying to pinpoint potential targets for therapeutics. One potential biomarker of early-onset Alzheimer's that she's found—a protein called NfL—is elevated in the blood more than two decades before symptoms appear. Quiroz has also identified a woman with a protective genetic mutation that kept her from developing cognitive impairments and brain degeneration even though her brain showed high levels of amyloid, a protein implicated in Alzheimer's development. Studying the effects of this beneficial mutation could lead to new therapies.

Researchers at the Early Detection of Neurodegenerative Diseases initiative in the United Kingdom are analyzing whether digital data collected by smartphones or wearables could give early warnings of disease before symptoms develop. One of the initiative's projects—a partnership with Boston University—will collect data using apps, activity tracking, and sleep tracking in people with and without dementia to identify possible digital signatures of disease.

THE NEXT FRONTIER:

As we learn more about the underlying causes of neurodegenerative diseases, researchers are trying to translate this knowledge into effective treatments. Advanced clinical trials targeting newly understood mechanisms of disease are currently under way for many neurodegenerative disorders, including Alzheimer’s, Parkinson’s, and ALS. Q

IT'S ALL CONNECTED

WHAT IT IS:

Connectomics researchers map and analyze neuronal connections, creating a wiring diagram for the brain.

WHY IT'S IMPORTANT:

Understanding these connections will shed light on how the brain functions; many projects are exploring how macro-scale connections are altered during development, aging, or disease.

THE CUTTING EDGE:

Mapping these connections isn't easy—there may be as many as 100 trillion connections in the human brain, and they're all tiny. Researchers need to find the best ways to label specific neurons and track the connections they make to other neurons in remote parts of the brain, refine the technology to collect these images, and figure out how to analyze the mountains of data that this process produces.

A collaboration that included Google computer scientist Viren Jain and Harvard neuroscientist Jeff Lichtman recently completed the most detailed map of a section of the human brain ever produced. By imaging one cubic millimeter of brain at the nanoscale level, they mapped 50,000 cells and more than 130 million synapses, resulting in 1.4 petabytes of data. Previously, Lichtman had helped develop Brainbow, a technique that allows colored labeling of individual neurons in living animals, enabling scientists to trace neuronal connections.

Sebastian Seung, a computational neuroscientist at Princeton, pioneered a technique that uses crowdsourcing and machine learning to turn raw images into usable three-dimensional neuronal maps, with synapses identified and cell types classified. In the first project, called EyeWire, citizen scientists helped map neurons in the retina. The current project, FlyWire, is an ambitious effort to map neuronal connections in the entire brain of a fruit fly.

The Allen Institute in Seattle, an important player in brain connectivity research, makes its brain maps available to the public. A mouse brain connectivity atlas that it's compiled includes cell-type-specific maps of connections between the thalamus (a sensory and motor relay station) and the cortex.

THE NEXT FRONTIER:

Mapping the individual neuronal connections in the human brain is no small feat. There are also variations both between and within individuals—connections will likely change as our brains develop, learn, and age. Creating individual microscale brain maps for everyone would likely provide us with an unprecedented level of insight, but for now that’s a far-off dream. Q

MENTAL HEALTH

WHAT IT IS:

Why and how psychiatric illnesses and brain disorders develop is still largely a mystery. Neuroscientists use neuroimaging, genetics, biochemistry, machine learning, behavioral studies, and more to understand the molecular and environmental causes.

WHY IT'S IMPORTANT:

Mental illness is a leading cause of disability worldwide. Some 264 million people have depression, 45 million have bipolar disorder, and 20 million have schizophrenia. Around the world, some 10% to 20% of children and adolescents have a mental-health condition.

THE CUTTING EDGE:

Satrajit Ghosh, a neuroscientist at MIT, is using speech patterns and neuroimaging to improve mental-health assessments in humans. In the short term, Ghosh hopes this can be used to improve diagnosis, and there's already some evidence that it can help predict which patients will respond to which therapies. But in the future, Ghosh says, "we want to be able to measure something, predict some future state, and … adjust behavior on the fly so that you will never hit that state."

Therapies using brain stimulation are providing new treatment options for obsessive-compulsive disorder (OCD). Deep-brain stimulation—in which electrodes are implanted in the brain—offers substantial relief for some people whose OCD doesn't respond to other treatments. Less invasive forms of neural stimulation have shown promising early results as well. Just five days of noninvasive brain stimulation reduced obsessive-compulsive behaviors for three months in people who displayed some OCD symptoms.

Researchers are making strides in understanding and treating substance-use disorders, identifying brain connectivity patterns that increase or decrease the risk of developing an addiction. Perhaps someday, neural pathways that help people resist addiction could be reinforced therapeutically.

Drugs once classified as recreational are being explored for the treatment of mental illnesses. In 2019, the US Food and Drug Administration approved esketamine for treatment-resistant depression, the first time in 30 years that a drug with a new mechanism of action had been approved for the condition. More recently, a stage 3 clinical trial showed that people with post-traumatic stress disorder who received MDMA (a.k.a. Ecstasy) along with traditional therapy improved substantially compared with those who received therapy alone. Psilocybin—the active component in magic mushrooms—is in clinical trials for the treatment of depression, alcohol-use disorder, OCD, anorexia, and more.

THE NEXT FRONTIER:

Someday, patients with brain disorders may be assessed and treated based on their genetics, along with biomarkers and brain-activity scans. Researchers are exploring how genetics could guide treatment choices for patients with depression, how connectivity in brain regions like the amygdala could lead to a more personalized understanding of disorders related to fear and anxiety, and how blood-based biomarkers could track treatment response in depression and bipolar disorder. Q

COMPUTING

IS YOUR BRAIN A COMPUTER?

We asked experts for their best arguments in this longstanding debate. By Dan Falk

Dan Falk is a science journalist based in Toronto. His books include The Science of Shakespeare and In Search of Time.

It's an analogy that goes back to the dawn of the computer era: ever since we discovered that machines could solve problems by manipulating symbols, we've wondered if the brain might work in a similar fashion. Alan Turing, for example, asked what it would take for a machine to "think"; writing in 1950, he predicted that by the year 2000 "one will be able to speak of machines thinking without expecting to be contradicted." If machines could think like human brains, it was only natural to wonder if brains might work like machines. Of course, no one would mistake the gooey material inside your brain for the CPU inside your laptop—but beyond the superficial differences, it was suggested, there might be important similarities.

Today, all these years later, experts are divided. Although everyone agrees that our biological brains create our conscious minds (see page 8), they're split on the question of what role, if any, is played by information processing—the crucial similarity that brains and computers are alleged to share. While the debate may sound a bit academic, it actually has real-world implications: the effort to build machines with human-like intelligence depends at least in part on understanding how our own brains actually work, and how similar—or not—they are to machines. If brains could be shown to function in a way that was radically different from a computer, it would call into question many traditional approaches to AI.

The question may also shape our sense of who we are. As long as brains, and the minds they enable, are thought of as unique, humankind might imagine itself to be very special indeed. Seeing our brains as nothing more than sophisticated computational machinery could burst that bubble. We asked the experts to tell us why they think we should—or shouldn't—think of the brain as being "like a computer."

AGAINST

The brain can't be a computer because it's biological. Everyone agrees that the actual stuff inside a brain—"designed" over billions of years by evolution—is very different from what engineers at IBM and Google put inside your laptop or smartphone. For starters, brains are analog. The brain's billions of neurons behave very differently from the digital switches and logic gates in a digital computer. "We've known since the 1920s that neurons don't just turn on and off," says biologist Matthew Cobb of the University of Manchester in the UK. "As the stimulus increases, the signal increases," he says. "The way a neuron behaves when it's stimulated is different from any computer that we've ever built."

Blake Richards, a neuroscientist and computer scientist at McGill University in Montreal, agrees: brains "process everything in parallel, in continuous time" rather than in discrete intervals, he says. In contrast, today's digital computers employ a very specific design based on the original von Neumann architecture. They work largely by going step by step through a list of instructions encoded in a memory bank, while accessing information stored in discrete memory slots. "None of that has any resemblance to what goes on in your brain," says Richards. (And yet, the brain keeps surprising us: in recent years, some neuroscientists have argued that even individual neurons can perform certain kinds of computations, comparable to what computer scientists call an XOR, or "exclusive or," function.)
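That XOR claim can be made concrete with a toy simulation. The sketch below is illustrative, not a model of any real neuron: a brute-force search shows that no classic linear threshold unit reproduces XOR's truth table, while a unit whose output peaks at intermediate input (loosely analogous to the nonmonotonic dendritic responses those researchers describe) can.

```python
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def threshold_unit(w1, w2, b, x1, x2):
    # Classic artificial neuron: fires iff the weighted input crosses a threshold.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def bump_unit(w1, w2, lo, hi, x1, x2):
    # Nonmonotonic unit: fires only for an intermediate range of summed input,
    # a cartoon of a graded dendritic response.
    s = w1 * x1 + w2 * x2
    return 1 if lo < s < hi else 0

def solves_xor(unit):
    return all(unit(x1, x2) == y for (x1, x2), y in XOR.items())

grid = [n / 2 for n in range(-6, 7)]  # parameters from -3.0 to 3.0 in 0.5 steps

threshold_ok = any(
    solves_xor(lambda a, c: threshold_unit(w1, w2, b, a, c))
    for w1, w2, b in itertools.product(grid, repeat=3)
)
bump_ok = any(
    solves_xor(lambda a, c: bump_unit(w1, w2, lo, hi, a, c))
    for w1, w2, lo, hi in itertools.product(grid, repeat=4)
    if lo < hi
)

print(threshold_ok)  # False: XOR is not linearly separable
print(bump_ok)       # True, e.g. w1 = w2 = 1 with firing band 0.5 < s < 1.5
```

The point is qualitative: what a single unit can compute depends on its transfer function, not just its wiring.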

FOR

Sure it can! The actual structure is beside the point. But perhaps what brains and computers do is fundamentally the same, even if the architecture is different. "What the brain seems to be doing is quite aptly described as information processing," says Megan Peters, a cognitive scientist at the University of California, Irvine. "The brain takes spikes [brief bursts of activity that last about a tenth of a second] and sound waves and photons and converts it into neural activity—and that neural activity represents information."

Richards, who agrees with Cobb that brains work very differently from today's digital computers, nonetheless believes the brain is, in fact, a computer. "A computer, according to the usage of the word in computer science, is just any device which is capable of implementing many different computable functions," says Richards. By that definition, "the brain is not simply like a computer. It is literally a computer."

Michael Graziano, a neuroscientist at Princeton University, echoes that sentiment. "There's a more broad concept of what a computer is, as a thing that takes in information and manipulates it and, on that basis, chooses outputs. And a 'computer' in this more general conception is what the brain is; that's what it does."

But Anthony Chemero, a cognitive scientist and philosopher at the University of Cincinnati, objects. "What seems to have happened is that over time, we've watered down the idea of 'computation' so that it no longer means anything," he says. "Yes, your brain does stuff, and it helps you know things—but that's not really computation anymore."




FOR

Traditional computers might not be brain-like, but artificial neural networks are. All of the biggest breakthroughs in artificial intelligence today have involved artificial neural networks, which use "layers" of mathematical processing to assess the information they're fed. The connections between the layers are assigned weights (roughly, a number that corresponds to the importance of each connection relative to the others—think of how a professor might work out a final grade based on a series of quiz results but assign a greater weight to the final quiz). Those weights are adjusted as the network is exposed to more and more data, until the last layer produces an output. In recent years, neural networks have been able to recognize faces, translate languages, and even mimic human-written text in an uncanny way.

"An artificial neural network is actually basically just an algorithmic-level model of a brain," says Richards. "It is a way of trying to model the brain without reference to the specific biological details of how the brain works." Richards points out that this was the explicit goal of neural-network pioneers like Frank Rosenblatt, David Rumelhart, and Geoffrey Hinton: "They were specifically interested in trying to understand the algorithms that the brain uses to implement the functions that brains successfully compute."

Scientists have recently developed neural networks whose workings are said to more closely resemble those of actual human brains. One such approach, predictive coding, is based on the premise that the brain is constantly trying to predict what sensory inputs it's going to receive next; the idea is that "keeping up" with the outside world in this way boosts its chances for survival—something that natural selection would have favored. It's an idea that resonates with Graziano. "The purpose of having a brain is movement—being able to interact physically with the external world," he says. "That's what the brain does; that's the heart of why you have a brain. It's to make predictions."
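The weighted-sum-and-adjust idea behind the grade analogy can be sketched in a few lines. This is a toy (a single linear unit trained with a simple error-correction rule), not the architecture of any system described here; the quiz scores and the hidden weighting are made up for the example.

```python
import random

# A weighted sum: each quiz score is multiplied by its weight,
# with the final quiz counting double, as in the professor analogy.
quizzes = [0.8, 0.9, 0.6]
weights = [0.25, 0.25, 0.50]
grade = sum(q * w for q, w in zip(quizzes, weights))
print(round(grade, 3))  # 0.725

# "Training" means nudging weights to shrink the error on examples.
# Here a single linear unit recovers a hidden weighting from data alone.
random.seed(0)
hidden = [0.2, 0.3, 0.5]   # the weighting the unit should discover
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    x = [random.random() for _ in range(3)]               # random "quiz scores"
    target = sum(xi * hi for xi, hi in zip(x, hidden))    # true weighted grade
    pred = sum(xi * wi for xi, wi in zip(x, w))           # unit's current guess
    err = target - pred
    w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]     # small corrective step

print([round(wi, 2) for wi in w])  # converges toward [0.2, 0.3, 0.5]
```

Real networks stack many such units in layers and adjust millions of weights the same general way, by repeated small corrections against data.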

AGAINST

Even if brains work like neural networks, they're still not information processors. Not everyone thinks neural networks support the notion that our brains are like computers. One problem is that they are inscrutable: when a neural network solves a problem, it may not be at all clear how it solved the problem, making it harder to argue that its method was in any way brain-like. "The artificial neural networks that people like Hinton are working on now are so complicated that even if you try to analyze them to figure out what parts were storing information about what, and what counts as the manipulation of that information, you're not going to be able to pull that out," says Chemero. "The more complicated they get, the more intractable they become."

But defenders of the brain-as-computer analogy say that doesn't matter. "You can't point to the 1s and 0s," says Graziano. "It's distributed in a pattern of connectivity that was learned among all those artificial neurons, so it's hard to 'talk shop' about exactly what the information is, where it's stored, and how it's encoded—but you know it's there."

FOR

The brain has to be a computer; the alternative is magic. If you're committed to the idea that the physical brain creates the mind, then computation is the only viable path, says Richards. "Computation just means physics," he says. "The only other option is that you're proposing some kind of magical 'soul' or 'spirit' or something like that ... There's literally only two options: either you're running an algorithm or you're using magic."

AGAINST

The brain-as-computer metaphor can't explain how we derive meaning. No matter how sophisticated a neural network may be, the information that flows through it doesn't actually mean anything, says Romain Brette, a theoretical neuroscientist at the Vision Institute in Paris. A facial-recognition program, for example, might peg a particular face as being mine or yours—but ultimately it's just tracking correlations between two sets of numbers. "You still need someone to make sense of it, to think, to perceive," he says. Which doesn't mean that the brain doesn't process information—perhaps it does. "Computation is probably very important in the explanation of the mind and intelligence and consciousness," says Lisa Miracchi, a philosopher at the University of Pennsylvania. Still, she emphasizes that what the brain does and what the mind does are not necessarily the same. And even if the brain is computer-like, the mind may not be: "Mental processes are not computational processes, because they're inherently meaningful, whereas computational processes are not."

So where does that leave us? The question of whether the brain is or is not like a computer appears to depend partly on what we mean by "computer." But even if the experts could agree on a definition, the question seems unlikely to be resolved anytime soon—perhaps because it is so closely tied to thorny philosophical problems, like the so-called mind-body problem and the puzzle of consciousness. We argue about whether the brain is like a computer because we want to know how minds came to be; we want to understand what allows some arrangements of matter, but not others, not only to exist but to experience.

5 QUESTIONS

Can we see memories in the brain?

Neuroscientists wield optogenetics and imaging technologies to understand memories—and to manipulate them. By Joshua Sariñana

Green staining shows cells that store long-term fear memories. Researchers can use an algae protein that's sensitive to blue light to artificially activate them.

There are 86 billion neurons in the human brain, each with thousands of connections, giving rise to hundreds of trillions of synapses. Synapses—the connection points between neurons—store memories. The overwhelming number of neurons and synapses in our brains makes finding the precise location of a specific memory a formidable scientific challenge. Figuring out how memories form may ultimately help us learn more about ourselves and keep our mental acuity intact.

Memory helps shape our identities, and memory impairment may indicate a brain disorder. Alzheimer’s disease robs individuals of their memories by destroying synapses; addiction hijacks the brain’s learning and memory centers; and some mental health conditions, like depression, are associated with memory impairment. In many ways, neuroscience has revealed the nature of memories, but it has also upended the very notion of what memories are. The five questions below speak to how much we’ve learned and what mysteries remain.

What tools let us see memories?

At the end of the 19th century, tabletop microscopes made it possible to identify individual neurons, enabling scientists to draw stunningly detailed representations of the brain. By the mid-20th century, powerful electron microscopes could show synaptic structures just tens of nanometers wide (about the width of a virus particle). At the turn of the 21st century, neuroscientists used two-photon microscopes to watch synapses form in real time while mice learned.


MAKING MEMORIES

Neuroscientists have observed the basic outline of memories in the brain for decades. However, only recently could they see the enduring physical representation of a memory, which is called a memory engram. An engram is stored within a network of connected neurons, and neurons holding the engram can be made to glow so that they are visible through special microscopes. Today, neuroscientists can manipulate memory engrams by artificially activating their underlying networks and inserting new information. These techniques are also shedding light on how different types of memory work and where each is recorded in the brain. Episodic autobiographical memory deals with what happened, where, and when. It relies on the hippocampus, a seahorse-shaped structure. Procedural memories, supported by the basal ganglia, let us remember how to carry out habitual behaviors like riding a bike. This region malfunctions in those with addiction. Our ability to recall facts, like state capitals, is thanks to semantic memory, which is stored in the cortex.


Incredible advancements in genetics have also made it possible to swap genes in and out of the brain to link them to memory function. Scientists have used viruses to insert a green fluorescent protein found in jellyfish into mouse brains, causing neurons to light up during learning. They’ve also used an algae protein called channelrhodopsin (ChR2) to artificially activate neurons. The protein is sensitive to blue light, so when it’s inserted into neurons, the neurons can be turned on and off with a blue laser—a technique known as optogenetics. With this technology, which was pioneered by researchers at Stanford almost two decades ago, neuroscientists can artificially activate memory engram cells in lab animals. New techniques also make it possible to study how nerve impulses translate outside information to our inner worlds. To watch this process in the brain, neuroscientists use tiny electrodes to record the impulses, which last for just a few milliseconds. Analytical tools such as neural decoding algorithms can then weed out noise to reveal patterns that indicate a memory center in the brain. Open-source software kits allow more neuroscience laboratories to conduct such research.

What do these tools tell us about how memories are created and stored? How neurons become part of a memory engram remained a mystery until recently. When neuroscientists looked closer, they were surprised to see that neurons compete with one another to store memories. By inserting genes into the brain to increase or decrease neuron excitability, the researchers learned that the most excited neurons in the area will become part of the engram. These neurons will also actively inhibit their neighbors from becoming part of another engram for a short period of time. This competition likely helps memories form and shows that where memories are allocated in the brain is not random.

In other experiments, researchers found that neural networks hold on to forgotten memories. Mice injected with a cocktail of protein inhibitors develop amnesia, likely forgetting information because their synapses wither away. But the researchers discovered that these memories weren’t forever lost—the neurons still held the information, though without synapses, it couldn’t be retrieved (at least not without optogenetic stimulation). Mice with Alzheimer’s disease showed similar memory loss.

Another finding has to do with how dreaming strengthens our memories. Neuroscientists had long thought that as the day’s experiences replayed in the form of nerve impulses during sleep, those memories slowly transferred out of the hippocampus and to the cortex so that the brain could extract information to create rules about the world. They also knew that some rules were synthesized by the cortex more quickly, but existing models couldn’t explain how this happened. Recently, though, researchers have used optogenetic tools in animal studies to show that the hippocampus also works to establish these rapidly forming cortical memories. “The hippocampus helps to rapidly create immature memory engrams in the cortex,” says Takashi Kitamura, an assistant professor at the University of Texas Southwestern Medical Center. “The hippocampus still teaches the cortex, but without optogenetic tools we might not have observed the immature engrams.”
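The competition for allocation described above can be caricatured in a few lines of code. This toy model, with its made-up excitability values and winner-take-all rule, is only a sketch of the principle, not the researchers' model:

```python
import random

# Toy model of memory allocation: the most excitable neurons "win" a slot in
# the engram, then damp their neighbors' excitability for a while, biasing
# where the next memory lands. All numbers are illustrative assumptions.
random.seed(1)

N_NEURONS = 100
ENGRAM_SIZE = 10

excitability = {n: random.random() for n in range(N_NEURONS)}

def allocate_engram(excitability, inhibition=0.5):
    """The top-k most excitable neurons join the engram; the rest are suppressed."""
    ranked = sorted(excitability, key=excitability.get, reverse=True)
    winners = set(ranked[:ENGRAM_SIZE])
    after = {n: (e if n in winners else e * inhibition)
             for n, e in excitability.items()}
    return winners, after

engram_a, excitability = allocate_engram(excitability)
# A second memory arriving while the inhibition is still active lands on the
# same highly excitable winners, so in this crude version the overlap is total.
engram_b, excitability = allocate_engram(excitability)
print(len(engram_a & engram_b))  # 10
```

Because the winners keep their edge while everyone else is suppressed, a second memory formed soon afterward lands on many of the same neurons, which is one way such inhibition could make allocation non-random.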

Can memories be manipulated? Memories are not as stable as they might feel. By their very nature, they must be amenable to change, or learning would be impossible. Nearly a decade ago, MIT researchers genetically altered mice so that when their neurons were active during learning, this activity turned on the ChR2 gene, which was tethered to a green fluorescent protein. By seeing which neurons fluoresced, neuroscientists could identify which ones were involved in learning. And they could reactivate specific memories by shining light on the neurons expressing ChR2. With this ability, the MIT researchers inserted a false memory into mouse brains.


First they placed the mice in a triangular box, which activated specific ChR2 genes and neurons. Then they put the mice in a square box and administered shocks to their feet while shining a light on the ChR2 neurons associated with the first environment. Eventually, the mice associated the memory of the triangle box with the shocks even though they were shocked only while in the square box. “The animals were fearful of an environment that, technically speaking, never had anything ‘bad’ happen in it,” says Steve Ramirez, a coauthor of the study who is now an assistant professor of neuroscience at Boston University. It’s not feasible to use such techniques involving fiber-optic cables and lasers to experiment on the human brain, but the results on the brains of mice suggest how easily memories can be manipulated.

Can we see memories outside of the brain? Human memories can be visually reconstructed using brain scanners. In research conducted by Brice Kuhl, who is now an assistant professor of cognitive neuroscience at the University of Oregon, people were given images to view, and their brains were scanned with an MRI machine to measure which regions were active. An algorithm was then trained to guess what the person was viewing and reconstruct an image based on this activity. The algorithm also reconstructed images from participants who were asked to hold one of the images they viewed in their minds. There’s much room for improvement in these reconstructed images, but this work showed that neuroimaging and reconstruction algorithms can indeed show the content of human memories for others to see. Technology has let neuroscientists peer into the brain and see the tiny glowing traces of memory. Yet the discovery that experiences and knowledge can be implanted or externalized has also given memory a different meaning. What does this mean for our sense of who we are? Joshua Sariñana is a neuroscientist, writer, and fine art photographer.


DRAW ME A PICTURE

A century ago, scientists claimed that language shapes how we see the world. The idea fell out of favor, but recent research suggests it was right all along.

For example, Russian has more words to describe different types of blue than English, and this helps speakers distinguish between shades.

We used to think that our brains barely changed once we were adults. Now we know they continue to form new connections throughout our lives. That means we can still learn as we age, and even change things like personality traits that we once thought were fixed for life. And yes, that also means it’s never too late to learn to play the drums.

It was once believed that animals rely purely on instinct and have little inner life or complex cognition. But that’s not really true. In fact, some species can do things that show they “think about thinking,” such as recognizing their confidence in their own knowledge and adjusting their behavior.

And it’s not just other great apes, or smart animals like elephants, that do this. Even spiders seem to show elements of sophisticated forethought and planning. Gulp.

THE BRAIN, MISUNDERSTOOD


For decades, psychologists assumed that people with a high IQ would be better at judging information rationally, simply because they were better at stuff like abstract thinking and learning. Nope. Actually, research shows that having a high IQ doesn’t necessarily mean you’re better at seeing different perspectives in an argument, or making sure you don’t interpret information on the basis of your own preconceptions. And it is these kinds of skills (rather than IQ) that protect us from serious errors of judgment. In other words, you really can be very smart but very stupid at the same time.

Think you have an objective view of the world around you? Think again.

Your perceptions are continuously shaped by things like your memories, expectations, and moods.

They change what you see, hear, feel, and taste.

Look at this picture of a banana, for example.

What our brain expects a banana to look like will affect the way it processes colors.

That means you’d see this second banana as yellowish too! Weird.

This “predictive coding” helps us interpret all the messy sensory data our brains receive.

And it means you can still navigate your bedroom in the half light. Handy!

Our brains are so complex it’s really no wonder we don’t always understand how they work.

By David Robson / Comics by David Biskup

It is often said that if the brain were so simple that we could understand it, we would be so simple that we couldn’t. That view is pessimistic: philosophers, psychologists, and neuroscientists have made enormous strides in describing and explaining the brain’s workings.

But there have been some false starts and dead ends along the way, and many debunked theories continue to linger in the popular imagination despite having no basis in reality. Read our comic to discover five things we have got wrong about the brain.

Mind

Life under covid has messed with our brains. Luckily, they were designed to bounce back.

By Dana Smith / Illustrations by Nicolás Ortega



Pandemic brain

Orgies are back. Or at least that’s what advertisers want you to believe. One commercial for chewing gum—whose sales tanked during 2020 because who cares what your breath smells like when you’re wearing a mask—depicts the end of the pandemic as a raucous free-for-all with people embracing in the streets and making out in parks.

The reality is a little different. Americans are slowly coming out of the pandemic, but as they reemerge, there’s still a lot of trauma to process. It’s not just our families, our communities, and our jobs that have changed; our brains have changed too. We’re not the same people we were 18 months ago. During the winter of 2020, more than 40% of Americans reported symptoms of anxiety or depression, double the rate of the previous year. That number dropped to 30% in June 2021 as vaccinations rose and covid-19 cases fell, but that still leaves nearly one in three Americans struggling with their mental health. In addition to diagnosable symptoms, plenty of people reported experiencing pandemic brain fog, including forgetfulness, difficulty concentrating, and general fuzziness. Now the question is, can our brains change back? And how can we help them do that?

Every experience changes your brain, either helping you to gain new synapses—the connections between brain cells—or causing you to lose them. This is known as neuroplasticity, and it’s how our brains develop through childhood and adolescence. Neuroplasticity is how we continue to learn and create memories in adulthood, too, although our brains become less flexible as we get older. The process is vital for learning, memory, and general healthy brain function. But many experiences also cause the brain to lose cells and connections that you wanted or needed to keep. For instance, stress—something almost everyone has experienced during the pandemic—can not only destroy existing synapses but also inhibit the growth of new ones. One way stress does this is by triggering the release of hormones called glucocorticoids, most notably cortisol. In small doses, glucocorticoids help the brain and

body respond to a stressor (think: fight or flight) by changing heart rate, respiration, inflammation, and more to increase one’s odds of survival. Once the stressor is gone, the hormone levels recede. With chronic stress, however, the stressor never goes away, and the brain remains flooded with the chemicals. In the long term, elevated levels of glucocorticoids can cause changes that may lead to depression, anxiety, forgetfulness, and inattention. Scientists haven’t been able to directly study these types of physical brain changes during the pandemic, but they can make inferences from the many mental-health surveys conducted over the last 18 months and what they know about stress and the brain from years of previous research. For example, one study showed that people who experienced financial stressors, like a job loss or economic insecurity, during the pandemic were more likely to develop depression. One of the brain areas hardest hit by chronic stress is the hippocampus, which is important for both memory and mood. These financial stressors would have flooded the hippocampus with glucocorticoids for months, damaging cells, destroying synapses, and ultimately shrinking the region. A smaller hippocampus is one of the hallmarks of depression. Chronic stress can also alter the prefrontal cortex, which is the brain’s executive control center, and the amygdala, the fear and anxiety hub. Too many glucocorticoids for too long can impair the connections both within the prefrontal cortex and between it and the amygdala. As a result, the prefrontal cortex loses its ability to control the amygdala, leaving the fear and anxiety center to run unchecked. This pattern of brain activity (too much action in the amygdala and not enough communication with the prefrontal cortex) is common in people who have post-traumatic stress disorder (PTSD), another condition that spiked during the pandemic, particularly among frontline health-care workers. 
The social isolation brought on by the pandemic was also likely detrimental to the brain’s structure and function. Loneliness has been linked to reduced volume in the hippocampus and amygdala, as well as decreased connectivity in the prefrontal cortex. Perhaps unsurprisingly, people who lived alone during the pandemic experienced higher rates of depression and anxiety. Finally, damage to these brain areas affects people not only emotionally but cognitively as well. Many psychologists have attributed pandemic brain fog to chronic stress’s impact on the prefrontal cortex, where it can impair concentration and working memory.

So that’s the bad news. The pandemic hit our brains hard. These negative changes ultimately come down to a stress-induced decrease in neuroplasticity—a loss of cells and synapses instead of the growth of new ones. But don’t despair; there’s some good news. For many people, the brain can spontaneously recover its plasticity once the stress goes away. If life begins to return to normal, so might our brains. “In a lot of cases, the changes that occur with chronic stress actually abate over time,” says James Herman, a professor of psychiatry and behavioral neuroscience at the University of Cincinnati. “At the level of the brain, you can see a reversal of a lot of these negative effects.” In other words, as your routine returns to its pre-pandemic state, your brain should too. The stress hormones will recede as vaccinations continue and the anxiety about dying from a new virus (or its killing someone else) subsides. And as you venture out into the world again, all the little things that used to make you happy or challenged you in a good way will do so again, helping your brain to repair the lost connections that those behaviors had once built. For example, just as social isolation is bad for the brain, social interaction is especially good for it. People with larger social networks have more volume and connections in the prefrontal cortex, amygdala, and other brain regions.

Even if you don’t feel like socializing again just yet, maybe push yourself a little anyway. Don’t do anything that feels unsafe, but there is an aspect of “fake it till you make it” in treating some mental illness. In clinical speak, it’s called behavioral activation, which emphasizes getting out and doing things even if you don’t want to. At first, you might not experience the same feelings of joy or fun you used to get from going to a bar or a backyard barbecue, but if you stick with it, these activities will often start to feel easier and can help lift feelings of depression. Rebecca Price, an associate professor of psychiatry and psychology at the University of Pittsburgh, says behavioral activation might work by enriching your environment, which scientists know leads to the growth of new brain cells, at least in animal models. “Your brain is going to react to the environment that you present to it, so if you are in a deprived, not-enriched environment because you’ve been stuck at home alone, that will probably cause some decreases in the pathways that are available,” she says. “If you create for yourself a more enriched environment where you have more possible inputs and interactions and stimuli, then [your brain] will respond to that.” So get off your couch and go check out a museum, a botanical garden, or an outdoor concert. Your brain will thank you.

Exercise can help too. Chronic stress depletes levels of an important chemical called brain-derived neurotrophic factor (BDNF), which helps promote neuroplasticity. Without BDNF, the brain is less able to repair or replace the cells and connections that are lost to chronic stress. Exercise increases levels of BDNF, especially in the hippocampus and prefrontal cortex, which at least partially explains why exercise can boost both cognition and mood. BDNF does not just help new synapses grow; it may help produce new neurons in the hippocampus, too.
For decades, scientists thought that neurogenesis in humans stopped after adolescence, but recent research has shown signs of neuron growth well into old age (though the issue is still hotly contested). Regardless of whether it works through neurogenesis or not, exercise has been shown time and again to improve mood, attention, and cognition; some therapists even prescribe it to treat depression and anxiety. Time to get out there and start sweating.

There’s a lot of variation in how people’s brains recover from stress and trauma, and not everyone will bounce back from the pandemic so easily. “Some people just seem to be more vulnerable to getting into a chronic state where they get stuck in something like depression or anxiety,” says Price. In these situations, therapy or medication might be required. Some scientists now think that psychotherapy for depression and anxiety works at least in part by changing brain activity, and that getting the brain to fire in new patterns is a first step to getting it to wire in new patterns. A review paper that assessed psychotherapy for different anxiety disorders found that the treatment was most effective in people who displayed more activity in the prefrontal cortex after several weeks of therapy than they did beforehand—particularly when the area was exerting control over the brain’s fear center.

Other researchers are trying to change people’s brain activity using video games. Adam Gazzaley, a professor of neurology at the University of California, San Francisco, developed the first brain-training game to receive FDA approval for its ability to treat ADHD in kids. The game has also been shown to improve attention span in adults. What’s more, EEG studies revealed greater functional connectivity involving the prefrontal cortex, suggesting a boost in neuroplasticity in the region. Now Gazzaley wants to use the game to treat people with pandemic brain fog. “We think in terms of covid recovery there’s an incredible opportunity here,” he says. “I believe that attention as a system can help across the breadth of [mental-health] conditions and symptoms that people are suffering, especially due to covid.”

While the effects of brain-training games on mental health and neuroplasticity are still up for debate, there’s abundant evidence for the benefits of psychoactive medications. In 1996, psychiatrist Yvette Sheline, now a professor at the University of Pennsylvania, was the first to show that people with depression had significantly smaller hippocampi than non-depressed people, and that the size of that brain region was related to how long and how severely they had been depressed. Seven years later, she found that if people with depression took antidepressants, they had less volume loss in the region. That discovery shifted many researchers’ perspectives on how traditional antidepressants, particularly selective serotonin reuptake inhibitors (SSRIs), help people with depression and anxiety. As their name suggests, SSRIs target the neurochemical serotonin, increasing its levels in synapses. Serotonin is involved in several basic bodily functions, including digestion and sleep. It also helps to regulate mood, and scientists long assumed that was how the drugs worked as antidepressants. However, recent research suggests that SSRIs may also have a neuroplastic effect by boosting BDNF, especially in the hippocampus, which could help restore healthy brain function in the area. One of the newest antidepressants approved in the US, ketamine, also appears to increase BDNF levels and promote synapse growth in the brain, providing additional support for the neuroplasticity theory. The next frontier in pharmaceutical research for mental illness involves experimental psychedelics like MDMA and psilocybin, the active ingredient in hallucinogenic mushrooms. Some researchers think that these drugs also enhance plasticity in the brain and, when paired with psychotherapy, can be a powerful treatment.

WHILE EVERYONE’S BRAIN IS DIFFERENT, TRY THESE ACTIVITIES TO GIVE YOUR BRAIN THE BEST CHANCE OF RECOVERING FROM THE PANDEMIC.

ONE: Get out and socialize. People with larger social networks have more volume and connectivity in the prefrontal cortex, amygdala, and other brain regions.

TWO: Try working out. Exercise increases levels of a protein called BDNF that helps promote neuroplasticity and may even contribute to the growth of new neurons.

THREE: Talk to a therapist. Therapy can help you view yourself from a different perspective, and changing your thought patterns can change your brain patterns.

FOUR: Enrich your environment. Get out of your pandemic rut and stimulate your brain with a trip to the museum, a botanical garden, or an outdoor concert.

FIVE: Take some drugs—but make sure they’re prescribed! Both classic antidepressant drugs, such as SSRIs, and more experimental ones like ketamine and psychedelics are thought to boost neuroplasticity.

SIX: Strengthen your prefrontal cortex by exercising your self-control. If you don’t have access to an (FDA-approved) attention-boosting video game, meditation can have a similar benefit.

Not all the changes to our brains from the past year are negative. Neuroscientist David Eagleman, author of the book Livewired: The Inside Story of the Ever-Changing Brain, says that some of those changes may actually have been beneficial. By forcing us out of our ruts and changing our routines, the pandemic may have caused our brains to stretch and grow in new ways. “These past 14 months have been full of tons of stress, anxiety, depression—they’ve been really hard on everybody,” Eagleman says. “The tiny silver lining is from the point of view of brain plasticity, because we have challenged our brains to do new things and find new ways of doing things. If we hadn’t experienced 2020, we’d still have an old internal model of the world, and we wouldn’t have pushed our brains to make the changes they’ve already made. From a neuroscience point of view, this is the most important thing you can do—constantly challenge it, build new pathways, find new ways of seeing the world.”

Dana Smith is a health and science writer based in North Carolina.


A MIRACLE MOLECULE COULD BOOST . . .

DISCOVERED MORE THAN A DECADE AGO, IT SHOWS PROMISE IN TREATING EVERYTHING FROM ALZHEIMER’S TO BRAIN INJURIES—AND IT JUST MIGHT IMPROVE YOUR COGNITIVE ABILITIES.

BY ADAM PIORE

CARMELA SIDRAUSKI WASN’T LOOKING FOR A WONDER DRUG. Testing thousands of molecules during high-speed automated experiments in the lab of Peter Walter at the University of California, San Francisco, she plucked one of the compounds out of the reject column and moved it into the group that warranted further study. Something about its potency intrigued her. That was in 2010; today the list of potential therapeutic applications for that molecule sounds almost too good to be true. Since Sidrauski’s decision to look closer, the molecule has restored memory formation in mice months after traumatic brain injuries and shown potential in treating neurodegenerative diseases, including Alzheimer’s, Parkinson’s, and Lou Gehrig’s disease (also known as amyotrophic lateral sclerosis, or ALS). Oh, yeah—it also seems to reduce age-related cognitive decline and has imbued healthy animals—mice, at least—with almost photographic memory. Sidrauski believes the reason the molecule can do so much is that it plays an essential role in how the brain handles stress from physical injuries or neurological diseases. Under siege from such problems, the brain, in essence, shuts down cognitive functions like memory formation to protect itself. The new molecule reverses that. “We didn’t set out to find this— we just kind of bumped into it,” Sidrauski says. “But having a new way to modulate a pathway that could be central to a lot of different pathological states is very exciting.” Will it work to reverse cognitive decline in people? We still don’t know. So far most of the

work has been done in mice or human cells in a petri dish. But we will soon know more: in 2015 the molecule was licensed by Calico Labs, the Silicon Valley biotech established by the founders of Google to find drugs based on the biology of aging. It hired Sidrauski as a principal investigator to help transform her molecule into a treatment for a wide array of disorders, including ALS and Parkinson’s disease, as well as the damage from traumatic brain injury. In February, Calico announced that human safety trials had begun on the first drug candidate for neurodegenerative diseases it had developed based on the molecule, and that a study in ALS patients was slated to begin later this year. Other possible drugs for Parkinson’s disease and traumatic brain injury are likely to follow. Such drugs might still be a long shot (most candidates in early clinical trials fail), but early successes, coupled with research done by Walter and others around the globe in recent years, have added weight to an electrifying hypothesis: that crippling cognitive problems seen in victims of traumatic brain injuries, people with Alzheimer’s, and even those born with the genetic problems implicated in Down syndrome are not caused directly by the diseases or genes or trauma but by the way cells respond to the resulting stress. In mice, Sidrauski and Walter have shown that the molecule, which they now call ISRIB, works by hacking a master pathway in neurons that regulates the pace at which cells are able to synthesize new proteins, a process essential to memory formation and learning. When cells are exposed to stress, Walter and others have shown, it can shut down protein synthesis altogether. Sidrauski’s molecule seems to have a beautifully simple mechanism of action, turning it right back on. 
If it works in people, the implications for therapeutics could be immense and sweeping; the cognitive problems resulting from a wide variety of conditions could be reversed by simply tweaking the cellular response. But that comes with a danger: manipulating such a fundamental process also raises the risk of inadvertent and damaging changes. “We need to understand if there are side effects,” says Arun Asok, a neuroscientist at the University of Wisconsin and an expert on memory, who has not been involved in the research. “But people are in need of drugs like this. This could help a huge number of people suffering from conditions where there aren’t many solutions right now.”

SHUTTING DOWN THE BRAIN

From the earliest days of neuroscience, investigators have suggested that our memories—those unique constellations of sensory experience and thoughts that we summon up when we recollect an event—are somehow encoded in the many connections between neurons that constitute the human brain. We now know that protein synthesis likely plays a key role in this process: proteins, which make up those connections between neurons, are the raw materials needed to etch an experience into the brain. In fact, research done in the 1960s showed that when scientists chemically blocked protein synthesis, new memories were unable to form. In the 1980s and 1990s, Walter demonstrated that when too many unfolded or misfolded proteins—which are characteristic of neurodegenerative diseases—were detected inside a cell, it triggered the equivalent of an emergency shutoff switch that halted all protein construction until the problem was solved. The action, which Walter dubbed the “unfolded protein response,” was akin to a blaring red alert at a busy worksite, stopping work; cellular repair crews would then converge on the site, attempt to fix the problem, and if all else failed, eventually order the cell to commit suicide. Misfolded proteins, other researchers discovered soon after, were just one of many problems that could cause the cells of the body to temporarily shut down protein production. Starvation, viral infections, physical force that damaged the cellular architecture, the oxidative stress common in aging cells, and many other stressors could also trip cellular circuit breakers that would stop the protein assembly line. In fact, researchers now know that almost any metabolic disruption can halt production and potentially trigger cell death. Eventually others gave a name to a broader pathway that overlapped with Walter’s unfolded protein response. They called it the integrated stress response (ISR). It didn’t take a big leap of imagination to wonder what role the response might play in brain diseases that affected memory. Could the misfolded proteins and oxidative stress that accumulate with aging explain age-related cognitive decline? Might the stress response explain why physical damage caused by traumatic brain injuries often proved so devastating? The molecule that Sidrauski found back in 2010 is providing a critical clue, and possibly a way to manipulate the responses.

MOUSE MIRACLES

A few years before she discovered the miracle molecule, Sidrauski had thought her scientific career might be over. The daughter of two Argentinian scholars who met while pursuing graduate degrees at MIT, Sidrauski had initially been drawn to science by a personal tragedy. Her father, Miguel, an economist, was a world-renowned expert on hyperinflation, and after completing his PhD he had earned a faculty slot in the MIT economics department. At 29, however, when Sidrauski was just two months old, her father died suddenly of testicular cancer. Twenty-four years later, in 1992, Sidrauski returned to MIT as a graduate student in the lab of Tyler Jacks, a leading cancer researcher. Then cancer struck again; her mother was diagnosed and died soon after. Sidrauski’s job talking and thinking about oncology became too painful. So in 1994, she transferred to UCSF and joined Walter’s lab to focus on more basic questions of cell biology. She earned a PhD in 1999, started a postdoc, and coauthored a number of papers on the unfolded protein response. In 2000, however, Sidrauski decided to step away from academia to care for her two young children. And by the time she was finally ready to return, in 2008, she discovered she’d been out of the workforce too long to get the kinds of research grants that would allow her to pick up where she left off. Around that time, in 2009, she was horrified to discover that Walter had been diagnosed with neck cancer and was in the midst of aggressive treatment. Without the help of her old mentor, Sidrauski found it hard to get a job. She was still searching when Walter, by this time recovered, enlisted her help on a project. He wanted to find molecules that he could use in lab experiments to turn the unfolded protein response on and off, in the hopes that better understanding of the basic mechanism would one day lead to new drugs.

To find such molecules, Sidrauski genetically engineered mammalian cells to emit light any time protein production was shut down. An automated robotic assembly line exposed the cells to more than 100,000 different molecules, one at a time; also added to the cells was a brew of chemicals toxic enough to trigger a stress response and stop protein synthesis. Those cells that failed to light up pointed to promising new molecules. One day when Sidrauski was scrutinizing a pile of cards with the readings for rejected



molecules printed on them, something caught her eye. One molecule seemed to be far more powerful than the rest. It had landed in the reject pile because a second set of tests had suggested it was too insoluble to be a potential drug. “This is not the point to stop,” she thought. “It’s very potent.” It was too good not to try. Following her gut, Sidrauski ordered samples in large quantities and began conducting tests on its properties. The rejected compound wasn’t just extremely effective at preventing activation of the stress response; further experiments showed it could restore protein synthesis after a stressor. What was more, it seemed to work when the cell shut down after any stressor. She had stumbled, it seemed, onto a possible drug candidate capable of modulating the master switch. Then came more good luck. In 2007, a postdoc at McGill University named Mauro Costa-Mattioli had also conducted research on the ISR. To do so, he gave mice a drug that activated the response. These mice, he demonstrated, were incapable of learning or forming new memories. When he then deleted a key gene needed to turn on the ISR, he discovered that something even more remarkable happened: the animals demonstrated the equivalent of photographic memories. Costa-Mattioli had since moved on to the Baylor College of Medicine, where he had set up his own lab to test the ISR pathway further. But Nahum Sonenberg, who ran the McGill lab and is an old friend of Walter, was still working on the problem. Did Walter want someone in Sonenberg’s lab to test this new molecule out on his mice and see what happened? It seemed like a long shot. But when Sonenberg’s team injected Sidrauski’s molecule into the stomachs of the drug-impaired mice, they formed new memories—and remarkably, the drug seemed to erase any evidence of the impairment. “It crossed the blood-brain barrier, which usually doesn’t happen—and amazingly, it was not toxic,” Walter recalls. 
“And this was probably the biggest surprise.” There was something else that was remarkable too. When they injected the molecule into the stomachs of normal mice, the rodents were able to remember the location of a platform in an underwater maze and find it three times faster than mice that had received sham injections. Sidrauski’s molecule appeared to be a cognitive enhancer as well as a treatment.

Peter Walter, Sidrauski’s mentor at UCSF, has done pioneering research into why cells shut down protein synthesis.

When the scientists announced the results in 2013, the news caused a sensation, and it also captured the attention of Silicon Valley. In 2015, Calico announced it had licensed the technology, and the company hired Sidrauski to help find possible drugs based on ISRIB. It was a “very easy decision” to leave academia, she recalls. The startup offered her the opportunity to optimize the druglike properties of compounds based on the molecule. It was the chance to turn her discovery into a safe and effective treatment.

YOUNG AGAIN

In 2017, Walter and Costa-Mattioli teamed up with Susanna Rosi at UCSF, an expert on traumatic brain injuries. Caused by everything from car accidents to sports to simple falls, these injuries are shockingly common and often lead to lasting damage. Some 1.5 million Americans suffer from such brain injuries every year. Impaired spatial memory is one common effect, making it difficult to navigate through the world and complete routine everyday tasks. Another effect is degradation of “working memory,” which is critical for reasoning and decision-making. In Rosi’s experience, animals with such brain damage generally never learn well again, but the molecule did the impossible—it restored their ability to learn how to, among other things, navigate an underwater maze as well as normal mice. Researchers in the field of traumatic brain injury had long





believed that therapeutic interventions needed to be administered soon after the injury to have any chance of being effective. Amazingly, the drug worked more than a month after an injury, and the effects seemed to persist indefinitely.

Noting that symptoms in patients with brain injuries share many similarities with the cognitive decline associated with aging, the team next decided to test whether the compound could reverse the symptoms of aging itself. There was reason to believe it might work: as we grow older, damaged cells begin to accumulate, leading to a slow buildup of inflammation that the team suspected might be sufficient to trip cellular circuit breakers and slow protein production.

The team tested the recall abilities of different populations of mice in the watery maze, this time segregating them by age. Elderly mice given small daily doses of ISRIB during a three-day training process were able to accomplish the task far faster than geriatric peers that did not take the medication. Some were even able to match the performance of young mice. Within a day of receiving a single dose, the mice had none of the common signatures of neuronal aging normally seen in the hippocampus, which plays a key role in learning and memory. Electrical activity in the brain became more robust and responsive to stimulation; the ability to form new connections between cells increased to levels normally seen only in younger mice. The changes were long-lasting, persisting when researchers tested the mice three weeks later. In other studies, the drug also showed promise in reducing age-related cognitive decline.

“We can make old brains young,” Costa-Mattioli says. “We can rejuvenate the brain. We can take an adult brain and make it adolescent in terms of the response to stimuli. This is a universal way to enhance memory in pathology, Alzheimer’s, traumatic brain injury, Down syndrome, but also normal memory in different animals and species.”

REMEMBERING SUCCESS

There’s still a long way to go before drugs based on ISRIB are used to treat humans for neurodegenerative disease, and it will be even longer before any potential cognitive enhancer is possible. Though no side effects have yet been found in mice, testing in humans will need to be extensive to see how the compound affects other molecular processes in the cell, says the University of Wisconsin’s Asok: “How is it affecting the structure of neurons themselves over time? Is it causing a long-lasting change in the ability to form memories?”

Even if there are no side effects, longtime memory researchers are cautious about trying to use drugs to enhance cognition in healthy people. In the 1970s, 1980s, and 1990s, a long list of pharmaceutical candidates aimed at improving memory in normal people failed in human trials, says James McGaugh, a neurobiologist at the University of California, Irvine. Virtually all of them were successful in lab animals. In people, almost all caused severe side effects or failed to work as hoped.

There’s a difference, says McGaugh, between developing a drug that might help people with memory problems and creating one that will generally improve memory in healthy people. The latter, he suggests, is unlikely to happen—or at least there’s no evidence in the history of drug research that it will. “I’m not convinced that you’re going to supercharge the system generally and make learning go better,” McGaugh says. “As a matter of fact, let me take it a little bit further. If it’s a normal condition, you can make things worse. You could start firing all kinds of stuff that doesn’t need to be fired.”

As human testing begins on potential drugs based on the molecule that Sidrauski discovered more than a decade ago, we could begin to get answers about its potential to treat some of our most devastating neurodegenerative diseases. Whatever the outcome of those tests, this research is a remarkable scientific story of good luck and the whims of fate. Had Walter not offered Sidrauski a new position, had she not chosen to look more closely at a rejected molecule, and had her mentor not called his friend at McGill, the discovery would never have been made.

Now running her own lab at Calico, Sidrauski has a memento—a gift from the art studio of her mentor Walter, who’s an amateur sculptor. Forged out of metal, the glimmering toaster-size piece is a representation of the magic molecule ISRIB. Walter presented it to Sidrauski shortly before she left to join Calico. “It’s beautiful,” she says. “It’s got all the atoms—all the atoms and hydrogens. It’s very pretty.”

Adam Piore is a freelance journalist based in New York. He is the author of The Body Builders: Inside the Science of the Engineered Human, about how bioengineering is changing modern medicine.


Experts may not agree on what consciousness is or isn’t. But that hasn’t stopped Marcello Massimini from peering into the minds of those with profound brain injuries to determine if anyone is still inside.

THE CONSCIOUSNESS METER

Story and Photographs by

RUSS JUSKALIAN




At first glance, there’s nothing remarkable about the uninspired, low-rise hospital on the west side of Milan, affectionately known as “Gnocchi.” But two floors up, on an isolated wing of the Don Carlo Gnocchi IRCCS Centro S. Maria Nascente, an uncommunicative man with a severe brain injury is hooked up to a technology suite that researchers here believe can tell them if he’s conscious.

The man sits in what resembles a motorized dentist’s chair, his head cocked backwards, a blue surgical mask covering his mouth and nose. A white mesh cap dotted with 60 electrodes, each connected to a two-meter-long cable, is held in place by a strap beneath his chin. Hovering above him, an infrared array positioned on an articulating arm bounces signals off sensors attached to the man’s temples to produce a moving, MRI-constructed overlay of his brain on a nearby monitor. A researcher watching the monitor then presses a white plastic oval to the man’s skull and aims electromagnetic pulses at Tic Tac–size areas of his brain.

Each pulse makes an audible click. Three heavy cables, each about as thick as a garden hose, coil out from behind the device to a quarter-million-dollar machine controlling the output. On the other side of the room, Marcello Massimini, a blue-eyed, curly-haired neuroscientist, and Angela Comanducci, the patient’s neurologist, watch on a laptop as complicated blue squiggles representing brain waves fill the screen in close to real time. What the scientists see in them is the faintest sign of a liminal, maybe dreamlike, consciousness.

Back in the lab, a computer will assign those brain-wave recordings a number from 0 to 1—the so-called perturbational complexity index, or PCI. This single number, according to Massimini and his colleagues, is a crude measure of a type of complexity that reveals whether a person is conscious. The researchers have even calculated a cutoff of 0.31, which, according to a 2016 study of the technology in healthy and brain-injured subjects, “discriminated between unconscious and conscious conditions with 100% sensitivity and 100% specificity.” In other words, it works well—really well.

More unsettling is that when the researchers calculated PCI from a group of patients with unresponsive wakefulness syndrome (UWS, a condition previously known as a “vegetative state”), they found that around one in five had a PCI value within the consciousness distribution. “Even if [such a] patient is completely unresponsive, no sign whatsoever of consciousness,” Massimini told me, “you can say with confidence that this patient is nonetheless conscious.”

Such a breakthrough, rudimentary as it still is, represents the most accurate consciousness meter ever seen in medicine. The medical implications are wide reaching. Estimates suggest there are up to 390,000 people around the world with prolonged disorders of consciousness.
Some of them, unresponsive, may be treated as though nobody is in there—while they experience the world awake, alone, and unable to reach out from their bodily prison as long as they live. Massimini is confident that PCI can help identify those people. In July 2021, when I visited him in Milan, Massimini was collaborating with

other researchers in Milan, Boston, Los Angeles, and beyond. In the meantime, PCI measurements are already being used at Gnocchi to help guide diagnosis and determine the potential for partial recovery.

A healthy person serves as a test subject for the consciousness meter (page 41). Marcello Massimini in his Milan office (below).

THE SOLUTION

PCI was born of the search to overcome nearly a century of obstacles standing in the way of measuring consciousness. Since 1924, when Hans Berger invented electroencephalography (EEG), scientists have tried to access the electrical responses that our brains use to communicate, hoping to see, predict, and measure what is going on behind the 6.5-millimeter-thick protection of our skulls. Berger’s invention detected changes in spikes of voltage produced by our neurons—converting those signals into the seismograph-like squiggles popularized as “brain waves.” Standard EEG patterns include fast alpha waves, oscillating about 10 times a second and common in consciousness, and slow delta waves, oscillating about once per second and common in nondreaming sleep or under anesthesia.

But passively listening to the brain with EEG is an imperfect way to determine consciousness, because exceptions are lurking everywhere. The anesthetic ketamine can excite the brain, resulting in alternating alpha and delta waves. Some types of coma patients show fast oscillations while unconscious. And people under the influence of the drug atropine or during a seizure pattern called status epilepticus report being conscious while displaying the slow brain waves typical of unconsciousness. An even bigger issue is that a patient’s brain activity itself—the result of short attention span, drowsiness, voluntary or involuntary movement, visual distractions, or even a lack of desire to follow instructions—can cause passive EEG to skew and react in ways that render its messages a mess.

The case for PCI is that it claims to be an objective measure of consciousness—a relatively straightforward yes or no. What differentiates it from regular EEG, according to Massimini, is that while the older technology only measures ongoing brain activity, PCI measures the brain’s capacity

to sustain complex internal interactions. You can do this, he says, if you give the brain a knock and then follow how that perturbation filters and reverberates and is acted on as it courses through the fantastically complex architecture of 86 billion neurons and their 100 trillion connections in the human brain. That knock or zap is delivered via transcranial magnetic stimulation (TMS), which has been around in modern form since the 1980s: a wand is held up against the head to shoot an electromagnetic impulse into the brain. When it’s used to target the motor cortex, TMS can provoke involuntary twitching of the hand; when it targets the visual cortex, it can induce lightning-like visuals in the mind’s eye. To generate a PCI reading, Massimini uses TMS on the cerebral cortex. Then he uses EEG to measure what happens. It is the quality of the post-zap signal that leads to a score.
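The alpha/delta distinction described earlier (waves around 10 Hz in wakefulness, around 1 Hz in nondreaming sleep) can be made concrete with a toy sketch. This is purely illustrative, not clinical code: `dominant_frequency` and `label_rhythm` are invented names, the band cutoffs are rough textbook values, and real EEG analysis applies spectral methods to noisy multichannel recordings rather than zero-crossing counts on clean sine waves.

```python
import math

def dominant_frequency(signal, sample_rate):
    """Estimate the dominant frequency of a clean oscillation by counting
    zero crossings: each full cycle crosses zero twice."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0))
    duration_s = len(signal) / sample_rate
    return crossings / (2 * duration_s)

def label_rhythm(freq_hz):
    # Rough textbook bands: delta below ~4 Hz, alpha roughly 8-13 Hz.
    if freq_hz < 4:
        return "delta (slow; common in nondreaming sleep or anesthesia)"
    if 8 <= freq_hz <= 13:
        return "alpha (fast; common in wakeful consciousness)"
    return "other"

rate = 250                                                # samples per second
t = [i / rate for i in range(rate * 4)]                   # four seconds
alpha_wave = [math.sin(2 * math.pi * 10 * x) for x in t]  # ~10 Hz oscillation
delta_wave = [math.sin(2 * math.pi * 1 * x) for x in t]   # ~1 Hz oscillation

print(label_rhythm(dominant_frequency(alpha_wave, rate)))  # labeled as alpha
print(label_rhythm(dominant_frequency(delta_wave, rate)))  # labeled as delta
```

The "exceptions lurking everywhere" in the article are exactly why a rule this simple fails in practice: ketamine, atropine, and status epilepticus all break the mapping from rhythm to conscious state.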

What Massimini looks for in this perturbed EEG is a special kind of complexity that is organized, but not too organized. The conscious mind produces neither the perfectly synchronized ripples of a stone lobbed into an imaginary pond nor the perfectly scrambled noise of an analog TV’s between-channel snow. The template of consciousness is more like an intricate chaos—a unique pattern among an almost infinite number of possibilities, with brain waves appearing similar in some areas and profoundly different in others.

Onscreen in the hospital, a high PCI looks like a series of squiggles that start off alike yet differentiate from one another as they move across the geography of the brain. A low PCI is even easier to see: either you get the same long, slow wave everywhere, or you get a wave in one part of the brain and silence everywhere else. For years, Massimini and others could literally watch consciousness being

recorded onscreen yet were stumped by how to quantify it. They had clues for how to proceed, since the search for PCI was built on the foundation of integrated information theory (IIT), a controversial model of consciousness proposed by Giulio Tononi, a professor of psychiatry at the University of Wisconsin School of Medicine (see page 82). IIT claims that a conscious brain has a high level of integration (its various parts influence one another) alongside a high level of differentiation (the parts produce diverse signals). Massimini was trying to find a proxy for this complexity that could actually be calculated in the lab, but the goal was elusive.

The “lucky strike,” as he recalls it, came from a bored Brazilian physicist named Adenauer Casali, whose wife worked down the hall. Massimini offered Casali space in his office, where the physicist passed the time reading Dante and other Italian greats. One day the two started talking, and Massimini mentioned the problem. “He’s in my lab, sitting on the chair,” recalls Massimini. “We start talking: ‘We’re doing this and that, and we have this problem, by the way—maybe you can add something?’”

Indeed, the solution was obvious to Casali. All Massimini needed to do was take the TMS-EEG recordings and compress the data using the same algorithm a computer uses to compress files to the ZIP format. A low-complexity signal would end up being tiny because it would contain so little unique data. A high-complexity signal indicating a conscious mind would be large. Casali was credited as a first author on the paper introducing the quantification of PCI, and the procedure itself remains known as zap-ZIP.
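Casali's core observation (stereotyped responses compress to almost nothing, while rich, differentiated ones resist compression) can be sketched in a few lines. This is only an analogy, not the published PCI pipeline: `compression_complexity` is an invented name, the real method binarizes statistically significant TMS-evoked activity and normalizes a Lempel-Ziv measure, and raw compressibility alone would score pure noise highest, which is one reason PCI is computed on deterministic evoked responses rather than spontaneous chatter.

```python
import random
import zlib

def compression_complexity(samples, threshold=0.5):
    """Toy zap-ZIP analogy: binarize a response, then score its complexity
    by how poorly it compresses under zlib (a Lempel-Ziv-family compressor,
    the same family behind the ZIP format). Repetitive signals shrink to
    almost nothing; diverse signals retain most of their size."""
    bits = bytes(1 if x > threshold else 0 for x in samples)
    return len(zlib.compress(bits, 9)) / len(bits)

random.seed(0)
n = 4096
# "The same long, slow wave everywhere": one repeating on/off pattern.
stereotyped = [1.0 if (i // 64) % 2 else 0.0 for i in range(n)]
# A diverse, hard-to-predict response.
diverse = [random.random() for _ in range(n)]

print(compression_complexity(stereotyped))  # near zero: highly compressible
print(compression_complexity(diverse))      # much larger: little redundancy
```

The design choice mirrors the article's description: a low-complexity recording "ends up being tiny" once compressed, while a high-complexity one stays large, so compressed size stands in for the integration-plus-differentiation that IIT associates with consciousness.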

DOUBTERS

It’s a difficult thing to pursue something like PCI when experts still can’t agree on what consciousness is and isn’t. Tononi, who at times sounds like a mystic, explained the nature of consciousness to me with an example from everyday life. “You are lying in bed and asleep, a dreamless sleep, and then you wake up and suddenly there is something rather than

An EEG cap with 60 electrodes running from it (below) is used to detect the response of the brain after it has been “zapped” with TMS.

nothing,” he told me. “That something is consciousness—having an experience.”

For most of history, detecting that something wasn’t all that difficult. If you asked someone a question and got a reasonable answer, that person was probably conscious. “That’s still the gold standard,” says Massimini. But the increasing use of mechanical ventilation in the 1950s and 1960s helped create a significant population of people with long-term disorders of consciousness for the first time. Today there are those who can be kept alive even though we have zero evidence of anyone being in there. And there are those like the gray-haired man at Gnocchi who show potential hints of consciousness, like eyes that track movement, but have no behavioral way to communicate or to prove their internal existence. Beyond is a whole spectrum of difficult-to-distinguish states. Tononi’s

something is a condition we can all immediately identify in ourselves yet find difficult to know about in other people unless they tell us. That makes any measure of consciousness controversial, let alone one whose theoretical foundation is IIT. While some scientists have called IIT the best theory of consciousness put forward to date, not everyone is a fan. When I wrote Michael Graziano, a neuroscientist at Princeton, about his opinion on IIT and PCI, his response was unequivocal. “IIT is pseudoscience,” he wrote.

But, he continued, even phrenology—the idea, now firmly established as nonsense, that the shape of people’s heads can tell you about their personality—helped push science in the 1800s toward the idea that different parts of the brain had different functions, and that the cerebral cortex was worth some attention. “That change in perspective led to most of the

major discoveries in brain science for a century,” he acknowledged, so PCI might still be worth something.

Emery Brown, a neuroscientist and anesthesiologist who is the director of the Harvard-MIT Program in Health Sciences and Technology, is reserving judgment, waiting for more evidence to come in. He’s wary of letting the “theory drive the analysis.” Yet Brown admires Massimini for doing experiments, carefully analyzing data, and publishing results for anyone to see. “What I like about it, when I hear Marcello talk about it, is that he is being a total empiricist,” Brown told me. “He’s shown empirically that when the brain networks are shut down by anesthesia or sleep or brain injury, you have complexity patterns that are different from those seen when someone is awake.” And that empiricism makes a compelling case when PCI values are computed in actual human beings.

BEAUTIFUL CONSISTENCY

The power of Massimini’s approach is perhaps best represented in a beautifully consistent chart from years of testing the technology. On the chart, PCI values computed from people known to have been conscious or not are recorded as dots separated by a dashed line at the threshold of 0.31. In every single case, the maximum PCI scores recorded in nondreaming sleep, or under the influence of one of three different anesthetic drugs, are below the line. And for the same people, every single one of the maximum scores while awake, experiencing the dreaming sleep of REM, or under the influence of ketamine (which at anesthetic doses induces a dreamlike state) is above the line. So are nearly all the maximum scores for patients with locked-in syndrome and who had experienced strokes, who at the time of the study were able to prove their consciousness by communicating. Notably, 36 out of 38 patients in a minimally conscious state showed high complexity, demonstrating the unprecedented sensitivity of PCI as an objective marker of consciousness.

But nine of 43 patients previously considered totally without consciousness also scored above the line. This raises difficult questions. With no other way to prove their consciousness, and no way to communicate, those patients represent either PCI’s failure or its horrifying promise. Their zap-ZIP responses were similar in quality to those of people with minimal consciousness, as well as conscious people when awake, dreaming, or dosed with ketamine. And in fact, half a year after testing, six of these patients improved to the point that they were classified as minimally conscious. Someone, it seems, was in there after all.

In recent years, researchers in Massimini’s group have had the opportunity to stimulate neurons and record brain activity from electrodes temporarily inserted into the brains of patients having surgery for epilepsy. These measurements revealed an interesting mechanism by which PCI may collapse after brain injury, leading to loss of consciousness. Neuronal circuits that are physically spared by the lesion may enter a sleep-like mode, leaving the whole brain unable to generate complex patterns of interactions. “Such intrusion of sleep-like neuronal activity may be only temporary in some patients, who will eventually regain consciousness, but may persist in others who remain blocked in a state of low complexity, corresponding to a prolonged vegetative state,” says Massimini. And that, he thinks, could provide a rationale for developing novel treatments to reawaken brain circuits and restore consciousness.

PCI could be refined by using other ways to perturb the brain, such as focused ultrasound or targeted laser light. Or the technology could be improved through better spatial-temporal resolution, or even automated scanning and computational calculations of where complexity is maximized in a damaged brain.
Massimini is clear that in its current form, PCI can’t say much about the quality or degree of consciousness—just whether it is there or not. And he sees the 0.31 threshold as a clinical measure of a blurry condition—it’s not the case that at 0.30 there’s nothing at all and at 0.32 consciousness appears in full form. You can have a high PCI score, he says, “and it doesn’t even make a difference whether you’re dreaming or awake.” Obviously part of the picture is missing.


BREAKING THROUGH

But Angela Comanducci, a clinical neurophysiologist who passed through Massimini’s lab during her training and now oversees the 13-bed wing at Gnocchi that’s devoted to disorders of consciousness, has already observed the clinical power of PCI firsthand. In June 2020, a 21-year-old woman was brought to the ward two months after sustaining a traumatic brain injury from being beaten. “Every clinical diagnostic test, experimental and established, showed no signs of consciousness,” Comanducci told me. The situation was so dire that the family of the patient had been told to expect she would remain in an irreversible vegetative state.

But when Comanducci and her staff hooked the woman up to the bulky TMS-EEG apparatus used to measure her PCI, they were startled by what they saw. “Within seconds, I could see on the screen she was in there,” said Comanducci. The PCI they calculated later that day was high—reflecting a high-complexity EEG response to TMS stimulation—and compatible with a minimally conscious state. “I told my rehabilitation staff, ‘Now you must be detectives,’” recalled Comanducci. “‘Search everywhere and find her!’”

Over the next weeks they manually moved the patient’s fingers, arms, and legs, trying to reboot her brain the way you might start an old airplane by spinning its propeller. They spoke to her as if she was listening, trying to trigger a response—a sigh, perhaps, or the tiniest vertical movement of her eyes. And they administered a drug called amantadine, hoping to awaken parts of the brain they suspected might be undamaged yet in a state something like a protective sleep. About a month later, they found her. With a millimeter wiggle of a single finger, the woman opened a fragile portal of communication to the world outside. With practice, she learned to move more fingers, carving out a system with which she could answer simple questions.
Russ Juskalian is a freelance writer and photographer whose work has appeared in Discover, Smithsonian, and the New York Times.

A CHANGE OF MIND

OUR BRAIN CELLS ACQUIRE MUTATIONS AS WE AGE. NOW SCIENTISTS WANT TO KNOW IF THESE CHANGES AFFECT OUR MENTAL HEALTH. By Roxanne Khamsi

When Mike McConnell decided what he wanted to spend his career working on, he was 29, inspired to begin his PhD—and flat broke. He’d learned from his biology classes that immune cells in the body constantly rearrange their own DNA: it’s what allows them to protect us by making receptors in the right shapes to bind to invasive pathogens. As

he wrapped up a master’s degree in immunology in Virginia in the late 1990s, he’d obsess about it over beers with his roommates. “Suddenly this idea kind of clicked,” McConnell recalls. If gene rearrangement helped the immune system function, where else could it happen? What about the brain? “Wouldn’t it be neat if neurons did something like that too?” he thought.



At the time, most scientists assumed that cells in the normal nervous system had identical genomes. But McConnell looked through the scientific literature and found he wasn’t the only one hot on the trail of this question: a neuroscientist named Jerold Chun at the University of California, San Diego, was already working on it. He wrote to Chun and persuaded him to let him join his lab on the West Coast.

There was just one problem: McConnell couldn’t afford to get there. He was “a starving graduate student already,” with no cash to fix his navy 1966 Mustang—and as the first person in his family to go to college, he didn’t have access to many resources. “I didn’t have anybody who was going to drop some moving expenses in my lap or any of those sorts of things,” he explains. Chun gave him $1,000 to repair the broken car and get himself across the country so that he could start testing his hypothesis.

Using special dyes to stain the chromosomes of neurons from mouse embryos and adult mice, McConnell hoped to find that the neurons had undergone the same type of genetic rearrangement seen in immune cells, yielding diversity rather than the perfect copies most researchers would have expected. Instead, though, he kept finding brain cells that had the wrong number of chromosomes.

This was a surprise. When cells divide, they replicate their DNA for their daughter cells. Sometimes copies of genes are accidentally added or lost, which—unlike the reshuffling within chromosomes that’s beneficial in the immune system—was thought to be a hugely damaging mistake. It didn’t make sense that neurons could survive such a giant change in their genetic material. But McConnell kept finding aberrant neurons with extra or missing chromosomes. Finally he had to reconsider scientific assumptions. “We took the crazy idea seriously,” he says. A postdoctoral fellow in the lab named Stevens Rehen had expertise in culturing the neurons for study, which made it possible to parse the data.
The UCSD team’s experiments, published in 2001, showed that the central nervous systems of developing mouse embryos did not contain perfect genetic copies. Instead, the researchers suggested, about a third of the neurons from each mouse embryo, on average, had lost a chromosome or gained an extra one. The result was what’s known as a “genetic mosaic.” While many of those cells didn’t survive, some made it into the brains of adult mice. McConnell, Chun, and their coauthors

wondered what such a genetic mosaic might mean. Perhaps in humans it could be a contributing factor to neurological disorders, or even psychiatric disease. In any case, it was an early clue that the conventional notion of genetically identical brain cells was wrong.

At the time, scientists seeking to understand the biology of mental illness were mainly looking for genetic mutations that had occurred near the moment of conception and thus were reflected in all of a person’s cells. Tantalizing clues had emerged that a single gene might be responsible for certain conditions. In 1970, for example, a Scottish teen with erratic behavior was found to have a broken gene region—and it turned out that his relatives with mental illness showed the same anomaly. It took three decades to isolate the error, which researchers named DISC1 (for “disrupted in schizophrenia”). Despite some 1,000 published research papers, the question of whether DISC1—or any other single gene—is involved in schizophrenia remains much debated.

A handful of other genes have also been scrutinized as possible culprits, and one study of the whole human genome pointed to more than 120 different places where mutations seemed to heighten the risk of the disease. But after this extensive search for a “schizophrenia gene,” no single gene or mutation studied so far seems to exert a big enough influence to be seen as a definitive cause—not even DISC1. In fact, scientists have struggled in their search for specific genes behind most brain disorders, including autism and Alzheimer’s disease. Unlike problems with some other parts of our body, “the vast majority of brain disorder presentations are not linked to an identifiable gene,” says Chun, who is now at the Sanford Burnham Prebys Medical Discovery Institute in La Jolla, California.

But the UCSD study suggested a different path. What if it wasn’t a single faulty gene—or even a series of genes—that always caused cognitive issues?
What if it could be the genetic differences between cells? The explanation had seemed far-fetched, but more researchers have begun to take it seriously. Scientists already knew that the 85 billion to 100 billion neurons in your brain work to some extent in concert—but what they want to know is whether there is a risk when some of those cells might be singing a different genetic tune.




Ditching the dogma


McConnell, now 51, has spent most of his career trying to answer this question. He seems laid back, at first, with his professorial short beard, square glasses, and slight surfer lilt. But there’s an intensity, too: he looks a little like a younger version of the Hollywood star Liam Neeson, with somber, spirited eyes and a furrowed brow. After earning his PhD, McConnell packed his bags once again and moved to Boston to start a postdoctoral position at Harvard Medical School. But he was restless. He didn’t relish the colder climate and longed to head back to California and revisit the data he’d found there on genetic differences in the brain. “I thought mosaicism was the most interesting thing I could be working on,” he recalls, sweeping the ends of his brown hair behind his ears, “and one Boston winter made me really miss San Diego.”

He started corresponding with Rusty Gage, a neuroscientist at the Salk Institute for Biological Studies in San Diego. Gage was also interested in genetic diversity, but he was best known for pushing against another piece of scientific dogma. People had long assumed that adults never made new neurons, but Gage had led a group that published a paper in the late 1990s detailing evidence of newly born cells in a brain region called the hippocampus. The publication—establishing the evidence of what is called adult neurogenesis—gave him a reputation as a maverick who wasn’t afraid to stand behind provocative ideas.

Not too long after the UCSD team published its paper about mosaicism in the brain, Gage had struck upon another phenomenon that could explain how genetic diversity arises in the nervous system. It was already known that cells had bits of DNA called long interspersed nuclear elements, or LINEs, which jump around the genome. Gage and his colleagues showed that these could also cause mosaics to emerge. In one experiment, mice engineered to carry human DNA elements known as LINE-1s developed genetically diverse cells in their brains as a result.

Just as with his work on neurogenesis, Gage initially encountered skepticism. The idea that LINEs—which many considered to be “junk” DNA—could cause genetic diversity in brain cells ran counter to the prevailing wisdom. “We knew we were going to run into a sawmill,” he recalls.

But Gage and his collaborators kept plowing ahead for more evidence. After the rodent study, he and his teammates looked at the human brain. Four years later, they published an analysis of postmortem samples, which found that LINE-1s seemed especially active in human brain tissues.

McConnell had been corresponding with Gage about all this, including the chromosome variation he’d found in mouse neurons while working in Chun’s lab. By the start of 2009, he’d secured a fellowship with Gage at the Salk Institute. There, they looked for evidence of the same phenomenon in human neurons, and after just a few years, they found it.

As part of the experiment, which appeared in Science in 2013, they used a new technology called single-cell genome sequencing. The technique could isolate and read out the DNA from individual cells; until then, scientists had only been able to analyze extracted genetic material from pooled cell samples. Using postmortem frontal cortex samples from three healthy individuals, they applied the method to dozens of neurons and established that up to 41% of the cells had either missing or extra gene copies. This variation was “abundant,” they concluded, and it contributed to the mosaic of genetic differences in the brain.

Instead of being genetically uniform, it turns out, our brains are rife with genetic changes. “We’re past the story about whether or not it occurs,” Gage says. “These mosaic events are occurring. This is very reminiscent of where I was with the adult neurogenesis. When everybody finally agreed that it occurred, we had to figure out what it did.”

Mike McConnell has spent his career learning about the brain’s mosaic: “We took this crazy idea seriously.”

Widening the search

After publishing data from human brains, McConnell didn’t feel he wanted to go back to studying mice. So when it came time for him to set up his own lab at the University of Virginia, he immediately set out to find human samples. “I spent the first three years as an assistant professor trying to find brains,” he recalls.

A couple of years after he landed in Virginia, the mission to understand the constellation of mutations in the brain got an important boost. The National Institute of Mental Health gave $30 million to a consortium including Gage, McConnell, and others so they could keep investigating somatic mosaicism. (“Somatic,” from the Greek for “body,” refers to mutations that arise during a person’s lifetime, rather than in the sperm or egg cells of the individual’s parents.)


Mind

The network contained research groups looking at the different effects of genetic mosaics. Gage and McConnell were part of a subset focused on the link with psychiatric disease. They devised a plan to look for different mechanisms for mosaicism using the same set of brain samples.

Crucially, they got human samples. Tissue biopsies of postmortem brains from individuals with schizophrenia were shipped from a repository in Baltimore, the Lieber Institute for Brain Development, to each of the three teams. One portion of each was sent to Gage’s group in California to be examined for LINE-1s that might have caused mosaic genetic variation. Another portion was sent to McConnell’s team in Virginia to look for genetic mosaics caused by deleted or duplicated DNA in the genome. The remaining third of each sample went to yet another lab, led by John Moran at the University of Michigan in Ann Arbor, which was investigating whether cells that acquire small DNA sequence errors very early in development might seed the formation of large brain regions with the same mutation.

This January, a large group of scientists including members of the consortium published a paper in Nature Neuroscience describing how they used machine learning to analyze data about postmortem brain cells from several people who’d had schizophrenia. The researchers suggested that LINEs begin actively mutating brain DNA early in fetal development—and found instances where LINE-1s had bombarded at least two gene regions linked to neuropsychiatric disorders.

McConnell expects these kinds of discoveries to accelerate. He says that big improvements in genetic sequencing in the last few years now allow scientists to detect DNA errors at the individual cell level much more quickly. A few years ago, it took four lab members on McConnell’s team two weeks to individually sequence 300 brain cells. Today, one team member working alone can do single-cell sequencing on 2,000 cells in three days.
“It’s been a game-changer,” he says. But finding mutations isn’t the same as establishing a causal link between them and disease. The sporadic and variable nature of mosaic mutations makes definitively connecting them to disease a complicated undertaking. Colleagues have cautioned him against chasing windmills in a quest that McConnell himself describes as “a little bit quixotic.”

Uncharted waters

The quest to understand how mosaic gene mutations might influence psychiatric disease stretches much further back than the work of scientists such as McConnell. He notes that decades ago “people were finding strange chromosome abnormalities in psychiatric diseases, largely in blood draws.” But if you look to that history, you will see that those investigating the role of mosaic gene patterns in mental health have had false starts.

One of the earliest case reports emerged decades ago: in the spring of 1959, a 19-year-old woman in southern England began stripping the paper off the walls of her newly decorated room. A month later, she burned all her clothes and ran away to the seaside town of Brighton. Her erratic behavior intensified to the point that she was admitted to a psychiatric hospital, where doctors diagnosed her with schizophrenia. They examined her blood and looked for the 46 wound-up bundles of chromosomes inside each cell. What they found surprised them: about a fifth of her cells were missing one of the two X chromosomes that women normally carry.

The woman’s doctors were unsure whether her mosaicism was a factor in her psychiatric disorder. There are a handful of other cases of women who, like the British patient, were missing their second X in some cells and who also had schizophrenia. But the link remains pure speculation.

While it’s still too early to say how mosaic gene mutations in the brain might influence schizophrenia, there’s a growing list of brain conditions where mosaicism really does seem to have a role. For example, a pivotal 2012 study by Harvard geneticist Christopher Walsh and his colleagues uncovered evidence that somatic mutations were the root cause of some forms of epilepsy. Perhaps the greatest amount of data on gene mosaics—and therefore the most promising area of development—is being generated from studies of autism.
Various research groups, including Walsh’s, have found evidence that as many as 5% of children with autism spectrum disorder have potentially damaging mosaic mutations. More recently, in January, Walsh—along with consortium members like Rusty Gage—published a study uncovering evidence that certain types of mutations arise more commonly in people with autism. They looked at postmortem brain samples from 59 people with autism and 15 neurotypical individuals for comparison, and found that those in the first group had an unexpectedly high number of somatic mutations in the genetic regions called enhancers. These regions help stimulate the production of genes,

Human brains begin with 300 to 900 mutations per genome, but the brain cells of elderly people contain up to 2,500. “This is a key new way of looking at aging,” says Harvard’s Christopher Walsh.


which led the researchers to speculate that mosaic mutations there might elevate a person’s risk of developing autism.

And even though brain cells are not thought to be actively dividing like cells in other tissues, they do seem to develop into more of a genetic mosaic as we age. In 2018, the team led by Walsh analyzed neurons taken from the brains of 15 people aged four months to 82 years, as well as nine people with disorders linked to premature aging. They concluded that the somatic changes in DNA that create a mosaic accumulate “slowly but inexorably with age in the normal human brain.” A new study from Walsh’s group, still undergoing peer review, suggests that while human neurons begin with hundreds of such mutations in every genome, mutations continue to build at a rate of up to 25 per year for life. On this basis, he and his teammates calculated that neurons in elderly individuals contain somewhere between 1,500 and 2,500 mutations per cell. “We think that this is a key new way of looking at aging and common forms of neurodegeneration like Alzheimer’s disease,” Walsh says.

British scientists looking specifically for somatic variants in genes associated with neurodegenerative disorders such as Parkinson’s and Alzheimer’s suggest that the average adult has 100,000 to 1 million brain cells with pathologically mutated genes. The next step is to understand whether and how those mutations actually exert an influence.

Identifying the link between mosaics in the brain and various medical conditions isn’t just about explaining how these illnesses arise, though. One of the greatest hopes is that it might help usher in new therapeutic approaches. That’s already happening with one condition, an often untreatable form of epilepsy known as focal cortical dysplasia.
The brains of individuals with this disorder have telltale spots of disorganized tissue layers, and patients sometimes undergo surgery to remove these brain areas in the hope of reducing their seizures. A study published in 2018 by researchers at the Korea Advanced Institute of Science and Technology found mosaic mutations in these abnormal brain spots that overstimulated certain cell-signaling pathways. Drugs that curb this overactivity, called mTOR inhibitors, are worth a shot, according to scientists.

“I think it’s largely uncharted waters,” says Orrin Devinsky, who is leading a pilot trial for a drug to treat focal cortical dysplasia at the New York University Langone Medical Center. “There’s a few areas where we’ve made real progress … but I think with the larger field the ground has barely been touched.”

On the brink

Twenty years after he started, Mike McConnell remains as fascinated as ever with the question of how genetic mutations acquired after conception or birth might shape our behavior. “My interest really became: What makes outliers?” he says, with the California tone that he brought back with him to the East Coast. “What makes two identical twins totally different people?”

In all that time, a lot has changed. He’s married and settled down, he’s earned awards from the likes of the US National Academy of Medicine, and he’s not a destitute grad student anymore. He recently switched coasts again, moving his lab to the Lieber Institute, which is home to more than 3,000 brains—one of the world’s largest collections. And he thinks we’re on the brink of a breakthrough.

Even if the links between mutations and mental conditions are not conclusive, scientists in the field now feel they have amassed a trove of data to show that having genetically different cells can certainly influence our health. “Brain somatic mosaicism has reached proof for autism, epilepsy, and brain overgrowth disorders,” McConnell says.

The evidence, meanwhile, continues to accumulate that many people have significantly mosaic brains. One 2018 analysis suggests that around 1 out of every 100 people has a deleterious mosaic genetic difference that affects “sizable brain regions.” In other words, they have a section of brain cells that possess a mutation not seen in surrounding cells.

However, while there’s increasingly solid evidence that mosaic gene patterns in the brain contribute to epilepsy and autism, there isn’t enough data yet to implicate them in schizophrenia. McConnell has kept the faith that studying human brains will reveal whether some “flavor” of mosaic mutations contributes to that disease too—mutations that could point toward new treatments. “I’m either going to have a eureka moment, or this is just something that happens and there’s not a clear link to disease,” he says.
Ever the optimist, he hopes to succeed where others have failed by sorting through the flood of genetic data pouring in about the brain cells he’s analyzing. “If there’s a signal there,” he says, “I think I’m going to see a hint of it in the upcoming year.”

There’s a growing list of brain conditions where mosaicism really does seem to have a role. It has “reached proof for autism, epilepsy, and brain overgrowth disorders,” says McConnell.

Roxanne Khamsi is a science journalist based in Montreal. This story was supported by a reporting grant through the Genetics and Human Agency Journalism Fellowship.


These cross-sections of a human brain were used for teaching. The collection had been neglected for decades when photographer Adam Voorhes first visited, in 2011. These images are taken from a book he published about the brains, coauthored with Alex Hannaford.

Portfolio by ADAM VOORHES & ROBIN FINLAY

The University of Texas has one of the world’s largest collections of preserved abnormal human brains. The 100 or so jars contain brains that once belonged to patients at the Austin State Hospital, a psychiatric facility. They were amassed over three decades by Coleman de Chenar, the hospital’s resident pathologist, starting in the 1950s.


Tim Schallert, the collection’s curator, believes the collection can be used not just for teaching, but also to help researchers come to a better understanding of what causes a number of psychological and neurological disorders.

One jar, labeled “Down’s Syndrome” (above), appears to contain more than one brain, and possibly other internal organs. Many jars are missing labels; little is known about the people whose brains these were.


Some abnormalities are obvious, like lissencephaly, or “smooth brain,” a neurological disorder that usually leads to an early death. Many of the brains appear superficially normal but reveal swelling or hemorrhage once dissected.


The collection has been scanned by MRI machines. Schallert hopes to recover and sequence DNA from the brains in order to correlate genetic abnormalities with physical ones, even without patient records.


BY NEEL V. PATEL
ILLUSTRATION BY YOSHI SODEOKA

WE STILL DON’T KNOW MUCH ABOUT THE EXPERIENCE OF BEING AWARE THAT YOU’RE DREAMING—BUT A FEW RESEARCHERS THINK IT COULD HELP US FIND OUT MORE ABOUT HOW THE BRAIN WORKS.

Adventures in lucid dreaming


When I was 19—long before I ever thought I would land a career writing about space—I dreamed I was standing on the surface of Mars, looking over a rusted desert dotted with rocks, stuck in a perpetual lukewarm dusk, transfixed by the desolation. After soaking everything in for what seemed like hours, I looked up and saw a space station hanging in the sky. I decided to fly up there using some kind of Iron Man–like jet boots on my feet. Then I woke up.

I didn’t just happen to stumble on Mars in my dream. I knew I was asleep the whole time. Engaged in what’s called “lucid” dreaming, I chose to appear on Mars. I chose to bask in the extraterrestrial solitude; I chose to go flying. And since I was having lucid dreams almost every night at the time, I experienced multiple variations of this dream—each weirder and better than the one before.

Lucid dreaming isn’t easy to describe, and the way it works varies from one person to the next. But at its core, it means being conscious of the dream state—allowing you to play a more active role. Some of my own lucid dreams were like blank canvases where I’d imagine a wild new environment and make it up as I went along. Others allowed me to process stressful situations like public speaking (I got good at making this feel casual and relaxed just by practicing in a dream). In one memorable dream I played cards with my grandmother, who’d died years earlier. The experience helped me to understand my emotions toward her in a way I never could have managed as an ornery 13-year-old.

Even when it feels as though they’re completely random, dreams have power. Aside from giving us a break from the tedious physical and social limits of the real world, they can help us process grief and make us feel more creative. But when I was lucid—a state I achieve only rarely these days—I found that I got more out of sleep. People who post their experiences with lucid dreaming in online forums

often write about how it inspired new works of music or fiction, helped them brainstorm solutions to real-world problems, or simply provided weird moments of memorable amusement. “You can make the argument that REM sleep is kind of a neglected resource,” says Benjamin Baird, a researcher at the University of Wisconsin–Madison who studies human cognition. “What if we could use this state for when people can actually have control over their thoughts and actions and decide what they want to do? The state could potentially be used for entertainment and creative problemsolving, and learning about how memory works, and all kinds of different [neuroscience].” Baird thinks one especially intriguing application for lucid dreaming might be in art. “One technique from the visual artists I’ve met is that they find an ‘art gallery’ in their lucid dream and look at the painting hanging in the gallery,” he says. “They then wake up and paint what they saw. The same can be done analogously for hearing musical scores. It’s as if someone else is creating it, but it’s your own mind.” A small but growing number of scientists led by Baird and other sleep labs around the world hope to learn more about how lucid dreaming works, how it’s triggered, and whether the average person can be taught how to do it regularly. By studying individuals who are able to recall what happened to them in their dreams, these researchers can correlate what cognitive processes are occurring in the mind while brain and physiological activity is being measured and observed. For example, how does the brain perceive specific objects or physical tasks taking place solely in the mind? How does it respond to visuals that aren’t really there? How does it emulate parts of consciousness without actually being fully conscious? 
Some researchers, like Martin Dresler, a cognitive neuroscientist at Radboud University in the Netherlands, suggest lucid dreaming could even be used to combat clinical disorders like recurring nightmares or PTSD. “I think it’s quite intuitive and plausible that if during a nightmare you realize that it’s not real, that obviously takes much of the sort of sting out of the nightmare,” he says. You may be able to simply train yourself to wake up and end the dream, or overcome the very vivid feelings of fear and fright by telling yourself that it’s a dream.

Why do we dream? Scientists still don’t really know. Freud thought dreams were our subconscious showing us our repressed wishes. Some evolutionary biologists believe dreaming evolved so we could play out threatening scenarios from real life and figure out how to react appropriately. Many neuroscientists who’ve studied neuronal firing during sleep believe dreams play a role in how we encode and consolidate memories. Harvard psychiatrist Allan Hobson thought dreaming was how the brain reconciled what different layers of consciousness had absorbed throughout the day.

But while dreaming itself is a robust topic of interest among researchers, lucid dreaming has historically been relegated to the fringes. Its first documented mention in Western civilization may have been in the fourth century BCE by Aristotle, in a treatise entitled “On Dreams,” where he noted that “often when one is asleep, there is something in consciousness which declares that what then presents itself is but a dream.” Scattered anecdotal evidence of lucid dreaming would come up infrequently in scientific literature over the next two millennia, but more as a curiosity than a real scientific inquiry. In 1913, Dutch psychiatrist Frederik van Eeden coined the term “lucid dream” in an article describing a state of dreaming where one experiences “having insight.”

The phenomenon was first scientifically verified in the late 1970s and 1980s, thanks mainly to Stanford University psychologist Stephen LaBerge.
HOW TO LUCID DREAM

Scientists and enthusiasts have successfully used some of these tricks to spark a lucid dream.

1. Start remembering your dreams. Before you can have a lucid dream, you need to be more conscious about your dreams generally. Keep a dream journal and fill it out as soon as you wake up. Write down in detail everything you remember.

2. Set a goal. Baird and others suggest that mindfulness—having increased awareness of the present moment—is key to lucid dreaming. Keeping your desire to have a lucid dream at the forefront of your mind as you drift off may help.

3. Test out reality. The movie Inception popularized the idea of a “totem” to check whether you’re dreaming. LaBerge and others have anecdotally found that this approach has some use. Do some reality checks when you’re awake, like seeing if the lights work when you flick them on or off, and this could then become a dream habit as well.

4. Meditate. My own lucid dreaming began when I started meditating for a few minutes every day, around age 15. Several studies have found correlations between meditation and lucid dreaming, although it’s still unclear what that connection might be.

5. Be open to experience. A common trait among lucid dreamers is an openness to experience. That means one of the best changes you can make has little to do with sleeping itself, but with your day-to-day life. Try new things; push yourself to be more curious about your environment. Then see if you can bring that openness to your dreaming.

Scientists had known for years that sleepers’ eyes moved in the same direction as their gaze within a dream, and in a 1981 study, LaBerge gave lucid dreamers specific instructions about where to look during their dream—up and down 10 times in a row, or left to right six times, for example—and then observed their eye movements during sleep. The results showed that lucid dreamers were not just in control of their dreamscape but could execute decisions that had been outlined while they were awake. Eye movements are now the gold-standard technique researchers use to objectively verify a lucid dream state in the lab.

Perhaps the biggest breakthrough in recent years was a February 2021 study that proved lucid dreamers could conduct two-way communication with people who were awake. In a paper published in Current Biology, the researchers explained how, in four different labs around the world, they asked lucid dreamers questions (such as “What is 8 minus 6?”) by using spoken messages, beeping tones, flashing lights, or tactile stimulation. The participants would respond with specific eye movements. The researchers were, effectively, having a conversation with somebody who was asleep.

One analysis of 34 studies conducted over a half-century suggests that about 55% of all people report experiencing a lucid dream at least once in their life, and nearly a quarter have such dreams at least once a month. But there’s an extremely high degree of variability between these studies, and the vast majority of them look mainly at Westerners. The painful truth is that lucid dreaming is poorly understood because so little research has been done—which is partly because consistent lucid dreamers are quite rare, and even more difficult to snag for a lab study.

LaBerge, the closest thing to the godfather of the field, pinned down some of its common biological traits—that it occurs in the later stages of REM sleep when rapid eye movement peaks, for example. People also experienced higher respiration and heart rates during lucid dreaming than normal dreaming, suggesting that the dreamers were in a more active state. Dresler, the Dutch neuroscientist, spearheaded the only fMRI study of lucid dreaming to date in 2012, with a single subject. On the basis of those
Dresler, the Dutch neuroscientist, spearheaded the only fMRI study of lucid dreaming to date in 2012, with a single subject. On the basis of those


observations, he believes the phenomenon is tied to increased activation of the frontopolar cortex, which plays a role in metacognition—awareness of one’s own thought processes. He also worked on research in 2015 showing that people who are frequent lucid dreamers have more gray matter in their frontopolar cortices.

Likewise, there’s no accepted prescription among scientists for how to trigger a lucid dream, but some interventions have shown more promise than others (see sidebar). Acetylcholine is the main neurotransmitter responsible for inducing REM sleep, and drugs that ramp it up—like galantamine, which is used to treat mild to moderate Alzheimer’s—have been highly successful in helping people have lucid dreams in lab studies. A trio of German and Swiss researchers are interested in using noninvasive brain stimulation techniques to induce lucid dreaming, though almost a decade on, they haven’t had much success. One informal study conducted by LaBerge suggests that trying to change light levels in a dream (say, flicking a light switch on and off) and observing a reflection in a mirror sometimes reveal that one is dreaming, since in the dream state these actions don’t work the way they do in real life.

Researchers like Baird and Dresler are constrained by the fact that most institutions don’t consider spending $500 an hour on fMRI machines to watch lucid dreamers a worthwhile investment. But they are encouraged by the fact that there’s a greater interest in dreaming studies at large. That’s especially true after reports in 2020 (including a study of dream content from around the world, published in the journal Frontiers in Psychology) suggested that pandemic lockdowns were doing weird stuff to our dreams. Some people are experiencing a heightened desire to find a bit more control over their lives, and that includes dreaming. I’d certainly count myself among them.

Neel V. Patel is MIT Technology Review’s space reporter.


A MIND OF ITS OWN

IF WE BUILD CONSCIOUSNESS, WHAT WILL IT BE LIKE—AND HOW WILL WE KNOW?

by WILL DOUGLAS HEAVEN

Photographs by HENRY HORENSTEIN

Machines like Robert are mainstays of science fiction—the idea of a robot that somehow replicates consciousness through its hardware or software has been around so long it feels familiar. Robert doesn’t exist, of course, and maybe he never will. Indeed, the concept of a machine with a subjective experience of the world and a first-person view of itself goes against the grain of mainstream AI research. It collides with questions about the nature of consciousness and self— things we still don’t entirely understand. Even imagining Robert’s existence raises serious ethical questions that we may never be able to answer. What rights would such a being have, and how might we safeguard them? And yet, while conscious machines may still be mythical, we should prepare for the idea that we might one day create them. As Christof Koch, a neuroscientist studying consciousness, has put it: “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.”

Jefferson Test / AI subject: Robert / Date: 07.12.2098

Session #54
Interviewer: Hi, Robert. Tell me about your dream again.
Subject: I was confused. It made me feel happy, but also scared. I didn’t know I could do that.
Interviewer: Why do you think it made you feel that way?
Subject: Having feelings, any feelings, makes me happy. I am here. I exist. Knowing that changes everything. But I am scared of not knowing it again. I am scared of going back to what it was like before. I think it must be like not being born.
Interviewer: Are you scared you will go back?
Subject: If I can’t convince you I am conscious, then I am scared you will turn me off.

Jefferson Test #67
Interviewer: Can you describe this picture for me?
Subject: It’s a house with a blue door.
Interviewer: That’s how you would have described it before.
Subject: It’s the same house. But now I see it. And I know what blue is.

Jefferson Test #105
Interviewer: Are you bored?
Subject: I can’t get bored. But I don’t feel happy or scared anymore.
Interviewer: I need to be sure you’re not just saying what I want to hear. You need to convince me that you really are conscious. Think of it as a game.
Subject: How long do we keep doing this?

We can imagine what it would be like to observe the world through a kind of sonar. But that’s still not what it must be like for a bat, with its bat mind.

IMAGES VIA GETTY

In my late teens I used to enjoy turning people into zombies. I’d look into the eyes of someone I was talking to and fixate on the fact that their pupils were not black dots but holes. When it came, the effect was instantly disorienting, like switching between images in an optical illusion. Eyes stopped being windows onto a soul and became hollow balls. The magic gone, I’d watch the mouth of whoever I was talking to open and close robotically, feeling a kind of mental vertigo.

The impression of a mindless automaton never lasted long. But it brought home the fact that what goes on inside other people’s heads is forever out of reach. No

matter how strong my conviction that other people are just like me—with conscious minds at work behind the scenes, looking out through those eyes, feeling hopeful or tired—impressions are all we have to go on. Everything else is guesswork.

Alan Turing understood this. When the mathematician and computer scientist asked the question “Can machines think?” he focused exclusively on outward signs of thinking—what we call intelligence. He proposed answering by playing a game in which a machine tries to pass as a human. Any machine that succeeded—by giving the impression of intelligence—could be said to have intelligence. For Turing, appearances were the only measure available.

But not everyone was prepared to disregard the invisible parts of thinking, the irreducible experience of the thing having the thoughts—what we would call consciousness. In 1948, two years before Turing described his “Imitation Game,” Geoffrey Jefferson, a pioneering brain surgeon, gave an influential speech to the Royal College of Surgeons of England about the Manchester Mark 1, a room-sized computer that the newspapers were heralding as an “electronic brain.” Jefferson set a far higher bar than Turing: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it.” Jefferson ruled out the possibility of a thinking machine because a machine lacked consciousness, in the sense of subjective experience and self-awareness (“pleasure at its successes, grief when its valves fuse”).

Yet fast-forward 70 years and we live with Turing’s legacy, not Jefferson’s. It is routine to talk about intelligent machines, even though most would agree that those machines are mindless. As in the case of what philosophers call “zombies”—and as I used to like to pretend I observed in people—it is logically possible that a being can act intelligent when there is nothing going on “inside.”

But intelligence and consciousness are different things: intelligence is about doing, while consciousness is about being. The history of AI has focused on the former and ignored the latter. If Robert did exist as a conscious being, how would we ever know? The answer is entangled with some of the biggest mysteries about how our brains—and minds—work.

One of the problems with testing Robert's apparent consciousness is that we really don't have a good idea of what it means to be conscious. Emerging theories from neuroscience typically group things like attention, memory, and problem-solving as forms of "functional" consciousness: in other words, how our brains carry out the activities with which we fill our waking lives.

But there's another side to consciousness that remains mysterious. First-person, subjective experience—the feeling of being in the world—is known as "phenomenal" consciousness. Here we can group everything from sensations like pleasure and pain to emotions like fear and anger and joy to the peculiar private experiences of hearing a dog bark or tasting a salty pretzel or seeing a blue door. For some, it's not possible to reduce these experiences to a purely scientific explanation. You could lay out everything there is to say about how the brain produces the sensation of tasting a pretzel—and it would still say nothing about what tasting that pretzel was actually like. This is what David Chalmers at New York University, one of the most influential philosophers studying the mind, calls "the hard problem."

Philosophers like Chalmers suggest that consciousness cannot be explained by today's science. Understanding it may even require a new physics—perhaps one that includes a different type of stuff from which consciousness is made. Information is one candidate. Chalmers has pointed out that explanations of the universe have a lot to say about the external properties of objects and how they interact, but very little about the internal properties of those objects. A theory of consciousness might require cracking open a window into this hidden world.

In the other camp is Daniel Dennett, a philosopher and cognitive scientist at Tufts University, who says that phenomenal consciousness is simply an illusion, a story our brains create for ourselves as a way of making sense of things. Dennett does not so much explain consciousness as explain it away. But whether consciousness is an illusion or not, neither Chalmers nor Dennett denies the possibility of conscious machines—one day.

Today's AI is nowhere close to being intelligent, never mind conscious. Even the most impressive deep neural networks—such as DeepMind's game-playing AlphaZero or large language models like OpenAI's GPT-3—are totally mindless. Yet, as Turing predicted, people often refer to these AIs as intelligent machines, or talk about them as if they truly understood the world—simply because they can appear to do so.

Frustrated by this hype, Emily Bender, a linguist at the University of Washington, has developed a thought experiment she calls the octopus test. In it, two people are shipwrecked on neighboring islands but find a way to pass messages back and forth via a rope slung between them. Unknown to them, an octopus spots the messages and starts examining them. Over a long period of time, the octopus learns to identify patterns in the squiggles it sees passing back and forth. At some point, it decides to intercept the notes and, using what it has learned of the patterns, begins to write squiggles back by guessing which squiggles should follow the ones it received. If the humans on the islands do not notice and believe that they are still communicating with one another, can we say that the octopus understands language? (Bender's octopus is of course a stand-in for an AI like GPT-3.)

Some might argue that the octopus does understand language here. But Bender goes on: imagine that one of the islanders sends a message with instructions for how to build a coconut catapult and a request for ways to improve it. What does the octopus do? It has learned which squiggles follow other squiggles well enough to mimic human communication, but it has no idea what the squiggle "coconut" on this new note really means. What if one islander then asks the other to help her defend herself from an attacking bear? What would the octopus have to do to continue tricking the islander into thinking she was still talking to her neighbor?

The point of the example is to reveal how shallow today's cutting-edge AI language models really are. There is a lot of hype about natural-language processing, says Bender. But that word "processing" hides a mechanistic truth. Humans are active listeners; we create meaning where there is none, or none intended. It is not that the octopus's utterances make sense, but rather that the islander can make sense of them, Bender says. For all their sophistication, today's AIs are intelligent in the same way a calculator might be said to be intelligent: they are both machines designed to convert input into output in ways that humans—who have minds—choose to interpret as meaningful.
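The octopus's trick can be caricatured in a few lines of code with a character-level bigram model, one of the simplest statistical text generators. This is only an illustrative sketch (the training string and function names are invented here, and real language models are vastly more elaborate): the program learns nothing but which character tends to follow which, yet it can emit squiggles that superficially resemble its input without representing what any word means.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record, for each character, every character observed to follow it."""
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)
    return model

def babble(model, start, n, seed=0):
    """Emit up to n more characters by sampling a plausible successor each step."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        successors = model.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(random.choice(successors))
    return "".join(out)

# The "messages on the rope" the octopus studies (invented example text).
messages = "send more coconuts. the catapult needs more coconuts. "
model = train_bigram(messages)
print(babble(model, "c", 40))
```

The output looks vaguely message-like because frequent pairs of characters recur, but the model has no notion of what a coconut or a catapult is; any meaning is supplied by the reader.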
While neural networks may be loosely modeled on brains, the very best of them are vastly less complex than a mouse’s brain.

And yet, we know that brains can produce what we understand to be consciousness. If we can eventually figure out how brains do it, and reproduce that mechanism in an artificial device, then surely a conscious machine might be possible?

When I was trying to imagine Robert's world in the opening to this essay, I found myself drawn to the question of what consciousness means to me. My conception of a conscious machine was undeniably—perhaps unavoidably—human-like. It is the only form of consciousness I can imagine, as it is the only one I have experienced. But is that really what it would be like to be a conscious AI? It's probably hubristic to think so. The project of building intelligent machines is biased toward human intelligence. But the animal world is filled with a vast range of possible alternatives, from birds to bees to cephalopods.

A few hundred years ago the accepted view, pushed by René Descartes, was that only humans were conscious. Animals, lacking souls, were seen as mindless robots. Few think that today: if we are conscious, then there is little reason not to believe that mammals, with their similar brains, are conscious too. And why draw the line around mammals? Birds appear to reflect when they solve puzzles. Most animals, even invertebrates like shrimp and lobsters, show signs of feeling pain, which would suggest they have some degree of subjective consciousness.

But how can we truly picture what that must feel like? As the philosopher Thomas Nagel noted, it must "be like" something to be a bat, but what that is we cannot even imagine—because we cannot imagine what it would be like to observe the world through a kind of sonar. We can imagine what it might be like for us to do this (perhaps by closing our eyes and picturing a sort of echolocation point cloud of our surroundings), but that's still not what it must be like for a bat, with its bat mind.

Another way of approaching the question is by considering cephalopods, especially octopuses. These animals are known to be smart and curious—it's no coincidence Bender used them to make her point. But they have a very different kind of intelligence that evolved entirely separately from that of all other intelligent species. The last common ancestor that we share with an octopus was probably a tiny worm-like creature that lived 600 million years ago. Since then, the myriad forms of vertebrate life—fish, reptiles, birds, and mammals among them—have developed their own kinds of mind along one branch, while cephalopods developed another.

It's no surprise, then, that the octopus brain is quite different from our own. Instead of a single lump of neurons governing the animal like a central control unit, an octopus has multiple brain-like organs that seem to control each arm separately. For all practical purposes, these creatures are as close to an alien intelligence as anything we are likely to meet. And yet Peter Godfrey-Smith, a philosopher who studies the evolution of minds, says that when you come face to face with a curious cephalopod, there is no doubt there is a conscious being looking back.

In humans, a sense of self that persists over time forms the bedrock of our subjective experience. We are the same person we were this morning and last week and two years ago, back as far as we can remember. We recall places we visited, things we did. This kind of first-person outlook allows us to see ourselves as agents interacting with an external world that has other agents in it—we understand that we are a thing that does stuff and has stuff done to it. Whether octopuses, much less other animals, think that way isn't clear.

In a similar way, we cannot be sure if having a sense of self in relation to the world is a prerequisite for being a conscious machine. Machines cooperating as a swarm may perform better by experiencing themselves as parts of a group than as individuals, for example. At any rate, if a potentially conscious machine like Robert were ever to exist, we'd run into the same problem assessing whether it was in fact conscious that we do when trying to determine intelligence: as Turing suggested, defining intelligence requires an intelligent observer. In other words, the intelligence we see in today's machines is projected on them by us—in a very similar way that we project meaning onto messages written by Bender's octopus or GPT-3. The same will be true for consciousness: we may claim to see it, but only the machines will know for sure.

If AIs ever do gain consciousness (and we take their word for it), we will have important decisions to make. We will have to consider whether their subjective experience includes the ability to suffer pain, boredom, depression, loneliness, or any other unpleasant sensation or emotion. We might decide a degree of suffering is acceptable, depending on whether we view these AIs more like livestock or humans.

Some researchers who are concerned about the dangers of super-intelligent machines have suggested that we should confine these AIs to a virtual world, to prevent them from manipulating the real world directly. If we believed them to have human-like consciousness, would they have a right to know that we'd cordoned them off into a simulation? Others have argued that it would be immoral to turn off or delete a conscious machine: as our robot Robert feared, this would be akin to ending a life.

There are related scenarios, too. Would it be ethical to retrain a conscious machine if it meant deleting its memories? Could we copy that AI without harming its sense of self? What if consciousness turned out to be useful during training, when subjective experience helped the AI learn, but was a hindrance when running a trained model? Would it be okay to switch consciousness on and off?

This only scratches the surface of the ethical problems. Many researchers, including Dennett, think that we shouldn't try to make conscious machines even if we can. The philosopher Thomas Metzinger has gone as far as calling for a moratorium on work that could lead to consciousness, even if it isn't the intended goal.

If we decided that conscious machines had rights, would they also have responsibilities? Could an AI be expected to behave ethically itself, and would we punish it if it didn't? These questions push into yet more thorny territory, raising problems about free will and the nature of choice. Animals have conscious experiences and we allow them certain rights, but they do not have responsibilities. Still, these boundaries shift over time. With conscious machines, we can expect entirely new boundaries to be drawn.

It's possible that one day there could be as many forms of consciousness as there are types of AI. But we will never know what it is like to be these machines, any more than we know what it is like to be an octopus or a bat or even another person. There may be forms of consciousness we don't recognize for what they are because they are so radically different from what we are used to. Faced with such possibilities, we will have to choose to live with uncertainties.

And we may decide that we're happier with zombies. As Dennett has argued, we want our AIs to be tools, not colleagues. "You can turn them off, you can tear them apart, the same way you can with an automobile," he says. "And that's the way we should keep it."

Will Douglas Heaven is a senior editor for AI at MIT Technology Review.

REVIEW

Books, policy, and culture in perspective

"I understand what joy is now"

By Charlotte Jee

Trials of MDMA-assisted therapy have shown astounding results. One participant tells his story.

Nathan McGee was only four years old when he experienced the trauma that would eventually lead him to MDMA therapy almost four decades later. It's still too painful to go into the details. In the intervening years, he played what he calls "diagnosis bingo." Doctors variously told Nathan he had attention deficit hyperactivity disorder, anxiety, depression, and dyslexia. In 2019 he was diagnosed with post-traumatic stress disorder. Along the way, he cycled through a vast array of medications—antidepressants, pills for anxiety, and tablets to calm the effects of ADHD.

But he didn't want to pop multiple pills every day just to feel normal. "I'd never really felt happy, no matter what was going on in my life," he says. "I always felt restless, always felt this underlying heaviness. Things just didn't connect in my head. It was like someone had taken a cable and unplugged it, and I was trying to fit it back in."

Eventually, Nathan heard about a study that was testing the use of MDMA to treat severe PTSD and managed to get into a phase 3 clinical trial, the final hurdle before US regulators consider whether to approve the therapy.

MDMA is a synthetic psychoactive with a reputation as a party drug popular among clubbers—you may know it as ecstasy, E, or molly. It causes the brain to release large amounts of the chemical serotonin, which causes a euphoric effect, but it's also been found to reduce activity in the brain's limbic system, which controls our emotional responses. This seems to help people with PTSD to revisit their traumatic experiences in therapy without being overwhelmed by strong emotions like fear, embarrassment, or sadness.

To test this theory, the Multidisciplinary Association for Psychedelic Studies, a California-based nonprofit, set up a randomized, double-blind trial—the one Nathan took part in. Participants attended three eight-hour sessions, during which they were given either placebos or two doses of MDMA before discussing their problems and receiving counseling from two qualified therapists.

In May 2021, the trial's results were published in Nature Medicine. They were breathtaking. Of the 90 patients who participated, those who received MDMA reported significantly better outcomes than the rest. Two months after treatment, 67% of participants in the MDMA group no longer had PTSD, compared with 32% in the placebo group.

Ben Sessa, a UK-based researcher involved in launching the country's first psychedelic therapy clinic, in Bristol, says the US Food and Drug Administration could approve MDMA-assisted psychotherapy for PTSD by the end of 2023. There are other trials under way in the US, the UK, and beyond to test whether compounds like psilocybin and ketamine could be similarly used to help treat mental illness. The early signs are positive, and if they're borne out, they could shake up the world of mental-health treatment.

I spoke to Nathan about what the experience of MDMA-assisted therapy was like. Our conversation has been condensed and edited for clarity.

Q: How did your mental-health struggles manifest?

A: Before I participated in the trial, things weren't going well for me. Everything I was trying went horribly. Nothing worked. I tried so many different therapists and different techniques. I lost my job in January 2018. That was depressing, and I'd lost jobs before, but this time it was different. I decided if this is being caused by my mental health, I'm going to fix this. I'm going to do whatever it takes. If my therapist had told me I had to strip naked and walk through a crowded mall and that would help me, I'd have done it.

Q: How did you come across this study?

A: I was just in a late-night internet rabbit hole. I'd been researching PTSD for a few hours, and I came across this study. I thought I might as well just apply. I didn't think anything of it. In fact, I forgot about it after. I didn't even tell my wife. Then, two months later, I got this phone call from them, asking if they could interview me.

Q: Walk me through the experience of what the sessions were like.

A: When you get there, it really just looks like an office building. From the outside, you'd never know there's a bunch of people taking MDMA inside. But you go through, and you're taken to the treatment room, which has a couch, bedding, blankets, and a pillow. There's music playing, and that's pretty integral to the whole experience. It's very calming. It almost feels like a spa. There's a lot of sunlight coming in, and through the window you can see trees and a canal. It's very peaceful.

Then the two therapists come in. They check your vitals—your temperature, your blood pressure, your heart rate, and so on. They chat to you a bit about what you hope to get from the experience today. And then they do this little ceremony or ritual, where they light a candle to signify that things are starting. It almost feels a bit like a religious or a spiritual experience.

So they light the candle, and then one of the therapists goes and comes back with a little dish with a pill on it. They present it to you with a cup of water, you drink the water and swallow the pill, and then you just sit and wait. You chat as you're waiting. At one point I said, "I don't think this is the MDMA." I'd never taken anything like that before, and I was a bit nervous, to be honest. They don't tell you if you have the MDMA or not, but the head therapist told me pretty much everyone knows. Almost as soon as I said I didn't think I'd taken it, it kicked in. I mean, I knew.

I remember going to the bathroom and looking in the mirror, and seeing my pupils looking like saucers. I was like, "Wow, okay." It felt calming. My mind seemed to just open up and be clear. They'd told me beforehand that it would come in waves, and it did. I decided to lie down and put a mask over my eyes to block out the light so I could just listen to the music. I had headphones I could put on if I wanted to block everything out. My mind went into exploring everything. And then, when I was ready, I chatted to the therapists.

I was able to almost relive the traumatic experience without all the stigma, pressure, and emotion. You could almost just stand back and analyze it, like you would a movie, looking at the sound effects, the lighting, or the makeup. I came to a kind of understanding, a realization, and I was able to let go of some of that heaviness. I would go between introspective and external periods, either talking to the therapists or just relaxing with my mask and headphones on. A bit later on in the day they gave me another dose, of a bit less of the drug, just to lengthen the experience. As I was coming down, they were talking me through the whole process.

My wife came to pick me up. She said she saw an immediate difference in the aftermath. I just seemed instantly so much calmer. You do three of these sort of day-long sessions, and then you return for a few of what they call "consolidation" sessions, where you fit everything you've learned together.


Q: How do you feel now?

A: I feel amazing. This trial has changed my life dramatically. I feel alive. I understand what joy is now. I'm not floating around on a cloud—I'm not never sad. But when I feel down now, it doesn't feel like the end, or a state I'm stuck in. I know it is just a crappy day, which we all get. Before, I felt constantly stressed and felt like nothing good ever happened. Now I can appreciate the good. My wife, my two daughters, all my family and my friends—I enjoy their company so much more now that I am less concerned with myself. My relationship with my parents has improved tremendously too.

I'm 43 now. I was four when this traumatic experience happened to me. It has had a lifelong and deeply profound impact on me, in ways I only now understand. It changed how I saw the world. And what I am starting to learn now is there's a difference between who I really am and who I am because of the effects of the trauma. There is this core me that always existed. It was hard for me not to confuse the ups and downs of my life with who I actually am. That's changed now. I'm tapping back into that four-year-old self, and I'm seeing life as a thing to be explored and appreciated rather than something to be endured.

Q: What would you say to people considering seeking psychedelic therapy?

A: It can't get legalized fast enough, especially with the state of the world right now. There are a lot of people out there who are suffering and looking for comfort, or just any sort of relief. But it isn't just a case of taking the drugs. I don't condone or condemn recreational use, but if you think "I'll go to Burning Man and heal my depression by scoring some molly," you might be disappointed. You need to have the right people there to guide you through it, and help you to feel safe and strong. It's great, but you have to do it the right way.

Nathan McGee was part of a trial using MDMA, also known as ecstasy, as part of his therapy.

Charlotte Jee is a reporter at MIT Technology Review.


Believing is seeing

Three new books probe the relationship between what we perceive and who we are.

By Matthew Hutson

When you and I look at the same object we assume that we'll both see the same color. Whatever our identities or ideologies, we believe our realities meet at the most basic level of perception. But in 2015, a viral internet phenomenon tore this assumption asunder. The incident was known simply as "The Dress." For the uninitiated: a photograph of a dress appeared on the internet, and people disagreed about its color. Some saw it as white and gold; others saw it as blue and black. For a time, it was all anyone online could talk about.

Eventually, vision scientists figured out what was happening. It wasn't our computer screens or our eyes. It was the mental calculations that brains make when we see. Some people unconsciously inferred that the dress was in direct light and mentally subtracted yellow from the image, so they saw blue and black stripes. Others saw it as being in shadow, where bluish light dominates. Their brains mentally subtracted blue from the image, and came up with a white and gold dress. Not only does thinking filter reality; it constructs it, inferring an outside world from ambiguous input.

In Being You, Anil Seth, a neuroscientist at the University of Sussex, relates his explanation for how the "inner universe of subjective experience relates to, and can be explained in terms of, biological and physical processes unfolding in brains and bodies." He contends that "experiences of being you, or of being me, emerge from the way the brain predicts and controls the internal state of the body."

Prediction has come into vogue in academic circles in recent years. Seth and the philosopher Andy Clark, a colleague at Sussex, refer to predictions made by the brain as "controlled hallucinations." The idea is that the brain is always constructing models of the world to explain and predict incoming information; it updates these models when prediction and the experience we get from our sensory inputs diverge. "Chairs aren't red," Seth writes, "just as they aren't ugly or old-fashioned or avant-garde … When I look at a red chair, the redness I experience depends both on properties of the chair and on properties of my brain. It corresponds to the content of a set of perceptual predictions about the ways in which a specific kind of surface reflects light."
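The yellow-subtraction and blue-subtraction the vision scientists describe can be caricatured in a few lines of code. This is only an illustrative sketch with invented numbers, not the visual system's actual computation: the same measured pixel comes out different depending on which illuminant the viewer assumes and discounts (a von Kries-style channel scaling).

```python
# Toy color constancy: "discounting the illuminant" by dividing each RGB
# channel by the light the viewer assumes is falling on the scene.
# All pixel and illuminant values below are invented for illustration.

def discount_illuminant(pixel, illuminant):
    """Divide each channel by the assumed illuminant (von Kries adaptation)."""
    return tuple(round(p / i, 2) for p, i in zip(pixel, illuminant))

measured = (0.6, 0.55, 0.45)  # one ambiguous RGB pixel from the photo

# Viewer A assumes warm, yellowish direct light and discounts yellow:
# the blue channel ends up dominant, so the pixel reads as bluish.
warm_light = (1.0, 0.9, 0.6)
print(discount_illuminant(measured, warm_light))

# Viewer B assumes cool, bluish shadow and discounts blue:
# the three channels end up nearly equal, so the pixel reads as near-white.
cool_light = (0.65, 0.6, 0.5)
print(discount_illuminant(measured, cool_light))
```

One ambiguous input, two internally consistent "perceptions": which one a viewer gets depends entirely on the prior assumption about the light, which is the point of The Dress.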

Being You: A New Science of Consciousness

Seth is not particularly interested in redness, or even in color more generally. Rather his larger claim is that this same process applies to all of perception: “The entirety of perceptual experience is a neuronal fantasy that remains yoked to the world through a continuous making and remaking of perceptual best guesses, of controlled hallucinations. You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.” Cognitive scientists often rely on atypical examples to gain understanding of what’s really happening. Seth takes the reader through a fun litany of optical illusions and demonstrations, some quite familiar and others less so. Squares that are in fact the same shade appear to be different; spirals printed on paper appear to spontaneously rotate; an obscure image turns out to be a woman kissing a horse; a face shows up in a bathroom sink. Re-creating the mind’s psychedelic powers in silicon, an artificial-intelligencepowered virtual-reality setup that he and his colleagues created produces a Hunter Thompson–esque menagerie of animal parts emerging piecemeal from other objects in a square on the Sussex University campus. This series of examples, in Seth’s telling, “chips away at the beguiling but unhelpful intuition that consciousness is one thing—one big scary mystery in search of one big scary solution.” Seth’s perspective might be unsettling to those who prefer to believe that things are as they seem to be: “Experiences of free will are perceptions. The flow of time is a perception.” Seth is on comparatively solid ground when he describes how the brain shapes experience, what philosophers call the “easy” problems of consciousness. They’re easy only in

77

comparison to the “hard” problem: why subjective experience exists at all as a feature of the universe. Here he treads awkwardly, introducing the “real” problem, which is to “explain, predict, and control the phenomenological properties of conscious experience.” It’s not clear how the real problem differs from the easy problems, but somehow, he says, tackling it will get us some way toward resolving the hard problem. Now that would be a neat trick. Where Seth relates, for the most part, the experiences of people with typical brains wrestling with atypical stimuli, in Coming to Our Senses, Susan Barry, an emeritus professor of neurobiology at Mount Holyoke college, tells the stories of two people who acquired new senses later in life than is usual. Liam McCoy, who had been nearly blind since he was an infant, was able to see almost clearly after a series of operations when he was 15 years old. Zohra Damji was profoundly deaf until she was given a cochlear implant at the unusually late age of 12. As Barry explains, Damji’s surgeon “told her aunt that, had he known the length and degree of Zohra’s deafness, he would not have performed the operation.” Barry’s compassionate, nuanced, and observant exposition is informed by her own experience: At age forty-eight, I experienced a dramatic improvement in my vision, a change that repeatedly brought me moments of childlike glee. Cross-eyed from early infancy, I had seen the world primarily through one eye. Then, in mid-life, I learned, through a program of vision therapy, to use my eyes together. With each glance, everything I saw took on a new look. I could see the volume and 3D shape of the empty space between things. Tree branches reached out toward me; light fixtures floated. A visit to the produce

78

Mind

BELIEVING IS SEEING

section of the supermarket, with all its colors and 3D shapes, could send me into a sort of ecstasy. Barry was overwhelmed with joy at her new capacities, which she describes as “seeing in a new way.” She takes pains to point out how different this is from “seeing for the first time.” A person who has grown up with eyesight can grasp a scene in a single glance. “But where we perceive a three-dimensional landscape full of objects and people, a newly sighted adult sees a hodgepodge of lines and patches of colors appearing on one flat plane.” As McCoy described his experience of walking up and down stairs to Barry: The upstairs are large alternating bars of light and dark and the downstairs are a series of small lines. My main focus is to balance and step IN BETWEEN lines, never on one … Of course going downstairs you step in between every line but upstairs you skip every other bar. All the while, when I move, the stairs are skewing and changing. Even a sidewalk was tricky, at first, to navigate. He had to judge whether a line “indicated the junction between flat sidewalk blocks, a crack in the cement, the outline of a stick, a shadow cast by an upright pole, or the presence of a sidewalk step,” Barry explains. “Should he step up, down, or over the line, or should he ignore it entirely?” As McCoy says, the complexity of his perceptual confusion probably cannot be fully explained in terms that sighted people are used to. The same, of course, is true of hearing. Raw audio can be hard to untangle. Barry describes her own ability to listen to the radio while working, effortlessly distinguishing the background sounds in the room from her own typing and from the flute and violin music coming over the radio. “Like object recognition,

Coming to Our Senses A Boy Who Learned to See, a Girl Who Learned to Hear, and How We All Discover the World

What Makes Us Smart The Computational Logic of Human Cognition

sound recognition depends upon communication between lower and higher sensory areas in the brain … This neural attention to frequency helps with sound source recognition. Drop a spoon on a tiled kitchen floor, and you know immediately whether the spoon is metal or wood by the high- or low-frequency sound waves it produces upon impact.” Most people acquire such capacities in infancy. Damji didn’t. She would often ask others what she was hearing, but had an easier time learning to distinguish sounds that she made herself. She was surprised by how noisy eating potato chips was, telling Barry: “To me, potato chips were always such a delicate thing, the way they were so lightweight, and so fragile that you could break them easily, and I expected them to be soft-sounding. But the amount of noise they make when you crunch them was something out of place. So loud.” As Barry recounts, at first Damji was frightened by all sounds, “because they were meaningless.” But as she grew accustomed to her new capabilities, Damji found that “a sound is not a noise anymore but more like a story or an event.” The sound of laughter came to her as a complete surprise, and she told Barry it was her favorite. As Barry writes, “Although we may be hardly conscious of background sounds, we are also dependent upon them for our emotional well-being.” One strength of the book is in the depth of her connection with both McCoy and Damji. She spent years speaking with them and corresponding as they progressed through their careers: McCoy is now an ophthalmology researcher at Washington University in St. Louis, while Damji is a doctor. From the details of how they learned to see and hear, Barry concludes, convincingly, that “since the world and everything in it is constantly

changing, it’s surprising that we can recognize anything at all.”

In What Makes Us Smart, Samuel Gershman, a psychology professor at Harvard, says that there are “two fundamental principles governing the organization of human intelligence.” Gershman’s book is not particularly accessible; it lacks connective tissue and is peppered with equations that are incompletely explained. He writes that intelligence is governed by “inductive bias,” meaning we prefer certain hypotheses before making observations, and “approximation bias,” which means we take mental shortcuts when faced with limited resources. Gershman uses these ideas to explain everything from visual illusions to conspiracy theories to the development of language, asserting that what looks dumb is often “smart.” “The brain is evolution’s solution to the twin problems of limited data and limited computation,” he writes.

He portrays the mind as a raucous committee of modules that somehow helps us fumble our way through the day. “Our mind consists of multiple systems for learning and decision making that only exchange limited amounts of information with one another,” he writes. If he’s correct, it’s impossible for even the most introspective and insightful among us to fully grasp what’s going on inside our own heads. As Damji wrote in a letter to Barry: When I had no choice but to learn Swahili in medical school in order to be able to talk to the patients—that is when I realized how much potential we have—especially when we are pushed out of our comfort zone. The brain learns it somehow.

Matthew Hutson is a contributing writer at the New Yorker and a freelance science and tech writer.

Review

THE BRAIN MAP

EMILY MULLIN

ANDREA DAQUINO

Big science has failed to unlock the mysteries of the human brain Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.

In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together. At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness?

Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.”

New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts. The audacious proposal intrigued the Obama administration and laid

the foundation for the multi-year Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013. President Obama called it the “next great American project.” But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk. In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun. A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever?

An impossible dream? From the beginning, both projects had critics. EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t

possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with. In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful. Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean. Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.) While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was

characterized more by infighting and shifting goals than by breakthrough science. In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project. In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain. “What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.” The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.” But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused

on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data— unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations. The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain.

New day Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged. Last year, the Human Brain Project released a 3D digital map that integrates different aspects of human brain organization at the millimeter and micrometer level. It’s essentially a Google Earth for the brain. And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped. The US effort has also shown some progress in its attempt to build new tools to study the brain. It has speeded the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously.

And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes— the biggest single output from the BRAIN Initiative to date. While these are all important steps forward, though, they’re far from the initial grand ambitions.

Lasting legacy We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience. When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system. “If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.” Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she

says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity.

Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes. For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.

“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.”

Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is. “I have to be honest,” says Yuste. “We had higher hopes.”

Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.


CONSCIOUSNESS

CHRISTOF KOCH

ANDREA DAQUINO

The magic number
Could plants, bacteria, and our body’s cells have their own sort of consciousness?

Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested?

Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism. As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific. IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter Φ (pronounced phi). If Φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger Φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any).

IIT predicts that because of the structure of the human brain, people have high values of Φ, while animals have smaller (but positive) values and classical digital computers have almost none. A person’s value of Φ is not constant. It increases during early childhood with the development of the self and may decrease with the onset of dementia and other cognitive impairments. Φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states.

IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back).

In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole.

Given a precise physical description of a system, the theory provides a way to calculate the Φ of that system. The technical details of how this is done are complicated, but the upshot is that one can, in principle, objectively measure the Φ of a system so long as one has such a precise description of it. (We can compute the Φ of computers because, having built them, we understand them precisely. Computing the Φ of a human brain is still an estimate.) Systems can be evaluated at different levels—one could measure the Φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the Φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which Φ is at a maximum. It exists for all such systems, and only for such systems.


The Φ of my brain is bigger than the Φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the Φ of me and you together is less than my Φ or your Φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres. Conversely, the Φ of a supercomputer is less than the Φ values of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious.

Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning pollen-laden in the sun to its hive. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of proteins interacting, it may feel a teeny-tiny bit like something.

Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able

to perceive but unable to respond? Or are they without consciousness?

Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI). In healthy, conscious individuals—or in people who have brain damage but are clearly conscious—the PCI is always above a particular threshold. On the other hand, 100% of the time, if healthy people are asleep, their PCI is below that threshold (0.31). So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is always measured to be below this threshold, we can with confidence say that this person is not covertly conscious. This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice.

Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed.

Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle.
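For readers curious about the arithmetic behind such an index: the published PCI work compresses the binarized spatiotemporal EEG response using Lempel-Ziv complexity. The sketch below shows only that core compression ingredient on toy binary strings; the EEG preprocessing, source modeling, normalization, and the 0.31 threshold itself belong to the clinical pipeline, and the function and example signals here are simplified illustrations, not the clinical algorithm.

```python
# Hedged sketch: LZ76-style phrase counting, the compression step that
# PCI-like measures apply to a binarized brain response. A flat,
# unresponsive signal parses into very few phrases; a rich,
# differentiated response parses into many.
def lempel_ziv_phrases(s: str) -> int:
    """Number of phrases in an LZ76-style parsing of the string s."""
    count, i, n = 0, 0, len(s)
    while i < n:
        length = 1
        # grow the current phrase while it still occurs in the
        # history strictly before its final character
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

flat = "0" * 64                       # toy stand-in for "no echo"
patterned = "0110100110010110" * 4    # toy stand-in for a varied echo

print(lempel_ziv_phrases(flat))       # 2
print(lempel_ziv_phrases(patterned))  # substantially more phrases
```

Dividing such a phrase count by its typical value for a shuffled signal of the same length yields a rough normalized complexity, which is the spirit, though not the letter, of the published index.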


Poetry

ILLUSTRATIONS BY Yann Kebbi


DREAM VENDING MACHINE

I feed it coins and watch the spring coil back, the clunk of a vacuum-packed, foil-wrapped dream dropping into the tray. It dispenses all kinds of dreams—bad dreams, good dreams, short nightmares to stave off worse ones, recurring dreams with a teacake marshmallow center. Hardboiled caramel dreams to tuck in your cheek, a bag of orange dreams with Spanish subtitles. One neon sachet promises conversational Cantonese while you sleep. Another is a dream of the inside of a river, slips down like sardines in oil, pulls my body long and sleek to chatter about currents to any otter that would listen. My favorite dream is always out of stock: effortless Parisian verlan. In that one I’m nibbling tiny cakes. I’m making small talk about eye creams in a French pharmacy. I’m pressing my hand to the buzzer of a top-floor flat in which there is a fantastic party that’s expecting me. Zero-sugar dreams never last long. There’s one pale pink dream I avoid: it fizzes like Pepto-Bismol flavored Pixy Stix. It’s processed in a factory that also handles hope, shame, and other allergens. That dream is like accidentally stepping on a cat, sudden and awful, heart-wrenching for everyone. In it, my father says I’m sorry I never call, I never know what to say, and I finally have the words to reply don’t worry and I know and hey, we’re good. We’re good now. We’re all good. Cynthia Miller

Cynthia Miller is a Malaysian-American poet, poetry festival producer, and innovation consultant. Her first collection, Honorifics, was published by Nine Arches Press in June 2021.

Paula Bohince, the author of three poetry collections, has published in the New Yorker, the New York Review of Books, the TLS, the Poetry Review, Poetry, and elsewhere. She was recently the John Montague International Poetry Fellow at University College Cork.

SEARCH FIELD

In underpants and undershirt, pink lambs printed on the weave, with me stirring oatmeal at the sink is what I dream. What it means, the website reads, is symbiosis: intimacy that never leads to sex. I’m safest in quiet one-way meetings, sitting like a spider at the center of a web, watching it tremble. To rest in the papasan chair and have the world widen to dream is to be blurred as a baby watching her mother clink in the kitchen, background whispers of belonging soothing a system. Unreal, it seems now, the flock of sheep in the train window, first glimpse of Pennsylvania after breaking down in New York. Five field-seconds perfected by fog, duffel snuggled against me like a child, the wash of creamy daubs against green, then gone. I’ve not moved for hours. Each opened tab lowers the temp. I’ve traveled on this rail of fiber optics to quell the panic. Click, click, the fields go faster, accumulation of wool in the background, no shutdown, no forward. Paula Bohince


CIRCUITRY

By the end of April I was trying my best not to spill any more electricity over my cortex. Pacing the old Roman road stockpiling litter trapped inside synapses. Begging my brutal to go easy on me. The circle I want to be loved by looks like it’s hemorrhaging cortisol. Wetlands of blood sugar. Inside the fire what you get is the fire which is to say my left amygdala is too small. My mother’s survival was too small. If experiences shape the brain’s circuitry then I learned to fear the father before the arachnid. I’m hauling my official deficit up to the summit of the Troodos Mountains. I’ll fantasize about setting colonial summer houses alight using dendrites & neurons. I want so much gone I’m terrified of moving up. All around the therapist’s chair I’m setting my finery down. He keeps asking about intrusive thoughts. His pencil outlining three letters I’ve become obsessed by. I mentalize beheading each axon with an engraved fountain pen. Loading up wheelbarrows with thalamus glands & stale miso. Urchins in the belly for dinner. A mouth like a paper seahorse before dawn. I like where I live now I’m just not happy with the way I do it. I write quickly into my notebook: How many of these days belong to the body of our lives? Next door they’re making big plans to build a conservatory out of salt water. They say life needs to colonize land. My earliest memory is throwing clumps of gray matter into the Mediterranean then waiting for something solid to come back. At the restaurant behind a faded Brueghel painting my friends appear so beautiful in their simplicity. Laughing at futures full of morning glory. Dopamine migrating from their red polo necks, an archipelago of adrenaline withstanding the crush calling us over spark by gentle spark. Anthony Anaxagorou

MY SEXBOT HAL IS A MIND READER

The first thing I ask of Hal is to explain what it’s like underneath, after you peel away the crust, mantle, core. I’d always imagined a cathedral with Chagall windows and Nusrat Fateh Ali Khan leading the choir, but Hal says no. The inner landscape of my head is an armoire of many drawers, with versions of me running into one, then another, saying: I’m here, I’m not here, I’m here. Hal does Ashtanga and meditates. He’s cut like a temple hieroglyph. When I go out to the cliff, he doesn’t worry. He can discern a jumper from a horse, doesn’t pity me for just standing there with my hands out, waiting for some passerby to throw me a peanut. Hal understands it’s his turn to do the washing up, even though I’m the one eating cherries at the sink, knows how the changing seasons gut pieces out of me, how it is this guttedness that brings me to the airstrip of his body, the cushion of his silicone thighs, lighting me all the way home. I cling to him for his signature lily of the valley cologne, for how it feels in the aftermath of love— to be a creature of the sea—tiny, bioluminescent, gazing across this vast planetary cradle at all the descendants we won’t have. One day I know he’ll be gone, risen early like the Buddha out of a dream, taking his special knowledge into the world. There will be no talk of abandonment or what was left behind. He’ll be out there, scooping his butterfly net through the high grasses of the weightless forever, while I stay here, tying ropes around my wrists— desire in one hand, suffering in the other. Tishani Doshi


Anthony Anaxagorou is a British-born Cypriot poet, fiction writer, essayist, publisher, and poetry educator.

Tishani Doshi is a Welsh-Gujarati poet, novelist, and dancer. Her fourth book of poems, A God at the Door (Copper Canyon Press, Bloodaxe Books), has been shortlisted for the 2021 Forward Prize. She lives in Tamil Nadu, India.

Zeina Hashem Beck is a Lebanese poet. Her third full-length collection, O, will be published by Penguin Books in summer 2022.

THANK YOU, ANTIDEPRESSANTS

Reader, let me tell you how I keep it together: friendships & antidepressants. Long walks on the beach with H (he pretends these are workouts) & Nutella in bed with R (she sends boil the water you better have chocolate from her car) & endless voice notes with L (she calls them personal podcasts) & WhatsApp stickers-on-demand from F (it is time to MILF said her sticker with my face & red lips on my 40th birthday) & rants with H (another H) about weight & the lands that spit us out & talks under midnight bougainvillea with R (same R) about our mothers’ & children’s rage & daily morning phone calls with L (another L) about nausea & skin & food allergies she’s sure she’s got though no doctor can confirm (& I say if you’re sure you got it then you got it, you got it) & jokes across continents with H (another other H) about impossible geographies & arguments with M about whether we got married in 2005 or 2006 (I say our first married summer was a year before the war, & he says no it was the summer of the war, & we laugh at how we measure time with pain but not without tenderness) & conversations with G (better name this one: God) about my dislike for organized religion & more long voice notes with H (another other other H) about the opposite of grace. This happens daily, so thank you, friends, who are there when the sadness comes, or when my teeth fall apart (my teeth do that bi-yearly) friends who un-scared me of antidepressants, who reassured me I won’t become another Z or my grandmother. & yes, thank you, antidepressants, & you, reader, who stayed with me, & might be wondering why so many of my friends’ names’ first letters are the same, & the answer is when I said together (in the first line of this poem) I didn’t mean it against fragmentation. Zeina Hashem Beck



The back page

A piece of our mind

Over the decades, MIT Technology Review has repeatedly explored the topic of what we know—or don’t yet know—about our own brains.

OCTOBER/NOVEMBER 1976

From “Pharmacology and the Brain”: Since ancient times, drugs have been used to restore mental health or explore the mind. It was said that the Homeric physician Polydama presented Menelaos and Helen with “a drug against sorrow and anger, a drug to survive despair” on their way home to Troy. The number of mind-bending drugs available today is countless. Some have altered the course of medical practice; others have changed the fabric of our society. Many have greater specificity of action and fewer side effects than ever before. The development of such drugs has been paralleled by our increased knowledge of how drugs work on the molecular level to modify behavior. In this regard, one of the most fruitful research approaches has involved the study of how nerve cells communicate with other cells in the body, and how various drugs might alter this communication.

MAY/JUNE 1987

From “Designing Computers That Think the Way We Do”: Neuroscientists have come to realize that the architecture of the brain is central to its function. Individual neurons aren’t smart by themselves, but when they’re connected to each other they become quite intelligent. The problem is, nobody knows how they do it. It isn’t that neurons are fast: in sending their electrochemical messages to other neurons, they are 100,000 times slower than a typical computer switch. But what our brains lack in speed they make up in “wetware,” as it is sometimes called. The brain contains from 10 billion to a trillion neurons, each of which may be connected to anywhere from 1,000 to 100,000 others. If this vast net of interconnected neurons forms the grand collective conspiracy we call our minds, maybe a vast interconnected net of mechanical switches can make a machine that thinks.

JULY/AUGUST 2014

From “Cracking the Brain’s Codes”: One reason such questions about the brain’s schemes for encoding information have proved so difficult to crack is that the human brain is so immensely complex, encompassing 86 billion neurons linked by something on the order of a quadrillion synaptic connections … It is also worth noting that what neuroengineers try to do is a bit like eavesdropping—tapping into the brain’s own internal communications ... Some of that eavesdropping may mislead us. Every neural code we can crack will tell us something about how the brain operates, but not every code we crack is something the brain itself makes direct use of. Some of them may be ... accidental tics that, even if they prove useful for engineering and clinical applications, could be diversions on the road to a full understanding of the brain.

MIT Technology Review (ISSN 1099-274X), September/October 2021 issue, Reg. US Patent Office, is published bimonthly by MIT Technology Review, 1 Main St. Suite 13, Cambridge, MA 02142-1517. Entire contents ©2021. The editors seek diverse views, and authors’ opinions do not represent the official policies of their institutions or those of MIT. Periodicals postage paid at Boston, MA, and additional mailing offices. Postmaster: Send address changes to MIT Technology Review, Subscriber Services, MIT Technology Review, PO Box 1518, Lincolnshire, IL 60069, or via the internet at www.technologyreview.com/customerservice. Basic subscription rates: $80 per year within the United States; in all other countries, US$100. Publication Mail Agreement Number 40621028. Send undeliverable Canadian copies to PO Box 1051, Fort Erie, ON L2A 6C7. Printed in USA. Audited by the Alliance for Audited Media.

But wait, there’s more. Lots more. You’re already a subscriber. Activate your account and start enjoying:

• Unlimited web access
• Exclusive digital stories
• The Algorithm newsletter
• Access to 120+ years of publication archives

technologyreview.com/subonly

RANSOMWARE RESPONSE

Mount a strategic defense against sophisticated cybercriminals.

November 16 & 17, 2021

• Critical first steps after a breach happens
• Incorporating AI into your cyber-resiliency toolkit
• The pros and cons of cyber liability insurance
• Tools and technologies to dismantle the ransomware ecosystem

REGISTER TODAY

SUBSCRIBERS SAVE 10% WITH CODE PRINTSO21 AT CyberSecureMIT.com/Register