Secrets of the Brain
From the Editors of Scientific American

Cover Image: DrAfter123/Getty Images

Letters to the Editor
Scientific American
One New York Plaza, Suite 4500
New York, NY 10004-1562
or [email protected]

Copyright © 2019 Scientific American, a division of Springer Nature America, Inc. All rights reserved.

Published by Scientific American
www.scientificamerican.com

ISBN: 978-1-948933-14-8

Secrets of the Brain

From the Editors of Scientific American

Table of Contents

SECTION 1 Maintenance and Monitoring
1.1 The Seventh Sense by Jonathan Kipnis
1.2 Brain Drain by Maiken Nedergaard & Steven A. Goldman
1.3 Deep Sleep Gives Your Brain a Deep Clean by Simon Makin
1.4 Sleep Learning Gets Real by Ken A. Paller & Delphine Oudiette
1.5 Sleep Deprivation Halts Production of Brain Proteins by Emily Willingham

SECTION 2 Navigating Space and Time
2.1 Where Am I? Where Am I Going? by May-Britt Moser & Edvard I. Moser
2.2 Times of Our Lives by Karen Wright
2.3 The Tick Tock of the Biological Clock by Michael W. Young

SECTION 3 Intuition
3.1 Without a Thought by Christof Koch
3.2 The Powers and Perils of Intuition by David G. Myers
3.3 Can We Rely on Our Intuition? by Laura Kutsch

SECTION 4 Creating Reality
4.1 How Matter Becomes Mind by Max Bertolero & Danielle S. Bassett
4.2 Our Inner Universes by Anil K. Seth
4.3 Learning When No One Is Watching by R. Douglas Fields

SECTION 5 The Ultimate Question
5.1 Partly-Revived Pig Brains Raise Questions about When Life Ends by Simon Makin
5.2 Is Death Reversible? by Christof Koch
5.3 How Can We Tell If a Comatose Patient Is Conscious? by Anouk Bercht & Steven Laureys

SECTION 1 Maintenance and Monitoring

The Seventh Sense by Jonathan Kipnis For decades anatomy textbooks taught that the two most complicated systems in the body—the brain and the immune system —existed in almost complete isolation from each other. By all accounts, the brain focused on the business of operating the body, and the immune system focused on defending it. In healthy individuals, the twain never met. Only in certain cases of disease or trauma did cells from the immune system enter the brain, and when they did so, it was to attack. But in recent years a rush of new findings has revolutionized scientists’ understanding of the two systems. Mounting evidence indicates that the brain and the immune system interact routinely, both in sickness and in health. The immune system can help support an injured brain, for example. It also plays a role in helping the brain to cope with stress and aids such essential brain functions as learning and social behavior. What is more, the immune system might qualify as a kind of surveillance organ that detects microorganisms in and around the body and informs the brain about them, much as our eyes relay visual information and our ears transmit auditory signals. In other words, the brain and immune system do not just cross paths more often than previously thought— they are thoroughly entwined. Researchers are still in the early stages of studying this burgeoning new field of neuroimmunology. But already it is becoming clear that the brain’s response to immunological information and how that information controls and affects brain circuitry could be the key to understanding many neurological diseases—from autism to

Alzheimer’s—and developing new therapies for them. Efforts to treat such disorders have typically met with disappointing results because most drugs cannot easily penetrate the brain. The findings from neuroimmunology raise the tantalizing possibility that targeting the immune system might be a more effective tactic. Received Wisdom To understand the significance of these discoveries, it helps to know a bit about how the brain and immune system are structured and how they work. The brain is our supercomputer and master regulator. Working with the spinal cord and several cranial nerves, which together constitute the central nervous system (CNS), it controls all the body’s functions. Given the vast scope of the brain’s responsibility, it is perhaps no surprise that the organ is incredibly intricate. Its basic functional units are neurons, which occupy roughly half of the brain. The human brain contains an estimated 100 billion neurons interlinked by approximately 100 trillion connections called synapses. The neurons, along with various types of nonneuronal cells called glia, make up the brain’s parenchyma, the functional tissue responsible for processing information. Other key players include stromal cells, which physically support the parenchymal tissues, and endothelial cells, which compose the blood vessels that supply the brain and form the blood-brain barrier, which limits the passage of substances from other parts of the body into the brain. For its part, the immune system has two major components, innate immunity and adaptive immunity. Innate immunity is the more primitive element, having evolved about a billion years ago in the first cells to detect and dispatch enemy forces quickly but without much precision. It is the body’s first line of defense against pathogens, consisting of physical and chemical barriers to them, as well as cells that kill them. Innate immunity initiates the inflammatory response, in which white blood cells swarm the site of infection and churn out proteins that induce heat and swelling to confine and destroy pathogens. Adaptive immunity, which evolved after the innate component, consists mainly of cells called T lymphocytes and B lymphocytes, which can recognize a specific pathogen and mount a

correspondingly targeted attack against it. In a perfect world, all adaptive immune cells would take aim only at external pathogens and would not touch the body’s own proteins or cells. But in about 1 percent of the population, adaptive immunity loses control and attacks cells in the individual’s own tissues, causing autoimmune diseases such as multiple sclerosis, arthritis and certain forms of diabetes, among many others. Still, the system has an impressive success rate, targeting foreign invaders exclusively in some 99 percent of individuals. Researchers long thought that the immune system worked by simply distinguishing an organism’s own constituents from nonself ones. But eventually more complex theories began to emerge. In the 1990s Polly Matzinger of the National Institute of Allergy and Infectious Diseases proposed that the immune system recognizes not only foreign invaders but also damage to tissues. This notion gained support from the subsequent identification of molecules that are released by injured, infected or otherwise damaged tissues. These molecules attract the attention of the immune cells, triggering a cascade of events that lead to activation of the immune system, recruitment of immune cells to the site of injury, and elimination (or at least an attempt at elimination) of the alarm-causing invader or injury. In addition, experiments have found that suppression of adaptive immunity accelerates the development and growth of tumors and slows down the healing process in damaged tissues. Such findings show that the immune system—once considered to be laser-focused on protecting the body from foreign invaders—actually has a far greater purview: regulating the body’s tissues to help them to maintain equilibrium in the face of all manner of insults, whether from without or within. But until recently, scientists were quite sure that this purview did not extend to the brain. As early as the 1920s, researchers observed that although the healthy brain harbors immune cells native to the CNS called microglia, immune cells from elsewhere in the body (so-called peripheral immune cells) are not usually found there. The blood-brain barrier keeps them out. In the 1940s biologist Peter Medawar, who won a Nobel Prize for his research, showed that the

body is slower to reject foreign tissue grafted onto the brain than grafts placed elsewhere in the body. The brain was “immune privileged,” Medawar argued, impervious to the immune system. Peripheral immune cells do appear in the parenchyma and spinal cord of patients with brain infections or injuries, however. And mouse studies demonstrate that these cells cause the debilitating paralysis associated with the disease. Based on such findings, scientists suggested that the brain and immune system have nothing to do with one another except in cases of pathologies that allow immune cells to enter the CNS and wage war on neurons. (Exactly how the immune cells breach the blood-brain barrier in such instances is uncertain. But it may be that the barrier gets activated during brain diseases in ways that allow immune cells to cross over. In a seminal study published in 1992, Lawrence Steinman of Stanford University and his colleagues found that in mice with a condition similar to multiple sclerosis, peripheral immune cells make a protein called α4β1 integrin that allows them to penetrate the barrier. A drug that inhibits the interaction between the integrin and the endothelial cells, Tysabri, is one of the most potent treatments for multiple sclerosis patients.) The theory that the brain and immune system lead separate lives prevailed for decades, but it was not without skeptics. Some wondered why, if the immune system is the body’s main fighting force against pathogens, the brain would give up ready access to such a system of defense. Supporters of the theory responded that the blood-brain barrier prevents the entry of most pathogens into the brain, so the brain has no need to accommodate the immune system, especially if it could cause problems by being there—doing battle with neurons, for instance. The skeptics pointed out that several viruses, as well as some bacteria and parasites, can access the brain. And far from ignoring these transgressions, the immune system responds to them, rushing to the brain to manage the invading agent. Perhaps the scarcity of pathogens in the brain is not because the blood-brain barrier is so effective at filtering them out but because the immune system is so efficient at fighting them.

Indeed, studies have shown that immunosuppressed patients suffer complications that often affect the CNS. Rewriting the Textbooks Eventually such arguments and a growing appreciation of the immune system’s role in supporting damaged bodily tissues prompted researchers to reexamine its role in the CNS. When they took a closer look at the CNS in rats and mice with spinal cord injuries, they found it overrun with infiltrating immune cells. In experiments carried out in the late 1990s, Michal Schwartz of the Weizmann Institute of Science in Rehovot, Israel, showed that eliminating immune cells after injury to the CNS worsens neuron loss and brain function, whereas boosting the immune response improves neuron survival. More recently, studies led by Stanley Appel of Houston Methodist Hospital and Mathew Blurton-Jones of the University of California, Irvine, have found that amyotrophic lateral sclerosis and Alzheimer’s disease develop more severely and rapidly in mice engineered to lack adaptive immunity than in normal mice. Restoring adaptive immunity slows the progression of such diseases. These results indicate that immune cells help neurons rather than only hurting them, as was previously supposed. At first glance, the immune system’s intervention to protect the injured CNS does not make sense. When the CNS sustains trauma, the immune system mounts an inflammatory response, releasing toxic substances to eliminate pathogens and, in some cases, to remove damaged cells, which thereby restores equilibrium. The inflammatory response is a blunt instrument, however, taking out some of the good guys along with the bad. In other tissues, such collateral damage is tolerable because the tissues regenerate readily. But CNS tissue is limited in its ability to grow back, which means that damage from the immune response is typically permanent. Given the potential for immune activity to wreak havoc in the brain, the costs of intervention could often outweigh the benefits. But maybe the immune response observed after CNS injury is simply an extension of the immune response that aids brain function under normal conditions.

Recent studies support this notion. My collaboration with Hagit Cohen of Ben-Gurion University of the Negev in Israel and Schwartz revealed that mice that experience stressful stimuli, such as exposure to the smell of their natural predators, develop an immediate stress response—in this case, hiding in a maze rather than exploring it. In 90 percent of cases, the stress response disappears within hours or days. But for the other 10 percent, the response persists for days to weeks. Mice in the latter group can thus serve as an animal model for post-traumatic stress disorder (PTSD). Interestingly, when mice lacking adaptive immunity are compared with mice that have a normal immune system, the incidence of PTSD is increased severalfold. These results provided the first indication that the immune system supports the brain not only during infections and injuries but also during psychological stress. Moreover, some evidence links the immune system to PTSD in humans. Though not as nerve-racking as exposure to a predator, tasks that require learning are also stressful. Think of preparing for an exam or even cooking a new recipe. Could an inability to deal with stress hinder the learning process itself? To test this hypothesis, my colleagues and I compared the performance of mice lacking adaptive immunity with that of a control group in various behavioral tests. We found that mice without adaptive immunity, unlike the controls, performed poorly in tasks requiring spatial learning and memory, such as figuring out the location of a platform hidden in a large pool of water. We have since shown that the mice lacking adaptive immunity exhibit not only impaired spatial learning behavior but also compromised social behavior, preferring to spend their time with an inanimate object rather than another mouse. As evidence that the immune system plays important roles in different brain functions has accumulated, new unknowns have emerged. How the immune system exerts its influence in the CNS is one. After all, apart from microglia, no immune cells are present within the parenchyma of healthy individuals. Clues have come from proteins called cytokines, which are made by immune cells and influence the behavior of other cells. Cytokines released by peripheral

immune cells can affect the brain. They presumably gain entrance through brain areas that lack the regular blood-brain barrier and could directly impact the brain through the vagus nerve, which runs from the brain to the abdomen. The available evidence suggests that the immune cells within the meninges—the membranes that surround the brain—are also the source of the cytokines that may affect brain function. How these immune cells enter the meninges, how they circulate there and how they produce their cytokines are currently subjects of intensive research. Recently my colleagues and I made an intriguing discovery that bears on these questions: It has to do with how the body gets rid of toxins and waste. The tissues in the body contain two types of vessels. Just as a house has two types of pipes that serve it, one for water and the other for sewage, our tissues have the blood vessels that carry oxygen and nutrients to them and the lymphatic vessels that remove toxins and other waste materials that the tissues produce. The lymphatic vessels also ferry antigens—substances capable of inducing an immune response—from the tissues into tissue-draining lymph nodes, where they are presented to immune cells to be inspected for information on the draining tissue. On detecting a problem, such as injury or infection in the tissue, the immune cells activate and migrate to the affected tissue to try to resolve the problem. Because of the enduring belief that the healthy brain is disconnected from the immune system and because the parenchyma does not contain lymphatic vessels, scientists long assumed that neither the brain nor the rest of the CNS is serviced by the lymphatic network. Yet this assumption presented a conundrum: Why would the brain not report to the immune system about potential problems that might be affecting it and that the immune system might help solve? And how does the immune system nonetheless receive information on brain infections? Furthermore, studies have found that brain injuries provoke a strong immune response in lymph nodes located outside the brain. How is that possible?

Fascinated by the immune activity in the meninges and its effects on brain function, my colleagues and I decided to take a closer look at those membranes. In doing so, we made a serendipitous discovery: it turns out they house lymphatic vessels. Several other research groups have since made similar findings in fish, mice, rats, nonhuman primates and humans. The results confirm earlier proposals for a link between the brain and lymph system that were made some 200 years ago but largely dismissed. These vessels represent a bona fide lymphatic network that drains the CNS, a missing link that can relay information about brain infections and injuries to the immune system. The presence of both lymph vessels and immune cells in the meninges means researchers need to rethink the exact function of these membranes. The traditional explanation holds that they simply carry the cerebrospinal fluid, which buoys the brain. But considering how densely packed the brain’s constituent cells are and how sensitive its neurons are when they fire their electrical signals, perhaps moving all of the brain’s immune activity to its meningeal borders was evolution’s solution to the problem of allowing the immune system to serve the entire CNS without interfering with neuron function. The Brain-Immune Connection The healthy brain was long thought to be off-limits to the immune system. Although the brain harbors native immune cells known as microglia, immune cells that originate elsewhere in the body are not normally found there. The so-called blood-brain barrier (inset) keeps these peripheral immune cells from entering. But recent findings have shown that the immune system is nonetheless highly active in the healthy brain and essential to its functioning.

Illustration by David Cheney

The discovery of the brain’s lymphatic vessels revealed how the immune system receives information about tissue damage in the CNS. For insights into how the meningeal immune cells actually communicate with the parenchyma and affect it from afar, however, we have to turn to another branch of the brain’s waste-removal system. In addition to the lymphatic network that we discovered, the CNS also has a network of channels in the parenchyma through which the cerebrospinal fluid gets access to the brain. Maiken Nedergaard of the University of Rochester has dubbed this network the glymphatic system. The fluid enters the parenchyma through spaces surrounding the arteries that pipe into the brain from the meninges and washes through the tissues until it is recollected in the spaces surrounding the veins and then returned to the pool of cerebrospinal fluid in the meninges. This flow of fluid presumably carries immune molecules such as cytokines from the meninges into the parenchyma, where they can exert their influence.

Studies of cytokines have illuminated how they modulate behavior. For example, Robert Dantzer, now at the University of Texas MD Anderson Cancer Center, and Keith Kelley of the University of Illinois at Urbana-Champaign have determined that interleukin-1 beta initiates sickness behavior, the name given to the constellation of behaviors people typically exhibit when ill, such as sleeping excessively, eating less and withdrawing from social contact. And my own team has recently shown that interferon gamma, a cytokine produced by meningeal T cells, interacts with neurons in the brain’s prefrontal cortex, which, among its other functions, is involved in social behavior. Surprisingly, this cytokine does not exert its influence via the brain’s resident immune cells (the microglia) but rather via those neurons that control the circuits associated with social behavior. In fact, the cytokines are essential for proper functioning of these circuits: in the absence of T cells or their interferon gamma, these neurons fail to regulate the circuits correctly, and circuit hyperactivity ensues—a disturbance linked to social deficits. Thus, a cytokine produced by immune cells in the meninges can change the activity of neurons, thereby altering the function of the circuit and changing the underlying behavior. Interferon gamma is not the only immune molecule that affects brain function. Mario de Bono of the MRC Laboratory of Molecular Biology in England and his colleagues have shown that another cytokine, IL-17, activates sensory neurons in the roundworm Caenorhabditis elegans and changes the creature’s oxygen-sensing behavior. And recent work in mice by Gloria Choi of the Massachusetts Institute of Technology and her collaborators has demonstrated that IL-17 can interact with neurons in the brain’s cortex and alter behaviors related to autism spectrum disorder.

Another Sense Organ?

One might wonder why an organ as powerful as the brain needs to be controlled or supported by the immune system to function properly. I have developed a hypothesis for why the two systems are so closely linked. We have five established senses—smell, touch, taste, sight and hearing. The sense of position and movement, or

proprioception, is often referred to as the sixth sense. These senses report to the brain about our external and internal environments, providing a basis on which the brain can compute the activity needed for self-preservation. Microorganisms abound in these environments, and the ability to sense them—and defend against them when needed—is central to survival. Our immune system excels at exactly that, with innate immunity’s ability to generally recognize patterns and types of invaders and adaptive immunity’s talent for recognizing specific invaders. I propose that the defining role of the immune system is to detect microorganisms and inform the brain about them. If, as I suspect, the immune response is hardwired into the brain, that would make it a seventh sense. There are ways to test this hypothesis. Because the brain’s circuits are all interconnected, interference with one circuit tends to affect others as well. For instance, food tastes different when our sense of smell is impaired. Evidence that interference with immune input disturbs other circuits would support the idea that the immune response is a hardwired seventh sense. One possible example comes from sickness behavior. Perhaps an overwhelming input of signals from the seventh sense informing the brain of pathogenic infection spills over and disrupts the circuits that modulate sleepiness, hunger, and so on during illness, leading to this characteristic set of behavioral changes that develop in affected individuals. Alternatively, the microorganism information relayed to the brain by the immune sensory system may prompt the brain to initiate sickness behavior as a means of protecting the sick individual by minimizing exposure to other pathogens and conserving energy. Our knowledge of the relation between the brain and the immune system is still in its infancy. We should not be surprised if new discoveries in this field over the next 10 or 20 years reveal the two systems in a completely different light. I hope, though, that the fundamental understanding we possess today will be enriched by the results of such research rather than overturned altogether. One research priority will be mapping how the immune components and neural circuits connect, interact and interdepend in health and disease. Knowing those relations will allow investigators to target

immune signaling in their treatment of neurological and mental disorders. The immune system is an easier drug target than the CNS, and it is plausible that one day repair of the immune system through gene therapy or even the replacement of a flawed immune system via bone marrow transplantation will be a viable means of treating brain disorders. Given the myriad immune alterations in brain disorders, research on neuroimmune interactions will probably continue for decades to come and gradually reveal to us even deeper mysteries of the brain. --Originally published: Scientific American 319(2); 28-35 (August 2018).

Brain Drain by Maiken Nedergaard & Steven A. Goldman The human brain weighs only about 3 pounds, or roughly 2 percent of the average adult body mass. Yet its cells consume 20 to 25 percent of the body’s total energy. In the process, inordinate amounts of potentially toxic protein wastes and biological debris are generated. Each day, the adult brain eliminates a quarter of an ounce of worn-out proteins that must be replaced with newly made ones, a figure that translates into the replacement of half a pound of detritus a month and three pounds, the brain’s own weight, over the course of a year. To survive, the brain must have some way of flushing out debris. It is inconceivable that an organ so finely tuned to producing thoughts and actions would lack an efficient waste disposal system. But until quite recently, the brain’s plumbing system remained mysterious in several ways. Questions persisted as to what extent brain cells processed their own wastes or whether they might be transported out of the nervous system for disposal. And why is it that evolution did not seem to have made brains adept at delivering wastes to other organs in the body that are more specialized for removing debris? The liver, after all, is a powerhouse for disposing of or recycling waste products. In 2011 we began trying to clarify how the brain eliminates proteins and other wastes. We also began to explore how interference with that process might cause the cognitive problems encountered in neurodegenerative disease. We thought that disturbances in waste clearance could contribute to such disorders because the disruption

would be expected to lead to the accumulation of protein debris in and around cells. This idea intrigued us because it was already known that such protein clumps, or aggregates, do indeed form in brain cells, most often in association with neurodegenerative disorders. What is more, it was known that the aggregates could impede the transmission of electrical and chemical signals in the brain and cause irreparable harm. In fact, the pathology of Alzheimer’s, Parkinson’s and other neurodegenerative diseases of aging can be reproduced in animal models by the forced overproduction of these protein aggregates. In our research, we found an undiscovered system for clearing proteins and other wastes from the brain—and learned that this system is most active during sleep. The need to remove potentially toxic wastes from the brain may, in fact, help explain the mystery of why we sleep and hence retreat from wakefulness for a third of our lives. We fully expect that an understanding of what happens when this system malfunctions will lead us to both new diagnostic techniques and treatments for a host of neurological illnesses. The Glymphatic System In most regions of the body, a network of intricate fluid-carrying vessels, known as the lymphatic system, eliminates protein waste from tissues. Waste-carrying fluid moves throughout this network between cells. The fluid collects into small ducts that then lead to larger ones and eventually into blood vessels. This duct structure also provides a path for immune defense, because lymph nodes, a repository of infection-fighting white blood cells, populate ducts at key points throughout the network. Yet for a century neuroscientists had believed that the lymphatic system did not exist in the brain or spinal cord. The prevailing view held that the brain eliminated wastes on its own. Our research suggests that this is not the complete story. The brain’s blood vessels are surrounded by what are called perivascular spaces. They are doughnut-shaped tunnels that surround every vessel. The inner wall of each space is made of the surface of vascular cells, mostly endothelial cells and smooth muscle

cells. But the outer wall is unique to the brain and spinal cord and consists of extensions branching out from a specialized cell type called the astrocyte. Astrocytes are support cells that perform a multitude of functions for the interconnected network of neurons that relay signals throughout the brain. The astrocytes’ extensions—astrocytic end feet —completely surround the arteries, capillaries and veins in the brain and spinal cord. The hollow, tubelike cavity that forms between the feet and the vessels remains largely free of obstructions, creating a spillway that allows for the rapid transport of fluid through the brain. Scientists knew about the existence of the perivascular space but until very recently had not identified any specific function for it. Thirty years ago Patricia Grady, then at the University of Maryland, described perivascular fluid flows, but the significance of this finding was not recognized until much later. She reported that large proteins injected into the cerebrospinal fluid (CSF) could later be found in the perivascular spaces of both dogs and cats. At the time, other groups could not replicate her findings, and not knowing the meaning of what such an observation might be, research did not proceed any further. When we began our investigations into the waste-disposal system of the brain just a few years ago, we focused on prior discoveries that water channels built from a protein called aquaporin-4 were embedded in the astrocytic end feet. In fact, the density of the water channels was comparable to that of those in the kidney, an organ whose primary job is to transport water. We were immediately interested in the multiplicity of the astrocytic water channels and their positions facing the blood vessel walls. Our interest only grew when we looked more closely because we found that the vascular endothelial cells bordering the perivascular space lacked these channels. Thus, fluid could not be moving directly from the bloodstream into brain tissue. Rather the liquid had to be flowing between the perivascular space and into the astrocytes, thereby gaining access to the brain tissue.

We asked whether the perivascular space might constitute a neural lymphatic system. Could it perhaps provide a conduit for cerebrospinal fluid? Arterial pulsations might drive the CSF through the perivascular space. From there, some of it could enter astrocytes through their end feet. It could then move into the area between cells and finally to the perivascular space around veins to clear waste products from the brain. Along with our laboratory members Jeff Iliff and Rashid Deane, we went on to confirm this hypothesis. Using chemical dyes that stained the fluid, combined with microscopic techniques that allowed us to image deep inside live brain tissue, we could directly observe that the pumping of blood propelled large quantities of CSF into the perivascular space surrounding arteries. Using astrocytes as conduits, the CSF then moved through the brain tissue, where it left the astrocytes and picked up discarded proteins. The fluids exited the brain through the perivascular space that surrounded small veins draining the brain, and these veins in turn merged into larger ones that continued into the neck. The waste liquids went on to enter the lymph system, from which they flowed back into the general blood circulation. They combined there with protein waste products from other organs that were ultimately destined for filtering by the kidneys or processing by the liver. When we began our research, we had no idea that astrocytes played such a critical role in the brain’s counterpart of a lymphatic system. Additional proof came when we used genetically engineered mice that lacked the aquaporin-4 protein that makes up the astrocytes’ water channels. The rate of CSF flow entering the astrocytes dropped by 60 percent, greatly slowing fluid transport through their brain. We had now traced a complete pathway within the brain for these cleansing fluids to effectively sweep away waste products. We named our discovery the glymphatic system. The newly coined word combined the words “glia”—a type of brain cell of which the astrocyte is one example—and “lymphatic,” thus referencing this newly discovered function of the brain’s glial cells.

As we came to recognize the important role of the glymphatic system, we immediately wondered whether proteins that build up in the brain in neurodegenerative diseases might, in the healthy brain, be typically washed out along with other, more mundane cellular waste. In particular, we focused on a protein linked to Alzheimer’s called beta-amyloid, which had previously been thought to be cleared under normal circumstances by degradation or recycling processes that take place within all brain cells. In Alzheimer’s, aggregates of beta-amyloid form amyloid plaques between cells that may contribute to the disease process. We found that in a healthy brain, beta-amyloid is cleared by the glymphatic system. Other proteins implicated in neurodegenerative diseases, such as the synuclein proteins that turn up in Parkinson’s, Lewy body disease and multisystem atrophy, might also be carried away and could build up abnormally if the glymphatic system were to malfunction. A symptom that accompanies Alzheimer’s and other neurodegenerative diseases provided a hint of how to proceed. Many patients with Alzheimer’s experience sleep disturbances long before their dementia becomes apparent. In older individuals, sleep becomes more fragmented and shallow and lasts a shorter time. Epidemiological studies have shown that patients who reported poor sleep in middle age were at greater risk for cognitive decline than control subjects when tested 25 years later. Even healthy individuals who are forced to stay awake exhibit symptoms more typical of neurological disease and mental illness— poor concentration, memory lapses, fatigue, irritability, and emotional ups and downs. Profound sleep deprivation may produce confusion and hallucinations, potentially leading to epileptic seizures and even death. Indeed, lab animals may die when deprived of sleep for as little as several days, and humans are no more resilient. In humans, fatal familial insomnia is an inherited disease that causes patients to sleep progressively less until they die, usually within 18 months of diagnosis. Knowing all this, we speculated that the sleep difficulties of dementia might not just be a side effect of the disorder but might

contribute to the disease process itself. Moreover, if the glymphatic system cleared beta-amyloid during sleep at a higher rate than when awake, perhaps the poor sleeping patterns of patients with neurodegenerative disorders might contribute to a worsening of the disease. Because our initial experiments had been performed in anesthetized mice, we further speculated that the fast fluid flows that we noted were not necessarily what we might anticipate in an awake and active brain, which would be subject to other demands in its typical functioning. To test the idea, Lulu Xie and Hongyi Kang, both in the Nedergaard Laboratory, trained mice to sit still underneath a microscope to capture images of a tracer chemical in the CSF using a novel imaging technique called two-photon microscopy. We compared how the tracer moved through the glymphatic system in awake versus sleeping mice. Because imaging is neither invasive nor painful, the mice remain quiet and compliant, so much so that animals often fall asleep while being imaged. We were thus able to image inflows of CSF in a particular area of the same mouse brain during both sleep and wakefulness. CSF in the glymphatic system, it turned out, fell dramatically while the study mice were awake. Within minutes after the onset of sleep or the effects of anesthesia, however, influxes of the fluid increased significantly. In a collaboration with Charles Nicholson of New York University, we found that the brain’s interstitial space—the area between cells through which glymphatic fluid flows on its way to perivascular spaces around veins—rose by more than 60 percent when mice fell asleep. We now believe that the flow of glymphatic fluid increases during sleep because the space between the cells expands, which helps to push fluid through the brain tissue. Our research also revealed how the rate of fluid flow is controlled. A neurotransmitter, or signaling molecule, called norepinephrine appeared to regulate the volume of the interstitial area and consequently the pace of glymphatic flow. Levels of norepinephrine rose when mice were awake and were scarce during sleep, implying

that transient, sleep-related dips in norepinephrine availability led to enhanced glymphatic flow.

The Power of Sleep

Having demonstrated that the expansion and contraction of the interstitial space during sleep were important to both brain function and protein-waste clearance, we then wanted to test a corollary to this observation: Could sleep deprivation precipitate neurodegenerative disease? Experiments that we conducted in mice showed that during sleep, the glymphatic system did indeed remove beta-amyloid from the brain with remarkable efficiency: its clearance rate more than doubled with sleep. On the other hand, mice genetically engineered so that they lacked aquaporin-4 water channels in astrocytes demonstrated markedly impaired glymphatic function, clearing 40 percent less beta-amyloid than control animals. The remarkably high percentage of beta-amyloid removed challenged the widely held idea that brain cells break down all their own wastes internally (through degradation processes called ubiquitination and autophagy); now we know that the brain removes a good deal of unwanted proteins whole, sweeping them out for later degradation. These new findings, moreover, seemed to confirm that the sleeping brain exports protein waste, including beta-amyloid, through the glymphatic transport system. Additional support for this thesis came from David M. Holtzman’s group at Washington University in St. Louis, which demonstrated that beta-amyloid concentration in the interstitial space is higher during wakefulness than in sleep and that sleep deprivation aggravates amyloid-plaque formation in mice genetically engineered to accumulate it in excess. So far these investigations have not moved beyond basic research labs. Drug companies have yet to consider antidementia therapies that would physically remove amyloid and other toxic proteins by washing out the brain with glymphatic fluids. But maybe they should. New strategies are desperately needed for a disease that costs the U.S. health care system $226 billion annually. A number of clinical trials for Alzheimer’s are under way, although no drug in

development has yet demonstrated a clear-cut benefit. Stimulating glymphatic flows offers a new approach that is worth investigating. A pharmaceutical that regulates the glymphatic system by increasing the rate of CSF flow during sleep could literally flush amyloid out of the brain. A treatment used for a well-known neurological syndrome provides a clue that this approach might work. Normal-pressure hydrocephalus, an illness typically seen in the elderly, is a form of dementia in which excessive CSF accumulates in the hollow central brain cavities, the cerebral ventricles. When a procedure called lumbar puncture removes the fluid by draining it out, patients often exhibit remarkable improvements in their cognitive abilities. The basis for this observation has long been a mystery. Our research suggests that restoring fluid flows through the glymphatic system might mediate the restoration of cognition in these patients. Even if a new drug is not imminent, knowledge of the glymphatic system suggests fresh ideas for diagnosing Alzheimer’s and other neurological conditions. A recent study by Helene Benveniste of the Stony Brook School of Medicine has shown that standard magnetic resonance imaging can visualize and quantify the activity of the glymphatic system. The technology may allow tests of glymphatic flow designed to predict disease progression in patients suffering from Alzheimer’s or related dementias or normal-pressure hydrocephalus. It might even foretell the ability of patients with traumatic brain injuries to recover. Most of our studies of the glymphatic system to date have focused on the removal of protein wastes. Yet the glymphatic system may also prove to be a fertile area for gaining a basic understanding of how the brain works. Intriguingly, fluids moving through the glymphatic system may do more than remove wastes; they may deliver various nutrients and other cargo to brain tissue. A new study showed that glymphatic channels deliver glucose to neurons to provide energy. Further studies are now investigating whether white matter, the insulationlike sheathing around neurons’ wirelike extensions, called axons, may rely on the glymphatic system for delivery of both nutrients and

materials needed for maintaining the cells’ structural integrity. Such studies promise to elucidate the many unexpected roles of the glymphatic system in the daily life—and nightlife—of the brain. --Originally published: Scientific American 314(3); 44-49 (March 2016).

Deep Sleep Gives Your Brain a Deep Clean by Simon Makin

Why sleep has restorative—or damaging—effects on cognition and brain health has been an enduring mystery in biology. Researchers think cerebrospinal fluid (CSF) may flush toxic waste out, “cleaning” the brain, and studies have shown that garbage clearance is hugely improved during sleep. They were not sure exactly how all this works, however, or why it should be so enhanced during sleep. One aspect of sleep that is well understood is how the slow electrical oscillations (or “slow waves”) that characterize deep, non-REM sleep contribute to memory consolidation, the process whereby new memories are transferred into long-term storage. A new study, from a team led by neuroscientist Laura Lewis of Boston University, now gives insight into what drives CSF flow through the brain, suggesting that the same slow waves that coordinate memory consolidation drive oscillations in blood flow and CSF in the brain. The work has implications for understanding the relations between sleep disturbance and psychiatric and neurodegenerative conditions, and may even point to new approaches to diagnosis and treatment. “We’ve discovered there are really large waves of CSF that appear in the brain only during sleep,” Lewis says. “This effect is really striking, and we’re also interested in what it means for maintaining brain health, especially in disorders such as Alzheimer’s disease.” In the study, published on October 31, 2019 in Science, the team set out to investigate how the dynamics of CSF flow change during sleep, and how this might relate to alterations in brain blood flow and

electrical activity. “We know sleep is really important for brain health, and waste clearance is probably a key reason why; what was less clear is: Why is this changed during sleep?” Lewis says. “That led us to ask what was happening in the CSF.” The researchers used electroencephalography (EEG) to monitor the brain waves of 13 sleeping healthy adults, while also using a cutting-edge, “accelerated” fMRI technique to capture faster changes than standard fMRI can manage. That allowed for the measurement of both blood-oxygenation changes (which indicate blood flowing to electrically active, oxygen-hungry regions) and CSF flows. The latter was only possible due to a flaw in this method that means any newly arriving fluid (not just oxygenated blood) lights up in the image. “We realized we could take advantage of this to measure CSF flow at the same time as blood oxygenation,” Lewis says. “That was critical, because it turns out these things are coupled to each other in a way we never would have seen if we didn’t measure blood, CSF and electrical activity simultaneously.” What the team found was that the slow waves seen in non-REM sleep occur in lockstep with changes in both blood flow and CSF. Just because things occur together doesn’t necessarily mean one causes the other, but the team also built a computer model incorporating what we know about the physics linking these processes, which predicted that slow waves would have just these kinds of effects on blood and CSF. What seems to be happening is that as brain activity alters blood flow, this reduces the volume of blood in the brain, and because the brain is a closed vessel, CSF flows in to fill the space. “It’s very convincing,” says neurologist Maiken Nedergaard of the University of Rochester, who was not involved with the research. “It also really makes sense: electrical activity drives blood flow changes, that then drive CSF changes.” The team measured this CSF inflow going into the fourth ventricle, one of four fluid-filled cavities involved in producing CSF (by filtering blood plasma) and circulating it around the brain. As CSF usually flows out of the fourth ventricle, this suggests a “pulsatile” flow, like a wave. This pushes CSF around the ventricles and into spaces

between membranes surrounding the brain and spinal cord, called the meninges, where it mixes with “interstitial fluid” within the brain to carry away toxic waste products. As slow waves are important for memory consolidation, this links two disparate functions of sleep. “What’s exciting about this is it’s combining features of brain function that people don’t normally think of as connected,” Nedergaard says. It isn’t obvious things had to be this way, Lewis says, but it may represent an example of nature being efficient. “It’s a matter of nature not dividing tasks between higher level and lower level, like how you run a company, where you have a boss making decisions and cleaning people coming in,” Nedergaard says. “In biology, it’s everybody contributing, as it makes more sense.” The findings have implications for neurodegenerative diseases, which are thought to be caused by build-up of toxic proteins in the brain, such as amyloid-beta in Alzheimer’s disease. Previous research has shown that amyloid-beta is cleared more efficiently during sleep, which is often disrupted in patients. Disturbances in slow-wave sleep also often accompany aging, which may be linked to cognitive decline. “We know that people with Alzheimer’s have fewer slow waves, so we may find they also have fewer CSF waves,” Lewis says. “We have to do these studies now in older adults and patient populations, to understand what this might mean for those disorders.” Sleep disturbance is also a feature of many psychiatric disorders, from depression to schizophrenia. “Different electrical signatures of sleep are disrupted in different psychiatric conditions,” she says. “So this will be very interesting to follow up on in a multitude of disorders.” The team next hopes to nail down whether electrical oscillations truly do cause the changes they observed in CSF flow, by experimentally manipulating brain activity. “It would be great to find the right collaborator and do a study in mice where we manipulate neural activity, then watch the downstream consequences,” Lewis says. “We’re also thinking about ways to safely and noninvasively manipulate neural oscillations in humans.” It may ultimately be

possible to use electromagnetic stimulation to influence brain waves as a treatment for brain disorders. Researchers have already seen encouraging results of this approach in mice, and these findings may help explain why. Another potential application may come from assessing whether changes in CSF flows can serve as a diagnostic marker for some of these conditions. “It gives us a ton of interesting new biology to explore and understand, since it seems like things the brain is doing during sleep are related to each other in surprising ways,” Lewis says. “Maybe the most important take-home message is that sleep is a serious thing,” Nedergaard says. “You really need to sleep to keep a healthy brain because it links electrical activity to a practical housekeeping function.” --Originally published: Scientific American online November 1, 2019.
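As an illustrative aside: the volume-conservation idea described in this article (slow waves briefly lower cerebral blood volume, and because the skull is effectively a closed container, CSF flows in to take up the slack) can be caricatured in a few lines of Python. This is only a toy sketch, not the model Lewis's team actually built; the oscillation rate, time constant and amplitudes are invented for illustration.

import numpy as np

dt = 0.01                              # time step, seconds
t = np.arange(0, 20, dt)               # 20 seconds of simulated non-REM sleep

neural = np.sin(2 * np.pi * 0.8 * t)   # a 0.8-Hz slow oscillation (assumed rate)

# Cerebral blood volume follows neural activity with a sluggish, low-pass response.
tau = 2.0                              # assumed hemodynamic time constant, seconds
blood = np.zeros_like(t)
for i in range(1, t.size):
    blood[i] = blood[i - 1] + dt * (neural[i] - blood[i - 1]) / tau

# With total intracranial volume fixed, CSF must flow in whenever blood volume falls.
csf_inflow = np.clip(-np.gradient(blood, dt), 0.0, None)

print(f"peak simulated CSF inflow: {csf_inflow.max():.2f} (arbitrary units)")

Running this produces pulses of simulated inflow that line up with the falling phase of the blood-volume trace, which is the qualitative pattern the study reports during slow waves.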

Sleep Learning Gets Real by Ken A. Paller and Delphine Oudiette

In Aldous Huxley's Brave New World, a boy memorizes each word of a lecture in English, a language he does not speak. The learning happens as the boy sleeps within earshot of a radio broadcast of the lecture. On awakening, he is able to recite the entire lecture. Based on this discovery, the totalitarian authorities of Huxley’s dystopian world adapt the method to shape the unconscious minds of all their citizens. Sleep learning turns up throughout literature, pop culture and ancient lore. Take Dexter, the lead character in the animated television series Dexter’s Laboratory. In one episode, Dexter squanders his time for homework, so instead he invents a contraption for learning to speak French overnight. He wakes up the next day unable to speak anything but French. The idea of sleep learning isn’t just a modern invention. It also appears within a centuries-old mind-training practice of Tibetan Buddhists; a message whispered during sleep was intended to help a monk recognize the events in his dreams as illusory. Everyone knows we learn better when we are well rested. Most people, however, dismiss the notion of sleep learning out of hand. Yet a set of new neuroscientific findings complicates this picture by showing that a critical part of learning occurs during sleep: recently formed memories resurface during the night, and this playback can help reinforce them, allowing at least a few to be remembered for a lifetime.

Some studies have even explored whether sleep might be manipulated to enhance learning. They reveal that sleep’s program for making daytime memories stronger can be boosted using sounds and odors. Results in rodents have even demonstrated a primitive form of memory implantation: using electrical stimulation while animals slept, researchers taught them where they should go in their enclosures on awakening. Huxley’s imagined version of sleep education, in which entire texts are absorbed verbatim during the night, is still relegated to the pages of his 1932 classic. But experiments now indicate that it is possible to tinker with memories while a person is immersed in the depths of slumber, creating the basis for a new science of sleep learning. The Psychophone For these techniques to work, scientists have to explore how information can be absorbed when consciousness is seemingly on a well-deserved break. Around the time that Huxley was writing Brave New World, serious explorations into the possibility of meddling with sleep had begun. In 1927 New Yorker Alois B. Saliger invented an “Automatic Time-Controlled Suggestion Machine,” which he marketed as the “PsychoPhone,” to allow a recorded message to be replayed during the night. The setup seemed to evoke Huxley’s imagined technology except that the user, rather than the state, could select the message to be played. Saliger’s invention was followed, in the 1930s and 1940s, by studies documenting ostensible examples of sleep learning. A 1942 paper by Lawrence LeShan, then at the College of William & Mary, detailed an experiment in which the researcher visited a summer camp where many of the boys had the habit of biting their fingernails. In a room where 20 such boys slept, LeShan used a portable phonograph to play a voice repeating the sentence “My fingernails taste terribly bitter.” The string of words recurred 300 times each night, beginning 150 minutes after the onset of sleep. The experiment continued for 54 consecutive nights. During the last two weeks of camp, the phonograph broke, so the intrepid LeShan delivered the sentence himself. Eight of the 20 boys stopped biting

their nails, whereas none of 20 others who slept without exposure to the recording did so. These early efforts did not use physiological monitoring to verify that the boys were really asleep, though, so the results remain suspect. The whole field took a severe hit in 1956, when two scientists at RAND Corporation used electroencephalography (EEG) to record brain activity while 96 questions and answers were read to sleeping study participants. (One example: “In what kind of store did Ulysses S. Grant work before the war?” Answer: “A hardware store.”) The next day correct answers were recalled only for information presented when sleepers showed signs of awakening. These results led to a shift in the field that persisted for 50 years, as researchers began to lose faith in sleep learning as a viable phenomenon: participants in these experiments appeared to learn only if they were not really sleeping when information was presented to them. Most scientists during this time tended to avoid the topic of sleep learning, although a few researchers did plug away at asking whether sleep assists in remembering new information. One typical study protocol probed whether overnight sleep deprivation affected recall the day after learning something new. Another asked whether remembering was better after a nap than after an equal period of time spent awake. Various confounding factors can interfere with such studies. For example, the stress of sleep deprivation can harm cognitive functions that then decrease memory recall. Eventually cognitive neuroscientists began to tackle these challenges by bringing together evidence from multiple research methods. A substantive foundation of evidence gradually accrued to confirm that sleep is a means of reviving memories acquired during the day, reopening the relation between sleep and memory as a legitimate area of scientific study. Many researchers who took up the challenge focused on rapid eye movement (REM) sleep, the period when dreams are the most frequent and vivid. The guiding assumption held that the brain’s nighttime processing of memories would be tied to dreaming, but

clear-cut data did not materialize. In 1983 two noted scientists—Graeme Mitchison and Francis Crick, neither psychologists—went so far as to speculate that REM sleep was for forgetting. In a similar vein, Giulio Tononi and Chiara Cirelli, both at the University of Wisconsin–Madison, proposed that sleep could be the time for weakening connections among brain cells, making it easier for new information to be acquired the following day. Instead of REM, some investigators focused their attention on slow-wave sleep (SWS), a period of deep slumber without rapid eye movements. In 2007 Björn Rasch, then at the University of Lübeck in Germany, and his colleagues prepared people for a sleep experiment by requiring them to learn the locations of a set of objects while simultaneously smelling the odor of a rose. Later, in their beds in the laboratory, sleeping study participants again encountered the same odor as electrical recordings confirmed one sleep stage or another. The odor activated the hippocampus, a brain area critical for learning to navigate one’s surroundings and for storing the new knowledge gained. On awakening, participants recalled locations more accurately—but only following cueing from odors that emanated during the course of slow-wave (not REM) sleep.

The Maestros of Slumber

Brain rhythms provide clues to how sleep helps to store memories for later retrieval. One type of neural signal, called a slow wave, cycling from 0.5 to four times a second, orchestrates the activity of neurons in the cerebral cortex. Each slow oscillation consists of a “down” phase, when neurons are silent, and an “up” phase, when they resume activity. This timing pattern helps to reinforce recently formed memories by ensuring that multiple cortical regions remain in an up state at the same time. The up phase can coincide with sleep spindles, brief increases of a rhythm of 12 to 15 cycles per second. Spindles originate in the thalamus, which serves as a crossroads for information that is transmitted to virtually all parts of the cerebral cortex. Spindles have a rhythm of their own, recurring at approximately five-second intervals. They coordinate the activity of sharp-wave ripples in the hippocampus. Ripples, for their part, are concurrent with the replay of memories. Slow waves, all the while, assume the role of orchestra conductor: their measured oscillations in the cortex coordinate the pacing for sleep spindles and sharp-wave ripples.

The intricate coupling of these oscillations underlies not only memory reactivation but also the altering of connections among neurons to strengthen memory storage. A dialogue between the hippocampus and the cortex involving all these brain rhythms triggers a set of complex network interactions. Through this process, known as consolidation, new information can become integrated with existing memories. The intertwining of memories, moreover, enables the gist of recent experiences to be extracted to make sense of a complex world. Memory difficulties can arise when this neural dialogue becomes impaired. Individuals with major damage centered in the hippocampus or parts of the thalamus may develop a profound amnesia. Without the expected interactions with these brain regions during both sleep and waking, the cortex cannot store mental records of facts and events known as declarative memories. In addition, a milder form of memory disorder may result when memory processing during sleep is seriously disrupted. As our understanding of the physiological orchestration of the sleeping brain continues to expand, new strategies may be used to enhance the brain’s natural rhythms with various forms of electrical or sensory stimulation. Humans have always had such inclinations, having taken advantage of a lullaby’s rhythm or rocking motions to lull a baby to sleep.

— K.A.P. and D.O.

Credit: Illustration by Mesa Schumacher

Targeted Memory Reactivation In 2009 our lab extended this methodology by using sounds instead of odors. We found that sounds played during SWS could improve recall for individual objects of our choosing (instead of the recall of an entire collection of objects, as was the case in the odor study). In our procedure—termed targeted memory reactivation, or TMR—we first taught people the locations of 50 objects. They might learn to place a cat at one designated spot on a computer screen and a teakettle at another. At the same time, they would hear a corresponding sound (a meow for the cat, a whistle for the kettle, and so on). After this learning phase, participants took a nap in a comfortable place in our lab. We monitored EEG recordings from electrodes placed on the head to verify that each individual was soundly asleep. These recordings provided intriguing data on the synchronized activity of networks of neurons in the brain’s outer layer, the cerebral cortex, that are relevant for memory reactivation. When we detected slow-wave sleep, we played the meow, whistle and other sounds associated with a subset of the objects from the learning phase. Sounds were presented softly, not much louder than background noise, so the sleeper did not awaken. On awakening, people remembered locations cued during sleep better than places that had not been flagged during the experiment. Whether sounds or odors served as cues in these experiments, they apparently triggered the reactivation of spatial memories and so reduced forgetting. At first, the auditory procedures we used were highly controversial. The received wisdom among sleep researchers held that sensory circuits in the cortex are largely switched off during sleep, except for the sense of smell. We were not swayed by this orthodox view. Instead we followed our hunch that the repeated playing of soft sounds might influence the sleeping brain and produce changes in recently stored memories.
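In outline, the cueing procedure amounts to a simple control loop: stage the sleep from the EEG, and present a cue only when slow-wave sleep is detected. The sketch below (in Python) is purely illustrative; the sleep-scoring and playback functions are hypothetical placeholders rather than calls to any real software, and the two object-sound pairings stand in for the 50 used in the study.

    import random
    import time

    # Hypothetical pairings from the learning phase: each on-screen object has a sound.
    object_sounds = {"cat": "meow.wav", "teakettle": "whistle.wav"}  # ...up to 50 pairs

    # Cue only half of the learned items, so the uncued half serves as a control.
    cued_items = sorted(random.sample(list(object_sounds), k=len(object_sounds) // 2))

    def score_sleep_stage(eeg_epoch):
        """Placeholder for EEG sleep staging; should return 'SWS', 'REM', 'wake', etc."""
        raise NotImplementedError

    def play_quietly(sound_file):
        """Placeholder: present a cue barely above background noise so the sleeper stays asleep."""
        raise NotImplementedError

    def run_cueing_session(eeg_epochs, seconds_between_cues=5.0):
        """Replay cue sounds only while slow-wave sleep is detected."""
        for epoch in eeg_epochs:                   # successive EEG epochs during the nap
            if score_sleep_stage(epoch) == "SWS":  # gate cueing on slow-wave sleep
                play_quietly(object_sounds[random.choice(cued_items)])
                time.sleep(seconds_between_cues)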

Indeed, the same memory benefits were also found in many subsequent studies. A technique called functional magnetic resonance imaging highlighted which brain areas take part in TMR, and EEG results brought out the importance of specific brain oscillations. Two papers published in 2018—one by Scott Cairney of the University of York in England and his colleagues; the other by James Antony of Princeton University and his colleagues—linked an oscillation, the sleep spindle, with the memory benefits of TMR. Besides boosting spatial memory, these procedures have also helped improve recall in other settings. TMR can assist in mastery of playing a keyboard melody and learning new vocabulary or grammatical rules. The technique can also help with simpler types of learning, such as adjustments in one’s body image. In conditioning experiments, TMR alters prior learning of an automatic reaction to a stimulus caused by an earlier pairing of that stimulus with an electric shock. Ongoing studies are examining still other types of recall, such as associating names with faces when first meeting new people. As the technology evolves, TMR should be tested to see if it could help to treat various disorders, reduce addictions or speed recovery from illness. Our lab, together with Northwestern University neurologist Marc Slutzky, is currently testing a novel rehabilitation procedure for recovering arm-movement abilities after stroke. Cue sounds are incorporated as part of the therapy and are replayed during sleep to try to accelerate relearning of lost movements. The prospects appear promising because TMR can alter similar forms of motor learning in healthy individuals. What About Learning French? The demonstrated ability to reinforce memories raises the question of whether new information can be loaded into a person’s brain after falling asleep, a technique that calls forth the ethical specter of mind control invoked by Brave New World. Is it going too far, though, to imagine that memories can be created surreptitiously? Although the orthodox response to such conjectures has for many years been an unqualified no, studies by Anat Arzi, now at the

University of Cambridge, and her colleagues demonstrated the creation of relatively simple memories using odors. In one experiment, the researchers succeeded in diminishing the desire for tobacco in smokers who were keen to quit. When asleep, study participants were exposed to two odors, cigarette smoke and rotten fish. During the following week, those who had smelled the mix of both odors lit up 30 percent less, having apparently been conditioned to associate smoking with the aversive fish odor. Acquiring a more complex memory is not as easy, but even that may one day prove possible. Karim Benchenane of the French National Center for Scientific Research (CNRS) and his colleagues have shown how to literally change the mind—of a mouse. When they began their work, Benchenane and his team knew that when a mouse explores a new environment, neurons called place cells fire as the animal traverses specific parts of an enclosure. These same neurons discharge again during sleep as the memory is apparently replayed. The researchers stimulated the reward system of the mouse brain (the medial forebrain bundle) precisely when place cells became spontaneously active while the animal was asleep. Amazingly, mice subsequently spent more time at the locations that corresponded to the stimulated place cells, heading there directly after they woke up. More experiments still need to disentangle whether fully formed false memories were implanted in the mice during sleep or whether they were automatically guided to those spots by a process of conditioning, without any knowledge about why they were drawn to those locations. The boundaries of what may be possible remain to be tested, but this research has established that a normal component of learning continues nocturnally off-line. Sleep is needed not just to stay alert and rejuvenated but also to reinforce memories initially acquired while awake. We still need to learn much more about off-line memory processing. Further work must ascertain how sleep helps learning and which brain mechanisms are engaged to preserve the most valuable memories. It is also essential to find out more about the

perils of poor or inadequate sleep, whether brought on by various forms of life stress, certain diseases or the experience of growing older. A study led by Carmen Westerberg, then at Northwestern, points in the desired direction. Westerberg tested patients with the memory dysfunction that often precedes Alzheimer’s disease—amnestic mild cognitive impairment. The results documented a link between poor sleep and reduced ability to remember information after an intervening overnight delay. All of this knowledge might help in creating programs of sleep learning to preserve memories, to speed the acquisition of new knowledge, or even to change bad habits such as smoking. Looking still further ahead, scientists might also explore whether we can gain control over our dreams, which could lead to the prospect of nightmare therapies, sleep-based problem-solving and perhaps even recreational dream travel. In a culture that already offers wrist-based sleep trackers and mail-order gene tests, we can begin to contemplate new ways to convert daily downtime into a productive endeavor—for some, a chilling prospect, and for others, another welcome opportunity for hacking the self. --Originally published: Scientific American 319(5); 26-31 (November 2018).

Sleep Deprivation Halts Production of Brain Proteins by Emily Willingham Most of us could use more sleep. We feel it in our urge for an extra cup of coffee and in a slipping cognitive grasp as a busy day grinds on. And sleep has been strongly tied to our thinking, sharpening it when we get enough and blunting it when we get too little. The factors that produce these effects are familiar to neuroscientists: external light and dark signals that help set our daily, or circadian, rhythms, “clock” genes that act as internal timekeepers, and neurons that signal to one another through connections called synapses. But how these factors interact to freshen a brain once we do sleep has remained enigmatic. Findings published on October 10, 2019, in two papers in Science place synapses at center stage. These nodes of neuronal communication, researchers show, are where internal preparations for sleep and the effects of our sleep-related behaviors converge. Cellular timekeepers rhythmically prep areas around the synapses in anticipation of building synaptic proteins during slumber. But the new findings indicate neurons don’t end up building these critical proteins in the absence of sleep. The results suggest the brain is “getting prepared for an event, but it doesn’t mean you actually follow through on doing it,” says Robert Greene, a neuroscientist at the University of Texas Southwestern Medical Center, who was not involved in the study. Greene calls the studies “fascinating,” saying they confirm a “long suspected” connection between internal timekeeping and sleep behaviors.

When we become sleepy, two factors are in play: “sleep pressure,” or the growing allure of a beckoning pillow as waking time lengthens, and our internal clock sounding the signal that the usual point for shut-eye has arrived. In one of the two studies, Sara B. Noya of the Institute of Pharmacology and Toxicology at the University of Zurich and her colleagues showed that in mice, the internal clock regulates the rhythmic generation of instructions, or transcripts, for making proteins. Giving in to sleep pressure and hitting the hay, they found, triggers the final steps of protein production. At two peak times in the 24-hour day, just before waking and sleeping, neurons in cognition-related brain areas packed a timekeeping cell’s signaling stations with these transcripts, Noya’s team discovered. The “sleep time” transcripts tended to be for proteins that regulate building other proteins, while the “wake time” instructions were for proteins linked to synapse function. These stashed molecules set the stage for the rapid refreshing of synapses during sleep. Mice lacking important clock genes did not show these peaks. With a regular sleep-wake cycle, the proteins built using these instructions also showed peak production at dawn and dusk. In sleep-deprived mice, however, Noya and her colleagues demonstrated that the cell still produced many of the transcripts but did not build the related proteins. That result implies sleeping regulates the final, protein-building step in ensuring robust synapses. Not all proteins that the cell makes necessarily go into active service, though. In a companion paper, Franziska Brüning of the Ludwig Maximilian University of Munich and the Max Planck Institute of Biochemistry in Martinsried, Germany, and her colleagues explored the rhythmic use of those that do. Attachment or removal of a phosphate molecule acts as a toggle to turn proteins on or off, so the investigators took a close look at this process. They found levels of proteins that had been tagged with phosphates also peaked twice, with the bigger peak occurring just before waking. And as with proteins in the other study, sleep deprivation flattened these peaks.

The researchers made their measurements every four hours, an advance on earlier studies that usually looked at a single time point during a 24-hour period, says Chiara Cirelli, a neuroscientist at the University of Wisconsin–Madison, who co-wrote a commentary accompanying the two papers. “It’s a very comprehensive analysis across the entire light-dark cycle,” she says. Cirelli emphasizes the importance of isolating the synaptic regions where these molecules accumulate and are produced. The researchers pinned down when transcripts were positioned at the ready and when proteins—tagged with phosphates or not —were made or used, she says. Maria Robles, a neuroscientist at Ludwig Maximilian University of Munich and a co-author of both papers, says the findings distinguishing the different stages of protein production and activity are eye-opening, revealing the brain has “a beautiful way to control” these molecules. Even though the studies were done in mice, the brains of these animals have proved to be pretty reliable substitutes for those of humans, says Akhilesh B. Reddy, a neuroscientist at the Perelman School of Medicine at the University of Pennsylvania, who was not involved in the work. The findings have implications for how we consolidate memories during sleep, among other promising avenues of research, he says, drawing the focus straight to events at the synapse. That does not mean, however, that interventions to boost memory and cognition loom in the near future, based on these findings, Robles says. “This is just the tip of the iceberg,” she adds. --Originally published: Scientific American online, October 10, 2019.

SECTION 2 Navigating Space and Time

Where Am I? Where Am I Going? by May-Britt Moser and Edvard I. Moser Our ability to pilot a car or airplane—or even to walk through city streets—has been completely transformed by the invention of the Global Positioning System (GPS). How did we navigate, though, before we had GPS? Recent work has shown that the mammalian brain uses an incredibly sophisticated GPS-like tracking system of its own to guide us from one location to the next. Like the GPS in our phones and cars, our brain’s system assesses where we are and where we are heading by integrating multiple signals relating to our position and the passage of time. The brain normally makes these calculations with minimal effort, so we are barely conscious of them. It is only when we get lost or when our navigation skills are compromised by injury or a neurodegenerative disease that we get a glimpse of how critical this mapping-and-navigation system is to our existence. The ability to figure out where we are and where we need to go is key to survival. Without it, we, like all animals, would be unable to find food or reproduce. Individuals—and, in fact, the entire species—would perish. The sophistication of the mammalian system becomes particularly clear when contrasted to those of other animals. The simple roundworm Caenorhabditis elegans, which has just 302 neurons, navigates almost solely in response to olfactory signals, following the path of an increasing or decreasing odor gradient. Animals with more sophisticated nervous systems, such as desert ants or honeybees, find their way with the help of additional

strategies. One of these methods is called path integration, a GPS-like mechanism in which neurons calculate position based on constant monitoring of the animal’s direction and speed of movement relative to a starting point—a task carried out without reference to external cues such as physical landmarks. In vertebrates, particularly in mammals, the repertoire of behaviors that enable an animal to locate itself in its environment has expanded still further. More than any other class of animals, mammals rely on the capacity to form neural maps of the environment—patterns of electrical activity in the brain in which groups of nerve cells fire in a way that reflects the layout of the surrounding environment and an animal’s position in it. The formation of such mental maps is mostly thought to occur in the cortex, the brain’s wrinkled upper layers that developed quite late in evolution. Over the past few decades researchers have gained a deep understanding of just how the brain forms and then revises these maps as an animal moves. The recent work, conducted mostly in rodents, has revealed that the navigation systems consist of several specialized cell types that continuously calculate an animal’s location, the distance it has traveled, the direction it is moving and its speed. Collectively these different cells form a dynamic map of local space that not only operates in the present but also can be stored as a memory for later use. A Neuroscience of Space The study of the brain’s spatial maps began with Edward C. Tolman, a psychology professor at the University of California, Berkeley, from 1918 to 1954. Before Tolman’s work, laboratory experiments in rats seemed to suggest that animals find their way around by responding to—and memorizing—successive stimuli along the path they move. In learning to run a maze, for instance, they were thought to recall sequences of turns they made from the maze’s start to its end. This idea, however, did not take into account that the animals might visualize an overall picture of the entire maze to be able to plan the best route.

Tolman broke radically with prevailing views. He had observed rats take shortcuts or make detours, behaviors that would not be expected if they had learned only one long sequence of behaviors. Based on his observations, he proposed that animals form mental maps of the environment that mirror the spatial geometry of the outer world. These cognitive maps did more than help animals to find their way; they also appeared to record information about the events that the animals experienced at specific locales. Tolman’s ideas, proposed for the first time around 1930, remained controversial for decades. Acceptance came slowly, in part because they were based entirely on observing the behavior of experimental animals, which could be interpreted in many ways. Tolman did not have the concepts or tools to test whether an internal map of the environment actually existed in an animal’s brain. It took about 40 years before direct evidence for such a map appeared in studies of neural activity. In the 1950s progress in the development of microelectrodes made it possible to monitor electrical activity from individual neurons in awake animals. These very thin electrodes enabled researchers to identify the firing of single neurons as the animals went about their business. A cell “fires” when it triggers an action potential—a short-lasting change in the voltage across the neuronal cell membrane. Action potentials cause neurons to release neurotransmitter molecules that convey signals from one neuron to another. John O’Keefe of University College London used microelectrodes to monitor action potentials in rats in the hippocampus, an area of the brain known for decades to be important for memory functions. In 1971 he reported that neurons there fired when a rat in a box spent time at a certain location—thus, he called them place cells. O’Keefe observed that different place cells fired at different locations in the box and that the firing pattern of the cells collectively formed a map of locations in the box. The combined activity of multiple place cells could be read out from the electrodes to identify the animal’s precise location at any given time. In 1978 O’Keefe and his colleague Lynn Nadel, now at the University of Arizona, suggested that place cells

were, in fact, an integral part of the cognitive map Tolman had envisaged. The Nervous System’s Incredible Pathfinding Skills Survival for any species requires an ability to take into account the surrounding environment and to make a calculation, even a crude one, of where an animal has been, where it is and where it is going. On higher rungs of the evolutionary chain, many species have developed “path integration” systems that allow them to perform this task without the need to locate where they are by referencing external landmarks. Mammals have found an even more elaborate solution that uses internalized mental maps.

Credit: Illustration by Jen Christiansen
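Path integration, described in the box above, is essentially dead reckoning: position is updated continuously from self-motion signals alone, with no reference to landmarks. Below is a minimal sketch of that computation in Python; the heading and speed samples are made-up values chosen only to illustrate the bookkeeping.

    import math

    def integrate_path(samples, dt=0.1):
        """Dead reckoning: accumulate displacement from (heading, speed) samples.

        samples: sequence of (heading_in_radians, speed) pairs, one per time step dt.
        Returns the estimated (x, y) position relative to the starting point,
        computed without any reference to external landmarks.
        """
        x = y = 0.0
        for heading, speed in samples:
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
        return x, y

    # Example: head "northeast" at 0.2 m/s for 5 s, then due "east" at 0.3 m/s for 5 s.
    samples = [(math.pi / 4, 0.2)] * 50 + [(0.0, 0.3)] * 50
    print(integrate_path(samples))  # roughly (2.21, 0.71) meters from the start

Because every step compounds small errors in the heading and speed estimates, a purely path-integrated position drifts over time, which is one reason landmark cues remain useful when they are available.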

A Cortical Map

The discovery of place cells opened a window into the deepest parts of the cortex, in areas farthest away from the sensory cortices (those that receive inputs from the senses) and from the motor cortex (which emits the signals that initiate or control movement). At the end of the 1960s, when O’Keefe started his work, knowledge about when neurons switched on and off was largely restricted to areas called the primary sensory cortices, where neural activity was controlled directly by such sensory inputs as light, sound and touch. Neuroscientists of that era speculated that the hippocampus was too far removed from the sensory organs to process their inputs in any manner that could easily be understood from a microelectrode recording. The discovery of cells in the hippocampus that created a map of an animal’s immediate environment dashed that speculation. Even though the finding was remarkable and suggested a role for place cells in navigation, no one knew what that role might be for decades after their discovery. Place cells were in an area of the hippocampus, called CA1, that was the end point in a signaling chain originating elsewhere in the hippocampus. It was hypothesized that place cells received many of the critical navigation-related computations from other hippocampal regions. In the early 2000s the two of us decided to explore this idea further in the new lab we had set up at the Norwegian University of Science and Technology in Trondheim. This pursuit ultimately led to a major discovery. In collaboration with Menno Witter, now at our institute, and a set of highly creative students, we began by using microelectrodes to monitor the activity of place cells in the rat hippocampus after we had disrupted part of a neuronal circuit there known to feed information to these cells. We expected the work to confirm that this circuit was important to the proper functioning of the place cells. To our surprise, the neurons at the end of that circuit, in CA1, still fired when the animals arrived at specific locations. Our team's inescapable conclusion was that place cells did not depend on this hippocampal circuit to gauge an animal’s bearings. Our attention then turned to the only neural pathway that had been spared by our intervention: the direct connections to CA1 from the

entorhinal cortex, an adjoining area that provides an interface to the rest of the cortex. In 2002 we inserted microelectrodes in the entorhinal cortex, still in a collaboration with Witter, and began recording as the animals performed tasks that were similar to the ones we had used for our place cell studies. We guided electrodes into an area of the entorhinal cortex having direct connections to the parts of hippocampus where place cells had been recorded in almost every study before ours. Many cells in the entorhinal cortex turned out to fire when an animal was at a particular spot in the enclosure, much like the place cells in the hippocampus do. But unlike a place cell, a single cell in the entorhinal cortex fired, not only at one location visited by a rodent but at many. The most striking property of these cells, though, was the way they fired. Their pattern of activity became obvious to us only when, in 2005, we increased the size of the enclosure in which we were recording. After expanding it to a certain size, we found that the multiple locations at which an entorhinal cell fired formed the vertices of a hexagon. At each vertex, the cell, which we called a grid cell, fired when the animal passed over it. The hexagons, which covered the entire enclosure, appeared to form the individual units of a grid—similar to the squares formed by the coordinate lines on a road map. The firing pattern raised the possibility that grid cells, unlike place cells, provide information about distance and direction, helping an animal to track its trajectory based on internal cues from the body’s motions without relying on inputs from the environment. Several aspects of the grid also changed as we examined the activity of cells in different parts of the entorhinal cortex. At the dorsal part, near the top of this structure, the cells generated a grid of the enclosure that consisted of tightly spaced hexagons. The size of the hexagons increased in a series of steps—or modules—as one moved toward the lower, or ventral, part of the entorhinal cortex. The hexagonal grid elements in each module had a unique spacing.

The spacing of the grid cells in each successive module moving downward could be determined by multiplying the distance between cells in the previous module by a factor of about 1.4, approximately the square root of 2. In the module at the top of the entorhinal cortex, a rat that activated a grid cell at one vertex of a hexagon would have to travel 30 to 35 centimeters to an adjoining vertex. In the next module down, the animal would have to travel 42 to 49 centimeters, and so on. In the lowest module, the distance extended up to several meters in length. We were extremely excited by the grid cells and their tidy organization. In most parts of the cortex, the neurons have firing patterns that appear chaotic and inaccessible, but here, deep in the cortex, there was a system of cells that fired in a predictable and orderly pattern. We were eager to investigate. But these cells and place cells were not the only ones involved in mapping the mammal’s world—other surprises also awaited us. Back in the mid-1980s and early 1990s, James B. Ranck of SUNY Downstate Medical Center and Jeffrey S. Taube, now at Dartmouth College, had described cells that fired when a rodent faced a particular direction. Ranck and Taube had discovered such head-direction cells in the presubiculum, another region of the cortex adjacent to the hippocampus. Our studies found that these cells were also present in the entorhinal cortex, intermingled among grid cells. Many head-direction cells in the entorhinal cortex also functioned as grid cells: the locations in the enclosure where they fired also formed a grid, but the cells became active at those locales only when the rat was facing a certain direction. These cells appeared to provide a compass for the animal; by monitoring the cells, one could read out the direction the animal was facing at any given time relative to the surrounding environment. A few years later, in 2008, we made a discovery in the entorhinal cortex of another cell type. These border cells fired whenever the animal approached a wall or an edge of the enclosure or some other divide. These cells appeared to calculate how far the animal was

from a boundary. This information could then be used by grid cells to estimate how far the animal had traveled from the wall, and it could also be established as a reference point to remind the rat of the wall’s whereabouts at a later time. Finally, in 2015, yet a fourth kind of cell entered the scene. It responded specifically to the animal’s running speed, regardless of its location or direction. The firing rates of these neurons increased in proportion to the speed of movement. Indeed, we could ascertain how fast an animal was moving at a given moment by looking at the firing rates of just a handful of speed cells. In conjunction with head-direction cells, speed cells may serve the role of providing grid cells with continually updated information about the animal’s movement—its speed, direction and the distance from where it started. How the Brain Takes Its Bearings The idea that the brains of mammals make a mental map that mirrors the spatial geometry of the outer world first emerged around 1930. Neuroscientists have subsequently identified cells that work together to create such maps. A key development came in 1971, when an American-British researcher found that place cells in the rat hippocampus fire at particular locations on the willy-nilly path an animal travels. In 2005 the authors discovered grid cells that let an animal measure its location in its environment—say, in relation to the walls of an enclosure. As the animal moves about, each grid cell fires at multiple locations that correspond to the vertices of a hexagon.

Credit: Illustration by Jen Christiansen. Source: “Scientific Background: The Brain’s Navigational Place and Grid Cell System,” by Ole Kiehn and Hans Forssberg, with Illustrations by Mattias Karlen. Nobelprize.org, Nobel Media AB, 2014, www.nobelprize.org/nobel_prizes/medicine/laureates/2014/advanced.html
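The module-spacing rule described in the main text, in which each successively lower module multiplies the grid spacing of the one above it by roughly 1.4 (about the square root of 2), can be made concrete with a short calculation. The starting value of 32 centimeters and the count of six modules below are simply representative figures, the first chosen from within the 30-to-35-centimeter range reported for the topmost module.

    import math

    def module_spacings(base_cm=32.0, n_modules=6, factor=math.sqrt(2)):
        """Spacing of successive grid modules, each about 1.4 times the one above it."""
        return [base_cm * factor ** i for i in range(n_modules)]

    for i, spacing in enumerate(module_spacings(), start=1):
        print(f"module {i}: ~{spacing:.0f} cm between neighboring grid vertices")
    # module 1: ~32 cm, module 2: ~45 cm, module 3: ~64 cm, ... module 6: ~181 cm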

From Grid to Place Cells Our discovery of grid cells grew out of our desire to uncover the inputs that allow place cells to give mammals an internal picture of their environment. We now understand that place cells integrate the signals from various types of cells in the entorhinal cortex as the brain attempts to track the route an animal has traveled and where it is going in its environment. Yet even these processes do not tell the whole story of how mammals navigate. Our initial work focused on the medial (inner) entorhinal cortex. Place cells may also receive signals from the lateral entorhinal cortex, which relays processed input from a number of sensory

systems, including information about odor and identity of objects. By integrating inputs from the medial and lateral parts of the entorhinal cortex, place cells interpret signals from throughout the brain. The complex interaction of messages arriving in the hippocampus and the formation of location-specific memories that this enables are still being investigated by our lab and others, and this research will undoubtedly continue for many years to come. One way to begin to understand how the spatial maps of the medial entorhinal cortex and the hippocampus combine to aid navigation is to ask how the maps differ. John Kubie and the late Robert U. Muller, both at SUNY Downstate Medical Center, showed in the 1980s that maps in the hippocampus made up of place cells may change entirely when an animal moves to a new environment— even to a different colored enclosure at the same location in the same room. Experiments performed in our own lab, with rats foraging in up to 11 enclosures in a series of different rooms, have shown that each room, in fact, rapidly gives rise to its own independent map, further supporting the idea that the hippocampus forms spatial maps tailored to specific environments. In contrast, the maps in the medial entorhinal cortex are universal. Grid cells—and head-direction and border cells—that fire together at a particular set of locations on the grid map for one environment also fire at analogous positions on the map for another environment—as if latitude and longitude lines from the first map were imposed on the new setting. The sequence of cells that fire as the animal moves northeast in one room of the cage repeats when the rat goes in that same direction in the other room. The pattern of signaling among these cells in the entorhinal cortex is what the brain uses for navigating through its surroundings. These codes are then transmitted from the entorhinal cortex to the hippocampus, where they are used to form maps specific to a particular place. From the standpoint of evolution, two sets of maps that integrate their information to guide animals appear to be an efficient solution for a system used by animals for spatial navigation.

The grids formed in the medial entorhinal cortex that measure distance and direction do not change from one room to the next. In contrast, the place cells of the hippocampus form individual maps for every single room. Inside the Brain’s GPS The neural navigation system of the human brain resides deep within a region known as the medial temporal lobe. Two areas of the medial temporal lobe—the entorhinal cortex and the hippocampus—act as key components of the brain’s GPS. Networks of specialized cell types in the entorhinal cortex contribute to the complexity in the mammalian brain’s pathfinding system.

Credit: Illustration by Jen Christiansen.

Local Maps

Understanding of the neural navigation system remains a work in progress. Almost all our knowledge of place and grid cells has been obtained in experiments in which electrical activity from neurons is recorded when rats or mice walk about randomly in highly artificial environments—boxes with flat bottoms and no internal structures to serve as landmarks. A lab differs substantially from natural environments, which change constantly and are full of three-dimensional objects. The reductionism of the studies raises questions about whether place cells and grid cells fire in the same way when animals find themselves outside the lab. Experiments in complex mazes that try to mimic animals’ natural habitat provide a few clues to what might be going on. In 2009 we recorded grid cells as animals moved through an intricate maze in which they encountered a hairpin turn at the end of each alley that marked the beginning of the next passageway. The study showed that, as expected, grid cells formed patterns of hexagons to map out distances for the rats in individual alleys of the maze. But each time an animal turned from one alley to the next, an abrupt transition occurred. A separate grid pattern was then superimposed on the new alley, almost as if the rat were entering an entirely different room. Later work in our lab has shown that grid maps also fragment into smaller maps in open environments if these spaces are large enough. We are now researching how these smaller maps merge to form an integrated map of a given area. Even these experiments are oversimplified because the enclosures are flat and horizontal. Experiments performed in other labs—observing flying bats and rats that climb around in cages—are beginning to provide some clues: place cells and head-direction cells seem to fire in specific places throughout any three-dimensional space, and most likely grid cells do as well. Space and Memory

The navigational system in the hippocampus does more than help animals get from point A to point B. Beyond receiving information about position, distance and direction from the medial entorhinal cortex, the hippocampus makes a record of what is located in a particular place—whether a car or a flagpole—as well as the events that take place there. The map of space created by place cells thus contains not only information about an animal’s whereabouts but also details about the animal’s experiences, similar to Tolman’s conception of a cognitive map. Some of this added information appears to come from neurons in the lateral part of the entorhinal cortex. Particulars about objects and events fuse with an animal’s coordinates and are laid down as a memory. When the memory is later retrieved, both the event and the position are called to mind. This coupling of place with memory recalls a strategy for memorization invented by ancient Greeks and Romans. The “method of loci” lets a person memorize a list of items by imagining putting each item at a position along a well-known path through a place, say, a landscape or a building—an arrangement often called a memory palace. Participants in memory contests still use the technique to recall long lists of numbers, letters or playing cards. Sadly, the entorhinal cortex is among the first areas to fail in people with Alzheimer’s disease. The illness causes brain cells there to die, and a reduction in its size is considered a reliable measure for identifying at-risk individuals. The tendency to wander and get lost is also among the earliest indicators of the disorder. In the later stages of Alzheimer’s, cells die in the hippocampus, producing an inability to recall experiences or remember concepts such as the names of colors. In fact, a recent study has provided evidence that young individuals with a gene that places them at an elevated risk for Alzheimer’s may have deficiencies in the functioning of their grid cell networks—a finding that may lead to new ways of diagnosing the disease. A Rich Repertoire

Today, more than 80 years since Tolman first proposed the existence of a mental map of our surroundings, it is clear that place cells are just one component of an intricate representation the brain makes of its spatial environment to calculate location, distance, speed and direction. The multiple cell types that have been found in the navigation system of the rodent brain also occur in bats, monkeys and humans. Their existence across mammalian taxonomic orders suggests that grid and other cells involved in navigation arose early in the evolution of mammals and that similar neural algorithms are used to compute position across species. Many of the building blocks of Tolman’s map have been discovered, and we are beginning to understand how the brain creates and deploys them. The spatial representation system has become one of the best-understood circuits of the mammalian cortex, and the algorithms it uses are beginning to be identified to help unlock the neural codes the brain uses for navigation. As with so many other areas of inquiry, new findings raise new questions. We know that the brain has an internal map, but we still need a better understanding of how the elements of the map work together to produce a cohesive representation of positioning and how the information is read by other brain systems to make decisions about where to go and how to get there. Other questions abound. Is the spatial network of the hippocampus and the entorhinal cortex limited to navigation of local space? In rodents, we examine areas that have radii of only a few meters. Are place and grid cells also used for long-distance navigation, such as when bats migrate hundreds or thousands of kilometers? Finally, we wonder how grid cells originate, whether there is a critical formative period for them in an animal’s development and whether place and grid cells can be found in other vertebrates or invertebrates. If invertebrates use them, the finding would imply that evolution has used this spatial-mapping system for hundreds of millions of years. The brain’s GPS will continue to provide a rich trove of leads for new research that will occupy generations of scientists in the decades ahead.

--Originally published: Scientific American 26(3); 34-41 (Summer 2017).

Times of Our Lives by Karen Wright The biopsychologist John Gibbon called time the “primordial context”: a fact of life that has been felt by all organisms in every era. For the morning glory that spreads its petals at dawn, for geese flying south in autumn, for locusts swarming every 17 years and even for lowly slime molds sporing in daily cycles, timing is everything. In human bodies, biological clocks keep track of seconds, minutes, days, months and years. They govern the split-second moves of a tennis serve and account for the trauma of jet lag, monthly surges of menstrual hormones and bouts of wintertime blues. Cellular chronometers may even decide when your time is up. Life ticks, then you die. The pacemakers involved are as different as stopwatches and sundials. Some are accurate and inflexible, others less reliable but subject to conscious control. Some are set by planetary cycles, others by molecular ones. They are essential to the most sophisticated tasks the brain and body perform. And timing mechanisms offer insights into aging and disease. Cancer, Parkinson’s disease, seasonal depression and attention-deficit disorder have all been linked to defects in biological clocks. The physiology of these timepieces is not completely understood. But neurologists and other clock researchers have begun to answer some of the most pressing questions raised by human experience in the fourth dimension. Why, for example, a watched pot never boils. Why time flies when you’re having fun. Why all-nighters can give you indigestion. Or why people live longer than hamsters. It’s only a

matter of time before clock studies resolve even more profound quandaries of temporal existence. The Psychoactive Stopwatch If this article intrigues you, the time you spend reading it will pass quickly. It’ll drag if you get bored. That’s a quirk of a “stopwatch” in the brain—the so-called interval timer—that marks time spans of seconds to hours. The interval timer helps you figure out how fast you have to run to catch a baseball. It tells you when to clap to your favorite song. It lets you sense how long you can lounge in bed after the alarm goes off. Interval timing enlists the higher cognitive powers of the cerebral cortex, the brain center that governs perception, memory and conscious thought. When you approach a yellow traffic light while driving your car, for example, you time how long it has been yellow and compare that with a memory of how long yellow lights usually last. “Then you have to make a judgment about whether to put on the brakes or keep driving,” says Stephen M. Rao, now at the Cleveland Clinic Lou Ruvo Center for Brain Health. Rao’s studies with functional magnetic resonance imaging (fMRI) have pointed to the parts of the brain engaged in each of those stages. Inside the fMRI machine, subjects listen to two pairs of tones and decide whether the interval between the second pair is shorter or longer than the interval between the first pair. The brain structures that are involved in the task consume more oxygen than those that are not involved, and the fMRI scan records changes in blood flow and oxygenation once every 250 milliseconds. “When we do this, the very first structures that are activated are the basal ganglia,” Rao says. Long associated with movement, this collection of brain regions has become a prime suspect in the search for the interval-timing mechanism as well. One area of the basal ganglia, the striatum, hosts a population of conspicuously well-connected nerve cells that receive signals from other parts of the brain. The long arms of these striatal cells are covered with between 10,000 and 30,000 spines,

each of which gathers information from a different neuron in another locale. If the brain acts like a network, then the striatal spiny neurons are critical nodes. “This is one of only a few places in the brain where you see thousands of neurons converge on a single neuron,” says Warren H. Meck of Duke University. Striatal spiny neurons are central to an interval-timing theory Meck developed with Gibbon, who worked at Columbia University until his death in 2001. The theory posits a collection of neural oscillators in the cerebral cortex: nerve cells firing at different rates, without regard to their neighbors’ tempos. In fact, many cortical cells are known to fire at rates between 10 and 40 cycles per second without external provocation. “All these neurons are oscillating on their own schedules,” Meck observes, “like people talking in a crowd. None of them are synchronized.” The cortical oscillators connect to the striatum via millions of signal-carrying arms, so the striatal spiny neurons can eavesdrop on all those haphazard “conversations.” Then something—a yellow traffic light, say—gets the cortical cells’ attention. The stimulation prompts all the neurons in the cortex to fire simultaneously, causing a characteristic spike in electrical output some 300 milliseconds later. This attentional spike acts like a starting gun, after which the cortical cells resume their disorderly oscillations. But because they have begun simultaneously, the cycles now make a distinct, reproducible pattern of nerve activation from moment to moment. The spiny neurons monitor those patterns, which help them to “count” elapsed time. At the end of a specified interval—when, for example, the traffic light turns red—a part of the basal ganglia called the substantia nigra sends a burst of the neurotransmitter dopamine to the striatum. The dopamine burst induces the spiny neurons to record the pattern of cortical oscillations they receive at that instant, like a flashbulb exposing the interval’s cortical signature on the spiny neurons’ film. “There’s a unique time stamp for every interval you can imagine,” Meck says. Once a spiny neuron has learned the time stamp of the interval for a given event, subsequent occurrences of the event prompt both the

“firing” of the cortical starting gun and a burst of dopamine at the beginning of the interval. The dopamine burst now tells the spiny neurons to start tracking the patterns of cortical impulses that follow. When the spiny neurons recognize the time stamp marking the end of the interval, they send an electrical pulse from the striatum to another brain center, called the thalamus. The thalamus, in turn, communicates with the cortex, and the higher cognitive functions— such as memory and decision making—take over. Hence, the timing mechanism loops from the cortex to the striatum to the thalamus and back to the cortex again. Clocks in the Brain Scientists are uncovering the workings of two neural timepieces: an interval timer, which measures intervals lasting up to hours, and a circadian clock, which causes certain body processes to peak and ebb on 24-hour cycles.

The Interval Timer According to one model, the onset of an event lasting a familiar amount of

time (such as the switching on of a four-second yellow traffic light) activates the “start button” of the interval timer by evoking two brain responses. It induces a particular subset of cortical nerve cells that fire at different rates (a) to momentarily act together (b and green arrows on brain), and it prompts neurons of the substantia nigra to release a burst of the signaling chemical dopamine (purple arrow). Both signals impinge on spiny cells of the striatum (c), which proceed to monitor the overall patterns of impulses coming from the cortical cells after those neurons resume their various firing rates. Because the cortical cells act in synchrony at the start of the interval, the subsequent patterns occur in the same sequence every time and take a unique form when the end of the familiar interval is reached (d). At that point, the striatum sends a time’s-up signal (red arrows) through other parts of the brain to the decision-making cortex.
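The coincidence-detection logic of this model can be sketched in a few lines of code. The version below is a toy illustration only: the oscillator frequencies are arbitrary values in the reported 10-to-40-cycles-per-second range, and the pattern-matching rule is a stand-in for what striatal spiny neurons are proposed to compute. A bank of oscillators is reset in synchrony at the start of the interval, the pattern present when the dopamine burst arrives is stored as a time stamp, and elapsed time is later judged by how closely the current pattern matches that stamp.

    import math

    OSCILLATOR_HZ = [10.3, 13.7, 17.2, 22.9, 31.4, 38.6]  # arbitrary rates in the 10-40 Hz range

    def cortical_pattern(t_seconds):
        """State of each oscillator t seconds after the synchronizing 'starting gun'."""
        return [math.sin(2 * math.pi * f * t_seconds) for f in OSCILLATOR_HZ]

    def similarity(a, b):
        """Crude match between two patterns: 1.0 means identical."""
        return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / (2.0 * len(a))

    # "Training": the dopamine burst at the end of a familiar 4-second interval
    # stamps the cortical pattern present at that moment onto the striatal readout.
    time_stamp = cortical_pattern(4.0)

    # Later, elapsed time is judged by how closely the current pattern matches the stamp.
    for t in (1.0, 2.0, 3.0, 4.0, 5.0):
        print(f"t = {t:.1f} s   match = {similarity(cortical_pattern(t), time_stamp):.2f}")

Running the loop shows the match score peaking at the trained four-second mark, which is the essence of reading out an interval from the oscillators' joint pattern rather than from any single cell.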

The Circadian Clock Daily cycles of light and dark dictate when many physiological processes that operate on 24-hour cycles will be most and least active. The brain tracks fluctuations in light with the help of ganglion cells in the retina of the eye. A pigment in some of the cells—melanopsin—probably detects light, leading the retinal ganglion cells to send information about its brightness and duration to the suprachiasmatic nucleus (SCN) of the brain. Then the

SCN dispatches the information to the parts of the brain and body that control circadian processes. Researchers best understand the events leading the pineal gland to secrete melatonin, sometimes called the sleep hormone (diagram). In response to daylight, the SCN emits signals (red arrow) that stop another brain region—the paraventricular nucleus—from producing a message that would ultimately result in melatonin’s release. After dark, however, the SCN releases the brake, allowing the paraventricular nucleus to relay a “secrete melatonin” signal (green arrows) through neurons in the upper spine and the neck to the pineal gland. Illustrations by Terese Winslow
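The melatonin pathway in the box reduces to a double negative: daylight drives the SCN to hold a brake on the paraventricular nucleus, and darkness releases it. A toy restatement of that gating logic, purely illustrative and following only the description above:

    def melatonin_secreted(is_daylight: bool) -> bool:
        """Trace the relay described above: retina -> SCN -> paraventricular nucleus -> pineal gland."""
        scn_brake_on = is_daylight              # daylight: SCN blocks the paraventricular nucleus
        pvn_signals_pineal = not scn_brake_on   # darkness: the brake is released
        return pvn_signals_pineal               # the pineal gland then secretes melatonin

    print(melatonin_secreted(is_daylight=True))    # False: melatonin suppressed during the day
    print(melatonin_secreted(is_daylight=False))   # True: secretion proceeds after dark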

If Meck is right and dopamine bursts play an important role in framing a time interval, then diseases and drugs that affect dopamine levels should also disrupt that loop. So far that is what Meck and others have found. Patients with untreated Parkinson’s disease, for example, release less dopamine into the striatum, and their clocks run slow. In trials these patients consistently underestimate the duration of time intervals. Marijuana also lowers dopamine availability and slows time. Recreational stimulants such as cocaine and methamphetamine increase the availability of dopamine and make the interval clock speed up, so that time seems to expand. Adrenaline and other stress hormones make the clock speed up, too, which may be why a second can feel like an hour during unpleasant situations. States of deep concentration or extreme emotion may flood the system or bypass it altogether; in such cases, time may seem to stand still or not exist at all. Because an attentional spike initiates the timing process, Meck thinks people with attention-deficit hyperactivity disorder might also have problems gauging the true length of intervals. The interval clock can also be trained to greater precision. Musicians and athletes know that practice improves their timing; ordinary folk can rely on tricks such as chronometric counting (“one one-thousand”) to make up for the mechanism’s deficits. Rao forbids his subjects from counting in experiments because it could activate brain centers related to language as well as timing. But counting works, he says—well enough to expose cheaters. “The effect is so

dramatic that we can tell whether they’re counting or timing based just on the accuracy of their responses.” The Somatic Sundial One of the virtues of the interval-timing stopwatch is its flexibility. You can start and stop it at will or ignore it completely. It can work subliminally or submit to conscious control. But it won’t win any prizes for accuracy. The precision of interval timers has been found to range from 5 to 60 percent. They don’t work too well if you’re distracted or tense. And timing errors get worse as an interval gets longer. That is why we rely on cell phones and wristwatches to tell time. Fortunately, a more rigorous timepiece chimes in at intervals of 24 hours. The circadian clock—from the Latin circa (“about”) and diem (“a day”)—tunes our bodies to the cycles of sunlight and darkness that are caused by the earth’s rotation. It helps to program the daily habit of sleeping at night and waking in the morning. Its influence extends much further, however. Body temperature regularly peaks in the late afternoon or early evening and bottoms out a few hours before we rise in the morning. Blood pressure typically starts to surge between 6:00 and 7:00 A.M. Secretion of the stress hormone cortisol is 10 to 20 times higher in the morning than at night. Urination and bowel movements are generally suppressed at night and then pick up again in the morning. The circadian timepiece is more like a clock than a stopwatch because it runs without the need for a stimulus from the external environment. Studies of volunteer cave dwellers and other human guinea pigs have demonstrated that circadian patterns persist even in the absence of daylight, occupational demands and caffeine. Moreover, they are expressed in every cell of the body. Confined to a petri dish under constant lighting, human cells still follow 24-hour cycles of gene activity, hormone secretion and energy production. The cycles are hardwired, and they vary by as little as 1 percent— just minutes a day.

But if light isn’t required to establish a circadian cycle, it is needed to synchronize the phase of the hardwired clock with natural day and night cycles. Like an ordinary clock that runs a few minutes slow or fast each day, the circadian clock needs to be continually reset to stay accurate. Neurologists have made great progress in understanding how daylight sets the clock. Two clusters of 10,000 nerve cells in the hypothalamus of the brain have long been considered the clock’s locus. Decades of animal studies have demonstrated that these centers, each called a suprachiasmatic nucleus (SCN), drive daily fluctuations in blood pressure, body temperature, activity level and alertness. The SCN also tells the brain’s pineal gland when to release melatonin, which promotes sleep in humans and is secreted only at night. More than 15 years ago researchers proved that dedicated cells in the retina of the eye transmit information about light levels to the SCN. These cells—a subset of those known as ganglion cells—operate completely independently of the rods and cones that mediate vision, and they are far less responsive to sudden changes in light. That sluggishness befits a circadian system. It would be no good if watching fireworks or going to a movie matinee tripped the mechanism. The SCN’s role in circadian rhythms has been reevaluated in view of other findings. Scientists had assumed that the SCN somehow coordinated all the individual cellular clocks in the body’s organs and tissues. Then, in the mid-1990s, researchers discovered four critical genes that govern circadian cycles in flies, mice and humans. These genes turned up not just in the SCN but everywhere else, too. “These clock genes are expressed throughout the whole body, in every tissue,” says Joseph Takahashi, now at the University of Texas Southwestern Medical Center. “We didn’t expect that.” More recently, researchers at Harvard University found that the expression of more than 1,000 genes in the heart and liver tissue of mice varied in regular 24-hour periods. But the genes that showed these circadian cycles differed in the two tissues, and their expression peaked in the heart at different hours than in the liver.

“They’re all over the map,” says Michael Menaker of the University of Virginia. “Some are peaking at night, some in the morning and some in the daytime.” Menaker has shown that specific feeding schedules can shift the phase of the liver’s circadian clock, overriding the light-dark rhythm followed by the SCN. When lab rats that usually ate at will were fed just once a day, for example, peak expression of a clock gene in the liver shifted by 12 hours, whereas the same clock gene in the SCN stayed locked in sync with light schedules. It makes sense that daily rhythms in feeding would affect the liver, given its role in digestion. Researchers think circadian clocks in other organs and tissues may respond to other external cues—including stress, exercise, and temperature changes—that occur regularly every 24 hours. No one is ready to dethrone the SCN: its authority over body temperature, blood pressure and other core rhythms is still secure. Yet this brain center is no longer thought to rule the peripheral clocks with an iron fist. “We have oscillators in our organs that can function independently of our oscillators in our brain,” Takahashi says. The autonomy of the peripheral clocks makes a phenomenon such as jet lag far more comprehensible. Whereas the interval timer, like a stopwatch, can be reset in an instant, circadian rhythms take days and sometimes weeks to adjust to a sudden shift in day length or time zone. A new schedule of light will slowly reset the SCN clock. But the other clocks may not follow its lead. The body is not only lagging; it’s lagging at a dozen different paces. Jet lag doesn’t last, presumably because all those different drummers are able to eventually sync up again. Shift workers, party animals, college students and other night owls face a worse chrono dilemma. They may be leading a kind of physiological double life. Even if they get plenty of shut-eye by day, their core rhythms are still ruled by the SCN—hence, the core functions continue “sleeping” at night. “You can will your sleep cycle earlier or later,” says Alfred J. Lewy of the Oregon Health & Science University. “But you can’t will your melatonin levels earlier or later, or your cortisol levels, or your body temperature.”

Meanwhile their schedules for eating and exercising could be setting their peripheral clocks to entirely different phases from either the sleep-wake cycle or the light-dark cycle. With their bodies living in so many time zones at once, it’s no wonder shift workers have an increased incidence of heart disease, gastrointestinal complaints and, of course, sleep disorders. A Clock for All Seasons Jet lag and shift work are exceptional conditions in which the innate circadian clock is abruptly thrown out of phase with the light-dark cycles or sleep-wake cycles. The same thing can happen every year, albeit less abruptly, when the seasons change. Research shows that although bedtimes may vary, people tend to get up at about the same time in the morning year-round—usually because their dogs, kids, parents or careers demand it. In the winter, at northern latitudes, that means many people wake up two to three hours before the sun makes an appearance. Their sleep-wake cycle is several time zones away from the cues they get from daylight. The mismatch between day length and daily life could explain the syndrome known as seasonal affective disorder, or SAD. In the U.S., SAD afflicts as many as one in 20 adults with depressive symptoms such as weight gain, apathy and fatigue between October and March. The condition is 10 times more common in the north than the south. Although SAD occurs seasonally, some experts suspect it is actually a circadian problem. Lewy’s work suggests that SAD patients would come out of their depression if they could get up at the natural dawn in the winter. In his view, SAD is not so much a pathology as evidence of an adaptive, seasonal rhythm in sleep-wake cycles. “If we adjusted our daily schedules according to the seasons, we might not have seasonal depression,” Lewy says. “We got into trouble when we stopped going to bed at dusk and getting up at dawn.” If modern civilization doesn’t honor seasonal rhythms, it’s partly because human beings are among the least seasonally sensitive creatures around. SAD is nothing compared to the annual cycles other animals go through: hibernation, migration, molting and

especially mating, the master metronome to which all other seasonal cycles keep time. It is possible that these seasonal cycles may also be regulated by the circadian clock, which is equipped to keep track of the length of days and nights. Darkness, as detected by the SCN and the pineal gland, prolongs melatonin signals in the long nights of winter and reduces them in the summer. “Hamsters can tell the difference between a 12-hour day, when their gonads don’t grow, and a 12-hour-15-minute day, when their gonads do grow,” Menaker says. If seasonal rhythms are so robust in other animals and if humans have the equipment to express them, then how did we ever lose them? “What makes you think we ever had them?” Menaker asks. “We evolved in the tropics.” Menaker’s point is that many tropical animals don’t exhibit dramatic patterns of annual behavior. They don’t need them, because the seasons themselves vary so little. Most tropical animals mate without regard to seasons because there is no “best time” to give birth. People, too, are always in heat. As our ancestors gained greater control of their environment over the millennia, seasons probably became an even less significant evolutionary force. But one aspect of human fertility is cyclical: women and other female primates produce eggs just once a month. The clock that regulates ovulation and menstruation is a well-documented chemical feedback loop that can be manipulated by hormone treatments, exercise and even the presence of other menstruating women. The reason for the specific duration of the menstrual cycle is unknown, though. The fact that it is the same length as the lunar cycle is a coincidence few scientists have bothered to investigate, let alone explain. No convincing link has yet been found between the moon’s radiant or gravitational energy and a woman’s reproductive hormones. In that regard, the monthly menstrual clock remains a mystery—outdone perhaps only by the ultimate conundrum, mortality. Time the Avenger

People tend to equate aging with the diseases of aging—cancer, heart disease, osteoporosis, arthritis and Alzheimer’s, to name a few—as if the absence of disease would be enough to confer immortality. Biology suggests otherwise. Modern humans in developed countries have a life expectancy of more than 70 years. The life expectancy of your average mayfly, in contrast, is a day. Biologists are just beginning to explore why different species have different life expectancies. If your days are numbered, what’s doing the counting? Comparisons within and among animal species, along with research on aging, have challenged many common assumptions about the factors that determine natural life span. The answer cannot lie solely with a species’ genetics: worker honeybees, for example, last a few months, whereas queen bees live for years. Still, genetics are important: a single-gene mutation in mice can produce a strain that lives up to 50 percent longer than usual. High metabolic rates can shorten life span, yet many species of birds, which have fast metabolisms, live longer than mammals of comparable body size. And big, slow-metabolizing animals do not necessarily outlast the small ones. The life expectancy of a parrot is about the same as a human’s. Among dogs, small breeds typically live longer than large ones. Scientists in search of the limits to human life span have traditionally approached the subject from the cellular level rather than considering whole organisms. So far the closest thing they have to a terminal timepiece is the so-called mitotic clock. The clock keeps track of cell division, or mitosis, the process by which a single cell splits into two. The mitotic clock is like an hourglass in which each grain of sand represents one episode of cell division. Just as there are a finite number of grains in an hourglass, there seems to be a ceiling on how many times normal cells of the human body can divide. In culture they will undergo 60 to 100 mitotic divisions, then call it quits. “All of a sudden they just stop growing,” says John Sedivy of Brown University. “They respire, they metabolize, they move, but they will never divide again.”

Cultured cells usually reach this state of senescence in a few months. Fortunately, most cells in the body divide much, much more slowly than cultured cells. Eventually—perhaps after 70 years or so —they, too, can get put out to pasture. “What the cells are counting is not chronological time,” Sedivy says. “It’s the number of cell divisions.” Sedivy has shown that he could squeeze 20 to 30 more cycles out of human fibroblasts by mutating a single gene. This gene encodes a protein called p21, which responds to changes in structures called telomeres that cap the ends of chromosomes. Telomeres are made of the same stuff that genes are: DNA. They consist of thousands of repetitions of a six-base DNA sequence that does not code for any known protein. Each time a cell divides, chunks of its telomeres are lost. Young human embryos have telomeres between 18,000 and 20,000 bases long. By the time senescence kicks in, the telomeres are only 6,000 to 8,000 bases long. Biologists suspect that cells become senescent when telomeres shrink below some specific length. Titia de Lange of the Rockefeller University has proposed an explanation for this link. In healthy cells, she showed, the chromosome ends are looped back on themselves like a hand tucked in a pocket. The “hand” is the last 100 to 200 bases of the telomere, which are single-stranded, not paired like the rest. With the help of more than a dozen specialized proteins, the single-stranded end is inserted into the double strands upstream for protection. If telomeres are allowed to shrink enough, “they can no longer do this looping trick,” de Lange says. Untucked, a single-stranded telomere end is vulnerable to fusion with other single-stranded ends. The fusion wreaks havoc in a cell by stringing together all the chromosomes. That could be why Sedivy’s mutated p21 cells died after they got in their extra rounds of mitosis. Other cells bred to ignore short telomeres have turned cancerous. The job of normal p21 and telomeres themselves may be to stop cells from dividing so much that they die or become malignant. Cellular senescence could actually be prolonging human life rather than spelling its doom. It

might be cells’ imperfect defense against malignant growth and certain death. “Our hope is that we’ll gain enough information from this reductionist approach to help us understand what’s going on in the whole person,” de Lange comments. For now, the link between shortened telomeres and aging is tenuous at best, although you wouldn’t know that from some of the outsized claims certain telomere enthusiasts are making. Maria Blasco, a molecular oncologist at the Spanish National Cancer Research Center in Madrid, for example, has developed a $700 blood test that she says may predict life span by measuring the length of a person’s telomeres. The test can determine biological age to within a decade, according to one consultant for the company, Life Length, that markets the test. Other experts point out that telomere length varies so much among individuals that it can’t be used as a reliable indicator of biological age. In any case, most cells do not need to keep dividing to do their job—white blood cells that fight infection and sperm precursors being obvious exceptions. Yet many older people do die of simple infections that a younger body could withstand. “Senescence probably has nothing to do with the nervous system,” Sedivy says, because most nerve cells do not divide. “On the other hand, it might very well have something to do with the aging of the immune system.” In any case, telomere loss is just one of the numerous insults cells sustain when they divide, says Judith Campisi, a professor at the Buck Institute for Research on Aging in Novato, Calif., and a cell biologist at Lawrence Berkeley National Laboratory. DNA often gets damaged when it is replicated during cell division, so cells that have split many times are more likely to harbor genetic errors than young cells. Genes related to aging in animals and people often code for proteins that prevent or repair those mistakes. And with each mitotic episode, the by-products of copying DNA build up in cell nuclei, complicating subsequent bouts of replication.
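To get a feel for the hourglass arithmetic implied by the figures quoted above (telomeres of roughly 18,000 to 20,000 bases in young embryos, senescence at roughly 6,000 to 8,000 bases, and a ceiling of 60 to 100 divisions in culture), here is a minimal sketch in Python. The per-division loss it prints is not a measured value from these studies; it is simply what those ranges imply, and the 150-base rate in the second half is a hypothetical round number used only for illustration.

```python
# Rough arithmetic from the figures quoted in the text: embryonic telomeres of
# about 18,000-20,000 bases, senescence at about 6,000-8,000 bases, and a ceiling
# of roughly 60-100 divisions in culture. Midpoints of those ranges are used here
# purely for illustration; none of the values below are measurements.

def implied_loss_per_division(start_bases, end_bases, divisions):
    """Bases of telomere that would have to be lost per division, on average."""
    return (start_bases - end_bases) / divisions

START = 19_000   # midpoint of the embryonic range
END = 7_000      # midpoint of the senescence range

for divisions in (60, 80, 100):
    loss = implied_loss_per_division(START, END, divisions)
    print(f"{divisions} divisions -> about {loss:.0f} bases lost per division")

# Turned around: a hypothetical loss rate implies a ceiling on divisions.
loss_per_division = 150
ceiling = (START - END) / loss_per_division
print(f"At {loss_per_division} bases per division, the hourglass empties after about {ceiling:.0f} divisions")
```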

“Cell division is very risky business,” Campisi observes. So perhaps it is not surprising that the body puts a cap on mitosis. And cheating cell senescence probably wouldn’t grant immortality. Once the grains of sand have fallen through the mitotic hourglass, there’s no point in turning it over again. --Originally published: Scientific American 27(2); 34-41 (Summer 2018).

The Tick Tock of the Biological Clock by Michael W. Young Editor’s Note: In October 2017, the Nobel Prize in Physiology or Medicine was awarded to Jeffrey C. Hall, Michael Rosbash and Michael W. Young for discoveries of molecular mechanisms controlling circadian rhythms. Seventeen years before, Michael W. Young wrote this account in Scientific American describing the genetic studies that identified the “molecular timepieces” that are ubiquitous throughout the animal kingdom. You have to fight the urge to fall asleep at 7:00 in the evening. You are ravenous at 3 P.M. but have no appetite when suppertime rolls around. You wake up at 4:00 in the morning and cannot get back to sleep. This scenario is familiar to many people who have flown from the East Coast of the U.S. to California, a trip that entails jumping a three-hour time difference. During a weeklong business trip or vacation, your body no sooner acclimatizes to the new schedule than it is time to return home again, where you must get used to the old routine once more. Nearly every day my colleagues and I put a batch of Drosophila fruit flies through the jet lag of a simulated trip from New York to San Francisco or back. We have several refrigerator-size incubators in the laboratory: one labeled “New York” and another tagged “San Francisco.” Lights inside these incubators go on and off as the sun rises and sets in those two cities. (For consistency, we schedule sunup at 6 A.M. and sundown at 6 P.M. for both locations.) The temperature in the two incubators is a constant, balmy 77 degrees Fahrenheit.

The flies take their simulated journey inside small glass tubes packed into special trays that monitor their movements with a narrow beam of infrared light. Each time a fly moves into the beam, it casts a shadow on a phototransistor in the tray, which is connected to a computer that records the activity. Going from New York to San Francisco time does not involve a five-hour flight for our flies: we simply disconnect a fly-filled tray in one incubator, move it to the other one and plug it in. We have used our transcontinental express to identify and study the functions of several genes that appear to be the very cogs and wheels in the works of the biological clock that controls the day-night cycles of a wide range of organisms that includes not only fruit flies but mice and humans as well. Identifying the genes allows us to determine the proteins they encode—proteins that might serve as targets for therapies for a wide range of disorders, from sleep disturbances to seasonal depression. The main cog in the human biological clock is the suprachiasmatic nucleus (SCN), a group of nerve cells in a region at the base of the brain called the hypothalamus. When light hits the retinas of the eyes every morning, specialized nerves send signals to the SCN, which in turn controls the production cycle of a multitude of biologically active substances. The SCN stimulates a nearby brain region called the pineal gland, for instance. According to instructions from the SCN, the pineal rhythmically produces melatonin, the so-called sleep hormone that is now available in pill form in many health-food stores. As day progresses into evening, the pineal gradually begins to make more melatonin. When blood levels of the hormone rise, there is a modest decrease in body temperature and an increased tendency to sleep. The Human Clock Although light appears to “reset” the biological clock each day, the day-night, or circadian, rhythm continues to operate even in individuals who are deprived of light, indicating that the activity of the SCN is innate. In the early 1960s Jürgen Aschoff, then at the Max Planck Institute of Behavioral Physiology in Seewiesen, Germany,

and his colleagues showed that volunteers who lived in an isolation bunker—with no natural light, clocks or other clues about time— nevertheless maintained a roughly normal sleep-wake cycle of 25 hours. More recently Charles Czeisler, Richard E. Kronauer and their colleagues at Harvard University have determined that the human circadian rhythm is actually closer to 24 hours—24.18 hours, to be exact. The scientists studied 24 men and women (11 of whom were in their 20s and 13 of whom were in their 60s) who lived for more than three weeks in an environment with no time cues other than a weak cycle of light and dark that was artificially set at 28 hours and that gave the subjects their signals for bedtime. They measured the participants’ core body temperature, which normally falls at night, as well as blood concentrations of melatonin and of a stress hormone called cortisol that drops in the evening. The researchers observed that even though the subjects’ days had been abnormally extended by four hours, their body temperature and melatonin and cortisol levels continued to function according to their own internal 24-hour circadian clock. What is more, age seemed to have no effect on the ticking of the clock: unlike the results of previous studies, which had suggested that aging disrupts circadian rhythms, the body-temperature and hormone fluctuations of the older subjects in the Harvard study were as regular as those of the younger group. As informative as the bunker studies are, to investigate the genes that underlie the biological clock scientists had to turn to fruit flies. Flies are ideal for genetic studies because they have short life spans and are small, which means that researchers can breed and interbreed thousands of them in the laboratory until interesting mutations crop up. To speed up the mutation process, scientists usually expose flies to mutation-causing chemicals called mutagens. The first fly mutants to show altered circadian rhythms were identified in the early 1970s by Ron Konopka and Seymour Benzer of the California Institute of Technology. These researchers fed a mutagen to a few fruit flies and then monitored the movement of

2,000 of the progeny, in part using a form of the same apparatus that we now use in our New York to San Francisco experiments. Most of the flies had a normal 24-hour circadian rhythm: the insects were active for roughly 12 hours a day and rested for the other 12 hours. But three of the flies had mutations that caused them to break the pattern. One had a 19-hour cycle, one had a 28-hour cycle, and the third fly appeared to have no circadian rhythm at all, resting and becoming active seemingly at random. Time Flies In 1986 my research group at the Rockefeller University and another led by Jeffrey Hall of Brandeis University and Michael Rosbash of the Howard Hughes Medical Institute at Brandeis found that the three mutant flies had three different alterations in a single gene named period, or per, which each of our teams had independently isolated two years earlier. Because different mutations in the same gene caused the three behaviors, we concluded that per is somehow actively involved both in producing circadian rhythm in flies and in setting the rhythm’s pace. After isolating per, we began to question whether the gene acted alone in controlling the day-night cycle. To find out, two postdoctoral fellows in my laboratory, Amita Sehgal and Jeffrey Price, screened more than 7,000 flies to see if they could identify other rhythm mutants. They finally found a fly that, like one of the per mutants, had no apparent circadian rhythm. The new mutation turned out to be on chromosome 2, whereas per had been mapped to the X chromosome. We knew this had to be a new gene, and we named it timeless, or tim. But how did the new gene relate to per? Genes are made of DNA, which contains the instructions for making proteins. DNA never leaves the nucleus of the cell; its molecular recipes are read out in the form of messenger RNA, which leaves the nucleus and enters the cytoplasm, where proteins are made. We used the tim and per genes to make PER and TIM proteins in the laboratory. In collaboration with Charles Weitz of Harvard Medical School, we

observed that when we mixed the two proteins, they stuck to each other, suggesting that they might interact within cells. In a series of experiments, we found that the production of PER and TIM proteins involves a clocklike feedback loop. The per and tim genes are active until concentrations of their proteins become high enough that the two begin to bind to each other. When they do, they form complexes that enter the nucleus and shut down the genes that made them. After a few hours enzymes degrade the complexes, the genes start up again, and the cycle begins anew.
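The loop just described can be captured in a toy simulation: a gene that stays active until its slowly accumulating protein product crosses a threshold and shuts it off, after which degradation lets the cycle restart. The short Python sketch below is only a cartoon of that delayed negative-feedback logic; the delay, production and degradation values are invented and merely tuned to give a period of roughly a day, not measurements of real PER or TIM.

```python
# Toy delayed negative-feedback loop: the "gene" is active while the protein level
# from `delay` hours ago is below a threshold (standing in for the time the proteins
# need to accumulate, pair up and enter the nucleus); above the threshold, synthesis
# stops and the protein simply decays until the loop switches back on.

def simulate(hours=96, dt=0.1, delay=8.0, threshold=1.0,
             production=0.5, degradation=0.15):
    steps = int(hours / dt)
    lag = int(delay / dt)
    protein = [0.0] * (steps + 1)
    for t in range(steps):
        past = protein[t - lag] if t >= lag else 0.0
        synthesis = production if past < threshold else 0.0   # gene on or off
        protein[t + 1] = protein[t] + dt * (synthesis - degradation * protein[t])
    return protein

levels = simulate()
for hour in range(0, 97, 6):          # print one sample every 6 simulated hours
    print(f"t = {hour:2d} h   protein level ~ {levels[hour * 10]:.2f}")
```

Printed over four simulated days, the protein level rises and falls about once every 24 hours, mirroring the description of the proteins accumulating, silencing their own genes and then being cleared so the cycle can begin anew.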


Moving the Hands of Time Once we had found two genes that functioned in concert to make a molecular clock, we began to wonder how the clock could be reset. After all, our sleep-wake cycles fully adapt to travel across any number of time zones, even though the adjustment might take a couple of days or weeks.

That is when we began to shuttle trays of flies back and forth between the “New York” and “San Francisco” incubators. One of the first things we and others noticed was that whenever a fly was moved from a darkened incubator to one that was brightly lit to mimic daylight, the TIM proteins in the fly’s brain disappeared—in a matter of minutes. Even more interestingly, we noted that the direction the flies “traveled” affected the levels of their TIM proteins. If we removed flies from “New York” at 8 P.M. local time, when it was dark, and put them into “San Francisco,” where it was still light at 5 P.M. local time, their TIM levels plunged. But an hour later, when the lights went off in “San Francisco,” TIM began to reaccumulate. Evidently the flies’ molecular clocks were initially stopped by the transfer, but after a delay they resumed ticking in the pattern of the new time zone. In contrast, flies moved at 4 A.M. from “San Francisco” experienced a premature sunrise when they were placed in “New York,” where it was 7 A.M. This move also caused TIM levels to drop, but this time the protein did not begin to build up again because the molecular clock was advanced by the time-zone switch. We learned more about the mechanism behind the different molecular responses by examining the timing of the production of tim RNA. Levels of tim RNA are highest at about 8 P.M. local time and lowest between 6 A.M. and 8 A.M. A fly moving at 8 P.M. from “New York” to “San Francisco” is producing maximum levels of tim RNA, so protein lost by exposure to light in “San Francisco” is easily replaced after sunset in the new location. A fly traveling at 4 A.M. from “San Francisco” to “New York,” however, is making very little tim RNA before departure. What the fly experiences as a premature sunrise eliminates TIM and allows the next cycle of production to begin with an earlier schedule.
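The delay-versus-advance logic of that explanation can be written out explicitly. In the toy Python sketch below, the tim RNA rhythm is drawn as a smooth curve that peaks near 8 P.M. and bottoms out in the early morning (an invented shape used only for illustration), and a pulse of light is classified by whether enough tim RNA remains for the destroyed TIM protein to be rebuilt afterward.

```python
import math

def tim_rna_level(subjective_hour):
    """Cartoon tim RNA rhythm: peak near 8 P.M. (hour 20), trough near 8 A.M."""
    return 0.5 + 0.5 * math.cos(2 * math.pi * (subjective_hour - 20) / 24)

def response_to_light(subjective_hour, rna_threshold=0.5):
    """Classify a light pulse at the fly's subjective time as a delay or an advance."""
    if tim_rna_level(subjective_hour) >= rna_threshold:
        # Plenty of tim RNA: TIM is quickly rebuilt once darkness returns, so the
        # cycle restarts from an earlier point in its course -- a phase delay.
        return "phase delay"
    # Little tim RNA: the destroyed TIM is not replaced, the current cycle ends
    # early and the next one starts sooner -- a phase advance.
    return "phase advance"

# The two transfers described in the text:
print("Fly leaving New York at 8 P.M., hit by San Francisco daylight:",
      response_to_light(20))   # expected: phase delay
print("Fly leaving San Francisco at 4 A.M., hit by New York sunrise:",
      response_to_light(4))    # expected: phase advance
```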

Not Just Bugs Giving flies jet lag has turned out to have direct implications for understanding circadian rhythm in mammals, including humans. In 1997 researchers led by Hajime Tei of the University of Tokyo and Hitoshi Okamura of Kobe University in Japan—and, independently, Cheng Chi Lee of Baylor College of Medicine—isolated the mouse and human equivalents of per. Another flurry of work, this time involving many laboratories, turned up mouse and human forms of tim in 1998. And the genes were active in the suprachiasmatic nucleus. Studies involving mice also helped to answer a key question: What turns on the activity of the per and tim genes in the first place? In 1997 Joseph Takahashi of the Howard Hughes Medical Institute at Northwestern University and his colleagues isolated a gene they called Clock that when mutated yielded mice with no discernible circadian rhythm. The gene encodes a transcription factor, a protein that in this case binds to DNA and allows it to be read out as messenger RNA. Shortly thereafter a fly version of the mouse Clock gene was isolated, and various research teams began to introduce combinations of the per, tim and Clock genes into mammalian and fruit fly cells. These experiments revealed that the CLOCK protein targets the per gene in mice and both the per and tim genes in flies. The system had come full circle: in flies, whose clocks are the best understood, the CLOCK protein—in combination with a protein encoded by a gene called cycle—binds to and activates the per and tim genes, but only if no PER and TIM proteins are present in the nucleus. These four genes and their proteins constitute the heart of the biological clock in flies, and with some modifications they appear to form a mechanism governing circadian rhythms throughout the animal kingdom, from fish to frogs, mice to humans. Recently Steve Reppert’s group at Harvard and Justin Blau in my laboratory have begun to explore the specific signals connecting the mouse and fruit fly biological clocks to the timing of various behaviors, hormone fluctuations and other functions. It seems that some output genes are turned on by a direct interaction with the CLOCK protein. PER and TIM block the ability of CLOCK to turn on these genes at the same time as they are producing the oscillations

of the central feedback loop—setting up extended patterns of cycling gene activity. An exciting prospect for the future involves the recovery of an entire system of clock-regulated genes in organisms such as fruit flies and mice. It is likely that previously uncharacterized gene products with intriguing effects on behavior will be discovered within these networks. Perhaps one of these, or a component of the molecular clock itself, will become a favored target for drugs to relieve jet lag, the side effects of shift work, or sleep disorders and related depressive illnesses. Adjusting to a trip from New York to San Francisco might one day be much easier. --Originally published: Scientific American 282(3); 64-71 (March 2000).

SECTION 3 Intuition

Without a Thought by Christof Koch Sometimes a solution just appears out of nowhere. You bring your multipage spreadsheet to the finance department, and within seconds the accountant tells you something isn’t quite right without being able to say what. You’re perched on a narrow ledge halfway up Half Dome in Yosemite Valley, 1,000 feet above deck, searching for the continuation of the climb on the granite wall that appears featureless, when your senior climbing partner quickly points to a tiny series of flakes: “Trust me, this is it.” Understanding computer code, deciphering a differential equation, diagnosing a tumor from the shadowy patterns on an x-ray image, telling a fake from an authentic painting, knowing when to hold and when to fold in poker. Experts decide in a flash, without thought. Intuition is the name we give to the uncanny ability to quickly and effortlessly know the answer, unconsciously, either without or well before knowing why. The conscious explanation comes later, if at all, and involves a much more deliberate process. Intuition arises within a circumscribed cognitive domain. It may take years of training to develop, and it does not easily transfer from one domain of expertise to another. Chess mastery is useless when playing bridge. Professionals, who may spend a lifetime honing their skills, are much in demand for their proficiency. Let us consider a series of elegant experiments in functional brain imaging that finger one brain structure as being centrally involved in intuition. Shogi is a Japanese strategy game played on a nine-by-nine board, with two sets of 20 distinct pieces facing each other. It is much more complex than chess, given that captured pieces can be

dropped into an empty position anywhere on the board at the discretion of the capturer. This rule multiplies the number of possible moves available at any point in the game and prevents the steady attrition of the two opposing armies that face off in a chess match. Keiji Tanaka of the RIKEN Brain Science Institute outside Tokyo led a group of cognitive neuroscientists who studied the brains of shogi players, using functional MRI to search for the neural signatures of proficiency. First, subjects inside the scanner looked at drawings of shogi boards taken either from tournament games or from randomly shuffled board positions. They also looked at sketches that had nothing to do with shogi: games of chess and Chinese chess, as well as pictures of faces and houses. In professional players, pictures of board positions taken from real shogi games activated a piece of cortex, the precuneus in the parietal lobe (located at the top of the brain toward the back), much more strongly than any of the other categories of pictures. That is, a region of their parietal cortex read out certain perceptual features associated with shogi games and distinguished them from random board positions. Experts see configurations of pieces, lines of control, a weakened defense or an imminent attack—patterns that amateurs do not notice. In a second experiment, Tanaka and his group presented players with checkmate-like shogi puzzles while they lay in the scanner. Subjects had to find the next move that would lead, inexorably, to the capture of the king. They had to do this within one second, pushing them to rely on their intuition because there was no time to analyze the various moves, countermoves, counter-countermoves, and so on. When they controlled for confounding cognitive factors, the scientists found nothing activated in the cortex. They did, however, isolate a small region in the front of the caudate nucleus, under the cortex, that reliably and very distinctly turned on in professional shogi players. The caudate was less reliably and less prominently activated when amateur players tried to find the correct move. And when subjects had up to eight seconds to more deliberately search for the best solution, this subcortical region remained silent.

Special-Purpose Hardware This elegant finding links intuition with the caudate nucleus, which is part of the basal ganglia—a set of interlinked brain areas responsible for learning, executing habits and automatic behaviors. The basal ganglia receive massive input from the cortex, the outer, rind-like surface of the brain. Ultimately these structures project back to the cortex, creating a series of cortical–basal ganglia loops. In one interpretation, the cortex is associated with conscious perception and the deliberate and conscious analysis of any given situation, novel or familiar, whereas the caudate nucleus is the site where highly specialized expertise resides that allows you to come up with an appropriate answer without conscious thought. In computer engineering parlance, a constantly used class of computations (namely those associated with playing a strategy game) is downloaded into special-purpose hardware, the caudate, to lighten the burden of the main processor, the cortex. So far these experiments relate the task of generating shogi moves to brain activity. Of course, we are not allowed to infer causation from correlation. Just because two things are associated does not imply that one causes the other. As research progresses, the causal structure of intuition and brain activity could be probed by inhibiting or blocking the caudate nucleus to see whether doing so prevents the rapid generation of correct shogi moves. Regrettably there are no reliable technologies to turn bits of brain deep inside the skull on and off in a way conducive to the long-term health of the subject. Instead Tanaka and his collaborators wondered whether novices who learn to play shogi wire up their caudate nucleus in a similar manner to that of experts. They recruited naive volunteers and subjected them to an intensive 15-week regime of daily play on a simplified computer version of the game. Motivated by prize money, the subjects improved over the approximately 100 days of training, during which they accumulated total practice time ranging from 37 to 107 hours. Asking subjects in these experiments to quickly come up with the best next move led to increased cortical activity, but that activity did

not change over the training period, nor did it correlate with the fraction of correct responses. In contrast, changes in blood flow in the front of the caudate nucleus evolved over the course of training in parallel with better performance. Furthermore, the strength of the caudate signal at the end of the training correlated with how much subjects improved over time. The more the subject learned, the larger the caudate signal. It appears that the site of fast, automatic, unconscious cognitive operations— from where a solution materializes all of a sudden—lies in the basal ganglia, linked to but apart from the cortex. These studies provide a telling hint of what happens when the brain brings the output of unconscious processing into awareness. What remains unclear is why furious activity in the caudate should remain unconscious while exertions in some part of the cortex give rise to conscious sensation. Finding an answer may illuminate the central challenge—why excitable matter produces feelings at all. --Originally published: Scientific American Mind 26(3); 25-26 (May/June 2015).

The Powers and Perils of Intuition by David G. Myers On an April morning in 2001 Christopher Bono, a clean-cut, well-mannered 16-year-old, approached Jackie Larsen in Grand Marais, Minn. His car had broken down, and he needed a ride to meet friends in Thunder Bay. As Larsen talked with him, she came to feel that something was very wrong. “I am a mother, and I have to talk to you like a mother,” she said. “I can tell by your manners that you have a nice mother.” Bono replied: “I don’t know where my mother is.” After Bono left, she called the police and suggested they trace his license plates. On July 1, 2002, a Russian Bashkirian Airlines jet’s collision-avoidance system instructed its pilot to ascend when a DHL cargo jet approached in the Swiss-controlled airspace over southern Germany. Nearly simultaneously, a Swiss air traffic controller—whose computerized air traffic system was down—offered an instant human judgment: descend. The Russian pilot overrode the software, and the plane began to angle downward. Larsen’s intuition was prescient. Police traced the car to Bono’s mother, then went to her apartment, where they found her battered body in the bathtub. Bono was charged with first-degree murder. The pilot’s instinct was also fateful, but tragically so. The two planes collided, killing 71 people. Such stories make us wonder: When is intuition powerfully helpful? When is it perilous? And what underlies those differences? “Buried deep within each and every one of us, there is an instinctive, heart-felt awareness that provides—if we allow it to—the

most reliable guide,” Britain’s Prince Charles has said. But bright people who rely on intuition also go astray. “I’m a gut player. I rely on my instincts,” President George W. Bush explained to Bob Woodward of the Washington Post regarding his decision to launch the Iraq war. As popular books on “intuitive healing,” “intuitive learning,” “intuitive managing” and “intuitive trading” urge, should we listen more to our “intuitive voice” and exercise our “intuitive muscle”? Or should we instead recall King Solomon’s wisdom: “He that trusteth in his own heart is a fool”? These questions are both deep and practical. They go to the heart of our understanding of the human mind. And the answers could provide a valuable guide in our everyday lives when we must decide whether to follow gut instinct or use evidence-based rationality— such as when interviewing job candidates, investing money and assessing integrity. As studies over the past decade have confirmed, our brains operate with a vast unconscious mind that even Freud never suspected. Much of our information processing occurs below the radar of our awareness—off stage, out of sight. The extent to which “automatic nonconscious processes pervade all aspects of mental and social life” is a difficult truth for people to accept, notes Yale University psychologist John Bargh. Our consciousness naturally assumes that its own intentions and choices rule our life. But consciousness overrates its control. In reality, we fly through life mostly on autopilot. As Galileo “removed the earth from its privileged position at the center of the universe,” so Bargh sees automatic thinking research “removing consciousness from its privileged place.” By studying the forces that shape our intuitions, scientists have revealed how this hidden mind feeds not only our insight and creativity but also our implicit prejudices and irrational fears. What Is Intuition? Consider the two-track mind revealed by modern cognitive science. In his 2002 Nobel Prize lecture, psychologist Daniel Kahneman noted that Track (“System”) 1—our behind-the-scenes, intuitive mind—is fast, automatic, effortless, associative, implicit (not

available to introspection) and often emotionally charged. Track 2— our familiar, conscious (explicit) mind—is deliberate, sequential and rational, and it requires effort to employ. Two phenomena are thought to shape the processing performed by Track 1. Kahneman and his late collaborator Amos Tversky, two Magellans of the mind, proposed one influence. They theorized that humans have evolved mental shortcuts, called heuristics, which enable efficient, snap judgments. “Fast and frugal” heuristics are like perceptual cues that usually work well but can occasionally trigger illusions or misperceptions. We intuitively assume that fuzzy-looking objects are farther away than clear ones, and usually they are. But on a foggy morning that car up ahead may be closer than it looks. A second influence on our intuitions comes from learned associations, which automatically surface as feelings that guide our judgments. Our life history provides us with a great reservoir of experiences that inform our actions. Thus, if a stranger looks like a person who previously harmed or threatened us, we may—without consciously recalling the earlier experience—react warily. In a 1985 experiment led by psychologist Pawel Lewicki of the University of Tulsa, one group of students was initially split about 50–50 in choosing which of two pictured women looked friendlier. Other students, having interacted previously with a warm, sociable experimenter who resembled one of the women, preferred that person by a six-to-one margin. In a follow-up, the experimenter acted unfriendly toward half the subjects. When these subjects later had to turn in their data to one of two women, they nearly always avoided the one who resembled the unfriendly experimenter. Intuition’s Powers Our explicit and implicit minds interact. When speaking, for example, we communicate intended meaning with instantly organized strings of words that somehow effortlessly spill out of our mouth. We just know, without knowing how we know, to articulate the word “pad” rather than “bad” or to say “a big, red barn” rather than “a red, big barn.” Studies of “automatic processing,” “subliminal

priming,” “implicit memory” and instant emotions unveil our intuitive capacities. Blindsight. A striking example of our two-track mind comes from studies of D.F., a woman who suffered carbon monoxide–related brain damage that left her unable to recognize objects. Psychologists Melvyn Goodale of the University of Western Ontario and David Milner of Durham University in England found that, functionally, D.F. is only partly blind. Asked to slip a postcard into a vertical or horizontal mail slot, she can intuitively do so without error. Though unable to report the width of a block in front of her, she will grasp it with just the right finger-thumb distance. Thanks to her “sight unseen,” she operates as if she has a “zombie within,” report Goodale and Milner. We commonly think of our vision as one system that controls our visually guided actions. Actually, vision is two systems, each with its own centers in the brain. A “visual perception track” enables us, as Goodale and Milner put it, “to create the mental furniture that allows us to think about the world”—that is, to recognize things and plan actions. A “visual action track” guides our moment-to-moment actions. On special occasions, the two can conflict. For example, we consciously perceive a protruding face in the “hollow face illusion” (in which a concave face appears convex). At the same time, our hand, guided by the subconscious, will unhesitatingly reach inside the mask when we are asked to flick off a buglike target on the face. Reading “thin slices.” In their widely publicized studies from the early 1990s, social psychologist Nalini Ambady, then at Harvard University, and psychologist Robert Rosenthal of the University of California, Riverside, have shown that we often form positive or negative impressions of people in a mere “blink” or “thin slice” of time. After subjects observed three two-second video clips of professors teaching, their teacher ratings predicted the actual end-of-the-term ratings by the professors’ own students. To get a sense of someone’s energy and warmth, the researchers found, a mere six seconds will often do.

Even micro slices can be revealing, as Bargh has found in a series of studies conducted from the late 1980s to the present. When he flashed an image of a face or object for just two tenths of a second, people evaluated it instantly. “We’re finding that everything is evaluated as good or bad within a quarter of a second,” Bargh said in 1998. Thanks to pathways that run from the eye to the brain’s rapid-response emotional-control centers—bypassing the thinking part of the brain, the cortex—we often feel before we analyze. There is presumed biological wisdom to such instant feelings. When our ancestors confronted strangers, those who speedily and accurately discriminated anger, sadness, fear and happiness were more likely to survive and leave descendants. And there appears to be a sliver of truth in the presumption that women may, on average, slightly exceed men at quickly reading others’ emotions, reports Judith Hall of Northeastern University, based on an analysis of 125 studies. Shown a silent two-second video of an upset woman, for example, women, more accurately than men, intuit that she is discussing her divorce rather than criticizing someone. Women also have an edge in spotting lies and in discerning whether a man and a woman are genuinely romantic or are a posed, phony couple. Intuitive expertise. If experience informs our intuition, then as we learn to associate cues with particular feelings, many judgments should become automatic. Driving a car initially requires concentration but with practice becomes second nature; one’s hands and feet seem to do it intuitively, while the conscious mind is elsewhere. Studies of learned professional expertise reveal a similarly acquired automaticity. Rather than wending their way through a decision tree, experienced car mechanics and physicians will often, after a quick look and listen, recognize the problem. After a mere glance at a chessboard, masters (who may have 50,000 patterns stored in memory) can play speedy “blitz chess” with little performance decline. Experienced Japanese chicken sexers use complex pattern recognition to separate up to 1,000 newly hatched female pullets and look-alike male cockerels an hour, with near-

perfect accuracy. But all these experts are hard-pressed to explain how they do it. Intuition, said Herbert Simon, another Nobel laureate psychologist, “is nothing more and nothing less than recognition.” Experiments demonstrate that we are all capable of such “nonconscious learning.” In Lewicki’s research, people have learned to anticipate the computer screen quadrant in which a character will appear next, even before being able to articulate the underlying rule. In recent experiments at the University of Erfurt in Germany, Tilmann Betsch of the University of Heidelberg and his colleagues deluged people with information about the performance of various stock shares over time. Although the participants were unable to recall the return distributions afterward, their intuitive feeling about each stock “revealed a remarkable degree of sensitivity” to its performance. In experiments conducted during the 1980s and 1990s, psychologist Timothy D. Wilson of the University of Virginia learned that gut feelings have also predicted, better than rationally explained preferences, the future of people’s romantic relationships and their satisfaction with art posters. Sometimes the heart has its reasons. University of Amsterdam psychologist Ap Dijksterhuis and his colleagues confirmed the surprising powers of unconscious thought in recent experiments that showed people complex information about potential apartments, roommates or art posters. The researchers invited some participants to state their immediate preference after reading, say, a dozen pieces of information about each of four apartments. A second group, given several minutes to analyze the information consciously, tended to make slightly smarter decisions. But wisest of all, in study after study, was a third group, whose attention was distracted for a time—enabling the subjects’ minds to process the complex information unconsciously and to achieve more organized and crystallized judgments, with more satisfying results. Faced with complex decisions involving many factors, the best advice may indeed be to take our time—to “sleep on it”—and to await the intuitive result of our unconscious processing. Intuition’s Perils

So, just by living, we acquire intuitive expertise that enables quick and effortless judgments and actions. Yet psychological science is replete with examples of smart people making predictable and sometimes costly intuitive errors. They occur when our experience has exposed us to an atypical sample or when a quick and dirty heuristic leads us astray. After watching a basketball team overwhelm weak opponents, we may—thinking the team invincible— be stunned when it is overwhelmed by a strong opponent. Or, make your own snap judgment with this quick quiz: In English words, does the letter k appear more often as the first or third letter? For most people, words beginning with k are more immediately available in memory. Thus, using the “availability heuristic,” they assume that k occurs more frequently in the first position. Actually, k appears two to three times more often in the third position. Intuitive prejudice. After actor Mel Gibson’s drunken anti-Semitic tirade during a traffic arrest, after comedian Michael Richards’s vile racial response to a black heckler, and after New York City police officers in two incidents killed unarmed black residents with hailstorms of bullets, each perpetrator reassured us that he was not racist. At the conscious, explicit attitude level, they may well be right. But their (and our) unconscious, implicit attitudes—which typically manifest wariness toward those unfamiliar to us or those who resemble people with whom we have negative past associations— may not agree. And so it is that people may exhibit a primitive, automatic dislike or fear of people for whom they express sincere respect and appreciation. And whereas our explicit attitudes may predict our deliberate, intentional actions, our slower-to-change implicit attitudes may erupt in our spontaneous feelings and outbursts. Various experiments have briefly flashed words or faces that “prime” (automatically activate) stereotypes for some racial, gender or age group. Project Implicit, a collaboration among researchers at Harvard, the University of Virginia and the University of Washington, probes the results. Without the participants’ awareness, their activated stereotypes often bias their behavior. When primed with a black rather than white face, people may react with more hostility to

an experimenter’s annoying request. And they more often think of guns: they more quickly recognize a gun or mistake tools, such as a wrench, for a gun. Even the most seemingly tolerant, egalitarian white people will take longer to identify pleasant words (such as “peace” and “paradise”) as “good” when associated with black rather than white faces. Moreover, the more strongly people exhibit such implicit prejudice, the readier they are to perceive anger in black faces. If aware of a gap between how we should feel and how we intuitively do feel, self-conscious people may try to inhibit their automatic responses. Overcoming what prejudice researcher Patricia G. Devine of the University of Wisconsin– Madison calls “the prejudice habit” is not easy. If we find ourselves reacting with kneejerk presumptions or feelings, we should not despair, she advises; that is not unusual. It is what we do with that awareness that matters. Do we let those feelings hijack our behavior? Or do we compensate by monitoring and correcting our behavior? Intuitive fears. This much is beyond doubt: we often fear the wrong things. With images of 9/11 indelibly in mind, many people experienced heightened anxiety about flying. But our fears were misaligned with the facts. The National Safety Council reports that from 2001 to 2003 Americans were, mile for mile, 37 times more likely to die in a passenger vehicle than on a commercial flight. For the majority of air travelers, the most dangerous parts of the journey are the drives to and from the airport. In a late 2001 essay I calculated that if Americans flew 20 percent less frequently and instead drove half those unflown miles, about 800 more people would die in traffic accidents during the next year. In a follow-up article, psychologist Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin confirmed that the last three months of 2001 indeed produced an excess 353 American traffic fatalities. From their graves, the 9/11 terrorists were still killing us. And they continued to instill fear. “We’re striking terrorists abroad so we do not have to face them here at home,” Bush said on a visit

to Holland, Mich., my picturesque Midwestern town. “Today’s terrorists can strike at any place, at any time and with virtually any weapon,” echoed Homeland Security. We hear. In a 2006 Gallup poll 45 percent of Americans said they were “very” or “somewhat” worried that they or a family member would become a terrorist victim. Nevertheless, the odds that you or I will be victimized by the next terrorist incident are infinitesimal. Even in 2001, the year more than 2,900 perished during the attacks on the World Trade Center and the Pentagon, the average American was 10 times more likely to die in a car accident and 100 times more likely to die a slow smoking-related death. Why do we so often fear the wrong things? Why do so many smokers (whose habits shorten their lives, on average, by about five years) worry before flying (which, averaged across people, shortens life by one day)? Why do we fear violent crime more than obesity and clogged arteries? Why have most women feared breast cancer more than heart disease, which is more lethal? Why do we fear tragic but isolated terrorist acts more than the future’s omnipresent weapon of mass destruction: global climate change? In a nutshell, why do we fret about remote possibilities while ignoring higher probabilities? Psychological science has identified four factors that feed our risk intuitions: ■ We fear what our ancestral history has prepared us to fear. With our old brain living in a new world, we are disposed to fear confinement and heights, snakes and spiders, and humans outside our tribe. ■ We fear what we cannot control. Behind the wheel of our car, but not in airplane seat 17B, we feel control. ■ We fear what is immediate. Smoking’s lethality and the threats of rising seas and extreme weather are in the distant future. The airplane take-off is now. ■ We fear threats readily available in memory. If a surface-to-air missile brings down a single American airliner, the result—thanks to the availability heuristic—will be traumatic for the airline industry.

Given the difficulty in grasping the infinitesimal odds of its being (among 11 million annual airline flights) the plane that we are on, probabilities will not persuade us. Intuitive fears will hijack the rational mind. For these reasons, we fear too little those things that claim lives undramatically (smoking quietly kills 400,000 Americans annually) and too much those things that kill in spectacular bunches. By checking our intuitive fears against the facts, with mindfulness of the realities of how humans die, we can prepare for tomorrow’s biggest dangers and deprive terrorists of their biggest weapon: exaggerated fear. In experiments presented at the 2007 American Association for the Advancement of Science meeting, cognitive psychologist Paul Slovic of the University of Oregon observed a parallel tendency to feel proportionately little concern for the many victims of genocide and greater moral concern for dramatically portrayed individual victims. In collaboration with behavioral psychologists Deborah Small of the University of Pennsylvania and George Loewenstein of Carnegie Mellon University, Slovic also found that people were more willing to contribute money to support a single starving African child than to support many such children. Moreover, donations declined sharply when the child’s image was accompanied by a statistical summary of the millions of needy children like her in other African countries. “The numbers appeared to interfere with people’s feelings of compassion toward the young victim,” Slovic noted. Although it may be true that “the mark of a civilized human is the capacity to read a column of numbers and weep” (as Bertrand Russell allegedly said), the logical Track 2 mind is overridden by the feeling-based Track 1 mind. Mother Teresa spoke for most people: “If I look at the mass, I will never act. If I look at the one, I will.” So, intuition—fast, automatic, unreasoned thought and feeling— harvests our experience and guides our lives. Intuition is powerful, often wise, but sometimes perilous, and especially so when we overfeel and underthink. Today’s cognitive science enhances our appreciation for intuition but also reminds us to check it against

reality. Smart, critical thinking often begins as we listen to the creative whispers of our vast unseen mind and builds as we evaluate evidence, test conclusions and plan for the future. --Originally published: Scientific American Mind 18(3); 24-31 (June/July 2007).

Can We Rely on Our Intuition? by Laura Kutsch We face decisions all day long. Intuition, some believe, is an ability that can be trained and can play a constructive role in decision-making. “I go with my gut feelings,” says investor Judith Williams. Sure, you might think, “so do I”—if the choice is between chocolate and vanilla ice cream. But Williams is dealing with real money in the five and six figures. Williams is one of the lions on the program The Lions’ Den, a German television show akin to Shark Tank. She and other participants invest their own money in business ideas presented by contestants. She is not the only one who trusts her gut. Intuition, it seems, is on a roll: bookstores are full of guides advising us how to heal, eat or invest intuitively. They promise to unleash our inner wisdom and strengths we do not yet know we have. But can we really rely on intuition, or is it a recipe for failure? Although researchers have been debating the value of intuition in decision-making for decades, they continue to disagree. A Source of Error? Intuition can be thought of as insight that arises spontaneously without conscious reasoning. Daniel Kahneman, who won a Nobel prize in economics for his work on human judgment and decision-making, has proposed that we have two different thought systems: system 1 is fast and intuitive; system 2 is slower and relies on reasoning. The fast system, he holds, is more prone to error. It has its place: it may increase the chance of survival by enabling us to

anticipate serious threats and recognize promising opportunities. But the slower thought system, by engaging critical thinking and analysis, is less susceptible to producing bad decisions. Kahneman, who acknowledges that both systems usually operate when people think, has described many ways that the intuitive system can cloud judgment. Consider, for example, the framing effect: the tendency to be influenced by the way a problem is posed or a question is asked. In the 1980s Kahneman and his colleague Amos Tversky presented a hypothetical public health problem to volunteers and framed the set of possible solutions in different ways to different volunteers. In all cases, the volunteers were told to imagine that the U.S. was preparing for an outbreak of an unusual disease expected to kill 600 people and that two alternative programs for combating the disease had been proposed. For one group, the choices were framed by Tversky and Kahneman in terms of gains—how many people would be saved: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved. The majority of volunteers selected the first option, Program A. For another group, the choices were framed in terms of losses—how many people would die: If Program C is adopted, 400 people will die. If Program D is adopted, there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die. In this case, the vast majority of volunteers were willing to gamble and selected the second option, Program D. In fact, the options presented to both groups were the same: The first program would save 200 people and lose 400. The second program offered a one-in-three chance that everyone would live and a two-in-three chance that everyone would die. Framing the alternatives in terms of lives saved or lives lost is what made the difference. When choices are framed in terms of gains, people often become risk-averse, whereas when choices are framed in terms of losses, they often become more willing to take risks.
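Because the equivalence of the two framings is easy to lose in the wording, here is the arithmetic spelled out in a short Python sketch; the population of 600 and the probabilities are exactly those of the scenario above, and the labels are merely shorthand for the four programs.

```python
# Expected outcomes of the four programs in the Tversky-Kahneman disease scenario.
# The point is simply that the "gain" and "loss" framings describe the same gamble.

POPULATION = 600

def expected_saved(outcomes):
    """outcomes: list of (probability, number_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

programs = {
    "A (gain frame: 200 saved for sure)":       [(1.0, 200)],
    "B (gain frame: 1/3 chance all saved)":     [(1/3, 600), (2/3, 0)],
    "C (loss frame: 400 die for sure)":         [(1.0, POPULATION - 400)],
    "D (loss frame: 1/3 chance nobody dies)":   [(1/3, 600), (2/3, 0)],
}

for name, outcomes in programs.items():
    saved = expected_saved(outcomes)
    print(f"Program {name}: expected {saved:.0f} saved, {POPULATION - saved:.0f} lost")
```

Every program works out to an expected 200 people saved and 400 lost; only the description changes.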

Intuition’s Benefits Other cognitive scientists argue that intuition can lead to effective decision-making more commonly than Kahneman suggests. Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin is among them. He, too, says that people rarely make decisions on the basis of reason alone, especially when the problems faced are complex. But he thinks intuition’s merit has been vastly underappreciated. He views intuition as a form of unconscious intelligence. Intuitive decisions can be grounded in heuristics: simple rules of thumb. Heuristics screen out large amounts of information, thereby limiting how much needs to be processed. Such rules of thumb may be applied consciously, but in general we simply follow them without being aware that we are doing so. Although they can lead to mistakes, as Kahneman points out, Gigerenzer emphasizes that they can be based on reliable information while leaving out unnecessary information. For example, an individual who wants to buy a good pair of running shoes might bypass research and brain work by simply purchasing the same running shoes used by an acquaintance who is an experienced runner. In 2006 a paper by Ap Dijksterhuis and his colleagues, then at the University of Amsterdam, came to a similarly favorable view of intuition’s value. The researchers tested what they called the “deliberation without attention” hypothesis: although conscious thought makes the most sense for simple decisions (for example, what size skillet to use), it can actually be detrimental when considering more complex matters, such as buying a house. In one of their experiments, test subjects were asked to select which of four cars was the best, taking into account four characteristics, among them gas consumption and luggage space. One set of subjects had four minutes to think about the decision;
another set was distracted by solving brainteasers. The distracted group made the wrong choice (according to the researchers’ criteria for the best car) more often than those who were able to think without being distracted. But if participants were asked to assess 12 characteristics, the opposite happened: undisturbed reflection had a negative effect on decision-making; only 25 percent selected the best car. In contrast, 60 percent of the subjects distracted by brainteasers got it right. Investigators have been unable to replicate these findings, however. And in a 2014 review Ben R. Newell of the University of New South Wales and David R. Shanks of University College London concluded that the effect of intuition has been overrated by many researchers and that there is little evidence that conscious thought arrives at worse solutions in complex situations. What about Real Life? Of course, problems in the real world can be considerably more complicated than the artificially constructed ones often presented in laboratory experiments. In the late 1980s this difference sparked the Naturalistic Decision-Making movement, which seeks to determine how people make decisions in real life. With questionnaires, videos and observations, it studies how firefighters, nurses, managers and pilots use their experience to deal with challenging situations involving time pressure, uncertainty, unclear goals and organizational constraints. Researchers in the field found that highly experienced individuals tend to compare patterns when making decisions. They are able to recognize regularities, repetitions and similarities between the information available to them and their past experiences. They then imagine how a given situation might play out. This combination enables them to make relevant decisions quickly and competently. It further became evident that the certainty of the decider did not necessarily increase with an increase in information. On the contrary: too much information can prove detrimental.

Gary Klein, one of the movement’s founders, has called pattern matching “the intuitive part” and mental simulation “the conscious, deliberate and analytical part.” He has explained the benefits of the combination this way: “A purely intuitive strategy relying only on pattern matching would be too risky because sometimes the pattern matching generates flawed options. A completely deliberative and analytic strategy would be too slow.” In the case of firefighters, he notes, if a slow, systematic approach were used, “the fires would be out of control by the time the commanders finished deliberating.” Intuition Is Not Irrational Kamila Malewska of the Poznań University of Economics and Business in Poland has also studied intuition in real-world settings and likewise finds that people often apply a combination of strategies. She asked managers at a food company how they use intuition in their everyday work. Almost all of them stated that, in addition to rational analyses, they tapped gut feelings when making decisions. More than half tended to lean on rational approaches; about a quarter used a strategy that blended rational and intuitive elements; and about a fifth generally relied on intuition alone. Interestingly, the more senior the managers were, the more they tended toward intuition. Malewska thinks that intuition is neither irrational nor the opposite of logic. Rather it is a quicker and more automatic process that plumbs the many deep resources of experience and knowledge that people have gathered over the course of their lives. Intuition, she believes, is an ability that can be trained and can play a constructive role in decision-making. Field findings published in 2017 by Lutz Kaufmann of the Otto Beisheim School of Management in Germany and his co-workers support the view that a mixture of thinking styles can be helpful in decision-making. The participants in their study, all purchasing managers, indicated how strongly they agreed or disagreed with various statements relating to their decision-making over the prior three months. For example: “I looked extensively for information before making a decision” (rational), “I did not have time to decide
analytically, so I relied on my experience” (experience-based), or “I was not completely sure how to decide, so I decided based on my gut feeling” (emotional). The researchers, who consider experience-based and emotional processes as “two dimensions of intuitive processing,” also rated the success of a manager based on the unit price the person negotiated for a purchased product, as well as on the quality of the product and the punctuality of delivery. Rational decision-making was associated with good performance. A mixture of intuitive and rational approaches also proved useful; however, a purely experience-based and a purely emotional approach did not work well. In other words, a blending of styles, which is frequently seen in everyday life, seems beneficial. Economists Marco Sahm of the University of Bamberg and Robert K. von Weizsäcker of the Technical University of Munich study the extent to which our background knowledge determines whether rationality or gut feeling is more effective. Both Sahm and Weizsäcker are avid chess players, and they brought this knowledge to bear on their research. As children, they both learned the game intuitively by imitating the moves of their opponents and seeing where they led. Later, they approached the game more analytically, by reading chess books that explained and illustrated promising moves. Over time Weizsäcker became a very good chess player and has won international prizes. These days he bases his play mainly on intuition. The two economists developed a mathematical model that takes the costs and benefits of both strategies into account. They have come to the conclusion that whether it is better to rely more on rational assessments or intuition depends both on the complexity of a particular problem and on the prior knowledge and cognitive abilities of the person. Rational decisions are more precise but entail higher costs than intuitive ones—for example, they involve more effort spent gathering and then analyzing information. This additional cost can decrease over time, but it will never disappear. The cost may be worth it if the problem is multifaceted and the decision maker gains a lot of useful information quickly (if the decision maker’s
“learning curve is steep”). Once a person has had enough experience with related problems, though, intuitive decision-making that draws on past learning is more likely to yield effective decisions, Sahm and Weizsäcker say. The intuitive approach works better in that case because relying on accumulated experience and intuitive pattern recognition spares one the high costs of rational analysis. One thing is clear: intuition and rationality are not necessarily opposites. Rather it is advantageous to master both intuition and analytic skills. Let us not follow our inner voice blindly, but let us not underestimate it either. --Originally published: Scientific American online August 15, 2019.

SECTION 4 Creating Reality

How Matter Becomes Mind by Max Bertolero & Danielle S. Bassett Networks pervade our lives. Every day we use intricate networks of roads, railways, maritime routes and skyways traversed by commercial flights. They exist even beyond our immediate experience. Think of the World Wide Web, the power grid and the universe, of which the Milky Way is an infinitesimal node in a seemingly boundless network of galaxies. Few such systems of interacting connections, however, match the complexity of the one underneath our skull. Neuroscience has gained a higher profile in recent years, as many people have grown familiar with splashily colored images that show brain regions “lighting up” during a mental task. There is, for instance, the temporal lobe, the area by your ear, which is involved with memory, and the occipital lobe at the back of your head, which dedicates itself to vision. What has been missing from this account of human brain function is how all these distinct regions interact to give rise to who we are. Our laboratory and others have borrowed a language from a branch of mathematics called graph theory that allows us to parse, probe and predict complex interactions of the brain that bridge the seemingly vast gap between frenzied neural electrical activity and an array of cognitive tasks—sensing, remembering, making decisions, learning a new skill and initiating movement. This new field of network neuroscience builds on and reinforces the idea that certain regions of the brain carry out defined activities. In the most fundamental sense, what the brain is—and thus who we are as conscious beings—is, in fact, defined by a sprawling network of 100
billion neurons with at least 100 trillion connecting points, or synapses. Network neuroscience seeks to capture this complexity. We can now model the data supplied by brain imaging as a graph composed of nodes and edges. In a graph, nodes represent the units of the network, such as neurons or, in another context, airports. Edges serve as the connections between nodes—think of one neuron intertwined with the next or contemplate airline flight routes. In our work, the human brain is reduced to a graph of roughly 300 nodes. Diverse areas can be linked together by edges representing the brain’s structural connections: thick bundles of tubular wires called white matter tracts that tie together brain regions. This depiction of the brain as a unified network has already furnished a clearer picture of cognitive functioning, along with the practical benefit of enabling better diagnoses and treatment of psychiatric disorders. As we glimpse ahead, an understanding of brain networks may lead to a blueprint for improved artificial intelligence, new medicines and electrical-stimulation technology to alter malfunctioning neural circuitry in depression—and perhaps also the development of genetic therapies to treat mental illness. The Music of the Mind To understand how networks underlie our cognitive capabilities, first consider the analogy of an orchestra playing a symphony. Until recently, neuroscientists have largely studied the functioning of individual brain regions in isolation, the neural equivalent of separate brass, percussion, strings and woodwind sections. In the brain, this stratification represents an approach that dates back to Plato—quite simply, it entails carving nature at the joints and then studying the individual components that remain. Just as it is useful to understand how the amygdala helps to process emotions, it is similarly vital to grasp how a violin produces high-pitched sounds. Still, even a complete list of brain regions and their functions—vision, motor, emotion, and so on—does not tell us how the brain really works. Nor does an inventory of instruments provide a recipe for Beethoven’s Eroica symphony.
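To make the graph description above concrete, here is a minimal sketch in Python. The region names and connections are invented for illustration, not real connectome data; they simply show how a brain can be encoded as nodes and edges and then queried for the paths a signal could travel.

```python
# A toy sketch, not real connectome data: brain regions as nodes, white matter
# tracts as edges. Region names and connections here are invented for illustration.
import networkx as nx

brain = nx.Graph()
structural_connections = [
    ("visual_cortex", "parietal_cortex"),
    ("parietal_cortex", "prefrontal_cortex"),
    ("prefrontal_cortex", "motor_cortex"),
    ("temporal_cortex", "prefrontal_cortex"),
    ("amygdala", "prefrontal_cortex"),
]
brain.add_edges_from(structural_connections)

print(brain.number_of_nodes(), "regions;", brain.number_of_edges(), "tracts")
# One path a signal could travel over the structural network:
print(nx.shortest_path(brain, "visual_cortex", "motor_cortex"))
```

In a real analysis the roughly 300 nodes would come from a brain parcellation and the edges from measured white matter tracts, but the graph operations are the same.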

Network neuroscientists have begun to tame these mysteries by examining the way each brain region is embedded in a larger network of such regions and by mapping the connections between regions to study how each is embedded in the large, integrated network that is the brain. There are two major approaches. First, examining structural connectivity captures the instrumentation of the brain’s orchestra. It is the physical means of creating the music, and the unique instrumentation of a given musical work constrains what can be played. Instrumentation matters, but it is not the music itself. Put another way, just as a collection of instruments is not music, an assemblage of wires does not represent brain function. Second, living brains are massive orchestras of neurons that fire together in quite specific patterns. We hear a brain’s music by measuring the correlation between the activity of each pair of regions, indicating that they are working in concert. This measure of joint activity is known as functional connectivity, and we colloquially think of it as reflecting the music of the brain. If two regions fire with the same time-varying fluctuations, they are considered to be functionally connected. This music is just as important as the decibels produced by a French horn or viola. The volume of the brain’s music can be thought of as the level of activity of electrical signals buzzing about one brain area or another. At any moment, though, some areas within the three-pound organ are more active than others. We have all heard the saying that people use a small fraction of their brain capacity. In fact, the entire brain is active at any point in time, but a given task modulates the activity of only a portion of the brain from its baseline level of activity. That arrangement does not mean that you fulfill only half of your cognitive potential. In fact, if your entire brain were strongly active at the same time, it would be as if all the orchestra members were playing as loudly as possible—and that scenario would create chaos, not enable communication. The deafening sound would not convey the emotional overtones present in a great musical piece. It is the pitch, rhythms, tempo and strategic pauses that communicate information, both during a symphony and inside your head.
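The idea of functional connectivity described above can be illustrated with synthetic numbers: if two simulated regions share a common fluctuation, their activity time series correlate strongly, while an unrelated region does not. The sketch below is purely illustrative and assumes nothing beyond standard numerical Python.

```python
# Synthetic illustration of functional connectivity: regions A and B share a common
# fluctuation, so their activity correlates; region C is independent, so it does not.
import numpy as np

rng = np.random.default_rng(1)
timepoints = 200

shared_rhythm = rng.normal(size=timepoints)            # fluctuation common to A and B
region_a = shared_rhythm + 0.5 * rng.normal(size=timepoints)
region_b = shared_rhythm + 0.5 * rng.normal(size=timepoints)
region_c = rng.normal(size=timepoints)                 # unrelated region

def functional_connectivity(x, y):
    """Pearson correlation between two activity time series."""
    return np.corrcoef(x, y)[0, 1]

print("A-B:", round(functional_connectivity(region_a, region_b), 2))  # strong
print("A-C:", round(functional_connectivity(region_a, region_c), 2))  # near zero
```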

Modularity Just as an orchestra can be divided into groups of instruments from different families, the brain can be separated into collections of nodes called modules—a description of localized networks. All brains are modular. Even the 302-neuron network of the nematode Caenorhabditis elegans has a modular structure. Nodes within a module share stronger connections to one another than to nodes in other modules. Each module in the brain has a certain function, just as every family of instruments plays a role in the symphony. We recently performed an evaluation of a large number of independent studies— a meta-analysis—that included more than 10,000 functional magnetic resonance imaging (fMRI) experiments of subjects performing 83 different cognitive tasks and discovered that separate tasks map to different brain-network modules. There are modules occupied with attention, memory and introspective thought. Other modules, we found, are dedicated to hearing, motor movement and vision. These sensory and motor cognitive processes involve single, contiguous modules, most of which are confined to one lobe of the brain. We also found that computations in modules do not spur more activity in other modules—a critical aspect of modular processing. Imagine a scenario in which every musician in an orchestra had to change the notes played every time another musician changed his or her notes. The orchestra would spiral out of control and would certainly not produce aesthetically pleasing sounds. Processing in the brain is similar—each module must be able to function mostly independently. Philosophers as early as Plato and as recent as Jerry Fodor have noted this necessity, and our research confirms it. Even though brain modules are largely independent, a symphony requires that families of instruments be played in unison. Information generated by one module must eventually be integrated with other modules. Watching a movie with only a brain module for vision— without access to the one for emotions—would detract greatly from the experience.
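As a toy illustration of modularity (not the authors' analysis pipeline), one can build a network with two densely connected groups joined by a few bridging edges and let a standard community-detection routine recover the modules.

```python
# A toy modular network: two densely connected groups of eight nodes joined by two
# weak bridges. A standard community-detection routine recovers the two modules.
import networkx as nx
from networkx.algorithms import community

brain = nx.disjoint_union(nx.complete_graph(8), nx.complete_graph(8))
brain.add_edges_from([(0, 8), (1, 9)])   # sparse between-module connections

modules = community.greedy_modularity_communities(brain)
for i, module in enumerate(modules):
    print(f"module {i}: nodes {sorted(module)}")
```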

For that reason, to complete many cognitive tasks, modules must often work together. A short-term memory task—holding a new phone number in your head—requires the cooperation of auditory, attention and memory-processing modules. To integrate and control the activity of multiple modules, the brain uses hubs—nodes where connections from the brain’s different modules meet. Some key modules that control and integrate brain activity are less circumspect than others in their doings. Their connections extend globally to multiple brain lobes. The frontoparietal control module spans the frontal, parietal and temporal lobes. It developed relatively recently on the timescale of evolution. The module is especially large in humans, relative to our closest primate ancestors. It is analogous to an orchestra conductor and becomes active across a large number of cognitive tasks. The frontoparietal module ensures that the brain’s multiple modules function in unison. It is heavily involved in what is called executive function, which encompasses the separate processes of decision-making, short-term memory and cognitive control. The last is the ability to develop complex strategies and inhibit inappropriate behavior. Another highly interconnected module is the salience module, which hooks up to the frontoparietal control module and contributes to a range of behaviors related to attention and responding to novel stimuli. For example, take two color words, the word “red” printed in red ink and the word “blue” printed in green ink. If you are asked to name the ink color of each, you will react much faster to the word set in red, where the word and its color match. The frontoparietal and salience modules activate when you respond to the green ink because you have to suppress the natural inclination to read the word as “blue.” Finally, the default mode module spans the same lobes as the frontoparietal control network. It contains many hubs and is linked to a variety of cognitive tasks, including introspective thought, learning, memory retrieval, emotional processing, inference of the mental state of others and even gambling. Critically, damage to these hub-rich modules disturbs functional connections throughout the brain
and causes widespread cognitive difficulties, just as bad weather at a hub airport delays air traffic all over the country. Personal Connections Although our brains have certain basic network components— modules interconnected by hubs—each of us shows slight variations in the way our neural circuits are wired. Researchers have recently devoted intense scrutiny to this diversity. In an initial phase of what is called the Human Connectome Project, 1,200 young people have volunteered to participate in a study of brain-network architecture, funded by the National Institutes of Health. (The final goal of the project is to cover the entire life span.) Each individual’s structural and functional connectivity networks were probed using fMRI. These data were supplemented by a cognitive battery of testing and questionnaires to analyze 280 behavioral and cognitive traits. Participants provided information about how well they slept, how often they drank alcohol, their language and memory abilities, and their emotional states. Neuroscientists from all over the world have begun to pore over this incredibly rich data set to learn how our brain networks encode who we are. Using data from hundreds of participants in the Human Connectome Project, our lab and others have demonstrated that brain-connectivity patterns establish a “fingerprint” that distinguishes each individual. People with strong functional connections among certain regions have an extensive vocabulary and exhibit higher fluid intelligence—helpful for solving novel problems—and are able to delay gratification. They tend to have more education and life satisfaction and better memory and attention. Others with weaker functional connections among those same brain areas have lower fluid intelligence, histories of substance abuse, poor sleep and a decreased capacity for concentration. Inspired by this research, we showed that the findings could be described by particular patterns among the hub connections. If your brain network has strong hubs with many connections across modules, it tends to have modules that are clearly segregated from one another, and you will perform better on a range of tasks, from
short-term memory to mathematics, language or social cognition. Put simply, your thoughts, feelings, quirks, flaws and mental strengths are all encoded by the specific organization of the brain as a unified, integrated network. In sum, it is the music your brain plays that makes you you. The brain’s synchronized modules both establish your identity and help to retain it over time. The musical compositions they play appear to always be similar. The likeness could be witnessed when participants in two other studies in the Human Connectome Project engaged in various tasks that involved short-term memory, recognition of the emotions of others, gambling, finger tapping, language, mathematics, social reasoning and a self-induced “resting state” in which they let their mind wander. Fascinatingly, the networks’ functional wiring has more similarities than expected across all these activities. Returning to our analogy, it is not as if the brain plays Beethoven when doing math and Tupac when resting. The symphony in our head is the same musician playing the same musical genre. This consistency derives from the fact that the brain’s physical pathways, or structural connections, place constraints on the routes over the brain’s integrated network that a neural signal can travel. And those pathways delineate how functional connections—the ones, say, for math or language—can be configured. In the musical metaphor, a bass drum cannot play the melodic line of a piano. Changes in the brain’s music inevitably occur, just as new arrangements do for orchestral music. Physical connections undergo alterations over the course of months or years, whereas functional connectivity shifts on the order of seconds, when a person switches between one mental task and the next. Transformations in both structural and functional connectivity are important during adolescent brain development, when the finishing touches of the brain’s wiring diagram are being refined. This period is of critical importance because the first signs of mental disorders often appear in adolescence or early adulthood.
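The fingerprinting result mentioned earlier can be sketched schematically with entirely synthetic numbers: a person's pattern of functional connections from one scan is compared, by correlation, against a set of stored patterns to find the closest match. The sizes and noise level below are arbitrary assumptions, not values from the Human Connectome Project.

```python
# Schematic "fingerprinting" with made-up numbers: a new scan is matched to the
# stored connectivity pattern it correlates with most strongly.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_connections = 5, 300        # hypothetical sizes, not HCP values

database = rng.normal(size=(n_people, n_connections))   # one pattern per person

# A second scan of person 3: the same pattern plus day-to-day noise.
new_scan = database[3] + 0.4 * rng.normal(size=n_connections)

similarity = [np.corrcoef(new_scan, database[p])[0, 1] for p in range(n_people)]
print("best match: person", int(np.argmax(similarity)))  # expected output: 3
```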

One area our research relates to is understanding how brain networks develop through childhood and adolescence and into adulthood. These processes are driven by underlying physiological changes, but they are also influenced by learning, exposure to new ideas and skills, an individual’s socioeconomic status and other experiences. Brain-network modules emerge very early in life, even in the womb, but their connectivity is refined as we grow up. Consistent strengthening of the structural connections to hubs throughout the course of childhood is associated with an increase in the segregation between modules and an augmentation in the efficiency with which young people perform executive tasks such as complex reasoning and self-regulation. We have also found that modules segregate from one another more rapidly in children who have a higher socioeconomic status, highlighting the key impact of their environment. Although changes in structural connectivity are slow, the reconfiguration of functional connections can occur quickly, in a few seconds or minutes. These rapid shifts are instrumental for moving between tasks and for the massive amount of learning demanded even by a single task. In a set of studies that we published from 2011 to the present, we found that networks with modules that can change readily turn up in individuals who have greater executive function and learning capacity. To better understand what was happening, we used publicly available data from a landmark study known as MyConnectome, in which Stanford University psychology professor Russell Poldrack personally underwent imaging and cognitive appraisals three times a week for more than a year. Whereas modules are mostly autonomous and segregated, at times the brain will spontaneously reorganize its connections. This property, called functional network flexibility, lets a node with strong functional connections within a module suddenly establish many connections to a different module, changing the flow of information through the network. Using data from this study, we found that the rerouting of a network’s
connections changes from day to day in a manner that matches positive mood, arousal and fatigue. In healthy individuals, such network flexibility correlates with better cognitive function. Dissonant Notes The configuration of brain connections also reflects one’s mental health. Aberrant connectivity patterns accompany depression, schizophrenia, Alzheimer’s, Parkinson’s, autism spectrum disorder, attention deficit disorder, dementia and epilepsy. Most mental illnesses are not confined to one area of the brain. The circuitry affected in schizophrenia extends quite widely across the entire organ. The so-called disconnectivity hypothesis for schizophrenia holds that there is nothing abnormal about the individual modules. Instead the disarray relates to an overabundance of connections between modules. In a healthy brain, modules are mostly autonomous and segregated, and the ability to bring about flexible changes in network connections is beneficial for cognitive functioning—within certain limits. In our lab, we found that in the brains of people with schizophrenia and their first-degree relatives, there is an overabundance of flexibility in how networks reconfigure themselves. Auditory hallucinations might result when nodes unexpectedly switch links between speech and auditory modules. The uninvited mix can result in what seem to be the utterings of voices in one’s head. Like schizophrenia, major depressive disorder is not caused by a single abnormal brain region. Three specific modules appear to be affected in depression: the frontoparietal control, salience and default mode modules. In fact, the symptoms of depression— emotional disinhibition, altered sensitivity to emotional events and rumination—map to these modules. As a result, normal communication among the three modules becomes destabilized. Activities from module to module typically tug back and forth to balance the cognitive processing of sensory inputs with more introspective thoughts. In depression, though, the default mode dominates, and the afflicted person lapses into ruminative
thought. The music of the brain thus becomes increasingly unbalanced, with one family of instruments governing the symphony. These observations have broadened our understanding of the network properties of depression to the extent that a connectivity pattern in a brain can allow us to diagnose certain subtypes of the disorder and determine which areas should be treated with electrical-stimulation technology. Networks Evolve Besides studying development, network neuroscientists have begun to ask why brain networks have taken their present form over tens of thousands of years. The areas identified as hubs are also the locations in the human brain that have expanded the most during evolution, making them up to 30 times the size they are in macaques. Larger brain hubs most likely permit greater integration of processing across modules and so support more complex computations. It is as if evolution increased the number of musicians in a section of the orchestra, fostering more intricate melodies. Another way neuroscientists have explored these questions is by creating computer-generated networks and subjecting them to evolutionary pressures. In our lab, we have begun to probe the evolutionary origins of hubs. This exercise started with a network in which all edges were placed uniformly at random. Next, the network was rewired, mimicking natural selection to form segregated modules and display a property known in network science as small-worldness, in which paths form to let distant network nodes communicate with surprising ease. Thousands of such networks then evolved, each of which ultimately contained hubs strongly connected to multiple modules but also tightly interconnected to one another, forming what is called a club. Nothing in the selection process explicitly selected for a club of hubs—they simply emerged from this iterative process. This simulation demonstrates that one potential solution to evolving a brain capable of exchanging information among modules requires hubs with strong connections. Notably, real networks—brains, airports, power grids—also have durable, tightly
interconnected hubs, exactly as predicted by evolutionary experiments. That observation does not mean evolution necessarily occurred in the same way as the simulation, but it shows a possible means by which one of nature’s tricks might operate. States of Mind When Nobel Prize–winning physicist Richard Feynman died in 1988, his blackboard read, “What I cannot create, I do not understand.” He created a beautiful aphorism, yet it misses a pivotal idea: it should be revised to “What I cannot create and control, I do not understand.” Absent such control, we still know enough to enjoy a symphony, even if we do not qualify to be the conductor. When it comes to the brain, we have a basic understanding of its form and the importance of its network architecture. We know that our brain determines who we are, but we are just beginning to understand how it all happens. To rephrase mathematician Pierre-Simon Laplace’s explanation of determinism and mechanics and apply it to the brain, one’s present brain, and so one’s mental state, can be thought of as a compilation of past states that can be used to predict the future. A neuroscientist who knew all the principles of brain function and everything about someone’s brain could predict that person’s mental conditions—the future, as well as the past, would be present inside the person’s mind. This knowledge could be used to prevent pain and suffering, given that many mental illnesses are associated with network abnormalities. With enough engineering ingenuity, we may develop implanted devices that alter or even generate new brain networks or edit genomes to prevent the disorganized networks associated with mental disorders from occurring in the first place. Such an achievement would enable us to treat diseases and to restore brain function after stroke or injury and enhance it in healthy individuals. Before those futuristic scenarios materialize, two major gaps must be filled: we need to know more about how personal genetics, early-life development and environment determine one’s brain structure and how that structure leads to functional capacities. Neuroscientists
have some knowledge from the human genome about the structure that gives rise to functional networks but still need to learn precisely how this process occurs. We are starting to grasp the way brain networks develop and are shaped by the environment but are not close to explaining the entire complexity of this process. The brain’s wiring, its structural connectivity, constrains how various modules interact with one another, but our knowledge remains limited. As we fill in these gaps, chances improve for interventions to guide brain functioning into healthy trajectories. What holds us back, for the moment, is our still blurry vision of the brain—it is as if we are outside the concert hall and have seen only sketches of the instruments. Inside each brain region that neuroscientists study are millions of neurons firing every millisecond, and we are able just to indirectly measure their average activity levels every second or so. Thus far we can roughly identify the human brain’s structural connections. Luckily, scientists and engineers have taken steps to deliver ever clearer data that will enable a deeper look into perhaps the most complex network in the known universe: your brain. --Originally published: Scientific American 321(1); 26-33 (July 2019).

Our Inner Universes by Anil K. Seth On the 10th of April 2019, Pope Francis, President Salva Kiir of South Sudan and former rebel leader Riek Machar sat down together for dinner at the Vatican. They ate in silence, the start of a two-day retreat aimed at reconciliation from a civil war that has killed some 400,000 people since 2013. At about the same time in my laboratory at the University of Sussex in England, Ph.D. student Alberto Mariola was putting the finishing touches to a new experiment in which volunteers experience being in a room that they believe is there but that is not. In psychiatry clinics across the globe, people arrive complaining that things no longer seem “real” to them, whether it is the world around them or their own selves. In the fractured societies in which we live, what is real—and what is not— seems to be increasingly up for grabs. Warring sides may experience and believe in different realities. Perhaps eating together in silence can help because it offers a small slice of reality that can be agreed on, a stable platform on which to build further understanding. We need not look to war and psychosis to find radically different inner universes. In 2015 a badly exposed photograph of a dress tore across the Internet, dividing the world into those who saw it as blue and black (me included) and those who saw it as white and gold (half my lab). Those who saw it one way were so convinced they were right—that the dress truly was blue and black or white and gold—that they found it almost impossible to believe that others might perceive it differently.

We all know that our perceptual systems are easy to fool. The popularity of visual illusions is testament to this phenomenon. Things seem to be one way, and they are revealed to be another: two lines appear to be different lengths, but when measured they are exactly the same; we see movement in an image we know to be still. The story usually told about illusions is that they exploit quirks in the circuitry of perception, so that what we perceive deviates from what is there. Implicit in this story, however, is the assumption that a properly functioning perceptual system will render to our consciousness things precisely as they are. The deeper truth is that perception is never a direct window onto an objective reality. All our perceptions are active constructions, brain-based best guesses at the nature of a world that is forever obscured behind a sensory veil. Visual illusions are fractures in the Matrix, fleeting glimpses into this deeper truth. Take, for example, the experience of color—say, the bright red of the coffee mug on my desk. The mug really does seem to be red: its redness seems as real as its roundness and its solidity. These features of my experience seem to be truly existent properties of the world, detected by our senses and revealed to our mind through the complex mechanisms of perception. Yet we have known since Isaac Newton that colors do not exist out there in the world. Instead they are cooked up by the brain from mixtures of different wavelengths of colorless electromagnetic radiation. Colors are a clever trick that evolution has hit on to help the brain keep track of surfaces under changing lighting conditions. And we humans can sense only a tiny slice of the full electromagnetic spectrum, nestled between the lows of infrared and the highs of ultraviolet. Every color we perceive, every part of the totality of each of our visual worlds, comes from this thin slice of reality. Just knowing this is enough to tell us that perceptual experience cannot be a comprehensive representation of an external objective world. It is both less than that and more than that. The reality we experience—the way things seem—is not a direct reflection of what
is actually out there. It is a clever construction by the brain, for the brain. And if my brain is different from your brain, my reality may be different from yours, too. The Predictive Brain In Plato's Allegory of the Cave, prisoners are chained to a blank wall all their lives, so that they see only the play of shadows cast by objects passing by a fire behind them, and they give the shadows names because for them the shadows are what is real. A thousand years later, but still a thousand years ago, Arabian scholar Ibn al-Haytham wrote that perception, in the here and now, depends on processes of “judgment and inference” rather than involving direct access to an objective reality. Hundreds of years later again, Immanuel Kant realized that the chaos of unrestricted sensory data would always remain meaningless without being given structure by preexisting conceptions or “beliefs,” which for him included a priori frameworks such as space and time. Kant’s term “noumenon” refers to a “thing in itself”—Ding an sich—an objective reality that will always be inaccessible to human perception. Today these ideas have gained a new momentum through an influential collection of theories that turn on the idea that the brain is a kind of prediction machine and that perception of the world—and of the self within it—is a process of brain-based prediction about the causes of sensory signals. These new theories are usually traced to German physicist and physiologist Hermann von Helmholtz, who in the late 19th century proposed that perception is a process of unconscious inference. Toward the end of the 20th century Helmholtz’s notion was taken up by cognitive scientists and artificial-intelligence researchers, who reformulated it in terms of what is now generally known as predictive coding or predictive processing. The central idea of predictive perception is that the brain is attempting to figure out what is out there in the world (or in here, in the body) by continually making and updating best guesses about the causes of its sensory inputs. It forms these best guesses by
combining prior expectations or “beliefs” about the world with incoming sensory data, in a way that takes into account how reliable the sensory signals are. Scientists usually conceive of this process as a form of Bayesian inference, a framework that specifies how to update beliefs or best guesses with new data when both are laden with uncertainty. In theories of predictive perception, the brain approximates this kind of Bayesian inference by continually generating predictions about sensory signals and comparing these predictions with the sensory signals that arrive at the eyes and the ears (and the nose and the fingertips, and all the other sensory surfaces on the outside and inside of the body). The differences between predicted and actual sensory signals give rise to so-called prediction errors, which are used by the brain to update its predictions, readying it for the next round of sensory inputs. By striving to minimize sensory prediction errors everywhere and all the time, the brain implements approximate Bayesian inference, and the resulting Bayesian best guess is what we perceive. To understand how dramatically this perspective shifts our intuitions about the neurological basis of perception, it is helpful to think in terms of bottom-up and top-down directions of signal flow in the brain. If we assume that perception is a direct window onto an external reality, then it is natural to think that the content of perception is carried by bottom-up signals—those that flow from the sensory surfaces inward. Top-down signals might contextualize or finesse what is perceived, but nothing more. Call this the “how things seem” view because it seems as if the world is revealing itself to us directly through our senses. The prediction machine scenario is very different. Here the heavy lifting of perception is performed by the top-down signals that convey perceptual predictions, with the bottom-up sensory flow serving only to calibrate these predictions, keeping them yoked, in some appropriate way, to their causes in the world. In this view, our perceptions come from the inside out just as much as, if not more than, from the outside in. Rather than being a passive registration of
an external objective reality, perception emerges as a process of active construction—a controlled hallucination, as it has come to be known. Origins of Perception The classical view of perception (blue panel) holds that it is a direct window onto an external reality. Sensory signals flow from the bottom up, entering the brain through receptors in our eyes, ears, nose, tongue and skin to reveal the outside world to us as it is. Top-down signals within the brain serve only to finesse what is perceived. In the prediction machine view of perception (green panel), in contrast, perceptual content is carried by top-down predictions made by the brain based on prior experience. Bottom-up signals function mainly to convey prediction errors, which rein in the brain’s hypotheses. Perception is thus a controlled hallucination in this model.

Illustration by Matteo Farinella
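The prediction-error loop described above can be reduced to a few lines of arithmetic. The sketch below is a minimal illustration of the general idea, not a model taken from this article: a single best guess is repeatedly nudged by prediction errors, weighted by how reliable the sensory signal is assumed to be, which is the standard Bayesian update for a Gaussian belief. All of the numbers are made up.

```python
# A minimal predictive-processing loop with made-up numbers: the brain's "best
# guess" about a hidden cause is repeatedly corrected by precision-weighted
# prediction errors, which is the standard Bayesian update for a Gaussian belief.
import numpy as np

rng = np.random.default_rng(0)

true_cause = 4.0      # hidden state of the world (say, how bright a surface really is)
sensor_sd = 1.0       # assumed noisiness of the incoming sensory signal

guess = 0.0           # prior best guess about the cause
prior_sd = 2.0        # uncertainty of that prior guess
precision_belief = 1.0 / prior_sd**2
precision_sense = 1.0 / sensor_sd**2

for step in range(8):
    sensation = true_cause + rng.normal(0.0, sensor_sd)  # noisy sensory input
    prediction_error = sensation - guess                 # mismatch signal
    gain = precision_sense / (precision_belief + precision_sense)
    guess += gain * prediction_error                     # nudge the guess toward the data
    precision_belief += precision_sense                  # belief grows more confident
    print(f"step {step}: perceived value {guess:.2f} (error {prediction_error:+.2f})")
```

Within a few iterations the guess converges on the hidden cause, and the weight given to each new prediction error shrinks as the belief becomes more confident.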

Why controlled hallucination? People tend to think of hallucination as a kind of false perception, in clear contrast to veridical, true-to-reality, normal perception. The prediction machine view suggests instead a continuity between hallucination and normal perception. Both depend on an interaction between top-down, brain-based predictions and bottom-up sensory data, but during hallucinations, sensory signals no longer keep these top-down predictions appropriately tied to their causes in the world. What we call hallucination, then, is just a form of uncontrolled perception, just as normal perception is a controlled form of hallucination. This view of perception does not mean that nothing is real. Writing in the 17th century, English philosopher John Locke made an influential distinction between “primary” and “secondary” qualities. Primary qualities of an object, such as solidity and occupancy of space, exist independently of a perceiver. Secondary qualities, in contrast, exist only in relation to a perceiver—color is a good example. This distinction explains why conceiving of perception as controlled hallucination does not mean it is okay to jump in front of a bus. The bus has primary qualities of solidity and space occupancy that exist independently of our perceptual machinery and that can do us injury. It is the way in which the bus appears to us that is a controlled hallucination, not the bus itself. Tripping in the Lab A growing body of evidence supports the idea that perception is controlled hallucination, at least in its broad outlines. A 2015 study by Christoph Teufel of Cardiff University in Wales and his colleagues offers a striking example. In this study, patients with early-stage psychosis who were prone to hallucinations were compared with healthy individuals on their ability to recognize so-called two-tone images. Take a look at the first photo in the figure following this paragraph—a sample of a two-tone image. Probably all you will see is a bunch of black-and-white splotches. Then, look at the second image in the same figure. Finally, go back and take another look at the first photo; it ought to look rather different. Where previously there was a
splotchy mess, there are now distinct objects, and something is happening. Perceptual Shift A two-tone image looks like a splotchy mess until viewing a photograph changes our perceptual expectation, and thus what we consciously see.

Photos by Richard Armstrong, Getty Images

What I find remarkable about this exercise is that in your second examination of the first photo, the sensory signals arriving at your eyes have not changed at all from the first time you saw it. All that has changed are your brain’s predictions about the causes of these sensory signals. You have acquired a new high-level perceptual expectation, and this is what changes what you consciously see. If you show people many of these two-tone images, each followed by the full picture, they might subsequently be able to identify a good proportion of two-tone images, though not all of them. In Teufel’s study, people with early-stage psychosis were better at recognizing two-tone images after having seen the full image than were healthy control subjects. In other words, being hallucination-prone went along with perceptual priors having a stronger effect on perception. This is exactly what would be expected if hallucinations in psychosis depended on an overweighting of perceptual priors so that they overwhelmed sensory prediction errors, unmooring perceptual best guesses from their causes in the world.

Recent research has revealed more of this story. Phil Corlett of Yale University and his colleagues paired lights and sounds in a simple design to engender expectations among their study subjects of whether or not a light would appear on a given experimental trial. They combined this design with brain imaging to uncover some of the brain regions implicated in predictive perception. When they looked at the data, Corlett and his team were able to identify regions such as the superior temporal sulcus, deep in the temporal lobe of the cortex, that were specifically associated with top-down predictions about auditory sensations. This is an exciting new development in mapping the brain basis of controlled hallucinations. In my lab we have taken a different approach to exploring the nature of perception and hallucination. Rather than looking into the brain directly, we decided to simulate the influence of overactive perceptual priors using a unique virtual-reality setup masterminded by our resident VR guru, Keisuke Suzuki. We call it, with tongue firmly in cheek, the “hallucination machine.” Using a 360-degree camera, we first recorded panoramic video footage of a busy square in the University of Sussex campus on a Tuesday at lunchtime. We then processed the footage through an algorithm based on Google’s AI program DeepDream to generate a simulated hallucination. What happens is that the algorithm takes a so-called neural network—one of the workhorses of AI—and runs it backward. The network we used had been trained to recognize objects in images, so if you run it backward, updating the network’s input instead of its output, the network effectively projects what it “thinks” is there onto and into the image. Its predictions overwhelm the sensory inputs, tipping the balance of perceptual best guessing toward these predictions. Our particular network was good at classifying different breeds of dogs, so the video became unusually suffused by dog presences. Many people who have viewed the processed footage through the VR headset have commented that the experience is rather reminiscent not of the hallucinations of psychosis but of the exuberant phenomenology of psychedelic trips.
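For readers curious about what running a network backward looks like in practice, here is a minimal sketch of the general DeepDream-style procedure, not the Sussex group's actual code: the input image, rather than the network's weights, is adjusted by gradient ascent so that whatever an intermediate layer already responds to becomes amplified in the picture. The choice of a pretrained VGG16, the layer index and the step count are illustrative assumptions.

```python
# A generic DeepDream-style sketch (illustrative, not the Sussex group's code):
# gradient ascent on the *input image* amplifies whatever features an intermediate
# layer of a trained classifier responds to. Layer index and step count are arbitrary.
import torch
import torchvision.models as models

# Convolutional part of a pretrained classifier (newer torchvision uses weights=...).
net = models.vgg16(pretrained=True).features.eval()

layer_of_interest = 10                                   # hypothetical mid-level layer
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a video frame

optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(20):
    optimizer.zero_grad()
    x = image
    for i, layer in enumerate(net):                      # forward only up to the chosen layer
        x = layer(x)
        if i == layer_of_interest:
            break
    loss = -x.norm()        # maximizing the layer's response "projects" learned features
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range
```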

By implementing the hallucination machine in slightly different ways, we could generate different kinds of conscious experience. For example, running the neural network backward from one of its middle layers, rather than from the output layer, leads to hallucinations of object parts, rather than whole objects. As we look ahead, this method will help us match specific features of the computational architecture of predictive perception to specific aspects of what experiences of hallucinations are like. And by understanding hallucinations better, we will be able to understand normal experience better, too, because predictive perception is at the root of all our perceptual experience. The Perception of Reality Although the hallucination machine is undoubtedly trippy, people who experience it are fully aware that what they are experiencing is not real. Indeed, despite rapid advances in VR technology and computer graphics, no current VR setup delivers an experience that is sufficiently convincing to be indistinguishable from reality. This is the challenge we took up when designing a new “substitutional reality” setup at Sussex—the one we were working on when Pope Francis convened the retreat with Salva Kiir and Riek Machar. Our aim was to create a system in which volunteers would experience an environment as being real—and believe it to be real— when in fact it was not real. The basic idea is simple. We again prerecorded some panoramic video footage, this time of the interior of our VR lab rather than of an outside campus scene. People coming to the lab are invited to sit on a stool in the middle of the room and to put on a VR headset that has a camera attached to the front. They are encouraged to look around the room and to see the room as it actually is, via the camera. But at some point, without telling them, we switch the feed so that the headset now displays not the live real-world scene but rather the prerecorded panoramic video. Most people in this situation continue to experience what they are seeing as real even though it is now a fake prerecording. (This is actually very tricky to pull off in practice—
it requires careful color balancing and alignment to avoid people noticing any difference that would tip them off to the shift.) I find this result fascinating because it shows that it is possible to have people experience an unreal environment as being fully real. This demonstration alone opens new frontiers for VR research: we can test the limits of what people will experience, and believe, to be real. It also allows us to investigate how experiencing things as being real can affect other aspects of perception. Right now we are running an experiment to find out whether people are worse at detecting unexpected changes in the room when they believe that what they are experiencing is real. If things do turn out this way (the study is ongoing), that finding would support the idea that the perception of things as being real itself acts as a high-level prior that can substantively shape our perceptual best guesses, affecting the contents of what we perceive. The Reality of Reality The idea that the world of our experience might not be real is an enduring trope of philosophy and science fiction, as well as of latenight pub discussions. Neo in The Matrix takes the red pill, and Morpheus shows him how what he thought was real is an elaborate simulation, while the real Neo lies prone in a human body farm, a brain-in-a-vat power source for a dystopian AI. Philosopher Nick Bostrom of the University of Oxford has famously argued, based largely on statistics, that we are likely to be living inside a computer simulation created in a posthuman age. I disagree with this argument because it assumes that consciousness can be simulated—I do not think this is a safe assumption—but it is thought-provoking nonetheless. Although these chunky metaphysical topics are fun to chew on, they are probably impossible to resolve. Instead what we have been exploring throughout this article is the relation between appearance and reality in our conscious perceptions, where part of this appearance is the appearance of being real itself.

The central idea here is that perception is a process of active interpretation geared toward adaptive interaction with the world through the body rather than a recreation of the world within the mind. The contents of our perceptual worlds are controlled hallucinations, brain-based best guesses about the ultimately unknowable causes of sensory signals. And for most of us, most of the time, these controlled hallucinations are experienced as real. As Canadian rapper and science communicator Baba Brinkman suggested to me, when we agree about our hallucinations, maybe that is what we call reality. But we do not always agree, and we do not always experience things as real. People with dissociative psychiatric conditions such as derealization or depersonalization syndrome report that their perceptual worlds, even their own selves, lack a sense of reality. Some varieties of hallucination, various psychedelic hallucinations among them, combine a sense of unreality with perceptual vividness, as does lucid dreaming. People with synesthesia consistently have additional sensory experiences, such as perceiving colors when viewing black letters, which they recognize as not real. Even with normal perception, if you look directly at the sun you will experience the subsequent retinal afterimage as not being real. There are many such ways in which we experience our perceptions as not fully real. What this means to me is that the property of realness that attends most of our perceptions should not be taken for granted. It is another aspect of the way our brain settles on its Bayesian best guesses about its sensory causes. One might therefore ask what purpose it serves. Perhaps the answer is that a perceptual best guess that includes the property of being real is usually more fit for purpose— that is, better able to guide behavior—than one that does not. We will behave more appropriately with respect to a coffee cup, an approaching bus or our partner’s mental state when we experience it as really existing. But there is a trade-off. As illustrated by the dress illusion, when we experience things as being real, we are less able to appreciate that our perceptual worlds may differ from those of others. (The leading
explanation for the differing perceptions of the garment holds that people who spend most of their waking hours in daylight see it as white and gold; night owls, who are mainly exposed to artificial light, see it as blue and black.) And even if these differences start out small, they can become entrenched and reinforced as we proceed to harvest information differently, selecting sensory data that are best aligned with our individual emerging models of the world, and then updating our perceptual models based on these biased data. We are all familiar with this process from the echo chambers of social media and the newspapers we choose to read. I am suggesting that the same principles apply also at a deeper level, underneath our sociopolitical beliefs, right down to the fabric of our perceptual realities. They may even apply to our perception of being a self—the experience of being me or of being you—because the experience of being a self is itself a perception. This is why understanding the constructive, creative mechanisms of perception has an unexpected social relevance. Perhaps once we can better appreciate the diversity of experienced realities scattered among the billions of perceiving brains on this planet, we will find new platforms on which to build a shared understanding and a better future—whether between sides in a civil war, followers of different political parties, or two people sharing a house and faced with washing the dishes. --Originally published: Scientific American 321(3); 40-47 (September 2019).

Learning When No One Is Watching by R. Douglas Fields Imagine you are on your first visit to a foreign city—let’s say Istanbul. You find your way to the metro station and stand bewildered before the ticket machine. After puzzling out how to pay your fare, you thread your way through the noisy throng and search for the train that will take you to your hotel. You move tentatively, in fits and starts, with many changes of direction. Yet after a few days of commuting by subway, you breeze through the system effortlessly. Simply by experiencing the new environment, you quickly master its complexities. How was that learning possible? The truth is, neuroscientists do not know. Learning theory as we know it today still rests largely on the century-old experiments of Ivan Pavlov and his dogs salivating at the sound of a bell. His theory has yielded plenty of knowledge about how we acquire behaviors through the pairing of stimulus and reward (or punishment) and the strengthening of connections between neurons that fire together. It is the kind of training we do with our pets and, to some degree, our children, but it explains little about most human learning. In fact, whether getting to know a stranger, negotiating a new setting or picking up slang, our brain absorbs enormous volumes of information constantly and effortlessly as we go about everyday life, without treats or praise or electric shocks to motivate us. Until recently, if you asked neuroscientists like me how this process worked, we would shrug our shoulders. But a number of researchers have begun to use technology, including virtual reality, in innovative ways to explore how the human brain operates in complex, real-
world environments—a process known as unsupervised learning. What they are finding, as I learned by visiting several pioneering laboratories, is that this type of cognition entails more than building up pathways that link localized neurons. Instead unsupervised learning engages broad swaths of the brain and involves wholesale changes in how neural circuits process information. Moreover, by studying the shifting electrical patterns of brain waves as we learn, researchers can reliably guess what we are thinking about (yes, rudimentary mind reading is possible!), and they can predict our aptitude for learning certain subjects. As these scientists confront the complexity of unsupervised learning, they find themselves grappling with one of the deepest mysteries of being human: how the brain creates the mind. Onboard a Virtual Ship The walls and ceiling of the cavernous room are painted black. Twenty-four digital cameras arrayed around the space detect infrared diodes on my body to track my movements, feeding them into a computer as I walk about. I am in a virtual-reality room in the supercomputer center at the University of California, San Diego— probably the closest thing on Earth to the holodeck on Star Trek’s USS Enterprise. Neuroscientist Howard Poizner uses this facility to study unsupervised learning—in this case, how we learn to master an unfamiliar environment. The diodes are not the only gizmos I am wearing. On my head is a rubber cap studded with 70 electrodes that send electrical signals generated by my brain to instruments inside a specialized backpack I am toting. I also wear large goggles equipped with 12 miniature video projectors and high-resolution screens. The day before my visit here, I toured the U.S. Navy aircraft carrier Midway at its anchorage in San Diego Harbor. Little did I know what a happy coincidence that would turn out to be: Poizner and his colleagues had modeled their virtual-reality sequences on the carrier’s layout. When they turn on the projectors inside my goggles, I am instantly transported back to the ship. What I see is an utterly convincing 120-degree vista of a storeroom inside the aircraft carrier.

Looking up, I see triangular steel trusses reinforcing the ceiling that supports the flight deck. Looking down, I see hideous blue government-issued linoleum. High-fidelity speakers all around the lab create a three-dimensional sonic space to complete the illusion. Verisimilitude is critical, Poizner explains, both for immersion and for helping the brain organize the rich sensory information available to it. “If you are just moving a joystick or hitting a button, you are not activating the brain circuits that construct spatial maps,” he says. “Here you are walking out in the environment. You are learning how to move in it, how to interact with it. Your brain is always predicting.” The fact that I can walk through the virtual environment while my brain waves are being recorded is a breakthrough in itself. Usually people must keep still during electroencephalographic (EEG) recordings to eliminate electrical signals generated by their muscles as they contract, which would obscure the feeble brain waves. Poizner’s group devised hardware and software to eliminate this noise as subjects move about freely. “We’re putting you in the video game,” Poizner says. I wander over to an oval hatch and peer out onto the hangar deck where fighter jets are stationed in rows. I raise my leg to step over the high threshold leading to the deck. “Don’t go out there,” Poizner says. “You must stay inside the storage room.” I quickly retract my leg. From his perspective, it must look as if I am pantomiming in an empty room. I see gray bubbles the size of beach balls resting on storage racks inside the room. “You are looking for a green bubble,” Poizner says. I search the room. Turning to my left, I see it sitting on the shelf next to the other gray spheres. I reach out and touch the green bubble. It pops! An object hidden inside appears—a red fire extinguisher. I turn, find and probe another green bubble in the opposite corner of the room. I pop it and see that it contains a wrench. As I explore the novel environment, Poizner can tell from changes in my brain-wave activity that I am forming a mental map of the storeroom space. Neurons communicate by generating brief

electrical impulses of about a tenth of a volt in flashes that last a thousandth of a second—a signal so faint that to detect the firing of a single neuron, you would have to open the skull and place a microelectrode into direct contact with the nerve cell. Still, when large groups of neurons fire together, the ensuing fluctuations in the electrical field of the tissue surrounding them are sufficiently strong that electrodes on the scalp can detect them. These EEG recordings are much like the roar of a crowd, which is audible in the stadium parking lot while conversations of individual spectators are not. Building Maps with Brain Waves The brain’s electrical activity takes the form of waves of different frequencies that sweep across the brain. Some brain waves crash in a high-frequency tempest, while others roll by in slow oscillations like ocean swells. Brain waves change dramatically with different cognitive functions. Poizner’s experiments have found that low-frequency theta waves—which oscillate at about three to eight hertz—increase in the parietal lobe as the subjects move through the room and build spatial maps. (The parietal lobe is at the top back of the brain, roughly below the part of the head covered by a skullcap.) Scientists are not sure why brain-wave power at the theta frequency changes during spatial learning. But they do know that theta waves are important in strengthening synapses as we form memories. In fact, in my own research on the cellular mechanisms of memory, I stimulate neurons at the theta frequency to strengthen synapses in slices of rat brain that I keep alive in a dish. Joseph Snider, the research scientist who was operating the computer as I explored the virtual Midway, suggests that because of their low frequency, theta waves could be responsible for long-range communication within brain networks, much as lower-frequency AM radio signals propagate farther than high-frequency FM broadcasts. In that model, the role of brain waves in learning would be to combine large groups of neurons into functional assemblies so that they can fire together and ride the peaks and troughs of electrical waves as they traverse the brain—which is exactly what must happen to form a spatial map of our environment or to encode any

complex recollection. Consider all the sensory elements, cognitive processes and emotional sensations that must converge to give us a vivid memory: the green color of the sphere, the unexpected surprise and sound of the pop, the location in the storeroom, the recognition of the fire extinguisher hidden inside. Each aspect of that experience is coded in circuits in different parts of the brain specialized for sound, color and other sensations. Yet to learn and remember this array as a coherent experience, all these elements must coalesce. From Poizner’s eavesdropping on people’s brain waves as they encounter the virtual reality environment, we now know that theta waves are crucial to this synthesis and learning. In addition to their role in the formation of spatial maps, brain waves are key to cognitive function in the wake of a specific stimulus. Such evoked responses are like ripples from a stone cast into a pond, in contrast to the random, ever present movements of the water. Poizner analyzed the brain-wave response at the instant I popped the green bubble and discovered the object hidden inside. He found that a characteristic ripple in my evoked brain wave erupted 160 milliseconds after I popped the green bubble. “This is amazingly fast,” Poizner observes. “It takes 200 milliseconds just to make an eye movement. It is preconscious perception that the brain is detecting something amiss.” When Poizner brought subjects in his VR study back for a second day, he found that they had clearly memorized the storeroom in detail without any instruction, forewarning or effort. The evoked brain wave revealed this fact in a surprising way. Poizner and his colleagues deliberately misplaced some of the objects that were concealed in the green bubbles. So when a person popped a green bubble that had held a fire extinguisher the previous day but now contained a wrench, the evoked brain-wave response was much larger than when subjects found objects in the same location as before. Faster than the blink of an eye, our brain knows something has changed in our environment, and our brain knows it before our mind can comprehend it. The U.S. Navy, which funds Poizner’s research,

is interested in tapping into these rapid preconscious brain signals. Reading a pilot’s brain waves could let a computer take action even before the pilot is consciously aware of the threat. The quickest draw in such a gunfight would not even know he had pulled the trigger. Poizner’s research reveals another ripple in the evoked brain wave about half a second later, the result of the brain cogitating on the anomaly and putting it into context. “We think this represents a second pass [of neural processing],” he says. “The first pass is, Something is wrong. The second is, Oh! Okay, I’ve now incorporated the new information into my reconstruction of the environment.” Researchers have reported similar results in very different experiments. When a subject hears an unexpected remark—“ I take my coffee with cream and dog,” for example—a similar brain-wave response erupts at about the same time. Finding the Way to Speech Learning our native language through everyday experience is very much like unsupervised learning of a new space. Despite the complexity of language, we all master our spoken tongue as children, simply by experiencing it. “We know that in utero, fetuses are already starting to learn about the properties of their language,” says Chantel S. Prat, an associate professor of psychology at the University of Washington and a leading researcher on changes in the brain during language learning. According to a 2011 study led by psychologist Lillian May, while at the University of British Columbia, newborns can recognize their mother’s voice and prefer their native language. Psychologist Barbara Kisilevsky and her colleagues at Queen’s University in Ontario found that even fetuses at 33 to 41 weeks of age show startle responses to their mother’s voice and to a novel foreign language, which means that these sounds capture their attention amid the surrounding buzz. We often fail to appreciate the complexities of language because we use it constantly every day in conversation and in our thoughts. But when we try to learn a second language, the challenges become obvious.
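Both the theta-band increases Poizner sees during spatial learning and the beta-band changes in the language studies that follow rest on the same basic measurement: estimating how much of an EEG signal's power falls within a given frequency band. The sketch below is a generic illustration of that calculation, not either lab's actual analysis pipeline; the sampling rate and the synthetic test signal are invented for the example.

```python
# A minimal sketch (not Poizner's or Prat's actual code): estimate band-limited
# EEG power with Welch's power spectral density, then integrate it over a band.
import numpy as np
from scipy.signal import welch

FS = 250.0  # assumed sampling rate in Hz; real EEG systems vary

def band_power(signal, fs, low, high):
    """Total spectral power of one EEG channel between low and high Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    band = (freqs >= low) & (freqs <= high)
    return np.sum(psd[band]) * (freqs[1] - freqs[0])  # area under the PSD

# Synthetic stand-in for one electrode: a 6 Hz theta rhythm plus broadband noise.
t = np.arange(0, 60, 1 / FS)
eeg = 10e-6 * np.sin(2 * np.pi * 6 * t) + 2e-6 * np.random.randn(t.size)

theta = band_power(eeg, FS, 3, 8)   # the theta band tied to spatial learning
beta = band_power(eeg, FS, 12, 30)  # the beta band tied to language learning
print(f"theta power: {theta:.2e}   beta power: {beta:.2e}")
```

Comparing such band-power estimates across brain regions, or before and after training, is what statements like "the power of brain waves increased" refer to.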

Prat and her colleagues have been monitoring brain-wave activity of subjects learning a second language to see how we meet these challenges. Remarkably, they have found that the brain-wave patterns themselves indicate how well the students are doing. As in Poizner’s research, the changes Prat observed during this learning were in specific frequencies of brain-wave activity in particular regions of the brain. After eight weeks of foreign-language training, the power of beta waves (those with a frequency of 12 to 30 hertz) increased not only in Broca’s area, the language region of the brain located in the left hemisphere, but also in the right hemisphere—a surprise because language is not typically associated with that side of the brain. “The bigger the change, the better they learned,” she said. It was a surprise that would prove to be significant. Reading Minds If thoughts are the essence of being, some scientists are preparing to peer into our souls. That is, they can now tell a great deal about what someone is thinking by observing their brain activity, which has intriguing implications for how unsupervised learning works. Marcel Just and his colleagues at the Center for Cognitive Brain Imaging at Carnegie Mellon University can reliably say whether a person is thinking of a chair or a door, or which number from 1 to 7 a person has in mind, or even what emotion the person may be feeling—anger or disgust, fear or happiness, lust or shame—simply by looking at a functional MRI scan. Specific clusters of neurons throughout the brain increase activity with each of these concepts or emotions, and these clusters appear in the same places from one person to the next. In research to be published this year, Just is demonstrating that he can read minds even when people are learning abstract concepts. As students review material from a college physics course, the researchers are able to identify which of 30 concepts a person is focusing on from fMRIs of the student’s brain. What is more, the data show that different abstract scientific concepts map onto brain regions that control what might be considered analogous, though

more concrete, functions. Learning or thinking about the way waves propagate, for example, engages the same brain regions activated in dancing—essentially a metaphor for rhythmic patterns. And concepts related to the physics of motion, centripetal force, gravity and torque activate brain regions that respond when people watch objects collide. It seems that abstract concepts are anchored to discrete physical actions controlled by specific circuits in the brain. These investigators are beginning to unravel the secret of how the human brain represents and retains information. And this insight is helping scientists transmit information from brains to machines. For instance, researchers in many labs around the world are developing prosthetic limbs controlled by a person’s thoughts. Computers detect and analyze brain waves associated with limb movements and then activate electric motors in a robotic limb to produce the intended motion. The next step sounds a little like induced telepathy or Vulcan mind melding. “We’ve found that you can use brain signals from one person to communicate with another,” Prat says. “We can encode information into a human brain.” In a fascinating study published in 2014, she uses a technique called transcranial magnetic stimulation to modify a subject’s brain waves so that they take the shape of the brain waves she had observed in a different person—in effect downloading information from one brain into another. Prat’s motive in this futuristic research is not to figure out how to transmit the contents of my mind into yours; we already have very effective means for accomplishing that goal. In fact, I am doing so right now as you read these patterns of type and reproduce my thoughts in your brain. Rather they are trying to test their findings about learning and encryption of information in the brain. “If I stimulate your visual cortex and you see,” Prat says, “you are seeing with your brain, not with your eyes.” That achievement will prove she has indeed cracked the brain’s coding of visual information. And she will have written part of a new chapter in our neuroscience textbooks, alongside the one about Pavlov and his dogs.
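The concept decoding Just describes is typically framed as multivoxel pattern analysis: a classifier learns which spatial pattern of fMRI activity goes with which concept and is then tested on scans it has never seen. The toy sketch below uses fabricated data and off-the-shelf scikit-learn tools; it illustrates the general recipe under those assumptions, not the Carnegie Mellon group's actual methods or data.

```python
# A toy sketch of the general "mind reading" recipe (multivoxel pattern
# analysis) with synthetic data -- not the Carnegie Mellon pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_concepts, scans_per_concept, n_voxels = 30, 6, 500  # assumed sizes

# Each concept gets its own spatial activation "signature"; individual scans
# are that signature plus scan-to-scan noise.
labels = np.repeat(np.arange(n_concepts), scans_per_concept)
signatures = rng.normal(size=(n_concepts, n_voxels))
scans = signatures[labels] + 0.8 * rng.normal(size=(labels.size, n_voxels))

decoder = LogisticRegression(max_iter=2000)
accuracy = cross_val_score(decoder, scans, labels, cv=3).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance would be {1 / n_concepts:.2f})")
```

With clean synthetic signatures the decoder is nearly perfect; the real achievement in the fMRI work is that distinguishable patterns exist at all, and that they recur across people.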

Predicting Your Future In her latest research, Prat has used EEG analysis to an even more exceptional end: to accurately forecast which students will be able to learn a new language rapidly and which ones will struggle. What our brain does at rest tells researchers a great deal about how it is wired and how it operates as a system. Mirroring her discovery of beta-wave activity in the right hemisphere during language learning, Prat found that the higher the power of beta waves in a person’s resting-state EEG in the right temporal and parietal regions, the faster the student will be able to learn a second language. The reasons are not clear, but one possibility is that if most neural circuits in the region were fully engaged in a variety of other tasks, many small groups of neurons would be oscillating at their own slightly different frequencies, so high power at any one frequency suggests a large untapped pool. “They are sort of waiting to learn a language,” Prat theorizes. That propensity is significant because mastering a new language is associated with many cognitive benefits, including improved skill in mathematics and multitasking. But, she warns, our brain cannot be good at everything: “When you get better at one thing, it comes at a cost to something else.” I challenge Prat to measure my brain waves to see if she can predict how quickly I can learn a second language. She eagerly agrees. Prat and her graduate student Brianna Yamasaki apply electrodes to my head, moistening each one with a salt solution to improve conduction of the tiny signals from my brain. As she tests each electrode, it appears on a computer monitor, changing color from red to green when the signal strength is strong. Once they are all green, Prat says, “Close your eyes. It’ll be five minutes. Remain still.” As she dims the lights and slips out the door, she says, “Just relax. Clear your mind.” I try, but my mind is racing. Can this contraption really tell Prat how easily I could learn a new language while I sit here doing nothing? I recall a similar boast Poizner had made to me in his VR lab—that he could predict how well people would perform in his spatial-learning experiment from an fMRI scan of their brain activity as they sat and

let their mind wander. This so-called resting-state fMRI of the brain’s activity while people are doing nothing but letting their mind drift is different from the familiar fMRI studies of the brain’s response to a specific stimulus. Indeed, months after taking such readings of a group of people, Poizner brought them in for a VR trial and found that those who learned the layout of the virtual storeroom faster had resting-state fMRI recordings that showed tighter functional integration of the brain networks responsible for visuospatial processing. The five minutes pass. Prat and Yamasaki return. “Did you get good data?” I ask. “This is a little lower than average,” Prat says, looking at my feeble beta waves. She then pulls up a recording of her own brain waves, which shows a sharp peak in the alpha-frequency band. It looks something like a spike in a stock-market chart. My brain instead shows a power shift to higher frequencies, characteristic of information processing in the cerebral cortex. I clearly was not able to zone out and let my mind rest. “Am I a good second-language learner?” I ask. “No,” Prat says. “Your slope is about 0.5, and the average is about 0.7.” It’s true. I took Spanish in high school and German in college, but they didn’t really stick. This is creepier than tarot cards. “There must be something good about it,” I say. “Sure ... plenty of things.” “Tell me one.” “You are very entrenched in your first language.” I groan. Then she adds, “The relation of beta power to reading is the opposite. You are probably an excellent reader.” A few days after returning to my lab, a new paper by Tomas Folke of the University of Cambridge and his colleagues reports that monolinguals are superior to bilinguals at metacognition, or thinking

about thinking, and that they excel at correcting their performance after making errors. I feel a little better. Thinking about thinking and learning from failed experiments: that is exactly what I do as a neuroscientist. You could have read that in my bio—and in my brain waves, too. --Originally published: Scientific American Mind 27(5); 56-63 (Sept/Oct 2016).

SECTION 5 The Ultimate Question

Partly-Revived Pig Brains Raise Questions about When Life Ends by Simon Makin One of the two legal definitions of death is irreversible cessation of all brain function, commonly known as “brain death.” (The other is the halting of circulatory and respiratory function.) It was widely believed that brain cells undergo rapid—and irreversible—degeneration immediately after death. But a striking new study, published in April 2019 in Nature, suggests that much functionality can be preserved or restored—even hours after death. A research team, based primarily at the Yale School of Medicine, managed to revive some functions in the whole brains of pigs slaughtered four hours previously and to sustain them for a further six hours. The work was motivated by the observation that cells can be harvested from postmortem brains and sustained in cell cultures for study, neuroscientist and team leader Nenad Sestan said in a press briefing: “In short, if we can do this in a petri dish, can we do it in an intact brain?” The system Sestan and his colleagues developed, called BrainEx, comprises three elements: a computerized system of pumps, filters and reservoirs; a blood substitute containing no cells but capable of carrying oxygen, along with numerous compounds designed to protect cells; and a surgical procedure to hook everything up. The researchers compared brains they sustained using BrainEx with brains that were perfused with an inert fluid or that were not hooked up to anything to assess their relative states at different times. The system reduced cell death, preserved anatomical integrity, and restored circulatory, metabolic and some cellular

functions. The team was even able to observe inflammatory responses from immune cells, called glia, by introducing a molecule that mimics bacterial infection. The findings suggest that cells are far more resilient to the damage caused by stopping blood flow, which deprives the brain of oxygen (known as ischemia), than previously appreciated. “We didn’t have any a priori hypothesis that we’d be able to restore cells to this level,” Sestan told journalists. “We were really surprised.” This work could represent a major contribution to methods available for studying the brain. The research was funded under the National Institutes of Health’s BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, and NIH experts also briefed the press. “This is a real breakthrough for brain research; it’s a new tool that bridges the gap between basic neuroscience and clinical research,” said Andrea Beckel-Mitchener, a BRAIN Initiative team leader at the National Institute of Mental Health. “It provides experimental access like we’ve never had before; we anticipate interesting studies on brain circulation, cell metabolism, other cell biology and mapping long-range connections.” The immediate findings have implications for how we understand brain death. “It’s drilled into us as scientists and doctors that after even just a couple of minutes, there’s no going back; this clearly turns that on its head,” says Madeline Lancaster, an expert in research on brain organoids (so-called “mini brains” grown from stem cells) at the University of Cambridge, who was not involved in the work. “Where I see the most potential in the short term is just changing thinking about that and hopefully spurring people to do more research into humans who are potentially brain dead—and understanding how we might be able to bring them back.” Extending the time before declaration of brain death has other implications—it could delay when organs could become available for donations, as discussed in a commentary article in Nature. One near-term benefit is the opportunity to learn more about ischemic injury. “We hope to better understand how brain cells react to circulatory arrest and if we can intervene and salvage these cells,” Sestan said. “By doing this,

we can possibly come up with better therapies for stroke and other disorders that cause brain cells to die.” In the longer term, the system could provide a powerful method for studying brain connectivity, circuit function and disease processes. A certain amount can already be learned using brain slices, brain organoids (so-called “mini-brains” grown from stem cells) and postmortem brains, but this system offers at least two advantages: First, an intact brain offers an unparalleled opportunity to study brain circuitry. “If the question is one where you really need the context of the whole organ, this definitely gives you an advantage,” Lancaster says. “If we knew [brain circuits] were functional to some degree, being able to look at a fully intact circuit would be very powerful.” Second, postmortem studies limit observations to discrete points in time, which curbs understanding of how diseases progress. For instance, neurodegenerative diseases such as Alzheimer’s are thought, by some, to involve toxic proteins spreading in the brain. “You could do a lot more here in terms of perturbing the brain in various ways: introducing a prion protein, or amyloid-beta, for example, and looking at spreading,” Lancaster says. “Being able to see it in real time is really key. This would be a way to do that.” The team engaged with existing ethics frameworks from the time it began to plan the experiments. Chief among the ethics concerns is whether resuscitated brains might exhibit signs of consciousness. The study specifically wanted to avoid the remote possibility that consciousness would return, and the researchers were prepared to lower temperatures and deploy anesthetics to extinguish such signs if they emerged. They continually monitored electrical recordings from the brains’ surface and saw no evidence of the global electrical activity that would be expected if there was anything approaching cognition. “I’m very confident no consciousness was present in these restored brains,” says Christof Koch of the Allen Institute for Brain Science in Seattle, a leading researcher in the neuroscience of consciousness. There were none of the signals we associate with consciousness, or even sleep, Koch says: “Only a flat line, implying a complete absence of any sort of consciousness.”
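The consciousness monitoring described here comes down to asking whether the surface recordings ever depart from a flat, isoelectric baseline. The fragment below is a bare-bones sketch of that kind of check; the amplitude threshold, window length and fabricated data are invented placeholders, not the Yale team's actual monitoring criteria.

```python
# A rough sketch of a flat-line ("isoelectric") check: does any channel ever
# show sustained activity above a small amplitude threshold? Thresholds here
# are invented placeholders, not the study's criteria.
import numpy as np

def is_isoelectric(eeg, fs, amp_threshold_uv=2.0, window_s=1.0):
    """Return True if every window on every channel stays below the
    peak-to-peak amplitude threshold (in microvolts).

    eeg -- array of shape (n_channels, n_samples), in microvolts
    fs  -- sampling rate in Hz
    """
    win = int(window_s * fs)
    n_windows = eeg.shape[1] // win
    for channel in eeg:
        segments = channel[: n_windows * win].reshape(n_windows, win)
        if np.any(np.ptp(segments, axis=1) > amp_threshold_uv):
            return False  # something rose above the flat line
    return True

# Fabricated example: eight channels of near-silent recording.
fs = 200
quiet = 0.1 * np.random.randn(8, 60 * fs)
print(is_isoelectric(quiet, fs))  # -> True
```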

But part of the reason for the lack of electrical activity may relate to the fact that the perfusion solution contained neural activity blockers. The researchers included these blockers because they wanted to keep the brains quiescent to maximize cellular recovery. An active brain would require a massively greater energy supply, and the very act of firing can damage neurons (a phenomenon known as excitotoxicity). The team took tissue samples to show that individual neurons were still electrically functional, which necessarily involved washing out the solution to prepare the samples for electrophysiological recordings. But what would have happened if these blockers were not used? “We cannot speak with any scientific certainty to that point, since we didn’t run those experiments,” Stefano Daniele, co-lead author of the study, told journalists. If such future experiments were to move resuscitated brains closer to conscious activity, that would spur discussions about what can be considered truly dead. These considerations are discussed in another accompanying commentary article co-authored by legal scholar Nita Farahany, a bioethicist who is a member of the Neuroethics Working Group at the BRAIN Initiative and whom the researchers consulted from an early stage. The team also consulted the Institutional Animal Care and Use Committee (IACUC) at Yale and was told the study was not subject to animal-welfare protection guidelines. Most obviously, the pigs were already dead: the researchers procured the brains from a pork processing plant, so no animal was sacrificed for this research. In any case, such guidelines do not apply to animals raised for food. Moving forward, the work must be replicated by other labs that will have to learn the intricacies of the manually operated system. The team itself wants to establish how long brains can be sustained this way. The perfusion stage of the experiment only lasted six hours because at that point, the control brains that were not in the BrainEx system had undergone too much disintegration for meaningful comparisons to be made.

If brains can be kept going for long periods, and researchers turn from prioritizing cellular recovery to reviving electrical function in situ, that would enter uncharted ethical territory. “There are some questions that need to be answered first,” Farahany says. “Can we ever get EEG [electroencephalogram] recovery? What are the limits to it if we ever get to that? And what are the implications, then, for animal research and for human research one day?” In Farahany’s view, these unknowns put what was initially considered dead tissue into a new ethical category. “It’s the potential [for greater recovery] that creates a moral status that’s different and requires that we treat it differently,” Farahany says. “One could go with the safest possible approach there, which is to give it the same or similar protections that would be accorded to animal research subjects.” Such an experiment would likely be first tried in rodents, initially by just removing the chemicals that block electrical activity. If anything that looked remotely like conscious activity were to be detected, we would be in territory for which new ethical guidelines would be needed. “At that point, if you start thinking about it more like a living animal, then minimizing any risk of pain or distress would be appropriate,” Farahany says. “The problem is: right now, we think of this as tissue research and it’s no longer just clearly dead. It’s just not exactly alive either.” --Originally published: Scientific American online April 19, 2019.

Is Death Reversible? by Christof Koch You will die, sooner or later. We all will. For everything that has a beginning has an end, an ineluctable consequence of the second law of thermodynamics. Few of us like to think about this troubling fact. But once birthed, the thought of oblivion can’t be completely erased. It lurks in the unconscious shadows, ready to burst forth. In my case, it was only as a mature man that I became fully mortal. I had wasted an entire evening playing an addictive, first-person shooter video game—running through subterranean halls, flooded corridors, nightmarishly turning tunnels, and empty plazas under a foreign sun, firing my weapons at hordes of aliens relentlessly pursuing me. I went to bed and fell asleep easily but awoke abruptly a few hours later. Abstract knowledge had turned to felt reality—I was going to die! Not right there and then but eventually. Evolution equipped our species with powerful defense mechanisms to deal with this foreknowledge—in particular, psychological suppression and religion. The former prevents us from consciously acknowledging or dwelling on such uncomfortable truths, while the latter reassures us by promising never-ending life in a Christian heaven, an eternal cycle of Buddhist reincarnations or an uploading of our mind to the Cloud, the 21st-century equivalent of rapture for nerds. Death has no such dominion over nonhuman animals. Although they can grieve for dead offspring and companions, there is no credible evidence that apes, dogs, crows and bees have minds sufficiently self-aware to be troubled by the insight that one day they

will be no more. Thus, these defense mechanisms must have arisen in recent hominin evolution, in less than 10 million years. Teachings from religious and philosophical traditions have long emphasized the opposite: look squarely into the hollow eyes of death to remove its sting. Daily meditation on nonbeing lessens its terror. As a scientist with intimations of my own mortality, my reflections turn toward understanding what death is. Anyone who undertakes this quest will soon come to realize that death, this looming presence just over the horizon, is quite ill defined from both a scientific as well as a medical point of view. From the Chest to the Head Throughout history, everyone knew what death was. When somebody stopped breathing and his or her heart ceased beating for more than a few minutes, the person was, quite simply, dead. Death was a well-demarcated moment in time. All of this changed with the advent of mechanical ventilators and cardiac pacemakers in the middle of the 20th century. Modern high-tech intensive care decoupled the heart and the lungs from the brain that is responsible for mind, thought and action. In response to these technological developments, in 1968, the famous Report of the Ad Hoc Committee of the Harvard Medical School introduced the concept of death as irreversible coma—that is, loss of brain function. This adjustment was given the force of law by the Uniform Determination of Death Act in 1981. This document defines death as either irreversible cessation of circulatory and respiratory functions or irreversible halting of brain function. Quite simply, when your brain is dead, you are dead. This definition is, by and large, in use throughout most of the advanced world. The locus of death shifted from the chest to the brain (and from public view into the private sphere of the hospital room), with the exact time of actual brain death uncertain. This rapid and widespread acceptance of brain death, reaffirmed by a presidential commission in 2008, is remarkable when compared with the ongoing controversy around abortion and the beginning of life. It

may perhaps reflect another little-noticed asymmetry—people agonize about what happens in the hereafter but rarely about where they were before being born! The vast majority of deaths still occur following cardiopulmonary cessation, which then terminates brain functioning as well. Neurological death—specified by irreversible coma and the absence of responses, brain stem reflexes and respiration—is uncommon beyond the intensive care unit, where patients with traumatic or anoxic brain injury or toxic-metabolic coma (say, following an opioid overdose) are typically admitted. Brain death may be the defining factor, but that does not simplify clinical diagnosis—biological processes can persist after the brain shuts down. Indeed, a brain-dead body can be kept “alive” or on “life support” for hours, days or longer. For the grieving relatives and friends, it is challenging to understand what is happening. When visiting the ICU, they see the chest moving in and out, they feel a pulse, the skin color looks normal, and the body is warm. Looking healthier than some of the other denizens of the ICU, their beloved is now legally a corpse, a beating-heart cadaver. The body is ventilated and kept suspended in this quasi-living state because it is now a potential organ donor. If permission has been obtained, the organs can be harvested from the cadaver to help the living who need a heart, kidney, liver or lung, which are always in short supply. Brain-dead bodies can continue to grow fingernails and to menstruate, and they retain at least some working immune function that allows them to fight off infections. There are more than 30 known cases of pregnant brain-dead mothers placed on a ventilator to support gestation of a surviving fetus, born weeks or months (in one case 107 days) after the mother became brain-dead. In a widely discussed 2018 story in the New Yorker, a young woman, Jahi McMath, was maintained on ventilation in a home care setting in New Jersey by her family following her brain death in a hospital in California. To the law and established medical consensus, she was dead. To her loving family, she was alive for close to five years until she died from bleeding associated with liver failure.

Despite technological advances, biology and medicine still lack a coherent and principled understanding of what precisely defines birth and death—the two bookends that delimit life. Aristotle wrote in De anima more than two millennia ago that any living body is more than the sum of its parts. He taught that the vegetative soul of any organism, whether a plant, animal or person, is the form or the essence of this living thing. The essence of a vegetative soul encompasses its powers of nutrition, growth and reproduction that depend on the body. When these vital capacities are gone, the organism ceases to be animate (a term whose roots lead back to anima, Latin for “soul”). The sensitive soul mediates the capacities of both animals and humans to sense the world and their bodies. It is the closest to what we moderns call “conscious experience.” Finally, the rational soul is the sole province of people, mediating reason, language and speech. Of course, this is now increasingly mimicked by artificial intelligence algorithms. The modern emphasis on machine learning, genomics, proteomics and big data provides the illusion of understanding what this sensitive soul is. Yet it obscures the depth of our ignorance about what explains the breakdown of the vegetative soul. A conceptual challenge remains to define what constitutes anyone’s living body— which is clearly more than the sum of its individual organs. How can one precisely delimit this body in space (are clothing, dental implants and contact lenses part of the body?) and in time (its beginning and its end)? Note the word “irreversible” in the contemporary definition of neurological death. In the absence of a precise conceptual formulation of when an organism is alive or dead, the concept of irreversibility depends on the technology du jour, which is constantly evolving. What at the beginning of the 20th century was irreversible —cessation of breathing—became reversible by the end of the century. Is it too difficult to contemplate that the same may be true for brain death? A recent experiment suggests this idea is not just a wild imagining.

Partial Revival of Dead Brains This year a large team of physicians and scientists at the Yale School of Medicine under Nenad Sestan took advantage of hundreds of pigs killed at a Department of Agriculture–approved slaughterhouse for a remarkable experiment, published in the journal Nature. The researchers removed the brains from their skulls and connected the carotid arteries and veins to a perfusion device that mimics a beating heart. It circulates a kind of artificial blood, a synthetic mixture of compounds that carry oxygen and drugs that protect cells from damage. The magic resides in the exact molecular constitution of the circulating solution. Think of closed-circuit dialysis machines that thousands of patients use daily to flush out toxins from their body because their own kidneys have stopped working. These machines are needed because when blood stops flowing through the large, energy-demanding brain, oxygen stores are depleted within seconds, and consciousness is lost. Depriving a brain of oxygen and blood flow for more than a few minutes begins to trigger irreversible damage. Cells start degenerating in all sorts of ways (tissue damage and decomposition, edema, and so on) that are readily visible under a microscope. The Sestan team studied the brains’ viability four hours after the pigs were electrically stunned, bled out and decapitated. (If this sounds gruesome, it is what happens to livestock in an abattoir, one reason I’m a vegetarian.) The researchers compared a variety of biological indicators with those of postmortem control brains from pigs that did not undergo this perfusion procedure four hours after death, an eternity for the sensitive nervous system. At first glance, the restored brains with the circulating solution appeared relatively normal. As the compound circulated, the fine net of arteries, capillaries and veins that suffuse brain tissue responded appropriately; the tissue integrity was preserved with a reduction in swelling that leads to cell death; synapses, neurons and their output wires (axons) looked normal. Glial cells, the underappreciated entities supporting neurons proper, showed some functionality, and the brain consumed oxygen and glucose, the universal energy

currency of the body, an indication of some metabolic functioning. The title of the researchers’ paper announcing their technology boldly states “Restoration of Brain Circulation and Cellular Functions Hours Post-mortem.” What was not present in these results were brain waves of the kind familiar from electroencephalographic (EEG) recordings. Electrodes placed onto the surface of the pig brains measured no spontaneous global electrical activity: none of the deep, slow waves that march in lockstep across the cerebral cortex during deep sleep, no abrupt paroxysm of electrical activity followed by silence—what is known as burst suppression. Only a flat line everywhere—a global isoelectric line—implying a complete absence of any sort of consciousness. A silent brain, electrically speaking, is not harboring an experiencing mind. But this was not a surprise. This state was exactly what was intended by Sestan and his co-workers, which is why the circulating solution contained a cocktail of drugs that suppresses neuronal function and corresponding synaptic communication among cells. Even with the absence of brain waves, it came as a surprise to me, a working neuroscientist, that individual pig cortical neurons still retained their capacity to generate electrical and synaptic activity. The Yale team demonstrated this by snipping a tiny sliver of neural tissue from these brains, washing off the perfused solution and then exciting individual neurons via an electric current delivered by a tiny electrode. Some of these cells responded appropriately by generating one or a series of the stereotypical electrical pulses, so-called action potentials or spikes, that are the universal idiom of rapid communication in any advanced nervous system. This finding raises a profound question: What would happen if the team were to remove the neural-activity blockers from the solution suffusing the brain? Most likely nothing. Just because some individual neurons retain some potential for excitability does not imply that millions and millions of neurons can spontaneously self-organize and break out into an electrical chorus. And yet! It cannot be ruled out that with some kind of external help, a sort of cortical

defibrillator, these “dead” brains could be booted up, reviving the brain rhythms characteristic of the living brain. To state the obvious, decapitating any sentient creature and letting its brain bleed out is not conducive to its well-being. Reanimating it after such a major trauma could well lead to profound pathology, such as massive epileptic seizures, delirium, deep-seated pain, distress, psychosis, and so on. No creature should ever suffer in this manner. It is precisely to avoid this situation that the Yale team obstructed neuronal function. This brings me to the elephant in the room. Can this procedure be applied to the human brain? Before you recoil, think of the following. What would you want done if your child or partner were found drowned or overdosed, without a pulse or breath for hours? Today it is likely that they would be declared dead. Could this change tomorrow with the kind of technology pioneered by the Yale group? Isn’t that a worthwhile goal to pursue? The pig brain is a large brain, unlike that of the much smaller mouse, by far the most popular laboratory animal. Pig cortex is highly folded, like the human cortex. Neurosurgical procedures are routinely tested on pigs before moving to human trials. So, the technical answer is yes; in principle, this could be done. But should it be done? Certainly not until we understand much better whether a reconstituted animal brain shows global electrical activity typical of a healthy brain, without stress responses indicative of pain, distress or agony. The field as a whole should pause and discuss the medical, scientific, legal, ethical, philosophical and political questions of such research with all stakeholders. Yet the fear of the grim reaper will not be denied. Sooner or later, somewhere on the planet’s face, someone will try to temporarily cheat death. --Originally published: Scientific American 321(4); 34-37 (October 2019).

How Can We Tell If a Comatose Patient Is Conscious? by Anouk Bercht & Steven Laureys Steven Laureys greets me with a smile as I enter his office overlooking the hills of Liège. Although his phone rings constantly, he takes the time to talk to me about the fine points of what consciousness is and how to identify it in patients who seem to lack it. Doctors from all over Europe send their apparently unconscious patients to Laureys—a clinician and researcher at the University of Liège—for comprehensive testing. To provide proper care, physicians and family members need to know whether patients have some degree of awareness. At the same time, these patients add to Laureys’ understanding. The interview has been edited for clarity. What is consciousness? It is difficult enough to define “life,” even more so to define “conscious” life. There is no single definition. But of course, in clinical practice we need unambiguous criteria. In that setting, everyone needs to know what we mean by an “unconscious” patient. Consciousness is not “all or nothing.” We can be more or less awake, more or less conscious. Consciousness is often underestimated; much more is going on in the brains of newborns, animals and coma patients than we think. So how is it possible to study something as complex as consciousness?

There are a number of ways to go about it, and the technology we have at our disposal is crucial in this regard. For example, without brain scanners we would know much, much less than we now do. We study the damaged brains of people who have at least partially lost consciousness. We examine what happens during deep sleep, when people temporarily lose consciousness. We’ve also been working with Buddhist monks because we know that meditation can trigger alterations in the brain; connections that are important in the networks involved in consciousness show changes in activity. Hypnosis and anesthesia can also teach us a great deal about consciousness. In Liège, surgeons routinely operate on patients under hypnosis (including Queen Fabiola of Belgium). Just as under anesthesia, the connections between certain brain areas are less active under hypnosis. And finally, we are curious to understand what near-death experiences can tell us about consciousness. What does it mean that some people feel they are leaving their bodies, whereas others suddenly feel elated? What processes in the brain create consciousness? Two different networks seem to play a role: the external, or sensory, network and the internal self-consciousness network. The former is important for the perception of all sensory stimuli. To hear, we need not only ears and the auditory cortex but also this external network, which probably exists in each hemisphere of the brain—in the outermost layer of the prefrontal cortex as well as farther back, in the parietal-temporal lobes. Our internal consciousness network, on the other hand, has to do with our imagination—that is, our internal voice. This network is located deep within the cingulate cortex and in the precuneus. For us to be conscious of our thoughts, this network must exchange information with the thalamus. What happens in a comatose person? The brain is so heavily damaged that neither of the networks functions correctly anymore. This malfunction can occur as a result of serious injury, a brain hemorrhage, cardiac arrest or a heart attack. At most, a coma lasts for a few days or weeks. As soon as patients open their eyes, they are said to “awaken” from the coma.

This does not, however, mean that a person is conscious. Most patients who awaken from a coma soon recuperate. But a minority will succumb to brain death; a brain that is dead is completely destroyed and cannot recover. But some patients who are not brain-dead will never recover either. How do we know whether a coma patient who has awakened is conscious? For that we use the Glasgow Coma Scale. The physician says, “Squeeze my hand.” Or we observe whether the patient responds to sounds or touch. If patients do not respond, the condition used to be called “vegetative”; they appear to be unconscious. If a patient responds but is unable to communicate, we categorize the consciousness as “minimal.” Such patients may, for example, follow a person with their eyes or answer simple questions. If we pinch their hand, they will move it away. But these signs of consciousness are not always evident, nor do we see them in every patient. A patient who awakens from a coma may also develop a so-called locked-in syndrome, being completely conscious but paralyzed and unable to communicate, except through eye blinks. So the difference between unresponsiveness, minimal consciousness and locked-in would seem to be hard to determine. That’s right. If there is no response to commands, sounds or pain stimuli, this does not necessarily mean that the patient is unconscious. It may be that the patient does not want to respond to a command or that the regions of the brain that process language are so damaged that the person simply doesn’t understand me. Then there are cases in which the brain says, “Move!” but the motor neural pathways have been severed. Family members are often quicker than physicians to recognize whether a patient exhibits consciousness. They may perceive subtle changes in facial expression or notice slight movements that escape the physician’s attention.

Patients are brought to Liège from all over Europe to undergo testing. How do you determine whether they are conscious? Well, of course, the physician will say, “Squeeze my hand”—but this time while the patient is in a brain scanner. If the motor cortex is activated, we know that the patient heard and understood and therefore is conscious. We also want to determine the chances of recovery and what the physician or the patient’s family can do. With different brain scanners, I can find out where brain damage is located and which connections are still intact. This information tells family members what the chances of recovery are. If the results show that there is no hope whatsoever, we then discuss difficult topics with the family, such as end-of-life options. Occasionally we see much more brain activity than anticipated, and then we can initiate treatment aimed at rehabilitation. One well-known case was that of Rom Houben. That’s right. He was a very important patient for us: as far as anyone could tell, he had been left completely unresponsive for 23 years after a car accident. But in the mid-2000s we placed him in a brain scanner and saw clear signs of consciousness. It is possible that he experienced emotions over all those years. He was the first of our patients who was given a different diagnosis after such a long time. We subsequently conducted a study in several Belgian rehab centers and found that 30 to 40 percent of unresponsive patients may exhibit signs of consciousness. I’ve heard that Houben was eventually able to type words with the help of his communication facilitator. Yes, but his facilitator was the only person who seemed able to understand and translate his minimal hand signals. She probably typed words of her own unconsciously. This form of communication doesn’t generally work, and our team was wrongly connected with it. It is a complex case that the media has failed to report adequately. They were more interested in telling sensational, simplistic human-interest stories. Nonetheless, it’s a good example of why we must be extraordinarily careful in diagnosing this condition.

How can minimal consciousness be distinguished from locked-in syndrome? Minimally conscious patients can barely move and are not completely aware of their surroundings. In other words, their motor and mental abilities are limited. Locked-in patients can’t move either, but they are completely conscious. They have suffered a particular type of injury to the brain stem. Their cerebral cortex is intact but is disconnected from their body. All they can move is their eyes—something that neither the patient nor the physician is aware of at the beginning. This is why diagnosis is so difficult. Just because patients cannot move does not mean they are unconscious. This is a classic fallacy; consciousness does not reside in our muscles but in our brains. How can a person who cannot move manage to communicate?

To communicate with a minimally conscious patient for the first time here in Liège, we placed him in a scanner. Of course, the scanner cannot tell us directly whether someone is saying yes or no. But there are a couple of tricks. For example, we can tell the patient, “If you want to say yes, imagine that you are playing tennis. If you intend to say no, make a mental trip from your front door to your bedroom.” “Yes” answers activate the motor cortex; “ no” answers engage the hippocampus, which plays a role in spatial memory. Because these two regions of the brain are located far apart from each other, it is pretty easy to tell the difference between yes and no. From that point on, we can ask the patient pertinent questions. What other potential techniques do you have in the pipeline? In the future, it may be possible to read brain signals using scalp electrodes and a brain-computer interface. This would make communication much quicker and less costly than with a brain scanner. We have also found that it is possible to examine a person’s pupils: we ask patients to multiply 23 by 17 if they intend to say yes. This difficult problem causes the patients to concentrate, and their pupils will dilate slightly as a result. If we direct a camera at

their eyes and a computer analyzes the signals, we can determine quite quickly whether the intended answer is positive or negative. Anything else? Think of the movie The Diving Bell and the Butterfly about Jean-Dominique Bauby, the editor of the French fashion magazine Elle. He suffered a stroke that left him with locked-in syndrome. He wrote an entire book—on which the movie was based—by blinking his one remaining functional eye. We are now able to place an infrared camera over patients’ eyes, which enables them to chat or write relatively easily. Can consciousness be stimulated? Yes, by transcranial direct-current stimulation. Using scalp electrodes, we can stimulate particular regions of the brain. By careful placement, we can select the region responsible for speech, which is connected with consciousness. If I stimulate this region of the brain, the patient may hear and understand what I say. In some cases, a patient has been able to communicate transiently for the first time after a 20-minute stimulation—by, for example, making a simple movement in response to a question. Other patients have been able to follow a person with their eyes. Although consciousness does not reside in our muscles, stimulating patients may enable them to move muscles consciously. This technique works in about half of patients with minimal consciousness. In my opinion, this represents the future of treatment, even though we do not yet know precisely which regions of the brain are the most responsive to stimulation or whether they should be stimulated on a daily basis. But I don’t want to give people false hope. We are still faced with the question of the minimum acceptable quality of life. This is a major philosophical and ethical problem that will be answered differently by different people. I would recommend that everyone discuss these issues in advance with a trusted person. Then you will know that, if you are ever in that position, your desires and values will be taken into account.
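The yes/no protocols described above, whether based on tennis imagery versus spatial-navigation imagery read out by fMRI or on pupil dilation during mental arithmetic, all reduce to a two-alternative decision rule applied to a measured signal. The sketch below shows that logic for the imagery version; the activation values, units and margin are invented placeholders, not the Liège team's software.

```python
# A bare-bones sketch of the two-alternative decision rule behind the
# imagery-based yes/no protocol; numbers and threshold are invented.
import numpy as np

def decode_answer(motor_roi, spatial_roi, margin=0.5):
    """Compare mean activation (e.g., percent signal change) in two regions.

    motor_roi   -- samples from the motor-cortex region (tennis imagery, "yes")
    spatial_roi -- samples from the spatial-navigation region ("no")
    Returns "yes", "no", or "uncertain" when the two signals are too close.
    """
    difference = np.mean(motor_roi) - np.mean(spatial_roi)
    if difference > margin:
        return "yes"
    if difference < -margin:
        return "no"
    return "uncertain"

# Fabricated example: strong motor-imagery response, weak spatial one.
motor = np.array([1.8, 2.1, 1.6, 2.0])
spatial = np.array([0.3, 0.5, 0.2, 0.4])
print(decode_answer(motor, spatial))  # -> "yes"
```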

Do you think that consciousness can be reduced to the brain alone? We already know quite a bit about the brain processes that underlie attention, perception and emotions. There is no point in throwing this knowledge out the window. As a neurologist, I see the consequences of brain damage every day. It remains to be discovered whether the brain is the entire story. Scientific research has to be conducted with an open mind. The topic of consciousness is rife with philosophical implications and questions. As a physician, it is my aim to translate this knowledge into practice. It may be frustrating that we currently lack the tools to measure the hundreds of billions of synapses with their tangled mass of neurotransmitters. Nonetheless, I think it is a mistake to infer from this that we can never understand consciousness. --Originally published: Scientific American Online, August 23, 2018.