Hardwired: How Our Instincts to Be Healthy are Making Us Sick [1st ed.] 9783030517281, 9783030517298

For the first time in a thousand years, Americans are experiencing a reversal in lifespan.


English Pages XXV, 164 [183] Year 2021





Table of contents :
Front Matter (Pages i–xxv)
Why a Hospital Is the Most Dangerous Place on Earth (Pages 1–23)
Why Do We Crave Bad Things? (Pages 25–41)
Raising Children on War, Cartoons, and Social Media (Pages 43–69)
The Truth About Happiness (Pages 71–87)
Why Do We Ignore Sleep? (Pages 89–113)
Are We Hardwired for Risk? (Pages 115–136)
From Pandemics to Prosperity: Feeding Our Hardwired Health (Pages 137–156)
Back Matter (Pages 157–164)


Robert S. Barrett, PhD • Louis Hugo Francescutti, MD, PhD

HARDWIRED
How Our Instincts to Be Healthy are Making Us Sick


Robert S. Barrett, University of Alberta, Calgary, AB, Canada

Louis Hugo Francescutti, University of Alberta, Edmonton, AB, Canada

ISBN 978-3-030-51728-1
ISBN 978-3-030-51729-8 (eBook)
https://doi.org/10.1007/978-3-030-51729-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Copernicus imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Book Blurbs

“In this important, informative, and thought-provoking book, Barrett and Francescutti explain the extensive links between social life and health. Filled with alternately inspiring and horrifying anecdotes as well as jaw-dropping (and carefully researched) facts, their book will grab your attention and keep you turning the pages. The difference between a long healthy life and a shorter sicker one often depends more on personal choice and social interaction than on expert medical care. The authors’ combined expertise of medical knowledge and social science research makes them a perfect team to write this book.”
Roy F. Baumeister, co-author of The Power of Bad and Willpower: Rediscovering the Greatest Human Strength

“This book will definitely make you think differently!”
Alan M. Langlieb, MD, MPH, MBA, Former Director, Boosters Project at The Johns Hopkins Bloomberg School of Public Health

“An excellent book that integrates broad scholarship and interesting supportive data and anecdotes.”
Maj-Gen Jean-Robert Bernier, MD, Former Canadian Surgeon General and NATO Chief Medical Advisor

“A fascinating new perspective on health for the twenty-first century.”
Dr. Owen Adams, Vice-President, Canadian Medical Association

“The authors break down, and break through, the entanglements of our behavioral wiring.”
Dr. Deon Louw, Neurosurgeon, Inventor, Movie Producer

“Today’s self-made health calamity is like watching a train wreck in slow motion. This book is timely and essential.”
Dr. Loubert Suddaby, Neurosurgeon, Author, Inventor


Introduction

Health Uncharted

Your chances of surviving this day are far better than they have ever been in all of human history. Never before in our turbulent 2-million-year societal evolution have we understood so much about what it takes to live longer, healthier, and more fulfilling lives. Our ever-increasing medical knowledge, advanced management of disease processes, and respect for personal and industrial safety are at an all-time high – and improving rapidly.

Despite this vast knowledge, many of us feel more sluggish, fatigued, stressed, and run-down than ever before. In the USA, which has more health clubs and gyms than anywhere else in the world, as well as more medical innovation than any other nation on the planet, ill health is on the rise, with obesity rates quickly approaching 50%.1 Depression has become an epidemic: suicide rates are now double homicide rates, and 20% of all college students reported contemplating taking their own lives in the past year.2 Over one-third of Americans suffer from chronic pain, the highest reported level in the world, while consuming some 80% of the global opioid supply [1]. In the last 100 years, average nightly sleep has dropped by over 2 hours, a decline linked to a host of disease processes, from Alzheimer’s disease to diabetes [2].

If you have access to the Internet, you can peruse the abstracts of some 50 million peer-reviewed journal articles in more than 30,000 science journals. The problem is certainly not that we lack the knowledge to be healthier; rather, it is understanding what the particular barriers to health are in today’s modern world [3]. As such, it is not necessarily a question of how we become healthy, but of how we address the things that are stopping us from being healthy. As much as we may like to theorize that more health information will yield better health outcomes, this relationship remains elusive at best and, at worst, entirely false.
So, why is it that we suffer from such poor health and well-being when we are living in the safest, most secure, and most prosperous era in all of human history?

1. See Centers for Disease Control and Prevention, NCHS Data Brief No. 288, 2017.
2. See Harvard University study on depression and anxiety, discussed in “One in Five College Students Reported Thoughts of Suicide in Last Year,” Association of American Universities, 10 September 2018.


Theorist Buckminster Fuller famously proclaimed that it once took 1500 years for all of human knowledge to double. By the twentieth century, this period had shortened to about 100 years [4]. Today, varying estimates suggest that human knowledge is doubling roughly every 13–18 months, and an oft-quoted IBM prediction claims that the doubling rate will soon shorten to an astounding every 11 hours. This prophecy is as horrifying as it is exciting. The idea that what you knew yesterday could be obsolete today raises questions not only about what is real, but about how we even go about educating ourselves.

Medical knowledge, too, is growing at an exponential rate. In 1950, healthcare knowledge was doubling every 50 years; by 2010 the doubling time was less than 5 years; and we are now on track for a doubling of medical knowledge every 2 months [5]. Despite this deluge of data, our actual health standards are in freefall. In some sectors of the American population, lifespan is even reversing – for the first time in a thousand years among wealthy nations. When our collective knowledge of how to stay healthy is increasing exponentially while real health is swiftly decreasing, something is terribly amiss.

The evidence suggests that we are rapidly entering a global public health emergency in which our physiological and psychological well-being is failing to keep pace with the rate of societal and technological change around us. Evolutionary psychologists and biologists point to a sort of “evolutionary mismatch,” by which our evolution, having taken place over millions of years, can be suddenly out of step with our modern world [6]. This leads to a type of maladaptation, in which the behaviors meant to quench our burning biological needs and desires actually cause us harm. To be sure, it’s not that we have forgotten how to survive. In fact, our “caveman within” is superbly evolved to do just that.
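To get a feel for what these doubling periods imply, they can be converted into annual growth factors with a few lines of arithmetic. This is a rough sketch only: the month-to-day conversions are our own approximations, and the 15.5-month figure is simply the midpoint of the 13–18-month estimate quoted above.

```python
# Back-of-the-envelope check of the knowledge-doubling rates quoted above.
doubling_period_days = {
    "1500 years (pre-modern)": 1500 * 365.25,
    "100 years (20th century)": 100 * 365.25,
    "15.5 months (midpoint of today's 13-18 month estimates)": 15.5 * 30.44,
    "11 hours (IBM prediction)": 11 / 24,
}

for label, period in doubling_period_days.items():
    doublings_per_year = 365.25 / period
    growth_per_year = 2 ** doublings_per_year  # total growth factor over one year
    print(f"{label}: {doublings_per_year:.3g} doublings/year, "
          f"x{growth_per_year:.3g} per year")
```

At an 11-hour doubling time, knowledge would double roughly 800 times in a single year, multiplying by a factor with over two hundred digits – one way to see why the prediction is as horrifying as it is exciting.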
The problem arises when these survival instincts – hardwired into our brains and bodies – become immersed in an environment so rich in stimuli that we lack the biological fortitude to manage it. Never before in all of human history has our wellness been so profoundly shaped by our social world. As highly evolved creatures, our instincts to survive are fueled by our brain’s commanding reward circuitry, which creates the powerful urge to eat, procreate, and band together into communal groups. While this may have served us well as hunter-gatherers, our modern world of fast-paced information, global networks, social media, and ready-made foods presents us with an intoxicatingly rich ecosystem, far beyond what our survivalist drives ever evolved to manage. While our ancient ancestors may have had only a few basic decisions to make each day, principally centered on survival, today’s world presents us with a never-ending surplus of choices. Indeed, evolution has gifted our brains with a singular full-throttle setting in our pursuit of gratification – be it food, sex, or social bonding. Adding to our challenges, the more of a stimulus we enjoy, the more of it we need to reach the same level of satisfaction. And, in a world of limitless brain candy, the consequences of such health spirals are truly profound.

Critically, we need to acknowledge that modern wellness can no longer be the sole purview of medicine or biology. As our emerging reality suggests, our health and
well-being must consider the fundamental influence of the social sciences, which analyze why we think and act the way we do. It is only through this combined lens that we can see, more clearly, the way in which our social world governs our physical condition. How we view the world, the relationships we keep, the choices we make, and the things that make us stressed, hungry, or horny can all be attributed to our evolutionary hardwiring, which has evolved to provide us with the basic survival software of life. This is our story – and it is where we begin our journey.

What follows are some of the most fascinating and little-told stories of how our hardwired instincts control our attitudes and behaviors without our awareness, and how these behind-the-scenes processes play an enormous role in determining our health and wellness. Our love affair with pharmaceuticals, for instance, illustrates how our brains and bodies are perfectly evolved to succumb to the intoxicating effects of chemical substances. We have witnessed “controlled” uses – from President Kennedy’s methamphetamines, to SEAL Team Six’s liberal consumption of Ambien prior to the Osama bin Laden raid, to NASA’s recommendation that the Apollo 13 crew take Dexedrine to stay focused during NASA’s “finest hour,” to Hitler’s shocking drug cocktail regime, to Silicon Valley’s trendy biohacking – yet our love affair with opioids and other prescription mind-altering pills is now causing us more collective harm than heroin and cocaine combined. In our hardwired brains, we also create our own in-house “drugs,” in the form of neurotransmitters that feed voraciously off social status. To be liked and admired by others, to feel belonging, and to garner sexual attention are eternally sparking survival instincts upon which social media pours pure gasoline. While we all possess this hardwired software, most of us never realize that it is running in the background, controlling our moods and behavior.
Why, for example, is the 12-degree head tilt for female selfies (and the 8-degree head tilt for men) so optimal on social media? Why was it common for ancient Rome’s high-society women to routinely slink out of their wealthy abodes to pursue torrid midnight sex with gladiator champions? We showcase research on what happens to testosterone levels and risk-taking behavior when teenage boys are observed by attractive female scientists. We take a deep dive into why social media offers us the ultimate platform for “trait amplification,” an evolutionary strategy that we share with the humble guppy fish.

We are also hardwired to seek happiness – an emotion that comes far more naturally than the thousands of self-help books on achieving it would lead us to believe. Happiness can yield healthiness, and the opposite is also true. We explore what a broken heart looks like to a cardiologist, the link between depression and inflammatory diseases, how social distancing can affect our very chromosomes, and why loneliness is more dangerous to our health than smoking.

One of our most basic drives is the need to sleep. Yet our modern world of artificial light and binge-worthy television shows that stimulate the brain and its senses has shortened this critical nightly recharging, altering our schedules, our diets, and our hormones. And, even though the stimulation feels good at the time and satiates our hardwired instincts for reward, this sleepy trend runs counter to our physiological and mental needs, inviting obesity, dementia, and disease. We discuss what happens
when you miss even a small amount of sleep, which parts of the brain fall asleep first if you doze off while driving, how coffee works in the brain, and why nighttime alcohol and food are different from their daytime counterparts.

Children, our most precious beings, are not immune to their innate hardwiring either, although theirs is a special story. How their brains grow and develop, from the bottom up, plays a critical role in why screen time can harm more than it helps. From the frontlines of Syria’s children of war to Switzerland’s role in child slavery, we discuss how stress experienced in childhood has a profound effect on brain development. We delve into the disturbing world of attention-deficit drugs and the reason why teens sometimes take selfies in front of approaching trains.

Our quick evaluation of risk and reward is also a hardwired survival instinct. Have you ever wondered why some people love adventure sports and others do not? We discuss what it feels like to leave your best friend to die on a frozen mountain and why car drivers started to hit more pedestrians when seatbelts became mandatory. We tackle the popular theory that CEOs are more likely to be psychopaths and why the idea of willpower doesn’t always work the way we think it should. We run headlong onto Omaha Beach on D-Day to understand stress and decision-making and to see the horrific assault through the bloodied lens of one of the bravest war photographers of all time. We look at why the television shows Melrose Place and Beverly Hills 90210 led to a public health emergency in Fiji, what Tinder does to our brain, and whether the term “snowflake generation” is a fair one. We discuss how our hardwired brains and social drives perpetuate error in hospitals, regardless of modern medical advances, and learn from pilots and astronauts – from lessons hard-won – how to work with our hardwiring, and not against it, during times of stress.
We consider a historical period in which we fed our hardwiring in positive ways, allowing society to flourish rather than flounder; why eating salad was once thought to be erotic; and how scientists attempted to engineer sexual orientation in a lab. Through stories and science, we explore the hidden realities of our ancient instincts and how they secretly inform our views and our behavior, each and every day. Awareness of how this hidden force – the hardwired “caveman” within – controls our decisions is the first step in understanding who we are and how we can begin taking control of our lives, our health, and our future.

Of the roughly 3.8 billion years that life has existed on Earth, our human ancestors have been here for only 6 million of them. That’s less than two-tenths of 1% of the time. Put another way, if life on Earth were 1000 years old, our first human ancestors would have arrived only about a year and a half ago. On that same 1000-year timeline, modern humans – the ones who actually look like us – would have appeared within roughly the last month.

Each of us carries two sets of genetic code – one from our mother and one from our father. When we make our own sperm or egg, small variations occur when our parents’ genes are copied. For men, who make about 1500 sperm per second, that’s a lot of potential genetic variation. However, of the 6 billion letters that make up a
genome, only about 60 will be slightly changed or mutated in the creation of each new egg or sperm [7] – about one altered letter per hundred million. That’s not a significant percentage. Until quite recently, scientists believed that this variation occurred in far greater numbers, meaning our updated understanding of evolution is that it may be marching along at an even slower pace than first imagined. While we clearly need to make health choices, evolution is a slow bedfellow. We could make the common analogy that evolution moves at a glacial pace – but it’s actually very much slower than that.

So, while humans can dress in haute couture, double kiss, and talk the part of futuristic beings in an increasingly modern world, the reality is that we remain enslaved to our ancient beastliness. Our ancient pre-agrarian brains and bodies never evolved to cope with a world in which a bounty of delights is rarely more than an arm’s reach away. Ten thousand years ago, sensations of hunger served a critical evolutionary purpose, generating powerful motivating urges to “hunt and gather,” efforts that were often rewarded with but a few morsels of food. And while this physiology may have served us well in our ancient past, the very same inner drives are still at play in our modern world, despite our being surrounded by high-calorie fast foods loaded with intoxicating sugar, fat, and salt. With as much junk food as we can eat, social media constantly pinging our brains, miraculous medications that make us calm, and blazingly fast communication in which we can display limitless gestures of status and courtship, we are unwittingly oversaturating our ancient survival instincts. We simply haven’t yet evolved to make the necessary health choices in our current ecosystem, and this inability is playing out in disturbing ways, most notably in deteriorating mental and physical health trends. To be clear, it’s not that our brains and bodies are somehow daft or inept.
Quite the contrary: humans have evolved to become the ultimate survival machines – but for a much harsher time, when our needs and desires were not so quickly satisfied. Reconciling our highly evolved ancient drives with our bountiful modern-day world means, first and foremost, understanding who we are. What are the inner motivations that make us tick? Why do we do the things we do? And what are the evolutionary forces working in the background that propel us to make the decisions that shape our lives? Some have articulated these challenges as a type of Frankenstein Effect, by which humans inevitably create the thing that destroys them. Yet this seems the easy way out of explaining a far more complex set of problems. Humans, as a whole, have an uncanny track record of surviving their own deadly recipes.

Our basic physiological and chemical needs are not the only marionette strings that pull at our daily decisions. Our ancient social world also required us to be highly attuned to our status within groups and communities. We are evolved to ask questions like: Am I strong enough to compete and survive within the group? Do I need protection from group members? Does my community see me as valuable, or am I vulnerable to exile? How do I fit into the social hierarchy? These constructs were critical for human survival and remain equally evident in our modern lives. Like junk food for our bellies, our fast-paced world of social media is fueling this
ancient social hardwiring and compounding it. This is altering our sense of self and, for some, manifesting as a daily battle over the way we value and rank ourselves within our chosen peer groups, families, and communities – often with negative implications for our mental health. Never before has our wellness been so influenced by our social world.

If you feel like it’s getting more difficult to separate fact from fiction when it comes to living your life and making informed health decisions, you’re certainly not alone. While our ancestors may have learned life skills, dietary habits, and child-rearing strategies from their parents and immediate relatives, today we are inundated with information on how to improve ourselves, much of it conflicting, and much of it based on the social currency the message provides the sender rather than any genuine intention of wellness. Not only does this lead to wacky fad diets and extreme habit makeovers – often to win acceptance from one’s online peer group – it can also lead to feelings of confusion, inferiority, and anxiety. According to the American Psychological Association, the most stressed-out demographic in the USA today is the one that grew up with the Internet at its fingertips.3 And it’s not because their brains are being overloaded with round-the-clock information. The human brain, which has about 21,000 times more synapses than there are people on the planet, can hold about as much information as the entire World Wide Web. With this immense storage capacity and our ever-increasing colossal flow of knowledge, you’d think that our collective intelligence would be soaring to new heights. Counterintuitively, this is not the case. In fact, scientists have recently discovered that worldwide Intelligence Quotient (IQ) levels have been steadily dropping.

While some attribute this observation to countries without sufficient formal education experiencing greater birthrates, others point to places like Denmark, the UK, and Australia, where IQ levels all show decline [8]. In Denmark, where military service is mandatory, the Army has tested up to 30,000 young men of exactly the same age group every year since the 1950s. Like Australia and the UK, Denmark’s data indicates that intelligence seems to have peaked around 1998 and has been dropping ever since.

One possible theory of declining IQ is that the computer age has simply helped us foster new forms of intelligence. Out with the old and in with the new. Out with reading Latin and doing long division by candlelight, and in with blisteringly fast hand-eye coordination and video-gaming skills. Processing speed, one might theorize, is our new intelligence. Yet this too is debatable after an analysis of 14 intelligence studies since the Victorian Era indicated that our reaction times have also slowed. Not only would the Victorians be able to beat us at Nintendo, but if we include reaction time as a measure of IQ – which many scientists do – it equates to a 13.35-point decline in IQ since the late 1800s [9].

Were our grade-four teachers correct when they said that calculators would make us dumb? Do spellcheck and autocorrect make our kindergarten-level writing fall onto the page like Shakespearean prose? Is the bounty of information and knowledge that is pushed to our brains each day, already half-digested by media hubs, tampering with our capacity to think critically, or to quickly separate fact from fiction? The everyday choices we make – our health, our consumption, our work, and our family life – may provide clues as to how well, or how poorly, we are really navigating the heavy haze of information that flows into and fills every nook and cranny of our world.

If we look back to the first and largest era of human development – the “prehistory” period – we see that it actually spanned most of our entire evolution, from 3.3 million years ago to about 5000 years ago. The other nine or so major eras of human development have all occurred since then, in the most recent 5000 years. Admittedly, throughout much of this evolutionary history, the world around us changed very little. Yet, as we move toward the present day, and particularly in the last 100 years, the changes we see in society have been accelerating at an astonishing rate. We can imagine that the world 2000 years ago would have been fairly recognizable to people living 500 years before. However, our world today would be nearly incomprehensible to someone looking at it from merely a generation ago. Sure, some things haven’t changed – hamburgers and pancakes look much the same – but the things that have changed have changed remarkably. If a time traveler from the 1970s were to visit our present day, our world, and specifically its technological advances, would seem completely foreign. Standing on the street, in an airport, or on a bus, the first thing the 1970s traveler would surely notice would be the downward gaze of nearly every member of the public – looking at, or tapping on, their remarkable handheld rectangular tablets. Our capacity for societal progress has not always been limited by our ability to absorb fast-flowing information or pixels on a screen.

3. Stress in America, 2012. American Psychological Association, Washington, DC.
The Greek origin of the word “technology” combines the concepts of “art” (techne) and “logos.”4 The original use of the word therefore included the application of art as a measure of technological advancement, in addition to new forms of mechanization and tools. We might infer, then, that progress depends upon our capacity to integrate hard technical achievements with our innate social drives. Yet, for much of our history, societal evolution has been defined by technological achievements alone. The bronze age, the iron age, the industrial age, and the atomic age are examples of epochs in which the human experience fundamentally shifted – primarily on the basis of specific material breakthroughs. All of these periods propelled the human race to new heights, and while each era was ultimately tied to chemical or physical achievements, or smarter engineering solutions, they all represented technological advancements created by humans – for humans. Today, such “man-made” solutions are, by far, the dominant form of change. Particularly in the last several thousand years, and certainly in the last several hundred, human technological advancements have constituted the most significant and rapid human ecosystem shifts in history.

We know, from our study of Darwin, that when a species is faced with ecological change, evolutionary processes will favor particular genetic attributes, or select against unfavorable ones. Ultimately, over very long periods of time, a species may acquire specific traits that enable its population to adapt to emerging environmental realities. But what if the environmental changes happen so rapidly that a species, like humans, has little or no time to adapt? Like the hapless lumbering dinosaurs, whose magnificently long tenure came to an abrupt and deadly halt, significant ecosystem changes can very quickly push a species outside its normal zone of survivability. Today’s human challenges are not quite so meteoric in nature, but they are governed by the same fundamental principle – an inability to adapt in step with change, a problem that is putting increasing strain on our health and well-being. While many of us may well feel this life pressure, our understanding of its root causes, and of why such feelings are different from before, can be a difficult mystery to solve.

Among the many fantastic evolutionary human adaptations is our ability to maintain a level of physiological homeostasis, with our food cravings and our desires carefully manipulated and balanced by our bodies to meet our precise physiological needs. Sugar, sodium, and pH levels, as well as our drives to eat, sleep, and procreate, are all tied to processes that favored our evolutionary survival. Yet such incredible physiological hardwiring took thousands of years to emerge, developing in relatively stable environmental conditions. If you’re wondering how long it takes to adapt to an ecosystem change, a fascinating example of humans altering their own environment – and of the real evolutionary change that resulted – is our relationship with lactose, the sugar that makes up around 8% of milk. With very rare exception, baby mammals are all born with the ability to digest lactose, their intestinal villi secreting the enzyme lactase.

4. “History of Technology,” by Robert Angus Buchanan. Encyclopedia Britannica, online.
For most of human history, humans have lacked the ability to digest milk beyond weaning, in large part because there were no forms of exposure to dairy products after our mothers’ own milk. Today, some 65% of the world’s population does not possess lactase, the enzyme needed to digest milk products, and is, by definition, lactose intolerant. However, regional percentages vary dramatically around the world. If you’ve ever enjoyed a buffet breakfast in central or northern Europe, you’ll have seen a wide variety of cheese and yoghurt, while a breakfast buffet in Asia would likely feature very few dairy products. Not surprisingly, only 5% of northern Europeans are lactose intolerant, while in Asia, intolerance to dairy runs close to 90%.5 For millennia, humans had little or no exposure to dairy in adulthood – that is, until some 7500 years ago, when the domestication of cattle, sheep, and goats resulted in the production of milk products, particularly in northern Europe, where dairy could keep longer in the cooler climate. The process by which societal or cultural change can lead to evolutionary adaptation is called gene-culture coevolution, or dual-inheritance theory, and dairying is a great example. Societies that did not domesticate dairy livestock to the same degree as northern Europeans now tend to lack the genetic trait for lactose digestion. Unlike many other animals, humans have the unique capacity to introduce cultural change into their own ecosystem.

As with the proverbial tortoise and the hare, rapid hare-like ecosystem changes challenge our slow-paced tortoise-like evolution, which, as in the fable, can catch up if granted enough time. Our brains and bodies have been honed through unhurried, steady evolutionary processes to present the fittest possible solutions for survival. Today, our societal advances are racing wildly ahead, presenting us with new environmental realities that completely outpace our capacity to keep up. What’s left is a widening gap between our slow biological adaptation and the challenges our new environment poses. To be clear, it’s not that humans have stopped evolving; it’s that our physiology and our minds are currently built for a previous time. The “gap” between where we are today and where we ought to be is playing out in negative and stressful ways, seen primarily in our social, mental, and physical well-being.

Consider food. Our brains and bodies are superbly evolved to seek out nutrients that are high in energy, such as foods rich in sugar and fat. Thousands of years ago, if we were lucky, our daily foraging may have provided us with a few scraps of nourishment. Today, equipped with the same drive for sugar and fat, our brains leap to attention when we happen upon a sugary or salty vending-machine snack or a hamburger-and-fries drive-through. When these conveniences are on every corner, what do we do? We eat, and eat, and eat some more, just as our brains and bodies have been so perfectly honed to do. Add to this our increased activity at night under artificial lighting, together with elevated anxiety and stress levels, and we are keeping our daily hunger sensation active much longer into the evening hours than our ancestors ever experienced.

5. Lactose Intolerance. Genetics Home Reference, U.S. National Library of Medicine. https://ghr.nlm.nih.gov
However, this is not simply a story of brain chemistry and reward mechanisms; it's also one of social change, how we communicate with others, what we use to entertain and stimulate our brains and bodies, and how we live within our families and communities. Today, the most rapid of all man-made changes is occurring at the level of technology – especially the integration of personal computing and digital communication into our everyday social lives. The way in which we use media, particularly as an all-day social tool, is creating visible and growing public-health fault lines. Yet, unlike the ill-fated dinosaurs, whose demise was entirely the result of external forces of nature, the trigger for our maladaptive strategies is almost entirely of our own doing. The very same caveman instincts that afforded us survival in our ancient social settings are now awash in waves of technological jet fuel, creating a host of problems with the way we communicate, the way we view our own lives, and the way we rank our social status against those around us. For many, the result of life in this social climate is one in which we are left searching for meaning, value, and purpose.

The irony of this situation is that while the gap between societal change and physiological adaptation is widening – and causing a host of social maladies – we are, generally speaking, somewhat blind to the problem. In many other situations that challenge us, such as divorce, the death of a loved one, or job loss, we realize that the situation is grim and we understand that we must find ways to adapt, heal, and grow. Today, most of us would be reluctant to slow or stop the technological advancements or modern-day conveniences that make our lives "easier," even though their effects need our attention.

There are many who advocate and promote the concept of "resilience" as the solution to today's added stressors – a theme that has become a cottage industry for countless writers and consultants. Since its etymological origins in the 1620s, from the Latin "resilire," the word has experienced little variation. One might even say the word resilience has been resilient itself. The idea of "springing back" or being "elastic" is central to the concept of resilience, and while noble in theory, the fond application of the word to current challenges may fall short. As a remedy for modern stress, advocates of resilience suggest we exhibit a type of strength or power to resist environmental or ecosystem pressures, avoid unwanted negative influence, and return to our previous state of being. There are countless self-help books on resilience, volumes on how to improve resilience in children, and even think-tanks and institutes dedicated to the word. To be resilient, one must foster a type of intelligence or quality that provides isolation or immunity from the forces at play around us. However, the idea that we retain our capacity to return to a previous state, regardless of our environmental demands, runs counter to the very notion of adaptation. The pace of change around us requires recognition that a rather unique and accelerated adaptive process is required. As with our lactose story, gene-culture coevolution considers the possibility that cultural changes can lead to actual evolutionary adaptations. Indeed, paleo-archeologists have suggested that cultural learning leading to evolutionary adaptation has been occurring for over 280,000 years [10]. But this is an extremely slow process, which is surely why the idea of resilience is so attractive, as it promises immediate relief.
Indeed, resisting change, or finding ways to return to one's original state in spite of societal demands, is a simple and elegant proposition. Yet returning to a previous state that is ill-equipped to handle our modern world is not an entirely viable solution. A far better path is to empower ourselves with real knowledge and understanding as to how and why our masterfully engineered brains and bodies push us in unwelcome directions in the face of modern challenges. While we once possessed the wisdom to hunt and gather in order to survive and improve our lives, today we must empower ourselves with knowledge and understanding in order to ensure that our total health and well-being is supported. Presently, we are going in the very opposite direction. Surviving challenge means adapting to the strong cultural riptides that are all too common in a world in which mere second-long thoughts are considered "lingering." It means being able to understand how our bodies and our minds react to the stimuli around us, and how, at times, we so often screw it up. It means learning to sift information – to understand what keeps us steady and on course when we encounter competing expert opinions on health, relationships, or work. And it means understanding how new forms of media bend the light through the various lenses we use to see and evaluate our lives, how they can tamper with our moods, and how they can affect our bonds of love and friendship.


As is so characteristic of our era, the road to success requires, first and foremost, a healthy ability to filter information – to distill the important soundbites from the white noise. It's not difficult to feel the pressure of information overload when we open our web browsers each day, which can sometimes have the undesired effect of causing us to shut down our filters altogether. For many of us, it can be too daunting to keep track of all the new studies and opinions on how we should live, or what "breakthrough" bits of knowledge we should cling to this year or next – so, in order to protect ourselves and maintain some semblance of normalcy, many of us detach. Sadly, it's not uncommon to hear people mockingly explain away new scientific evidence on diet and exercise as fleeting academic fancy, arguing that new health claims and warnings are simply the latest trends – here today and gone tomorrow. We've all heard the argument about what our grandparents ate and that it didn't kill them, spoken as proof that "science" is always conjuring up new ways to disrupt common sense. Cynicism surrounding new knowledge can be addictive because it helps shield us from having to process or cope with overwhelming volumes of new data and opinions – particularly if those new bits of evidence demand a change to our established habits or beliefs. Try, for example, to tell a host who just delivered a tray of sizzling barbecued hotdogs to jubilant children at a birthday party that the World Health Organization recently declared processed meat a "Group 1" carcinogen, alongside the likes of tobacco, asbestos, and mustard gas.6 The bearer of said dogs would probably disavow you as a friend, despite the fact that you were acting to protect the children.
In fact, the host would probably dismiss you as a preachy, fad-following "nut-job" because, as sure as summer barbecues by the lake are fun, to many folks, hotdogs have been around for as long as kids have had mouths to eat them! One point for tradition – zero for reason. You might clinch the award for diabolical party-pooper if you reminded everyone that the kids' root beer and juice contain around 11 teaspoons of sugar per cup.7 They might pretend not to hear you, which is not at all surprising when, to our brain, sugar is more addictive than cocaine [11]. Try spoon-feeding 11 teaspoons of white sugar to a toddler in front of a group of moms and see what happens. They'd surely turn themselves inside-out to stop you. We have all this scientific information flowing to our eyes and ears, and while we inherently know what's good for us and what is not – somehow, above all else, we are still at the mercy of our hardwiring. It's logical that we should crave some measure of stability in our lives, especially if we have witnessed scientific "indecision" over the years on issues of lifestyle and health. Why was something good for us a short while ago, then not – and now good for us again? Logically, we lose trust in the scientific community. We've also seen doctors promoting competing diets at the same time: low saturated fat versus high saturated fat (each advertised as healthier than the other), Paleo diets versus Mediterranean diets, to eat eggs or not to eat eggs, dairy versus no dairy, high fish consumption versus mercury poisoning, alcohol and heart disease versus heart-healthy red wine, no salt versus some salt, and our personal all-time favorite, coffee-bad versus coffee-good. We only need to experience this scientific flip-flopping, or these competing narratives, a few times before we become skeptical of new information and advances.

Perhaps even more dangerous, however, are those who cling to every pop-culture "health" trend they see. Many flock to what they perceive to be enlightened new-age wisdom, often from famous Hollywood celebrities whose near-limitless resources can (it is believed) be relied upon to unearth the secrets to life, health, and happiness. In a world where information never stops competing for our attention, dryly written scientific abstracts from evidence-based journals provide little satiation for our hungry brains – particularly when compared to attractively edited and often revealing Instagram photos of celebrities showing off their toned body parts, all in the name of health. To be fair, many of these uber-popular celebs do work extremely hard on their craft and their fitness, mercilessly honing their physiques through punishing workout regimes – which, by the way, almost always boil down to highly disciplined eating and vigorous exercise. Actors like Hugh Jackman, Gerard Butler, Ryan Reynolds, Gal Gadot, Brie Larson, and Daniel Craig work insanely hard to prepare for their physical roles. Yet other celebrities provide such a bizarre departure from accepted health and fitness standards that they offer a superb outlet for cult-like medical counterculture groupies bored with convention.

6 Press Release No. 240. IARC Monographs evaluate consumption of red meat and processed meat. International Agency for Research on Cancer. World Health Organization, 26 Oct 2015.
7 How Much Sugar and Calories are in Your Favourite Drink? National Institute of Health Publication. We Can Series. U.S. Department of Health & Human Services. Retrieved Apr 2016.
Whether it's an unhealthy throwback trend like wearing highly constrictive corsets that narrow one's waist, or truly bizarre diets such as the "kale and chewing gum" diet or the "coffee and butter" diet (both actual diets!), to you-can't-make-this-stuff-up gems like steaming one's nether-regions, these freakish health fads often win favor when pitted against uninspiring and often noncommittal scientific research. Adherents to these outlandish fads not only suffer from chronic disenchantment over glacially slow scientific "debate" (as we are fond of calling it); they, like many of us, have been seduced by an instantaneous selfie-culture that makes it far more interesting to scroll through up-to-the-minute bathroom photos of celebrities showing off their "bikini bridges," "8-packs," and inner "thigh gaps" than research studies that, in the end, often make lackluster revelations about the need for even more research. Body-part selfies, or "body badges," provide never-ending brain-food missiles for vulnerable online masses who wish to be liked as much as their silver-screen or music heroes. At best, these body trends provide unattainable standards that can lead to anxiety or depression and, at worst, serious medical harm.

It turns out that no demographic sector of society is really spared this strong pop-culture tidal wave, but some may be particularly vulnerable. According to the American Psychological Association, Millennials – those born from the early 1980s to the early 1990s and those who are just reaching adulthood now – are currently the most stressed-out segment of society.8 Contributing to this stress is undoubtedly our 24-hour fixation with merciless forms of online messaging that provide never-ending reminders of what ideal beauty standards look like, as well as the exciting and ostentatiously expensive lifestyles of others. Indeed, creating a perfect version of ourselves in social media is the modern era's version of "keeping up with the Joneses" – and it's taking a real toll on our health. A UK study by the All Party Parliamentary Group on Body Image recently presented disturbing evidence that anxiety over one's body is reaching epidemic proportions – with almost 10 million women in the UK reporting that they feel depressed because of the way they look, and nearly a quarter of all women avoiding exercise because they are too self-conscious about how their bodies appear.9 According to the study, low "body confidence" is so widespread it has become a public health emergency that now affects both genders – and, disturbingly, children as young as 5 years old. The information age – in all forms of online media, including "social" – was cited as constituting today's biggest threat to positive body image, with over half of the respondents in the UK survey admitting they feel completely powerless to control it. The youngest members of our society have never known a world without instantaneous online media at their fingertips. But, despite their inherent digital dexterity, 24-hour access to imagery that promotes certain lofty standards of money, body image, and lifestyle is worrying mental health professionals, who note that rates of depression in teenagers have increased by 70% over the past 25 years, particularly since the advent of the Internet [12]. More mature folks are not immune either. Although statistics show that "baby boomers" and seniors use the Internet regularly, they are less likely to use mobile devices for all-day connectivity.

8 Stress in America, 2012. American Psychological Association. Washington, DC.
Yet older generations may be vulnerable in a different way: they do not possess the seemingly built-in online agility of their younger peers when it comes to sifting through massive search results to tease out fact from fiction. As the UK study of adolescents highlighted, we have lost the capacity to shield our youth from the Internet of life. Indeed, we probably shouldn't, or needn't, try.

How, then, do we reconcile the competing forces of new health and wellness advances with our craving for simplification? On one hand, we need to be open and flexible to new medical discoveries and lifestyle recommendations, even if they challenge our preferred or hard-to-kick habits; on the other hand, we must build a type of immunity against misinformation and junk science. Both are a challenge. Surviving this information tidal wave is a to-and-fro battle that requires a special form of defense. And this defense begins with understanding how our highly evolved brains and bodies are reacting to our new modern world. Despite our lifespans being longer than generations past – in large part due to medical advancements – our basic human physiology and brains remain mercilessly vulnerable to lifestyle choices and behaviors that run counter to our health and longevity. Today, 33% of all adults in the USA are considered obese, and 75% of men are now classified as overweight or obese.10 That's an astounding statistic. The Centers for Disease Control and Prevention indicate that obesity-related conditions – diabetes, poor diet, and physical inactivity (in addition to being overweight) – constitute the bulk of heart disease risk factors.11 In fact, in America today, 25% of those walking the streets will die of heart disease [13]. Moreover, heart disease is no longer a "man's disease." In fact, it is now the leading cause of death for women in the USA. Yet, when surveyed, nearly half of all women are not aware of this deadly fact [14]. With all this ill-health, it's not uncommon to solicit the miracles of modern medicine at one of our numerous hospitals, as well as the wondrous pharmaceuticals that de-clog our arteries, reduce our blood pressure, and stabilize our sugars. And yet, many of us are unaware that hospitals are actually extraordinarily hazardous to our health due to the incredible and underreported levels of error that occur within them. In fact, preventable medical error in US hospitals is now the third leading cause of death in the USA, behind heart disease and cancer [15]. Feel free to read that again. The things that are most likely to kill you in the USA are: (1) heart disease, (2) cancer, and (3) mistakes in hospitals. According to the Leapfrog Group's Hospital Safety Score, an estimated 440,000 Americans needlessly die each year in hospitals from errors that were entirely preventable. Shockingly, the Health Grades Report indicates that 40,000 people are harmed in US hospitals each day, meaning that one in four people who walk in the door of a hospital is likely to suffer harm [16]. In countries like Canada, the per capita numbers are similar, if not worse [17]. Despite the incredible number of adverse events in hospitals, few are ever publicly reported – unlike car crashes or aviation incidents, which are fully disclosed and recorded as a matter of public record.

9 Campaign for Body Confidence. 2012. All Party Parliamentary Group on Body Image. London.
While some of these errors are medication-related, most fall into non-technical categories like communication failures, decision biases, leadership issues, diagnostic errors, or poor patient handovers, which makes them far more difficult to categorize and track. Of course, hospitals are there for a reason. If we get sick or injured, require surgery, or need to deliver a baby, then hospitals are our best, if not only, bet. But we need to be smarter about our health and well-being and give careful consideration to the habits that will keep us healthy and out of hospitals unless absolutely necessary. Building adaptive solutions to these modern challenges requires a combination of social and medical awareness. This is new. It means being able to understand how around-the-clock online media may threaten to undermine our self-image, how stress forms when trying to maintain social appearances, how anxiety amidst our fast-paced world is causing us long-term harm, and how bizarre health fads can damage us. Making sense of these changes is both daunting and empowering – and it is the first step to taking charge of our lives and our well-being.

10 Overweight and Obesity Statistics. 2011. US Department of Health and Human Services. NIH Publication 04-4158. Updated October 2012.
11 Women and Heart Disease Fact Sheet: Heart Disease Death Rates, 2011–2013. Centers for Disease Control and Prevention.


Research shows that you are not alone – not by a long shot – if you feel the silent undertow of stress pulling at you while you muster a brave face for the outside world. For many, this duality between the "on-the-surface me" and the "below-the-surface me" is becoming increasingly apparent and is emerging as a significant theme in physical and mental well-being. Nearly one in three Americans now suffers from chronic stress conditions that include anxiety, irritability, sleeplessness, and fatigue [18]. Over one in three Americans now say they are suffering from chronic pain, with over half of the population experiencing at least one symptom of chronic inflammatory disease [19]. Depression, which, according to the World Health Organization, is now the world's leading disability, has been increasingly linked to disease-promoting inflammation [20]. With no generation spared, anxiety and stress are beginning to increase in older generations too, a demographic traditionally known as being among the lowest sufferers of mental stress. Today, emerging adults, from their teens to late 30s, who should feel the most eager about their future, bear the brunt of it as the most stressed-out generation in America. Indeed, the bridge between mental well-being and physical well-being is becoming increasingly observable and defined.

Inspired to understand the nexus between our evolving social world and our health, the authors of this book – Dr. Robert Barrett and Dr. Louis Hugo Francescutti – first recognized this relationship and pattern in many of their public health projects. It became clear that the most pressing health trends and challenges in society today could only be truly understood from a combined medical and social framework. The combination of Dr. Rob's social science and humanities insight and Dr. Lou's medical prowess is just what the doctors ordered in capturing the unique forces at play in our changing world. In getting to the root of our behavioral choices, Dr. Rob has worked on projects spanning violent conflict and terrorism, through surgical team performance, to astronaut crew conflict for future Mars missions. Dr. Lou, an emergency medicine physician and professor, the outgoing President of the Canadian Medical Association and the former President of the Royal College of Physicians & Surgeons of Canada, is one of the world's leading experts and thought leaders in medicine and public health. With a shared interest in human behavior and well-being, Drs. Rob and Lou found a natural partnership in diagnosing the nuanced relationship between our social world and our physical and mental health. In the course of their work, they discovered a world in which physical and mental health are influenced more by social change than perhaps at any other time in our history.

Throughout this book we will dig deep into some of the most fascinating and little-told stories of modern health and well-being. We will learn how our social world, which is key to influencing the way we think, is often rooted in ancient means of survival. We will introduce numerous examples and cases that demonstrate how our prehistoric hardwiring, so perfectly evolved to keep us healthy, is now doing us harm. Finally, we will underscore the inseparable new connection between our physical health and social world, and how these two domains interact within our subconscious to direct our decisions and behavior.


Before we dive in, it may help to provide a quick sense of the book's direction. When we talk of global health trends, we acknowledge that many of the most significant and detrimental trends are occurring in parts of the world where people do not enjoy basic levels of sanitation or healthcare. The World Health Organization's reports on the Sustainable Development Goals (SDGs) admit as much [21]. Poor health outcomes, from maternal and women's health through to access to clean water and vaccines, are still a significant problem for billions of the world's people. While we appreciate these immense health challenges – and have certainly not forgotten them – this book focuses on a different sort of problem. Perhaps equally complex, our work considers why we are seeing deteriorating health in the most advanced societies, which, one might assume, given their near limitless technology, research, and information, should be somewhat inoculated against poor health. Many of the growing adverse health trends in countries like the USA, the UK, and Canada run counter to our increasing collective knowledge of medicine and disease. By all measures, we should be getting healthier. If knowledge on how to live a healthier life is going up and health is going down, the problem is greater than delivering medicine or orchestrating public health campaigns. The lifestyle decisions we are making are seemingly misaligned with the wealth of information at our fingertips, which means that something else is driving our behavior. Perhaps, at this point, it is worth mentioning that not all health indicators are going in the wrong direction. Yet those we have chosen to highlight in this book are significant, and they rank as some of the most contributory to disease and mortality among those enjoying the fruits of our modern world.
Many self-help books focus on single solutions or mantras that often fall into one of two broad camps: the social sciences or the biological sciences (including medicine). What we have discovered is that each, when viewed separately, tends to fall short of capturing the uniqueness of today's modern health challenges. Only by combining the social and the biological can we understand the growing divide between modern society and health. Other clues, like growing anxiety levels, inflammatory diseases and pain pathways, prescription medicine addiction, and poor lifestyle habits, indicate that other social forces are at play. While our brain and basic physiology remain relatively unchanged during recent history, the world around us – the conveniences, the way we communicate, and our expectations – is changing inordinately. Our basic hardwired drives, like a type of built-in evolutionary software, remain intact, active, and responsive to the modern ecosystem, although now, the very hardwired instincts that were meant to keep us healthy are making us sick.

This book does not pretend to offer a one-solution-fits-all prescription. In many respects, we are facing the greatest health challenges of our generation, and it would seem somewhat disingenuous to try to pin that all down with one cure-all remedy. Our aim has always been to unearth, to shed light on, and to discuss many of the most troublesome health trends of our time and to underscore the connection between our social and physical health. In the course of investigating hardwired instincts, it is apparent that human history offers some truly fascinating examples – many largely untold. Certainly, our story need not be a dull one, and this is the flavor that we bring forward in the incredibly varied stories and scientific studies that we have selected to help guide our discussion.

Capitalizing on Dr. Lou's unique insider understanding of emergency medicine and Dr. Rob's extensive background in aviation safety and human factors, we begin the book with a backstage tour of patient safety in our modern hospital system. Chapter 1's shocking statistics on error rates underscore the fact that even our greatest technological advances cannot overcome our humanness. The not-so-subtle message of the chapter is to stay healthy enough to avoid visiting these hospitals in the first place. The chapter sets the stage for the rest of the book's discussion of some of our most pressing, yet little recognized, emergent health trends. Chapter 2, on sugar, salt, fat, and stress, explores our most common modern-day dietary afflictions and why these are so difficult to shake because of how our ancient, yet magnificent, physiology has evolved. Chapter 3, on brain development, discusses how the human brain, which is so very unique in the animal kingdom, is having such an extraordinarily difficult time keeping pace with our new social technologies, and what this means for our health and wellness, from childhood through to adulthood. Happiness, a theme that dominates much of our modern wellness literature, is the focus of Chap. 4, in which we ask why we know so little about how to attain it. This chapter unearths the link between our social world, our happiness, and our health. Chapter 5 is devoted to the role of sleep, not only as an inarguable necessity of life but as a biological function that has been under full-scale assault because of how our social and cultural worlds are diminishing its importance. No understanding of our human drives would be complete without a deeper look at how humans evaluate risk and make decisions. Risk and reward, the topic of Chap.
6, delves into this mysterious theme, to bring about a much richer understanding of why we do the things we do. Finally, Chap. 7, on hardwired health, reminds us that even during times when all seems lost, humans have an uncanny talent for rising from the ashes. Referencing a fascinating and real historical period that provides evidence of improved health alongside vast societal advancements, we are able to understand that it is not by ignoring our hardwiring, but by capitalizing on it, that we are able to achieve our greatest triumphs. This book is written to illuminate our human condition in a fast-moving modern world. It is densely packed with rarely discussed facts and figures about how you, dear reader, function. The aim of this book is not to provide a quick fix – it is to provide perspective and understanding.

References

1. Khazan O. America experiences more pain than other countries. The Atlantic, 20 Dec 2017.
2. Jones JM. In U.S., 40% get less than recommended amount of sleep. Gallup, 19 Dec 2013. See also: Howe N. America the sleep-deprived. Forbes, 18 Aug 2017.
3. Boon S. 21st century science overload. Canadian Science Publishing, 7 Jan 2017.
4. Fuller RB. Critical path. New York: St. Martin's Press; 1981.
5. Corish B. Medical knowledge doubles every few months; how can clinicians keep up? Elsevier Connect, 23 Apr 2018.
6. Lieberman DE. The story of the human body: evolution, health, and disease. New York: Vintage; 2014.
7. Welsh J. Humans evolving slower than expected. Live Science, 15 June 2011.
8. Holmes B. Brain drain: are we evolving stupidity? New Scientist, 20 Aug 2014.
9. Woodley MA, Nijenhuis J, Murphy R. Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time. Intelligence. 2014;41(6):843–50.
10. Henrich J, McElreath R. Dual-inheritance theory: the evolution of human cultural capacities and cultural evolution. In: Oxford handbook of evolutionary psychology. Oxford: Oxford University Press; 2007.
11. Ahmed SH, Guillem K, Vandaele Y. Sugar addiction: pushing the drug-sugar analogy to the limit. Curr Opin Clin Nutr Metab Care. 2013;16(4):434–9.
12. Mental Health Foundation. Childhood and adolescent mental health: understanding the lifetime impacts. London: Mental Health Foundation; 2004.
13. Kochanek KD, Xu JQ, Murphy SL, Minino AM, Kung HC. Deaths: final data for 2009. Natl Vital Stat Rep. 2011;60(3):1–116.
14. Mosca L, Mochari-Greenberger H, Dolor RJ, Newby LK, Robb KJ. Twelve-year follow-up of American women's awareness of cardiovascular disease risk and barriers to heart health. Circ Cardiovasc Qual Outcomes. 2010;3:120–7.
15. The Leapfrog Group. Hospital errors are the third leading cause of death in the U.S. and new hospital safety scores show improvements are too slow. Hospital Safety Score. Washington, DC, 23 Oct 2013.
16. Landrigan CP, et al. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363:2125–34.
17. Baker GR, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. Can Med Assoc J. 2004;170(11):1678–86. See also: Society, the Individual, and Medicine. Patient Safety. University of Ottawa. http://www.med.uottawa.ca/sim/data/Patient_Safety_Intro_e.htm. Retrieved 5 May 2016.
18. Winerman L. By the numbers: our stressed-out nation. Am Psychol Assoc. 2017;48(11):80.
19. Gambini B. Pain is widespread, legitimate problem that must be remembered amidst opioid concerns, researcher says. 2017.
20. Azab M. The brain on fire: depression and inflammation. Psychology Today, 29 Oct 2018.
21. World Health Organization. World Health Statistics overview 2019: monitoring health for the SDGs, sustainable development goals. Geneva: World Health Organization; 2019.

Contents

1 Why a Hospital Is the Most Dangerous Place on Earth  1
  References  22
2 Why Do We Crave Bad Things?  25
  Reference  41
3 Raising Children on War, Cartoons, and Social Media  43
  Reference  69
4 The Truth About Happiness  71
  References  86
5 Why Do We Ignore Sleep?  89
  References  111
6 Are We Hardwired for Risk?  115
  References  135
7 From Pandemics to Prosperity: Feeding Our Hardwired Health  137
  References  156
Index  157

1  Why a Hospital Is the Most Dangerous Place on Earth

Statistically, you are safer as a soldier fighting in a war zone than you are in a modern American hospital. During the deadliest year of the Iraq war, in the midst of the "Surge" in 2007, when the United States had approximately 160,000 "boots on the ground," the United States suffered a loss of 904 service personnel [1]. During the same period, approximately 35.1 million Americans sought medical treatment in US hospitals, and of that number, researchers estimate that some 400,000 died of "preventable" errors [2]. This number does not include those who died from their injuries, from trauma, or from so-called "natural" causes such as heart disease, stroke, or cancer while under hospital care. Crunching the numbers, roughly 1 out of every 200 servicemen and women deployed to Iraq during the Surge stood a chance of dying. By comparison, just over 1 out of every 100 people visiting a US hospital was at risk of dying because of a preventable error. Put bluntly, your chance of dying in an American hospital due to an error committed by hospital staff was greater than your chance of dying as a soldier in the deadliest year of the Iraq war.

In considering the role our social world plays in our health, there is no better place to start than at the very heart of medicine – our hospitals. With an abundance of technological innovations and diagnostic equipment at their fingertips, medical professionals should be the least likely to succumb to the ancient social hardwiring that imperils our health and wellbeing. And yet the reality is that these very tendencies render hospitals the most dangerous places on the planet.

Medical care has a lot of moving parts. Our physiology can be highly predictable at times and at other times present as an uncrackable enigma. Individuals may respond to medications differently, have varying levels of immunity, or have underlying health conditions that can complicate recovery.
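The back-of-the-envelope comparison above can be checked with a short script. This is only a sketch; the inputs are the chapter's own estimates, not independent data:

```python
# Sketch: verify the chapter's risk comparison using its own figures.
iraq_deaths, deployed = 904, 160_000               # Surge-year losses, troops deployed
hospital_deaths, admissions = 400_000, 35_100_000  # preventable-error deaths, patients

soldier_risk = iraq_deaths / deployed        # roughly 1 in 177, i.e. about "1 in 200"
patient_risk = hospital_deaths / admissions  # roughly 1 in 88, "just over 1 in 100"

print(f"soldier risk: 1 in {round(deployed / iraq_deaths)}")       # 1 in 177
print(f"patient risk: 1 in {round(admissions / hospital_deaths)}") # 1 in 88
print(patient_risk > soldier_risk)  # True: the hospital figure is the larger risk
```

Rounded to the chapter's language, "about 1 in 200" versus "just over 1 in 100" follows directly from these two divisions.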
Adding to this complexity, we typically show up on the hospital doorstep in rough shape, quite often with unexplained, deteriorating symptoms for which we desire, and require, immediate resolution. As we journey into the hospital labyrinth, we will undoubtedly be analyzed and treated by different types of healthcare professionals with varying levels of experience and areas of expertise. And while highly trained, doctors and nurses are first and foremost human and are just as vulnerable as any of us to fatigue,
errors of communication, noisy environments, overtasking, hunger, and toxic team environments. Where physiological complexities meet the perplexities and vagaries of modern diagnoses and medical care, there is a cavernous chasm for error. Despite the often valiant and even heroic efforts of our doctors and nurses, who deliver downright brilliant outcomes each and every day, chronic system failures in patient safety inside care facilities, combined with serious inefficiencies in team coordination and communication, mean that modern hospitals continue to be a breeding ground for error.

There is perhaps no faster-advancing field than biotech medicine – and no greater arena for experiencing the gap between innovation and adaptation. In the early 2000s, the first human genome was mapped at a total cost of $3 billion. Today, the same genome mapping costs less than $1000, with the price tag dropping rapidly [3]. Technologies such as CRISPR (clustered regularly interspaced short palindromic repeats) may soon permit scientists to engineer human beings that are short on disease and big on physical and mental acuity by using the enzyme Cas9 to "edit" specific strands of human DNA. The cost of CRISPR technology? Around $40. Yes, 40. With hundreds of labs around the world exploring everything from eradicating disease to creating super-strength humans, it's no wonder CRISPR has garnered so much eyebrow-raising attention [4].

There is no greater need for adaptation than in the face of human-cultivated medical advancements. And yet, so much of what fails us, as humans and practitioners, is not the technical side of the medical world but the non-technical – our leadership, our decision-making, and our communication. Our very humanness, and particularly our biases around social interaction and decision-making, continue to undermine and limit our ability to fully utilize the advancements of our modern medical world.
On a cold winter day, Andrew, a 58-year-old male patient, was admitted to the prestigious New York Hospital on Manhattan's Upper East Side for a rather uncomplicated removal of an inflamed gall bladder. Although he had been suffering from gall bladder issues for some time, he was petrified of hospitals and doctors and had long been hesitant about the procedure. Yet a sonogram ordered 3 days earlier indicated that surgery was, unfortunately, a necessity. Mustering the strength to overcome his fear of hospitals, Andrew underwent gall bladder surgery at 8:45 am the next morning and woke in good postoperative condition. He spent 3 hours in the recovery room and was then moved to a private hospital room with a dedicated nurse.

Throughout the night, the nurse checked in on him periodically, recording his blood pressure and vitals as stable. But when the nurse looked in on him at 5:45 am, she discovered that he had turned blue. When she tried to wake him, he was non-responsive. For the next 45 minutes, hospital staff frantically attempted to resuscitate him, including trying to intubate him, which was ultimately unsuccessful because rigor mortis had already begun to set in. He was pronounced dead at 6:21 am. The name on the death certificate read: Andrew Warhol. One of the most influential and creative minds of the century had died of medical error.

How could a relatively young man in generally good health, who had just completed a routine surgery without complication, suddenly die in recovery? An autopsy on Warhol found that his lungs and trachea were completely filled with
liquid. Investigation of Warhol's death quickly turned to the amount of fluids he had received in hospital, which, it turned out, was twice the amount he should have received. The fluid overdose had caused an imbalance of minerals, which, when combined with doses of morphine, had caused fluid pressure to build up in Warhol's body. Left unnoticed, the combination eventually culminated in coma and cardiac arrest. While the pop icon's death was directly attributable to a clinical turn of events, the factors that led to his demise were ultimately a cascade of wholly preventable human errors. Doctors have an official word for this form of harm: iatrogenesis, which essentially means the "preventable" introduction of disease or complication as a direct result of care provided by a physician or surgeon [5].

Alex James was a 19-year-old college student who was taken to hospital after he collapsed while running on a hot August day. His initial triage showed low potassium and an abnormal heart rhythm called a "long QT interval." A cardiac MRI was ordered by a cardiologist, but strangely it was never completed; the technicians conducting the cardiac MRI had stopped midway through the procedure because they were not properly trained on the new software. However, the technicians never told anyone that the test had not been finished, and Alex's physicians believed that the MRI had been done. Based on this assumption, a heart catheterization and an electrophysiology test were performed – both fairly invasive procedures – and Alex spent the next 5 days recovering in hospital. During the procedures, the initial observation of the "long QT interval" was once again confirmed, and Alex's family was informed that he would indeed receive the proper potassium replenishment to resolve his low potassium levels. Astonishingly, the potassium replenishment that was promised was never administered.
Finally, while being discharged from the hospital, Alex was given specific verbal instructions on what activities to avoid, which included not running. However, during this time, Alex had also been prescribed and was taking Versed. This particular drug depresses the central nervous system, with the favorable effects of sedation, anxiety reduction, and muscle relaxation. Yet along with these effects comes anterograde amnesia – or memory loss. The effect is so powerful, in fact, that if you were given Versed prior to entering an operating room, you would likely not remember who was in the room or even being prepped for anesthesia. As Alex was taking Versed and the doctor's instructions were merely verbal, Alex would not fully remember his most critical discharge instructions. Making matters worse, it was assessed later on that the dose of Versed Alex had received was potentially too great for his body weight.

On schedule, 5 days after being discharged, Alex met with the family doctor his cardiologist had recommended, to follow up on Alex's progress. The doctor, a resident, had little experience and little knowledge of cardiology and missed referring Alex for important follow-on tests as part of his recovery plan. Instead, she gave him an unrestricted clean bill of health. Nearly 3 weeks later, Alex was running alone and once again collapsed. He never woke and died 3 days later of severe potassium depletion [6].

In a Winnipeg emergency room, Brian Sinclair, a 45-year-old double amputee in a wheelchair, was seen by a triage aide at the city's Health Sciences Centre. Sinclair went to the emergency room on the recommendation of a local
health clinic because he had not urinated in 24 hours. After talking to the emergency room aide upon his arrival, Sinclair wheeled himself to the waiting room to await his turn. As the hours passed, Sinclair's condition worsened. He became extremely ill, even vomiting on himself several times. Sinclair's distress did not go unnoticed, as other patients brought his deteriorating and alarming condition to the attention of emergency room nurses and security staff.

One observant person in the waiting room was a concerned healthcare aide with 35 years of experience who was visiting her daughter in the hospital. She noticed that Sinclair was slumped over in his chair in the very same spot where she had seen him before, a full 24 hours earlier. She alerted nursing staff of her concerns, which were ignored. Looking for help, she approached a security guard to check on Sinclair; he remarked that to do so "takes too much paperwork." Others in the waiting room tried to get help as well, but hospital staff were dismissive, assuming that Sinclair was simply there to watch television or to stay warm. After 34 hours, Brian Sinclair died in his wheelchair of a treatable bladder infection resulting from a blocked catheter. Despite having spoken to emergency room staff upon his arrival, he had never been entered into the hospital system [7].

Today, in the United States, your chance of experiencing a medical error ranges from 10% to 25%. On the higher end, that means that one out of every four people who walk through the doors of a modern US hospital will suffer harm [8]. In Canada, 25% of all Canadians say that they or a loved one have experienced an adverse event while in hospital [9]. In Europe, the World Health Organization reports that 23% of Europeans have been directly affected by medical error, with 18% (nearly 1 in 5) claiming that they have experienced "a serious medical error in a hospital" [10]. Preventable error in hospitals is a global epidemic.
In Canada, research indicates that each year 70,000 patients experience serious injury as a result of medical treatment. Of these, 23,000 die from "preventable" medical errors [11]. This means that, per capita, Canada has a higher rate of preventable medical errors leading to death than the United States. These are statistics that should alarm all users of our medical system.

Comparing numbers, death by preventable medical error has become a national public health emergency. In approximately the same year for which the 23,000 preventable deaths were recorded, nearly 3000 Canadians died in car crashes and nearly 16,000 died of stroke [12]. For all we hear about motor vehicle accidents and road safety, you are actually much more likely to die from a mistake in a hospital than from a car crash. Furthermore, confidence among healthcare professionals is not comforting. In Canada, 77% of hospital managers, 75% of nurses, and 40% of doctors feel that someone is "likely to be subject to a serious medical error while being treated in a Canadian hospital" [13].

The financial burden of these preventable errors is extraordinary. It's estimated that mistakes in hospitals result in an extra 1.1 million days of hospital care in Canada. At a conservative estimate of $5500 per day, the cost to Canadian taxpayers is over $6 billion. A study in the United States put the cost of preventable medical errors at $19.5 billion. When adjusted for lost years of life, the study's authors estimate the cost of errors to be $1 trillion per year [14].
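The Canadian cost figure above is simple multiplication, sketched here with the chapter's own estimates:

```python
# Sketch: the chapter's estimate of the annual cost of extra hospital days in Canada.
extra_days = 1_100_000   # extra days of hospital care attributed to errors
cost_per_day = 5_500     # conservative cost per hospital day, in dollars

total_cost = extra_days * cost_per_day
print(f"${total_cost / 1e9:.2f} billion")  # $6.05 billion -> "over $6 billion"
```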


The problem is not one of medical knowledge. Most mistakes are not due to poor technical skills or inadequate medical technology. Rather, it's the non-technical attributes of patient care – communication, teamwork, and leadership – that play the biggest role. Change may be slow and challenging, but it is not impossible. We've seen it done before – hard-won lessons and safety changes that benefit us every time we take to the skies.

In January 1982, an Air Florida Boeing 737 scheduled to fly from Washington National Airport to Fort Lauderdale crashed into the Potomac River's 14th Street Bridge after failing to gain altitude after takeoff. The crash has been intensely analyzed and widely discussed in airline pilot training as a poignant case of leadership failure, lack of procedural discipline, and poor communication and decision-making.

On that fateful January day, Washington National airport was experiencing an unusually heavy snowfall, with total accumulations of over 6 inches. The snow had been so intense that the airport had to temporarily suspend departures and arrivals in order to keep up with the snow removal. Although Air Florida Flight 90, carrying 74 passengers and 5 crew, had deiced its wings prior to departure, it was later determined that the deicing fluid had not been carefully monitored for proper concentration and that standard deicing procedures had been largely ignored. The ramp was so slippery with snow and ice that the tractor tasked with pushing the 737 back from the gate couldn't get any traction, its tires spinning in the slush. Very much contrary to Boeing procedures, the Captain attempted to back the airplane away from the departure gate using the plane's thrust reversers, which channel the thrust of a jet engine forward instead of backward. When this didn't prove fruitful, a new tractor was found, this time with chains on its tires, and the pushback was completed.
On the taxi out to the runway, and during the reading of the pre-takeoff checklist, the Captain dismissed the First Officer's suggestion that the engines' anti-icing system be selected on – even though the snowy conditions clearly warranted it. On most airliners, engine anti-ice uses either warm air tapped from the engines or electrical heating to keep the engine intake areas free and clear of ice accumulation on the ground or in flight.

Today's modern deice fluids are extremely well monitored and backed by extensive research, allowing pilots to make very accurate decisions as to the wing's readiness for takeoff. In the early 1980s, it would have been more of a guessing game. Moreover, additional snowfall on a deiced wing could eventually dilute the deice fluid's ability to keep the wing ice-free. To receive more deice fluid, the flight would have to leave the long takeoff queue and return to the main deice area or gate. The Air Florida Captain, electing not to lose his place in line, decided to maneuver his 737 very close to the plane ahead in an effort to use the warm exhaust from that plane's engines to melt the snow on his 737. This nonsensical tactic actually exacerbated the icing situation by creating a thaw-freeze cycle on the Air Florida plane.

When Air Florida Flight 90 was finally first in line and cleared for takeoff, the First Officer, who was at the controls for the flight, began advancing the power. He
soon noticed something was wrong, as the engine instruments were not providing accurate readings. As the 737 began to pick up speed, he stated his concern to the Captain several times – but was ignored.

• Captain: Okay, your throttles. (As the First Officer assumes control of the takeoff.)
• Captain: Holler if you need the wipers.
• Captain: It's spooled. Real cold, real cold. ("Spooled" referring to reaching takeoff thrust.)
• First Officer: God, look at that thing. That doesn't seem right, does it? Uh, that's not right.
• Captain: Yes, it is, there's eighty. (Referring to reaching 80 knots.)
• First Officer: Naw, I don't think that's right. Ah, maybe it is.
• Captain: Hundred and twenty. (Reading the aircraft speed in knots.)
• First Officer: I don't know.
• Captain: V1. Easy. V2. (V1 and V2 are performance speeds and are normally called out.)

What follows the liftoff (as heard on the cockpit voice recorder, or CVR) is the sound of the aircraft stall warning computer engaging the "stick-shaker," a warning system that physically vibrates the Captain's and First Officer's control columns to alert the pilots that the airplane's wing is dangerously close to losing its lift.

• Control Tower: Palm 90, contact departure control. (Normal radio protocol after takeoff.)
• Captain: Forward, forward, easy. We only want five hundred.
• Captain: Come on forward. Forward. Just barely climb.
• Captain: Stalling. We're falling!
• First Officer: Larry, we're going down. Larry.
• Captain: I know!

Flight 90 barely reached a height of 350 feet before it began sinking back toward the Potomac River. It struck the 14th Street Bridge, killing nearly everyone onboard the plane as well as 4 motorists who were traveling across the bridge at the time. The Air Florida Flight 90 story became a driving force of change in the airline industry and a case with which all airline pilots would become familiar.
But it wasn’t the technical issues that made Air Florida’s disaster such a game-changer; it was the way in which the Air Florida pilots had communicated, or indeed failed to communicate, that would fundamentally forever change the way airline pilots train and fly. A year prior to the Air Florida crash into the Potomac, United Airlines had just launched a new and innovative program for pilot training under the title “Cockpit Resource Management.” Two years prior, one of United Airlines’ DC-8s had run out of fuel and crashed into a Portland neighborhood after aborting a landing at that city’s airport when one of the plane’s wheels had failed to properly extend into the
landing position on final approach. While the pilots had prudently executed a go-around when they realized the wheels were not in the landing position, they became so preoccupied with the details of fixing the landing gear issue that they completely failed to see a much bigger problem looming on the horizon – their plane's fuel reserves were growing critically thin. As the large four-engine DC-8 circled over the Portland subdivisions, its fuel tanks ran dry, its four engines flamed out, and it slowly sank and crashed into a residential neighborhood. The pilots had either failed to monitor the fuel situation or had failed to properly speak up and communicate it to each other.

Aviation has been the principal leader in no-nonsense safety. The Air Florida and United Airlines disasters are examples of the types of air crashes that led to a wholesale change in the way today's airline pilots train. That training starts with an acknowledgement that the majority of error chains can be broken by building better team dynamics and communication while understanding and reducing sources of team conflict. A corollary understanding is the highly sensible principle that we cannot simply wait for accidents to happen before we take corrective action. Doing so would be tantamount to "tombstone safety," by which we morbidly jest that safety is achieved one death at a time. Prevention is the key, and that means recognizing how things can go wrong before they actually go wrong.

Today, modern airline pilots train in simulators that are so advanced that they are virtually indistinguishable from the cockpits of real airplanes. A simulator for the Boeing 787 Dreamliner has a sticker price of about $30 million and can do all the things the real 787 can do – and much more. Today's advanced simulation is so real that pilots can (theoretically) fly the real airplane without ever having actually seen one.
Perched on three enormous hissing hydraulic mounts in cavernous simulator bays, the "sims," which look like giant houseboats, roll and tilt to give the precise feel of full acceleration in all directions. Using the simulator's extensive navigation database, pilots can fly actual routes anywhere in the world and into any airport. Simulators can replicate all emergencies completely risk-free, while pilots can practice landing desperately crippled airplanes at the world's most challenging airports, in the worst weather imaginable. At the simulator controls, pilots can also recreate known disasters. A few years ago, Dr. Rob simulated the precise situation and feel of the high-altitude jet upset in an Airbus A330, using the precise profile experienced by the Air France crew that crashed off the coast of Brazil. In simulators, pilots learn to survive conditions that may once have proven disastrous without hands-on training, such as engine fires on takeoff, control failures, or deadly wind shear.

While pilots have to demonstrate high proficiency at maneuvering the aircraft – what the dictionary of pilot slang calls "hands and feet" or "stick and rudder" skills – nearly half of all pilot training and evaluation today is based on the non-technical aspects of flying big jets. Today, no pilots work alone; it's a team effort. Landing a 600,000-pound jumbo onto a slippery runway at 160 mph, in driving rain and howling winds, requires not only skilled hands on the wheel but also all hands on deck.


What the United Airlines and Air Florida crashes taught the pilot training community was that aviation had to become more reliable with respect to safety; the question was how to do it. For United Airlines, the answer was the creation of Cockpit Resource Management, which was not so much a new training program as an entirely new training paradigm. Known today by the acronym CRM, or Crew Resource Management, the change in philosophy stemmed from an admission that many, if not most, of the errors committed by flight crews were not the result of deficiencies in technical knowledge but rather of the non-technical issues associated with human error.

Many pilots of earlier generations had built their flying skills in the military, training to the highest standards, flying challenging combat missions, hunting submarines at wave top, landing on rolling and pitching aircraft carriers in rain and darkness, or flying older-generation aircraft in remote arctic or jungle environments onto short strips dimly lit by fire-burning flare pots. One such pilot told Dr. Rob a story about delivering food to a remote village in Africa with a Convair 580 aircraft – a large, slow, lumbering low-wing turboprop. There was only one approach possible into the short jungle strip, because if the pilots had to abort the landing in order to try another approach, it would give the jungle militia in the adjacent hills enough time to find their guns and try to shoot the plane down. By far the much-preferred method was a fast, tree-top, straight-in approach and landing that would typically catch any jungle militia off guard, before they had a chance to put the pilots in their gun sights. The storytelling pilot recalled one such landing. After making their low and fast approach, they were just about to touch down onto the short, bumpy grass strip when they were suddenly blocked by a villager who had wandered out onto the makeshift runway.
The Captain did a go-around maneuver, pulling the plane up hard into a stomach-churning, steep, left climbing turn. Deciding it best not to climb slowly out across the jungle, the pilot elected to keep the plane's landing gear and flaps in the full landing position and to do a tree-top-level steep turn to get the plane back on the ground as fast as possible. As he carved the lumbering turboprop around, he could hear bullets begin to hit the plane's side. The pilot pushed the plane's nose down through the gunfire and onto the short jungle strip. Unbeknownst to the Captain, the villagers had thought it a nice gesture to remove some of the large underground rocks from the grassy runway and, in doing so, had unwittingly left several soft sinkholes. The plane's nosewheel sank into one such hole, slamming the plane to an abrupt halt with the nose gear half buried.

Lifting the 50,000-pound plane out of a hole in the middle of a jungle with no machinery or tools was next to impossible. But, as the saying goes, it takes a village, and after a lengthy struggle, the plane's nose was finally lifted free of the hole. Still, it was far too risky to use the runway for takeoff. If the plane hit any more sinkholes during a full-power takeoff, it could spell disaster. Another option was to cut a new runway strip through an adjacent grassy field, although it would mean doing the takeoff run down the side of a hill – and not just any hill, but a minefield. Carefully, foot by foot, the grass was slashed down and the ground poked and prodded for explosives. Finally, the makeshift strip was ready. The Captain emptied the plane of anything and everything that he didn't need, in an effort to decrease the plane's
weight and shorten its takeoff run. He also had to modify the normal takeoff technique to get the plane into the air at the lowest possible speed, to better their odds of not triggering any undiscovered landmines. His skill and knowledge of the plane paid off, and he survived the day – not only to tell his story, and not only to get food and medicine to villagers who desperately needed it, but to become an airline pilot who safely flies passengers around North America each day.

In many professions, great respect is often bestowed on those who can get the job done when others cannot – or will not. Flying used to be characteristically defined by varying degrees of bravado. Landing a plane in bad weather when others have decided it's too risky used to be seen as a measure of pilot skill. Pilots wanted to be that guy who could get it done. This placed emphasis on "hands and feet" skills, pitting one pilot's abilities against another's. Any such machismo ultimately came with a price, as airline training departments began to see evidence that it wasn't always flying skills that were causing accidents but a lack of teamwork. If it were theatre, "Those Magnificent Men in Their Flying Machines," the title lyric of the 1965 British comedy of the same name, would ultimately give way to "those magnificently trained teams in their flying machines" [15].

Today's airline crews will tell you that there is no place for lone rangers on the flight deck. The idea that one person does it all against all odds has given way to a much more sophisticated philosophy built around the understanding that problem-solving, leadership, and teamwork are enhanced when input is received from as many good sources as possible: other pilots (regardless of rank or years of experience), flight attendant crew, airline dispatch, air traffic control, aircraft engineers, and ground crew.
While Cockpit Resource Management has given way to Crew Resource Management (under the same "CRM" acronym), at its core it's an operational philosophy that emphasizes collaborative approaches to leadership. This means that airline Captains (and their crews) actively seek input from other crew members while expecting each other to speak up if they see something of concern. CRM is really about building a cultural environment that welcomes and rewards input in order to make more informed (and hence safer) leadership decisions. It's not just the person in charge who "activates" CRM in a team environment (although there is a lot that a leader can do to facilitate it); rather, CRM is an ingrained cultural meme whereby junior members are fully expected to voice their opinions and concerns, and senior crew members are expected to facilitate and encourage communication and actively solicit input.

If we were to rewind history and see CRM at work on the doomed Air Florida flight that crashed into the Potomac, we would have seen a Captain far less cavalier about standard procedures: not trying to back the airplane away from the gate with reverse thrust, not attempting to use the engine heat from a preceding aircraft to melt snow and ice off his 737, using his available resources to double-check the status of the wings prior to takeoff, using engine anti-ice as required, and listening to the First Officer's questioning of abnormal engine parameters at the start of the takeoff roll. The reason the Captain would be less likely to behave recklessly would be because the First Officer's suggestions,
opinions, and prompting would be respected under CRM. If at any point the two pilots were not on the same page – lacking what pilots call shared situational awareness – the operation would be temporarily halted until they resolved the confusion. Using CRM principles, the Air Florida First Officer would have felt empowered to speak up – querying, reminding, or cordially challenging the Captain as need be – and then, in more critical moments, like during the takeoff roll when the engines were not producing sufficient thrust, expressing himself far more directly and assertively.

Pilots understand that the physical act of flying an airplane – especially in poor weather – requires considerable skill and attention. Good crew coordination ensures that the other pilot or pilots keep their eyes on the big picture while the pilot at the controls flies the plane. Modern airline pilots divide their tasks into "pilot flying" and "pilot monitoring" duties on each flight. It's customary for these duties to transfer back and forth on consecutive flight legs. It's the job of the pilot monitoring to speak up if any significant deviation in flight profile, aircraft status, or procedure is noticed. Done properly, the coordination offers a critical safety net for enhancing the pilot flying's situational awareness and for trapping errors. The same division is seen in the operation of the Canadian Space Agency's (and NASA's) Canadarm, in which one astronaut operator sits close to the controls to maneuver the arm for precise tasks while another astronaut operator sits farther back and observes "the big picture."

For the United Airlines DC-8 that ran out of fuel over Portland when the pilots became fixated on the details of the gear problem while ignoring the much more threatening issue of fuel starvation, the principles of CRM would have dictated that the crew organize and prioritize their tasks in order to avoid precisely the situation that ultimately doomed the flight.
For any of the DC-8 pilots who may have taken notice of the fuel situation that fateful night, and who mistakenly assumed that the Captain was aware of it, a good CRM environment would have welcomed the fuel status being verbally communicated, even if the observing pilot's information turned out to be redundant or irrelevant. In a good CRM culture, Captains will thank other crew members (regardless of rank or position) for bringing information to their attention, whether or not the information proves useful. By thanking them honestly and not dismissing their input, they encourage future communication, which at some point might truly be of life-saving importance. The opposite is also true. How often have we wanted to say something but felt we couldn't risk the unspoken social or professional demoralization if we were wrong? We have all felt this. If we are right in speaking up, our stock rises among our peers; if wrong, our stock sinks, and subsequently, our perceived worth to the team declines and our opinions become less valued. We fear that in the future, our good ideas may be casually dismissed. As social creatures, there are few of us willing to speak up if it risks destroying our precious and hard-won social capital among those whose favor we value. The power of making decisions based on social acceptance was brightly highlighted in the now famous 1951 Solomon Asch experiments on conformity [16]. Asch devised a brilliantly simple study in which he constructed two cards, one with
three straight vertical lines of different length (labeled A, B, or C) and another card with one line that matched the length of one of the lines on the first card. The task was simple: look at the single line, and tell the experimenter which of the lines on the first card – A, B, or C – it matched. The task of matching the lines was relatively easy and uncomplicated, but sneakily, that's not what Asch wanted to measure. Asch set up a table with eight participants, seven of whom had been specially briefed about the true intent of the experiment and given scripts on how to answer; they comprised the study's "confederates." The remaining eighth person would become the subject of the experiment. As each round of the experiment started, Asch would ask every person at the table to take a turn, saying aloud which line (A, B, or C) matched the single line. By design, the lone subject of the experiment was always the last person to answer. The confederates, who were in on the experiment, were instructed to answer correctly on some rounds and incorrectly on others. As the answers were fairly obvious, the experiment would test whether the subject – having just heard all seven of the others confidently give the same answer – would contradict them with the correct response. Would the subject speak up and give the right answer, or would the subject conform to the group's obviously incorrect answer, simply to not risk standing out? In all, 123 subjects were tested, and the results were fascinating. Even when the group was dead wrong, subjects conformed to its answer on roughly one-third of the critical trials. Three-quarters of the subjects went along with the group's incorrect answer at least once. When Asch conducted follow-up interviews with the subjects immediately after the experiment, their most common reason for conforming to the group majority was fear of being ridiculed; it felt more important to fit in with the group than to be right. 
Critics of the experiment's grand summations about our fickle social fortitude may argue that the conclusions were based merely on a test that dealt with lines on a piece of paper. If social capital is so precious, why would we risk compromising it on such a trivial game? Isn't it plausible that we'd simply shrug and yield to the majority when it didn't really matter but stand up against the group under more serious circumstances? It's an admirable critique, but such idealism simply doesn't stand up to our unforgiving historical record of group deference. On an unusually cold Florida morning on January 28, 1986, NASA engineers inspected the launch pad of Space Shuttle Challenger. The overnight temperatures, which had dipped below freezing, had caused sheets of ice and rows of icicles to blanket structures on the launch pad. Allan McDonald, then Director of the Space Shuttle Solid Rocket Motor Project for the engineering firm Morton Thiokol, the design agency for the solid rocket boosters, had refused to put his signature on the launch recommendation for fear that the rubber O-rings that act to seal the segments of the solid rockets might fail in the extreme cold. The integrity of the O-ring seals was simply not known at those temperatures, and so while no definitive threat to the launch was confirmed, the risk of O-ring failure, in McDonald's view, was too great. Challenger, like other shuttles before it, had two solid rocket boosters, one on each side. While these boosters look narrow when viewed alongside the shuttlecraft and
its main fuel tank, each booster is actually only 2 feet shorter than the Statue of Liberty and holds 1.1 million pounds of solid propellant, designed to deliver over 3 million pounds of thrust. Despite McDonald's misgivings, the Challenger launch would go ahead at temperatures 20 degrees Fahrenheit below any previous space shuttle launch. McDonald predicted that if the large rubber O-rings were to get too cold, lose their pliable properties, and become brittle, it could very well lead to disaster. Just 73 seconds after liftoff, at an altitude of 9 miles, an O-ring seal opened enough to allow fiery hot exhaust gases to burst through the seal and bathe the central liquid oxygen and hydrogen fuel tank in extreme heat. The tank exploded, and Challenger was ripped apart by the high-speed aerodynamic forces. The solid rocket boosters continued to burn, propelling themselves off in different directions under their own power, only to be destroyed by remote control at the hand of NASA's range safety officer. The crew compartment, which contained seven crew members, including social studies schoolteacher Christa McAuliffe, continued to ascend another 3 miles before arcing back down into the Atlantic Ocean. That evening, US President Ronald Reagan, who was scheduled to deliver the State of the Union Address, instead took to the air in a live televised broadcast to address the day's disaster, including a special message to the millions of school children who had eagerly watched the launch live from their classrooms. It was a national tragedy and one that would put the microscope directly over NASA's safety program. Reagan would create the Commission on the Space Shuttle Challenger Accident, the results of which were published in the Rogers Commission Report [17]. 
While the report cited the O-ring failure as the principal cause of the Challenger disaster, it noted that the contributing factors included NASA's failure to provide sufficient safety checks and balances, a failure to properly communicate risk and hazards, and a culture of (peer) pressure to launch amidst contrary opinions concerning launch readiness. It pointed to an environment in which negative opinions of launch safety were selected out and positive ones were selected in. In his book, Truth, Lies, and O-Rings, as well as in his subsequent public lectures, Allan McDonald points to the key lessons learned from the Challenger disaster. Chief among them is the ability to feel comfortable speaking up when you think something isn't quite right. While McDonald did just that by refusing to sign off on the launch endorsement, his warnings were largely overruled by other engineers who, in the aftermath of the disaster, indicated that there was a very strong underlying cultural pressure to bend and conform to the management team's need to deliver a launch. Creating an environment in which we can feel comfortable about raising our hands, even if we think we have a "dumb" question, is something McDonald emphasizes in his talks. High reliability organizations (HROs), like many in the nuclear and aviation sector, understand that gold-star safety standards depend, in large part, on the acknowledgement that human rationality can be an unfaithful friend in the face of social pressure. And while aviation's CRM program sets its sights directly on these issues
by encouraging a culture of speaking up as well as accepting and rewarding those who do so, this is not the norm in healthcare. A few years ago, Dr. Rob was asked to assist in the development of Canada's first operating room (OR) checklist, which was an adaptation of the World Health Organization's checklist protocol. To the best of our knowledge, Dr. Rob was the only non-medical social scientist on the Canadian committee, which was tasked with reviewing the checklist components, recommending changes, and implementing the checklist across Canada. Using checklists is a matter of routine in aviation. It's not because pilots can't remember all the actions they have to do to get planes off the ground; it's because checklists provide a significant measure of safety in busy, complex environments [18]. As anticipated, there was some pushback to the checklist in operating room settings, mostly by surgeons, some of whom viewed the checklist either as a waste of time or as redundant or, most importantly, as a way of democratizing the OR, by which a surgeon's abilities and authority would be questioned. Hence, in addition to the rather tactical issues of how long it took to read the checklist, or who would read it, Dr. Rob could sense an underlying cultural concern for what the checklist represented – a threat to command and control in the OR. Over the course of the project, Dr. Rob spoke to a number of doctors and nurses about the checklist, with particular emphasis on dispelling their misconceptions about its use. Checklists not only provide a tangible means of cross-checking preparations and trapping potential errors, but they also begin to create a cultural backdrop for greater team cohesion and inclusiveness. The operating room checklist as it is now used in Canada ensures that everyone is on the same page (even the patient, if conscious), helping forge shared situational awareness. 
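The error-trapping logic of a checklist is simple enough to sketch in a few lines of code. This is purely an illustration – the item names below are invented for the example and are not drawn from the actual Canadian OR checklist or the WHO protocol:

```python
# Illustrative sketch only: the checklist items here are invented and
# greatly simplified, not taken from any real surgical checklist.

def run_checklist(items, confirmed):
    """Read each item aloud; return the items nobody could confirm.

    items: ordered list of checklist item names.
    confirmed: set of items the team has verbally confirmed.
    """
    return [item for item in items if item not in confirmed]

surgical_pause = [
    "patient identity confirmed",
    "surgical site marked",
    "anesthesia equipment checked",
    "antibiotics administered",
]

# The team confirms three of the four items; the checklist "traps"
# the one that was skipped before the operation proceeds.
team_confirmed = {
    "patient identity confirmed",
    "surgical site marked",
    "anesthesia equipment checked",
}

print(run_checklist(surgical_pause, team_confirmed))
# A real protocol would halt here until the open item is resolved.
```

The point of the sketch is that the safety value lies not in the list itself but in forcing every item to be explicitly confirmed aloud, so that an omission becomes visible to the whole team rather than remaining in one person's head.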
It also reemphasizes roles and responsibilities by ensuring that team members have done their jobs preparing for the surgery as well as having prepared for any emergency situations that may arise. More importantly, the checklist sets a tone for enhanced communication — its message helps foster a more accepting and less hostile environment for team members to speak up if they see something or need something. The message Dr. Rob encouraged surgeons to embrace was that the checklist gives them the opportunity to be a better leader, reduce error, and make better informed decisions. Where this hasn't happened, surgeries can all too easily go sideways because of poor communication. While doing research on team dynamics in operating rooms, Dr. Rob stood (gowned and gloved) alongside surgeons as they operated on a patient's spine. Two surgeons, their nursing team, and an anesthesiologist carried out their various tasks with routine precision. Part of this precision includes the potency and timing of the anesthetic. Often, this dance is well coordinated so that the time under anesthetic is kept to the minimum necessary (or as dictated by the procedure). On this day, while observing the spinal surgery, an issue began to develop with the anesthetic equipment. While it wasn't failing, necessarily, the equipment was a new model, and it was behaving in an
unexpected way. Observing the anesthesiologist, Dr. Rob noticed that he was considering outside assistance to decipher the machine's activities but that the only number he had was for the central manufacturer's 9-to-5 office line. As he tried to work the problem, the two operating surgeons began discussing the time available for the remainder of the surgery and their progress – much of this to be coordinated with the anesthesiologist. Dr. Rob watched as the surgeons verbalized their progress but noted that shared situational awareness was beginning to slide as a result of the focus on the anesthetic equipment. Not helping the situation was the large curtain that created a partition along the patient's neckline so that the anesthesiologist and the surgeons could not see each other, what many jokingly call the "blood-brain barrier." Dr. Rob likened the curtain to a flight deck door, where there is a physical barrier between team members who need to coordinate their activities. As the anesthesiologist became increasingly preoccupied with the puzzling new machinery, he began missing several communication cues from the surgeons, a messaging stalemate that was ultimately broken when one of the surgeons asked the anesthesiologist directly whether everything was okay. The anesthesiologist summarized what was happening, making the rest of the team aware of the issue and allowing the surgeons to consider how this new information might affect their plan. After a few beats, the surgeon asked the anesthesiologist to provide a status report in a few minutes and to keep the team updated. 
With this simple opening of communication, the lead surgeon had brought his team back together on the same page with shared situational awareness, had reduced any fixation on equipment issues in order to keep the big picture in focus, had begun to think of surgical implications, and had created an environment in which team members were empowered to speak up – and, as Dr. Rob often points out, by way of building more open communication among the team, had enhanced his own ability to lead the team with more complete information. Viewing this unfold, it was evident that the anesthesiologist had originally desired to keep his problems with the equipment to himself. He may have felt that the problem was too small, that it could be easily handled, or that it was not worth alerting and bothering others. Like so many of us, he may not have wanted to risk other team members thinking he didn't know what he was doing. These are hardwired social forces. Creating open communication, soliciting input, and gathering information from team members and other operational groups is not the democratization of leadership. It is not a move to degrade command. As aviators have learned by analyzing landmark accidents like Air Florida's 737 and United Airlines' DC-8, the real threat to safety comes when we don't recognize the power of social influence on our decisions to speak up and communicate. Yet these principles are no mystery to healthcare professionals, who, by and large, are pretty intelligent people. When studies have investigated the root causes of major hospital errors – so-called never events – like wrong-site surgery (of which there are nearly 2000 per year in the United States), communication is the number one problem [19]. But just saying you want people to speak up more often is not enough. In many healthcare settings, nurses and doctors are still hamstrung by
dominant social undercurrents that emphasize competition over cooperation, clinical knowledge over asking for input, and most importantly, blame-and-shame over system learning. In order to loosen the strong social ropes that constrain speaking up, healthcare cultures must improve their psychological safety. The most expeditious path to enhancing psychological safety in an organization is through a mandatory safety reporting system. In aviation, mandatory reporting is a matter of routine and is a non-punitive tool that is used to record incidents, accidents, errors, or any other safety concerns that may arise in day-to-day practice. The need for safety reporting is quite basic, really. We cannot (and should not) wait for accidents to happen before we try to figure out weaknesses or holes in our safety protocol. A good reporting system will collect data on hundreds, if not thousands, of errors, hazards, or underlying threats that, if left unchecked, could someday spell disaster. Even so, this core logic seems to run counter to medical thinking. Some healthcare administrators we've spoken to worry that a formal error reporting system will make their hospitals look worse, because on paper the hospital's recorded error rates will typically skyrocket. They worry that rising error rates will be viewed not as the result of a more robust and participatory safety culture but as evidence of a poorer safety record. Ideally, we want every error, incident, or safety threat recorded – rather than covered up – because that's the only way we can build a learning system. Dr. Rob has experienced resistance many times from healthcare professionals who wring their hands about what constitutes a reportable error or incident. What metrics do we use to define what should be reported? "We need studies!" many explain. "We're just not there yet," they say. All hogwash, of course — the answer is that you report harm, mistakes, or anything that remotely smells as if it could lead to a mistake. 
Anything and everything. Yes, it's a lot of data – but that's how we will see critical trends, adjust training, and plug the holes in the safety net. The Aviation Safety Reporting System in the United States, originally started by NASA over 40 years ago, receives some 30,000 aviation reports per year, despite the fact that there aren't 30,000 accidents or incidents each year. The data is compiled and disseminated to the Federal Aviation Administration in order to plug safety gaps and set a course for even safer skies. The one-page reporting system is easy to use, completely confidential, and offers indemnity to those who report their error. Many airlines take this a step further by making reporting mandatory – offering indemnity to those who report error but removing indemnity should the incident go unreported. Reporting errors begins with an understanding that operational lapses leading to errors and adverse events are almost always part and parcel of a chain of smaller system faults that ultimately failed to prevent the mishap. In fact, accident investigators typically cite several contributing factors necessary for a particular error chain to ultimately lead to an accident. Had one of those contributing factors been absent, the incident or accident may not have occurred. Professor James Reason, a long-standing leader in safety science, suggests this path to an accident is like slipping through the holes of Swiss cheese [20]. He describes the analogy like this: if we were to cut Swiss cheese into many slices, we'd observe that normally the holes in the cheese do not line up for us to see through all the slices at once; similarly,
accident chains don't normally make it through all the defense layers to create an actual accident. But if, on that one rare occasion, all the holes do line up and all the contributing factors are in place at the same time, an accident can occur. In organizations without robust reporting systems, investigations tend to be forced upon the organization when major accidents happen, what healthcare folks call "sentinel" events – the types of events that lead to significant harm or loss of life, which should never happen, and which, of course, cannot be ignored. While it's true that hospitals genuinely want to learn from these events, the rarity of sentinel events means that an enormous number of early warnings may well go unacknowledged. How many times have we been driving on the road and seen a close call between two vehicles – only to see one or more of the drivers continue on and act as if it never happened – because it didn't. As far back as 1931, the ratio of fatal accidents to near-accidents (so-called near misses) was put forth in a model called the "safety triangle" in a much-cited book by H.W. Heinrich, called Industrial Accident Prevention: A Scientific Approach [21]. The safety triangle sought to discover the ratio of major accidents resulting in a fatality to lesser accidents and incidents. While Heinrich's original numbers have been debated, many have jumped on the question of ratios, including Frank E. Bird Jr., who, at the time of his study, was the Director of Engineering Services for the Insurance Company of North America. Bird carried out a massive study in which he analyzed 1,753,498 accidents reported by 297 companies across 21 different industrial groups – comprising 1,750,000 employees and over 3 billion work hours [22]. With co-researcher George Germain and their team, Bird conducted 4,000 hours of confidential interviews. 
They were interested in the ratio between accidents that resulted in fatalities and ones that "only" resulted in injury or in no injuries at all. The top of the triangle in their model represented a fatal accident. As one moved down the pyramid, the ever-broadening base represented all the times an organization was likely to experience close calls for a given fatal accident. The results of the Bird-Germain study showed a relationship of 1:600, meaning that for every fatal accident an organization would experience, that organization could expect to experience 600 near-miss events. Put back into the analogy of car crashes, this would mean that for every fatal car crash there are 600 very close calls. Other organizations have extended the pyramid's base more broadly by looking at high-risk behaviors. One such company, ConocoPhillips Marine, found that for every fatal accident, a company can expect 300,000 at-risk behaviors, which include procedural work-arounds or shortcuts [23]. Consider this in view of hospitals. Assuming recent numbers are accurate – some 440,000 preventable deaths in US hospitals each year – and that the ratio is anywhere close to Bird and Germain's, there would be approximately 264 million (preventable) near-miss events each year in US hospitals. Other studies have looked at the number of errors likely to be committed for every fatal accident and found that there are about 10,000 errors or failures for every fatal event [24]. For a hospital, errors and near-misses constitute a gold mine of information with respect to safety performance — far more than a single sentinel event ever will. They represent opportunities for intervention before harm is experienced. If a
healthcare worker has a close call and narrowly avoids a medication error, it's quite likely that the exact same circumstances that led to the near-miss will be encountered by another healthcare worker in the future – perhaps resulting in real harm. In most cases, errors are not committed by "bad eggs" who don't care; it's almost always a problem with the system, not the person. And yet, healthcare is failing to capitalize on near-miss and error information, in large part because of a hesitation to report. In a 2012 Medscape survey of physicians, 37% of doctors answered either "yes" or "it depends" to the question: "Is it ever acceptable to cover up or avoid revealing a mistake if that mistake would not cause harm to the patient?" [25]. The same is true for doctors who have witnessed errors by others. In a study by Dr. Thomas Gallagher, of the University of Washington School of Medicine, more than half of the doctors who took part said that they had witnessed a fellow physician make a mistake in the past year [26]. Gallagher explains that doctors are not immune to the very same social dynamics that would inhibit any one of us from taking action to report a colleague with whom we work every day. We aren't robots impervious to social consequence, and as such our reasoning remains vulnerable to a variety of social drives, such as empathy for others who make "honest" mistakes, fear of being perceived as a "rat," and fear of peer reprisal from those who disagree with the need to report. Again, our social pillars represent immovable constructs. Good reporting systems begin with good leadership – and in the case of hospitals, that means the commitment of the hospital's Chief Executive Officer (or equivalent) to the reporting program. The CEO needs to provide commitment in a written statement that promises indemnity to anyone who makes a safety report, including self-reporting of error. 
The exceptions to this would include willful disregard for safety or criminal activity such as sabotage. By signing the safety reporting declaration, the CEO sends the important message that the institution's principal interest is in improving the system and building a learning culture, rather than pointing fingers and blaming and shaming individuals for errors that could have just as easily been committed by others in the same situation. This also gives middle managers and supervisors direction on how to process safety reports that come across their desk — to view the problem as one of blocking holes in the "Swiss cheese" (as per James Reason's model), rather than focusing on punishing individuals. We don't often check ourselves in to medical facilities for the fun of it — so is there anything a patient can do to help reduce the odds of being a victim of hospital errors and adverse events? The rule of thumb is to avoid and to advocate, in that order. This means your first mission is to steer clear of hospitals unless necessary: to avoid. This doesn't mean avoiding medical care when you need it; it means avoiding putting yourself into a state of poor health — either through lifestyle or injury — so that hospital staff don't begin to know you on a first-name basis. Barring acute injury or illness, your job is to take care of your health so that you can avoid having to visit the most dangerous places on the planet. And, believe it or not, you do have a choice. Heart disease, cancer, and stroke – the most likely
maladies you'll die from – are almost entirely lifestyle-induced. A well-known study that researched data over a 15-year span concluded, without a doubt, that the greatest cause of death in the United States was non-genetic lifestyle factors [27]. Despite our constant pursuit of new drugs and expensive medical trinkets to keep our hearts pumping, the battle we are fighting is actually against ourselves. The top nine lifestyle choices – from tobacco, diet, and exercise to motor vehicle accidents and firearms – now constitute half of all deaths in America. Put another way, the leading causes of death in the US today are largely the result of personal choice and behavior. Of course, between robust health and death, there may well be hospital visits. We know the dangerous track record of hospitals and their role in our population's mortality. So, if our primary directive is to live longer, happier, and healthier lives, then our priority becomes one of staying healthy enough to avoid either killing ourselves through poor lifestyle choices or exposing ourselves to hospitals to have others do it for us. The second part of our rule of thumb, advocate, means being aware of your hospital environment and particularly the communication and behavior of the healthcare professionals who attend to you. It's a form of vigilance that many patients may be unfamiliar with, especially older generations who may be unaccustomed to questioning doctors. Advocacy means personal advocacy or advocating on behalf of a friend or loved one. It means being attuned to hospital charting and to healthcare professionals' conversations, which may allude to an impending error. In the past 3 years, Dr. 
Rob has personally witnessed the wrong name and wrong birthdate written on a label for a pediatric patient's blood transfusion record; heard a surgeon tell a relative that there were no diagnostic options available to assess a botched surgery, only to find out later that there were many options; and seen a surgeon leave tissue in a patient after a Caesarean section while arguing that it was impossible for tissue to be left inside – and, when it was indeed discovered to be there, shrug it off. Dr. Rob has seen a mother of a newborn ask about the infant's racing heart rate, only to be overruled by the doctor but later proved correct as the infant was admitted to the ICU, and he has seen two doctors remove a central venous catheter in the chest of a child without washing their hands or wearing gloves. We should note that, in the United States, up to 4000 patients a year die from central line infections [28]. As patients or as patient advocates, we need to speak up! This does not mean being confrontational – only that a patient also has eyes and ears and should be viewed as one of the layers of Reason's Swiss Cheese Model, to help block seemingly improbable coincidences from making their way through the layers toward injury or accident. If you don't know where to begin, start with handwashing. Our hands are flytraps for millions of harmful microbes that can compromise vulnerable hospital patients. The threat of infection has also been exacerbated by our voracious use of cellphones and other mobile devices. Not only do most of us touch and handle our cellphones regularly throughout the day, we press them against our faces and then slide them back into an ideal germ incubator – a warm pant pocket. Of course, healthcare workers have mobile phones too and are also increasingly using mobile devices and tablets in
their places of work during bedside visits. Many do not realize that our mobile phones can be dirtier than the bottoms of our shoes, than toilet seats, or doorknobs. Not only should we and our loved ones practice regular handwashing every time we enter or exit a patient's immediate area, but we should insist that healthcare workers do as well. "Hospital-acquired infections" (HAI) are infections that a patient picks up from healthcare workers or the hospital environment. About 1.7 million people in the United States suffer from hospital-acquired infections each year, and nearly 100,000 of them die [29]. You guessed it – this means that hospital-acquired infections are one of the top ten causes of death in the United States today. Some 40% of these infections are brought directly to the patient on the hands of the hospital worker. Vigorous and relentless campaigns to remind doctors and nurses (and other healthcare workers) to wash hands have had mixed results – some highly successful, others not so much. Looking at 96 different studies on hand-hygiene compliance over a 26-year period, the Joint Commission (a not-for-profit hospital accrediting agency) found that, on average, hospital workers complied with handwashing guidelines only 36% of the time [30]. And while we should all wash hands when we enter a patient's hospital room, hospitals are also trying to stop germs from even entering the building by putting handwashing hubs (often with multiple alcohol dispensers) directly inside the main entrance doors of hospitals. Even with signs that ask the public to wash their hands or use the alcohol dispenser, and despite our general knowledge that clean hands can reduce the transmission of deadly pathogens to vulnerable hospital patients or reduce fatal infections, observations of the general public indicate that 98% of people will walk past the hand hygiene station without cleaning their hands [31]. Dr. 
Rob recalls visiting a relative at a hospital who was recovering from abdominal surgery. Although not infected and expected to close up, the post-surgical incision was healing a great deal more slowly than anticipated, with sections of the incision remaining partly open and exposed between the sutures. One of the nurses attending the patient said, "The best thing you can do is get out of the hospital as fast as possible. Get your sterile wound-care with the home-care nurse." This comment was surely based on experience – the rather counterintuitive understanding that there was far greater immediate risk to his relative's health in the hospital than outside it. So, why is hand hygiene such an elusive prize when, frankly, nothing could be easier than washing one's hands or using an alcohol dispenser? The Federal Aviation Administration and the Flight Safety Foundation have spent a great deal of effort studying situations in which workers who know the rules willfully choose not to follow them. High reliability organizations call it Procedural Intentional Noncompliance (PINC), and it's a red-hot topic among safety professionals. When it comes to handwashing, many hospitals are swinging and missing in their aim to boost hygiene practices because they fail to understand that it's not a marketing problem of raising awareness among doctors and nurses; it's a problem of PINC. Highly educated healthcare workers know full well that hand hygiene is important, and they know that they are supposed to follow good handwashing practices before and after


1  Why a Hospital Is the Most Dangerous Place on Earth

touching patients, so if they don't do this, they are choosing not to do it. It's not a matter of educating them. There are three core ingredients necessary for healthcare workers to willfully ignore a procedure or rule. The first is motivation. Perhaps a healthcare worker is "motivated" to skip proper handwashing because other important cases are waiting for their attention and they need to save time. Put another way, there has to be some sort of "reward" for the noncompliance. The second ingredient needed for intentional noncompliance is an awareness and acceptance of the risks involved. Here, the healthcare worker may view the patient as low risk – perhaps no open wounds or compromised immunity – or perhaps the healthcare worker feels that, having just washed their hands a few minutes ago, the level of potential contamination is acceptable. Typically, this second ingredient is backed up by previous experience – perhaps the healthcare worker has skipped handwashing at other times and no harm has resulted, so they assess the risk as low. Finally, the third ingredient, and the most important one, is a lack of peer reaction to the noncompliance. When peers or colleagues do not challenge the noncompliance, either subtly or overtly, the worker who is bending the rules feels it is acceptable to their peers – and thus justified. This third ingredient is a very slippery slope. When peers do not react to a colleague's noncompliance, it sends a strong message that, generally speaking, bending the rules is okay whenever the rule-breaker believes it's acceptable to do so.
While some rules are small or less significant, an environment that permits minor rule-breaking can easily backslide into a culture in which rule-breaking is commonplace, a process that safety scientists call "procedural drift": little bouts of rule-breaking become gateway events to more frequent and more significant rule-breaking. This is called "routine" procedural intentional noncompliance, where a culture of bending the rules exists and is tolerated. Indeed, aviation safety scientists point out that those who engage in intentional noncompliance are two to three times more likely to suffer an "unintentional" error. So, here we are, back to errors in hospitals. The good news is that the same social dynamics that keep people from speaking up or reporting errors can also help manage intentional noncompliance. Peers who speak up – even politely and indirectly – can begin to build a message that routine violations of standard practices or rules are not okay. Like the jujitsu master who can lead someone around by their thumb, it takes very little positive peer pressure to alter the course of noncompliance. This means patients themselves can do it too. One of the easiest and most impactful forms of "advocacy" you can exercise in a hospital is to ask the doctor or nurse if they washed their hands. I understand full well that, as simple and as logical as that sounds, it's not easy. I too have failed to speak up – when I witnessed the central venous catheter being removed, the doctors went from saying "hello" to putting their clipboards down and placing their hands on the pediatric patient in a matter of seconds. I regret not speaking up – and thankfully, the little toddler did not suffer an infection in her vulnerable state of health.


"Avoid and advocate" is the rule of thumb. Like airline pilots, be aware that it's the non-technical, human interactions – the social issues – that are most likely to trip you up and jeopardize your safety in a hospital environment. Empower yourself with this knowledge, and create an atmosphere of open communication and shared situational awareness. Holding hospitals and medical staff to account for their performance is slowly taking root – but not quickly enough. Writing in The Wall Street Journal, Dr. Marty Makary, the well-known Johns Hopkins Hospital surgeon who helped pioneer the surgical checklist, points to a survey showing that 60% of New Yorkers will look up a restaurant's performance and reviews, but few would consider doing the same for their heart surgery [32]. Makary, who advocates for open transparency of hospital performance and ratings of safety culture, points to two cases where small changes have made a big difference. In one study, putting video cameras in operating rooms resulted in more careful and precise surgeries, increasing surgical time by 50% but improving surgical "quality scores" by 30% [32]. Similarly, Long Island's North Shore University Hospital had a dismal record for handwashing, at less than 10% compliance. The use of cameras to track whether healthcare workers washed their hands boosted handwashing compliance to over 90% [33]. Of course, we know it's not the camera itself that changed the behavior; it's the instantaneous recognition by the healthcare worker that the calculus of intentional noncompliance has shifted. The risk of having one's lack of hand hygiene linked to an infection on the floor, and the understanding that one's peers might watch the video and expose the noncompliance, mean that reward, risk, and peer scrutiny now tip the balance in favor of following the rules. The life plan to "avoid and advocate" begins with every one of us – in our daily choices.
In addition to holding hospitals and healthcare workers to account for their actions, we need to hold ourselves to account for behaviors that can land us inside hospital walls. Take stock of the lifestyle choices that may eventually put you within striking distance of hospital errors. The empowerment gap in hospitals is clear. Our technological advancement is not merely marching along; it is moving forward in leaps and bounds. Many medical procedures of a century ago would be cringeworthy and nearly unrecognizable to modern-day doctors. And yet, our overall capacity to create safe patient experiences today, with consistently low rates of harm, remains hamstrung by the very same social and cultural issues that have informed our behavior for millennia. Naturally, we search for technological solutions to plug the various leaks of psychological nuance. The very same brilliance that provides us with the capacity to build fantastic spacecraft, carry out surgical wonders, and edit our very DNA is intertwined with the same humanness that so routinely trips us up in carrying out these feats. The answer is not one of resilience – it's one of adaptation – a recognition that with increasing technology comes a responsibility to build concurrent models of enhanced human performance. In the next chapter, we will explore the other end of the empowerment gap. Given the threat our medical system imposes upon our very health and longevity, it behooves us to remain clear of its grasp. Yet the ancient drives that have enabled us to survive as a species for millions of years are ill-adapted to a world in which previously hard-won rewards, like food, are so immediately within reach.

References

1. Iraq war in figures. BBC. 2011. http://www.bbc.com/news/world-middle-east-11107739. Retrieved May 4, 2016. For core statistics, see Iraq: The Human Cost. MIT Center for International Studies. web.mit.edu/humancostiraq/. Retrieved May 4, 2016. Also, Iraq Index. Brookings Institute. http://www.brookings.edu/about/centers/middle-east-policy/iraq-index. Retrieved May 4, 2016. Iraq Coalition Military Fatalities by Year. icasualties.org. Retrieved May 4, 2016.
2. James JT. A new, evidence-based estimate of patient harms associated with hospital care. J Patient Saf. 2013;9(3):122–8.
3. The Cost of Sequencing a Human Genome. National Human Genome Research Institute.
4. Wadhwa V. Why there's an urgent need for a moratorium on gene editing. The Washington Post. 2015.
5. Merriam-Webster Online Medical Dictionary. 2016.
6. The tragic details of Alex James' story are paraphrased from the testimony provided by John James on the Patient Safety Movement website: http://patientsafetymovement.org/patient-story/alex-james/.
7. For more on Brian Sinclair's tragic story, see Chris Puxley, "Woman tells inquest she tried to get nurses to check on man in Winnipeg ER," Maclean's Magazine, and "Manitoba looks at overhauling ER layouts after death of man during 34-hour wait," Metro News Online.
8. Landrigan CP, et al. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363:2125–34.
9. Gagnon L. Medical error affects nearly 25% of Canadians. J Can Med Assoc. 2004;20(171):2.
10. Patient Safety Data and Statistics. World Health Organization Regional Office for Europe, Copenhagen. 2016. http://www.euro.who.int/en/health-topics/Health-systems/patient-safety/data-and-statistics. Retrieved May 5, 2016.
11. Baker RG, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. J Can Med Assoc. 2004;25(170):11.
12. The 10 leading causes of death, 2011. Statistics Canada. 2015. Modified Nov. 27, 2015. Retrieved May 5, 2016.
13. POLLARA Research. Health Care in Canada Survey. 2006. Cited June 11, 2007. http://www.mediresource.com/e/pages/hcc_survey/pdf/2006_hcic_ppt.pdf.
14. Andel C, Davidow SL, Hollander M, Moreno DA. The economics of health care quality and medical errors. J Health Care Fin. 2012;39(1):39–50.
15. Those Magnificent Men in Their Flying Machines; or, How I Flew from London to Paris in 25 Hours 11 Minutes was a 1965 British comedy.
16. Asch SE. Studies of independence and conformity: a minority of one against a unanimous majority. Psychol Monogr. 1956;70(9):1–70.
17. Report of the Presidential Commission on the Space Shuttle Challenger Accident. June 6, 1986. Washington, DC. http://history.nasa.gov/rogersrep/genindex.htm.
18. For an excellent read on the value of medical checklists, see Gawande A. The Checklist Manifesto: How to Get Things Right. New York: Picador; 2010.
19. Sherman RO. Creating psychological safety in our workplaces. Emerging RN Leader. 2012. http://www.emergingrnleader.com/emergingnurseleader-8/.
20. Reason J. Human error: models and management. BMJ. 2000;320(7237):768–70.
21. Heinrich HW. Industrial Accident Prevention: A Scientific Approach. 1931.
22. McKinnon RC. Safety Management: Near Miss Identification, Recognition, and Investigation. Boca Raton: Taylor & Francis Group; 2012.


23. ConocoPhillips.
24. Bridges WG. Gains from Getting Near Misses Reported. Process Improvement Institute Inc; 2012.
25. Medscape 2012 Survey Results.
26. Gallagher TH, et al. Talking with patients about other clinicians' errors. N Engl J Med. 2013;31(369):18.
27. For a good synopsis of this study and other lifestyle factors, see the chapter by Phelps CE, "We Have Met Our Enemies and They Are Us," in Meyer D. Economics of Health. Kalamazoo: W.E. Upjohn Institute for Employment Research; 2016.
28. Central Line Infections (CLI). Canadian Patient Safety Institute. 2016. https://www.patientsafetyinstitute.ca/en/Topic/Pages/Central-Line-Infections-(CLI).aspx.
29. Hand washing: a simple step to prevent hospital infections. CDC Foundation.
30. Measuring Hand Hygiene Adherence. The Joint Commission.
31. Vaidotas M, et al. Measuring hand hygiene compliance rates in hospital entrances. Am J Infect Control. 2015;43(7):694–6.
32. How to stop hospitals from killing us. The Wall Street Journal. 2012.
33. Op cit. Also see Monitoring Hand Hygiene, Press Room, Northwell Health.

2

Why Do We Crave Bad Things?

Dr. Lou moves across the stage in front of hundreds of attendees whose faces remain hidden behind the dramatically bright house lights. He gestures toward a two-story-high image projected on the center-stage screen, depicting an African hunter-gatherer whose lean and muscular physique would make any hardcore triathlete look like he was constructed of muffins. "This is what humans are supposed to look like," Dr. Lou explains. "But this is what we actually look like," as he switches slides to reveal a shirtless man slouching on the couch – with a round, doughy, bare, and grotesquely obese belly spilling over his belt – a television remote in hand. The comedic timing between the two contrasting images always conjures up laughter – and the odd "eww." It does not take long before the sobering reality of our current public health emergency begins to settle in. For the past thousand years, our lifespans have been slowly and steadily increasing. With the exception of somewhat unscheduled plagues and wars, each generation has enjoyed its years on our planet slightly longer than the previous one. That is, until now. Rather shockingly, new research indicates that segments of the population have just recently begun to experience a reversal of this thousand-year-long trend. Moreover, while other wealthy nations around the world continue to see a steady march toward longer lives, the reversal in lifespan is, thus far, mysteriously evident only in the United States. Our longevity, including our visible health, is the most apparent and abundant proof that our incredibly evolved physiology is falling short in its capacity to adapt to a world in which all the previously hard-won sugars, fats, and salts are now everywhere and in abundance. The very same urges to consume high-energy food at every opportunity still inform our behavior.
The outcome of the widening gap between our rapid ecosystem changes and our slow-moving physiology is becoming all too evident. When analyzed for demographic indicators, education level tends to be the most significant determinant of lifespan in America. US-based research indicates that among less educated Caucasians, average life expectancy has actually dropped 4 years in just one generation. The hardest hit, in terms of decline, were less educated Caucasian women, whose life expectancy has dropped 5 years over a 20-year period. On average, college-educated women now live a full decade longer than their non-educated counterparts, and college-educated men live nearly 13 years longer than non-educated men. For demographers, this is a cataclysmic event. After watching longevity improve for over 30 generations, scientists are accustomed to seeing small year-over-year increases of nearly negligible proportion – perhaps upward of 3 months from one year to the next – so to lose half a decade of lifespan in just one generation is ringing emergency bells. This alarming trend runs counter to predictions that life expectancy at birth would soon rise to 100 years in developed countries. Formal calculations by the United Nations had suggested that centenarian status – the term for reaching 100 years old – would be the average life expectancy by the year 2300. Other scientists were even more optimistic, suggesting that in the next 50–60 years, newborns would enjoy an average life expectancy of 100 years. Such predictions of rising lifespan are really not too difficult to grasp. Modern medicine, including technological breakthroughs in diagnosis and repair, organ transplants, and our vast array of pharmaceuticals, is undoubtedly a holy instrument in the pursuit of longer lives. Advances in human growth hormone (HGH) and other non-senescence technologies will surely bring even more realistic promises of longer and healthier lives – although, to be statistically meaningful, these breakthroughs would have to be widely administered. What makes this even more troubling is that the United States now produces more innovation in a year than all other countries combined. To see such a sharp and unexpected decline in lifespan in a nation of that caliber is equally troubling and baffling. According to the oft-quoted Occam's razor, the simplest answer is usually the right one.
So, it’s therefore tempting to theorize that less educated folk are simply not able to secure the kinds of jobs that would afford them corporate health and dental plans, grant them levels of disposable income to buy gym memberships and mountain bikes, or allow them to eat fresh organic whole foods. This is perhaps the simplest explanation – and it’s certainly not without merit. The other explanation, however, has to do with behavior. By all accounts, studying longevity should be a rather dull occupation. While rates of change in average lifespans move at glacial speeds, stagnating or inching up by minuscule amounts each year, there is a good case that what we are now witnessing in lifespan decline is far too great to be attributed to a lack of well-paying jobs. For demographers, a serious drop in lifespan is like a giant asteroid colliding with our public-health planet – and this points to sabotage. Of the two most significant “behavioral” changes seen in less privileged Caucasian women are dramatic increases in the use of prescription medication and smoking. Among the countries where pharmaceuticals are growing most rapidly, Brazil, the United States, China, Great Britain, Germany, Canada, Italy, France, Spain, and Japan lead the charge. Today, nearly one-third of Americans or 100 million people suffer from chronic pain. Treatment options often include prescription medications called “opioids,” which include drugs such as hydrocodone (like Vicodin) and oxycodone (like Percocet), which have soared from 76  million

2  Why Do We Crave Bad Things?

27

prescriptions in 1991 to 207 million in 2013. America consumes nearly 100% of the worldwide Vicodin supply. Opioid drugs not only reduce pain; they provide a sense of wellbeing for the user – but at a cost. The opioid drug activates the same reward-­ seeking part of the brain that responds to heroin and morphine  – but like many addictive substances, the desired effects often wane over time, meaning more of the drug is needed to achieve the same result. In some cases, opioid users quit and then start the drug again but at previously tolerated levels that are much too high for the brain. Moreover, many users enjoy the soothing and euphoric “high” that comes from opioids. But, by design, drugs like oxycodone are released into the body slowly, often negating the desired instant gratification that the drug is capable of producing. Grinding the drug up, snorting it, or injecting it can accelerate the euphoric effects but, at the same time, greatly increase the risk of overdose. This has resulted in a proliferation of opioids by those who did not receive a prescription for them. In 2012, over 5% of the US population, 12  years and older, were using opioids non-medically. In 2010, opioid pain relievers accounted for 82% of all prescription drug deaths. Whitney Houston, Heath Ledger, Phillip Seymour Hoffman, Michael Jackson, and Prince, all died, in part, from medications that had been prescribed to them and are part of a disturbing statistic: that more Americans die of drug overdoses than car crashes [1]. Despite America’s so-called war on drugs, opioid overdoses now kill an average of 78 people a day in the United States, while deaths due to prescription medications now outnumber deaths from cocaine and heroin. Among pregnant women, the misuse of prescription opioids can result in “neonatal abstinence syndrome,” whereby the newborn becomes addicted to the drug and suffers through withdrawal after birth  – and a syndrome that has increased by some 300%. 
Despite the risks, 14% of American women are prescribed opioids during pregnancy. Today, the fastest-growing demographic for opioid addiction is Americans aged 50–69. The husband-and-wife research team of Anne Case and Angus Deaton – the latter a Nobel laureate in economic sciences – suggest that a poor economic outlook in the United States, combined with an increase in opioid use, has contributed greatly to the problem, with less educated non-Hispanic white women taking the full force of the lifespan hit. It's not a stretch for us to empathize with those who may not be able to afford healthy food every single day, or who cannot afford gym memberships or a set of family bicycles, or who work long hours in low-paying jobs that leave little room – if any – for daily fitness routines. No doubt this is a common reality for many. These folks are plentiful in society, and we need to do a much, much better job of taking care of them. But to be dismissive of the lifespan emergency as a problem of the "poor" would be to miss the point. Less educated folks merely represent the thin, sharp edge of the wedge. They are the most visible and poignant symptom of a public health emergency that is now impacting all sectors of society. Punch-clock workers who eat too much sugar and salt, lunching ladies who gargle a bit too much chardonnay, and executives who find a glistening, salty, plate-sized steak under fat-sauce far more alluring after a long day on the road than a spinach salad and a treadmill are all unwitting actors in this public health crisis. Like the proverbial sirens on the rocks, our reward-seeking brains are drawn ever closer to our perilous demise for nothing more than to relish the sweet song of our daily vices. Indeed, we do not have to venture far to hit those rocks. We practically bathe ourselves in sugar, salt, and fat as we trundle about our day-to-day lives. When ancient philosophers argued over the meaning of life, the debates often drew down to how much of our time here is devoted to – if not enslaved by – our self-serving, hedonistic, reward-seeking selves. Of all the amazing things we've invented and all the incredible natural phenomena in this world and beyond, our brains are still one of the most complex and highly sophisticated wonders ever known. Our brains have a hundred billion neurons, each with 1000–10,000 synapses, which are able to join in forming new pathways based on our learning and experiences. Many of these function in a reward-seeking way that would delight and vindicate our ancient philosophers. Our behaviors are very much driven by these cerebral forces. And aside from the biologically necessary urge to have sex, one of the most common longings we have is hunger. In fact, it's interesting how much we all take this requirement for granted. If you were an alien robot being sent to Earth to pretend to be a human, your instructions would surely include a warning that you must remember to ingest food every few hours while awake in order to stave off digestive "hunger" – the uncomfortable sensation your brain provides when your body realizes it may soon run low on fuel. You might think: how inefficient are these humans that they must spend so much of their day organized around feedings!? Yet this is our reality.
And when we go about our daily lives, and especially when we venture outside the home, our world becomes a giant, aromatic, mouthwatering world of food choice. Within this world, we have to make personal decisions – what to eat, when to eat, and how often to eat. And these decisions have an effect on our health, our performance, and our future behaviors. Ironically, our brains – for all the wondrous good they do in regulating the incredibly small but critically life-sustaining details in our bodies – have proven to be very poor counsellors when it comes to food choice. Like the classic image of the arguing angel and devil on our shoulders, the logic of healthy food choice goes nine rounds against our lust for sugar, fat, and salt. Our younger contemporaries might even characterize hunger, our greatest and most beloved of biological assets, as a sort of sophisticated neural "frenemy." As we go about our day and make our behavioral choices, our brains reward those choices. Or, more appropriately, our brains are rewarded, and we feel the benefit of that. Neuroscientists point to the neurotransmitter dopamine, which is released when we engage in rewarding behavior. When we bite into a piece of chocolate, our brains activate dopamine neurons in a central part of the brain called the ventral tegmental area, or what the brainiacs call the VTA. Dopamine signals jet out along a runway called the mesolimbic pathway, which, if our brain were a city, would take us to the "historic district." In this ancient area of the brain, we find basic, and arguably essential, urges temporarily satisfied by the dopamine rush.


Another dopamine route runs upward to the cerebral forebrain, the home of our executive function, where we make judgments and decisions about how best to achieve our desired reward. Along these pathways, dopamine acts like a runner in a race, giving high-fives to other important parts of the brain responsible for memory, emotion, and cognition. Our brain quickly learns what the reward feels like and where it might come from – and what patterns of behavior we should follow to get more of the same. In reality, dopamine's role is far more complex than straightforward stimulus-reward circuitry. Some neuroscientists characterize dopamine as a flag-waving neurotransmitter, predicting when a reward might take place rather than actually giving you the reward. Dopamine release begins when we see a reward coming and floods the brain when we carry out a rewarding experience – or even when we merely get close to one. A study of roulette players showed that the reward circuitry of the brain, along with dopamine, was activated the same way when the players won a round as when they did not win but had a near-miss. This is why it might be better to think of dopamine as the neurotransmitter that fuels our motivation, or urges, to get the reward we know is coming. When we lust for a new love, that's dopamine, and when we crave a soft, buttery croissant, that's dopamine, too. It's an extremely powerful force. When scientists James Olds and Peter Milner inadvertently discovered the "pleasure center" of the brain while implanting electrodes in the brains of rats, the rats exhibited extraordinary behavior. Given the opportunity to self-reward by pressing a lever to briefly stimulate pleasure in their brains, the rats quickly became fanatical, if not desperate, seekers of the pleasure – to the detriment of everything else.
The rats pressed the lever up to 7000 times per hour, forgoing all other external needs, such as food, water, or sex – even pressing the lever to the point of physical exhaustion. Mother rats abandoned their nursing young for the self-stimulation, and the rats would even walk across electrified wires in order to get to the lever, in some cases pressing it for up to 24 hours straight – to the point of starvation or dehydration, had the experimenters not intervened. It did not take long before scientists began asking the question, "If we can so easily activate such powerful reward circuitry in our brains, can we make people desire things they normally would not?" If behavior experimentation through brain electrodes already sounds sketchy and unethical, it was about to become much more so in one of the most controversial human experiments of the twentieth century. Led by psychiatrist Robert Heath, the experiment centered on a subject, an openly homosexual male whom Heath called B-19, who had undergone hospitalization for several bouts of depression and attempted suicide. A treatment option included implanting electrodes in his brain, which Heath observed activated feelings of pleasure, motivation, relaxation, euphoria, and sexuality. Replicating the rat experiment, Heath permitted B-19 to self-stimulate his own pleasure center by pressing a lever. B-19 became obsessed with the activity, sitting for 3-hour sessions and pressing the lever up to 1500 times per session. When a session ended, B-19 would try to fight off the research assistants, asking for just one more press.


Turning from unethical to grossly unethical, Heath decided to see if B-19, who was overtly homosexual, could be made to desire heterosexual activity. At first, Heath used pornographic movies depicting heterosexual sex, in which B-19 showed little interest, if he was not somewhat turned off. Then, following a session of reward-center electrode stimulation, the same movie was shown; this time, B-19 became extremely aroused. Following this response, Heath decided to experiment with the real thing, hiring a 21-year-old female prostitute to visit B-19 after a bout of electrode reward. Despite the rather unromantic setting of the laboratory, Heath recorded an observation in his scientific paper describing the lengthy timeframe over which B-19 and the prostitute carried on a conversation, then writing (in a most scientific tone), "Later, the patient began active participation and achieved successful penetration, which culminated in a highly satisfactory orgasmic response, despite the milieu and the encumbrances of the lead wires to the electrodes." Heath notes, to his delight, that after all the experimentation had ended, B-19 continued to have a heterosexual relationship with a married woman for several months. For Heath, this significant behavioral change pointed to the relationship between the activation of reward circuitry and whatever stimuli one associates with that reward. Like the roulette players, some addicts have expressed that a hit of drugs does not actually make them feel better or satisfied, despite the fact that they are being guided, if not controlled, by extraordinarily powerful reward-seeking urges. Regardless of his experimental results, there is little doubt that Heath's experiment was grossly unethical – if not downright ugly in its intent and execution. What we associate with reward is indeed very important.
If it’s actually the environmental cues or motivating forces that rev up our reward circuitry rather than the activity itself, then perhaps we have a chance to change some of the more detrimental behaviors that surround poor dietary choices that feed our cravings for salt, sugar, and fat. When Dr. Lou, in his presentation, shows the image of the lean hunter-gatherer, that hunter-gatherer’s brain would have also surely sought sugar, salt, and fat, as we do. But, those meals would not likely have emerged every few hours, so the reward, although powerful and directing, would not have resulted in the hunter-gatherer gorging multiple times each day. More likely, food would have been scarce and meals far and few between. To adapt to this irregular and perhaps insufficient diet, our bodies adapted to convert and store the byproduct of sugar digestion – fructose – as fat. With just a little bit of stored fat, our ancestors’ bodies learned to survive when meals were far and few between. Yet, the problem is that they didn’t actually eat very sweet foods. Some vegetables and perhaps fruit (although less likely) meant that even a body that was very efficient at converting and storing fat had little real opportunity to do so. Fast forward to today and you see that same body and physiological capacity to store sugar as fat, although now we are up to our neck in available sugar. To put things in perspective, if our ancestors were at one time savoring a nice sugary carrot, we’d need a bushel full of carrots to equal the amount of sugar found in one chocolate bar – a food that many people often add to their lunch as dessert or have as a snack during coffee break.


The amount of excess and needless sugar that we pour into our bodies is truly astonishing, most of it in liquid form. Whereas one serving of fresh strawberries contains the equivalent of 1 1/2 cubes of sugar, a standard cola soft drink contains the equivalent of about 16 cubes of white sugar. A frozen ice-type slushy drink from a corner store contains the equivalent of 22 cubes of sugar, and a chocolate shake contains a whopping 27 cubes. Why is it that many of us cringe if we see someone drop sugar cube after sugar cube into a cup of coffee but think nothing when someone grabs a cola drink that contains enough sugar cubes to fill two cupped hands? But those are "bad" drinks – soda, chocolate shakes, and frozen slushies. What about drinking fruit juices throughout the day? According to a UK study that looked only at drinks marketed directly to children, almost half contained enough sugar to constitute a child's daily maximum. Today, added sugar in our food and drink is literally killing us. Two-thirds of adults in America and one-third of children are overweight. Worse yet, according to BMI scores, nearly one-third of the entire adult population in the United States can be classified as obese. While researchers are careful not to point the finger at just one cause of skyrocketing obesity, high daily intake of added sugar seems to be the leading suspect – and the most likely vehicle for this sugar is beverages. Indeed, the World Health Organization has placed "added sugar" or "free sugar" at the top of its most-wanted list – the leading culprit in the battle against obesity. Most foods contain some sugar, but the sugar in fruit, for example, is ingested along with copious amounts of fiber and other bulk, and this "hard" food acts to slow down the absorption of sugar so that our bodies are better able to digest it. Studies show that in the United States, sugary soft drinks make up nearly 10% of our total daily caloric intake. So, what makes this 10% so sinister?
One problem that researchers point to is the low satiating effect of these beverages. Most of us wouldn't consume a soft drink as a means to stave off hunger. Despite the fact that we're taking in so many calories, we don't feel full. It turns out that too much sugar actually affects the part of our brain that lets us know that we've eaten enough food. Not only do we remain hungry, but even worse, the brain often tells us to eat even more than we would have without the sugary drink. Add to this our portion sizes. In the 1950s, a soda drink was about 6 1/2 oz, but in today's world it's not uncommon for kids to carry around a super-sized 20 oz drink, which, while packing a massive amount of energy, does little or nothing – or worse – to make us feel full. Any sugars in our diet have to be processed by the digestive system, which sends the resultant glucose to our liver. Our liver is then tasked with managing this sugar tsunami as best it can – by calling in the insulin troops to combat the sugary assault. The result is that the excess glucose is stored as fat. If we do this too much or too often, we risk developing a problem with the way our body reacts – or fails to react – to insulin, an issue that can sometimes manifest in the form of type 2 diabetes. The link between obesity and type 2 diabetes is well established, although researchers are still working out precisely why this is the case. One interesting theory has to do with the inflammatory response that liver fat produces when it has to deal with massive amounts of sugar; another leading theory concerns the location of the fat on the body, with deep stomach fat being the prime suspect.


2  Why Do We Crave Bad Things?

Interestingly, it wasn't long ago that type 2 diabetes was commonly referred to as "adult-onset diabetes." But in a single generation, that label has become obsolete. At that time, it was generally understood that type 2 diabetes simply did not happen to anyone under 30 years of age; today, the journal Diabetes Care is calling childhood diabetes an "emerging epidemic." The World Journal of Paediatrics reports that in the year 2010, 35 million children were obese or overweight, and that by 2020 the figure was expected to double. When it comes to diabetes, the worry among pediatric specialists stems from a rather disturbing view of the connection between childhood obesity and diabetes onset. While the adult progression toward diabetes is a rather slow process – often taking a decade or so – overweight children make the transition much faster, in about 2 1/2 years. Another unique and disturbing difference between children and adults is that children seem to be far more resistant to conventional means of diabetic intervention, leading some pediatricians to characterize childhood type 2 diabetes as a "one-way trip." It is understandable why type 2 diabetes was once characterized as a disease of the "affluent." But that stereotype – a sedentary life spent eating bonbons amid excess leisure time – has also been undone. Around the globe, less privileged and less wealthy populations have become increasingly sedentary as their lives depend less on foraging for the day's meal, hiking long distances, or chopping wood to fire their stoves, while they gain increasing access to ready-made and affordable calorie-rich foods. According to the International Diabetes Federation, 75% of people with diabetes live in low- and middle-income countries, with South and Central America, Africa, and Southeast Asia predicted to nearly double their numbers of diabetics over the next 25 years. Astonishingly, 12% of global health expenditure is spent on diabetes.
Prevention is the key, but that answer is easier said than done. First, there is the rather cold argument that public or private dollars invested in disease prevention programs only result in people living longer and ultimately costing the healthcare system even more money by virtue of their longer lives. Others counter that the benefits of prevention are beyond dispute – that never getting the disease logically burdens our society the least. In reality, this debate mostly fires up the bellies of academics who argue over whether the problem should be analyzed in terms of pure fiscal economics or value to society. Moreover, the proposition that adjusting lifestyle and behavior costs little is a solid one. We'll leave the academics to carry on the debate. A more important concept to understand is that prevention typically comes in three forms: "primary prevention" helps us avoid getting the disease in the first place; "secondary prevention" is the tool or test that helps discover the disease process early; and "tertiary prevention" is the technique or tool that helps control the ill effects of the disease. Of these three "opportunities," the one we control most directly is the first: primary. Here, we can adjust behaviors – at little or no cost to us or society – in order to take control of our health and our lives. And if dying early isn't motivating enough, there's another reason why we may wish to take the bull by the health horns: our brain fitness.


Regardless, obesity resulting from diets high in added sugar can affect our brain health and mental acuity. If high-sugar, high-fat food is the enemy of health, this enemy is extremely adept at psychological warfare. Back to Dr. Lou's hunter-gatherer slide: we now understand that the way we eat and store fat is about more than how many miles we once had to travel each day for a bite of food, wood for our fires, or water to drink. It's also about how our brains respond to our sugary diets. A fascinating study of obese women demonstrated just how much our brains can trick us into self-destructive patterns. Scientists hooked a group of somewhat overweight women up to a functional MRI (fMRI) scanner and watched their brains as they ingested a milkshake made of rich ice cream and syrup. As we might suspect, the reward centers of their brains shone bright with glee. Six months later, the very same women were invited to repeat the experiment with the same fMRI and the exact same milkshake. That's when the scientists discovered something interesting. The women who had gained weight during the 6 months showed less of a reward response than those who had not gained weight. Scientists refer to this phenomenon as hypo-functioning reward circuitry. At its core, it's the same problem we see with addiction – sensitivity to the stimulant becomes reduced over time, so that more and more of the substance is needed in order to achieve adequate brain reward. It's no surprise that many scientists have compared the reward deficit in obese individuals with the similar deficits seen in heroin and cocaine users. The disastrous effect of this obesity-induced reward dysfunction is a downward health spiral in which overeating blunts reward, which then causes more overeating, blunting reward further. No wonder it's so difficult to lose those pounds! You're literally battling your own brain.
When scientists delved deeper into how our brains trick us into eating more of what's bad for us, they found something else about how being overweight or obese may cause us hardship. The same reward circuitry and dopamine struggles that lead us to eat more and more also cause us to be compulsive. Binge eating and particularly yo-yo dieting (which typically runs counter to long-term health) may well be results of the same blunted reward circuitry. Compulsivity means less ability to control our decisions and our lives, feelings of failure, and the search for even more high-sugar and high-fat "comfort" foods to help us feel better. With nearly one in three American adults, and one in five adults worldwide, considered obese, scientists have begun exploring what other downstream effects we might expect from this global epidemic. Of extreme interest is the research that now points to a relationship between obesity and impaired cognitive functioning. Several studies now link midlife obesity to dementia and Alzheimer's later in life, but more interesting perhaps are those studies beginning to show a link between body fat and poor memory, spatial memory, executive function, and cognitive performance – in particular, goal-directed behavior. A study that spanned decades showed that, for the test group, the higher the body mass index (BMI), the more gray matter atrophy was noted in the brain regions associated with judgment, decision-making, and memory. Increasingly, studies confirm that those with higher BMIs not only have less gray matter volume – they actually have smaller brains. So, for the astute reader, the question arises: which is the chicken, and which is the egg? Are those who might be born with less gray matter volume simply more vulnerable to obesity? Or does obesity cause the brain to shrink? In a fascinating study, brain researchers tackled this question, and the conclusions are rather groundbreaking. The study surmises that in overweight and obese individuals, the choice pattern moves away from "goal-oriented" behavior toward "habit-based" behavior, thereby causing a reduction in the thickness of the ventromedial prefrontal cortex. The prefrontal cortex is responsible for a host of executive functions associated with judgment, reasoning, and decision-making. Put another way, the thinking part of our brain – the part that makes us creatures of reason, logic, and judgment – is, with increasing obesity, giving way to stimulus-response-type behavior. In a shocking way, we are literally zombifying ourselves, turning ourselves from complex, thinking animals into more simple-minded ones. In our world of high sensitivity, scientists are treading lightly on declaring a potential link between obesity and poor cognitive performance – but the evidence is becoming increasingly difficult to ignore. In one study on obesity, scientists at Kent State University recruited some 150 participants who, on average, weighed just under 300 pounds. After cognitive tests were performed and compared with an existing international database, the researchers found that their obese participants scored much lower on tests of learning and memory, with 25% of the participants falling into the "impaired range." To follow up on the findings, willing participants underwent bariatric surgery, or gastric bypass surgery, and were tested again 3 months later. By this time, most of the participants had lost about 50 pounds as a result of the surgery.
Astonishingly, most of the test scores showed substantial improvement – not only to average but, in many cases, to above-average cognitive scores. The participants who chose not to undergo the weight-loss surgery were also retested and, in keeping with the findings, scored worse than on their previous tests. To get to the bottom of what was happening, the researchers slid the participants into an MRI scanner to understand how their brains had changed with obesity and, then, with weight loss. Interestingly, the researchers found that the white matter of the obese participants' brains showed signs of damage. White matter is like a sheath or protective wrapping that covers the nerves of the brain (the axons) as they transmit information. The myelin sheath that makes up this protective barrier is white in color, giving white matter its name. The axons that white matter surrounds are responsible for the transmission of information between areas of the brain – specifically, the speed of that transmission. Damaged white matter, as seen in the obese participants, impairs both learning and memory. Researchers believe this white matter degeneration may be associated with the inflammation linked to obesity, because of an elevated protein called C-reactive protein (CRP), a classic marker of systemic inflammation in the body. C-reactive protein plays an important role in your body's ability to deal with damage, helping to kickstart the body's protective inflammatory reaction. However, if inflammation remains chronic, as is often the case with disease processes involving the heart, cancer, or diabetes, then C-reactive protein can remain elevated.


Today, your family doctor can order a simple blood test that measures your C-reactive protein, a lab test that is gaining popularity as an early indicator of many impending chronic disease conditions. While bouts of inflammation are normal if your body has been injured, reversing chronic inflammation should be a priority. Promotions of anti-inflammatory diets abound, but what the diets typically have in common is a reduction or elimination of simple carbohydrates (sugar) as well as dietary saturated fat. The third siren in our trio is salt. Like fat and sugar, dietary salt is essential for life. Its key mineral, sodium, allows the cells in our bodies to function, maintaining proper ionic balance and supporting respiration and other muscle functions. Unlike plants, whose thick cellular walls hold in immense internal pressure, animal cells (including ours, of course) have softer and more permeable membranes. Sodium – and specifically the "sodium pump" – helps generate energy through an active transport mechanism across cell membranes, adjusting the ratio of potassium and sodium inside and outside of cells in a critical, life-sustaining ionic ballet. Sodium and potassium ions act as electrolytes, carrying tiny electrical charges important in maintaining the flow of water across cellular membranes and in carrying the electrical impulses necessary for our nerves and muscles (including the heart). Indeed, improper electrolyte function can be deadly. For the human animal, electrolyte imbalance can result from too much or too little water ingestion, or too much or too little intake of the electrolytes themselves. By far the most common form of imbalance is dehydration, caused by too little water in the diet, through either food or drink.
When we have too little fluid in our bodies, either because we don't ingest enough or because of diarrhea, vomiting, or excessive sweating, the body draws water out of our cells and into our blood in order to maintain blood pressure and other essential functions. The kidneys, whose job it is to regulate body fluid and electrolytes, also begin to conserve water by reducing fluid output in urine – making urine darker in color, which, incidentally, is one of the best ways to self-assess one's own hydration level. This reaction to dehydration can alter our electrolyte balance, ultimately affecting the function of our nerves, muscles, and organs. Severely dehydrated people, like those who have been rescued from natural disasters, typically have very high sodium concentrations in their bloodstream. This condition, called hypernatremia, can cause dizziness, vomiting, and diarrhea. But we are learning that we do not need to be on death's door to suffer the effects of dehydration. The human body is about 60% water, so measuring dehydration in terms of body weight loss is relatively effective. This amount of water, called total body water (TBW), is normally regulated by the body so that TBW does not drop by more than 1%. Recent studies indicate that up to 80% of Americans go about their day in a mildly dehydrated condition. The effects of even mild dehydration on the order of 1% of body weight (which would be barely noticeable) include measurable cognitive impairment and reduced decision-making. As dehydration increases to around 2%, individuals will experience a marked decline in visual-motor tracking, short-term memory, attention, and arithmetic efficiency. At about 4% dehydration, reaction times can slow by as much as a quarter. A fascinating and logical area of new research considers the role that dehydration plays in automobile accidents. Worldwide, there are 1.2 million deaths a year due to car crashes and another 50 million injuries. One UK study claimed that 68% of all motor vehicle accidents are caused by human error. Looking at typical mild dehydration levels, researchers used driving simulators to assess driver error during prolonged drives. They found that the difference between hydrated and mildly dehydrated drivers was significant in terms of errors committed on the road. In fact, equivalent error rates were noted in other simulations involving subjects whose blood alcohol levels were 0.08% (the impaired-driving threshold in many state jurisdictions). Not only can severe dehydration be fatal – mild dehydration can be fatal too, if one is behind the wheel of a car. With the majority of the population going about their workday in some form of dehydrated state, one has to wonder why we are only beginning to pay attention to this routine impairment and its glaringly obvious, extremely simple remedy. One of the myths around hydration is that coffee and tea dehydrate you. This is simply not true of the caffeinated beverages typical in North America, which contain large volumes of liquid – enough to outweigh the relatively minor diuretic effect of the caffeine. In the United States, nearly 150 million people drink coffee every day, but this pales in comparison to European countries: per capita, Finland, Sweden, Switzerland, France, and Germany far exceed America's love affair with coffee. Despite this, office dehydration could still be significantly impacting our collective performance. Up to 75% of Americans may be living in a chronically dehydrated state.
Even if only half of Americans are dehydrated, that represents an enormous amount of business being conducted by individuals who may be suffering from some level of cognitive impairment. Our daily behaviors are the key to reversing the deleterious health effects of the relentless dietary sirens who threaten to lull us onto the rocks of ill health. Our brains – both stalwart guardians of our bodies and nefarious gate-openers for Trojan horses like sugar and salt – harbor a weakness of kryptonite proportions: the dopamine-fueled reward circuitry. Overcoming instantaneous reward is not easy. Our ancient brain has evolved for survival and provides us with simple, nearly automatic orders about whether to run from a thing, hide from it, eat it, or have sex with it. Keeping this rather crude part of our brain in check is our prefrontal cortex (PFC), the larger forebrain (near our forehead) that provides us with reasoning, judgment, planning, and sound decision-making. Indeed, it's this executive function that separates us from the rest of the animal world. The prefrontal cortex allows us to plan, to set goals, and to control our urge for instant gratification. Many of us can relate to this: when we see a chocolate-glazed donut or a bag of salty chips, we need to dig deep into our willpower if we want to hold off eating it. As described by Kelly McGonigal, a Stanford University professor who teaches a very popular class on the science of willpower, willpower can be thought of as having three parts: an "I will" region in the upper left side of our prefrontal cortex, an "I won't" in the upper right, and an "I want" in the lower middle region. Together, these make up our willpower – the center of our self-control. Willpower most often comes to mind when we think of overcoming simple urges. Literally defined as "empowering our will," it is the uniquely human capacity to manage our choices and behavior so that we can ultimately plan and achieve goals. Just as an overactive and over-satisfied reward circuitry seeking sugar and fat seems to create a rather primitive habituation spiral, the opposite can be true for those who work hard to overcome these more basic hardwired drives. This is a critical piece; it is about overcoming short-term urges in order to achieve long-term goals. In the now famous Stanford marshmallow experiments, conducted by psychologist Walter Mischel in the late 1960s, nursery school children were each given the option of enjoying one marshmallow (or cookie or pretzel, as chosen by each child) right away, or delaying gratification in order to receive two marshmallows 15 minutes later. It was a choice between quick gratification and delayed gratification in pursuit of an even bigger prize. While the original experiment had only 28 participants, subsequent experiments ultimately provided 600 sets of results. Researchers discovered that approximately one-third of the children delayed gratification in order to receive the bigger reward. Most interesting, perhaps, is what this apparent willpower meant for these children later in life. Follow-up research on these same children some 30 years later showed that the children who had shown the greatest self-control during the experiment had, later in life, higher SAT scores, better physical fitness scores, and better overall life satisfaction scores. As technology improved, experimenters were able to conduct fMRI brain scans on these very same people, years later in their midlife.
A selection of original participants included those who had demonstrated the greatest willpower and those who had demonstrated the poorest. The fMRI results revealed significantly higher prefrontal cortex activity in midlife for those who had originally demonstrated the greatest willpower as children, while those who had originally scored lowest for willpower showed higher activity in the ventral striatum, an area of the brain known to be linked to addiction. Put another way, an individual's capacity for self-control, as observed as early as preschool, seems to be one of the greatest predictors of overall life success – even more than intelligence. Indeed, willpower – the ability to chart a course and stick to it – may very well be the greatest element of human achievement. As is always the case, grand experiments attract great follow-on critiques, and the one the marshmallow experiment brought forth was the question of whether we are simply born with certain levels of willpower or whether willpower is malleable, changeable over time, and able to be learned. In an experiment similar to the original marshmallow study, researchers at the University of Rochester wanted to see what would happen if some of the children began to question the trustworthiness of the tester to actually deliver on the promise of a bigger (delayed) reward. Without trust that the second marshmallow would be delivered, how many of us wouldn't just choose to enjoy the first one? Not surprisingly, the children who trusted the experimenter showed nearly four times the level of self-control of the children who had reason not to trust the experimenter. This result seemed to back up the idea that we are not innately born with self-control superpowers; rather, most of us make decisions about whether to delay gratification based on our confidence that the future will, in fact, deliver those gratifications. Interestingly, this helps explain the controversial experiment that actually gave rise to the marshmallow test. In the late 1950s, the marshmallow experiment's creator, Walter Mischel, had observed that extremely strong stereotypes existed on the island of Trinidad between children of Indian and African heritage, the former charging that the latter lacked self-control and were reckless. Mischel decided to run an experiment in which children could receive a one-cent candy immediately or wait 1 week for a ten-cent candy. His results showed a significant difference between the two groups, with the Indian children much better at delaying gratification in order to win the bigger reward. However, when Mischel dug deeper into the results, it was not ethnicity at all that was the greatest determinant of willpower – it was trust. He noted that the absence of a father figure was quite prevalent amongst the children of African heritage but not amongst the Indian children. As with the much more recent Rochester experiment, it is how much children trust in their future, whether they believe the promises of others, and how safe they feel in their decision to delay gratification that most often determines whether they will exercise willpower and self-control. Anything that makes us feel insecure may create a perception that we need to seize upon small advantages before they are taken away from us. For those of us who routinely give in to urges, dietary or otherwise, we know that our brains begin to change, shifting toward decision-making based on satisfying habits rather than on pursuing goals. Turning this ship around is not without challenge.
Because habit-based decision-making feeds on short-term rather than long-term thinking, and on emotional rather than logical thought processes, it's important that we set achievable short-term goals that bring a sense of excitement and accomplishment. These short-term goals build into medium-term goals and eventually long-term goals. Many individuals make the mistake of making the long-term goal the target, but for those whose lives (and brains) are oriented toward habit decisions, the notion of long-term thinking may be too difficult to sustain. Moreover, if it's not fun or interesting, it will run counter to the way habit decision-making feeds off emotional reaction. An "emotional eater," for instance, needs either to address the core issue that triggers the eating or to replace "eating" with another, healthier activity. When people say that quitting something "cold turkey" is difficult, they aren't kidding, because they are directly challenging their dopamine reward-seeking circuitry. For a lucky few, if they can survive it, so-called cold turkey may work, but for others it often leads to repeating cycles of success and failure, like the yo-yo dieting many of us have experienced. Once on course, self-control still takes concentration – it even takes energy to use willpower. In other words, the effort it takes to resist consumes our resources. In a somewhat tasty experiment (for some), participants were brought into a room with freshly baked cookies on one plate, their aroma filling the room. Beside the warm cookies sat a bowl of radishes. Some of the participants were asked to sample the cookies, and a separate group of participants was asked to sample the radishes. Then they were all given 30 minutes to solve a somewhat complex puzzle. Those who had tasted the cookies performed much better. In fact, many of the radish eaters simply gave up on the puzzle. All in all, the cookie eaters lasted nearly twice as long as the radish eaters. Why? Experimenters began to investigate the idea that our willpower is not fixed and limitless – that we can run out of willpower, like running out of gas in a car. When we spend a great deal of time being disciplined, our brains seem to run out of willpower, and we revert to something that feeds our short-term emotional urges. Roy Baumeister, a pioneer in the field of willpower who conducted the original cookie-versus-radish experiment, concluded that those who ate the radishes became willpower-depleted after having to overcome the urge to eat the fresh-baked cookies. They had little willpower left for the tedious task of completing the challenging puzzle. The cookie eaters, having spent no willpower reserves, were able to last longer during the challenging cognitive test. As strange as this may sound, we all know the feeling. Take, for example, the rather simple case of attending a daylong event with your supervisors, attending a conference and meeting new colleagues, or even having distant relatives stay at your home. Even if the continuous social interactions are not extremely taxing, they use brain power to stay alert, engaging, and on your best behavior. It's what some people call "being on" all day – and it can most certainly be draining. This over-performance affliction can be particularly poignant for high achievers who push themselves to have their "game face" on all day, particularly in settings where their performance is being judged by others.
It’s no surprise that we think of “shore leave” as being that stereotypical weekend where disciplined sailors, who have had little opportunity to indulge their cravings and impulses, go to blow off steam. In the popular movie, The Matrix, the character Neo (played by Keanu Reeves) had to make the choice between taking the red pill or the blue pill, one returning him home and the other leading him toward a journey of truth and adventure. Our brains are not so black and white – or red and blue. If the blue represented cooler and less emotional long-term goal-oriented thinking and red represented hotter and emotional short-term thinking, the science tells us is that we can learn to use our hotter (red) side to help build our cooler (blue) side. While the marshmallow test showed that there may be “blue thinkers” and “red thinkers” among us, we now know that those who allow themselves minor indulgences, or red choices, empower their capacity for sustainable blue choices. One of the areas where we see incredible feats of self-control and willpower is in athletics. There are world class athletes who admit they have never tasted a chocolate cookie, never ever had an alcoholic drink, weigh every crumb of their food, or engage in sleep patterns so ritualistic you could use their sleep habits to set an atomic clock. In fact, Dr. Rob knew many such athletes – and perhaps for some time while competing on the national team cross country ski team, Dr. Rob would include himself in this group. The ability to control oneself is not only a uniquely human trait, it’s also what enables us to achieve truly exceptional ends. As Stanford’s Kelly McGonical reminds us, willpower is more than self-prohibition, or what you won’t

40

2  Why Do We Crave Bad Things?

do, it’s also defining what you want, and what you will do to get it. In reality, it’s our ‘drive’. Athletes exemplify this trait. Dr. Rob recalls workouts where he and his teammates would run hill repeats with ski poles (up the ski-hills  – not down them), nearly collapsing after each interval, lungs burning and screaming for air, choking, and nearly vomiting, while their legs shook uncontrollably under the stress. While still barely able to coordinate a walking movement, the athletes would descend down the hill to do it again, and again, and again. It would not be uncommon to do 20 such intervals. And that was one workout, of two workouts a day, 7 days a week. In fact, Dr. Rob’s “fun day” was a near 50 km (approximately 30 mile) slow run on each Sunday, during which the team could chat and joke with each other. Life was spent training, measuring and eating food and water, and being strapped to a heartrate monitor – sometimes even while they slept. To survive what must seem like insufferable discipline, nearly all athletes Dr. Rob knew all had one coping strategy in common. They permitted themselves mental outlets. It was not uncommon at all to see an athlete wearing punk rock clothes when not training, talking of going to some head-banger concert, talking about partying hard with girls they liked, tackling a set of Class 4 rapids on a river, or simply lying around watching movies in their underwear with a bag of popcorn on their chests. Whatever it was, most athletes permitted themselves stolen moments to blow off steam – to relieve the pressure that builds up living a life locked tight by shackles of willpower and self-control. Perhaps it’s because of the coaching, or the direction of sports psychologists, or simply experience about what works, but athletes have become rather unique masters of the dance between self-discipline and indulgence in order to reach extremely high levels of performance. It’s almost as if you can’t have one without the other. 
Yet athletes also have one other unique trait, and that is an extraordinary ability to push through hardship when most of us would call it quits. The willpower to continue to fight under extreme physical and mental stress is rather unique because, for most of the population, willpower is a finite resource that easily runs thin without frequent recharging. Through mental conditioning, athletes have unusually high tolerance during times when their brains are screaming for them to stop. It’s after the training, after the race, or after the event that athletes permit themselves the much-needed downtime. After racing the US Olympic trials, Dr. Rob recalls seeing athletes lying in the sun, watching corny movies, or practicing photography. It’s as if an opposite activity was needed to cool their physical and mental engines. Business leaders are another group that shows very high levels of willpower and self-control, not unlike athletes. According to Professor McGonigal, highly successful business leaders exhibit athlete-like qualities when it comes to the prolonged bouts of willpower needed to complete tasks. As McGonigal notes, the ability to push past jet-lag, sleep deprivation, or long hours of stressful negotiations, all while performing at a high level, is the type of character trait typically seen in athletes. However, like athletes, business leaders need downtime too, to offer reprieve to their highly willed brains. Those business leaders who survive their demanding world surely have their own strategies for dealing with the rigors of work-life realities. And these
techniques of blowing off steam don’t have to be harmful habits, like excessive drinking, drugs, or carousing; they can be positive outlets such as photography, mountain biking, sailing, fun dates with a spouse, family hikes, or just eating some comfort food on the couch with a glass of wine and a good movie. The brief indulgence has to be just that – an indulgence. It can’t be another test of willpower, like forcing ourselves to do something we don’t like as reprieve from a workday that also demanded concentration and self-control. Small doses of relaxation and fun are like the breaks between sets when exercising in the gym; they allow us brief rest periods so we can recover, rally, and push through the next big effort. So the secret is to permit ourselves momentary “tactical indulgences” in our behavioral self-control in order to replenish our willpower levels (so long as they’re not harmful, of course!). Give ourselves permission to fulfill our short-term emotional side, and we will, as the cookie and radish experiment showed, ultimately enjoy more achievable and sustainable levels of self-control. Tactical indulgences, when planned well, also help sidestep the potential deleterious effects that short-term reward and habit-based decisions have on gray and white matter in our brains. So long as our premeditated rewards become part of our executive functioning – which includes goal-oriented decision-making and judgment – we are helping to ensure that we are not simply giving in to habit but are continuing to utilize our minds in the way that makes us most human.


3  Raising Children on War, Cartoons, and Social Media

The city of Bern, nestled in the heart of Switzerland, is renowned for its beautiful medieval architecture. Surrounded by rolling green hills on which bell-clanging herds of cattle freely graze, the old town has been proclaimed a UNESCO World Heritage Site. This prestigious recognition stems from the city’s uncanny success at preserving and incorporating its impressive medieval arcades, ornate buildings, and grand fountains into a modern urban landscape. Amidst Bern’s heralded architecture is one of the city’s more curious features, a tall fountain statue called Kindlifresser, which depicts a towering ogre-like figure eating a baby – his massive white teeth about to crush the infant’s skull. At his side, a sack of three frantic, crying babies looks on in desperation as its occupants await their imminent demise. The statue’s exact origin remains somewhat of a mystery, although pet theories abound, from a derogatory and racist depiction of Jews, to the Greek Titan Kronos eating his six children, to the local version of Switzerland’s infamous “boogie man,” whose job it was, in part, to remind children to behave during Switzerland’s Night of the Fasting Festival. In spite of what some see as the apparent humor of a baby-eating ogre threatening children with death, the “Child Eater of Bern” has been terrifying Swiss children for more than five centuries. Beyond the mysterious and fanciful theories as to the statue’s origin, the rather stark contrast between Bern’s idyllic urban landscape and the mysterious Kindlifresser fountain might also be viewed as a modern-day reminder of a much more real and contemporary piece of Swiss history: an eerily dark chapter during which Switzerland’s treatment of her most precious loves, her children, was fraught with inexcusable abuse.
Now in his 70s, Switzerland’s David Gogniat remembers the night he was kidnapped at the tender age of 8 years old, destroying forever a humble but happy life that he shared with his mother, his brother, and two sisters. Like an unshakable curse, his memory of that horrific moment is forever branded into his psyche. David recalls two of Bern’s police officers coming to the door of his modest home. An argument ensued between his mother and the police as they ascended the stairs to her children’s rooms. Tensions escalated – words turned to action – and his mother, in her
desperation, fought back against them, pushing one of the police officers down the flight of stairs. The officers left, but returned the next night, this time with reinforcements. David recalls how his mother was held down by the police while other officers took him from his home – to be sold into slavery. Eventually, David’s siblings would meet the same fate. Only recently acknowledged and discussed in open public, Switzerland’s dark relationship with her children casts a long shadow on the stereotypical memes we typically ascribe to the nation – gorgeous mountain scenery, clean air, beautiful and healthy peace-loving people, the very home and beating heart of the United Nations, and the pride of modern European living standards. Yet for more than a century, from the late 1800s to the late 1900s, Switzerland condoned a practice of removing babies and children from Swiss parents who were deemed “unworthy,” in particular poorer parents and those who were single or divorced. Far from providing state assistance to those in need, the children were sold, often at auction, to farmers and factories to become indentured servants – the polite-company term for slaves. Only in very recent years, through relentless journalistic exposés, victim movements, and documentaries, has the true magnitude of Swiss child slavery come to the fore. They were called Verdingkinder, or “contract children.” Historian Marco Leuenberger estimates that between 1850 and 1950, 5–10% of all children in Switzerland had been forcibly removed from their homes, forever separated from their parents, and sold by the government into slavery – astonishingly, a practice that continued in that country nearly into the 1980s. In the region that surrounds the city of Bern, some 300,000 Swiss-born children were sold, mostly at public auctions. The buyers were typically farmers who used the children as workers.
But this was not the healthy and wholesome imagery of sunsets, wheat fields, and firefly-dancing evenings that one might associate with a child growing up on a family farm. These children were neither loved nor nourished in this way and were routinely forced to survive in sad and abusive conditions. As adults, these “contract children” have finally been able to speak openly about their experiences living in slave conditions, and their narratives are shockingly similar. Stories of going hungry are common. They rarely, if ever, ate with their host families (unless the government inspectors were visiting), and they worked long hours in the fields, often wet and covered with manure, and in inadequate clothing to keep warm. It was common for their pockets to be sewn shut in cold weather, so they’d be forced to keep their hands busy to stay warm. Children as young as 2 years old were carrying milk or scrubbing floors. And there was abuse – lots of it. Children were regularly beaten, suffered horrifying psychological torture, and some were also routinely sexually assaulted. One story, told by retired journalist Turi Honegger, describes his own experience as a contract child. He was 14 years old and lived under the stairs of a family farm. He describes his life as “work and sleep.” Except for work assignments and beatings, the family ignored him. He had not a friend in the world. His stories include being beaten severely and then being locked in the stables and told to eat livestock feed for days on end.


Of course, other countries have committed similar crimes against their own children – Australia against her aboriginal youth, England’s deportation of children (without their parents) mostly to Australia to work on farms, and the Catholic Church’s indenturing of some 10,000 “laundry slaves” in Ireland, in which young girls were held against their will in Catholic-run workhouses – a program that continued until 1996. And then there is Canada’s residential school system, which forced 150,000 indigenous children from their homes and families into highly abusive and disease-ridden assimilation schools, under horrible conditions in which many scared and innocent children died, their bodies buried by the government in unmarked graves. The residential school system of Canada is widely recognized, and condemned, as cultural genocide. Of the hundreds of thousands of children who were forced into slavery in Switzerland, some 600 were also removed from the Jenisch people (the most populous gypsy group in Switzerland), some of them as infants, forcibly torn from their mothers’ arms. Harsh treatment of the gypsies was quietly but officially confirmed in a publication put forth by the Swiss Government in 1998, in which it described a Swiss-sanctioned program called Kinder der Landstrasse (“Children of the Country Roads”). The program was run by a eugenics-themed Swiss government agency called Pro Juventute, whose mandate included removing newborns and children from their mothers as well as forced sterilization and medical experiments. Kinder der Landstrasse continued well into the 1970s. The unfathomable trauma of being separated from mothers and fathers, being placed into quasi work camps and factories at a tender age, and being exposed to horrific physical, psychological, and sexual harm not only creates ongoing toxic stress during the childhood years, but may very well cause irreparable damage into adulthood.
Young developing brains are far more susceptible to stress damage than adult brains, in part because of the rapid way in which they grow and develop. Like many species on our planet, human brains are composed of three major structural regions: the “hindbrain,” the “midbrain,” and the “forebrain.” The hindbrain, at the base of our brains, is the first region to form, and it controls our most essential non-conscious functions like breathing and blood flow. It’s also the area associated with our most primitive and basic survival reactions, like “fight or flight.” The hindbrain is often called the “reptilian brain” because its essential structure is shared with creatures that appeared far before humans on the evolutionary journey. The midbrain, the structure that develops next, is largely responsible for processing sensory information from the environment around us. And finally, the “forebrain,” or cortex, functions to make sense of our world, along with all the reasoning, judgment, planning, and decision-making that support our daily lives. The forebrain marks one of the most distinctive differences between human intelligence and the more basic instinctive behavior of non-human creatures. The brain develops much like we do as we grow from infant to adult, from very basic functions that support life to seeing and exploring our world and finally to making intelligent decisions based on logic, reason, and foresight.


Much of this early structural development takes place in our mother’s womb, where the basic building blocks of our brains are assembled. However, some of the connections that wire these building blocks together, allowing parts of our brains to talk to each other, form much later in life. As we grow prenatally, and then as infants and children, our brains develop very rapidly from the most primitive region (the hindbrain) up to the most advanced higher-thinking region (the forebrain), a region that does not typically mature until early adulthood. Of course, our genes play a pivotal role in this development and provide the blueprint for our individual brain growth. But the way our brains grow also depends greatly on our interaction with our environment, as well as our basic physical health. Even before being born, as prenatal babies, we try to make sense of our surroundings. This is the time in our development when most of the architecture associated with the hindbrain – the most primitive part of our brains – grows and matures. Even though the brain develops from the hindbrain up, the small brains of newborns actually contain all the neurons they will ever need; in fact, many more than they will need. Neurons are nerve cells that form the essential communication pathways of our brain and nervous system. In a newborn baby’s brain, they are like an incredibly complex network, similar to 100 billion fiber optic cables – all hooked up and ready to go, but not yet activated. As a newborn begins to experience her new world, her mother’s scent, the taste of breastmilk, the warmth and comfort of touch, and her mother’s voice, she starts the journey of creating neural pathways in her brain. These pathways are connected by synapses that act like tiny bridges to allow electrical impulses to travel from one neuron to the next at lightning speed.
For the young brain, especially in the first 3 years of life, the numbers of these synaptic connections quickly run into the billions – often at a rate of a thousand new connections a second at the peak of development. As babies and young children learn and make sense of their world, the experiences that are repeated develop reinforced neural pathways like well-worn footpaths, while those that are not used quite so regularly are “pruned” away by an army of roaming immune cells called microglia. This is a very natural, and indeed critical, process in the young brain and is a normal biological way of cleaning up the unused branches of an otherwise very complex and crowded neural grid. Such pruning is absolutely essential. In fact, a deficiency in pruning capability has recently been linked to autism, schizophrenia, and obsessive-compulsive disorders. For children who enjoy a safe, comfortable, and stimulating environment, the brain develops rapidly, expanding neural and synaptic pathways upward from the most basic reactive “fight-or-flight” hindbrain to the midbrain and forebrain. This is how the brain’s “architecture” forms. One of the most common ways this is accomplished is through a process called “serve and return.” As the baby or child interacts with the parent or others in its environment, the baby not only begins to mimic the parent, but it develops neural networks based on what it learns about healthy human social interaction. An overly simplified and blunt analogy would be a robot or computer that had only a few inherent (genetic) properties or instructions but no other information about the strange new world around it. The robot’s task would be to piece together clues in order to wire its own brain – in effect, creating an entire operating system based on what it learned from healthy human interaction; this is the process of “serve and return.” Simpler operating rules would need to be hardwired first, to lay the foundation for more complex rules and understanding. If, however, children experience a significant amount of chronic toxic stress during their very early years, the overtasking of their primitive fight-or-flight stress response system can partially inhibit the natural development toward higher-brain function. To varying degrees, the Verdingkinder surely would have experienced this kind of toxic stress. Whether we experience short-term stress, like being startled, or medium-duration stress, perhaps like dealing with a family member who has been injured in a car crash, or even longer forms of chronic stress like child abuse, the reactions in the human body are, for the most part, extraordinarily similar. When the body experiences stress as seen through sight, hearing, or other senses, the primitive hindbrain’s amygdala interprets it as a threat and broadcasts a “thought-free” alarm, designed to save the life of the brain’s owner. Like a cat jumping at an unexpected noise – it’s the brain’s internal primitive core moving us out of harm’s way without us having to think about it. The amygdala takes charge of this immediate action and provides a signal that results in a lightning-fast cascade of hormonal responses in the body. In such cases, the adrenal glands are instantly recruited to release the hormone adrenaline and, from their outer surface, cortisol, which in part readies the body to be better able to cope with injury. Together, these two hormones act like army generals, commanding the body’s various defenses to immediately take up their battle stations.
The heart and lungs increase their activity, heart rate and blood pressure climb to ready the body for vigorous action, small airways open wider in the lungs, breathing quickens, blood flow is diverted away from non-essential activities like digestion and toward the major muscle groups, pupils dilate and range of vision narrows, capacity for blood-clotting increases to preserve blood if injured, sugar is released for muscle fuel, and all muscles tighten. All of this happens in the blink of an eye – in fact, it all takes place before your eyes or other conscious senses even have time to process what’s happening. While the stress response is critical for survival, too much of it can be damaging. Abused children, like the Verdingkinder and residential school children, can live in an elevated stress condition for months or years. With stress, epinephrine and norepinephrine (catecholamines) increase the child’s arousal levels during the fight-or-flight response. During this state, all non-essential sensory information is discarded, and the child becomes hypervigilant to what the brain perceives is an imminent threat to survival. When this state becomes chronic, the brain can become perpetually overreactive, resulting in increased startling, jumpiness, and poor mood regulation. An elevated long-term stress response can eventually result in damaged blood vessels, high blood pressure, muscle atrophy, increased body fat, poor lifestyle choices, and a lack of restful sleep, which in itself can lead to a host of additional corollary problems. Interestingly, research shows that even if a child is freed from their stressful environment, they often develop a new baseline for stress that is much
more elevated than that of other children who never experienced traumatic stress. From a physiological perspective, formerly abused or neglected children often live in a state of constant fear and hypervigilance, always looking out for threats. The capacity for changing the structures and neural pathways of the brain is called “plasticity,” and in the very young child’s brain, the plasticity is extraordinary, meaning bad experiences can quite easily alter the way the brain forms and functions. Indeed, there are periods during the development of a baby or child’s brain when specific neural pathways are forming, and exposure to toxic stress during these “sensitive” periods can have near-permanent negative consequences. The Center on the Developing Child at Harvard University, which seeks to understand the developing brain of children, points out that just as the architecture of young brains can be positively (and normally) altered through life experiences, so too can the brain’s architectural development be impeded. Even though a child starts out with a genetic blueprint of how to build the brain’s architecture, abuse, neglect, and other damaging experiences during the highly sensitive periods of brain development can actually affect this blueprint. Imagine how relatively easy it might be to ask an architect or home builder to make changes to a building before, or even during, a particular phase of construction, versus asking just after the building has been fully completed. As Dr. Charles Nelson of Harvard Medical School describes it, the difference between making changes during adulthood versus childhood is like trying to push open a thousand-pound door instead of having the door swing open with the touch of a finger. This means that what happens during the first few years of life may well set the stage for near-permanent brain architecture.
Added to this, diagnostic images of neurons in the brain’s cortex and hippocampus show that individuals exposed to toxic stress have far fewer neural branches and connections in the part of the brain responsible for higher reasoning, memory, and rational thought. This relentless activation of the more primitive amygdala means not only that children in chronic stress environments may suffer from cognitive dysfunction in the areas of memory but that they may well be quick to behave irrationally or emotionally because of their hyperactive and hyperdeveloped fight-or-flight emotional amygdala region. Neuroscientists have characterized this as “excessive response to minor triggers,” or in simpler street terms – overreaction. This is because chronic stress does not permit the child’s brain enough time to allow the frontal cortex to learn from experiences and develop rational coping mechanisms. Often these challenges to brain development can even be seen in the physical brain structures themselves. Just over a third of children who are physically abused show abnormalities in the emotional or limbic parts of their brain – this rises to nearly half of those who were sexually abused, and nearly all of the children when more than one type of abuse was experienced. The hippocampus too, which is responsible for memory cataloguing and retrieval, is very susceptible to stress hormones, and when children are exposed to conditions of chronic stress, the onslaught of these hormones can actually reduce the size of the hippocampus. The brain is so busy reacting to emotional stress that the hippocampus’ main job of systematically organizing experience is reduced from full-time to part-time work. Other physical effects
include less electrical activity in the left brain hemisphere (responsible for language and logic) when measured by electroencephalogram (EEG), which is thought to mean that the brains of abused children may favor the right brain (responsible for spatial orientation, music, and facial recognition). Sadly, abused children often show evidence of tiny seizures throughout their brains, similar to what scientists might see in epileptics. The functional deficiencies may include a predisposition to depression and also problems with memory. As well, the corpus callosum – the main structure bridging the two hemispheres of the brain, responsible for communication between the left and right brain – can be significantly reduced in size in children who have experienced abuse. Indeed, using MRIs, researchers discovered that in abused boys, the corpus callosum was up to 43% smaller than in a non-abused child, and in girls up to 30% smaller. The researchers hypothesized that the smaller corpus callosum could lead to temperamental mood swings and changes in personality. Not only do we see physical changes to the brain, cognitive dysfunction, and problems with temperament and behavior, but researchers have also discovered other serious consequences of toxic stress in childhood: a significantly elevated risk of alcohol or drug abuse later in life, depression, and even cardiovascular disease. A study of adults who had experienced physical abuse as children showed that nearly half of them had three or more psychiatric disorders and three-quarters of them had at least one psychiatric disorder. For abused children whose childhood was full of chronic stress, the challenge will be overcoming the various possible brain architecture issues that we cannot readily see from the outside.
While many of these will manifest as cognitive, behavioral, or even physical problems in adulthood, researchers have just begun to discover that some of the effects of trauma experienced in childhood can even be carried forward to future generations. In a recent study, scientists found that the children of grown Verdingkinder experienced more physical abuse and childhood trauma than non-Verdingkinder control groups. This was not true only of the children of Verdingkinder: the offspring of those traumatized by Cambodia’s Khmer Rouge and the offspring of children who survived the Rwandan genocide also showed signs of anxiety, depression, and behavioral disorders. Today, the nature of global conflict puts young children directly in the crosshairs. In Syria and Iraq, a 3-year-old (whose brain is developing at top speed and who is immensely susceptible to stress) knows nothing but a world of war. In the Democratic Republic of Congo (DRC), in the heart of Africa, an entire generation has grown up knowing nothing other than life-or-death conflict. The war in the DRC, which has been described as the world’s deadliest war since World War II and the world’s most unrecognized and overlooked conflict, has resulted in the death of some 5.4 million civilians – the majority being women and children. Today, 7.5 million children (3.9 million of them girls) are no longer in school. Such are the downstream effects of war: the millions of children in places like the DRC and Syria who are dragged from war-torn regions by their families and into refugee camps are susceptible to lifelong behavioral challenges. Today, researchers studying psychological resilience have developed a host of tools specifically designed to measure distress, grief, and fear in children who
have directly experienced war. According to UNICEF, one in ten children grows up in countries or regions at war, meaning 230 million children are directly affected by conflict. The effects of conflict and the rehabilitation of future generations have often been discussed using macro terms like nation building, capacity building, democracy, rule of law, and other modes of civil society. Only recently have we begun to measure the human capital costs of violence by looking at the various ways war impacts the minds and bodies of the most innocent and vulnerable members of our world – our children – and the potential long-term fallout on this young generation, their future households, and their potential. These are the “micro-level” effects of violent conflict. In a study that followed children who had survived “city-level destruction” during World War II, researchers found significant long-lasting damage. Some 60 years after the war, the findings indicated that those who had experienced war directly as children had lower education levels, were shorter in height, had significantly lower job income, and suffered from poorer health. We know now that war, as the ultimate stress, can seriously affect a child’s young brain – and specifically, the brain’s architectural development. According to UNICEF, some 3.7 million children have now been born into the Syrian civil war. These young children know of no other world than one engulfed in war. Any child younger than 5 years old in Syria has never experienced a day of peace – and has seen only a war where no tactic is too inhumane and no target is off limits, including classrooms and playgrounds. A total of 8.4 million Syrian children are now deeply impacted by the conflict, either inside Syria or in neighboring nations, where children make up half of all refugees. The total number of children living through war and the threat of death now constitutes some 80% of Syria’s next adult generation.
The destruction and closure of 6000 schools in Syria has resulted in children as young as 3 years old being forced into the workforce, with more than seven million children now living in poverty. Marcia Brophy, conducting the largest study of its kind on Syria’s children, described the situation as a “terrifying mental health crisis.” Particularly hard hit are those things that all children hold dear as mental sanctuaries – the love and comfort of a family and the safety of a home. Yet two-thirds of children in Syria have either experienced the death of a loved one or have had their house bombed, shattering these safe havens. Some 80% of children have become more aggressive; nearly 75% have started or increased bed-wetting, have suicidal thoughts, or are engaging in self-harm; and half of adults said they’ve seen children lose the ability to speak or develop speech impediments – all indicators of PTSD. The study suggests that one-quarter of the nearly three million children born into war are now at serious risk of developing a mental health disorder. As lead researcher Brophy describes it, the children are in a constant state of “fight or flight,” and the ensuing toxic stress can result in serious lifelong medical issues. While the long-term damage and cognitive impairment brought on by child abuse and war are truly horrific, these are not the only avenues for harming young minds. Indeed, one of the most fascinating frontiers of study deals with the long-term damage that screen time may have on young rapidly developing brains. For young
children, the brain processes digital screens as a mad bombardment of moving colors and sounds, a sensory onslaught that can leave their brains overwhelmed with the fast-paced flow of information. As psychiatrist Victoria Dunkley puts it, the young brain cannot keep up, and the resulting stress response in the child’s brain begins to rob the more sophisticated higher brain functions of the time they need for development. Like other toxic stress situations, the child’s brain is in a state of “fight or flight,” with an elevated cocktail of stress hormones similar to the stress-induced brain development issues we see in abused children. While most – if not all – parents do not want to “abuse” their children with the child’s favorite cartoon or computer games, prolonged exposure to digital screens may have serious developmental consequences. Scientists have now coined the term “Electronic Screen Syndrome” (ESS), an as-yet unrecognized disorder but one that may well help explain some of our most pressing, and quite frankly, overwhelming trends in pediatric brain health. In the past decades, since the growth of personal digital screens, Attention-Deficit/Hyperactivity Disorder (ADHD) and childhood bipolar disorder have risen 800% and 4000%, respectively. According to Dr. Dina Panagiotopoulos, a pediatric endocrinologist, mood-altering antipsychotic drugs are now even being prescribed to toddlers for aggression and other observable behavioral issues. Overall, in the United States, antipsychotic prescriptions have risen more than 100% in the past dozen years, to over 427 million prescriptions in 2013. There are now more prescriptions for antipsychotic drugs each year in America than there are Americans. Canada is no better, with some 58 million prescriptions per year for antipsychotics, which equates to roughly 160,000 prescriptions per day. In the United States, 11% of the population is now on prescription antidepressants.
Today, new antipsychotics have become mainstream. These antipsychotics, called "second generation," were developed in the 1990s and followed the first generation, originally developed in the 1950s, whose users often suffered significant and debilitating side effects. Despite their newness, second-generation antipsychotics are now being prescribed to children for a host of "conduct" (behavioral) issues, ranging from irritability to frustration and anxiety, to sleep issues. These are drugs that in many cases were designed for adults with severe mania – not children in diapers. Despite the fact that the long-term effects of antipsychotics in children have not been adequately studied, doctors continue to prescribe them to crib-age children. The American Academy of Pediatrics, the American Academy of Child and Adolescent Psychiatry, and the American Academy of Neurology do not condone the use of antipsychotics on children younger than 3 years of age. Yet, in the United States, children aged two and younger received 20,000 prescriptions for antipsychotic drugs in 2014, up 50% from a year earlier. In the Canadian province of British Columbia, prescriptions of antipsychotics for children increased 1000% over 10 years. In many cases, parents are never even informed of the potential side effects that antipsychotic drugs may have on their children. Second-generation antipsychotics can cause serious and relentless weight gain, along with associated risks of heart disease, cardiac arrhythmias, and diabetes – particularly among children. Users also run the risk of deterioration in strength and the onset of involuntary movements. Complicating matters, up to 80% of preschoolers taking antipsychotic medication were also receiving additional prescription stimulants or antidepressants. These poly-pharmaceutical cocktails may present dangerously unpredictable side effects. As one can imagine, few studies exist on how such drug concoctions affect very young brains and bodies. To be sure, there are legitimate times when powerful medication may well be justified and effective, as has been seen in treatment-resistant cases of schizophrenia or Tourette syndrome, but widespread off-label prescribing to battle childhood behavioral issues is a highly troubling trend. Like the digital jungle in which our little ones must learn to survive, our inability as a society to grant our children the time and resources needed to unearth the roots of childhood behavioral problems remains a fundamental shortcoming of our modern public health approach. To this point, the Centers for Disease Control and Prevention reports that 40% of children aged 2 to 17 who could have benefited from non-pharmacological treatments (instead of drugs) did not receive them, despite the fact that many such options were available. For those with private insurance, a recent study showed that among children who had received prescriptions for antipsychotic medications, most had not received a proper mental health screening, had not been seen by a psychiatrist, nor had they received any non-pharmaceutical therapy in the year that preceded their prescription. Of course, these sorts of non-pharmaceutical treatments demand extensive family involvement and can be quite time-consuming, involving long commitments and multiple sessions, and in some cases they may be quite expensive and not covered by one's drug and health plans. 
Against treatments that are costly in terms of both time and money, prescription drugs can be a very attractive quick fix, especially for parents who mean well and want to see fast results for their little loved ones. Then there's the burden on doctors and healthcare professionals. With up to 20% of all children today having issues that could be professionally diagnosed as mental illness, the burden that time-consuming non-pharmaceutical courses of treatment place on general practitioners and pediatricians can be overwhelming. In America today, 14% of boys and 6% of girls aged 5 to 17 have been diagnosed with ADHD – a tremendous number of cases for resource-intensive non-pharmaceutical treatments. All the more interesting, the issue of over-medicating our kids may actually be more of a North American problem. Western Europe simply does not have the same numbers of children on antipsychotics. Indeed, per capita, the United States has one to three times as many kids on antipsychotic medications as Western European nations. As reported by the British Medical Journal, the United Kingdom has one of the lowest rates of ADHD prescriptions in the Western world – one-tenth the rate of the United States and one-fifth that of Germany. It has not escaped scientists that the dramatic rise in ADHD has directly paralleled the rise in time that young children spend in front of digital screens, be they televisions, computers, or mobile devices. Today, children spend nearly 7 hours a day planted in front of a digital screen. Allowing for 10 hours of sleep and time for meals, school, driving to and from activities, as well as hair and teeth brushing, that is a massive percentage of a child's waking hours spent sitting and staring at a flashing screen.


If you watch a typical children's show today – even an educational one – you are apt to see a series of short flashy clips about a given subject. Often, the imagery is strung together in a continuous attention-capturing stream with multiple camera angles, but without a storyline that unfolds at anything resembling real-world pace. For example, suppose it's an educational video with accompanying music on the subject of taking a city bus. It might show someone getting on a bus, paying the driver, a few seconds of riding on the bus, and then disembarking. It would be rare for the program to discuss how one feels walking onto the bus, all the strange things you may see, how to pick a seat, respect for others, and how to know when to get off the bus. Rarer still would be a segment paced slowly enough to resemble real time. This sort of programming would fail miserably – it would be too boring and too slow. Yet, it would be at the pace that a very young child could understand, absorb, and consider, all without undue stress. Dr. Rob tried this on one youngster – a 3-year-old family member who has no known attention issues. He asked if she wanted to watch Mister Rogers' Neighborhood instead of her usual educational program, a show full of flashing cartoons, funny voices, and animal sing-alongs. On this particular episode, Mister Rogers takes viewers on a tour of an orange juice factory. With his characteristic lack of inflection and his slow and measured walk, Mister Rogers calmly points out that he has never seen this many oranges (a quiet exclamation). After a few minutes, the expression on the 3-year-old was the same as if he had asked her to watch a houseplant. She slouched back in her chair, her eyelids heavy – finally declaring, "This is so BOR-ing!" Why was it that a grown adult found the Mister Rogers episode so delightfully refreshing and wonderful compared to typical kids' programming today, and this little 3-year-old most certainly did not? The answer has to do with the expectations each of us has, or more specifically, our brains have, when it comes to screen time and Electronic Screen Syndrome. When a young child is exposed to flashy digital screens, particularly when the screen is interactive (like a smartphone, tablet, or gaming device), the young brain gets a good hit from our old friend dopamine. Indeed, the same reward circuitry that plays its part in our cravings for sugar and fat and drugs also plays a role in screen time for children. For the 3-year-old of today, Mister Rogers' calm real-time storyline of the world flies under the threshold for activating the emotional centers of the young child's brain, resulting in a rather dismal reward response. No dopamine, no joy. It's not only the busy screen that kids crave; it's also the pace of the programming. Children who have grown accustomed to the digital age expect to have their attention held by changing scenes or interactive games. This continuous novelty provides the reward that the brain finds so utterly irresistible. The problem, of course, is that the real world does not move this way, and as such, the capacity of a young child's brain to thrive in a Mister Rogers type of world becomes strained. The real world now is simply too slow and too boring and cannot keep the attention of the child. What frustrates many parents of ADHD children is trying to bring them into focus during daily activities while reining in bouts of hyperactivity, only to see the
chaos give way to eerie silence once the same jumping-bean child sits in front of a digital screen. Some parents may well argue that the child cannot possibly have ADHD if they are able to sit so quietly for any length of time. But the screen is providing a dopamine kick. It may come as no surprise that drugs like Ritalin, which are used to treat ADHD symptoms, act to raise dopamine levels in the child's brain, thereby providing calm. No wonder parents of children who suffer from ADHD find temporary and quiet reprieve when their child sits in front of a screen. Are screens and gaming therefore linked directly to the incredible rise in ADHD diagnoses and prescription antipsychotics? It may be too early to tell definitively. Scientists do know that screen time has a detrimental effect on the development of children's brains, not unlike the brains of children who have experienced chronic stress in their lives and show deficits in the development of their brain architecture. The fight-or-flight response that can arise from poor environments, be it through very serious adversity like abuse or war, or through less sinister mediums like screen time, means that some (perhaps many) children are now unable to cope in real-world environments that require emotional and attentional regulation. For many well-meaning parents, the inside joke is that if you want to actually eat food at a restaurant, have a moment to grab a shower, or attend to a pot on the stove, screens can be a wonderful tool, like momentarily freezing the swirling chaos of our daily domestic tornados. 
In a struggle to be all things perfect – to be wonderfully attentive parents who nurture their children's growth every second of the day, who provide beautifully nutritious and multi-colored home-cooked meals, who keep the house spotless to foster uncluttered minds, and who look after their own health with rigorous fitness regimes of yoga, diet, and 8 hours of sleep – screens become a necessary (nay, critical) tool of parenthood survival. All parents get it. The frenetic pace of life crashes headlong into the lofty life prescriptions of wise and well-meaning health professionals, whose calmly worded advice is meant to effortlessly guide us to better sleep, nutrition, exercise, and meditation – each and every day. In reality, all of this is assumed to get done while toddlers tug desperately at each of our arms. Without a day nanny, a night nanny, and perhaps a chef, how are a set of working parents, who barely have time to brush their own teeth or shower, supposed to perform this wonderful self-fulfilling transformation? There are no easy answers, but the most obvious and foremost solution is to reduce casual screen time when other avenues for positive distraction are at hand. But here's the rub: like all decisions that require foresight, we tend to evaluate immediate desires and needs against long-term aspirations. For parents, this means deciding whether to simply allow a little one some screen time so the parents can do some chores, make a phone call, or maybe even hug each other for the first time that day, or whether to forsake those "lofty" ambitions to spend time constructing one-on-one outlets for our children. Many parents struggle with the good-parent versus bad-parent role – and there's plenty of guilt to fill our guilt reservoir, courtesy of pie-baking and craft-making social media friends whose hashtag worlds document each delicate, smiling, and nurturing step they take through parenthood. But how
real is this imagery? In our modern world, do many simply lack the creativity necessary to provide more productive and less harmful activities for our children? After all, it was not all that long ago that only a select few households in America had televisions, and children had to find ways to entertain themselves. Much of our battle, now and in the near future, will be reconciling the growing disparity between how rapidly technology is changing and integrating with our everyday lives and how desperately slow our brains and bodies are to adapt to that change. Most of us are familiar with the terms "Generation X" or "Generation Y" (Millennials) and have a general understanding that these labels have become pet titles for defining the span of years within which a person was born. Folks of Generation X were born between 1965 and 1976, and those of Generation Y were born between 1977 and 1995. But it's not simply the years in which you were born that define each generation; it's how being born at that particular time is apt to shape an individual's worldview, opinions, habits, preferences, and values, not just during youth but for the rest of life. The Center for Generational Kinetics in Austin, Texas, studies the characteristics and traits of each generation and suggests that parenting style, technology, and economics play the biggest roles in defining generational traits. In so doing, they point out that it's altogether too simplistic to use just a year of birth to define someone while ignoring where that person was born, what the conditions were like in the individual's early years, and what the opportunities were like to enjoy a happy and socially engaging adolescence and early adulthood. The Center uses the example of a young person born in Greece during extraordinarily high unemployment and few opportunities versus someone born in the same year in an area where good jobs were plentiful and worries were few. 
We know that varying degrees of hardship and stress can not only affect the social and cultural environment, but may also affect the development of the brain and brain health. Of the forces that influence our generational traits, technology is certainly the one changing most rapidly. Today's generation, Generation Z (also called the iGeneration), comprises those born from 1996 to the present day. The Center for Generational Kinetics draws the line at this year because anyone born after 1996 would not remember the events of 9/11, a day that defined a new global shift in politics, economy, and worldview. Dr. Larry Rosen, a professor and past chair at California State University, has been studying how technology affects our brains, having examined nearly 30,000 people in 22 countries. Rosen suggests that the impact of technology on young brains is profound, and because technology is changing so rapidly, the years that define a generation are shortening. While typical thinking holds that about 20 years defines a "generation," we now see fundamental changes occurring about every 10 years, and as Rosen argues, the shift is entirely driven by technology. Bizarrely, this means a toddler may be of a different generation than a ten-year-old. As Rosen points out, time is compressing; it took radio 40 years to reach 50 million users, cell phones less than 15 years, and YouTube about 1 year. The emerging generation of today is Generation C. As Rosen defines it: collaborative, communicative, connected, and creative. This current generation does
not know a world without handheld devices and wearable technology at their fingertips. If you've ever watched a toddler trying to expand a real photograph or a picture in a magazine as they would on a touchscreen, you may have thought it cute. This is where the generations collide. Where previous generations see tech gadgets as simply that – gadgets – Generation C views technological devices as essential instruments through which they understand the world and communicate with others. One of the characteristics of emerging tech-savvy generations is their belief that their technological literacy provides them with dexterous superpowers for moving from application to application, from screen to screen, and from device to device. Kids today are amazing at navigating concurrently between a smartphone, a computer, maybe an online conversation, music, and homework. Compare that with individuals from previous generations, who often use devices in a slow and frustrating manner, sometimes colored with the odd derogatory utterance about the cumbersome response time. It's no surprise, then, that younger folks feel they are better and faster at multitasking. But as Rosen explains, multitasking is a myth; what's really occurring is that the brains of so-called multitaskers are switching completely from one task to another, and back again – what he calls "continuous partial attention" or "task switching." Rosen adds that what kids are actually good at is using the small bits of "slack time" that exist between the switching of tasks. When it comes to modern technology, younger generations have little patience for unused slack time; 2 seconds is the average maximum length of time that a teenager will wait for a webpage to open before finding something else to do. And even during this "slack time," the teenager will usually try to do something else online, like send a text message, Rosen suggests. 
Two big problems unfold in all of this: first, kids today are developing an intolerance for delay, and second, an intolerance for viewing or understanding anything in any real depth or context. The fact that kids try to juggle so many balls at once, and the belief that they can switch from one ball to the other with ease, creates a situation in which the young brain (once again) becomes accustomed to a fast-paced "artificial" world, only to find the "real" world too slow. Like too much screen time, this can manifest as attentional disorders and behavioral regulation issues. It doesn't take too much imagination to understand the effect that heavy Internet tasking has on the young brain. Excessive and continuous seeking of reward, with little patience for delay, creates a stream of gratification that brings pleasure, and by way of that, dopamine. Of course, this feels wonderful to the brain, and so the young person seeks more. As Rosen explains, gamers are particularly vulnerable to the dopamine reward circuitry and exhibit the same traits as other addicts. This contrasts with obsession, which is what many of us have with our smartphones. It's not only the dopamine that gives us a pleasant feeling when we check our phones; it's the escalating anxiety that we feel the longer we go without checking – anxiety that we can alleviate simply by picking up our devices. For Generation C, technology does not merely provide tools to understand what's going on in the real world; technology is the real world. The romanticized view of Gen C is that they see their world as a bounty of limitless possibilities. Come up
with the right video and emerge overnight as a YouTube sensation while you sleep – be a millionaire by the end of the week, create a continuous moment-by-moment photo stream about every second of your day, or broadcast your views and opinions instantly to anyone in the vast global audience who can hear your voice amidst the white noise. For a young iGeneration, Net Gen, or newer Gen C adolescent, one of the most significant challenges is reconciling the rather slow and non-digitized environment of the classroom – where, ostensibly, they are supposed to learn about the world around them. Yet, by nearly every measure, conventional classroom settings fly in the face of the Gen C world. Classrooms are often rigidly organized, with hardback chairs all facing the same direction, as opposed to the type of cool and free-form creative commons that companies like Google have long embraced. There are strict rules in classrooms about when you can speak and when you cannot, and those rules often include raising your hand and waiting your turn – maybe for long periods of time while being overlooked – a near torturous condition for a Gen C who lives in a world where human interaction is instantaneous and continuous. Classrooms also typically have one single authority figure at the front of the class who most often delivers information in a didactic manner, rather than students having a limitless bounty of sources to skim and survey to their heart's content. Finally, the student has to focus on one person in real time, without continuous social media alerts, popups, and sounds – which to a teenager would feel like a significant withdrawal from such brain-jacking dopamine boosts. Many of us may well see the conventional classroom as a good thing, a blessing – a much-needed oasis of common sense against the disorganized and anarchically unfiltered world of technology that is so apparently corrupting the minds, behavior, and potential of our young ones. There is some logic in this. 
For all of us who still derive great pleasure from the look, feel, and smell of a book, and who enjoy the romantic notion of flipping through its pages on a porch swing without the sounds of little beeps and bells, the merits of old-fashioned teaching are an easy sell. However, herein lies the issue. If our schools are in the business of enriching and opening young minds, to what extent do we force them to look over the top of their screens, to gaze upon the natural world and its vast history, in an effort to understand what wondrous sets of events occurred (and continue to occur) in order for them to be able to sit comfortably and stare at their bleeping screens? None of us – particularly Gen X and older – wishes to concede defeat to screens and technology. There is the argument that screens are also making our kids softer – not just around the midsection but in their brain health as well, specifically in terms of being resilient. The Verdingkinder suffered greatly, and their brains and behavior show evidence of that early architectural damage. As is characteristic, individuals with a great deal of early childhood trauma can exhibit smaller amygdala and hippocampal volumes, affecting memory and learning. One of the side effects of a continuous fight-or-flight world is that later in life, individuals exposed to childhood stress often exhibit a tendency to emotionally overreact to perceived threats. The normal patterns of resilience that let them absorb stress and deal with it in a safe way
are blunted by having experienced chronic toxic stress during the time when "higher" levels of the brain should have been taking a lead role. But sheltering our kids from threats is not the answer either. In her book I Find That Offensive!, Claire Fox argues that the opposite tack – protecting our kids against all possible dangers in life, big and small – may be backfiring. A libertarian, Fox argues, politically and socially, that we are being grossly overprotective as a society, and we are paying the price. Like mad scientists, our self-perceived parental do-gooding – like the banning of footballs, soccer balls, playing tag, and cartwheels at New York's Weber Middle School in order to prevent all potential playground injury – is creating armies of tiny emotional monsters. At a National Union of Students event in the West Midlands, United Kingdom, organizers tweeted that clapping for moments you like on stage (a rather typical custom for anyone who has attended a performance in the modern era) needed to be replaced by "jazz hands," so as not to create anxiety. Akin to a good immunization, exposure to life's small ups and downs can help build a healthy dose of mental resilience for when things really don't go as planned. Generation Snowflake is a rather comical, albeit slightly derogatory, term that has been not-so-lovingly cast upon the new young adult (college-aged) generation, who have been accused of lacking the mental fortitude to healthily navigate real-life hurdles. The phrase, which became one of the top memes of the past year and one of Collins Dictionary's top ten new words of 2016, refers to the outwardly tough and morally conscious but inwardly fragile generation of young adults who are quick to take offense, who lack typical levels of emotional resilience, who grew up without authentic hardship (like war), and who instead were sheltered from any possible threat – in some cases, even cartwheels and tag. 
While the average age of an American soldier in World War II was 26, and in Vietnam, 22, universities today find themselves creating so-called safe spaces to coddle similarly aged adults, where educators hand out plush toys and coloring books so that the adults under their tutelage can, hopefully, cope with life's ups and downs. Ironically, when we shelter our children from all threats, we risk creating an environment in which everything (from baseball to cartwheels) is a threat, of which our little ones are to be very wary. In turn, the children see everything as an affront, including events or elements that shouldn't be. As we now know, living in a world of constant and continuous threat creates stress – toxic stress – that can inhibit the development of brain architecture and have downstream effects, including hypersensitivity to anything deemed to challenge them. Generation Snowflake, aside from being a rather demeaning meme, is based on a generalization that college-aged adults are quick to anger and have a very low tolerance for any opinions that are not their own. Ironically, the theme of political correctness – a core mantra for Generation Snowflake – is actually anti-free-speech. Many would fail to grasp the strong undertow of authority in that phrase. Intellectually speaking, it says that "my view is correct and yours is not." So-called Snowflakes who pride themselves on enlightened social mores by citing "correctness" are in fact promoting a politically primitive mindset: their anti-"Big Brother" platform being bizarrely and confusingly Orwellian in nature. It's not surprising
that university campuses have seen students shouting down speakers they don't agree with rather than offering them a platform for critical debate, as was the norm in institutions of higher learning just one generation ago. Even sadder are university administrations that capitulate to the single-lens worldview as a tactic to keep the peace. So-called trigger warnings are now affixed to legal cases at Oxford University's law school if it is deemed that a student might be offended by the content of the case. Indeed, it can be argued that universities, because of their sensitivity to public appearance, enrollment, and the bottom line, have altered course more in one generation than in the past seven centuries. Regardless of how very wrong this trend is in terms of the ultimate function of higher education, the issue with so-called Generation Snowflake is that their resilience to opposing viewpoints seems comparatively low. It's not that they are altogether too tender – indeed, they are tough enough to camp in the rain and cold to rally for a cause – but that they were not raised to adequately distinguish critical threats from not-so-critical threats. A favorite characterization is that Snowflakes are easily offended by nearly anything and everything. One of the most compelling arguments for the Generation Snowflake label is that they have been raised in a world of hypervigilance – not necessarily intentional on their parents' part, but significantly real nonetheless. In a world where one is protected from all things, all things logically become threats. The brain becomes locked in a fight-or-flight response, not unlike those who have suffered real trauma. Emotional overreaction and hypersensitivity, poor memory, and impaired learning can result. Rachel Dove, writing as a Generation Y'er herself and as a Telegraph columnist, argues that despite being the best-fed, safest, and most secure generation of humans in history, her generation is a train wreck of anxiety and depression. 
She argues that it's not that her contemporaries shouldn't be stressed now and again, but that, from her observations, her generation cannot seem to shake free from the terror that envelops them – the same terror that others might be able to reason away. When you can't shake feelings of dread, you often turn to medical advice, and quite often that advice comes in the form of a prescription for an antidepressant. When the bulk of a generation finds themselves prone to oversensitivity and overreaction to environmental circumstance, leading to anxiety and ultimately depression, you get a rapid increase in prescription medications. As Rachel Dove writes, among her 20-something friends, the drug citalopram – a selective serotonin reuptake inhibitor (SSRI) – is a favorite. In the United Kingdom, the rate of antidepressant prescribing has doubled in one decade. In the United States, about one in ten people over the age of 12 is on antidepressant medication. Antidepressant use in the United States has jumped fourfold in one decade, making antidepressants the third most frequently used medication in America. Part of the widespread use of popular drugs like Zoloft (sertraline) is their off-label use for a wide swath of ailments, from bipolar disorder to neuropathic pain, fibromyalgia, and autism. Of course, SSRI antidepressants can be well warranted in many cases. A fascinating study of SSRI prescriptions in the aftermath of the September 11 terrorist attacks in New York City showed that geographic proximity to the Twin
Towers was directly correlated with the percent increase in SSRI prescriptions, with those closest to Ground Zero (within 3 miles) showing the greatest increase, versus those who lived farther away. There is no question that medications do a good job when needed. As psychiatrist Doris Iarovici wrote in the New York Times, antidepressants can actually save lives. Iarovici writes that nearly 25% of new students arriving at counseling centers are on antidepressants, a threefold increase in a decade. Part of the problem, Iarovici argues, is that the cost of non-pharmaceutical treatments can be expensive and may not be covered by insurance. For busy students, drugs are a much more convenient solution. Like many of her peers, Iarovici concludes that we should not dismiss the reality of emotional stress in our "emerging adults," but rather we should focus on building their levels of resilience. From the American College Health Association's National College Health Assessment, we know that of the many depressive feelings students might have, the leading one was "feeling overwhelmed by all you had to do," with 85% of students experiencing this in the preceding 12 months. The data also revealed that nearly half of all students (47%) felt overwhelming anxiety and 6% seriously contemplated suicide. According to the report's authors, depression on campuses has now eclipsed drug and alcohol abuse as the leading health concern. At Queen's University, one of Canada's most prestigious academic institutions, six students died during the 2010–2011 academic year, two from apparent suicide. One of those students was Jack Windeler, a first-year student, who died in March 2010. 
An in-depth investigation by Queen's University's Commission on Mental Health yielded the report, "Student Mental Health and Wellness: Framework and Recommendations for a Comprehensive Strategy." Among the many disturbing findings of the Commission was that 4% of the students surveyed had thought of or considered suicide in the previous term and nearly 10% had considered suicide before that. When asked about stress, 40% reported above-average stress levels and 20% described their stress levels as "tremendous." For 62% of respondents, this stress was characterized as stemming from mental health problems. Interestingly, conventional strategies for mental health challenges in emerging adults really only reach the 20% or so who rise to the surface and are identified as being in need of care. The other 80% of students remain vulnerable to insidious deterioration of mental wellbeing, unnoticed by their peers or professionals. In the aftermath of Jack Windeler's suicide, his family and a team of devotees created his legacy in the form of The Jack Project, now simply "jack.org," a creative and engaging effort to reach out to that other 80% (or 100%, for that matter) of young adults who may find themselves suffering from mental health challenges or on the edge of them. On the heels of the Jack Project, several Ontario universities have taken action to try to alleviate student stress levels by scheduling a "fall break" in addition to the traditional "spring break." In the Queen's University study, over 50% of students said that they feel overwhelmed, and nearly 40% felt so depressed that it affected their ability to function. Feeling overwhelmed to the point that it affects daily tasks and wellbeing may well be characteristic of either too much information going in all at once (information


overload) or an inability to filter the information properly – to separate the truly stressful bits from the not-so-stressful bits – or both. The question that arises amid these emerging challenges is: what has changed? While we know that mental health issues have spiked dramatically for emerging adults, is the increase attributable to our enhanced awareness of these problems? Is the world of young adults now more demanding than it was before? Or, finally, are young adults somehow less resilient? Most certainly, peers and health providers are much more keenly tuned to mental health issues, and in particular to the stigma around mental wellbeing – not only for those diagnosed with a mental health issue but also for those who may be part of the silent majority, who may well be suffering in the shadows, outside the spotlight of health professionals. This is precisely the mission of jack.org. Yet, in addition to the increase in awareness comes the revelation that our world is changing and, like many of the themes in this book, changing at a pace that our human brains, bodies, and social spheres may be ill-equipped to process. We see these changes occur in habits that we know quite well can have profound effects on brain chemistry, architecture, and processing. When the majority of college and university students say they feel overloaded, what contributes to this? Part of the problem seems to be "information overload." It may well be that today's youth have not yet found a way of navigating healthily in the so-called "information age." A study at Baylor University found that the average woman in college spends 10 hours a day on her cellphone; college men came in slightly lower at 8 hours. For the women, social media was by far the greatest use. Combine this with recent studies that show a "significant link" between social media use and feelings of envy and depression.
A 2016 study of young adults showed a direct and linear link between time spent on social media and rates of anxiety and depression. We know that for very young children (approximately under the age of three), screen time can harm brain development. But what about college students who feel anxiety and overload while surfing on their mobile devices and computers – hour upon hour – each day? Unlike screen "time," which measures exposure in terms of duration, for adults, anxiety and depression may arise more from the kind of information they are viewing when online. Several studies in the past few years have, somewhat counterintuitively, disproven the hypothesis that vast amounts of online surfing or gaming lead to depression in adults. Rather, researchers suggest that it's the negative feelings that can arise from the "social" side of the media – in particular, feelings of envy – that remain the catalyst for poor mental wellbeing. A growing theory suggests that the aftershock of social media use may be a function of maladaptation – a communication medium that tugs and torments our innate human quest for social status. The need to understand and express ourselves in terms of our status relative to others around us can be traced back to our evolutionary psychology. As one study puts it, having lots of friends but few "likes" (a common online validation) can play havoc with our survivalist need to have high social value within groups. From an evolutionary perspective, when food, resources, and mates are hard-won luxuries, being irrelevant in a community does not bode well for one's genetic survival.


But what if the alternative is true and the young adult is a social media superstar? Does that not provide the type of life satisfaction that we all desire? The science suggests that, yes, those who use social media to connect with groups of friends, stay in touch with family, or simply increase communication and connection do benefit in terms of mental wellbeing. But those who passively surf social media to view the online personas of others are at risk of feeling inferior and may suffer from low self-esteem and anxiety. The former equates to honest social connecting, or what evolutionary psychologists call "honest signalling" – a type of communication that conveys true meaning to the recipient. When online social media platforms are used like conversational tools, wellbeing can increase. However, there is a risk in social media that plays right to our weakness, and that is the way in which people portray themselves online. In 1959, long before social media consumed our days and our thoughts, Erving Goffman wrote the volume The Presentation of Self in Everyday Life, in which he deduced that during social interactions we make a distinction between two forms of communication: what one "gives" and what one "gives off" – the former being verbal communication and the latter being all of the less obvious non-verbal ways we communicate and convey intent. Considering Goffman's idea of communication, there is a necessary symmetry to human interaction that modern-day social media ignores. Amplify this with, as Goffman suggests, our tendency to favor showcasing things about ourselves that put us in a positive light while shrouding our negative realities, and we see that this is precisely what passive consumers of social media digest on a daily basis. Imagine that you are sitting in an office cubicle on a cold dark winter day while your friends on social media stream photos of their carefree day on their Mediterranean cruise.
Do those types of social interactions make you feel good about yourself? If the friends are very close and you communicate with them a lot, you may indeed feel a sense of happiness for them, but if you are passively viewing their amazing experience, it's more likely that you will infer from the images that their lives are a tad better than yours. Such social comparison is a natural evolutionary trait, and it's the thing that can manifest as envy, anxiety, and perhaps, ultimately, depression. One group of researchers described it as watching only another's "highlight reel," which, among college students, can deflate egos and build envy. But regardless of its envious pitfalls, social media is addictive. When UCLA researchers used fMRI to study the brains of young adults on social media, they found that when the young adults saw "likes" on their posted photos, those likes activated the parts of the brain associated with reward – the same ones that become activated during chocolate eating and sex. Researcher Paul Zak, who wrote the book The Moral Molecule, conducted a rather cheeky test on Fast Company author Adam Penenberg, in which Zak measured Penenberg's blood before and after a 10-minute session on Twitter, noting that Penenberg's oxytocin levels had risen 13.2%. Oxytocin in the brain is generally a wonderful elixir and is very much part of our evolutionary hardwiring logic. It's the hormone that stimulates the start of the birthing process in a woman's body, and it also creates nature's ultimate and most indefatigable bond, that between a mother and her newborn. Indeed, women have about


30% more oxytocin than men. Nicknamed the "cuddle hormone," it's also the hormone that provides feelings of joy when we hug a loved one or play with a puppy. The fact that social media can increase oxytocin feeds the ever-hungry brain and strengthens our reward circuitry. Using blood volume pulse, skin conductance, an electroencephalogram, an electromyograph, respiratory activity, and pupil dilation, researchers found that a mere 3-minute viewing of one's own Facebook page can generate intense positive arousal – what researchers call a "core flow state." More importantly, the higher the number of likes, the more the reward circuitry got flowing among social media users. In one UCLA study, subjects were more apt to like a photo when it already had a high number of online approvals, suggesting that what others thought had a significant bearing on how much one liked a photo. When we publish a photo online, we are setting up an expectation that our brain will interpret as a "reward" – with the associated hit of dopamine, of course. The more times we get "likes" for our photos or posts, the more dopamine we release – and seek. It's what scientists call a "conditioned stimulus," a dopamine release based on an anticipated future reward. If you're a coffee lover and you relish your morning coffee, it's not the pleasure of drinking the coffee that releases dopamine; rather, the dopamine is responsible for your walking over to get that coffee; it activates and motivates you to seek an anticipated reward. As such, rather than being the chemical of pleasure, it is more accurate to think of dopamine as the "I want that reward" chemical. And when it comes to social media, what is particularly rewarding for humans in terms of dopamine release is talking about themselves, as opposed to talking about others, which releases little dopamine by comparison.
A study published in the Proceedings of the National Academy of Sciences suggests that social media offers humans the ultimate forum for talking about ourselves. In "normal" person-to-person conversations, we typically talk about ourselves about 40% of the time. With social media, however, we tend to talk about ourselves about 80% of the time, meaning our neurochemistry actually prefers social media to real-world conversation. Moreover, in traditional human interaction, we aren't usually conversing with hundreds of people at the same time. This type of "social crowding" can exacerbate the effect of self-promotion and self-talk. A study demonstrated that students tended to use more first-person descriptions when thinking and speaking of themselves within large groups than within small groups. This points not so much to an obsession with oneself as to an obsession with making oneself stand out when there is a risk of being in the social shadow of others. With Twitter managing a volume of around 6000 tweets per second, one can well imagine that not all of those tweets contain fascinating, earth-shaking facts and commentary. Research at Rutgers University showed that about 80% of social media posters are "Meformers," while the other 20% are "Informers." As the labels suggest, Meformers are primarily focused on posting photos and tweets about themselves and their hour-by-hour goings-on – no matter how trivial – with 40% of their messages being of the "me doing this now" variety. Informers, alternatively, tend to post about events, news, or social activism. Interestingly, according to the research,


Informers enjoy larger and more interactive online social networks than their Meformer counterparts. Obviously, there must be some sort of reward feedback for Meformers, or they wouldn't take the time and effort to stage and polish their online personalities. There is, of course, the potential for online fame, like the numerous creative bloggers, photographers, or celebrities who have lived the rags-to-riches fairy tale, with millions upon millions of followers fanatically consuming their every passing thought. From a logical standpoint, being at the top of the social heap in terms of reputation, fame, and desirability certainly plays to our evolutionary instincts. This is "reputation management," and it can be seen in the fiercely stratified royal courts of medieval England and France and, long before that, in the ancient courts of Egypt, some 5000 years ago. The struggle for social ascension is an ancient hardwired drive and is evident throughout the animal kingdom. So, why should social media, with its near limitless opportunities and instantaneous gratifications, not be the most perfect medium for social survival? The "selfie," which needs no introduction, is also the ideal coming-of-age tool. Combine this with what the famous psychologist Erik Erikson called the stage of "Identity vs. Role Confusion," during which teens and emerging adults experiment vastly with who they are and what their position in the world should be, and you have the recipe for online irrationality. Not only young people, but especially young people, ravenous for social impact, see themselves as the potential stars in their own hit reality series. Armed with the understanding that courting controversy in words and images also courts online views and potential fame (and fortune), the selfie generation pushes the bounds of normal discretion in order to make an impact.
For young people, like generations before them, this routinely comes in the form of rebellion, with the online posting of sexually provocative selfies and phrases. One in five teens has sent or posted nude or semi-nude photos of themselves online, with nearly half of all teens saying that they send and receive sexually suggestive messages. About one in ten of these adolescents sends these images and posts to people they do not know. But it's not only teens and emerging adults who post selfies. A growing percentage of adults, including those in married or other long-term relationships, routinely post suggestive or boastful photos of themselves. Far from adolescent identity struggles, these more mature posters have been linked to certain personality traits, like excessive narcissism. For grownups, this narcissism may not always come from photographing their midriffs in the bathroom but from other measures of social status, such as amazing family vacations, luxurious home-life activities, leisure activities, or even sharing experiences that others might relate to and like. Another somewhat disturbing trend is using one's children or others to increase validation and popularity, in the form of "a photo of my little one enjoying ice cream during our third family vacation this year." While so-called friends clamor over each other to post their loving adorations and "likes" of the cute child, it's obvious, for those who care to think about it, that the motivation for the original post had nothing to do with childhood cuteness.


For adults, many of whom have busy families and households, rampant social media posting is not so much a way to invent and discover themselves as a way to feed their hardwired need to advertise their status within their chosen social circles. In fact, status – as a means for survival in group settings – is one of the most powerful motivating forces in our evolutionary story, and social media plays directly into it. Indeed, new neuroscience research at the National Institute of Mental Health in Maryland found that the part of the brain that values cold hard cash – the striatum – is equally, if not more, active in experiments that involve social status. The striatum is responsible for converting goals into movement, in addition to its role in our reward circuitry. In Parkinson's disease, it is the striatum and its dopamine receptors that are affected. Not only do our brains motivate us to achieve higher social status, but we are also incredibly tuned to the status of others. Studying the brain's reaction to status, researchers discovered that the brain is highly sensitive to changes in the status of others and can sometimes show immense stress when status rankings seem unstable. The studies share a common critical finding, one that may well drive to the heart of social media use: it is not inherently satisfying enough to enjoy a comfortable life and status; rather, it is the relative status between social media users (our "friends") that really feeds our brain. This means that the underlying messages embedded in online posts are critical, if not the real reason posters post. The "hidden brag" in the post is the real message, and it's the one that spurs our brains into high gear and drives us to compete for online recognition and validation. When we do get those likes and positive comments, our forebrains – specifically the prefrontal cortex – light up under fMRI because of the role that this part of the brain plays in understanding self-relevance.
The prefrontal cortex, as you will recall, is involved in understanding and reasoning about order and how this should influence our judgment and decision-making. Moreover, in the competition for status and attention, increasing the riskiness or suggestive nature of posted images tends to lower the inhibitions of the viewers of those images, potentially contributing to even more provocative or outlandish postings and updates in the future. What social media provides its users is an opportunity to loosen the bonds of societal hierarchies that have, for millennia, tethered us to one social rung or another. Anyone with a handheld device can begin to fashion and mold their persona like clay into whatever they desire it to be, regardless of their true station or fortune. This mad scramble, as real as it is imaginary, has catapulted some to incredible overnight fame, and it is this promise – even in its lesser forms – that fuels our social drive online. But what of the darker side? There is, of course, the risk that heavy social media users become the avatars of their own lives and begin to exist merely for the benefit of their desired online personas. Research comparing the generational differences between Generation X (born roughly 1965–1980) and Millennials (born after 1982) showed considerable differences in attitudes. Despite Millennials seeming, and truly believing themselves, to be far more global, cosmopolitan, and environmentally conscious than other generations, researchers found that Millennials were motivated far more by "money, fame, and image" when compared to their Gen X predecessors, who tended to value "self-acceptance, affiliation, and community." This runs directly


counter to the narrative from Millennials themselves, which, according to recent World Economic Forum data, indicates that Millennials are far more concerned with climate change than with wars, poverty, famine, conflict, or good governance and that they (according to the WEF) "uphold the ideals of global citizenship." In sharp contrast to these glowing accolades, San Diego researchers found that one of the most significant declines between Gen X and Millennials was in "taking action to help the environment." So, why the incredible difference between self-perceived righteousness and reality? Could this self-aggrandized morality actually be an effect of social media, where our "ideal" online selves speak louder than our real flesh-and-blood versions? We know that when we see a picture or comment posted on social media that has garnered a great number of "likes," we tend to want to like it as well, as a tactic for peer acceptance and belonging. It is entirely possible that heavy social media users create like-minded memes as a way of attaching social meaning to their online personas, even when their actual real-life actions do not uphold these beliefs. This "avatar effect" could have significant negative consequences for real-world policy, elections, and activism if attitudes exist only at the level of cyberspace. Huffington Post blogger and physical trainer Erin Gragossian highlights the misleading undertones of social media with respect to personal body image. He cites the droves of online "physique" trends in which men and women, but particularly women, proudly showcase their self-perceived physical "flaws" in an effort to say they are in control and happy, when (as Gragossian argues) the act of posting the photos suggests a need for assurance and validation. Similarly, he points to posts of men who advertise, through suspiciously staged photos, that they are working out at odd hours – very early in the morning or late at night.
Many people do that, but what makes a certain person feel they need to post it? It is not that their workout is unique; it is that they need to be validated. The dark heart of social media is that what underlies the beautifully edited photos of idyllic sun-soaked lives is in fact stress, desperation, and a longing to secure social status. University of British Columbia researchers Paulhus and Williams conjured an ominous label, the Dark Triad – psychopathy, Machiavellianism, and narcissism [1] – a trio of personality traits now being studied among heavy social media users. Psychopathy is characterized by low empathy and impulsivity, narcissism by feelings of superiority and excessive self-worth, and Machiavellianism by an underlying intent to exploit and manipulate. The Dark Triad has gained momentum among researchers as they study the way individuals consciously and sometimes painstakingly construct their online personas. The average 16–24-year-old will take up to 16 minutes to take, retake, and edit a selfie photo and spend more than 5 1/2 hours per week taking these images. And what of the Dark Triad? Some 14% of girls take selfies to "make someone jealous because I look so good" and 15% "to make someone regret ending a relationship with me." Not overly social. For those who are not as malicious in their intent, selfies can be a symptom of loneliness – and not only that, but the shallowness of the relationships formed online can, in turn, create deeper feelings of loneliness and even more frequent selfie-posting. Yet, does this skin-deep online imagery simply feed into a process that our brain is already ideally hardwired for? The online mobile application Tinder, which displays


photos of prospective "dates," allows users to swipe left if they are not interested in a person or right to show interest. If both users swipe right on each other's pictures, an opportunity to connect is made. Such sites are like sugar cubes for our brain, magnifying and accelerating what our brains are designed to do – make a split-second decision as to whether someone is attractive as a potential mate. Indeed, scientists used fMRI data to assess how quickly and reliably our eyes assess not only attractiveness but also suitability for a romantic relationship. The researchers discovered that our medial prefrontal cortex – a part of the forebrain used in making judgments and one of the last to fully develop in emerging adults – is responsible for making near instantaneous assessments of visual attractiveness. To test the actual reliability of our millisecond glances against real-life dating potential, researchers conducted a "speed-dating" event with the same test group. After 5 minutes of live face-to-face conversation, the subjects' initial glances at a person's face in a photo turned out to predict actual real-world dating interest over 60% of the time. If you watch someone using Tinder, they often seem to be swiping (hence, assessing a photo) every second. This is how fast our brains judge attractiveness. If our fingers could swipe faster, or if we could see a large screen with hundreds of faces, perhaps we could speed things up even more to keep up with how fast our brains operate. With 100 million downloads of the Tinder app and a total of 1.4 billion swipes per day, the frenzy to give our hardwired brains what they want is truly incredible.
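Each swipe is, in effect, one head-to-head verdict, and rating systems such as chess's Elo (which Tinder reportedly borrowed to rank "desirability") turn exactly this kind of pairwise outcome into a running score. As a rough illustration only – this is the classic chess formula, not Tinder's proprietary algorithm – the update works like this:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B, given the rating gap (a logistic curve)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32):
    """Return new (r_a, r_b) ratings after one head-to-head outcome.

    The winner gains more points the more surprising the win was,
    so an upset moves both ratings much further than an expected result.
    """
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return (r_a + k * (s_a - e_a),
            r_b + k * ((1.0 - s_a) - (1.0 - e_a)))

# Two evenly matched profiles: each "contest" moves the ratings by K/2
print(elo_update(1000, 1000, a_won=True))  # (1016.0, 984.0)
```

A few thousand such micro-contests per user per day is all a platform needs to sort its members into a fine-grained desirability ladder, which is why split-second swipes carry so much statistical weight.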
According to Tinder’s own data, which is very robust and even uses complex Elo scores (originally used in chess for assessing relative skill) to determine a Tinder user’s “desirability,” approximately 80% of Tinder users are looking for long-term relationships, while the remaining 20% are looking for a much shorter and perhaps raunchier commitment. Given this frivolity toward forming serious connections, one wonders whether the natural selection of Tinder offspring may be oriented toward particular traits that might otherwise have been moderated through real-­ world relationship building? In terms of social media’s impact on brain structures, the science is still young. Using a technique called Voxel-based morphometry, where scientists can look at specific regions, the brain, studies have shown that heavy multimedia users exhibit lower density grey matter in the anterior cingulate cortex (ACC), a region of the brain associated with error detection and especially social and emotional control. Likewise, those who disclose a great deal of personal information online  – the “over-share” types – tend to have increased grey matter volume in the orbital frontal cortex, which plays a role in the processing of social and emotional rewards. But questions remain unanswered as to whether these findings relate specifically to social media use or whether they are also characteristic of extroverts or narcissists, and the like. Regardless of this, what is fascinating is that by merely looking at your brain structures, scientists can now predict how you will likely use social media, how much you will share, and even how many online friends you are likely to have. When marketing folks catch up to this technology, the world will be a much different place for consumers. Our brains are designed to keep us alive long enough to pass along our genetic information to our offspring. At its very underpinnings, this means ensuring that we


remain a functional and valued part of a community, not only for mate selection but for protection. Social media speaks directly to these hardwired instincts – to connect, to belong, and to appear invaluable. If our brains' evolutionary journey is a very slow burn, social media is demanding that it be an inferno, and the result is a social and psychological nuclear meltdown – an exaggerated "Frankenstein" version of our primitive cognitive selves. We have the same raw ingredients and hardwiring as our ancestors yet exist in a very different operational reality. While our brain has obviously been slow to change biologically, the world around us is moving at a frenetic pace. We could easily argue that our brains are simply not keeping up – and hence, not adapting. Or we could, once again, make the assertion that our brain is doing precisely what it is designed to do in order to protect us and keep us alive, both physically and in terms of maximizing our individual genetic futures. In this regard, we are really just taking advantage of the brain's incredible capacity. Like a fine racehorse, our brain wants to run – and run fast – but, like the racehorse, we can't make it run indefinitely just because it feels good to do so. This is the curse of our times: how to moderate the allure of the technology on which our brain thrives? Curbing stress among our youngest and most vulnerable, by shielding them from the horrors of violence or abuse, may seem like an easy starting point. But we are failing here. Our brains are magnificent structures. While the claim that we only use 10% of our brain is a myth (we use 100% of our brain), scientists are still making amazing discoveries as to how various structures in our brain function to make us who we are.
How our brain develops through the years has been an especially hot topic, with neuroscientists unearthing the significant cognitive leaps our brain makes at various milestones, right into adulthood at around age 25, and how other developments, like myelination, occur well into our 40s. As scientists discover more about the brain, one of the more interesting realizations is that the science of the brain is more than just interesting facts about which part of the brain controls what. It's about how our brains interact with the world around us to form our actions, our relationships, and our communities. From the toxic stress of the Verdingkinder and children of war, to screen time and emerging avenues for learning, to the effect of social media on the way we think, the science of the brain has crossed from anatomy and physiology research into a more combined and holistic view of humans in their environment. While our brain may be malleable in terms of its "plasticity," many of our survival instincts remain unvaryingly hardwired. Understanding the impact that our brain has on our everyday behavior is no longer purely the interest of neuroscientists. During a PBS interview, Daniel Siegel, author of The Developing Mind, put it this way: there is a range of disciplines now, from anthropology, to psychology, to linguistics, and systems science, that want to study the brain from different angles.1 Indeed, the study of the brain and its immense role in driving our behavior is not unlike E.O. Wilson's term "consilience," defined as the merging of different scientific disciplines to study a common theme.

1  For a collection of referenced media clips, see www.drdansiegel.com.


It is, perhaps, within this spirit of consilience, with the brain simply doing what it does best, that we will make the greatest strides in unearthing new adaptive strategies. The puzzle we are solving requires skills beyond what the biological or the social sciences can do alone. With the unprecedented pace of technological change and our current struggle in understanding it and managing it, we risk constructing permanent social facsimiles of our real lives. This crisis is playing out right now – particularly among our most vulnerable and our most innocent. It is up to us to understand the trends, to bring awareness, and to make change, to protect our most precious of treasures – the health and wellbeing of our future generations.

Reference

1. Paulhus DL, Williams KM. The dark triad of personality: narcissism, Machiavellianism, and psychopathy. J Res Pers. 2002;36(6):556–63.

4

The Truth About Happiness

According to the United Nations-sponsored program on happiness, the world's worst place to live your life is a fictional country called Dystopia [1]. The virtual opposite of "utopia," Dystopia is an imaginary land where happiness is all but absent in the lives of the distraught and unfortunate citizenry, where misery, poverty, starvation, and insecurity rule each day. In the UN's report on happiness, no nation on the planet could ever be statistically or theoretically worse off than the hard-hearted world of Dystopia. That is, until the folks at the Happiness Index decided to take a closer look at the Central African Republic. So brutally unhappy is life in that central African nation that to preserve Dystopia's statistical position at the bottom of the happiness ladder – as the worst-of-everything country – the statisticians at the Happiness Index had to lower Dystopia's score from 2.33 on the ladder to 1.85. Without this tweaking of misery, the Central African Republic would have ranked worse off than the unhappiest place imaginable [2]. Indeed, the reality is worse than could be imagined. Individuals living in the Central African Republic make about 99% less money than Americans and are apt to die about 28 years sooner [3]. Described today as a complex emergency – a term typically defined by conditions in which humanitarian assistance is either severely encumbered or practically impossible to deliver due to war or a breakdown of governance or law and order – the Central African Republic is, according to the UN, the world's unhappiest place. In fact, the country has a worse score than war-ravaged Afghanistan or Syria, or even economically decimated Venezuela, where children routinely die in hospitals due to a lack of medicine amidst that nation's politically induced public health emergency [4].
In the Central African Republic, half of the population is in desperate need of immediate humanitarian assistance, including over 1 million children – 40% of whom are suffering from malnutrition [5]. Even when help arrives, it comes at a sad price, as the Central African Republic has taken center stage in a litany of sexual violence cases committed by UN Peacekeepers [6]. Indeed, this is no happy place. The opposite end of the scale is dominated by northern nations. According to the UN's 2017 World Happiness Report, the top several happiest countries
crossed the finish line in a tight pack. Leading by a nose was Norway, which ranked #1 as the happiest place on the planet, followed closely by Denmark, Iceland, Switzerland, and Finland [7]. And it's more than the snowy winters, fish soup, and muesli that render the folks in these countries so joyful. Along the key indicators of a happy society – caring, freedom, generosity, honesty, health, income, and good governance – these five nations provide fertile ground for individual and societal wellbeing. By comparison, Canada and the United States – although nice enough in their own rights – slipped down the ladder slightly, grabbing the number 7 and number 14 spots, respectively [7]. The gaps in statistical life standards and economic wellbeing widen quickly among contender nations. For example, when comparing Canada to Norway, Norwegians earn nearly 30% more money than Canadians but also enjoy 20% more free time [8]. No wonder they're happy! Outside of the UN's World Happiness Report, the OECD produces a life satisfaction report called the Better Life Index. Unsurprisingly, Norway tops the list here too, trailed closely by the usual contenders: Switzerland, Denmark, Iceland, and Finland [9]. When measured for jobs, income, housing, healthcare, and education – the usual metrics for "doing well" financially – Australia and the United States top the list. However, if you remove the income-oriented metrics and instead focus on "community, life satisfaction, and work-life balance," then the United States falls dramatically toward the middle of the pack, while the Nordic states and northern European nations dominate all the top spots (except for two top-ranking outsiders: New Zealand and Canada) [10]. What makes people happier or more satisfied? Is it monetary or non-monetary gains?
The well-known saying "money doesn't buy happiness" is often jokingly accompanied by the sister saying, "…well, it doesn't buy unhappiness either." People often assume, for example, that earning more money would make them "happier." This may be true to some extent, but generally, at an individual level, the research doesn't support this commonly held belief. In fact, some of the most fascinating and controversial research on the subject, done by USC economist Richard Easterlin, demonstrated that a rising national gross domestic product (GDP) does not positively correlate with an increase in self-reported happiness among that country's citizens. This counterintuitive finding – that higher GDP does not equal higher happiness scores – has come to be known as the Easterlin Paradox. However, it's a bit different when it comes to personal income. With respect to personal finances (as opposed to GDP), money does seem to buy happiness – but only to a point. Happiness rises as our personal income rises, but only to a level that provides reliable comfort and, one presumes, freedom from excessive worry. Beyond that, the research indicates that higher levels of disposable income and expensive toys do not seem to bring about ever-increasing levels of happiness. Yet, at the core of it, these findings have little to do with the actual level of income and more to do with one's income relative to one's cost of living. It turns out that a salary in the mid-$60,000 range could buy a nice (and happy) life in Mississippi, Tennessee, and Kentucky, but not so in Hawaii, where you'd need twice
that income to enjoy the same level of happiness [11]. Below these amounts, more income does equal more happiness, while above these amounts, more income does not lead to appreciable increases in happiness. But surely those individuals and families who have the means to step out of the hamster wheel of daily life to "summer" in the French Riviera or motor their super-yachts along the shimmering Croatian coastline must have higher levels of happiness? Rather unsurprisingly, it turns out that yes, such wealth does bring happiness, but more so in the form of life "satisfaction." This tends to be a measure of how well we are doing in making ends meet, particularly when compared to how well everyone else is doing. It's not necessarily a measure of how happy we feel inside or how much joy we perceive. The super-rich may have very high life satisfaction, but they may still worry and can still suffer from a lack of joy – and yes, even the super-rich worry about money. A Gallup World Poll measured happiness as two variants: life satisfaction and enjoyment of life. Polling 136,000 people across 132 nations, Gallup discovered that increases in money do indeed correlate with increases in life satisfaction [12]. Yet, for day-to-day enjoyment, measured in smiles, laughter, and inner peace, it was the social experience of life – our connections in the form of friends, family, and love interests – that created the most memorable and joyful moments that truly define the quality of our lives. Despite this, evidence suggests that the quest for money – as the perceived road to happiness – is nearly universal across nations and indeed across cultures, from Switzerland to Swaziland. One would think, then, that higher standards of living in wealthier nations would yield greater overall happiness among their citizens.
But why, then, does a middle-income nation like Costa Rica tend to exhibit higher self-reported happiness than a country like the United States, where GDP per capita is 500 percent higher than Costa Rica's? This curiosity becomes clearer when we look at a University of Warwick analysis of the British Household Panel Survey, in which the life satisfaction of more than 80,000 participants was analyzed against their income levels [13]. What researchers discovered was that when income went up, individual life satisfaction also went up – logical so far. However, when the same individuals were ranked for income within their direct peer group, those with a higher relative ranking had the highest life satisfaction, and those with a lower relative ranking had the lowest life satisfaction. This held across the board, regardless of actual income category. What this means is that regardless of whether you are rich or poor, how you compare relative to your direct peer group matters most in determining life satisfaction. According to lead researcher Christopher Boyce, life satisfaction is determined not by absolute wealth but by relative wealth – and, more precisely, relative status. The problem is, relative ranking is finite. Logically, only one person can be at the top of the list. Everyone else is secretly slightly less satisfied than the person above. But if relative ranking is so critical, why not rank ourselves against those less fortunate, so that we can, at least, find some sort of satisfaction in the idea that things could be worse? As Boyce discovered, when it comes to ranking, nearly twice as many of us
benchmark ourselves against higher-ranking individuals, while very few of us compare ourselves against lower-ranking or lower-status individuals. All this makes sense when looking at our comparison of wealthy America with less wealthy Costa Rica. A cultural analysis of the two nations points to dramatic differences in the way the citizens of each understand and attach importance to status and ranking. Geert Hofstede, well known for his work on national cultural dimensions, indicates on his country-comparison website that Costa Rica displays unique cultural markers, two of which are worth mentioning when it comes to satisfaction and happiness. The first is that Costa Rica ranks the lowest of all Latin American countries in terms of cultural masculinity, a term used to describe how preoccupied a culture is with the ideals of competition and success. Masculine cultures are motivated more by "wanting to be the best" (an obvious issue of status), while feminine cultures are motivated more by "liking what one does" [14]. In sharp contrast to Costa Rica, the United States sits at the opposite end of the spectrum, with very high cultural masculinity, meaning that status and ranking – and being better than others – are critical in American culture and directly linked to life satisfaction. On another measure of culture – individualism versus collectivism – Costa Rica is far more collectivist and thus tends to give importance to taking care of the greater group, whereas the United States is highly individualist, meaning priority is given to taking care of oneself and one's immediate family, over and above others. The combination of individualism and masculinity in the United States – and hence, the drive toward being better than others – creates a highly competitive environment for the pursuit of status, a sentiment that simply does not exist to the same degree in Costa Rica.
When compared to the research on happiness, the higher levels of life satisfaction in Costa Rica seem to be supported by a cultural tendency to avoid deriving meaning from status and perceived social ranking. In a culture where competition and relative ranking are paramount, only one person can be at the top of the income list, and this mental ranking means that everyone else, except for the very top dog, is somehow secretly dissatisfied with their lot in life [13]. To be very clear, the authors of this book love competition. In no way are we suggesting that competition is bad – in fact, the opposite is surely true: competition brings out the very best in us. Dr. Rob is a former national team cross-country skier and has a ferocious competitive streak. Indeed, not only is competitiveness the very basis for life and survival on our planet, in terms of mate selection, breeding, and day-to-day survival, but competition also gets us off the couch, into good jobs, and working hard for richer and better lives. Granted, given the contrast between America and Costa Rica, does competition always make us secretly sour and unsatisfied? Not so, says Norway. Norway, the happiest country in the world in 2017, has the second lowest masculinity score of any nation, behind only Sweden (another happy Nordic nation) [15]. However, Norway also scores relatively high on individualism, the degree to which people place priority on themselves over others. This is different from Costa Rica and seems to present a paradox: a national culture that values quality of life and life experience over being the winner or having better societal status, while at the same
time placing emphasis on oneself (and one's family) rather than the greater community. Generally, Norwegians do tend to value privacy and immediate family, in contrast to low-individualist (collectivist) cultures that put priority on sharing their lives and their space with the greater group. Like those of other Nordic nations, Norwegians seem to have a unique ability to view their world as a kind and fair one, and one where life's moments must be recognized and enjoyed – but not at the expense of looking after oneself or putting the needs of one's family first. Norway is the poster culture for what so many of us profess: on the road to success, don't forget to stop and smell the roses. That too is part of the journey – life is not a finish line. Inherent in the Norwegian approach is a type of competition and motivation for personal improvement that is more nuanced than the narrower view that improvement must be measured by rank. So, enjoy the midnight sun and pass the lutefisk among family after a hard day's work. Norwegians are also healthier than Americans. On almost every measure of health and healthcare, Norwegians outperform Americans; they not only live longer, but they live better and healthier lives. It may well be the pickled herring or the endless cross-country skiing, or it could be that Norwegians have better, earlier, and more frequent access to healthcare. Yet, Norway spends less on healthcare as a percentage of GDP than the United States – far less – at 9.7% versus 17.1% [16]. Despite its heavy spending on healthcare, the United States trails significantly behind other wealthy (and many non-wealthy) nations when it comes to life expectancy. Japan, which has the highest life expectancy in the world, spends far less than the United States, at 10.2% of GDP. And contrary to what dollars spent would predict, many of the world's middle-income, if not poorer, nations also have higher life expectancies.
This raises an important question: why do we bother to measure health and wellbeing by economic standards at all? New York Times best-selling author Daniel Buettner traveled the world in search of the places with the highest concentrations of centenarians and supercentenarians – those who reach 100 or 110 years of age [17]. Buettner found four outstanding hideaways that defied conventional aging statistics. Buettner's National Geographic team called these tiny territorial pockets Blue Zones, named after the blue marker that was used to circle them on the map. They are Sardinia (Italy), Okinawa (Japan), Nicoya (Costa Rica), and Icaria (Greece). Despite their modest means, the people of these tiny locales have exceptionally long lifespans and, as can be imagined, have now become some of the most studied peoples on Earth. Numerous large-scale research efforts have dug deep into the biomarkers of these super-healthy humans, measuring everything from bone lengths to DNA. Naturally, questions and theories have swirled as to why the people in these zones live longer than anyone else. Are they special (nearly closed) genetic pools? Is it something environmental? Is it the way they live? Or eat? As might be expected, the villagers in these four geographical pockets score extremely well on physiological indicators of good health and in many respects exhibit markers equivalent to those of people chronologically years younger. In almost every measure of wellbeing, from clean healthy arteries to low rates of cancer, stroke, heart disease, osteoporosis, Alzheimer's, and depression, the people of the Blue
Zones seem to have it all. And it's not socioeconomic status either – the popular, if not conventional, barometer for good health – as these people are in no way wealthy compared to those in high-GDP nations. Yet the health contrasts are phenomenal. On Costa Rica's Nicoya Peninsula, a 60-year-old man has a four times greater chance of reaching 100 than an American of the same age, despite the United States spending five times more on healthcare [18]. What Buettner and others discovered were nine identifiable commonalities across the four zones: exercise naturally; live purposefully; live low stress; eat smaller portions; eat less meat; drink alcohol, but in moderation; have a form of spirituality; have close ties to family and relatives; and have strong and active social networks. While pharmaceutical giants continue to search for (and hope to bottle) the fountain of youth among the zones – with one British firm even buying up a DNA database and blood records of some 13,000 Sardinians – more and more scientists are weighing in on the non-biomedical aspects of the Blue Zone advantage. Herein, once again, we see the advantage of viewing health and wellbeing through the combined lens of the biological and social sciences. Aside from the agrarian lifestyles and abundant outdoor work and daily walking, among the most significant of the hypothesized attributes of long lives are the strong social networks that have formed at the community level. The people in the zones not only live and work together, but they look after each other. This is especially true of the elderly, who remain part of the family, who remain physically engaged in farm and house duties, and who find comfort in having significant purpose and people who depend on them. Many, if not most, analysts describe the Blue Zone cultures as collectivist.
But collectivism comes in a variety of flavors, some of which have been closely tied to political authoritarianism and other very unhappy political realities. The nasty ones are often of the vertical collectivist variety, where individuals relinquish control of their uniqueness and individual rights by submitting to authority. These are not happy places. Horizontal collectivist cultures, by contrast, orient their view toward the greater good or wellbeing of the group, as opposed to submission to a principal dictator. There may well be an aspect of collectivism that helps us live longer by helping us lower stress. This is likely to come from knowing that we are not alone in the world, that others are there for us, and that as we age, we have purpose and are cherished. However, to get to the bottom of this potential force of longevity, we need to visit the very building blocks of humanity. Our bodies are made up of some 40 trillion cells, each containing a small nucleus inside the cell plasma, and within that nucleus 23 pairs of chromosomes, 46 in total. Twenty-two of the 23 pairs are the same in males and females; only one pair sets the sexes apart. Chromosomes are structures of DNA wound around proteins that help ensure the DNA is kept intact during the making of new cells, which happens continuously in the body as we replenish older cells. When chromosomes replicate to form new cells, the two strands of chromosome do not replicate
equally, meaning that one of the two strands reaches the end of the replication sequence before the other. If this shortfall were left unchecked, it would result in shorter and shorter chromosomes after each replication – a steady loss of DNA and essentially an unsustainable condition for life. To protect against this loss, chromosomes are capped with telomeres. Telomeres are made of repetitive, expendable DNA sequences that carry no genetic instructions, so that when chromosomes divide, a tiny bit of telomere is used up to cover the shortfall between the two strand lengths and no genetic information is lost. This is how the cells (and nature) protect themselves from losing precious DNA data. In fact, telomeres are often compared to aglets, the plastic tips of shoelaces that keep the laces from unraveling. They have a job, and that job is to keep the important bits from fraying when they are used. During each chromosome replication, a tiny bit of the telomere is consumed to make up the difference in strand lengths. After 50–70 divisions, the telomeres are too short to protect the chromosome, and the cell either dies or can no longer replicate. When this process reaches all our cells, it's called senescence, and it's the point at which we die of old age. At the opposite end of the lifespan spectrum, when we are conceived in the womb, our telomeres are extremely long – about 15,000 base pairs (the base pair being the unit used to measure DNA length). After we are born, telomere length drops to about 10,000, and as we age it continues to fall. We lose roughly 40–50 base pairs per year of life, and at approximately 5,000 base pairs in length our telomeres become senescent – and we die. If you do the math, this means that the maximum lifespan for humans is about 120 years, which is why many scientists set this as the upper limit for human longevity.
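The arithmetic above can be sketched in a few lines. This is only an illustration of the chapter's rounded figures (roughly 10,000 base pairs after birth, a floor of about 5,000, and a loss of 40–50 base pairs per year), not a biological model:

```python
# Back-of-the-envelope telomere arithmetic using the chapter's rounded figures.
# These constants are illustrative assumptions, not precise biological values.

TELOMERES_AT_BIRTH = 10_000   # approximate telomere length after birth, in base pairs
SENESCENCE_LENGTH = 5_000     # approximate length at which cells stop dividing

def years_until_senescence(start_bp: int, floor_bp: int, loss_per_year: float) -> float:
    """Years for telomeres to erode from start_bp down to floor_bp."""
    return (start_bp - floor_bp) / loss_per_year

# Losing 40-50 base pairs per year brackets the oft-cited ~120-year ceiling.
fast = years_until_senescence(TELOMERES_AT_BIRTH, SENESCENCE_LENGTH, 50)  # 100.0 years
slow = years_until_senescence(TELOMERES_AT_BIRTH, SENESCENCE_LENGTH, 40)  # 125.0 years
print(f"Estimated lifespan ceiling: {fast:.0f}-{slow:.0f} years")
```

Dividing the ~5,000 base pairs of erodible telomere by the yearly loss rate gives a window of roughly 100–125 years, consistent with the ~120-year limit many scientists cite.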
In nearly all cases, our telomeres continue to shorten with age until we die. In some unique cases, however, telomeres do not shorten, as Elizabeth Blackburn of the University of California, Berkeley, discovered in research that garnered her the Nobel Prize in Medicine. Blackburn discovered an enzyme she called telomerase, which, she found, acts to preserve the length of the telomere [19]. Eerily, cancer excels at capitalizing on telomerase to preserve the life of cancer cells, rendering the cancer virtually immortal. Outside of cancer, normal cells do not seem to enjoy the same degree of telomerase support. That is, until Harvard scientists figured out how to turn on the telomerase enzyme in rodents. The results? A partial reversal of the aging process, specifically a reversal of brain disease and infertility [20]. Given that telomeres shorten as we age, the length of our telomeres is an informative measure of age and degeneration – what some scientists call the "ultimate" biomarker. Indeed, a landmark study by the Kaiser Permanente Research Program on Genes, Environment, and Health looked at 110,000 individuals, validating the hypothesis that telomeres are indeed the ultimate measure of one's biological age [21]. When researchers looked at the Blue Zone biomarkers, they found several markers of good health, but one was near the top of the list. You guessed it: telomere length. On Costa Rica's Nicoya Peninsula, telomere length was so much greater than in other regions that a Nicoyan is biologically 20 years younger than a
non-Nicoyan of the same chronological age [22]. To explain the longer telomeres, researchers looked at 19 biological factors, spanning genetics, diet, environmental agents, and exercise [23]. Despite the numerous contrary voices, diet did not explain the longer telomeres; in fact, the people of the Nicoya Peninsula were actually worse off than other Costa Ricans when it came to good diets and measures of obesity and blood pressure [24]. The Nicoyans are also quite poor, so basic socioeconomic assumptions about health and income do not explain the unusual telomere lengths either. Then scientists stumbled upon a critical finding: the longer telomeres were not evident in those Nicoyans who lived alone; in fact, they had shorter telomeres. For Nicoyans who lived on their own, the Blue Zone advantage, as measured by telomere length, had entirely disappeared. As Nicoyans are more likely than other Costa Ricans to live together, the loneliness theory of telomere length has begun to gain traction. Indeed, one of the leading hypotheses for telomere length – and longevity – among those residing on the Nicoya Peninsula is that lower levels of stress may come from knowing that one is an integral and much-needed part of strong, supportive family social networks. This feeling, combined with a simpler agrarian life and a less culturally masculine preoccupation with status, means that Nicoyans (and others in the Blue Zones) place more emphasis on caring for and looking out for one another, even, and perhaps especially, in old age. Compare this to another famous story: that of the citizens of Roseto, Pennsylvania. The fascination with Roseto is one that has baffled conventional medical wisdom for years. The story begins with the immigration of Italians to the tiny enclave of Roseto in the Lehigh Valley of Pennsylvania, in 1884.
The Italians had left a village of the same name in the Apulia region of southeastern Italy to find a new beginning in Pennsylvania, which included work in the region's slate mines. Soon after arriving, the immigrants set up a hilltop village and named it after the town from which they had emigrated: Roseto. The Italian immigrants, in addition to working hard days in mines whose conditions must have been less than favorable for optimal health, smoked unfiltered cigars, drank wine in great volumes and without limit, and consumed diets extremely high in saturated fat. Yet, despite these potential health bullets, in the span from the mid-1950s to the early 1960s, Roseto's Italians experienced absolutely no heart attacks. This caught the attention of Dr. Stewart Wolf, Head of Medicine at the University of Oklahoma, who stumbled upon the Roseto anomaly by mere chance after buying property in the Poconos and discussing the unusually low death rates of Roseto citizens with a colleague. Dr. Wolf decided to investigate Roseto's unusual immunity to cardiac death. He discovered that despite the risk of heart attack increasing with age throughout the United States, astonishingly, the rate of heart attack was zero for Roseto men 55–64 years of age [25]. The citizens of Roseto were defying all conventional logic. After an intensive investigation, Wolf concluded that none of the behaviors of the Roseto Italians could explain their apparent immunity to heart disease. To the contrary, most of their habits would lead one to predict the opposite of health: a much higher incidence of heart disease.
The harsh environmental conditions in the mines, the excessive smoking and drinking, and the high-fat diets made no sense. Yet during the study of the Roseto Italians, one difference stood out compared to the communities around them, and indeed to America at large: the extraordinarily close-knit community bonds that all citizens of Roseto shared, as well as the very egalitarian way in which they lived among each other – with no apparent desire to "keep up with the Joneses," as was (and is) so entrenched in American social life. The tightness of the Roseto Italian community tended to mediate the highs and lows of life, to the extent that wealth was not overtly displayed, while those suffering economic setbacks were discreetly propped up by neighbors and loved ones. One citizen describes the Roseto village as inspired by Italian streets, with chairs placed near the curb at the edge of properties, facing the street, so that members of the household could sit and talk to passersby in the evening, following the southern Italian tradition of la passeggiata, a slow evening stroll during which families would dress up, show off new babies, stop and enjoy a glass of wine, and generally share and discuss family and life with neighbors. It was an idyllic community feel. The incredible sense of security that came from being a citizen of Roseto, along with the absence of overt displays of status and rank, meant that life satisfaction and happiness could flourish in the community – and with them, all the associated positive physiological effects. In 1992, three decades after his research began, Dr. Wolf, accompanied by his good friend and sociologist, Dr.
John Bruhn, presented their findings on Roseto to the medical community in an academic paper called The Roseto Effect, in which their summations included the rather controversial conclusion that the lower incidence of myocardial infarction in Roseto was not due to some wondrous herb, food, wine, or exercise, but was due to "greater social solidarity and homogeneity [of the community]" [26]. Wolf's original claims were somewhat less bold, stating only that the diet and apparent obesity rates conflicted with the near absence of myocardial infarctions [27]. Why the difference in tone? Well, after three decades of integration into American society, the amazing Roseto immunity to heart disease had all but evaporated, in parallel with the loss of community closeness and interdependence. Wolf himself had predicted, in 1963, that the apparent cardioprotective aspects of the Rosetans' lifestyles would erode if their social values began to fragment. And, as Wolf explains, this is precisely what happened: as first-generation Italians died, the second and third generations moved away to college, returning to build less modest lifestyles, complete with luxury vehicles, grandiose homes, and, ostensibly, the social dynamics of status and rank that go with them [28]. In step with these social changes, and in less than a generation, Roseto Italians experienced a doubling of heart attacks, equaling the average rate for the rest of America. Dr. Wolf's original observations and initial musings about social life affecting physical health as much as, or more than, dietary regime or daily behaviors were met with much criticism by his medical colleagues. His later discussions on the
matter, after having seen the Roseto Effect fade away in lockstep with the disintegration of the "traditional" Roseto lifestyle, have rendered his summations far more scientifically palatable. What was it about the social fabric of Roseto that made it more influential on health than even the effects of smoking or eating a diet high in saturated fat? Was it the lower stress that accompanied communal interdependence, with the sense that life's challenges could be distributed among many? Or was it the life satisfaction that came from a high degree of civic engagement and from helping others? Or the continuous social contact of living in multigenerational households? Or the lack of stress and deep worry over rank and social status? Perhaps it was simply the happiness of enjoying good food, good friends, and good wine? Or was it the unique mixture of all these things combined? For Dr. Wolf, one of the most prominent markers for good health in Roseto was the greater happiness and wellbeing that accompanied a more egalitarian and caring social class structure. Studies today show that stress – the kind that may come from not feeling satisfied or safe in life – tends to have real biological consequences for markers of aging, including shorter telomeres, less telomerase activity, and increased oxidation at the cellular level, potentially exposing us to disease and degeneration. This sort of stress can also make us age faster. In a study of women exhibiting very high chronic stress, the subjects displayed telomere lengths an entire decade older than their chronological age [29]. Just how much of this stress is self-induced? In 1954, social psychologist Leon Festinger theorized that we have a strong need to continuously take stock and evaluate our lives, and one of the primary ways we do so is by comparing ourselves to others in similar life situations or circumstances. Festinger's work gave rise to social comparison theory [30].
The theory posits that we will choose a target comparison that is relatable to us as it helps us to be more accurate in our self-evaluation. The further someone is from our own reality, in terms of location, type of lifestyle, income, or job, the less we feel that comparing ourselves to them is helpful. It is considered that the process of comparison may never be fully satisfied and therefore leads to a constant state of chronic anxiety. Interestingly, as Festinger’s work shows, men tend to omit comparisons with those whose situation are too far removed from their own while women tend not to be discouraged from comparing themselves with others in drastically different circumstances. This is particularly true of body image. In the South Seas, in the small province of Viti Levu, on Fiji’s main island, the indigenous population enjoyed happy, yet basic, lives. One of the unique qualities of the people there was their robust endomorphic physiques; carrying a bit of extra body weight was considered far more attractive than being thin. For the most part, women and girls of Viti Levu were very happy with their appearance, while the rates of eating disorder diseases or other personal dysmorphic attitudes were virtually nonexistent. Anthropologist, epidemiologist, medical doctor, and psychiatrist Dr. Anne Becker has spent much of her professional life studying eating disorders. Today, Becker is the Director of the Harvard Medical School’s Eating Disorder Center. Dr. Becker first observed Fijian social life in 1982, as a graduate student doing

4  The Truth About Happiness


anthropological field work there. She noted that the entire fabric of life in Fiji surrounded the enjoyment of food and the social bonds of sharing meals. According to Becker, daily life was devoted to either preparing food or eating food. And it was a good life. Families, neighbors, and friends came together to share their bounty, relaxing socially in their collective satiation.

Becker's research journey of more than 20 years into the lives of Fijians, and especially Fijian girls, resulted in a landmark discovery about the forces of comparison and dysmorphia. Becker notes the baseline in 1995, when there were virtually no eating disorders among Fijian girls and women, just prior to it all changing. In 1998, Becker did a round of interviews with Fijian girls, only to discover that 11.3 percent of girls were "purging" after meals in an attempt to lose weight [31]. By 2007, Becker had spearheaded a large study on Viti Levu in which she discovered that nearly half of the young female population was now "purging" [32]. What had caused this rapid change and apparent public health emergency?

According to Becker, by the mid-1990s, American television had made its way into the lives of Fijians for the first time. Among the most popular television shows were Melrose Place and Beverly Hills 90210, portraying beautiful, young, popular, and slender women. Virtually unknown in Fiji prior to the introduction of American primetime drama, dysmorphia and depression began to skyrocket. As Becker notes, the desire to "diet" was much higher in those who were watching more of the television shows. So-called friends would point at friends, telling them that they were too fat, resulting in an entire generation of self-loathing and endless bouts of depression among the islands' impressionable youth. It was another case of our social world impacting our health.
The linkages between social relationships, stress, and longevity are nowhere more evident than in stories of lifelong love. The tale of 90-year-old Bernard Jordan's life and love captured British headlines – and hearts – when he snuck out of the nursing home he shared with his wife, Irene, to attend the 70th anniversary of D-Day celebrations in Normandy [33]. Perhaps inspired by the spirit and memory of the insurmountable odds of D-Day, where 250,000 soldiers lost their lives in the 80-day battle to liberate the coast from entrenched Nazi forces, Bernard would not take no for an answer. With his Royal Navy medals pinned to his chest and hidden from view under his overcoat, Bernard told the nursing-home staff that he was going for a short walk. Hours later he was on a cross-channel ferry headed for France.

A police search for Bernard quickly turned from a missing person story into that of a homeland hero, when younger veterans phoned police to inform them that Bernard was alive and well and had checked into a Normandy hotel to attend the D-Day ceremonies. Papers printed headlines about the "Great Escaper," the elderly British officer whom many described as epitomizing the never-surrender spirit of his hardy World War II generation. By the time Bernard returned to his British nursing home, he had no need to hide his medals – he had become a national hero and a symbol of D-Day's triumphant valor. He celebrated his 90th birthday days after his return, receiving over two thousand birthday cards.

Yet Bernard's story doesn't end there. Bernard once again made headlines when he passed away the same year, at age 90 – not only for his own death but also for


the death of his wife, Irene, who passed away a mere week later, at age 88. Bernard and Irene had been married for more than 50 years and were inseparable. The mayor of Brighton, who had known the couple, was quoted as saying, "she probably gave up the will [to live]" [34]. To many, this may seem a comforting, romanticized version of soulmate kinship: a lockstep bond that helps those blessed with true love navigate the journey of life together – as well as the journey of death. To be sure, Bernard and Irene's story is well known because of Bernard's famous "great escape," but it's the story of the couple's deaths, merely a week apart, that has stirred intrigue among those who study the link between relationships, wellbeing, and health outcomes.

In Shakespeare's Romeo and Juliet, Lady Montague, Romeo's mother, is proclaimed to "die from a broken heart" upon learning of her son's banishment. To fatally succumb to emotional anguish seems appropriate enough for fiction, but what of nonfiction? The parents of former NFL and CFL quarterback legend Doug Flutie, who had been married for 56 years, died within one hour of each other. Doug's father, Richard (Dick) Flutie, died first of a heart attack while in hospital, followed by the death of Doug's mother, Joan, one hour later, also of a heart attack. Another well-known Hollywood celebrity, Carrie Fisher, best known on screen for her role as Star Wars' Princess Leia, suffered a medical emergency 15 minutes before her transatlantic flight landed in the United States. Rushed to hospital, Fisher died 4 days later. Fisher's grief-stricken mother, Debbie Reynolds, died the very next day (virtually the same night), and mother and daughter shared the same funeral.
In medical circles, broken heart syndrome is known as takotsubo cardiomyopathy, given its name by Japanese researchers who noticed that the ill-functioning left ventricle in broken heart syndrome takes on a shape similar to that of a ceramic octopus trap. Broken heart syndrome is often misdiagnosed as a heart attack but upon closer inspection does not exhibit many of the common characteristics of a heart attack, such as blocked vessels. In broken heart syndrome, an intense and overwhelming stress response seems to trigger a temporary enlargement of the left ventricle to the extent that it can no longer pump properly. While other parts of the heart may try to compensate with more vigorous contractions, the unfortunate condition can rob the body of critical blood flow, mimicking the cardiogenic shock seen in typical heart attacks. Indeed, cardiogenic shock – a lack of essential blood flow to the body resulting in tissue hypoxia – is the leading cause of death in cases of heart attack [35].

A Norwegian study found that broken heart syndrome is a significant risk in the week following the death of a cohabiting spouse, while other studies have found that mortality rates of new widows and widowers in the first 24 months following the death of a spouse are significantly higher than age-comparable averages, with the highest risk in the first 3 months [36]. Of course, such studies must account for a multitude of factors that can affect the health of the remaining spouse, including poor sleep and nutrition, neglect or omission of medication regimens, heavier use (or abuse) of alcohol, antipsychotics, and painkillers, and increased smoking. Regardless of these lifestyle factors, studies of some 60,000 widows and widowers show that the risk of


mortality following the death of a long-time cohabiting spouse is dramatically higher than for others of a similar age.

The effect of relationships on our health is profound. Yet for something so essentially human as relationships, our recognition and appreciation of such a force in health outcomes remain disturbingly diminished by our pursuit of lesser goals. Worse yet, we may in fact be actively chasing ideals that are clearly damaging to our health while under the illusion that we are fostering healthier lives.

The Harvard Study of Adult Development is perhaps the longest-running study of human development in the world. In 1938, Harvard researchers recruited 268 Harvard sophomores (future President John F. Kennedy among them) and later 456 men from far less advantaged inner-city Boston to participate in a comprehensive longitudinal health assessment, involving variables ranging from routine physicals and blood tests to relationship status and life goals. Psychiatrist Robert Waldinger, the fourth director of the 75-plus-year study, describes the core findings of the research in a well-known TED Talk [37]. He states that, like most young people in their early years, nearly all the study's subjects considered money and fame to be the two most instrumental conditions for experiencing a good life. Indeed, Waldinger states, this is no different from the perspectives of today's millennials, who almost unanimously feel the same as Harvard's early research recruits. The study, which continues today, follows the 19 remaining participants, all of whom are now in their 90s. However, added to the mountains of data on all the men's lives, researchers also have data on the lives of some 1300 of their offspring, most of whom are now entering their own retirement years. Looking at which individuals remained the healthiest for the longest, researchers discovered that it wasn't money or fame that determined a good life after all; rather, it was relationships.
Good marital relationships, strong extended family networks, and active community involvement were not only the magic formula for happiness; they were also the recipe for physical health. In fact, when the Harvard research team looked at the happiest and healthiest individuals in their study – those who displayed the best health late into life – and then looked back at their health records, blood tests, and physicals during midlife, they discovered that the typical biomarkers of health status, such as cholesterol levels and blood pressure, paled in comparison to the long-term effect of relationships. Just as in Roseto, whose lack of heart attacks so puzzled the medical community and where the strong egalitarian social network had more of an impact on health than smoking did, the Harvard study definitively demonstrates that our relationships are the single most important determinant of happiness, health, and longevity.

But it's the quality of those relationships that matters. Being in a toxic relationship, a lonely marriage, or a strained family environment can have the opposite effect on our health and longevity. So why do we, with each successive generation, fail to recognize and understand this? Is it simply the naiveté of youth that fuels the belief that money and fame are the answer to a long and happy life? Is this magic life formula only meant to be fully understood during the final chapters of our lives? Or does our evolutionary development favor a state of restlessness, where time spent developing slow-moving


relationships could otherwise be used to build social status and economic visibility to attract the most favorable mates? If true, what is the cost of this restlessness? Certainly, there is an argument that no great human achievements – indeed, the things that make humans innovative animals – could ever be accomplished without ambition beyond relationship building. That is perhaps true: from breaking the four-minute mile to creating great works of art, flying to the moon, and fighting disease, these efforts consume incredible energies that, according to the wisdom of relationships and happiness, would be better spent fostering our marriage and family ties. But as human animals, and as a theme oft repeated, the choices we make are never made in a vacuum, devoid of emotion or social context. This is the guts and glory that make us who we are, and it certainly matters when trying to decipher why we do the things we do.

Those in the business of human performance know that there is an optimum level of stimulation when it comes to doing tasks like landing a passenger jet or attending to an emergency medicine case. These tasks require having "your head in the game" and are characterized by the need to be at peak performance. This means not being overtasked to the extent that we miss important cues, but also not being so undertasked that we lack the requisite alertness. "Being in the zone" means being somewhat stressed but not overly stressed. When we are in bad relationships or when we succumb to excessive worry, our peak performance suffers. A study of more than 22,000 workers across several countries showed that nearly one-third of all employees describe themselves as "experiencing high stress" [38]. Presenteeism – the term coined for attending work while ill, tired, or disengaged – was 50 percent higher (in days) for the high-stress group.
If you look at any office tower in any city, chances are hundreds of those inhabiting the layered floors are stressed and unhappy – and, by way of those attributes, unhealthy as well.

Our brains and bodies strive to keep us in the zone. When our body is challenged by daily stressors, it's important that our natural stress responses do not muddy our performance. In the face of stress, our bodies strive to mediate extreme responses through a process called allostasis, which refers to a self-management mechanism that keeps our physiology in a happy place – not too stressed and not under-stressed. It's like a boat's ability to ride the waves and stay balanced without listing or tipping. This is an essential process of our homeostasis.

Critical to health is the total burden we experience from our inability to manage stress through allostasis. In some cases, in the face of constant chronic stress, our allostatic system can be heavily tasked, as measured by "allostatic load," a term used to describe a condition in which our allostatic system fails to adequately manage our physiological response to stressors [39]. When allostatic load is very high over a lifetime, the health implications can be grave. Some physiologists describe allostatic load as a measure of the "wear and tear" our bodies experience [40]. We all know someone whose life may have been difficult or challenged in some way, and we can often see the evidence in how poorly they may be aging or in their appearance of ill-health. This is high allostatic load, different from normal aging


(what scientists call senescent processes). In one study of 1189 subjects, ten biomarkers, such as blood pressure, cholesterol ratios, and waist-to-hip ratio, were measured to describe the health profiles of the individuals. The results showed a significant correlation between allostatic load and disease processes and mortality, far beyond any of the conventional standalone biomarkers of health determination.

One of the simplest and most accessible ways to help our bodies maintain proper allostasis is through strong social support networks. In a study of 357 female and 257 male office workers, the most significant factor in avoiding stress "burnout" was social support [41]. Indeed, those who are engaged in positive social relationships tend to consistently exhibit lower allostatic load scores [42]. To put this into perspective, a lifelong cigarette smoker increases his risk of stroke 2–4 times over that of a non-smoker [43]. A person with few social ties has a risk of dying two times greater than that of a person with many social ties. And for those with pre-existing heart conditions, loneliness can mean a 2.4 times greater chance of suffering a cardiac death [44]. The strength of positive social bonds is one of the most useful predictors of future health status, even beyond conventional biomarkers of long-term health. Newer measures of cumulative stress, such as allostatic load, have added strong support for this understanding of the link between happiness and health.

As is so often the case, what makes us human – our ambition and our social needs – constitutes both strength and weakness. Our need to connect with peers and family helps keep us healthy, and life challenges keep us sharp and innovative. But too much social competition and the relentless stress of never-ending social comparison can wear us down, eroding the protective qualities that may have come from more positive relationships.
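The idea of allostatic load as a cumulative count of "wear and tear" across biomarkers can be sketched in a few lines. This is purely illustrative – the marker names and thresholds below are hypothetical placeholders, not the values used in the study cited above:

```python
# Illustrative sketch only: allostatic load is often operationalized as a
# simple count of biomarkers that fall into a high-risk range. The marker
# names and cutoffs here are hypothetical placeholders.

HIGH_RISK_CHECKS = {
    "systolic_bp": lambda v: v >= 140,        # mmHg
    "cholesterol_ratio": lambda v: v >= 5.0,  # total/HDL cholesterol
    "waist_hip_ratio": lambda v: v >= 0.95,
}

def allostatic_load(biomarkers):
    """Count how many measured biomarkers fall into their high-risk range."""
    return sum(
        1
        for name, is_risky in HIGH_RISK_CHECKS.items()
        if name in biomarkers and is_risky(biomarkers[name])
    )

person = {"systolic_bp": 150, "cholesterol_ratio": 4.2, "waist_hip_ratio": 0.97}
print(allostatic_load(person))  # 2 markers in the high-risk range
```

The point of the composite score is exactly the one the study makes: no single marker tells the story, but the accumulated count across many systems does.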
In our world of social media, we see these forces collide. What appears to be a state of constant social connection may well be a great social façade: a fabricated and fickle social network that does little to support and improve the health outcomes that really matter. We all know what real relationships feel like. They are dependable and they are supportive. They help us manage life stress through the promotion of happiness and wellbeing in good times, and by truly being there for us in bad times. Knowing that you have family and friends like this lowers stress and anxiety by providing a sense of security about the future. It is the sense that no matter how bad it gets, we will weather the storm, because those around us who care about us will make sure of it. That is the kind of resilience mechanism that translates into real health outcomes and, indeed, greater longevity.

Generation after generation of youth fail to understand the truth about happiness and "the good life" until they have nearly completed their lives. Imagine if we could understand the meaning of life and the secret to happiness and wellness before we got old (and wise). Parents have a role to play here. Promoting positive relationships in the family and in friendships, and telling children and young adults why such social networks are important, may impact their health as much as talking about smoking, drinking, or drugs. And for non-parents and other adults, working harder on loving and sharing life's moments with your spouse, being a positive relationship


role model for other family members, and actively participating in your community may not only help your own health; you may well be saving the life of someone else who will grow to care about and depend on you.

References

1. The World Happiness Report 2017. Sustainable Development Solutions Network. United Nations. URL: worldhappiness.report.
2. The World Happiness Report 2017. Sustainable Development Solutions Network. United Nations. URL: worldhappiness.report. FAQ.
3. For some interesting country comparisons, see the URL: ifitweremyhome.com, originally developed to compare the spill of Deepwater Horizon on the world map but now expanded to include world demographic, public health, and economic comparatives.
4. Case N. Dying infants and no medicine: inside Venezuela's failing hospitals. The New York Times. May 15, 2016.
5. ReliefWeb Central African Republic. See also: Central African Republic: nearly one in five children is a refugee or internally displaced. UNICEF website.
6. Amnesty International. Central African Republic 2016/2017 Annual Report. Amnesty.org.
7. World Happiness Report 2017. Executive summary. Worldhappiness.report.
8. Ifitweremyhome.com for comparisons.
9. OECD Better Life Index at www.oecdbetterlifeindex.org.
10. OECD Better Life Index at www.oecdbetterlifeindex.org. One can alter the metrics used and see the results (rankings) change.
11. Kahneman D, Deaton A. High income improves evaluation of life but not emotional wellbeing. In Doug Short: happiness revisited: a household income of $75K? Council for Community & Economic Research.
12. Bryner J. Happiness is … making more money than the next guy. Live Science. March 19, 2010. www.livescience.com.
13. Boyce CJ, Brown GD, Moore SC. Money and happiness: rank of income, not income, affects life satisfaction. Psychol Sci. 2010;21(4):471–5.
14. Geert Hofstede's cultural dimensions website. https://geert-hofstede.com/costa-rica.html.
15. Op cit.
16. The World Bank Global Health Expenditure Database. URL: http://data.worldbank.org/indicator/SH.XPD.TOTL.ZS.
17. Buettner D. The Blue Zones: 9 lessons for living longer from the people who've lived the longest. Washington, DC: The National Geographic Society; 2012.
18. Mishra BN. Secret of eternal youth; teaching from the centenarian hot spots ("Blue Zones"). Indian J Community Med. 2009;34(4):273–5. https://doi.org/10.4103/0970-0218.58380.
19. Nobel Laureates: http://www.nobelprize.org/nobel_prizes/medicine/laureates/2009/.
20. Saltus R. Partial reversal of aging achieved in mice: control of telomerase gene appears to control process. The Harvard Gazette. Dana-Farber Cancer Institute. http://news.harvard.edu/gazette/story/2010/11/partial-reversal-of-aging-achieved-in-mice/.
21. Lapham K, et al. Automated assay of telomere length measurement and informatics for 100,000 subjects in the Genetic Epidemiology Research on Adult Health and Aging (GERA) cohort. Genetics. 2015;200(4):1061–72.
22. Rehkopf D, et al. Telomere length in Costa Rica's high longevity blue zone. The Population Association of America Annual Meeting. 2011. http://paa2011.princeton.edu/papers/112258.
23. Marchant J. Poorest Costa Ricans live longest. Nature: International Weekly Journal of Science. 2013.
24. Marchant J. Poorest Costa Ricans live longest. Nature: International Weekly Journal of Science. 2013.


25. Grossman R, Leroux C. A new 'Roseto effect'. Chicago Tribune. 1996.
26. Egolf B, Lasker J, Wolf S, Potvin L. The Roseto effect: a 50-year comparison of mortality rates. Am J Public Health. 1992;82(8):1089–92.
27. Stout C, Morrow J, Brandt EN, Wolf S. Unusually low incidence of death from myocardial infarction: study of an Italian American community in Pennsylvania. JAMA. 1964;188(10):845–9.
28. Interview in People Magazine by Kay Cassill, June 16, 1980.
29. Epel E, et al. Accelerated telomere shortening in response to life stress. 2004.
30. Festinger L. A theory of social comparison processes. Hum Relations. 1954;7(2):117–40.
31. Becker AE. Body, self, and society: the view from Fiji. University of Pennsylvania Press; 1995. See also: Ireland C. Fijian girls succumb to Western dysmorphia. Harvard Gazette, Science and Health, Culture and Society. March 19, 2009. News.harvard.edu.
32. Becker found that 45% of girls had purged in the last month.
33. Halliday J. War veteran who escaped care home to attend D-Day ceremony dies aged 90. The Guardian 6, 2015.
34. D-Day veteran Bernard Jordan's wife Irene dies aged 88. The Guardian. Press Association. Thursday January 8, 2015.
35. Is broken heart syndrome real? American Heart Association. www.heart.org.
36. King M, Lodwick R, Jones R, Whitaker H, Petersen I. Death following partner bereavement: a self-controlled case series analysis. PLoS One. 2017;12(3):e0173870. https://doi.org/10.1371/journal.pone.0173870.
37. Mineo L. Good genes are nice, but joy is better. Harvard Gazette. April 11, 2017. TED Talk video embedded. http://news.harvard.edu/gazette/story/2017/04/over-nearly-80-years-harvard-study-has-been-showing-how-to-live-a-healthy-and-happy-life/.
38. Higginbottom K. Workplace stress leads to less productive employees. Forbes Magazine. September 11, 2014.
39. Logan JG, Barksdale DJ. Allostasis and allostatic load: expanding the discourse on stress and cardiovascular disease. J Clin Nurs. 2008;17:201–8. https://doi.org/10.1111/j.1365-2702.2008.02347.x.
40. McEwen BS. Interacting mediators of allostasis and allostatic load: towards an understanding of resilience in aging. Metabolism. 2003;52:10–6.
41. Etzion D. Moderating effect of social support on the stress–burnout relationship. J Appl Psychol. 1984;69(4):615–22.
42. Seeman T. Social relationships, gender, and allostatic load across two age cohorts. Psychosom Med. 2002;64:395–406.
43. Centers for Disease Control and Prevention. Health effects of cigarette smoking. https://www.cdc.gov/tobacco/data_statistics/fact_sheets/health_effects/effects_cig_smoking/index.htm.
44. Umberson D, Montez JK. Social relationships and health: a flashpoint for health policy. J Health Soc Behav. 2010;51(Suppl):S54–66.

5

Why Do We Ignore Sleep?

The promising young 33-year-old doctor, adored by his department at his hospital in Suffolk, England, had just completed a long night on duty. Three marathon night shifts in a row had taken their toll on Dr. Ronak Patel, and he was eager to finally enjoy some well-deserved rest. It was just before 9:00 am when the young doctor climbed behind the wheel of his car to make the 40-mile drive home. Fatigued from his all-nighters, Ronak called his wife Helen on his hands-free phone, and although he was but a few miles from home, Ronak and Helen began singing to each other over the phone to keep him from falling asleep at the wheel. As Helen sang along, Ronak's voice suddenly stopped. With the phone line having fallen silent, Helen, in her worry, attempted to call him back – trying again and again – no fewer than 14 times. Fetching her car keys, she began driving the route from their home toward Ronak's hospital. A mere 3 miles from her home, she was stopped by police who were setting up barricades around the scene of an auto accident. Even while singing to his wife, the young doctor had drifted from his lane, his small Volkswagen Golf crashing headlong into an oncoming transport truck. The inquest into the young doctor's death found no mechanical failure in his car; it ultimately ruled that despite actively singing aloud to his wife while navigating the twisting curves of the road, the young doctor had nevertheless succumbed to sleep mere minutes from his home.1

When the brain needs sleep, nothing else matters. The brain's ability to "pull its own plug" is an essential adaptation necessary for human survival – albeit darkly ironic in the case of falling asleep at the wheel. Life can be interesting and exciting, and we can easily imagine that for our ancient ancestors, it was also dangerous.
The ability to stay awake for extended periods of time to fend off predators, to nourish themselves, to find mates, or tend to offspring would be an enormous evolutionary advantage. Yet it’s just for this very reason – our desire to never fall asleep – that the

1  See, Dr. Ronak Patel had been “singing to stay awake” before fatal crash. BBC News, 12 July 2016.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 R. S. Barrett, L. H. Francescutti, Hardwired: How Our Instincts to Be Healthy are Making Us Sick, https://doi.org/10.1007/978-3-030-51729-8_5


brain possesses the essential physiological adaptation of being able to carry out a self-shutdown.

As we all know very well, the process of sleep begins with a feeling of sleepiness or grogginess, typically brought on by natural circadian rhythms that follow the daily patterns of night and day. The onset of darkness is sensed by the retina, which sends signals along the optic nerve to the hypothalamus, which then directs the ever-so-tiny pineal gland to begin the release of melatonin. In turn, melatonin instigates a cascade of messages (and responses) that begin to transition the brain and body into sleep mode. Like our ancestors before us, once we receive these gentle sleep signals, we begin to feel the desire to move into a comfortable position for our nighttime rest. However, for the 9.5 million American shift workers and countless others who suffer from disturbed circadian rhythms, this idyllic chemical cascade is pure fantasy. As life and work get in the way of perfect fairy-tale circadian rhythms that follow sunsets and sunrises, our modern-day world and its sleep shortcomings are producing truly nightmarish consequences.

The need to sleep is not a complicated concept to understand. The longer we stay awake, the more tired we become. It begins with brain cells called astrocytes, which release a chemical called adenosine, a neurotransmitter that inhibits arousal and wakefulness while causing us to feel sleepy. In this way, the relationship between adenosine and the drive to sleep is linear: the longer we are awake, the more adenosine builds up in our brain and the greater our desire to sleep. Once we do sleep, adenosine levels quickly drop, reducing our desire to sleep. This rather simple relationship is called our homeostatic sleep drive, and it's one of the most powerful mechanisms in the body – so much so that our brain will not take "no" for an answer when it comes to our need for sleep.
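The linear build-up and rapid clearance just described can be expressed as a toy model. This sketch is purely illustrative – the rates are arbitrary stand-ins, not physiological values:

```python
# Toy model of the homeostatic sleep drive described above: "sleep pressure"
# builds roughly linearly with time awake and is cleared quickly by sleep.
# The build and clear rates are arbitrary illustrative numbers.

def sleep_pressure(hours_awake, hours_asleep, build_rate=1.0, clear_rate=4.0):
    """Relative sleep pressure after a stretch of waking and then sleeping."""
    pressure = build_rate * hours_awake    # linear build-up while awake
    pressure -= clear_rate * hours_asleep  # rapid clearance during sleep
    return max(pressure, 0.0)

print(sleep_pressure(16, 0))  # a full waking day: 16.0
print(sleep_pressure(16, 8))  # after a full night's sleep: 0.0
```

The asymmetry of the two rates captures the chapter's point: pressure accumulates all day, but even a modest amount of real sleep drains it quickly.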
When we try to ignore our homeostatic sleep drive by attempting to stay awake, our brain can begin to behave like a sputtering engine, "stalling" momentarily as it attempts to continue running in a compromised state. These brief "stalling" episodes are called microsleeps: 1–2 second periods during which we fall asleep even while struggling to stay awake. Eerily, in many cases we may not even know we are experiencing microsleeps, because our conscious self is too cognitively impaired to realize what is happening. In the wrong environment, this can lead to disastrous consequences, as in the case of young Dr. Patel. Let's consider the ramifications of a 1–2 second sleep. A car traveling at a typical highway speed of 65 miles per hour will cover 190 feet in 2 seconds.2 In most instances, a car lane provides merely a few feet between an automobile and oncoming traffic, meaning even a 1-second microsleep can easily be fatal.

And despite our brain's ability to auto-shutdown, many of us still feel that we can control our sleepiness. Much of this myth may well be rooted in the "necessity of the moment" idea – the false belief that life-or-death tasks demanding a great deal of attention (like driving one's car) will counter and overcome any such desire to sleep. Despite wishing to

2  See, Shift work and driver fatigue ended Dr. Brandon Rogers' life. American Sleep Apnea Association, 19 August 2017.


believe we can keep ourselves awake at all costs, the actual evidence says something quite different. Research by the National Academy of Sciences found that over a third of car rides home after night shifts resulted in near crashes [1]. Half of the shift-work drivers in the study (who were analyzed while driving after a night shift) were ultimately stopped by the researchers' backup passenger-seat floor brake and were unable to complete the driving period because they had experienced a temporary loss of control of the automobile. The results from this study are significant. Not only does pushing through sleep loss put shift workers at risk, but it also puts anyone who shares the road with them at grave peril as well.

Dr. Carl Stepnowsky, Chief Scientist for the American Sleep Apnea Association, describes sleep in terms of a "disengagement" and "unresponsiveness" to the environment. Stepnowsky says that microsleeps are particularly disastrous for drivers because despite being behind the wheel and looking straight ahead, drivers are "no longer seeing the road and no longer actively driving" [1].

A video posted by the organization Circadian shows an experiment performed on a young man with sleep deprivation.3 He is asked to execute several high-speed road maneuvers in an empty parking lot. In the video, he seems to be enjoying himself, commenting that he actually feels pretty good. Following the 2-hour test, the doctor facilitating the experiment shows him the results. Unbeknownst to the young driver, he suffered multiple microsleeps. Astonishingly, when added up, the driver was asleep for 25 minutes of the 2-hour test without even knowing it. In cases like this young driver's, the sleep deprivation is so severe that the brain can no longer function and shuts itself down, desperately attempting short bursts of recharging. The reason the brain doesn't fall asleep outright is that it is also struggling to stay awake in order to carry out critical tasks.
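The 190-foot figure quoted earlier is a simple unit conversion, easy to verify. As an illustrative check (a short Python sketch; the function name is invented for this example):

```python
# Quick check of the distance covered during a microsleep at highway speed.
# Pure unit conversion: miles per hour -> feet per second -> feet traveled.

FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def distance_during_microsleep_ft(speed_mph, seconds):
    """Feet traveled while 'asleep' for the given number of seconds."""
    feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR
    return feet_per_second * seconds

# At 65 mph, a 2-second microsleep covers about 190 feet.
print(round(distance_during_microsleep_ft(65, 2), 1))  # 190.7
```

Even a 1-second lapse at the same speed covers over 95 feet, many times the few feet separating a lane from oncoming traffic.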
Using an MRI machine, researchers could see into the brains of those experiencing microsleeps. They discovered that while parts of the brain were shutting down during microsleeps, the test subjects' attempts to command themselves to stay awake kept other parts of the brain desperately active. During microsleeps, when we are technically asleep, the frontoparietal cortex – responsible for processing sensory inputs and helping us pay attention to stimuli – goes into overdrive trying to combat the microsleep and keep us awake [2]. Fascinatingly, the brain is fighting a war on two fronts at the same time: trying to steal away brief seconds of total sleep to recharge itself while also pushing itself to stay awake, realizing that its very survival depends on not sleeping. While this is the reality for many who push themselves to stay awake during shift work on the back side of the clock, for millions more it is the reality of getting through the daylight hours at work – another day in which productivity is sabotaged by short, stabbing episodes of acute fatigue.

For our ancestors, sleep patterns followed the sun. Long and stressful days of trying to stay alive, followed by buzzing cicadas and long sunsets, led to natural

3  See Circadian website and accompanying video. URL: http://www.circadian.com/blog/item/42microsleeps-30-seconds-to-catastrophe.html.


5  Why Do We Ignore Sleep?

nocturnal sleep patterns, by which darkness determined sleep. For our forebears, the sun’s sleep was also their sleep. As darkness fell upon their eyes, the release of melatonin signaled to the brain that it was time to rest. The idea that we fall asleep when Mr. Sun sets and wake when Mr. Sun rises, while surely simplistic, is also a schedule that many of us might love to adopt. But for all the love we have of great sleeps, why is it that so many of us fail to achieve this most basic of human needs, which requires no real skill other than closing our eyes? In offices around the world, employers are trying to combat the recently coined phenomenon of “presenteeism”. Presenteeism refers to being present at work without really being mentally engaged with the work tasks at hand – and the roots of this modern cultural disease are planted in the poor sleep patterns of which so many of us are guilty. Estimates of real business losses due to our lack of sleep are astonishing, running over $400 billion per year in the United States alone [3]. Yet it surely goes without saying that most of us do not want to feel detached and lethargic during the day – so why, despite our efforts, is this such a monumental modern-day affliction? For most of us, our lives are filled with convenience. Except for the farmers among us, most of us do not have to rise before the dawn to work with livestock or till fields, churn our own butter, collect eggs, or make our own bread. We have refrigerators, running water, cars or transit instead of horse and wagon – and laundry machines instead of washboards. In fact, our lives should be full of free time. But, counterintuitively, today’s family feels busier and experiences more stress than ever before. The common response to “How are you?” is “Good,” followed by: “… busy.” The idea of busy has become synonymous with success.
If you’re not busy, you are somehow not achieving things, and life and opportunity are passing you by. As a result, we measure ourselves by how tightly we can fill our day’s agenda. How many different activities our children can do in a week becomes more important than how much they enjoy those activities. Of their children’s stress levels, parents say, “they are fine” because there are always other children who seemingly do more – and as we now know, we love to compare. When it is finally time to wind down at the end of a long day, many of us surf the Internet, read news of impending wars or pandemics, or stare wide-eyed into endless narcissistic social media portrayals of people making fabulous organic culinary delights for their perfect families or sharing their adventures while vacationing by the sea. Those people, of course, are not the ones lying in bed looking at pictures of others; we are, and the social competition delivers a good strong jolt of stressful pre-slumber cortisol. We stimulate our eyes and our brains long after the sun has set – and in doing so, create anxiety. There is strong social pressure to stay connected as well. We wake and fall asleep to a steady stream of email and text messages and often respond to them while lying in bed, during meals, while walking, while spending time with our children, and dangerously, even while driving (don’t do this!). With this emerging social reality come expectations – that we will be available and accessible 24 hours a day, 7 days a week.


In a world in which a full calendar is a sign of success and status, it’s no wonder that time spent sleeping, relaxing, and recovering has become more of a “nice to have” instead of a “need to have.” And it is not just the length of time that we devote to rest that is important; it is the quality too. Even when we do grant ourselves a long rest period in our day, our minds are often so busy and so disturbed – whether from news, social media, or simply life stressors – that we often suffer from poor sleep quality. What, then, is perfect sleep? While it is difficult to answer in general terms, we do know what happens when humans are put in environments free of all the sleep-robbing disturbances we place upon ourselves. First, our days become longer. Research suggests that if we were put in a home with no windows, only light from lamps that we could turn on and off, and no reference to the time of day, we would still sleep and wake, but the times at which we would do so would extend slightly each day. It turns out our natural body day is longer than 24 hours and is only kept in check by the solar light that enters our eyes in the morning. To keep a 24-hour day in the house with no windows, and hence no outside light, you would have to set an alarm clock and turn the lights on and off at specific times to avoid extending your day. For those who have had pets, it is interesting to consider their sleep cycles. When dogs and cats are not running around chasing things, eating, or urinating, they can usually be found sleeping on a corner of the rug. Scientists claim that multiple sleeps instead of one long sleep may be the natural sleep pattern in the animal kingdom – and there is no reason to believe that humans should be any different. As most new parents will attest, baby humans certainly follow similar sleep patterns until they develop night-and-day sleep-wake circadian rhythms.
The scientific term for this is polyphasic sleep, meaning more than one sleep. Likely as a result of industry and the need to live and work as a society, humans have moved away from the animal kingdom’s “normal” polyphasic sleep pattern toward a single sleep period. But this, too, may be a recent phenomenon. In reviewing literary works of fiction no more than a couple of centuries old, researchers discovered clues that biphasic sleep may have been the norm. With no television, no Internet, and certainly no social media, nightfall would have brought about a cascade of normal sleep-drive reactions to ready the brain and body for sleep. In biphasic sleep, our ancestors would fall asleep with the sunset, wake after midnight for a few hours, and then go back to sleep until sunrise. Scientists believe that without the screen stimulation and artificial lighting of our modern world, biphasic sleep may be our most natural default sleep pattern, and that in a situation where we are removed from artificial light – in which we only have the sun as a source of light – we will quickly move into biphasic sleep patterns [4]. In English literature, as in the literature of the Romance languages, sleep was often divided into “first sleep” and “second sleep.” In French, first sleep was referred to as “premier somme” and in Italian “primo sonno.” While biphasic sleep patterns were noted as far back as the literary masterpiece The Odyssey, more recent translations of first sleep referred to it as “beauty sleep,” a popular present-day phrase.


First sleep has also been referred to as “anchor sleep” or “dead sleep,” a very deep slumber that sets us up for the short period of wakefulness when our brains come alive. In fact, in the night hours when we are awake between first and second sleep, our brains produce an abnormally high level of prolactin, which gives us a great sense of wellbeing. Some modern-day biphasic sleep adherents note a euphoric and calming sensation during their midnight waking period.4 Not surprisingly, literature that references first and second sleep often cites this brief period of wakefulness as a productive time for creativity, writing, or intimacy. Especially in warmer climates, many of the world’s citizens enjoy an afternoon nap. In some parts of the world, and in particular within the Romance cultures, afternoon snoozes like the siesta in Spain and the riposo in Italy are commonplace. Businesses may even shutter their storefronts for hours in the middle of the day to permit store owners (and would-be patrons) an opportunity to sleep during “normal” business hours. For tourists and business travelers unfamiliar with these afternoon sleep rituals, the closing of stores, cafes, and restaurants for hours in the early afternoon can seem a bizarre inconvenience. Yet those who siesta may well be on to something. Researchers who study siestas and heart disease have found that nappers tend to have a much lower risk of heart attack. A midday nap seems to lower blood pressure, reduce stress, and improve cognition. According to Greek researchers, a post-prandial (after-meal) siesta can result in a drop in blood pressure that lasts a full 24 hours and should be part of our daily routine to maximize the physiological benefits.5 In this context, a siesta results in approximately 10% lower risk of cardiovascular events.
Likewise, a study by the Harvard School of Medicine and the University of Athens – who jointly carried out the largest research effort of its kind, with 23,681 subjects – discovered that those who napped at least 30 minutes three times per week had a 37% lower chance of dying from a coronary issue [5, 6]. In cultures not predisposed to siestas, midday napping is often viewed as lethargy or an inability to keep up with the pace of modern work. If you’re sleeping, it’s assumed that something must be wrong with you. Lying with your head on your desk is not typically seen as deserving of a pay increase or promotion. When we sleep at work, we tend to view it as “uncontrollable” sleep, and in many cultures, such loss of self-control is tantamount to weakness. Yet the lack of safe spaces to nap is both physical and psychological. Consider three common office designs: offices with glass fronts or windows through which other workers can see us at our desks; cubicles where passersby can peer around dividers or, creepier yet, stealthily leer over top; and newer open, wall-free spaces where collaborative desk arrangements or communal living-room-style lounges put everyone in the office in plain view of others. These common office designs provide little privacy for closing our eyes without co-worker scrutiny.
4  Biphasic sleep: what two weeks of it did to me. Renaissance Humans. (Blog). See, http://renaissancehumans.com.
5  See research poster presented by Dr. Manolis Kallistratos at the 2015 European Society of Cardiology Congress. In Busko, Marlene. Siesta Therapy, Medscape, 29 August 2015.


While many bosses may sympathize with the logic of not paying someone to sleep, the mounting evidence that short naps improve overall productivity powerfully enough to directly and positively affect the bottom line is becoming too loud to sleep through. Recent research indicates that short naps are akin to clearing out all the leftover and unwanted messages from your email inbox. Using near-infrared spectroscopy to measure blood flow in the brain, Georgetown University professor Andrei Medvedev discovered that while 95% of us have a dominant left side of the brain, which is typically our more analytic hemisphere, it’s the right side of the brain – the creative side – that does the heavy lifting during naps, quickly scavenging and clearing the brain of clutter [7]. Current thinking suggests that this cleanup may well take place in the hippocampus, where memories of the day’s events are stored like a giant email inbox [8]. When we nap, the brain unloads the hippocampus of these short-term memories and moves them into long-term storage in the frontal cortex. This process clears out our “inboxes” so that we can think more clearly and remember new things [9]. Our natural sleep pattern can be divided into five stages, which sleep scientists call “sleep architecture.” Our understanding of sleep stages is relatively new; until recently it was thought that sleep was simply one phase, involving repair and clearing of the mind. With modern diagnostic imaging, we know a lot more today about how our brains and bodies restore themselves during sleep and how age and lifestyle can affect that vital hardwired process. When we first begin to fall asleep, we enter the first and second stages, described as light sleep, during which our bodies progressively slide toward the deeper sleep of Stage 3. If we were awoken in the first two stages, most of us would be able to jump back into activity with little sleep inertia.
Naps of 30 minutes or less typically involve Stage 1 or Stage 2 only. In Stage 3, our deep-sleep stage, our brains slow down, giving this stage the name slow-wave sleep. This is when our bodies repair themselves, look after injuries, and build and refresh our immune system. In Stage 3 our bodies are under heavy repair, and as such we are very resistant to being awoken; if we are, we will be disoriented and take considerable time to become fully alert. These first three stages are all part of our non-REM (non-rapid eye movement) sleep. By Stage 4, we are in a very deep sleep.6 Stage 5, while also deep, is like no other stage. It is strangely characterized by brain waves similar to what we might expect to see when our brains are fully awake. In fact, Stage 5 is often called paradoxical sleep because our brain is extremely active despite being in a very deep slumber. In Stage 5, our breathing and heart rate increase, but our limb muscles become paralyzed to prevent us from acting out the powerful dreams that are characteristic of this stage. During this stage, our eyes move rapidly in all directions, hence the name rapid eye movement (REM) sleep. In a normal night’s sleep, we will repeat these stages over and over, like replaying a record, typically four to six times. However, not all the stages get the
6  The American Academy of Sleep Medicine now combines Stages 3 and 4 into one stage. Along with this change, the nomenclature was also changed to N1, N2, N3, and R (replacing S1, S2, S3, S4, and REM sleep). See, Moser et al. [10].


same play time as the night progresses. In the first half of the night, we spend a great deal of our sleeping time in the “repair shop” of Stage 3 and 4 sleep, while during the second half of the night REM sleep becomes more prominent. However, it’s not just the length of sleep that determines the stages we spend time in; it’s also the time of day: our circadian rhythm. We tend to experience deeper reparative sleep (Stages 3 and 4) up to approximately 3:00 am and then move into more REM sleep between 3:00 am and 7:00 am [11]. So it’s not just the length of sleep that matters but also what time we put ourselves to bed each night. Our age also dictates which stages we spend more time in. Newborns and infants, who can sleep a great deal of the day, tend to spend about half of their total sleep time in REM, but by the time youngsters reach toddlerhood, REM decreases to about 25% of the total time. As we move into childhood and adolescence, not only does our total sleeping time decrease but so does the time that we spend in REM and in deep non-REM Stage 3 and 4 sleep. By the time we are adults, we spend about half our night in lighter Stage 2 sleep.7 As we age, then, our quality of sleep diminishes as we spend less time in the deep, repairing Stages 3 and 4. Older adults also spend much less time in REM sleep, important for clearing our thoughts to improve memory and cognitive function. Fascinating new research has linked this reduction in REM sleep to dementia, showing that a 1% loss in REM sleep time equates to a 9% increase in the risk of dementia [12].
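The proportions above can be made concrete with a little arithmetic. The sketch below splits a night’s sleep into minutes per stage; the stage shares are illustrative assumptions for a typical healthy adult, not figures from the studies cited:

```python
# Illustrative stage shares for a healthy adult's night. These values are
# assumptions for the sake of the example; real hypnograms vary widely by
# person, by age, and even from night to night.
STAGE_SHARE = {
    "Stage 1 (light)": 0.05,
    "Stage 2 (light)": 0.50,   # roughly half the adult night, as noted above
    "Stages 3-4 (deep)": 0.20,
    "Stage 5 (REM)": 0.25,
}

def minutes_per_stage(total_sleep_min=480):
    """Split a night's total sleep into approximate minutes per stage."""
    return {stage: round(share * total_sleep_min)
            for stage, share in STAGE_SHARE.items()}

for stage, minutes in minutes_per_stage().items():
    print(f"{stage}: {minutes} min")
```

Under these assumed shares, an 8-hour (480-minute) night yields about 240 minutes of Stage 2 sleep and about 120 minutes of REM; with age, the deep and REM shares shrink while the light-sleep shares grow.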
While the decluttering of the brain is the most commonly understood reason for REM sleep in mammals, other possible explanations abound, ranging from a periodic (and presumably necessary) revving up of the brain to stimulate new neural connections, to the theory that our eyes must occasionally move to remain oxygenated, to ancient survival mechanisms that keep our brain active and ready to go should our lives be threatened by predators. There are several lifestyle factors that can inhibit our REM sleep. Alcohol, when consumed close to sleep time or in intoxicating quantities, can inhibit REM sleep, as can certain illicit drugs, like cocaine and ecstasy [13]. Add to this some prescription medications: the three major families of antidepressant drugs – MAOIs, TCAs, and SSRIs – can all dramatically reduce or inhibit REM sleep [14]. With one in six adults in the United States now taking psychiatric drugs, it is interesting to consider the societal effects of REM sleep loss within the population [15]. When drugs or alcohol are removed from the system, the brain often scrambles to make up for lost REM time, during which an individual can go through long and intensive dream-dense sleeps. In some cases, seemingly never-ending nightmares can accompany withdrawal from substance abuse – the detox, or “DTs,” has been thought to represent the brain’s feverish and merciless replenishment of lost REM sleep.8 Not only will the brain try to make up for missing REM time, but it may well push for even more REM than was originally lost. Research has shown that test subjects who lost 30 minutes of REM showed 35% more REM the following night [17].

7  For a good discussion on this, see Gordon [11].
8  See chapter 17 and notes on REM and REM Rebound in Kaufman [16].


For most of us, a few drinks tend to make us more chatty, approachable, and sometimes even more energetic. However, despite these apparently uplifting effects, alcohol is technically classified as a depressant. When we have a drink, alcohol begins to have several effects on the neurotransmitters in our brain. Among the early effects is alcohol’s role in increasing dopamine, which gives our brains a huge reward for seeking out the beverage in the first place. Alcohol also increases serotonin levels, giving us a nice warm and fuzzy feeling and elevating our mood – even making us more optimistic about life in general. Alcohol releases endorphins, the natural euphoric painkillers that reduce our feelings of discomfort and provide us with a mild sense of euphoria. And yet, despite these rather positive effects, alcohol also acts on the glutamate and GABA systems, dampening glutamate’s ability to excite our nervous system while enhancing GABA’s inhibitory signaling – in effect, slowing down (or “depressing”) our central nervous system activity and making alcohol a very effective depressant. When we drink alcohol before bed, we tend to fall asleep faster and slip into a deeper sleep more quickly. Indeed, alcohol’s ability to make us fall asleep more quickly – a variable scientists call sleep onset latency – is the most consistently validated effect of drinking on our nighttime sleep patterns [18]. With nearly a third of Americans experiencing some form of insomnia from time to time, and 10% suffering from crippling chronic insomnia, it’s no wonder that many may well turn to a few nightcaps as a way of getting to sleep [19]. Once we do fall asleep with alcohol, we tend to slip fairly quickly into deep sleep (Stages 3 and 4). This is our slow-wave sleep – our repair shop – where our brains and bodies are as slow as they can get, healing and regenerating.
However, one of alcohol’s most sinister effects is its role in impeding our dreamy REM sleep, which tends to become more dominant in the latter half of the night’s sleep. Without REM, we may become more irritable and absent-minded and can suffer reduced motor skills. The good news is that low and moderate amounts of alcohol tend to have little or no effect on REM, but larger amounts, or drinking right before bedtime, can have significant negative effects on REM sleep. How much alcohol can you enjoy without disturbing your REM? Research shows that anything greater than 0.04% blood alcohol content (BAC) will disturb your sleep [20]. While there are many nifty %BAC calculators online, it is difficult to know for certain when you will reach 0.04% BAC. For men, who are typically heavier, the time to reach 0.04% BAC and to return to zero BAC can be as little as half the time it takes a woman to reach the same levels.9 It might take a 200-pound man only a couple of hours to return below 0.04% BAC after consuming an entire bottle of wine over 5 hours, while it could well take a 140-pound woman most of the night.10 For the longest time, scientists believed that women metabolize alcohol more slowly than men simply because women are, on average, of lesser body weight. However, recent research indicates that there is much more behind the story than was first imagined, and that it begins with the first sip.
9  For an interactive online pharmacokinetics BAC calculator based on the work of E.M.P. Widmark, see: https://www.autoevolution.com/bac/.
10  Ibid.
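That back-of-the-envelope comparison can be sketched with the classic Widmark formula. The constants below – the gender body-water ratios r, the 0.015 %BAC-per-hour elimination rate, and 72 g of ethanol for a bottle of wine – are textbook population averages chosen for illustration only; they are no substitute for the online calculators just mentioned, let alone for medical advice:

```python
LB_TO_KG = 0.4536
ELIM_PCT_PER_HOUR = 0.015  # typical average elimination rate, in %BAC per hour

def peak_bac(alcohol_grams, weight_lb, r):
    """Widmark estimate: BAC% = alcohol (g) / (body weight (g) * r) * 100.

    Ignores elimination during the drinking period, so it overstates the
    true peak for drinks spread over several hours.
    """
    return alcohol_grams / (weight_lb * LB_TO_KG * 1000 * r) * 100

def hours_until(bac_now, bac_target):
    """Hours of steady elimination needed to fall from bac_now to bac_target."""
    return max(0.0, (bac_now - bac_target) / ELIM_PCT_PER_HOUR)

# A bottle of wine holds roughly 72 g of ethanol.
man = peak_bac(72, weight_lb=200, r=0.68)    # roughly 0.12 %BAC
woman = peak_bac(72, weight_lb=140, r=0.55)  # roughly 0.21 %BAC
print(f"man:   {man:.3f}% peak, {hours_until(man, 0.04):.1f} h to fall below 0.04%")
print(f"woman: {woman:.3f}% peak, {hours_until(woman, 0.04):.1f} h to fall below 0.04%")
```

Even with identical drinks, the lower body weight and lower body-water ratio roughly double the woman’s peak BAC and her time to clear it, which is the gender gap described above.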


Our body is endowed with a brilliant built-in protection system meant to keep toxins from entering our bloodstream. Like a security guard employed to usher threatening people away from the front door of a building, our body can quickly escort a threatening substance out of our stomach and directly to the liver, without the substance ever entering the bloodstream. It does this through a process called first pass metabolism (FPM), or pre-systemic metabolism, by which alcohol is almost immediately fast-tracked from the stomach and small intestine through the portal vein and on to the liver. From here the liver tries to metabolize as much of the alcohol as it can to reduce its concentration before it enters the bloodstream. This very clever system is also active when we ingest certain drugs and medications, like morphine, and it can be so effective that it can substantially dilute a medication’s active ingredient, a process that is investigated during drug trials. An enzyme called gastric alcohol dehydrogenase (ADH) plays a central role in FPM, hammering away at alcohol to break it down in both the stomach and the liver. As it turns out, men have a great deal of ADH and women have very little, meaning that women tend to feel alcohol’s effects far more quickly than men do, regardless of body weight [21]. Women also tend to have less water in their bodies than men do, and this lower water volume means that women tend to have a higher BAC for a given amount of drink.11 In fact, when researchers leveled the playing field between men and women by accounting for water volume in the body, they found that it also equalized the rate of alcohol elimination, providing evidence that BAC and the rate of alcohol metabolism are closely linked to how much water we are carrying around. Another factor that may increase alcohol’s effect for women is female hormones, as BAC tends to rise more dramatically for a given number of drinks just prior to menstruation.
For both men and women, having food in the stomach is one of the most significant ways of slowing down alcohol’s effects on the body, as it slows the rate of gastric emptying, meaning there is more time to break down the alcohol in the stomach before it enters the bloodstream. Other factors that can affect BAC and elimination are fatigue and stress, altitude, and certain medications. The lesson in this for sleep and REM is to be aware of how alcohol is absorbed and eliminated from the body and what affects these rates, particularly in terms of gender. And if we’re not getting to sleep, we aren’t able to perform as well as we would like. Despite our obvious need for sleep, nearly 30% of the American population suffers from insomnia or other sleep disorders, and nearly one out of every three Americans does not obtain sufficient sleep.12 The US Centers for Disease Control and Prevention (CDC) has now labeled insomnia a “public health issue,” linking lack of sleep with road accidents, mistakes at work, emotional and mental instability, and poor performance.13

11  See Factors that Affect Alcohol Metabolism. University of Richmond. Health Factsheet. http://wellness.richmond.edu/common/pdfs/factsheets/alcohol-metabolism.pdf.
12  Centers for Disease Control and Prevention. 2016 Press Releases. https://www.cdc.gov/sleep/index.html.
13  Ibid.


Each year, the CDC’s Behavioral Risk Factor Surveillance System (BRFSS) conducts the largest public health survey in the world. With nearly a half million participants in the United States, the BRFSS peers deep into the lives and households of Americans in all 50 states, districts, and territories. One of the more curious recent findings of the BRFSS is that insomnia, while experienced by many, is not uniformly distributed geographically. Indeed, according to the findings, if you want to avoid insomnia and its deleterious effects, it’s best to avoid living in the southeastern US states, near the Appalachian mountain range, or in the state of Hawaii. These three distinct areas show a much greater rate of insomnia than the central US states in the Great Plains or Midwest.14 The best sleepers in the United States are in Minnesota, South Dakota, and Colorado. Why is this so? The most likely reasons fall into two broad themes, the first being physical health, with obesity, diabetes, and high blood pressure more rampant in the southeastern United States than in the Great Plains, Midwest, and northwestern US states. In fact, if you lay a map of the US states experiencing insomnia and shortened sleep over a map of obesity levels in the United States, you would see a near direct correlation between high obesity levels and sleep issues.15 Arkansas, Louisiana, Mississippi, and Alabama form a cluster of states that (along with West Virginia a bit farther north) have soared in rates of obesity in recent years. In all these states, slightly more than 35% of the population is now considered obese.16 Considering that 65% of Americans are overweight, in states with very high obesity rates the percentage of overweight people may be approaching three out of four.
In Mississippi, nearly half of the adult population is obese and 42% of the state’s citizens suffer from hypertension.17 Along with obesity comes the risk of obstructive sleep apnea, a condition that affects nearly 20 million Americans and is characterized by a relaxing of the throat muscles to the point where breathing starts and stops during sleep, disrupting the sleep itself. Sufferers of sleep apnea may well experience drowsiness during the day, which can increase their appetite and cause a loss of desire for exercise – a one-two punch in the gut when it comes to combatting weight gain. Dozens of studies have demonstrated a link between lack of sleep and obesity.18 With subjects wearing polysomnography equipment, researchers were able to discover an inverse relationship between nightly sleep duration and waistline circumference; shorter sleeps lead to bigger waistlines [23]. Of the many ways that lack of sleep affects us, two are most significant when it comes to obesity: the effects on our metabolic and endocrine systems. As sleep is reduced over time, we become less sensitive to insulin, meaning we must produce more insulin to regulate our blood sugar. Over time, insulin insensitivity can lead to type 2 diabetes, where the body becomes resistant to insulin’s sugar-busting effects. Shockingly, one study in non-humans demonstrated that
14  For a map illustration of the sleep data, see https://www.cdc.gov/sleep/data_statistics.html.
15  For an interactive graphic of obesity levels in the US, see Adult Obesity in the United States (updated 31 August 2017). https://stateofobesity.org/adult-obesity/.
16  Ibid.
17  See stats on Mississippi on the State of Obesity website. https://stateofobesity.org/states/ms.
18  For a discussion of the literature and findings, see Guglielmo and Silvana [22].


missing one night of sleep has the same effect on insulin sensitivity as eating a high-fat diet for 6 months [24]. Hormonally, one of the most significant effects of sleep loss is a decrease in leptin. Leptin is a hormone produced in our fat cells, and its role is to tell the brain’s hypothalamus that we have enough stores of food and fat in our body and that we do not have to eat. In short, it turns off our hunger. The more body fat we have, the more leptin we produce, and the more our brain turns off our desire to eat. When it comes to leptin levels, sleep matters. Several studies have confirmed a causal link between sleep loss and decreased leptin levels, with one study claiming that for each hour of decreased sleep duration, leptin levels decreased by 6% [25, 26]. While leptin’s job is to decrease our appetite, its hormonal counterpart ghrelin’s job is to increase hunger. Ghrelin, which is produced in the stomach, increases when our stomach is empty, telling our brain that we should get hungry. When sleep is shortened in duration, ghrelin levels increase, meaning we feel hungrier [27]. When we lose sleep, leptin (our eating “stop sign”) goes away, and ghrelin (our eating “green light”) increases. Regardless of whether we need to eat, the effect is hunger. These linkages may start to make more sense in considering the southeastern states that lay claim to the shortest sleeps and the highest rates of obesity. Conversely, the mid and western states exhibit the best sleep patterns and much lower rates of obesity. It seems that sleep and obesity are joined at the hip and together form a somewhat vicious circle and a formidable challenge to staying lean and healthy. When researchers considered why there was such geographic variance in sleep length, sleep disturbance, daytime drowsiness, and obesity, they looked at several possible causes, including demographics, lifestyle, substance abuse, socioeconomic conditions, and even sunlight patterns.
They found that the most significant determinants of this negative pattern were mental health, age and ethnicity, and access to medical care [28]. Indeed, analyzing 33 key indicators of stress, researchers found that the same southeastern US states – the ones with the lowest sleep and highest obesity rates – were also the ones with the highest reported stress levels [29]. Among the states with the least stress were Minnesota and North Dakota, also the states with the best sleep quality and lowest rates of obesity. Around the world, the stress-sleep-obesity triad is consistently linked. In Nigeria, for example, stress levels are among the highest on the planet, with obesity rates skyrocketing to epidemic proportions [30]. Matthew Walker, a neuroscientist and one of the world’s most influential sleep scientists, warns of a litany of negative physiological responses that come from a lack of sleep [31]. Perhaps most interesting is the newer research that points to the dangers of acute sleep deprivation – as short as one poor sleep. Walker reminds us that even one night of poor sleep can compromise our immune system, rendering us more susceptible to colds and influenzas. Additionally, even one night of only 4 or 5 hours of sleep can deplete our stores of cancer-fighting natural killer cells by as much as 70% [32]. Indeed, our 24-hour sleep cycle helps support our immune system. Even a short disruption in sleep schedule can begin to produce inflammation in the body while decreasing immunity [33].


Walker states that over the past 75 years, our sleep health has deteriorated to disastrous levels. In 1942, for example, less than 8% of the population tried to function on less than 6 hours of sleep per night, while today nearly half the population does [33]. Walker makes a great point that there is a stigma around sleep. In an interview for The Guardian, Walker jokes that for babies, sleep is non-negotiable, and yet no one ever says, “What a lazy baby!” [32]. But for many of us, working harder with less sleep seems like a badge of honor. Many of us feel that while we’re sleeping we are missing out on the world’s never-ending scroll of news and that reducing our snoozing by even a few minutes could help us surf newsfeeds, catch up on social media, or simply do something relaxing like watch a bit of television or a movie. Millennials (and the generations after them) were born in the dawn of the digital media age and have never known a world without screens. This means that Millennials grew up and exist entirely in an online ecology that never turns off. For many Millennials, the ability to work online means that they can reinvent the traditional nine-to-five punch-clock workday. There are many interesting characteristics of Millennials, who now represent nearly 40% of the American workforce. Millennials remain one of the least settled generations, with 60% of Millennials lasting only 2–3 years at a job before moving on to another employer. This rate of turnover is a twofold increase over previous generations.19 For most companies, this presents an economic nightmare, with average training and “onboarding” costs running somewhere between $15,000 and $20,000 per employee.
By 2025, Millennials will comprise upward of 75% of the workforce, making employee engagement and retention top priorities for the new business economy.20 While no one is picking on Millennials, they are the first generation in history to grow up with online media at their fingertips, making them perhaps the most interesting test generation in our history for an emergent digital world. Indeed, the difference between the way Millennials perceive themselves and the world around them, as compared to the same measurements from the generation preceding them, may be one of the most significant divides in history. The average Millennial now spends up to 18 hours a day online [34]. For the Millennial, the inability to “turn off” and unplug has become a real challenge when it comes to sleep. It’s no surprise that many Millennials fear disconnecting from their online lives, lest they fall behind [35]. In fact, the acronym FOMO (fear of missing out) was coined to help describe the psychology of Millennials, and it may well explain why Millennials have garnered the reputation of the restless generation – always with one foot out the door of their employer at any given moment. With near continuous online media and relentless and sometimes unforgiving social comparisons with others, Millennials are consistently looking for higher-value moments and experiences to add to their lives.
19. Millennial Branding and Beyond.com Survey Reveals the Rising Cost of Hiring Workers from the Millennial Generation. The Cost of Millennial Retention Study. See www.millenialbranding.com, 6 August 2013.
20. Ibid.


While the Millennials, as the true pioneers of plugged-in lives, continue to provide us with fascinating data on growing up online, their generation is only the leading edge of younger and newer generations now following in their digital footsteps. Whether through FOMO or the allure of social comparison, young people are becoming increasingly attached to their devices. In the United States, nearly 95% of people surveyed said they watch some form of digital screen in the hour before bed, and 90% of 18–29-year-olds sleep with their phones.21 Of the generation born after Millennials (now in their teenage years), 97% have an electronic device in their room, and, astonishingly, one in five routinely wakes up during the night to check social media [37]. And if you can’t understand why a teen wouldn’t simply shut off the bedside phone, it’s because you may not remember what it’s like to experience the world through a teenage brain. This experience includes neural pruning, whereby old neural pathways are pruned away to make way for new ones, an essential and critical process for brain development. Yet, as a result of this process, teenagers become highly sensitive to outside cues as they hunger for brain stimulation – and of course, the easiest outside stimulation in the middle of the night is the never-ending scroll of social media conveniently within arm’s reach [38]. Some teenagers may even experience immense anxiety if they don’t check in with their phones during the night, as a mother might with a newborn baby. New research on nomophobia (a fear of losing touch with one’s smartphone) indicates that smartphone users can develop a strong and protective bond with their devices, viewing them as an extension of themselves because the phone is consistently used to experience and record daily life. The phone becomes a life partner, which may explain why one-third of the population would rather give up sex than part with their smartphone [39].
And, among adults, nearly 20% of the population would rather forgo seeing their spouse for a week than give up using their smartphone apps [39]. There is little doubt that our built-in hardwired instincts are intensifying the ill effects of alluring online devices. At Brigham and Women’s Hospital, a study was conducted to measure the effect of reading on an iPad screen versus a paper book prior to sleep. The results showed that even if 8 hours of sleep is achieved by those who use screens before bed, the quality of the sleep is impaired because of the screen’s effect on the viewer’s melatonin production [40]. Not only did it take longer to fall asleep, but the quality of the sleep suffered and time spent in REM was shortened. The researchers attribute the results to the shortwave (blue) light emitted from digital screens. When our eyes sense darkness, they send the message to a part of the brain called the suprachiasmatic nucleus (SCN), a tiny region inside our hypothalamus, which is the master control for our body’s circadian rhythm. The SCN oversees the regulation of our hormones and body temperature as they correspond with sleep and wakefulness. One of the most critical signals that the SCN sends out is to the tiny pea-sized pineal gland in the center of the brain, whose job it is to synthesize and release the hormone melatonin.

21. See cited sources on infographic. Sourced in article by Blodget [36].


Melatonin has appropriately been called the “vampire hormone,” as it only comes out at night. Importantly, melatonin’s job in the body is to signal our brain that it’s time to sleep, but not necessarily to increase our sleep drive. This is a commonly misunderstood property of melatonin among those who purchase it as an over-the-counter sleep aid. Melatonin won’t necessarily knock you out like a sleeping pill will – its function is only to shift our sleep time in accordance with its release. If you have significant jetlag or are a shift worker, then melatonin supplements may help realign your body clock. While all ages are guilty of reading on their devices, watching movies, scrolling through news stories, using social media, or returning email or text messages, teenagers are especially victimized by the deleterious effects of shortwave blue light on sleep. One attribute particular to teenage sleep is a unique sleep-phase delay, whereby the drive toward slumber in the evening is much slower than that of other ages. Indeed, it will come as no surprise that science has now confirmed that teens’ natural biological programming tends to create a condition in which they want to fall asleep later at night and snooze later into the day. Even without cellphones and tablets at their fingertips, teenagers tend to experience a shift of sleep pattern that is uniquely later than both younger and older generations. When shortwave blue light is introduced into the mix, it exacerbates this late sleep-phase conundrum, telling the teenage brain that it’s not yet time to sleep and pushing the sleep phase even later – to around 11 pm [41]. Not only are high school teens viewing screens socially, but an ever-increasing amount of homework requires them to be online – often late into the evening, right up to bedtime. When students then must rise early for school, their brains and bodies suffer from inadequate sleep – not unlike waking an adult at 2 am to start the day. Dr.
Mary Carskadon, Professor of Psychiatry at Brown University and winner of the Sleep Foundation’s Lifetime Achievement Award, is one of the leading researchers of teenage sleep patterns. Among Carskadon’s many studies is one showing that teenagers who sleep fewer hours suffer a host of problems, including poorer learning and memory, abstract thinking, and problem-solving [41]. This translates into real performance deficits when it comes to the all-important grades needed for college and university. Carskadon also found that poor sleep can lead to behavioral problems as well, such as poor inhibition, moodiness, and impulsivity [41]. Most importantly, a general inability to self-regulate emotional health has been linked to increased depression and suicidal thoughts among teens, whereby life and school pressures can become magnified by lack of sleep – pressures that might have been handled with greater resilience in a more rested state. Building discipline over when to shut off screens prior to bed and establishing a healthier bedtime routine are certainly attainable goals – and ones that parents can help facilitate. However, the naturally phase-delayed biological clocks of teens are also being met with another man-made challenge: early school start times. In the United States, a study of nearly 40,000 middle schools, high schools, and combined schools found that the average bell time was 8:03 am, with some 43% of high schools starting in the 7 am hour [42]. This runs counter to an official statement by the American Academy of Pediatrics (AAP), which calls for bell times to be no
earlier than 8:30 am [43]. Currently, only 15% of middle or high schools in the United States meet this recommendation by starting their school day at 8:30 am or later [44]. The AAP argues that delaying school start times by even slight margins can go a long way toward solving some of our most pressing public health concerns, including major challenges to mental and physical health. One of the biggest arguments against moving bell times later is the cost of busing. In the Canadian city of Calgary, bus schedules were revised and aligned in 2017, resulting in many school start times moving earlier. The Calgary Board of Education and the provincial government argued that the move would save money – and save costs for families. Yet, this may ultimately prove to be shallow logic. A Brookings Institution report on student achievement and bell times considered the cost of transportation and found that moving start times later by 1 hour (not earlier) is what saves money. In fact, not only does it save money, it does so in a very big way. Brookings found that the benefit-to-cost ratio for later start times was 9:1 [45]. While the Brookings analysis did consider the cost of busing, it also included significant factors of student wellbeing and overall performance, not only in the immediate sense but also in the long-term societal and public health sense, taking into account better grades, improved mental health, and better physical fitness, in youth and into adulthood. The Calgary Board’s decision flies in the face of science and against emerging trends toward later start times. Ironically, one of the contentious issues the Calgary Board is wrestling with is math scores. Yet, a study in the Journal of Clinical Sleep Medicine showed that math scores improved after only 5 days when bell times were pushed back 1 hour later [46].
In so-called “extensive” consultations with stakeholders on the proposed earlier bell times in Calgary, the minutes revealed that no discussions were tabled with respect to child wellbeing.22 Interestingly, the earliest average start times among the 40,000 US schools were in Louisiana, which has an average school start time of 7:40 am – a state that suffers from both the lowest sleep hours per night and the highest obesity rates in America. On the other end of the spectrum, the schools with the latest start times, in the Midwest, have the greatest hours of sleep per night and enjoy the lowest obesity rates.23
22. As discovered by the author in consultation with School Board and parent representatives in Calgary.
23. Op. cit.
A US poll found that by their senior high school years, students sleep slightly less than 7 hours per night, on average [41]. And only 6 in 100 teenage girls and 8 in 100 teenage boys get the CDC’s recommended sleep of 9 hours per night [47]. Certainly, more dangerous and costly than falling asleep in the classroom is falling asleep or being inattentive behind the wheel. There is a direct link between early school start times and teen car crashes. At SLEEP’s 2010 scientific meeting, sleep doctor Robert Vorona (in conjunction with the Department of Motor Vehicles) compared county car crash records involving teenaged drivers against school start times. In Virginia Beach, where school started at 7:20 am, there were 65.4 teen car crashes for every 1000 teen drivers. In adjacent Chesapeake county, which
started school an hour and 20 minutes later, at 8:40, there were 46.2 teen car crashes per 1000 teenage drivers – a reduction of roughly 29% [48]. In another county, changing start times from 7:30 am to 8:30 am reduced teen car crash rates by 16% [49]. Like treatments for many modern ailments – stress, obesity-related issues, and high blood pressure – sleep, too, can come in pill form. In a study called One Million Nights, conducted by Dr. Mehmet Oz and ResMed, 20,000 participants were analyzed along a variety of sleep variables, including breathing, body movement, light, and temperature in each sleep stage, in 30-minute intervals, covering 11.1 million sleep hours and 1.4 billion data points. An ongoing data collection program with over 4 million nights of sleep now in the dataset, SleepScore Labs maintains the largest sleep dataset in the world.24 The ongoing tabulation suggests that over 60% of Americans have problems falling asleep and staying asleep throughout the night, and, astonishingly, over half of Americans use one or more sleeping pills per night, either in prescription form or in the form of over-the-counter sleep aids or herbals.25 Of over 18 million doctor visits for children with sleep issues, a shocking 80% resulted in prescriptions for sleeping pills [50]. The use of sleep aids is rising at a rate never before seen in history. Between 1994 and 2007, the number of prescriptions for sleep-aid sedatives grew 30 times over, a rate 21 times greater than the rate of insomnia complaints during the same period [51]. Over a 5-year span, from 2005 to 2010, emergency room visits for sleeping aids – and in particular zolpidem, the generic version of Ambien (approved by the FDA in 1992) – increased 220%, with women and individuals over 45 years of age representing the majority of visits [52]. Many of these visits included cases in which Ambien was combined with other sedatives, antipsychotics, anti-anxiety medications, or alcohol.
Such sleeping pills can also have daytime side effects, such as significant sleepiness, confusion, and dizziness.26 Indeed, reports of the bizarre hallucinations and sordid sleep stories of Ambien users fill the pages of various Internet sites. They include rather comical and not-so-comical stories of Ambien users waking up in the morning to find pots and pans on their countertops and their dining table set with plates and cutlery, their cars parked across the street, and strange packages they had ordered arriving at the door – all accomplished while sleep-walking, sleep-driving, or even sleep-shopping. When stories began surfacing of Ambien users waking up in jail in their pajamas and not knowing how they got there, prosecutors began wrestling with the categorization of sleep aids like Ambien. Such drugs did not neatly fit the description of voluntary intoxicants (like alcohol) or involuntary intoxicants (as in someone drugging you without your knowledge) [53]. According to the National Highway Traffic Safety Administration, the first 3–4 hours after taking Ambien are the most treacherous, with cognitive and coordination degradation leading to problems, even if the user fails to fall asleep.
24. See SleepScore.com.
25. See ResMed.com.
26. See American Addiction Centers website.
In his book on the killing of Osama bin Laden, SEAL Team Six member and author Matt Bissonnette writes of Navy SEALs routinely taking Ambien to deal
with the challenge of all-night training and combat. On the day Bissonnette and his SEAL Team colleagues were set to travel to Pakistan to carry out the daring mission against bin Laden’s compound, he recalls having difficulty recalling the door code to the room he needed to pass through after waking from an Ambien slumber. As Bissonnette described, from the time he left the United States for Pakistan until he returned a week later, he took six Ambien pills – nothing out of the ordinary for elite forces who must sleep and wake on demand. In subsequent interviews on his book, he described the 90-minute helicopter ride to Osama bin Laden’s Abbottabad compound. When the crew announced that they had just crossed the border into Pakistan, half the team was asleep. As Bissonnette tells it, Ambien was widely used by special forces [54]. Yet, it is easy to imagine that for combat troops, rare quiet moments that lend themselves to sleep are precious opportunities to regenerate, especially if they are unsure of how many hours or days they will have to maintain peak fighting alertness. Grabbing some sleep when one safely can is a logical rest-management tactic for seasoned soldiers. The use of pharmaceuticals in the military, to induce sleep or stay awake, has a long history. Soldiers may have to fight for days with little sleep, and pilots may have to stay vigilant for up to 30 hours in cramped, low-lit cockpits. All forms of military personnel may have to endure long quiet hours of nighttime boredom pierced by sudden episodes of short-term life-and-death combat. Amphetamine and methamphetamine (crystal meth) were first made commercially available as prescription drugs in the mid-1930s, quickly becoming the go-to darlings for military troops in World War II. Amphetamine stimulates the central nervous system, particularly enhancing reaction time and coordination.
In the civilian world, the drugs became a popular remedy for narcolepsy and, due to their effectiveness at decreasing appetite, were also routinely prescribed (and commercially advertised) as a remedy for obesity. Yet it was the stimulant qualities of amphetamines and methamphetamines that were soon seized upon by the military, and in World War II, British, German, and Japanese troops consumed them with disturbing regularity [55]. Nicknamed “Bennies” after the brand name Benzedrine, the tablets were consumed in staggering quantities: a 1946 study showed that some 150 million Benzedrine tablets had been used in World War II [56]. From German tank crews to Japanese Kamikaze pilots, amphetamine and methamphetamine pills became an integral part of the war. In the last 3 years of his life, from 1942 to 1945, Adolf Hitler was receiving up to 20 injections a day from his personal physician, Theodor Morell. Among the cocktail of drugs was crystal meth, which Hitler received by injection each day in the morning hours and prior to his most notorious rousing speeches. Indeed, Hitler received injections and IV elixirs containing a vast array of substances, from hormones to heroin, and, according to records, his private train was even stopped during travel so that the injections could be steadily administered [57]. According to American intelligence reports, by the time of his death, Hitler was regularly consuming some 74 types of drugs and narcotics, each injection and IV recorded by his physician. President John F. Kennedy was a long-time sufferer of Addison’s disease (interestingly, a disease that also afflicted Adolf Hitler, and which is potentially fatal if left unchecked). When JFK became President, hormones were used to help regulate
his glandular insufficiency – leading some to speculate whether the supplementation had given rise to Kennedy’s rather notorious reputation for promiscuity. JFK also suffered greatly from relentless back pain and had undergone one operation to remove a herniated disk and another to fuse vertebrae. The President was in such pain that he ultimately welcomed the pain management advice of Dr. Max Jacobson, a Jewish doctor who had fled Nazi Germany in the late 1930s. Dr. Max, as he was known, quickly made a name for himself, providing a cocktail of injections to Hollywood stars and starlets like Marilyn Monroe, Elvis Presley, Elizabeth Taylor, and Frank Sinatra. Dr. Max was first introduced to Kennedy prior to Kennedy’s Presidency, when JFK’s Addison’s disease was causing him great fatigue and lethargy on the campaign trail. Dr. Max’s “energy formula,” first perfected when Jacobson had worked alongside the famous Swiss psychiatrist Carl Jung, had been co-opted by Nazi troops, who found the formula irresistibly strong in boosting alertness and aggressiveness among the SS. Consulting with Dr. Max, the would-be President Kennedy found instant relief from his fatigue and his unforgiving back pain. Often, the mixture was a cocktail of methamphetamines. Dr. Max was careful with Kennedy, at one time trying to caution the President not to overdo it during a crucial meeting with Soviet Premier Nikita Khrushchev, an occasion during which Kennedy demanded several doses because of the challenging negotiations. Yet, the doctor remained careful, as he firmly believed that both Adolf Hitler and Eva Braun had become severely addicted to his injection formula. Despite this, Dr. Max would ultimately treat and inject both President Kennedy and First Lady Jackie Kennedy on a routine basis, being summoned to the White House under the not-so-secret guise of treating a fictitious “Mrs.
Dunn.” The regular plot involved the doctor flying to Washington DC up to 30 times a year on a small, non-commercial twin-engine Cessna piloted by Mark Shaw, who was also Kennedy’s photographer and eventually the world-renowned photographer for Life magazine, Harper’s Bazaar, and Vanity Fair. In the hours leading up to the famously pivotal Kennedy-Nixon debates, Kennedy had been suffering from extreme fatigue, and his voice was barely audible. Dr. Max was called upon, injecting a syringe full of methamphetamine directly into Kennedy’s neck, throat, and voice box [58]. Almost instantly, Kennedy was revived and ready to debate, ultimately turning the tide of the Presidential campaign, as well as US history. During his Presidency, Kennedy would take an array of pharmaceuticals through injections, tablets, and pellet implants to combat his fatigue and pain. In the end, it was not the Addison’s disease or the pharmaceuticals for back pain that contributed most to his demise but the tight corset he was wearing on the day he was assassinated – a back brace meant to keep him straight and free of pain during his car travel. Although the first of the Dallas assassin’s two shots inflicted a non-fatal neck wound, Kennedy’s body did not slump over because of the rigid brace, which ultimately kept him upright and in place for the second, fatal bullet. Speed pills may also have brought the astronauts home during the famous 1970 Apollo 13 mission. Research in the 1960s showed that when sleep-deprived military
personnel were administered amphetamine, cognitive and physiological measurements were elevated to complete functionality [59]. Not only that, very few side effects, if any, were noted, resulting in the US Strategic Air Command and, subsequently, the US Tactical Air Command approving amphetamines for military personnel, in 1960 and 1962, respectively [60]. During what many describe as NASA’s “finest hour,” as three astronauts fought to return safely to Earth onboard Apollo 13, and amid relentless non-stop fatigue, NASA’s Houston Control requested that Jim Lovell and his crew take methamphetamine and Dexedrine [60]. Currently, dexamphetamine is officially administered under tight guidelines for military pilots and soldiers, which include parameters such as time aloft and time between missions [60]. Today, sleeping pills are routinely used aboard NASA space missions, and surely for good reason, given the lack of normal circadian cues to help the brain and body achieve adequate sleep. The International Space Station (ISS) also stocks some heavier-duty drugs, including antipsychotics, antidepressants, and anti-anxiety drugs – and even a set of physical restraints. Worries about how to manage mental health in space have become more mainstream as scientists consider the ramifications of a mental health breakdown on a multiyear mission [61]. During research on this issue, Dr. Rob argued that many of the threats to success for long-duration exploration-class missions now overlap into the domains of social science [62]. The problem of sustainable and reliable crew health is particularly interesting when we consider a multiyear mission, like a future mission to Mars, for example. Indeed, being mentally “flight ready” on the day of launch may not necessarily translate into stable crew health 6 months into the mission – or 2 or 3 years later.
Once again, we are pushing our hardwired brains and bodies to deal with new environments for which we have few built-in coping skills. Relationships ebb and flow, and what NASA calls the isolated confined environment (ICE) can take a toll on wellbeing. Small irritations, such as someone’s table manners or other subtle annoyances, can escalate into chronic low-level conflict that inhibits crew performance [62]. And there can be bigger problems as well. Astronaut Lisa Nowak made headlines when she drove nearly 1000 miles without stopping to confront her colleague, Air Force Captain Colleen Shipman, with the alleged intent to do harm. The two women were part of a love triangle involving a male astronaut. Subsequently arrested, Nowak had in her car a knife, a mallet, rubber tubing, and a BB gun [63]. For many, the most disturbing aspect of the story may be that Nowak had returned from space a mere 7 months earlier – a mission for which she had been categorically assessed as mentally sound. Had Nowak been 7 months into a 3-year Mars mission instead of on the ground, crew dynamics and mission safety could have been seriously compromised. Of course, amphetamines have been growing in mainstream prescriptions as well. Attention-deficit hyperactivity disorder (ADHD) is a growing concern in the United States, and adult prescriptions for ADHD medications have increased over 50% in 4 years, while use among young adults has doubled [64]. The drug of choice, Adderall, an amphetamine, has also overflowed into the common off-label market for young professionals, from Wall Street to Silicon Valley, from construction
workers to university students – all seeking a fix for never-ceasing connectivity and work routines. Already a $13 billion industry, the ADHD drug market is expected, by 2020, to grow to half the size of today’s coffee industry [65]. In the spirit of “if you can’t beat ‘em, join ‘em,” Forbes magazine recently wrote of the ADHD drug’s “super-power,” citing the wildly successful achievements of ADHD “sufferers” like Sir Richard Branson (Virgin Group), Ingvar Kamprad (Founder and Chairman of IKEA), and Charles Schwab (of the largest investing firm in the United States, of the same name). Such ADHD “sufferers” and their meds may well be leading an emerging pop culture trend when it comes to high achievement and tireless performance [66]. In Silicon Valley, extraordinarily competitive environments have given rise to biohacking, the name for attempting to improve one’s brain and body through diet and so-called smart drugs such as nootropics or racetams. Nootropics are pharmaceutical performance cocktails that often include a variety of off-label drugs, including small doses of LSD and amphetamines, although many users do stick to more “conventional” caffeine and amino acid mixtures. Other regimens include modafinil, a stimulant, while still others include drugs not approved by the FDA and substances banned by the World Anti-Doping Agency [67]. As is so often the case, we fail to recognize the lessons that history provides, such as the period of near-epidemic use of amphetamines by commercial truck drivers in the 1950s, who, collectively, consumed half of the 8–10 billion pills produced annually – this despite a culture of low prescription rates [67]. The epidemic was eventually exposed because of the frequency of highway hallucinations and crashes, and with investigatory pressure from police and the FDA, the culture of drug use in commercial trucking began to wake up to the crisis. Despite these lessons, modafinil remains rampant as the drug of choice for Wall Street traders.
A powerful stimulant that does not excite the brain’s intensely dominant dopamine reward center – and so does not result in addiction – modafinil is providing everything from undivided attention to boundless energy to even greater levels of perceived happiness. It is presumed that modafinil inspired the movie Limitless, in which Bradley Cooper plays a struggling writer who begins a course of mysterious performance tablets that results in an instant cognitive epiphany and extraordinary levels of intelligence, wit, and performance. There is no question that pharmaceuticals save lives and extend healthy years for many who suffer from legitimate ailments and disease processes, but in many other cases, our capacity to alter our reality through drugs is contributing to a dangerous spiral of dependency. If you walk down a street in America, one in ten of the children you see is on ADHD drugs. US states vary, with Nevada having 4.2% of children diagnosed with ADHD and Kentucky having 14.8%.27
27. See CDC website data on ADHD.
In her book, A Disease Called Childhood: Why ADHD Became an American Epidemic, Dr. Marilyn Wedge notes that in France, rates of childhood ADHD diagnosis are less than 0.5%. While many social and cultural factors may well influence childhood behavior in French homes, French doctors have also railed against the Diagnostic and Statistical Manual of Mental Disorders
(DSM) used to define ADHD. Instead, the French Federation of Psychiatry decided, long ago, to redefine observable ADHD behaviors in terms of a sociological disorder – something that can be addressed through counseling and psychotherapy rather than pharmaceutical interventions [68]. However, more recent analyses have indicated that the French way may be ignoring the reality of ADHD among French children and that the French medical community’s bias toward environmental and social factors versus real neurological causes may have created a situation in which ADHD has become “clinically invisible” to many French clinicians [69]. Indeed, the DSM 5th Edition (DSM-5) suggests that cultural interpretation of ADHD in society may affect rates of diagnosis between countries – a rate that, on average, is roughly 5–7% of all children around the world.28 With our daily stimulating light pollution, our hectic schedules, and our disregard for natural circadian rhythms, it’s no wonder modern life is marked by adversity when it comes to sleeping and waking on schedule. In recent years, shift workers, including doctors and pilots, were the main populations of concern when it came to the physiological harm associated with sleep deficit and random sleep cycles. Today, the at-risk population is everyone who carries the non-stop buzzing of the online world in their pocket or at their fingertips. Through never-ending media streams and communication, sleep-disturbing devices command our attention and blur the social and cultural lines between work life and home life. While the artificial light from digital devices disrupts and delays our sleep patterns, even in the absence of our smartphones, new social expectations abound, including never being fully disconnected from one’s online life.
Children and young adults who have never known any world other than the instantaneous digital one are forming new social patterns underpinned by a seemingly never-ending anxiousness about missing out on emerging social media conversations and peer-group updates. So recent are these changes within the larger scope of our evolutionary history that we are only beginning to see and feel the ramifications of such modern sleep upset. To be sure, our history with sleep aids and wake aids is not new, yet we seem no closer to a willingness to learn from the lessons of our past, unless that willingness includes making better drugs. And yet the counterforce to such pharmaceutical enlightenment is the nature of drugs themselves. With new, less addictive versions emerging onto the market, our acceptance of drug-induced (low and high) energy may become increasingly common. At its core, our ancient physiology requires little tweaking – and when we fail to respect that, whether driving home from an all-nighter while singing to stay awake, forcing ourselves from bed for early school bell times, or sleeping with our glowing and buzzing phones beside our pillows, our hardwired brains and bodies will tolerate only so much interference before assuming control of our biological needs and forcing us to shut down. Long-term performance doesn’t come in pill form, and yet our short attention spans, exacerbated by our world of instantaneous information, have created a host of new modern health hazards, which we must face with clear-headed resolve.

28. See ADHD Institute data.

References


1. Lee ML, et al. High risk of near-crash driving events following night-shift work. Proc Natl Acad Sci U S A. 2016;113(1):176–81.
2. Poudel GR, et al. Losing the struggle to stay awake: divergent thalamic and cortical activity during microsleeps. Hum Brain Mapp. 2014;35:257–69.
3. Green P. Sleep is the new status symbol. The New York Times, 8 Apr 2017.
4. Ekirch AR. Sleep we have lost: pre-industrial slumber in the British Isles. Am Hist Rev. 2001;106(2):343–86.
5. Harvard Gazette Press Release. New study shows naps may reduce coronary mortality. Boston: Harvard School of Public Health; 2007. http://archive.sph.harvard.edu/press-releases/2007releases/press02122007.html.
6. Naska A, Oikonomou E, Trichopoulou A, Psaltopoulou T, Trichopoulos D. Siesta in healthy adults and coronary mortality in the general population. Arch Intern Med. 2007;167(3):296–301.
7. Medvedev A. Shedding near-infrared light on brain networks. J Radiol Radiat Ther. 2013;1:1002.
8. Tucker MA, et al. A daytime nap containing solely non-REM sleep enhances declarative but not procedural memory. Neurobiol Learn Mem. 2006;86(2):241–7.
9. Jaggard V. Naps clear brain's inbox, improve learning. National Geographic News, 23 Feb 2010.
10. Moser D, et al. Sleep classification according to AASM and Rechtschaffen & Kales: effects on sleep scoring parameters. Sleep. 2009;32(2):139–49.
11. Gordon AM. Your sleep cycle revealed. Psychology Today, 26 July 2013.
12. Pase MP, Himali JJ, Grima NA, Beiser AS, Satizabal CL, Aparicio HJ, Thomas RJ, Gottlieb DJ, Auerbach SH, Seshadri S. Sleep architecture and the risk of incident dementia in the community. Neurology. 2017;89(12):1244–50.
13. McNamara P. Psychopharmacology of REM sleep and dreams. Psychology Today, 4 Dec 2011.
14. Jiva TM. Pharmacological effects of REM. Sleep Review, 7 May 2002.
15. Moore TJ, Mattison DR. Adult utilization of psychiatric drugs and differences by sex, age, and race. JAMA Intern Med. 2017;177(2):274–5.
16. Kaufman DM. Clinical neurology for psychiatrists, XI. Philadelphia: Saunders; 2007.
17. Nicholson C. Strange but true: less sleep means more dreams. Scientific American Health, 20 Sept 2007.
18. Ebrahim IO, Shapiro CM, Williams AJ, Fenwick PB. Alcohol and sleep I: effects on normal sleep. Alcohol Clin Exp Res. 2013;37(4):539–49.
19. Heffron TM. Insomnia awareness day facts and stats. American Academy of Sleep Medicine, 10 Mar 2014. www.sleepeducation.org.
20. Roehrs T, Papineau K, Rosenthal L, Roth T. Ethanol as a hypnotic in insomniacs: self administration and effects on sleep and mood. Neuropsychopharmacology. 1999;20(3):279–86.
21. Frezza M, et al. High blood alcohol levels in women. The role of decreased gastric alcohol dehydrogenase activity and first-pass metabolism. N Engl J Med. 1990;322:95–9.
22. Guglielmo B, Silvana P. Sleep and obesity. Curr Opin Clin Nutr Metab Care. 2011;14(4):402–12.
23. Theorell-Haglöw J, Berne C, Janson C, Sahlin C, Lindberg E. Sleep. 2010;33(5):593–8.
24. Obesity Society. Insulin sensitivity: one night of poor sleep could equal six months on a high-fat diet, study in dogs suggests. Science Daily, 4 Nov 2015.
25. Pejovic S, Vgontzas AN, Basta M, Tsaoussoglou M, Zoumakis E, Vgontzas A, Bixler EO, Chrousos GP. Leptin and hunger levels in young healthy adults after one night of sleep loss. J Sleep Res. 2010;19(4):552–8.
26. Hayes AL, Xu F, Babineau D, Patel SR. Sleep duration and circulating adipokine levels. Sleep. 2011;34(2):147–52.
27. Schmid SM, et al. A single night of sleep deprivation increases ghrelin levels and feelings of hunger in normal weight healthy men. J Sleep Res. 2008;17(3):331–4.


28. News Release. Sleepless in the south: Penn Medicine study discovers state and regional prevalence of sleep issues in the United States. Penn Medicine News, 23 Feb 2012.
29. Bernardo R. Most & least stressed states. WalletHub, 2017. https://wallethub.com/edu/most-stressful-states/32218/.
30. Chukwuonye IL, et al. Prevalence of overweight and obesity in adult Nigerians – a systematic review. Diabetes Metab Syndr Obes. 2013;6:43–7.
31. Walker M. Why we sleep: unlocking the power of sleep and dreams. New York: Scribner; 2017.
32. Cooke R. The shorter your sleep, the shorter your life: the new sleep science. The Guardian, 24 Sept 2017.
33. Besedovsky L, Lange T, Born J. Sleep and immune function. Pflugers Arch. 2012;463(1):121–37.
34. Fitzgerald BR. Data point: how many hours do millennials eat up a day. Wall St J, 13 May 2014.
35. PWC. Millennials at work: reshaping the workplace, 2008. www.pwc.com.
36. Blodget H. 90% of 18–29-year-olds sleep with their smartphones. Business Insider, 21 Nov 2012.
37. Power S. Sleepless in school? The social dimensions of young people's bedtime rest and routines. J Youth Stud. 20(8):945–58.
38. Pei J. Smartphones interfere with teen sleep in unprecedented ways. Vanwinkle Online, 30 June 2017.
39. TeleNav. Survey finds one-third of Americans more willing to give up sex than their mobile phones. Sunnyvale: TeleNav Press Room; 3 Aug 2011.
40. ScienceDaily. Light-emitting e-readers before bedtime can adversely impact sleep. Boston: Brigham and Women's Hospital Press Release; 22 Dec 2014.
41. Richter R. Among teens, sleep deprivation an epidemic. Stanford Medicine News Center, 2015. www.med.stanford.edu.
42. Wheaton A, Ferro GA, Croft JB. School start times for middle school and high school students – United States, 2011–12 school year. MMWR Morb Mortal Wkly Rep. 2015;64(30):809–13.
43. American Academy of Pediatrics. Let them sleep: AAP recommends delaying start times of middle and high schools to combat teen sleep deprivation. News Room, 25 Aug 2014.
44. American Academy of Pediatrics. School start times for adolescents: policy statement. Pediatrics. 134(3):642–9.
45. Jacob BA, Rockoff JE. Organizing schools to improve student achievement: start times, grade configurations, and teacher assignments. The Hamilton Project discussion paper. Washington, DC: Brookings Institution; 2011.
46. Lufi D, Tzischinsky O, Hadar S. Delaying school starting time by one hour: some effects on attention levels in adolescents. J Clin Sleep Med. 2011;7(2):137–43.
47. Basch CE, Basch CH, Ruggles KV, Rajan S. Prevalence of sleep duration on an average school night among 4 nationally representative successive samples of American high school students, 2007–2013. Prev Chronic Dis. 2014;11:140383.
48. Vorona RD, Szklo-Coxe M, Wu A, Dubik M, Zhao Y, Ware JC. Dissimilar teen crash rates in two neighboring southeastern Virginia cities with different high school start times. J Clin Sleep Med. 2011;7(2):145–51.
49. Danner F, Phillips B. Adolescent sleep, school start times, and teens. J Clin Sleep Med. 2008;4(6):533–5.
50. Doheny K. Sleep drugs often prescribed for kids. WebMD, 1 Aug 2007.
51. Romm C. Americans are getting worse at taking sleeping pills. The Atlantic, 12 Aug 2014.
52. The DAWN Report. Emergency department visits for adverse reactions involving the insomnia medication zolpidem, 1 May 2003.
53. McCabe A. The disturbing side effect of Ambien, the No. 1 prescription sleep aid. Huffington Post, 15 Jan 2014.
54. Hudson J. How does SEAL Team Six get to sleep? Lots of Ambien. The Atlantic, 11 Sept 2012.
55. Cornum R, Caldwell J, Cornum K. Stimulant use in extended flight operations. Airpower J. 1997;Spring:53–8.


56. Bett WR. Benzedrine sulphate in clinical medicine: a survey of the literature. Postgrad Med J. 1946;22:205–18.
57. Doyle D. Adolf Hitler's medical care. J R Coll Physicians Edinb. 2005;35(1):75–82.
58. Lertzman R, Birnes W. Dr. Feelgood. New York: Skyhorse Publishing; 2013.
59. Weiss B, Laties VG. Enhancement of human performance by caffeine and the amphetamines. Pharmacol Rev. 1962;14:1–36.
60. Kamienski L. Shooting up: a short history of drugs and war. Oxford: Oxford University Press; 2016.
61. Vakoch DA, editor. Psychology of space exploration. The NASA history series. Washington, DC: National Aeronautics and Space Administration Office of Communications; 2011.
62. Barrett R. Borrowing from security strategy: can red teams help astronauts prepare for crew conflict in space? Can Mil J. 2009;9:4.
63. Morris NP. Mental health in outer space. Scientific American, 14 Mar 2017.
64. Schwarz A. Report says medication use is rising for adults with attention deficit. The New York Times, 12 Mar 2014.
65. IBISWorld Report. ADHD medication manufacturing: market research report, Aug 2016.
66. Archer D. ADHD: the entrepreneur's superpower. Forbes Magazine, 14 May 2014.
67. Kendall M. "Hacking" the brain: Silicon Valley entrepreneurs turn to fasting and "smart drugs". The Mercury News, 9 July 2016.
68. Wedge M. A disease called childhood: why ADHD became an American epidemic. New York: Avery; 2015.
69. Ellison K. French kids DO have ADHD. Psychology Today, 4 Nov 2015.

6  Are We Hardwired for Risk?

On the evening of June 6, 1944, General Dwight D. Eisenhower, Supreme Commander of the Allied Forces, scribbled a hastily written note in pencil and placed it in his uniform pocket. That note contained Eisenhower's sole acceptance of responsibility for the failure of D-Day. In sharp contrast to his carefully crafted orders to launch Operation Overlord, the D-Day assault on German forces in France, Eisenhower's penciled note shows just how fantastically risky the historic invasion really was. Eisenhower's note read, "My decision to attack at this time and place was based upon the best information available. The troops, the air and the navy did all that bravery and devotion to duty could do. If any blame or fault attaches to the attempt, it is mine alone."1

The invasion force of 156,000 British, American, and Canadian troops, whose attack was carried forth on 5000 ships and in 11,000 planes, was threatened on all fronts, not only by the entrenched and advantaged German armies and their conscripts but by uncooperative weather, intelligence failures, and critical vulnerabilities, such as supply lines. Elaborate American "ghost armies," under the code name Operation Fortitude, were poised along alternate invasion points to entice the German military leadership into believing that an invasion would most likely occur through Norway or Pas-de-Calais, the latter of which constituted the shortest overwater voyage between England and France. Convincing the Germans of this grand ruse required the espionage prowess of double agent Juan Pujol, a Spaniard who had very gradually won the trust of Britain's MI5. Pujol, who had invented numerous fictitious characters and sub-agents for the Nazis to hunt down, fed the German Generals false information about where the Allied invasion would take place. The only loose end in Pujol's fantastic espionage was someone who knew him all too well.
Indeed, the entire fate of D-Day came stunningly close to failure when Pujol's wife, who had tired of the harsh realities and poor food in war-ravaged London, threatened to expose her husband if British Intelligence would not grant her the freedom and luxury of returning to her homeland, Spain. Ever the spy, Pujol devised a plan to have his wife abducted, blindfolded, and brought to an interrogation center, where he and she would reunite after her scare. He would ultimately "facilitate" her release under the promise she would honor his secret life, including his critical orchestration of D-Day deceptions.

While such close calls were common in the intelligence sphere, D-Day also depended heavily on optimal weather. Specifically, Eisenhower needed a full moon, low flood tide, and calm winds. For weeks, several international teams of meteorologists competed around the clock in advance of D-Day to develop the most accurate forecast for Eisenhower. Ultimately, the original date of June 5 proved inadequate because of high winds, which would have rendered the landing craft unusable. But not all meteorological teams agreed. With conflicting information, Eisenhower had a difficult choice to make. On that morning, Eisenhower decided on the forecast of the American meteorologist, revising his timeline for the D-Day launch by one day, to June 6. Despite the American's forecast for lower winds, the gale remained strong, threatening the landing craft that would ultimately bring the troops to French shores. Yet, against all odds, and as we all well know, the Allies did land on the beaches with thousands of troops running forth, headlong into a horizontal hailstorm of German bullets.

In the days leading up to D-Day, Eisenhower had the weight – and the fate – of the world on his shoulders. He had worked non-stop, around the clock, for months on end. His blood pressure was dangerously high, his diet a straight-up mix of coffee and four packs of cigarettes a day. Postponing D-Day for 24 hours only added to his work, as it meant getting coded messages to thousands of vessels and commanders who were already in motion for the attack – all without allowing any of them to inadvertently miss his instructions of delay, lest it ruin the Allies' cover.

1. See https://www.archives.gov/files/education/lessons/d-day-message/images/failure-message.gif.
With only a 3-day window to launch the attack because of tides, moonlight, and weather, Eisenhower was under tremendous pressure. With the D-Day invasion plans having been scrutinized and approved months in advance, British Air Marshal Leigh-Mallory confided to Eisenhower, only days before the invasion, his concern that the Airborne units, who were tasked with parachuting into the mainland to secure the narrow causeways that stretched from the beaches inland, would suffer a 90% casualty rate before their boots reached the ground. If this were to happen, it would mean that the causeways would not be secured and the thousands of men in the landing crafts would be slaughtered – and D-Day would fail. After much reflection, Eisenhower said his orders would stand and that D-Day would progress as he had planned. As history now tells, against formidable odds, the Airborne paratroopers suffered only an 8% loss [1].

It was not that Eisenhower didn't care about the welfare of his troops or that he considered them expendable cannon fodder in the struggle for the Allied victory. In fact, it was quite the opposite. On the eve of D-Day, Eisenhower insisted that he personally visit the 101st Airborne troops on the airfields before they launched into German-held territory. Covering up the General's license plate on his car and taking none of his typical entourage, Eisenhower wanted to spend time with his men – to look them in the eyes and talk to them, soldier to soldier, human to human.


Five beaches, codenamed Sword, Juno, Gold, Omaha, and Utah, were the landing targets for 73,000 American and 83,000 British and Canadian troops. Omaha Beach, the target of the American-led invasion, was particularly well defended by the Germans, with nine out of every ten men in the first wave cut to pieces by German artillery. Omaha Beach, the deadliest of the five attack points, was also where famous war photojournalist Robert Capa made his historic debut.

On that day, Capa, a Hungarian Jew who is still described as the greatest combat photographer in history, did the unthinkable – running unarmed onto Omaha Beach with the initial wave of landing craft, shoulder-to-shoulder with the first American soldiers. Amidst a wall of bullets that tore at the water around him, Capa leapt from the landing craft into waist-deep water, miraculously intact with three Contax cameras slung around his neck. Unlike the soldiers around him, he was unable to stay low, submerge, and swim, lest he ruin his camera equipment. Against all odds, he waded forth toward the steel German artillery walls and the some 6 million mines that the Germans had planted in the beach using slave labor. Along with a Jewish medic and an Irish priest, who had also braved the impossible to make it onto the sand, Capa snapped photo after photo – his memoirs describing his experience of being completely encircled by hundreds of men – screaming, body parts in the air, many on fire, and a sea whose surf had turned crimson [2]. Despite a nearly 90% fatality rate from German bullets and artillery, Capa, a civilian, survived to capture over 100 photos of the bloody beach landing. Much like the odds of soldiers surviving that day, only 11 images survived to make it to print.
Capa's photos would become the most defining images of World War II, with Life magazine choosing to print all 11 of Capa's photos, explaining that the blurriness of the images was the result of Capa's shaking hands amidst the horrific slaughter that encircled him.

The extraordinary risk of D-Day cannot be overstated. On the eve of the invasion, Winston Churchill wrote a note to his wife, including the now famous line, "Do you realize that by the time you wake up in the morning, 20,000 men may have been killed" [3]. Ultimately, the price paid for turning the tide against the Germans on D-Day would be nearly a half million lives. The long-term reward for freeing the world of tyranny is surely incalculable.

The famous military theorist Carl von Clausewitz characterized good military strategy as employing means (battles or politics) in ways that end war. Like many of Clausewitz's short and weighty statements, hidden in this rather minimalist definition are countless layers of complexity and questions as to what constitutes the "means" to an end and what price we may be willing to pay for that end. For D-Day decision-makers like Eisenhower, Churchill, and Roosevelt, the enormous rewards for changing the course of the war and winning against juggernaut Germany were worth the potential risk of losing it all – the brave soldiers, the planes and ships, the territory, and whatever else might suffer from any forthcoming retaliation had the Germans won the day.

The capacity to order thousands of men into harm's way, in many cases to certain death, was not a measure of leadership madness. Nor, as some might argue, was it merely part of some cold economic calculus. Eisenhower cared about the lives of


those he led, and he was satisfied to give others credit for any success that was achieved under his command.

While Eisenhower's decisions may represent one of history's ultimate risk assessments, evaluating costs and benefits is one of the inherent processes of human survival. Our hardwired instincts have gifted us with an uncanny ability to measure potential rewards against the costs of trying to attain those rewards. Often this is a logical determination, but oftentimes it is based on a type of gut feeling that harnesses our hardwired instincts, including experiences both learned directly and those we draw from history through the experiences of others. In the end, the General had to weigh options and go with what he sensed to be the best course of action, judging the risk of exposing the entire mission to the Germans if he delayed, as well as the risk of operational failure, should the weather, tides, or initial assaults fail to materialize as anticipated.

Feeling our way to a decision need not be haphazard. Our hardwired instincts run deep, and like a General's knowledge of historical military battles and theory of war, we all come equipped with hardwired instincts honed through countless hard-won ancestral evolutionary bouts with friend and foe. But are all our decisions so rational, conscious, and deliberate, or are we driven by deeper social and biological hardwiring that works beneath the surface?

How to make decisions, how to maneuver, and how to motivate others in extremely high-risk environments is not the sole domain of military Generals. There are countless other examples of bravery and perseverance against all odds, such as Apollo 13's miraculous recovery from space, the 1980 secret extraction of American diplomats from Iran by Canadians, or Edmund Hillary and Tenzing Norgay's first summit of Mt. Everest.

Those old enough to remember the original Star Trek will fondly recall the juxtaposed way in which Captain Kirk (William Shatner) and Mr.
Spock (Leonard Nimoy) arrived at their decisions. Kirk, the passionate, intense, but cool-headed Starship Commander, was portrayed as a leader gifted in reading people's intentions (or aliens', as the case may be). Spock, as we know, was nearly devoid of emotion, a Science Officer who made judgments based on pure logic, including quick calculations of collective gains and losses. These different styles would become a central theme in the way the show portrayed the crew's onscreen problem-solving. Kirk would seemingly take unnecessary risks or bait his foes into doing the same, often acting on intuition, a style that frequently conflicted with Spock's logical calculus.

Our own decision-making, and how we measure risk and reward, is much closer to Kirk than Spock. Although Kirk's decisions were often based on wisdom gained from experience, many of our decisions are driven by deep hardwired desires of which we may not even be fully aware. And these evolved instincts, which have helped us survive, have implications for our behavior and our health.

The tendency to judge hazards and decide on acceptable risk is not new, nor is it an exclusively human process. While humans may have a unique capacity for evaluating risk because of our highly developed executive functions, we are not immune


to the hardwired instincts that are deeply embedded in our evolutionary operating system. Throughout the animal kingdom, biologists see evidence of species who assume risks that may seem, at first glance, rather foolhardy but, in fact, are almost always tied to some form of reward. However, the rewards that influence these risky moves – and that so often influence our own decisions – are not always readily apparent and are often hidden from our conscious calculus.

University of Louisville researcher Lee Dugatkin describes his graduate study of animal risk-taking as an obsession. Dugatkin investigates the curious behavior of some wildly risky members of the guppy fish family, as they choose to venture away from the protection of their school to look for, and taunt, larger fish who want to eat them [4]. As one can imagine, this strange behavior comes with the rather grave and immediate threat of being effortlessly devoured by predator fish. And indeed, the scouting guppies do routinely suffer this fate. However, the reward for surviving such death-defying heroism is first serving at the mating buffet upon the guppy's successful return. While, in nature, female selection of a male mate tends to favor bold coloration and elaborate courting displays as signals of favorable genetics, Dugatkin's research demonstrates that given a choice between a colorful male and a fearless male, females will choose the less colorful but more behaviorally bold male most of the time [5]. Contrary to conventional wisdom, scientists hypothesize that when female guppies observe a male searching for predators and then evading their lightning-fast teeth, the male guppy's risky behavior may be seen as proof of just the sort of survival experience needed to protect the female (and her genes). Such anti-predator or "reverse-stalking" behavior is also observed among other creatures.
Among certain gazelle populations, mature adults will directly approach lions and cheetahs in plain view, sometimes even following the predator cats. This reverse-stalking inhibits the cats' ability to mount a surprise attack, thereby reducing the overall risk [6].

The female selection advantage bestowed on high-risk behavior was also evident in the gladiators of ancient Rome, whose treacherous journey began with the horrifying oath of the Sacramentum Gladiatorum: "uri, vinciri, verberari, ferroque necari" – a mandatory vow to endure being burned, bound, beaten, and slain by the sword, as well as swearing away any help that might be offered to ameliorate their suffering [7]. Such was the spectacularly deadly existence of gladiators who entered combat under the banner munera sine missione – games without remission, or fights to the death. Gladiators, who tended to be societal outcasts, slaves, or criminals, often exchanged criminal prosecution and execution for a short, painful, but potentially glorious career in the arena. Of course, the very best gladiator champions, the ones who survived the torturous horrors of the Colosseum ring, were rewarded not only with another day of life but often with secret favors from Rome's most aristocratic women. Even the written text of the Gladiator Manual alludes to the very common but little acknowledged practice of stealthy sex with Roman women of high society [8]. It was not uncommon for well-heeled women of Rome to venture from the safety of their fortified homes and bedchambers under the darkness of night, to slip


unnoticed down the streets and alleys, and into the quarters of the gladiators. With the typical gladiator being taller and more muscular than the average man, there is little doubt that physical attraction and sexual desire played a role in the choice these women made in arranging their secret encounters. Yet these were highly risky choices, with the ever-present threat of being found out and potentially abandoned by their wealthy husbands, as well as the threat of personal harm, and even the threat of becoming impregnated by an otherwise economically and socially undesirable male. So routine were these romps that they inspired the theory that Faustina, wife of Roman Emperor Marcus Aurelius (perhaps the last of the five "good" Emperors), had borne her son – the evil megalomaniac Commodus – from a secret and torrid love affair with a gladiator.

Such high-risk behavior underscores the degree to which biological forces can override far safer decision-making logic, and it shows how genetic reward is often bestowed on those who display boldness, adventure, and death-defying risk. Evidence suggests men unwittingly capitalize on this subconscious risk-lust drive as a sexual display strategy. When researchers studied skateboard tricks among young male skaters, they noted that when the scientific researcher was exchanged for an attractive female, the skateboarding males' testosterone levels increased and they would attempt much riskier moves – even if it meant that they would be unsuccessful at completing the more difficult stunts and might suffer injury [9]. It seems that crashing on their skateboards and getting hurt was less important than the need to showboat for the desirable female. It's not that foolhardiness and risk are favored in natural selection; it's surviving the foolhardiness and risk that is favored. Genetics rewards outcomes, of course; one must survive the high-risk antics.
The swirling cocktail of high-risk courtship behavior becomes even more complex with studies demonstrating that, when asked directly, women often proclaim that they do not find risk-takers more attractive – in fact, they find cautiousness more appealing [10]. While this seems to run entirely counter to what was just discussed, when we dig deeper, we find that men who take risks enjoy higher "status" among their male counterparts, and it is this status that women find most appealing. A high-status male generally means more security for the female – and hence, risk-taking equals status, and status gets the girl. The female doesn't want to breed with a foolish risk-taking idiot; she wants to breed with the alpha male, because the alpha can protect and provide for her, as well as her future offspring.

Studies suggest that not all risks can be considered equal. In a poll of European and American women, risk-taking in social settings was considered attractive in males, but not so for "abusive" risk-taking, like gambling or acts that negatively affected one's health [11]. These findings support the idea that risk-taking is rewarded in preferential mate selection if the risk is perceived as having significant survival value. In this way, less colorful guppy fish may prove their mating quality by demonstrating their skill at evading predators. One theory suggests that lower quality males may attempt to improve their chance of winning female attention and selection through trait "amplifiers," which may boost their profile among higher quality males [12].


Our methods of social display – the ways in which we showcase our bravado to others – have perhaps changed more in the past decade than in all of previous history, and much of this is due to the way we communicate over the Internet. The first era of Internet use was almost entirely unidirectional. Information flowed – slowly – from the fingertips of specialized coders to passive readers. Many still remember the slowness of dial-up Internet: viewers could go make a sandwich in the time it took for a page to load, all while listening to the screeching fax-like dial-up tones. Millennials and more recent generations will have no recollection of this early era of slow Internet. Much of this one-way information flow was surely due to our technological limitations at the time, but it was also likely attributable to the communication models and modalities we were most accustomed to: the printed book, designed for passive enjoyment, not two-way interaction.

Blogging introduced an interface by which those with little or no Internet savvy could post opinion columns, often in long 1000-plus word essays. But as the Internet grew up and a growing number of interesting webpages vied for our finite attention, authors of online media had to conjure up faster ways to capture and keep our interest. Short and pithy micro-blogs, measured not in words but in characters, gained immense popularity, followed by social media sites dedicated solely to photo and image sharing as a means of quick messaging.

Like the less colorful fish who looks for ways to boost his appeal, the Internet is a superb forum for us to highlight those traits we feel will garner the most accolades. Selfies, in photo and video form, are the preferred methods of social display and status comparison among younger generations. Indeed, it is estimated that the average Millennial will post some 25,000 photos of themselves on the Internet [13].
A quick search of the top hashtags on social media shows 374 million images with the hashtag #followme, followed closely by 341 million images with the hashtag #me. The use of online media as a tool for social display is rooted in our innate (and sometimes desperate) drive to improve our status and ranking within our chosen peer group. Perhaps one of the more dangerous ways in which this ancient psychology plays out is in the need to outdo the selfie photos of others through the daring and risky behaviors that selfie-posters employ to stand out in the crowd. Not so dissimilar to the simple-minded male guppy who charges predators to win the attention of female onlookers, showcasing one's risk appetite and hopeful survivability can win a great deal of attention and notoriety among a menagerie of otherwise colorful competitors. A grimly fascinating statistic is that deaths due to selfies now claim more lives each year than shark attacks [14]. Of these deaths, falling from a height is the number one cause among those attempting to capture their daring self-images, followed closely by drowning and then being hit by a train [15].

We often believe that the idea of "image management" is a conscious and constructed strategy for improving our relative status. And yet so much of what we do and how we act in terms of our image and behavior is deeply rooted in our subconscious drives and desires. Take, for example, the numerous studies that discuss why most women will take selfies with a high camera angle looking down on their faces
and bodies, while men tend to take the image from a lower perspective, with the lens looking up at them from below. For women, there is an evolutionary advantage in appearing prosocial, attractive, non-threatening, and submissive, which is favored by a top-down image. For men, pointing the camera upward portrays tallness, strength, a powerful physique, and dominance [16]. Women also tilt their heads more than men to appear more expressive – on average around 50% more, roughly 12 degrees of head tilt in women versus 8 degrees in men (a pattern documented by the Selfcity.net project, which studied 656,000 images from around the world). Of course, men and women do not calculate the precise angle of their head tilt during selfie photos – it's an unconscious part of our communication process. Our body is communicating a message without our taking an active role in guiding it – as if brain and body were on autopilot – a fascinating phenomenon. A woman who is attracted to a man will often expose the inside of her wrist to him or show more of her neck, two of the body's most vulnerable parts. This is not unlike other species in the animal kingdom, particularly carnivores, whose submissive behavior – called "appeasement" – is often signaled by the less dominant animal rolling onto its back and exposing its highly vulnerable belly and neck to the more dominant one. A man interested in a woman, meanwhile, may position his feet to point squarely toward her or, if seated, may widen his legs – in a more dominant, controlling, and suggestive way. Without realizing it, each is receiving the other's message loud and clear. These are ancient forms of communication. If body language is often unconscious, is risk-taking unconscious as well? Jim Wickwire is a legend in the world of mountain climbing.
Along with his passion for summiting the most inhospitable mountain peaks on Earth is his propensity for daring challenges, a trait shared by members of his mountain-climbing community, who uniformly top the charts for risk-taking [17]. Wickwire is famous not only for his achievements in mountaineering but also for his resilience in continuing to pursue expeditions in the aftermath of numerous tragedies. In 1981, while traversing a glacier on Mount McKinley, Wickwire's 25-year-old climbing partner, Chris Kerrebrock, fell into a crevasse. As they were roped together, Wickwire fell as well but managed to gain a desperate foothold on the vertical ice wall using his ice axe and crampons. Despite a broken shoulder, Wickwire managed to pull himself up to where Kerrebrock was stranded. Kerrebrock's fall had been arrested when his backpack became firmly wedged between the narrow ice walls; he dangled helplessly above the icy depths of the crevasse. Wickwire tried desperately to free Kerrebrock but, without any luck, had to continue his ascent to the surface. From the mouth of the crevasse, Wickwire continued to try to free Kerrebrock, and when that proved unsuccessful, he once again descended by rope into the crevasse to try to dislodge him. All attempts failed, and the two men resigned themselves to the horrible reality that Kerrebrock would die there. After what we can only imagine was a desperately heartbreaking goodbye, during which they discussed what to do with Kerrebrock's body, Wickwire again ascended the wall of the
crevasse to the surface. Kerrebrock froze to death during the night [18]. That was not Wickwire's only brush with tragedy. He had also seen fellow climbers fall from impossible heights – a woman falling 6000 feet to her death and two others falling 4000 feet. Despite the high risks that surrounded him, Wickwire went on to become the first American to summit K2, the second highest mountain in the world. The climb included a solo overnight on the side of the peak at 27,000 feet, at a temperature of −35 °F (−37 °C), against impossible odds: an uninsulated sleep sack, no food, no water, and an empty oxygen bottle. To add a dash of challenge, partway through the night Wickwire discovered that he was slowly sliding downhill in the dark toward a 2-mile-high cliff, stopping his slide in the nick of time. It's fitting that Wickwire's autobiography should be titled Addicted to Danger [18]. Are Wickwire and his mountain-climbing comrades really addicted to danger? Climbers rank among the very highest in risk-taking behavior, but when researchers have tried to discover the common denominator of risk-taking, they have found that there can be many differing reasons why individuals willingly choose to engage in extreme risk. While the initial inspiration for strapping on the crampons varies between climbers, one personality trait, sensation-seeking, tends to be fairly common among high risk-takers [19]. Sensation-seekers are more inclined to seek out novel or intense experiences and are willing to accept a great deal of risk (physical, financial, social, or other forms) to make it happen [20]. According to Marvin Zuckerman, the originator of sensation-seeking theory, the trait is also linked to negative behaviors such as drug use, excessive alcohol use, and high-risk sexual behavior [21]. When the Sensation-Seeking Scale came into being in the 1960s, only one form of the sensation-seeking trait was measured.
However, it was quickly determined that there are actually several forms of sensation-seeking, so today the Scale measures four factors: Thrill & Adventure Seeking (TAS), Experience Seeking (ES), Disinhibition (DIS), and Boredom Susceptibility (BS) [22]. When it comes to adventure sports like mountaineering, Thrill & Adventure Seeking is most often implicated, because of the excitement that comes from surviving nature's challenges and inhospitable conditions. A second popular theory of risk-taking considers the state of mind we may be in at the time, rather than a consistent personality trait. In reversal theory, we are able to flip back and forth between higher-risk and lower-risk behaviors, depending on the situation or context, the pressures we are experiencing at the moment, or the social cues we are perceiving. Such reversals can take place between telic states (actions motivated by serious long-term goals) and paratelic states (actions motivated by short-term playfulness experienced in the moment). The argument goes that most of us tend to fall into one of these camps most of the time but may occasionally visit the other camp when the situation is right. These are similar to theories that describe eudaimonically and hedonically oriented individuals: the former motivated by personal excellence and long-term goals, the latter by pleasure and enjoyment [23].

High sensation-seeking "Thrill & Adventure" behavior has also been associated with aggressive driving – speeding, quick lane changes, following too close, and driving while impaired. There is evidence that sensation-seeking can even dull our capacity to evaluate potential risk outcomes and consequences [24]. But to what degree are we born with these traits? A study of over 600 individuals showed that our need for speed may be genetic and therefore heritable. Among the genetic attributes that push someone toward higher-risk behavior are those that aid the release of dopamine, our reward neurotransmitter [25]. For risk-takers who enjoy extreme sports like skydiving, rock-climbing, and cliff-diving, or other high-risk behaviors such as binge-drinking, drug use, or risky sexual behavior, dopamine-delivering genes may have a lot to do with their motivations. One of the enzymes responsible for breaking down dopamine is monoamine oxidase, and approximately two-thirds of our internal supply of it is determined by genetics. High genetic levels of monoamine oxidase result in lower dopamine levels in our brains, equating to more cautious and risk-averse attitudes [26]. Researchers argue that very high-risk recreational activities like base jumping or Pamplona's Running of the Bulls, and even our career choices, can come down to dopamine. And where there is a mismatch between our genetic dopamine levels and our career or lifestyle choices, unhappiness will surely follow [26]. Professor David Zald of Vanderbilt University conducted research on novelty-seeking high-risk behavior by studying the brain's capacity to slow down dopamine release. Using a specialized imaging technique called positron emission tomography, Dr.
Zald discovered that individuals who were willing to take greater risks had fewer dopamine-regulating receptors in their brains, meaning they actually experienced a much more significant dose of dopamine when engaging in high-risk activities. As Zald describes it, individuals who seek out a great deal of novelty and risk lack the dopamine chemical "brakes" that more cautious individuals have [27]. Therefore, while many characterize high risk-takers as "adrenaline junkies," it's actually not adrenaline but dopamine that's driving their behavior. For most individuals, sensation-seeking (including thrill or novelty seeking) tends to increase through childhood until it peaks as they approach their teenage years. Males tend to exhibit slightly higher levels of sensation-seeking in three of the four categories – Thrill & Adventure Seeking, Boredom Susceptibility, and Disinhibition – while women outscore men in only one category, Experience Seeking, a more cerebral, sensory form typically pursued through travel, music, or art (see the "Sensation-Seeking Demographics" lesson of Emory University's online course The Psychology of Thrill Seekers on Coursera). Interestingly, sensation-seekers also tend to favor art that depicts warmer color tones, such as red, orange, or yellow, rather than cooler colors [28] (see also the "Sensation Seeking" entry at psychology.iresearchnet.com). But is sensation-seeking all bad? We tend to think of Thrill & Adventure Seeking as dominating the sensation-seeking archetype, but there are other avenues for experiencing sensation. The willingness to step "outside the box" to experience novel
situations and experiences is one such avenue: meeting new friends, traveling to see new and different cultures, varying one's life routine. While many of us may automatically assume sensation-seeking to be a negative and life-damaging trait that pushes individuals toward reckless endangerment, there may actually be some advantages to being a sensation-seeker. In evolutionary terms, sensation-seeking, particularly among teenage boys, would have provided the drive to venture outside the safety of their immediate familial confines, to explore and ultimately mate with females from outside their direct lineage or communal groups. With a prehistoric lifespan of 20–35 years, human males, like other species, were designed to procreate as soon as their physiology allowed. Is the sensation-seeking trait still an advantage? Some research would indicate that yes, having a percentage of the population oriented toward new experiences, impulsivity, creativity, and curiosity could be one of the defining features of our humanness (on the advantages of the sensation-seeking "highly sensitive person," see Dr. Tracy Cooper's guest post of 3 April 2015 at www.hspelamaa.net). By extension, many aspects of our modern society are fueled by the entrepreneurial ambitions of individuals who are not satisfied with routine, who wish to explore new and novel ways to do things, and, most importantly, who take on personal risk to accomplish those goals. Professor Zuckerman, the original author of the sensation-seeking concept, suggests that society needs both high and low sensation-seeking personalities to thrive. As he describes it, we need both bookkeepers and explorers for our world to function [29]. Frank Farley, former President of the American Psychological Association, has spent a great deal of time studying sensation-seeking personalities, coining the term "Big T" for "big thrill-seeking" individuals. Farley conducted field research in such far-flung arenas as Nepal, with Everest climbers, and with cross-America hot air balloon racers, to discover what he describes as "Big T positive" personalities [29].
These individuals are not content to sit around their homes – they are adventurers. But, importantly, they are not reckless adventurers. They aren't the same type of sensation-seekers who may ruin their lives through substance addiction, excessive partying, or senseless aggressive driving habits. Big T positives carefully analyze and calculate risk and then take steps to mitigate hazards in order to experience the thrill without the experience being their last. Training for a 29,000-foot Everest summit takes years and may include dozens of climbs on "lesser" peaks. Everest climber Alan Arnette's blog (alanarnette.com) features photos of him training for Everest and K2 by climbing smaller peaks with a heavy wooden door strapped to his back. The preparation for these expeditions is time-consuming and resource-heavy, and climbers carefully estimate how they will navigate the countless hazards and mitigate threats, both known and yet unknown. These are most certainly not foolhardy sensation-seekers. Similarly, corporate leaders and entrepreneurs rank relatively high on sensation-seeking, again not out of recklessness, but quite typically in terms of "creativity." This drive keeps them from accepting the status quo, welcoming innovation and
ingenuity, and pushing for improved ways of doing things. One particular combination – corporate CEOs who are also private pilots – has been studied for a rather unique capacity to produce patents: on average, almost 70% more patents than their non-pilot CEO colleagues. Pilot-CEOs, it turns out, are more receptive and open to diverse and original projects [30]. While pilot-CEOs do accept greater levels of risk when operating small private aircraft, it may well be that their aviation training, which includes a readiness to respond to emergency situations and to manage unexpected challenges, makes them better able to remain calm and clearheaded when others grow anxious. Knowing that one can manage and overcome risk with prudence and sound judgment would surely have a positive impact on all areas of one's life. Indeed, research suggests that high sensation-seekers, while bathing in dopamine during novel activities, produce less cortisol during those activities than someone who is not a high sensation-seeker. This means that for our pilot-CEO, there is far less "fight or flight" stress experienced during riskier moments than others might experience [31]. As a result of their higher tolerance for novelty, complexity, and risk, sensation-seekers tend to exhibit what psychologists call a flow state, loosely defined as being in the moment, with energized focus [32]. Flow is the middle ground between skill and difficulty. On one end of the spectrum is a highly skilled individual working on a low-difficulty task; on the other extreme is a very low-skilled individual taking on a very demanding task. Neither extreme is ideal – the first breeds boredom, the second anxiety. The flow state occurs when we match skill to challenge, which means both high- and low-skilled individuals may experience flow: when a very challenging situation meets a very high-skill individual, or a lesser challenge meets a lower-skilled individual, optimum "flow" occurs.
There is much we can apply from this in our schools and in our workplaces. In any given classroom across America, students will show up for class with varying levels of built-in sensation-seeking drive. Recall that some 60% of this trait is inherited. As is often the case, and particularly with larger class sizes and poorer teacher-to-student ratios, homogeneous curricula and expectations do not afford teachers an opportunity to alter lesson plans for each student based on their optimum flow states. But if we could, then it stands to reason that young people would benefit tremendously in all the areas our society so often struggles to address, like self-esteem, happiness, stress management, and enjoyment of learning [33]. These are all features of individuals in optimum flow. Today, there are more than 15,000 books one can purchase on the topic of "happiness" – and how to achieve it [34]. But as the ancient stoics, and even the likes of Friedrich Nietzsche, might argue, the direct quest for happiness – as in, "I'm going to imagine myself happy in order to become happy" – is a fool's pursuit. Far more likely, happiness comes about as a byproduct of the activities and experiences we engage in. Whether reading a good book in a hammock while sipping a glass of wine or ziplining through the treetop canopies of the Costa Rican jungle, happiness, Nietzsche would argue, is an indirect consequence of human experience, not a direct goal [35].

Similarly, the ancient Greek stoics argued that happiness is a result of our reaction to our external world rather than a state of being that we can actively pursue. When we learn to accept and manage life's challenges with a positive approach rather than excessively ruminating on the negative, we will inevitably be happier. The stoics went even further, suggesting that no life event can really ever hurt us – that our experiences are actually "neutral." Rather, it is our reaction to life events that determines our happiness or unhappiness [36]. "Big T positive" sensation-seekers may well have the capacity to weather life's ups and downs with greater fortitude and resilience and, by way of that, be more level-headed during crises. Because of this, they may well be natural leaders – the people others gravitate to for their ability to remain calm and clearheaded while others succumb to their emotional reactions. Unfortunately, psychopaths also exhibit an uncanny ability to remain calm when others are experiencing distress, which may be why there's a myth that many CEOs are really successful psychopaths. A research study of individuals enrolled in business development programs found that 4% of the students had a psychological profile indicating some degree of psychopathy [37]. While that is about four times higher than the general population's rate of 1%, it still means that some 96% of business leaders have no measurable psychopathic traits. Still, the idea of a "successful psychopath" has been the subject of much research interest, particularly because of the psychopath's innate boldness, adventuresome attitude, and uncanny capacity to deal with stress. Logically, most studies of psychopaths have taken place in prison settings, as it's pretty challenging to get hold of a large sample of noninstitutionalized psychopathic business leaders.
As a result of this logistical issue, there are few studies of successful psychopathy. However, it may well be that noninstitutionalized psychopaths favor particular jobs because their traits render them more successful at those jobs than at others. As psychologist Robert Hare, author of the book Snakes in Suits, suggests, "if I weren't studying psychopaths in prison, I'd do it at the stock exchange," because of their ability to deal with emotion and stress [37]. With the sheer number of corporate leaders out there, it's not too difficult to cut the potential psychopaths from the herd. The highly calculated, manipulative, and uncaring sagas of Enron's Andrew Fastow or WorldCom's Bernard Ebbers are easy to highlight. Yet as Paul Babiak, a leading expert on psychopathy, explains, at most corporations the CEO actually cares about the fortunes of the company and, by way of that, is not exhibiting psychopathic traits. A true psychopath cares only for himself and takes pleasure in hurting others [37]. Similar studies have tied the ability to handle stress to antisocial personality disorder. Here too, individuals seem able to calmly handle higher levels of stimulation without the "fight-or-flight" alarm reaction others experience. One of the most fascinating findings in the field of criminality demonstrates, with uncanny consistency, the correlation between low resting heart rates in adolescents and antisocial behavior and aggression. Even more interesting is research showing that lower resting heart rates in young people actually increase the risk
of violent and antisocial behavior in adulthood, even when controlling for levels of cardiovascular fitness [38]. To help explain this finding, researchers suggest that individuals with inherently lower resting heart rates may seek out risky behavior as stimulation, in an effort to feel more "normal." This all points to a kind of happy place for individual risk tolerance. On September 3, 1967, the people of Sweden changed from driving on the left side of the road to driving on the right. The contentious decision was not an easy one, but as more and more Swedes drove vehicles with steering wheels on the left – largely due to Volvo's growing international market – the risks associated with left-lane driving were increasing beyond tolerance. One can only imagine the tremendous logistical scale of this national change, in terms of planning an airtight country-wide communication program as well as managing multiple public safety factors (including such things as routine school bus drop-offs, one-way streets, and crosswalk safety). As is perhaps obvious, there would be no room for error: even if a single driver in the entire country did not get the message, or simply forgot, it could spell disaster. Part of this logistical nightmare was changing the direction of some 360,000 road signs in a matter of hours – uprooting and replacing them to face the opposite direction on the other side of the road, all during the dark hours of the specified changeover night – along with the launch of some 1000 new city buses with doors on the right side of the vehicle. The changeover's public education campaign began 4 years before the change day and was guided by a team of Swedish psychologists. The official name of the switchover day was Högertrafikomläggningen ("the right-hand traffic changeover"), or Dagen H – simply, H-Day.
Sweden undertook one of the biggest national public awareness campaigns in history, which, in addition to the H-Day name, included an H-Day logo: an uppercase letter "H" with an arrow sweeping from the bottom-left corner, through the middle, up to the top-right corner – from the left side to the right. The logo was the official symbol of the changeover, and 130,000 of them were placed in any and all spots where someone might look – even, interestingly, adorning women's underwear (as documented in period photographs at www.rarehistoricalphotos.com). The Dagen H theme song, Keep to the Right, Svensson (translated to English), even soared to the No. 5 spot on the Swedish pop charts. On Sunday, September 3, the date of the switchover, all non-essential road traffic was banned between 1:00 am and 6:00 am. Should, for some reason, a driver find themselves on the road during these hours, the instructions were clear: at 4:50 am, all cars were to slow to a crawl and very carefully cross the road to the right side. Then, for the next 10 minutes, all traffic in the country had to stop. This would give time for anyone who might not have wound their wristwatch correctly or who did not get the message (and who, for that matter, may not have been wearing H-logo underwear) to also make the correction over to the right side.

A countdown on the radio marked off the seconds leading up to 5 am when, at the top of the hour, the radio announced, "Sweden now has right-hand driving" (as recounted at www.realscandinavia.com). Swedes who were there that day recall the international attention Sweden received. As Swedish author Peter Kronborg (who wrote a book on Dagen H) describes, international journalists swarmed into Sweden expecting to cover an ensuing bloodbath on the highways [39]. Instead, a very different reality unfolded. On the day after H-Day, a Monday, Swedes got into their cars and drove to work. Only 157 minor accidents were reported, slightly fewer than average. Interestingly, and contrary to expectations, fatalities and injuries due to road collisions declined in the months and years that followed. For 2 years, accident rates on the roads remained lower than average, despite many expecting just the opposite. Only after 2 years did collision rates slowly creep back up to "normal," with longer-term trends still showing increased safety [40]. Given the lower accident, fatality, and injury rates Sweden experienced immediately after the switch from left to right, one might argue that driving on the right side of the road was now proven to be inherently safer. But that reasoning doesn't hold up when we consider that, per capita, the United Kingdom boasts one of the lowest traffic accident rates in the world (see the age-standardized road-death rates at www.worldlifeexpectancy.com). A more plausible answer is that as perceived risk increases, we tend to modify our behavior, taking greater care and caution in order to keep the risk at an acceptable level. Scientists call this the risk homeostasis theory. Homeostasis is the way in which our body regulates itself to maintain healthy normal levels and proper equilibrium across various physiological variables. For example, our pancreas and associated processes help maintain proper blood sugar levels during energy intake and expenditure.
The landmark 1930s book The Wisdom of the Body, by physician Walter Cannon, outlined the necessity of homeostasis for life to exist – particularly how the body maintains precisely proper ratios of nutrients, water, salt, and other minerals. The homeostasis effect is also apparent in other domains, from how the Earth maintains its atmosphere, to political and economic theories of society, to our psychological and social wellbeing (see Kelvin Rodolfo's explanation of homeostasis at www.scientificamerican.com). In the world of safety, risk homeostasis refers to our innate ability to adjust our behavior to mitigate the level of risk we are sensing. In other words, we have a kind of happy place for our risk tolerance – a target level of risk with which we are comfortable [41]. When we perceive that the risk is getting too great, like driving through a blizzard on icy roads, we adjust our behavior by driving more slowly and more cautiously. By doing this, we mitigate the risk of the snowy conditions and, in essence, return to our happy place for risk. On the opposite end of the spectrum, when a road is bare, dry, and well lit, we may choose to drive at the maximum speed limit (and some may choose to go
faster than that) because we perceive very little risk – perhaps too little risk – and adjust our behavior to bring the risk level back up to our happy place. While it sounds rather unbelievable that we would alter our behavior to become riskier when conditions are too easy, there is much evidence that this is indeed the case. An experiment in Munich studied accident rates for taxis over a 4-year period. During the test period, half of the taxis were fitted with anti-lock braking systems (ABS) and half were not. It was expected that the added safety feature would reduce accident rates for the test group whose cars were fitted with ABS. But, contrary to predictions, quite the opposite happened: the cars fitted with ABS experienced more accidents than the ones without it. The same results were replicated in other countries as well [42]. Similarly, an experiment on increased road lighting was conducted under the hypothesis that improving illumination with streetlights and enhancing roadway visibility would improve safety outcomes. What researchers discovered, however, was that any benefits afforded by the safety technology were outstripped by increased driving speed and reduced concentration. Unexpectedly, the drivers had adjusted their behavior to compensate for the new comfort they perceived from the added safety measures [43]. Another study found that parents were more likely to let their children partake in riskier activities and behavior when the children wore protective safety gear [44]. Automobile seatbelts are one technological improvement that most certainly saves lives. Worn consistently, seatbelts reduce fatalities by 40–50% for front-seat occupants and by roughly 25% for backseat occupants [45].
After isolating a multitude of variables, researchers conducted an exhaustive study of 50 US states, looking at the safety dividends of mandatory seatbelt laws between 1975 and 1987. They discovered that the mandatory use of seatbelts did, in fact, reduce overall injury and fatality rates. This seemed to dismiss the idea that risk homeostasis was more powerful than new safety technologies. However, when researchers dug deeper into the data, they found that motorists were driving more riskily – it's just that the seatbelts were very good at protecting them when a crash occurred. Another clue that drivers were becoming riskier with seatbelts was unearthed in the data on non-occupants, such as pedestrians and cyclists. Among these groups, injuries and fatalities increased significantly under mandatory seatbelt laws, indicating that drivers were indeed driving more aggressively – most certainly not to the advantage of those outside the car, who did not benefit from seatbelts [45]. Risk homeostasis is also referred to as risk compensation or behavioral adaptation, and for good reason. Simply adding extra safety measures can have the opposite of the intended effect on safety outcomes, as individuals "compensate" for the safety measure by adjusting their behavior to become slightly riskier – in an effort to return to their target risk happy place. In other words, risk-lowering technologies can often have the undesired effect of increasing risk behavior rather than lowering it. Mandatory seatbelt use did increase risk behavior, resulting in more fatalities among pedestrians and cyclists, but for car occupants, the seatbelts provided enhanced safety outcomes.

Similarly, increasing the speed limit in some states actually resulted in fewer car accidents, not more. Scientists attribute this to the notion that higher speeds make people feel less at ease, and as such, they operate their vehicles with increased care, versus slower speeds at which drivers tend to make more impromptu lane changes and swerve more often [46]. Economists call this sort of counterintuitive process "moral hazard": when you try to protect people from threats to their safety, they take increased risks [47]. It is also referred to as the Peltzman Effect, named after economist Sam Peltzman. In his book Foolproof, Greg Ip discusses how the introduction of helmets and facemasks in football resulted in an increase in certain head injuries, not a reduction, as players began using their helmets as battering rams [48]. Likewise, Ip offers examples such as developers building a series of dams to protect homes in floodplains – the unwanted result of which is an increase in home building in those vulnerable areas. Should a flood crisis arise and a dam fail, more homes will be destroyed than if the dams had never been built. Michael Schmidt summarizes the problem at the heart of the Peltzman Effect: genuine safety must be voluntary, not mandatory. As odd as that is to get one's mind around, Schmidt provides an example. Imagine a chemical laboratory in which 25% of the workers voluntarily wear the available protective equipment (PPE gloves, apron, and goggles), while the other 75% choose not to. Now imagine, as Schmidt describes it, that the injury rate among the protected 25% is 12 over the course of a year, while the rate among the unprotected 75% is 120 – a laboratory-wide average of 93 injuries (0.25 × 12 + 0.75 × 120). If a rule were introduced making it mandatory to wear the protective equipment, what would happen to the injury rate?
The Peltzman Effect predicts that while there might be an improvement in injuries, the injury rate would never be as good as that of the voluntary wearers (the 25%). Why? Because those who are forced to wear the equipment will not appreciate the value of the rule and will believe it unnecessary. Therefore, they will take greater risks to compensate [49]. A thought experiment on the Peltzman Effect was proposed by Gordon Tullock, who suggested that if transportation safety agencies and governments actually wanted to reduce road deaths due to car crashes, they’d install a large spike on the center of the steering wheel of all cars, so that any accident would result in impalement and death. Known as Tullock’s Spike, the morbid and fictitious plan helps underscore the relationship between risk behavior and real safety outcomes. The irony is that a horribly large spike pointed toward your chest will make you drive more safely than if there weren’t a spike [49]. One of the most surprising outcomes of adjusted safety behavior is the issue of “procedural intentional non-compliance” (what safety folk call PINC). This is when good workers, who are, for the most part, competent and conscientious at what they do, choose not to follow particular safety rules. This can happen for a variety of reasons, although there are typically three common ingredients when good workers choose to ignore known rules. Let’s take the example of someone driving a car just above the speed limit, or doing an illegal U-turn instead of driving the long way around to reverse direction. First, the driver undergoes an internal risk assessment and arrives at the conclusion that their choice is not overly risky. Maybe the road



conditions are excellent and there are no other cars around. This is the risk homeostasis bit. If the driver is satisfied that the risk is acceptable, then the second criterion is that the driver estimates that the “payoff” of the slight speeding or illegal U-turn will be favorable – better than if they had followed the rules. The logical thought process goes: time was saved and the risk was low. The final criterion is peer acceptance of their actions. If peers witness the behavior and say nothing, their silence will sanction and reinforce the driver’s behavior. Humans are very social creatures, and we work hard to establish advantage in settings of social comparison, both professionally and personally. Conversely, if the driver’s peers find the driver’s actions to be reckless or unacceptable, and they voice their concerns, the criticism is extraordinarily powerful in tempering behavior. A case in point is the dismal record for hand hygiene among healthcare professionals in hospitals. Advocates of handwashing campaigns recognize that, at any given time, there are approximately 1.4 million cases of hospital-acquired infections (HAI) in the world.11 In developed countries, 80,000 people needlessly die each year from infections they acquired from those whose job it is to make them healthy. In developing countries, 4000 children die every day from healthcare-associated infections.12 Even in hospital settings in which hand hygiene stations have been established, studies show that compliance with handwashing by professional healthcare workers can range from 0% to 40%, a depressingly low rate. In the United States, nearly three quarters of a million people suffer from infections acquired in hospital, resulting in nearly 75,000 completely avoidable deaths.13 In 2013, little Nora Bostrom, not yet 4 years old, died in hospital, her arms wrapped tightly around her mother’s neck.
The infection she died from was caused by a carelessly maintained central line catheter. A central line is placed into a patient’s chest and runs straight into the heart, where medications can be quickly delivered; such lines are exceedingly common in hospital settings. But if bacteria are permitted to enter the central line, the patient can suffer from infection – and because many patients have central lines after surgery or for reasons of ill health, they are already compromised and susceptible to sickness or death. Nora’s story has become a symbol for central line hygiene protocol – putting an innocent face to the completely unnecessary tragedy of central line infections. Even so, completely preventable central line infections result in nearly 10,000 deaths a year in the United States alone, making it the leading cause of death in US hospitals [50]. In one of the world’s leading pediatric hospitals, Dr. Rob has personally witnessed two doctors walk in from the hall to inspect and remove a central line catheter on a 2-year-old recovering from open heart surgery without washing their hands or donning gloves. A study of keyboards used by doctors and nurses found that the keyboards harbored three strains of life-threatening bacteria. When doctors visit a patient suffering from methicillin-resistant Staphylococcus aureus (MRSA), their lab coats become

11. See World Health Organization website for Clean Care is Safer Care.
12. Ibid.
13. HAI Data and Statistics. cdc.gov.



contaminated with the infection the majority of the time (a type of infection that now costs the United States some $4 billion each year) [51]. Let’s put this into perspective: MRSA, an infection that travels on a doctor’s lab coat, now kills more people in the United States than HIV.14 And the blood pressure cuffs on carts that are wheeled from room to room in the hospital? Nearly 80% of those cuffs carry life-threatening MRSA bacteria.15 While the scientific rationale for washing one’s hands when entering or leaving a patient’s room is surely within the cerebral grasp of intelligent doctors and nurses, handwashing compliance is still lower than 50%.16 Even this percentage is difficult to know for sure, because doctors and nurses often behave differently when they are being observed. A study out of Santa Clara Valley Medical Centre found that when observers were openly recording handwashing compliance, they observed compliance 57% of the time, but when observers disguised themselves as unrecognizable passersby, compliance fell to 22% [52]. This tells us that, for a given level of knowledge about the benefit of handwashing, behavior is most modified by peer influence and interaction. This is why Dr. Rob is so heavily critical of expensive handwashing campaigns meant to raise “awareness” of hand hygiene. Doctors and nurses are already well aware – their behavior does not reflect a lack of understanding; it is a problem of willfully disregarding known rules and guidelines. When healthcare workers choose not to wash their hands, they are committing procedural intentional non-compliance. As we know, the most influential factor in improving adherence is peer support or, conversely, peer criticism. It’s not difficult to imagine how much more powerful it is for a peer to speak out and remind you to wash your hands, as compared to a poster on the wall.
Safety is driven by behavior and by our built-in drive to quickly assess survivability. From launching a D-Day attack to charging in and out of the jaws of one’s predator, our hardwired instincts drive our internal risk calculators to determine the potential payoffs and fallouts of our decisions. Yet, despite our inner drive to pursue the course of action that benefits us most, we continue to suffer the slings and arrows of poor decisions, which often cause real harm and injury to ourselves and to others. Evolutionary biologists characterize our weighing of pros and cons in decision-making as optimality theory. Importantly, it is not our behavior that natural selection favors but ultimately the outcome of our behavior. And high risk, part and parcel of high payoff, is not always kind. If every guppy that challenged the snapping jaws of his predator survived to mate with the most desirable female, all male guppies would do it. But our decisions to take risks are not always positive, which is why, for many, high-risk behavior is so often unrewarding. This helps explain why nearly 200,000 Americans die

14. Patient safety: current statistics. Patient Safety Focus. patientsafetyfocus.com.
15. Ibid.
16. Clean Hands Count for Safer Healthcare. www.cdc.gov.



each year due to unintentional injuries.17 Each year in Canada, about 15% of the nation’s entire population will suffer an injury severe enough to limit their work or home-life activities [53]. Obviously, we are, in many cases, prone to risk miscalculations, or, like the skateboarders, we simply assess the potential reward as worth the risk of injury. The bottom line in decision-making and risk analysis is that humans are not merely robots, calculating probabilities like Star Trek’s Spock. In reality, our decision-making process is highly emotional. In fact, when we scan our brains with fMRI during very challenging decisions, the imaging reveals that the logical parts of our brain and the emotional parts battle it out to see which will win [54]. As the imaging shows, the emotional part of our brain is always seeking to assume control of our decisions. So, if the logic is strong, the emotional side will not win, but if the logic is weak or flawed, then our emotions often grab the steering wheel of our decision-making. When teenagers drive too fast on a joyride with their friends, it’s because the emotional centers of their brains are overriding their logic. When workers choose to ignore the rules, it can be because their decision to take shortcuts satisfies a feeling of reward when they prove they can work faster. When people dive with sharks, jump out of planes, or ride their bikes on busy roadways instead of a stationary bicycle in a gym, it’s because the emotional part of their brain is being fed. In situations of extraordinarily difficult decision-making, like Eisenhower’s decision to launch the D-Day attack, the battle between logic and emotion can be profound.
For the General, the utilitarian logic of waging a battle in order to free the world from tyranny, and potentially save millions of lives, would have been heavily matched with conflicting feelings that the pursuit of his aim would surely bring the immediate deaths of thousands of his own men. Other emotions could have played into his decision: feelings of pride at seeing his troops willing to make the ultimate sacrifice, or knowing that the suffering of countless others might end should his mission prove successful. As the eighteenth-century philosopher David Hume wisely stated, “reason is the slave of passion.” We see this play out every day in our lives, in how we feel about ourselves and our relative status within our peer groups, and in how influential those feelings are in driving our behavior. Hidden away in our subconscious, our hardwiring continuously evaluates the cost and reward in all that we do – from our tolerance for adventure, to how we drive, to how we communicate with others – and instructs us to act, often without our conscious participation. While our highly evolved hardwiring is meant to keep us safe, its role is also to maximize our genetic potential, and thus not only is reason a slave of passion – our passions are also a slave to biological reason.

17. See Injury prevention and control: cost of injury data. Centers for Disease Control and Prevention. www.cdc.gov.

References


1. Lauder V. Eisenhower’s soul-racking decision. CNN, 6 June 2014. http://www.cnn.com/2014/06/05/opinion/lauder-eisenhower-d-day-anguish/index.html.
2. Brenner M. Robert Capa’s longest day. Vanity Fair Magazine, 13 May 2014. Also see Time Magazine: http://100photos.time.com/photos/robert-capa-d-day.
3. Gluckstein F. Churchill’s character: Berlin 1945: in victory, magnanimity. The Churchill Project at Hillsdale College, 2015.
4. Dugatkin LA. The evolution of risk-taking. Cerebrum. 2013;2013:1. The Dana Foundation.
5. Godin JG, Dugatkin LA. Female mating preference for bold males in the guppy, Poecilia reticulata. Proc Natl Acad Sci U S A. 1996;93(19):10262–7.
6. Fitzgibbon CD. Anti-predator strategies of immature Thomson’s gazelles: hiding and the prone response. Anim Behav. 1990;40(5):846–55.
7. Barton CA. The sorrows of the ancient Romans: the gladiator and the monster. Princeton: Princeton University Press; 1993.
8. Leon V. The joy of sexus: lust, love, and longing in the ancient world. New York: Walter & Company; 2013.
9. Ronay R, von Hippel W. The presence of an attractive woman elevates testosterone and physical risk taking in young men. Soc Psychol Personal Sci. 2010;1(1):57–64.
10. Kleiner K. Risk-taking boys do not get the girls. The New Scientist, 17 Apr 2005.
11. Wilke A. Is risk taking used as a cue in mate choice? Evol Psychol. 2006;4:367–93.
12. Bogaardt L, Johnstone RA. Amplifiers and the origins of animal signals. Proc R Soc B. 2016;283(1832):20160324. rspb.royalsocietypublishing.org.
13. Stevens J. Internet stats and facts for 2017. Hosting Facts, 2017. www.hostingfacts.com.
14. Zhang M. Selfies cause more deaths now than shark attacks. PetaPixel, 22 Sept 2015. This statistic is used for emphasis only. We certainly acknowledge that more people attempt selfies than swim in shark waters, so the comparison is not really a valid one.
15. Zhang M. The number behind selfie deaths from around the world. PetaPixel, 9 Feb 2016.
16. Makhanova A, McNulty JK, Maner JK. Relative physical position as an impression-management strategy: sex differences in its use and implications. Psychol Sci. 2017;28(5):567–77.
17. Roberts P. Risk. Psychology Today, 1994. Last reviewed 9 June 2016.
18. Wickwire J, Bullitt D. Addicted to danger: affirming life in the face of death. New York: Atria Books; 1999.
19. Zuckerman M. Behavioral expressions and biosocial bases of sensation seeking. New York: Cambridge Press; 1994.
20. Prochniak P. Adventure behavior seeking scale. Behav Sci. 2017;7(2):35.
21. Shoham A, et al. The relationship between values and thrill- and adventure-seeking in Israel. In: European advances in consumer research, vol. 3. Provo: Association for Consumer Research; 1998. p. 333–8.
22. Zuckerman M. Sensation seeking: beyond the optimal level of arousal. Hillsdale: Erlbaum; 1979.
23. Huta V. Eudaimonia and hedonia: their complementary functions in life and how they can be pursued in practice. In: Joseph S, editor. Positive psychology in practice: promoting human flourishing in work, health, education and everyday life. 2nd ed. Hoboken: Wiley; 2015.
24. Jonah BA. Sensation seeking and risky driving: a review and synthesis of the literature. Accid Anal Prev. 1997;29(5):651–65.
25. Derringer J, et al. Predicting sensation seeking from dopamine genes: a candidate system approach. Psychol Sci. 2010;21(9):1282–90.
26. Feinstein A. In harm’s way: why war correspondents take risks and how they cope. The Globe and Mail, 12 May 2018.
27. Park A. Why we take risks – it’s the dopamine. Time Magazine, 30 Dec 2008.
28. Rosenbloom T. Color preference of high and low sensation seekers. Creat Res J. 2006;18(2):229–35.



29. Munsey C. Frisky, but more risky. Am Psychol Assoc. 2006;37(7):40.
30. Sunder J, Sunder SV, Zhang J. Pilot CEOs and corporate innovation. J Financ Econ. 2016;123:209–24.
31. Patoine B. Desperately seeking sensation: fear, reward, and the human need for novelty. New York: The Dana Foundation Briefing Paper; 2009. www.dana.org.
32. Csikszentmihalyi M. Flow: the psychology of optimal experience. New York: Harper Perennial Modern Classics; 2008.
33. Carter K. What we can learn from sensation seekers. Greater Good Magazine. UC Berkeley, 2018. www.greatergood.berkeley.edu.
34. Beard A. The happiness backlash. Harvard Business Review, July–August 2015.
35. Krueger J. Uber-Nietzsche: when thinking about happiness, consider Nietzsche. Psychology Today, 22 Aug 2010.
36. Irvine WB. A guide to the good life: the ancient art of stoic joy. Cambridge: Oxford University Press; 2009.
37. Babiak P, Hare RD. Snakes in suits: when psychopaths go to work. New York: Harper Business; 2007.
38. Latvala A, Kuja-Halkola R, Almqvist C. A longitudinal study of resting heart rate and violent criminality in more than 700,000 men. JAMA Psychiatry. 2015;72(10):971–8.
39. Savage M. A “thrilling” mission to get the Swedish to change overnight. BBC, The Economics of Change, 18 Apr 2018. www.bbc.com.
40. Reimann M. When Sweden planned the world’s biggest traffic jam, accidents actually decreased. Timeline, 13 Sept 2017.
41. Collins D. Risk homeostasis theory – why safety initiatives go wrong. Safety Risk, 1 Sept 2016.
42. Wilde GJS. Target risk 3 – risk homeostasis in everyday life. Toronto: PDE Publications; 2014.
43. Assum T, et al. Risk compensation – the case of road lighting. Accid Anal Prev. 1999;31(5):545–53.
44. Morrongiello BA, Major K. Influence of safety gear on parental perceptions of risk injury and tolerance for children’s risk taking. Inj Prev. 2002;8(1):27–31.
45. Evans L, Graham JD. Risk reduction or risk compensation? The case of mandatory safety-belt use laws. J Risk Uncertain. 1991;4:61–73.
46. Malnaca K. Homeostasis theory in traffic safety. In: Proceedings of the 21st ICTCT Workshop, 2014.
47. Ip G. Foolproof: why safety can be dangerous and how danger makes us safe. New York: Little, Brown and Company; 2015.
48. Ip G. Foolproof: why safety can be dangerous and how danger makes us safe. New York: Little, Brown and Company; 2015. See also Stewart H. Foolproof by Greg Ip review – the biggest risk we can take is to allow ourselves to feel safe. Books: The Observer, 12 Oct 2015.
49. Schmidt M. You can’t make me: mandatory safety and the Peltzman effect. Chemical Manufacturing Excellence, 8 Mar 2017.
50. Kiff S. Do no harm. Vox, 9 July 2015.
51. Pfizer Inc. New research estimates MRSA infections cost US hospitals $3.2 to $4.2 billion annually. Infection Control Today, 16 May 2005.
52. Barzilay J. Doctors’ hand hygiene plummets unless they’re being watched, study finds. ABC News, 10 June 2016.
53. Billette J-M, Janz T. Injuries in Canada: insights from the Canadian Community Health Survey. Health at a Glance. Statistics Canada, 2017. www.150.statscan.gc.ca.
54. Wright R. The brain: how we make life or death decisions. Time Magazine, 29 Jan 2007.

7

From Pandemics to Prosperity: Feeding Our Hardwired Health

One of the most horrific historical periods was that of 1350s Europe, when 60% of the European population, some 50 million people, died under the merciless grip of the Black Death. The Bubonic, or Black, Plague was an especially nasty bacterial disease carried to Europe on the fleas of ship rats, the latter of which were often brought to ports as unwitting stowaways in the hulls of cargo vessels. The Plague’s deadly bacterium, called Yersinia pestis, was transmitted to humans through flea bites, as it was often the case that rats would die and the hungry fleas would abandon their rodent hosts, only to jump onto other very unlucky animals. During the Black Plague, human victims could also contract the bacteria by working with, or butchering, infected animals or by breathing in droplets from an infected person’s sneeze or cough. It’s been estimated that the infected fleas typically killed off their rat colony over the course of a couple of weeks. Within a few days, and with increasing hunger, the fleas would abandon their rat hosts and hunt for new victims. Once a host was bitten, the bacteria would drain into the lymph nodes, where they remained hidden from the immune system. Three to five days later, the infected individual would become sick, and within a week to 10 days, suffer death [1]. So fast was the pandemic’s spread that many of the living spent entire days burying the dead. In the Italian republics of Siena and Florence, large wide pits were dug, and each day a new layer of bodies would be laid on top of the previous ones. In many towns, there would often be no one left to bury the dead, ultimately resulting in a complete loss of the population. The Black Plague was surely one of the most devastating events in all of human history.
Ultimately, and born of necessity, widespread quarantine and hygiene practices, as well as cremation, eventually turned the tide against the pandemic – and, unknown at the time, also turned the corner of human civilization, promoting a collective spirit of rebirth and renewal. Such was the dawn of the Renaissance.





The Renaissance provides a splendid example of human empowerment – a word that means granting authority or power to act.1 In practical terms, empowerment means fostering a process of self-determination, rather than being blindly led by other forces or people. As the Renaissance proves, one of the most fundamental human assets is our ability to recognize the value in harnessing knowledge to improve our lives. This too is what makes us human. The importance of the Renaissance is also captured in the way in which health and wellbeing became uniquely parceled with social and intellectual development, advancing together in lockstep. While there are, without doubt, many grand empires and civilizations that have contributed markedly to our social, political, and philosophical thought – from the ancient Chinese Dynasties, to Ancient Egypt, to the Persian and Ottoman Empires – and each with historical periods of immense triumph, the Italian Renaissance serves as a relatively “recent” historical highpoint during which we demonstrated the capacity to harness our hardwired drives to enhance our collective potential. Marking the transition from the Middle Ages to the Modern Period, the Renaissance illustrates how positive change can arise, not by ignoring or trying to block our ancient hardwiring but by feeding it. The Italian Renaissance, in all its splendor, fueled our inner drives and fed our souls by placing more emphasis on the importance of individual happiness and fulfillment while also creating a superbly aesthetic environment that provided satiation for our reward-hungry brains. While the Renaissance in Italy spans several centuries in its entirety, the fact that it followed on the heels of one of the darkest periods of human suffering and wretchedness makes it a truly profound study in understanding how rapid social change can actually contribute to health, as opposed to impeding it. 
Today, we have much to learn from the empowerment that embodied the time of the Renaissance. The Renaissance, which means “rebirth,” was a period of tremendous cultural and economic transformation. Beginning in Tuscany – principally in the areas of Siena and Florence, and followed closely by Venice – the Italian Renaissance would become the new European fever, but this time one with far better health outcomes, as one of the most culturally prolific eras in human history. As the unforgiving Plague sank into history, one of the first aftershocks was a rather simple economic one. With one in three Europeans removed from the population, wealth was spread across fewer hands and laborers were in very short supply. As a result, those who survived the Plague enjoyed either greater inheritances or greater wages. With increased prosperity came greater collective interest in pursuing life’s finer luxuries – food, art, and philosophy. The wealthiest families, such as the Medicis of Florence, looked to grand historic civilizations like the Ancient Greeks and Romans for inspiration in rebuilding society and paid handsomely for artists and architects to replicate the magnificent cultural symbols of those timeless eras.

1. Merriam-Webster Online.



With wealth less concentrated among elites, the old agrarian feudal system soon collapsed, overtaken by urban industrialization, as well as population movements from rural areas to cities. Average citizens felt less like cogs in a wheel and less like passive feudal fodder, told what to think and do. A greater sense of identity and self-awareness, no longer the exclusive cognitive luxury of elites, led to an increased interest in making a difference in one’s own life and in engaging and participating in civic affairs. This sense of agency gave rise to humanism and, ultimately, a greater appreciation for the core attributes necessary to fully appreciate one’s own place in the world – specifically, the pursuit of higher education in “studia humanitatis,” or the humanities: grammar, logic, history, philosophy, and art. Great works of art, which soon proliferated throughout Renaissance Italy, laid bare the naked human form, spurred on by a renewed fascination with the individual, with human anatomy, and with science. The influencers of this time remain some of the most prominent intellects in all of human history: Michelangelo, Leonardo da Vinci, and Copernicus. The nexus between art, science, and philosophy was unparalleled, as artists – who in the Middle Ages were viewed as low-status craftsmen – came to be seen as expressing science and philosophy through paint and sculpture. The Renaissance depiction of man and woman followed an increasing interest in “naturalism,” which emphasized anatomical accuracy and authenticity of expression. Great Renaissance artists, such as Leonardo da Vinci, invested significantly in the study of anatomy and physiology, dissecting cadavers to study the muscular structures beneath the skin in order to further the naturalism of art. Humanism, one of the pillars of the Renaissance, placed emphasis on the individual as the center of his own world – rather than a humble subject and servant of someone else’s.
The idea of Renaissance Humanism gave rise to the idea of agency, of human potential, and of an individual sense of control over one’s own destiny, which stood starkly opposed to one determined by divinity. This critical concept is what Renaissance art historians call “Returning to the Sources” – a revival of the classical Greek and Roman works celebrating the human form, in contrast to the somewhat two-dimensional drawings of the Middle Ages. Michelangelo’s most famous statue, David, is an example of the influence of humanism on Italian Renaissance art. At 17′ tall, David is carved with near anatomical perfection and in a contrapposto pose – an Italian term meaning “opposite” – in which David’s arms and shoulders seem unevenly aligned with the distribution of his weight, which he carries on one leg. Contrapposto harks back to Greek sculpture, which combined feelings of movement and affect – or emotion – into human sculpture. In short, such sculptures look more natural and more human. This humanism was one of the most critical rebirths of the Italian Renaissance; it emphasized the realism and individuality of men and women and influenced all facets of society, including health. As with architecture and art, health and medicine were also influenced by ancient civilizations. During the Renaissance, the theory of the “four humors,” first introduced by Hippocrates in ancient Greece, dominated medical practice. The four



humors were black bile, yellow bile, phlegm, and blood, and each was, in turn, associated with one of the four earth elements (black bile = earth, yellow bile = fire, phlegm = water, and blood = air). Ancient Chinese medicine and Islamic medicine also referred to balancing humors in the body. Scholars even point to Albrecht Durer’s famous painting of the fall of Adam and Eve, which depicts four animals in the background, each thought to represent one of the four humors, to which Adam and Eve’s once perfect forms would now be susceptible. In Durer’s painting, the elk represents black bile, the cat yellow bile, the ox phlegm, and the rabbit blood.2 While it was believed that everyone had their own unique combination of the four humors and that no two people were exactly alike in this respect, the general aim of Renaissance medicine was to maintain a balance and harmony between all four. If a person became imbalanced, medical theory of the day suggested that they would present with the characteristic properties of the dominant humor, generally described as their overarching “temperament.” For example, an individual who produced a lot of phlegm would be “phlegmatic” and might exhibit characteristics such as paleness, slow movements, and dull-wittedness. As such, medical practitioners could diagnose which temperament a person was leaning toward by their appearance and demeanor. An optimistic, intelligent, energetic but emotionally volatile person might be described as exhibiting a “sanguine” temperament and constitution, characteristic of the “blood” humor. Important for medical practitioners was the belief that particular temperaments were more likely to be associated with specific diseases – and hence, treatments. By treating the dominant temperament (and humor), the medicine of the day aimed to rebalance the body and mind.
One of the most noteworthy features of the Renaissance was the belief that individuals could play a lead role in managing the balancing act between their humors. Inspired by the ideas of humanism and individuality, the concept of “preventive” medicine and personal health management was born. Through Renaissance art and science, the fascination with innate individual humanness spurred a renewed interest in health awareness. Concepts such as “wellbeing” and “preventive medicine” began to emerge as guiding principles for everyday life. Analyzing a variety of sources, investigators have documented a significant improvement in Late Renaissance lifestyle habits concerning sleep, diet, exercise, and stress control [2]. Perhaps once considered a luxury, these health concepts were also seen in newer forms of Renaissance academia. Studying the skeletons of individuals before and after the Black Plague, scientists have identified that following the pandemic, diets improved and lifespans increased, as evidenced by a higher proportion of older adults [3]. The grand narratives surrounding individual agency and personal self-improvement became new guiding principles in Renaissance society. There was perhaps no greater promoter of this “new learning” than Vittorino da Feltre, one of Italy’s most innovative educators, whose small school in Venice, and later, larger school in Mantua, became synonymous with the formation of new Renaissance

2. See “Medieval and Renaissance European Medicine”. Beforenewton.blog.




education. Vittorino’s schools offered an unparalleled educational track for young boys – with rare exception, the offspring of Italy’s wealthiest families. Vittorino, whose family had suffered through various bouts of poverty when he was a child, also accepted select students from more humble backgrounds, often waiving their tuition fees entirely. Indeed, Vittorino saw himself as a father figure for his students and would sit with them at mealtime, sharing stories, venturing out on excursions, and playing games. Such was his familial relation with his pupils that he rarely needed to resort to any form of harsh punishment, which, in itself, was a significant departure from earlier educational approaches. A Vittorino education constituted a major shift away from the traditional monastic philosophy of the Middle Ages, in which monks reinforced an unquestionable authoritarian hierarchy, absolute discipline, piety, and, above all else, submission to the primacy of the church. Conversely, Vittorino emphasized the uniqueness and individuality of each student, tailoring his teaching to match their respective needs, personal experiences, and family lineage. Most importantly, Vittorino promoted the idea that students had an instrumental role in shaping their own futures, in contrast to futures solely determined by the divine. Vittorino thought it critical to provide a good example for his students through his own physical and mental fitness, his proper grammar, his personal deportment, and his high standards of virtue [4]. At his core, Vittorino was a Platonist, believing that the souls of men, if left unguided, were likely to fall well short of their potential. In this view, Vittorino saw his role as nurturing the development of young souls, not unlike the careful pruning of tree branches, to lend them superior shape and form.
To this end, Vittorino upheld the classic Renaissance ideal of creating “mens sana in corpore sano” (a sound mind in a sound body), or l’uomo universale (the complete man) – a man of sound mind, physique, and character. While the same degree of exalted individualism would not be afforded entirely to women, the marriage relationship between men and women began to bend ever so slightly toward a more equitable and egalitarian bond, in which women, no longer viewed solely as vessels for childbearing, began to take tiny steps toward sexual enlightenment. While women were far from liberated from entrenched patriarchal power structures and the assumption of woman as eternal homemaker, the Renaissance did bring forth such novel ideas as courtly love, whereby women were seen as worthy of romance – a concept almost entirely absent during the Middle Ages. Indeed, the term “courtly” originated from the Court, which formed the central pillar of government and society. Being invited to attend Court meant being amongst the most prominent and powerful figures of the time. Within Court, female courtiers, or courtesans, tended to be unusually well-educated women who circled among, and lent company to, powerful men and dignitaries. As the word’s origin suggests, to “court” someone was therefore to act like a courtesan. The idea of wooing another was logically accompanied by a greater awareness of one’s own presentation. History’s most recognized courtesan, Venetian beauty Veronica Franco, was an extraordinarily erudite woman. She had been privately educated by her brothers’ tutors, later serving


as a literary advisor to Venice’s elite. Openly erotic, Franco exemplified the Venetian courtesan, seducing men as much with her intellect as with her beauty and bedroom skills. Openly feminist, she argued vehemently for female sexual agency and was a staunch advocate against all forms of sexual violence. Defending herself before the Inquisition, Franco narrowly escaped conviction, following which she published her famous volume of letters, Familiar Letters [5]. While Renaissance Italy brought forth a new awareness of the aesthetic in art and in the human form, this was accompanied by a greater consciousness of one’s hygiene and appearance – including physical beauty. While domestic realities were still incredibly humble by today’s standards – cramped, smoky, and with little privacy – and life and death were largely governed by the Church, Renaissance women (and men) were not covered and shamed to the degree we see emerge in later Victorian England. On the contrary, Renaissance Italy’s humanist drive, which gave rise to individualism, including consciousness of one’s own health and presentation, permitted significant hygiene improvements in everyday urban life. As Renaissance expert Douglas Biow writes in his award-winning study of cleanliness in Renaissance Italy, cleanliness, self-awareness, and dignity became strong cultural attributes, underpinned by the Renaissance’s artistic aesthetic [6]. As Biow describes, even toilets were made popular when cast as subjects of art in the works of Dante and Boccaccio. Indeed, it was Leonardo da Vinci who articulated a vision of a clean and hygienic city. One of the most critical and unique qualities of the Renaissance was the way in which its society adapted positively to the sweeping changes witnessed in the social world. 
Embracing new forms of education and learning that emphasized individual health and wellness, including such ideals as ethics, cultural appreciation, artistic aesthetics, and scientific advancement, proved to be the ideal path toward advancing human capital. The Italian Renaissance is noteworthy for this intertwining of health and cultural change. While a good number of Italians continued to live and work in the countryside, a trend toward increased urbanization was a noted feature of the Renaissance, particularly in Florence, Milan, and Venice. As economist Deirdre McCloskey writes in her seminal work, The Bourgeois Virtues, urbanization created new folds in the social dynamic of Renaissance Italy, within which formerly isolated communities of people had to acquire the new social and entrepreneurial skills needed to buy and sell goods [7]. It follows that simpler agrarian times were not necessarily more civilized times, with Renaissance urbanization bringing forth a significant value shift toward education, arts, science, and philosophy. As a rule – even today – urban centers typically enjoy higher levels of education [8]. In fact, education remains one of the most significant determinants of health and longevity [9]. Of course, there are urbanization projects that result in much poorer health outcomes – not improved ones. Such was the case in many of the prominent cities of the Middle Ages. London, for example, suffered greatly in terms of public health, particularly because of its astonishingly poor sanitation. It was not uncommon for homes in London to be built with a clay floor onto which rushes would be placed. So deep were the layers of rushes that the bottom layer might rest undisturbed for a


quarter century or more – harboring all manner of mold, vomit, ale, and dog urine3 – the very same floor upon which people slept, ate, gave birth, died, and had sex. With Renaissance urbanization, the traditional agrarian diet of wild game meats became far less readily available to urban dwellers, which meant that daily sustenance had to rely more on vegetable-based foods. While the Renaissance is best known for its rapid advancement and amalgamation of art, science, and philosophy, there is far less focus on small but powerful changes – like the introduction of salad. Not to disappoint, even Italian Renaissance salad eating became an artistic endeavor, beginning with its inclusion in Italian theatrical plays concerning the development of famous Italian courtesans. In art circles, salad eating was portrayed as a learned skill for refined courtesans – a form of sprezzatura, by which is meant doing something with elegance, restraint, and refined effortlessness. First characterized in Castiglione’s 1528 work, The Book of the Courtier, sprezzatura was noted to be the most important quality of a great courtesan. It is the art of hiding one’s efforts so as to appear nonchalant yet extremely graceful. A courtesan had to be able to dine on salad while remaining sexually appealing. In time, salads were even described as evoking and awakening male-female sexuality, as if the novel flavors of each bite were a metaphor for sexual intrigue. The variety in salad was compared to the variety in sex – and happiness. By comparison, the Spanish were chastised for not dressing their salads – very unexciting, indeed. The humble salad, invented in Florence, had emerged as a symbol of Italian high culture and national identity, fundamentally changing nutrition patterns. 
Culturally, the evolution of diet away from game meats toward vegetable-based fare clashed with northern Europe, where raw vegetables especially were regarded solely as feed for animals. Indeed, English theater often poked fun at the Italians and their new raw-vegetable salads. But there would be no stopping the Renaissance’s capacity to change the world – even through the culinary arts. Erasmus of Rotterdam’s book on manners described the use of cutlery and napkins – first seen in Venice when a Byzantine princess traveled to marry the Doge in the tenth century – customs whose spread was heavily influenced by Catherine de’ Medici, who introduced them to France, and thence to Europe, from her home in Florence [10]. A leading matriarch of the Italian Renaissance, Catherine de’ Medici’s gastronomic influence was profound. When, at 14 years old, she was married to the French King’s son, Henry II, she brought with her to France many Italian customs, chief among them the use of the fork, table settings and napkins, and numerous culinary principles, such as the separation of savory and sweet. Olive oil, truffles, artichokes, and Chianti wine have all been credited to young Catherine’s French debut. A pet theory holds that even Catherine’s decade-long struggle to conceive a child – a period of infertility that ended with quite the opposite situation: nine children – was ultimately resolved by dietary interventions from her Italian chefs. Of course, too few speak of young Catherine’s stress as a contributing factor during this time,

3  See Health in the Middle Ages. Lordsandladies website.


as she watched her then 15-year-old husband, Henry, fall madly in love with the strikingly beautiful 35-year-old Diane de Poitiers, whose seductive powers lured not only the teenaged Henry into her bed, but also herself into Henry and Catherine’s home, as a permanent resident and chief mistress. So affected was Catherine by this arrangement that she is reputed to have become a consummate Machiavellian strategist, having seen the weakness men exhibited in the face of female beauty. Catherine devised her escadron volant (flying squadron) of some 80 charming and alluring women, who, at Catherine’s bidding, circled like sharks amongst the French court’s most powerful men, spying for Catherine and, in turn, advancing her fortunes. As such, her husband’s weakness became her strength. As Renaissance society embraced science, philosophy, and art toward common societal ends, so too formed the idea of the Renaissance Man. Perhaps a somewhat sexist label in today’s world, the term refers to a man who excelled in a wide variety of areas. Most famous in this category were the great polymaths, Leonardo da Vinci and Michelangelo, who became leading experts and inventors in the arts as well as science and engineering. So widely used is the term Renaissance Man that it remains a label, even today, for anyone who demonstrates skills in a variety of seemingly unrelated disciplines. In more modern times, Benjamin Franklin was one such man, whose legacy spanned from slavery abolitionist and co-author of the United States Declaration of Independence to pioneer of electrical science and inventor of the lightning rod, the enclosed wood stove, and bifocal glasses. There is perhaps no better period in history than the Renaissance to showcase the inseparable relationship between our social world and our health and wellbeing. A half millennium on, the Renaissance stands alone in the unique way in which it shaped our modern world, our culture, and our health. 
Of course, it’s easy to search for other recent periods of significant change, such as the World Wars. Yet while war often proves to be the unwanted mother of invention in certain scientific fields, like engineering, physics, and chemistry – World War II alone killed some 3% of the world’s population – such periods of total war are almost always damaging to public health and, in particular, to those on the losing side of history. To be sure, war also yields medical breakthroughs born of necessity. During World War I, antiseptic wound irrigation – an alternative to mass amputation (there were 20,000 amputations in the early phase of the war) – and advances in general anesthesia were two wartime medical breakthroughs that would ultimately change the world of medicine [11]. As for the Great Depression, analysis of the population indicates that public health standards actually improved during the depression years – a very counterintuitive finding. Indeed, the deepest points of economic decline, in 1921 and between 1930 and 1933, saw lower rates of mortality and greater longevity [12]. This has generally been attributed to improvements in sanitation, such as running water and sewer connections in newer homes [13]. The idea that there are social determinants of health is not at all new. In 2005, the World Health Organization created a Commission dedicated to investigating such determinants. Today, the Commission has three principal mandates: to improve


daily living conditions of vulnerable groups, to reduce inequity, and to measure the effects of social determinants and interventions.4 While we acknowledge the deleterious health effects of low socioeconomic status, the opposite end of the spectrum, in which societies with means become less healthy, remains uncharted territory. Like the Italians of the Renaissance, we too face daunting health challenges amid profound social and economic shifts. However, with today’s information doubling at an ever-increasing rate, our social and technologically driven ecosystem is changing faster and to a greater degree than that of the Renaissance or, indeed, any other period before. Despite this fantastic change, we are not enjoying the fruits of our societal progress – instead, we are suffering from them. What the Renaissance teaches us, quite plainly, is that we do have the capacity to feed our hardwired drives in ways that empower us rather than hurt us. And the same hardwired instincts that crave harmful lifestyle habits can also be satiated by more positive and progressive societal changes. The point of the Renaissance story is not to suggest that we should all begin taking courses on seductive salad-eating or form gangs of confidantes to spy on our acquaintances; the core takeaway of the Renaissance, for our purposes, is that we can enjoy societal progress in ways that feed our hardwired reward center and excite our senses. These are not entirely foreign concepts to us; we simply need to link the lessons learned from the Renaissance to the kind of solutions we are currently seeking in our whirling modern world. Social media memes about living our ideal life and countless writings on empowering ourselves showcase our inner drive for individuality and agency – a cornerstone of social change in the Renaissance. 
Countless self-help writings and videos promote the idea of stillness, mindfulness, and meditation, not only to create cognitive coping strategies for our switched-on lives but also to help us find aesthetic meaning. When we take time to sit and read a great book, study a captivating piece of art, challenge or educate ourselves with new ideas, socialize meaningfully with friends, exercise, or improve our sense of place through architecture, style, or nature, we are feeding our brain’s hardwiring. As we know, these effects run deep, right down to the level of brain chemistry – and to feel healthy is to be healthy. The Italian Renaissance, or “rebirth,” created a social world that offered humans a new and better future – a transition from the Dark Ages and the sanguinary Black Plague to a brighter and healthier society built on the amalgamation of art, philosophy, and science. In large part, the societal revolution experienced during the Renaissance was attributable to a thirst for knowledge and personal empowerment rather than an upholding of strict and unwavering doctrinal rules. Most importantly, the changes seen in the Renaissance allowed people’s minds and bodies to flourish as they had evolved to do. Such societal foundations laid the groundwork for behavioral change – principally in the area of human achievement.

4  See World Health Organization website. http://www.who.int/social_determinants/thecommission/en/.


While we have made astounding technological leaps in our modern world – perhaps more rapidly than even our wildest imaginations could have predicted – our built-in human software has remained locked in a desperately outdated version, akin to running an old operating system on a new computer. Even though we essentially share the same hardwired drives as those who lived during the Renaissance, we still haven’t quite figured out how to thrive in our changing world. We’ve invented artificial lights that illuminate the dark, and yet our brains have failed to adapt to this new reality. We surround ourselves with grab-and-go sugary snacks, and yet our bodies have not recalibrated to this environmental overload. We stimulate our brains with flashy digital media, and our youngest, most vulnerable minds react as if their very lives were threatened. We intentionally emphasize social divides through online media, which causes anxiety and depression. And while we surround ourselves with technology, many of our most critical fields, such as aerospace and medicine, remain hamstrung by fundamental human errors in communication and judgment. Even as we excel at innovation – better Internet, more efficient cars, faster and bigger passenger jets, ever smarter medical devices – our actual health and wellbeing are faltering. How do we reconcile this growing gap between the new world we are creating and our perfectly refined – yet apparently outmoded – human software? The solution, like the Renaissance, requires a holistic view of human health, recognizing the critical relationship between our social world and our biological world. While we may pine for “simpler times” now and then, very few of us would advocate a reversal of our fantastic technological innovations – present company included! Surely, most of us will agree that our potential must not be bound by primitive drives, even though, admittedly, these instincts remain inescapable. 
Like the fathers of the Renaissance, our task in solving this evolutionary puzzle is to empower ourselves: first, to understand why we do the things we do; second, to understand how to manage those drives; and third, to harness a more holistic view of our social and biological worlds in order to foster a healthier and more reliable future. While this book’s stories and science describe the scale of our emergent crises, there are hints of prescriptive promise. It is within these silver linings that we should find hope and guidance, lighting our way into the uncharted future. When we began planning our research, there was considerable discussion as to whether the book should be designed around a particular prescription or set of tools or whether its aim should be to inform and provide context. Undoubtedly, we could have done both – but we decided that the scope of these significant health trends was too grand to be encapsulated within a simple tool or rule. Generally speaking, it would have seemed disingenuous to our readership to pretend that one mantra, acronym, or witty motto would cure us all. The alternative was to peer into the shadowy unknowns – our health uncharted – and search for meaning, perhaps lessons that we could, with integrity, identify as touchstones for change. And that is what we did. First and foremost, our modern advances prove, beyond any doubt, that we possess a fantastic ability to acquire knowledge and to use that knowledge to build incredible tools to advance our lives. In fact, at the core of it, this is what makes us human.


Yet, strangely, for many of us, it would take less willpower to accomplish a complicated task like putting a man on the moon than to avoid sugary foods – despite a lunar mission’s outrageous complexity. Why is it that so often, throughout history, great men and women have been felled by their most basic instincts? We are a strangely fantastic creature whose promise knows no bounds but whose evolutionary devices are both a blessing and a curse. All told, our human story is not a tale of destructive instincts – it is one of triumphant ones. It is critical to remember that when our brains and bodies compel us to overeat, take painkillers, binge-watch a TV series at bedtime, or endlessly pursue greater social status, it’s not because our biology wishes us harm – it’s precisely the opposite: this is how we have survived, and we are doing exactly what our brilliant evolutionary history has taught us to do – hoard the good stuff. In viewing the problem this way, it becomes apparent that our so-called weaknesses are, in fact, not weaknesses at all – they are misguided strengths. Throughout this book, we have learned that those things that make us human – our desires and our wants – are deeply rooted in the way our brains and bodies have evolved to survive. These traits are essential to human life. However, we also know that we have the capacity to think, plan, and strategize. On the surface, it would seem that these two paths are irreconcilable – that we must either be hedonistic and self-centered or be cool-headed strategists, immune to our more basic instincts. The insinuation is that the latter is preferred and that we must learn to douse the flames of basic desire in order to get a grip on our health and wellbeing. The Renaissance proves instructional here, illustrating a pivotal historical period in which humans rose from the embers and ashes of unimaginable ruin to achieve great heights of health and achievement. 
This was not done through a shunning of our human instincts but rather an embracing of them. The Renaissance took ownership of our “humanness” at an individual level, studying our anatomy and biological processes at a degree of scientific granularity never attempted before. This gave rise to a greater understanding of disease processes, hygiene, and diet. From the darkness of the Plague, during which health was surely defined as “the absence of death,” the Renaissance began to change the understanding of health from one of basic survival to one of conscious health choices. This included a consideration of how lifestyle decisions can contribute to individual wellness and longevity. The conversation was not merely about how to be healthier – it was about understanding what has prevented us from being healthy. It’s always fascinating to hear people discuss how long they believe humans will survive. There are doomsday predictions, apocalyptic prognostications, statistical models, and optimistic portrayals of future exploration to habitable new worlds. Yet what is it that gives humans the luxury of reflecting on our past while attempting to engineer our future, when most other creatures on this Earth do not exercise any sort of free will at all? A curious thing about the human brain is that it comprises only about 2% of the body’s weight and yet demands about a quarter of the body’s total daily energy expenditure just to function. It turns out that one of the things that makes


human brains so unique and energy hungry is that our cerebral cortexes are more densely packed with neurons than those of any other species on the planet.5 For any other animal to support this size of brain and neural density, it would require near-constant eating throughout all of its waking hours. Why is this not the case for humans? One theory posits that it’s because humans can make lasagne. Joking aside, we are the only species that cooks its food into calorie-dense meals. It is this trait that provides us with the ability to ingest enough calories to support our unusually hungry cortexes.6 Yet another (mouth-watering) example of the uniquely human relationship between our social and biological worlds. It is surely obvious that our brains have provided us with the capacity to deliver tremendous cognitive achievements to a level that is truly unique in the animal kingdom. With our magnificent brains, we can build our own ecosystems that go far beyond the basic survival needs of our population. We touched on the Italian Renaissance as an extraordinary example of what we can achieve when we are at our best – combining science, art, and philosophy – to build educated, industrious, and enlightened societies. And yet, the same hunger and drive that inspire us to invent, to build, and to create spring from the very same ancient instincts that render us susceptible to too much of a good thing. We are evolved to survive, but in our modern world, these survival instincts have created a strange health paradox in which the very hardwired traits that are meant to keep us alive are now killing us. Through our stories and science, we have suggested that our evolutionary hardwiring is often linked to our brain’s reward circuitry as well as the degree to which this system becomes energized when surrounded by limitless bounty and stimuli. 
While this abundance certainly includes food – and particularly sugary foods – our hungry brain also feeds ravenously on the delights of the Internet, including social media, which preys upon our deeply hardwired need for status. We also discussed how these traits hamper our capacity to deliver safe healthcare in hospitals, with rates of human error in healthcare settings at epidemic – albeit largely preventable – levels. Hospitals, which are now among the most dangerous places on the planet, are particularly susceptible to our hardwired social instincts. Steep authority gradients among staff, a blaming culture, imperfect information, and relentless time and performance pressures compound to create an environment ripe for human conflict and misstep. The significance is that even in arenas such as modern medicine, in which we know the origin of diseases as well as the drugs and interventions to manage them, we are still burdened by our hardwired social roles, fears, and stress responses. The solution, advertised by many, is to create moments to detach from the daily routine, to experience nature, to meditate, and to find answers in the latest health trends. In the opening pages of this book, we described our current susceptibility to ridiculous health fads, in part due to our affinity for social media and its role in guiding us. While these techniques and life hacks may work for some, they are based on

5  For a great description of what makes human brains unique, see Suzana Herculano-Houzel’s TEDGlobal talk, “What is so special about the human brain?” 2013.
6  Ibid.


a principle of escaping or evading our instincts, a practice that simply does not hold up against our seemingly impenetrable human hardwiring. Workarounds are a good temporary solution and, like meditation, can be quite effective at changing the brain, but these are still ways of managing the symptoms of stress and anxiety rather than of understanding why we are the way we are and why we do the things we do. Many of these popular techniques derive from Eastern philosophical concepts of detachment. In such pursuits, the intent is to accept who we are and try to create new pathways to circumvent the parts of us that are manifesting in negative ways. There is little doubt that techniques like meditation are enjoying increasing scientific support, and they are a great tool for reducing anxiety and increasing focus – but much of our challenge remains in understanding how we have arrived at a point in our evolution at which meditation is necessary. Identifying barriers to wellbeing requires a degree of self-awareness, a sense that you, as a person, are worthy of improvement in mind, body, and spirit. This was one of the most profound shifts in Renaissance thinking. Previously, choices about health would have been viewed as entirely impractical amid the busy day-to-day grind of basic survival, work, and child-rearing. These questions of self-awareness are the very same ones we ask today. What is happiness, and how can I do a better job of achieving it? What should my goals be in life? What are meaningful relationships? Is there such a thing as work-life balance in a world that never shuts off? How do we protect our children? How do I improve myself, mentally and physically? Understanding what the barriers to health are and how they operate in our subconscious is just as important, if not more so, than understanding how to improve our health. 
Many of the themes discussed in this book have considered how our behavior is directed and controlled by our most basic instincts – with much of the focus on the way in which our brains and bodies seek and achieve reward. This reward circuitry need not be fed solely by cravings that inspire negative or destructive behaviors. The key is empowering ourselves to redirect the same misguided reward circuitry toward more aspirational goals. The Renaissance combined science, art, and philosophy to play upon our grander ideals while still capturing the same reward responses that our brains and bodies so desperately desire. Much has been made of the Renaissance’s use of art in feeding our hardwired instincts. While most of us can readily imagine the contribution that science, and perhaps philosophy (and its corollary, education), might make in bettering our lot, on the surface, art may seem less essential. Until recently, we had only to look at fine architecture, great paintings and sculpture, beautiful music, or the use of space and know that it somehow fed our souls. Thanks to modern technology, we now have clues as to why art is so significant – and why it is as important now as it was during the Renaissance. Living on the frontier of neuroscience offers us a novel and intimate view of how the human brain responds to environmental changes. It’s perhaps no surprise that functional MRI imaging shows a positive response in our reward circuitry when we experience appealing visual art, poetry, or music [14]. As researchers note, there are no known “survival” advantages to these aesthetic stimuli – and yet they act on our brain through the same reward pathways as do sugar, fat, and social media. In fact, the Italian phrase terribilità was used during the Renaissance to


describe the intense emotional reaction evoked by such great works as Michelangelo’s awe-inspiring 5800-square-foot fresco on the ceiling of the Sistine Chapel [15]. Through imaging, neuroscience is beginning to reveal how the complex human brain can be differentiated from the brains of other animals in the way it translates social and aesthetic stimuli into hedonic reward experiences [16]. The relatively new field of neuroaesthetics explores the effects of art on our brains. Not only do our brains thrive on aesthetic experiences, like art and music, but we are especially built to appreciate aesthetic experiences in social settings. When in groups, our brains tend to search for social cues in order to mirror the behaviors of those around us. In a theatre or live performance, we tend to experience a profound collective emotional connection, not only to the actual performance but to the strangers around us. We feed off each other’s laughter and tears when regions of our brain – the prefrontal cortex and the temporoparietal junction, in particular – sense the emotional responses of the crowd. Through our hardwired social perception, we have evolved to subconsciously tune into extremely subtle changes in emotional cues from those around us.7 It’s understandable that this would have been an evolutionary advantage – that our survival in groups would very much depend on being able to mirror the cues and behavior of others, both for social acceptance and for immediate survival. If our brains are truly hedonistic and seek reward in aesthetic experience just as in food, drugs, and sex, then this is surely the platform from which change must occur. As intelligent creatures, our first step is recognizing that we hold the key to opening the doors of change. And the process begins with awareness. In this book, we have shared many facts and figures, some of them distressing and some of them inspiring. 
And yet, the book is more than a collection of social curiosities. In reality, each chapter provides some context to shared themes that illustrate our current need for awareness – and specifically self-awareness – lest we wander aimlessly into poorer and poorer states of health. In Chap. 1 we considered the role of human error in hospitals as one of the leading causes of preventable death in America. The notion that a place of healing can be more dangerous to our health than going to war illustrates the confounding frontier of human health science. As we learned, it's not our knowledge of medicine or disease that is lacking, nor our technological innovation; it's our ability as humans to navigate the social complexities of working in teams, communicating during high-stress events, and trapping our own errors and lapses. Even simple solutions, such as operating room checklists or handwashing campaigns, are often met with resistance, not over their logic in preventing harm, but because of their challenge to ingrained social constructs, be they status, communication roles, organizational pressure, or pride. This chapter also introduced the idea that our health and our social world have become more inseparable than ever before and that any serious consideration of modern health can no longer view them as exclusive domains.

7  For a beautiful visual presentation on this, see: Kaufman et al. [17].


Chapter 2 shone a light on the world of modern indulgences, looking at four particularly dark horsemen – sugar, fat, salt, and stress – as powerful harbingers of ill health. Our highly evolved sensitivities for reward, which served us well for millions of years, are now running amok in today's world of anything, anytime. In this chapter we looked at why this is so, considering how the brain craves these modern sirens and how our reward pathways become dulled with repeated exposure, leading us into a downward spiral of dependency. In Chap. 3 we discussed how the brain develops, with particular emphasis on the youngest humans – our children. Without a doubt, this was one of the most difficult stories to tell: how stressful life experiences interrupt or distort healthy brain development in the most innocent and vulnerable members of society. While stress for children is certainly not new and has been an extremely unfortunate reality during times of war, famine, natural disaster, or family breakup, the modern-day stress imposed by digital screens and other lifestyle changes is reaching epidemic proportions. These concerns extend to emerging adults as well, who are entering their so-called "volitional" years, characterized generally as the transition from dependence to independence, to new adult relationships, study, work, and potentially shifting worldviews [18]. This phase of life has always been challenging, but today's unique pressures seem to be exacting a greater toll on emerging adults' ability to cope, as witnessed in the generation's epidemic levels of anxiety and depression. Chapter 4 looked at how our search for happiness, as an end state, is an often-misguided pursuit.
Our ancient brains are built to survive in social settings, but despite this key to happiness being plainly in view, we routinely succumb to more basic drives that push us to pursue higher levels of social status, overt displays of wealth, and advertised wellness (as opposed to genuine wellness). Posturing and posing as healthy and balanced individuals in a world of social media avatars is proving counterproductive, undermining the real happiness we so desperately desire. Sleep, the other side of our waking world and an essential component of life itself, was showcased in Chap. 5. Our modern world has challenged the way in which we manage our sleep – or fail to manage it – causing a host of undesirable health consequences. What we eat and drink, the artificial light that shines into our eyes, and our fatiguing around-the-clock work schedules are impacting our health in ways we are only just beginning to discover. Chapter 6 dove into the deep end by looking at how we measure reward in risky situations and how far we are willing to go to satisfy our hardwired needs. From leading an army to winning the girl, we have evolved to take certain risks if there is sufficient reward – much of this controlled by our behind-the-scenes hardwiring. We learned that some general level of risk-taking seems advantageous in getting ahead in life and achieving adequate levels of social status, while other, far riskier pursuits, like mountain climbing, are often the result of hormone levels and personality differences. Risk homeostasis is a critical way in which we evaluate risk and reward in our modern world, with Sweden's nail-biting change from left-lane to right-lane driving, and the unexpected reduction in accident rates that followed, providing the perfect example. Also fascinating: mandatory safety improvements like seat belts, ABS, and
increased illumination on roadways have resulted in more aggressive driving behavior, not safer behavior. Hospital workers who choose not to wash their hands, despite fully understanding the medical reasons they should, illustrate that the logic we attach to judging risk is often driven by our internal risk-and-reward system rather than by deliberate and conscious risk evaluations. This book is about our future, and specifically, our future health and wellbeing. Perhaps more than at any other time in our history, our modern health and wellness are intrinsically bound to our social world. This has been a theme throughout this book, with many of our stories and scientific discussions grounded in the central idea that today's emerging health crises are not solely clinical medical problems but social ones too. Our physical health is beholden to the decisions we make on a day-to-day basis, with many of those decisions determined by our social world. Right or wrong, this seems to be our carryover from ancient evolutionary processes. All societies experience varying degrees of socioeconomic stratification as a consequence of wealth or birthright. In many overtly hierarchical cultures, these tiers can be quite profound and difficult, or nearly impossible, to transcend, even with newfound wealth. This is far less true of our modern world, in which subjective social status often takes precedence. Anyone on YouTube with a funny dance or comical disposition can become an overnight social sensation, garnering near-instantaneous fame and fortune. This subjective nature of social status is a strong motivating force in promoting the role of social media and online social networking tools as easy means of establishing personal identity. We have always been social creatures. In 1992, anthropologist Robin Dunbar proposed that the maximum size of a social community in which everyone would know everyone else and could maintain consistent social relations was 150 people [19].
What has come to be known as Dunbar's Number was derived from his general observation that primate species living in larger groups tended to have a larger average brain neocortex. The theory suggests that, as social creatures, we seek out an optimum group size based on our brain's capacity to conceptualize the group's social network: a bigger social network requires a larger brain to track the many social identities and relationships within it. Extrapolating this to humans, Dunbar hypothesized that, given our existing neocortex size, the ideal average population for human communities was about 150 individuals – which, he pointed out, just happens to be about the size of the groups found in Neolithic farming communities. While we can appreciate our need for social networks, it is what happens within these groups that is most fascinating, specifically, our quest for social status – a hardwired instinct we share with our primate cousins. The five great apes – orangutans, gorillas, chimpanzees, bonobos, and humans – have been studied extensively with respect to social status and hierarchy. While mate-pairing strategies differ slightly – the great apes are generally polygynous, unlike the monogamous gibbons (and, quite debatably, humans) – most primates arrange their societies into larger groups and smaller subgroups, in which males are dominant. In gorillas, a silverback male rules sovereign through physical might and the threat of force and controls a harem of females. In humans, however, this is not
necessarily the case. It is estimated that sometime during the Pleistocene, the end of which was marked by the last period of glaciation some 12,000 years ago, humans began to move away from the physical dominance model toward a social intellect model, in which cunning and alliance-formation became more effective routes to alpha-male status than pure physical might. While our quest for social status is deeply embedded in our evolutionary story, the means by which we strive for it and attain it are not always conspicuous. In the pages of this book, we've dedicated much discussion to the prominent role social significance plays in our human hardwiring and how this dominant force drives our day-to-day decisions and happiness (or sadness), often without any conscious manipulation. Our modern world has added a layer of unforeseen complexity onto this Pleistocene-era affinity for social worth. Regardless of our shifting cultural landscape, our hardwired drives continue to make social ranking a fundamental requisite for survival. A three-decade study in the UK discovered that as one climbed society's social ladder, one's health improved. While access to wealth is the first variable that comes to mind (and it is surely important for providing basic life essentials), the study showed that money is not nearly as critical in determining long-term health as is relative social status [20]. Research indicates that relative social position affords two key attributes for health: first, increased control over one's life and, second, greater opportunities for social participation.8 It turns out that having some degree of agency with respect to one's day-to-day routine – as we saw with the Italian Renaissance – is paramount to health, and this "free will" is directly tied to subjective perceptions of social status.
In Korea, health officials have found that self-perceived social status, more than any other variable, drives health, particularly with respect to levels of smoking and alcohol consumption [21]. How many of us feel that we are in control of our habits – our diet, social media use, or sleep – and how many feel that those habits are somehow controlling us? So powerful is our social world that many of us will follow the crowd simply to gain acceptance, even if the decision to do so is completely irrational [22]. Of course, there may be sound reasons to follow the herd. If everyone is madly running in the same direction, maybe they know something that you do not – perhaps something that will save your life. Indeed, it is very difficult to resist the temptation to follow the apparent wisdom of the crowd when it is acting in unison. Psychologists call this informational influence, as it is based on our assessment that the group has information that can help us survive. A second type of influence is called normative influence, in which we are persuaded and motivated by a group identity and culture out of an eagerness to belong – a very powerful theme in modern social media. We can very quickly adopt the narrative of groups whose favor we seek, hoping that they will accept us. Normative influence results in conformity for the sole sake of belonging. Our modern world has magnified this appeal of belonging. Interestingly, studies of normative social influence demonstrate that conformity is much stronger in public than in private. In the famous Asch conformity experiments, when subjects were able to write their answers down privately, as opposed to stating them openly to the group (regarding the length of the lines), they were honest nearly all of the time, unencumbered by the social pressure to conform to the group narrative.

Today, so much of our world is public, from posting online photos of our food in restaurants, to wishing our children happy birthday via social media apps, to taking images of ourselves waking up in our beds. Popular social media memes are rendering our private sphere public, and for many, conformity with online social media trends means greater peer acceptance. Conforming to group norms and attitudes is also a hardwired trait. When teens and college students were shown social media images while in an fMRI scanner, the images that elicited the greatest response in the brain's reward center – the nucleus accumbens – were the ones with the most likes from others [23]. This suggests that we are not entirely free thinkers when it comes to stating what we like or dislike but are subconsciously evolved to follow the trends of the group.

Early in the book, we considered an allegory of sorts, characterizing our slow evolutionary adaptation and our rapid environmental change as akin to the fable of the tortoise and the hare. Today, the tortoise is seemingly losing the race, but as we all know, it need not end this way. While we are not proposing that our environment "fall asleep" like the hare, there is a chance that, through understanding our internal hardwiring and how it impacts our modern health outcomes, we will begin to empower our own health and wellness. Just as we witnessed with the Renaissance, our capacity to grow by using our intelligence and instincts is very much possible, if not our natural state.

8  See Dr. Michael Marmot's findings in his book, The Status Syndrome (Marmot [20]).
Those who seem to be the most successful at thriving in our modern world are not those who choose to disconnect or detach, but rather those who have a knack for harnessing our built-in reward system and social affinity for the betterment of themselves and society. This is really the secret to mastering our lives: understanding how our brains and bodies seek reward and are affected by the stimuli around us – and then taking it a step further, channeling that hardwiring in a positive direction rather than a destructive one. As we have discussed, this does not mean that we all attempt to summon unrealistic levels of willpower, as we know this does not ultimately work. Nor does it mean trying to live outside our brains and bodies, fighting our innate hardwired evolutionary drives; this does not work either. It means understanding why we want particular stimuli in our lives and then directing these same instincts toward, as the Renaissance did, greater achievement, joy, and individual fulfillment. It is important to underscore the myth of continuous willpower, as this is the conventional wisdom we fall back on when we try to avoid bad things. As discussed with the cookie and radish experiment, we are simply not designed for discipline without end. In fact, we now know that our performance degrades under relentless willpower and improves with metered indulgences, meaning we perform better when we allow ourselves some downtime. What is important, then, is striking a balance between work and pleasure. This too seems to be essentially human and is a strategy easily adopted into our everyday lives. Many of the world's most famous writers, such as Hemingway, Steinbeck, Munro, Dickens, and Twain, wrote in the morning hours
and enjoyed other non-work pursuits in the afternoon, often involving walks and social interactions. This separation of task from more idle pursuits is a pattern that seems to help sustain creativity and perseverance. In addition, when these writers put down the pen for the day, they switch completely away from their work environments. Sir Winston Churchill too, even amidst his toughest years, took time to walk his gardens, take daily baths, and enjoy wine, cognac, martinis, and cigars. How many of us remain connected to our devices and computers after work hours have ended, never granting ourselves the freedom to rebuild and regenerate? Willpower, which is certainly at the core of our perseverance to succeed, requires routine periods of downtime and non-work indulgences in order to maintain a high level of productivity and creativity. Sir Richard Branson has often been cited as one of the most successful individuals on the planet, not solely because he is a brilliant billionaire but because he has found a vibrant formula for life by which he lives every day. And Branson's mantra is no secret: his rule for whether something is worth pursuing is whether it is fun. Not only is Branson relentlessly customer-focused in his business affairs – routinely trying to experience his companies from a customer's perspective and even cold-calling his own customer service lines – his requisite condition for business and life is that the pursuit be thrilling. It's a marvelous philosophy. Of course, many may be thinking, "Isn't it nice when you're a billionaire and everything you get to do is fun?" Yet Branson was not always wealthy. He dropped out of high school to start a magazine, took risks, and put his life and reputation on the line – as do all entrepreneurs – in order to follow his dreams. What Branson teaches us is that we don't need to detach from our inner drives to be successful.
Instead, we can throw a lasso around our own reward systems and harness our basic drives in order to build a better version of ourselves. This idea of using our evolutionary gifts, instead of trying to evade them, may well be the secret to happiness. We have seen that those who have lived the longest, healthiest, and happiest lives are those who have learned to embrace family and relationship connections – fundamental attributes of human life. Pushing ourselves into isolation or into states of never-ending competition (no matter how veiled) is the antithesis of a healthy and happy life. Today, we are seeing magnificent scientific achievements but are at risk of missing our opportunity for progress because we do not fully appreciate our deeper instinctive core – our hardwiring. We mustn't bury this humanness or attempt to muster some sort of resilience to combat it. Living a passionate life with real social connections is the easiest and most effective route to greater health and longevity. We have seen evidence, time and again, that finding real meaning in relationships and connections within family, between friends, or in pursuits shared with one's community limits loneliness, reduces life's worries, and tempers our pursuit of status. Moreover, it makes us healthy. From the Italian immigrant families whose tight-knit social connections created an uncanny resistance to cardiovascular disease, to the marvelous populations in which longer, life-extending telomeres are uniquely prevalent, we have witnessed, together, the stories and science of our hardwiring. To borrow from Catherine de Medici's understanding of how to turn disadvantage into
advantage in the French Court, we too must turn the very nature and reality that threaten to harm us into something that helps us thrive. Ours is a journey that begins with knowledge. Understanding who we really are, and what that means for life in today's world, is the most important challenge facing our species. We cannot escape ourselves – nor should that ever be our aim. We must do what we have always done: learn, adapt, and thrive. It is what makes us unique, it is what makes us human. It is what makes you.

References

1. Benedictow OJ. The black death: the greatest catastrophe ever. Hist Today. 2005;55(3):42–9.
2. Cavallo S, Storey T. Healthy living in late renaissance Italy. Oxford: Oxford University Press; 2013.
3. DeWitte SN. Mortality risk and survival in the aftermath of the medieval black death. PLoS One. 2013;9(5):e96513.
4. Thurber CH. Vittorino Da Feltre. Sch Rev. 1899;7(5):295–300.
5. Franco V. Venetian courtesan poet. University of Chicago Library, 1546–1591. https://www.lib.uchicago.edu/efts/IWW/BIOS/A0017.html.
6. Biow D. The culture of cleanliness in renaissance Italy. Ithaca: Cornell University Press; 2007.
7. McCloskey D. The bourgeois virtues: ethics for an age of commerce. Chicago: University of Chicago Press; 2006.
8. Konuk N, Turan NG, Ardaili Y. The importance of urbanization in education. Eurasia Proc Educ Soc Sci. 2016;5:232–6.
9. Zimmerman EB, Woolf SH, Haley A. Understanding the relationship between health and education. In: Population and health: behavioral and social science insights. Rockville: Agency for Healthcare Research and Quality.
10. Prince F. How table manners as we know them were a renaissance invention. National Geographic History Magazine, 2017.
11. Hampton E. How World War I revolutionized medicine. The Atlantic, 24 Feb 2017.
12. Granados JA, Diez Roux AV. Life and death during the Great Depression. Proc Natl Acad Sci U S A. 2009;106(41):17290–5.
13. Irwin N. What was the greatest era for innovation? A brief guided tour. The New York Times, 13 May 2016.
14. Vessel EA, Starr GG, Rubin N. The brain on art: intense aesthetic experience activates the default mode network. Front Hum Neurosci. 2012;6:66.
15. Coleman SW. Michelangelo Buonarroti: sparks will fly. The Art Minute, 10 June 2012.
16. Berridge KC, Kringelbach ML. Affective neuroscience of pleasure: reward in humans and animals. Psychopharmacology. 2008;199(3):457–80.
17. Kaufman SL, et al. This is your brain on art. The Washington Post, 18 Sept 2017.
18. Wood D, et al. Emerging adulthood as a critical stage in the life course. In: Halfon N, Forrest C, Faustman E, editors. Handbook of life course health development. Cham: Springer; 2018.
19. Dunbar RIM. Neocortex size as a constraint on group size in primates. J Hum Evol. 1992;22(6):469–93.
20. Marmot M. The status syndrome. New York: Henry Holt and Co; 2004.
21. Kim Y. Perceived social status and unhealthy habits in Korea. Drug Alcohol Depend. 2019;194:1–5.
22. Pryor C. We follow social norms even when they are arbitrary and useless. Behavioural and Social Science, 18 Dec 2018.
23. Sherman LE, et al. Peer influence via Instagram: effects on brain and behavior in adolescence and young adulthood. Child Dev. 2018;89(1):37–47.

Index

A Adam and Eve painting, 140 Adaptive solutions, xviii Addison’s disease, 106, 107 Adenosine, 90 ADHD, see Attention-deficit hyperactivity disorder (ADHD) A Disease Called Childhood:Why ADHD Became an American Epidemic (Wedge), 109 Adrenal glands, 47 Adult-onset diabetes, 32 Advocacy, 18, 20 Advocate, 18 African hunter-gatherer, 25 Air Florida, 7 Air Florida Flight 90, 5, 6 crash, 8 Alcohol, 96, 97 Alcohol dehydrogenase (ADH), 98 All Party Parliamentary Group on Body Image, xvii Allostasis, 84, 85 load, 84, 85 Ambien, 105 American Academy of Pediatrics (AAP), 103 American adults, obese, 33 American Psychological Association, x, xvi Amphetamines, 106, 108, 109 Amplifiers, 120 Amygdala, 47, 48 Anchor sleep, 94 Ancient Chinese medicine, 140 Anterior cingulate cortex (ACC), 67 Anterograde amnesia, 3 Anti-depressants, 59 Anti-icing system, 5 Anti-lock braking systems (ABS), 130 Anti-predator, 119

Antisocial behavior, 128 Anxiety, xix, 61 Appeasement, 122 Arnette, Alan, 125 Artificial lights, 146 Asch conformity experiments, 153 Asch, Solomon, 10, 11 Astrocytes, 90 Athletes, 40 Attention-deficit hyperactivity disorder (ADHD), 51, 52, 108, 109 diagnoses and prescription antipsychotics, 54 dramatic rise in, 52 lowest rates of, 52 parents of, 53 Aurelius, Marcus, 120 Automobile seatbelts, 130 Aviation Safety Reporting System, 15 Avoid and advocate, 21 Axons, 34 B B-19, 29, 30 Baby boomers, xvii BAC, see Blood alcohol content (BAC) Bad drinks, 31 Barrett, Robert, xix Baumeister, Roy, 39 Becker, Anne, 80, 81 Behavioral changes, 26, 30 Behavioral choices, 28 Behavioral Risk Factor Surveillance System (BRFS), 99 Behavioural adaptation, 130 Behavior experimentation, 29 Benzedrine, 106 Bern, Amidst, 43




158 Better Life Index, 72 Big T, 125 Big thrill-seeking, 125 Big T-positive sensation-seekers, 127 Binge eating, 33 Biohacking, 109 Biow, Douglas, 142 Bird, Frank E., Jr., 16 Bissonnette, Matt, 105, 106 Blackburn, Elizabeth, 77 Black Plague, 137, 140, 145 Blogging, 121 Blood alcohol content (BAC), 97, 98 Blood flow, 47 Blood pressure, 105 Blue Zones, 75, 76 biomarkers, 77 cultures, 76 Body badges, xvi Body confidence, xvii Body-part selfies, xvi Boeing 787 Dreamliner, 7 Boredom susceptibility, 124 Bourgeoise Ethics, 142 Boyce, Christopher, 73 Brain development, 151 Brain electrodes, 29 Branson, Sir Richard, 155 British Air, 116 British Household Panel Survey, 73 British Intelligence, 115 Broken heart syndrome, 82 Brookings Institute, 104 Brophy, Marcia, 50 Bruhn, John, 79 Bubonic, 137 Buettner, Daniel, 75, 76 C Calgary Board of Education, 104 Canadian Space Agency, 10 Cannon, Walter, 129 Capa, Robert, 117 Cardiac MRI, 3 Cardiogenic shock, 82 Carskadon, Mary, 103 Cas-9, 2 Caucasian women, 26 Centre for Generational Kinetics, 55 Centre on the Developing Child, 48 Challenger, 12 Child abuse, long-term damage and cognitive impairment, 50

Children, viii antipsychotic drugs, 51 Australia against, 45 Canada’s residential school system, 45 chronic toxic stress, 47, 49 city-level destruction, 50 dark relationship with, 44 in DRC and Syria, 49 England’s deportation of, 45 “laundry slaves” in Ireland, 45 physical effects, 48 physiological perspective, 48 stress response, 47 in Switzerland, 43 in Syria, 50 Chromosomes, 76, 77 replication, 77 Chronic insomnia, 97 Churchill, Winston, 117, 155 Circadian rhythms, 90, 96, 110 Citalopram, 59 Classrooms, 57 Clustered regularly inter-spaced short palindromic repeats (CRISPR), 2 Cockpit Resource Management (CRM), 6, 8, 9 Cognitive tests, 34 Cold turkey, 38 Collectivism, 74, 76 Commission on the Space Shuttle Challenger Accident, 12 Communication, 14 Complex emergency, 71 Conditional stimulus, 63 Conduct (behavioral) issues, 51 ConocoPhillips Marine, 16 Consilience, 68, 69 Contract children, 44 Cooper, Bradley, 109 Copernicus, 139 C-reactive protein (CRP), 34, 35 Creativity, 125 Crew compartment, 12 Crew Resource Management, 8, 10 Cuddle hormone, 63 Cultural masculinity, 74 Cumulative stress, 85 Curbing stress, 68 Cynicism, xv D da Vinci, Leonardo, 139, 144 Dark Triad, 66 Darwin, xi

Index DC-8, 6, 7, 10 D-Day, viii, 115, 116 extraordinary risk of, 117 de Medici, Catherine, 143, 144, 155 Dead sleep, 94 Death-defying heroism, 119 Decision-making, viii, 118, 134 Dehydration, 35, 36 role of, 36 Democratic Republic of Congo (DRC), 49 Depression, v, 61 Desirability, 67 Detox (DT), 96 The Developing Mind (Siegel), 68 Dexamphetamine, 108 Disinhibition (DIS), 123, 124 DNA information, 77 Dopamine, 28, 29, 33, 124 Dopamine-regulating enzymes, 124 Dove, Rachel, 59 Dual-inheritance theory, xii Dugatkin, Lee, 119 Dunbar, Robin, 152 Dunbar’s number, 152 Dunkley, Victoria, 51 Dysmorphia, 81 Dystopia, 71 E The Easterlin Paradox, 72 Easterlin, Richard, 72 Ebbers, Bernard, 127 Education level, 25 Ego-Identity, 64 Eisenhower, Dwight D., 115, 116 Electroencephalogram (EEG), 49 Electrolyte imbalance, 35 Electronic Screen Syndrome (ESS), 51, 53 Emergency medicine, xix, xx Emerging epidemic, 32 Emotional eater, 38 Empowerment gap in hospitals, 21 Energy formula, 107 Erasmus of Rotterdam, 143 Evolutionary hardwiring, 148 Evolutionary mismatch, vi Experience seeking (ES), 123 F Fall asleep, 154 Farley, Frank, 125

159 Fasto, Andrew, 127 Fat, xxi Fear of missing out (FOMO), 101, 102 Federal Aviation Administration, 15 Female sexual agency, 142 Festinger, Leon, 80 Fight-or-flight alarm reaction, 127 Fight-or-flight stress response system, 47 First pass metabolism (FPM), 98 First sleep, 94 Fisher, Carrie, 82 Flow state, 126 Flutie, Doug, 82 Forebrain, 45 Four humors theory, 139 Fox, Claire, 58 Francescutti, Louis Hugo, xix Franco, Veronica, 141 Frankenstein effect, ix Franklin, Benjamin, 144 Free sugar, 31 Fuller, Buckminster, vi Functional deficiencies, 49 Functional-MRI imaging, 149 Future health, 152 G Gallagher, Thomas, 17 Gallup World Poll, 73 Gene-culture coevolution, xii Generation, 55 characteristics and traits of, 55 Generation C, 56 Generation Snowflake, 58, 59 Generation X, 55, 65 Generation Y, 55, 59 Generation Z, 55 Generational traits, 55 Genetic code, viii Genome mapping, 2 Germain, George, 16 Ghrelin, 100 Gladiator, 119, 120 Goffman, Erving, 62 Gogniat, David, 43, 44 Good reporting systems, 15, 17 Gragossian, Erin, 66 Gratification, 37 Great Depression, 144 Gross domestic product (GDP), 72 Growing theory, 61


160 H Hand hygiene, 19, 21 Handwashing, 18, 19, 132, 133 campaigns, 132 compliance, 133 Happiness, vii, xxi, 71–73, 85, 126, 151, 155 Happiness Index, 71 Hardwired brains, viii Hardwired health, xxi Hardwired social perception, 150 H-Day, 128, 129 Healthcare knowledge, vi Healthcare workers, ingredients necessary, 20 Heath, Robert, 29 Health Sciences Centre, 3 Heinrich, H.W., 16 Hidden brag, 65 High reliability organizations (HROs), 12 High-risk courtship behaviour, swirling cocktail of, 120 High sensation-seeking, 124 Hilary, Edmund, 118 Hindbrain, 45 Hippocampus, 48 Hitler, Adolf, 106 Hofstede, Geert, 74 Homeostasis, 129 sleep drive, 90 Homosexual, 30 Honegger, Turi, 44 Hormones, 106 Hospital abdominal surgery, 19 doctors and nurses efforts, 2 emergency room, 3, 4 empowerment gap in, 21 errors, 14 gall bladder issues, 2 genome mapping, 2 investigation of Warhol’s death, 3 killing people, crashes vs., 5–10 labyrinth, journey into, 1 long QT interval, 3 medical care, 1 medical error, 2, 4 Medscape survey of physicians, 17 natural causes, 1 non-technical attributes of patient care, 5 robust health and death, 18 Hospital acquired infections (HAI), 19, 132 Hospitals, 148 human error in, 150 Human behavior, xix Human development, 83

Human error in hospitals, 150 Human growth hormone (HGH), 26 Humanism, 139 Humanist drive, 142 Hume, David, 134 Hunter-gatherer, 30 Hydration, 36 Hydrocodone, 26 Hypernatremia, 35 Hypo-functioning reward circuitry, 33 Hypothesis, 130 Hypoxia, 82 I Iarovici, Doris, 60 I Find that Offensive (Fox), 58 iGeneration, 55 Image management, 121 Individualism, 74 Industrial Accident Prevention: A Scientific Approach (Heinrich), 16 Inflammation, 35 Insomnia, 98, 99 Instagram, xvi Insulin, 99 Intelligence Quotient (IQ), x Intelligence sphere, 116 Intentional noncompliance, 20 International Space Station (ISS), 108 Ireland, laundry slaves, 45 Islamic medicine, 140 Isolated confined environment (ICE), 108 Italian immigrants, 78 Italian Renaissance, 142, 145, 148 J Jack Project, 60 Jacobson, Max, 107 Joint Commission, 19 Jordan, Bernard, 81, 82 Journal of Clinical Sleep Medicine, 104 Jung, Carl, 107 K Kennedy, John F., 106, 107 Kerrebrock, Chris, 122 Kinder der Landstrasse, 45 Kindlifresser, 43 Knowledge, 146 Kronborg, Peter, 129

Index L Lactase, xii Leapfrog Group’s Hospital Safety Score, xviii Leauenberger, Marco, 44 Leptin, 100 Life expectancy, 26 Life satisfaction, 72, 73 levels of, 74 Lifespan, 25 determinant of, 25 rates of change in, 26 sharp and unexpected decline in, 26 Limitless, 109 Lingering, xiv Longevity, 25, 26 Long QT interval, 3 Long-term stress response, 47 Lovell, Jim, 108 M Machiavellianism, 66 Magnetic resonance imaging (MRI), 49 Makary, Marty, 21 Making decisions, 10 Maladaptation, 61 Mallory, Marshall, 116 Man-made solutions, xi Marshmallow experiment, 37 Masculine cultures, 74 McAuliffe, Christa, 12 McDonald, Allan, 11, 12 McGonical, Kelly, 36, 39, 40 Medical care, 1 Medical error, 2, 4 Mediterranean diets, xv Medvedev, Andrei, 95 Meformers, 63, 64 Melatonin, 103 Memory loss, 3 Methamphetamines, 106, 108 MI5, 115 Michelangelo, 139, 144 Microglia, 46 Microsleeps, 91 Midbrain, 45 Midday napping, 94 Millennials, xvi, 65, 83, 101, 102 Mindulgences, 151 Mischel, Walter, 37, 38 Modafinil, 109 Modern health, xix Modern medicine, 26 Mono-amine oxidase, 124

Moral Molecule, 62
Morell, Theodor, 106
Multi-taskers, 56
Myelin sheath, 34

N
National Academy of Sciences, 91
National Highway Traffic Safety Administration, 105
Natural causes, 1
Naturalism, 139
Nelson, Charles, 48
Neonatal abstinence syndrome, 27
Neurons, 46
Neuroscience, 150
Neurotransmitters, forms of, vii
“Never events”, 14
Newborn, 46
Nicoya Peninsula, 77, 78
Nicoyans, 78
Nietzsche, Friedrich, 126
Nomophobia, 102
Noncompliance, 20
Noninstitutionalized psychopaths, 127
Nootropics, 109
Norgay, Tenzing, 118
Normative, 153
Norway, 75
Norwegian approach, 75
Norwegians, 75
Nowak, Lisa, 108

O
Obesity, 99, 100
  American adults, 33
  causes, 34
Obesity-related issues, 105
Obstructive sleep apnea, 99
One Million Nights, 105
Operating room (OR) checklist, 13
Operation Fortitude, 115
Operation Overlord, 115
Opioids, 26, 27
  drug deaths and, 27
  soothing and euphoric, 27
Overreaction, 48
Oxycodone, 27
Oxytocin, 62

P
Paleo diets, xv

Panagiotopoulos, Dina, 51
Paradoxical sleep, 95
Parkinson’s disease, 65
Patel, Ronak, 89
Paulhus, D., 66
Peltzman Effect, 131
Peltzman, Sam, 131
Penenberg, Adam, 62
Pet theory, 143
Physical health, 152
pilot-CEOs, 126
Plague, 137
Plasticity, 48
Polyphasic sleep, 93
Positron emission tomography, 124
Pre-agrarian brains, ix
Prefrontal cortex (PFC), 34, 36
Premier somme, 93
Presentation of Self in Everyday Life, 62
Presenteeism, 84, 92
Pre-systemic metabolism, 98
Preventable errors, 1
  medical error, 4
Preventive medicine, 140
Primates, 152
Pro Juventute, 45
Procedural drift, 20
Procedural intentional non-compliance (PINC), 19, 131, 133
Psychopaths, 127
Psychopathy, 66
Public health emergency, visible and poignant symptom of, 27
Pujol, Juan, 115
Punch-clock workers, 27

R
Racetams, 109
Rapid Eye Movement (REM) sleep, 95–97
Reagan, Ronald, 12
Reason, James, 15
Rebirth, 138, 145
Renaissance, 138, 147, 149
  medicine, 140
  qualities of, 142
Renaissance art and science, 140
Reporting errors, 15
Reporting system, 15
Reptilian brain, 45
Reputation management, 64
Research, 153
Resilience, xiv
Reversal theory, 123

Reverse-stalking behaviour, 119
Risk analysis, 134
Risk compensation, 130
Risk homeostasis, 130, 151
Role Confusion, 64
Rosen, Larry, 55, 56
Roseto Italians, 78, 79
  citizens of, 78
  social fabric of, 80
  study of, 79
Roseto Effect, 79
Routine procedural intentional noncompliance, 20
Rubber O-rings, 11

S
Safety behaviour, 131
Safety triangle, 16
Salad eating, 143
Salt, xvi, xxi, 35
Sanguine temperament, 140
Schizophrenia, 52
Schmidt, Michael, 131
Science, viii, 146, 148
SEAL Team, 106
Second generation antipsychotics, 51
Selective serotonin reuptake inhibitor (SSRI), 59
Selfies, 64, 66, 121
Self-regulate emotional health, inability to, 103
Senescence, 77
Sensation-seeking, 123, 124
  Scale, 123
  trait, 125
Sentinel events, 16
Sertraline, 59
Serve and return, 46
Sexual display strategy, 120
Shared situational awareness, 10
Shortwave blue light, 103
Siegel, Daniel, 68
Sinclair, Brian, 3
Sleep, vii, xxi, 89, 151
  cycles, 93
  first sleep and second sleep, 93
  lack of, 99
  length and sleep disturbance, geographic variance in, 100
  need for, 98
  patterns, 91
  process of, 90
  and REM, 98
  stages, 95, 96

Sleep architecture, 95
Sleep deprivation, 91, 100
Sleeping pills, 105, 108
Sleep onset latency, 97
Sleep-phase delay, 103
Sleep-robbing disturbances, 93
Sleepscore Labs, 105
Slow-wave sleep, 95
Smart drugs, 109
Social acceptance, 10
Social ascension, 64
Social bonds, 85
Social change, xix
Social comparison theory, 80
Social crowding, 63
Social determinants of health, 144
Social display, 121
Social drives, viii
Social media, ix, 62, 65, 67, 68, 110, 145, 148
  addicting, 62
  memes, 154
Social networks, 85, 152
Social sciences, vi
Social significance, 153
Social world, vi, vii, xix
Societal/cultural change, xii
Societal evolution, xi
Socioeconomic status, 152
Sodium, xii, 35
Sodium-pump, 35
Space Shuttle Challenger, 11
Speed pills, 107
Sprezzatura, 143
Stalling, 90
Stanford marshmallow experiments, 37
Stepnowsky, Carl, 91
Stick-shaker, 6
Stoics, 127
Stress, viii, xix, xxi, 80, 105
  for children, 151
  indicators for, 100
  response, 47
Striatum, 65
Successful psychopath, 127
Sugar, xii, xv, xxi, 31
  brain health and mental acuity, 33
  digestive system and, 31
  in food and drink, 31
  soft drinks, 31
Suprachiasmatic nucleus (SCN), 102
Survival, 149, 150
Surviving challenge, xiv
Swiss cheese model, 15, 18

Synapses, 46
Syrian civil war, 50

T
Tactical indulgences, 41
Takotsubo cardiomyopathy, 82
Technology, xi, xiii, 55, 146
Tech-savvy generations, 56
Telomerase, 77
Telomeres, 77
Temperament, 140
Thrill & Adventure behavior, 124
Thrill & Adventure Seeking (TAS), 123, 124
Tinder, 66, 67
Tombstone safety, 7
Total body water (TBW), 35
Tourette syndrome, 52
Trait amplification, vii
Trigger Warnings, 59
Truth, Lies, and O-Rings (McDonald), 12
Tullock, Gordon, 131
Tullock’s Spike, 131
Twitter, 63
Type 2 diabetes, 31, 32, 99
  characteristics, 32
  prevention, 32

U
Urbanization, 142, 143

V
Vampire hormone, 103
Ventral tegmental area, 28
Verdingkinder, 44, 47, 49, 57, 68
Vicodin, 27
Vittorino education, 140, 141
von Clausewitz, Carl, 117
Voxel-based morphometry, 67

W
Waldinger, Robert, 83
Walker, Matthew, 100, 101
War on drugs, 27
Warhol, Andy, 2
Wedge, Marilyn, 109
Wellbeing, vi, xix, 140, 149, 152
Well-meaning health professionals, 54
White matter, 34
Wickwire, Jim, 122, 123

Williams, M., 66
Willpower, 37, 154, 155
  cookie versus radish experiment, 39
  determinant of, 38
  levels of, 37
  mental conditioning, 40
  Rochester experiment, 38
  and self-control, 39, 40
  short-term rather than long-term thinking, 38
  test of, 41
Wilson, E.O., 68
Windeler, Jack, 60
Winnipeg emergency room, 3
The Wisdom of the Body (Cannon), 129

Wolf, Stewart, 78–80
Work and sleep, 44
World Happiness Report, 71, 72

Y
Yo-yo dieting, 33

Z
Zak, Paul, 62
Zald, David, 124
Zoloft, 59
Zolpidem, 105
Zuckerman, Marvin, 123, 125