Cosmic Rays: Multimessenger Astrophysics and Revolutionary Astronomy (Astronomers' Universe). ISBN 3031385594, 9783031385599


English, 220 pages [214], 2023




Alessandro De Angelis

Cosmic Rays: Multimessenger Astrophysics and Revolutionary Astronomy

Astronomers’ Universe

Series Editor Martin Beech, Campion College, The University of Regina, Regina, SK, Canada

The Astronomers’ Universe series attracts scientifically curious readers with a passion for astronomy and its related fields. In this series, you will venture beyond the basics to gain a deeper understanding of the cosmos—all from the comfort of your chair. Our books cover any and all topics related to the scientific study of the Universe and our place in it, exploring discoveries and theories in areas ranging from cosmology and astrophysics to planetary science and astrobiology. This series bridges the gap between very basic popular science books and higher-level textbooks, providing rigorous, yet digestible forays for the intrepid lay reader. It goes beyond a beginner’s level, introducing you to more complex concepts that will expand your knowledge of the cosmos. The books are written in a didactic and descriptive style, including basic mathematics where necessary.

Alessandro De Angelis

Cosmic Rays: Multimessenger Astrophysics and Revolutionary Astronomy
With a Foreword by Francis Halzen

Alessandro De Angelis
Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Padua, Italy
LIP/IST, University of Lisbon, Lisbon, Portugal

ISSN 1614-659X  ISSN 2197-6651 (electronic)
Astronomers’ Universe
ISBN 978-3-031-38559-9  ISBN 978-3-031-38560-5 (eBook)
https://doi.org/10.1007/978-3-031-38560-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

You imagine that what you cannot understand is spiritual or does not exist, instead there are in the Universe a million things that, to be known, would require a million different organs. I, for example, understand through my senses the cause of the attraction of the magnet, that of the ebb and flow of the sea, and what animals become after death; but you cannot come to these lofty conceptions except through faith, for you lack the conditions to understand these wonders, just as a blind man from birth could not imagine the beauty of a landscape, the color of a picture, the shades of the iris, but he will imagine them now as something palpable, now as food, now as sound, now as smell. If I were to explain to you what I perceive, because of your lack of senses you would imagine it as something that can be heard, seen, touched, smelled or tasted, yet it is none of these things. Cyrano de Bergerac, “Voyage dans la lune”, Paris 1657

To Luc Pape

Foreword

Based on the size of the Sun and given the rate at which it must be contracting to transform gravitational energy into its radiation, Lord Kelvin concluded at the end of the nineteenth century that the Sun cannot be more than tens of millions of years old. His estimate was correct but directly in conflict with known geology. Moreover, it did not leave sufficient time for Darwin’s evolution to have run its course. The puzzle was eventually resolved when Becquerel accidentally discovered radioactivity. In an illustrious career, Lord Kelvin also contributed to the development of the electroscope, an instrument that allowed early pioneers in the field to establish that radioactivity at the Earth’s surface originated from particles that are mostly protons, dubbed cosmic rays, reaching us from sources beyond our atmosphere. More than a century later, the veil is finally being lifted on their origin with multimessenger observations involving cosmic rays, neutrinos, and gravitational waves and instruments covering all wavelengths of light, from radio waves to gamma rays. Neutrino telescopes observe the Milky Way glowing in neutrinos of cosmic ray origin. However, where precisely they originate remains a puzzle; Zwicky’s guess dating back to the 1930s that they originate in supernovae is the stuff of textbooks, but the evidence for his claim is still missing. Alessandro De Angelis’s book covers this long journey, refreshingly reminding us at every step that instrumentation drives discovery and that the journey continues, more exciting than ever. Despite the puzzling gap between the predicted age and the actual age of the solar system of 4.5 billion years, Lord Kelvin argued that all physics had left


to do was dot the i’s and cross the t’s. Instead, the discrepancy provided a hint of totally new physics to be discovered. Along the sinuous and still incomplete journey toward identifying the sources of cosmic rays, cosmic ray detectors serendipitously revealed the signatures of the muon and the pion. It was the beginning of an exhilarating ride, from the revelation of a chaos of particles and resonances in the 1960s to the quark “model” and the emergence of the Standard Model, capped by the discovery of the Higgs boson, which gives the elementary particles their mass. The first paper I ever read as a student was George Zweig’s highly speculative CERN preprint on “aces,” now called quarks. It came with more warnings from my supervisor than the average medication; these days, quarks are routinely featured in introductory physics books, along with the levers and pulleys of the first chapter. Besides particle physics, this book also weaves the development of cosmology into the story of cosmic rays. Today the topics share part of their intellectual frontiers: the nature of dark matter and dark energy. As Alessandro De Angelis points out, not since the time of Newton have physics and astronomy been intellectually closer, and that may be especially the case for today’s cosmic ray physics, routinely referred to as astroparticle physics. My office was one floor below that of Monseigneur Lemaître; strangely, I only knew of his existence because at night I used a computer that his research group had built. The discovery of the microwave background in 1965 brought him fame, and the juggernaut that is now precision cosmology has transformed the field from a boutique science into a discipline pushing the intellectual frontlines of both physics and astronomy today. Already as an undergraduate student, I quickly found out that doing research on quarks, cosmology, or black holes was not a wise career move, but look what happened; where black holes are concerned, you can look at Fig. 4.9.
Based on this experience, I should refrain from predicting the future: the science will indeed proceed with detours, dead ends, false alarms, missed opportunities, and unexpected surprises, as it always does. What is guaranteed is that the journey to new horizons will be exhilarating. Madison, Wisconsin

Francis Halzen

Francis Halzen is the Principal Investigator of the IceCube project, and Hilldale and Gregory Breit Professor in the Department of Physics at the University of Wisconsin-Madison.

Preface

The story of this book originates from L’enigma dei raggi cosmici, a booklet in Italian published in 2011 that enjoyed a fairly wide distribution. The publisher asked me to write a second edition because many things have changed since the first edition was published. Multimessenger astronomy, just a dream in 2011, has become a reality thanks to progress in the study of gamma rays and the detection of gravitational waves and neutrinos of cosmic origin. This new astronomy of the twenty-first century is merging with particle astrophysics, reestablishing a unity between physics and astronomy that had been lost after Newton. Curious readers need to understand this new evolution, explained here in rigorous yet straightforward terms for the public. I could not resist the temptation to make many changes to my old book in Italian. The result is that this has become a completely new book: readers will find here a little less of the history of the field, and much more information on science today and in the future.

Paris, France
June 2023

Alessandro De Angelis


Introduction

In the early 1900s, scientists discovered the existence of natural radioactivity on Earth, and wondered where this radioactivity originated. Initially, the most widely accepted hypothesis was that it was due to radiation from Earth’s crust. The solution to this enigma has been, and still is (we will see that some important questions remain open), one of the most exciting intellectual enterprises in the history of science. It led to the discovery that much of the radiation originates from extraterrestrial sources, and this extraterrestrial radiation was named “cosmic rays”. We know today that cosmic rays are particles (mostly hydrogen nuclei, i.e., protons) that strike the Earth’s atmosphere from apparently every direction, with velocities close to the speed of light. Their energies reach the highest observed in nature, up to one hundred million times the energy of the particles accelerated by the Large Hadron Collider at CERN in Geneva. They must therefore come from very powerful cosmic accelerators characterized by strong gravitational forces, probably residing near compact objects: supernova remnants (remnants of the explosion of stars) or supermassive black holes (millions to billions of solar masses). Cosmic rays of lesser energy come from the Sun itself. We recall that a black hole is an object with such a strong gravitational attraction that nothing, not even light, can move away from its surface. This occurs when the escape speed (i.e., the minimum speed necessary to escape from its attraction) is higher than the speed of light: an escaping particle would have to move faster than light itself, which is impossible.
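Setting the Newtonian escape speed equal to the speed of light turns this definition into a number, the Schwarzschild radius r = 2GM/c². The short sketch below is a back-of-the-envelope illustration with standard constants (not taken from the book), evaluated for one solar mass and, as an example, for a black hole of four million solar masses:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def escape_speed(mass_kg, radius_m):
    """Newtonian escape speed: sqrt(2*G*M/r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

def schwarzschild_radius(mass_kg):
    """Radius at which the escape speed equals c: r = 2*G*M/c^2."""
    return 2 * G * mass_kg / c**2

# A solar-mass black hole would have a horizon of about 3 km
print(schwarzschild_radius(M_sun) / 1e3)          # ~2.95 km
# A supermassive black hole of 4 million solar masses
print(schwarzschild_radius(4e6 * M_sun) / 1e3)    # ~1.2e7 km
```

The Sun would have to be squeezed within about 3 km to become a black hole; even for the supermassive case the horizon is of the order of ten million kilometers, tiny on galactic scales.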


The principle of the mechanism that allows extraterrestrial sources to accelerate particles was first postulated by Enrico Fermi in 1949, and will be explained in Chap. 3, but much remains to be understood. Discovering the existence of extraterrestrial radiation was difficult: the coexistence of cosmic radiation and terrestrial radiation made this demonstration particularly delicate. Suppose much of the radioactivity comes from extraterrestrial sources. Radioactivity is caused by particles; these particles, crossing the atmosphere, might interact with its molecules and produce secondary particles, many of which are charged. The charged secondary particles will induce ionization. We thus expect to measure at high altitudes an ionization greater than that on the ground, and a smaller ionization under the surface of a lake or of the sea. The observation of these two complementary effects made it possible, between 1911 and 1912, to solve the first enigma and to establish the existence of extraterrestrial radiation, to which the name “cosmic rays” was then attributed. The first discoveries related to cosmic rays were a scientific milestone and a fascinating intellectual adventure. Walt Disney in 1957 produced a 1-hour documentary directed by Frank Capra, entitled The Strange Case of the Cosmic Rays, where the story was told as a thriller with the participation of puppets and cartoons; the documentary made use of scientific advice from the Nobel laureate Carl Anderson and from the great Italian physicist Bruno Rossi. Fundamental discoveries in cosmic ray physics are due to several scientists in Europe and the New World, and took place during the early twentieth century, a period characterized by nationalism and lack of communication. Historical, political, and personal facts, inserted in the historical context preceding and following the First World War, made it difficult to correctly recognize individual merits. Today science is certainly more transparent.
Recently, cosmic rays have been the protagonists of a new, fast-moving revolution: multimessenger astronomy. What is multimessenger astronomy? As often in the history of the Universe, everything starts from light. In 1610, Galileo Galilei published his Sidereus Nuncius, the first scientific book based on astronomical observations made with a scientific instrument: the telescope, originating from a Dutch invention and improved by the Tuscan genius. The title Sidereus Nuncius, deliberately ambiguous, allows two equally fascinating translations from the original Latin: “a message from the stars” and “the messenger of the stars”. In his book Galilei comments on and interprets, among other things, the observation of the mountains of the Moon, of hundreds of stars never seen before, of Jupiter’s satellites, and of the phases of Venus. All these observations had been possible thanks to the light


emanating from celestial bodies: according to Galilei, light is the “nuncius” (messenger) of the stars. Therefore, the story of the cosmic messengers begins with photons: the particles constituting light, which today we know represent the quanta transmitting the electromagnetic field. Even with a telescope, however, humans can only see a small part (therefore called “visible”) of the spectrum of electromagnetic waves, that is, of photons. Only waves with a length on the order of a few tenths of a micrometer, that is, with energies on the order of one electronvolt (eV), approximately 10⁻¹⁹ joule, are “visible”—one joule, the unit of energy in the International System of Units (SI), is roughly the kinetic energy acquired by a mass of 100 g falling from a height of one meter. These tiny amounts are the energies that atomic electrons release when they move to a lower energy level. Our brain interprets these waves’ small differences in energy and wavelength as colors. The blue of the ocean is the image that our brain attributes to a wave that has a frequency, and therefore an energy, about 10% higher than that of the light coming from green forests (the energy of electromagnetic waves such as light is proportional to their frequency). Since the 1930s, the spectrum of frequencies, and therefore of energies, of light waves that we can observe in the Universe has started to expand: this is how multifrequency astrophysics was born. Thanks to new instruments we have been able to see new “colors” invisible to the naked eye: first the radio waves produced by the Sun and the Galaxy, millions to billions of times less energetic than visible light; then microwaves; and finally X and gamma rays, millions to billions of times more energetic than visible light. The history of astronomy between 1930 and 2015 may be interpreted as a long journey to discover the colors of the Universe invisible to the human eye.
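The proportionality between energy and frequency quoted above is E = hν = hc/λ. The snippet below (standard constants; an illustration of mine rather than the book’s) checks that a visible wavelength of half a micrometer indeed carries a couple of electronvolts, i.e., a few times 10⁻¹⁹ joule:

```python
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # one electronvolt in joules

def photon_energy_eV(wavelength_m):
    """Photon energy E = h*c/lambda, expressed in electronvolts."""
    return h * c / wavelength_m / eV

print(photon_energy_eV(0.5e-6))  # green light, ~2.5 eV
print(photon_energy_eV(0.1))     # a 10 cm radio wave, ~1e-5 eV
```

The same formula shows why radio photons are millions of times less energetic than visible ones: the energy scales inversely with wavelength.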
By comparing the signals at various wavelengths, we learned much about the origin and evolution of the Universe. Many of the discoveries in recent years are due to the detection of high-energy photons. In particular, we have seen that supernova remnants in the Milky Way accelerate cosmic rays up to a thousand billion times the energy of visible light, i.e., a few million times the energy needed to create a hydrogen atom. We have seen that systems of two astrophysical objects orbiting around each other (binary systems), one of which is a compact object, can behave as powerful particle accelerators. We observed the mechanisms of accretion and radiation of supermassive black holes in other galaxies. We have seen gravitational lenses caused by black holes with billions of solar masses, causing the repetition of the same gamma-ray signal a few days apart. Gravitational lensing refers to the phenomenon whereby the gravitational field of a massive object, such as a galaxy or a cluster of galaxies, bends the path of light from a more distant


object behind it. This bending of light causes the distant object to appear distorted or magnified, as if it were being viewed through a lens. All these observations have been possible thanks to new detector technologies, mostly from particle physics at accelerators. However, we also faced the limitations of our knowledge, opening the door to new questions. What happened in the first moments of the life of the Universe? What are dark matter and dark energy, forms of energy that we know to dominate the Universe but whose nature we do not know? How do black holes grow and evolve? September 2015 marked a revolution: the first gravitational wave detection. A new member joined the family of cosmic messengers. Gravitational waves are produced by the acceleration of masses, which deforms space-time; with a constant cadence they increase and decrease the distances in space along two directions at 90 degrees from each other, in turn perpendicular to the direction of the wave’s motion. The effect is very small: for a released energy corresponding to approximately three solar masses, as in the first event detected by the LIGO detector in 2015, the relative effect on Earth is about one part in 10²¹—that is, as if the distance between the Earth and the Sun varied by one atom! Einstein, who was not too optimistic about humankind’s capability for technological progress, believed that the effect of gravitational waves was too small to be detected. However, thanks to laser interferometry, the two LIGO detectors in the United States, separated by a distance of approximately 3000 km, which at the speed of light means a journey of approximately 10 milliseconds, were able to measure the effect by observing the gravitational radiation emitted by two black holes that had merged (the fusion process lasted only a few seconds).
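Both figures in this paragraph follow from one-line arithmetic. The round numbers below (a strain of order 10⁻²¹, rounded distances) are my own order-of-magnitude values, not the book’s:

```python
c = 2.998e8           # speed of light, m/s
strain = 1e-21        # relative length change, order of magnitude of GW150914
earth_sun = 1.496e11  # Earth-Sun distance, m
atom = 1e-10          # typical atomic diameter, m

# Length change over the Earth-Sun distance: about one atomic diameter
delta = strain * earth_sun
print(delta / atom)   # ~1.5 atomic diameters

# Light-travel time between the two LIGO sites, ~3000 km apart
travel = 3.0e6 / c
print(travel)         # ~0.01 s, i.e. about 10 milliseconds
```

The 10-millisecond baseline is what lets the two detectors confirm each other’s signal and roughly triangulate its direction on the sky.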
Gravitational waves, which travel without obstacles at the speed of light across the cosmos, have started a revolution in astrophysics, opening a whole new way of observing the most violent events in the Universe, and have already provided unique information on dozens of cataclysmic collisions. The Nobel Prize for Physics 2017 was awarded to Barry Barish and Kip Thorne from the California Institute of Technology (CalTech) in Pasadena, and Rainer Weiss from the Massachusetts Institute of Technology (MIT), for the discovery of gravitational waves. In October 2017, another revolutionary announcement was made: for the first time gravitational waves were detected along with gamma rays in the merger of two neutron stars. Two orbiting neutron stars lose energy by emitting gravitational waves; energy loss brings them closer until they merge. From the spectrum of the gamma rays emitted it was seen that the fusion of neutron stars produces most of the elements heavier than iron in the periodic table and therefore is fundamental for the evolution of life as we


know it. It has also been seen that one of the consequences of the merger is the production of a very energetic burst of gamma rays. All the pieces of the puzzle fit together thanks to the almost simultaneous observations of a gravitational wave and of a burst of photons across the entire electromagnetic spectrum, an observation performed thanks to the work of almost ten thousand astronomers and astrophysicists all around the world. Finally, in July 2018, another “big bang” in science was announced by three collaborations of scientists: the first, guided by Francis Halzen from the University of Wisconsin-Madison, operating the IceCube instrument, a particle detector of a cubic kilometer immersed in the ice of Antarctica; the second, operating Fermi, a satellite-borne telescope for the observation of gamma rays; and the third, operating MAGIC, a telescope for the observation of gamma rays on the rim of the Taburiente volcano crater on the Canary island of La Palma. For the first time, gamma rays and a highly energetic neutrino from a supermassive black hole located in the center of a galaxy, accreting at the expense of the surrounding mass, were simultaneously detected. Once again it was possible to solve a mystery: the comparison of the energies of the neutrino and of the gamma rays revealed that in the proximity of the black hole matter is accelerated to energies much higher than we can produce on Earth. A flux of approximately one hundred billion neutrinos per second reaches the surface of a fingernail, but these neutrinos do not interact and therefore we do not notice them. Since the probability of interaction, and therefore of detection, of neutrinos is low, neutrino telescopes must be enormous. The neutrino, a very common but very elusive particle in the Universe, enters the family of cosmic messengers along with photons, atomic nuclei, electrons and positrons (and their heavier siblings, the muons), and with gravitational waves.
Gravitational waves and neutrinos, like photons, point directly back to their sources of production: the simultaneous observations of two or more of these messengers have opened the field of multimessenger astrophysics, deeply integrating particle physics and astrophysics. Today we can begin to answer some fundamental questions that seemed beyond our reach. After building instruments capable of observing new colors, we are developing new “senses” and beginning to learn how to use them. Just as sound, smell, touch, and taste give us information about the reality that surrounds us, complementing what appears to us through sight, so we are now starting to collect and analyze new information from remote regions of the Universe transmitted not by light but by different messengers. Knowledge is evolving fast and unexpectedly: a new astronomy is emerging for the new century, and we are building telescopes to study it. The development of a new physics is possibly on the horizon.


This book chronicles the evolution of the study of cosmic rays from their discovery up to the present day, outlining the still unsolved problems and how they are being tackled; it points to the new frontiers and new fields of investigation. It is written in the hope that curious readers will explore this new land.

Contents

1 The Highest Energies in the Universe  1
  The Universe Around Us  1
  The Universe is Expanding  4
  Stars and Stellar Evolution  7
  The Fate of the Universe  10
  The Dark Universe and the Standard Cosmological Model  11
  Particles and Fields  13
  Cosmic Rays  16
  The Energy Spectrum of Cosmic Rays  16
  Atmospheric Showers  17
  Cosmic Ray Sources  18

2 The Mystery of Cosmic Rays  23
  The Discovery of Natural Radioactivity  23
  Is Natural Radioactivity of Extraterrestrial Origin?  27
  Father Wulf, a True Experimentalist  29
  Pacini and the Measurements of Radioactivity Attenuation in Water  31
  Hess and Measurements of Radioactivity on Balloons  35
  Confirmations in Europe and the Tragedy of the First World War  39
  Research in the United States  42
  Cosmic Rays Are Predominantly Charged Particles  45
  Bruno Rossi, the East-West Effect and Cosmic Showers  47

3 The Physics of Elementary Particles  53
  The Discovery of Antimatter  54
  Recognition by the Scientific Community  57
  The µ Lepton and the Mesons  61
  The Discovery of Strangeness  65
  Mountain-Top Laboratories  66
  Hunters Become Farmers: Particle Accelerators  68
  The Discovery of Charm  70
  The Unexpected  71
  What is the Maximum Energy of Cosmic Rays?  71
  Anomalous Events  72
  Hypotheses on the Origin of Cosmic Rays  73

4 The Colors of the Universe  75
  The Universe in Radio Waves  80
  Large Radio Telescopes  81
  Very Long Baseline Interferometry  82
  The Cosmic Microwave Background  84
  Compact Objects and Accretion Disks  84
  Molecules and Emission Lines  86
  The Square Kilometer Array  88
  The Infrared Universe  88
  The Center of the Milky Way  89
  The James Webb Space Telescope  89
  Euclid  91
  The Ultraviolet Universe  93
  The Interstellar Medium and Intergalactic Medium  94
  Supernova 1987A  94
  Beyond the Limits of the Thermal Universe: X-Rays  95
  The Discovery of Cosmic X-Rays  95
  Binary Systems  99
  Supernova Remnants and Pulsars in X-Rays  100
  Active Galactic Nuclei  104
  Galaxy Clusters  105
  eROSITA and ATHENA  106
  The Gamma Rays’ Violent Universe  106
  Space-Based Detectors  109
  Ground-Based Detectors  114
  Supernova Remnants and Cosmic Rays  120
  Very-High-Energy Sources  121
  The Fermi Bubbles  123
  The Structure of Supernova Remnants and the Mechanisms of Acceleration  124
  More on Active Galactic Nuclei  125
  Gamma-Ray Bursts  125
  Dark Matter  128
  The Cosmic Journey of Gamma Rays  130
  The Cherenkov Telescope Array and the Southern Wide-Field Gamma-Ray Observatory  132

5 The New Senses of the Universe: Multimessenger Astronomy  135
  Cosmic Rays of Ultrahigh Energies  136
  The Pierre Auger Observatory  137
  Correlation of Cosmic Nuclei with Astrophysical Sources  141
  Alternative Techniques and Future Detectors  143
  Cosmic Antimatter  144
  Neutrinos and the Extreme Universe  147
  Solar Neutrinos and the Solar Neutrino Problem  147
  Very-High-Energy Cosmic Neutrinos  151
  The Future of Neutrino Astronomy  155
  Gravitational Waves  156
  The Future of Gravitational Wave Astronomy  161
  Putting All This Together  163

6 Cosmic Rays in Our Lives  165
  Variations in Cosmic Ray Fluxes  165
  Cosmic Rays and Life  170
  Ionization and Chemistry of the Atmosphere  170
  Cosmic Rays and the Origin of Life  170
  Biological Effects of Cosmic Rays  172
  Implications for Evolution  174
  Cosmic Rays and Climate  176
  Is There a Correlation Between Cosmic Rays and Earthquakes?  177
  Cosmic Rays and Electronics  178
  Cosmic Rays and the Exploration of the Earth and the Universe  179
  Cosmic Rays and Airplane Flights  179
  One More Risk for Astronauts  180
  Cosmic Rays and Archeology  181
  Dating of Archaeological Finds  181
  Muonic Tomography  182
  Cosmic Rays and the Analysis of Large Structures  184

What Next?  187

Postface  191

Index  193

1 The Highest Energies in the Universe

The origin and destiny of the Universe are, for most researchers, the fundamental question. Many answers were provided over the ages, a few built on scientific observations and reasoning. The depth of our observations is ultimately related to our ability to “see” the largest and the smallest distances, and to analyze the greatest energies. During the last century, enormous theoretical and experimental advances have changed our vision of the role of human beings in the Universe. Less than a century ago we believed that the Milky Way, our Galaxy, which contains approximately a hundred billion stars, was the whole Universe; we now know that the Universe contains at least one hundred billion galaxies. Most of them are so far away that we cannot even hope to explore them.

The Universe Around Us

We start an imaginary journey across the Universe from our planet. The Earth, which has a radius of approximately 6,400 km, is one of the planets orbiting the Sun. The latter is a star with a mass of roughly 2 × 10³⁰ kg located at a distance from us of approximately 150 million km (i.e., 500 light seconds, 500 s being the time that it takes a photon to cross that distance). The average Earth-Sun distance is called the astronomical unit, in short au or AU. The ensemble of planets orbiting the Sun is called the solar system. Considering the aphelion, i.e., the farthest point, of the orbit of the outermost acknowledged planet, Neptune, the solar system has a diameter of 9 billion km (approximately eight light hours, or 60 au).
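The light-travel figures above follow directly from the distances involved; a quick sketch with rounded constants (my own numbers, used only as a check):

```python
c = 2.998e8     # speed of light, m/s
au = 1.496e11   # astronomical unit (average Earth-Sun distance), m

# Earth-Sun distance expressed as light-travel time: ~500 s
t_sun = au / c
print(t_sun)

# Diameter of the solar system out to Neptune's aphelion, ~60 au
diameter_km = 60 * au / 1e3
print(diameter_km)   # ~9e9 km, i.e. about 9 billion km

# The same diameter in light hours
hours = 60 * au / c / 3600
print(hours)         # ~8.3 light hours
```

Expressing distances as light-travel times is the habit behind the light second, light hour, and light year used throughout this chapter.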


Fig. 1.1 The Milky Way seen from above and from the side. From https://courses.lumenlearning.com/astronomy

The Milky Way (Fig. 1.1) is the galaxy that contains our solar system. Its name "milky" is derived from its appearance as a dim glowing band arching across the night sky, in which the naked eye cannot distinguish individual stars. The ancient Romans named it "via lactea", which corresponds to the present name (lac being the Latin word for milk); the term "galaxy", too, descends from a Greek word indicating milk. The Milky Way appears as a band because we see its disc-shaped structure edge-on, from inside (Fig. 1.2). Thanks to the telescope, Galileo Galilei in 1610 was the first to understand that the Milky Way is made up of stars. The Milky Way is a giant spiral-shaped galaxy, approximately 100,000 light years in diameter and a thousand light years thick, with a total mass of approximately one trillion solar masses. The solar system is located in the periphery, approximately 30,000 light years from the center, in the so-called Orion Arm (the richest and most intense region visible from Earth is in the direction of the constellation of Orion). The center of the Galaxy, in the


Fig. 1.2 The Milky Way as seen from Cafayate near Salta, Argentina. Because of the tilt of the Earth’s axis the view from the Southern Hemisphere of our planet is better than that from the Northern Hemisphere. Photograph by Gonzalo Javier Santile

constellation of Sagittarius, hosts a supermassive black hole of approximately 4 million solar masses, as can be deduced from the orbits of nearby stars (for the discovery of this black hole the physicists Reinhard Genzel of the Max Planck Institute for Extraterrestrial Physics in Munich and Andrea Ghez of the University of California in Los Angeles were awarded the Nobel Prize in Physics in 2020). The interstellar medium is filled with ionized gas (mostly hydrogen), representing approximately 15% of the Galaxy's total mass and unevenly distributed. A magnetic field of a few millionths of a gauss (the Earth's magnetic field is a fraction of a gauss) interacts with the interstellar medium. The Milky Way is a large galaxy. Together with a companion of similar size (the Andromeda galaxy), it has gravitationally trapped many more small galaxies. All these galaxies constitute the so-called Local Group, which includes approximately 50 galaxies; some are dwarf galaxies, containing as few as just a few thousand stars. The Local Group has a diameter of 10 million light years, or 3 million parsec (one million parsec is one megaparsec, or Mpc). The parsec, which means "parallax of one second of arc" (the distance from which an astronomical unit appears to cover an angle of one second of arc, i.e., one sixtieth of a sixtieth of a degree), is a unit of length often used in astronomy to measure distances to objects outside the solar system; it is approximately 3.3 light years, corresponding to approximately thirty million million kilometers.


The galaxies filling the Universe are not evenly distributed: most are organized in groups (such as the Local Group) and clusters (up to several thousand galaxies). Groups, clusters, and isolated galaxies form even larger structures called superclusters, extending up to 100 million light years.

The Universe is Expanding

In 1929 the American astronomer Edwin Hubble (1889–1953), studying the emission of radiation from galaxies outside the Milky Way, compared their speeds with their distances (Fig. 1.3) and discovered that, on average, objects in the Universe move away from us with a speed proportional to their distance d:

v ≈ H0 d ,

where d is the distance from the Earth (the symbol "≈" indicates an approximate equality, and we will use it often). This equation, historically called the Hubble law, is a cornerstone of cosmology and provides important evidence that the Universe is expanding. H0 is a parameter called the Hubble constant, and its value is now approximately 70 km per second per megaparsec (i.e., on average, an object that is 1 megaparsec away from Earth moves away at a

Fig. 1.3 Experimental graph of the recession speed (in km/s) of astrophysical objects as a function of their distance from Earth (in Mpc). The line represents Hubble's law, which fits well with the data. From A.G. Riess, W.H. Press and R.P. Kirshner, Astrophys. J. 473 (1996) 88


speed of 70 km/s, an object at 10 megaparsec moves away at a speed of 700 km/s, and so on). To give an idea of what H0 means, the speed of revolution of the Earth around the Sun is approximately 30 km/s. Andromeda, the great galaxy closest to the Milky Way, is located at a distance of approximately 2.5 million light years from us; however, Andromeda and the Milky Way are actually getting closer and will eventually collide: this is an example of the effect of local motions (in this case dictated by gravitational attraction). The expansion of the Universe had been predicted two years before Hubble's publication by the Belgian astronomer and Catholic priest Georges Lemaître (1894–1966), on a purely theoretical basis. As a consequence, the International Astronomical Union recommended in 2018 calling the expansion law the "Hubble-Lemaître law", although the decision caused some controversy. When Hubble experimentally demonstrated the expansion of the Universe, the very existence of galaxies other than the Milky Way was itself a novelty. Several methods are used to determine the distances of astronomical objects. Distances of up to hundreds of parsec are measured using stellar parallax, i.e., the difference between the observation angles in the sky at a distance of six months, when the Earth is at opposite ends of its orbit around the Sun. To understand parallax, hold your finger at arm's length and close one eye. Then switch eyes. As you do this, your finger appears to shift relative to the background. This shift is the parallax, and from the parallax and the distance between your eyes you can easily calculate the distance of your finger. However, as distance grows, the parallax becomes too small to measure. To go beyond, one must use "standard" (or standardizable) candles. A standard candle is an astronomical object whose intrinsic brightness is constant and known with high accuracy. Standardizable candles are objects whose intrinsic brightness can be determined from an observable property.
This property is usually a relationship between the object's brightness and a measurable quantity, such as the object's temperature. By measuring this observable property and comparing it to the known relationship, astronomers can determine the intrinsic brightness and therefore the distance to the object. Distances up to 50 Mpc are measured using Cepheid stars, a class of periodically variable stars whose luminosity is correlated with the pulsation period (the distance can then be computed by comparing the intrinsic brightness with the apparent brightness). Distances from 1 to 1,000 Mpc can be measured thanks to type Ia supernovae, a class of exploding stars whose intrinsic brightness we can calculate. Beyond these distances we use the so-called Tully-Fisher correlation between the intrinsic luminosity of a spiral galaxy and the width of the velocity spectrum of its stars. The above methods, having large regions of overlap, can be cross-calibrated.
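Two rungs of the distance ladder described above can be sketched numerically: the parallax rule (distance in parsec is the inverse of the parallax in arcseconds) and the inverse-square law relating a candle's intrinsic luminosity to the flux we receive. This is a minimal illustration with hypothetical input values; real analyses involve many corrections.

```python
import math

def parallax_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsec from annual parallax: d [pc] = 1 / p [arcsec]."""
    return 1.0 / parallax_arcsec

def standard_candle_distance_m(luminosity_w: float, flux_w_m2: float) -> float:
    """Distance in meters from the inverse-square law: F = L / (4 pi d^2)."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

# A star with a parallax of 0.1 arcsec is 10 pc away.
print(parallax_distance_pc(0.1))

# A candle of known luminosity 4e26 W (roughly solar) seen with a
# flux of 1e-10 W/m^2 (hypothetical value):
print(standard_candle_distance_m(4e26, 1e-10))
```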


We note that H0 is measured in kilometers per second per megaparsec, i.e., it is a distance divided by another distance and a time. It is thus the inverse of a time, and if we input the right units we see that it roughly corresponds to one divided by 14 billion years. A simple interpretation of Hubble's law is that, if the Universe had always been expanding at a constant rate, approximately 14 billion years ago its volume would have been zero. We might then think that the Universe was formed by exploding from a quantum singularity; we call this explosion the "Big Bang". The age of approximately 14 billion years is consistent with current age estimates of the Universe within cosmological theories, and somewhat larger than the age of the oldest stars, which can be measured from the fraction of unstable nuclei present in them. Everything appears to be consistent. Thanks to the Hubble law, we can introduce a new measure of distance: the redshift, indicated by the letter z. If we call λ the wavelength of a light wave produced by a light source, the Doppler redshift is the relative change Δλ/λ in the wavelength due to the relative motion between the light source and the observer. When a light source moves away from an observer, the wavelength of the light it emits appears stretched out, causing the light to shift toward the red end of the spectrum. The definition of the Doppler redshift is:

z = Δλ / λ .

For relatively small distances, corresponding to nearby galaxies and galaxy clusters within a few tens of millions of light-years from us, z is also approximately equal to the recession speed of the light source divided by the speed of light c: z ≈ v/c. In these conditions the amount of redshift is thus proportional, according to the Hubble law, to the distance. But the redshift can be larger than 1. In the "recent" history of the Universe, galaxy formation became important around z ≈ 7, then peaked in the region from z ≈ 2 to z ≈ 1, and has been falling off since then. At a redshift z the observed wavelength is larger than that at the source by a factor of 1 + z. So z = 1 means that the wavelength is twice as long as at the source, z = 5 means that the wavelength is 6 times larger than at the source, and so on. A positive value of z indicates a redshift, while a negative value indicates a blueshift, which occurs when the light source is moving toward the observer. This is similar to the case of an ambulance with a siren. As the ambulance moves toward an observer, the sound waves emitted by the siren are compressed, resulting in a higher frequency, or pitch, of the sound. Conversely, as the ambulance moves away from the observer, the sound waves are stretched out, resulting in a lower frequency or pitch of the sound.
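The relations above (Hubble law, small-z Doppler redshift, wavelength stretching, and the "inverse of H0" age estimate) can be collected in a few lines of code. This is a sketch with rounded constants and illustrative names, valid only in the regimes stated in the comments.

```python
# Numerical illustration of the Hubble law and of redshift, with H0 ~ 70 km/s/Mpc.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per Mpc (approximate)

def recession_speed_km_s(d_mpc: float) -> float:
    """v ~ H0 * d, valid for nearby objects."""
    return H0 * d_mpc

def doppler_redshift(v_km_s: float) -> float:
    """z ~ v/c, valid for v much smaller than c."""
    return v_km_s / C_KM_S

def observed_wavelength(lambda_emitted: float, z: float) -> float:
    """At redshift z the observed wavelength is stretched by (1 + z)."""
    return lambda_emitted * (1 + z)

v = recession_speed_km_s(10.0)        # an object 10 Mpc away: ~700 km/s
print(v, doppler_redshift(v))
print(observed_wavelength(500.0, 5))  # z = 5: 500 nm emitted, 3000 nm observed

# 1/H0, the "Hubble time", gives the rough age scale of the Universe:
MPC_KM = 3.086e19                     # one megaparsec in km
hubble_time_gyr = (MPC_KM / H0) / 3.156e7 / 1e9  # seconds -> years -> Gyr
print(hubble_time_gyr)                # roughly 14 billion years
```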


Here are some typical values of redshift z and their corresponding distances:

• z ≈ 0.01 to 0.1: nearby galaxies and galaxy clusters within a billion light-years from us.
• z ≈ 0.5 to 1: distant galaxies and galaxy clusters at cosmological distances of a few billion light-years from us.
• z ≈ 2 to 7: the so-called "cosmic dawn" and the era of the first galaxies, when the Universe was around one billion years old.
• z larger than 7: the most distant known objects in the Universe, including some of the first galaxies, when the Universe was less than one billion years old.

The expansion of the Universe involves its cooling, as the energy necessary for the expansion must be taken from the internal energy of the particles—the temperature of a gas is proportional to the average kinetic energy of its particles. This fact implies that studying the ancient Universe means, in a sense, exploring the highest energies: subatomic physics and astrophysics are naturally connected. The current average temperature of the Universe is slightly below 3 kelvin (approximately 270 degrees centigrade below zero).

Stars and Stellar Evolution

Under certain conditions of density and temperature, the clouds of gas (mainly hydrogen and helium) present in the Universe collapse and, if their mass is appropriate, stars are formed at the end of the collapse: systems that generate energy through thermonuclear fusion reactions between the atoms in their core. Stellar masses are limited by the conditions of very high pressure and temperature at which nuclear reactions can be activated in the stellar core; consequently, a star must have a mass of at least one tenth of that of the Sun. Stars with masses greater than 100 solar masses do not usually form: it is much easier for the nebula from which they originate to give birth to two different protostars rather than a single huge one—these are the so-called binary systems of stars, which are very frequent (approximately half of all stars are in binary systems, although we cannot see this with the naked eye). For a star with the mass of the Sun, formation took approximately 50 million years, and the total lifespan is approximately 11 billion years before it collapses into a "white dwarf"—a dense remnant in which fusion can no longer take place. Our Sun is today approximately 4.5 billion years old. The brightness of a star as seen by an observer on Earth is called its apparent magnitude. We still use today this measure derived from that of the


Greek astronomers, who divided the stars into six magnitudes: the brightest stars were called first magnitude stars (magnitude 1), while the palest of those visible to the naked eye were called sixth magnitude stars (magnitude 6). The classification is now formalized by stating that a star of magnitude 1 is 100 times brighter than one of magnitude 6 on a geometric scale; a star of first magnitude is then approximately 2.5 times brighter than a star of second magnitude (obviously, according to this definition, the brighter an object, the smaller the value of its magnitude). A starting point is necessary in this definition: we fix it by saying that the stars Arcturus and Vega have an apparent magnitude approximately equal to 0. Since the received light flux decreases with the square of the distance, the apparent magnitude is a function of the total power radiated by the star (expressed by its intrinsic magnitude) and of the distance. Stars cover a wide range of brightness and colors and can be classified based on these characteristics: red stars radiate less energy, and blue stars radiate more energy. Smaller stars, known as red dwarfs, can have as little as 10% of the mass of the Sun and emit only 0.01% of its power, and typically have surface temperatures of 3,000 degrees centigrade, approximately half the surface temperature of the Sun. They are by far the most numerous stars in the Universe and are expected to live tens of billions of years, much longer than the present age of the Universe. On the other hand, the heaviest stars, known as supergiants, can be 100 or more times more massive than the Sun and have surface temperatures of over 40,000 degrees centigrade. Supergiants emit hundreds of thousands of times more power than the Sun and have lifetimes of only a few million years; living so briefly, they are extremely rare, and the Milky Way contains just a handful of them.
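The magnitude arithmetic described above (five magnitude steps correspond to a factor of 100 in brightness, so one step is a factor of 100^(1/5) ≈ 2.512) can be made concrete in a couple of lines; the function name is illustrative.

```python
# The magnitude scale: a difference of 5 magnitudes means a brightness
# ratio of 100, so one magnitude step is 100**(1/5) ~ 2.512.

def brightness_ratio(m1: float, m2: float) -> float:
    """How many times brighter a star of magnitude m1 is than one of magnitude m2."""
    return 100 ** ((m2 - m1) / 5)

print(brightness_ratio(1, 6))  # magnitude 1 vs magnitude 6: a factor 100
print(brightness_ratio(1, 2))  # one magnitude step: ~2.5
```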
Since the heavier the star, the more effective the fusion process and the shorter the lifetime, we need a star such as our Sun, lasting for approximately ten billion years, to give life enough time to develop and to guarantee temperatures high enough to allow the existence of carbon-based beings. The fate of a star depends on its mass. When the stellar core stops producing fusion energy, the star collapses: in the case of the Sun, the result will be a "white dwarf"—as we saw, an inactive star the size of the Earth, but a million times denser. Stars with more than several solar masses can die with the core collapsing in a very energetic explosion called a supernova. The core of the star, mostly made of iron (the most stable nucleus and thus the end point of nuclear fusion processes, Fig. 1.4), collapses. The released gravitational energy causes an explosion that releases an enormous amount of energy, approximately 10⁴⁶ joules, in a few tens of seconds. This is orders of magnitude more energy than our Sun will produce over its entire lifetime! Over a period ranging from a few days to weeks, a supernova can


Fig. 1.4 Binding energy per nucleon for atomic nuclei: the higher it is, the more stable the nucleus. Iron (⁵⁶Fe) is the most stable element (its binding energy per nucleon is the highest); therefore, it is the natural end point of fusion processes of light elements and of fission of heavy elements. From HyperPhysics

be brighter than its entire host galaxy. Much of its energy is projected outward, and the remnant may be a neutron star or a black hole. The neutron star is the most probable remnant of the collapse. What is a neutron star? Under normal conditions, matter is quite "empty": most of the atomic mass is concentrated in the nucleus, which occupies only one hundred-thousandth of the size of the atom. However, under great pressure the protons of the nucleus and the electrons can, so to speak, "collapse" into neutrons (this is an extreme simplification of the process). In these conditions matter packs in a much denser way. The condition to form a black hole is quantified, for a spherical body at rest, as a relationship between its mass and its radius R: the mass must be greater than Rc²/2G, where c is the speed of light in vacuum and G is Newton's universal gravitational constant. The expression above tells us that the radius of a (nonrotating) black hole is proportional to its mass. If the Earth, whose escape velocity is approximately 11 km per second and whose radius is approximately 6,400 km, were squashed until it had a radius of 9 mm, it would become a black hole. If the Sun became a black hole, its radius would be approximately 3 km; a star with a mass ten times larger than our Sun would become a black hole of radius 30 km, and so on. The mass of the Sun, which is approximately 330,000 times the mass of the Earth, is often identified as M⊙, where the ⊙ symbol comes from the Renaissance representation of our star. When a star collapses into a neutron star its radius is of the order of ten kilometers, and the remaining mass is generally between 1.4 and 3


solar masses. For example, a neutron star with a radius of 15 km and a mass equal to 1.4 times that of the Sun has a density of approximately 200 trillion tons per cubic meter, tens of trillions of times the average density of the Earth. If the Earth became a black hole, its radius would be less than one centimeter and, although it is not appropriate to speak of density for a black hole, which is not a normal solid object, the ratio of the mass to the volume of a sphere of corresponding radius would be approximately 2 × 10²⁷ tons per cubic meter, roughly one billion billion billion times the current density of the Earth, which is approximately 5 tons per cubic meter. In addition to supernova explosions that come from the collapse of the core, called type II supernovae, there is another category, called type Ia, which comes from the collapse of binary systems (as we have seen, stars often form binary systems). On average, supernova explosions occur only once or twice a century in a galaxy such as ours. Only seven supernovae visible to the naked eye have been observed in our galaxy, traces of which remain in the reports of astronomers: 185 AD, 393, 1006, 1054 (the remnant of which is known as the Crab Nebula), 1181, 1572 (Tycho's supernova), and 1604 (Kepler's supernova). Note that the last two occurred during the life of Galileo Galilei—who, however, was only eight years old at the explosion of the 1572 supernova. Seen from Earth, both were of luminosity higher than or comparable to that of Venus. Often a supernova explosion is invisible because it is concealed by galactic dust. However, nature owes us one, and one day or another, who knows.
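The black-hole radius condition and the neutron-star density quoted above can be checked numerically. A minimal sketch (rounded constants; variable names are mine), using the Schwarzschild radius R = 2GM/c², the form of the condition given in the text:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
M_EARTH = 5.972e24   # Earth mass, kg

def schwarzschild_radius_m(mass_kg: float) -> float:
    """R = 2 G M / c^2: squashed below this radius, a mass becomes a black hole."""
    return 2 * G * mass_kg / C**2

r_earth_mm = schwarzschild_radius_m(M_EARTH) * 1000   # ~9 mm
r_sun_km = schwarzschild_radius_m(M_SUN) / 1000       # ~3 km
print(r_earth_mm, r_sun_km)

# Density of a 1.4 solar-mass neutron star with a 15 km radius:
volume_m3 = 4 / 3 * math.pi * 15_000**3
density_kg_m3 = 1.4 * M_SUN / volume_m3   # ~2e17 kg/m^3, i.e. ~2e14 tons/m^3
print(density_kg_m3)
```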

The Fate of the Universe

As we saw, the Universe is expanding today. Will this expansion last forever? The fate of the Universe depends on its energy content: the more energy it contains, the more likely it is to eventually collapse under the action of gravity. Let us pause our journey for a moment to talk about the units of energy used in particle physics. In general we use as the unit of energy the electronvolt, abbreviated as eV (one electronvolt is the kinetic energy acquired by an electron accelerated through a potential difference of one volt), and its multiples. One eV is approximately the energy of the particles (photons) that make up visible light. A GeV, i.e., a billion eV, is roughly the energy it takes to create a proton or a neutron (or a hydrogen atom, since the mass of the electron is only approximately 0.05% of the proton mass), based on Einstein's famous equivalence relation E = mc²: as you can see, we can use the same unit of measurement for mass and energy! 1,000 GeV, or 1 TeV, is the kinetic energy


of a mosquito, and the most powerful accelerator built by humans, the LHC at CERN near Geneva, Switzerland, accelerates particles up to an energy of 7 TeV. The evolution of the Universe depends on its energy density, since it is driven by gravity. Below a certain critical mass/energy density of approximately 5 GeV per cubic meter (equivalent to approximately five hydrogen atoms per cubic meter), the Universe will expand forever, and temperatures will drop to a level that guarantees a cold death. In this scenario, the Universe will continue to expand, and eventually all matter and energy will become so widely dispersed that the temperature will approach absolute zero. As the temperature of the Universe approaches zero, all energy transfers will cease, and all stars will eventually exhaust their fuel and die out, leaving only black holes and other remnants. The black holes will slowly evaporate, and the Universe will eventually become a dark, cold, and empty place. Wait a second: are we falling into a contradiction? We said before that nothing can escape from a black hole, and now we speak about black holes evaporating. Well, in quantum physics classically forbidden phenomena can indeed happen, and this is the case for the evaporation of a black hole by the so-called Hawking radiation. According to quantum physics, particles and antiparticles can spontaneously appear and disappear, violating energy conservation, as long as they annihilate each other within a very short time frame. When this process happens near the event horizon of a black hole, one of the particles can be sucked into the black hole while the other escapes. This is known as Hawking radiation, from the name of Stephen Hawking (1942–2018), the British theoretical physicist who first proposed its existence.
Hawking is an iconic figure because of his contributions to cosmology and particle physics, and also because at the age of 21 he was diagnosed with amyotrophic lateral sclerosis, a progressive neurodegenerative disease that gradually paralyzed him; despite his physical limitations, he continued to work and to communicate his ideas through a computerized speech system, which he used for the rest of his life. The density we measure by counting stars, galaxies, and diffuse gas appears much smaller than this critical value: only approximately one twentieth of it.
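The equivalence quoted above between the critical density, approximately 5 GeV per cubic meter, and roughly five hydrogen atoms per cubic meter can be verified directly (rounded constants; variable names are mine):

```python
# The critical density of ~5 GeV per cubic meter, expressed in kilograms
# and in hydrogen atoms per cubic meter (values approximate).
GEV_TO_KG = 1.783e-27     # mass equivalent of 1 GeV via E = m c^2
M_HYDROGEN = 1.674e-27    # mass of a hydrogen atom, kg

critical_kg_m3 = 5 * GEV_TO_KG
atoms_per_m3 = critical_kg_m3 / M_HYDROGEN
print(critical_kg_m3, atoms_per_m3)   # ~9e-27 kg/m^3, ~5 atoms/m^3
```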

The Dark Universe and the Standard Cosmological Model

However, the study of the movements of stars in galaxies indicates the presence of a large amount of invisible matter in the Universe. This matter appears to be of a kind currently unknown to us; it does not emit or absorb electromagnetic radiation (in particular, it does not emit or absorb visible light). We call it dark matter; its abundance in the Universe is six times greater than that of the matter


we are made of—this is a new Copernican revolution! Dark matter is one of the greatest mysteries of astrophysics and high-energy physics. There are also indications of an unknown, new, and unexpected form of energy, which we call dark energy, that contributes to the total energy balance of the Universe three times more than dark matter—and is unknown as well. To explain the present observations, dark matter should be "cold" (i.e., moving at rather slow speeds). The so-called Lambda cold dark matter (ΛCDM) model is the widely accepted cosmological scenario that describes the evolution of the Universe from shortly after the Big Bang to the present day, incorporating present observations. It is based on the idea that the Universe is dominated by cold dark matter (CDM) and dark energy (Λ), and that everything began with the Big Bang, followed by a period of rapid expansion known as cosmic inflation. It is such a strong object of belief that many colleagues call it the standard cosmological model. In the ΛCDM model, the Universe is assumed today to be flat (i.e., to follow Euclidean geometry), homogeneous (its average properties are the same everywhere), and isotropic (there are no preferred directions) on large scales. In addition to ordinary matter, the model includes two main components: dark matter and dark energy. Dark matter is believed to make up approximately 27% of the total matter-energy density of the Universe and is responsible for the observed gravitational effects on galaxies and galaxy clusters. Dark energy, on the other hand, is believed to make up approximately 68% of the total matter-energy density of the Universe. The model also includes the observed "normal" (baryonic) matter, which makes up only approximately 5% of the total matter-energy density of the Universe. This ordinary matter is responsible for the formation of stars, galaxies, and other visible structures. In summary, we live in a largely unknown world (Fig. 1.5).
The evolution of the Universe, its ultimate fate, and our daily life depend on this part of the world we do not know.

Fig. 1.5 Energy balance of the universe. Credit: NASA


However, some messengers tell us about the unknown Universe: each second, high-energy (i.e., above 1 GeV) particles of extraterrestrial origin cross every square centimeter on Earth, coming from regions where highly energetic phenomena occur that we cannot directly explore. They are the so-called cosmic rays. We will see that through these messengers we can obtain information on the highest-energy phenomena in the Universe. What are these high-energy particles, and how do they interact? We need another pause, to discuss the modern view of particles and their interactions.

Particles and Fields

The paradigm currently accepted by physicists, which is the basis of the so-called standard model of particle physics (for which the Nobel Prize in Physics was awarded to the Americans Sheldon Glashow and Steven Weinberg and the Pakistani Abdus Salam in 1979), is that there is a set of elementary particles that make up matter. However, be careful: from a philosophical point of view, the existence of elementary particles is a concept far from established, since the concept of elementarity could depend on the energy scale at which matter is studied. And since we can use only finite amounts of energy, there is a limit to the scale that can be probed. The interactions between elementary particles are described by fields (i.e., quantities associated with points in spacetime, or modifications of the spacetime geometry) representing forces; in quantum field theory, interactions can be seen as exchanges of field particles. At the energy scales at which we have explored the microcosm so far, there are 12 particles of "matter"; they can be divided into two large families: 6 leptons (for example the electron, with unit charge e, and the neutrino, which is neutral), and 6 quarks, which have fancy names. Each large family can be divided into three generations of two particles each; the generations have similar properties but different masses. This is summarized in Fig. 1.6. The masses of elementary particles vary by many orders of magnitude, from the masses of neutrinos, which are of the order of a fraction of an electronvolt, to the mass of the electron (approximately half a megaelectronvolt, or MeV), up to the top quark (approximately 173 GeV). In short, the table is the equivalent of Mendeleev's periodic table. There is an antiparticle (antimatter) counterpart for every known particle, with the same mass and opposite charge quantum numbers.
A state of three bound quarks constitutes a baryon, such as the proton or the neutron (which are part of the atomic nuclei and therefore are also


Fig. 1.6 Currently known elementary particles. The so-called fermions (the particles of matter) are listed in the first three columns; the gauge bosons, that is, the field particles, are listed in the fourth column. From Wikimedia Commons

called nucleons). The matter that makes up the Earth can be explained by only three particles: the electron, the up quark, and the down quark (the proton is composed of two up quarks and one down quark, uud, and the neutron of one up and two down, udd). Baryons, made of three quarks, are not the only allowed combination of quarks: in particular, mesons are allowed combinations of a quark and an antiquark. All mesons are unstable. The lightest mesons, called pions, are combinations of u and d quarks and their antiquarks; they have masses of approximately 0.14 GeV. Although unstable (they live infinitesimal fractions of a second), pions are quite common, since they are one of the end products of the chain of interactions of particles from the cosmos with the Earth's atmosphere. In the Universe as we know it, particles interact through four fundamental interactions. In order of increasing intensity:

1. The gravitational interaction (force) acts between any pair of bodies and thus is dominant on a macroscopic scale.
2. The weak interaction also affects all matter particles and is responsible for the transmutation of protons into neutrons, and thus for the energy production in the Sun.


3. The electromagnetic interaction acts between pairs of electrically charged particles (i.e., all matter particles, excluding neutrinos).
4. The strong interaction acts between quarks (and thus also between baryons). It becomes dominant over the electromagnetic force only when they are very close together, approximately a hundred thousand times closer than the size of an atom. This interaction, discovered only in the second half of the twentieth century, was first conjectured by Isaac Newton in the late seventeenth century: "There are agents in nature capable of making the particles of bodies stand together with a very strong attraction. And it is the task of physics to detect them. The smallest particles of matter are bound by the strongest attractions; these can compose themselves into larger particles joined by weaker forces; and a large number of these can compose larger particles whose cohesive force is even weaker, and so on until the progression ends in the largest particles on which the chemistry and colors of bodies in nature depend." (I. Newton, Opticks).

The relative intensity of the interactions spans many orders of magnitude. In a deuterium atom (a neutron and a proton in the nucleus, and an orbiting electron), if we call 1 the intensity of the strong interaction between the nucleons, the intensity of the electromagnetic interaction between the electron and the nucleus is one part in ten million, the intensity of the weak interaction is one tenth of a thousandth of a billionth, and the intensity of the gravitational interaction between the electron and the nucleus is 10⁻⁴⁵ (zero point, 44 zeros, and then a 1). However, intensity is not the only relevant feature in this context. Weak and strong interactions act at subatomic distances, smaller than 1 fm (the typical size of an atomic nucleus, approximately a millionth of a billionth of a meter), and so are not very important on astronomical scales.
In contrast, electromagnetic and gravitational forces have a dependence of the type 1/r², i.e., their intensity decreases with the square of the distance, like the intensity of light from a lamp. On small scales (molecular scales, a thousandth of a micron), gravity is negligible compared to electromagnetic forces. However, the Universe is electrically neutral on large scales, so electrostatic forces become negligible there. Gravity, the weakest of all forces, thus determines the evolution of the Universe on a large scale. Interactions can cause the decay of unstable particles. The decays and collisions of particles can produce secondary particles, which in turn are a source of radiation.
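The hierarchy of forces sketched above can be illustrated by comparing the electromagnetic and gravitational attraction between an electron and a proton; since both fall off as 1/r², the distance cancels in the ratio. A minimal sketch with rounded constants (variable names are mine); the result is of order 10³⁹, consistent with the enormous gap between the intensities quoted in the text.

```python
# Ratio of the electromagnetic to the gravitational force between an
# electron and a proton; both forces go as 1/r^2, so r cancels.
K_E = 8.988e9      # Coulomb constant, N m^2 / C^2
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
E = 1.602e-19      # elementary charge, C
M_E = 9.109e-31    # electron mass, kg
M_P = 1.673e-27    # proton mass, kg

ratio = (K_E * E**2) / (G * M_E * M_P)
print(f"{ratio:.2e}")   # of order 10**39
```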


A. De Angelis

Cosmic Rays

Approximately 20 percent of the radiation on Earth at sea level is of extraterrestrial origin: these are the so-called “cosmic rays”; the rest comes from the radioactive decay of unstable nuclei in the soil and, to a small extent, from the atmosphere and from nuclear processes occurring in the Earth’s core. The term “ray” comes from the early years of radiation research, when any flux of ionizing radiation was called a ray (e.g., alpha rays, which are helium nuclei, or beta rays, which are electrons). In the language of modern physics we call “rays” the particles of light, reserving the name “particles” for objects of a more corpuscular nature, such as atomic nuclei; for historical reasons, however, many authors have retained the name “rays” for cosmic rays. In this book we will call “cosmic rays” all particles and waves coming from the cosmos: atomic nuclei, photons, neutrinos, gravitational waves, electrons, and so on, focusing on the emissions characteristic of the processes in which the largest energies are released, the so-called nonthermal processes. Cosmic rays impact the Earth’s atmosphere from every direction, at speeds close to the speed of light. Their energies reach the highest observed in nature; they must therefore come from very powerful cosmic accelerators. As we shall see, we know that some of these accelerators are located in supernova remnants (stars exploded at the end of their lives) and near supermassive black holes. The majority of the high-energy particles arriving from the cosmos are atomic nuclei: protons, the nuclei of hydrogen, the most common element in the Universe, are by far dominant; approximately 10% are helium nuclei (alpha particles), and 1% are nuclei of heavier elements. Nuclei together make up 99% of cosmic rays; electrons, photons, and traces of antimatter account for the remaining 1%.
The number of neutrinos (neutral particles of very small mass, with a low probability of interacting with matter) is estimated, at high energy, to be comparable to that of photons; at low energy it is very large because of the nuclear processes occurring in the Sun, which produce these elusive particles in great numbers.

The Energy Spectrum of Cosmic Rays

The flux of cosmic rays arriving at Earth depends strongly on their energy E, and falls rapidly as E increases. The variation of the flux with energy (the so-called spectrum) of cosmic nuclei is locally quite well described by a power law, that is, by a function of the type E^-p, with p a positive number. The so-called spectral index p is the slope of the data when plotted in logarithmic units. To give a quantitative example, a power law with spectral

1 The Highest Energies in the Universe


Fig. 1.7 The spectrum (number of incident particles per unit energy, unit time, unit area and unit solid angle) of primary cosmic nuclei. From Wikimedia Commons

index p equal to 3 (as in much of the cosmic ray spectrum) implies that when the energy doubles, the cosmic ray flux is reduced to one-eighth. After the low-energy region, dominated by cosmic rays from the Sun (part of the so-called solar wind), the spectrum falls with p ≈ 2.7 for energies up to approximately 1,000 TeV, also called 1 PeV. At higher energies there is a further steepening, with p becoming approximately 3; the point at which this change of slope takes place is called the “knee.” Another steepening, around 100,000 TeV, is dubbed the “second knee.” At even higher energies (over one million TeV) the spectrum becomes less steep again, a further change of slope called the “ankle” (Fig. 1.7). The most energetic cosmic rays ever detected carry approximately a billion TeV (recall that 7 TeV is the energy of the LHC beams).
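The effect of a steeply falling power law can be checked with a few lines of code; the values below are illustrative only. The sketch reproduces the factor-of-eight suppression per energy doubling for p = 3, and shows how quickly the flux drops over three decades in energy for p = 2.7.

```python
def flux(energy, p, norm=1.0):
    """Differential flux following a power law, norm * E**(-p).
    Units are arbitrary here; only ratios matter."""
    return norm * energy ** (-p)

# Doubling the energy with spectral index p = 3 cuts the flux to 1/8
ratio = flux(2.0, p=3) / flux(1.0, p=3)
print(ratio)  # 0.125

# With p = 2.7 (below the knee), a factor of 1000 in energy
# suppresses the flux by 1000**2.7, roughly 1.3e8
suppression = flux(1.0, p=2.7) / flux(1000.0, p=2.7)
print(f"{suppression:.2e}")
```

This steep fall-off is why low-energy cosmic rays can be studied with small detectors on balloons and satellites, while the highest energies require instruments covering huge areas on the ground.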

Atmospheric Showers

Cosmic rays impinging on the atmosphere (called primary cosmic rays) generally produce secondary particles that can reach the Earth’s surface through the


Fig. 1.8 When a primary cosmic particle interacts with a nucleus in the Earth’s atmosphere, it can generate an extended shower of particles. Showers can consist of millions of particles, with a complex history of chained interactions, particle production, absorption and spontaneous decays. Credit: ESO

mechanism of so-called multiplicative “showers”: a complex chain of interactions, particle production, absorption and spontaneous decays (Fig. 1.8). Without the shielding effect of the Earth’s atmosphere, cosmic rays would reach us directly and pose a serious health hazard (people living on high mountains or taking frequent airplane trips already receive a relevant additional dose of radiation). Besides radioactivity-related effects, there are also (controversial) indications of a correlation between cosmic rays and meteorological conditions, and important experimental results have been achieved in this regard. In August 2011, physicists at CERN in Geneva produced artificial cloud embryos in the CLOUD experiment, observing that cosmic rays increase (by up to 10 times) the formation of aerosol particles and the rate at which they aggregate into clusters, which gradually grow until they form clouds.
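The multiplicative growth of a shower can be illustrated with the classic Heitler toy model: the number of particles doubles at each splitting step, the energy being shared equally, until the energy per particle drops below a critical value. The 85 MeV used below is a typical figure for electromagnetic cascades in air; all numbers are illustrative, and real showers are far more complex.

```python
def heitler_shower(e0_ev, e_critical_ev=85e6):
    """Toy Heitler model of a particle cascade: at each step every
    particle splits in two, sharing its energy equally; the
    multiplication stops once the energy per particle falls below
    the critical energy. Returns (splitting steps, particle count
    at shower maximum)."""
    n_particles = 1
    steps = 0
    while e0_ev / n_particles > e_critical_ev:
        n_particles *= 2
        steps += 1
    return steps, n_particles

# A 1 TeV (1e12 eV) primary, purely for illustration
steps, n_max = heitler_shower(1e12)
print(steps, n_max)  # 14 doublings, 16384 particles at maximum
```

Despite its crudeness, the model captures two real features: the particle count at maximum grows linearly with the primary energy, and the depth of the maximum grows only logarithmically.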

Cosmic Ray Sources

Approximately once per second, a subatomic particle enters the Earth’s atmosphere carrying the same energy as a well-thrown stone. Somewhere in the Universe, unknown accelerators can impart to a single proton an energy 100 million times larger than that obtainable with the most powerful accelerators on Earth. Where are these accelerators, and how do they work?


Fig. 1.9 The supernova remnant in the Crab Nebula, a powerful gamma-ray emitter in our galaxy. The explosion of the supernova occurred in AD 1054 and was recorded by Chinese astronomers. The vortex of matter around the center is clearly visible; at the center there is a rapidly rotating neutron star characterized by periodic emission of energy (a pulsar). Some supernova remnants, seen from Earth, have an apparent size of a few tenths of a degree, approximately the size of the Moon. Multifrequency image reconstructed from NASA’s Hubble, Chandra and Spitzer space telescopes, and the MAGIC telescope on the Canary island of La Palma. Credit: NASA

The ultimate driver of cosmic ray acceleration is gravity. In gigantic gravitational collapses, such as those occurring in supernova remnants (as we have already mentioned, stars that implode at the end of their lives; see, for example, Fig. 1.9) and in the accretion of supermassive black holes (with masses of millions or even billions of solar masses) at the expense of surrounding matter (Fig. 1.10), part of the gravitational potential energy is transformed into kinetic energy of the particles falling toward the black hole, or in general toward the central compact object (the details of this mechanism still hide puzzles). Large stars can spontaneously evolve into black holes by collapsing once their nuclear fuel is exhausted, since stars are sustained by the nuclear energy they produce. Most known galaxies appear to host a supermassive black hole (millions to billions of solar masses) at their center. In particular, the Milky Way, our Galaxy, harbors a black hole of almost four million solar masses at its center, in the constellation of Sagittarius. As we have already said, for the discovery of the black hole at the center of our galaxy, Genzel and Ghez were awarded the


Fig. 1.10 A supermassive black hole grows by swallowing nearby stellar bodies, and emits jets of charged particles, neutrinos and gamma rays. Credit: NASA GSFC

Nobel Prize in Physics in 2020 (along with Roger Penrose of the University of Oxford, who explained the mechanism of formation of black holes within Einstein’s theory of general relativity, as well as some properties of the Big Bang). Once formed, given the conditions of strong gravity, a supermassive black hole tends to grow by engulfing nearby matter; this process produces violent collisions and the emission of high-energy particles. The reason human-made accelerators cannot compete with the still mysterious cosmic accelerators is simple: acceleration requires containment, and containing particles within a radius R requires a magnetic field B; the energy of the contained particles is proportional to the product of R and B. On Earth, it is difficult to envisage confinement radii larger than kilometers and magnetic fields stronger than a dozen tesla (one hundred thousand times the Earth’s field); this combination yields energies on the order of the TeV, such as those of CERN’s LHC accelerator. In nature, on the other hand, there are accelerators with much larger radii, such as the remnants of supernovae (hundreds of light years) and active galactic nuclei (tens of thousands of light years). Today we know how to exploit these cosmic accelerators, but we also know that, besides the advantage of the high energies attained, they have several disadvantages for particle-physics research compared to accelerated beams in the laboratory: cosmic ray fluxes are very low, and the energies are not known precisely. Cosmic rays are above all a tool for exploration; firm conclusions must be validated at accelerators.
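The scaling of the attainable energy with the product B·R can be made concrete with a back-of-the-envelope sketch. For a particle of unit charge, the maximum energy in electronvolts is roughly B·R·c (B in tesla, R in meters, c ≈ 3×10⁸ m/s). The field and size values below are rough, illustrative choices, not precise figures for any particular machine or object.

```python
C = 3.0e8            # speed of light, m/s
LIGHT_YEAR_M = 9.46e15  # meters per light year

def max_energy_ev(b_tesla, radius_m):
    """Rough maximum energy (in eV) for a unit-charge particle
    confined by a magnetic field B within a radius R: E ~ B*R*c."""
    return b_tesla * radius_m * C

# LHC-like machine: ~8 T dipole fields, ~4.3 km radius
print(f"LHC-like: {max_energy_ev(8.0, 4.3e3):.1e} eV")  # ~1e13 eV, i.e. ~10 TeV

# Supernova remnant: ~100 microgauss (1e-8 T), ~100 light years
print(f"SNR-like: {max_energy_ev(1e-8, 100 * LIGHT_YEAR_M):.1e} eV")  # ~3e18 eV
```

Even with a magnetic field a billion times weaker, the enormous size of a supernova remnant wins by several orders of magnitude over any terrestrial machine.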


Traveling through the Milky Way, charged cosmic rays are deflected by the weak galactic magnetic field (roughly one millionth of the Earth’s magnetic field) before reaching us. Below an energy of approximately one hundred thousand TeV, the effect of galactic magnetic fields causes all information about the direction of origin to be lost. Neutral particles, on the other hand, arrive at our detectors retaining the memory of their direction of origin. So-called gamma rays are photons (particles of light) of very high energy: they occupy the most energetic part of the electromagnetic spectrum. Since they have no electric charge, they can travel long distances without being deflected by galactic and extragalactic magnetic fields, and they thus enable the direct study of their emission sources. The photon energy spectrum extends to the highest energies in the Universe, but when we speak of gamma radiation in cosmic rays we typically refer to a range from MeV (millions of times the energy of visible light) to PeV (a billion MeV). These facts prompt us today to study mainly gamma rays of very high energy and charged cosmic rays of hundreds of millions of TeV. However, gamma rays are few compared to charged cosmic rays, and the steeply falling energy spectrum means that charged cosmic rays of hundreds of millions of TeV are themselves very rare events, so large detectors are needed to measure them. As explained in the Introduction, the detection of gravitational waves and neutrinos also requires large detectors, a technology we have been refining in recent years. Therefore, to explore the Universe’s very-high-energy phenomena, it is necessary to build instruments covering large areas on the ground, possibly placed high in the mountains to limit absorption by the atmosphere; these instruments are complementary to others placed on satellites.
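Why directional information is lost can be estimated from the Larmor (gyration) radius of a proton in the galactic field, r ≈ E/(B·c) for an ultrarelativistic particle of unit charge (E in eV, B in tesla). The field strength of 3 microgauss assumed below is a commonly quoted order of magnitude, and the whole exercise is order-of-magnitude only.

```python
C = 3.0e8            # speed of light, m/s
PARSEC_M = 3.086e16  # meters per parsec

def larmor_radius_pc(energy_ev, b_tesla):
    """Gyration radius (in parsecs) of an ultrarelativistic
    unit-charge particle: r = E / (B * c), with E in eV, B in T."""
    return energy_ev / (b_tesla * C) / PARSEC_M

# Proton of 100,000 TeV (1e17 eV) in an assumed ~3 microgauss
# (3e-10 T) galactic field
r = larmor_radius_pc(1e17, 3e-10)
print(f"{r:.0f} pc")  # a few tens of parsecs -- tiny compared with
                      # the ~10,000 pc scale of the galactic disk
```

Since the gyration radius is minuscule compared with the distance to any plausible source, the proton’s arrival direction is completely scrambled; only at far higher energies, or for neutral messengers, does the sky become “transparent” in direction.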
How did we come to discover the existence of natural radiation, and how did we conclude that much of this radiation is of extraterrestrial origin? The next two chapters review the history of the investigation of the origin of cosmic rays and of the first discoveries made with these particles, a fascinating adventure that has engaged and excited many scientists for more than a century.

2 The Mystery of Cosmic Rays

At the turn of the twentieth century, scientists discovered the existence of natural radioactivity on Earth. This groundbreaking discovery sparked curiosity about the origin of such radiation, and the quest to solve the enigma became one of the most thrilling intellectual pursuits in the history of science. Although some fundamental questions remain unanswered to this day, this pursuit ultimately led to the conclusion that a significant part of natural radiation originates from extraterrestrial sources. This extraterrestrial radiation has been given the name “cosmic rays.”

The Discovery of Natural Radioactivity

The electroscope (Fig. 2.1) is a device capable of detecting electric charge. A typical example is the leaf electroscope, which reveals charge through two thin metal sheets (called leaves), fixed at the upper end, that repel each other and therefore diverge when charged. Were it not for imperfect insulation, it would seem at first glance that an electroscope should hold its charge forever. In 1785, Charles-Augustin de Coulomb (Angoulême 1736–Paris 1806), the father of electrology, suspected that reality was more complicated. Coulomb was an officer of the French military engineering corps. He studied mechanics, fluid dynamics, and above all electricity and magnetism, fields to which he dedicated seven fundamental memoirs between 1785 and 1789. In his last years he was forced into private life by the revolutionaries, which allowed him to perfect his experiments.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. De Angelis, Cosmic Rays, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-38560-5_2

Among his findings was the observation that


Fig. 2.1 Schematic operation of the electroscope. From Wikimedia Commons

electroscopes spontaneously discharge in air, even when electrically insulated as well as possible. Suspecting a fundamental reason, he recorded this observation in one of his memoirs. After the English physical chemist Michael Faraday (Newington 1791–Hampton Court 1867) confirmed Coulomb’s effect in 1835, his student and compatriot William Crookes (famous today for discovering the chemical element thallium; the phrase with which Faraday greeted Crookes every morning is legendary: “Work! Finish! Publish!”) observed in 1879 that the discharge rate decreased when the pressure was reduced. He concluded that the direct cause of the electroscope discharge must be the ionization of the air contained in the electroscope itself. But what was the first cause of that ionization? The explanation of the phenomenon of spontaneous discharge came at the beginning of the twentieth century, and it paved the way for a scientific discovery revolutionary for humanity: the existence of cosmic rays. The study of the discharge rate of electroscopes required quite sophisticated experimental technology; fortunately, this type of measurement had been very popular since the end of the eighteenth century, because it was linked to atmospheric-electricity issues ultimately connected to meteorological problems. William Thomson (Belfast 1824–Netherhall 1907), who would later be appointed Lord Kelvin, professor at Cambridge and Glasgow and today most famous for his contributions to thermodynamics, transformed electrometry into a science, inventing new, more precise electroscopes. The technique was


developed not only in Great Britain but also in the United States, Canada, Italy, Germany, and, in particular, Austria. In some cases these studies had applications in agriculture and military science, two sectors that would have benefited greatly had humans been able to influence atmospheric phenomena with electricity. Of great importance for the development of atmospheric-electricity studies was the research of Franz Exner in Vienna in the second half of the nineteenth century. Exner, whose school produced numerous Nobel laureates, not only perfected the electroscope further, improving on Lord Kelvin’s instruments, but also succeeded in attracting many good students, most notably Erwin Schrödinger, later winner of the Nobel Prize in physics in 1933 for his contributions to quantum physics. Schrödinger became interested in physics precisely through the study of atmospheric ionization. Natural radioactivity was discovered in 1896 by the Frenchman Henri Becquerel. A few years later, his student Maria Sklodowska and her husband Pierre Curie (Fig. 2.2) observed that the elements polonium and radium undergo transmutations that generate radioactivity (radioactive decays). According to the nomenclature of the time, the set of gaseous products of radium (an element now called radon) was named radio-emanation. Radon decays through

Fig. 2.2 Four Nobel prizes in the same family: Marie Curie, Nobel laureate in physics in 1903 for discoveries related to natural radioactivity, Pierre Curie, who shared with Marie the Nobel in 1903, and their daughter Irène, who in turn would win the Nobel Prize in chemistry in 1935 for the discovery of induced or artificial radioactivity. Marie Curie won a second Nobel prize (in chemistry) in 1911. From Wikimedia Commons


the emission of alpha particles (helium nuclei) into what used to be called radium A, or Ra A, which in turn decays, again with the emission of alpha particles, into Ra B; the latter transforms, with the emission of beta rays (electrons), into Ra C. The combination of the substances Ra A, Ra B and Ra C was called “induced radioactivity.” Maria Sklodowska Curie, better known as Marie Curie (Warsaw 1867–Paris 1934), is still a legendary figure today. She influenced the development of nuclear physics both experimentally and theoretically. A Polish chemist and physicist (born in Russian-ruled Warsaw), naturalized French, she was awarded the Nobel Prize in physics in 1903, at the age of 36 (along with her husband Pierre Curie and Henri Becquerel), and, in 1911, the Nobel Prize in chemistry for her work on radium; she is to date the only scientist to have won the Nobel Prize in two different scientific disciplines. In the presence of a radioactive material, a charged electroscope discharges faster; it follows that the spontaneous discharge of electroscopes may be due to charged particles emitted in radioactive decays. The discharge rate of an electroscope, as Marie Curie observed, can therefore be used to measure the level of radioactivity. This observation opened a new season of research on natural radioactivity in Europe and the New World (the United States and Canada in particular) and, given the common experimental technique, in some ways unified the ionization studies carried out in the context of meteorology with research on environmental radioactivity. At the turn of the twentieth century, Julius Elster (1854–1920) and Hans Geitel (1855–1923) improved the technique by isolating the electroscope in a closed vessel, thus increasing the sensitivity of the instrument (Fig. 2.3). As a result, they were able to make quantitative measurements of the spontaneous discharge rate.
Elster and Geitel were two gymnasium teachers from Wolfenbüttel, a small town in Lower Saxony; friends since their school days, they shared the same house and worked obsessively on the study of atmospheric electricity. In an experiment conducted in 1899, by enclosing the electroscope in a metal box they found a decrease in radioactivity, thus determining that the discharge was primarily due to ionizing agents coming from outside the container housing the electroscope. The obvious questions concerned the nature of this radiation, and whether it was of terrestrial or extraterrestrial origin. The simplest hypothesis was that it came from radioactive materials; since such materials are present in the soil, a terrestrial origin was the common assumption. An experimental demonstration, however, seemed difficult to achieve.


Fig. 2.3 An early twentieth century electroscope by Elster and Geitel. Courtesy of the Physics Cabinet of the Calasanzio Institute of Empoli

Is Natural Radioactivity of Extraterrestrial Origin?

Charles Thomson Rees Wilson (Edinburgh, 1869–1959) was a Scottish physicist. He had been Lord Kelvin’s student at Cambridge, and from a young age he was deeply interested in the mechanisms of cloud and fog formation, in atmospheric electricity and thunderstorms, and in the condensation of vapors around charged particles (we will discuss later an instrument he invented, the cloud chamber, which would prove fundamental for elementary particle physics and for whose invention he would be awarded the Nobel Prize in physics in 1927). In 1901 he confirmed Elster and Geitel’s result and suggested that the origin of the ionization could be a highly penetrating radiation of extraterrestrial origin. He wrote: “Experiments must be conducted to see whether the production of ions in the impurity-free air can be explained as originating from sources outside the atmosphere, probably of radiation such as Röntgen rays or cathode rays, but enormously more penetrating.” Experimental investigations, however, did not support the extraterrestrial hypothesis: Wilson took his electroscope into a tunnel in Scotland, well shielded by the surrounding rock, but did not measure, because of experimental


uncertainties, a decrease in radioactivity such as he expected to find if his hypothesis were true. Although occasionally discussed, the hypothesis of an extraterrestrial origin of the radiation was abandoned for the next few years. Elster, Geitel and Wilson’s findings nevertheless generated great interest in the issue. In the period from 1906 to 1908, numerous systematic studies were carried out around the world to characterize the origin of the radiation. Mache’s group in Vienna and McLennan’s group in Canada measured radiation under different conditions of temperature, altitude, and location, from the Matterhorn to the frozen surface of the Great Lakes. In principle, the radiation on the surface of a lake is expected to be somewhat lower than on the ground, given the smaller amount of radioactive material in the water; however, fluctuations related to location, time of day, pressure, and temperature were beyond the accuracy of the instruments, and a clear picture seemed impossible to obtain. Based on the study of ionization as a function of wind speed, Mache believed that the radiation had a measurable atmospheric component. Between 1907 and 1908 the English-Canadian physicist Arthur Eve (1862–1948) made measurements at various locations; the results showed, within errors, consistent levels of radioactivity over the Atlantic Ocean, in England, and in Montreal. In 1908 Elster and Geitel observed a 28% decrease in radioactivity when moving their electroscope from the surface of the Earth to the bottom of a salt mine. They concluded, in agreement with the literature, that the soil is the source of the penetrating radiation and that certain types of soil, such as salt deposits, are relatively free of radioactive substances and can thus act efficiently as shields. The situation in 1909 can be summarized as follows. The phenomenon of spontaneous discharge is consistent with the hypothesis that, even in isolated environments, a background radiation exists that is capable of penetrating metallic walls.
This penetrating radiation could have three components: extraterrestrial radiation, probably coming from the Sun; radioactivity from the Earth’s crust; and radioactivity in the atmosphere. Few were convinced of the extraterrestrial or atmospheric options: the prevailing view was that most of the radiation came from radioactive material in the Earth’s crust. Many calculations were made of how the radiation, if it originated from the ground, should decrease with height, and measurements were made to verify the consistency of this scenario. As is often the case in physics, improved instruments were needed to settle the question.


Father Wulf, a True Experimentalist

To measure radioactivity in different places, it was necessary to move the electrometers, which were still difficult to transport; an improvement of the instruments was needed, and possibly an innovative idea on how to carry out the measurements. The fundamental work of Father Wulf responded to both needs. Theodor Wulf (Hamm, Westphalia, 1868–Winterberg, Sauerland, 1946) was a German scientist; he became a Jesuit at the age of twenty, before studying physics under the guidance of the Nobel laureate Walther Nernst at the University of Göttingen. He taught physics at the Jesuit University of Valkenburg, in the Netherlands, from 1904 to 1914 and from 1918 to 1935, and at the Collegio Romano (the Jesuit university of Rome). Wulf designed and built a more sensitive and, above all, more transportable electrometer. In Wulf’s electroscope, the two leaves were replaced by two wires or blades of metallized glass, with a tension spring placed between them (Fig. 2.4). In 1909 Wulf tested his electroscope by measuring ionization in various places in Germany, the Netherlands and Belgium. He interpreted the results of his

Fig. 2.4 Wulf’s electroscope. The cylinder has a diameter of 17 cm and a depth of 13 cm. On the right is the microscope through which one reads the distance between the two metallized glass threads, illuminated by light reflected from the mirror on the left. The sensitivity of the instrument is approximately 1 V. From Wikimedia Commons


experiments as confirming the validity of the instrument he had developed; everything was consistent with the hypothesis that the penetrating radiation was caused by radioactive substances present in the upper layers of the Earth’s crust, with temporal variations attributed to fluctuations in atmospheric pressure or air flow. Wulf concluded that, if an additional component existed, it was too small to be measured with the available instrumentation. Once the instrument was perfected and the validity of the measurements verified, Wulf had the idea of measuring the variation of radioactivity with height in order to understand its origin. The concept was simple: if the radioactivity came from the Earth, it should decrease with altitude. In 1909 he took his electroscope to Paris and measured the ionization rate at the top of the Eiffel Tower (approximately 300 m high). Assuming that most of the radiation was of terrestrial origin, he expected to find less ionization at the top than at the ground; the ionization rate, however, decreased too little to confirm this hypothesis. Wulf concluded in his paper that, compared with the value at the ground, the intensity of the radiation “decreases at approximately 300 m [of altitude] to not even half of its value at the ground,” whereas under the assumption that the radiation emerged from the ground, only a small fraction of it was expected to survive at the top of the tower. Wulf’s observations were of great value because the data were recorded at different times of day and on different days at the same location. For a long time, Wulf’s data were considered the most reliable source of information on the effect of altitude on penetrating radiation. Wulf nevertheless concluded that the most likely explanation of his result was still emission from the ground. His experiment also struck the collective imagination because of its simplicity and elegance.
In short, the prevailing interpretation remained that the radioactivity originated mainly from radioactive materials in the Earth’s crust. Schrödinger, who had himself carried out experimental ionization measurements within Exner’s group, wrote in 1911 that three possible explanations had been suggested for the source of the observed penetrating radiation, although the experts were in vehement disagreement about their relative importance: radioactive substances contained in the soil or deposited on the Earth’s surface; radioactive substances suspended in the atmosphere; and hypothetical extraterrestrial sources of radiation. The third source, Schrödinger concluded, was completely hypothetical and should be introduced only if the first two proved insufficient to explain the observations.


Pacini and the Measurements of Radioactivity Attenuation in Water

The Italian physicist Domenico Pacini (1878–1934) questioned the conjecture that radioactivity originated mainly from the Earth’s crust. Pacini was born in Marino, near Rome; in Rome he attended, for his upper secondary studies, the physical-mathematical section of the technical institute “Leonardo da Vinci”, not a very prestigious school. He graduated in physics in 1902 at the Faculty of Sciences of the University of Rome, where he worked for the following three years as an assistant to Professor Pietro Blaserna. The choice of supervisor was fortunate. Pietro Blaserna (Fiumicello di Aquileia, near Gorizia, 1836–Rome 1918) was born in an Austrian territory that became Italian in 1866, and graduated in Vienna in the same group in which Exner was trained. At the age of thirty Blaserna had to choose between Austrian and Italian citizenship, and he chose the Italian one; he made a rapid career, becoming a full professor and then president of the Senate and the first president of the Italian Physical Society (SIF). In a short time, Blaserna formed a group that established a collaboration with Exner’s group; thanks to this collaboration, his group in Rome could use the most advanced instruments for measuring atmospheric ionization. The beginning of Pacini’s career was bright. In 1904 he studied N-rays, a hypothetical new form of radiation capable of penetrating aluminum, later proven to be nonexistent; Pacini was among those who asserted the nonexistence of N-rays, which earned him a publication in the prestigious journal Nature. In 1905 Pacini obtained a tenured position as an assistant at the Central Bureau of Meteorology and Geodynamics, where he worked in the group charged with studying thunderstorms and electrical phenomena in the atmosphere.
Pacini traveled extensively within Italy on missions for the Central Bureau, in particular to Castelfranco Veneto, to Monte Velino in Abruzzo, and to the meteorological observatory of Sestola, near Modena. During his trips he collected radioactivity measurements at locations with different geology and different altitudes above sea level. The long road that led Pacini to the hypothesis of cosmic rays (or, more precisely, of radiation that does not come from the Earth) began with the studies of electrical conductivity in gaseous media that he carried out at the University of Rome in the early years of the twentieth century. Pacini became interested in the problem of the ionization of air and became familiar with many instruments for measuring it. Ionization can result from ultraviolet rays, wind, and mechanical effects, but its main cause is radioactivity, although the final level of ionization is influenced by many factors, such as humidity.


Fig. 2.5 Three electroscopes used by Pacini in his measurements (the original instruments, which have since been lost). In the foreground is Ebert’s electroscope. From Wikimedia Commons

Before ionization can be properly studied, careful control of systematic effects is therefore necessary. Starting in 1905, Pacini made systematic measurements of the conductivity of air using an Ebert electroscope, derived from and very similar to the Elster and Geitel electroscope (Fig. 2.5), an instrument he improved to increase the sensitivity of the measurement (reaching a sensitivity of one-third of a volt). First he made many measurements to establish the variations of the electroscope’s discharge rate as a function of the environment, in Rome, Sestola, Livorno, on Monte Velino, and in the valley below, near Forme di Massa d’Albe. Pacini sought to identify the sources of ionization; in particular, through the study of the activation rate of a charged wire he could recognize the radium, thorium, and actinium families, sources of radioactivity present in the Earth’s crust. A summary of these results indicates, according to the author’s conclusions, that “under the assumption that the origin of the penetrating radiation is in the soil only [...] results cannot be explained.” In August and September 1908, Pacini began a systematic study of the variation of the ionization over time inside a 1.3-mm-thick zinc chamber. He found strong variations (up to a factor of five), dependent on temperature, pressure, and humidity. He also identified a daily cycle with two maxima; to explain this temporal variation he hypothesized a solar origin for part of the penetrating radiation, confirming earlier observations. He concluded,

2 The Mystery of Cosmic Rays


Fig. 2.6 Pacini at work in May 1910. From A. De Angelis; courtesy of the Pacini family

however, that the Sun could not be the only source of radiation. Pacini’s work was also cited by Marie Curie in her monumental Traité de radioactivité.

Pacini continued and perfected his experimental program of systematic radiation measurements on the ground (at different altitudes, including at sea level, and at different locations to study local effects) and on the sea (Fig. 2.6). These last measurements were carried out on the Tyrrhenian Sea, in front of the Naval Academy in Livorno, on an Italian Navy ship, the destroyer “Fulmine” (Fig. 2.7). Initially, Pacini placed the electroscope on the ground and then on the sea a few kilometers from the coast; the ionization measurements were consistent within the intrinsic fluctuations.

The definitive experiment was carried out in June 1911, during seven days of deep-water measurements in the Tyrrhenian Sea. With the apparatus at the sea surface 300 m from the coast, Pacini measured the discharge rate of the electroscope eight times over three hours, obtaining an average ionization of 11.0 ions per second per cubic centimeter; with the apparatus at a depth of 3 m in a 7-m-deep stretch of sea, seven tests yielded an average of 8.9 ions per second per cubic centimeter (with an estimated error of 0.2 V per hour). The difference (2.1 ions per second per cubic centimeter) can be attributed to radiation independent of the radioactivity of the Earth’s crust (the underwater measurement was 20% lower than at the


Fig. 2.7 The destroyer “Fulmine” photographed during the first mission Pacini participated in, in 1907. The photograph was taken in the vicinity of Santa Margherita Ligure. From Wikimedia Commons

surface, indicating the absorption by water of radiation coming from outside). The result was statistically significant for the first time: in modern language, the significance was at the level of 4.3 standard deviations (i.e., the probability that the result arose from a statistical fluctuation was smaller than 0.002%). Consistent results were obtained during subsequent measurements made at Lake Bracciano, at the same depth.

Pacini reported these measurements, the results obtained, and their interpretation in a note in Italian whose title can be translated as Penetrating radiation at the surface and underwater. This note, published in the journal of the Italian Physical Society “Il Nuovo Cimento” in February 1912, marked the beginning of the underwater technique for cosmic ray studies (a technique that has been used many times up to the present day). Pacini wrote:

With an absorption coefficient of 0.034 for water, it is easy to deduce from the well-known equation [...] describing the exponential decay of radiation with the thickness of the crossed material, that under the conditions of my experiments, the activities of the seafloor and of the surface were both negligible.
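In modern notation, both the statistical weight of Pacini's surface-underwater comparison and the attenuation argument in the quotation above reduce to one-line calculations. The sketch below is not from the original text: it assumes an uncertainty of about 0.35 ions/s/cm³ on each average (chosen to reproduce the quoted significance) and reads the 0.034 absorption coefficient as being per centimeter of water.

```python
import math

# --- Significance of the surface vs. underwater difference ---
surface, underwater = 11.0, 8.9          # ions / (s * cm^3), Pacini 1911
diff = surface - underwater              # 2.1 ions / (s * cm^3)
sigma_mean = 0.35                        # assumed uncertainty on each average
sigma_diff = math.hypot(sigma_mean, sigma_mean)
z = diff / sigma_diff                    # ~4.2-4.3 standard deviations
p = 0.5 * math.erfc(z / math.sqrt(2))    # one-sided Gaussian tail probability

# --- Attenuation argument from the quoted note ---
mu = 0.034                               # absorption coefficient (assumed per cm of water)
depth_cm = 300                           # apparatus at 3 m depth
transmission = math.exp(-mu * depth_cm)  # fraction of surface gamma radiation surviving
```

With these inputs the difference corresponds to about 4.2 standard deviations and a one-sided chance probability near 0.001% (below the 0.002% quoted), while only a few parts in 10⁵ of any surface gamma radiation would survive 3 m of water, supporting Pacini's conclusion that the activities of the seafloor and surface were negligible at depth.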


He concluded: [A]pparently the experiences of which this note is the subject confirm [...] that there exists in the atmosphere a sensitive ionizing cause, with penetrating radiation, independent of the direct action of radioactive substances in the soil.

Pacini’s technique could not rule out with certainty an atmospheric origin of the radiation, but he cited Eve, who had concluded that the contribution of radioactive substances in the air is negligible. This was the first time it was established that the results of many radiation experiments could not be explained by radioactivity in the Earth’s crust. It should be mentioned as a curiosity that in 1910 Pacini also tested for a possible increase in radioactivity during the passage of Halley’s comet, and found no indication of an effect from the comet itself. In Italy there were balloons capable of flying up to an altitude of 5,000 m; although Pacini published a paper in 1909 on the perturbations produced by balloons on the Earth’s electric field, he never made measurements of ionization at high altitude on balloons. The use of this technique found its most refined realization in Austria.

Hess and Measurements of Radioactivity on Balloons

The importance of balloon experiments for studying atmospheric electricity became clear after Wulf’s observations on the effect of altitude. Balloon ascents had a long history, dating back to Bartolomeu de Gusmão and the Montgolfier brothers’ invention of the hot air balloon; in 1804, Gay-Lussac and Biot ascended to several thousand meters to study the atmosphere and test their famous gas laws.

In December 1909, the first balloon flight aimed at studying the properties of penetrating radiation was carried out in Switzerland using the balloon “Gotthard” of the Swiss Aeroclub. Professor Albert Gockel from the University of Fribourg conducted measurements up to 3,000 m and found that ionization did not decrease with altitude as expected. He confirmed Pacini’s conclusion that a non-negligible part of the penetrating radiation is independent of the Earth’s radioactive substances and concluded that it is not of terrestrial origin. However, contrary to his expectations, Gockel was unable to observe an increase in radioactivity with altitude. Gockel had indeed been particularly unlucky. Later calculations, carried out by Schrödinger during his thesis work and soon after, showed that if radioactivity comes partly from the Earth and partly from above (as is the case),


up to three thousand meters the decrease in radioactivity from the Earth’s crust can be compensated for by the growth of radioactivity from extraterrestrial sources. At the same time as Gockel, the German physicist Karl Bergwitz performed measurements up to an altitude of 1,500 m, also, of course, without significant results.

Between the end of the 19th century and the early 20th century, Austria-Hungary was at the forefront of cultural, scientific, and artistic innovation. With the blending of multiple cultures (German, Hungarian, Czech, Slovak, Polish, Slovenian, Croatian, and Italian), Vienna became a melting pot of ideas. This cultural hub gave birth to some of the most revolutionary ideas in human civilization, from psychoanalysis to quantum theory, and from music to physiology. The cafés of Vienna became meeting places where creative individuals could exchange ideas and collaborate, and this dynamic atmosphere made the city a center of cutting-edge research and innovation. Literature flourished, with significant contributions from Musil and Roth, and so did science.

One of the most notable scientists of the time was Mach, known for his interdisciplinary approach to physics. He believed that the behavior of a physical system, particularly the inertia of bodies, depended on the distribution of masses in the Universe. This highly nonlocal and holistic view challenged the reductionist approach of Descartes, Galileo, and Newton, and laid foundations for modern physics, particularly quantum physics and Einstein’s general relativity. Another prominent physicist of the Vienna school was Boltzmann, who had taken over the chair previously occupied by Stefan and whose groundbreaking work in statistical mechanics laid the foundations of the modern understanding of thermodynamics.
The creative individuals of the Vienna school later scattered, taking with them the seeds of contemporary culture that would flourish throughout the world. The legacy of this Austrian renaissance continues to influence modern physics and science, and we are still building on it.

Even in such a rich environment, terrestrial physics was not neglected. Exner’s group excelled, and Schrödinger, who enrolled in the university in 1906 (the year of Boltzmann’s tragic suicide in Duino), chose to start his career in environmental physics. In the early 1900s, Lise Meitner also studied in Vienna with Boltzmann and was among the first women to earn a doctorate in physics. It was in Vienna that she began research in radioactivity, which would later lead her to the discovery of nuclear fission.

Into this unique environment and time in history came the Austrian physicist Victor Franz Hess. Through a long series of balloon flights, Hess, trained in the Vienna school, was able to provide independent and solid evidence for the


extraterrestrial origin of at least some of the radiation that caused the observed ionization.

Hess was born in 1883 at Waldstein Castle in Styria (his father was the administrator of the estates of Prince Oettingen-Wallerstein, and his mother was a housewife, a family situation similar to Pacini’s). He attended grammar school in Graz and began studying physics at the university in 1901. He earned a doctoral degree in physics in 1906 “sub auspiciis imperatoris”, a mark of exceptional distinction. After graduation, Hess intended to move to Berlin to work on optics with Professor Drude but, after Drude’s sudden suicide, decided instead to go to Vienna to work at the Institute directed by Exner, who encouraged him to focus on radioactivity and atmospheric electricity. In 1910, Hess was appointed to teach medical physics at the Faculty of Veterinary Medicine. He also became an assistant to Professor Meyer at the Institute for Radium Research of the Viennese Academy of Sciences, where he conducted most of his research on cosmic rays.

Hess began his experiments by examining Wulf’s results and Eve’s measurements of radioactivity absorption coefficients in the atmosphere. To improve their accuracy, Hess measured gamma-ray absorption in air and obtained a coefficient consistent with Eve’s measurements. The contradiction with Wulf’s results remained, leading Hess to conclude that further measurements were needed.

Hess then carried out balloon observations (Fig. 2.8). On August 28, 1911, he made his first ascent, reaching a height of 1,070 m. On October 12, 1911, he made a second ascent during the night. During both flights, Hess found that the intensity of penetrating radiation did not vary with altitude within the limits of error. From April to August 1912, Hess made seven ascents using three different radioactivity measuring instruments.
He used two airtight Wulf radiation detectors, with three- and two-millimeter walls, to observe gamma rays, and a third instrument, a Wulf electrometer enclosed in a cylindrical ionization vessel made of the thinnest commercially available zinc foil, to study beta rays (electrons), which have small penetrating power.

On August 7, 1912, Hess reached 5,200 m aboard the balloon “Böhmen” during a six-hour journey from Aussig to Pieskow, a village approximately sixty kilometers east of Berlin. His results showed that, after passing through a minimum, ionization increased significantly with height (Fig. 2.9). He wrote: (i) Immediately above the ground the total radiation decreases slightly; (ii) at an altitude between 1,000 and 2,000 m a slight regrowth of penetrating radiation


Fig. 2.8 Hess’ historic balloon flight in 1912. From Wikimedia Commons

Fig. 2.9 Variation of ionization with altitude. Left: Hess’ final ascent (1912), with two electroscopes (electroscope 2 was shielded with thicker walls). Right: Kolhörster’s ascents (1913, 1914)


occurs; (iii) the increase reaches, at an altitude between 3,000 and 4,000 m, already 50% of the total radiation that is observed on the ground; (iv) between 4,000 and 5,200 m the radiation is more than 100% stronger than that on the ground.

He concluded that the increase in ionization with height must depend on the fact that the radiation comes from above, and he thought this radiation was of extraterrestrial origin: The results of the present observations seem to be most logically supported by the assumption that radiation of very great penetrating power enters our atmosphere from above, and subsequently produces a part of the ionization observed in closed vessels in the lower layers. The intensity of this radiation appears to be subject to transient variations, observable on time scales of one hour.

He also ruled out the Sun as a direct source of this hypothetical penetrating radiation, because of the lack of variation between night and day and from the results of a mission carried out during a partial eclipse. Hess finally published a summary of his results in the Physikalische Zeitschrift in 1913, an article that reached the general public. The Austrian scientist coined the cautious term “Höhenstrahlung” (radiation from above); Wulf and Gockel had used the less cautious term “kosmische Strahlung” (cosmic radiation) three years earlier.

In 1913, Hess had the opportunity to fly on the balloon “Astarté,” owned by Egmund Sigmundt from Trieste, who made it available to him for free. Hess reached a height of 4,500 m during the flight. The quantitative measurements he took agreed with his previous results and with calculations made by the young Schrödinger, who assumed that part of the radiation came from the ground and part was of cosmic origin. These calculations are an excellent example of an elegant use of mathematical analysis; they also made clear that if Gockel had persisted, he would likely have obtained a significant result. Hess, however, did not acknowledge Schrödinger’s work.

Confirmations in Europe and the Tragedy of the First World War

Hess’s findings were later confirmed by Werner Kolhörster (1887–1946) of Germany, who conducted a series of high-altitude flights between 1913 and 1914. Building on Hess’s work, Kolhörster improved the design of


electroscopes by making them hermetically sealed, thus reducing the potential for systematic errors. This was especially important for thin-walled electroscopes, which were subject to pressure variations that introduced some subjectivity into the measurements. He recorded a tenfold increase in ionization compared to sea level and measured the radiation’s absorption coefficient, which came as a surprise: it was eight times smaller than the air absorption coefficient for gamma rays known at the time. However, Kolhörster did not follow up on this result, which could have led to the important conclusion that extraterrestrial radiation was not primarily composed of gamma rays. This understanding did not come until fifteen years later.

The 85th Congress of German-speaking physicists and physicians, a landmark event in the history of physics, was held in Vienna from September 21 to 28, 1913. With the city at the peak of its architectural, artistic, and cultural prestige, over 7,000 scientists attended the congress, making it one of the largest scientific gatherings to that date. The imperial court and the city of Vienna hosted banquets for the participants. During the congress, the Viennese physicists displayed their newly established Institute for Radium Research, and six physics sessions were held. In the second session, a young Albert Einstein presented an early version of his work on general relativity, including the calculation of the deflection of light near the Sun. He stated, “We hope that the expected eclipse in 1914 will finally allow us to reach a significant decision [between general relativity and classical physics].” This presentation captivated the audience and sparked discussions. The congress made significant contributions to the fields of relativity and quantum physics, with a lasting impact despite the limited opportunities for scientific collaboration in the following years due to the war.
One full session was devoted to measurements of radioactivity and atmospheric penetrating radiation. Hans Geiger, who had recently taken a teaching position in Berlin after serving as Ernest Rutherford’s assistant in Manchester, presented the concepts that would later form the basis of the Geiger counter (discussed below). Victor Hess also presented his experimental work, discussing the improvements he had made to Wulf’s electroscope. Wulf reviewed the results on the origin of cosmic rays, summarizing the experiments known to him and concluding that “the idea of an extraterrestrial source of cosmic rays is not supported by the observations”; he suggested that the observed variations could be mere fluctuations. Hess did not present the results of balloon flights, as that task was carried out by Kolhörster, who by then had made three ascents, reaching altitudes of 3,600, 4,000, and 6,300 m; his results confirmed Hess’s.


Pacini, who was not part of the academic community, did not attend the congress, missing a crucial opportunity.

Kolhörster’s conclusive flight, up to an altitude of 9,300 m above sea level, confirmed Hess’s results beyond any doubt (Fig. 2.9, right). It took place on June 28, 1914, the same day as the assassination in Sarajevo of Archduke Franz Ferdinand, the heir to the Austro-Hungarian throne, the event that triggered World War I and changed the course of history.

During the 1914–1918 war and the years that followed, physics, particularly in Europe, stagnated, and very few investigations of penetrating radiation were carried out. However, the discussion remained heated. Excerpts from the correspondence exchanged between Pacini and Hess in 1920 provide valuable insight into their views on scientific priority and their mutual understanding of each other’s results. On March 6, 1920, Pacini wrote to Hess:

I had the opportunity to study some of your papers about electrical-atmospherical phenomena that you submitted to the Principal Director of the Central Bureau of Meteorology and Geodynamics [in Rome]. I was already aware of some of these works from summaries that had been reported to me during the war. [But] the paper entitled “The problem of penetrating radiation of extraterrestrial origin” was unknown to me. While I have to congratulate you for the clarity in which this important matter is explained, I have to remark, unfortunately, that the Italian measurements and observations, which take priority as far as the conclusions that you, Gockel, and Kolhörster draw, are missing; and I am disappointed about this, because in my own publications I never forgot to mention and cite anyone.

Hess’s response, dated March 17, 1920, was as follows: Dear Professor, your very valuable letter dated March 6 was to me particularly precious because it gave me the opportunity to re-establish our links that unfortunately were severed during the war. I could have contacted you before, but unfortunately I did not know your address. My short paper “The problem of penetrating radiation of extraterrestrial origin” is a report of a public conference, and therefore has no claim of completeness. Since it reported the first balloon measurements, I did not provide an in-depth explanation of your sea measurements, which are well known to me. Therefore, please excuse me for my unkind omission, that was truly far from my aim.


On April 12, 1920, Pacini wrote again to Hess: [W]hat you say about the measurements on the penetrating radiation performed on a balloon is correct; however the paper “The problem of penetrating radiation of extraterrestrial origin” lingers quite a bit on measurements of the attenuation of this radiation made before your balloon flights, and several authors are cited whereas I do not see any reference to my relevant measurements (on the same matter) performed underwater in the sea and in the Bracciano Lake, that led me to the same conclusions that the balloon flights have later confirmed.

In his last letter, dated May 20, 1920, Hess replied: Coming back to your publication, I am ready to acknowledge that certainly you had the priority in expressing the statement, that a non-terrestrial radiation of 2 ions/cm3 per second at sea level is present. However, the demonstration of the existence of a new source of penetrating radiation from above came from my balloon ascent to a height of 5,000 m on August 7, 1912, in which I discovered a huge increase in radiation above 3,000 m.

The correspondence between Hess and Pacini, nine years after Pacini’s work and eight years after Hess’s 1912 balloon flight, highlights the difficulty of communication at that time. The language barrier may have contributed: Pacini published mainly in Italian and Hess in German, even in their exchange of letters. Additionally, Pacini at that time held no academic position, and because of this he could not defend his work effectively. Meanwhile, Kolhörster continued his research using newly developed instruments and made measurements in the mountains, which produced results (published in 1923) supporting the findings from the balloon flights. There were also opposing views on the hypothesis of extraterrestrial radiation: the German researcher Hoffmann, using highly sensitive electrometers of his own design, concluded that the cause of ionization was radioactive elements in the atmosphere.

Research in the United States

With Europe devastated, the center of research shifted to the United States. A key figure in this scenario was Robert Millikan (1868–1953), who was awarded the Nobel Prize in physics in 1923 for his work on the electron’s charge and on the photoelectric effect. Before turning to physics, Millikan had been a scholar of classical literature.


Fig. 2.10 Millikan and coworkers carried cosmic ray measurement equipment to Mount Whitney in 1925. Courtesy of the California Institute of Technology

Millikan and his collaborator Bowen developed a lightweight electrometer and ion chamber for uncrewed balloon ascents, utilizing technologies developed for military purposes. To their surprise, their ascents in Texas, up to a maximum altitude of 15,000 m, showed a radiation intensity only a quarter of that reported by Hess and Kolhörster. They initially attributed this difference to a reversal of intensity at higher altitudes, unaware that the geomagnetic field causes a significant difference in cosmic-ray intensity between the latitudes of Europe and Texas. Millikan believed that there was no extraterrestrial radiation, and at the American Physical Society congress in 1925 he stated that “all penetrating radiation is of local origin.”

However, in 1926, Millikan and Cameron made absorption measurements of the radiation at different depths in lakes at high altitude (Fig. 2.10). Based on the known absorption coefficients and the altitude dependence of the radiation, they concluded that it consisted of high-energy gamma rays propagating uniformly through space in all directions, and named it “cosmic rays” (recall that Gockel and Wulf had already used this term, while Hess more conservatively used the designation “Höhenstrahlung,” or radiation from above). They did not cite Pacini’s and Hess’s work.

Millikan was known for being a ruthless scientist. One of the jokes circulating about him at Caltech was, “Jesus saves, and Millikan takes the credit.” He was also skilled at communicating with the media, and the discovery of cosmic rays became, in the eyes of the public, a triumph of American science (Fig. 2.11). Millikan believed the radiation was generated by nuclear transitions

Fig. 2.11 In 1927 Millikan had the honor of appearing on the cover of the weekly Time magazine

characterized by energy releases similar to those in the nebulous matter of space. He called cosmic radiation the “birth cry of atoms” in our galaxy. His lectures garnered much attention from figures such as Eddington and Jeans, who tried to interpret Millikan’s theories.

The reactions to Millikan’s misbehavior of the two main protagonists of prewar research in Europe, Hess and Pacini, were very different, partly because of their different personal situations. Immediately after Millikan’s publication of 1926, Hess, together with Bergwitz and Kolhörster, wrote an article emphasizing the priority of the Austrian and German scientists in balloon experiments, stating explicitly that Millikan’s account was open to misunderstandings. Hess was familiar with the American environment: he had spent two sabbatical years in New Jersey in 1921 and 1922, teaching at various American universities, before being appointed full professor in Graz in 1923, where he established mountain laboratories for studying cosmic rays, although with limited time for research.


Pacini did not respond to Millikan’s publication due to health issues and a busy schedule in his new job as a full professor in Bari—he was tasked with establishing physics studies in the School of Medicine and restructuring the Institute of Physics. His main research interests then shifted to studying light scattering processes in the atmosphere. Meanwhile, significant advancements were made in understanding cosmic rays.

Cosmic Rays Are Predominantly Charged Particles

In the 1920s, only three types of radiation were known: alpha (helium nuclei), beta (electrons), and gamma (high-energy, ionizing photons). Gamma rays are the most penetrating of the three and were believed to constitute the cosmic radiation, since the ability of high-energy charged particles to penetrate matter was not yet known. Millikan proposed that these gamma rays were produced in the formation of helium nuclei from protons and electrons in interstellar space.

A crucial experiment to determine the nature of cosmic rays, specifically whether they were charged or neutral, was to measure the dependence of cosmic ray intensity on geomagnetic latitude. The Earth’s magnetic field deflects charged particles, causing a possible inhomogeneity of their flux at different latitudes. If cosmic rays are predominantly charged particles, a reduction in their intensity near the equator is expected. Charged particles approaching the Earth near the poles tend to travel along the magnetic field lines, experiencing no magnetic deflection and thus reaching the Earth’s surface with maximum intensity. Particles approaching the equator, however, travel perpendicular to the magnetic field and are deflected away; only those with high enough energy can reach the equator, while the others are deflected back into space, resulting in minimal intensity there.

In 1927 and 1928, important measurements were made by the Dutch physicist Jacob Clay during ship voyages between Java and Genoa, and later between Java and Amsterdam and Java and Southampton. Clay found that ionization increased reproducibly with latitude (Fig. 2.12), suggesting that cosmic rays were charged, contrary to Millikan’s belief. This fact carries an important consequence for cosmic rays: the existence of a geomagnetic cutoff.
The magnetic field acts as a shield, preventing low-energy particles from reaching the Earth’s atmosphere, while allowing higher-energy


Fig. 2.12 Radioactivity measurements made by Clay and collaborators; the effect of latitude is clear. This fact proves that cosmic rays are deflected by the geomagnetic field and, thus, they are predominantly charged particles. From Wikimedia Commons

particles to penetrate. The geomagnetic cutoff is an important factor in the study of cosmic rays, since it determines which particles can reach the Earth’s atmosphere and thus be detected by ground-based instruments. Since the radius of curvature of a particle in a magnetic field is proportional to its momentum divided by its charge, the cutoff is a function of a variable called “rigidity”, equal to p/q, where p is the momentum and q is the charge expressed in units of the elementary charge. The geomagnetic cutoff is highest at the equator, where it can reach about 10 GV (10 GeV for a particle of unit charge).

In 1928, the introduction of the Geiger-Müller counter tube (Fig. 2.13) marked the beginning of a new era in experiments. The Geiger counter,

Fig. 2.13 Geiger counters from the 1930s. From Wikimedia Commons


invented by Hans Geiger (1882–1945) and improved by his student Walther Müller, is an effective tool for measuring ionizing radiation, including the products of alpha and beta decay. The device consists of a tube filled with low-pressure gas and a metal wire stretched along its axis, insulated from the tube. A high voltage, approximately 1,000 V, is applied between the wire and the tube. When radiation passes through the tube and ionizes a gas molecule, the liberated charges are multiplied in an avalanche near the wire. The resulting electrical pulse signals the passage of ionizing radiation and makes the instrument highly sensitive to charged particles, allowing charged and neutral particles to be distinguished. The Geiger-Müller counter is also robust and easy to use; Giuseppe “Beppo” Occhialini (1907–1993), a prominent figure in cosmic ray research, said it was like the Colt in the Wild West: an easy-to-use tool that helps you find your way across a difficult frontier.

The use of the Geiger-Müller counter led to the definitive confirmation that cosmic radiation is mainly corpuscular, thanks to an experimental investigation conducted by the Germans Bothe and Kolhörster. The investigation employed the coincidence technique, just introduced by Bothe, to determine the time coincidence of a particle’s passage through two detectors and, thus, its direction of origin. Bothe was awarded the Nobel Prize in physics in 1954 for introducing the coincidence technique.

Despite these clear experimental results, Millikan remained skeptical. In 1932, another American physics giant, Arthur Holly Compton, entered the scene. Compton, a professor at the University of Chicago from 1923 to 1945, had won the Nobel Prize in 1927 for the discovery of the effect that bears his name, the Compton effect.
He conducted a global experiment to resolve the controversy, with more than 60 physicists participating and carrying out independent measurements, in what was a precursor of what is now known as big science. The results showed a clear latitude effect, indicating that cosmic rays were charged particles and proving Millikan wrong. Millikan was forced to admit that there was indeed a latitude effect and that cosmic rays were, for the most part, charged particles.
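The latitude effect that settled the controversy can be illustrated quantitatively. A common approximation, not given in the text, for the vertical rigidity cutoff of a dipole field is the Størmer formula, R_c ≈ 14.9 cos⁴λ GV, where λ is the geomagnetic latitude; the exact value also depends on the arrival direction and on the real geomagnetic field, which is why somewhat lower equatorial figures (around 10 GV) are often quoted. A minimal sketch:

```python
import math

def stormer_vertical_cutoff_gv(geomag_lat_deg: float) -> float:
    """Vertical rigidity cutoff in GV in the Stormer dipole approximation.

    R_c ~ 14.9 * cos^4(lambda); direction-dependent and non-dipole
    corrections are ignored in this sketch.
    """
    return 14.9 * math.cos(math.radians(geomag_lat_deg)) ** 4

# The cutoff falls steeply away from the equator, so low-rigidity
# particles reach the atmosphere only at high latitudes
for lat in (0, 20, 40, 60):
    print(f"latitude {lat:2d} deg -> cutoff {stormer_vertical_cutoff_gv(lat):5.2f} GV")
```

In this approximation a vertically incident proton needs roughly an order of magnitude more rigidity at the equator than at 60° latitude, which is the origin of the latitude dependence measured by Clay on shipboard and confirmed by Compton's worldwide survey.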

Bruno Rossi, the East-West Effect and Cosmic Showers

The question was then whether the particles were positive or negative, and it was solved through the innovative thinking of Bruno Rossi, an Italian physicist born in Venice in 1905 who died in Boston in 1993. After studying in Padua and Bologna, Rossi founded the Florentine school of cosmic ray physics


and worked as an astrophysicist at the Arcetri Observatory. In 1932, he became the chair of experimental physics at the University of Padua and was responsible for the construction of the city’s current physics department; the institute he designed was equipped with several state-of-the-art facilities, such as an efficient workshop and a tower for the study of cosmic radiation. However, after the introduction of the Italian racial laws, Rossi, who was of Jewish descent, was forced to leave Italy in 1938 and move to the United States. He went on to work at several renowned universities and research institutions before finally settling at the Massachusetts Institute of Technology (MIT). Today, an archive of his manuscripts and scientific materials is preserved at MIT.

During his initial appointment as an assistant at Arcetri in 1930, Rossi refined the coincidence technique by constructing electronic circuits that enabled detectors to be connected over large distances. In the same year, he proposed using the Earth’s magnetic field to determine whether cosmic particles were predominantly positive or negative (Fig. 2.14). If cosmic rays were primarily positively charged, they would appear to come mostly from the West because of their interaction with the Earth’s magnetic field; if negatively charged, they would appear to come primarily from the East. Rossi carried out this measurement in Italy, but the result was inconclusive.

Rossi then considered relocating to Asmara, in the Eritrean colonies of Italy, because he had demonstrated that the East-West effect would be stronger near the equator (Fig. 2.15). In 1933, he made the trip and successfully proved that

Fig. 2.14 The fact that cosmic rays are predominantly positively charged implies that they arrive preferentially from the West rather than from the East, because of their interaction with the geomagnetic field. The effect is particularly clear at latitudes near the equator. From Wikimedia Commons
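The size of the geomagnetic deflection can be estimated with the standard gyroradius rule of thumb. The sketch below is a minimal illustration; the 10 GeV proton energy and the 30-microtesla equatorial field strength are illustrative choices, not values taken from the text:

```python
# Gyroradius (Larmor radius) of a cosmic-ray proton in the geomagnetic field,
# illustrating why the field sorts arrival directions by charge sign.

def gyroradius_m(p_gev: float, b_tesla: float) -> float:
    """Larmor radius in metres for a singly charged particle.

    Uses the standard rule of thumb p[GeV/c] = 0.3 * B[T] * r[m].
    """
    return p_gev / (0.3 * b_tesla)

# Illustrative values: a 10 GeV proton in an equatorial field of ~30 uT.
r = gyroradius_m(10.0, 30e-6)
print(f"gyroradius ~ {r / 1e3:.0f} km")
```

The resulting radius, on the order of a thousand kilometres, is comparable to the scale of the Earth itself, which is why the geomagnetic field imprints a measurable East-West asymmetry on particles of these energies.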


Fig. 2.15 The young Rossi, an Italian soldier, and an Eritrean soldier next to the tent used as a base camp for measurements. From Wikimedia Commons

cosmic rays were predominantly positively charged particles, publishing his findings in 1934. He was unlucky, however: a few months earlier, Luis Alvarez (an American physicist of Spanish descent who taught at Berkeley and would later win the Nobel Prize in physics in 1968 “for his decisive contributions to elementary particle physics, particularly the discovery of numerous resonance states through the development of the bubble chamber and data analysis”) and Compton had already reached the same conclusion, acknowledging Rossi's contribution in their paper. It thus became clear that cosmic rays are mostly composed of protons. Alvarez's name is also linked to the Manhattan Project and to several interdisciplinary scientific problems, from the hypothesis that the dinosaurs went extinct after the impact of a large meteorite on Earth to the study of ballistic evidence related to the assassination of John F. Kennedy. During the preparation of his measurements, Rossi made another significant discovery. After testing his equipment at the site of the physics institute then under construction in Padua, and later in Asmara, he reported observing nearly simultaneous discharges of Geiger counters placed far apart on a horizontal line: The frequency of coincidences recorded with counters far from each other, referred to as “random coincidences” in the tables, appears to be higher than anticipated based on the resolving power of the instruments, which was measured in Padua before my departure [...]. This raised the possibility that these coincidences were not truly random. This hypothesis seems to be supported by the following observations: in 21 h and 37 min, 14 coincidences were recorded


between three distant counters arranged in such a way that the same particle could not pass through all of them. If these were to be considered random, [...] only 6 were expected. [...] Therefore, it appears (since appropriate control experiments ruled out any potential disturbances) that from time to time, very extensive showers of particles reached the devices, causing coincidences between counters that were even quite far apart. Unfortunately, I did not have sufficient time to more closely study this phenomenon to establish with certainty the existence of these supposed particle showers and determine their origin.
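Rossi's reasoning (14 coincidences observed where only about 6 random ones were expected) can be checked with a simple Poisson tail probability. The sketch below is a minimal illustration, not Rossi's actual statistical treatment:

```python
# How unlikely were Rossi's 14 coincidences if only 6 random ones were
# expected? The Poisson tail probability P(N >= 14 | mean 6) answers this.
from math import exp, factorial

def poisson_tail(mu: float, k_min: int) -> float:
    """P(N >= k_min) for a Poisson-distributed count with mean mu."""
    return 1.0 - sum(exp(-mu) * mu**k / factorial(k) for k in range(k_min))

p = poisson_tail(6.0, 14)
print(f"P(N >= 14 | mean 6) = {p:.4f}")  # well under 1 percent
```

An excess this large would occur by chance only a few times in a thousand, which is why Rossi suspected the coincidences were not random but were caused by extensive particle showers.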

In 1937 Pierre Auger, probably unaware of Rossi's earlier paper, detected and investigated the same phenomenon in more detail. He concluded that extensive showers of particles are generated by high-energy primary cosmic rays interacting with air nuclei in the upper atmosphere, initiating a cascade of secondary interactions that eventually brings a shower of electrons, photons, and muons to ground level. Particle showers directly explain the spontaneous discharge of electroscopes, the phenomenon from which the whole investigation had started at the beginning of the century! The formal theory of shower development was later worked out by Hans Bethe, Walter Heitler, Rossi and his assistant Kenneth Greisen, with pioneering contributions by Heisenberg. The role of Werner Heisenberg (Würzburg 1901–Munich 1976) is particularly important. Heisenberg, who graduated at the age of 20, began collaborating in 1922 with Bohr in Copenhagen and with David Hilbert in Göttingen; he worked out a new mathematical formalism for quantum mechanics and derived the famous result known as the “uncertainty principle,” which establishes an intrinsic limit to the accuracy of the simultaneous measurement of certain pairs of observables, such as the position and momentum of a particle. He also made very important logical, philosophical and epistemological contributions to theoretical and experimental physics. In 1932 he was awarded the Nobel Prize in physics for his contributions to quantum physics. A shadow of connivance with Nazism hung over him, a shadow that did not prevent him and Rossi, who was Jewish, from being friends and close correspondents.
After the war Heisenberg, taken prisoner because of these suspicions, was eventually rehabilitated, so much so that by 1952 he was in charge of creating and organizing in Munich the Max Planck Institute for Physics, which today bears his name, and he was deeply involved in the reconstruction of German and European research centers (together with Edoardo Amaldi, he was among the founders of CERN in Geneva). During his Leipzig years, Heisenberg had published several papers on cosmic rays, particularly on the particle showers discovered


by Rossi, and he contributed enlightening discussions to the design of many of Rossi's experiments. During the interwar years, the question of variations of the radiation over time received much attention. Several researchers, including Wulf, Pacini, and Hess, had observed such variations, but by around 1930 the prevailing view was that they were not significant. However, a thorough analysis by the US physicist Scott Forbush revealed that the intensity of cosmic rays detected in the Earth's atmosphere varies significantly with time. We know today that this effect, called the Forbush effect, is mostly due to the interaction of cosmic rays with the solar wind. During periods of high solar activity, such as solar flares and coronal mass ejections, the solar wind becomes more intense and carries a stronger magnetic field, which deflects part of the incoming galactic cosmic rays. This temporary shielding decreases the flux of secondary cosmic rays observed at the ground.

3 The Physics of Elementary Particles

The development of cosmic ray physics led scientists to discover that astrophysical sources produce projectiles of extremely high energy that reach the Earth's atmosphere. This prompted an investigation into the nature of these projectiles and their use as probes for studying matter in detail, along the lines of the famous experiment performed by Ernest Rutherford and collaborators around 1910. Rutherford, who had been awarded the 1908 Nobel Prize in chemistry for his investigations of radioactivity, pioneered the technique of studying the structure of targets by bombarding them with high-energy particles and observing the deflections. In the experiment, a thin gold foil was bombarded with alpha particles. At the time, many scientists thought that the positive charge of an atom was uniformly distributed throughout its structure, like a plum pudding. However, the results of the Rutherford experiment showed that most of the alpha particles passed through the gold foil with little or no deflection. This indicated that gold atoms are mostly empty space and that an atom's positive charge is concentrated in a small, dense nucleus at the center. This marked the beginning of elementary particle physics, the science that explores the fundamental constituents of matter. Many significant discoveries have been made in particle physics thanks to cosmic rays, including some of the most important findings in the history of science.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. De Angelis, Cosmic Rays, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-38560-5_3


The Discovery of Antimatter The first breakthrough in particle physics was the discovery of antimatter. In 1926, Schrödinger proposed his famous equation describing the motion of a quantum particle, but this equation was soon found to be inconsistent with the theory of relativity. In 1928, Paul Dirac (Bristol 1902–Tallahassee 1984), a young British scientist of Swiss descent, proposed a new equation consistent with both quantum theory and the theory of relativity. Dirac's equation elegantly explained the spin and magnetic properties of the electron, without the need for further assumptions. However, it also predicted the existence of antiparticles corresponding to all elementary particles of matter. This implied that in addition to our world there had to be an anti-world, containing an anti-electron identical to the electron but with positive charge. Initially, Dirac did not fully accept this implication and tried to eliminate it when Weyl pointed it out to him in 1930. However, despite the apparent absurdity of the concept, Dirac believed that a mathematically beautiful theory was more likely to be correct than an unpleasant one confirmed by experimental data. Meanwhile, a new instrument took hold in the study of cosmic rays: the cloud chamber. The cloud chamber (Fig. 3.1) is a hermetically sealed box containing air saturated with water vapor, connected by a conduit to a cylinder within which a plunger slides. A rapid displacement of the plunger causes a sudden expansion of the vapor in the chamber, which switches to an unstable supersaturated state. In such conditions, an electrically charged particle that penetrates the box, ionizing the atoms with which it collides, creates along its path a dense succession of condensation nuclei (ionized atoms), around which the supersaturated vapor collects to form tiny droplets. The trace left by the particle's trajectory can be photographed through a transparent wall of the box.
It then became possible to “photograph” the trajectories of cosmic rays. The American student Carl Anderson (New York 1905–Pasadena 1991), of Swedish descent, discovered antimatter in the form of the positron (i.e., the positive electron) in 1932 while analyzing the traces of cosmic rays passing through his cloud chamber (Fig. 3.2). Anderson, whose idea was not well supported by Millikan, his thesis advisor, had placed a lead sheet in the chamber to slow down the particles. His discovery was of paramount importance, since it proved that Dirac's theory, which predicted the existence of antiparticles, was correct. Dirac and Schrödinger were awarded the Nobel Prize in 1933 for “the discovery of new productive forms of atomic theory”; about Anderson we shall hear more soon.


Fig. 3.1 The cloud chamber built and used by Anderson and Neddermeyer between 1935 and 1940. Credit: California Institute of Technology

Fig. 3.2 The first photograph showing the passage of an anti-electron, or positron, through a cloud chamber immersed in a magnetic field. The particle is understood to come from below because, after passing through the sheet of material in the middle (and thus losing energy), its radius of curvature decreases. It is understood to be positive from its direction of rotation in the magnetic field. The mass is estimated from the density of the droplets, which indicates the energy loss by ionization (a proton would have lost energy faster). From Wikimedia Commons
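The momentum determination behind such photographs follows from the standard relation p[GeV/c] = 0.3 B[T] r[m] for a unit-charge particle. The sketch below uses illustrative field and radius values, not Anderson's actual experimental parameters:

```python
# Momentum of a unit-charge particle from the curvature of its track in a
# magnetic field, via the rule of thumb p[GeV/c] = 0.3 * B[T] * r[m].

def momentum_gev(b_tesla: float, radius_m: float) -> float:
    """Momentum in GeV/c from field strength and radius of curvature."""
    return 0.3 * b_tesla * radius_m

# A track crossing a lead plate loses energy, so its radius of curvature
# shrinks; comparing the two arcs fixes the direction of travel.
# Illustrative values only (not Anderson's actual measurements):
p_before = momentum_gev(1.5, 0.20)  # wider arc, before the plate
p_after = momentum_gev(1.5, 0.08)   # tighter arc, after the plate
print(f"{p_before * 1e3:.0f} MeV/c before, {p_after * 1e3:.0f} MeV/c after")
```

The same comparison of arcs, together with the sense of rotation in the field, is what identified Anderson's particle as a positive electron rather than a proton.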


Many more discoveries were made in particle physics using cosmic rays. After Anderson discovered antimatter, research in elementary particle physics using cosmic rays flourished, driven by improvements in detection instruments, particularly the cloud chamber. One of the key figures in this development was the British physicist Patrick Blackett (1897–1974), who was awarded the Nobel Prize in physics in 1948. The first major discovery was the conversion of photons into electron-positron pairs, in 1933. In addition to predicting the existence of the positron, Dirac's theory also predicted that a photon of sufficient energy could transform into an electron-positron pair. This phenomenon (Fig. 3.3) was observed in cosmic rays by Blackett and Occhialini, who improved the observing technique by coupling a cloud chamber with a coincidence counter system, making it more efficient at capturing meaningful photographs of cosmic rays.

Fig. 3.3 Conversion of a photon into an electron-positron pair. A photon, coming from the bottom and invisible in the bubble chamber, converts into an electron and a positron, the two spiraling particles; the visible track emerging from the interaction point is a scattered electron. The V-shaped topology in the upper part shows the conversion of another invisible photon. From a bubble chamber image taken at Brookhaven National Laboratory, US (public domain)

The production of electron-positron pairs confirmed the equivalence of mass and energy, as well as the particle-like behavior of light and the concept of wave-particle duality, where a photon behaves as both a wave and a particle.

Recognition by the Scientific Community Finally, in 1936, Victor Hess and Carl Anderson were awarded the Nobel Prize in physics (Fig. 3.4) for their discoveries in the field of cosmic rays. Hess was recognized for his pioneering balloon flights, which led to the discovery of the extraterrestrial origin of the radiation, while Anderson was honored for the discovery of the positron, the first example of an antiparticle. In 1931, Hess had been appointed professor of experimental physics and director of the Institute for Radiation Research at the University of Innsbruck. During his time in Innsbruck, he continued to study cosmic rays and built a small laboratory on the Hafelekar mountain nearby. Pacini continued his work as a director in Bari, with little or no time for research, until his death in 1934.

Fig. 3.4 From the left foreground Petrus Debye (Nobel laureate in chemistry), Carl Anderson and Victor Hess (Nobel laureates in physics) wait to receive the prize in Stockholm on December 10, 1936 (Nobel Foundation)


At this point, it is worth spending a few words to clarify the rules governing the award of the Nobel Prize in physics. At the initial stage, the members of a secret list of selected university professors and researchers (today more than two thousand, but at that time only a few dozen) receive confidential nomination forms and make their recommendations, accompanied by a short report. Each member of the list is not supposed to know who the others are. The nominees are then examined and discussed by a subcommittee, which narrows down the list. A more extensive report is prepared for each shortlisted candidate and submitted to the Royal Swedish Academy. The physics class of the Academy then makes the final selection of the winner, or winners, through a vote; a maximum of three laureates and two different works may be selected. By regulation, the names of the nominees are never made public, and they are not informed that they are being considered for the prize. The nomination records are kept secret for fifty years. Therefore, if some of your friends claim they were nominated for the Nobel Prize in physics, be cautious! Scientific research today is characterized by rapid communication of results, but the situation was very different when cosmic rays were discovered. Communication was slow, there were major language barriers, and the aftermath of World War I was severe. As a result, nominations were often made in favor of compatriots or of people who had worked in the nominator's country. By 1936, Pacini had died and was therefore not eligible to be a Nobel candidate, since posthumous nominations are not permitted; he had never been nominated anyway. The complete absence of proposals in favor of Pacini by the Italian scientific community is surprising, and can probably be explained by the closure and division into “clans” of the cultural and academic environment. The Royal Swedish Academy received 31 nominations for the Nobel Prize in 1936.
Since some names were repeated across nominations, there were 22 distinct candidates. Hess had been nominated by Clay, who suggested that Hess receive the prize alone, not shared with others. This was a generous gesture, since Clay himself had been nominated for the discovery of the dependence of the cosmic ray flux on latitude. Hess was also nominated by Compton, who proposed that the prize be awarded to both Hess and Anderson, which is what eventually happened. Hess had previously been nominated in 1931 by Pohl, a professor in Göttingen, then in 1933 by Plotnikov, a professor in Zagreb, and in 1934 by Willstätter, a professor in Munich. In his letter supporting the nominations of Hess and Anderson, Compton wrote: The time has now arrived, it seems to me, when we can say that the so-called cosmic rays definitely have their origin at such remote distances from the Earth


that they may properly be called cosmic, and that the use of the rays has by now led to results of such importance that they may be considered a discovery of the first magnitude. [...] It is, I believe, correct to say that Hess was the first to establish the increase of the ionization observed in electroscopes with increasing altitude; and he was certainly the first to ascribe with confidence this increased ionization to radiation coming from outside the Earth.

Why was recognition so late? Compton explained that only recently had the study of cosmic rays become relevant to other areas of physics. Later in the same letter he added some powerful words: “Before it was appropriate to award the Nobel Prize for the discovery of these rays, it was necessary to await more positive evidence regarding their unique characteristics and importance in various fields of physics. This has now been accomplished. Studies of the magnetic latitude effect on cosmic rays have shown that they include electrical particles of much higher energy than are available from artificial sources, further that these rays come from a source that may be properly called cosmic. The usefulness of the rays has been demonstrated by the experiment which has revealed the existence of the positron.” The Royal Academy formed a subcommittee to study the cosmic ray-related nominations. This subcommittee prepared a report and sent it to the Nobel Committee for Physics in June 1936. The report, signed by the theoretical physicist Erik Hulthén, was later included as an appendix in the committee's proposal to the Royal Swedish Academy of Sciences. Despite Pacini's relative obscurity, his contribution was correctly cited. Hulthén commented that the results of the balloon measurements confirmed Pacini's measurements, which indicated that a non-negligible part of the radiation was independent of the direct action of substances contained in the Earth's crust. He noted, however, that Hess's careful work also included an accurate measurement of gamma-ray absorption as a function of distance and several balloon ascents between 1911 and 1912, at the end of which a factor-of-two increase in ionization was finally found at an altitude of 5,200 m.
Hulthén cited Hess’s conclusion that the results show that a very penetrating radiation affects the atmosphere from outside (“The results of the present observations seem to be explicable on the assumption that a radiation of very high penetrating strength enters our atmosphere from above.”). Hess’s results had attracted much attention but were also questioned due to experimental uncertainties; however, they were soon confirmed by Kolhörster, who measured 40 times more ionization than on the ground at an altitude of 9,300 m. After World War I, the research was resumed in 1920. The results of Hess and Kolhörster were still questioned, among others by Millikan, who nevertheless confirmed Hess’s conclusion in 1925.


Hulthén concluded his report by discussing the importance of Hess's findings for other areas of fundamental physics. Henning Pleijel, chairman of the Nobel Committee for Physics, stated in his speech at the Nobel award ceremony on December 10, 1936: A search for radioactive substances was carried out [by various scientists]: in the Earth's crust, in the seas, and in the atmosphere; and the instrument just mentioned, the electroscope, was used. Radioactive rays were found everywhere, whether the investigations were carried out in the deep waters of lakes or on high mountains. [...] Although these investigations yielded no concrete results, they showed that the ubiquitous ionization could not be attributed to the action of radioactive substances in the Earth's crust. [...] The mystery of the origin of this radiation remained unsolved until Professor Hess chose it as the problem of his life. [...] With superb experimental skill, Hess perfected the instrumental equipment used and eliminated sources of error. After completing these preparations, Hess made a long series of balloon ascents. [...] From these investigations, Hess drew the conclusion that there is an extremely penetrating radiation coming from space that enters the Earth's atmosphere.

After winning the Nobel Prize, Hess emigrated permanently to the United States, where he was offered a chair in experimental physics at Fordham University in New York City. In the United States, Hess (who became a U.S. citizen in 1944) devoted himself passionately to teaching and directed his research toward problems of meteorology and radiation protection. He became a strong opponent of nuclear testing and stated that “we know too little about radioactivity at this time to say with certainty that testing underground or above the atmosphere will have no effect on the human body.” He also wrote that he planned to devote the rest of his life to better understanding the effects of radiation on humans, and in 1949 he published the pioneering monograph Cosmic Radiation and its Biological Effects. Hess retired in 1958 but continued to work as a professor emeritus, even as his health deteriorated, and died in 1964. Fordham University has established an archive that holds his manuscripts and books. The discovery of cosmic rays, a major milestone in science, involved contributions from scientists in Europe and America during a time of limited communication and heightened nationalism resulting from World War I. The story of its recognition shows that, unfortunately, some significant contributions were overlooked. The scarcity of references to the work of Pacini, Wulf, and others is due to a combination of factors, including international political events, the differing organizational structures of research in different countries,


but also personal circumstances. Despite our efforts to organize the progress of science and society correctly and efficiently, life is also, and above all, made up of chance events.

The µ Lepton and the Mesons As early as the 17th century, Newton conjectured that, in addition to electrostatic and magnetic interactions and to the force of gravity, there must exist a stronger interaction at small distances that holds matter together. This interaction, later referred to as “strong” or “nuclear,” becomes invisible at larger distances. For over two centuries, no one could explain how this intense force, present at distances of a femtometer (also called a “fermi”: a billionth of a micrometer), fades so rapidly at greater distances that it becomes negligible at a distance of a thousandth of a micrometer. In 1935, the 28-year-old Japanese physicist Hideki Yukawa formulated a new theory of the strong force, which would earn him the Nobel Prize in 1949. The theory resembled that of the electromagnetic interaction in that it required a “mediating” particle to transmit the interaction at a distance. The photon, a massless particle, mediates the electromagnetic interaction, while the strong interaction was hypothesized to be mediated by a particle with a mass intermediate between the electron and the proton, referred to as a “meson.” The mass of the proton corresponds to an energy of approximately 1 GeV; the mass of the electron is about two thousand times lower. Yukawa predicted that the meson must have a mass of approximately one-tenth of a GeV (approximately two hundred times the mass of the electron) to explain the rapid decrease of the strong interaction with distance. In the years following Yukawa's hypothesis, cosmic ray researchers began to discover new particles with masses intermediate between the electron and the proton, candidates for the meson he predicted. Anderson, by now a professor, and his student Seth Neddermeyer observed positive and negative cosmic ray particles in the Colorado mountains that had a higher penetration capacity than the particles known at the time.
They were heavier than the electron but lighter than the proton. In 1937, Neddermeyer and Anderson published their results and proposed the name “mesotron” for the new particle. In 1938 and 1939, precise measurements of the masses of these particles were made by analyzing cosmic ray photographs in a cloud chamber. The mass was calculated from the trajectory’s curvature in a magnetic field and from ionization. The measurements revealed a mass of one-tenth of a GeV, between 200 and 240 times the mass of the electron, which matched Yukawa’s predictions


for the meson. Most researchers believed that these particles were the strong force carriers predicted by Yukawa, created when primary cosmic rays collided with nuclei in the upper atmosphere, much as an electron emits photons when it collides with a nucleus. However, this interpretation was incorrect, and mesotrons were soon renamed. The average lifetime of the mesotron was measured by studying its flux at different altitudes, notably by Bruno Rossi's group in Colorado (Rossi had by then moved to the US to escape racial persecution). The result was an average lifetime of approximately two microseconds, about a hundred times larger than the lifetime Yukawa had predicted for the particle transmitting the strong interaction, but still not too different. The decrease in the number of mesotrons with altitude also confirmed the “time dilation” predicted by the theory of relativity: a fast particle lives longer than one at rest (without this dilation, the average mesotron would travel only about 600 m, the product of its lifetime and the speed of light, and would never cross the roughly 30 km from the upper atmosphere to the Earth's surface). At the end of its life, the mesotron decays into an electron plus neutrinos, which leave no trace in the cloud chamber. The positive mesotron decays into a positive electron and neutrinos. Beyond the initial excitement, however, the math did not add up. In particular, the Yukawa particle was supposed to be the “glue” between nucleons (i.e., protons and neutrons) and therefore should not have been highly penetrating: the nuclei in the atmosphere should absorb it quickly, contrary to observations. Many theorists attempted to develop complex explanations to save the theory. Still, the simplest explanation proved correct: what had been called the mesotron was not the Yukawa particle.
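The time-dilation argument can be put into numbers. A minimal sketch, using the modern muon lifetime of 2.2 microseconds and an illustrative 10 GeV muon energy:

```python
# Mean path of a relativistic muon, with and without time dilation.
# A muon of energy E travels on average gamma * beta * c * tau before decaying.
from math import sqrt

C = 299_792_458.0  # speed of light, m/s
TAU = 2.2e-6       # muon lifetime at rest, s
M_MU = 0.1057      # muon mass, GeV

def mean_path_m(e_gev: float) -> float:
    """Average decay path in metres for a muon of total energy e_gev."""
    gamma = e_gev / M_MU
    beta = sqrt(1.0 - 1.0 / gamma**2)
    return gamma * beta * C * TAU

print(f"without dilation: ~{C * TAU:.0f} m")            # a few hundred metres
print(f"10 GeV muon: ~{mean_path_m(10.0) / 1e3:.0f} km")  # tens of kilometres
```

Without dilation the muon would decay within a few hundred metres; with it, an energetic muon comfortably crosses the roughly 30 km of atmosphere, exactly as the altitude measurements showed.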
The key discovery in this regard was made by Marcello Conversi, Oreste Piccioni, and Ettore Pancini. In an epic experiment measuring the penetration of cosmic rays with fast coincidences, carried out at Rossi's laboratory in Rome between 1941 and 1944 amid the bombings of World War II, they determined that cosmic mesotrons could not be the particles responsible for the strong interaction that holds nucleons together in nuclei. They hypothesized that two particles were involved: the particle predicted by Yukawa, now known as the pion, is created in the interactions of cosmic protons with the atmosphere and then either interacts with atmospheric nuclei or decays into what had been called the mesotron. The latter particle, later known as the muon or μ lepton, does not carry the “strong” force. At the time, the pion, the actual Yukawa meson, was yet to be discovered. To detect the pion, experiments had to be

Fig. 3.5 The flight of the “Century of Progress” in 1933. From Wikimedia Commons

conducted at extremely high altitudes or by sending detectors into the upper atmosphere on uncrewed balloons or balloons with pressurized cabins. In the early days, unpressurized crewed flights were also conducted, some of which ended tragically. The challenge was to design detectors that could operate efficiently without human intervention or electrical power. During the 1930s and 1940s, the technique of measuring in the stratosphere developed significantly, particularly in the context of cosmic ray research, thanks to the Piccard brothers and the Soviet Space Society. Auguste Piccard reached 16 km in altitude in 1932, and the Soviet balloon “Sirius” reached 18 km in 1933, returning safely with its passengers. In 1933, Jean Piccard, Auguste's brother, flew the “Century of Progress” balloon (Fig. 3.5) to almost 19 km, carrying instruments for detecting cosmic rays and midges to study genetic mutations. In 1934, the “Sirius” reached 20 km but then crashed, resulting in the deaths of the three researchers on board. This led to the discontinuation of human-crewed stratospheric balloon flights, although a series of stamps was issued to commemorate the glory of the “Sirius.” At the same time, in England, Cecil Frank Powell (1903–1969), who had been a student of Rutherford and Wilson at Cambridge and would later become a Nobel laureate in physics in 1950 for the development of the photographic


method and his discoveries on mesons, and Occhialini, who had returned to England after a period in Brazil, where he had established a school of cosmic ray researchers that is still active and highly reputed today, tackled both the balloon and the high-mountain approaches. They solved the problem of detectors on uncrewed missions by using nuclear emulsions, a technique similar to the one Marie Curie had used in her studies of radioactivity with photographic emulsions. Nuclear emulsions are subnuclear particle detectors with excellent spatial resolution (better than a tenth of a micrometer), consisting of silver salt crystals suspended in an organic gel. Being thicker than ordinary photographic emulsions, they allow for three-dimensional reconstruction. Like photographic film, they must be developed after exposure to reveal the image. The mass of a particle can be obtained from the density of the dots that form the trace and from any deviations of the trajectory from a straight line due to collisions with emulsion nuclei: the slower the particle, the more atoms it ionizes, resulting in a greater density of points, while lighter particles are more easily deflected in collisions. In 1946, Powell and Occhialini exposed several dozen photographic plates at an altitude of 2,900 m on the Pic du Midi in the French Pyrenees; at the same time, Donald Perkins of Imperial College London flew photographic plates aboard a Royal Air Force plane at an altitude of 9,000 m. The results hinted at the existence of the pion but were not conclusive. Further measurements were required, with longer exposures at altitudes higher than Pic du Midi. Fortunately, Occhialini had brought a student from Brazil, Cesare Lattes, who would later found the Brazilian Center for Physical Research and who knew the right place for the measurements: the Andes mountain range.
Specifically, on Mount Chacaltaya in the Bolivian Andes, near the capital La Paz, there was a meteorological laboratory at an altitude of 5,500 m. In 1947, Powell, Occhialini, and Lattes exposed nuclear emulsions to cosmic rays on Mt. Chacaltaya and finally demonstrated the existence of charged positive and negative pions by observing the pion and the muon simultaneously and determining their masses (the pion was found to be about 30 percent heavier than the muon). Numerous photographs from nuclear emulsions collected later, particularly in balloon experiments, clearly showed the tracks of two types of particles. The heavier particle was the pion, also called the π meson; at the end of its track began the track of a lighter particle, the muon. Analysis of the emulsions allowed the measurement of the mass of the muon, approximately 106 MeV, and of the mass of the charged pion, approximately 140 MeV. In some photographs, the complete decay chain of π mesons into μ leptons and then into electrons could be seen (Fig. 3.6).

3 The Physics of Elementary Particles


Fig. 3.6 Pion and muon: the decay chain π → μ → e (the pion travels from the bottom to the top on the left, the muon horizontally, and the electron from the bottom to the top right of the photograph). The missing momentum is carried by neutrinos. From C.F. Powell, P.H. Fowler & D.H. Perkins, The Study of Elementary Particles by the Photographic Method (Pergamon Press 1959)

At this point, the distinction between pions and muons was clear. The muon is a heavier sibling of the electron and thus belongs to the lepton family; it is very penetrating because, like all leptons, it does not “feel” the strong interaction. After the discovery of the pion, the muon, also called the μ lepton, had no theoretical reason to exist according to the knowledge of the time (the famous line “Who ordered that?” is attributed to physicist Isidor Rabi in the 1940s). The meson theory underwent great development even before it was known that mesotrons were not Yukawa’s particles. Since the strong force is much more intense than the electromagnetic force, it was believed that a charge symmetry existed: the forces between protons and neutrons, between neutrons and neutrons, and between protons and protons should be similar, and there should exist positive, negative, and even neutral mesons. Neutral pions were more difficult to discover than charged pions because neutral particles leave no track in detectors (and because, as was discovered later, their lifetime is a hundred million times shorter). However, between 1947 and 1950 they were identified in cosmic rays by analyzing their decay products within showers, and clear confirmation later came from particle accelerators. In this way, after 15 years of research, Yukawa’s theory was finally verified.

The Discovery of Strangeness In 1947, after the resolution of the challenging meson problem, particle physics appeared to be a well-established science. At that time, 14 particles were known, including the proton, the neutron (which are part of the baryon family, a term derived from the Greek word for heaviness), and the electron, along with their antiparticles. The neutrino, which had been proposed to account for apparent


A. De Angelis

violations of the principle of energy conservation, was also recognized, as well as three pions and two muons. However, some of these particles had only been postulated and would be experimentally discovered later. Apart from the muon, which was initially considered a redundant particle, all the others were believed to have a purpose in nature: the electron and the nucleons formed the atom, the photon conveyed the electromagnetic force, the pion represented the “strong” force, and the neutrino was crucial for the conservation of energy. Just as things appeared to be settled, a new revolution was imminent. As early as 1944, strange particle topologies had begun to appear occasionally in photographs of cosmic rays taken in cloud chambers. In 1947, G.D. Rochester and C.C. Butler of the University of Manchester clearly observed in cloud chamber photographs a pair of V-shaped tracks coming from a single point that were deflected in opposite directions by an external magnetic field. Analysis of the photograph showed that an unknown neutral particle with a mass of approximately half a GeV, intermediate between the mass of a proton and that of a pion, had disintegrated into a pair of oppositely charged pions. A broken track in a second photograph indicated the decay of a charged particle of approximately the same mass into a pair of pions, one neutral and one charged (Fig. 3.6). These particles, which could only be produced in very energetic interactions, were seen only once every hundred photographs. It was not until 1953 that they could be produced in the laboratory; until then, their only source was cosmic rays. They are now known as K mesons (or kaons) and can be positive, negative, or neutral. The discovery of this new family of particles led to their being called “strange particles” (a term that today refers to all compound particles containing the strange quark, s). The study of K mesons motivated the G-Stack experiment, a balloon-borne emulsion detector whose results on kaon decays paved the way for the discovery of the violation of parity symmetry.
After K mesons, strange particles heavier than protons and neutrons were also discovered. These particles decay with a “V-shaped” topology into final states that include protons, and are known as strange baryons or hyperons (Λ, Σ, Ξ, . . .).

Mountain-Top Laboratories The discovery of mesons, which caused a stir in the world of physics in the aftermath of World War II, can be considered the beginning of modern elementary particle physics. In the following years, research into cosmic rays rapidly progressed, and detection techniques were improved through the use of both cloud chambers and nuclear emulsions. The cost-effectiveness of nuclear emulsions led to an increase in experiments and the formation of international


Fig. 3.7 The laboratory of Testa Grigia (above) and that of Fedaia Pass (below)

collaborations. Scientists soon realized the importance of setting up laboratories in mountainous regions to study cosmic rays, leading to the establishment of such facilities, particularly in Italy (Fig. 3.7), France, South America, and the Soviet Union. In Italy, the Rome-based group of physicists around Gilberto Bernardini, together with Pancini, Conversi, and Edoardo Amaldi, built the Testa Grigia laboratory in Cervinia, at 3,505 m the highest place in Italy accessible year-round. The laboratory was connected to the valley floor by Europe’s highest cable car and was made of wood and aluminum to allow maximum penetration of cosmic rays, despite having to endure high winds and snow in the winter. The laboratory was inaugurated in 1948, and the Turin school of physicists was established there, growing in particular under Carlo Castagnoli. In 1950, SADE (Società Adriatica di Elettricità), which held a monopoly on the production and distribution of electricity in North-Eastern Italy, began constructing a dam to generate hydroelectric power. Antonio Rostagni,


professor in Padua, took the initiative to build a laboratory for the study of cosmic rays at the foot of the northern slope of the Marmolada, which had access to a large amount of electricity. The laboratory was equipped with a large electromagnet built by engineer Giovanni Someda based on an old drawing by Rossi. Young physicists from Padua, such as Pietro Bassi, Marcello Cresti, Luciano Guerriero, and Guido Zago, trained there and started their careers by transferring a significant part of their experimental activities to the laboratory. Fermi and Powell also spent brief periods there. However, particle accelerator technology was starting to emerge and enabled measurements to be made in controlled conditions.

Hunters Become Farmers: Particle Accelerators Particle physicists relied on cosmic rays as their primary research tool until particle accelerators became available in the 1950s, and the initial breakthroughs in this field were a result of studying cosmic rays. Studying cosmic rays was akin to hunting: researchers could not always predict what they would find and often had to rely on chance encounters. With the advent of high-energy particle accelerators around 1950, scientists could produce energetic particles of their choice under controlled conditions, marking a transition from hunting to farming. This was the era of the “particle zoo”: the number of known quarks eventually grew from three to six, the number of mesons from a few to a thousand, and the number of baryons from three to several hundred. The turning point in the use of cosmic rays and accelerators as the primary source of high-energy particles is typically marked by the Bagnères de Bigorre conference on cosmic rays, held in July 1953 in the French Pyrenees. At the time, the Brookhaven accelerator, also known as the Cosmotron, had just achieved a record energy of 3 GeV, and many physicists were shifting their focus to this new technology. In his conclusions, the general reviewer, French physicist Leprince-Ringuet, stated, “We must consider the fundamental question: what is the future of cosmic rays? Should we continue to pursue new discoveries, or should we turn to accelerator machines instead? It is undoubtedly true that the majority of the future of nuclear physics lies in machines. However, this view should be tempered by the fact that cosmic rays offer unique opportunities for studying certain, albeit rare, phenomena that current accelerators cannot replicate, as they can reach energies far beyond what current machines can produce, despite the rapid growth in machine energy.”


Despite the advancements in accelerator technology and the clear advantages of using accelerators over cosmic rays, the highest energies will always be achieved through cosmic rays. The founders of CERN (the European Laboratory for Particle Physics) explicitly included the study of cosmic rays among the organization’s purposes in its founding document (the Convention for the Establishment of a European Organization for Nuclear Research) in 1953. It is interesting to note that Fermi provided an estimate of the maximum energy achievable by an accelerator on Earth, under optimistic assumptions. He was famous for his so-called “Fermi problems”: problems that, at first glance, seemed impossible, for which he could make approximate estimates of the solutions with limited data. For example, he calculated the power of the first atomic bomb from the distance traveled by pieces of paper he dropped from his hand during the explosion. As a lecturer, he used to challenge his classes with such problems—a famous one was estimating the number of piano tuners in Chicago given only the population of the city. In a speech to the American Physical Society on January 29, 1954, Fermi considered the hypothesis of a proton accelerator with a ring as large as the Earth’s circumference (Fig. 3.8) and a magnetic field of 2 tesla, a good estimate of the maximum possible

Fig. 3.8 Fermi’s so-called “maximum accelerator”, from the reproduction of Fermi’s original drawing in his 1954 speech to the American Physical Society (Fermi National Laboratory, Batavia, Illinois). Note that in his undergraduate transcript, Fermi had obtained the maximum grade with honors in all exams, except in drawing. From Wikimedia Commons


value of the magnetic field over large distances. Based on this hypothesis, he estimated that a maximum energy of approximately 5,000 TeV could be achieved, similar to the energy of cosmic rays near the “knee,” the typical energy of galactic accelerators. Fermi optimistically extrapolated the pace of accelerator progress in the 1950s and predicted that this accelerator could be built by 1994 at a cost of approximately 170 billion dollars. Things did not unfold as Fermi had predicted and the progress in accelerator technology has been slower than expected. Despite this, the successes of the Large Hadron Collider (LHC) accelerator are evident, even though its energy is 700 times smaller than what Fermi estimated as the maximum achievable energy. Meanwhile, particle physicists have continued their work and in recent years have seen a resurgence of discoveries.
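Fermi’s estimate can be checked with a one-line calculation. The sketch below is not from the book: it uses the standard magnetic rigidity relation for an ultrarelativistic proton, E ≈ 0.3 B R (E in GeV, B in tesla, R in meters), and assumes a ring whose circumference equals the Earth’s, about 40,000 km.

```python
# Hedged back-of-envelope check of Fermi's "maximum accelerator".
# Assumption: ultrarelativistic proton bent around a ring, E[GeV] ~ 0.3 * B[T] * R[m].
import math

B = 2.0                       # tesla, Fermi's assumed bending field
R = 40_000e3 / (2 * math.pi)  # m, radius of a ring with the Earth's circumference
E_GeV = 0.3 * B * R           # magnetic rigidity relation
E_TeV = E_GeV / 1e3
print(f"maximum energy ~ {E_TeV:,.0f} TeV")
```

The result, a few thousand TeV, is in the ballpark of the 5,000 TeV quoted in the text; the exact figure depends on how much of the ring is assumed to be filled with magnets.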

The Discovery of Charm As we have seen, particles formed by a new quark, the s (strange) quark, which does not appear in the proton or in the neutron (proton and neutron are combinations of u and d quarks: proton p ≡ uud; neutron n ≡ udd), had been found in cosmic rays starting from 1944. In 1964, James Bjorken and Sheldon Glashow hypothesized the existence of a fourth quark, called “charm”, to complement the existing d, u, and s quarks. The hypothesis was based on symmetry with the leptonic world, which had been organized into two doublets (electron and electron neutrino, muon and muon neutrino). The idea was given a physical foundation in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, who showed that the existence of the charm quark explained some otherwise inexplicable phenomena concerning strange neutral mesons (the “GIM mechanism”). The existence of the charm quark was confirmed in November 1974 in what is known as the “November revolution”. Samuel Ting’s group at Brookhaven National Laboratory observed a new meson, called J, with a mass of approximately 3.1 GeV in the products of proton-beryllium collisions. At the same time, Burton Richter’s group at the Stanford Linear Accelerator Center (SLAC) observed the same particle in electron-positron collisions and gave it the name ψ. The discovery of the J/ψ earned Ting and Richter the Nobel Prize in 1976. The new particle was a bound state of the new charm quark and its antiquark, much heavier than the previously discovered quarks. The charm contained in the J/ψ is called “hidden” charm: the J/ψ has in fact zero total charm, being composed of a charm and an anticharm quark. Mesons with “naked” charm also


exist in nature, such as the D mesons D+ and D0 (a charm quark bound to an anti-d or anti-u quark) and the Ds+ (a charm quark bound to an anti-strange quark). Three years before Richter and Ting’s announcement, a pair of naked charm particles had already been observed, in 1971 in Japan, as the result of a cosmic ray interacting with a nuclear emulsion on a balloon. These two particles were D mesons with a lifetime of a few tenths of a picosecond (a picosecond is a millionth of a microsecond), which was incompatible with any other known particle. The Japanese scientists did not feel confident enough to announce their discovery to the world, thus missing a great opportunity (and the Nobel Prize)!

The Unexpected What is the Maximum Energy of Cosmic Rays? Ultrahigh energy cosmic rays (UHECRs) are cosmic rays with kinetic energy greater than 10¹⁸ eV, orders of magnitude higher than the energy that can be produced by particle accelerators such as the Large Hadron Collider, which reaches a maximum energy of approximately 10 TeV (10¹³ eV). The term “extreme energy cosmic ray” (EECR) refers specifically to cosmic rays with energy greater than 5 × 10¹⁹ eV. In 1962, U.S. physicist John Linsley detected a shower of cosmic rays at the Volcano Ranch array in New Mexico with an estimated energy of over 10²⁰ eV, i.e., 100 billion GeV. The significance of the discovery of a cosmic ray with energy greater than 10²⁰ eV became clearer a few years later, after the discovery that the Universe is filled with thermal radiation at an average temperature of 2.7 K, the remnant of the Big Bang; this radiation is called the Cosmic Microwave Background (CMB) or Cosmic Background Radiation (CBR). The CMB was first detected in 1964 by American astronomers Arno Penzias and Robert Woodrow Wilson using a radio telescope; the discovery came at the end of a line of study begun in the 1940s and led to their winning the Nobel Prize in Physics in 1978. Greisen, whom we have already encountered in this story as Rossi’s assistant, and the Russian physicists Georgiy Zatsepin and Vadim Kuzmin demonstrated in 1966 that cosmic nucleons (and heavier nuclei as well) with energies above 5 × 10¹⁹ eV would experience significant energy losses due to collisions with CMB photons, leading to a decrease in the cosmic ray flux at energies higher than this (the GZK mechanism). The event at Volcano Ranch was therefore a rare occurrence. In 1991, American researchers were amazed when their Fly’s Eye detector in Utah detected a particle from space with an energy of approximately 3 × 10²⁰


eV, six times higher than the maximum energy allowed by the GZK mechanism—this threshold energy was also called the “GZK cutoff”, although it is not meant to be a sharp cutoff. Fly’s Eye was a telescope that detected cosmic rays by observing the fluorescence light they produced when they hit the atmosphere. Two years later, on the other side of the world, the Akeno Giant Air Shower Array (AGASA) in Japan recorded another of these “impossible” events, estimated to have an energy of approximately 2 × 10²⁰ eV. The AGASA array, located 120 km west of Tokyo, consisted of 111 particle detectors spread over an area of 100 km², each housed in a small hut of approximately 2 m². The construction of AGASA began in 1987 and had only just been completed; it remained the world’s largest surface detector for measuring high-energy cosmic rays until 2004, when it was surpassed by the Pierre Auger Observatory, which we will discuss later in this book.
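The order of magnitude of the GZK threshold can be recovered with a short calculation. This is an illustrative sketch, not taken from the book: it assumes a head-on collision of a proton with a typical CMB photon (mean photon energy taken as roughly 2.7 kT, an assumption) and the threshold condition for photopion production, E_th ≈ m_π(m_π + 2m_p)/(4ε).

```python
# Hedged estimate of the GZK threshold for p + gamma_CMB -> p + pi.
m_p   = 938.3e6          # eV, proton rest energy
m_pi  = 139.6e6          # eV, charged pion rest energy
k_B_T = 8.617e-5 * 2.7   # eV, kT for the 2.7 K microwave background
eps   = 2.7 * k_B_T      # eV, assumed mean CMB photon energy (~6e-4 eV)

# Threshold for a head-on collision (invariant mass reaches m_p + m_pi):
E_th = m_pi * (m_pi + 2 * m_p) / (4 * eps)
print(f"E_th ~ {E_th:.1e} eV")
```

The result, around 10²⁰ eV, matches the 5 × 10¹⁹ eV quoted in the text within a factor of a few; averaging over photon energies and collision angles lowers the effective threshold.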

Anomalous Events Not only were cosmic rays of unexpectedly high energies observed, but also interactions that were difficult to explain. In 1972, a cosmic ray detector located on Mount Chacaltaya in Bolivia recorded a shower that was rich in charged particles and poor in photons, contrary to expectations based on isospin symmetry in hadronic interactions. Isospin symmetry, introduced by Heisenberg, describes the similarity of families of particles with respect to the strong (hadronic) interaction; it is an approximate symmetry of nature, with deviations arising from the effects of the electromagnetic and weak interactions. The proton and the neutron have approximately the same mass, and the forces between them are identical; the same applies to the positive, negative, and neutral pions, which should be produced in approximately equal numbers in a strong (hadronic) interaction. Since the neutral pion decays quickly into a pair of photons, photons should be present in all hadronic interactions. This anomaly prompted the event to be referred to as a “Centauro”, after the half-man, half-horse creature of Greek mythology. Since then, detectors in the mountains of Bolivia and Tajikistan have recorded more than 40 Centauro events. If these events are real and not the result of errors in the experimental apparatus, one possible explanation is that the strong interactions between particles become atypical when they are extremely energetic. Observations made in the Pamir Mountains in Tajikistan seem to indicate an increase in the probability of interaction of high-energy cosmic rays. One explanation is that an energy scale is being observed at which quarks reveal substructures; another is the explosion of a nearby black hole. The exact


explanation for these events remains uncertain, and it is unclear whether they are measurement errors or genuine phenomena that occur only at the highest energies, beyond the reach of accelerators. Only time will tell, but it is important not to give up the search.

Hypotheses on the Origin of Cosmic Rays One important question was still unanswered, and a complete understanding is lacking even today: where do cosmic rays originate, and what mechanism provides their incredible energy? In 1901, Nikola Tesla, a Serbian-Austrian physicist, had given an early answer to this question, at a time when few even believed in the existence of cosmic rays. Tesla had patented an “apparatus for using radiant energy.” According to his patent, the Sun and other sources of radiant energy emit positively charged particles. Thus, Tesla believed that the source of cosmic rays, which he thought were positively charged, was the Sun together with other unspecified sources. Today, we know that the Sun is the main producer of protons below one GeV. However, Tesla’s patented machine promised to generate energy from cosmic rays through electromagnetic induction in large systems of metal coils in the air. Despite raising over a million dollars in funding for this idea, Tesla failed to create his generator. We now know that although the individual energy of cosmic rays may be very high, the total energy delivered by cosmic rays to Earth is approximately 100 million times smaller than the corresponding solar energy, which would make Tesla’s device very inefficient at producing energy. In 1933, the Swiss scientist Fritz Zwicky (1898–1974) and his German colleague Walter Baade (1893–1960) proposed a revolutionary hypothesis about the origin of cosmic rays: they believed that massive stars explode at the end of their lives, producing cosmic rays and leaving behind a collapsed star made of densely packed neutrons. They referred to this explosive event as a “supernova.” However, the question of how a supernova remnant (or any remnant of a gravitational collapse) can accelerate particles to the high energies observed on Earth remained unanswered. The fruitful collaboration between Zwicky and Baade ended with a series of fights—Baade said he was afraid that Zwicky would kill him.
It was not until 1949 that Enrico Fermi provided a solution to this problem. Fermi was a highly influential physicist who ranks among greats such as Einstein, Landau, Feynman, and Heisenberg. After completing his studies in Pisa, he became a professor in Rome and gathered a group of brilliant young collaborators known as the “boys from Via Panisperna.” Fermi believed in the


close unity of theory and experiment and made significant contributions to nuclear physics, including the discovery that slow neutrons catalyze nuclear transmutations. He received the Nobel Prize in 1938 and went on to participate in the Manhattan Project, later speaking out against the use of atomic bombs on civilians. After the war, he dedicated himself to theoretical studies on the physics of elementary particles and the origin of cosmic rays. Fermi’s original idea for explaining the acceleration of charged particles in supernova remnants and in the neighborhoods of compact objects such as black holes was that charged particles gain energy through collisions with moving regions of inhomogeneous magnetic field. The interstellar medium is a plasma (a gas with a high degree of ionization and low density) that is in motion. The result is a succession of many random events, in each of which a particle gains energy in proportion to the energy it already has, until it reaches very high energies, much like a tennis ball hit again and again by rackets without suffering atmospheric friction. His idea has been refined over time but remains the basis of the current understanding of the acceleration mechanism.
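Fermi’s mechanism can be illustrated with a toy Monte Carlo. The sketch below is not from the book, and its numbers (the fractional energy gain per collision xi and the escape probability p_esc) are invented for illustration: each “collision” multiplies a particle’s energy by (1 + xi), and after each collision the particle may leave the acceleration region. The survivors naturally arrange themselves into a power-law spectrum, the hallmark of Fermi acceleration.

```python
import math
import random

random.seed(1)
xi, p_esc = 0.1, 0.1      # assumed gain per collision and escape probability
E0 = 1.0                  # injection energy, arbitrary units

final = []
for _ in range(200_000):
    E = E0
    while random.random() > p_esc:   # particle stays for another collision
        E *= 1.0 + xi                # gain proportional to current energy
    final.append(E)                  # energy at escape

# The fraction of particles above E falls off as E^-gamma, with
# gamma = ln(1/(1 - p_esc)) / ln(1 + xi): a power law, not an exponential.
fracs = {}
for E in (2.0, 4.0, 8.0):
    fracs[E] = sum(e >= E for e in final) / len(final)
    print(f"fraction above {E:.0f}: {fracs[E]:.3f}")
gamma = math.log(1 / (1 - p_esc)) / math.log(1 + xi)
print(f"expected integral slope gamma ~ {gamma:.2f}")
```

Each doubling of the energy threshold cuts the surviving fraction by roughly the same factor, which is exactly the scale-free, power-law behavior observed in the cosmic ray spectrum.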

4 The Colors of the Universe

The first scientific observations of the Universe were of course made in the so-called “optical band” of the electromagnetic spectrum, using the human eye and, later, instruments (such as telescopes derived from the Galilean and Newtonian telescopes) sensitive to light visible to the human eye. The study of cosmic messengers begins with photons, the particles that make up light and that we now know to represent the “quanta” of the electromagnetic field, with energies on the order of one electronvolt (eV), approximately 1.6 × 10⁻¹⁹ J, i.e., the typical energy that atomic electrons release when they fall to lower energy levels. Our brain interprets photons of these energies as colors: photons reflected by plant leaves, which have a typical energy of 2.2 eV, are interpreted as green, while those reflected by the surface of a lake when the sky is clear, which have a typical energy of 2.6 eV, are interpreted as blue. However, the spectrum of electromagnetic waves extends indefinitely, and what we see is only a small part of it (Fig. 4.1). The energy of electromagnetic waves such as light is proportional to their frequency f (i.e., the number of times, also called the number of cycles, that the wave repeats itself per second) through Planck’s well-known relationship E = h f, where h ≃ 6.6 × 10⁻³⁴ joules times second is the Planck constant, which describes the quantization of the Universe. The relationship can be written in more common units as

E (in eV) ≃ f / (2.4 × 10⁵ GHz).
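As a quick numerical check of this conversion (a sketch using the standard value of Planck’s constant expressed in eV·s, not a formula taken from the book):

```python
# Photon energy from frequency, E = h * f, with h expressed in eV·s.
H_EV_S = 4.136e-15  # Planck constant in electronvolt-seconds

def photon_energy_ev(f_hz: float) -> float:
    """Return the photon energy in eV for a frequency in hertz."""
    return H_EV_S * f_hz

print(photon_energy_ev(5.0e14))   # green visible light, about 2 eV
print(photon_energy_ev(2.4e14))   # 2.4e5 GHz, about 1 eV as in the text
```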

The typical frequency of visible light is on the order of 500 trillion cycles per second (hertz, or Hz), or 500 THz (terahertz), and the corresponding wavelength is on the order of a micrometer (a thousandth of a millimeter) or


Fig. 4.1 The spectrum of electromagnetic waves. From Wikimedia Commons

less. The wavelength, λ, is equal to c/f, where c, approximately 300,000 km per second, is the speed of light in vacuum. Since the 1930s, the spectrum of frequencies (and therefore energies) of the light waves we are capable of observing in the Universe has begun to extend: this is multifrequency (or multiwavelength) astronomy/astrophysics. New colors invisible to the human eye allow us to see previously unknown phenomena. The field of observation of the electromagnetic spectrum has been expanded to more than twenty orders of magnitude. Think about how different your hand looks if you see it through X-rays rather than with your eyes (Fig. 4.2)—and in this case, the difference is only four orders of magnitude or so, between the electronvolt of visible light and the tens of kiloelectronvolts of the X-rays used in radiology. Objects emit “thermal” radiation, radiation at all wavelengths whose peak intensity depends only on the temperature of the emitting body. Our bodies, at 36 °C, emit the peak of their radiation in the infrared, at approximately 10 micrometers of wavelength; the surface of the Sun, which has a temperature of approximately 6,000 °C, emits its peak radiation at 0.5 micrometers, right in the middle of the visible spectrum (this is not a coincidence and is well explained by evolutionary biology—and of course also by creationist biology). Thermal radiation is emitted with a characteristic distribution (Fig. 4.3) known as the Planck or “black body” spectrum. From now on, we will use the kelvin (K), the unit of the international system, as the unit of measure of temperature; the temperature in kelvin is given by the temperature in degrees Celsius plus 273.15 (for


Fig. 4.2 Left hand photographed in the visible and in X-rays. https://www.scienceabc.com

Fig. 4.3 Planck distribution. From hyperphysics

example, the temperature of the human body, at 36 °C, is approximately 310 K). The Planck distribution is bell-shaped, with a maximum for photons at a wavelength corresponding to an energy Emax (in electronvolts) proportional to the temperature in kelvin (this proportionality is known as Wien’s law). The diffuse light of the Universe shows the presence of numerous thermal bands (Fig. 4.4). In this figure, there are also regions where peaks of the Planck


Fig. 4.4 Distribution of wavelengths of background light in the Universe. The acronyms CRB, CMB, CIB, COB, CUB, CXB, CGB indicate respectively the cosmic radio, microwave, infrared, optical, ultraviolet, X-ray, and gamma-ray backgrounds. From R. Hill, K.W. Masui, D. Scott, Applied Spectroscopy 72 (2018) 663

distribution type are not noticeable, for example, the high energy region (the farthest to the right in the figure); we call them nonthermal regions. Hot solid objects produce light with a continuous spectrum; sometimes we find objects that emit “lines” at particular wavelengths. An example is given by the emission lines of gases: the energies, and thus the wavelengths and frequencies, of the photons emitted when electrons fall from one allowed energy level to another are well defined. This allows chemical analysis of stars through spectroscopy: if I see the characteristic lines, for example, of iron (Fig. 4.5), I can say that there is iron in the source; by comparing their intensity with that of the typical hydrogen emission lines, I can say how much iron is present compared to hydrogen. Since stars burn hydrogen and, through a long cycle, may transform it into heavier elements such as iron (before eventually undergoing a transition and becoming a more compact object, a neutron star or a black hole), by measuring the amount of hydrogen compared to iron it is possible to estimate the age of stars, although the complexity of stellar populations makes this game difficult to play. The typical energies of electron transitions in the atom are on the order of electronvolts and thus fall in the visible, infrared, or ultraviolet regions.

Fig. 4.5 Emission spectra of some elements: hydrogen, helium, oxygen, neon, and iron from top to bottom. From Wikimedia Commons

A particularly important radiation line is the 21 cm line (also known as the HI line), which is located in the radio wave band. It is characteristic of emission from cold, neutral interstellar hydrogen atoms. When the spins (i.e., in a classical view, the rotation directions) of a hydrogen atom’s proton and electron are antiparallel, the atom has slightly lower energy than when they are parallel (the difference is called the “hyperfine splitting”). The energy difference is approximately 6 millionths of an eV and corresponds to a wavelength of 21 cm, i.e., a frequency of 1420 megahertz. Although the transition occurs very rarely, there is so much hydrogen in the Milky Way that the 21 cm line is easily observable. The 21 cm radiation easily penetrates the interstellar dust clouds that obstruct deep optical observations in galaxies, thus allowing the mapping of the Milky Way and other galaxies. Hot solid objects surrounded by colder gas show a nearly continuous spectrum with dark lines that correspond to the gas emission lines (of course the gas absorbs light at the same frequencies at which it re-emits it). By studying the absorption lines, we can determine the chemical composition of a planet’s atmosphere or, if we observe a star, of its outer regions. In this case, we are talking about absorption spectroscopy. A “telescope” for the detection of electromagnetic waves must have a diameter D that is at least comparable to the wavelength λ (otherwise it is blind to that wavelength); its angular resolution (in radians: a full circle corresponds to 2π radians, so one radian corresponds to approximately 57 degrees) is roughly equal, under optimal conditions, to λ/D. If I look at the Moon with a 10-m optical telescope, the best angular resolution we can obtain is θ ≃ 0.5 µm / 10 m ≃ 0.05 microradians ≃ 0.003 millidegrees. Therefore, a “pixel” on


the Moon, which is at a distance d of approximately 380,000 km, will have a side L ≃ θ × d, with θ measured in radians: approximately 20 m. This optimistic estimate applies to a perfect telescope and ignores the distortions due to the Earth’s atmosphere. Still, it is enough to understand why we have not been able to “see” (I speak on behalf of those who were already on this Earth during the lunar trips) the astronauts on the Moon.
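The numbers above can be reproduced in a few lines. This is only a sketch of the diffraction-limit estimate described in the text, using the telescope diameter and Earth-Moon distance quoted there.

```python
# Diffraction-limited angular resolution, theta ~ lambda / D, and the
# smallest feature ("pixel") resolvable on the Moon at that resolution.
wavelength = 0.5e-6   # m, visible light
diameter = 10.0       # m, telescope aperture
distance = 3.8e8      # m, Earth-Moon distance

theta = wavelength / diameter   # rad, 0.05 microradians
pixel = theta * distance        # m, smallest resolvable feature
print(f"theta = {theta:.2e} rad, pixel on the Moon = {pixel:.0f} m")  # ~19 m
```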

The Universe in Radio Waves The first of the “new astronomies”, which allowed us to see astronomical objects previously unknown in the optical band, was radio astronomy. The beginning of the story dates back to 1899, when Tesla built a radio receiver and claimed to receive extraterrestrial signals continuously. Tesla’s observations were not taken seriously because he interpreted them as signals of intelligent life, finding in them simple numerical regularities. The spatial resolution of his instrument was poor, but he associated the main source of the emission with the planet Mars; he reiterated his claim for at least twenty years, and most colleagues laughed at his fanciful interpretation of the data. In the 1930s, the American Karl Jansky, who worked at Bell Laboratories, was working on a new radio antenna operating at frequencies of approximately 20 MHz (a wavelength close to 15 m) and could not eliminate an unknown noise. He then discovered that this “noise” extended to lower wavelengths. By orienting the antenna, he realized that the noise came from outside the atmosphere and varied depending on the point in the sky “observed” by the antenna, reaching a maximum toward the galactic plane. In 1939, the engineer and amateur astronomer Grote Reber built a 9-m diameter parabolic antenna in his backyard and made a map of the radio-wave “noise” at 160 MHz (a wavelength of 1.8 m); he discovered that it was associated with the galactic plane and showed a peak near the center of the Milky Way. He found particularly strong noise in the direction of the area of the constellation of Cygnus that hosts an external galaxy now known as Cygnus A. At that point, it was clear that what had been considered noise was in reality an astrophysical signal and that the Milky Way, in particular its center, and Cygnus A were emitters of radio waves. At the beginning of the 1940s, radar (an acronym for “radio detection and ranging”) was developed for military purposes.
4 The Colors of the Universe

Radar is a system that uses radio waves to detect both stationary and moving objects. Its operation is based on the physical phenomenon of the reflection of electromagnetic radiation: a receiving antenna can detect the returning radiation after a time equal to twice the signal's travel time. The development of radar brought with it the development of radio receivers for short radio wavelengths, down to a few centimeters. The instruments became increasingly sensitive and accurate and began to be able to "see" structures such as sunspots. Detectable radio waves range from the "extremely low frequency" (ELF) band, with frequencies starting from 3 Hz and wavelengths of 100,000 km, to the "tremendously high frequency" (THF) band, with frequencies of thousands of GHz and wavelengths of a hundred micrometers. Radio waves with wavelengths from about a millimeter up to a few tens of centimeters are also known as microwaves. The microwaves of the ovens we use to heat food quickly have typical wavelengths of 10 cm and thus typical frequencies of 3 GHz and energies of 12 millionths of an eV. Water, fat, and other substances in food absorb energy from microwaves in a process called dielectric heating. Many molecules (such as those of water) are electric dipoles: they have a partial positive charge at one end and a partial negative charge at the other, so they rotate as they try to align with the alternating electric field of the microwaves. The rotating molecules strike other molecules and set them into motion, thereby dissipating energy in the medium and raising the temperature of the food.
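The conversions quoted above follow from f = c/λ and E = hf; a quick check in Python, with standard values of the constants:

```python
# Relation between wavelength, frequency, and photon energy for radio waves.
# Illustrative value: the ~10 cm microwaves of a kitchen oven, as in the text.
c = 2.998e8          # speed of light, m/s
h = 4.136e-15        # Planck constant, eV*s
wavelength = 0.10    # m (assumed oven microwave wavelength)

frequency = c / wavelength        # Hz
energy = h * frequency            # eV

print(f"f = {frequency/1e9:.1f} GHz")     # about 3 GHz
print(f"E = {energy*1e6:.0f} micro-eV")   # about 12 millionths of an eV
```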

Large Radio Telescopes

As we already saw, a "telescope" for the detection of electromagnetic waves must have a diameter D that is at least comparable to the wavelength λ. Radio telescopes sensitive to radio waves must therefore be large (or be composed of many instruments at large distances, each of which must be at least as large as the wavelength), which also makes them difficult to point at a target. The most spectacular example of the so-called single-dish radio telescopes was the Arecibo Observatory (Fig. 4.6), located on the island of Puerto Rico. Inaugurated in 1963, it was equipped with a single-aperture antenna with a diameter of 305 m, the largest in the world until the commissioning in September 2016 of the 500-m FAST radio telescope in Guizhou Province, China. Following failures of the cables supporting its instrument platform during 2020, the suspended platform of the Arecibo telescope collapsed in December of that year.


A. De Angelis

Fig. 4.6 The Arecibo radio telescope when it was still in operation. From https://www.ucf.edu

Very Long Baseline Interferometry

Another technique for radio wave detection is interferometry: a radio interferometer is made up of two or more parabolic antennas that simultaneously observe an astronomical source and whose signals are combined to produce a single signal. We can imagine taking a large parabolic antenna like those mentioned above and removing most of its surface, leaving dozens of much smaller telescopes. An example is the Very Large Array (VLA), one of the world's leading radio interferometers, located near Socorro, New Mexico, USA. It consists of 27 radio antennas, each 25 m in diameter, in a Y-shaped configuration with arms 21 km long (Fig. 4.7). The individual telescopes can move on rails. Starting in the 1970s, improvements in the stability of radio telescope receivers have allowed telescopes from all over the world to be combined to perform very long baseline interferometry. Instead of physically connecting the antennas, the data received at each antenna are paired with timing information from atomic clocks and later analyzed together with the data recorded similarly at the other antennas to produce the resulting image. Using this method, it is possible to synthesize an antenna the size of the Earth. The large distances between the telescopes allow for very high angular resolutions, much better than in any other field of astronomy. At the highest frequencies, a precision of the order of 10 millionths of an arcsecond is possible, such that a "pixel" of an image of the Moon is about 2 cm on a side.
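The same diffraction estimate, applied to an Earth-size baseline, gives figures of this order; in the sketch below the 1.3-mm observing wavelength is an assumed, EHT-like value, so the numbers come out slightly different from, but comparable to, those quoted in the text:

```python
# VLBI angular resolution: theta ~ lambda / D, with D the baseline length.
# Assumed values: 1.3 mm observing wavelength and an Earth-diameter baseline.
wavelength = 1.3e-3        # m (assumed)
baseline = 1.27e7          # m (Earth diameter)
moon_distance = 3.8e8      # m

theta = wavelength / baseline           # radians
arcsec = theta * 206265                 # arcseconds (206,265 arcsec per radian)

print(f"theta ~ {arcsec*1e6:.0f} micro-arcseconds")            # about 21
print(f"Moon 'pixel' ~ {theta*moon_distance*100:.0f} cm")      # a few cm
```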


Fig. 4.7 The Very Large Array (VLA). From Wikimedia Commons

Today’s main arrays are the Very Long Baseline Array (VLBA) with telescopes located throughout North America and the European VLBI network (with telescopes in Europe, China, and South Africa). Each array usually operates separately, but there are occasional joint projects to have maximum sensitivity and resolution. In this case, it is referred to as the Global VLBI or “Event Horizon Telescope” (EHT), as in Fig. 4.8.

Fig. 4.8 The event horizon telescope. From Wikimedia Commons


The Cosmic Microwave Background

The Universe is a giant black body, and as such, it is permeated by thermal radiation that is not associated with any star, galaxy, or other celestial body. The present temperature of the Universe is approximately 2.7 K (it naturally decreases very slowly over time because the Universe is expanding at the expense of its internal energy and thus cooling), and its radiation is called the cosmic microwave background (CMB). The CMB is the oldest light in the Universe, emitted when the Universe was just 380,000 years old: before then the Universe was optically opaque, and any earlier light was quickly absorbed. For a temperature of 2.7 K, the radiation spectrum is centered at an energy of approximately 0.66 millielectronvolt, which corresponds to a frequency of 160 GHz and a wavelength of approximately 2 mm. The CMB photons are very numerous, approximately 410 per cubic centimeter, and make up the most important background in the Universe in terms of both number and energy. In Fig. 4.4, the Planck curve corresponding to the CMB is clearly visible.
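The numbers quoted for the CMB follow from blackbody physics; a sketch of the peak of the Planck spectrum in frequency (the factor 2.821 is the standard Wien displacement constant in its frequency form):

```python
# Peak of a blackbody spectrum in frequency: E_peak ~ 2.821 * k * T.
# Applied to the CMB temperature of approximately 2.7 K quoted in the text.
k = 8.617e-5        # Boltzmann constant, eV/K
h = 4.136e-15       # Planck constant, eV*s
c = 2.998e8         # speed of light, m/s
T = 2.725           # K

E_peak = 2.821 * k * T       # eV  -> about 0.66 meV
f_peak = E_peak / h          # Hz  -> about 160 GHz
lam = c / f_peak             # m   -> about 1.9 mm

print(E_peak * 1000, f_peak / 1e9, lam * 1000)
```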

Compact Objects and Accretion Disks

Radio waves are important not only for the study of the CMB (which allowed us to verify the Big Bang theory and to discover other fundamental properties that we will discuss later) but also for studying celestial objects with unprecedented accuracy. The atmosphere is transparent to radio waves; we can therefore construct radio telescope systems with dimensions equivalent to the Earth's diameter or even, for stable sources, to the diameter of the Earth's orbit, and thus achieve unbeatable λ/D resolutions. Thanks to these resolutions, radio astronomy has led to the discovery of several classes of new distant objects, including pulsars, binary stars, and active galactic nuclei (most of which also emit copiously in the radio region, which is why they were originally called quasars, or quasi-stellar radio sources). These objects are home to some of the Universe's most extreme and energetic physical processes. Radio astronomy also allows us to see things that are not detectable in optical astronomy, in particular the so-called synchrotron radiation. Synchrotron radiation (also known as "synchrotron light", although it is not in general visible) is generated by charged particles, in particular electrons or positrons, traveling at speeds close to the speed of light along trajectories curved by a magnetic field. This occurs near compact objects such as black holes in active galactic nuclei, which grow at the expense of the
surrounding matter, supernova remnants, and pulsars. Synchrotron radiation is a characteristic signature of accelerated charges. The emission spectrum of synchrotron radiation can range from radio waves to X-rays, depending on the intensity of the magnetic field and the particle density in the region where the phenomenon occurs, but in general, the emission is conspicuous in the radio region. Since, as mentioned, the resolution of radio telescope systems is excellent, the most accurate imaging of distant objects has become possible. In 2019, the EHT published a historic photo, which was presented by the press as "the first photo of a black hole". Of course, the black hole itself is invisible: the telescope sees the synchrotron radiation emitted by the matter just before it crosses the point of no return in its fall toward the central black hole of the galaxy M87. The black hole at the center of M87 is approximately a thousand times more massive (and therefore larger: the radius of a black hole, neglecting rotational effects, is proportional to its mass) than the black hole at the center of our galaxy, with a mass of over 6 billion solar masses. M87 is approximately 16.8 megaparsecs away from us, or approximately 55 million light-years. At such a distance, the event horizon (the boundary in spacetime beyond which events cannot affect an outside observer or, in other words, the surface separating the observable Universe from the unknown interior of a black hole) covers an angle of only 15 millionths of an arcsecond, roughly equal to the tiny angular size of Sgr A*, the black hole at the center of the Milky Way. We are close to the limits of resolution, but the photograph of the structure (Fig. 4.9) is spectacular!
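The angular size of the M87 horizon can be checked from the Schwarzschild radius, r_s = 2GM/c²; the 6.5-billion-solar-mass value used below is the commonly quoted EHT estimate, consistent with the "over 6 billion" in the text:

```python
# Schwarzschild radius and angular diameter of the M87* event horizon.
# Mass and distance are the values quoted in (or consistent with) the text.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M_sun = 1.989e30         # solar mass, kg
Mpc = 3.086e22           # megaparsec, m

M = 6.5e9 * M_sun        # assumed EHT mass estimate
d = 16.8 * Mpc           # distance to M87

r_s = 2 * G * M / c**2                  # Schwarzschild radius, about 1.9e13 m
theta = 2 * r_s / d                     # angular diameter, radians
microarcsec = theta * 206265 * 1e6

print(f"r_s ~ {r_s:.1e} m")
print(f"horizon ~ {microarcsec:.0f} micro-arcseconds")   # about 15
```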

Fig. 4.9 Accretion disk of the M87 black hole. Courtesy of the EHT collaboration


Emissions from binary systems share common features with those of accreting compact objects; in addition to emissions in the highest energy bands, X and gamma rays, they show characteristic radio emissions that have allowed us to photograph such systems. Many stars in the Universe form binary systems, orbiting each other. If the orbital motion takes place in a sufficiently violent and energetic environment, allowing the acceleration of particles (electrons and protons) to relativistic speeds close to the speed of light, X-rays and gamma rays can be produced, but the same population of accelerated particles also produces copious radio emission. Studying this radio emission allows us to study the emission and absorption phenomena in these systems and, if we observe with long-baseline radio interferometers, to resolve the morphology of the emitting region. Binary stars are of different types; in some, at least one star is very massive (more than 10 times the mass of the Sun), and the other can be another massive star or a compact object, such as a neutron star or a black hole. In this case, the compact object can accrete matter from its companion, with a mechanism similar to that seen earlier for active galactic nuclei. We call these systems microquasars.

Molecules and Emission Lines

The formation of stars takes place thanks to the existence of dense molecular clouds; in the case of the Milky Way, the diffuse components of matter together account for approximately 15% of the ordinary mass. The existence of dark nebulae, after a first hypothesis by Galileo, was confirmed by William and Caroline Herschel (brother and sister) in 1785, although the first images date back to the 1920s. William Herschel, son of a musician, was born in Hanover, Germany, in 1738. He followed in his father's footsteps and moved to England to teach music, but eventually he became interested in astronomy and started to build his own telescopes, developing and refining Isaac Newton's design. He called Caroline to work with him; her exceptional mathematical skills were at the foundation of the modern mathematical approach to astronomy. After 1950, the discovery of the 21 cm emission line of atomic hydrogen (HI) demonstrated the existence of a cold component of the interstellar medium. The problem is that if the clouds are dense, hydrogen tends to form diatomic molecules, in which the characteristic 21 cm transition is strongly suppressed for reasons related to quantum mechanics. After 1970, molecular structures were revealed thanks to telescopes sensitive to another emission line in the microwave region, the characteristic transition line of carbon monoxide, CO, at a wavelength of 2.6 mm.


The ability to expand the radio spectrum to shorter wavelengths, into the millimeter and submillimeter range and eventually reaching the far infrared, enabled the discovery and measurement of various molecules in interstellar space, including those found in both the coldest regions and the warmest areas near stars, following the detection of the carbon monoxide transitions. More than 200 different molecules have been identified thus far. However, despite ongoing efforts, scientists have yet to find the molecules that constitute the foundation of life, such as DNA or amino acids. Molecular astronomy has led to the construction of specialized equipment; the most spectacular example is the Atacama Large Millimeter/Submillimeter Array, ALMA (Fig. 4.10). ALMA is an interferometer composed of 66 antennas: fifty parabolic dishes, each 12 m in diameter, plus a compact array of four 12-m antennas and twelve 7-m antennas, operating between 9.6 and 0.3 mm in wavelength. ALMA was built at an altitude of 5,000 m in the Atacama Desert in Chile, one of the driest places in the world; it is the most expensive ground-based astronomical instrument ever built (approximately 1.2 billion euros) and required a global collaboration.

Fig. 4.10 The ALMA multitelescope system. Credit: ESO


The Square Kilometer Array

As explained above, interferometers optimize the resolution of radio telescopes, but their total surface area, and therefore the amount of light they collect, is smaller than that of single-dish radio telescopes such as Arecibo or FAST. To overcome this limitation, the Square Kilometer Array (SKA) is being built: an interferometer with a total collecting area of 1 km², spread over two sites in Australia and South Africa and sensitive in the region between 50 MHz and 30 GHz. This telescope, currently under construction, is managed by a large international collaboration.

The Infrared Universe

In 1800, while studying the spectrum of sunlight with a thermometer, William Herschel noted that heating was maximal when the thermometer was illuminated just outside the limit of the visible region, at the red end of the spectrum. However, infrared astronomy only seriously started in the mid-twentieth century. In the 1940s and 1950s, the U.S. astrophysicist Albert Whitford began to work on the problem of interstellar dust, which absorbs light and limits the range of observations. Clouds of interstellar dust obscure the view of much of our galaxy and of other galaxies. Large clouds of interstellar gas, mainly hydrogen, mix with dust particles formed from the condensation of heavier elements and compounds produced in stars and expelled into the interstellar medium. Whitford discovered that this dust is almost transparent to infrared radiation: infrared astronomy allows us to observe a Universe that is otherwise invisible. Another reason to observe in the infrared is to see ever fainter objects at ever greater distances. Cosmic expansion shifts the spectrum of emitted light toward the red and the infrared, and this is how we see galaxies that are moving away from us at large distances. Today, we can detect galaxies that emitted their light when the Universe was only 2% of its current age. If a galaxy emitted its light when the Universe was one quarter of its current size, the received wavelength has become roughly four times its original value; light emitted in the visible has therefore moved into the infrared. The first major progress in infrared astronomy came with the development of sensitive solid-state infrared detectors. The first large map of the sky in the near infrared was called the Two Micron Survey and was based on observations made from Mount Wilson in California. Mount Wilson hosted a 100-inch (two and a half meter) diameter telescope, which had been made nearly useless for optical astronomy by light pollution from nearby Los Angeles.
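The wavelength stretching described above can be written as λ_obs = λ_emit × (1 + z), where 1 + z is the factor by which the Universe has expanded since emission; the redshift z = 10 and the hydrogen Lyman-α line used below are illustrative choices, not values from the text:

```python
# Cosmological redshift: the observed wavelength scales with the expansion factor,
#   lambda_obs = lambda_emit * (1 + z).
# Illustrative (assumed) values: Lyman-alpha (121.6 nm) from a galaxy at z = 10.
lam_emit = 121.6e-9      # m, emitted in the ultraviolet
z = 10.0                 # assumed redshift

lam_obs = lam_emit * (1 + z)
print(f"observed wavelength = {lam_obs*1e6:.2f} micrometers")  # about 1.34: infrared
```

This is why a telescope hunting for the earliest galaxies, such as the JWST, must observe in the infrared.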


It was immediately clear that it was particularly important to use infrared astronomy to explore the center of the Milky Way. This is the region where most galactic stars (and, as we shall see, most dark matter if it exists) concentrate. However, gas and dust are abundant in the galactic plane, and with the Earth in such a plane, there is no hope of observing this region at visible wavelengths.

The Center of the Milky Way

At the beginning of the millennium, two groups, one led by Reinhard Genzel at the Max Planck Institute for Extraterrestrial Physics in Garching, near Munich, and one led by Andrea Ghez at the University of California, Los Angeles, studied in detail the orbits of 28 stars around the galactic center over 16 years and were able to state that the nucleus of the galaxy houses a supermassive black hole with a mass of approximately 4 million solar masses. They also confirmed a specific prediction of general relativity on the precession rate of a star's orbit around a black hole. Figure 4.11 is an infrared image, converted to visible wavelengths, of the orbits of some of these stars. Various spectacular videos available on the internet condense 16 years of orbital revolutions into a few seconds. For these studies, Genzel and Ghez were awarded the Nobel Prize in 2020.

The James Webb Space Telescope

Since 1983, orbiting telescopes sensitive to infrared rays have been sent into space to reveal the bands closest to visible light, which are absorbed by the atmosphere. The Spitzer satellite revealed regions of star formation and allowed the study of the atmospheres of exoplanets; the Herschel satellite revealed the presence of water in extraterrestrial objects. All of this favored the decision that the great space telescope complementing and then, in some ways, replacing Hubble would be an infrared telescope. The James Webb Space Telescope (JWST), launched in December 2021 and operating since June 2022 after a commissioning phase, is opening new horizons for infrared astronomy, and for astronomy in general, thanks to cutting-edge design technologies. It is the largest telescope ever sent into space: the large primary mirror has a diameter of 6.5 m and consists of 18 ultrathin hexagonal mirrors made of beryllium that compose a single large collecting surface, deployed after reaching the orbital point (Fig. 4.12). The JWST features several innovative technologies. In particular, it has a cryogenic system for cooling the detectors to 7 K, and it is stabilized with a precision on the order of picometers (millionths of a micrometer).


Fig. 4.11 The orbits of stars S0-2 and S0-102 near the black hole at the center of the Milky Way. The orbits of other stars are represented by less marked lines. The background is a high-resolution infrared image of the region. Courtesy of the Keck/UCLA collaboration

Fig. 4.12 The James Webb Space Telescope. Credit: NASA


Unlike Hubble, the JWST orbits the Sun 1.5 million km from Earth, around the so-called Lagrangian point L2, an orbit already used by the WMAP, Herschel, and Planck missions. This orbit keeps the Webb telescope aligned with the Earth as it orbits the Sun, making it possible to shield the telescope from the light and heat of the Sun, Earth, and Moon while ensuring almost continuous communications with the control center. The so-called Lagrangian (or Lagrange) points L1 and L2 are gaining great importance in astronomy. But what is a Lagrangian point?

Lagrangian points. In the gravitational three-body problem, Lagrangian points are points in space where two massive bodies, through the combination of their gravitational forces, allow a third, much smaller body, such as a satellite or an asteroid, to maintain a fixed position relative to them. The theory of the Lagrangian points is due to Joseph-Louis Lagrange (1736–1813), an Italian-French mathematician and astronomer who made significant contributions to various fields of mathematics and physics. Lagrange is particularly known for his work on analytical mechanics, which laid the foundation for the development of modern theoretical physics. If we identify the two main bodies by their masses M and m, with M > m (for example, M could refer to the Sun and m to the Earth), the L1 point lies on the line passing through M and m, between the two. It is the easiest point to understand intuitively: it is the point where the gravitational attraction of m partially cancels that of M, so that a body positioned there, although closer to the Sun than the Earth is, has an orbital period exactly equal to the period of m. The L1 point of the Sun-Earth system is an ideal observation point for the Sun, as a satellite there is never eclipsed by the Earth or the Moon; space-based solar observatories are therefore positioned near L1. The L2 point of the Sun-Earth system is located beyond the radius of the lunar orbit. It lies on the same line as the L1 point but beyond body m, and it is likewise the point where the orbital period of a body positioned there is equal to the period of m. The L2 point of the Sun-Earth system is an excellent observation point for deep space, thanks to the stability of the solar illumination, which simplifies the thermal management of the equipment, and to its unobstructed view away from the Sun. If M is much larger than m, the distance r of L1 and L2 from m is approximately the same; in the Sun-Earth system, r is approximately 1,500,000 km. There are three additional Lagrangian points, called L3, L4, and L5, of lesser interest for space science (Fig. 4.13).
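For M ≫ m, the distance of L1 and L2 from the smaller body is given, to first approximation, by the standard formula r ≈ a (m/3M)^(1/3); applied to the Sun-Earth system it reproduces the 1.5 million km quoted above:

```python
# Distance of L1/L2 from the smaller body, first-order approximation for M >> m:
#   r ~ a * (m / (3 M))**(1/3)
a = 1.496e11          # mean Sun-Earth distance, m
m_over_M = 3.003e-6   # Earth mass / Sun mass

r = a * (m_over_M / 3) ** (1 / 3)
print(f"r ~ {r/1e9:.2f} million km")   # about 1.5 million km
```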

Euclid

The Euclid mission aims to understand the nature of dark matter and dark energy, which represent the majority of the energy content of the Universe.


Fig. 4.13 The five Lagrangian points of the Sun-Earth system; the equipotential lines are superimposed. The arrows indicate the direction of the net force around the points of equilibrium, toward them (red) and away from them (blue). From Wikimedia Commons

Equipped with a 1.2 m diameter telescope feeding an optical sensor and an infrared sensor, Euclid measures the gravitational effects of dark matter on the shapes of galaxies. In addition, it maps the distribution of galaxies over the last 10 billion years of cosmic history across more than a third of the sky, in order to reveal the precise way in which dark energy has accelerated the expansion of the Universe and to measure whether and how cosmological parameters have varied over a broad period of time. The satellite was launched in July 2023 by a Falcon 9 rocket of SpaceX Corporation, and it is orbiting around the L2 Lagrangian point for a 6-year mission. Euclid can point to an area of the sky more than a hundred times larger than that seen by the Hubble Space Telescope and the James Webb Telescope: its field of view is approximately as large as the Moon in one dimension. This is why it can map a third of the sky in six years, albeit with lower depth compared to Hubble.


The Ultraviolet Universe

The most important reason for observing astronomical sources in the ultraviolet part of the spectrum is that there one finds some of the key spectral features of the most abundant element, hydrogen. These include the series of short-wavelength lines discovered by the US physicist Theodore Lyman around 1910, today called the Lyman series, and the associated ultraviolet continuum at even shorter wavelengths. Studying these lines and the ultraviolet continuum is essential for exploring the behavior of stars, galaxies, and the interstellar medium. The Lyman series lines correspond to the energy jumps of electrons moving between the ground level and the excited levels of the hydrogen atom. The fact that they are found in the ultraviolet part of the spectrum has forced astronomers to make observations from space, because the atmosphere is opaque to this range of wavelengths; ultraviolet (UV) astronomy has thus been possible only since rockets and satellites have allowed detectors to be placed outside the atmosphere. Furthermore, observing the interstellar medium in the ultraviolet region can be considered an extension of observing in the visible region, as the processes detected and measured are similar but at higher energies. Near the remnants of supernovae, expanding gases collide with the interstellar medium, causing emissions observed in the ultraviolet and at even higher energies. From 1962 to 1975, NASA launched eight orbiting solar observatories that recorded thousands of ultraviolet spectra of the Sun, and another eight satellites, starting in 1968, to study stars and the interstellar medium between 120 and 400 nm. The most successful ultraviolet space instrument before the Hubble Space Telescope (HST) was the International Ultraviolet Explorer, launched in 1978. After other missions, we finally arrived at Hubble.
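The Lyman-series wavelengths follow from the Rydberg formula, 1/λ = R_H (1 − 1/n²), for transitions ending on the ground level; a quick sketch:

```python
# Hydrogen Lyman series (transitions from level n down to the ground level n=1):
#   1/lambda = R_H * (1 - 1/n**2),   n = 2, 3, 4, ...
R_H = 1.0968e7   # Rydberg constant for hydrogen, 1/m

for n in [2, 3, 4]:
    lam = 1 / (R_H * (1 - 1 / n**2))
    print(f"n={n}: {lam*1e9:.1f} nm")   # 121.6 (Lyman-alpha), 102.6, 97.3 nm

limit = 1 / R_H   # series limit: onset of the ultraviolet continuum
print(f"series limit: {limit*1e9:.1f} nm")   # about 91.2 nm
```

All of these wavelengths lie well below the ~380 nm edge of the visible band, which is why the Lyman lines of nearby objects can only be observed from space.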
Hubble was launched into a low orbit at an altitude of approximately 535 km (its revolution period around the Earth is approximately 95 min) in 1990 and is currently operational; it is expected to function until 2030 or even 2040. It has a 2.4-m diameter mirror. It has recorded some of the most detailed images of the Universe in visible light, allowing a deep view into space and time. Many HST observations have had a profound impact on astrophysics, for example, allowing us to accurately determine the rate of expansion of the Universe. Although the Hubble Telescope has achieved its greatest fame thanks to its wonderful optical resolution, producing memorable images, one of its main scientific goals is to extend imaging and spectroscopy of astronomical objects in the ultraviolet region. A beautiful example of measuring the expansion velocity of a supernova through ultraviolet astronomy is the study of the Cygnus Loop, the remains
of a supernova that exploded approximately 20,000 years ago (the age of a supernova is calculated from the remnant's size and expansion velocity, except for the seven supernovae for which we can rely on historical astronomical observations recorded by humans), approximately 2,500 light-years away. The spectral line measurements show that the supernova remnant still expands at approximately 350 km/s and injects energy into the interstellar medium.
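As an illustration of the size-plus-velocity method, a naive constant-velocity estimate t ≈ R/v can be sketched as follows; the 3° angular diameter of the Cygnus Loop is an assumed illustrative value, and because the remnant has decelerated over time (it expanded much faster in the past), this naive estimate comes out larger than the ~20,000-year age quoted in the text:

```python
# Naive supernova-remnant age estimate: t ~ R / v, assuming constant velocity.
# Assumed: the Cygnus Loop spans roughly 3 degrees on the sky (radius ~1.5 deg)
# at the ~2,500 light-year distance given in the text.
import math

ly = 9.461e15                              # meters per light-year
d = 2500 * ly                              # distance to the remnant
radius = d * math.tan(math.radians(1.5))   # physical radius, about 65 ly
v = 350e3                                  # present expansion velocity, m/s

t = radius / v                             # seconds
print(f"t ~ {t / 3.156e7:,.0f} years")     # tens of thousands of years
```

Realistic age estimates model the deceleration of the shock in the interstellar medium rather than assuming a constant velocity.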

The Interstellar Medium and Intergalactic Medium

One of the many contributions of ultraviolet astronomy has been the first mapping of the structure of the interstellar medium. It has also been possible to measure its chemical composition near the solar system, thanks to the emission and absorption lines of hydrogen and those of heavier elements. Among these, deuterium, an isotope of hydrogen (i.e., an atom with the same chemical characteristics but a different atomic weight), is particularly important for cosmology. In practice, deuterium is a heavier form of hydrogen. The characteristic wavelengths of deuterium are very close to those of hydrogen, but a good instrument can distinguish them. From these data, it was possible to calculate the primordial density of deuterium, which is one of the elements that formed immediately after the Big Bang.
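The hydrogen-deuterium line separation comes from the dependence of the Rydberg constant on the reduced mass of the electron-nucleus system; a sketch for the Lyman-α line, with standard particle masses:

```python
# Isotope shift of Lyman-alpha: the Rydberg constant scales with the reduced mass
# of the electron-nucleus system, so deuterium lines sit slightly blueward of
# hydrogen's. Masses in kg (standard values).
m_e = 9.109e-31       # electron
m_p = 1.6726e-27      # proton (hydrogen nucleus)
m_d = 3.3436e-27      # deuteron

def reduced(m_nuc):
    """Reduced mass of the electron-nucleus system."""
    return m_e * m_nuc / (m_e + m_nuc)

lam_H = 121.567e-9                        # hydrogen Lyman-alpha, m
lam_D = lam_H * reduced(m_p) / reduced(m_d)

print(f"shift = {(lam_H - lam_D)*1e12:.0f} picometers")  # about 33 pm
```

A separation of a few hundredths of a nanometer is small but well within the reach of a good ultraviolet spectrograph.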

Supernova 1987A

Located in the Large Magellanic Cloud, a small satellite galaxy of the Milky Way, this supernova, whose light reached Earth in 1987, is the closest one observed during the era of modern observation techniques; its study was probably the first major success of multimessenger astrophysics, the explosion having been revealed at various wavelengths of light and through the emission of neutrinos. The ultraviolet spectrum showed that the star had been a blue supergiant (a hot, luminous star, an order of magnitude heavier than our Sun) shortly before the explosion. Hubble Space Telescope photographs have revealed an expanding nebula around the position of the progenitor star. Figure 4.14 is a composite image of SN 1987A formed from ultraviolet, X-ray, and radio wave images. Thanks to the study of the expansion, it has been seen that supernovae expand at speeds of about one tenth of the speed of light in the first days after the explosion.


Fig. 4.14 Remnant of the supernova 1987A seen at different wavelengths. ALMA radio data (in red) show the newly formed dust after the explosion. Hubble data in the ultraviolet (in green) and Chandra satellite data in the X-ray region (in blue) show the expanding shock wave. From Wikimedia Commons

Beyond the Limits of the Thermal Universe: X-Rays

As seen in Fig. 4.4, the diffuse spectrum of X-rays in the Universe has a thermal part (mostly associated with the accretion of compact objects, up to approximately 100,000 K) and a harder, nonthermal component. This important transition occurs in the X-ray region.

The Discovery of Cosmic X-Rays

The discovery of cosmic X-rays has therefore been one of the most important in the history of astronomy. It was accelerated by the Cold War. The launch by the Soviet Union, on October 4, 1957, of Sputnik 1, the first artificial satellite in the world, caused a real political and cultural crisis. The United States was behind in the space race and decided to start a large and costly enterprise to win the competition.


In 1958, the first US satellite, Explorer I, entered orbit carrying a simple Geiger-Müller counter designed by James Van Allen's group. It discovered two large belts of trapped radiation surrounding the Earth, earning the credit for the first important scientific discovery of space exploration. The so-called Van Allen belts extend from approximately 700 to 58,000 km above the Earth's surface. Most of the particles that form the belts come from the solar wind and from cosmic rays in general. Earth's magnetic field deflects those energetic particles and protects the atmosphere. Also in 1958, President Eisenhower established the National Aeronautics and Space Administration (NASA), which immediately created a scientific committee, called the Space Science Board, to interest scientists in space research. Bruno Rossi, who was among the fifteen members of the Board, was called upon to form a commission tasked with "special", that is, very original, space projects, and he chose as his collaborators for this project Thomas Gold, Salvador Luria (an Italian medical doctor, also exiled from the fascist regime, who would win the Nobel Prize in Medicine in 1969), and Philip Morrison. Rossi began to think about far-reaching visionary experiments, particularly in astrobiology and in "cosmic physics", a natural extension of cosmic ray physics synergic with radio astronomy. After a while, Rossi and his collaborators focused mainly on this second aspect (although Luria and Morrison would make important contributions to astrobiology). Rossi thought of designing space experiments that would provide a new line of attack on the same astrophysical problems he had been interested in through his studies of cosmic rays. In March 1961, the "Explorer X" probe, containing a magnetometer and an instrument called the Faraday cup, was launched from Cape Canaveral, Florida. The Faraday cup is a metallic cup sensitive to charged particles.
When ions hit the body of the cup, their charge is transferred to the walls. The metal can then be discharged to measure a small current proportional to the number of impinging ions. The probe, injected into a highly elliptical orbit (perigee 300 km, apogee 240,000 km), revealed the existence of a gas, or plasma, of charged particles, probably of solar origin: it was the so-called “solar wind”, which flowed over the Earth at supersonic speed. Along with his interest in plasmas in space, Rossi had also become convinced of the importance of exploring the X-ray window of the Universe. A few days after the first meeting of the Space Science Board, he expressed the opinion that exploratory work in X-ray astronomy should be included in the satellite program for the 1959–1960 period, and this proposal suggested preliminary experiments with balloons. Most scientists were skeptical about the possibility of revealing astrophysical X-rays, but NASA was quite tolerant
because there were very few scientific projects and the Americans needed to refine the technology of space launchers. Before being absorbed, X-rays of a few keV can cross galactic distances, but this was the first time anyone had ever looked through the soft X-ray window of the spectrum, a possibility uniquely allowed by access to space. Rossi's "exploratory philosophy" was very simple: "since space might be transparent to X-rays, and since there are many ways that X-rays can be generated in space and by stars, you should go and see what you can find. The sensitivity of the instruments seemed far from that considered necessary for detecting remote X-ray sources, but no one has yet explored the sky with X-ray detectors as sensitive as I hope can be developed, and this, for me, is a sufficient reason to undertake this exploration. My long experience as a cosmic ray physicist has taught me that when you enter an unexplored territory, there is always the possibility of finding something unexpected." A commercial company that had worked on developing tools for measuring X-rays and gamma rays from nuclear explosions, and was restructuring toward peaceful activities, accepted the risk of developing the new instruments needed. The 28-year-old Riccardo Giacconi (1931–2018), a former student of Giuseppe Occhialini at the University of Milan, was chosen as the program manager. Rossi and Giacconi considered using a new instrument: a "pancake detector" (Fig. 4.15), whose shape offered a wide ratio between the window area and the sensitive volume. The chosen detector geometry allowed the direction of sources to be determined with great precision. After two failed launches (in which the detectors were destroyed), in June 1962 a successful launch took place from New Mexico. The payload carried a magnetometer and three pancake detectors with windows of different thicknesses to infer the energy of the radiation.
A. De Angelis

Fig. 4.15 Sketch of the geometry of a pancake detector that allows 2/3 of a hemisphere to be visualized at any time. From Proposed Experiment for the Measurement of soft X-rays from the Moon, note ASE-83-I, 25 October 1960, Rossi Papers, MIT Archives, Box 34

The rocket remained above 80 km for 350 s, reaching a maximum altitude of 225 km. Two of the three counters worked reliably (as in the famous Hess balloon flight). At an altitude of 80 km, beyond the dense layers of the atmosphere, an increase in the frequency of the counter pulses revealed an X-ray signal. Most of the radiation consisted of "soft" X-rays originating from sources beyond our solar system. Scorpius X-1, a binary system located approximately 9,000 light-years away from the Earth, was identified as one of the sources. Additionally, the experiment demonstrated the existence of an X-ray background radiation in the Universe, an extremely fascinating discovery. The discovery of the first extrasolar X-ray source was announced in August 1962 at Stanford, and although it was met with enthusiasm, some remained skeptical. Giacconi authored the article presenting the findings, and this article was only published in the Physical Review after Rossi personally took responsibility for the claims made. This was a significant milestone for Rossi, who thirty years earlier had sought validation for his own scientific results from Werner Heisenberg. This discovery marked the beginning of a new field of astronomy, providing crucial insights into previously unknown processes and new classes of stellar objects. Subsequently, X-ray emission from the Crab Nebula and its pulsar was revealed. The launch of the Uhuru instrument in 1970 led to the discovery of X-ray emission from binary systems containing a neutron star or a black hole (stellar-mass black holes were first discovered in this context). Focusing X-rays. The interest in X-ray astronomy increased further with the launch in 1978 of Einstein, the first observatory with an X-ray telescope capable of focusing images. Total external reflection of X-rays at grazing incidence was the basic principle of the new instrument (Fig. 4.16). Observations came to include a wide class of celestial objects. This telescope was followed by the ROSAT satellite in 1990, which carried out an all-sky survey and identified many of the targets for subsequent X-ray observations.


The Chandra X-ray telescope (Fig. 4.17) was launched in July 1999. Its 1.2 m mirror assembly has a collecting area of approximately 1,000 cm². The observatory was placed in a highly elliptical orbit with an initial altitude between 10,000 and 139,000 km (the spacecraft spends most of its time outside the Van Allen belts). Chandra has operated continuously since its launch and has studied the X-ray emission of a wide range of astrophysical phenomena. XMM-Newton, a telescope similar to Chandra managed by the European Space Agency (ESA), has also been operational since 1999. Coded masks. The energy of hard X-rays (beyond 5–10 keV) is too high to allow for reflection, even at grazing incidence: these X-rays pass through mirrors. Therefore, coded mask technology, developed in the 1990s, is used (Fig. 4.18). In a coded mask telescope, the images from multiple apertures overlap on the detector. An algorithm (which depends on the exact configuration of the coded mask) is needed to reconstruct the original image and the direction of the photons' origin. In this way a sharp image can be obtained, tolerant to faults in individual sensors. An example of an X-ray telescope that uses a coded mask is INTEGRAL, a collaboration between ESA, the Russian space agency, and NASA, launched in 2002 and still operational.
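The decoding step can be illustrated with a one-dimensional toy model (a hypothetical 7-element mask, not INTEGRAL's actual pattern): a point source projects a shifted copy of the mask onto the detector, and cross-correlating the recorded counts with the mask pattern recovers the source direction as a peak.

```python
import numpy as np

# 1-D toy model of a coded-mask telescope (illustrative numbers).
# Open positions {1, 2, 4} mod 7 form a perfect difference set, so the
# mask autocorrelation is flat except at zero lag.
mask = np.array([0, 1, 1, 0, 1, 0, 0])
N = len(mask)

def detector_image(source_shift):
    # A point source at angle `source_shift` projects the mask pattern
    # onto the detector, shifted by that amount.
    return np.roll(mask, source_shift)

def decode(counts):
    # Cross-correlate detector counts with the mask pattern; the peak
    # of the decoded sky map marks the source direction.
    return np.array([np.sum(counts * np.roll(mask, k)) for k in range(N)])

d = detector_image(3)
sky = decode(d)
print(sky)                   # peak (value 3) at index 3, flat sidelobes of 1
print(int(np.argmax(sky)))   # recovered source position: 3
```

With a well-chosen mask pattern, as here, each sky position produces a unique shadow, which is what makes the reconstruction unambiguous.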

Binary Systems
One of the most interesting sectors in stellar astronomy is binary stars, which we have already mentioned. X-ray emissions are frequently observed from binary stars, where matter within the accretion disk is accelerated in the strong gravitational field of the compact star, generating X-rays as it falls and releases gravitational energy. The compact object can be either a neutron star or a black hole; for example, Scorpius X-1, the first X-ray source detected outside of our solar system, hosts a neutron star. The X-ray output is not constant, and there is notable variability. X-ray production in the accretion disk of a binary system has even more spectacular aspects in the merger of two neutron stars, which we will discuss in the next section concerning gravitational waves.

Fig. 4.16 Modern version of the focusing soft X-ray telescope designed by Giacconi and collaborators. From Wikimedia Commons

Fig. 4.17 The Chandra telescope, launched in July 1999. Credit: NASA

Supernova Remnants and Pulsars in X-Rays
In addition to being often bright sources of visible light, supernova remnants and pulsars also emit intense X-rays, making them crucial targets for X-ray astronomers. The Crab Nebula, the remnant of the supernova of 1054 AD and one of the most fascinating objects in the sky, has long been used as a "calibration source" (i.e., to calibrate signals and in technical tests to verify the proper functioning of detectors). One of the strongest sources of X-rays and gamma rays in the sky, it is visible from both hemispheres and was long considered stable (recently, it has been discovered that its flux is subject to variations not yet fully understood).

Fig. 4.18 Scheme of operation of a coded mask telescope. From Wikimedia Commons

The Crab Nebula has morphological and spectral characteristics typical of expanding compact objects surrounded by matter: it includes an inner ring and two jets visible in X-rays. Along the inner ring, there are knots whose expansion is observable. Figure 4.19 shows a combination of observations from telescopes at various wavelengths, from visible to ultraviolet to X and gamma, and demonstrates the power of multiwavelength astronomy. Much remains to be done to uncover the physics of pulsars and nebulae, and X-ray and gamma observations are playing a crucial role in this endeavor. The emission spectrum of the Crab Nebula is shown in Fig. 4.19. It is also an example of an important process: the so-called "synchrotron self-Compton mechanism," which explains how gamma rays reach their energies. Gamma rays are neutral and therefore cannot be accelerated by electromagnetic fields. However, photons emitted by electrons as synchrotron radiation can "bounce" back against the moving electrons and increase their energy like pinball balls. This process is called the "inverse Compton effect," and it is at the origin of the gamma-ray energy increase. The combination of synchrotron radiation and the inverse Compton mechanism is called the "synchrotron self-Compton mechanism," or SSC. In the SSC, electrons accelerated in a magnetic field—such as the field present in the accretion region of an Active Galactic Nucleus (AGN), i.e., of a supermassive black hole growing at the expense of nearby matter, or in the surroundings of supernova remnants—generate synchrotron photons. Such photons in turn interact via Compton scattering with their own parent electron population (Fig. 4.20); since the electrons can be very fast, rescattered photons can be boosted by a large factor. The resulting differential energy spectrum of photons explains the emission from the Crab Nebula.

Fig. 4.19 Emission spectrum of the Crab Nebula. From Yuan, Yin et al., http://arxiv.org/abs/1109.0075

Fig. 4.20 The SSC model and the resulting differential energy spectrum of photons

In addition to the photon acceleration mechanisms, two decades of observations with modern X-ray detectors—Chandra in particular—have allowed

us to understand much about the acceleration of charged cosmic rays. In fact, the filaments of supernova remnants are visible in X-rays, and their expansion speed is measurable. The remnant of a famous supernova explosion, seen and studied by Tycho Brahe in 1572 and now called Tycho's supernova, can be seen in X-rays in Fig. 4.21. Its current expansion is so fast (approximately one hundredth of the speed of light) that the increase in the size of the spherical region is easily visible by comparing X-ray images taken in 2000 and 2015.

Fig. 4.21 The Tycho supernova remnant. From Wikimedia Commons

The SSC is a mechanism involving photons and electrons, but no hadrons: it is called a (purely) leptonic mechanism of high-energy gamma-ray production. On the other hand, gamma rays can also be the byproduct of reactions initiated by accelerated protons (or other nuclei) smashing into other nuclei or low-energy photons. The discovery of the presence of protons accelerated to the highest energies could open the way to identifying the sources of the ultrahigh-energy cosmic rays that bombard the Earth's atmosphere. We shall see that gamma-ray astrophysics can distinguish between hadronic and leptonic production mechanisms (we expect many sources to produce gamma rays through a combination of both). Typically, photons produced in hadronic reactions have an energy an order of magnitude smaller than that of the parent hadron.

Fig. 4.22 The galaxy Centaurus A, seen at various wavelengths. Credit: NASA
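The figures quoted above for Tycho's remnant can be checked with a small calculation; the distance of roughly 3 kpc assumed below is not given in the text and is only indicative.

```python
# Why the growth of Tycho's remnant is visible between 2000 and 2015:
# expansion at ~c/100 over 15 years, seen from an assumed ~3 kpc.
c = 3.0e5                 # speed of light, km/s
v = c / 100.0             # expansion speed quoted in the text, ~3,000 km/s
years = 15.0
km_per_pc = 3.086e13
sec_per_yr = 3.156e7

growth_pc = v * years * sec_per_yr / km_per_pc        # radial growth, parsecs
distance_pc = 3000.0                                  # assumed distance
angle_arcsec = growth_pc / distance_pc * 206265.0     # small-angle formula

print(round(growth_pc, 3), round(angle_arcsec, 1))
# a few arcseconds: easily resolved by Chandra's sub-arcsecond imaging
```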

Active Galactic Nuclei
Another extremely interesting object is Centaurus A, the closest active galaxy to us, about 3.5 megaparsecs away. Centaurus A contains a supermassive black hole of over one hundred million solar masses, and its emission has been studied for decades at all measurable wavelengths. As we will see in the next chapter, it is a candidate site for the acceleration of very high-energy cosmic rays. Optically, Centaurus A appears as an elliptical galaxy with a strip of dark dust across its center. Figure 4.22 shows the image obtained by Chandra. We can resolve its complex X-ray structure into several components: the luminous central nucleus that hosts the black hole; a jet (the counter-jet is confused with the interstellar medium); and an extended disk with a radius of 20 parsecs. Centaurus A has recently "swallowed" a smaller spiral galaxy, and the remains of the spiral are still visible in the central region.


Fig. 4.23 The Coma Cluster, in the constellation Coma Berenices, approximately 100 megaparsecs from the solar system. From Wikimedia Commons

Galaxy Clusters
One of the significant accomplishments of X-ray astronomy has been the detection and measurement of X-ray emissions from galaxy clusters, which has provided valuable insights into the Universe. Galaxy clusters are the most massive bound objects in the Universe, comprising hundreds or even thousands of galaxies held together by their mutual gravitational attraction. They can have enormous diameters spanning millions of light years. These clusters emit X-rays because the deep gravitational wells created by their massive structures cause the intergalactic gas to fall toward the cluster center and collide, raising its temperature to millions of kelvin. At these temperatures, thermal emission occurs at X-ray wavelengths. We can represent X-rays using false colors to obtain an idea of the presence of hot gas in a galaxy cluster. As an example, the Coma Cluster is shown in Fig. 4.23, where we can see that the hot gas is denser toward the center of the cluster, as expected in the case of gravitational infall.
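A quick order-of-magnitude check of why such gas shines in X-rays: the typical thermal photon energy is of order kT, which for tens of millions of kelvin lands in the keV band (the temperature used below is illustrative).

```python
# Thermal energy scale kT of hot intracluster gas.
K_B = 8.617e-5  # Boltzmann constant in eV per kelvin

def thermal_energy_keV(T_kelvin):
    # kT converted from eV to keV
    return K_B * T_kelvin / 1e3

print(round(thermal_energy_keV(5e7), 2))  # ~4.3 keV: soft X-rays
```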


eROSITA and ATHENA
X-ray astronomy is still at the forefront of high-energy astrophysics, and large X-ray missions are planned for the present and near future. eROSITA is a Russian-German telescope successfully launched from the Baikonur Cosmodrome in Kazakhstan in July 2019 and placed in orbit around the Lagrangian point L2. It is performing the first all-sky survey in the medium-energy X-ray range, up to 10 keV, with unprecedented spectral and angular resolution. In the late 2030s, the launch of the large ESA mission called ATHENA is planned, with even higher resolution and a larger field of view.

The Gamma Rays' Violent Universe
Astrophysics based on the detection of protons and ions requires extremely high energies, and the observational window is very small, being limited at the lowest energies by the requirement that the cosmic magnetic fields do not randomize their directions, and at the highest energies by the GZK cutoff. That is why complementary and indirect research lines are followed. The one that has produced the most spectacular results in recent years is the detection of gamma rays, the highest energy photons, for which we have developed efficient detection techniques. Gamma rays are defined as the most energetic part of the electromagnetic spectrum; the spectrum is unlimited in energy, but for reasons related to flux and absorption in the Universe, gamma rays have so far been observed only up to maximum energies of approximately 1,000 TeV (this energy is called 1 peta-electronvolt, or PeV). The lower energy threshold, which separates X-rays from gamma rays, is arbitrary. An obvious threshold is approximately 1 MeV: a photon with at least enough energy to produce an e+ e− pair is certainly a gamma ray, but in general, photons produced in nuclear transitions, which can have energies of a few hundred keV, can also be called gamma rays. The rationale is that gamma rays are certainly of nonthermal origin. Astronomical sources produce gamma rays with much higher energies than any source we can find on Earth. Quantum mechanics describes electromagnetic radiation in terms of both waves and particles; light particles are called photons. When the energy of the radiation increases, the wavelength shortens. A gamma ray with an energy of 1 GeV corresponds to a wavelength of approximately 1 femtometer (also called a fermi). A femtometer corresponds to a billionth of a micrometer, which


is approximately the radius of a proton. The distinction between waves and particles loses its meaning at these energies: a gamma ray is a particle-like wave! As far back as 1959, Giuseppe Cocconi suggested the possibility of detecting high-energy photons from cosmic sources; gamma photons could be separated from the background because they point back to their source, and Cocconi also suggested observing the Crab Nebula, which, based on his calculations, could be a source of gamma rays. His proposal motivated Aleksandr Chudakov of the Lebedev Institute to build the first gamma-ray telescope in Crimea a few years later; however, this telescope failed to detect a signal from the Crab Nebula. The observation of gamma rays required thirty more years and the development of more powerful instruments, but it has since become a standard tool of astrophysics. Gamma rays are observed today both directly, by detectors on satellites, and indirectly, by detecting on the ground the particle showers generated in their interaction with the atmosphere. This second technique, however, only allows the observation of gamma rays at the highest energies, since low-energy gamma rays are absorbed in the upper layers of the atmosphere. The cosmic flux of gamma rays is very small compared to the total flux of charged cosmic rays (approximately one thousandth at 1 TeV). However, gamma rays, being neutral and thus not deflected by magnetic fields, point directly to their sources; this property increases the signal-to-noise ratio when observing them. In addition to nuclear processes (which can account for production up to energies of a few tens of MeV), astrophysical gamma rays can come from the radiation of accelerated particles or from particle collisions; in the latter cases, they are the signature of charged particles orders of magnitude more energetic, and thus allow us to point to the same sources that accelerate cosmic rays.
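The energy-wavelength correspondence quoted above (1 GeV corresponding to about 1 fm) follows from λ = hc/E; a quick numerical check:

```python
# Photon wavelength from energy, using hc ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84

def wavelength_m(energy_eV):
    # λ = hc / E, converted from nanometers to meters
    return HC_EV_NM / energy_eV * 1e-9

lam = wavelength_m(1e9)  # a 1 GeV gamma ray
print(lam)               # ~1.24e-15 m: about one femtometer, a proton radius
```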
The propagation of gamma rays allows the measurement of the background of photons in the Universe: gamma rays can interact with this background, and their flux can, as a consequence, be attenuated. By studying the polarization of gamma rays emitted by active galactic nuclei (AGN), we can obtain information about the properties of the intergalactic magnetic fields. These studies can help us understand how magnetic fields influence the formation and evolution of galaxies and how they act on the distribution and dynamics of high-energy particles. Moreover, gamma rays coming from AGN can be used to probe the vacuum and test predictions of Einstein's General Relativity and of quantum gravity theories—although at first order the interaction of gamma rays with the neutral vacuum is forbidden, it is allowed at higher orders, as we will see. Gamma rays can also be a signature of dark matter. The products of the annihilation of pairs of dark matter particles—the so-called self-annihilation—


include, if the dark matter particles are heavy enough, all known particle types: matter, antimatter, neutrinos, and gamma rays. Gamma rays (as well as neutrinos, which are more difficult to detect) point to the locations where these self-annihilations occur. Clearly, these locations should be sought among the regions where gravity is strongest, as these are the regions of dark matter accumulation, given that dark matter interacts gravitationally. Galactic nuclei, near massive central black holes, host the regions of the Universe where gravity is the strongest. Gamma rays can also be used as a probe for the presence of new hypothetical particles—we will see later the example of the so-called axion-like particles—that can interact with gamma rays and cause them to slow down or change direction (or polarization state). Photon polarization refers to the orientation of the electric and magnetic fields that make up a photon's electromagnetic wave, perpendicular to the direction of propagation; it is a fundamental property of light. There are two main types of polarization: linear and elliptical. Linear polarization occurs when the photon's electric field oscillates in a particular direction perpendicular to the propagation. Elliptical polarization occurs when the electric field rotates around the propagation axis in a regular way (it can, in particular, be circular). Unpolarized light is light in which the orientation of the electric field oscillations is random. Some processes (Compton scattering, for example) produce light with a definite state of polarization, and others change the polarization state. We have a common example of polarization when we look through polaroid lenses. Polaroid lenses are a type of polarizing filter that can selectively block or transmit light waves based on their polarization direction. They are made up of a thin sheet of polymeric material that contains long chains of molecules aligned in a particular direction.
This alignment makes the material anisotropic, meaning it has different optical properties in different directions. When unpolarized light enters a polaroid lens, the aligned molecules in the material block all the light waves oscillating perpendicular to their alignment, allowing only the light waves that are oscillating in the same direction to pass through. As a result, the light that emerges from a polaroid lens is linearly polarized in a single direction. Polaroid lenses have various practical applications, such as sunglasses and camera filters. Polarization detectors are in principle similar to polaroid lenses. By measuring the energy distribution and polarization of gamma rays coming from distant sources, such as active galactic nuclei or gamma-ray bursts, scientists can search for evidence of particles in the vacuum interacting with photons, constraining their properties. In summary, in addition to their importance for the astrophysics of stars and compact objects and the study of cosmic rays, gamma rays play an important role in cosmology by providing a window into the early Universe, helping us to


understand the formation and evolution of galaxies and black holes and aiding in the search for dark matter and dark energy. However, how can gamma rays be detected, since the atmosphere is a shield for these photons, absorbing them through the generation of particle showers?

Space-Based Detectors
The study of gamma rays from satellites was preceded by the pioneering phase of studying the emission of astrophysical sources in the X-ray band, which revealed many new aspects of these sources. The first X-ray satellites discovered more sources than expected, as well as strange phenomena such as extremely violent emissions of high-energy radiation—the gamma-ray bursts (GRBs), which emit more radiant energy in a few seconds than the rest of the Universe combined. Studies then began on how to build instruments able to detect gamma rays up to the highest energies. The most convenient orbit in which to place a gamma-ray satellite is a so-called low-Earth orbit (LEO). The term LEO refers to an orbit around the Earth with an altitude of less than 2,000 km above the Earth's surface; the most convenient region is below 700 km, i.e., below the inner Van Allen belt, to reduce the noise from background radiation. Satellites in a LEO typically orbit the Earth once every 90 min or so. LEOs are important for space exploration and communication. Many satellites, including the International Space Station (ISS), at an altitude of approximately 410 km, are in a LEO. LEOs are also used for remote sensing, Earth observation, and weather forecasting. Because LEOs are relatively close to Earth, they allow high-resolution imaging and rapid communication with ground stations. In a LEO, the accidental background from cosmic rays mimicking a gamma conversion is severely reduced. The most difficult aspect of building a gamma-ray telescope is that gamma rays penetrate all solid surfaces; thus, gamma-ray mirrors cannot be made. This means that the normal optical methods that can be used across all other wavelength ranges, from the longest radio waves to the shortest X-rays, are useless in gamma-ray telescopes. There are three main processes by which gamma rays can be detected.
• Photoelectric effect.
It occurs when gamma rays interact with the electrons in atoms, transferring their energy to the electrons and ejecting them from their atoms. The energy of the incident photons determines the kinetic energy gained by the electrons. This process is relevant for incident photons of energy, say, below 1 MeV.


• Compton scattering. When a gamma ray interacts with a material, it can scatter off the material's electrons. The scattered electrons produce an electrical signal that can be measured and used to determine the energy of the gamma ray. This process is relevant for incident photons of energy, say, between a few keV and a few MeV.
• Pair production. When a gamma ray with sufficient energy passes through a material, it can interact with an atomic nucleus to create an electron-positron pair. These charged particles produce a signal that can be measured, providing information on the gamma ray's energy.

How can gamma rays be detected at the highest energies? On the one hand, the flux of gamma rays decreases quickly with energy, roughly as the inverse of the square of the energy (doubling the energy reduces the number of gamma photons by a factor of 4) or faster; on the other hand, the dominant process becomes pair production. There is therefore a need for detectors as large as possible, with enough material to cause the conversion of photons into pairs. EGRET. The first example of a gamma detector on a satellite was EGRET (Energetic Gamma-Ray Experiment Telescope). EGRET was a key instrument aboard the Compton Gamma-Ray Observatory, a satellite launched by NASA in 1991 to study high-energy gamma rays from celestial sources. EGRET was designed to detect gamma rays in the energy range of 30 MeV to 30 GeV. The core of the EGRET detector was a spark chamber. When a charged particle enters this kind of chamber, it ionizes the gas molecules along its path; the ionized atoms release electrons, which create sparks along the path of the particle, producing an electrical signal through an appropriate sensor. By analyzing the pattern of sparks, scientists can determine the direction, energy, and charge of the incoming particle. A calorimeter and a scintillation counter surrounded the spark chamber.
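Direction reconstruction from hits in stacked detector planes can be sketched as a straight-line fit; the coordinates below are made up for illustration and this is not the actual EGRET reconstruction algorithm.

```python
import numpy as np

# Toy reconstruction of a particle track from hit positions recorded
# in parallel detector planes (illustrative numbers only).
z = np.array([0.0, 1.0, 2.0, 3.0])   # plane positions along the instrument
x = np.array([0.1, 0.6, 1.1, 1.6])   # measured hit coordinates per plane

# Least-squares straight-line fit x = a*z + b: the slope `a` gives the
# projected arrival direction of the particle.
a, b = np.polyfit(z, x, 1)
print(a, b)   # slope 0.5, intercept 0.1 for these collinear hits
```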
EGRET made many important discoveries during its nine-year mission, detecting more than 170 sources of gamma rays, including active galactic nuclei, pulsars, and supernova remnants. It also provided new insights into the nature of gamma-ray bursts and contributed to the understanding of high-energy processes in the Universe. EGRET was also one of the first detectors to demonstrate the existence of diffuse gamma-ray emission from the Milky Way, partly due to interactions between cosmic rays and interstellar gas. AGILE, the Fermi satellite and the Fermi-LAT detector. Surpassing EGRET with a large satellite (Fig. 4.24) required a significant international collaboration involving astrophysicists and elementary particle physicists, "mixing" the two research fields. NASA's Fermi satellite, originally called GLAST


Fig. 4.24 On the left, the Fermi satellite. On the right, the layout of the Large Area Telescope (LAT) and the principle of operation are shown. Credits: NASA

(NASA likes to change the names of its satellites after they have successfully entered orbit), was conceived in 1994 by Bill Atwood's group at Stanford, together with Peter Michelson and NASA's Dave Thompson, and was born at the end of the twentieth century from a collaboration between the United States, Italy, Japan, France, and Sweden. Launched in 2008, it orbits at a distance from the Earth of approximately 565 km with a revolution period of 95 min; it has been designed to operate for at least twenty years. Fermi's operating scheme is illustrated in Fig. 4.24. The heart of the instrument, which has a surface area of approximately 1.8 × 1.8 m², is the Large Area Telescope, or LAT, built by Italian industry in collaboration with research institutes. It records the conversion of gamma photons in the tracker, a sequence of parallel planes of silicon detectors interspersed with tungsten conversion planes. The Fermi satellite weighs approximately three tons and, thanks to sophisticated electronics, consumes only 500 W (like five incandescent light bulbs). In its first year alone, the Fermi satellite identified more than 1,500 sources of gamma rays with energy greater than one tenth of a GeV, and to date the sources of the "Fermi catalog" number more than 5,000 (Fig. 4.25). Fermi is also equipped with a gamma-ray burst monitor (GBM) sensitive to transient events. The GBM includes 14 detectors sensitive to light (scintillators) oriented to cover a large portion of the sky, albeit with coarse positional reconstruction, sensitive to X-rays and low-energy gamma rays. GRBs are detected


Fig. 4.25 Map of the Universe in galactic coordinates, i.e., a projection in which the Milky Way is at the Equator, showing the approximately 5,000 gamma-ray emitters at energies larger than 100 MeV; the map is derived from data collected in the first eight years by the Fermi satellite. Credit: NASA

by a significant change in the count rate in at least two of the scintillators. After a trigger, the GBM processor provides a preliminary estimate of the source's position and energy. The result is sent to the ground segment, but it can also serve for a possible autonomous repointing of the LAT. The GBM detects more than 200 GRBs per year. Gamma rays witness processes that are particularly violent and far from thermal equilibrium. In particular, a portion of the energy released in the gravitational collapses of supermassive systems is emitted in the form of gamma rays, which have allowed and continue to allow us to photograph (or rather "film," given the rapid variability of the processes involved) these cataclysmic events. The sky seen by the Fermi satellite is a sequence of lights that turn on and off on time scales often on the order of a day and sometimes even a few minutes. In Fermi's most recent catalog of gamma-ray sources with energies greater than 30 MeV, about a third of the over 5,000 emitters are not positionally associated with known astrophysical objects. This adds a special thrill to the research. An all-Italian precursor of the Fermi satellite is the AGILE (Astrorivelatore Gamma a Immagini LEggero) satellite, designed, built, and operated by the Italian Space Agency together with the National Institute of Nuclear Physics (INFN), the National Institute of Astrophysics (INAF), and Italian industries of excellence, supervised by Guido Barbiellini and Marco Tavani. AGILE served as a pathfinder for Fermi, being launched one year earlier; despite a sensitivity of approximately one tenth that of the Fermi telescope, it has provided and continues to provide important scientific information. After Fermi, in 2015, an Italian-Chinese collaboration launched the DAMPE satellite, which should


Fig. 4.26 Overview of the ASTROGAM concept payload showing the silicon tracker, the calorimeter, and the anticoincidence system. Credit: ASTROGAM Collaboration

be the precursor of a new large detector, HERD, to be launched after 2030. In short, the sky is big enough for everyone! MeV astrophysics and the ASTROGAM concept. The gamma-ray energy range from a few hundred keV to a few MeV is still largely unexplored, mainly due to the challenging nature of the measurements. It is crucial for cosmic ray physics, since it identifies hadronic interactions through nuclear emission lines and through the decays of neutral pions. We believe this will be one of the main frontiers of the next decades. Most of the data currently available are due to the COMPTEL instrument, which operated three decades ago. In COMPTEL, gamma rays were detected by two successive interactions: an incident cosmic gamma ray was first Compton scattered in the upper detector and then totally absorbed in the lower detector. Data obtained by COMPTEL could be used to reconstruct sky images over a wide field of view with a resolution of a few degrees, producing the first map of the Universe in the MeV range. Improvements in the technology of solid-state detectors make it possible to build a better detector, allowing the simultaneous measurement of the energy and direction of gamma rays from the MeV range to the GeV range. This can be achieved by combining a Compton telescope and a pair-production telescope in a single instrument. This approach is known as the ASTROGAM concept. It consists of a telescope made of tens of planes of silicon microstrip detectors, a calorimeter made of scintillation detectors, and anticoincidence detectors to veto the charged-particle background (Fig. 4.26). The concept is close to the design of the pair-production detector Fermi-LAT, but without a converter and with double-sided Si strip detectors instead of single-sided detectors, to make the three-dimensional reconstruction of Compton


interactions possible. Removing the converter improves the angular resolution for low-energy gamma rays. In the MeV energy region, the ASTROGAM concept allows an improvement of one to two orders of magnitude in sensitivity compared to COMPTEL (thanks to the increased effective area and the improved technology) and thus illuminates the dark MeV region. The ASTROGAM concept was first proposed in 2001 by Gottfried Kanbach and collaborators at the Max Planck Institute for Extraterrestrial Physics in Munich and later elaborated in proposals to ESA—particularly by Vincent Tatischeff, Tavani, and Alessandro De Angelis. The concept was also developed at NASA, thanks mostly to a group led by Alexander Moiseev. An ASTROGAM detector will achieve a spectacular improvement in terms of source localization accuracy and energy resolution and will allow us to measure the contribution to the radiation of the Universe in a still unknown range. The sensitivity of ASTROGAM in the line and continuum modes will reveal the transition from nuclear processes to those involving magnetic and gravitational interactions.
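The principle of a Compton telescope rests on Compton-scattering kinematics: the measured energy of the scattered photon fixes the scattering angle, constraining the arrival direction to a cone. A minimal sketch with illustrative numbers:

```python
import math

# Compton kinematics, the basis of a Compton telescope.
ME = 0.511  # electron rest energy, MeV

def scattered_energy(E, theta):
    # E' = E / (1 + (E / me c^2)(1 - cos θ))
    return E / (1.0 + (E / ME) * (1.0 - math.cos(theta)))

def scatter_angle(E, E_prime):
    # Invert the relation: recover θ from the measured energies,
    # which in a real telescope defines the event's Compton cone.
    cos_t = 1.0 - ME * (E - E_prime) / (E * E_prime)
    return math.acos(cos_t)

E = 2.0                                   # incident 2 MeV gamma ray
Ep = scattered_energy(E, math.radians(60.0))
theta = math.degrees(scatter_angle(E, Ep))
print(round(Ep, 3), round(theta, 1))      # the 60-degree angle is recovered
```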

Ground-Based Detectors

Given the current cost of space technology, satellites are limited in size. As a consequence of the rapid drop of the gamma-ray flux with increasing energy, the highest energies that can be detected by satellites are approximately 100 GeV (at this energy, the sky's most luminous gamma-ray source sends less than one photon a day to an area as small as that of the Fermi satellite, the largest gamma-ray telescope ever placed in orbit). To explore energies beyond hundreds of GeV, it is therefore necessary to use ground-based instruments, which reveal the particle showers produced by the interaction of gamma rays with the atmosphere.

A shower is a cascade of secondary particles produced when a high-energy particle interacts with matter, in our case with atomic nuclei in the atmosphere. The incoming particle interacts, producing multiple new particles of lower energy; these particles interact in turn, and the process continues until many thousands, millions, or even billions of low-energy particles are produced. Below a low energy threshold, they are stopped and absorbed in matter.

There are two basic types of showers. Electromagnetic showers are produced by particles that interact primarily or exclusively via the electromagnetic interaction: in the case of cosmic rays, typically primary high-energy photons or electrons. Hadronic showers are produced by hadrons, most frequently protons and heavier nuclei, and proceed mostly via the strong nuclear force.

An electromagnetic shower begins when a high-energy photon enters the atmosphere (electrons are less frequent, since their free paths in space are shorter because of energy losses by radiation). At high energies (above a few MeV), the photoelectric effect and Compton scattering are insignificant: photons interact with target atoms primarily via pair production, that is, they convert into electron-positron pairs. High-energy electrons and positrons primarily emit photons, a process called bremsstrahlung (from the German for "braking radiation"). These two processes (pair production and bremsstrahlung) alternate, leading to a cascade of particles of decreasing energy, until photons fall below the pair-production threshold and energy losses of electrons other than bremsstrahlung start to dominate. The characteristic amount of matter traversed in these related interactions is called the radiation length: it is roughly the mean distance over which a high-energy electron loses 63% of its energy by bremsstrahlung, as well as, approximately, the mean free path for pair production by a high-energy photon. The longitudinal size of the cascade scales with the radiation length, and the depth of the shower maximum grows approximately with the logarithm of the energy. The average longitudinal profile of the energy deposition in showers is reasonably well described by an asymmetrical bell-like curve called the "gamma distribution" (Fig. 4.27).

Fig. 4.27 Longitudinal shower development as a function of the number of radiation lengths (r.l.) from a photon-initiated cascade. The parameter s describes the shower age, and is 1 at the maximum development. From R. M. Wagner, Ph.D. thesis, 2006, Technische Universität München, MPP-2006-245

In a hadronic shower, for each interaction, about half of the incident hadron energy is passed on to additional secondaries. The remainder is consumed in multiparticle production of slow pions and in other processes. The phenomena which determine the development of hadronic showers are hadron production, nuclear deexcitation, and pion and muon decays. Neutral pions amount, on average, to 1/3 of the produced pions; since they promptly decay into two gamma rays, their energy is dissipated in the form of electromagnetic showers. Another important characteristic of hadronic showers is that they take longer to develop than electromagnetic ones (Fig. 4.28).

Fig. 4.28 Schematic representation of two atmospheric showers initiated by a photon (left) and by a nucleus (right). The lateral profile is amplified for clarity: electromagnetic showers are in reality less than one degree wide. From R.M. Wagner, Ph.D. thesis, 2006, Technische Universität München, MPP-2006-245

The particle showers produced by gamma rays can be distinguished from those produced by protons, which are a thousand times more numerous, using sophisticated classification and recognition techniques that exploit the differences in longitudinal and transverse development and the fact that hadronic showers are more subject to fluctuations.
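The logarithmic growth of the shower with energy can be seen in the simplest toy description of an electromagnetic cascade, the Heitler model: every radiation length each particle splits its energy in two, until the energy per particle drops below a critical energy. A sketch (the critical energy of roughly 85 MeV in air is an assumed round number, not from the text):

```python
import math

def heitler_em_shower(e0_gev, ec_gev=0.085):
    """Toy (Heitler) model of an electromagnetic shower: every radiation
    length, each particle splits its energy in two (via bremsstrahlung or
    pair production) until the energy per particle falls below the
    critical energy Ec, where multiplication stops.

    Returns (depth of shower maximum in radiation lengths,
             number of particles at maximum).
    """
    n_max = e0_gev / ec_gev        # particle count when energy reaches Ec
    depth_max = math.log2(n_max)   # number of doublings needed
    return depth_max, n_max
```

Raising the primary energy tenfold adds only log2(10), about 3.3, radiation lengths to the depth of maximum, while the particle count at maximum grows linearly with energy.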

Extensive Air Shower (EAS) Detectors

The number of charged particles produced by a typical electromagnetic shower generated by high-energy gamma photons has a maximum at an altitude of five to ten kilometers and is negligible at sea level. Therefore, if we want to detect gamma rays with instruments that directly detect the particles in the shower (a technique called Extensive Air Shower, or EAS), we need to place the instruments at high altitudes, with significant logistical problems. An example of an EAS instrument for detecting gamma rays is the High Altitude Water Cherenkov detector HAWC, located on the flanks of the Sierra Negra volcano near Puebla, Mexico, at an altitude of 4100 m. The detector, with a surface of 22,000 m², has an instantaneous field of view covering

Fig. 4.29 Layout of the LHAASO detector. Credit: LHAASO Collaboration

15% of the sky, and during each 24 h it observes two-thirds of the sky. The HAWC observatory performs a high-sensitivity synoptic survey of the gamma rays from the Northern Hemisphere. Upon striking the upper atmosphere, high-energy gamma rays create positron-electron pairs that move at large speeds. HAWC consists of large metal tanks, 7.3 m wide and 5 m high, each containing a light-tight bladder holding 188,000 liters of water and equipped with photosensors (photomultipliers, which amplify the signal of the photons, creating an avalanche of electrons) that detect the light emitted by superluminal charged particles in the water via the so-called Cherenkov effect. Charged particles can travel at speeds faster than light in water (recall that this does not violate the theory of relativity, as the speed of light in a transparent material is c/n, where n is the refractive index, greater than one). In this case, they emit a flash of light, the so-called Cherenkov light, named after the discoverer of the phenomenon, Pavel Alekseevich Cherenkov (Novaya Chigla 1904 - Moscow 1990), Nobel Prize winner in 1958 for this discovery. Cherenkov light is the optical equivalent of the supersonic "bang" for sound waves. The light is mostly visible, and the emission is more intense in the blue region. Another example is the Chinese Large High Altitude Air Shower Observatory LHAASO, designed by Zhen Cao and collaborators and located at 4.4 km altitude in Sichuan, China. A layout of LHAASO is shown in Fig. 4.29. LHAASO's particle detector arrays comprise more than 5,000 electromagnetic particle detectors and approximately 1,200 muon detectors located in the square-kilometer complex array, a 78,000 m² water Cherenkov detector array, and 18 wide-field-of-view Cherenkov telescopes. Using these four detection


techniques, LHAASO can measure cosmic rays omnidirectionally, with multiple variables simultaneously. The arrays will cover an area of approximately 1.36 km². LHAASO has recently discovered a dozen accelerators in the Galaxy producing photons with energies from 100 TeV up to 1 PeV (1,000 TeV); these accelerators are called, in jargon, "PeVatrons" (they are shown in the Galactic plane in Fig. 4.30). Recall that if gamma rays are generated by the interaction of a charged ion with the ambient medium, the gamma-ray energy is approximately a factor of 10 lower than the parent cosmic-ray energy. The spectrum of cosmic rays has a break at a few PeV, referred to as the knee. Below the knee, cosmic rays are believed to be of galactic origin, but the sources where they are produced are still unknown; we are actively hunting for these extreme accelerators within our galaxy. PeV emission seems to be characteristic not only of supernova remnants and active galactic nuclei but also of the more abrupt gamma-ray bursts.
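The factor-of-ten rule above immediately connects the LHAASO photons to the knee; a trivial sketch (function name illustrative):

```python
def parent_cosmic_ray_energy_tev(e_gamma_tev):
    """Rule of thumb from the text: hadronically produced gamma rays
    carry roughly one tenth of the parent cosmic-ray energy."""
    return 10.0 * e_gamma_tev

# LHAASO's PeVatron photons span 100 TeV to 1 PeV (1,000 TeV):
parents = [parent_cosmic_ray_energy_tev(e) for e in (100.0, 1000.0)]
# implied parent cosmic rays of 1 PeV to 10 PeV, i.e. at and above
# the knee of the cosmic-ray spectrum (a few PeV)
```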

Imaging Atmospheric Cherenkov Telescopes

Another technique for detecting astrophysical gamma rays is the Cherenkov technique, which uses the Cherenkov emission of light by charged particles in atmospheric showers. In both electromagnetic and hadronic showers, charged particles can travel at speeds faster than light in the atmosphere. The flash is emitted in a cone with an aperture of approximately one degree relative to the direction of the particle that generates it and travels toward the ground along with the other particles of the shower (Fig. 4.31). Cherenkov telescopes use their large optical surfaces to reflect the weak flash of light onto a matrix of photomultiplier sensors placed in the telescope's focal plane; the information on the individual photomultipliers (pixels) that have received the signal is then digitized. In this way, the gamma ray is photographed as if it were a kind of shooting star, whose flash lasts just 2 or 3 ns; the image is recorded on a computer system and stored for data analysis. Using a technique pioneered by Trevor Weekes, an Irishman who became a professor in Tucson, Arizona, and Michael Hillas from Leeds, UK, the shape of the image allows distinguishing the showers generated by photons from those (much more numerous) generated by protons. This technique allowed the detection of the first very-high-energy gamma-ray emitter in 1989 at the Whipple telescope near Tucson, Arizona; as predicted by Cocconi, this source was the Crab Nebula. Four multitelescope Cherenkov systems for detecting high-energy gamma rays are currently operational; they are structurally and functionally similar.

Fig. 4.30 Map of the Universe showing gamma-ray emitters at energies larger than 100 GeV. Our galaxy lies on the equatorial plane; Milky Way emitters are mostly supernova remnants. Outside the equator, the sources are supermassive black holes in other galaxies. Credit: TeVCat

H.E.S.S. (High Energy Stereoscopic System) in Namibia, operational since 2003, due mostly to Heinrich Völk, Felix Aharonian, and later Werner Hofmann; MAGIC (Major Atmospheric Gamma Imaging Cherenkov telescope) on the Canary island of La Palma (Fig. 4.32), operational since 2004, designed by Eckart Lorenz and later improved thanks mostly to Razmik Mirzoyan and Masahiro Teshima; and VERITAS (Very Energetic Radiation Imaging Telescope Array System) in the Arizona desert, operational since 2006, started by Trevor Weekes as an expansion of Whipple. The construction of the multitelescope system CTA, distributed over two sites in the Northern and Southern Hemispheres, has just started; the first telescopes are being deployed in La Palma, and we will describe them briefly below. In synergy with the Fermi satellite, these detectors are drawing the map of cosmic gamma-ray emitters (and therefore, indirectly, of cosmic rays) in the TeV region. The main advantage of the Cherenkov technique is its high sensitivity and the accuracy of the reconstruction of the direction (less than one-tenth of a degree, to be compared with a few degrees for EAS detectors) and of the photon energy. On the other hand, the EAS technique has the advantage of detecting gamma-ray emissions serendipitously: EAS detectors have a large field of view,

Fig. 4.31 A sketch of the Cherenkov technique. From Wikimedia Commons

while Cherenkov detectors need to be pointed at areas of the sky spanning a few degrees (Fig. 4.33). In addition, EAS detectors can reach higher energies. However, EAS detectors must be built at higher altitudes, posing logistical problems.
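The Cherenkov condition quoted above (speed greater than c/n) can be made quantitative: light is emitted above a threshold energy, at an angle with cos(theta) = 1/(n beta). A short sketch, using standard refractive indices (n = 1.33 for water, n of roughly 1.0003 for air at sea level, both assumptions of this example):

```python
import math

ME_C2_MEV = 0.511  # electron rest energy in MeV

def cherenkov_threshold_kinetic_energy(n, m_c2_mev=ME_C2_MEV):
    """Minimum kinetic energy (MeV) for a charged particle to emit
    Cherenkov light in a medium of refractive index n (beta > 1/n)."""
    beta = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * m_c2_mev

def cherenkov_angle_deg(n, beta=1.0):
    """Cherenkov emission angle in degrees, cos(theta) = 1/(n beta)."""
    return math.degrees(math.acos(1.0 / (n * beta)))
```

For electrons in water the threshold comes out near 0.26 MeV, and for an ultrarelativistic particle the emission angle is about 41 degrees in water but only about 1.4 degrees in air, consistent with the roughly one-degree cone mentioned in the text.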

Supernova Remnants and Cosmic Rays

One of the most important results of Cherenkov telescopes is that the morphology of supernova remnants in the Galaxy demonstrates that these objects are emitters of cosmic rays up to several hundred TeV. A possible mechanism of gamma-ray production in supernova remnants with molecular clouds involves a source of cosmic rays illuminating the clouds and generating hadronic showers through hadronic collisions. This allows us to spot the generation of cosmic rays by studying the photons coming from pion decays in hadronic showers. Recent experimental results support this "beam dump" hypothesis: accelerated protons collide with molecular clouds or photon fields. An example is the supernova remnant IC443. In Fig. 4.34, a region of acceleration at GeV energies is shown as seen by the Fermi-LAT; it is significantly displaced from the centroid of the emission detected at higher energies by the MAGIC gamma-ray telescope, which in turn is positionally consistent with a molecular cloud. The spectral energy distribution of photons also supports

Fig. 4.32 The MAGIC binocular system on the crater of the Taburiente volcano (2,250 m above sea level) on the island of La Palma in the Canary Islands. With its two parabolic reflectors of 240 m² of surface area each, MAGIC offers one of the largest reflecting optical surfaces for astronomical purposes (this surface is of Italian and German construction, thanks to INFN, INAF, a consortium of Italian universities, and the Max Planck Institute in Munich). To provide a visual comparison, the little white house seen on the right (the MAGIC control room) has two floors. Credit: MAGIC Collaboration

two-component emission, with primary electrons accelerated at approximately the same rate as protons. Such a two-region displaced emission morphology has also been detected in several other supernova remnants (W44 and W82, for example).

Very-High-Energy Sources

In the last twenty years, thanks mostly to ground-based telescopes, the number of known very-high-energy sources has increased more than tenfold, with a discovery rate of approximately one source per month: over 200 sources are now known. Figure 4.30 shows that the high-energy gamma-ray sky is populated mainly along the Galactic plane: proximity plays a fundamental role in determining the abundance of observed sources. Sources suffer from an interesting attenuation effect: the Universe is not very transparent to gamma rays, due to their interaction with the "fog" of infrared photons from stars in galaxies and with the fossil radiation of the big bang; such interaction leads to

Fig. 4.33 A sketch showing the basics of a Cherenkov array compared to an EAS detector. Credit: SWGO Collaboration

Fig. 4.34 Scheme of the generation of a hadronic cascade in the collision of a proton beam with a molecular cloud. On the right, the supernova remnant IC443: centroids of the emission measured by different gamma-ray detectors. The position measured by Fermi-LAT is marked as a diamond, that measured by MAGIC as a downward-oriented triangle; the latter is consistent with the molecular cloud G. Credit: MAGIC and H.E.S.S. Collaborations

the formation of pairs of particles of opposite charge. Therefore, measuring the gamma-ray horizon is also a way of measuring the photon density in the Universe. Outside the Milky Way, bright galaxies are predominantly observed, including active galactic nuclei, that is, supermassive black holes at the centers of galaxies growing at the expense of the surrounding material. The farthest celestial objects are not easily visible, as they appear fainter because of their distance.


Fig. 4.35 Almost every galaxy in the Universe harbors at its center a supermassive black hole (SMBH) with a mass of millions to billions of times that of the Sun, and some of these black holes feast on infalling matter. That matter gathers in an accretion disk around the black hole before falling into the abyss. Sometimes the rotating disk generates a narrow channel along the polar axis of the black hole through which particles stream away at speeds close to the speed of light. These jets are the origin of some of the most extreme outbursts seen in the Universe. Depending on the line of sight, active galactic nuclei are known by different names; if the jet happens to line up with the line of sight from Earth, they are called blazars. Credit: H.E.S.S. Collaboration

Most detections in gamma rays concern a particular subset of active galactic nuclei called blazars. In approximately one-tenth of active galactic nuclei, the matter falling into the black hole ignites powerful collimated jets that emerge at relativistic speeds in opposite directions (Fig. 4.35). If a jet is observed at a small angle with respect to the line of sight, the observed emission is amplified by a relativistic effect by up to two or three orders of magnitude and dominates the observation; in this case, we have a blazar. Ground-based gamma-ray telescopes are also testing the theory of relativity in unexplored regimes, where it is expected that it could be violated, and studying the structure of the quantum vacuum.
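The relativistic amplification mentioned above is governed by the Doppler factor of the jet. A minimal sketch, assuming a bulk Lorentz factor of order 10 and a flux boost scaling like a power of the Doppler factor (the exponent p, here taken as 3, depends on the spectrum and jet geometry and is an assumption of this example):

```python
import math

def doppler_factor(gamma, theta_deg):
    """Doppler factor delta = 1 / (Gamma (1 - beta cos theta)) for a jet
    with bulk Lorentz factor Gamma viewed at angle theta."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

def flux_amplification(gamma, theta_deg, p=3.0):
    """Observed flux boost, roughly delta**p (p ~ 3-4; assumed here)."""
    return doppler_factor(gamma, theta_deg) ** p
```

For a jet with Gamma = 10 seen head-on, delta is close to 2 Gamma, about 20, and the flux boost reaches several orders of magnitude; already a few degrees off-axis the amplification drops steeply, which is why blazars dominate the extragalactic gamma-ray catalogs.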

The Fermi Bubbles

In 2010, data recorded by the Fermi satellite revealed a huge and mysterious structure in our galaxy. This structure had never been seen before because

Fig. 4.36 An image of the Fermi bubbles (in purple) using data from Fermi-LAT and eROSITA. Credit: NASA

physicists were only looking for "small" structures and were not open to the possibility of finding a source as large as the whole Galaxy! The so-called "Fermi bubbles," shown in magenta in Fig. 4.36, are perpendicular to the plane of the Milky Way and extend over a total length of approximately 50,000 light-years, half the radius of the Galaxy. The Fermi bubbles have no established explanation yet. The most widely credited possibility is that they are related to the release of large amounts of energy by the supermassive black hole at the center of the Milky Way, which may have been active in the past (ten million years ago), producing the jets responsible for the bubbles.

The Structure of Supernova Remnants and the Mechanisms of Acceleration

All sources of hadronic cosmic rays are gamma-ray sources. Still, the converse is not necessarily true: the SSC mechanism, which we described earlier, involves energetic gamma-ray emission without charged cosmic-ray emission (the accelerated electrons cannot escape from the acceleration region because their mean free path is short). Unlike protons or electrons, gamma rays are not deflected on their way to the observer by magnetic fields. Of course, they can interact with particles along the path and be deflected considerably, just as photons of light can interact with cosmic dust, for example; but if a gamma-ray beam reaches us, it points back to a potential cosmic-ray source. Supernova remnants are the most common candidate sources of gamma rays within the Galaxy. Several hundred such sources have been observed.


Many gamma-ray-producing supernova remnants have huge molecular clouds around them. Thus, if protons as well as electrons are accelerated inside, they can produce secondary gamma-ray photons by producing neutral π mesons, which in turn decay into pairs of gamma rays (Fig. 4.34). Evidence for the production of cosmic rays by supernova remnants is overwhelming. And of course, not only protons can be accelerated, but heavier ions as well. In this case, since the electric charge of an ion can be larger than that of a proton, the final energy will be proportionally higher. We thus expect that, for the same cosmic accelerator, the composition of the accelerated products will become richer in heavy ions as the energy increases. This effect is invisible at relatively low energies because of the large number of protons but might become important close to the maximum energies achievable by the cosmic accelerator.
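The proportionality between charge and maximum energy stated above can be sketched directly (the species list with standard charge numbers is an illustrative assumption):

```python
# Standard charge numbers Z for a few cosmic-ray species (illustrative).
CHARGE = {"proton": 1, "helium": 2, "oxygen": 8, "iron": 26}

def max_energy_pev(e_max_proton_pev, species):
    """If an accelerator's limit is set by how strongly it can bend
    particles (magnetic rigidity), the maximum energy scales with the
    electric charge Z of the nucleus."""
    return e_max_proton_pev * CHARGE[species]
```

A source that tops out at a few PeV for protons can thus push iron nuclei to tens of PeV, which is why the composition above the knee is expected to grow heavier with energy.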

More on Active Galactic Nuclei

As we said, the central regions of AGNs sometimes produce two opposite collimated jets, with rapid outflow of matter and energy. The axis of rotation of the accreting structure determines the direction of the jets. The resolution of astronomical instruments is generally too poor, especially at high energies, to resolve the morphology of the jets in gamma rays, and observations cannot yet comprehensively explain the mechanism. Some experimental information on the behavior near the central supermassive black hole is available, as we have seen, in the radio band, thanks to very-long-baseline interferometry, which can image synchrotron radiation emission, and more rarely in X-rays. The gamma-ray region, however, is the most interesting one. Very-high-energy gamma rays are produced in the interaction of cosmic particles of an order of magnitude higher energy with ambient particles or photons in the jets. There is an intrinsic limit to the possibility of accelerating electrons: above energies of a few PeV, energy losses by synchrotron radiation make it impossible. The primary cosmic particles accelerated, by a process deriving from the enormous gravity of a black hole, to energies above those attained by our particle accelerators must therefore probably be hadrons. We shall discuss active galactic nuclei again in the context of multimessenger astronomy.

Gamma-Ray Bursts

Gamma-ray bursts (GRBs) are extremely intense and fast flashes of gamma radiation, detected approximately once a day on average. They last

Fig. 4.37 Sky distribution of Fermi-GBM triggered GRBs in galactic coordinates as of December 2017. Red diamonds indicate LAT-detected bursts, and blue dots indicate Swift-detected bursts. Credit: NASA

from a few fractions of a second to several seconds and are often followed by "afterglows" lasting minutes, hours, or even days. The emission is so powerful that it can outshine the entire Universe in gamma rays; some even attribute the disappearance of the dinosaurs to the explosion of a very close GRB. GRBs are named GRByymmdd after the date on which they were detected: the first two digits after "GRB" correspond to the last two digits of the year, the next two to the month, and the last two to the day. Sometimes a progressive letter (A, B, …) is added if more than one GRB was discovered on the same day. Their positions in the sky seem random (Fig. 4.37), which suggests that they are of extragalactic origin; they are generally very distant (Fig. 4.39). The most distant event detected thus far is a 10-s-long GRB at a redshift of approximately 8.2, called GRB090423; the closest has been GRB031203, at a redshift of approximately 0.1. GRB080319B, at a redshift of 0.94 and detected by the Neil Gehrels Swift Observatory, was observable with the naked eye: it had a peak visual apparent magnitude of 5.7 and remained visible to human eyes for approximately 30 s. Swift, a NASA mission with international participation, is an X-ray satellite dedicated to gamma-ray bursts: within seconds of detecting a burst, it relays its location to ground stations, allowing ground-based and space-based telescopes worldwide to observe the burst's afterglow. The energy spectrum of GRBs is not thermal and varies from event to event; most GRB photons have energies of a few hundred keV, but some reach up to tens of GeV. Because GRBs are so far away, the energy reaching Earth is a tiny fraction of the total; still, it is estimated that, for fractions of a second, the gamma-ray emission from the source often exceeds the energy flux from the rest of the Universe.
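The naming convention just described is mechanical enough to be written down as code; a small sketch (the function name is an illustrative choice):

```python
import datetime

def grb_name(date, suffix=""):
    """Build a GRB designation from its detection date (UTC), following
    the GRByymmdd convention described in the text, with an optional
    progressive letter when several bursts occur on the same day."""
    return "GRB" + date.strftime("%y%m%d") + suffix

# A burst detected on 23 April 2009 -> "GRB090423"
# A second burst on 19 March 2008  -> "GRB080319B"
```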

Fig. 4.38 Distribution of gamma-ray burst durations. From Wikimedia Commons

Fig. 4.39 Distribution of distances, expressed as redshift, and the corresponding age of the Universe for gamma-ray bursts detected by NASA’s Swift X-ray satellite. Credit: Edo Berger (Harvard), 2009

The distribution of their durations shows two maxima (Fig. 4.38), motivating a first phenomenological classification into "short" GRBs (typically lasting a few tenths of a second) and "long" GRBs (lasting more than 2 s, and typically approximately 40 s). Short GRBs have been associated with the merging of pairs of compact objects. For long GRBs, the emission is associated with the formation of a


supernova from the collapse of a single star, presumably of very high mass (a “hypernova”).

Dark Matter

As we saw in the first chapter, the most likely explanation for the observed anomalies in the motion of stars at the periphery of many galaxies (including our own) is a new type of matter hitherto unobserved, which we call "dark matter." This dark matter could comprise approximately 20% of the mass of the Universe and 80% of the mass of our galaxy. It could be of two types: either made of very heavy particles, over 50 GeV (Weakly Interacting Massive Particles, abbreviated as WIMPs), or of very light ones, less than one eV (Weakly Interacting Slim Particles, abbreviated as WISPs). Particles of masses intermediate between WISPs and WIMPs are unlikely, since they would probably have been produced at the CERN accelerators and we would know about them.

One feature of WIMPs is that current theories predict that they annihilate in pairs, producing gamma rays. They can therefore be studied by looking for an excess of gamma rays from regions where dark matter is expected to be present. Since this method does not look directly for WIMPs but for their annihilation products, it is called "indirect search." The probability of annihilation increases with the probability of the particle pairs meeting, which depends on the square of the density; it is therefore greatest near compact objects, particularly the centers of galaxies. Of course, the first place to look is the center of our galaxy, which has been studied with great care by gamma-ray telescopes (especially H.E.S.S., which is located in the Southern Hemisphere and has a better view) and by Fermi, without finding, at the moment, any indication of a signal.

Other targets for observation are the so-called dwarf spheroidal galaxies. The environs of the Milky Way are populated by many of these small galaxies, typically composed of a few million Sun-sized stars (up to a hundred million), and thus a thousand to ten thousand times smaller than our galaxy. These galaxies (Fig. 4.40) are too small to host supermassive black holes, but the motion of their stars cannot be explained by the observed matter: if the dark matter hypothesis is correct, they must therefore contain large amounts of dark matter relative to ordinary matter (for some of them, dark matter is required to be tens of times more abundant than visible matter). The advantage of studying dwarf spheroidal galaxies is that, unlike our galaxy, they do not have a populous and highly energetic "galactic center," and observations therefore have no background noise. Again, however, we have at present no hint of a signal.

Fig. 4.40 Our galactic group, with dwarf spheroidal galaxies

In addition to indirect methods (which we will discuss again in the next chapter: gamma rays are not the only product of annihilation, so multimessenger astronomy is of great importance for this technique), there are two other techniques for searching for WIMPs.

The direct search exploits the collision of these particles with targets on Earth. If the WIMP hypothesis is correct, we live in a sea of WIMPs. For a WIMP mass of 50 GeV, there must be approximately a hundred thousand particles per cubic meter in our surroundings, moving at a speed comparable to the speed of revolution of the Earth around the Sun: the typical speed should be approximately 230 km/s. Direct detection relies on observing the interaction of WIMPs within low-noise ground-based detectors located in mines or under mountains (the most notable example is the cavern of the Gran Sasso laboratory near L'Aquila, which presently hosts the most sensitive detector of this kind, the XENON experiment led by the US-Italian physicist Elena Aprile and collaborators).

At accelerators, WIMPs could be produced directly in the collision of standard ultrahigh-energy particles (such as protons at the LHC).

The three techniques just illustrated are complementary, but their results are often difficult to compare. In any case, we are still groping in the dark, and pessimism is taking hold among many of the hunters. Maybe the interaction of dark matter is weaker than expected, perhaps so weak that we will never be able to see these particles. Maybe dark matter is not of the particle kind. Or is it perhaps that Newton's universal gravitation is not so universal, and dark matter does not exist, but gravitation is simply not exactly what we believe?
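The numbers quoted above for the direct search already fix the order of magnitude of the expected WIMP flux through a detector; a one-line sketch using the text's own estimates:

```python
def wimp_flux_per_m2_s(n_per_m3=1.0e5, v_m_per_s=230.0e3):
    """Order-of-magnitude WIMP flux through a detector: number density
    times speed, using the rough values quoted in the text (about 1e5
    particles per cubic meter at about 230 km/s for a 50 GeV WIMP)."""
    return n_per_m3 * v_m_per_s  # particles per square meter per second
```

The flux is enormous, of order 10^10 particles per square meter per second; the difficulty of direct detection comes entirely from the extreme weakness of the interaction, not from a lack of particles.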

The Cosmic Journey of Gamma Rays

After being produced, gamma rays propagate through galactic and intergalactic space. Although they are not deflected by magnetic fields, they can interact, in particular with photon fields. As shown in Fig. 4.4, the maximum density corresponds to the cosmic microwave background, with approximately 410 photons per cubic centimeter and an average energy of approximately 0.6 millielectronvolt (meV). Another prominent photon background is the so-called extragalactic background light (EBL). This radiation was mostly emitted during star formation; its spectrum exhibits one peak at approximately 8 meV (in the infrared region) and one at approximately 1 eV (in the near-infrared and visible region). The dominant process for the absorption of gamma rays is pair creation: a photon interacts with a background photon in the Universe and produces an e+e− pair. Given the energy E_bck of the target (background) photon, the process occurs at typical gamma-ray energies

E ≈ 800 GeV / E_bck (with E_bck expressed in eV).
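This rule of thumb can be evaluated directly for the backgrounds quoted in the text:

```python
def typical_absorption_energy_gev(e_bck_ev):
    """Rule of thumb from the text: gamma rays of energy
    E ~ 800 GeV / E_bck (E_bck in eV) are most efficiently absorbed by
    pair production on a background photon of energy E_bck."""
    return 800.0 / e_bck_ev

# CMB photons (~0.6 meV)        -> ~1.3 PeV gamma rays absorbed
# EBL infrared peak (~8 meV)    -> ~100 TeV gamma rays absorbed
# EBL visible peak (~1 eV)      -> ~800 GeV gamma rays absorbed
```

The numbers reproduce the hierarchy described in the text: TeV astronomy is limited by the EBL, while PeV photons are absorbed by the far denser CMB, which is why the PeV horizon shrinks to galactic distances.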

This means that the EBL plays the leading role in the absorption of gamma rays of very high energies. The interaction with photon backgrounds implies the existence of a "horizon" beyond which gamma rays are strongly absorbed. This horizon gets closer as the energy increases, up to a few PeV: at PeV energies, the horizon is as close as the distance of our own galactic center (Fig. 4.41). The existence of a gamma-ray horizon is a nuisance but also a resource. Its actual value depends on:

• the density of background photons in the Universe,
• the value of cosmic magnetic fields,
• the values of the cosmological parameters related to the expansion of the Universe,
• the energy of the vacuum.

This last point is particularly relevant. In quantum mechanics, the vacuum is not completely empty; rather, it is virtually populated by all kinds of


existing particles and their antiparticles; its energy is a measure of the number of existing particles. Gamma-ray astrophysics can measure the energy of the vacuum through the interaction of gamma rays with particles in the vacuum. This possibility was suggested, through slightly different mechanisms, by De Angelis, Marco Roncadelli, and Oriana Mansutti and by Dan Hooper and Pasquale Serpico in 2007. A notable example is the axion. Axions are hypothetical particles predicted by certain extensions of the Standard Model of particle physics. They are similar to neutrinos in that they are very light and weakly interacting, but unlike neutrinos, they can interact directly with photons. Axions were originally proposed in the 1970s as a solution to a problem in quantum chromodynamics (QCD), the theory that describes the strong force binding quarks together to form protons and neutrons. QCD suffers from the so-called "strong CP problem": a symmetry called "CP", which QCD is expected to violate, is instead respected. The existence of a new particle, called the axion, was proposed in 1977 by Roberto Peccei and Helen Quinn to resolve this problem (the Peccei-Quinn mechanism), and axions have since been studied as potential candidates for dark matter. The nickname axion-like particles (ALPs) refers to a large class of particles similar to axions; such particles have also been proposed as possible candidates for (at least part of) dark matter. The interaction of gamma rays with axion-like particles is a topic of active research. Some models predict that gamma rays can convert into ALPs in the presence of strong magnetic fields, which could explain certain anomalies in astrophysical observations. This phenomenon is called "axion-photon mixing" or "photon-ALP conversion." This effect (not

Fig. 4.41

Mean free path for photons from 100 GeV to 1,000 PeV

132

A. De Angelis

Fig. 4.42 Layout of the CTA observatory: Northern (top) and Southern (bottom) Hemispheres. Credit: CTA Collaboration

yet demonstrated) could provide evidence for the existence of ALPs and shed light on some of the mysteries of the Universe. However, thus far, there is no definitive evidence for axions or ALPs, and more research is needed to confirm or rule out their existence.
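The energies at which the photon backgrounds begin to absorb gamma rays can be estimated from the kinematic threshold for pair production, gamma + gamma → e+ e−: for a head-on collision, the product of the two photon energies must exceed the square of the electron rest energy. A minimal sketch (the function name and the representative background-photon energies are illustrative assumptions):

```python
# Pair-production threshold for a gamma ray hitting a background photon head-on:
# E_gamma * E_background >= (m_e c^2)^2
M_E = 0.511e6  # electron rest energy, in eV

def pair_threshold_eV(background_photon_eV):
    """Minimum gamma-ray energy able to pair-produce on a background photon."""
    return M_E**2 / background_photon_eV

# A typical optical EBL photon (~1 eV): absorption sets in at a few hundred GeV
print(f"EBL: {pair_threshold_eV(1.0):.2e} eV")
# A typical CMB photon (~6.3e-4 eV): absorption sets in around a PeV
print(f"CMB: {pair_threshold_eV(6.3e-4):.2e} eV")
```

This simple estimate reproduces the hierarchy described in the text: the EBL limits very-high-energy (TeV-scale) gamma-ray astronomy, while the CMB sets the PeV horizon near the distance of the galactic center.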

The Cherenkov Telescope Array and the Southern Wide-Field Gamma-Ray Observatory

The MAGIC, H.E.S.S., and VERITAS collaborations, with fundamental input from the German Deutsches Elektronen-Synchrotron (DESY) and other institutions, have teamed up to build the Cherenkov instrument of the future: two giant telescope arrays called the Cherenkov Telescope Array (CTA), whose sensitivity will exceed that of current Cherenkov telescopes by an order of magnitude. For this new venture, the technology chosen is similar to that used today, replicated on dozens of instruments, with two sites covering areas of several square kilometers: one in the Southern Hemisphere, in Chile's Andes region, and one in the Northern Hemisphere, on the island of La Palma (Fig. 4.42). CTA involves the use of three types of telescopes:

• Large (LST, Large-Sized Telescope), with a tessellated parabolic mirror approximately 23 m in diameter, derived essentially from MAGIC.
• Medium (MST, Medium-Sized Telescope), whose mirror has a diameter on the order of 12 m, derived essentially from H.E.S.S.
• Small (SST, Small-Sized Telescope), with a mirror approximately 4 m in diameter and double reflection (a tessellated primary mirror and a monolithic secondary mirror).

To be sensitive to low energies, telescopes must be very large, since the number of photons per unit area is small. The Northern Site of the Cherenkov Telescope Array could be operational in its initial phase (with 4 large 23-m diameter and 9 medium-sized telescopes) as early as 2026. The first large telescope at La Palma has already been built (Fig. 4.43). The southern site will be built later, with 14 medium, 37 small, and four large telescopes. A miniarray of small telescopes, known as the ASTRI miniarray, is being deployed at the Teide observatory on the Canary island of Tenerife, under the supervision of the INAF group of Giovanni Pareschi and of the Fundación Galileo.

CTA represents a completely new organizational concept with respect to previous gamma-ray telescopes: the telescopes were built as a collaborative project by research institutions and private companies, who are also responsible for the installation. This means that the technology of Cherenkov telescopes has become mature and can be outsourced by research institutions.

An agreement was signed in 2019 to build a future wide-field gamma-ray observatory in the Southern Hemisphere. The founding countries of the Southern Wide Field-of-view Gamma-ray Observatory (SWGO) are Argentina, Brazil, the Czech Republic, Chile, Germany, Italy, Mexico, Portugal, South Korea, the United Kingdom, and the United States of America. The new observatory is expected to be installed in the Andes at an altitude above 4.4 km, to detect gamma rays from a few hundred GeV up to a PeV. The location in the Southern Hemisphere will allow direct observations of the most interesting region of our galaxy.
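The link between mirror size and energy threshold can be illustrated with a toy calculation: the Cherenkov photon density at the ground grows roughly linearly with the gamma-ray energy, so a larger mirror collects a usable image at a lower energy. All numbers below (photon density, minimum photon count) are illustrative assumptions, not CTA specifications:

```python
import math

# Illustrative assumptions, not CTA specifications:
PHOTON_DENSITY = 0.01   # Cherenkov photons per m^2 per GeV of gamma-ray energy
N_MIN = 100             # photons needed to form a usable shower image

def energy_threshold_GeV(mirror_diameter_m):
    """Toy energy threshold: the mirror must collect at least N_MIN photons."""
    area = math.pi * (mirror_diameter_m / 2.0)**2
    return N_MIN / (PHOTON_DENSITY * area)

print(f"LST (23 m): {energy_threshold_GeV(23):.0f} GeV")  # tens of GeV
print(f"SST (4 m):  {energy_threshold_GeV(4):.0f} GeV")   # hundreds of GeV
```

However crude, the scaling explains the three-tier design: large telescopes for the lowest energies, many cheap small telescopes spread over a wide area for the rarest, highest-energy showers.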
Wide-field-of-view observations are ideal for searching for transient sources and for very large emission regions, such as the "Fermi bubbles", and, in particular, for discovering transient emission and unexpected phenomena. The starting point for the new observatory will be the approach of existing ground-based gamma-ray detectors, namely HAWC in Mexico and LHAASO in China. Specifically, water-based Cherenkov detectors will be used to sample the particle showers produced by gamma rays in the atmosphere, recording the light produced when the particles pass through tanks filled with purified water. However, new technologies will be explored to increase sensitivity and


Fig. 4.43 CTA's first LST telescope on the island of La Palma. Credit: CTA LST Collaboration

lower the energy threshold of the observatory. The total area will be over 1 km² (Fig. 4.44).

Fig. 4.44 Layout of the SWGO observatory. Credit: SWGO Collaboration

5 The New Senses of the Universe: Multimessenger Astronomy

Multiwavelength astronomy has been at the forefront of astrophysical research for a century. New regions of the electromagnetic spectrum have been opened thanks to the development of new telescopes, completely different from the first Galilean telescopes. Recently, different kinds of cosmic rays have become the new frontier of astronomy, thanks to the new field of investigation called multimessenger astroparticle physics, an interdisciplinary sector between astrophysics, cosmology, and elementary particle physics that uses different particles to explore the nature of the Universe. Astroparticle physics grew considerably throughout the 20th century, and many large projects are underway. Multimessenger astroparticle physics (also called multimessenger astrophysics or multimessenger astronomy, with some nuances depending on a point of view more focused on astrophysics/astronomy for the latter and on fundamental physics for the former), which started about a decade ago, is the new protagonist of the exploration of the Universe in the 21st century.

Several cosmic-ray experiments are currently operational in space, but also often in the most remote mountain regions of the Earth. Cosmic-ray physics from the ground is more "dispersed" than particle physics at accelerators and can involve small nations more actively. The work is arduous and often carried out in difficult conditions. However, the locations are beautiful, and the excitement of discovery makes up for the fatigue and difficulties. Some observatories are in galleries in the mountains, to shield detectors from less penetrating cosmic rays and record only the most penetrating particles (muons and, to a greater extent, neutrinos, which at ordinary energies have a low probability of interacting with matter even when passing through the diameter of the Earth).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. De Angelis, Cosmic Rays, Astronomers' Universe, https://doi.org/10.1007/978-3-031-38560-5_5


Large laboratories where this research is conducted can be found, for example, in Italy, France, Spain, the United States, Canada, and Japan, where elementary particle physics has excelled in recent years, dominating in particular the global landscape of neutrino physics with a wealth of new results. Many experiments are space observatories; many are in desert regions. Correspondingly, the telescopes sensitive to the different kinds of cosmic rays that dominate multimessenger astrophysics (gamma rays, ions, neutrinos, and gravitational waves in particular) are very different from one another.

Cosmic Rays of Ultrahigh Energies

The study of atmospheric showers induced by high-energy cosmic rays, seventy years after Rossi's and Auger's discovery of atmospheric particle showers, continues to be a source of new knowledge. Once cosmic rays enter the atmosphere, they produce chain interactions that degrade their energy and scatter them into thousands of particles distributed over areas of square kilometers.

How can we study the origin of cosmic rays? The first idea that comes to mind is to make a kind of "astronomical observation" by mapping the sky using the direction of origin of the cosmic rays themselves. Unfortunately, the value of galactic magnetic fields, approximately one millionth of Earth's magnetic field, is enough to deflect charged particles (which, as we have seen, make up the main component of cosmic rays), making it impossible to trace their origin unless they reach energies of several joules per particle (we recall that one joule, the unit of energy in the International System of Units, is roughly the kinetic energy acquired by a mass of 100 grams falling from a height of one meter; it corresponds to more than six billion GeV).

Charged cosmic rays arrive close to the Solar System after being deflected by the Galactic magnetic fields, which are quite large (approximately 1 microgauss, or μG, in intensity, i.e., approximately one millionth of the magnetic field at the Earth's surface, which has a typical value of 1 gauss), and possibly by extragalactic magnetic fields (whose value, approximately 10^−15 gauss with an uncertainty of three orders of magnitude, is not well known) if they are of extragalactic origin. The radius of curvature R_L of a proton deflected by the Galactic magnetic field (B approximately 1 μG), with the proton energy expressed in EeV (exa-electronvolt, one billion billion electronvolts, or 10^18 eV), is

R_L (in kpc) ≈ E (in EeV) / B (in μG).

This radius is shorter than the distance to the Galactic center for energies smaller than 2 × 10^19 eV, i.e., 20 EeV (much above the knee), and thus astronomy with charged cosmic rays is extremely difficult. We know from gamma-ray astronomy that cosmic rays below the knee, i.e., below a few PeV, come mostly from the Galaxy, and from supernova remnants in particular, but the magnetic field of the Milky Way has randomized the arrival directions of these particles, and thus it will not be possible to locate their sources directly. The equation above sets a natural scale for the charged cosmic rays that can be used for astronomy: this scale is 1 EeV. We shall call the events above this threshold "ultrahigh-energy" cosmic rays (some call them "extremely-high-energy"; there is some confusion in the literature, but let us keep the term "ultra"). The rapid decrease in the number of cosmic rays with energy means that at these energies (above 1 EeV, one billion GeV) there is less than one particle per square kilometer per year, so very large detectors are needed.

In addition, we recall the GZK cutoff introduced at the beginning of this book: there is a theoretical upper limit on the energy of cosmic rays coming from distant sources. This limit was calculated in 1966 by Greisen, Zatsepin, and Kuzmin, and is now called the "GZK cutoff" in their honor. Protons and nuclei with energies greater than a few tens of joules (50 EeV to 100 EeV) interact with the photons that survived from the Big Bang, the cosmic microwave background radiation, and thus do not travel distances greater than those typical of the local supercluster of galaxies (some 100 million light-years). Unless the laws of physics in distant regions of the Universe are different from those verified near us, particles of ultrahigh energy come from galaxies in our supercluster. Thus, there is little room for astronomy with protons and nuclei in general: from a couple of EeV to some 100 EeV.
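The rule of thumb for the radius of curvature can be evaluated quickly. A minimal sketch (the helper name `larmor_radius_kpc` is hypothetical; for a nucleus of charge Z, the radius shrinks by a factor Z):

```python
def larmor_radius_kpc(E_EeV, B_microgauss=1.0, Z=1):
    """Gyroradius of a nucleus, from R_L [kpc] ~ E [EeV] / (Z * B [uG])."""
    return E_EeV / (Z * B_microgauss)

# A 20 EeV proton in the ~1 uG Galactic field:
print(larmor_radius_kpc(20))          # 20 kpc: comparable to the size of the Galaxy
# A 1 EeV proton: still deflected on kpc scales
print(larmor_radius_kpc(1))           # 1 kpc
# An iron nucleus (Z = 26) of the same energy bends 26 times more tightly
print(larmor_radius_kpc(20, 1.0, 26))
```

Only above the EeV scale does the gyroradius become comparable to Galactic distances, which is why this energy marks the natural threshold for "astronomy" with charged particles.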
Discovering a significant fraction of particles beyond the GZK cutoff would indicate exotic sources, such as extremely massive particles that survived the early moments of the Universe, or would demonstrate new laws of physics different from those we know (such as the theory of relativity). However, the physics of showers comes to our aid: since a shower is composed of many particles, it is not necessary to cover the entire surface of the observatory with detectors; sampling it is sufficient. The detectors can therefore be made of small units dispersed in space, and this reduces the costs.
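The order of magnitude of the GZK cutoff can be estimated from the threshold of the dominant reaction, a proton hitting a CMB photon head-on and producing the Δ(1232) resonance. A back-of-the-envelope sketch (using a typical CMB photon energy; the true cutoff, around 5 × 10^19 eV, is somewhat lower because the high-energy tail of the CMB spectrum also contributes):

```python
M_DELTA = 1.232e9  # Delta(1232) resonance rest energy, eV
M_P     = 0.938e9  # proton rest energy, eV

def gzk_threshold_eV(cmb_photon_eV=6.3e-4):
    """Proton threshold for p + gamma_CMB -> Delta(1232), head-on collision:
    E_p ~ (M_Delta^2 - M_p^2) / (4 * E_photon)."""
    return (M_DELTA**2 - M_P**2) / (4.0 * cmb_photon_eV)

print(f"{gzk_threshold_eV():.1e} eV")  # a few times 1e20 eV for a typical CMB photon
```

The kinematics alone thus places the suppression in the 10^19 to 10^20 eV range quoted in the text.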

The Pierre Auger Observatory

In 1992, the American James "Jim" Cronin, a long-time professor at Chicago and Nobel Prize laureate in 1980 for his studies on the properties of K mesons, and Alan Watson of the University of Leeds proposed the construction of an


observatory for cosmic rays so large that it could collect substantial statistics on ultrahigh-energy cosmic rays. This observatory required a generous and hospitable nation to provide a vast surface area, and Argentina offered its availability. In 2004, the large ground-based detector called the Pierre Auger Observatory started collecting data. It currently covers an area of over 3,000 km² (Fig. 5.1) in the Pampa near Malargue (approximately three times the area of the municipality of Rome, the largest in Italy, and over 50 times the area of the island of Manhattan). It is instrumented with 1600 surface detector stations (water Cherenkov tanks) arranged on a grid with 1.5 km spacing, at an altitude of approximately 1,400 m.

Each water tank is a cylinder with a 10 m² base and 1.5 m height, filled with 12 tons of water (Fig. 5.2). A highly reflective material coats the inner walls of the tank, and three photodetectors placed at the top of the tank capture the Cherenkov light generated by charged particles passing through the water. The tanks operate independently, with a GPS unit providing a time stamp and a solar panel supplying power. Communication with the central data acquisition system occurs via radio.

Twenty-four fluorescence telescopes supplement the water tanks. They are arranged in four locations to monitor the atmosphere above the detector area. As particles from extensive air showers pass through the atmosphere, they ionize and excite nitrogen gas molecules, which emit visible and ultraviolet radiation during de-excitation. These emissions, known as fluorescence light, can be detected using optical telescopes. Each fluorescence detector is an optical telescope with a field of view of approximately 30° in each direction (Fig. 5.2). Light enters the telescope through an ultraviolet filter and is collected by a 3.5 m diameter spherical mirror that focuses it onto a camera made of 440 photomultipliers.
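The payoff of such an enormous area is easy to quantify: the expected event count is simply the integral flux times the instrumented surface. A sketch with order-of-magnitude fluxes (the numbers below are rough textbook values, not official Auger measurements):

```python
AUGER_AREA_KM2 = 3000.0  # instrumented surface of the Pierre Auger Observatory

def expected_events_per_year(area_km2, flux_per_km2_per_year):
    """Expected yearly count = integral flux above threshold x collecting area."""
    return area_km2 * flux_per_km2_per_year

# Roughly 1 particle / km^2 / year above a few EeV (order of magnitude):
print(expected_events_per_year(AUGER_AREA_KM2, 1.0))    # ~3000 events per year
# Roughly 1 particle / km^2 / century above ~1e20 eV:
print(expected_events_per_year(AUGER_AREA_KM2, 0.01))   # ~30 events per year
```

With these assumed fluxes, even 3,000 km² yields only a few dozen events per year at the very highest energies, which is why ultrahigh-energy cosmic-ray astronomy remains statistics-limited.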
The Pierre Auger Observatory provides fundamental information on cosmic rays, in particular indicating (Fig. 5.4) that the arrival directions of ultrahigh-energy cosmic rays (above several joules per particle) are statistically correlated with the positions of active galactic nuclei outside the Milky Way. It seems, therefore, that the origin of ultrahigh-energy cosmic rays is related to gravitational collapses near supermassive black holes. However, no firm correlation with individual sources has been established yet: we cannot yet say "this AGN is a source of cosmic rays", with one possible exception.

Ultrahigh-energy cosmic rays serve as messengers from the extreme Universe and present a one-of-a-kind chance to investigate particle physics at energies surpassing those achievable at the LHC. Nevertheless, due to their limited flux


Fig. 5.1 Top: scheme of the Pierre Auger Observatory near Malargue, Argentina. The radial lines point to the fluorescence detectors. The black dots are the 1600 ground stations. Sites with specialized equipment are also indicated. Bottom: Photograph showing one of the water-Cherenkov detectors (foreground) and a fluorescence-detector station (background). Credit: Pierre Auger Collaboration


Fig. 5.2 Sketch of one of the Pierre Auger surface detectors (left); a fluorescence telescope (right). Credit: Pierre Auger Collaboration

Fig. 5.3 The energy spectrum of cosmic rays at the highest energies measured by the Pierre Auger Observatory (dots). Superposed is a fit to the sum of different components at the top of the atmosphere. The partial spectra are grouped according to the mass number as follows: hydrogen (red), helium-like (gray), carbon, nitrogen, oxygen (green), and iron-like (cyan). Heavier elements peak at higher energies. Credit: Pierre Auger Collaboration


Fig. 5.4 Sky map in galactic coordinates of the arrival directions of CRs with energy larger than 55 EeV detected with the Pierre Auger Observatory, plotted as black dots. The red stars represent the position of nearby active galactic nuclei, with areas proportional to their luminosity as seen from the Earth. Darker background colours indicate larger relative exposure. Credit: Pierre Auger Collaboration

and indirect detection methods, fundamental questions such as their origin, nature, and mode of interaction remain unanswered. The energy spectrum of ultrahigh-energy cosmic rays is now well measured up to 10^20 eV (see Fig. 5.3). One can distinguish between different particles by examining the characteristics of shower development. Heavier nuclei, composed of more nucleons, interact with the atmosphere at higher altitudes. As one might expect, the highest-energy cosmic rays are made of heavy nuclei, such as iron. Due to their higher charge, heavier nuclei are more efficiently accelerated than lighter nuclei, and iron is the heaviest nucleus formed in stellar fusion processes.
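The statement that heavier nuclei interact higher in the atmosphere can be made semi-quantitative with the simple "superposition model": a nucleus of mass number A and energy E behaves like A independent protons of energy E/A, so its shower maximum is shallower by an amount proportional to log10(A). A sketch with an assumed elongation rate of about 60 g/cm² per decade of energy (a rough, illustrative value):

```python
import math

D_ELONGATION = 60.0  # g/cm^2 of atmospheric depth per decade of energy (assumed)

def xmax_shift_g_cm2(mass_number):
    """Superposition model: a nucleus (A, E) ~ A protons of energy E/A,
    so its shower maximum is shallower by ~D * log10(A)."""
    return D_ELONGATION * math.log10(mass_number)

# Iron (A = 56) vs proton at the same total energy:
print(f"{xmax_shift_g_cm2(56):.0f} g/cm^2")  # iron showers peak ~100 g/cm^2 earlier
```

This roughly 100 g/cm² proton-iron difference in the depth of shower maximum is what allows experiments to infer the average mass composition from shower profiles.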

Correlation of Cosmic Nuclei with Astrophysical Sources

Is there a discernible correlation between cosmic rays and their potential sources? At energies above a few GeV, when integrating across all energy levels, the arrival direction of charged cosmic rays exhibits isotropy, likely due to the Galactic magnetic field's smearing of their arrival directions. No straightforward correlation with known astrophysical objects is apparent in the observed anisotropies. However, statistically significant anisotropies have been detected at ultrahigh energies, and interpreting their origin is relatively simple. Accelerating particles to ultrahigh energies, above one EeV, i.e., 10^18 eV, requires specific conditions present in astrophysical objects, such as the vicinity of supermassive black holes in active galactic nuclei or transient high-energy events such as those creating gamma-ray bursts. Galactic objects are improbable sites for particles with such energy levels, and as a result, we do not observe a concentration of ultrahigh-energy cosmic rays in the galactic plane. Additionally, the galactic magnetic field is incapable of confining ultrahigh-energy cosmic rays within our galaxy.

Under the commonly accepted assumptions of a finite horizon (due to the GZK cutoff) and of extragalactic magnetic fields of approximately 10^−15 gauss (with an uncertainty, as we saw, of three orders of magnitude), the number of sources is relatively small, and thus some degree of anisotropy could be found by studying the arrival directions of the cosmic rays at the highest energies. Such searches have been performed extensively in recent years, either by looking for correlations with catalogs of known astrophysical objects or by applying sophisticated correlation algorithms at all angular scales. Indications of intermediate-scale anisotropy, namely correlations with catalogs of active galactic nuclei and star-forming galaxies, have been reported by the Pierre Auger Observatory. At large scales, in approximately 30,000 cosmic rays with energies above 8 EeV recorded over 12 years, the Pierre Auger Observatory has found evidence of a significant dipole anisotropy, i.e., a directional correlation with the distribution of active galaxies weighted by their gamma-ray luminosity as seen from Earth. Regarding individual candidate sources, the Pierre Auger collaboration has claimed, with a significance larger than 3σ, a hot spot near the Centaurus A active galactic nucleus, at a distance of approximately 4 megaparsec. Centaurus A (Fig. 4.22) is also a very-high-energy gamma-ray emitter, a fact that reinforces the claim.

Another large detector of ultrahigh-energy cosmic rays is the Telescope Array, located in the Utah desert, also at an altitude of 1,400 m.
The original Telescope Array construction, completed in 2008, included an array of 507 scintillator detectors and three telescope stations. The scintillator detectors were deployed on a square grid with 1.2 km spacing and are spread over more than 700 km². An extension under construction will increase its surface to 2,800 km², reaching approximately the same size, and thus the same sensitivity, as Auger. It is interesting to note that, being located in different hemispheres, Auger and the Telescope Array observe complementary regions of the sky. Astronomy with cosmic rays is nevertheless difficult, because even with large instruments such as Auger, the number of events collected is small (a few dozen per year).


Alternative Techniques and Future Detectors

It is difficult to think of a detector larger than the Pierre Auger Observatory. If we want to increase sensitivity, we must consider techniques other than sampling showers on the ground. A promising technique shifts the problem: since the interaction of cosmic rays with the atmosphere involves the emission of radio waves, the detection of radio transients with antennas can be correlated, under low-noise conditions such as those found in Antarctica or in the upper atmosphere, with cosmic rays of very high energies. With this technique, the collaboration managing the ANITA experiment at the South Pole recently published an energy spectrum of cosmic rays.

Another technique exploits the observation of the sky from an orbit a few hundred kilometers above the Earth, in search of the "flashes" of fluorescence (and possibly Cherenkov) light that come from the interaction of cosmic rays with the atmosphere. A medium-large instrument, or a pair of instruments, in space can monitor a huge atmospheric volume (Fig. 5.5). This is frequently called the "EUSO concept", from the Extreme Universe Space Observatory studied since the 1990s. Locating large telescopes in space requires special optics. One proposed solution is Fresnel optics, i.e., lenses printed on plastic surfaces. The JEM-EUSO program (Joint Experiment Missions for the Extreme Universe Space Observatory) is being developed along these lines. Another solution is the use of a binocular space telescope. This is the case of the recently proposed Probe Of Extreme Multi-Messenger Astrophysics (POEMMA), which uses a wide-field Schmidt telescope (i.e., a double reflector). A space-based telescope based on the EUSO concept could optimistically be in space in the late 2030s.

Fig. 5.5 POEMMA observing modes. Credit: POEMMA Collaboration


Cosmic Antimatter

In the study of cosmic rays, high energy and large distances are not the only frontiers: the search for antimatter can also provide new fundamental knowledge. With current tools, the distinction between matter and antimatter is possible only up to relatively low energies; this type of research therefore concerns the so-called "diffuse background" in the Universe, without the possibility of directly targeting sources.

As is known, we live in a world made almost exclusively of matter. One of the biggest problems in physics is understanding why (did antimatter disappear in the first moments of the Universe's life, or is it still somewhere far away from us?). Since antimatter rapidly annihilates upon contact with matter, it is very unlikely that a primary antimatter particle would arrive on Earth, as it would interact immediately with the atmosphere; therefore, antimatter detectors need to be placed in space.

The PAMELA magnetic spectrometer was a satellite experiment primarily designed and built by Russian and Italian scientists; it could measure the charges (thanks to the presence of a magnet that deflects particles) and masses of particles and thus distinguish between matter and antimatter. Launched in 2006 and taking data until 2016, it revealed a fact difficult to explain: the quantity of antielectrons (positrons) is much higher than expected, and in addition the ratio between antielectrons and electrons increases as the energy increases. We still cannot understand why; one hypothesis is that this result indicates the existence of particles of large masses, perhaps attributable to the first moments of the Universe's life, which decay "democratically" into particles of matter and particles of antimatter, or the existence of "blobs" of energy created by the annihilation of pairs of weakly interacting massive particles, the hypothetical WIMPs that are among the favorite dark matter candidates.
Another possible explanation, also extremely interesting, is that nearby sources of electron-positron pairs exist; these could be otherwise invisible pulsars.

NASA's AMS-02 (Alpha Magnetic Spectrometer) mission, a detector on board the International Space Station, is extending the investigation of antimatter. AMS-02 (we shall just call it AMS from now on: the suffix 02 is a reminder that the mission follows a test experiment called AMS-01), designed and constructed by Ting and collaborators, is a general-purpose high-energy particle detector capable of measuring cosmic rays from electrons to nuclei from hydrogen up to iron, at energies from hundreds of MeV up to approximately 1 TeV. AMS involves an international collaboration of 44 institutions from America, Europe, and Asia, and its total construction cost has been over 2 billion USD. Because the AMS detector is installed on the International Space


Fig. 5.6 Schematic view of the AMS detector, illustrating the path of a typical cosmic ray crossing the various detector elements, in particular the silicon tracker. Credit: AMS-02 Collaboration

Station, it can be serviced and upgraded by astronauts, which ensures its continued operation and scientific relevance. There have been several spacewalks to install, maintain, and upgrade AMS. In addition, the International Space Station is a good place to be, since it allows a relatively large power supply: AMS weighs approximately 7,500 kg and consumes an average of 2.5 kilowatts of power during operation. This is five times the consumption of the Fermi LAT, but still less than that of a typical European household.

The layout of the AMS detector is sketched in Fig. 5.6. The heart of the detector consists of nine planes of precision silicon tracker inside a permanent magnet, plus a calorimeter that absorbs particles and allows the measurement of their energies. The sign of the charge and the momentum are measured through the curvature of the track in the magnetic field.

AMS has provided a wealth of important scientific results. Here, we list some of the most relevant.

• AMS has measured the flux of high-energy antielectrons, or positrons, in cosmic rays with unprecedented precision, confirming with higher precision, and extending up to higher energies, the unexpected rise of the positron fraction up to a few hundred GeV observed by PAMELA, which could be a signature of dark matter annihilation. In general, AMS has made precise measurements of the fluxes of cosmic-ray antiprotons and positrons, which provide important clues about the origin and propagation of antimatter in the Universe. The measurement of the positron flux, from 1 GeV to 1,000 GeV of energy, is shown in Fig. 5.7.


Fig. 5.7 Positron flux (multiplied by E³) in the energy range from 0.5 GeV to 1000 GeV. The source contribution is represented by the magenta area, and the diffuse term contribution is represented by the gray area. Credit: AMS-02 Collaboration

The positron spectrum can be explained as the sum of two components. The low-energy part of the flux is due to a diffuse component dominated by positrons produced in the collisions of ordinary cosmic rays with interstellar gas. Positrons at high energies, instead, predominantly originate from a source with a cutoff around 800 GeV. This source could be of astrophysical nature, for example an otherwise unseen pulsar, or even dark matter annihilation creating a blob of energy that then decays into electrons and positrons.

• Cosmic-ray spectra: the AMS detector has measured the energy spectra of various cosmic-ray species, including protons, helium nuclei, and heavier nuclei, with very high precision. These measurements have helped to test and refine our understanding of the origin and propagation of cosmic rays in the Milky Way and beyond.

• AMS has measured the effect of solar activity on the fluxes of cosmic rays (solar modulation), which provides important information about the interaction of cosmic rays with the solar wind and the heliosphere.

High-altitude balloons are a relatively inexpensive alternative to detectors on satellites for the study of antimatter and cosmic rays in general. They can be launched and recovered quickly, making them a popular choice for scientific research. They are large, helium-filled balloons that can reach altitudes of up to 40 km.
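The two-component description of the positron spectrum given above (a soft diffuse power law plus a harder source term with an exponential cutoff near 800 GeV) can be sketched as a toy model. All normalizations and spectral indices below are illustrative assumptions, not the AMS fit parameters:

```python
import math

def positron_flux_model(E_GeV, C_d=6.5, g_d=4.1, C_s=0.07, g_s=2.6, E_cut=800.0):
    """Toy two-component positron spectrum: a diffuse power law plus a harder
    source term with an exponential cutoff (all parameters illustrative)."""
    diffuse = C_d * E_GeV**(-g_d)                                 # collisions with gas
    source  = C_s * E_GeV**(-g_s) * math.exp(-E_GeV / E_cut)      # pulsar or dark matter
    return diffuse + source

# E^3-weighted flux, as in the AMS plots; with these parameters the harder
# source term takes over above a few tens of GeV:
for E in (10, 100, 500):
    print(E, E**3 * positron_flux_model(E))
```

Plotting the E³-weighted sum reproduces the qualitative shape of Fig. 5.7: a dip at low energies where the diffuse term dies away, a rise driven by the source term, and a turnover toward the cutoff.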


Besides the lower altitude, high-altitude balloons have two disadvantages: they are affected by weather conditions, and they cannot operate continuously for years as AMS does. High-altitude balloons for long (over forty days) circumpolar flights are nowadays available, which could make exploration outside the atmosphere economical; this type of instrument could also be beneficial to gamma-ray physics.

Neutrinos and the Extreme Universe

Another tool for directly investigating sources is the detection of neutrinos, which are chargeless and thus are not deflected by cosmic magnetic fields. Neutrinos have a low probability of interaction, which makes them even more suitable than gamma rays for probing distant sources of cosmic rays. The acceleration of high-energy nuclei (protons and heavier) is always accompanied by the emission of neutrinos at energies an order of magnitude lower, since nuclei interact strongly with the environment, generating charged pions, which in turn decay producing neutrinos. High-energy neutrinos are thus an indicator of ion acceleration.

Solar Neutrinos and the Solar Neutrino Problem

Among cosmic rays, especially at relatively low energies, there are a large number of neutrinos, because these particles are copiously produced by the Sun in the nuclear fusion process that is the source of the energy allowing life on Earth. Some 100 billion neutrinos from the Sun pass through each of our fingers every second, but we do not feel them because they interact rarely and only very weakly with matter. For every hundred billion solar neutrinos that pass through the entire Earth, only one interacts with it. While this low probability of interaction allows us to look deeply into the Universe, surpassing the limits set by the limited mean free path of photons (Fig. 5.8), it also makes it difficult to build sensitive detectors.

There are three known types of neutrinos. Nuclear fusion in the Sun produces neutrinos that are associated with electrons, the so-called electron neutrinos. The other two types, muon neutrinos and tau neutrinos, are produced, for example, in laboratory accelerators, or more likely in large cosmic accelerators (remnants of supernovae and black holes) outside the solar


Fig. 5.8 Multimessengers from hadronic accelerators. Credit: J.A. Aguilar & J. Yang

system, or in showers that come from the interaction of primary cosmic rays with the atmosphere.

Until the 1980s, little was known about neutrinos due to their low probability of interaction with matter. It was not even known whether they had mass, and the prevailing opinion was that they were massless, like photons. The years around the turn of the millennium saw a revolution in neutrino physics; this revolution is mainly related to the study of cosmic rays and to the construction of neutrino detectors large enough to estimate the flux of these elusive particles. Because neutrinos have a low probability of interaction, the experiments that detect them must be shielded from other types of cosmic radiation and are therefore located deep underwater in the sea or in large lakes, in galleries inside mountains, in mines, or in the ice of Antarctica.

Since the late 1960s, scientists have tried to detect cosmic neutrinos. The Sun was a guaranteed source, and they initially focused on it. The Sun performs nuclear fusion through a chain of proton fusion reactions, which converts four hydrogen nuclei into a helium nucleus and neutrinos and releases energy. This energy is released in the form of electromagnetic radiation, as well as in the form of the kinetic energy of charged particles. Neutrinos travel from the Sun to Earth without any appreciable absorption by the outer layers of the Sun.

The resulting neutrino flux from the power produced by the Sun is very large and not difficult to calculate, since the physical processes generating neutrinos are well understood and the solar energy flux is well known. The solar neutrino flux on Earth is several tens of billions per square centimeter per second.
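The back-of-the-envelope calculation goes as follows: each fusion chain (4 protons → helium) releases about 26.7 MeV and 2 neutrinos, and the solar power arriving at Earth is the solar constant, about 1361 W/m². Dividing one by the other gives the neutrino flux (a sketch, assuming essentially all the solar power comes from this chain):

```python
SOLAR_CONSTANT = 1361.0             # W/m^2, solar irradiance at Earth
Q_FUSION_J = 26.7e6 * 1.602e-19     # ~26.7 MeV released per 4p -> He chain, in joules
NEUTRINOS_PER_CHAIN = 2             # two electron neutrinos per chain

flux_m2 = NEUTRINOS_PER_CHAIN * SOLAR_CONSTANT / Q_FUSION_J  # neutrinos / m^2 / s
flux_cm2 = flux_m2 / 1e4
print(f"{flux_cm2:.1e} neutrinos per cm^2 per second")  # ~6e10, i.e., tens of billions
```

The result, around 6 × 10^10 neutrinos per cm² per second, matches the "several tens of billions" quoted in the text.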

5 The New Senses of the Universe: Multimessenger Astronomy

149

In the Homestake experiment, in the homonymous gold mine in South Dakota, Ray Davis and John Bahcall were the first to detect solar neutrinos; the first observation occurred in 1968. Their detector was a large tank containing approximately 600 tons of tetrachloroethylene, a common dry-cleaning fluid. An electron neutrino can convert a chlorine-37 nucleus in the liquid into a radioactive argon-37 nucleus; every few weeks, the handful of argon atoms produced were extracted and counted. The result of measuring the solar neutrino flux was surprising: Davis and Bahcall found approximately one-third of the electron neutrinos the Sun was expected to produce. The prediction was very solid, since the power irradiated by the Sun is well known! Perhaps the experiment had a fundamental error? The answer was given by Masatoshi Koshiba (1926–2020) of the Kamiokande Observatory in Japan, who confirmed the result with a detector different from Davis's. The Kamiokande detector was a huge tank of 3,000 tons of water located in a zinc mine, surrounded by photosensors detecting the flashes of Cherenkov light produced by the charged particles set in motion when neutrinos interact in the water. Koshiba had designed the Kamiokande experiment to detect proton decay, a prediction of grand unified theories; no proton decay was found, but Koshiba realized that the detector could also detect neutrinos and adapted the project accordingly, following the pioneering work of Davis. Having established that solar neutrinos had been detected, and that they were approximately one-third of those expected, the puzzle of solar neutrinos had to be solved. Physicists knew that a mechanism discussed in 1957 by Bruno Pontecorvo could explain this deficit.
However, they hesitated to accept it for various reasons, including the fact that it went beyond the Standard Model of particle physics and that Pontecorvo's explanation required neutrinos to have a nonzero mass (thus introducing two important complications into physics). Perhaps Pontecorvo's controversial personality also played a role. Bruno Pontecorvo (Pisa 1913–Dubna 1993) was a fascinating historical and scientific figure: a student and then assistant of Enrico Fermi, he emigrated to Britain and contributed to the Anglo-Canadian nuclear program; thanks to this work he was granted British citizenship. Later, in 1950, he crossed the Iron Curtain: he moved to Dubna, near Moscow, to the prestigious Soviet research center on atomic energy, where he continued his studies on neutrinos and muons, eventually becoming a citizen and Academician of Sciences of the Soviet Union. Toward the 1980s, faced with increasingly strong evidence, it was finally accepted that neutrinos are not massless particles as predicted by the Standard Model (at least in its original version: it is possible to "patch up" the model in some way) but rather mixed quantum states made up of particles
with mass, combined in well-defined (and complicated) proportions. This allows a neutrino produced as a pure electron neutrino (for example, inside the Sun) to transform during propagation into a mixture of electron, muon, and tau neutrinos, with a reduced probability of being detected by a detector sensitive only to electron neutrinos. This explains the puzzle of solar neutrinos. Davis and Koshiba won the Nobel Prize in Physics in 2002, which they shared with Giacconi. In the last years of his life, Davis (1914–2006) had Alzheimer's disease, and his son Andrew, a professor at Chicago, delivered his Nobel Prize lecture. Let us go back to the 1980s. The only form of extraterrestrial neutrinos discovered were solar neutrinos, but it was known that other stars, and particularly supernovae, should produce huge fluxes of neutrinos. The explosion of a supernova close enough was needed to verify this prediction. This finally happened on February 23, 1987, when a type II (core-collapse) supernova appeared in the Large Magellanic Cloud, a satellite galaxy of the Milky Way clearly visible from our planet's Southern Hemisphere, one hundred and fifty thousand light years away from the Earth. It is the closest supernova to have been observed since that of 1604, which exploded within our galaxy, and therefore the closest supernova observed since the invention of the telescope. A flood of neutrinos from SN 1987A (as the supernova is called) preceded the star's luminous flash by a few hours. The neutrinos were detected by the group led by Koshiba in Japan, by a detector called IMB (Irvine-Michigan-Brookhaven), similar to Kamiokande but located in a mine near Lake Erie in the United States, and by the Soviet Baksan detector, located in the Caucasus mountains.
Kamiokande observed twelve neutrinos from the explosion within a few seconds, IMB observed eight, and Baksan observed five: 25 neutrinos out of a total of approximately 10⁵⁸ emitted in the explosion, of which approximately 10²⁸ crossed our planet. Note also that, since the detectors were located in the Northern Hemisphere and the supernova appeared in the Southern Hemisphere, the detected neutrinos had crossed the Earth. Solar (and generally stellar) and supernova neutrinos have typical energies in the range of 1 MeV to 30 MeV and are relatively abundant. The neutrino flux decreases rapidly with energy, and the most difficult part of the game remained: detecting high-energy cosmic neutrinos, for example, from active galactic nuclei or other processes. It was evident that the size of the existing detector targets (a few thousand tons) needed to be increased. In the 1990s, a nearly blind race began to understand how large a neutrino detector would need to be to detect cosmic neutrinos. The answer to this fascinating question of experimental astrophysics was found only a few years ago.
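The flavor transformation that solves the solar neutrino puzzle can be illustrated with the standard two-flavor formulas. This is only a sketch: the mixing parameters below are assumed present-day values, and for the higher-energy solar neutrinos that Davis detected the full treatment involves matter (MSW) effects inside the Sun.

```python
import math

def vacuum_survival(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor electron-neutrino survival probability in vacuum."""
    return 1.0 - math.sin(2 * theta)**2 * math.sin(1.267 * dm2_ev2 * L_km / E_GeV)**2

theta12 = math.radians(33.4)   # solar mixing angle (assumed value)
dm2_21 = 7.4e-5                # solar mass-squared splitting, eV^2 (assumed value)

# Many oscillation lengths fit in the Earth-Sun distance, so a detector sees
# the average of the rapid oscillation: 1 - 0.5 * sin^2(2*theta12).
avg_vacuum = 1.0 - 0.5 * math.sin(2 * theta12)**2

# Matter (MSW) effects in the Sun push the survival probability of the
# higher-energy solar neutrinos toward sin^2(theta12) -- close to the
# one-third deficit measured at Homestake.
msw_limit = math.sin(theta12)**2

print(f"averaged vacuum survival: {avg_vacuum:.2f}")
print(f"MSW high-energy limit:    {msw_limit:.2f}")
```

With these assumed parameters, the matter-dominated limit comes out near 0.3, the "one-third" deficit of the text.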
As mentioned, it was first necessary to increase the active volume of the detectors, and no one knew how far it would be necessary to go.

Very-High-Energy Cosmic Neutrinos

Koshiba and his collaborator Takaaki Kajita designed an even larger and more sensitive version of Kamiokande, called Super-Kamiokande (Fig. 5.9), or Super-K. Super-K consists of a cylindrical detector 39 m in diameter and 42 m high containing 50,000 tons of water; it is placed in the Kamioka mine, Japan, under a kilometer of rock that shields it from all cosmic rays except neutrinos. The water is the target for neutrino interactions; the particles produced emit a flash of Cherenkov light, revealed by 11,000 half-meter-diameter photosensors covering the inner surface of the cylinder. With this detector, it is possible to reconstruct the energy and direction of the particles produced by the neutrinos. It became operational in 1996 and immediately made an important observation: the muon neutrinos produced by cosmic rays in the atmosphere, like the electron neutrinos coming from the Sun, "disappeared" by transforming into another type of neutrino. The result was confirmed by the Canadian Arthur B. McDonald of the Sudbury Neutrino Observatory (SNO), located 2 km below the surface of the Earth in the Creighton mine near the city of Sudbury, Ontario, and by researchers from the Gran Sasso laboratory in Italy. Pontecorvo's hypothesis was finally confirmed. For this discovery, Kajita and McDonald were awarded the Nobel Prize in Physics in 2015.

Fig. 5.9 Engineers inspecting photosensors in Super-Kamiokande. Credit: Kamioka Observatory, ICRR (Institute for Cosmic Ray Research), the University of Tokyo

However, there was still no trace of cosmic neutrinos apart from those from the Sun and SN 1987A. Detectors even larger than Super-K had to be built. The next step beyond Super-K's 50,000 tons of water was based on the assumption that hundreds of thousands of tons would suffice to detect the background of cosmic neutrinos in the Universe and possibly some sources of these neutrinos. However, the extensions in this direction, the underwater ANTARES detector off the coast of Marseille, France, the Baikal detector in the homonymous lake near Irkutsk in Russia, and the AMANDA detector buried in the ice of Antarctica, did not lead to significant results. A further qualitative leap was needed: a detector of one billion tons, i.e., one cubic kilometer of water or ice. There were only two possibilities for such a detector: the depths of the sea and the ice of Antarctica. The IceCube project, proposed by the Belgian-American physicist Francis Halzen, uses photodetectors buried in the ice of Antarctica. The heart of the observatory includes over 5,000 photodetectors the size of basketballs, organized into 86 strings of 60 detectors each and arranged within a cubic kilometer of ice (Fig. 5.10). The strings of detectors were lowered into holes drilled in the ice with hot water jets; shortly after the operation, the water froze again, locking the photodetectors in position. The construction of the detector was completed at the end of 2010. IceCube was the first telescope to detect extrasolar neutrinos apart from those of SN 1987A. Since its construction, the detector has detected approximately one neutrino per month that is incompatible with an origin in the interaction of cosmic rays with the atmosphere.
These neutrinos typically have very high energies, a few hundred TeV and beyond: at lower energies, the contamination from neutrinos produced by the interaction of cosmic rays with the atmosphere (atmospheric neutrinos) dominates. However, with very few exceptions, it has not yet been possible to associate these neutrinos with specific cosmic sources. The luckiest strike dates to September 22, 2017. IceCube detected a neutrino with an energy of approximately 300 TeV. The localization of the direction of a neutrino of this kind is rather accurate, approximately 0.5 degrees, but there are still several candidate sources in a cone of such an aperture. IceCube immediately issued a "neutrino alert" to all telescopes, in space and on Earth, hoping that their observations could help precisely identify the source position—gamma-ray detectors have positional accuracies of better than 0.1 degrees. And they did.

Fig. 5.10 Top: Artistic composition, based on a real image of the IceCube Laboratory at the South Pole and the IceCube invisible sensors in the ice. Bottom: a display of what is inside the ice. Credit: IceCube collaboration

The Fermi satellite detected enhanced gamma-ray emission from a known source in a flaring state: the blazar TXS 0506+056, an active galactic nucleus at the heart of a galaxy located 4.5 billion light-years away from Earth, consistent with the direction indicated by IceCube. The MAGIC telescopes immediately oriented their giant mirrors toward the source, managing to
observe it at an energy a thousand times greater than Fermi's, thus providing another important piece of the puzzle. Thanks to the combination of all the different observations, a masterpiece of multimessenger astronomy, it was possible for the first time to identify a neutrino source outside the Local Group of galaxies. It is a much more energetic source than a supernova: a supermassive black hole in accretion. We recall one of the most important aspects of neutrino astrophysics: the presence of a neutrino is the signature either of the acceleration of cosmic rays to energies an order of magnitude larger, or of the annihilation or decay of dark matter particles. In the case of IceCube neutrinos, at energies of hundreds or thousands of TeV, the second hypothesis is very unlikely. IceCube neutrinos trace cosmic rays with energies on the order of PeV and beyond. In November 2022, evidence of high-energy neutrino emission from NGC 1068, also known as Messier 77, an active galaxy in the constellation Cetus and one of the most familiar and well-studied galaxies to date, was announced; this galaxy, located 47 million light-years away from us, can be observed with large binoculars. The first neutrino sources identified by IceCube researchers were, therefore, two galaxies outside the Milky Way. Only in June 2023 did IceCube announce the detection of a significant neutrino flux from our own galaxy. The data still cannot pinpoint individual sources, and thus the final confirmation of Zwicky's conjecture is postponed. Summarizing, cosmic neutrinos have been proven to be detectable (and this is already a success) and have already told us many things. In particular,
• The arrival directions of the most energetic neutrinos (Fig. 5.12) are consistent with a uniform distribution across the sky.
Coincident observations of neutrinos and gamma rays from the blazar TXS 0506+056 provided evidence for the first extragalactic high-energy neutrino source, but present neutrino detectors are not sensitive enough to probe the sources by themselves (unless a cataclysmic event happens nearby). We need larger detectors to make neutrinos an astronomical tool.
• Blazars observed in gamma rays can explain at most approximately one-quarter of the diffuse neutrino flux. It is thus reasonable to assume that another class of sources contributes to the neutrino flux. A particularly powerful class of accelerators, which need not show coincident flares in the gamma-ray band, could be at work—perhaps non-jetted active galactic nuclei. Possibly starburst galaxies, characterized by a high star formation rate (a hundred new stars per year and more, compared with a rate of a couple per
year in the Milky Way), are the efficient cosmic ray factories and accelerators that dominate the Universe.
• Gamma-ray bursts are not an important source of cosmic protons and nuclei. IceCube has been searching for neutrinos arriving from the direction, and at the time, of gamma-ray bursts. After more than one thousand follow-up observations, none were found, limiting the contribution of gamma-ray bursts to the cosmic neutrino flux to less than approximately one percent.

The Future of Neutrino Astronomy

Now we know that today's neutrino detectors have almost the right size for astronomy: in 10 years of observation by IceCube, we have identified approximately a hundred cosmic neutrinos, and for one of these, we have finally identified the source. This "almost" also hides a negative aspect: we now know that the right size is approximately 10 times larger than that of the existing detectors, i.e., approximately 10 km³. To this end, a project for the extension of IceCube is about to begin. By doubling the already deployed instrumentation, the telescope can reach a volume of 10 km³, aiming for an increase of one order of magnitude in neutrino detection rates. IceCube-Gen2 will provide an unprecedented view of the high-energy universe, taking neutrino astronomy to new levels of discovery. Unfortunately, this colossal extension (Fig. 5.11) will require at least a decade of work.

Fig. 5.11 Future IceCube-Gen2 detector, including the optical array (blue shaded region) that contains IceCube (red shaded region) and a more densely instrumented core that will include additional sensors added in the next few years as part of the IceCube Upgrade project underway (green shaded region). Credit: IceCube Collaboration

Fig. 5.12 Arrival directions of neutrinos from IceCube (different symbols indicate different types of neutrino events). The blue-shaded region indicates where the Earth's absorption of such neutrinos becomes important, and as a consequence, this area is blinded. The dashed line indicates the equatorial plane. Credit: IceCube collaboration

The project in the Mediterranean Sea is also gearing up to achieve the appropriate sensitivity for astronomy. Groups coordinating underwater experiments in France, Italy, and Greece have joined forces to create Km3NeT, consisting of a series of photodetector strings dropped into the Mediterranean Sea, in particular off Marseille and off Cape Passero in Sicily (Fig. 5.13), covering approximately 3 km³ of volume. A more advanced detection system than that of IceCube-Gen2 should make it possible to achieve a comparable sensitivity. This experiment is also expected to start in the 2030s. In Japan, the construction of Hyper-Kamiokande has begun: an extension of Super-Kamiokande with a volume several times larger (approximately 260,000 tons of water) and better instrumentation, particularly sensitive to extragalactic supernovae. The telescope should start collecting data in the early 2030s.
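All of these instruments, in ice or in water, are Cherenkov detectors, and two numbers govern the technique. Both follow from the refractive index alone; the sketch below assumes n ≈ 1.33 for water (Antarctic ice is similar).

```python
import math

n = 1.33  # refractive index of water (assumed; ice is similar)

# Cherenkov light is emitted at cos(theta) = 1/(n*beta); for beta -> 1:
theta_deg = math.degrees(math.acos(1.0 / n))

# A particle radiates only if beta > 1/n, i.e. above a kinetic-energy threshold.
def kinetic_threshold_mev(mass_mev):
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return (gamma - 1.0) * mass_mev

print(f"Cherenkov angle in water: {theta_deg:.1f} deg")            # ~41 deg
print(f"electron threshold: {kinetic_threshold_mev(0.511):.2f} MeV")
print(f"muon threshold:     {kinetic_threshold_mev(105.7):.1f} MeV")
```

The fixed ~41-degree emission angle is what lets the photosensor rings reconstruct the direction of the particles produced by a neutrino interaction.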

Fig. 5.13 An artist's view of the Km3NeT detector, seen from inside the sea. Credit: Km3NeT collaboration

Gravitational Waves

Gravitational waves, whose existence was hypothesized a century ago by Albert Einstein in his theory of General Relativity, are the messengers of the weakest of all known interactions: the gravitational interaction. They are associated with the relative motion of masses and are expected to be particularly intense under the extreme accelerations associated with cosmic cataclysms. They travel at the speed of light, and the Universe is transparent to them: they therefore constitute a privileged messenger for astronomy. Given the transparency of matter to them, gravitational waves arrive directly from the heart of compact objects, providing otherwise inaccessible data. They are also the first signal emitted in gravitational collapses. Their existence was indirectly demonstrated with the pioneering discovery in 1974 of the energy loss of the binary pulsar PSR 1913+16, whose measured orbital-period decay over time is consistent with the energy loss predicted for gravitational wave emission (R.A. Hulse and J.H. Taylor, Nobel Prize in Physics 1993). Gravitational waves are produced by the acceleration of masses deviating from spherical symmetry and propagate by stretching and compressing spacetime: distances alternately increase and decrease in two directions at 90 degrees to each other, both orthogonal to the direction of wave motion. The effect is very small: to observe it, detectors sensitive to relative changes in distance of one part in 10²² are needed, i.e., a compression of space that would change the distance between the Earth and the Sun by approximately the size of an atom. Albert Einstein, who predicted the existence of gravitational radiation, believed that it was too weak ever to be detected directly.
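The quoted sensitivity is easy to turn into absolute distances, since the dimensionless strain h is defined by ΔL = h · L. A couple of lines of arithmetic reproduce the comparisons in the text:

```python
h = 1e-22       # dimensionless strain sensitivity quoted in the text
ARM = 4e3       # LIGO-like interferometer arm length, m
AU = 1.496e11   # Earth-Sun distance, m

dl_arm = h * ARM   # length change over a 4-km arm
dl_au = h * AU     # length change over the Earth-Sun distance

print(f"over 4 km arms:  {dl_arm:.1e} m (thousands of times smaller than a proton)")
print(f"over 1 AU:       {dl_au:.1e} m (a fraction of the size of an atom)")
```

The 4 × 10⁻¹⁹ m figure for a kilometer-scale arm is why the interferometers described below need such extraordinary isolation from every other source of vibration.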

Fig. 5.14 Schematic diagram of the operation of a gravitational wave interferometer. From Wikimedia Commons

The typical instrument is an interferometer with two perpendicular arms or a triangle in which the sum of the internal angles is measured (if this is different from 180 degrees, it demonstrates that space has been deformed). Laser interferometry allows measuring distance variations as small as those induced by a gravitational wave. The operating principle can be schematized as follows. A single beam of light is split into two identical beams through a semitransparent mirror tilted at 45 degrees. The beams follow different paths before reuniting to reach the detector. The difference in distance traveled creates a phase shift between the two beams and gives rise to an interference pattern between two waves that were originally identical. If the length of one of the paths changes, so does the phase difference (Fig. 5.14). This is just the principle: reality requires complex technical adjustments, and the physics of gravitational wave detectors is among the most sophisticated experimental activities. As an example, additional mirrors are inserted near the beam splitter to allow multiple reflections of the laser beam, containing it within the interferometer and increasing the distance traveled by the beams. I often feel like a craftsman when I meet my colleagues studying gravitational waves. Three large interferometers are currently in operation: two (separated by a distance of approximately 3,000 km) in the United States that make up the LIGO (Laser Interferometer Gravitational-wave Observatory) project, with arms of 4 km, and one in Cascina near Pisa (Fig. 5.15), with arms of 3 km, called Virgo. These interferometers work in synergy to increase efficiency and enable the localization of possible sources of gravitational waves. The detection by all three instruments allows for the direction of origin to be determined using triangulation methods. Note that triangulation methods to localize cosmic
sources had already been proposed by Galileo Galilei at the time of the explosion of the 1604 supernova—Galileo, at that time in Padua, made his triangulation with colleagues from Spain and from Naples.
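The interference principle described above can be sketched numerically. This is an idealized Michelson interferometer: ΔL is the round-trip path difference between the two arms, and the 1064 nm infrared laser wavelength is an assumed typical value.

```python
import math

WAVELENGTH = 1064e-9  # m, typical infrared laser wavelength (assumed)

def relative_intensity(delta_l):
    """Photodetector intensity I/I0 for a round-trip path difference delta_l."""
    phase = 2 * math.pi * delta_l / WAVELENGTH
    return math.cos(phase / 2.0)**2

print(relative_intensity(0.0))             # 1.0: fully constructive interference
print(relative_intensity(WAVELENGTH / 2))  # ~0: dark fringe
# A tiny extra path difference moves the output away from the dark fringe,
# which is how a passing wave becomes a measurable change in light intensity:
print(relative_intensity(WAVELENGTH / 2 + 1e-12))
```

Real detectors add the recycling mirrors and multiple reflections mentioned above, but the conversion of a path-length change into an intensity change is the heart of the measurement.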

Fig. 5.15 Aerial view of the LIGO detector near Hanford, Washington, US. There are two LIGO installations; the other is near Livingston, Louisiana, US. Credit: LIGO/Virgo Collaboration

LIGO started operating in September 2015 and immediately detected a strong gravitational signal; both interferometers saw it, with a time delay of approximately 7 milliseconds, consistent with their separation. Figure 5.16 summarizes the observation: the signal increases in frequency and amplitude for 200 milliseconds, reaching a maximum frequency of approximately 150 Hz. The interpretation of this signal is the merger of two orbiting black holes; the total mass of the two bodies was estimated at approximately 65 solar masses, and the energy released was 3 solar masses, i.e., 6 × 10⁴⁷ joules—enough to satisfy the energy needs of the population of the Earth for 10²⁷ years, a time much longer than the age of the Universe. It was the first signal of a gravitational wave. The gravitational waves produced by that event had traveled for 1.3 billion years before hitting the twin instruments of the LIGO detector.
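Those striking figures follow from E = mc² and simple division; the world's yearly energy consumption, taken here as roughly 6 × 10²⁰ J, is an assumed round number.

```python
M_SUN = 1.989e30           # kg
C = 2.998e8                # m/s
WORLD_CONSUMPTION = 6e20   # J per year, assumed round figure

energy = 3 * M_SUN * C**2  # three solar masses radiated as gravitational waves
years = energy / WORLD_CONSUMPTION

print(f"energy released: {energy:.1e} J")                   # ~5-6 x 10^47 J
print(f"world energy needs covered for ~{years:.0e} years")
```

The result, of order 10²⁷ years, dwarfs the roughly 1.4 × 10¹⁰ years elapsed since the Big Bang, as the text notes.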

Fig. 5.16 The first detection of a gravitational wave signal. Top: The waveform of the merger of the binary black hole system GW150914 (September 14, 2015): gravitational-wave strain amplitude in the LIGO detector. Bottom: The separation of the black holes in units of the approximate final black hole radius R_S, and the relative velocity normalized to the speed of light. From Phys. Rev. Lett. 116, 061102 (2016)

That discovery brought an impressive advance in the knowledge of cosmic phenomena and won the 2017 Nobel Prize in Physics for the gravitational wave theorist Kip Thorne and the experimentalists Barry Barish and Rainer Weiss. Today, LIGO and Virgo detect approximately one gravitational signal per week. For most of them, the origin is understood: they come from the merging of two compact objects—two black holes, two neutron stars, or a black hole and a neutron star. Among the captured signals, one is of particular importance. In August 2017, the first gravitational wave emitted by the collision of two neutron stars was observed, accompanied by the production of various types of radiation, particularly gamma rays. The LIGO-Virgo detection of the event, called GW170817, marked a new way of studying the Universe: it was the birth of so-called multimessenger astronomy, which only a year later would be complemented by the joint signal of gamma rays and neutrinos from the active galactic nucleus TXS 0506+056.

In a few years, the LIGO-Virgo observations have already produced revelations about some of the most energetic and cataclysmic processes in the Universe. From the first gravitational wave event in 2015 and the more recent black hole mergers observed, it is now known that:
• there is a population of black holes paired in orbitally bound binary systems that evolve through the emission of gravitational waves and then merge;
• many black holes of tens and even hundreds of solar masses exist in nature;
• the properties of the observed black holes seem entirely consistent with Einstein's general relativity.
In particular, GW170817 and the subsequent multiwavelength observations, ranging from radio waves to the gamma-ray domain, demonstrated the following:
• short gamma-ray bursts are a consequence of the merging of binary compact objects;
• binary mergers produce heavy-element nucleosynthesis;
• gravitational waves travel at the same speed as light to better than a few parts in 10¹⁵.
Additionally, the LIGO and Virgo detections have enabled tests of general relativity in the strong-gravity regime that were inaccessible to other experiments and astronomical observations, motivating research on many fronts in fundamental physics and astrophysics. The 3-km Kamioka Gravitational Wave Detector (KAGRA) in Japan recently joined LIGO and Virgo to form the LIGO-Virgo-KAGRA network; the LIGO-India interferometer will join later this decade, further improving the ability of the network to confidently detect and locate gravitational wave events and providing new ways of testing alternative theories of gravity through an enhanced ability to resolve polarization.
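Triangulation with a detector network works through arrival-time differences: a delay Δt between two sites constrains the source to a ring on the sky at an angle θ from the baseline, with cos θ = cΔt/d. A minimal sketch, using the ~3,000 km LIGO separation and the 7-ms GW150914 delay quoted earlier (the resulting angle is illustrative):

```python
import math

C = 2.998e8         # m/s
SEPARATION = 3.0e6  # m, approximate distance between the two LIGO sites

def ring_angle_deg(delay_s, baseline_m=SEPARATION):
    """Angle between the baseline and the source direction, from a time delay."""
    cos_theta = C * delay_s / baseline_m
    if abs(cos_theta) > 1.0:
        raise ValueError("delay longer than light travel time between sites")
    return math.degrees(math.acos(cos_theta))

max_delay_ms = SEPARATION / C * 1e3
print(f"maximum possible delay: {max_delay_ms:.0f} ms")  # ~10 ms
print(f"7 ms delay -> ring at {ring_angle_deg(7e-3):.0f} deg from the baseline")
# One pair of detectors gives only a ring; each additional detector adds an
# independent delay, shrinking the intersection to a small patch of sky.
```

This is why each detector added to the network (KAGRA, LIGO-India, a possible Southern-Hemisphere site) directly improves sky localization.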

The Future of Gravitational Wave Astronomy

What has been demonstrated so far by gravitational wave detectors captures only a fraction of the potential science afforded by future observations of gravitational waves. Future ground-based detectors target as much as a tenfold increase in sensitivity over the existing network. The key change will be an increase in the arm length. Among the future protagonists of gravitational wave astrophysics is the Einstein Telescope (ET), which is expected to become operational in the mid-2030s and could detect gravitational waves at cosmological distances, such as those from the coalescence of massive black holes, and perhaps gravitational waves produced in the early moments of the Universe.

Fig. 5.17 A rendering of the future Einstein Telescope. Credit: the ET Collaboration

ET (Fig. 5.17) will be a triangular-shaped interferometer. Its size will mark an increase from the 3–4 km arms of the current LIGO-Virgo detectors to 10 km; the optics will be cooled to a temperature of 10–20 K to reduce thermal noise, and new quantum technologies will be adopted to reduce light fluctuations. Currently, there are two candidate locations to host the Einstein Telescope: the region surrounding the Sos Enattos mine in Sardinia and the border region between the Netherlands, Belgium, and Germany. Multimessenger astronomy requires prompt and relatively precise localization of gravitational wave events. The current network includes four km-scale observatories in operation: the two LIGOs, Virgo, and KAGRA. The addition of LIGO-India later this decade will further improve the sky localization capability. An additional detector complementing the network in the Southern Hemisphere would be needed to form a powerful array with localization errors smaller than 10 degrees, and studies are ongoing in Australia for a possible implementation. The community of physicists in the United States is considering a gravitational wave telescope concept, Cosmic Explorer, featuring two facilities, one 40 km on a side and one 20 km on a side, each housing a single L-shaped detector.

Fig. 5.18 The LISA constellation (left) and its orbit (right). Credit: ESA

Complementary to ET and Cosmic Explorer, and even more sensitive (although in frequency regions corresponding to different physical phenomena), is a space interferometer called LISA (Laser Interferometer Space Antenna), under development by ESA in collaboration with NASA. LISA is a constellation of three satellites separated by 2.5 million km (approximately 7 times the distance between the Earth and the Moon). It will be placed in orbit in the solar system (Fig. 5.18) in the mid-2030s and will be capable of detecting the first seed black holes, tracing the evolution of black holes from the early Universe through the peak of the star formation era. LISA will directly map the curvature of spacetime at the event horizons of massive black holes. It might also detect stellar-mass binary black hole systems and provide very precise sky localization of such events for electromagnetic follow-up. LISA will be another ideal player in multimessenger astronomy.
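A rough way to see why LISA opens a different window is that an interferometer responds best to waves with periods comparable to the light travel time along its arms, i.e., up to frequencies of roughly f ~ c/(2πL). This is an order-of-magnitude scaling, not a real sensitivity curve:

```python
import math

C = 2.998e8  # m/s

def corner_frequency_hz(arm_length_m):
    """Very rough upper frequency scale set by the arm length."""
    return C / (2 * math.pi * arm_length_m)

print(f"LIGO (4 km arms):      ~{corner_frequency_hz(4e3):.0e} Hz")
print(f"LISA (2.5 million km): ~{corner_frequency_hz(2.5e9):.0e} Hz")
# LISA's millihertz band is where merging massive black holes emit, far
# below the audio-band signals seen by ground-based detectors.
```

Longer arms thus mean lower frequencies and heavier sources, which is why kilometer-scale ground detectors and million-kilometer space detectors are complementary rather than redundant.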

Putting All This Together

In the previous chapter, we have seen how we can take advantage of different wavelengths of light to study astrophysical objects in the Universe.
• Gamma rays and X-rays reveal high-energy emitters such as pulsars, black holes, and transient events, allowing us to inspect fundamental processes.
• Ultraviolet and visible light reveal stars and star-forming material, with complementary properties related to the transparency of the Universe.
• Infrared light shows the presence of cooler gas and dust.
• Microwave and radio light reveal jets of particles and diffuse background emissions.

Whenever we look at an object in a different wavelength of light, we can reveal an entirely new class of information about it. All these messengers, although different, are still light—photons. However, cosmic objects in the Universe do not just emit light. Light is just one member of a rich family of cosmic messengers, which includes electrons and positrons, nuclei, neutrinos, and gravitational waves. We have been studying these particles for more than a century, but we have only recently become able to use them for astronomy. Thanks to the detectors built at the beginning of this century and to those to be built in the coming years, when the next nearby supernova occurs, we shall certainly be able to detect both light and particles, and possibly gravitational waves, too. The amount of knowledge we obtain from violent events in the Universe is becoming impressive. The Universe offers various kinds of signals for us to gather, such as light, particles, and gravitational waves, each conveying distinct information. Integrating these signals can yield a more comprehensive understanding of our cosmic past than relying on any single source. Although multimessenger astronomy is relatively new, we can anticipate a flood of fresh events and insights as this field advances over the 21st century.

6 Cosmic Rays in Our Lives

Cosmic rays can have various effects on life and the environment.
• On atmospheric chemistry: they can interact with atmospheric molecules and affect the chemical composition of the atmosphere, which can have consequences for weather patterns, climate, and the capability of the atmosphere to protect us from ultraviolet light and other biological hazards.
• On living beings: cosmic rays can damage DNA in living organisms, leading to mutations and an increased risk of cancer.
Moreover, cosmic rays have affected us, and continue to affect our lives, in many other ways.

Variations in Cosmic Ray Fluxes

The flux of cosmic rays can vary and has varied in the past; these variations have affected our lives. Cosmic rays affect, for example, the stratospheric ozone layer, the "sunscreen" protecting living things from too much ultraviolet radiation from the Sun. Ozone is an oxygen molecule grouping three atoms, O₃, instead of the two of normal oxygen, O₂. Ozone is a blue gas with a characteristic pungent odor (you can smell it after thunderstorms, since lightning enhances ozone formation near the ground). Since charged particles, which constitute the majority of cosmic rays, pass through the solar and terrestrial magnetic fields, any variation in these fields changes the flux of cosmic rays.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. De Angelis, Cosmic Rays, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-38560-5_6

A periodic variation in the cosmic ray flux is anticorrelated with the solar magnetic field—i.e., when the solar magnetic field is stronger, the cosmic ray flux is lower. The variation in the solar magnetic field, and more generally in solar activity, known as the solar cycle, has a period of approximately 11 years. The effect is shown in the lowest panel of Fig. 6.1. The physical mechanism of solar activity is unknown but is currently believed to be of purely solar origin, with several theories attempting to explain it.

Fig. 6.1 Flux of cosmic rays during the last 9,400 years. (A) Geomagnetic dipole field strength relative to today. (B) Cosmic radiation over the last 9,400 years. Time (year BP) is given as years before 2012. The gray band represents the statistical uncertainty of the individual radionuclide records. (C) Same as (B), but zoomed in on the past millennium. Capital letters mark grand solar minima: O: Oort, W: Wolf, S: Spörer, M: Maunder, D: Dalton, G: Gleissberg. (D) Same as (C), but zoomed in on the past 350 years. Time is given as the year AD. At the bottom, the annual sunspot number is plotted. From F. Steinhilber et al., PNAS 109 (2012) 5967

Even the Earth’s magnetic field undergoes variations, up to the point of magnetic field reversals, which seem to occur irregularly (which is a way of saying that we do not understand their cause and cannot predict their occurrence). However, it has been shown that the changes in cosmic ray flux induced by this effect are much smaller than those related to the Sun. Other small variations are due to lunar periodic cycles, caused by the perturbation of the Earth-Sun magnetic connection by the passage of the Moon.

Looking at longer periods, the so-called Milankovic cycles are caused by periodic changes in the Earth’s orbit around the Sun. In the 1920s, the Serbian geophysicist and astronomer Milutin Milankovic (1879–1958) suggested that cyclical changes in the distribution of solar radiation on Earth’s surface across different latitudes and seasons can be attributed to the combined effects of variations in the eccentricity and axial tilt of the orbit, and of the precession of the Earth’s axis of rotation. He also proposed that these changes played a significant role in shaping the planet’s climate, with combinations of cycles whose periods range from tens of thousands to hundreds of thousands of years.

The Earth may also experience a periodic variation in high-energy cosmic ray flux due to our galaxy falling toward the Virgo cluster (a great gravitational attractor), coupled with the oscillatory movement of our solar system perpendicular to the galactic plane. This model predicts cycles of approximately 60 million years.

A nonperiodic and unpredictable variation occurs from time to time when the Earth is close to an intense radiation source, such as a supernova. The movement of our solar system through a dense interstellar cloud can also push back the solar wind plasma, which protects the Earth from cosmic rays.
With little shielding from this plasma, there may be an increase in the rate of anomalous cosmic rays, which can cause serious damage to the ozone layer.

Since several isotopes are produced by primary cosmic rays in spallation reactions (i.e., fragmentation of the target nucleus), a variation in the cosmic ray flux can be determined by measuring the concentration of these isotopes (for example, in tree rings, ice cores, deep-sea sediments, and meteorites). Data on radiocarbon, a radioactive isotope of carbon that we will discuss later, are widely used for this purpose. Since the half-life (i.e., the time for the concentration of the element to decrease by half) of radiocarbon is approximately 5,700 years, this isotope allows us to monitor cosmic ray variations on time scales on the order of 10,000 years. There are other radioisotopes with longer half-lives, e.g., beryllium-10, an unstable isotope made of 4 protons and 6 neutrons, which decays into boron-10 (made of 5 protons and 5 neutrons) with a half-life of approximately 1.4 million years. Thanks to the study of the concentration of these radioisotopes, we can roughly reconstruct the cosmic ray flux up to ten to twenty million years ago, the epoch of the appearance on Earth of the great anthropomorphic apes. However, the data on the last 9,400 years are more precise, thanks to measurements of ice cores and tree rings (Fig. 6.1).

Smaller but still measurable effects, fundamental for experiments, are due to pressure and temperature. Atmospheric pressure experiences temporal variations of different types, both periodic and aperiodic. Periodic variations occur daily and annually and are linked to the day/night cycle, atmospheric tide phenomena, and the expansion of air due to solar radiation-induced temperature changes. These variations exhibit two maxima and two minima within a 24-hour cycle: one maximum, more pronounced, at approximately 10 a.m. and another at approximately 10 p.m., while the first minimum occurs at approximately 4 a.m. and the second, more pronounced, at approximately 4 p.m. The amplitude of these changes is typically a few millibars at most and depends on the observation site’s location and the season. Aperiodic variations, on the other hand, occur due to local weather conditions and can have much larger amplitudes, up to 10–20 millibars, lasting several days and sometimes masking the regular variations. Further small annual variations in atmospheric pressure are also observed, associated with seasonal cycles and the ground’s heating and cooling; these are of smaller amplitude and vary significantly depending on the geographic observation site. As an example, Fig. 6.2 compares the atmospheric pressure and the corresponding flux of cosmic rays measured by the count rate in a cosmic ray detector—the chamber-based telescopes of the EEE experiment.
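The half-life bookkeeping behind this kind of radioisotope dating is simple to sketch. The following lines (an illustrative example, using only the half-lives quoted above) show why carbon-14 probes time scales of about 10,000 years while beryllium-10 is useful over millions of years:

```python
def surviving_fraction(age_years: float, half_life_years: float) -> float:
    """Fraction of an isotope remaining after a given time: N/N0 = 2^(-t / T_half)."""
    return 2.0 ** (-age_years / half_life_years)

# Half-lives quoted in the text
CARBON_14_HALF_LIFE = 5700.0    # years
BERYLLIUM_10_HALF_LIFE = 1.4e6  # years

# After 10,000 years, most of the carbon-14 has decayed away...
print(surviving_fraction(10_000, CARBON_14_HALF_LIFE))     # about 0.30
# ...while beryllium-10 is still almost entirely there,
# which is why it probes much longer time scales.
print(surviving_fraction(10_000, BERYLLIUM_10_HALF_LIFE))  # about 0.995
```

The comparison of measured concentrations with these expected surviving fractions is what converts isotope abundances into a record of past cosmic ray fluxes.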
The Extreme Energy Events (EEE) experiment, dedicated to the study of secondary cosmic rays, is a very large cooperative experiment based on 60 telescopes distributed over the whole Italian territory; each telescope is made of three position-sensitive chambers and makes it possible to reconstruct the trajectories of cosmic muons with high efficiency and excellent angular resolution. A unique aspect of the EEE network is that most of its telescopes are located in high schools and are operated by groups of students and teachers, who were also responsible for their construction at CERN. Recent reviews of data on the muonic component of cosmic rays, as measured by detectors located in various regions of the world, have made it possible to investigate barometric coefficients and temperature effects as a function of the geomagnetic cutoff and of the muon arrival angle. For instance, it has been observed that the barometric coefficient of such detectors is smaller in regions with a low geomagnetic cutoff, and increases as the cutoff increases.


Fig. 6.2 Flux of cosmic rays versus atmospheric pressure measured by the EEE experiment. Courtesy of Francesco Riggi and the EEE group

The soft component of secondary cosmic radiation usually exhibits a stronger dependence, typically approximately 0.3% per millibar.

Even in the absence of atmospheric pressure variations at ground level, temperature-induced fluctuations in the density of the air column above the observation site can affect the measured flux of secondary cosmic rays. The effect of temperature is attributed to the interplay of interaction and decay for the secondary particles produced in the atmosphere, mostly pions and kaons, which in turn decay into muons. An increase in temperature results in a reduction in density, slightly lowering the probability of interaction while enhancing the effects of decay, and ultimately leading to an increase in the rate with increasing temperature (a positive correlation between temperature and observed rate). The other effect, characterized by a negative correlation between temperature and observed rate, is associated with the decay of the muons themselves: as the temperature rises and the atmosphere expands, muons are produced at higher altitudes and must travel longer distances before reaching the ground, so a larger fraction of them decays in flight, reducing the muon flux at ground level. For high-energy particles, a positive correlation is observed between cosmic ray flux and temperature, whereas the opposite holds for low-energy particles. Additionally, while pressure variations are typically interpreted in terms of the atmospheric pressure measured at ground level, temperature-related variations require knowledge of the temperature profile across the various layers of the atmosphere. This type of data, more challenging to obtain, can usually be recorded only by experiments on atmospheric balloons.
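As a rough numerical illustration (a generic sketch, not the analysis procedure of any specific experiment), the standard barometric correction applied to counting rates has an exponential form, R0 = R · exp(β · (P − P0)); with the roughly 0.3% per millibar coefficient quoted above for the soft component:

```python
import math

def pressure_corrected_rate(measured_rate: float,
                            pressure_mbar: float,
                            reference_pressure_mbar: float = 1013.25,
                            beta_per_mbar: float = 0.003) -> float:
    """Refer a measured cosmic ray counting rate to a reference pressure.

    Uses the usual exponential barometric correction R0 = R * exp(beta * (P - P0)).
    beta ~ 0.3%/mbar is the figure quoted in the text for the soft component;
    the reference pressure and rate here are illustrative.
    """
    return measured_rate * math.exp(
        beta_per_mbar * (pressure_mbar - reference_pressure_mbar))

# A 10 mbar pressure increase suppresses the observed rate by about 3%;
# the correction brings it back to the reference-pressure value.
print(pressure_corrected_rate(97.0, 1023.25))  # about 100 (counts/min)
```

Applying such a correction removes the dominant atmospheric modulation, so that the residual variations of the counting rate can be attributed to solar or geomagnetic effects.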


Cosmic Rays and Life

Cosmic rays deposit a large amount of energy in the atmosphere, ionizing it and causing changes in its chemistry; in part, secondary particles contribute to radioactivity in the biosphere. Life on Earth has evolved in the presence of this radiation, which, as we have seen, has varied considerably over time. During the late 1920s, John Joly proposed that cosmic radiation could have a long-term impact on living organisms and the evolution of life. He was also among the first researchers to investigate the relationship between radioactive sources and cancer. Victor Hess and his colleagues discovered that secondary particles produced by the interaction of cosmic rays can have biological effects beyond atmospheric ionization. In his Nobel lecture on December 12, 1936, Hess made a bold prediction: “The investigation of the effects of cosmic rays on life will be of great interest.” Below we describe the mechanisms through which cosmic rays could influence life and the potential implications of variations in the flux of cosmic radiation.

Ionization and Chemistry of the Atmosphere

Changes in atmospheric chemistry induced by ionization can have significant implications for the ozone layer in the upper atmosphere. The ozone layer blocks harmful ultraviolet radiation with a wavelength of approximately 0.3 micrometers, which interacts directly with DNA, damaging it. The harmful effect of a decreasing ozone concentration impacts simple organisms such as phytoplankton, which are at the base of the food chain and responsible for half of the world’s oxygen production. However, since high-energy primary cosmic rays deposit most of their energy at lower altitudes, the effect on the ozone layer does not scale directly with the flux. In addition to the effect on ozone, cosmic rays facilitate the production of nitrogen oxides (composed of one nitrogen atom and one or more oxygen atoms), which combine with water to become nitrates. Nitrates are deposited on the ground by rain and act as fertilizers, which can increase plant growth in the short term.

Cosmic Rays and the Origin of Life

In 1922, the Russian biochemist Alexander Oparin proposed a bold hypothesis about the origin of life: life appeared on our planet through a long process of chemical evolution. The primitive environment where this chemical evolution occurred had some fundamental properties: free oxygen was almost completely absent from the atmosphere, which, however, was rich in hydrogen, while both the atmosphere and the waters contained large quantities of nitrogen and carbon. In addition to these fundamental ingredients, chemical evolution also needs a source of energy inducing ionization. Under these conditions, complex molecules could have formed from atmospheric gasses, which would later have collected in the planet’s seas and lakes, giving rise to a “primordial soup.” Over time, these complex molecules would have become more numerous and concentrated, forming increasingly well-organized aggregates. Subsequently, chemical evolution would have given way to prebiotic evolution with the appearance of tiny organized systems, perhaps the starting point of today’s living world. Oparin struggled to publish his hypothesis in 1922, and the scientific community ignored it.

Over 30 years later, in 1953, the American chemist Stanley Miller, then twenty-three years old, proposed to his professor Harold Urey an experiment to test Oparin’s hypothesis. Miller and Urey recreated in the laboratory the environmental conditions that Oparin believed were present on the primordial Earth. They used a sterile system consisting of two spheres, one containing liquid water and the other hydrogen (the most abundant element in the universe), methane (CH4), and ammonia (NH3), plus two electrodes. The two spheres were connected by a system of sealed pipes (Fig. 6.3). The water was heated to induce the formation of water vapor, while the two electrodes provided electrical discharges that simulated lightning. The whole system was then cooled so that the water could condense and fall back into the first sphere to repeat the cycle.

Fig. 6.3 Scheme of the Miller-Urey experiment. From Wikimedia Commons

With these conditions and in the presence of an energy source such as solar radiation, more complex molecules could have formed. After a week of maintaining constant conditions, Miller observed that approximately 15% of the carbon had formed organic compounds, including some amino acids and other potential biological constituents. The experiment demonstrated that lightning and ionization could play a role in the formation of organic molecules, which are the building blocks of the more complex structures at the basis of life. In addition to the direct ionization induced by cosmic rays, we know (as we will see later) that the frequency of lightning increases with the flux of cosmic rays. Is there a connection between the flux of cosmic rays and the origin of life?

Biological Effects of Cosmic Rays

Cosmic rays ionize molecules in the air and in the biological structures they pass through, creating highly reactive free radicals that can damage DNA and other cellular components. The biological effects of cosmic rays depend on their energy, mass, and penetration into biological tissues. For example, heavy charged particles such as iron nuclei can cause more damage to DNA than lighter particles such as protons. In addition, low-energy cosmic rays can be easily absorbed by the skin’s surface cells, while high-energy rays can penetrate through tissues and reach internal organs. Biological damage caused by cosmic rays can lead to a range of consequences, including genetic mutations, cancer (exposure to cosmic rays has been associated with an increased risk of cancer, especially for people working in conditions of high exposure, such as pilots and astronauts), and premature aging. Cosmic rays can accelerate cell aging and cause damage to the immune system and long-term health problems such as age-related macular degeneration and cataracts. In general, exposure to cosmic rays is relatively low for most people on Earth, as most cosmic rays are blocked by our atmosphere. However, it can increase in some high-altitude regions or in space environments such as the International Space Station, where astronauts are exposed to much higher levels of ionizing radiation.

Stories related to the effects of cosmic rays on humans are abundant in the Marvel comics. Reed Richards, Sue Storm, Johnny Storm, and Ben Grimm were exposed to cosmic rays in space, which led to lifelong mutagenic changes transforming them into the Fantastic Four. Interaction with cosmic rays has frequently been a part of Reed Richards’s attempts to restore Ben Grimm (known as the Thing) to his human state, and Reed himself used cosmic radiation to heal at a point when he had lost his elasticity. Others have occasionally tried to recreate cosmic ray contamination to create superpowers, with varying success. Notable is the result obtained by Dr. Bruce Banner (known as the Hulk), who exposed the propulsion engineer Jimmy Darnell to a dose of cosmic radiation far greater than that received by the Fantastic Four, transforming him into a superhero called X-ray.

The damage caused by radiation is roughly proportional to the amount of energy absorbed by the irradiated tissue. This absorbed radiation dose is a general indicator of the level of damage and of the corresponding clinical and radiobiological effects, regardless of the type and source of radiation. It is important to note that cells have natural repair mechanisms, and simple physical equations cannot fully describe the resulting biological damage. The damage caused by different types of radiation also varies based on the type of ionization produced. Therefore, the biological effectiveness needs to be known to quantify the different effects of each type of radiation; it is determined experimentally for any particular biological system under specific conditions. The effective radiation dose, incorporating these effects, is measured in a unit of the international system (SI) called the sievert, after Rolf Sievert (1896–1966), a Swedish medical physicist whose major contribution was the study of the biological effects of ionizing radiation. The total annual radiation dose on Earth’s surface from natural sources is 2.4 millisievert per year on average, with the largest contribution coming from the inhalation of radon, which originates in soils and rocks. Cosmic rays contribute about a quarter of this dose, primarily through muons. The intensity of cosmic rays varies depending on the location, due to atmospheric and geomagnetic effects.
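The dose bookkeeping above is simple enough to write down explicitly (an illustrative sketch using only the average figures quoted in the text):

```python
# Average annual effective dose from natural sources, as quoted in the text.
TOTAL_NATURAL_DOSE_MSV = 2.4  # millisievert per year

# Cosmic rays contribute roughly a quarter of that, primarily through muons.
cosmic_ray_dose_msv = TOTAL_NATURAL_DOSE_MSV / 4.0
other_sources_msv = TOTAL_NATURAL_DOSE_MSV - cosmic_ray_dose_msv  # radon, rocks, etc.

print(f"Cosmic rays: about {cosmic_ray_dose_msv:.1f} mSv/year")          # 0.6
print(f"Other natural sources: about {other_sources_msv:.1f} mSv/year")  # 1.8
```

These are global averages; the actual share depends on altitude and geomagnetic latitude, as discussed above.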
Whether variations in cosmic ray flux throughout history have influenced the evolution of life on Earth is uncertain, but likely. Radioactivity, thermal stress, ultraviolet light, and genotoxic chemicals are agents correlated with cosmic rays, and repair processes protect the genomes of cells against them. The repair mechanism is relevant to both adaptation and selection, the two basic ingredients of evolution.

Muons, which lose energy primarily through ionization, lose about 2 GeV in the atmosphere and reach the surface of the Earth with an average energy of 4 GeV; they penetrate the Earth’s upper crust and the water of the oceans down to depths of several hundred meters, threatening much of the biosphere. Neutrons lose energy through short-range strong interactions and neutron capture reactions, and represent a significant threat only in the upper atmosphere, especially at airliner altitudes. The peak neutron flux occurs in the stratosphere, i.e., the layer of the atmosphere extending from approximately 15 km altitude to its upper boundary at 50 km, which contains the protective ozone layer.

Implications for Evolution

Energetic photons or particles can damage DNA molecules through ionization, inducing mutations. Most sunlight, not being energetic enough to induce ionization, is not destructive; however, ultraviolet rays, which are mostly screened by the atmosphere, can be dangerous, especially in large doses. This also makes evolutionary sense, because life could not have existed in the presence of background radiation with too high a mutagenic rate. Astrophysical objects can influence Earth by causing bursts of radiation, which in turn can trigger the mutation process on a large scale.

DNA is easily damaged by ultraviolet rays with wavelengths of approximately 0.3 micrometers (UVB). To increase the amount of UVB radiation reaching the ground, the incoming radiation needs to be energetic enough to ionize the upper atmosphere and deplete the ozone layer, which is responsible for absorbing UVB radiation. Sources of such radiation include gamma-ray bursts, nearby supernovae, and energetic solar protons. Solar UVB rays can directly interact with DNA if the ozone layer is depleted. Although galactic gamma-ray bursts are not frequent, they have a significant chance of impacting Earth over a few hundred million years. These bursts release up to 10⁴⁵ joules of energy in a few seconds to a minute. Short bursts have a harder spectrum than long bursts and are more common, so the total energy they deposit in the atmosphere is similar. The total fluence is the most important factor determining atmospheric damage.

Over a few hundred million years, nearby supernovae may have significantly increased the flux of high-energy cosmic rays. This increase possibly amplified the effects of solar UVB rays and the dose of muonic radiation on the surface. Type II supernovae within 100 parsec of the solar system may have occurred in the Pleiades or in Centaurus.
Such a nearby supernova could have increased the solar ultraviolet radiation at the ground and decreased phytoplankton and biomass, potentially impacting other species. Energetic solar flares occur closer to Earth at a higher rate, but the energy released is lower by several orders of magnitude. Some events can accelerate particles to energies up to 20 GeV. High-energy particles can penetrate the geomagnetic field, causing global damage, unlike low-energy particles, which spiral around the geomagnetic field lines and are consequently concentrated mostly near the polar regions.

Cyclic variations with a period of approximately 60 million years can increase the cosmic ray flux up to PeV energies, and the increased rate can last a few million years. The total increase in radiation dose is estimated to be between 30% and a factor of three of the total annual radiation dose from natural sources; the increase in the radiation dose from muons is very significant in this case. This mechanism has been invoked to explain modulated variations observed in some fossils.

Finally, a more speculative and fascinating hypothesis is that the chirality of some molecules in living beings is related to cosmic rays. In biology, chirality, or handedness, refers to mirror-image versions of molecules; mirror-image molecules cannot be superimposed, like our left and right hands (Fig. 6.4). Life uses only one form of molecular handedness, and substituting the mirror version of a molecule for the regular version within a biological system can lead to malfunctions. Since Louis Pasteur first discovered biological homochirality in 1848, scientists have debated whether the handedness of life was driven by random chance or by some unknown deterministic influence. Some physicists believe that cosmic rays may be at the origin of homochirality, arguing that muons, which have a preferred handedness, and their daughter electrons might have affected chiral molecules on Earth and everywhere else in the universe. These researchers believe that cosmic rays affected the evolution of the two mirror life forms in different ways at the beginning of life on Earth, helping one ultimately prevail over the other and producing the single biological handedness we see today.

Fig. 6.4 Two mirror forms of a generic amino acid that is chiral. From Wikimedia Commons


The study of the biological effects of cosmic rays is just beginning (as Einstein said, “When dealing with living beings, one can understand how primitive physics still is”). The doses of radiation from various astrophysical sources can be calculated and their biological effects can be estimated. Experiments are needed, particularly with muons of energy on the order of GeV, to investigate further the mechanisms of damage on a variety of samples. The effects of radiation on different living organisms are very different, depending on their complexity, and need further exploration. Translating this biological damage to its effect on the biosphere in general and estimating its effects on the evolution of life is an even more difficult challenge.

Cosmic Rays and Climate

The idea that the modulation of ionization in the atmosphere could influence climate dates back to the Cold War era. Edward Ney proposed in 1959 that cosmic rays could impact climate through that mechanism. Ney suggested that the solar wind affects the flow of cosmic rays reaching Earth, with high solar activity deflecting more of the cosmic rays reaching the inner solar system and thus reducing atmospheric ionization. He proposed that this ionization could impact climate, linking solar activity with variations in climate and explaining events such as the Little Ice Age, a long period of lower-than-average temperatures, during the so-called Maunder Minimum, when sunspots were scarce on the solar surface. The Maunder Minimum was a period, from around 1645 to 1715, during which sunspots became exceedingly rare: approximately 50 sunspots were recorded, compared with the 40,000 to 50,000 typically seen in modern times over a similar timespan.

In the 1990s, the Danish physicist Henrik Svensmark provided the first empirical evidence to support this connection, demonstrating a correlation between cloud cover and the variations in cosmic ray flux over the solar cycle. The connection has been bolstered by further evidence, including climate correlations with variations in cosmic ray flux independent of solar activity. More recently, laboratory experiments have shown the role of ions in the nucleation of small aerosols and in their subsequent growth into larger ones, further supporting this linkage, as described below.

The mechanism through which cosmic rays play a role in the generation of thunderstorms has yet to be established quantitatively, and research in this field is ongoing. However, the following mechanism is widely agreed upon: electrons generated by atmospheric showers can create an avalanche, which multiplies ionization. Upon reaching relativistic energies, the electrons can cause a sudden discharge, releasing energy in the form of thunderstorms. The effect is enhanced for higher-energy primary cosmic rays, as they generate more electrons and deposit more energy in the atmosphere.

Cosmic rays are known to modulate electric fields in the atmosphere. The concentration of atmospheric ions changes directly with the flux of charged cosmic rays and indirectly influences the charges in the atmosphere. Some experiments also observe a strong correlation between the intensity of cosmic rays and the amplitude of electric field variations. It should be noted that different types of cloud layers produce different types of thunderstorms, varying in size and polarity; both increases and decreases in the secondary cosmic ray intensity can therefore be observed experimentally, depending on the type of event. Work is underway to obtain a quantitative understanding of thunderstorms and of the associated particle acceleration in the atmosphere.

The role of cosmic rays in influencing cloud cover, and its impact on climate, has been a topic of intense debate. An increase in cosmic ray intensity would increase the ionization rate in the atmosphere and could increase the cloud formation rate; less solar radiation would then reach the Earth’s surface, resulting in global cooling. Experimental work is ongoing at CERN, by the CLOUD collaboration, to verify this hypothesis. If the hypothesis is true, it could help explain the faint young Sun paradox: the young Sun is predicted to have had approximately 30% lower luminosity than the current value, and correspondingly extremely low temperatures would be expected on Earth. However, geological records indicate the presence of liquid water on Earth during that period, which contradicts this expectation. The evidence can be partially explained by considering the cosmic ray flux during that period.
Since the Sun was more active some four billion years ago, its intense solar wind would have shielded the Earth more effectively from cosmic rays. With a lower cosmic ray flux, cloud cover is expected to have been smaller, leading to global warming; this would help explain the paradox.

Is There a Correlation Between Cosmic Rays and Earthquakes?

Since the earliest days of human existence on Earth, earthquakes have been a source of fear. From that time, humans have attempted to predict earthquakes based on whatever our ancestors could see around them: the Sun, the Moon, the stars, the weather, etc., as well as on unobservable factors such as deities. Even today, attempts continue to predict seismic activity based on approaches with little or no scientific evidence of correlation, such as solar and lunar behavior.


One avenue of investigation that has been pursued for some time is the search for correlations between the detection rates of secondary cosmic rays and seismic activity. This research aims to identify a new type of precursor that could be used in a global early warning system against the tragic effects of earthquakes. Mass movements inside the Earth could lead to major earthquakes, affecting both the gravitational and the geomagnetic fields. The detection of precursors might be possible by analyzing changes in the rate of secondary cosmic radiation, which is very sensitive to the geomagnetic field. Although this research is active and interesting, and despite some suggestive evidence, there is to date no firm proof of a correlation between cosmic ray activity and earthquakes.

Cosmic Rays and Electronics

Electronic circuits, particularly those based on semiconductors, rely on localized electric charges to represent basic binary information (0, 1). Any form of noise capable of modifying these charge distributions has the potential to change the information stored in the circuit. This noise can come from a variety of sources, including electromagnetic radiation and sources of ionizing radiation such as cosmic radiation. The effect is called a “soft failure” or single event upset: it alters the information stored in a specific bit, which can cause the entire circuit to malfunction or produce erroneous information. Soft failures must be distinguished from permanent circuit damage caused by a cumulative radiation dose, which manifests only at high doses of radiation (cosmic rays can also be responsible for such damage; hence, electronics for avionics and space science are designed to be radiation resistant).

However, altering the information stored in a basic cell does not necessarily lead to a malfunction of the entire system. For instance, a computer may not use the information from that cell during its ongoing operation, or it might overwrite the cell with new, accurate information before reading the corrupted value.

At sea level, the cosmic radiation particles most likely to induce soft errors are neutrons, which can penetrate the circuit and interact with the nuclei of the material, being captured and producing secondary ions that induce charges in the chip. In the 1990s, IBM estimated that soft errors induced by cosmic rays in a 256-megabit memory in a PC could occur at a rate of approximately one per month. Of course, the flux of cosmic rays depends on many environmental factors, related to the location of the device (altitude) and to the particular operating conditions (time of the year, solar cycle, occurrence of catastrophic solar events, etc.). Some cases have been documented in which the flip of a bit in a memory location has led to incorrect results or system malfunction; in one instance, this caused an airliner’s autopilot system to malfunction.

An adequate design of the chips can reduce the importance of these effects, through appropriate geometries and choices of the semiconductor and support material. However, since the rate of these errors cannot be reduced to zero, it is important to diagnose their presence carefully and correct the errors that may result. A hardware-based solution consists of replicating circuits and comparing their results. In high-reliability situations, such as aerospace applications, a common approach is to use three identical circuits; if there is a discrepancy in output, the value agreed upon by two of the three circuits is chosen. This, however, obviously increases complexity and cost. Software techniques for correcting this type of error use redundancy, checksums, or the verification of results obtained from sequences of instructions executed multiple times, introduced at program compilation level. Radiation-hardened electronic components are a must for crucial and vulnerable parts of avionics. In this context, it should be emphasized that the effects of a space weather radiation event on complex avionics systems are still uncertain.
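The three-circuit scheme described above is known as triple modular redundancy and amounts to a majority vote among nominally identical units. A minimal software sketch of the voting logic (illustrative only, not real avionics code):

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Return the value agreed upon by at least two of three redundant units.

    If a single unit is corrupted by a soft error (single event upset),
    the other two still outvote it; if all three disagree, there is no
    majority and the fault must be handled at a higher level.
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: triple fault or multiple upsets")

# One bit-flipped unit is outvoted by the other two.
print(majority_vote(42, 42, 43))  # 42
```

The cost of this robustness is exactly what the text notes: three times the hardware (or computation), plus the comparison logic itself.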

Cosmic Rays and the Exploration of the Earth and the Universe

Cosmic Rays and Airplane Flights

As we have seen (Fig. 2.9), radiation increases with altitude due to cosmic rays. Under European Union and US flight regulations, flight personnel are considered occupationally exposed to radiation, since the annual doses they receive are comparable to those of other radiation workers; these countries have accordingly implemented advisories and legal regulations. However, although there is concern about the potential effects of radiation on both health and avionics, current epidemiological studies have not yet revealed significant evidence of dose-response patterns in aircrews for various types of cancer.


A. De Angelis

Radiation protection measures aim to keep exposure as low as reasonably achievable, since the biological effects depend on the absorbed dose. A reasonable approach therefore also involves a timely response to significant increases in radiation due to solar events, to prevent high exposures and their potential teratogenic effects.

One More Risk for Astronauts

Three main factors determine the amount of radiation that astronauts receive and how it affects them:

• Altitude above the Earth. At higher altitudes the protection of the Earth's atmosphere is absent and the magnetic field is weaker, so there is less shielding against ionizing particles; radiation is also trapped in some regions, such as the Van Allen belts.
• The solar cycle, which we have already discussed.
• Individual susceptibility, an area of active investigation.

NASA monitors the radiation doses of the International Space Station (ISS) crew both in the short term and over the astronauts' lifetime, to evaluate the risk of radiation-induced diseases. Although NASA's radiation limits are more stringent than those for radiation workers on Earth, astronauts can remain well below them while living and working on the ISS, within Earth's magnetosphere. However, NASA aims to send human missions to Mars in the 2030s, and this will be a different story. Cancer risk from cosmic radiation exposure is a potential showstopper for human-crewed missions to Mars and beyond. A human mission to Mars means sending astronauts into interplanetary space for at least a year, even with a very short stay there. Mars has no magnetic field to trap cosmic rays, and its atmosphere is much thinner (by a factor of 10) than the Earth's, so astronauts will receive only minimal protection even on the Martian surface.

Cosmic rays are difficult to shield against. Using massive materials to protect astronauts would be too expensive, since more mass means more fuel required to launch; materials that shield more efficiently would cut down on weight and cost. NASA is investigating materials that could be used in anything from spacecraft to Martian habitats to space suits. Polyethylene, the same plastic commonly found in water bottles, is a good material for radiation shielding. It is fairly inexpensive and can be easily tailored, but it would need to be stronger to build large structures. Another material in development at NASA is the so-called hydrogenated boron nitride nanotubes: nanotubes made of carbon, boron, and nitrogen, with hydrogen interspersed in the empty spaces between the tubes. These structures have diameters of less than one tenth of a micrometer and lengths up to 10 mm, and can thus also be used to produce suits. Boron is also an excellent absorber of neutrons.

Cosmic Rays and Archeology

Various aspects of archaeology have benefited, and continue to benefit, from the physics of cosmic rays.

Dating of Archaeological Finds

Carbon exists on Earth in three isotopes (forms of the same element with the same chemical properties but different physical properties, in particular nuclear mass): two stable isotopes (carbon-12, which constitutes 99% of all carbon on Earth, and carbon-13, which constitutes 1%), and one radioactive isotope (carbon-14), found in small quantities, approximately one atom per trillion atoms of carbon in the atmosphere. Carbon-14, also known as radiocarbon, contains 6 protons and 8 neutrons. It decays through beta decay (the transmutation of a neutron into a proton, with the emission of an electron and a neutrino) into nitrogen-14, whose nucleus is made up of seven protons and seven neutrons. The half-life of this decay is approximately 5700 years. A gram of carbon containing one carbon-14 atom per trillion atoms emits approximately fifteen electrons (beta rays) per minute; the isotope would thus eventually disappear from living matter if it were not continuously replenished through animal respiration and plant absorption of air. New carbon-14 is regularly produced in nature in the upper atmosphere, when neutrons, secondary components of cosmic rays, are captured by atmospheric nitrogen atoms. This is currently the dominant mechanism, although it has not always been so: during the years of open-air nuclear testing, neutrons released by nuclear explosions increased the concentration of carbon-14, still visible today in wines produced before 1980 and particularly before 1960 (and similarly in the bones of people born before 1960). The percentage of radiocarbon in organic materials is the basis of the radiocarbon dating method, used by the U.S. physical chemist Willard Libby and colleagues since 1949 on archaeological, geological, and hydrogeological samples.



For introducing this technique, Libby (1908–1980) was honored with the 1960 Nobel Prize in Chemistry. Once carbon-14 is produced, it primarily binds to oxygen in the form of carbon dioxide, whose carbon-14 content therefore tracks the cosmic ray flux (which, as we have seen, is variable over time). All living organisms that are part of the carbon cycle continuously exchange carbon with the atmosphere through respiration (animals) or photosynthesis (plants), or assimilate it by feeding on other living beings or organic substances. Therefore, as long as an organism is alive, the ratio of its carbon-14 concentration to that of the other two carbon isotopes matches what is observed in the atmosphere during its lifetime. After death these processes cease, the organism no longer exchanges carbon with the outside world, and, through decay, the concentration of carbon-14 decreases exponentially. We are simplifying slightly by assuming that the concentration of radiocarbon in the atmosphere is always the same: the simple law of exponential decay must be corrected for the fact that the initial concentration is a function of the history of atmospheric carbon-14, and thus, ultimately, of the flux of cosmic rays.
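Under the simplifying assumption just mentioned (a constant atmospheric carbon-14 concentration), the dating formula reduces to inverting the exponential decay law, as in this short Python sketch:

```python
import math

HALF_LIFE = 5700.0  # approximate carbon-14 half-life, in years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Years since death, given the fraction of the original
    carbon-14 still present in the sample."""
    return -HALF_LIFE * math.log(fraction_remaining) / math.log(2)

age_one_half_life = radiocarbon_age(0.5)    # 5700 years
age_two_half_lives = radiocarbon_age(0.25)  # 11400 years
```

In practice, laboratories apply calibration curves that encode the measured history of atmospheric carbon-14, precisely the correction discussed above.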

Muonic Tomography

Measurements using cosmic muons as a probe to explore the interior of large solid structures have been documented since the mid-1950s. These initial attempts involved measurements inside tunnels, where a fraction of the cosmic rays was shielded by the thickness of solid rock above. The muonic component of cosmic rays is particularly useful, since it penetrates deeply into matter: muons can be detected even below several hundred meters of solid rock. In recent years, several studies using secondary cosmic muons have been performed, with applications ranging from volcano exploration to archaeology, and to the detection of potentially dangerous fissile materials. It is useful to distinguish between different applications. In some cases, muon absorption is used to obtain information on the amount of material traversed by the particles, as in the search for hidden chambers inside Egyptian pyramids. In other cases, the phenomenon of multiple scattering is exploited; this is particularly relevant for detecting materials with a high atomic number inside difficult-to-access volumes.

Luis Alvarez and collaborators conducted a famous study employing muons for tomography, published in 1970. They aimed to explore one of the Egyptian pyramids, the pyramid of Khafre, in search of hidden chambers. After convincing the political institutions, Alvarez began installing the detector and the necessary equipment inside the pyramid in 1967. The detection apparatus used a set of large-area position detectors to track and reconstruct particle trajectories. Over two years of measurement, the apparatus reconstructed approximately 100 million muons. This allowed the muon flux to be measured as a function of direction and ruled out the presence of further large chambers inside the pyramid. Although this experiment did not yield a positive result, it is still significant for various reasons. First, it paved the way for further investigations along this line, which have been carried out in the following decades and have led to archaeological discoveries of interest. For instance, in 2017 a previously unknown large room was discovered within the Great Pyramid in Egypt, the oldest of the ancient seven wonders, located on the outskirts of Cairo (Fig. 6.5). In 2023 a scan of the subsoil in the Sanità district of Naples revealed a burial chamber from the Hellenistic age.

Fig. 6.5 The pyramid of Cheops, also known as the Great Pyramid of Egypt, is the oldest and largest of the three main pyramids in the Giza necropolis. It is the tomb of Pharaoh Cheops, also known as Khufu, who ruled during the IV dynasty approximately 2560 BC. It is the oldest of the seven wonders of the world and the only one that has survived to the present day. The figure highlights the recently discovered chamber and the technique based on cosmic muons used to find it. From Wikimedia Commons



Cosmic Rays and the Analysis of Large Structures

Muon tomography is highly effective for examining large structures in general, up to immense sizes such as volcanos. Comparing the muon flux measured from open-sky regions with the flux crossing a volcano from various directions gives quantitative information on the fraction of absorbed muons, which indicates the amount of material traversed and reveals the presence of cavities. This technique has been successfully employed in recent studies of volcanos worldwide to help estimate the risk of eruptions. To implement it, a muon tracking detector (telescope) is placed in transmission, with the object to be explored positioned between the open sky and the detector (as for pyramids; see Fig. 6.5). Measurements with this setup yield two-dimensional density maps, with a spatial resolution that depends on the tracking capabilities of the telescope and on the amount of multiple scattering in the rock and materials near the detector. With appropriate numerical imaging algorithms, the combined use of multiple telescopes viewing the volcano from different angles can also provide a three-dimensional map.

Exploring volcanos with tomographic techniques based on cosmic muons requires long acquisition times, weeks or even months, to produce reliable images. The technique can therefore provide information on the evolution of the volcano's internal structure on these timescales, and it can be used together with other volcanological exploration techniques to predict possible changes and evolution of volcanic activity in the medium term. At present it is possible to resolve details of the internal structure with spatial resolutions on the order of tens of meters, and this accuracy is continually improving with better detection systems and reconstruction algorithms.
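As a toy illustration of how an absorption measurement translates into a density estimate, one can model the muon transmission as an exponential in the opacity. Real analyses use measured muon energy spectra and range tables rather than a single attenuation length; the value of `LAMBDA_MWE` below is a hypothetical effective number chosen for this sketch only.

```python
import math

LAMBDA_MWE = 400.0  # hypothetical effective attenuation length,
                    # in metres of water equivalent (m w.e.)

def mean_density(n_through: int, n_open_sky: int, path_m: float) -> float:
    """Mean density (g/cm^3) along one line of sight, estimated from
    the fraction of muons surviving the crossing of the edifice."""
    transmission = n_through / n_open_sky
    opacity = -LAMBDA_MWE * math.log(transmission)  # in m w.e.
    # Dividing the opacity in water-equivalent metres by the geometric
    # path length gives the density relative to water (1 g/cm^3).
    return opacity / path_m

# A direction in which 2 of every 1000 muons survive a 1 km path:
rho = mean_density(2, 1000, 1000.0)  # about 2.5 g/cm^3, rock-like
```

Repeating the estimate direction by direction produces exactly the kind of two-dimensional density map described in the text; a cavity shows up as a line of sight with anomalously low mean density.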
Another application that has recently attracted the interest of researchers is the possibility of monitoring, through precise tracking of muons, possible long-term deformations of buildings and civil structures. For this purpose, one must use two or more detectors with good tracking capabilities, i.e., allowing an accurate reconstruction of the trajectories of the detected muons, installed in appropriate positions and firmly attached to the structure, so that any displacement over time of one detector with respect to another can be detected. Figure 6.6 shows how possible displacements of parts of a building can be revealed.

Fig. 6.6 Muon tomography can be used to monitor the stability of buildings. Courtesy of Francesco Riggi

Another technique of investigation of materials based on cosmic rays is muon scattering tomography, which exploits the multiple scattering that muons, being charged particles, undergo when passing through a material. The scattering angle depends on the composition and thickness of the material traversed, particularly on its atomic number (the number of protons in the nucleus), which is correlated with the atomic weight. Information on the composition can therefore be extracted by measuring the deflection that muons undergo when crossing a sample of material. It is thus possible to discriminate between light, intermediate, and heavy atoms, i.e., to recognize heavy elements even when mixed with materials of low or intermediate atomic weight. Fissile material (uranium or plutonium) might be transported illicitly inside containers, mixed with other commercial goods typically made of light materials. Systematic inspection would take far too long to be compatible with the flow of goods: approximately 200 million containers are used annually in the world for transport, and as a consequence only a small fraction of them is checked. The muon-scattering technique helps in fighting the trafficking of dangerous materials. Recently, the problem of monitoring reinforced concrete structures, in particular revealing the deterioration of the internal metal reinforcement, has also received renewed attention, based on the possibility of using muon scattering tomography. On a different front, cosmic ray analysis has recently been used in the exploration of other bodies of the solar system: gamma rays produced by the interactions of cosmic rays with the surface of the celestial body under study reveal the composition of that surface.
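The dependence of the scattering angle on the material can be made quantitative with the standard Highland parametrization of multiple scattering, sketched below; the radiation lengths are approximate textbook values, and the momentum is chosen for illustration.

```python
import math

def theta0_rad(p_mev: float, x_over_x0: float) -> float:
    """RMS multiple-scattering angle (Highland formula) for a singly
    charged relativistic particle of momentum p_mev (MeV/c) crossing
    a thickness of x_over_x0 radiation lengths."""
    return (13.6 / p_mev) * math.sqrt(x_over_x0) * (
        1.0 + 0.038 * math.log(x_over_x0))

# Radiation lengths in cm (approximate): high-Z materials have a much
# shorter X0 and therefore scatter muons much more strongly.
X0_CM = {"water": 36.1, "iron": 1.76, "uranium": 0.32}

# A 3 GeV/c muon crossing 10 cm of each material:
angles = {m: theta0_rad(3000.0, 10.0 / x0) for m, x0 in X0_CM.items()}
# uranium deflects the muon by roughly an order of magnitude more
# than water does, which is what makes high-Z cargo stand out.
```

Comparing the incoming and outgoing muon tracks on the two sides of a container and histogramming the deflection angles is, in essence, how a muon-scattering portal flags a dense, high-Z inclusion.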

What Next?

In a book such as this, which concludes with a review of ongoing and future experiments, there is a risk of ending the reading with greater attention to the problems still to be solved than to the issues solved and the knowledge acquired. This would be unfair to the pioneers of cosmic ray study and to present researchers, since an impressive amount of knowledge has been accumulated in a century. In 1911 we had no certainty; today, summarizing, we know that cosmic rays are of extraterrestrial origin: those of lower energy come from the Sun, those of intermediate energy from the Milky Way and in particular from supernova remnants, and those of higher energy, up to some hundred joules per particle, from supermassive black holes at the centers of galaxies. The threshold between Galactic and extragalactic cosmic rays is still uncertain, somewhere between a few PeV (the knee of the distribution, see Chap. 1) and a few hundred PeV (the second knee); the upper bound follows from the observed approximate isotropy, which requires magnetic confinement, i.e., the particle's gyration radius in the Galactic magnetic field must be smaller than the size of the Galaxy. Beyond the knee region (at energies of a few PeV), the contribution from protons accelerated by Galactic sources probably ends, but heavier nuclei can still be accelerated, up to the second knee. We know that cosmic rays are mostly protons, with a small fraction of helium nuclei, electrons, photons, neutrinos, and traces of other particles; we know in principle how they are produced, with what energy spectrum, and how they travel through space.

Furthermore, considering the fundamental limits of accelerators on Earth, which are unlikely to exceed energies of a few tens of TeV in the 21st century, cosmic rays and cosmological sources have once again become a focal point of high-energy physics. Beyond what we have learned, the surprises that the Universe can still offer are the reward that scientists who invest with patience and increasing competence in this field, at the intersection of astrophysics and particle physics, can enjoy. It is precisely the discovery potential of this new sector that is attracting an increasing number of young scientists, and that leads to the development of new ideas, the realization of new technologies, and the identification of new mysteries. We are aware that the detectors we have built in this generation are the sentinels of the Universe's boundaries. The adventure that began a century ago has yielded fruits that no one could have imagined at the time. It is nice to think that the pioneers of cosmic ray physics can see all the discoveries that their insights have generated, and it is nice to think that they, and hopefully we with them, can see the discoveries yet to come. Yes, although we know a lot, we researchers are always looking to the unknown.

What can we expect for the next 10 to 20 years? Future major X-ray facilities, after those already planned for the early 2030s, will be collaborative endeavors at the world level. X-ray astronomy is the key to high-energy astronomy for several reasons, including the large flux, and it will always be in the front line. Gamma-ray astronomy at the highest energies will become more effective by at least one order of magnitude thanks to the CTA arrays, to improvements in LHAASO, and to the southern-hemisphere ground detector SWGO; in addition, the presently unknown MeV range will be explored by a space-based detector, probably based on the ASTROGAM concept.
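The magnetic-confinement argument above can be checked with a back-of-the-envelope estimate of the gyration (Larmor) radius; the Python sketch below assumes an illustrative Galactic field of about 3 microgauss.

```python
E_CHARGE = 1.602e-19   # elementary charge, C
C_LIGHT = 3.0e8        # speed of light, m/s
PARSEC = 3.086e16      # one parsec, in metres
B_GALACTIC = 3.0e-10   # ~3 microgauss, expressed in tesla

def larmor_radius_pc(energy_ev: float, z: int = 1) -> float:
    """Gyration radius r = E / (Z e B c) of an ultra-relativistic
    nucleus of charge Z in the Galactic magnetic field, in parsecs."""
    r_m = energy_ev * E_CHARGE / (z * E_CHARGE * B_GALACTIC * C_LIGHT)
    return r_m / PARSEC

r_knee = larmor_radius_pc(1e15)   # ~0.4 pc: easily confined
r_uhecr = larmor_radius_pc(1e20)  # ~36 kpc: larger than the Galaxy
```

A proton at the knee gyrates on a sub-parsec circle and stays trapped in the Galaxy, while a 10^20 eV proton has a radius far exceeding the Galactic disk, so it cannot be confined and its arrival directions need not be isotropized, consistent with the argument in the text.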
Although more difficult than X-ray astronomy, gamma-ray astronomy is worth pursuing, since in large part (for example, the MeV region) it is unexplored, and there one might find the unexpected! The next revolution might come from multimessenger astrophysics, probably the astronomy of the 21st century, of course in synergy with multiwavelength astronomy, in particular at X-ray energies and above. The physics of particles other than photons is going to be integrated into astronomy. Neutrino detectors will boost their sensitivity by a factor of 10, mostly thanks to Super-Kamiokande and KM3NeT in the Northern Hemisphere and to the improved second-generation IceCube at the South Pole.



Ions at the highest energies will probably be detected thanks to a detector in orbit following the EUSO concept. We expect a dramatic improvement in the sensitivity to gravitational waves at different wavelengths, thanks to the Einstein Telescope on the ground and to the LISA constellation in space. Very high quality measurements over a wide range of particles, astrophysical objects, spatial scales, and redshifts will be important in expanding the cosmological discovery space. Studying the formation of early large structures and their interaction with evolving supermassive black holes will also provide a powerful tool for searching for departures from the standard cosmological model based on dark matter and dark energy. Finding such departures would have far-reaching implications for our understanding of the fundamental physics governing the Universe.

Postface

I decided not to write a bibliography for this book; however, I want to list my main sources. The first chapter, which introduces the subject from a scientific point of view, is based on the text Introduction to Particle and Astroparticle Physics, which I wrote with my friend Mário Pimenta and which was published by Springer Nature in 2018. The second and third chapters, tracing the history of cosmic ray physics, are taken from my book L'enigma dei raggi cosmici (in Italian), reduced in length by approximately 50%. The fourth chapter concerns multiwavelength astronomy. Here the treatment is much more extensive than in L'enigma dei raggi cosmici, and it carefully explains what one learns by observing the Universe in different bands of electromagnetic radiation. For the treatment of observations in radio waves, in the infrared, and in the ultraviolet, I relied on the beautiful book Multimessenger Astronomy by John Beckman, published by Springer Nature in 2021; regarding X-ray astronomy, on the review article "From cosmic ray physics to cosmic ray astronomy: Bruno Rossi and the opening of new windows on the Universe" by Luisa Bonolis, Astroparticle Physics 53 (2013) 67, and on the article "The high energy X-ray Universe" by Riccardo Giacconi, PNAS 107 (2010) 7202. The part on gamma-ray astrophysics is original. Chapter 5 deals with astrophysics with messengers other than light (photons); it is essentially new, since the landscape has completely changed during the last five years.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. De Angelis, Cosmic Rays, Astronomers’ Universe, https://doi.org/10.1007/978-3-031-38560-5




The last chapter, on the "exotic" aspects of cosmic ray physics (the relationship of cosmic rays with biology, archaeology, and meteorology), is essentially new. I owe much to the very instructive textbook Messengers from the Cosmos by Francesco Riggi (Springer Nature 2023). This book has benefited from the help of many friends. I want to thank, for their valuable comments and suggestions, Cesare Barbieri, Mario Bertaina, Paolo Bison, Giovanni Busetto, ChatGPT, Maria Vittoria Cubellis, Michela De Maria, Fernando Ferroni, Alberto Franceschini, Oriana Mansutti, Francesco Riggi, and Claudio Tuniz. Finally, I thank my editor Marina Forlizzi for her patience, sympathy, and support. A bonus of my work as an author is the chance to get to know an editor who later becomes a friend, and this is what happened with Marina.

Index

A

Absorption lines, 79 Accelerators, 68 Active Galactic Nuclei, 101, 104 AGILE, space detector, 112 Aharonian, Felix, 119 Airplane flights and cosmic rays, 179 Akeno Giant Air Shower Array (AGASA), 72 Alvarez, Luis, 49, 182 Amaldi, Edoardo, 50, 67 AMANDA, neutrino detector, 152 AMS-02 magnetic spectrometer, 144 Anderson, Carl, xiv, 54 Ankle (in the cosmic ray spectrum), 17 ANTARES, neutrino detector, 152 Antimatter, 13, 54 Antimatter, cosmic, 144 Aprile, Elena, 129 Archeology and cosmic rays, 181 Arecibo, radio telescope, 81 ASTRI, miniarray of Cherenkov telescopes, 133 ASTROGAM, space detector, 113

Astronauts and cosmic rays, 180 Astronomical unit (au or AU), 1 ATHENA, 106 Atwood, William (Bill), 111 Auger, Pierre, 50

B

Bahcall, John, 149 Baikal, neutrino detector, 152 Barbiellini, Guido, 112 Barish, Barry, xvi Bassi, Pietro, 68 Becquerel, Henri, 25 Bernardini, Gilberto, 67 Big bang theory, 6 Binary systems, 99 Biological damage, 173 Biological effects of cosmic rays, 172 Bjorken, James, 70 Blackbody radiation, 76 Black hole, xvi, 9 Blaserna, Pietro, 31, 35 Bremsstrahlung, 115





C

Cao, Zhen, 117 Carbon-14, 181 Castagnoli, Carlo, 67 Centaurus A, 104 CERN, xiii, 18, 177 Chacaltaya, laboratory, 64, 72 Chandra Observatory, 99 Cherenkov technique, 117 Cherenkov, Pavel, 117 Chirality, in biology, 175 Clay, Jacob, 45 Climate and cosmic rays, 176 CLOUD experiment, 18, 177 Cocconi, Giuseppe, 107 Coded mask telescope, 99 COMPTEL, space detector, 113 Compton, Arthur Holly, 47 Conversi, Marcello, 67 Cosmic Background Radiation (CBR), 71 Cosmic Explorer, gravitational-wave Observatory, 163 Cosmic Microwave Background (CMB), 71 Cosmic rays, 16 spectrum, 16 Coulomb, Charles-Augustin de, 23 Crab Nebula, 19, 98 Cresti, Marcello, 68 Cronin, James (Jim), 137 CTA, gamma-ray telescope, 132 Curie (Sklodowska), Marie, 25 Curie, Pierre, 25

D

DAMPE, space detector, 112 Dark matter, 11 Dark nebulae, 86 Dating of archaeological finds, 181 Davis, Ray, 149 De Angelis, Alessandro, 114, 131 Deuterium, 94

Deutsches Elektronen-Synchrotron DESY, 132 Dirac equation, 54 Dirac, Paul, 54 Dwarf galaxies, 3

E

Earthquakes and cosmic rays, 177 EEE, experiment, 168 Einstein Observatory, 98 Einstein Telescope, gravitational-wave observatory, 161 Electromagnetic shower, 115 Electronics and cosmic rays, 178 Electroscope, 23 Elster, Julius, 26 Emission lines, 79 Energetic Gamma-Ray Experiment Telescope (EGRET), 110 eROSITA, 106 Euclid, 91 EUSO concept, 143 Event horizon, 85 Event horizon telescope, 83 Evolution and cosmic rays, 174 Exner, Franz, 25 Explorer 1, 96 Explorer X, 96

F

Faraday, Michael, 24 FAST, radio telescope, 81 Fedaia, laboratory, 68 Fermi Space Observatory, xvii Fermi, Enrico, xiv, 73 Fermi-LAT, 110 Forbush effect, 51 Forbush, Scott, 51 Future of cosmic ray research, 187

G

Galaxy cluster, 105 Galilei, Galileo, xiv, 86 Gamma-ray astronomy, 106 Gehrels, Neil, 126 Geiger, Hans, 47 Geiger-Müller counter, 46 Geitel, Hans, 26 Genzel, Reinhard, 3 Geomagnetic cutoff, 45 Ghez, Andrea, 3 Giacconi, Riccardo, 97, 150 Glashow, Sheldon, 13 Gran Sasso National Laboratories (LNGS), 129 Gravitational waves, 156 Greiner, Jochen, 114 Groups (of galaxies), 3 Guerriero, Luciano, 68 GZK cutoff, 72, 137 GZK mechanism, 72

H

Hadronic shower, 115 Halzen, Francis, xvii, 152 HAWC, 116 Hawking radiation, 11 Hawking, Stephen, 11 HERD, space detector, 113 Herschel, Catherine, 86 Herschel, space detector, 89 Herschel, William, 86, 88 H.E.S.S., gamma-ray telescope, 119 Hess, Victor, 170 HI line, 78 Hilbert, David, 50 Hillas, Michael, 118 Hofmann, Werner, 119 Homestake experiment, 149 Hooper, Dan, 131 Hubble, Edwin, 4 Hubble Space Telescope, 93 Hulse, Russell, 157 Hyper-Kamiokande (Hyper-K), neutrino detector, 156

I

IceCube Gen-2, neutrino detector, 155 IceCube, neutrino detector, xvii, 152 Iliopoulos, John, 70 Infrared astronomy, 88 INTEGRAL Observatory, 99 Interactions, 14 Intergalactic medium, 94 Interstellar medium, 94 Isospin symmetry, 72

J

James Webb Space Telescope (JWST), 89 Jansky, Karl, 80 Joly, John, 170

K

Kajita, Takaaki, 151 Kanbach, Gottfried, 114 Kelvin, 76 KM3NeT, neutrino detector, 156 Knee (in the cosmic ray spectrum), 17 Kolhörster, Werner, 39 Koshiba, Masatoshi, 150

L

Lagrange, Joseph-Louis, 91 Lagrangian points, 91 ΛCDM cosmological model, 12, 189 Large Hadron Collider (LHC), xiii Lattes, Cesare, 64 Lemaître, Georges, 5 Lepton, 13 LHAASO, detector, 117

Libby, Willard, 181 LIGO, gravitational-wave Observatory, xvi, 158 Linsley, John, 71 LISA, gravitational-wave Observatory, 163 Local group (of galaxies), 3 Lorenz, Eckart, 119 Low-Earth orbit (LEO), 109 Luria, Salvador, 96

M

MAGIC, gamma-ray telescope, xvii, 119 Magnitude, 8 Maiani, Luciano, 70 Mansutti, Oriana, 131 Michelson, Peter, 111 Microquasar, 86 Milky Way, 2 Miller, Stanley, 171 Miller-Urey experiment, 171 Millikan, Robert, 42 Mirzoyan, Razmik, 119 Moiseev, Alexander, 114 Morrison, Phillip, 96 Mountain-top laboratories, 66 Multimessenger astronomy, xiv, 135 Multimessenger astroparticle physics, 135 Multimessenger astrophysics, 135 Multiwavelength astronomy, xvii, 76 Muonic tomography, 182 Muon scattering tomography, 184

N

Nernst, Walter, 29 Neutron star, 9, 10 Ney, Edward, 176

O

Occhialini, Giuseppe (Beppo), 47, 64 Oparin, Alexander, 170 Origin of life and cosmic rays, 170 Ozone and cosmic rays, 165

P

Pacini, Domenico, 31 PAMELA magnetic spectrometer, 144 Pancini, Ettore, 67 Pareschi, Giovanni, 133 Parsec, 3 Particle accelerators, 68 Pasteur, Louis, 175 Peccei, Roberto, 131 Peccei-Quinn mechanism, 131 Penzias, Arno, 71 Perkins, Donald, 64 Photomultiplier, 117 Photon, xv, 75, 106 Pierre Auger Observatory, 138 Polarization, 108 Pontecorvo, Bruno, 149 Powell, Cecil Frank, 63 Pressure and cosmic ray flux, 168 Probe Of Extreme Multi-Messenger Astrophysics (POEMMA), 143 Pulsar, 19

Q

Quark, 13 charm, 70 strange, 65 Quinn, Helen, 131

R

Radar, 80 Radiation length, 115 Radiation, thermal, 76 Radiocarbon, 181 Radio waves, 80 Reber, Grote, 80 Redshift, 6 Richter, Burton, 70 Rigidity, 46 Roncadelli, Marco, 131 Rossi, Bruno, xiv, 47, 96 Rutherford, Ernest, 53 Rutherford, experiment, 53

S

Salam, Abdus, 13 Schrödinger equation, 54 Schrödinger, Erwin, 25, 35 Serpico, Pasquale, 131 Sidereus Nuncius, xiv Sievert, 173 Soft fail, 178 Solar activity, 167 Someda, Giovanni, 68 Southern Wide Field-of-view Gamma-ray Observatory (SWGO), the, 133 Spallation, 167 Spitzer, space detector, 89 Sputnik 1, 95 Square Kilometer Array, 88 Standard candle, 5 Stratosphere, 174 Sudbury Neutrino Observatory, 151 Supercluster (of galaxies), 3 Super-Kamiokande experiment, 151 Supernova, 10 Supernova 1987A, 94 Svensmark, Henrik, 176 Swift (Neil Gehrels Swift Observatory), 126 Synchrotron self-Compton (SSC) mechanism, 103

T

Tatischeff, Vincent, 114 Tavani, Marco, 112, 114 Taylor, Joseph, 157 Temperature and cosmic ray flux, 168 Teshima, Masahiro, 119 Tesla, Nikola, 73, 80 Testa Grigia, laboratory, 67 Thompson, David (Dave), 111 Thorne, Kip, xvi Thunderstorms and cosmic rays, 176 Ting, Samuel, 70, 144

U

Ultraviolet astronomy, 93 Urey, Harold, 171

V

Van Allen, belts, 96, 109 Van Allen, James, 99 VERITAS, gamma-ray telescope, 119 Very Large Array, 82 Virgo, gravitational-wave Observatory, 158 Völk, Heinrich, 119

W

Watson, Alan, 137 Weakly Interacting Massive Particles (WIMP), 128 Weekes, Trevor, 119 Weinberg, Steven, 13 Weiss, Rainer, xvi Whipple, gamma-ray telescope, 118 White dwarf, 7 Whitford, Albert, 88 Wilson, Charles Thomson Rees, 27 Wilson, Robert, 71 WISP, 128 Wulf, Theodor, 29

X

XENON, detector, 129 X-ray astronomy, 95

Y

Yukawa, Hideki, 61

Z

Zago, Guido, 68 Zwicky, Fritz, 73