Nature | Vol 587 | 19 November 2020

Table of contents:
Structural basis of GPBAR activation and bile acid recognition
Cryo-EM structure of GPBAR–Gs complex
The orthosteric binding site
Fingerprints for bile acid recognition
An unconventional activation mechanism
Coupling to Gs through ICL3
Second ligand-binding pocket
Structural basis of the bias property
Discussion
Online content
Fig. 1 Cryo-EM structure of GPBAR–Gs complexes.
Fig. 2 Activation mechanism of GPBAR.
Fig. 3 The coupling of GPBAR to Gs.
Fig. 4 The second ligand-binding pocket and its allosteric effect.
Fig. 5 Structural basis of the biased agonism by INT-777.
Extended Data Fig. 1 Cryo-EM images and single-particle reconstruction of the P395–GPBAR–Gs and INT-777–GPBAR–Gs complexes.
Extended Data Fig. 2 Selected electron microscopy density map of the P395–GPBAR–Gs complex and the INT-777–GPBAR–Gs complex.
Extended Data Fig. 3 Activation of GPBAR by P395 and INT-777.
Extended Data Fig. 4 Binding, activation and mutation effects of GPBAR in response to different agonists.
Extended Data Fig. 5 Structural fingerprints of GPBAR recognizing different bile acids.
Extended Data Fig. 6 The active structure of GPBAR in complex with Gs.
Extended Data Fig. 7 Interactions of the α5 helix of Gs with the GPBAR transmembrane core.
Extended Data Fig. 8 Interactions of GPBAR ICL3 with Gs.
Extended Data Fig. 9 The second ligand-binding pocket of GPBAR.
Extended Data Fig. 10 Effect of alanine-scanning mutagenesis of the potential residues involved in the biased property of GPBAR.
Extended Data Table 1 Cryo-EM data collection, model refinement and validation statistics.
s41586-020-2696-8.pdf
The native structure of the assembled matrix protein 1 of influenza A virus
Online content
Fig. 1 Cryo-ET of influenza A HK68 virions.
Fig. 2 Structure and assembly of influenza M1.
Fig. 3 Interactions that mediate assembly of M1.
Extended Data Fig. 1 HK68 virions and VLPs have variable numbers of M1 strands and variable radius.
Extended Data Fig. 2 Comparison of HK68 virus and VLP M1 structures, and resolution measurements.
Extended Data Fig. 3 Analysis of the M1 structure determined within virions and VLPs.
Extended Data Fig. 4 NMR analysis of M1.
Extended Data Fig. 5 Electron microscopy of in vitro-reconstituted M1 helical tubes.
Extended Data Fig. 6 Analysis of the in vitro M1 structure.
Extended Data Fig. 7 Alignment of M1 protein sequences.
Extended Data Table 1 Data collection and processing parameters for M1 within virions and VLPs.
Extended Data Table 2 Data collection and processing parameters for in vitro helical assembly of M1.
s41586-020-2906-4.pdf
HSP40 proteins use class-specific regulation to drive HSP70 functional diversity
Interaction of class A and B JDPs with HSP70
The GF region of DNAJB1 inhibits JD–HSP70 binding
DNAJB1 contains a second HSP70-binding site
EEVD binding to DNAJB1 releases JD–GF inhibition
DNAJB1 binds HSP70 and substrates independently
EEVD deletion abolishes HSP70–DNAJB1 binding
GF mutations restore DNAJB1–HSP70(ΔEEVD) function
JD–GF inhibition essential for amyloid disaggregation
Online content
Fig. 1 The GF region of class B JDPs initially blocks J-domain binding to HSP70.
Fig. 2 DNAJB1 contains an additional HSP70-binding site that is not found in class A JDPs.
Fig. 3 DNAJB1 binding to the C-terminal EEVD tail of HSP70 releases the JD–GF inhibition.
Fig. 4 DNAJB1 JD–GF inhibition is essential for amyloid disaggregation.
Extended Data Fig. 1 Interaction of JDP J-domains with HSP70 chaperone.
Extended Data Fig. 2 Structural characterization of DNAJB1JD–GF.
Extended Data Fig. 3 Deletion of helix V removes the GF inhibition of the J-domain and restores HSP70 binding.
Extended Data Fig. 4 Interaction of DNAJB1 with HSP70.
Extended Data Fig. 5 HSP70 binds to the CTDI of DNAJB1 but not of DNAJA2.
Extended Data Fig. 6 Client proteins and the HSP70 C-terminal EEVD tail bind simultaneously to DNAJB1.
Extended Data Fig. 7 Removal of the HSP70 EEVD tail abolishes binding to class B JDPs.
Extended Data Fig. 8 DNAJB1 mutants with released GF inhibition of the J-domain.
Extended Data Fig. 9 DNAJB1 mutants with partially released J-domains can interact with HSP70 lacking the C-terminal tail.
Extended Data Fig. 10 Characterization of HSP70 activity with constitutively JD-released (DNAJB1(ΔH5)) mutant.
s41586-020-2904-6.pdf
Molecular dissection of amyloid disaggregation by human HSP70
Chaperone interaction with α-synuclein
Amyloid selection by DNAJB1
DNAJB1 organizes HSP70 on the fibril
DNAJB1 facilitates HSP70 crowding
Disaggregation requires a bulky NEF
Assembly of the HSP70 disaggregase
Discussion
Online content
Fig. 1 The HSP70 machinery binds to discrete sequences in α-synuclein.
Fig. 2 Specific interaction of DNAJB1 with α-syn fibrils leads to clustering of HSP70 molecules.
Fig. 3 DNAJB1 facilitates HSP70 binding at high density to α-syn fibrils.
Fig. 4 HSP110 potentiates amyloid disaggregation.
Fig. 5 Model of amyloid disaggregation by the HSP70 chaperone machinery.
Extended Data Fig. 1 Structural features of the HSP70 disaggregase machinery.
Extended Data Fig. 2 Interaction of chaperones with monomeric α-syn.
Extended Data Fig. 3 C-terminal truncation abolishes DNAJB1 interaction and disaggregation activity.
Extended Data Fig. 4 DNAJB1 promotes HSP70 recruitment to α-syn fibrils.
Extended Data Fig. 5 Entropic-pulling model of amyloid disaggregation by HSP70.
Extended Data Fig. 6 HSP110 role in disaggregation is independent of ATPase cycle.
Extended Data Fig. 7 HSP110 role beyond HSP70 recycling.
Extended Data Table 1 Reaction conditions and replicate numbers for experiments shown in the figures.
Extended Data Table 2 Reaction conditions and replicate numbers for experiments shown in the Extended Data figures.
s41586-020-2864-x.pdf
Single-cell mutation analysis of clonal evolution in myeloid malignancies
Clonal architecture in myeloid malignancies
Mutation patterns in clonal architecture
Initiating mutations and clonal dominance
Clonal evolution in myeloid malignancies
Simultaneous scDNA-seq and immunophenotyping
Discussion
Online content
Fig. 1 Single-cell DNA sequencing of patients with myeloid malignancies.
Fig. 2 Elucidation of clonal dominance and co-mutation by single-cell DNA sequencing.
Fig. 3 Identification of initiating mutations and clonal expansion through assessing optimal genetic trajectories.
Fig. 4 Simultaneous single-cell DNA and cell-surface protein expression sequencing.
Extended Data Fig. 1 Single-cell DNA-sequencing patient cohort.
Extended Data Fig. 2 Analysis of clonal architecture by disease type and gene mutation.
Extended Data Fig. 3 Clonal dominance, initiating mutation, and co-mutation patterns in patients with myeloid malignancies.
Extended Data Fig. 4 Clonal evolution in patients with myeloid malignancies.
Extended Data Fig. 5 Contribution of clonal haematopoiesis mutations to mature cell lineages.
Extended Data Fig. 6 Simultaneous molecular and immunophenotypic profiling of samples from patients with AML.
Extended Data Fig. 7 Clonal architecture analysis using single-cell DNA+Protein sequencing of select AML samples.
Extended Data Fig. 8 Neighbourhood analysis of all single-cell DNA+Protein AML samples.
Extended Data Fig. 9 Clone- and gene-specific alterations to cell-surface protein expression and community representation in AML samples.
s41586-020-2886-4.pdf
Gut-educated IgA plasma cells defend the meningeal venous sinuses
Online content
Fig. 1 IgA plasma cells are localized adjacent to the dural sinuses in mouse and human meninges.
Fig. 2 Meningeal IgA cells are clonally related to those in the gut and depend on intestinal microbiota.
Fig. 3 Meningeal IgA entraps fungi in the dural sinuses and protects the brain from infection.
Extended Data Fig. 1 Anatomical localization of dural B cells and IgA+ cells.
Extended Data Fig. 2 Bacterial diversity in the gut of DA-GF and DB-GF mouse lines.
Extended Data Fig. 3 Clonal relatedness of IgA+ cells in the gut and meninges of SPF, DA-GF and DB-GF mice, and meningeal Ig heavy chain gene expression in DSS colitis mice.
Extended Data Fig. 4 Accumulation and proliferation of meningeal IgA+ cells after gut-epithelial barrier breach.
Extended Data Fig. 5 B cell expansion in meninges following systemic candidiasis.
Extended Data Fig. 6 Distribution of C.
Extended Data Fig. 7 Local effects of bortezomib-mediated meningeal plasma cell depletion.
Extended Data Fig. 8 Effects of bortezomib-mediated meningeal plasma cell depletion on bone marrow and spleen compartments.
s41586-020-2877-5.pdf
Exuberant fibroblast activity compromises lung function via ADAMTS4
Stromal responses to influenza infection
Regulation of fibroblast activation
ECM proteases in activated fibroblasts
ADAMTS4 promotes lethal immunopathology
ADAMTS4 and severe influenza in humans
Discussion
Online content
Fig. 1 Single-cell gene-expression profiling of CD45− cells during severe influenza A virus infection.
Fig. 2 ADAMTS4 is a lung fibroblast-derived ECM protease induced by influenza virus infection.
Fig. 3 ADAMTS4 promotes lethal immunopathology during influenza A virus infection in mice.
Fig. 4 ADAMTS4 levels are associated with severe seasonal and avian influenza infections.
Extended Data Fig. 1 Single-cell gene-expression profiling of CD45− lung cells during severe influenza virus infection in mice.
Extended Data Fig. 2 Summary of gene-set enrichment analysis of mouse and human fibroblast populations.
Extended Data Fig. 3 Validation of inflammatory fibroblast transcriptional states by flow cytometry.
Extended Data Fig. 4 Assessment of fibroblast cell surface phenotype in human lung samples.
Extended Data Fig. 5 Regulation of ECM-related gene expression in human and mouse respiratory cells following cytokine stimulation.
Extended Data Fig. 6 Meta-analysis of publicly available human disease single-cell gene-expression datasets.
Extended Data Fig. 7 Spatial transcriptomics of lung sections from mice collected 10 days after infection.
Extended Data Fig. 8 Assessment of IAV infection in Adamts4+/+ and Adamts4−/− mice.
Extended Data Fig. 9 Versican accumulates in the lung during severe IAV infection in mice and affects localization and responses of T cells.
Extended Data Fig. 10 Association of ADAMTS4 with clinical measures from three independent human cohorts.
s41586-020-2890-8.pdf
Paracrine signalling by cardiac calcitonin controls atrial fibrogenesis and arrhythmia
Atrial cardiomyocytes produce CT
Human ACFs express functional CTR
CT–CTR signalling regulates ACF function
Disrupted CT–CTR signalling in AF
CT–CTR system controls susceptibility to AF
Discussion
Online content
Fig. 1 Myocardial production of CT.
Fig. 2 CT regulates ACFs.
Fig. 3 CT and physiology of ACFs.
Fig. 4 CT–CTR signalling, atrial fibrosis and AF inducibility.
Extended Data Fig. 1 Effects of human αCGRP on human ACF function.
Extended Data Fig. 2 Effect of CT on collagen 1 processing and single-cell transcriptome (10× scRNA-seq) of cultured human ACFs.
Extended Data Fig. 3 CTR expression and CT-mediated changes in ACF.
Extended Data Fig. 4 Single-cell transcriptome of freshly isolated human ACFs (scRNA-seq SMART-seq2).
Extended Data Fig. 5 Cluster comparison of single-cell transcriptomes (SMART-seq2) of freshly isolated human ACFs.
Extended Data Fig. 6 Protein profiling of the selected CT–CTR downstream targets.
Extended Data Fig. 7 Atrial gene expression, morphological parameters and AF duration in mice.
Extended Data Fig. 8 Validation of anti-CTR antibody and study summary.
Extended Data Table 1 Clinical characteristics of the study participants.
Extended Data Table 2 In vivo echocardiographic and haemodynamic parameters in mice.
s41586-020-2866-8.pdf
Enteric neurons increase maternal food intake during reproduction
Innervation of the stomach-like crop
Control of the crop by myosuppressin neurons
Neuron remodelling during reproduction
Neuron remodelling promotes food intake
Discussion
Online content
Fig. 1 Regulation of crop enlargement by Ms and MsR1.
Fig. 2 Reproductive modulation of Ms neurons.
Fig. 3 Steroid and enteroendocrine modulation of Ms neurons and crop enlargement.
Fig. 4 Post-mating, Ms-mediated crop enlargement increases food intake and reproductive output.
Extended Data Fig. 1 Innervation of the anterior portion of the adult Drosophila intestine.
Extended Data Fig. 2 Intestinal transit dynamics and dietary regulation of crop enlargement.
Extended Data Fig. 3 Characterization of Ms expression.
Extended Data Fig. 4 Ms neuron regulation of crop enlargement.
Extended Data Fig. 5 Expression of Ms receptors and their regulation of crop enlargement.
Extended Data Fig. 6 Post-mating modulation of Ms neurons.
Extended Data Fig. 7 Ecdysone modulation of Ms neurons and crop size.
Extended Data Fig. 8 Bursicon modulation of Ms neurons.
Extended Data Fig. 9 Post-mating modulation of crop enlargement by Burs and ecdysone.
Extended Data Fig. 10 Regulation of food intake, fecundity and fertility by Ms neurons.
s41586-020-2881-9.pdf
Host variables confound gut microbiota studies of human disease
Machine-learning framework
Microbiota-associated variable identification
Host variables confound disease analyses
Alcohol and stool quality affect microbiota
Caveats to data interpretation
Discussion
Online content
Fig. 1 Physiological, lifestyle and dietary characteristics strongly associate with the composition of the gut microbiota.
Fig. 2 Human participants with a disease vary from healthy controls in critical microbiota-associated variables that confound microbiota analyses.
Fig. 3 Variation in microbiota due to confounding variables spuriously increases observations of disease-associated microbiota differences.
Fig. 4 Alcohol intake and BMQ are associated with robust effects on microbiota composition that confound microbiota studies of human disease.
Extended Data Fig. 1 Data processing and machine-learning analysis framework.
Extended Data Fig. 2 Machine-learning evaluation of common exclusion criteria and variables for matching.
Extended Data Fig. 3 Evaluation of Random Forest microbiota association strengths compared to beta diversity assessments and as a function of sample size.
Extended Data Fig. 4 Comparison of microbiota–disease association strengths between disease-inclusive and disease-exclusive cohorts.
Extended Data Fig. 5 Machine-learning and compositional analyses for diseases before and after confounder matching.
Extended Data Fig. 6 Assessment of the capacity for statistical methods to correct for mismatching.
Extended Data Fig. 7 Validation of confounding effects of host variables in external independent cohorts of type 2 diabetes and metabolic syndrome.
Extended Data Fig. 8 Assessment of strength of confounding effects for microbiota-associated confounding host variables.
Extended Data Fig. 9 Examination of the effects of alcohol consumption on the gut microbiota with external validation.
Extended Data Fig. 10 Bowel movement quality matching and external validation.
s41586-020-2759-x.pdf
Initiation of a conserved trophectoderm program in human, cow and mouse embryos
Online content
Fig. 1 Transcriptional and protein expression differences between cells in human morula embryos.
Fig. 2 Apical expression of aPKC and AMOT in outer cells in human morula embryos, in which SOX2 expression is retained.
Fig. 3 aPKC activity is required for YAP1 and GATA3 expression in mouse, cow and human morula embryos.
Extended Data Fig. 1 Morphokinetic analysis to benchmark key morphological events in mouse, cow and human pre-implantation development.
Extended Data Fig. 2 Protein expression of TE-associated markers in mouse, cow and human morula embryos.
Extended Data Fig. 3 WWTR1, KRT18 and GRHL2 expression in human morula embryos.
Extended Data Fig. 4 Correlation analysis of GATA3 expression in human morula cells.
Extended Data Fig. 5 SOX2 is an inner-cell-specific marker in mouse, but it is broadly expressed in cow and human morula embryos.
Extended Data Fig. 6 Apical PAR complex and AMOT expression in mouse, cow and human morula embryos.
Extended Data Fig. 7 aPKC inhibitor dose–response experiments in mouse, cow and human embryos.
Extended Data Fig. 8 Characterization of the effects of aPKC inhibition in mouse, cow and human embryos.
Extended Data Fig. 9 Trim-Away optimization in mouse embryos.
Extended Data Fig. 10 Characterization of the effects of aPKC Trim-Away in mouse embryos.
Extended Data Fig. 11 Trim-Away experiments in cow and human embryos.
s41586-020-2905-5.pdf
Persistent transcriptional programmes are associated with remote memory
Neuron subtypes in remote memory engrams
Memory-associated gene expression
Vesicle exocytosis signatures in memory
Non-neuronal gene expression changes
Discussion
Online content
Fig. 1 Labelling and collection of single memory engram cells via the TRAP2 Ai14 line.
Fig. 2 Molecular identification of active neurons during remote memory consolidation.
Fig. 3 Transcriptional programmes activated by consolidation of remote memories are distinct across neuron subtypes.
Fig. 4 Remote memory consolidation is associated with specific markers for vesicle exocytosis.
Fig. 5 Transcriptomic changes in non-neuronal cells associated with remote memory consolidation.
Extended Data Fig. 1 Fidelity of the TRAP2 Ai14 line and sequencing quality metrics.
Extended Data Fig. 2 Distribution of cell numbers and neuronal subtypes across various training conditions.
Extended Data Fig. 3 Differential gene expression in distinct neuronal subtypes (FR over NF TRAPed populations).
Extended Data Fig. 4 Analysis of TRAPed ensembles in food salience (S) versus no salience (NS) mice.
Extended Data Fig. 5 DEGs when comparing ensembles from food salience (S) to no salience (NS) mice.
Extended Data Fig. 6 Neuron subtype-specific activation programs, hypothesized protein–protein interactions and upstream regulatory motifs.
Extended Data Fig. 7 In situ validation of tdT levels, neuronal subtype compositions and remote-memory-specific DEGs in the mPFC.
Extended Data Fig. 8 DEGs and potential cell–cell interactions in non-neuronal cells during memory consolidation.
Extended Data Fig. 9 Comparison of remote-memory DEGs with previously published datasets of experience-dependent transcriptional activity.
Extended Data Fig. 10 Comparison of remote-memory-specific DEGs and fear-experience-related DEGs.
s41586-020-2802-y.pdf
Spontaneous travelling cortical waves gate perception in behaving primates
GLM analysis
Reporting summary
Online content
Fig. 1 Spontaneous LFP fluctuations often travel as waves across the cortex.
Fig. 2 Spontaneous travelling waves modulate ongoing spiking probability.
Fig. 3 Waves facilitate detection when aligned with the retinotopic location of visual targets.
Fig. 4 Wave state predicts target-evoked response magnitude and perceptual sensitivity.
Extended Data Fig. 1 Retinotopic mapping and motion direction tuning are consistent with the anatomical organization and tuning preferences of marmoset MT.
Extended Data Fig. 2 Detection of spontaneous travelling waves.
Extended Data Fig. 3 Wideband GP is better coupled to spike timing than narrowband alpha or theta filters.
Extended Data Fig. 4 Spike coupling to GP is spatially dependent.
Extended Data Fig. 5 Spontaneous travelling waves are present during normal viewing of naturalistic visual scenes.
Extended Data Fig. 6 False alarms are not predicted by the phase of travelling waves.
Extended Data Fig. 7 Target-evoked response magnitude is correlated with detection performance.
Extended Data Fig. 8 Narrowband filters fail to detect any significant wave phase alignment before target onset.
Extended Data Fig. 9 Instantaneous voltage is less predictive of spike timing and perception than GP.
s41586-020-2807-6.pdf
Innate and plastic mechanisms for maternal behaviour in auditory cortex
Online content
Fig. 1 Temporal statistics drive behavioural and cortical responses to pup calls in naive and experienced females.
Fig. 2 Excitatory and inhibitory tuning and synaptic responses are altered by maternal experience.
Fig. 3 Co-housing with pups results in coordinated plasticity of excitatory and inhibitory neuronal tuning.
Fig. 4 Auditory cortex and the oxytocinergic system are required for the re-tuning of cortical neurons during co-housing.
Extended Data Fig. 1 Stimulus library of prototypical and morphed pup calls.
Extended Data Fig. 2 Pup call ISIs drive retrieval and approach behaviour in experienced females.
Extended Data Fig. 3 Two-photon calcium imaging of auditory cortical responses to pup calls and pure tones.
Extended Data Fig. 4 Temporal tuning to pup calls in left auditory cortex reflects the behavioural salience of ISIs and retrieval probability.
Extended Data Fig. 5 Pup-naive virgins did not increase the number of times they pressed a lever to turn off prototypes or morphs.
Extended Data Fig. 6 Experience-dependent neuronal and synaptic temporal tuning in auditory cortex.
Extended Data Fig. 7 Variability in prototype-responsive neurons and single-cell temporal tuning curves during co-housing.
Extended Data Fig. 8 Re-tuning of cortical neurons requires co-housing and reflects the statistics of pup call exemplars.
Extended Data Fig. 9 Optical inhibition of OT neurons perturbs the re-tuning of auditory cortical neurons.
Extended Data Fig. 10 Intrinsic tuning in auditory cortex acts as a scaffold for experience-dependent plasticity during co-housing.
s41586-020-2889-1.pdf
A yeast living ancestor reveals the origin of genomic introgressions
A clonal descendant of the ancestral hybrid
Genome instability in the ancestral hybrid
LOH blocks rescue hybrid fertility
Recreating the history of introgressions
The asexual fitness of the living ancestor
Discussion
Online content
Fig. 1 The ancestor of the Alpechin lineage.
Fig. 2 LOH blocks guide recombination of the subgenomes of the hybrid.
Fig. 3 Reconstructing the origin of introgressions.
Fig. 4 The asexual fitness landscape.
Extended Data Fig. 1 Genome-wide distribution of LOH and introgressions.
Extended Data Fig. 2 Alternative evolutionary models of the origins of the living ancestor and Alpechins.
Extended Data Fig. 3 Genome structure of the living ancestor.
Extended Data Fig. 4 LOH size distribution.
Extended Data Fig. 5 Genome-wide genotype of the living ancestor gametes.
Extended Data Fig. 6 Living ancestor clonal evolution.
Extended Data Fig. 7 Reconstruction of the hybridization-to-introgression model.
Extended Data Fig. 8 Mapping the slow growth phenotype in the Alpechin AQA strain.
Extended Data Fig. 9 The competitive fitness of the living ancestor.
Extended Data Fig. 10 The adaptive value of introgressions.
s41586-020-2902-8.pdf
Sources of particulate-matter air pollution and its oxidative potential in Europe
Online content
Fig. 1 PM and OPv sources at rural and urban sites.
Fig. 2 Levels and sources of PM10 and DTTvPM10 in Europe.
Fig. 3 Source-segregated exposures to PM10 and OPvPM10, their dependence on population density, and historical and projected emissions.
s41586-020-2908-2.pdf
Key role of chemistry versus bias in electrocatalytic oxygen evolution
Influence of charge on OER activity
Charge-storage mechanism
OER mechanism on Ir-based materials
External bias and rate
Oxidative charge and rate
Activation free energies
Conclusions and outlook
Online content
Fig. 1 Measured electrocatalytic response of IrOx/Ti-250 °C.
Fig. 2 Charge storage under steady-state and potentiodynamic conditions.
Fig. 3 Computed surface pH–potential phase diagram.
Fig. 4 Computed mechanism and energetics of water–oxyl coupling.
Fig. 5 Computed electrocatalytic response of IrO2.
s41586-020-2909-1.pdf
The scales of human mobility
Nested scales generate power laws
A simple model identifies containers
Scales of human mobility
Validating through generation of traces
Validating through demographics and built environment
Discussion
Online content
Fig. 1 The scales of human mobility.
Fig. 2 The container model generates realistic mobility traces.
Fig. 3 Socio-demographic differences and heterogeneity in scales.
Extended Data Fig. 1 The D1 dataset.
Extended Data Fig. 2 Distribution of container sizes at different levels.
Extended Data Fig. 3 Schematic description and validation of the likelihood optimization algorithm.
Extended Data Fig. 4 The container model generates realistic synthetic traces.
Extended Data Fig. 5 Number of hierarchical levels recovered from traces.
Extended Data Table 1 The distribution of container sizes is not scale free.
Extended Data Table 2 The distribution of time spent within container is not scale free.
Extended Data Table 3 Characteristics of the lognormal distributions of container sizes.
Extended Data Table 4 The container model describes unseen data better than other individual mobility models.
s41586-020-2900-x.pdf
The growth equation of cities
Growth of cities and Zipf’s law
Deriving the equation of city growth
No stationary distribution for cities
Dynamics: splendour and decline of cities
A new paradigm
Online content
Fig. 1 Migration flow analysis.
Fig. 2 Rank clocks for France.
Extended Data Fig. 1 No universal exponent.
Extended Data Fig. 2 In- and out-neighbours.
Extended Data Fig. 3 Density function of the out-of-system growth rate.
Extended Data Fig. 4 Migration-flow analysis.
Extended Data Fig. 5 Average distribution of city sizes.
Extended Data Fig. 6 Scatterplot of the quantity P(S, t) × S versus the ratio for France’s top-500 largest cities between 1875 and 2016.
Extended Data Fig. 7 Power-law fit of the expansion with α = 1.
Extended Data Fig. 8 Rank clocks of the USA and the UK.
Extended Data Fig. 9 Microdynamics of city rank through time for the largest cities in France, the USA and the UK.
Extended Data Fig. 10 Average number of years (and standard dispersion) taken to observe the maximal rank variation ∆r as a function of ∆r.
Table 1 Estimates of parameter α.
Table 2 Estimates of parameters for the four datasets.
Table 3 Average rank shift per unit time, d.
s41586-020-2910-8.pdf
Observation of gauge invariance in a 71-site Bose–Hubbard quantum simulator
Online content
Fig. 1 Quantum simulation of a U(1) lattice gauge theory.
Fig. 2 Probing the many-body dynamics.
Fig. 3 Density–density correlation.
Fig. 4 Fulfilment of Gauss’s law.
Extended Data Fig. 1 Level structure of a three-site building block (matter–gauge–matter).
Extended Data Fig. 2 Single-site resolved imaging.
Extended Data Fig. 3 Dynamics in building blocks.
Extended Data Fig. 4 Quantum phase transition and revival.
Extended Data Fig. 5 Numerical simulations of the phase transition dynamics.
Extended Data Fig. 6 Correlation length.
Extended Data Fig. 7 Ramping speed and gauge violation.
Extended Data Fig. 8 Dynamics in double wells.
Extended Data Fig. 9 Detecting the gauge-invariant states.
Extended Data Fig. 10 Resolving the population of the states.
s41586-020-2893-5.pdf
A blue ring nebula from a stellar merger several thousand years ago
Online content
Fig. 1 Ultraviolet and Hα images of the blue ring nebula and a geometric schematic of the biconical outflow.
Fig. 2 Spectral energy distribution and Hα emission of TYC 2597-735-1.
Fig. 3 Schematic of the merger events responsible for the current state of TYC 2597-735-1 and its ultraviolet nebula (not to scale).
Extended Data Fig. 1 TYC 2597-735-1 and its ultraviolet nebula in different bandpasses.
Extended Data Fig. 2 The source of emission in the far-ultraviolet nebula.
Extended Data Fig. 3 TYC 2597-735-1 is an outlier when compared with other moderately evolved stars of similar mass.
Extended Data Fig. 4 Stellar Hα emission properties of TYC 2597-735-1.
Extended Data Fig. 5 Radial velocity of TYC 2597-735-1.
Extended Data Fig. 6 Light curve of TYC 2597-735-1 since 1895.
Extended Data Fig. 7 Evolution of a stellar merger between a 2M⊙ primary star and a 0.
Extended Data Fig. 8 Demonstration of the velocity line profile fitting to an unblended Fe i line (5,569.
s41586-020-2715-9.pdf
LifeTime and improving European healthcare through cell-based interceptive medicine
Technology development and integration
Identification of medical priorities
Implementation and infrastructure
Interaction with industry and innovation
Ethical and legal issues
Education and training
Impact on medicine and healthcare
Outlook summary
Acknowledgements
Fig. 1 Early disease detection and interception by understanding and targeting cellular trajectories through time.
Fig. 2 Hallmarks of the LifeTime approach to disease interception and treatment.
Fig. 3 Exploiting the LifeTime dimension to empower disease targeting.
Fig. 4 Blueprint of the LifeTime Initiative.
s41586-020-2898-0.pdf
Publisher Correction: Innate and plastic mechanisms for maternal behaviour in auditory cortex

The international journal of science / 19 November 2020

Editorials

Europe must think more globally in its pandemic response
The EU has struggled to find a unified voice in the pandemic. Its new plan is a strong start, but needs to be more outward-looking.

How is it that countries with some of the world's highest levels of health-care spending have also seen some of the highest mortality from COVID-19? This is one of the great mysteries of the coronavirus pandemic. Almost one year into the crisis, five of Europe's biggest economies — Germany, France, the United Kingdom, Italy and Spain — have recorded a total of almost 200,000 deaths.

At the same time, the 27-country European Union has failed to produce much in the way of a unified response. In the pandemic's early months, some EU nations stopped exports of personal protective equipment, even to fellow member states. EU nations have so far allocated nearly €6 billion (US$7.1 billion) to support the pandemic response, but individual countries do not have a common benchmark on which to base interventions, so have differed on crucial issues such as what is meant by social distancing, when to lock down, and the rules of quarantine.

The incoherence of the EU's pandemic response is surprising for a group of nations that has so successfully acted and spoken with one voice on other cross-border issues, most notably climate change. Fortunately, the European Commission and its ethics and research advisers have been working to get a firmer grip on the situation. On 11 November, the commission published a lengthy list of actions (see go.nature.com/3pcyjch) for the European Parliament and the governments of EU nations to consider. These include upgrading the European Centre for Disease Prevention and Control to improve its disease surveillance and its capacity to help countries to prepare for — and respond to — epidemics.

The plan, which is being steered by commission president Ursula von der Leyen, also calls for the establishment of a Europe-wide network of reference laboratories for testing human pathogens. Another proposed body is the EU Health Emergency Preparedness and Response Authority. One of its functions would include channelling public funding to companies and universities developing promising drug and vaccine candidates — similarly to the US Biomedical Advanced Research and Development Authority.

Independent research advice is at the heart of the proposals. Also on 11 November, the commission's science and ethics advisers published a separate report, 'Improving pandemic preparedness and management' (go.nature.com/3lcwzxo). This reviews key literature on subjects including the biology, spread and economic effects of infectious diseases; the populations that are most vulnerable; and the rise of virus misinformation and conspiracy theories. The report draws on literature from many disciplines, and needs to be communicated widely, because it will help to provide a unified evidence base for those tasked with harmonizing member states' pandemic response plans.

But last week's announcement also highlighted a crucial gap in the EU plan, one that will require some creative thinking to bridge. EU member states have not previously faced a challenge that needed a coordinated public-health response on the scale required by the current crisis. At the same time, health is not a part of the EU's core 'competences' — those areas of public policy for which processes exist for member states to make collective decisions. Health is a matter for individual member countries, which is partly why the EU lacks a high-level group of decision-makers that can quickly be mobilized when an emergency strikes.

The commission recommends strengthening the EU's existing health-coordinating body, the Health Security Committee. But there's an argument for creating a higher-level network that could be activated in the event of a health emergency. Finance ministers are an example of such a group. They meet regularly, and acted in concert when the 2008 financial crisis threatened to devastate the world economy. A recommendation to create an apex health network must come from Europe's heads of government — it is beyond the remit of the commission, which is effectively the equivalent of the EU's civil service. Discussions on its feasibility cannot begin soon enough.

The EU's pandemic response also needs to be more open to the knowledge, experience and research of non-EU states, including those in Africa and Asia that have more experience of tackling dangerous infectious diseases and are, in some cases, managing the pandemic better. This is a difficult time for international relations, as the EU reassesses its links with the United States, owing to four turbulent years under the presidency of Donald Trump, and with China. But EU decision-makers need to find a way to negotiate for the public-health needs of the union's member countries alongside these changing political relationships. And they must heed the advice of their science and ethics advisers, who note, in their report, that because pandemics are international, "preparing for them and responding to them requires cooperation across countries and continents, irrespective of geopolitical alliances".

The EU's leadership must approach its pandemic response as it did the 2008 financial crisis and the 2015 Paris climate agreement. In both of those instances, EU leaders could have restricted their policy response to the boundaries of member states, but they wisely reached out and created alliances with other countries, including many in the global south, leading to a more powerful and more inclusive global response. The EU understood that climate change and financial contagion do not observe borders. Neither does a pandemic, and the EU must act accordingly.

Facial-recognition research needs an ethical reckoning
The fields of computer science and AI are struggling with the ethical challenges of biometrics. Institutions must respond.

Over the past 18 months, a number of universities and companies have been removing online data sets containing thousands — or even millions — of photographs of faces used to improve facial-recognition algorithms. In most cases, researchers scraped these images from the Internet. The pictures were public, and their collection didn't seem to alarm institutional review boards (IRBs) and other research-ethics bodies. But none of the people in the photos had been asked for permission, and some were unhappy about the way their faces had been used.

This problem has been brought to prominence by the work of Berlin-based artist and researcher Adam Harvey, who highlighted how public data sets are used by companies to hone surveillance-linked software — and by the journalists who reported on Harvey's work. Many researchers in the fields of computer science and artificial intelligence (AI), and those responsible for the relevant institutional ethical review processes, did not see any harm in using public data without consent. But that is starting to change. It is one of many debates that need to be had around how facial-recognition work — and many other kinds of AI research — can be studied more responsibly.

As Nature reports in a series of Features on facial recognition this week (pages 347, 350 and 354), many in the field are rightly worried about how the technology is being used. Some scientists are analysing inaccuracies and biases in facial-recognition technology, warning of discrimination, and joining campaigners calling for stronger regulation, greater transparency, consultation with the communities that are being monitored by cameras — and for use of the technology to be suspended while lawmakers reconsider its benefits and risks.

Responsible studies
Some scientists are urging a rethink of ethics in the field of facial-recognition research, too. They are arguing, for example, that researchers should not do certain types of study. Many are angry about academic papers that sought to study the faces of people from vulnerable groups, such as the Uyghur population in China, whom the government has subjected to surveillance and detained on a mass scale. Others have condemned papers that sought to classify faces by scientifically and ethically dubious measures such as criminality.

Nature conducted a survey to better understand scientists' views on the ethics of facial-recognition technology and research. Many respondents said that they wanted conferences to introduce ethics reviews for biometrics studies. This is starting to happen. Next month's NeurIPS (Neural Information Processing Systems) conference will, for the first time, require that scientists address ethical concerns and potential negative outcomes of their work. And the journal Nature Machine Intelligence has begun to ask researchers to write a statement describing the impact of certain types of AI research.

These are important steps, but journals, funders and institutions could do more. For example, researchers want more guidance from institutions on what kinds of study are acceptable. Nature's survey found widespread worry — and disagreement — about the ethics of facial-recognition studies, and concern that IRBs might not be equipped to provide sufficient guidance. General ethical guidance for AI already exists. And US and European funders have supported efforts to study the challenges of biometrics research and recommended rethinking what counts as 'public' data — as well as urging scientists to consider a study's potential impacts on society.

Ultimately, biometrics research involves people, and scientists shouldn't gather and analyse personal data simply because they can. Public consultation is key: scientists should consult those whom the data describe. If this is impossible, researchers should try to reach a panel of representatives who can speak for them.

One problem is that AI guidance tends to consist of principles that aren't easily translated into practice. Last year, the philosopher Brent Mittelstadt at the University of Oxford, UK, noted that at least 84 AI ethics initiatives had produced high-level principles on the ethical development and deployment of AI (B. Mittelstadt Nature Mach. Intell. 1, 501–507; 2019). These converged around medical-ethics concepts, such as respect for human autonomy, the prevention of harm, fairness and explicability (or transparency). But Mittelstadt pointed out that different cultures disagree fundamentally on what principles such as 'fairness' or 'respect for autonomy' mean in practice. Medicine has internationally agreed norms for preventing harm to patients, and robust accountability mechanisms. AI lacks these, Mittelstadt noted. Specific case studies and worked examples would be more helpful to prevent ethics guidance becoming little more than window-dressing.

A second concern is that many researchers depend on companies for their funding and data. Although most firms are concerned by ethical questions about the way biometrics technology is studied and used, their views are likely to be conflicted because their bottom line is to sell products.

Researchers alone can't stop companies and governments using facial-recognition technology and analysis tools unethically. But they can argue loudly against it, and campaign for stronger governance and transparency. They can also reflect more deeply on why they're doing their own work; how they've sourced their data sets; whether the community they're expecting their studies to benefit wants the research to be done; and what the potential negative consequences might be.

World view
A personal take on science and society

By Francine Ntoumi

Tropical diseases need attention, too
As the COVID-19 pandemic threatens to erode huge gains against much more devastating infections, I look for silver linings.

All year, COVID-19 has commandeered the world's attention. It is as if no other disease has ever been more important, more contagious or more deadly.

I founded a non-profit research institute in 2008; we established the first molecular-biology laboratory in the Republic of Congo, at the country's only public university. We monitor pathogens such as those that cause gastrointestinal diseases, malaria, HIV, tuberculosis (TB) and chikungunya — which together infect more than 250 million people each year globally, and kill more than 2.5 million. To keep treatments effective, we assess the development of resistance to antimalarial, antiretroviral and antibiotic drugs. Our research programmes were already in place, so we could quickly pivot to diagnostic testing and blood-based epidemiological studies to understand how COVID-19 was spreading in Congo and how to keep health-care workers safe.

Since March, three-quarters of our time has been spent on COVID-19. That means I am neglecting my work on other diseases — which are not going away. And it's not only my lab. In October, the World Health Organization (WHO) reported that progress against TB might stall: in the countries with the highest rates of the disease, the number of people diagnosed and directed to care dropped by one-quarter compared with last year's figure. Because many countries have implemented lockdowns, hospitals and health centres have seen a significant drop in the number of people coming for treatment. In Uganda, maternal mortality rose by 82% from January to March, and because of COVID-19, rates of HIV diagnoses and of people starting antiretroviral treatment (and treatment to prevent TB) will fall by 75% (D. Bell et al. Am. J. Trop. Med. Hyg. 103, 1191–1197; 2020). These treatments must be kept on track through active community outreach.

In September, researchers at the WHO and elsewhere modelled what could happen if distribution of antimalarial medicine and insecticidal bednets to prevent malaria falls by up to 75% (D. J. Weiss et al. Lancet Infect. Dis. https://doi.org/fg3n; 2020). If this plays out, all the gains made against malaria over the past 20 years could be lost.

My message is not that efforts against COVID-19 are misguided, but that I am disheartened that such efforts have not been rallied and sustained against other infectious diseases. Sometimes, while running diagnostic tests to track COVID-19 infections in my country, I daydream about a disease I have worked on for 25 years. What if the world had tackled malaria with the energy now dedicated to the coronavirus? Might malaria have been defeated?

Philanthropic organizations, such as the Bill & Melinda Gates Foundation in Seattle, Washington, have accelerated research against malaria and other diseases. Deaths from malaria declined by nearly 31% from 2010 to 2018. Some treatments were developed in Africa (where some trials for the Ebola vaccine were also run). But these exertions do not compare with those against COVID-19. More than 90% of the global burden of malaria deaths is in Africa. A child dies from malaria every 2 minutes.

For survivors, such infectious diseases lock in a vicious cycle. They keep people from work and school, trapping them in poverty and conditions that allow illness to thrive. The people most directly affected do not have the resources to mount a huge effort against them. To combat this injustice, I try to find a sense of progress — to identify concrete actions to strengthen research capacities in Africa in general and in my country in particular.

One silver lining in this pandemic is that African leaders, who had developed the bad habit of putting all their hopes on development aid, have dug into their own budgets to fight COVID-19. The private sector, including oil companies and local banks, has chipped in. If this alliance can continue after the pandemic ebbs, research capacity will increase across Africa. This might be a case in which we 'build back better' after the pandemic. During the lockdown, researchers and engineers developed prototypes of respirators made in Congo using recycled components, showing initiative and creativity that should flow into other areas of health research. We need to set up functional, well-equipped labs to boost this work.

I also hope that the dynamism and richness of scientific exchanges since January 2020 will continue and intensify. We need to establish solid collaborations nationally (with other research institutions), regionally (with surrounding countries) and with regional and international networks, such as the Central Africa Clinical Research Network (CANTAM) and the Pan-African Network for Rapid Research, Response and Preparedness for Infectious Diseases Epidemics (PANDORA), both of which I coordinate. Above all, we must train the next generation of scientists locally.

I tell myself that COVID-19 will help in this exercise. I just need to apply to many calls for proposals for coronavirus grants, in collaboration with colleagues from all parts of the world. This funding will be an opportunity to train researchers who will move on to tropical diseases as soon as the need to tackle COVID-19 becomes less pressing. To get through my work day after day, this is how I see the COVID-19 pandemic: as an opportunity to build structures that will reduce the burden of all tropical diseases. I do not want to think about a world where that does not happen.

Francine Ntoumi heads the Congolese Foundation for Medical Research and is a senior lecturer at the University Marien Ngouabi in Brazzaville, Republic of Congo, and the Institute of Tropical Medicine at the University of Tübingen in Germany. e-mail: fntoumi@fcrm-congo.com

The world this week

News in brief

MAJOR EUROPEAN RESEARCH FUND GETS LAST-MINUTE BOOST
In a final round of intensive budget talks last week, policymakers agreed to give €85 billion (US$100 billion) to the European Union's flagship 7-year research programme, Horizon Europe — €4 billion more than previously proposed. The last-minute increase is part of an agreement between the union's 27 members and the European Parliament on the bloc's overall 2021–27 budget, a record €1.8-trillion package that includes a €750-billion COVID-19 recovery fund. Governments and the parliament agreed on 10 November to raise the budgets for health and education: together, Horizon Europe, the student-exchange programme Erasmus+ and the COVID-19 response package EU4Health will get an extra €15 billion. The European Commission, led by Ursula von der Leyen, had previously proposed a €94.4-billion budget for Horizon Europe, but in July, negotiations scaled that back to €81 billion. Research organizations that lobbied for more generous funding say the final deal is underwhelming. The European Parliament is expected to formally approve the budget deal before the end of the year.

KENYA HAS MYSTERIOUSLY LOW COVID DEATH TOLL
One of the first large SARS-CoV-2 antibody studies in Africa suggests that by mid-2020, the virus had infected 4% of people in Kenya — a surprisingly high figure in view of Kenya's small number of COVID-19 deaths. The presence of antibodies against SARS-CoV-2 indicates a history of infection with the virus. Sophie Uyoga at the KEMRI-Wellcome Trust Research Programme in Kilifi, Kenya, and her colleagues searched for such antibodies in samples of blood donated in Kenya between late April and mid-June (S. Uyoga et al. Science https://doi.org/fhsx; 2020). On the basis of those samples, the researchers estimated that 4.3% of Kenya's people had a history of SARS-CoV-2 infection. The team's estimate of antibody prevalence in Kenya is similar to an earlier estimate for the level in Spain. But Spain had lost more than 28,000 people to COVID-19 by early July, whereas Kenya had lost 341 by the end of the same month. The authors write that the "sharp contrast" between Kenya's antibody prevalence and its COVID-19 deaths hints that the coronavirus's effects are dampened in Africa.
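As a back-of-the-envelope illustration of how a serosurvey of this kind turns antibody test results into a population estimate, here is a minimal sketch in Python. The donor counts and test characteristics below are hypothetical, since the brief reports only the final 4.3% figure; real surveys also correct for imperfect test sensitivity and specificity, and the Rogan–Gladen adjustment shown is the standard first step.

```python
def seroprevalence(positives: int, total: int,
                   sensitivity: float = 1.0, specificity: float = 1.0) -> float:
    """Estimate the fraction of a population with past infection from
    antibody test results, applying the Rogan-Gladen correction for
    imperfect test sensitivity and specificity."""
    crude = positives / total  # raw fraction of samples testing positive
    return (crude + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Hypothetical numbers for illustration only; the brief gives just the
# final estimate, not the underlying counts or test performance.
crude = seroprevalence(129, 3_000)                 # perfect test assumed
adjusted = seroprevalence(129, 3_000, 0.93, 0.99)  # imperfect test assumed
print(f"crude: {crude:.1%}, adjusted: {adjusted:.1%}")
# -> crude: 4.3%, adjusted: 3.6%
```

With a perfect test the estimate is simply the fraction of positive samples; in this illustration, correcting for plausible test performance moves the figure from 4.3% to about 3.6%, which is why published serosurveys report adjusted values.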

Trendwatch

How space-station science has matured

Twenty years ago this month, on 2 November 2000, one astronaut and two cosmonauts floated into the newly constructed International Space Station (ISS). It was the beginning of two decades of the outpost being permanently inhabited by people. Astronauts have since conducted around 3,000 scientific experiments on the ISS. The research spans disciplines from fundamental physics to Earth observation and biomedical studies.

Once critiqued as relatively insignificant and not all that relevant to people on Earth, science aboard the ISS has blossomed as its inhabitants have devoted more time to research. The results include insights into how humans and animals adapt to long-duration space flight, as well as how materials behave in space. Today, the ISS is packed with modern research equipment, including a top-of-the-range confocal microscope, installed in 2017.

Most of the experiments aim to investigate how things work differently in microgravity — such as the way a flame burns or how mouse cells develop — to see whether those lessons can be applied to technologies or drugs on Earth. "It's like taking an entire world-class university and shrinking it down to the size of the space station," said Kate Rubins, a NASA astronaut and biologist who is currently aboard the ISS.

RESEARCH IN ORBIT
[Bar chart] Astronauts have run nearly 3,000 scientific experiments on the International Space Station. Experiments (in hundreds) are broken down by category (biology and biotechnology, technology development, educational activities, human research, physical science, and Earth and space science) and by agency: NASA, Roscosmos, the Japan Aerospace Exploration Agency, the European Space Agency and the Canadian Space Agency.

News in focus

COVID-19 vaccines are being tested in tens of thousands of people around the world.

WHAT LANDMARK COVID VACCINE RESULTS MEAN FOR THE PANDEMIC
Scientists welcome the first compelling evidence that vaccines can prevent COVID-19 — but questions remain about how much protection they offer, and for how long.

By Ewen Callaway

Scientists have greeted with cautious optimism a slew of positive preliminary results from phase III trials of COVID-19 vaccines — the final round of human testing for these experimental immunizations. In the past week, three major efforts — led by drug firm Pfizer, biotech company Moderna and Russian developers — reported early data from phase III trials. Each said that its vaccine is more than 90% effective at preventing coronavirus infection. The results offer the first compelling evidence that vaccines can prevent COVID-19 — but the data do not answer key questions that will show whether the vaccines can block transmission of COVID-19, and how well they work in different groups of people.

"We need to see the data in the end, but that still doesn't dampen my enthusiasm. This is fantastic," says Florian Krammer, a virologist at Icahn School of Medicine at Mount Sinai in New York City, of results from Pfizer's trial, which was the first to report early data, on 9 November.

In phase III trials, candidate vaccines are given to a large number of people who are followed for weeks or months to see whether they become infected and symptomatic. These results are compared with those for a group of participants who are given a placebo.

Pfizer, a New York City-based drug company that is developing a vaccine with German biotech firm BioNTech, revealed in a press release that its vaccine is more than 90% effective. The two-dose vaccine consists of molecular instructions — in the form of messenger RNA — for human cells to make the coronavirus spike protein, the immune system's key target for this type of virus. The effectiveness was based on 94 cases of COVID-19 among 43,538 trial participants, when measured a week after participants received their second vaccine dose. The trial, which started on 27 July, will continue until 164 COVID-19 cases are detected, so initial estimates of the vaccine's effectiveness could change.

Pfizer's news was followed on 11 November by a press release from a Russian vaccine trial dubbed Sputnik V, which said that its candidate seems to be similarly effective. The Gamaleya National Center of Epidemiology and Microbiology in Moscow and the Russian Direct Investment Fund said that an interim analysis of 20 COVID-19 cases identified among trial participants has found that the vaccine was 92% effective. The vaccine is composed of two different adenoviruses that produce the coronavirus spike protein, administered three weeks apart. The analysis looked at more than 16,000 volunteers — who received either the vaccine or a placebo — 3 weeks after they had taken the first dose. The trial has enrolled a total of 40,000 participants, the release said.

Some scientists criticized the scant data on which the analysis was based. It is difficult to interpret the clinical-trial results without more information, says Shane Crotty, a vaccine immunologist at the La Jolla Institute for Immunology in California. "I would not conclude anything from 20 events."

The Sputnik V trial's protocol has not been made public, in contrast to those of Pfizer and some other leading candidates in phase III trials, so it is unclear whether an interim analysis after just 20 COVID-19 cases was in the works all along. "I worry that these data have been rushed out on the back of the Pfizer/BioNTech announcement," Eleanor Riley, an immunologist at the University of Edinburgh, UK, told the Science Media Centre in London. "This is not a competition. We need all trials to be carried out to the highest possible standards and it is particularly important that the pre-set criteria for unblinding the trial data are adhered to, to avoid cherry-picking the data."

Moderna makes three
Then, on 16 November, biotech company Moderna in Cambridge, Massachusetts, reported that its RNA-based vaccine is more than 94% effective at preventing COVID-19, on the basis of an analysis of 95 cases in its ongoing phase III efficacy trial. Scientists say that these press-released results share a few more details than do the announcements from Pfizer and BioNTech, and the Russian developers. Moderna released figures suggesting that its vaccine is likely to prevent severe COVID-19 infections, something that was not clear from the other developers' announcements.

"The results of this trial are truly striking," says Anthony Fauci, director of the US National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, which is co-developing the vaccine. Fauci says he told reporters several months ago that he would be satisfied with a vaccine that was 70% or 75% effective, and that one that prevented 95% of cases would be "aspirational". "Well, our aspirations have been met and that is very good news," he adds.

Cold supply chain
The company began a phase III trial of its vaccine on 27 July, and has enrolled roughly 30,000 people. That study continues, but an analysis conducted on 15 November by an independent data committee found that 95 participants had developed COVID-19. Of these, 90 were in the group that received a placebo injection and 5 had received the vaccine, which equates to an efficacy of 94.5%.

Researchers were also buoyed by Moderna's announcement that its vaccine remains stable in conventional refrigerators for a month and in ordinary freezers for six months; Pfizer's vaccine must be stored at −70 °C before delivery, which means it could be difficult to distribute in parts of the world that do not have the infrastructure to keep it that cold. Easier storage is "a really big plus", says Daniel Altmann, an immunologist at Imperial College London. "We've always said that we need a number of vaccines ready and that the devil will be in the detail."

Once the trials are completed and all the data have been analysed, the final calculations of the vaccines' efficacies could be lower.
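For readers who want to check the arithmetic behind headline figures such as Moderna's 94.5%, the sketch below shows the standard efficacy calculation: one minus the relative risk of disease in the vaccine arm versus the placebo arm. The 1:1 split of the roughly 30,000 participants into 15,000 per arm is an assumption for illustration; the announcements do not give exact arm sizes or person-time at risk, which is why the rounded result differs slightly from the reported 94.5%.

```python
def vaccine_efficacy(cases_vaccine: int, n_vaccine: int,
                     cases_placebo: int, n_placebo: int) -> float:
    """Vaccine efficacy as 1 minus the relative risk: the fractional
    reduction in attack rate among vaccinated participants."""
    attack_vaccine = cases_vaccine / n_vaccine
    attack_placebo = cases_placebo / n_placebo
    return 1.0 - attack_vaccine / attack_placebo

# Moderna interim analysis: 95 cases, 5 in the vaccine arm and 90 in the
# placebo arm; arm sizes of 15,000 each are assumed here, not reported.
print(f"{vaccine_efficacy(5, 15_000, 90, 15_000):.1%}")  # -> 94.4%
```

The same arithmetic is behind the caution about the Sputnik V interim figure: with only 20 cases in total, moving one or two cases between arms shifts the estimate by several percentage points.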

Researchers say it is likely that the Pfizer and Moderna vaccines’ effectiveness will stay well above 50%, the threshold that the US Food and Drug Administration (FDA) says is required for a coronavirus vaccine to be approved for emergency use. “Both the Pfizer vaccine and the Moderna vaccine have notably more efficacy than most scientists would have expected,” says Stephen Evans, a statistical epidemiologist at the London School of Hygiene & Tropical Medicine. But the low number of cases reported in the Sputnik V trial means there is less certainty that the interim results of more than 90% efficacy are close to the true figure, says Evans. “Follow-up is needed because the results are compatible with a much lower efficacy — 60% — based on these data.” Sarah Gilbert, a vaccinologist at the University of Oxford, UK, agrees that the Sputnik V results should be interpreted cautiously because of the small number of cases. But she is encouraged, because the vaccine her team is developing with pharmaceutical company AstraZeneca also uses an adenovirus to expose the immune system to the coronavirus spike protein. “Seeing the Russian results, albeit from a small number of endpoints, does indicate that we would expect to see high efficacy, but we have to wait and see,” she says.
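Evans’s point about small numbers can be made concrete with simple binomial statistics: what matters is how the cases split between the vaccine and placebo arms, and with only 20 events that split is compatible with a wide range of true efficacies. A minimal sketch, assuming 1:1 allocation and an illustrative 2-versus-18 split (the actual Sputnik V split was not released):

```python
from scipy.stats import beta

def vaccine_efficacy_ci(cases_vaccine, cases_total, ratio=1.0, alpha=0.05):
    """Point estimate and exact (Clopper-Pearson) CI for vaccine efficacy,
    treating the split of cases between arms as a binomial draw.
    `ratio` is the vaccine-arm size divided by the placebo-arm size."""
    v, n = cases_vaccine, cases_total
    p_hat = v / n
    p_lo = beta.ppf(alpha / 2, v, n - v + 1) if v > 0 else 0.0
    p_hi = beta.ppf(1 - alpha / 2, v + 1, n - v) if v < n else 1.0
    to_ve = lambda p: 1 - p / (ratio * (1 - p))  # VE = 1 - relative risk
    # A larger case share in the vaccine arm means lower efficacy, so bounds swap.
    return to_ve(p_hat), to_ve(p_hi), to_ve(p_lo)

print(vaccine_efficacy_ci(5, 95))  # Moderna-like: ~94%, interval roughly 87-98%
print(vaccine_efficacy_ci(2, 20))  # 20-event readout: ~89%, compatible with ~60%
```

The second call illustrates Evans’s caution: the same point estimate, backed by far fewer events, leaves efficacies near 60% inside the confidence interval.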




Missing information Key questions about all three vaccines remain. Pfizer and the Russian group have not released details about the nature of the infections their vaccines can protect against — whether they are mostly mild cases of COVID-19 or also include significant numbers of moderate and severe cases, say researchers. “I want to know the spectrum of disease that the vaccine prevents,” says Paul Offit, a vaccine scientist at the Children’s Hospital of Philadelphia in Pennsylvania who sits on an FDA advisory committee that is set to evaluate the Pfizer vaccine next month. “You’d like to see at least a handful of cases of severe disease in the placebo group,” he adds, because fewer such cases in the vaccine group would suggest that the vaccine has the potential to prevent such cases. Moderna presented some evidence that its vaccine protects against severe cases of COVID-19. Its analysis found 11 severe cases in the trial’s placebo arm, and none in the vaccine arm. That’s a good sign, says Evans, but hardly surprising, given the vaccine’s high effectiveness. “If a vaccine starts to get to that kind of efficacy, then there isn’t a lot of room for severe cases in there,” he says. But it is not yet clear whether the vaccines can block people from transmitting the virus; whether they work equally well in higher-risk groups such as older adults; and how long their protective effects last. “To me, the main question is what about six months later, or even three months later,” says Rafi Ahmed, an immunologist at Emory University in Atlanta, Georgia. There will be a chance to answer that question if trials continue for several more months, says Ahmed. And although little is known about the vaccines’ long-term effectiveness, that is unlikely to hold up use, he adds. “I don’t think we should say, ‘Well, I’ll only take a vaccine that protects me for five years.’” One thing about the Pfizer and Moderna vaccines is certain: regulators will soon decide whether they are ready for roll-out. Both companies said they would seek emergency-use authorization from the FDA in the coming weeks, when half of the participants have been followed for two months — an FDA safety requirement for COVID-19 vaccines. And although researchers want to see the data behind the vaccine results, they are prepared to accept caveats that come with them. “Right now, we need a vaccine that works,” says Krammer, even if it works for only a few months or doesn’t stop transmission. “That’s what we need in order to get halfway back to normal.”


Supported by Kamala Harris, Joe Biden briefed the media on his pandemic plans last week.

JOE BIDEN’S COVID PLAN IS TAKING SHAPE — AND RESEARCHERS APPROVE Scientists praise the US president-elect’s coronavirus advisory board and updated strategy. By Nidhi Subbaraman

Just two days after being declared victors in the US election, future president Joe Biden and vice-president Kamala Harris announced that they had set up a COVID-19 advisory board stacked with infectious-disease researchers and former public-health advisers to help them to craft a pandemic plan as they transition into office. The speed of the announcement, alongside an updated COVID-19 plan, has scientists and doctors hopeful that the United States can correct its course in its handling of the outbreak: so far, 10 million Americans have been infected and more than 240,000 have died. And the numbers continue to rise. “I really think they put together an outstanding and stellar team to advise the new administration on what is clearly one of their highest priorities,” says Helene Gayle, president and chief executive of the Chicago Community Trust in Illinois and co-chair of a US National Academies of Sciences, Engineering, and Medicine committee that recommended a coronavirus vaccine-allocation plan for the country. Eric Goosby, an infectious-diseases researcher at the University of California, San Francisco, who led past White House AIDS

responses, and Vivek Murthy, a doctor who served as US surgeon-general between 2014 and 2017, are among the 13 advisory-board members who will brief the future leaders. Observers say the board members are an experienced and impressive team. “It seems like a terrific group — it’s a real relief to have great experts providing guidance,” says Joshua Sharfstein, a health-policy researcher and vice-dean at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, who served as principal deputy commissioner of the US Food and Drug Administration under former president Barack Obama. The immediate naming of the board is in stark contrast to President Donald Trump’s efforts to contain the pandemic. He has been criticized for ignoring the advice of public-health specialists, worsening the pandemic’s toll on the country. But if Biden and Harris follow the science, communicate honestly and openly and have an organized response, it would be “three big resets” from Trump’s administration, says Tom Frieden, who led the Centers for Disease Control and Prevention (CDC) as director from 2009 to 2017. Among the Biden–Harris team’s top priorities is a strong COVID testing and

contact-tracing strategy. The team says it will create a “Nationwide Pandemic Dashboard” to display transmission rates of the virus in regions across the country. Other nations, such as South Korea, have dashboards on which officials report outbreaks and daily case numbers. The United States has been late or lacking in presenting disease-incidence data, in part because of political push-back and also because of decades of neglect of public-health infrastructure. “It’s critically important that we maintain a national surveillance system,” says Gayle. But the public-health agency that would normally take charge in this situation, the CDC, has been sidelined during the pandemic by Trump’s administration. Instead, its parent agency, the Department of Health and Human Services, has taken charge of collecting coronavirus data from hospitals. Once Biden and Harris are in office, the CDC will be in charge of announcing recommendations for when it is safe to open or close restaurants, schools and businesses, according to the updated plan. Frieden would like to see the CDC more frequently brief the press and the public about outbreaks — he sees it as a way of building trust in evidence and science, which scientists feel has been eroded over the course of the past year.

Addressing inequality Another update to the Biden–Harris pandemic strategy is the proposal to create a task force to address the coronavirus’s disproportionate effect on people of colour in the United States. The COVID-19 mortality rate for Black, Latino and Indigenous people in the United States is more than three times as high as the rate among white people. Researchers who study health and racism have suggested that such a task force could build trust among minority groups in the United States who have been hit hardest by the pandemic because of the jobs they hold and the places they live in. “I hope that, whatever shape or form the task force takes, that it will include people who are closest to that lived experience,” says Rachel Hardeman, a health-policy researcher who studies inequality at the School of Public Health at the University of Minnesota in Minneapolis. Quoting Massachusetts Representative Ayanna Pressley, she adds: “The people closest to the pain should be closest to the power.” Hardeman would have liked that principle to have been applied to the recently announced COVID-19 advisory board, too: nurses have been on the front lines of responding to the pandemic, so that group should be better represented, she says. She would also have liked to have seen even more public-health researchers on the team, whose focus is on preventing disease rather than treating it. The Biden team won’t take the helm at


the White House and install leaders at public-health agencies until Inauguration Day on 20 January. As Nature went to press, however, Trump had refused to concede the election, delaying the typical transition of power. According to several US news reports, in outlets including The Washington Post and The New York Times, a Trump appointee in the General Services Administration has not given Biden access to funding and office space typically provided to new administrations to ensure a smooth handover. Meanwhile, Biden has echoed public-health advice in his own remarks. In a 9 November address, he urged people in the United States to wear masks — his plan proposes that all governors introduce mask mandates in their states. The Trump administration has presented conflicting advice on mask wearing, even though scientists have been saying for months that the coverings are a necessary first line of virus defence. “This is just a simple thing that everybody can do,” says Gayle. “The fact that there’s a part of our population that has resisted that message is unfortunate.”

COVID MINK ANALYSIS SHOWS MUTATIONS ARE NOT DANGEROUS — YET But scientists say the coronavirus’s rampant spread among the animals means mink still need to be killed. By Smriti Mallapaty

Health officials in Denmark have released genetic and experimental data on a cluster of SARS-CoV-2 mutations circulating in farmed mink and in people, days after they announced the mutations could jeopardize the effectiveness of potential COVID-19 vaccines. News of the mutations prompted Danish Prime Minister Mette Frederiksen to announce plans to end mink farming for the foreseeable

future — and cull some 17 million animals — sparking a fierce debate about whether such action was legal. But scientists were careful not to raise the alarm until they saw the data.

The coronavirus SARS-CoV-2 transmits rapidly among mink.


Now, scientists who have reviewed the data say the mutations themselves aren’t particularly concerning, because there is little evidence that they allow the virus to spread more easily among people, make it more deadly or will jeopardize therapeutics and vaccines. “The mink-associated mutations we know of are not associated with rapid spread, nor with any changes in morbidity and mortality,” says Astrid Iversen, a virologist at the University of Oxford, UK. But researchers say culling the animals is probably necessary, given the virus’s rapid and uncontrolled spread in mink — it has been detected on more than 200 farms since June — which makes the animals a massive viral source that can easily infect people. In regions with affected mink farms, the number of people with COVID-19 increases a lot, says Iversen. And there are roughly three times more mink than people in Denmark. “The mink cull is necessary,” she says. Uncontrolled spread in mink also increases the opportunity for the virus to evolve and develop mutations that could be concerning, says Jannik Fonager, a virologist at the State Serum Institute, the Danish health authority leading the investigations, based in Copenhagen. He says scientists shared their concerns with the government, but that the government decided to cull the mink. The government submitted legislation to enable the cull on 10 November, and has urged farmers to begin the process.

Mink mutations Fonager says researchers in Denmark have sequenced viral samples from 40 mink farms and identified some 170 coronavirus variants. In viral samples from people — representing about one-fifth of the country’s confirmed COVID-19 cases — they’ve found some 300 individuals with variants that contain mutations thought to have first emerged in mink. “That is something we really want to keep a close eye on,” adds Fonager. In the viral samples from mink and people, researchers have identified several mutations in the gene encoding the spike protein that the coronavirus uses to enter cells. This concerns researchers because changes in this region could affect the immune system’s ability to detect infection. Many vaccines also train the immune system to block the spike protein. Of particular concern is a virus variant containing a unique combination of mutations called Cluster-5, which was found in 5 farms and 12 people in the North Jutland region of northern Denmark. Fonager says the Cluster-5 variant causes three amino-acid changes and two deletions in the spike protein. Preliminary cell experiments suggest that antibodies from some people who had recovered from COVID-19 found it more difficult to recognize the Cluster-5 variant than to spot coronaviruses that did not carry these mutations. This suggests that the variant could be less responsive to antibody treatments or vaccines, and informed the government’s decision to cull the farmed mink, according to a letter from Denmark’s chief veterinary officer to the World Organisation for Animal Health.



“It is the right thing to do in a situation where the vaccine, which is currently the light at the end of a very dark tunnel, is in danger,” Danish minister for food and fisheries Mogens Jensen said in a statement on 5 November. But researchers who have reviewed the available data say these claims are speculative. The Cluster-5 variant seems to be a “dead end” in people, because it hasn’t spread widely, says Iversen. The variant has not been seen since September despite extensive sequencing and data sharing, she says. Iversen adds that the experimental work is too limited to draw any conclusions about its implications for therapies and vaccines. “It is really important in this situation not to over-interpret very preliminary data.”


Spread in people One mink-associated mutation has spread more widely in people. The mutation, Y453F, also causes an amino-acid change in the spike protein and has been found in about 300 virus sequences from people in Denmark, and in sequences from mink and people in the Netherlands. An experimental study suggests that virus variants with the Y453F mutation partially escaped detection by a commercial monoclonal antibody. But that does not mean that this mutation will hinder the drug’s therapeutic effect in the body, says Iversen. It’s also not clear whether all the mink-associated mutations in people actually originated in mink, because not all the data have been released, says David Robertson, a virologist at the University of Glasgow, UK. But there are some examples of mutations originating in mink and passing to people, says Kasper Lage, a computational biologist at Massachusetts General Hospital and the Broad Institute of MIT and Harvard in Boston. And many researchers are worried that uncontrolled spread of the virus through millions of mink could lead to problematic mutations. In Denmark, the world’s largest producer of mink pelts, authorities are struggling to rein in farm outbreaks, despite extensive control measures. In many affected farms, almost all animals have antibodies against the virus. Outbreaks have also been detected on mink farms in the Netherlands, Sweden, Spain, Italy and the United States. The Netherlands plans to cull its entire mink population by 2021, accelerating plans to end mink farming there by 2024. Scientists still don’t know how the virus is entering farms, says Anette Boklund, a veterinary physician at the University of Copenhagen. Her team has found low levels of viral RNA on house flies, as well as in hair and air samples close to mink cages. They have also tested nearby wildlife. The only positive wildlife sample was from a seagull’s foot. Infected farm workers are the most likely source, says Boklund.

Early cases of COVID-19 were linked to a meat market in Wuhan, China.

INVESTIGATION INTO COVID ORIGIN BEGINS BUT FACES CHALLENGES Identifying the source and managing the political sensitivities between the US and China will be tricky. By Smriti Mallapaty

The World Health Organization (WHO) has released its plan to investigate the origins of the COVID-19 pandemic. The search will start in Wuhan — the Chinese city where the coronavirus SARS-CoV-2 was first reported — and expand across China and beyond. Tracing the virus’s path is important for preventing future viral spillovers, but scientists say the WHO team faces a daunting task. Most researchers think the virus originated in bats, but how it jumped to people is unknown. Other coronaviruses have passed from an intermediate animal host; for example, the virus that caused an outbreak of severe acute respiratory syndrome (SARS) in 2002–04 probably came to people from raccoon dogs (Nyctereutes procyonoides) or civets. “Finding an animal with a SARS-CoV-2 infection is like looking for a needle in the world’s largest haystack. They may never find a ‘smoking bat’” or other animal, says Angela Rasmussen, a virologist at Columbia University in New York City. “It will be key for

the investigators to establish a collaborative relationship with scientists and government officials in China.” Nailing down the origins of a virus can take years, if it can be done at all, and the investigation will also have to navigate the highly sensitive political situation between China and the United States. US President Donald Trump has been “calling it a China virus and the Chinese government is trying to do everything to prove that it is not a China virus”, says Linfa Wang, a virologist at Duke–National University of Singapore Medical School. The political blame game has meant that crucial details about research under way in China have not been made public, says Wang, who was part of the WHO mission that looked for the origin of SARS in China in 2003. He hopes the situation with the new US administration will be less volatile. President-elect Joe Biden has also said he will reverse Trump’s withdrawal from the WHO. Support from China and the United States will create “a much more positive environment to conduct research in this field”, says Wang. An international team of epidemiologists,


virologists and researchers with expertise in public health, animal health and food safety will lead the WHO’s COVID-19 investigation. The agency has not released their names. The team held its first virtual meeting, including researchers in China, on 30 October, and is reviewing the preliminary evidence and developing study protocols, says the WHO. The initial phase of investigations in Wuhan will probably be conducted by researchers who are already in China, and international researchers will travel to the country after reviewing those results, the agency says. In Wuhan, researchers will take a closer look at the Huanan meat and animal market, which many of the earliest people diagnosed with COVID-19 had visited. What part the market played in the virus’s spread remains a mystery. Early investigations sampled frozen animal carcasses at the market, but none found evidence of SARS-CoV-2, according to a 5 November report on the WHO mission’s terms of reference (see go.nature.com/2uiz8ik). However, environmental samples, taken mostly from drains and sewage, did test positive for the virus. “Preliminary studies have not generated credible leads to narrow the area of research,” the report states. The WHO mission will investigate the wild and farmed animals sold at the market, including foxes, raccoons (Procyon lotor) and sika deer (Cervus nippon). They will also investigate other markets in Wuhan, and trace the animals’ journeys through China and across borders. The investigators will prioritize animals that are known to be susceptible to the virus, such as cats and mink. The team will also look at Wuhan’s hospital records, to find out whether the virus was spreading before December 2019. The researchers will interview the first people identified to have had COVID-19, to find out where they might have been exposed, and will test blood samples collected from medical staff, laboratory technicians and farm workers in the weeks and months before December, looking for antibodies against SARS-CoV-2. The report acknowledges that some of this work might already be under way in China.

Longer-term plans The initial investigation in Wuhan will inform longer-term studies into the pandemic’s origins, which could take investigators outside China. “Where an epidemic is first detected does not necessarily reflect where it started,” the WHO report states, noting preliminary reports of viral RNA detected in sewage samples before the first cases had been identified. This statement could refer to a study, posted on the preprint server medRxiv without peer review (G. Chavarria-Miró et al. Preprint at medRxiv https://doi.org/fhw5; 2020), which retrospectively tested Spanish sewage

samples from March 2019 and found SARS-CoV-2 fragments, says Raina MacIntyre, an epidemiologist at the University of New South Wales in Sydney, Australia. “If this study was correct, we have to ask how the virus was in Spain in March last year,” she says. Plans to look beyond China are sensible, given that extensive surveillance in bats in China since the 2002 SARS outbreak has identified only a distant relative of SARS-CoV-2, says Wang. A growing number of experts think that the immediate or close ancestors of SARS-CoV-2 are more likely to exist in bats outside China, says Wang. He says the WHO team should survey bats and other wildlife across southeast Asia for SARS-CoV-2 antibodies. The investigation should also prioritize carnivorous mammals farmed for fur, such as raccoon dogs and civets, which had a role in the SARS outbreak, says Martin Beer, a virologist at the Federal Research Institute for Animal Health in Riems, Germany. “It is surprising that there is no mention of these animals in the report, and we have no information from China about whether these animals have been tested,” says Beer. A spokesperson for the WHO says the mission will be guided by science, and “will be open-minded, iterative, not excluding any hypothesis that could contribute to generating evidence and narrowing the focus of research”.

UNDERDOG TECH MAKES GAINS IN QUANTUM COMPUTER RACE Trapped-ion technologies are gaining momentum in the quest to make a commercial quantum computer. By Elizabeth Gibney

A technology for building quantum computers that has long been sidelined by commercial developers is gaining momentum. As quantum computing has transformed from academic exercise to big business over the past decade, the spotlight has mostly been on one approach — the tiny superconducting loops embraced by technology giants such as IBM and Intel. Superconductors last year enabled Google to claim it had achieved ‘quantum advantage’ with a machine that for the first time performed a particular calculation that is beyond the practical capabilities of the best classical computer. But a separate approach, using ions trapped in electric fields, is gaining traction in the quest to make a commercial quantum computer. Earlier this year, technology and manufacturing company Honeywell launched its first quantum computer that uses trapped ions as the basis of its quantum bits, or ‘qubits’, which it had been working on quietly for more than a decade. Honeywell, headquartered in Charlotte, North Carolina, is the first established company to take this route, and it has a 130-strong team working on the project. In October, seven months after the launch, the firm unveiled an upgraded machine; it already has plans to scale this up. And Honeywell is not the only company planning to make trapped-ion systems at scale. Last



month, University of Maryland spin-off firm IonQ in College Park announced a trapped-ion machine that could prove to be competitive with those of IBM or Google, although the company has yet to publish details of its performance. Smaller spin-off firms — such as Universal Quantum in Brighton, UK, and Alpine Quantum Technologies in Innsbruck, Austria — are also attracting investment for trapped-ion projects. Trapped-ion quantum computers, which store information in the energy levels of individual charged atoms held in an electric field, are far from new: they were the basis of the qubits in the first basic quantum circuit in 1995, long before anyone used superconducting loops (C. Monroe et al. Phys. Rev. Lett. 75, 4714; 1995). But efforts to put all the building blocks together to build viable commercial systems are “sort of bursting on the scene now”, says Daniel Slichter, a quantum physicist at the US National Institute of Standards and Technology in Boulder, Colorado.

Rising challenger “I think nowadays people say ‘superconductors’ and ‘trapped ions’ in the same breath, and they weren’t saying that even five years ago,” says Chris Monroe, a physicist at the University of Maryland who worked on the 1995 experiment and is a co-founder of IonQ. Quantum computing is still in its infancy, and although various companies are jockeying to claim that their quantum computer is the most

advanced, it is too early to say which types of hardware — if any — will prevail. As companies embrace a range of technologies, the field is wider than ever. Classical computers store their information as 1s and 0s, but qubits exist in a delicate superposition of 1 and 0. Through the quantum phenomenon of entanglement, qubits’ states can become intertwined, and interference of their wavelike quantum states should allow a quantum computer to carry out certain massive calculations exponentially faster than the best classical machines can. This includes finding the prime factors of large numbers.
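The claim about exponential speed-up comes from how quantum states compose. In standard textbook notation (not drawn from the article): a single qubit is a normalized superposition of the two classical values, and a register of n entangled qubits is described by amplitudes over all 2^n bit strings, which is what a classical simulation must track:

\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,\quad |\alpha|^2 + |\beta|^2 = 1; \qquad |\Psi_n\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle \quad (2^n \text{ amplitudes}).
\]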

Pros and cons Any system with two possible quantum mechanical states — such as the oscillations in a superconducting loop or energy levels of an ion — could form a qubit, but all hardware types have pros and cons, and each faces substantial hurdles to forming a full-blown quantum computer. A machine capable of living up to the original promise of quantum computing by, for example, cracking conventional encryption, would require millions of individually controllable qubits. But size is not the only issue: the quality of the qubits and how well they connect to each other are just as important. The frequency of errors in qubits and their operations, caused by noise, tends to increase as more are connected. To operate at scale, each qubit needs to work with error rates that are low enough to allow mistakes to be detected and fixed in a process known as error correction, although physicists also hope that smaller, noisier systems will prove

useful in the short term. In the past few years, rapid progress in superconducting loops risked leaving trapped ions in the dust. Google, IBM and others have developed machines with around 50 or more high-quality qubits. IBM aims to have a 1,000-qubit machine by 2023. John Martinis, a quantum physicist at the University of California, Santa Barbara — and, until April, head of quantum hardware at Google in Mountain View, California — thinks that Google will use the same basic architecture it used to achieve quantum advantage to achieve error correction, the next big milestone. Superconducting qubits have so far benefited from feeling familiar to many companies, as their basic components are compatible with classical chip technology.

But trapped-ion qubits have many inherent advantages, says Sabrina Maniscalco, a quantum physicist at the University of Helsinki. Their operations are much less prone to errors and the delicate quantum states of individual ions last longer than those in superconducting qubits, which, although small, are still made of a very large number of atoms. Moreover, superconducting qubits tend to interact only with their nearest neighbours, whereas trapped ions can interact with many others, which makes it easier to run

some complex calculations, she says. But trapped ions have drawbacks: they are slower at interacting than superconducting qubits, which will be important when it comes to accounting for real-time errors coming out of the system, says Michele Reilly, founder of quantum software company Turing in New York City. And there are limits to how many ions can fit in a single trap and be made to interact. IonQ’s latest model contains 32 trapped ions sitting in a chain; plucking any 2 using lasers causes them to interact. To scale up to hundreds of qubits, the company is working on ways to link up multiple chains of qubits using photons. The firm aims to double its number of qubits each year. Meanwhile, Honeywell plans to interconnect all the ions by physically shuttling them around a giant chip (J. M. Pino et al. Preprint at https://arxiv.org/abs/2003.01293; 2020). The latest system by the firm’s Honeywell Quantum Solutions (HQS) division, called H1, consists of just ten qubits, but its chief scientist, Patty Lee, says that the firm is already working on its next iteration. In the next 5 years, the team plans to connect around 20 qubits, which should allow the machine to solve problems that are otherwise impractical on classical machines, says Tony Uttley, president of HQS. The challenge is to keep the quality and precision of qubits, while controlling dozens, or even hundreds, at once — which neither Honeywell nor IonQ has yet shown it can do. Although many of the necessary components have been mastered individually, “what is needed is a system-level integrative approach putting it all together, testing it, solving its problems”, says Barbara Terhal, a theoretical physicist at Delft University of Technology in the Netherlands.


No clear victor

An ion trap from Honeywell’s quantum computer inside a vacuum chamber.

Trapped-ion hardware isn’t the only technology attracting substantial investment. The success of superconducting qubits has opened doors for various technologies, says Slichter, including silicon-based spin qubits, which store quantum information in the nuclear spin states of an atom embedded in a silicon crystal. In a coup for this technology, Martinis joined Silicon Quantum Computing in Sydney, Australia, on a six-month sabbatical in September — his first move away from superconducting systems in almost two decades. Martinis doesn’t mind which design ends up winning. “I want to help someone to build the first quantum computer. It doesn’t have to be me [or] whatever I’m working with,” he says. The race is also far from being called, says Maniscalco, and a winner might never emerge. “It may be that there isn’t one winning platform, but we have a hybrid or different platforms being useful for different tasks.”


Capping the number of visitors to restaurants can significantly reduce COVID-19 infections.

HOW TO STOP RESTAURANTS SEEDING COVID INFECTIONS US mobile-phone data suggest restaurants and gyms can be virus hotspots — and reveal ways to slow spread. By David Cyranoski

In cities worldwide, coronavirus outbreaks have been linked to restaurants, cafes and gyms. Now, a model using mobile-phone data to map people’s movements suggests that these venues could account for most COVID-19 infections in US cities. The model, published in Nature, also reveals how reducing the occupancy of venues can cut the number of infections (S. Chang et al. Nature https://doi.org/ghjmt2; 2020). The model “has concrete pointers as to what may be cost-effective measures to contain the spread of the disease, while at the same time, limiting the damage to the economy”, says Thiemo Fetzer, an economist at the University of Warwick in Coventry, UK. “This is the policy sweet spot.” To predict how people’s movements might affect viral transmission, the research team input anonymized location data from mobile-phone apps into a simple epidemiological model that estimated how quickly the disease spreads. The location data, collected by SafeGraph, a company based in Denver, Colorado, came from ten of the largest US cities, including Chicago, Illinois; New York; and Philadelphia, Pennsylvania. It mapped how people moved in


and out of 57,000 neighbourhoods to points of interest, such as restaurants, churches, gyms, hotels, car dealerships and sporting-goods stores for 2 months, starting in March. When the team compared the model’s number of infections in Chicago neighbourhoods between 8 March and 15 April with the number of infections officially recorded in those neighbourhoods a month later, they found that the model had accurately predicted the case numbers. “We are able to faithfully estimate the contact network between 100 million people for every hour of the day. That is the secret ingredient we have,” says team leader Jure Leskovec, a computer scientist at Stanford University in California. The team then used the model to simulate different scenarios, such as reopening some venues while keeping others closed. They found that opening restaurants at full capacity led to the largest increase in infections, followed by gyms, cafes, hotels and motels. If Chicago had reopened restaurants on 1 May, there would have been nearly 600,000 extra infections that month, whereas opening gyms would have produced 149,000 extra infections. If all venues had been open, the model predicts that there would have been 3.3 million extra cases. But capping occupancy for all venues at 30%

would reduce the number of extra infections to 1.1 million, the model estimated. If occupancy was capped at 20%, new infections would be reduced to about 650,000. “The study highlights how real-time big data on population mobility offers the potential to predict transmission dynamics at unprecedented levels of spatial granularity,” says Neil Ferguson, an epidemiologist at Imperial College London. The mobility data also suggest why people from poorer neighbourhoods are more likely to get COVID-19: because they are less able to work from home, and the shops they visit for supplies tend to be more crowded than those in other areas. The average grocery shop in poorer neighbourhoods had 59% more hourly visitors per square foot, and visitors stayed, on average, 17% longer than at shops outside those areas. Leskovec says that people living in these areas probably have limited options to visit less-crowded shops, and, as a result, a shopping trip is twice as risky as it is for someone from a wealthier area. But Christopher Dye, an epidemiologist at the University of Oxford, UK, says these patterns need to be validated with real-world data.
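The published model is far richer than can be shown here, but a toy sketch conveys the mechanism behind those numbers: expected infections at a venue scale with both the density of infectious visitors and the number of susceptible ones, so an occupancy cap cuts infections faster than it cuts visits. Everything below — visit counts, venue areas, the transmission constant, and the uniform scaling in place of the paper’s busy-hour clipping — is invented for illustration and is not the authors’ code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hourly visits from 1,000 neighbourhoods (rows) to 200 venues (columns).
visits = rng.poisson(lam=5.0, size=(1000, 200)).astype(float)
area = rng.uniform(50, 500, size=200)        # venue floor area, arbitrary units
prev = rng.uniform(0.001, 0.02, size=1000)   # infectious share per neighbourhood

def expected_new_infections(visits, cap=1.0, beta=0.002):
    """Expected infections in one hour. Per-visitor risk at a venue scales
    with infectious visitors per unit area; `cap` scales every venue's
    occupancy (1.0 = fully open, 0.3 = capped at 30%). A real cap would
    clip only the busiest hours rather than scaling all visits."""
    v = visits * cap
    occupancy = v.sum(axis=0)            # visitors per venue
    infectious = v.T @ prev              # expected infectious visitors per venue
    risk = beta * infectious / area      # infection probability per visitor
    return float((risk * (occupancy - infectious)).sum())

for cap in (1.0, 0.3, 0.2):
    n = expected_new_infections(visits, cap)
    print(f"occupancy cap {cap:.0%}: ~{n:.0f} expected infections/hour")
```

Because both the infectious and the susceptible counts shrink with the cap, infections in this sketch fall roughly with the square of occupancy — echoing the paper’s finding that modest caps forgo a disproportionately small number of visits for a large reduction in infections.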

Global trend Broadly speaking, Fetzer says, the modelling study corroborates much of what has been learnt from contact-tracing studies worldwide, which have identified restaurants, gyms, choir practices, nursing homes and other crowded indoor venues as sites of superspreader events, where many people are infected at one time. Last month, Fetzer published a report showing that a UK government programme called Eat Out to Help Out, in which restaurant meals were subsidized during August, led to a huge surge in restaurant visits and accounted for up to 17% of new COVID-19 infections that month (see go.nature.com/32f5fiy). But restaurants might not be hotspots everywhere. Contact-tracing data from Germany have shown that restaurants were not the main source of infection in that country, says Moritz Kraemer, who models infectious diseases at the University of Oxford. That might be because it can be hard to identify the source of an infection using contact-tracing data. Although the model’s prediction of overall infection rates in cities was validated with real-world data, Kraemer says, more-detailed contact-tracing data will be needed to test whether the model correctly identified the actual location of infections. Leskovec says that all models have some amount of error. But because many of his team’s model’s predictions align with observations, he adds, there is no reason to think that it wouldn’t work at smaller scales. If the model is found to accurately predict the risk of visiting specific locations, health officials could use it to fine-tune social-distancing policies, says Ferguson.


BEATING BIOMETRIC BIAS


The Metropolitan Police in London used facial-recognition cameras to scan for wanted people in February.

Facial recognition is improving — but the bigger issue is how it’s used. By Davide Castelvecchi

When London’s Metropolitan Police tested real-time facial-recognition technology between 2016 and 2019, they invited Daragh Murray to monitor some of the trials from a control room inside a police van. “It’s like you see in the movies,” says Murray, a legal scholar at the University of Essex in Colchester, UK. As cameras scanned passers-by in shopping centres or public squares, they fed images to a computer inside the van. Murray and police officers saw the software draw rectangles around faces as it identified them in the live feed. It then

extracted key features and compared them to those of suspects from a watch list. “If there is a match, it pulls an image from the live feed, together with the image from the watch list.” Officers then reviewed the match and decided whether to rush out to stop the ‘suspect’ and, occasionally, arrest them. Scotland Yard, as the headquarters of the London police force is sometimes known, had commissioned Murray and his University of Essex colleague Pete Fussey, a sociologist, to conduct an independent study of its dragnet. But their results1, published in July 2019, might not have been quite what the law-enforcement agency had hoped for.

Fussey and Murray listed a number of ethical and privacy concerns with the dragnet, and questioned whether it was legal at all. And they queried the accuracy of the system, which is sold by Tokyo-based technology giant NEC. The software flagged 42 people over 6 trials that the researchers analysed; officers dismissed 16 matches as ‘non-credible’ but rushed out to stop the others. They lost 4 people in the crowd, but still stopped 22: only 8 turned out to be correct matches. The police saw the issue differently. They said the system’s number of false positives was tiny, considering the many thousands of faces that had been scanned. (They didn’t reply to Nature’s requests for comment for this article.) The accuracy of facial recognition has improved drastically since ‘deep learning’ techniques were introduced into the field about a decade ago. But whether that means it’s good enough to be used on lower-quality, ‘in the wild’ images is a hugely controversial issue. And questions remain about how to transparently evaluate facial-recognition systems. In 2018, a seminal paper by computer scientists Timnit Gebru, then at Microsoft Research in New York City and now at Google in Mountain View, California, and Joy Buolamwini at the Massachusetts Institute of Technology in Cambridge found that leading facial-recognition software packages performed much worse at identifying the gender of women and people of colour than at classifying male, white faces2.


Concerns over demographic bias have since been quoted frequently in calls for moratoriums or bans of facial-recognition software. In June, the world’s largest scientific computing society, the Association for Computing Machinery in New York City, urged a suspension of private and government use of facial-recognition technology, because of “clear bias based on ethnic, racial, gender, and other human characteristics”, which it said injured the rights of individuals in specific demographic groups. Axon, a maker of body cameras worn by police officers across the United States, has said that facial recognition isn’t accurate enough to be deployed in its products. Some US cities have banned the use of the technology in policing, and US lawmakers have proposed a federal moratorium. Companies say they’re working to fix the biases in their facial-recognition systems, and some are claiming success. But many researchers and activists are deeply sceptical. They argue that even if the technology surpasses some benchmark in accuracy, that won’t assuage deeper concerns that facial-recognition tools are used in discriminatory ways.

More accurate but still biased Facial-recognition systems are often proprietary and swathed in secrecy, but specialists say that most involve a multi-stage process (see ‘How facial recognition works’) using deep learning to train massive neural networks on large sets of data to recognize patterns. “Everybody who does face recognition now uses deep learning,” says Anil Jain, a computer scientist at Michigan State University in East Lansing. The first stage in a typical system locates one or more faces in an image. Faces in the feed from a surveillance camera might be viewed in a range of lighting conditions and from different angles, making them harder to recognize than in a standard passport photo, for instance. The algorithm will have been trained on millions of photos to locate ‘landmarks’ on a face, such as the eyes, nose and mouth, and it distils the information into a compact file, ranging from less than 100 bytes to a few kilobytes in size. The next task is to ‘normalize’ the face, artificially rotating it into a frontal, well-illuminated view. This produces a set of facial ‘features’ that can be compared with those extracted from an existing database of faces. This will typically consist of pictures taken under controlled conditions, such as police mugshots. Because the feature representations are compact, structured files, a computer can quickly scan millions of them to find the closest match. Matching faces to a large database — called one-to-many identification — is one of two main types of facial-recognition system. The other is one-to-one verification, the relatively simple task of making sure that a person matches their own photo. It can be applied to anything from unlocking a smartphone to

passport control at national borders. One measure of progress is the Face Recognition Vendor Test, an independent benchmarking assessment that the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, has been conducting for two decades. Dozens of laboratories, both commercial and academic, have voluntarily taken part in the latest round of testing, which began in 2018 and is ongoing. NIST measures the performance of each lab’s software package on its own image data sets, which include frontal and profile police mugshots, and pictures scraped from the Internet. (The US technology giants Amazon, Apple, Google and Facebook have not taken part in the test.) In reports released late last year, the NIST team described massive steps forward in the technology’s performance during 2018, both for one-to-many searches3 and for one-to-one verification4 (see also go.nature.com/35pku9q). “We have seen a significant improvement in face-recognition accuracy,” says Craig Watson, an electrical engineer who leads NIST’s image group. “We know that’s largely because of convolutional neural networks,” he adds, a type of deep neural network that is especially efficient at recognizing images. The best algorithms can now identify people from a profile image taken in the wild — matching it with a frontal view from the database — about as accurately as the best facial-recognition software from a decade ago could recognize frontal images, NIST found. Recognizing a face in profile “has been a long-sought milestone in face recognition research”, the NIST researchers wrote. But NIST also confirmed what Buolamwini and Gebru’s gender-classification work suggested: most packages tended to be more accurate for white, male faces than for people of colour or for women5. In particular, faces classified in NIST’s database as African American or Asian were 10–100 times more likely to be misidentified than those classified as white. False positives were also more likely for women than for men. This inaccuracy probably reflects imbalances in the composition of each company’s training database, Watson says — a scourge that data scientists often describe as ‘garbage in, garbage out’. Still, discrepancies varied between packages, indicating that some companies might have begun to address the problem, he adds. NEC, which supplies Scotland Yard’s software, noted that in NIST’s analysis, it was “among a



small group of vendors where false positives based on demographic differentials were undetectable”, but that match rates could be compromised by outdoor, poorly lit or grainy images.

False faces One-to-one verification, such as recognizing the rightful owner of a passport or smartphone, has become extremely accurate; here, artificial intelligence is as skilful as the sharpest-eyed humans. In this field, cutting-edge research focuses on detecting malevolent attacks. The first facial-recognition systems for unlocking phones, for example, were easily fooled by showing the phone a photo of the owner, Jain says; 3D face recognition does better. “Now the biggest challenge is very-high-quality face masks.” In one project, Jain and his collaborators are working on detecting such impersonators by looking for skin texture. But one-to-many identification, as Murray found, isn’t so simple. With a large enough watch list, the number of false positives flagged up can easily outweigh the true hits. This is a problem when police must make quick decisions about stopping someone. But mistakes also occur in slower investigations. In January, Robert Williams was arrested at his house in Farmington Hills, Michigan, after a police facial-recognition system misidentified him as a watch thief on the basis of blurry surveillance footage of a Black man, which it matched to his driving licence. The American Civil Liberties Union (ACLU), a non-profit organization in New York City, filed a complaint about the incident to Detroit police in June, and produced a video in which Williams recounts what happened when a detective showed him the surveillance photos on paper. “I picked that paper up, held it next to my face and said, ‘This is not me. I hope y’all don’t think all Black people look alike.’ And then he said: ‘The computer says it’s you,’” Williams said. He was released after being detained for 30 hours. ACLU attorney Phil Mayor says the technology should be banned. “It doesn’t work, and even when it does work, it remains too dangerous a tool for governments to use to surveil their own citizens for no compelling return,” he says. Shortly after the ACLU complaint, Detroit police chief James Craig acknowledged that the software, if used by itself, would misidentify cases “96% of the time”. Citing concerns over racial bias and discrimination, at least 11 US cities have banned facial recognition by public authorities in the past 18 months. But Detroit police still use the technology. In late 2019, the force adopted policies to ban live camera surveillance and to use the software only on still images and as part of criminal investigations; Williams was arrested before the policy went into practice, Craig said in June. (He did not respond to Nature’s requests for comment.) Other aspects of facial analysis, such as trying to deduce someone’s personality on

the basis of their facial expressions, are even more controversial. Researchers have shown this doesn’t work6 — even the best software can only be trained on images tagged with other people’s guesses. But companies around the world are still buying unproven technology that assesses interview candidates’ personalities on the basis of videos of them talking. Nuria Oliver, a computer scientist based in Alicante, Spain, says that governments should regulate the use of facial recognition and other potentially useful technologies to prevent abuses (see page 350). “Systems are being brought to the wild without a proper evaluation of their performance, or a process of verification and reproducibility,” says Oliver, who is co-founder and vice-president of a regional network called the European Laboratory for Learning and Intelligent Systems.
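The one-to-many failure mode that Murray documented is, at root, base-rate arithmetic: when almost everyone scanned is innocent, even a small per-probe false-positive rate generates more false alerts than true ones. An illustrative calculation with assumed numbers (not the Met’s actual figures) — a crowd of N = 10,000 scanned against a watch list with a false-positive rate of f = 0.1%, while only 5 wanted people actually pass the camera:

\[
\underbrace{Nf}_{\text{false alerts}} = 10\,000 \times 0.001 = 10, \qquad \text{precision} \leq \frac{5}{5 + 10} = 33\%,
\]

so roughly two out of three alerts are wrong even if the system never misses a genuine match.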

Persistent problems Some proposals for regulation have called for authorities to establish accuracy standards and require that humans review any algorithm’s conclusions. But a standard based on, say, passing NIST benchmarks is much too low a bar on its own to justify deploying the technology, says Deborah Raji, a technology fellow in Ottawa with the Internet foundation Mozilla who specializes in auditing facial-recognition systems. This year, Raji, Buolamwini, Gebru and others published another paper on the performance of commercial systems, and noted that although some firms had improved at classifying gender across lighter- and darker-skinned faces, they were still worse at guessing a person’s age from faces with darker skin7. “The assessment process is incredibly immature. Every time we understand a new dimension to evaluate, we find out that the industry is not performing at the level that it thought it did,” Raji says. It is important, she says, that companies disclose more about how they test and train their facial-recognition systems, and consult with the communities in which the technology will be used. Technical standards cannot stop facial-recognition systems from being used in discriminatory ways, says Amba Kak, a legal scholar at New York University’s AI Now Institute. “Are these systems going to be another tool to propagate endemic discriminatory practices in policing?” Human operators often end up confirming a system’s biases rather than correcting them, Kak adds. Studies such as the Scotland Yard external review show that humans tend to overestimate the technology’s credibility, even when they see the computer’s false match next to the real face. “Just putting in a clause ‘make sure there is a human in the loop’ is not enough,” she says. Kak and others support a moratorium on any use of facial recognition, not just because the technology isn’t good enough yet, but also because there needs to be a broader discussion of how to prevent it from being misused.

HOW FACIAL RECOGNITION WORKS

Facial-recognition systems analyse a face’s geometry to create a faceprint — a biometric marker that can be used to recognize or identify a person. Another use is facial analysis, which tries to classify a face according to labels such as gender, age, ethnicity or emotion.

The pipeline: capture an image; locate the face; locate landmarks (eyes, nose and mouth); align the face into a frontal, well-lit view and extract geometrical features.

Facial recognition. Verify: confirm that a faceprint matches a stored example (one-to-one comparison); examples include unlocking a smartphone, travelling through a passport gate and verifying school or work attendance. Identify: check against a database to discover an identity (one-to-many comparison); examples include scanning a crowd until a ‘hit’ is found against a watch list, or matching a person of interest against a vast database.

Facial analysis. Classify: infer human-defined characteristics, such as a person’s age, gender or emotional state (such tests are controversial and less reliable than facial recognition).
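To make the ‘identify’ branch concrete, here is a minimal sketch of one-to-many matching over faceprints. The random vectors stand in for the compact embeddings a trained network would extract, and the 128-dimensional embedding and 0.6 cosine threshold are arbitrary illustrative choices, not taken from any vendor’s system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gallery of enrolled faceprints (random stand-ins for real embeddings),
# L2-normalized so that a dot product equals cosine similarity.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def identify(probe, gallery, threshold=0.6):
    """One-to-many search: return (index, score) of the best match,
    or None if no gallery faceprint clears the similarity threshold."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe             # cosine similarity to every identity
    best = int(np.argmax(scores))
    return (best, float(scores[best])) if scores[best] >= threshold else None

probe = gallery[42] + rng.normal(scale=0.05, size=128)  # noisy re-capture of #42
print(identify(probe, gallery))                  # matches identity 42
print(identify(rng.normal(size=128), gallery))   # unrelated face: no match
```

Lowering the threshold raises the hit rate but, across thousands of probes, multiplies exactly the false alerts quantified in the base-rate example above.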

The technology will improve, Murray says, but doubts will remain over the legitimacy of operating a permanent dragnet on innocent people, and over the criteria by which people are put on a watch list. Concerns about privacy, ethics and human rights will grow. The world’s largest biometric programme, in India, involves using facial recognition to build a giant national ID card system called Aadhaar. Anyone who lives in India can go to an Aadhaar centre and have their picture taken. The system compares the photo with existing records on 1.3 billion people to make sure the applicant hasn’t already registered under a different name. “It’s a mind-boggling system,” says Jain, who has been a consultant for it. “The beauty of it is, it ensures one person has only one ID.” But critics say it turns non-card owners into second-class citizens, and some allege it was used to purge legitimate citizens from voter rolls ahead of elections. And the most notorious use of biometric technology is the surveillance state set up by the Chinese government in Xinjiang province, where facial-recognition algorithms are used to help single out and persecute people from religious minorities (see page 354).


“At this point in history, we need to be a lot more sceptical of claims that you need ever-more-precise forms of public surveillance,” says Kate Crawford, a computer scientist at New York University and co-director of the AI Now Institute. In August 2019, Crawford called for a moratorium on governments’ use of facial-recognition algorithms (K. Crawford Nature 572, 565; 2019). Meanwhile, having declared its pilot project a success, Scotland Yard announced in January that it would begin to deploy live facial recognition across London.

Davide Castelvecchi reports for Nature from London. Additional reporting by Antoaneta Roussi and Richard Van Noorden.

1. Fussey, P. & Murray, D. Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology (Univ. Essex, 2019).
2. Buolamwini, J. & Gebru, T. Proc. Mach. Learn. Res. 81, 77–91 (2018).
3. Grother, P., Ngan, M. & Hanaoka, K. Face Recognition Vendor Test (FRVT) Part 2: Identification (NIST, 2019).
4. Grother, P., Ngan, M. & Hanaoka, K. Ongoing Face Recognition Vendor Test (FRVT) Part 1: Verification. Updated 10 September 2020 (NIST, 2020).
5. Grother, P., Ngan, M. & Hanaoka, K. Face Recognition Vendor Test Part 3: Demographic Effects (NIST, 2019).
6. Feldman Barrett, L., Adolphs, R., Marsella, S., Martinez, A. M. & Pollak, S. D. Psychol. Sci. Public Interest 20, 1–68 (2019).
7. Raji, I. D. et al. Preprint at https://arxiv.org/abs/2001.00964 (2020).



Feature

RESISTING THE RISE OF FACIAL RECOGNITION

Cameras watch over Belgrade's Republic Square. (Vladimir Zivojinovic for Nature)

Growing use of surveillance technology has prompted calls for bans and stricter regulation. By Antoaneta Roussi

In Belgrade's Republic Square, dome-shaped cameras hang prominently on wall fixtures, silently scanning people walking across the central plaza. It's one of 800 locations in the city that Serbia's government said last year it would monitor using cameras equipped with facial-recognition software, purchased from electronics firm Huawei in Shenzhen, China.

The government didn't ask Belgrade's residents whether they wanted the cameras, says Danilo Krivokapić, who directs a human-rights organization called the SHARE Foundation, based in the city's old town. This year, it launched a campaign called Hiljade Kamera — 'thousands of cameras' — questioning the project's legality and effectiveness, and arguing against automated remote surveillance. Belgrade is experiencing a shift that has already taken place elsewhere.

Facial-recognition technology (FRT) has long been in use at airport borders and on smartphones, and as a tool to help police identify criminals. But it is now creeping further into private and public spaces. From Quito to Nairobi, Moscow to Detroit, hundreds of municipalities have installed cameras equipped with FRT, sometimes promising to feed data to central command centres as part of 'safe city' or 'smart city' solutions to crime. The COVID-19 pandemic might accelerate their spread.

The trend is most advanced in China, where more than 100 cities bought face-recognition surveillance systems last year, according to Jessica Batke, who has analysed thousands of government procurement notices for ChinaFile, a magazine published by the Center on U.S.-China Relations in New York City. But resistance is growing in many countries. Researchers, as well as civil-liberties advocates and legal scholars, are among those disturbed by facial recognition's rise. They are tracking its use, exposing its harms and campaigning for safeguards or outright bans. Part of the work involves exposing the technology's immaturity: it still has inaccuracies and racial biases (see page 347). Opponents are also concerned that police and law-enforcement agencies are using FRT in discriminatory ways, and that governments could employ it to repress opposition, target protesters or otherwise limit freedoms — as with the surveillance in China's Xinjiang province (see page 354).

Legal challenges have emerged in Europe and parts of the United States, where critics of the technology have filed lawsuits to prevent its use in policing. Many US cities have banned public agencies from using facial recognition — at least temporarily — or passed legislation to demand more transparency on how police use surveillance tools. Europe and the United States are now considering proposals to regulate the technology, so the next few years could define how FRT's use is constrained or entrenched. "What unites the current wave of pushback is the insistence that these technologies are not inevitable," wrote Amba Kak, a legal scholar at New York University's AI Now Institute, in a September report1 on regulating biometrics.

Surveillance concerns

By 2019, 64 countries used FRT in surveillance, says Steven Feldstein, a policy researcher at the Carnegie Endowment for International Peace in Washington DC, who has analysed the technology's global spread2. Feldstein found that cities in 56 countries had adopted smart-city platforms. Many of them purchased their cameras from Chinese firms, often apparently encouraged by subsidized loans from Chinese banks. (US, European, Japanese and Russian firms also sell cameras and software, Feldstein noted.)

Belgrade's project illustrates concerns that many have over the rise of smart-city systems: there is no evidence that they reduce crime more than ordinary video cameras do, and the public knows little about systems that are ostensibly for their benefit. Krivokapić says he is worried that the technology seems more suited to offering an increasingly authoritarian government a tool to curb political dissent.

"Having cameras around in a young democracy such as Serbia can be problematic because of the potential for political misuse," says Ljubiša Bojić, coordinator of the Digital Sociometrics Lab at the University of Belgrade, which studies the effects of artificial intelligence (AI) on society. "Although the situation has changed since the turmoil of the nineties, the dogma of police state and fear of intelligence agencies makes Serbia an inappropriate place for implementation of AI cameras."

When the government announced the project, it gave few details. But SHARE found a 2018 press release on Huawei's website (which the firm deleted) that announced tests of high-definition cameras in Belgrade. The document said that the cameras had helped Serbian police to solve several major criminal cases and improve security at major sporting events. This year, the government disclosed that the scheme involves purchasing 8,000 cameras for use in police cars, as body-worn cameras and on buildings.

"There are many questions that remain unanswered," Krivokapić says. "For example, where will the data be stored? In Serbia or in China? Will Huawei have access to the data?" After SHARE and others pressed for more details, the Serbian government said that data wouldn't be collected or kept by Huawei. But Lee Tien, a senior staff attorney at the Electronic Frontier Foundation in San Francisco, California, says that one of the main reasons large technology firms — whether in China or elsewhere — get involved in supplying AI surveillance technology to governments is that they expect to collect a mass of data that could improve their algorithms.

Serbia models its data-protection laws on the European Union's General Data Protection Regulation (GDPR), but it is unclear whether the interior ministry's plans satisfy the country's laws, Serbia's data-protection commissioner said in May. (The interior ministry declined to comment for this article, and Huawei did not respond to questions.)

Overall, there haven't been studies proving that 'safe' or 'smart' cities reduce crime, says Pete Fussey, a sociologist at the University of Essex in Colchester, UK, who researches human rights, surveillance and policing. He says anecdotal claims are being leveraged into a proof of principle for a surveillance technology that is still very new. "The history of technology and law enforcement is littered with examples of hubris and outlandish claims," he says. "It's reasonably uncontroversial to say that surveillance cameras in general are more effective for tackling crimes against things, rather than people. Once you start getting into automated surveillance, it becomes more difficult, partly because it is not used as much."

Pandemic push

In March, Vladimir Bykovsky, a Moscow resident who'd recently returned from South Korea, left his apartment for a few moments to throw out his rubbish. Half an hour later, police were at his door. The officers said he had violated COVID-19 quarantine rules and would receive a fine and court date. Bykovsky asked how they'd known he'd left. The officers told him it was because of a camera outside his apartment block, which they said was connected to a facial-recognition surveillance system working across the whole of Moscow. "They said they'd received an alert that quarantine had been broken by a Vladimir Bykovsky," he says. "I was just shocked."

The Russian capital rolled out a city-wide video surveillance system in January, using software supplied by Moscow-based technology firm NtechLab. The firm's former head, Alexey Minin, said at the time that it was the world's largest system of live facial recognition. NtechLab co-founder Artem Kukharenko says it supplies its software to other cities, but wouldn't name locations because of non-disclosure agreements. Asked whether it cut down on crime, he pointed to Moscow media reports of hooligans being detained during the 2018 World Cup tournament, when the system was in test mode. Other reports say the system spotted 200 quarantine breakers during the first few weeks of Moscow's COVID-19 lockdown.

Like Russia, governments in China, India and South Korea have used facial recognition to help trace contacts and enforce quarantine; other countries probably have, too. In May, the chief executive of London's Heathrow airport said it would trial thermal scanners with facial-recognition cameras to identify potential virus carriers. Many firms also say they have adapted their technologies to spot people wearing masks (although, as with many facial-recognition performance claims, there is no independent verification).

Researchers worry that the use of live-surveillance technologies is likely to linger after the pandemic. This could have a chilling effect on societal freedoms. Last year, a group set up to provide ethical advice on policing asked more than 1,000 Londoners about the police's use of live facial recognition there; 38% of 16–24-year-olds and 28% of Asian, Black and mixed-ancestry people surveyed said they would stay away from events monitored with live facial recognition. Some people who attend rallies have taken to wearing masks or camouflage-like 'dazzle' make-up to try to confuse facial-recognition systems. But their only 'opt-out' option is to not turn up.

Activist Darya Kozlova in Moscow has her face painted with features said to confuse cameras. (Yuri Kadobnov/AFP/Getty)




We're all in the database

Another concern, especially in the United States, is that the watch lists that police use to check images against can be enormous — and can include people without their knowledge. Researchers at the Center on Privacy and Technology at Georgetown University in Washington DC estimated in 2016 that around half of all Americans were in law-enforcement face-recognition networks, because many states allow police to search driver's-licence databases. And earlier this year, The New York Times revealed that software company Clearview AI in New York City had scraped billions of images from social-media sites and compiled them into a facial-recognition database. The firm offered its service to police in and outside the United States.

"The Clearview scandal threw into relief what researchers had long thought was possible," says Ben Sobel, who studies the ethics and governance of AI at the Berkman Klein Center at Harvard University in Cambridge, Massachusetts. "Technology capable of recognizing faces at scale is becoming more accessible and requiring less sophistication to run."

Social-media sites such as Twitter, Facebook and YouTube have told Clearview to stop scraping their sites, saying it breaches their terms of service. And several lawsuits have been filed against the firm, including under an Illinois law that allows individuals in that state to sue firms who capture their biometric information — including from the face — without their consent. In June, the European Data Protection Board issued an opinion that Clearview's service breaches the GDPR — but no action has yet been taken. Clearview, which stopped selling some of its services this year after media coverage, told Nature that its "image-search engine functions within the bounds of applicable laws".

Clearview isn't the only firm to harvest online images of faces. A company called PimEyes in Wrocław, Poland, has a website that allows anyone to find matching photos online, and the firm claims to have scraped 900 million images — although, it says, not from social-media sites. And NtechLab launched the FindFace app in 2016 to permit face-matching on the Russian social network VK. The company later withdrew the app.

It now seems impossible to stop anyone from privately building up large facial-recognition databases from online photos. But in July, researchers at the University of Chicago in Illinois unveiled a piece of software called Fawkes that adds imperceptible tweaks to images so that they look the same to the human eye, but like a different person to a machine-learning model. If people 'cloak' enough of their facial images through Fawkes, they say, efforts such as Clearview's will learn the wrong features and fail to match new, unaltered images to its models. The researchers hope that photo-sharing or social-media platforms might offer the service to protect users, by applying the software before photos are displayed online.
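Cloaking of this kind is a form of adversarial perturbation: pixels are nudged within a tiny budget until a feature extractor's output for the photo drifts away from the person's true faceprint. The sketch below illustrates that general idea only; it is not Fawkes's actual code, and the toy embedding network, step counts and budgets are invented placeholders.

```python
import torch
import torch.nn as nn

# Toy stand-in for the feature extractor a tracker might use; Fawkes
# itself targets real face-embedding models, which we don't reproduce here.
embedder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(128),
)
embedder.eval()

def cloak(image, eps=4 / 255, steps=40, lr=0.01):
    """Perturb `image` (shape 1x3xHxW, values in [0, 1]) so that its
    embedding moves away from the original, while each pixel changes
    by at most eps: 'same to the human eye, different to the model'."""
    with torch.no_grad():
        original = embedder(image)  # the faceprint to move away from
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Minimizing the negative distance pushes the embedding away.
        loss = -torch.dist(embedder(image + delta), original)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep the tweak imperceptible
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0, 1) - image)
    return (image + delta).detach()

photo = torch.rand(1, 3, 64, 64)  # stand-in for a face photo
protected = cloak(photo)
print((photo - protected).abs().max())  # tiny per-pixel change
print(torch.dist(embedder(photo), embedder(protected)))  # embedding moved
```

The published Fawkes method reportedly optimizes towards a different identity's feature representation rather than simply away from one's own, but the principle is the same: small changes to the input produce large changes in feature space, so models trained on cloaked photos learn the wrong features.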

A software engineer at Hanwang Technology in Beijing tests a facial-recognition programme that identifies people wearing face masks. (Thomas Peter/Reuters)

Calls for regulation

In September 2019, the London-based Ada Lovelace Institute, a charity-funded research institute that scrutinizes AI and society, published a nationally representative survey3 of more than 4,000 British adults' views on FRT. It found that the majority of people supported facial recognition when they could see a public benefit, such as in criminal investigations, to unlock smartphones or to check passports in airports. But 29% were uncomfortable with the police using the technology, saying that it infringes on privacy and normalizes surveillance, and that they don't trust the police to use it ethically. There was almost no support for its use in schools, at work or in supermarkets. "The public expects facial-recognition technology in policing to be accompanied by safeguards and linked to a public benefit," the survey concluded.

Many researchers, and some companies, including Google, Amazon, IBM and Microsoft, have called for bans on facial recognition — at least on police use of the technology — until stricter regulations are brought in. Some point admiringly to the GDPR, which prohibits processing of biometric data without consent — although it also offers many exceptions, such as if data are "manifestly public", or if the use is "necessary for reasons of substantial public interest".

When it comes to commercial use of facial recognition, some researchers worry that laws focused only on gaining consent to use it aren't strict enough, says Woodrow Hartzog, a computer scientist and law professor at Northeastern University in Boston, Massachusetts, who studies facial surveillance. It's very hard for an individual to understand the risks of consenting to facial surveillance, he says. And they often don't have a meaningful way to say 'no'. Hartzog, who views the technology as the "most dangerous ever to be invented", says if US lawmakers allow firms to use facial recognition "despite its inevitable abuses", they should write rules that prohibit the collection and storage of 'faceprints' from places such as gyms and restaurants, and prohibit the use of FRT in combination with automated decision-making such as predictive policing, advert-targeting and employment.

The Algorithmic Justice League, a researcher-led campaigning organization founded by computer scientist Joy Buolamwini at the Massachusetts Institute of Technology in Cambridge, has been prominent in calling for a US federal moratorium on facial recognition. In 2018, Buolamwini co-authored a paper showing how facial-analysis systems are more likely to misidentify gender in darker-skinned and female faces4. And in May, she and other researchers argued in a report that the United States should create a federal office to manage FRT applications — rather like the US Food and Drug Administration approves drugs or medical devices5. "What a federal office would do is provide multiple levels of clearance before a product can enter the market. If the risks far outweigh the benefits, maybe you don't use that product," says Erik Learned-Miller, a computer scientist at the University of Massachusetts in Amherst who co-authored the report.

In China, too, people have expressed discomfort with widespread use of facial recognition — by private firms, at least. An online survey of more than 6,000 people in December 2019 by the Nandu Personal Information Protection Research Centre, a think tank affiliated with the Southern Metropolis Daily newspaper in Guangzhou, found that 80% of people worried about lax security in facial-recognition systems and 83% wanted more control over their face data, including the option to delete it. Chinese newspapers have run articles questioning FRT use, and the government is bringing in tighter data-protection laws. But the debate doesn't usually question the use of cameras by the police and government, and the data-protection laws don't put limits on government surveillance, says Graham Webster, who studies China's digital policies at Stanford University in California.

Europe's data-protection rules say that police can process data for biometric purposes if it's necessary and subject to appropriate safeguards. A key question here, says Fussey, is whether it would be proportionate to, for example, put tens of thousands of people under video surveillance to catch a criminal. So far, British judges have suggested they think it might be, but only if the use of the technology by police has tighter controls. Last year, a man named Ed Bridges sued police in South Wales, alleging that his rights to privacy had been breached because he was scanned by live facial-recognition cameras on two occasions in Cardiff, UK, when police were searching crowds to find people on a watch list. In August, a UK court ruled that the actions were unlawful: police didn't have enough guidance and rules about when they could use the system and who would be in their database, and they hadn't sufficiently checked the software's racial or gender bias. But judges didn't agree that the camera breached Bridges' privacy rights: it was a 'proportionate' interference, they said.

The EU is considering an AI framework that could set rules for biometrics. This year, a white paper — a prelude to proposed legislation — suggested that special rules might be needed for 'high-risk' AI, which would include facial recognition. Most people and firms who wrote into a consultation that followed the document felt that further regulations were needed to use FRT in public spaces.

Ultimately, the people affected by FRT need to discuss what they find acceptable, says Aidan Peppin, a social scientist at the Ada Lovelace Institute. This year, he has been helping to run a citizens' biometrics council, featuring in-depth workshops with around 60 people across the country. Participants provide their views on biometrics, which will inform a UK review of legislation in the area. "The public voice needs to be front and centre in this debate," he says.

Antoaneta Roussi is a freelance journalist in Nairobi. Additional reporting by Richard Van Noorden.

1. Kak, A. (ed.) Regulating Biometrics: Global Approaches and Urgent Questions (AI Now Institute, 2020).
2. Feldstein, S. The Global Expansion of AI Surveillance (Carnegie Endowment for International Peace, 2019).
3. Ada Lovelace Institute. Beyond Face Value: Public Attitudes to Facial Recognition Technology (Ada Lovelace Institute, 2019).
4. Buolamwini, J. & Gebru, T. Proc. Mach. Learn. Res. 81, 77–91 (2018).
5. Learned-Miller, E., Ordóñez, V., Morgenstern, J. & Buolamwini, J. Facial Recognition Technologies in the Wild: A Call for a Federal Office (Algorithmic Justice League, 2020).



A collage of images from the MegaFace data set, which scraped online photos. Images are obscured to protect people's privacy. (Image visualization by Adam Harvey (https://megapixels.cc), based on the MegaFace data set by Ira Kemelmacher-Shlizerman et al. and the Yahoo Flickr Creative Commons 100 Million data set, licensed under Creative Commons Attribution (CC BY) licences)

THE ETHICAL QUESTIONS THAT HAUNT FACIAL-RECOGNITION RESEARCH

Journals and researchers are under fire for controversial studies using this technology. And a Nature survey reveals that many in this field think there is a problem. By Richard Van Noorden

In September 2019, four researchers wrote to the publisher Wiley to "respectfully ask" that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity1. China had already been internationally condemned for its heavy surveillance and mass detentions of Uyghurs in camps in the northwestern province of Xinjiang — which the government says are re-education centres aimed at quelling a terrorist movement. According to media reports, authorities in Xinjiang have used surveillance cameras equipped with software attuned to Uyghur faces.

As a result, many researchers found it disturbing that academics had tried to build such algorithms — and that a US journal had published a research paper on the topic. And the 2018 study wasn't the only one: journals from publishers including Springer Nature, Elsevier and the Institute of Electrical and Electronics Engineers (IEEE) had also published peer-reviewed papers that describe using facial recognition to identify Uyghurs and members of other Chinese minority groups. (Nature's news team is editorially independent from its publisher, Springer Nature.)

The complaint, which launched an ongoing investigation, was one foray in a growing push by some scientists and human-rights activists to get the scientific community to take a firmer stance against unethical facial-recognition research.

It's important to denounce controversial uses of the technology, but that's not enough, ethicists say. Scientists should also acknowledge the morally dubious foundations of much of the academic work in the field — including studies that have collected enormous data sets of images of people's faces without consent, many of which helped hone commercial or military surveillance algorithms. (A feature on page 347 explores concerns over algorithmic bias in facial-recognition systems.)

An increasing number of scientists are urging researchers to avoid working with firms or universities linked to unethical projects, to re-evaluate how they collect and distribute facial-recognition data sets and to rethink the ethics of their own studies. Some institutions are already taking steps in this direction. In the past year, several journals and an academic conference have announced extra ethics checks on studies.

"A lot of people are now questioning why the computer-vision community dedicates so much energy to facial-recognition work when it's so difficult to do it ethically," says Deborah Raji, a researcher in Ottawa who works at the non-profit Internet foundation Mozilla. "I'm seeing a growing coalition that is just against this entire enterprise."

This year, Nature asked 480 researchers around the world who work in facial recognition, computer vision and artificial intelligence (AI) for their views on thorny ethical questions about facial-recognition research. The results of this first-of-a-kind survey suggest that some scientists are concerned about the ethics of work in this field — but others still don't see academic studies as problematic.

Data without consent

For facial-recognition algorithms to work well, they must be trained and tested on large data sets of images, ideally captured many times under different lighting conditions and at different angles. In the 1990s and 2000s, scientists generally got volunteers to pose for these photos — but most now collect facial images without asking permission. For instance, in 2015, scientists at Stanford University in California published a set of 12,000 images from a webcam in a San Francisco café that had been live-streamed online2. The following year, researchers at Duke University in Durham, North Carolina, released more than 2 million video frames (85 minutes) of footage of students walking on the university campus3.

The biggest collections have been gathered online. In 2016, researchers at the University of Washington in Seattle posted a database, called MegaFace, of 3.3 million photos from the image-sharing site Flickr4. And scientists at Microsoft Research in Redmond, Washington, issued the world's largest data set, MSCeleb5, consisting of 10 million images of nearly 100,000 individuals, including journalists, musicians and academics, scraped from the Internet.

In 2019, Berlin-based artist Adam Harvey created a website called MegaPixels that flagged these and other data sets. He and another Berlin-based technologist and programmer, Jules LaPlace, showed that many had been shared openly and used to evaluate and improve commercial surveillance products. Some were cited, for instance, by companies that worked on military projects in China. "I wanted to uncover the uncomfortable truth that many of the photos people posted online have an afterlife as training data," Harvey says. In total, he says he has charted 29 data sets, used in around 900 research projects. Researchers often use public Flickr images that were uploaded under copyright licences that allow liberal reuse.

After The Financial Times published an article on Harvey's work in 2019, Microsoft and several universities took their data sets down. Most said at the time — and reiterated to Nature this month — that their projects had been completed or that researchers had requested that the data set be removed.

Computer scientist Carlo Tomasi at Duke University was the sole researcher to apologize for a mistake. In a statement two months after the data set had been taken down, he said he had got institutional review board (IRB) approval for his recordings — which his team made to analyse the motion of objects in video, not for facial recognition. But the IRB guidance said he shouldn't have recorded outdoors and shouldn't have made the data available without password protection. Tomasi told Nature that he did make efforts to alert students by putting up posters to describe the project.

The removal of the data sets seems to have dampened their usage a little, Harvey says. But big online image collections such as MSCeleb are still distributed among researchers, who continue to cite them, and in some cases have re-uploaded them or data sets derived from them. Scientists sometimes stipulate that data sets should be used only for non-commercial research — but once they have been widely shared, it is impossible to stop companies from obtaining and using them. In October, computer scientists at Princeton University in New Jersey reported identifying 135 papers that had been published after the Duke data set had come down and which had used it or data derived from it (see go.nature.com/3nlkjre). The authors urged researchers to set more restrictions on the use of data sets and asked journals to stop accepting papers that use data sets that had been taken down.

Legally, it is unclear whether scientists in Europe can collect photos of individuals' faces for biometric research without their consent. The European Union's vaunted General Data Protection Regulation (GDPR) does not provide an obvious legal basis for researchers to do this, reported6 Catherine Jasserand, a biometrics and privacy-law researcher at the Catholic University of Leuven in Belgium, in 2018. But there has been no official guidance on how to interpret the GDPR on this point, and it hasn't been tested in the courts.

In the United States, some states say it is illegal for commercial firms to use a person's biometric data without their consent; Illinois is unique in allowing individuals to sue over this. As a result, several firms have been hit with class-action lawsuits. The US social-media firm Facebook, for instance, agreed this year to pay US$650 million to resolve an Illinois class-action lawsuit over a collection of photos that was not publicly available, which it used for facial recognition (it now allows users to opt out of facial-recognition tagging). The controversial New York City-based technology company Clearview AI — which says it scraped three billion online photos for a facial-recognition system — has also been sued for violating this law in pending cases. And the US tech firms IBM, Google, Microsoft, Amazon and FaceFirst were also sued in Illinois for using a data set of nearly one million online photos that IBM released in January 2019; IBM removed it at around the time of the lawsuit, which followed a report by NBC News detailing photographers' disquiet that their pictures were in the data set. Microsoft told Nature that it has filed to dismiss the case, and Clearview says it "searches only publicly available information, like Google or any other search engine". Other firms did not respond to requests for comment.

Vulnerable populations

In the study on Uyghur faces published by Wiley1, the researchers didn't gather photos from online, but said they took pictures of more than 300 Uyghur, Korean and Tibetan 18–22-year-old students at Dalian Minzu University in northeast China, where some of the scientists worked. Months after the study was published, the authors added a note to say that the students had consented to this.

But the researchers' assertions don't assuage ethical concerns, says Yves Moreau, a computational biologist at the Catholic University of Leuven. He sent Wiley a request to retract the work last year, together with the Toronto-based advocacy group Tech Inquiry. It's unlikely that the students were told enough about the purpose of the research to have given truly informed consent, says Moreau. But even if they did freely consent, he argues, human-rights abuses in Xinjiang mean that Wiley ought to retract the study to avoid giving the work academic credence.

Moreau has catalogued dozens of papers on Uyghur populations, including facial-recognition work and studies that gathered Uyghur people's DNA. In December, he wrote an opinion article in Nature calling for all unethical work in biometric research to be retracted7. His campaign has had some impact, but not quite to the extent he'd hoped. Publishers say the key issue is checking whether participants in studies gave informed consent. Springer Nature, for instance, said in December 2019 that it would investigate papers of concern on vulnerable groups along these lines, and that it had updated its guidance to editors and authors about the need to gain explicit and informed consent in studies that involve clinical, biomedical or biometric data from people. This year, the publisher retracted two papers on DNA sequencing8,9 because the authors conceded that they hadn't asked Uyghur people for their consent, and it has placed expressions of concern on 28 others.

Wiley has also focused on informed consent. Last November, the publisher told Moreau and Tech Inquiry that it was satisfied that consent forms and university approval for the Dalian study were available, and so it stood by the research, which it felt could be firmly separated from the actions in Xinjiang. "We are aware of the persecution of the Uyghur communities," Wiley said. "However, this article is about a specific technology and not an application of that technology." In December, however, the publisher opened a formal investigation, after Curtin University in Perth, Australia, where one of the authors is based, also asked for a retraction, saying it agreed that the work was ethically indefensible.

This year, Wiley added a publisher's note saying that the article "does appear to be in compliance with acceptable standards for conducting human subject research". In September, after Moreau dug into the authors' previous studies of facial recognition on Uyghurs and pointed out apparent inconsistencies in the year that the data sets had been gathered, Wiley placed an expression of concern on the study, saying that it was not clear when the data collection had taken place. The publisher told Nature that it now considers the matter closed after thorough investigation — but not everyone involved is content. "Curtin University maintains that the paper should be retracted," deputy vice-chancellor Chris Moran told Nature. He said the university was still investigating the work.

Wiley says that after its conversations with Moreau, it updated its integrity guidelines to make sure that expected standards for informed consent are met and described in articles. Other publishers say that they have made adjustments, too. The IEEE says that this September, it approved a policy under which authors of articles on research involving human subjects or animals should confirm whether they had approval from an IRB or equivalent local review; editors determine whether research (on biometrics or other areas) involves human subjects.

But Moreau says that publishers' focus on the details of consent is too narrow, and that they should also take a stand on the wider ethics of research. "We are talking about massive human-rights abuses," he says. "At some point, Western publishers should say that there are some baselines above which they don't go." He suggests that publishers should set up independent ethics boards that can give opinions when questions such as these arise. (No publishers asked by Nature said that they had taken this step.)

Universities and researchers who disapprove of human-rights abuses could also do more to express this by dropping their associations with questionable technology firms, says Kate Crawford, co-director of the AI Now Institute at New York University. In the past year, there has been growing scrutiny of universities' partnerships with companies or research programmes linked to mass surveillance in Xinjiang. The Massachusetts Institute of Technology (MIT) in Cambridge, for example, said it would review its relationship with the Hong Kong-based tech firm SenseTime after the US government — in the middle of a trade war with China — blacklisted the firm and other Chinese AI companies, such as Megvii in Beijing, over their alleged contributions to human-rights violations in Xinjiang. In 2018, SenseTime and MIT announced they had formed an "alliance on artificial intelligence"; MIT says that SenseTime had provided an undisclosed sum to the university without any restrictions on how it would be used, and that the university will not give it back. Both Megvii and SenseTime contest the US blacklisting. SenseTime says its technology has "never been applied for any unethical purposes", and Megvii says it requires its clients "not to weaponize our technology or solutions or use them for illegal purposes".

Academic conferences have been contentious, too. The Chinese Conference on Biometrics Recognition (CCBR) was held in Xinjiang's capital, Ürümqi, in 2018. Anil Jain, a computer scientist at Michigan State University in East Lansing, sat on the conference's advisory board and travelled there to give a speech. Some AI researchers, including Toby Walsh at the University of New South Wales in Sydney, Australia, later criticized Jain for this in stories reported by the New York City-based Coda magazine. Coda magazine also noted that Springer Nature sponsored the conference; the company said its role was limited to publishing CCBR proceedings and that it had strengthened its requirements for conference organizers to comply with the publisher's editorial policies after concerns were raised about past content. And Jain challenged the critique, telling Nature that attending conferences in China "does not mean that … international conference participants, like me, condone these atrocities against minorities". Growth in surveillance there shouldn't be a reason to "curtail scientific exchange", he said.

Jain remains on the advisory board for CCBR 2020–21; Springer Nature is still publishing the conference abstracts. And major international computer-vision conferences have continued to accept sponsorship from Chinese firms. Just after the blacklisting, SenseTime and Megvii sponsored the 2019 International Conference on Computer Vision, and Megvii sponsored the 2020 Conference on Computer Vision and Pattern Recognition, although its logo was removed from the conference's website after the meeting occurred. "Conferences should avoid sponsors who are accused of enabling abuses of human rights," reiterates Walsh. However, he notes that last year, the non-governmental organization Human Rights Watch in New York City withdrew initial allegations that Megvii facial-recognition technology was involved in an app used in Xinjiang. Conference organizers did not respond to a request for comment.

Ethical checkpoints

Questionable research projects have popped up in the United States, too. On 5 May, Harrisburg University in Pennsylvania posted a press release declaring that researchers there had developed facial-recognition software "capable of predicting whether someone is likely going to be a criminal", with "80 percent accuracy and no racial bias". The announcement triggered a wave of criticism, as had previous studies that hark back to the discredited work of nineteenth-century physiognomists. One notorious 2016 study reported that a machine-learning algorithm could spot the difference between images of non-criminals and those of convicted criminals that were supplied by a Chinese police department10.

Harrisburg University removed its press release on 6 May following the outcry, but left a dangling question: the press release had said that the work was to be published by Springer Nature in a book series (which the publisher later denied). On 22 June, more than 2,400 academics signed a letter from a group called the Coalition for Critical Technology (CCT), asking Springer Nature not to publish the work and calling on all publishers to refrain from publishing similar studies. The letter pointed out that such studies are based on unsound science. It also noted that algorithmic tools that tell police where or who to target tend to provide a scientific veneer for automated methods that only exacerbate existing biases in the criminal justice system. Three days earlier, more than 1,400 US mathematicians had written a letter asking their colleagues to stop collaborating with police on algorithms that claim to help reduce crime, because of concerns about systemic racism in US law-enforcement agencies.

Springer Nature said the work was never accepted for publication: it had been submitted to a conference and rejected after peer review. (The authors, and Harrisburg University, declined to comment.) Springer Nature was already under fire for a different paper, published in January in the Journal of Big Data, on detecting 'criminal tendency' in photos of criminals and non-criminals11. After researchers from the IEEE got in touch with ethical concerns, Margeret Hall, the paper's co-author at the University of Nebraska Omaha, asked in June for the paper to be withdrawn. Hall says the now-retracted paper was "indefensible". Springer Nature says the journal reviewed its processes and now requires authors to include statements on ethics approvals and consent when submitting manuscripts.

Nature survey

To get a wider sense of academic views on facial-recognition ethics, Nature this year surveyed 480 researchers who have published papers on facial recognition, AI and computer science. On some questions, respondents showed a clear preference. When asked for their opinions on studies that apply facial-recognition methods to recognize or predict personal characteristics (such as gender, sexual identity, age or ethnicity) from appearance, around two-thirds said that such studies should be done only with the informed consent of those whose faces were used, or after discussion with representatives of groups that might be affected (see 'Facial recognition: a survey on ethics').

But on other issues, academics were split. Around 40% of the scientists in the survey felt that researchers should get informed consent from individuals before using their faces in a facial-recognition data set, but more than half felt that this wasn't necessary. The researchers' dilemma is that it's hard to see how they can train accurate facial-recognition algorithms without vast data sets of photos, says Sébastien Marcel, who leads a biometrics group at the Idiap Research Institute in Martigny, Switzerland. He thinks that researchers should get informed consent — but in practice, they don't.

[Graphic: FACIAL RECOGNITION: A SURVEY ON ETHICS. Nature surveyed* nearly 500 researchers who work in facial recognition, computer vision and artificial intelligence about ethical issues relating to facial-recognition research. They are split on whether certain types of this research are ethically problematic and what should be done about concerns.

Who responded to the survey? 480 respondents, from Europe, North America, China (mainland), South America, Southeast Asia, Australia/New Zealand, India, the Middle East, Africa, Hong Kong/Taiwan and Russia; some did not specify a region.

Restrictions on image use. Question: Researchers use large data sets of images of people's faces, often scraped from the Internet, to train and test facial-recognition algorithms. What kind of permissions do researchers need to use such images? Answer options: researchers should get informed consent from people before putting their faces in a database; researchers can freely use any online photos; researchers can use online photos when terms or licences permit that use; other; no opinion.

Restrictions related to vulnerable populations. Question: Is it ethical to do facial-recognition research on vulnerable populations that might not be able to freely give informed consent, such as the Muslim population in western China? Answer options: ethically acceptable as long as the population gives consent; might be ethically questionable even if informed consent is given; other.

Attitudes on different uses. Question: How comfortable are you with facial-recognition technology being used in the following ways (from extremely uncomfortable to extremely comfortable)? Uses: police identifying a suspect after a crime; airports checking travellers' identities; users unlocking smartphones; companies tracking who enters premises; public-transport systems checking travellers' identities; schools registering students and checking attendance; police monitoring public places; schools assessing students' behaviour; companies tracking people in public spaces; employers assessing personality traits and emotions of job candidates; anyone looking up somebody's identity.

*Questions and answers have been paraphrased for brevity. The full survey and results are available online at go.nature.com/2uwtzyh]



Schoolchildren walk beneath surveillance cameras in Xinjiang in western China. (Greg Baker/AFP/Getty)

His own group doesn't crawl the web for images, but it does use online image data sets that others have compiled. "A lot of researchers don't want to hear about this: they consider it not their problem," he says. Ed Gerstner, director of journal policy at Springer Nature, said the publisher was considering what it could do to discourage the "continued use" of image databases that don't have explicit consent for their use in research from the people in the images.

Nature's survey also asked researchers whether they felt that facial-recognition research on vulnerable populations — such as refugees or minority groups that were under heavy surveillance — could be ethically questionable, even if scientists had gained informed consent. Overall, 71% agreed; some noted it might be impossible to determine whether consent from vulnerable populations was informed, making it potentially valueless.

Some of those who disagreed, however, tried to draw a distinction between academic research and how facial recognition is used. The focus should be on condemning and restricting unethical applications of facial recognition, not on restricting research, they said. Ethicists regard that distinction as naive. "That's the 'I'm just an engineer' mentality — and we're well past that now," says Karen Levy, a sociologist at Cornell University in Ithaca, New York, who works on technology ethics.

Some of the respondents in China said that they were offended by the question. "You should not say that in Xinjiang some groups are detained in camps," wrote one. Just under half of the 47 Chinese respondents felt that studies on vulnerable groups could be ethically questionable even if scientists had gained consent, a lower proportion than respondents from the United States and Europe (both above 73%). One Chinese American AI researcher who didn't want to be named said that a problem was a cultural split in the field. "The number of Chinese researchers at top conferences who actively support censorship and Xinjiang concentration camp[s] concerns me greatly. These groups have minimal contact with uncensored media and tend to avoid contact with those who don't speak Mandarin, especially about social issues like this. I believe we need to find ways to actively engage with this community," they wrote.

Nature asked researchers what the scientific community should do about ethically questionable studies. The most popular answer was that during peer review, authors of facial-recognition papers should be asked explicitly about the ethics of their studies. The survey also asked whether research that uses facial-recognition software should require prior approval from ethics bodies, such as IRBs, that consider research with human subjects. Almost half felt it should, and another quarter said it depended on the research.

Ethical reflection

Researchers who work on technology that recognizes or analyses faces point out that it has many uses, such as to find lost children, track criminals, access smartphones and cash machines more conveniently, help robots to interact with humans by recognizing their identities and emotions and, in some medical studies, to help diagnose or remotely track consenting participants. "There are a number of lawful and legitimate applications of face and biometric recognition which we need in our society," says Jain.

But researchers must also recognize that a technology that can remotely identify or classify people without their knowledge is fundamentally dangerous — and should try to resist it being used to control or criminalize people, say some scientists. "The AI community suffers from not seeing how its work fits into a long history of science being used to legitimize violence against marginalized people, and to stratify and separate people," says Chelsea Barabas, who studies algorithmic decision-making at MIT and helped to form the CCT this year. "If you design a facial-recognition algorithm for medical research without thinking about how it could be used by law enforcement, for instance, you're being negligent," she says.

Some organizations are starting to demand that researchers be more careful. One of the AI field's premier meetings, the NeurIPS (Neural Information Processing Systems) conference, is requiring such ethical considerations for the first time this year. Scientists submitting papers must add a statement addressing ethical concerns and potential negative outcomes of their work. "It won't solve the problem, but it's a step in the right direction," says David Ha, an AI researcher at Google in Tokyo. The journal Nature Machine Intelligence is also trialling an approach in which it asks the authors of some machine-learning papers to include a statement considering broader societal impacts and ethical concerns, Gerstner says.

Levy is hopeful that academics in facial recognition are waking up to the implications of what they work on — and what it might mean for their reputation if they don't root out ethical issues in the field. "It feels like a time of real awakening in the science community," she says. "People are more acutely aware of the ways in which technologies that they work on might be put to use politically — and they feel this viscerally."

Richard Van Noorden is a features editor for Nature in London.

1. Wang, C., Zhang, Q., Liu, W., Liu, Y. & Miao, L. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9, e1278 (2019).
2. Stewart, R., Andriluka, M. & Ng, A. Y. in Proc. 2016 IEEE Conf. on Computer Vision and Pattern Recognition 2325–2333 (IEEE, 2016).
3. Ristani, E., Solera, F., Zou, R. S., Cucchiara, R. & Tomasi, C. Preprint at https://arxiv.org/abs/1609.01775 (2016).
4. Nech, A. & Kemelmacher-Shlizerman, I. in Proc. 2017 IEEE Conf. on Computer Vision and Pattern Recognition 3406–3415 (IEEE, 2017).
5. Guo, Y., Zhang, L., Hu, Y., He, X. & Gao, J. in Computer Vision — ECCV 2016 (eds Leibe, B., Matas, J., Sebe, N. & Welling, M.) https://doi.org/10.1007/978-3-319-46487-9_6 (Springer, 2016).
6. Jasserand, C. in Data Protection and Privacy: The Internet of Bodies (eds Leenes, R., van Brakel, R., Gutwirth, S. & de Hert, P.) Ch. 7 (Hart, 2018).
7. Moreau, Y. Nature 576, 36–38 (2019).
8. Zhang, D. et al. Int. J. Legal Med. https://doi.org/10.1007/s00414-019-02049-6 (2019).
9. Pan, X. et al. Int. J. Legal Med. 134, 2079 (2020).
10. Wu, X. & Zhang, X. Preprint at https://arxiv.org/abs/1611.04135 (2016).
11. Hashemi, M. & Hall, M. J. Big Data 7, 2 (2020).



Science in culture


Books & arts

Mission scientists monitor the spacecraft Cassini as it plunges into Saturn's atmosphere. (Joel Kowsky/NASA)

Lessons in teamwork from the heart of NASA

What gets discovered depends on how scientists collaborate, a sociologist shows. By Alexandra Witze

In 25 years of covering US planetary science, I've become used to seeing certain faces in press briefings, at conferences and on webcasts presenting discoveries from the NASA spacecraft exploring the Solar System. And I've enjoyed ferreting out the complex relationships between these researchers. But I've never had a direct, sustained view of their interpersonal interactions.

Now, sociologist Janet Vertesi has lifted the curtain for all to see. Embedded with various NASA projects for years, she takes readers into the heart of two of them — the Cassini mission to Saturn and the Mars Exploration Rovers. What we see isn't always pretty. But it is useful.

Shaping Science: Organizations, Decisions, and Culture on NASA's Teams
Janet Vertesi
Univ. Chicago Press (2020)

In Shaping Science, Vertesi does not simply describe the nuts and bolts of how these missions operate. Rather, she draws sweeping conclusions about the very nature of scientific discovery — what gets found — and how it depends on the ways in which scientists collaborate. That has implications for just about any group of researchers in any field.

Vertesi builds on classic work on the emergence of knowledge, such as that of sociologists Harry Collins, who spent years embedded among gravitational-wave hunters, and Diane Vaughan, who explored the culture of space-shuttle managers to understand how they came to normalize risk. For Vertesi, planetary science is fertile ground for studying the organization of complex teams. For both Cassini and the Mars mission, large groups of scientists, engineers and managers designed, built and operated robots to serve as our emissaries to planets beyond Earth — but they did so in fundamentally different ways. (Distractingly, Vertesi pseudonymizes the missions as "Helen" and "Paris" and gives their players fake names, perhaps to preserve their privacy; cognoscenti will merely play 'guess who'.)

Cassini, which launched in 1997 and ended with a plunge into Saturn's atmosphere in 2017, was a high-stakes mission from the start. It was one of NASA's flagship planetary missions — costing billions of dollars, freighted with huge expectations, and partnered with the European Space Agency. As a result, it tried to mesh many competing interests into one functioning whole. A complex matrix approach linked groups focused around the specific aspect of the Saturnian system they wanted to study (rings, atmosphere, moons and so on). Mission leaders worked to integrate these aims. This often resulted in different working groups essentially bartering to achieve their science goals: 'You can photograph the rings at this particular time if I get to switch my plasma instrument on at another time.'

By contrast, the Mars rovers Spirit and Opportunity, which launched in 2003 and ended in 2010 and 2018, respectively, had one principal investigator (Steven Squyres, at Cornell University in Ithaca, New York, although Vertesi spares his blushes by calling him Jeremy). He led the team, with all members providing input to make decisions collectively. Researchers worked together to identify and settle on courses of action, such as what rock to investigate next or which direction to drive the rovers in.

Both Cassini and the Mars mission were wildly successful. They made discoveries fundamental to planetary science. But Vertesi argues that the nature of those discoveries was shaped by how their human operators asked questions. Cassini revealed deep insights about the moons, rings and other parts of Saturn from the perspective of individual instruments — such as radar studies of how the lakes on Titan, Saturn's largest moon, changed over time.



Spirit and Opportunity resulted in discoveries about specific rocks, dunes or other Martian landforms as seen by many instruments. The first approach yields encyclopedic knowledge in chunks; the second produces more of a synthesis of understanding about a particular landscape.

Seen through this lens, these missions offer lessons for teams more generally. Consider data sharing. Vertesi argues that the Mars mission embraced the concept of open data not just because it was a taxpayer-funded mission — the usual explanation — but because its flat, collectivist organization required it. Meanwhile, on Cassini, the leader of the camera team ended up in a cycle of distrust with other scientists when she attempted to maintain control over images from her group.

Unsurprisingly, Vertesi notes that institutional sexism probably had a role in the camera leader's difficulties (one project scientist said he would make her "mud-wrestle" a male researcher to resolve an issue). Other women did rise to positions of power in the Cassini mission, but mainly towards the end of the spacecraft's life. It was acceptable for women to run an existing mission, not a new one. Happily, this is changing: planetary scientist Elizabeth Turtle is leading NASA's upcoming Dragonfly mission to Titan.

Other lessons involve the challenge of managing people who don't all work in the same place — particularly acute in the age of COVID-19 and videoconferencing. Although mission control provides an organizational hub, many team scientists work remotely from their home institutions. They jostle for position from a distance — something all too familiar now. The Cassini team overcame the challenges of working across borders and time zones by nurturing a virtual sense of community, with photographs of teleconference participants on the wall (shades of endless Zoom calls to come). This gave overseas scientists access to a groundbreaking mission, and gave NASA researchers access to top talent worldwide.

Such take-home messages might be useful for collaborations getting off the ground. Vertesi notes that tech start-ups tend to favour the Mars-like flattened hierarchy around one charismatic leader. Bigger institutions, such as universities juggling the interests of departments and disciplines, often use a Cassini-like matrix.

In the end, science from both missions flowed directly from the people involved. No matter how the lakes on Titan shimmer, or what the mineralogy of a particular Martian rock turns out to be, it was the people behind the spacecraft, keyboards and endless teleconferences that drove what these interplanetary robots discovered. I'm glad to have come to know them even better through this book.

Alexandra Witze is a correspondent for Nature based in Boulder, Colorado.

Books in brief

Soft Matter
Tom McLeish Oxford Univ. Press (2020)
Freeze a rose in liquid nitrogen then tap it with a hammer, and the petals shatter. "Its softness is a function of its temperature, not just its molecular constituents and structure," observes theoretical physicist Tom McLeish, one of the researchers who founded 'soft-matter physics' in the 1990s. The field covers milkiness, sliminess and pearliness in colloids, polymers, liquid crystals, membranes, foams, granular materials, glasses and gels, and draws on chemistry and biology. This short introduction is fascinating, if unusually challenging.

Bones
Roy A. Meals W. W. Norton (2020)
Bone is a marvel: light-weight, durable, responsive to changing conditions and self-repairing. But "hardly anybody has ever seen or wants to see living bone, especially their own", writes orthopaedic surgeon Roy Meals in his revealing, sometimes riveting and finely illustrated investigation, ranging from the dinosaurs to today. Hence, perhaps, the confusion over how many bones humans have: although 206 is the widely accepted figure, it actually varies from person to person. Even the number of ribs can differ, from 24 to 26.

Unsustainable Inequalities
Lucas Chancel (transl. Malcolm DeBevoise) Harvard Univ. Press (2020)
When the United States announced its withdrawal from the 2015 Paris climate agreement, its president claimed to be protecting US miners' jobs. How, asks Paris-based economist Lucas Chancel, can the environment be protected while fighting poverty and inequality? His brief and moderately hopeful global analysis mentions Sweden, where poor households get assistance to replace obsolete heating equipment, and Indonesia, which swapped large fossil-fuel subsidies for "a vast program of social protection aimed at reducing inequalities".

The Flying Mathematicians of World War I
Tony Royle McGill-Queen’s Univ. Press (2020)
The First World War was crucial to the development of UK aeronautics. Who better to tell the story than an ex-Royal Air Force pilot who is a trained mathematician, a dedicated historian and a lively writer: Tony Royle. His compelling book is inspired by academics who became pilots, such as physicist Frederick Lindemann — later scientific adviser to Winston Churchill — who experimented with putting an aircraft into a deliberate spin, calculating the effects and then stabilizing it. Lindemann’s courage launched a standard spin-recovery procedure.

A World Beneath the Sands
Toby Wilkinson Picador (2020)
Two anniversaries approach: of the deciphering of Egyptian hieroglyphs in 1822 and the discovery of Tutankhamun’s tomb in 1922. Egyptologist Toby Wilkinson’s book addresses the intervening century, when Western archaeologists and imperialists scrambled to excavate ancient Egypt’s civilization and procure treasures for collections abroad. He tells the story well, with attention to both scholarship and scandal. After 1922, he says, “in embracing scientific rigour, Egyptology would lose its panache”.
Andrew Robinson

Spirit and Opportunity resulted in discoveries about specific rocks, dunes or other Martian landforms as seen by many instruments. The first approach yields encyclopedic knowledge in chunks; the second produces more of a synthesis of understanding about a particular landscape.

Seen through this lens, these missions offer lessons for teams more generally. Consider data sharing. Vertesi argues that the Mars mission embraced the concept of open data not just because it was a taxpayer-funded mission — the usual explanation — but because its flat, collectivist organization required it. Meanwhile, on Cassini, the leader of the camera team ended up in a cycle of distrust with other scientists when she attempted to maintain control over images from her group.

Unsurprisingly, Vertesi notes that institutional sexism probably had a role in the camera leader’s difficulties (one project scientist said he would make her “mud-wrestle” a male researcher to resolve an issue). Other women did rise to positions of power in the Cassini mission, but mainly towards the end of the spacecraft’s life. It was acceptable for women to run an existing mission, not a new one. Happily, this is changing: planetary scientist Elizabeth Turtle is leading NASA’s upcoming Dragonfly mission to Titan.

Other lessons involve the challenge of managing people who don’t all work in the same place — particularly acute in the age of COVID‑19 and videoconferencing. Although mission control provides an organizational hub, many team scientists work remotely from their home institutions. They jostle for position from a distance — something all too familiar now. The Cassini team overcame the challenges of working across borders and time zones by nurturing a virtual sense of community, with photographs of teleconference participants on the wall (shades of endless Zoom calls to come). This gave overseas scientists access to a groundbreaking mission, and gave NASA researchers access to top talent worldwide.

Such take-home messages might be useful for collaborations getting off the ground. Vertesi notes that tech start-ups tend to favour the Mars-like flattened hierarchy around one charismatic leader. Bigger institutions, such as universities juggling the interests of departments and disciplines, often use a Cassini-like matrix.

In the end, science from both missions flowed directly from the people involved. No matter how the lakes on Titan shimmer, or what the mineralogy of a particular Martian rock turns out to be, it was the people behind the spacecraft, keyboards and endless teleconferences that drove what these interplanetary robots discovered. I’m glad to have come to know them even better through this book.

Alexandra Witze is a correspondent for Nature based in Boulder, Colorado.


Comment

Consider what information — in what format — would best support your audiences’ decisions.

Five rules for evidence communication
Michael Blastland, Alexandra L. J. Freeman, Sander van der Linden, Theresa M. Marteau & David Spiegelhalter

Avoid unwarranted certainty, neat narratives and partisan presentation; strive to inform, not persuade.


“Be persuasive”, “be engaging”, “tell stories with your science”. Most researchers have heard such exhortations many times, and for good reason. Such rhetorical devices often help to land the message, whether that message is designed to sell a product or win a grant. These are the traditional techniques of communications applied to science. This approach often works, but it comes with danger.

There are myriad examples from the current pandemic of which we might ask: have experts always been explicit in acknowledging unknowns? Complexity? Conflicts of interest? Inconvenient data? And, importantly, their own values? Rather than re-examine those cases, we offer ideas to encourage reflection, based on our own research.

Our small, interdisciplinary group at the University of Cambridge, UK, collects empirical data on issues such as how to communicate uncertainty, how audiences decide what evidence to trust, and how narratives affect people’s decision-making. Our aim is to design communications that do not lead people to a particular decision, but help them to understand what is known about a topic and to make up their own minds on the basis of that evidence. In our view, it is important to be clear about motivations, present data fully and clearly, and share sources.

We recognize that the world is in an ‘infodemic’, with false information spreading virally on social media. Therefore, many scientists feel they are in an arms race of communication techniques. But consider the replication crisis, which has been blamed in part on researchers being incentivized to sell their work and focus on a story rather than on full and neutral reporting of what they have done. We worry that the urge to persuade or to tell a simple story can damage credibility and trustworthiness. Instead, we propose another approach. We call it evidence communication.

Inform, not persuade

Conventional communication techniques might ‘work’ when the aim is to change people’s beliefs or behaviours. But should that always be our aim?

Early in the pandemic, we surveyed people across 13 countries from South Korea to Mexico and asked what sources of information they trusted. We also asked them why. Their answers show how sensitive people are to the aims and interests of communicators. “They sell news, not truth,” said one UK respondent about journalists; “I believe the Government are being guided by scientists and genuinely care about the population,” said another; “WHO is paid by China,” replied a respondent from Japan. Friends and family were often warmly described as having “no reason to lie”.

These observations fit with the literature, which identifies expertise, honesty and good intentions as the key to trustworthiness1. Researchers need to demonstrate all three: we cannot expect to be trusted on the basis of expertise alone. So how do we demonstrate good intentions? We have to be open about our motivations, conflicts and limitations. Scientists whose objectives are perceived as prioritizing persuasion risk losing trust. During the COVID-19 crisis, one of us (D.S.) has frequently had to refuse journalists who tried to draw him away from his intention to stick to statistical evidence. As he told The Times, “The banner across my T-shirt should be To Inform and Not Persuade.” The media might urge us to aim for memorable sound bites or go beyond the strength of the data: be honest and aware of such traps.

Offer balance, not false balance

We can’t inform people fully if we don’t convey the balance of relevant evidence. We are all subject to a suite of psychological biases that mean we sometimes apply evidence to shore up our own beliefs, and find it difficult to accept evidence that goes against our ideas and hypotheses. People also like to tell (and hear) stories that don’t meander through thickets of opposing opinions or pros and cons. But evidence communicators must challenge these instincts and offer evidence in the round.

Partial presentation of evidence crops up across scientific literature and in the public domain. Often, the argument made is that people can’t take in lots of information at once. If you’re presenting written information, you can make it easier for them. Here’s a simple tip from research in medical communication: display the pros and cons in a table rather than stating them in the text. Imagine a table comparing proposed transmission-prevention policies that lays out the projected harms and benefits of each policy in terms of mortality, morbidity, economics, environment and mental health, breaking down subgroups and timescales.

For your audiences, knowing what the key pros and cons are is crucial. We neglect people’s interests at our peril. As soon as we are perceived to be ignoring or underplaying something our audience considers important, our motivations — and hence our trustworthiness — will be questioned. As one of the Australian participants in our COVID-19 survey in March said about their reason for distrust of official information: “Are they hiding the full effects from us?”

Disclose uncertainties

Part of telling the whole story is talking about what we don’t know. The simplest argument for stating uncertainty is that what we think we know is constantly changing (wearing face coverings is an example). One of us (M.B.), writing with others in the medical journal BMJ, admitted that at some point, all three authors had been wrong about COVID-19 (ref. 2). So, either we had better be certain, and right — or we should more humbly state our uncertainties.

When zoologist John Krebs became chair of the UK Food Standards Agency in the 2000s, he faced a deluge of crises, including dioxins in milk and the infectious cattle disease bovine spongiform encephalopathy. He adopted the following strategy: say what you know; what you don’t know; what you are doing to find out; what people can do in the meantime to be on the safe side; and that advice will change3. We check anyone talking to the public about COVID-19 against this list, especially the second point.

New Zealand’s response to the pandemic has been praised. And the country’s Ministry of Health web page on COVID-19 test results includes several paragraphs describing uncertainties, including the likelihood of a false negative (meaning that a test says someone’s not infected when they actually are). The US Centers for Disease Control and Prevention page mentions no such uncertainties. Neither does the UK National Health Service website (despite us raising the issue with them): it was deemed too confusing. Even with a highly accurate test, thousands of people get false negatives and false assurance that could lead to risky behaviours.

Quick tips for sharing evidence

The aim is to ‘inform but not persuade’, and — as the philosopher of trust Onora O’Neill says — “to be accessible, comprehensible, usable and assessable”.
• Address all the questions and concerns of the target audience.
• Anticipate misunderstandings; pre-emptively debunk or explain them.
• Don’t cherry-pick findings.
• Present potential benefits and possible harms in the same way so that they can be compared fairly.
• Avoid the biases inherent in any presentation format (for example, use both ‘positive’ and ‘negative’ framing together).
• Use numbers alone, or both words and numbers.
• Demonstrate ‘unapologetic uncertainty’: be open about a range of possible outcomes.
• When you don’t know, say so; say what you are going to do to find out, and by when.
• Highlight the quality and relevance of the underlying evidence (for example, describe the data set).
• Use a carefully designed layout in a clear order, and include sources.


When we trialled the wording with and without the explicit uncertainties around the test result, we found that the uncertainties did not seem to undermine trustworthiness. However, the wordings did affect people’s perception of whether the test recipient should isolate if they got a negative result. In other words, people correctly interpreted the messages without having their trust undermined by an upfront description of uncertainties. Other research finds little downside in expressing findings as a range (such as ‘between x and y’) rather than an exact number4.

Often, the degree of uncertainty is part of the core message. In January 2018, the BBC News website announced that over the three months to the previous November, “UK unemployment fell by 3,000 to 1.44 million”. Left unmentioned (because the UK Office for National Statistics made it hard to find) was the fact that the margin of error was ±77,000. (We are heartened that the Office for National Statistics has since started reporting ranges more prominently, and we have seen journalists follow this lead.)
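Returning to the testing example above: the false-negative arithmetic can be made concrete with a minimal sketch in Python (our illustration, not the authors’ analysis; the sensitivity and testing volume are invented):

# Illustrative only: why even an accurate test produces many false negatives.
# The 90% sensitivity and 100,000 infected people tested are assumed figures.

def expected_false_negatives(n_infected_tested: int, sensitivity: float) -> float:
    """Expected number of infected people who wrongly test negative."""
    return n_infected_tested * (1.0 - sensitivity)

print(f"{expected_false_negatives(100_000, 0.90):,.0f}")  # -> 10,000 false negatives

Even at 95% sensitivity, the same testing volume would return 5,000 reassuring but wrong results.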

State evidence quality

Audiences also judge the credibility of information based on the quality of the underlying evidence, more than its clarity, the usual priority for a communications department. Here’s a sign of how readily audiences pick out cues for quality of evidence. In a study to learn what formats work best for presenting medical data, we used a version of the phrase “x out of 100 people suffered this side effect”, and about 4% of all participants took the time to write in an open response box that a sample size of 100 people was not enough5. This was a misunderstanding due to our choice of words. We did not literally mean 100 people, but it is notable that the participants were not scientific researchers or even students: they were representative UK residents (120 of the 1,519 respondents who left unsolicited comments overall mentioned sample size).

As scientists, we tend to underestimate the sophistication of our audiences’ sensitivity to cues of quality and how these affect trust. In practical terms, overtly stating that a piece of evidence is of high or low quality is unsubtle but definitely noticed by a significant proportion of a non-specialist audience. People in our surveys also ask to know the size and source of data sets, so that they can gauge relevance to them. Such information should be provided.

Inoculate against misinformation

Many will worry that following these key principles — especially exposing complexities, uncertainties or unwelcome possibilities — will let ‘merchants of doubt’ or bad actors warp their message. But there are other ways to guard against this. Research on climate change, COVID-19 and other topics shows that if people are pre-emptively warned against attempts to sow doubt (known as prebunking), they resist being swayed by misinformation or disinformation6–8.

Prebunking requires anticipating potential misunderstandings or disinformation attacks, and that means understanding the concerns of the audience. Read public forums and popular news sources. Consider what decisions your audiences are making and what information — in what format — would best support these, from whether to wear a face covering to whether to take a vaccine. Consider the costs and benefits as they see them.

When we developed a web tool about treatments for women newly diagnosed with breast cancer, we read the comments on patient forums. This revealed that people wanted to know the magnitude of survival benefit and of possible harms. For example, one woman said that a 1% improvement in survival was not worth the side effects of the drug tamoxifen (we paraphrase to preserve confidentiality). The information we ended up presenting was more complex — and what people wanted to know.

What next?

The field of evidence communication has been growing over several decades, mainly stemming from researchers in medical communication, but there is still much we don’t know about its effects, or best practice. If one is not trying to change belief or behaviour, it’s hard even to know how to measure success. Like all engagement efforts, many of the effects of a message are moderated greatly by non-verbal cues and the relationships between communicator and audience. But these challenges are why we think it important to consider alternative approaches (see ‘Quick tips for sharing evidence’).

In some fields, such as conservation science or public health, researchers might, depending on the circumstances, feel that they should become advocates of their subject, advancing their positions with ‘every trick in the book’. Indeed, all researchers are “partisan advocates of the validity and importance of their work”, according to a recent study9. There is a continuum from ‘informing’ to ‘persuading’ — and researchers should choose their position on it consciously. Political and professional communicators often have aims and obligations that push them towards persuasion, whereas scientists should feel more free to judge what is appropriate. Many researchers do an excellent job of engaging with the public.



Still, it bears repeating: researchers should not take up the reins of rhetoric blindly or feel that they should always harness the tools used by the media and entertainment industries to shape people’s emotions and beliefs. Nor should they assume that they are apolitical, unbiased and utterly objective — all of us have values, beliefs and temptations. Even if we choose to be an ‘honest broker’, the first person we need to be honest with is ourselves.

In our research across ten countries, we see how people’s willingness to be vaccinated against COVID-19 correlates with their background levels of trust in science, scientific researchers and doctors, alongside their worry about the virus and their general beliefs in the efficacy of vaccines. Trust is crucial. Always aiming to ‘sell the science’ doesn’t help the scientific process or the scientific community in the long run, just as it doesn’t help people (patients, the public or policymakers) to make informed decisions in the short term. That requires good evidence communication. Ironically, we hope we’ve persuaded you of that.

For more on evidence communication, see Supplementary information.

The authors

Michael Blastland is a board member and Alexandra L. J. Freeman is executive director at the Winton Centre for Risk and Evidence Communication, University of Cambridge, UK. Sander van der Linden is a board member at the Winton Centre and director of the Cambridge Social Decision-Making Lab, University of Cambridge, UK. Theresa M. Marteau is a board member at the Winton Centre and director of the Behaviour and Health Research Unit, University of Cambridge, UK. David Spiegelhalter is chair of the Winton Centre.
e-mails: [email protected]; [email protected]

Supplementary information accompanies this Comment: see go.nature.com/3pivy6v

1. White, M. P. & Eiser, J. R. in Trust in Risk Management: Uncertainty and Scepticism in the Public Mind Ch. 4, 95–117 (Taylor & Francis, 2010).
2. Smith, G. D., Blastland, M. & Munafò, M. Br. Med. J. 371, m3979 (2020).
3. Champkin, J. Significance 10, 23–29 (2013).
4. van der Bles, A. M., van der Linden, S., Freeman, A. L. J. & Spiegelhalter, D. J. Proc. Natl Acad. Sci. USA 117, 7672–7683 (2020).
5. Brick, C., McDowell, M. & Freeman, A. L. J. R. Soc. Open Sci. 7, 190876 (2020).
6. Roozenbeek, J. & van der Linden, S. Palgrave Commun. 5, 65 (2019).
7. Maertens, R., Roozenbeek, J., Basol, M. & van der Linden, S. J. Exp. Psychol. Appl. https://doi.org/10.1037/xap0000315 (2020).
8. van der Linden, S., Leiserowitz, A., Rosenthal, S. & Maibach, E. Glob. Challenges 1, 1600008 (2017).
9. Leng, G. & Leng, R. I. The Matter of Facts: Skepticism, Persuasion, and Evidence in Science (MIT Press, 2020).


Comment

An open-pit copper mine at the Mutanda works run by Glencore in the Democratic Republic of the Congo.

COVID-19 disruptions to tech-metals supply are a wake-up call
Ata Akcil, Zhi Sun & Sandeep Panda

The pandemic has temporarily closed mines, factories and borders and destabilized flows of cobalt, lithium and other metals that are crucial for batteries, wind turbines and solar panels.

Solar panels, wind turbines and batteries need silicon, cobalt, lithium and more to convert and store energy. Access to these elements, known as technology metals, is crucial for combating climate change. Some 3 billion tonnes of metals and minerals will be needed to decarbonize the global energy system by 2050, the World Bank estimates1.

Supplies were stretched before COVID-19. Now, the tech-metals sector is in disarray. The pandemic has partly or wholly closed hundreds of mines, smelters and refineries (see go.nature.com/3ehkn9g). Metals production will be at least one-third lower this year than last, with an estimated potential loss worldwide of almost US$9 billion in revenue. South American and African mines have been hit the hardest. Peru stopped producing iron and tin completely in April and is still trying to get back to 80% of former levels. South Africa’s mines, including those for platinum-group metals, were closed in March and have run at half capacity since May — levels that are financially unviable in the long run.

Industrial demand for metals has fallen in the global slowdown2. Factory and border closures have disrupted international supply chains, too. For example, lockdowns in China interrupted supplies of almost half of the world’s battery materials earlier in the year.


The city of Wuhan, where the virus was first reported, is a major manufacturing hub for vehicles and batteries. Its plants were shut from the end of January until April.

Metals markets are volatile as a result. Prices are expected to fall by 13% on average this year2, with decreases ranging from 0.5–4.5% for platinum, copper and aluminium, to 11% for zinc and 17% for nickel. By contrast, some scarce materials face price spikes and fierce competition to secure supplies3–5. These include rare earth elements (such as cerium, yttrium, lanthanum and neodymium), which are used in computer chips, mobile phones, batteries and magnets. Demand for them is soaring because many countries want to boost their renewable-energy sectors to stimulate and decarbonize their economies6.

Governments and researchers must work together to secure world supplies of technology metals. Steps include: supporting the mineral and mining sector through the pandemic; tightening regulations for the import and export of metals and sustainable extraction practices; and increasing the recycling rates of metals from electronics waste. Pressingly, more research and development is needed to make it easier and cleaner to produce metals, and to recover them from products that have reached the end of their useful life.

Metals markets (figure) | Prices of technology metals have been volatile since COVID-19 closed mines worldwide and factories in Wuhan from the end of January. Three trends, plotted as price % change from December to October: many metals prices (cobalt, zinc, copper, nickel) slumped after the virus hit; costs of metals used in electric vehicles (lithium, lanthanum, dysprosium, neodymium) rose and fell; metals used for semiconductors (gallium, indium) have recently become pricier. Latest price data unavailable for lanthanum, dysprosium and neodymium. Source: Shanghai Metals Market.
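As a note on reading the chart, the ‘price % change’ panels show change relative to a December baseline. A minimal sketch of that calculation (our illustration; the copper prices below are invented so that the series ends near the roughly 19.6% fall cited later in the text):

# Illustrative only: percentage change relative to a December baseline,
# the metric plotted in the 'Metals markets' chart. Prices are invented.

def pct_change_from_baseline(prices: list[float]) -> list[float]:
    """Percentage change of each price relative to the first (baseline) entry."""
    baseline = prices[0]
    return [100.0 * (p - baseline) / baseline for p in prices]

copper_usd_per_tonne = [6150, 6050, 5650, 5100, 4945]  # hypothetical Dec-Apr prices
print([round(x, 1) for x in pct_change_from_baseline(copper_usd_per_tonne)])
# -> [0.0, -1.6, -8.1, -17.1, -19.6]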

Valuable materials

More than 30 metals are crucial for green-energy technologies. Aluminium and copper are most in demand, for turbine blades, wires and electrodes, for example. Cobalt, lithium, nickel and iron are key ingredients in batteries. For instance, more than 60% of mined cobalt goes into rechargeable batteries. Rare metals such as indium and gallium are widely used in electronics components, such as transistors and computer chips. Neodymium and dysprosium are used in magnets.

Demand for these ‘critical metals’ is skyrocketing to meet renewable-energy goals. For example, the world’s capacity from wind and solar sources needs to double to deliver half of global supply by 2035 under the Paris climate agreement (see go.nature.com/2jvjte7). More energy will need to be stored, meaning a greater need for vanadium, nickel, lithium and cobalt. Demand for these last two metals is expected to rise by up to fivefold from 2018 levels by 2050 (ref. 1). Other technology advances, such as 5G digital communications, are adding to pressures on these resources.

Supplies are controlled by a small number of companies and countries. Among the biggest mining corporations by revenue are, for example, Glencore in Baar, Switzerland; BHP in Melbourne, Australia; Rio Tinto in London; and Vale in Rio de Janeiro, Brazil (see go.nature.com/3ktnrme). China supplies around 90% of rare earth elements. Chinese and European companies control cobalt production in the Democratic Republic of the Congo (DRC), a nation that is also rich in copper and tin, tantalum, tungsten and gold. Australia and Chile dominate the production of lithium4, whereas southeast Asian countries produce most nickel and China produces most graphite.

Regions that are reliant on imports are vulnerable to fluctuations in supplies, prices and political whims. Producers are not immune. For instance, in 2018 a policy change in the DRC triggered an economic cascade that suspended operations at one of the country’s largest cobalt mines, Glencore’s Mutanda mine. The government announced that it would treat cobalt as a strategic substance and increased its mining royalty from 2% to 10%.



Price turbulence followed, exacerbated by falling demand for electric vehicles in China and bottlenecks in obtaining the chemicals needed to process the ore. Mutanda has been closed since last November, putting 20% of the world’s cobalt production offline.

The world’s biggest economies recognize the risks. For example, the US Department of Energy lists 35 critical materials for technologies5. The nation imports gallium mainly from China, the United Kingdom, Germany and Ukraine. For rare earth elements, it depends on China, Estonia, Malaysia and Japan3. It obtains lithium from Argentina, Chile, China and Russia, whereas cobalt comes from China, Norway, Japan, Finland and the DRC5. Similarly, the European Union has a list of 30 critical raw materials. Shortages of rare earth elements could, for example, derail the European Green Deal initiative to decarbonize Europe’s economy by 2050 (ref. 7). China relies on imports, too — of lithium from Australia and Chile, nickel from South Asian countries, platinum-group metals from South Africa and Russia, as well as cobalt from the DRC.

The COVID-19 pandemic is exacerbating supply and demand problems. Wuhan’s manufacturing shutdown and a 20% drop in the Chinese economy in the first quarter of this year initially drove down the prices of copper by 19.6%, nickel by 18% and cobalt by 7%. Prices have since been slowly rallying as some manufacturing has opened up. By contrast, prices of dysprosium and neodymium increased around February and March, because these metals were still highly sought after, whereas lithium and lanthanum experienced price drops around April (see ‘Metals markets’).

Markets are likely to remain turbulent for the next year at least, as the global economic slowdown depresses demand. Aircraft manufacturers (including Boeing and Airbus) and vehicle producers (such as Tesla, Volkswagen, Jaguar–Land Rover and Toyota) have shrunk production. Mining is still stalled across much of the world, with workers in Zambia, the DRC and South Africa subject to COVID-19 restrictions.

Three options

Governments can respond in three ways. First: exploit other reserves. For example, improving access to deposits of rare earth elements in Brazil, Vietnam, Australia, Russia, India and Greenland would reduce reliance on China (see go.nature.com/385tusy). This is a long game — it would take at least 15 years for Europe to establish a supply chain for rare earth elements outside China, for instance7. And the environmental cost of developing mineral reserves is high: waste from mining can contain radioactive elements and other contaminants, and waste water from processing can release ammonia and heavy metals. All the chemicals in minerals need to be considered for clean production.


Women sort electronics waste for recycling in a Vietnamese village.

Second: redesign technologies to use alternative materials. This needs to happen anyway. For example, cobalt is not abundant enough in Earth’s crust to deliver all the batteries the world needs affordably, and it is often mined in poor social and ethical conditions. Manufacturers such as the US car maker Tesla and the battery firm CATL in China are pursuing alternative batteries that are cobalt-free. However, many replacements rely on nickel, which is carcinogenic, or on iron (in lithium iron phosphate batteries, which are less efficient at storing charge). Solid-state and sodium-ion batteries are other promising options.

Third: recycle. Spent batteries and obsolete devices have higher concentrations of metals than do ores, and so extraction from these is potentially more economical. Recycling shores up supplies, even if it cannot meet rising demand. It shrinks supply chains and logistical costs. Countries or regions that have strong technology sectors, such as the EU, the United States, China, India and Japan, produce most electronic waste and could reprocess it domestically8.

There are economic, technical and regulatory barriers to overcome, however. Recycling rates are low: worldwide only 17% of electronic waste is collected and treated. Europe has one of the highest rates (around 40% in 2018; see go.nature.com/2tkqfgr), with roughly half (40–60%) of the metal produced in the region coming from scrap (see go.nature.com/3tyj22t). There’s ambition to do more: the European Commission’s Directive on Waste Electrical and Electronic Equipment has a goal of 65% from 2019. Yet few member states have the facilities to meet it.

In China, the recycling of spent lithium-ion batteries is beginning to supplement the supply of critical raw materials.

China has made its electronics manufacturers legally responsible for recycling the products they make; those that do not are taxed to cover the costs. Yet much scrap still goes straight to landfill or (sometimes illegally) to countries such as Nigeria, Ghana, Pakistan, Tanzania and Thailand, many of which have inadequate environmental and health and safety laws. Metals and other toxic substances, such as flame retardants, lead and cadmium, contaminate soil and groundwater and damage workers’ health.

Recharge the sector

The following three steps need to be taken to stabilize the supply of critical metals.

First, political leaders should support metals and minerals industries and recycling in their post-pandemic stimulus packages. With demand for green-energy technologies surging, this sector is ripe to deliver revenue and jobs. Tax relief for mines would allow them to restart. Cross-border and domestic supply chains should be protected by ensuring that raw materials can be transported from mines to plants. Governments and banks should also target investment to recycling and reuse of tech metals, to reduce reliance on imports.

Second, researchers and manufacturers should develop a ‘circular economy’ for these materials8,9. Products need to be more ecofriendly, less energy-hungry, longer-lasting and easier to recycle and repair. Reuse of defunct products, recovery of materials and recycling of metals all need to be integrated into industry, along with cleaner production methods9,10. That will require public and private research to combine principles from metallurgical, chemical, environmental and biotechnical engineering.

Third, states should set up a global association under the United Nations’ Sustainable Development Goals to support the critical-metals sector and renewable-energy technologies.

Such a body would focus on the sustainable management of mineral-metal resources, including legal, financial, technological and environmental aspects. Priorities include revitalizing the international supply chain, boosting manufacturing, transferring recycling technologies and innovating in urban waste recycling and resource recovery.

A good model is the EU research network EIT RawMaterials, run by the European Institute of Innovation and Technology (EIT). In 2019, it established the Rare Earth Industry Association to bring academia and industry together to strengthen research and policies across the sector. Collaborators include the Association of China Rare Earth Industry, the Japan Society of Newer Metals and Europe’s Critical Raw Material Alliance.

Materials are the lifeblood of the tech sector. As the world faces economic, environmental and social upheaval, it is ever more crucial to keep supplies flowing.

The authors

Ata Akcil is professor of engineering and technical senior expert for critical raw materials and Sandeep Panda is an assistant professor of mineral biotechnology in the Mineral-Metal Recovery and Recycling Research Group, Department of Mining Engineering, Süleyman Demirel University, Isparta, Turkey. Zhi Sun is professor of metal recycling at the National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Institute of Process Engineering, Chinese Academy of Sciences, Beijing, China.
e-mails: [email protected]; sandeeppanda@sdu.edu.tr; [email protected]

1. Hund, K., La Porta, D., Fabregas, T. P., Laing, T. & Drexhage, J. Minerals for Climate Action: The Mineral Intensity of the Clean Energy Transition (World Bank, 2020).
2. World Bank. Commodities Market Outlook: Implications of COVID-19 for Commodities (World Bank, 2020).
3. US Geological Survey. Mineral Commodity Summaries 2020 (USGS, 2020).
4. Australian Government, Department of Industry, Innovation and Science, Australian Trade and Investment Commission. Australia’s Critical Minerals Strategy 2019 (Commonwealth of Australia, 2019).
5. US Department of Energy. Testimony of Daniel Simmons, Assistant Secretary for Energy Efficiency and Renewable Energy, U.S. Department of Energy, Before the U.S. Senate Committee on Energy & Natural Resources, September 17, 2019 (US Dept of Energy, 2019); available at http://go.nature.com/3thsscc
6. Kim, T.-Y. & Karpinski, M. Clean Energy Progress After the COVID-19 Crisis Will Need Reliable Supplies of Critical Minerals (International Energy Agency, 2020).
7. EIT RawMaterials. Position Paper on COVID-19 (EIT RawMaterials, 2020); available at http://go.nature.com/2jjmxbg
8. Işıldar, A. et al. J. Hazard. Mat. 362, 467–481 (2019).
9. Sun, Z. et al. ACS Sustain. Chem. Eng. 5, 21–40 (2017).
10. Sethurajan, M. et al. Crit. Rev. Environ. Sci. Technol. 49, 212–275 (2019).


News & views

Figure 1 | A haze over Paris. Daellenbach et al.3 have modelled the distribution across Europe of atmospheric aerosols that have the capacity to induce oxidative stress in cells when inhaled. They find that these aerosols are associated with human activities and are concentrated in regions of high human population density.

Environmental science

A map of potentially harmful aerosols in Europe
Rodney Weber

Atmospheric particles that increase levels of cellular oxidants when inhaled might be especially harmful. An analysis reveals which emissions should be limited to minimize the potential adverse health effects of such particles in Europe. See p.414

The inhalation of outdoor airborne particles can damage health and lead to premature death1,2. On page 414, Daellenbach et al.3 report that certain components of atmospheric aerosols — those that have the potential to increase the oxidization of molecules in cells — might be responsible for most of the acute adverse effects of these particles in Europe. They also find that the distribution of these potentially harmful aerosols differs from that of the overall mass of aerosols, with hotspots in regions where the human population density is high.

The overall mass concentration of particles is currently used as the key metric for assessing the risk of aerosols and to guide air-quality regulations. If the oxidation potential of aerosols is a stronger indicator of adverse effects, then the new results show that the evidence used at present to inform policymaking is deficient.

In many regions, the concentration of aerosol mass in the atmosphere has been monitored for decades. Correlations have thus been established1,2 between outdoor aerosol-mass levels and a diverse set of detrimental respiratory and cardiovascular diseases, along with a host of other conditions, such as diabetes and dementia. However, the finding that a broad metric such as mass concentration is linked with these health effects is surprising, because aerosol particles are extremely complex. For example, they range continuously in diameter from tens of nanometres to tens of micrometres2,4. Particle size governs where particles are deposited in the respiratory system and whether they are transported to other organs, which in turn can influence the adverse effect produced2,4. Moreover, myriad chemical compounds can be found in aerosols, and the amount of each constituent varies continuously as particles grow and age4,5.

Given the vast complexity of the particles we inhale, is there a better metric than mass concentration that could provide more-detailed insight into, and correlates more strongly with, the ill effects of aerosols on people? One area of research has focused on a physiological mechanism known as oxidative stress to explain many of the varied adverse health outcomes of aerosols6,7. The idea is that particles carry oxidants into the lung and/or — probably a much worse effect — that specific components of the aerosol catalytically generate oxidants in cells when deposited in the lung or transported to other regions of the body.


The oxidant load in cells is normally controlled, but if it becomes too high, the resulting damage to cellular components leads to oxidative stress and produces an overall inflammatory response. Chemical assays have therefore been developed to analyse either the oxidants on particles or the aerosol components that generate oxidants in vivo, thereby allowing the oxidative potential (OP) — the capacity of particles to induce oxidative stress — to be measured.

An ideal OP assay would comprehensively account for all the compounds that produce oxidative stress8 and their possible interactions with each other. Current assays arguably do not do this because they are sensitive to a few specific compounds. To address this problem, Daellenbach et al. used three assays: one that quantifies the amount of oxidants on particles and two that assess the possible in vivo response. The authors loosely refer to all three as measures of aerosol OP, but agreement on more-precise terminology is needed in this field to clarify the source of oxidants (whether they were delivered by aerosols or formed in cells from aerosol components), and to aid consistency in the reporting of future results.

The authors collected about 90 samples of PM10 — particulate matter with a diameter of 10 micrometres or less — at each of 9 sites in Switzerland and Liechtenstein, and assessed the OP of the samples using the three assays. The authors then developed an air-quality model that predicted OP throughout Europe by extrapolating the data from the nine measurement sites, assuming that these locations were representative of all of Europe. They finally combined the model data with population-density data to calculate and compare human exposure to aerosol mass and OP. No comparisons were made to actual health data.

Daellenbach and co-workers found that OP was closely linked to human activities. The main sources of OP included organic aerosols produced indirectly from combustion, for example by residential wood burning, and metals from vehicle non-exhaust-pipe emissions (such as those produced by the use of brakes). This link to human activities resulted in the formation of OP hotspots at certain places with high population densities (Fig. 1). By contrast, the overall mass distribution of PM10 was found to be more spatially uniform, because it was dominated by wind-blown mineral dust, organic aerosols derived indirectly from vegetation emissions, and sources of inorganic species, such as sulfate, nitrate and ammonium salts. An earlier study9 of aerosol-mass concentration also found that these salts contributed widely to aerosol mass and linked them to agricultural emissions (ammonium nitrate, for example, is a widely used fertilizer), thus suggesting that agricultural emissions should be a focus of efforts aimed at lowering premature mortality from aerosols over Europe.
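The exposure comparison described above amounts, in essence, to population-weighting a modelled pollution field. A minimal sketch of that standard calculation (our illustration with invented grid values; the study’s actual model is far more detailed):

# Illustrative only: population-weighted mean exposure over a model grid.
# Values are invented to mimic OP tracking population density while
# PM10 mass stays fairly uniform.

def population_weighted_exposure(conc, pop):
    """Mean exposure = sum(pop_i * conc_i) / sum(pop_i) over grid cells."""
    return sum(c * p for c, p in zip(conc, pop)) / sum(pop)

pop = [9000, 8000, 500, 300]          # inhabitants per grid cell
pm10_mass = [18.0, 17.5, 19.0, 20.0]  # micrograms per cubic metre (fairly uniform)
op = [3.2, 2.9, 0.8, 0.6]             # arbitrary OP units (tracks population)

print(round(population_weighted_exposure(pm10_mass, pop), 2))  # 17.84
print(round(population_weighted_exposure(op, pop), 2))         # 2.95, city-dominated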

However, Daellenbach and colleagues show that these inorganic salts have low OP, and therefore are less concerning for human health. Clearly, any policy for protecting people from aerosols will be very different depending on which particle property — mass concentration or OP — is used to develop it.

It should be noted that evidence supporting the use of OP assays as indicators of health risks is mixed. The results of some toxicology tests of aerosol components do indeed correlate with OP determined by specific assays, and Daellenbach and colleagues show that this is the case for one of their assays. Several epidemiological studies have also shown that OP determined by some assays is more closely linked to specific adverse respiratory and cardiovascular effects than is aerosol mass, but other studies do not10. Moreover, it is difficult to compare studies that examine the effects of aerosol-mass concentration on human health with those that look at the effects of OP, because large data sets are needed for robust comparisons, and these are available only for mass-concentration studies. Instead, human exposure to OP is often predicted using computational models derived from a limited set of measurements, increasing the uncertainty of the results11,12. This is also an unavoidable limitation of Daellenbach and colleagues’ study.

Periodic reviews of the scientific bases for aerosol health effects by the World Health Organization1 and the US Environmental Protection Agency2 have so far found little evidence for replacing aerosol-mass concentration with metrics that focus more on the composition or source of the aerosols. But aerosol data founded on plausible biological mechanisms should be a better guide for future research addressing aerosol links to adverse health. Furthermore, as regulations and emissions driven by climate change alter the ambient aerosol composition in many regions, the usefulness of a mass-concentration metric might diminish. It is therefore prudent to explore other aerosol metrics, as Daellenbach et al. have done. Greater evaluation is now needed to help connect these metrics to human health data, to assess their utility.

Rodney Weber is in the School of Earth and Atmospheric Sciences, Georgia Institute of Technology, Atlanta, Georgia 30332, USA.
e-mail: [email protected]

1. World Health Organization Regional Office for Europe. Review of Evidence on Health Aspects of Air Pollution — REVIHAAP Project (WHO, 2013).
2. United States Environmental Protection Agency. Integrated Science Assessment for Particulate Matter (Final Report, Dec 2019) (EPA, 2019).
3. Daellenbach, K. R. et al. Nature 587, 414–419 (2020).
4. United States Environmental Protection Agency. Air Quality Criteria for Particulate Matter (Final Report, 2004) (EPA, 2004).
5. Jimenez, J. L. et al. Science 326, 1525–1529 (2009).
6. Nel, A. Science 308, 804–806 (2005).
7. Li, N. et al. Environ. Health Perspect. 111, 455–460 (2003).
8. Ayres, J. G. et al. Inhal. Toxicol. 20, 75–99 (2008).
9. Lelieveld, J., Evans, J. S., Fnais, M., Giannadaki, D. & Pozzer, A. Nature 525, 367–371 (2015).
10. Bates, J. T. et al. Environ. Sci. Technol. 53, 4003–4019 (2019).
11. Bates, J. T. et al. Environ. Sci. Technol. 49, 13605–13612 (2015).
12. Yang, A. et al. Occup. Environ. Med. 73, 154–160 (2016).

Developmental biology

Universal assembly instructions for placentas
Jennifer L. Watts & Amy Ralston

Our understanding of how mammalian embryos develop is based largely on mice. A study now reveals striking similarities and intriguing differences between mouse, cow and human embryos. See p.443

The placenta is a defining feature of being a mammal, and its formation is one of the first steps in mammalian development. The embryo begins to make its placenta without direct guidance from its mother — rather, it follows a set of molecularly encoded, do-it-yourself assembly instructions. Whether these instructions are universal or unique to each species of mammal is a long-standing mystery. Gerri et al.1 report on page 443 a remarkable similarity in how mouse, cow and human embryos make their placentas.



Historically, the mouse embryo has served as the model for elucidating the molecular mechanisms that guide cell-fate outcomes (decisions) during mammalian development. Now-classic studies established that the ball-shaped mouse embryo develops an external ‘rind’ of cells fated to become placenta about three days after fertilization. These cells, called the trophectoderm, encircle a group of inner cells that are considered pluripotent — they possess the capacity to produce all cell types of the body (reviewed in ref. 2).

In mouse embryos, this first cellular differentiation involves the polarization of trophectoderm cells along one axis, known as the apical–basal axis. Cell-polarity proteins accumulate on the apical side of trophectoderm cells, repressing signalling through the HIPPO pathway3,4. By contrast, HIPPO signalling is active in the pluripotent cells, because they are unpolarized. In the pluripotent cells, HIPPO signalling prevents the transcription factor YAP1 from moving to the nucleus5. In trophectoderm cells, nuclear YAP1 promotes the expression of the trophectoderm genes Cdx2 and Gata3, and represses the pluripotency gene Sox2 (refs 5–7).

These discoveries in mouse embryos were an essential step towards understanding how embryos of other species create distinct cell types. Early mouse, cow and human embryos are structurally quite similar, raising the possibility that molecular mechanisms guiding the first cell-fate decision in development are evolutionarily conserved across mammalian species. But, curiously, the CDX2 protein, which is thought to be a master regulator of trophectoderm in mouse, does not seem to be present in cow or human embryos at the time of trophectoderm formation8,9, suggesting that other genes must regulate the first cell-fate decision in these species. However, the mechanism(s) for species other than mouse have not yet been described.

This is where Gerri and colleagues add a new page to the mammalian embryo instruction book. First, the authors analysed gene expression in human and cow embryos, and demonstrated that YAP1 localization and GATA3 gene expression are conserved between species as the trophectoderm emerges. Next, they disrupted cell polarization in each species by inhibiting atypical protein kinase C (aPKC), a key polarization protein. This prevented nuclear localization of YAP1, and disrupted GATA3 expression. These observations point to a conserved gene-regulatory module that governs the first cell-fate decision in mouse, cow and human embryos (Fig. 1a).

The observations also raise exciting possibilities for future study. For example, it is still unknown whether aPKC influences GATA3 through YAP1 in cow and human embryos as it does in mice. Although disruption of aPKC interfered with YAP1 nuclear localization and GATA3 expression in cow and human embryos, the requirement for YAP1 in GATA3 regulation was not tested in cow or human embryos. This leaves open the possibility that an aPKC-regulated transcription factor other than YAP1 could regulate GATA3 in cow and human trophectoderm.
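The regulatory logic described above can be captured in a toy truth-table model — our own encoding for illustration, not code from the study:

# Toy encoding, for illustration only, of the module in Fig. 1:
# apical polarity -> aPKC active -> HIPPO off -> YAP1 nuclear -> GATA3 on.

def trophectoderm_module(polarized: bool, species: str = "mouse") -> dict:
    apkc_active = polarized          # apical polarity activates aPKC
    hippo_on = not apkc_active       # aPKC represses HIPPO signalling
    yap1_nuclear = not hippo_on      # active HIPPO keeps YAP1 out of the nucleus
    gata3_on = yap1_nuclear          # nuclear YAP1 drives GATA3
    # In mouse, nuclear YAP1 also represses Sox2; in cow and human,
    # SOX2 is initially expressed in both cell types.
    sox2_on = (not yap1_nuclear) if species == "mouse" else True
    return {"YAP1_nuclear": yap1_nuclear, "GATA3": gata3_on, "SOX2": sox2_on}

print(trophectoderm_module(polarized=True))                  # outer cell: GATA3 on
print(trophectoderm_module(polarized=False))                 # inner cell: Sox2 on
print(trophectoderm_module(polarized=True, species="human")) # SOX2 initially on too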

Figure 1 | Shared pathways during early mammalian development. In the very early stages of the development of mouse, cow and human embryos, the outer cells of the embryo become trophectoderm (a cell type destined to give rise to the placenta), whereas the inner cells become pluripotent (capable of producing all cell types of the body). a, Gerri et al.1 demonstrate that, in the trophectoderm cells of all three species, the presence of a protein called atypical protein kinase C (aPKC) leads (through inhibition of the HIPPO signalling pathway, not shown) to the movement of YAP1 protein into the nucleus. Here, YAP1 promotes transcription of the gene GATA3 — a key trophectoderm-promoting factor. b, In mice, YAP1 does not move to the nucleus in pluripotent cells — GATA3 is not expressed, whereas the pluripotency gene Sox2 is. The mechanisms that govern the establishment of pluripotency in cow and human embryos remain unclear.

To distinguish between these possibilities, YAP1 should be hyperactivated or inhibited in cow and human embryos, as has been done previously in mice. These analyses will bring us closer to understanding the conserved programs underlying mammalian early development.

In spite of the striking conservation in how cell polarity and HIPPO signalling regulate GATA3 expression in mouse, cow and human embryos, Gerri and colleagues also report a notable difference. In mice, YAP1 inhibits expression of the pluripotency gene Sox2 in the trophectoderm, so restricting Sox2 expression to the embryo’s core7 (Fig. 1b). By contrast, in cow and human embryos, the SOX2 gene is initially expressed in both trophectoderm and pluripotent cells. Thus, YAP1 does not affect the initial patterning of SOX2 gene expression in cows or humans, as it does in mice8,10. Gerri et al. find that SOX2 expression does eventually become restricted to pluripotent cells in cow and human embryos, but it is not yet known whether this later process depends on YAP1. If so, the role of the signalling pathway would be conserved between species, although its timing would not.

The fact that SOX2 is initially broadly expressed in cow and human embryos raises intriguing follow-up questions. For instance, does it indicate that pluripotency is defined at a later developmental stage in cow and human embryos than in mice? Or could there be alternative genes defining pluripotency at the earlier stages in cow and human? Future studies to address these possibilities will have broad-ranging implications.

Discoveries in mouse and human embryos contribute directly to our understanding of stem-cell biology.

Cultured stem cells were first derived from both pluripotent and trophectoderm cells of the mouse embryo, paving the way for the establishment of stem-cell lines from human embryos2,11. Human stem cells have since been used in visionary efforts to study development and disease12. The knowledge gleaned from embryos thus guides our understanding of how to optimize protocols to manipulate the identity and function of stem cells, as well as bringing us closer to understanding the universal assembly instructions for mammalian embryogenesis.

Jennifer L. Watts and Amy Ralston are in the Reproductive and Developmental Sciences Training Program, Michigan State University, East Lansing, Michigan 48824, USA. J.L.W. is also in the Molecular, Cellular and Integrative Physiology Program, Michigan State University. A.R. is also in the Department of Biochemistry and Molecular Biology, Michigan State University.
e-mail: [email protected]

1. Gerri, C. et al. Nature 587, 443–447 (2020).
2. Rossant, J. Annu. Rev. Genet. 52, 185–201 (2018).
3. Leung, C. Y. & Zernicka-Goetz, M. Nature Commun. 4, 2251 (2013).
4. Hirate, Y. et al. Curr. Biol. 23, 1181–1194 (2013).
5. Nishioka, N. et al. Dev. Cell 16, 398–410 (2009).
6. Ralston, A. et al. Development 137, 395–403 (2010).
7. Frum, T., Murphy, T. M. & Ralston, A. eLife 7, e42298 (2018).
8. Berg, D. K. et al. Dev. Cell 20, 244–255 (2011).
9. Niakan, K. K. & Eggan, K. Dev. Biol. 375, 54–64 (2013).
10. Wicklow, E. et al. PLoS Genet. 10, e1004618 (2014).
11. Shahbazi, M. N. & Zernicka-Goetz, M. Nature Cell Biol. 20, 878–887 (2018).
12. Shi, Y., Inoue, H., Wu, J. C. & Yamanaka, S. Nature Rev. Drug Discov. 16, 115–130 (2017).

This article was published online on 16 October 2020.


Complexity science

Hierarchies defined through human mobility
Elsa Arcaute

An analysis of worldwide data finds that human mobility has a hierarchical structure. A proposed model that accounts for such hierarchies reproduces differences in mobility behaviour across genders and levels of urbanization. See p.402

Figure 1 | A model of human mobility. a, Alessandretti et al.4 present a model that can reproduce key properties of human mobility. In the model, observed spatial scales of human movement — such as the scales of neighbourhoods, cities and countries — are represented by different-sized containers. In this schematic, a person moves between particular locations in small containers, which are inside medium-sized containers, which are inside large containers. b, The authors applied their model to global mobility data. This plot illustrates, for two individual containers, the probability of it having a particular size, on a log–log scale. Such probability distributions are known as log-normal distributions. c, When all the containers are aggregated, rather than being considered individually, the container size instead follows a distribution called a power law. The generation of these two different distributions from the same data set reconciles two different perspectives on human mobility.


Our intuition suggests that humans travel across characteristic spatial scales, such as neighbourhoods, cities and countries. However, analyses of large data sets indicate that human mobility has no intrinsic scales1–3. On page 402, Alessandretti et al.4 combine worldwide data with modelling to solve this conundrum.

Rural areas, settlements and cities evolve to sustain the lives of their inhabitants. For example, footpaths sometimes transition into roads or even railways to facilitate the different interactions between individuals, communities and other social groups. At the individual level, each person travels to connect with others and exchange friendship, knowledge or goods, to be part of rituals and to access urban functions such as education, economic opportunities and leisure.

Each of us is unique, and we might be convinced that our lives are more exciting than are those of our neighbours — but maybe not as exciting as those of musicians, who are regularly out rehearsing, holding gigs in different parts of the city and touring all over the country or even the world. However, if our daily movements left traces, as ants leave pheromone trails, would these have a perceptible pattern? And would this pattern hold if we were living in a different city or country?

These questions can be answered properly only by analysing global mobility patterns. Widespread geographical tracking of the use of smartphones, credit cards and other technologies has allowed academics to tap into these data sets and conclude that human travel cannot be characterized by spatial scales1–3. Such results have made their way into leading scientific journals. However, they seem to contradict not only our intuition but also what is accepted in the field of geography — that the mobility of individuals depends on context and is constrained by cost. We plan our trips and perceive the associated space in a hierarchical way. This viewpoint is reflected in the selection of a specific mode of transport according to where we want to go.


For example, we might ask which metro line will take us north of the city, whether there are direct trains to a particular city or which airline will take us to a particular country.

The paper by Alessandretti and colleagues provides a solution to this mobility riddle. It presents a model that agrees with our hierarchical perception of space — that individuals have different scales of mobility depending on context. The authors analysed GPS location data for hundreds of thousands of people worldwide at a high temporal and spatial resolution, and they inferred the hierarchical structure of each individual’s mobility. They confirmed that the perceived structure is not an artefact of our brains, nor of the imposition of administrative delimitations, but corresponds to the way we move in space.

Alessandretti et al. used these global traces to identify typical spatial scales, which are referred to as containers in the paper (Fig. 1a). The authors discovered that container size has a probability distribution known as a log-normal distribution (Fig. 1b), corroborating recent results on the distribution of settlement sizes5. They found that a log-normal distribution provides a better statistical fit than does a scale-free (power-law) distribution, in opposition to the scale-free mobility behaviour reported in the literature1–3. The authors reconciled these results by obtaining a power-law distribution from the aggregation of all containers (Fig. 1c).

A further achievement of the paper relates to the use of the model to produce simulated traces of human mobility, and how these traces reproduce differences in mobility behaviour associated with gender and level of urbanization. Alessandretti et al. found that, although the mobility of women is more complex than is that of men, it is also spatially smaller. Moreover, they confirmed that people living in rural areas have much larger containers than those of individuals in urban areas.

The origin of the observed hierarchical structure has puzzled academics for more than a century. Many theories and models6–8 have been developed in an attempt to capture patterns resulting from the co-evolution of the physical form9 and the function of cities. However, these attempts have encountered various challenges emerging from the fact that infrastructure changes slowly, whereas land use and demographics change quickly.
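The statistical reconciliation described above is easy to reproduce in miniature: sizes drawn from a single log-normal bend downwards on log–log axes, whereas pooling log-normals whose parameters vary between individuals produces an approximately straight, power-law-like tail. The sketch below is a toy illustration only, assuming numpy; the parameter ranges and container counts are invented and are not those used in the study.

```python
# Toy illustration (not the authors' code): individual containers have
# log-normally distributed sizes, but aggregating containers whose
# log-normal parameters themselves vary between individuals produces a
# heavy, roughly power-law tail. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

sizes = []
for _ in range(5_000):                  # one log-normal per individual
    mu = rng.uniform(0.0, 6.0)
    sigma = rng.uniform(0.5, 2.0)
    sizes.append(rng.lognormal(mu, sigma, size=20))
sizes = np.concatenate(sizes)

# On log-log axes a single log-normal curves downwards, whereas the
# aggregated mixture stays close to a straight line over many decades.
hist, edges = np.histogram(sizes, bins=np.logspace(0, 6, 40), density=True)
centres = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
slope, _ = np.polyfit(np.log(centres[mask]), np.log(hist[mask]), 1)
print(f"approximate power-law exponent of the aggregate: {slope:.2f}")
```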

Urban systems have been shaped by mobility and the need to satisfy different human interactions modulated by the speed of transportation10. For centuries, we have left traces of mobility through our road networks11, encoding the hierarchical structure of urban systems at multiple scales. An open question is whether Alessandretti and colleagues’ research can be extended to explain why such patterns emerge worldwide and why cities have their particular morphologies. Is the observed organization of urban spaces the result of centuries of mobility? And could the authors’ work help us predict the future of our cities, now that we can tap into the traces of the movements that shape them?

Elsa Arcaute is at the Centre for Advanced Spatial Analysis, University College London, London W1T 4TJ, UK.
e-mail: [email protected]

1. Brockmann, D., Hufnagel, L. & Geisel, T. Nature 439, 462–465 (2006).
2. González, M. C., Hidalgo, C. A. & Barabási, A.-L. Nature 453, 779–782 (2008).
3. Song, C., Koren, T., Wang, P. & Barabási, A.-L. Nature Phys. 6, 818–823 (2010).
4. Alessandretti, L., Aslak, U. & Lehmann, S. Nature 587, 402–407 (2020).
5. Corral, Á., Udina, F. & Arcaute, E. Phys. Rev. E 101, 042312 (2020).
6. Christaller, W. Central Places in Southern Germany (Prentice-Hall, 1966).
7. Alonso, W. Location and Land Use: Toward a General Theory of Land Rent (Harvard Univ. Press, 1964).
8. Wilson, A. G. Entropy in Urban and Regional Modelling (Pion, 1970).
9. Batty, M. & Longley, P. Fractal Cities: A Geometry of Form and Function (Academic, 1994).
10. Pumain, D. Espace Géogr. 26, 119–134 (1997).
11. Arcaute, E. et al. R. Soc. Open Sci. 3, 150691 (2016).

Microbiology

Identifying gut microbes that affect human health
Sigal Leviatan & Eran Segal

When determining whether gut microbes affect human health, it is hard to distinguish between a causal and a correlative relationship. Analysis of microbial links to human traits and habits correlated with disease offers a step forward. See p.448

The resident microorganisms in the human body, termed the microbiota, represent diverse communities of microbial species comprising a complex ecology of tens of trillions of mainly bacterial cells1. Our gut microbiota, the largest and most diverse of these communities, is in constant interaction with our body’s cells and systems (such as the immune system)2, and it both shapes, and is shaped by, our health status. The particular composition and diversity of the gut microbiota are associated with many health conditions3. However, it is usually not known whether such associations are just correlative or a consequence of the health condition, or whether they might cause, or contribute to, the illness. Addressing this problem is highly challenging because of the many physiological and lifestyle differences that can exist between individuals who are healthy and those who have the illness of interest. Such confounders — the variables that correlate with both microbiota and health status — might underlie the many discrepancies observed between the outcomes of different studies linking the composition of the gut microbiota and human health4. On page 448, Vujkovic-Cvijin et al.5 tackle this problem.

First, they consider physiological and lifestyle differences between people with and without a particular disease, and identify differences that might themselves be associated with the composition of the gut microbiota. Such differences can cause variation in the composition of gut microbes between healthy individuals and those who have the disease. Without knowing about these differences, it would be easy to misclassify a correlative and confounding association between lifestyle and the microbiota as an informative causal association between disease and microbiota composition. Next, the authors attempted to deal with such confounders through one-to-one matching6 of individuals who had a particular condition with healthy individuals who were similar to them with regard to such potential confounders (Fig. 1). An example might be matching with an individual of the same age, gender and body mass index (a value used in assessing a person’s weight that takes height into consideration). This type of matching procedure is often used in observational studies in which individuals cannot be assigned randomly to two groups and subjected to the two different scenarios being compared7.

Vujkovic-Cvijin et al. report that gender, age, bowel-movement quality (categorized as stools that are solid, normal or loose), body mass index and level of alcohol consumption are among the strongest potential confounders that could hinder efforts to identify true associations between disease and gut-microbiota composition. This is because these characteristics are strongly associated both with microbiota composition and with disease status. When examining the differences between individuals with a condition such as type 2 diabetes and people who do not have this condition (but who might have other diseases), there seem to be many statistically significant associations between disease status and the abundances of different gut bacteria. By contrast, if individuals who have or do not have the disease are matched using some of the confounder criteria mentioned, many of these associations cease to be statistically significant. This implies that some gut-microbiota changes previously attributed to certain diseases might instead stem from other underlying causes related to these confounders. For example, alcohol consumption causes gut-microbiota changes, and individuals who have certain diseases consume less alcohol than average (perhaps because of the drugs that they take). Therefore, failing to match individuals on their level of alcohol consumption could result in the misleading conclusion that microbiota changes associated with the disease are attributable to the disease itself, rather than to a below-average alcohol intake.

A potential problem with Vujkovic-Cvijin and colleagues’ approach is that some of the suggested confounders might be associated with disease symptoms, rather than being lifestyle choices; people in these confounding categories could in that case already be sick but undiagnosed, or on the path to being ill. In such cases, matching with healthy individuals might actually introduce bias8. For example, matching people on their level of alcohol intake makes no sense when studying alcoholic liver disease. Moreover, even if potential confounders are not linked to the defining symptoms of the disease in question, or are not uniquely matched to symptoms of the disease, it should still be a cause for concern if matching for the confounder would mean that the resulting matched group is not representative of healthy individuals.

Figure 1 | Comparing populations to assess connections between gut microbes and human disease. Vujkovic-Cvijin et al.5 identified factors that affect the composition of gut microbes (termed microbiota) and that differ in prevalence between populations with and without a specific disease. a, For example, the proportion of individuals who have low levels of alcohol intake might differ between the healthy and ill populations. A random sampling of individuals for comparison that does not take this factor into account might mean that microbiota differences that seem to be associated with disease status arise because of this factor. b, The authors instead compared individuals matched for factors that can affect the microbiota. However, such sampling might select individuals not representative of a healthy population.
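To make the one-to-one matching procedure concrete, here is a minimal sketch that greedily pairs each ill individual with the closest unused control on a few confounders. It assumes pandas and numpy; the column names, the distance metric and the greedy strategy are our own illustrative choices, not the matching pipeline used by Vujkovic-Cvijin et al.

```python
# Minimal one-to-one matching sketch (illustrative only; columns, metric
# and greedy strategy are invented, not the study's actual method).
import numpy as np
import pandas as pd

def match_one_to_one(cases, controls, confounders):
    """Greedily pair each case with its nearest unused control, using
    z-scored Euclidean distance over the confounder columns."""
    pool = controls.copy()
    std = pd.concat([cases, controls])[confounders].std()
    pairs = []
    for _, case in cases.iterrows():
        d = (((pool[confounders] - case[confounders]) / std) ** 2).sum(axis=1)
        best = d.idxmin()
        pairs.append((case.name, best))
        pool = pool.drop(index=best)    # each control is used at most once
    return pd.DataFrame(pairs, columns=["case_id", "control_id"])

# Example with invented data.
rng = np.random.default_rng(1)
cases = pd.DataFrame({"age": rng.normal(60, 8, 50),
                      "bmi": rng.normal(30, 4, 50),
                      "alcohol": rng.normal(5, 3, 50)})
controls = pd.DataFrame({"age": rng.normal(45, 15, 500),
                         "bmi": rng.normal(26, 5, 500),
                         "alcohol": rng.normal(12, 6, 500)})
print(match_one_to_one(cases, controls, ["age", "bmi", "alcohol"]))
```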

For instance, matching people who have lung cancer with individuals who don’t have it, after the same number of years of heavy smoking, will not provide a truly healthy control group. With that in mind, people with inflammatory bowel disease should not be matched with a healthy matching group on the basis of bowel-movement quality. Nor should people who have type 2 diabetes be matched with a healthy cohort on the basis of blood levels of the glycoprotein HbA1C, which offers a way of assessing long-term excess sugar levels (something that the authors don’t do). Researchers should also be suspicious of matching people who have type 2 diabetes with a healthy cohort on the basis of body mass index.

In an effort to address this issue, the authors repeated their analysis using a smaller cohort, in which none of the individuals in the healthy group self-reported any type of disease at all (the previous criterion for healthy individuals was just those who did not self-report the specific disease of interest). They found similar associations between disease status and the physiological and lifestyle differences, although these associations were now either less statistically significant than in the original analysis or no longer significant. Unfortunately, removing individuals with any self-reported disease does not rule out matching the people from the disease cohort with control individuals who might nevertheless be undiagnosed, or whose disease status might be borderline; this could happen if, for example, people who have diabetes are matched with those who are pre-diabetic. This problem, whose scope extends beyond this study, raises a key question for all medical studies: what constitutes a healthy cohort?

Finally, it is important to remember that identifying potential confounders between gut-microbiota composition and human health does not imply that these are unrelated. Nor does it imply a lack of causality where a relationship does exist. For example, if alcohol consumption causes changes to the microbiota that, in turn, contribute to developing type 2 diabetes, then a causal effect exists between the microbiota and the disease; but this will not be seen after matching individuals on their level of alcohol consumption. The same will be true if inflammatory bowel disease results in the types of microbiota change that cause diarrhoea, and individuals are matched on their bowel-movement quality. Thus, Vujkovic-Cvijin and colleagues’ results do not rule out the microbiota having a causal effect.

The question of causality between the microbiota and human disease is a central topic in studies in this area. These findings will certainly continue to fuel research in the field for years to come, and Vujkovic-Cvijin et al. have taken a step forward for our thinking about this issue.
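The caveat in the closing paragraphs, that matching can hide a genuine causal chain, can be demonstrated with a toy simulation in which alcohol intake sets the microbiota and the microbiota, in turn, drives disease risk. Matching (here, crudely, restricting to a narrow stratum of alcohol intake) makes the real microbiota–disease association all but vanish. All effect sizes below are invented; the sketch assumes only numpy.

```python
# Toy causal chain: alcohol -> microbiota -> disease. 'Matching' on
# alcohol (restricting to a narrow stratum) hides the genuine microbiota
# effect. All effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
alcohol = rng.normal(0.0, 1.0, n)
microbiota = alcohol + 0.1 * rng.normal(0.0, 1.0, n)   # mostly set by alcohol
p_disease = 1.0 / (1.0 + np.exp(-(microbiota - 1.0)))  # microbiota drives risk
disease = rng.random(n) < p_disease

def group_gap(mask):
    """Mean microbiota difference between diseased and healthy people."""
    return microbiota[disease & mask].mean() - microbiota[~disease & mask].mean()

everyone = np.ones(n, dtype=bool)
stratum = np.abs(alcohol) < 0.05      # crude stand-in for matching on alcohol

print(f"unmatched gap:       {group_gap(everyone):.3f}")  # clearly positive
print(f"alcohol-matched gap: {group_gap(stratum):.3f}")   # close to zero
```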

Sigal Leviatan and Eran Segal are at the Weizmann Institute of Science, Rehovot 76100, Israel.
e-mails: [email protected]; [email protected]

1. Sender, R., Fuchs, S. & Milo, R. Cell 164, 337–340 (2016).
2. DeSalle, R. & Perkins, S. L. Welcome to the Microbiome (Yale Univ. Press, 2015).
3. Honda, K. & Littman, D. R. Nature 535, 75–84 (2016).
4. Clemente, J. C., Ursell, L. K., Parfrey, L. W. & Knight, R. Cell 148, 1258–1270 (2012).
5. Vujkovic-Cvijin, I. et al. Nature 587, 448–454 (2020).
6. Schlesselman, J. J. Case-Control Studies: Design, Conduct, Analysis (Oxford Univ. Press, 1982).
7. Rose, S. & van der Laan, M. J. Int. J. Biostat. 5, 1 (2009).
8. Costanza, M. C. Prevent. Med. 24, 425–433 (1995).

This article was published online on 4 November 2020.

Immunology

Interferon deficiency can lead to severe COVID
Eric Meffre & Akiko Iwasaki

Understanding what contributes to the development of severe COVID-19 would be of great clinical benefit. Analysis of people in whom this occurred pinpoints a key role for the signalling pathway mediated by type I interferon proteins.

Infection with the SARS-CoV-2 coronavirus results in diverse outcomes for COVID-19, with the disease tending to be more severe and lethal for older males1,2. Yet some young people can also have severe COVID-19. What determines susceptibility to this disease? Writing in Science, Zhang et al.3 and Bastard et al.4 shed light on a key factor that affects whether life-threatening COVID-19 develops. The studies implicate deficiencies in interferon proteins, specifically, type I interferons (IFN-I). Such deficiencies might arise, as Zhang and colleagues report, through

inherited mutations in genes encoding key antiviral signalling molecules, or, as Bastard and colleagues describe, by the development of antibodies that bind to and ‘neutralize’ IFN-I. Among people who developed severe COVID-19, such neutralizing antibodies were mostly found in older males.

The IFN-I family includes IFN-α, IFN-β and IFN-ω. These molecules provide innate immune defences — they mount an initial rapid antiviral response. IFN-I proteins are a type of immune-signalling molecule called a cytokine; they are induced when a cell detects viral RNA through sensors, such as the proteins TLR3, TLR7 and TLR8 that are found in cellular organelles called endosomes. The IFN-I molecules then bind to the cell-surface receptor IFNAR (comprising the proteins IFNAR1 and IFNAR2), resulting in the transcription of hundreds of genes5 that block the replication and spread of the virus.

Zhang et al. examined whether people who had life-threatening COVID-19 pneumonia harboured mutations in genes that had previously been associated with severe cases of viral infections such as influenza. These genes belong to the TLR3 and IFN-I signalling pathways. The authors looked for mutations in 13 genes of interest. They found that 3.5% of the individuals (23 of the 659 people tested) had mutations in 8 of these genes, rendering the gene products incapable of producing or responding to IFN-I (Fig. 1). In vitro studies by Zhang and colleagues confirmed these findings, and indicated that the mutations produce ‘loss-of-function’ versions of proteins. The authors found that people carrying these mutations had low to undetectable levels of IFN-α in their blood plasma during coronavirus infection, linking the mutations to defective IFN-α production in response to viral challenge. By contrast, of 534 individuals with either asymptomatic or mild COVID-19, only one harboured a loss-of-function mutation at one of the 13 sites studied. This individual had a mutation in the IRF7 gene, which encodes a protein required for the production of IFN-I. None of the people tested who had gene variants in the TLR3 or IFN pathway had previously had severe viral infections. This suggests that although SARS-CoV-2 antiviral defences might rely crucially on IFN-I, other types of viral infection can be controlled by alternative mechanisms in those individuals. Severe COVID-19 has also been reported in four young men who had a loss-of-function mutation in the TLR7 gene6, providing further evidence that genetic errors in IFN-I pathways contribute to severe COVID-19.

Another possible cause of interferon deficiency is the generation of antibodies that target IFN-I — a form of autoimmunity. An individual with an autoimmune disease has autoantibodies that target proteins naturally produced by the body. Autoantibodies that neutralize cytokines might therefore confer a similar susceptibility to infection to that seen in people who have genetic defects affecting cytokine pathways. Anti-IFN-I autoantibodies have been identified in various diseases, including in people with a condition called autoimmune polyglandular syndrome type 1 (APS-1). It was reported7 in June that an individual with APS-1 developed severe COVID-19 pneumonia. However, the roles of such autoantibodies in disease have not been explored in depth.

Figure 1 | A defective antiviral signalling pathway. a, Abnormality in interferon production. Normally, when the SARS-CoV-2 coronavirus enters human cells, it reaches an organelle called the endosome, where viral RNA is recognized by Toll-like receptors such as TLR7 and TLR3. This recognition drives a pathway (only some pathway proteins are shown) that leads to the expression of genes encoding type I interferon proteins. Zhang et al.3 found that people with severe COVID-19 had mutations in genes that encode components of this process; components associated with such mutations are shown in red. Such individuals do not produce interferon normally. A mutated version of the gene that encodes TLR7 has been reported previously6 in people with severe COVID-19. b, Abnormality in the response to interferon. Zhang et al. also identified mutations in genes encoding the receptor for interferon (which consists of the proteins IFNAR1 and IFNAR2). Bastard and colleagues4 report that other individuals with severe COVID-19 have autoantibodies that bind to certain of the body’s type I interferons (IFN-α and IFN-ω, but not IFN-β), and thus block signalling mediated by IFN-α and IFN-ω. Such signalling defects hinder antiviral gene expression.

Remarkably, Bastard et al. report that, of the 987 people with severe COVID-19 whom they tested, 135 (13.7%) had antibodies that recognized an IFN-α subtype (IFN-α2), IFN-ω or both, whereas none of the 663 people with asymptomatic or mild COVID-19, and only 0.3% of healthy individuals examined (4 of 1,227), had such autoantibodies. In addition, plasma samples from 10.2% of the 987 individuals with severe COVID-19 had interferon-neutralizing activity; the authors observed that this could hinder the ability of IFN-α2 to block in vitro infection of human cells with SARS-CoV-2. The finding demonstrates that these autoantibodies have the potential to affect the course of SARS-CoV-2 infection. Notably, 94% of the people who had anti-IFN-I antibodies were male, and they were generally older than most of the other individuals. Bastard and colleagues argue that the autoantibodies were present before the people were exposed to SARS-CoV-2, because these autoantibodies were detected early, within one to two weeks of infection. Furthermore, two of these people had confirmed pre-existing autoantibodies against IFN-I.
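The scale of the reported enrichment can be checked directly from these counts with a contingency-table test; a quick sketch assuming scipy (the choice of Fisher's exact test here is ours, for illustration, not necessarily the authors' analysis).

```python
# Autoantibody counts reported by Bastard et al.: 135 of 987 people with
# severe COVID-19 versus 4 of 1,227 healthy individuals. Fisher's exact
# test quantifies how unlikely this split is under no association.
from scipy.stats import fisher_exact

severe = (135, 987 - 135)       # autoantibody-positive, -negative
healthy = (4, 1227 - 4)
odds_ratio, p_value = fisher_exact([severe, healthy], alternative="two-sided")
print(f"odds ratio ~ {odds_ratio:.0f}, p = {p_value:.2e}")
```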

What leads to the production of these autoantibodies? B cells of the immune system that make autoantibodies are normally selectively eliminated during development. The B cells that produce anti-cytokine autoantibodies in people with APS-1 arise as a result of defects in this selection process8. Thus, the anti-IFN-I antibodies found by the authors might arise as a consequence of faulty B-cell-tolerance checkpoints.

Why is autoantibody production skewed towards a greater occurrence in older men? Such faulty B-cell selection seems different from that of other autoimmune diseases, which tend to affect mainly females. Although it is known that the regulation of developing B cells is similar in young and middle-aged males and females, autoantibody levels in older people have not been investigated9. Many genes on the X chromosome encode molecules, such as FOXP3, BTK and CD40L, that are essential for immune responses and for early B-cell checkpoints9. Perhaps some such genetic mutation on the X chromosome favours the emergence of anti-cytokine autoantibodies. If so, males would be more vulnerable because they depend on a single copy of these genes on their X chromosome, unlike females, who have a back-up gene copy on a second X chromosome.

This leads to a central question. How does a defective IFN-I response lead to life-threatening COVID-19? The most direct explanation is that IFN-I deficiencies lead to uncontrolled viral replication and spread.

However, IFN-I deficiencies might also have other consequences for immune-system function, such as the loss of suppression of immune-signalling complexes called inflammasomes and enhanced production of cytokines that are made downstream of these complexes10. Mice engineered to have abnormalities in the IFN-I pathway are more likely to die of influenza as a result of excessive inflammasome activation, not because of high levels of viral replication11, and such a phenomenon might explain severe COVID-19 in IFN-I-deficient people. Individuals with genetic mutations in the IFN-I-induction pathway would therefore benefit from therapy that provides interferon, but such treatment would not help those with mutations in the genes encoding IFNAR. Furthermore, people who have neutralizing antibodies to IFN-α and IFN-ω might benefit from therapy that provides other types of interferon, such as IFN-β and IFN-λ, if given early during infection.

What other anti-cytokine autoantibodies might people carry? If others are found, it will be interesting to determine whether they also affect the course of infectious diseases. Might such autoreactive antibodies interfere with efforts to achieve vaccine-induced immunity in certain individuals? These latest results also suggest that blood samples (described as convalescent plasma) from people who have recovered from COVID-19, which can offer a source of antibodies targeting coronavirus, should be examined to exclude anti-IFN-I autoantibodies before being given as treatment. As the search for effective treatment and vaccines continues, these key questions will help in refining the path forward. The new findings also highlight the need to examine both the genetic and the autoantibody-mediated contributors to severe cases of infectious disease.

Eric Meffre and Akiko Iwasaki are in the Department of Immunobiology, Yale University School of Medicine, New Haven, Connecticut 06520, USA. A.I. is also at the Howard Hughes Medical Institute, Chevy Chase, Maryland.
e-mails: [email protected]; [email protected]

1. Richardson, S. et al. J. Am. Med. Assoc. 323, 2052–2059 (2020).
2. Grasselli, G. et al. J. Am. Med. Assoc. 323, 1574–1581 (2020).
3. Zhang, Q. et al. Science 370, eabd4570 (2020).
4. Bastard, P. et al. Science 370, eabd4585 (2020).
5. Schoggins, J. W. et al. Nature 472, 481–485 (2011).
6. van der Made, C. I. et al. J. Am. Med. Assoc. 324, 663–673 (2020).
7. Beccuti, G. et al. J. Endocrinol. Invest. 43, 1175–1177 (2020).
8. Sng, J. et al. Science Immunol. 4, eaav6778 (2019).
9. Meffre, E. & O’Connor, K. C. Immunol. Rev. 292, 90–101 (2019).
10. Guarda, G. et al. Immunity 34, 213–223 (2011).
11. Pillai, P. S. et al. Science 352, 463–466 (2016).

E.M. declares competing financial interests: see go.nature.com/3jxyagh for details. This article was published online on 2 November 2020.


Perspective

LifeTime and improving European healthcare through cell-based interceptive medicine

https://doi.org/10.1038/s41586-020-2715-9
Received: 29 April 2020
Accepted: 25 August 2020
Published online: 7 September 2020
Open access

Nikolaus Rajewsky1,2,3,4,204 ✉, Geneviève Almouzni5,204 ✉, Stanislaw A. Gorski1,204 ✉, Stein Aerts6,7, Ido Amit8, Michela G. Bertero9, Christoph Bock10,11,12, Annelien L. Bredenoord13, Giacomo Cavalli14, Susanna Chiocca15, Hans Clevers16,17,18,19, Bart De Strooper6,20,21, Angelika Eggert3,22, Jan Ellenberg23, Xosé M. Fernández24, Marek Figlerowicz25,26, Susan M. Gasser27,28, Norbert Hubner2,3,4,29, Jørgen Kjems30,31, Jürgen A. Knoblich32,33, Grietje Krabbe1, Peter Lichter34, Sten Linnarsson35,36, Jean-Christophe Marine37,38, John C. Marioni39,40,41, Marc A. Marti-Renom9,42,43,44, Mihai G. Netea45,46,47, Dörthe Nickel24, Marcelo Nollmann48, Halina R. Novak49, Helen Parkinson39, Stefano Piccolo50,51, Inês Pinheiro5, Ana Pombo1,52, Christian Popp1, Wolf Reik41,53,54, Sergio Roman-Roman55, Philip Rosenstiel56,57, Joachim L. Schultze47,58,59, Oliver Stegle39,41,60,61, Amos Tanay62, Giuseppe Testa15,63,64, Dimitris Thanos65, Fabian J. Theis66,67, Maria-Elena Torres-Padilla68,69, Alfonso Valencia44,70, Céline Vallot55,71, Alexander van Oudenaarden16,17,18, Marie Vidal1, Thierry Voet7,41 & LifeTime Community Working Groups*

Here we describe the LifeTime Initiative, which aims to track, understand and target human cells during the onset and progression of complex diseases, and to analyse their response to therapy at single-cell resolution. This mission will be implemented through the development, integration and application of single-cell multi-omics and imaging, artificial intelligence and patient-derived experimental disease models during the progression from health to disease. The analysis of large molecular and clinical datasets will identify molecular mechanisms, create predictive computational models of disease progression, and reveal new drug targets and therapies. The timely detection and interception of disease embedded in an ethical and patient-centred vision will be achieved through interactions across academia, hospitals, patient associations, health data management systems and industry. The application of this strategy to key medical challenges in cancer, neurological and neuropsychiatric disorders, and infectious, chronic inflammatory and cardiovascular diseases at the single-cell level will usher in cell-based interceptive medicine in Europe over the next decade.

Although advances in medicine have led to remarkable progress in certain disease areas, most chronic disorders still cannot be completely cured. This is mainly because most such diseases are detected only late in their progression, once gross physiological symptoms manifest themselves, at which point tissues and organs have often undergone extensive or irreversible damage. At this stage, the choice of interventions is typically quite limited. It is difficult to predict whether a patient will respond to a particular treatment (often invasive or aggressive therapies that can be of modest benefit), or whether therapy resistance will emerge and lead to a relapse. Despite technology-driven revolutions that enable a patient’s physiology to be investigated at the level of molecules1,2 and placed in the context of tissues3,4, in most cases our ability to detect and predict diseases at an early stage is limited by our incomplete mechanistic understanding of disease at the cellular level.

Cells develop and differentiate along specific lineage trajectories to form functionally distinct cell types and states5, which, together with their neighbouring cells, underlie and control normal physiology (Fig. 1). However, we have not been able to systematically detect and understand the molecular changes that propel an individual cell along these trajectories during normal development or ageing, or the molecular causes that trigger deviations from healthy trajectories and drive cells and tissues towards disease (Fig. 1). Timely detection and successful treatment of disease will depend crucially on our ability to understand and identify when, why and how cells deviate from their normal trajectories. More accurate cellular and molecular diagnostics will enable us to intercept disease sufficiently early to prevent irreparable damage. To achieve this interceptive medicine (Fig. 1), we need to invest in approaches that provide a detailed molecular understanding of the basis of disease-related heterogeneity in tissues, with sufficient molecular, cellular and temporal resolution.

Several challenges need to be overcome in order to understand complex disease landscapes, which comprise vast numbers of potential cellular states (Fig. 1).

A list of affiliations appears at the end of the paper. *A list of members and their affiliations appears at the end of the paper.

First, we need to resolve normal cellular heterogeneity across space and time to begin to define the cell types, states and cell–cell interactions that normally exist in the body. This is a main goal of the Human Cell Atlas consortium6. However, discovering the cellular bases of diseases requires that we track cellular heterogeneity and the molecular composition of cell trajectories in health and during disease progression longitudinally—throughout an individual’s lifetime. Second, we need to understand the molecular mechanisms and complex networks that define a cell’s state, and control its function, fate and trajectory over time, to be able to reconstruct a cell’s history and predict its future. This is essential for selecting the optimal intervention for an individual patient. Thus, systematic and longitudinal profiling of samples from many individuals is required. Third, we have yet to develop the computational frameworks required for integrating temporal data and patient profiles with large cohorts to identify regulatory changes and to dissect the causes and manifestations of disease. Current attempts to model human disease have not succeeded in integrating the thousands of molecular phenotypes that are acquired from patients. Finally, we are limited by our lack of knowledge of the underlying causes of disease. To predict any given patient’s response to a specific therapy may require testing or modifying cells from the patient in an experimental system, a challenge that has yet to be routinely implemented.

To address these challenges, experts from different disciplines came together in 2018 to form the LifeTime Initiative (https://lifetime-initiative.eu). It has since grown to be a pan-European community consisting of more than 90 research institutions with support from 80 companies and several funding agencies and national science academies. In 2019 the initiative was awarded a Coordination and Support Action by the European Commission to develop a Strategic Research Agenda (SRA)7 for a large-scale, long-term initiative with a roadmap for implementing cell-based interceptive medicine in Europe in the next decade. The ambitious goal is the early detection and interception of complex diseases, as well as the ability to select the most effective therapeutic strategy for a patient. Between March 2019 and June 2020 the initiative established several multi-disciplinary working groups (listed in the Supplementary Information), organized numerous workshops, meetings and surveys (and thereby engaged the wider community) and commissioned stakeholder interviews and an impact study. The European Commission will use LifeTime’s SRA during the planning of the next research and innovation framework programme: Horizon Europe. Here, we outline LifeTime’s vision and key aspects of the SRA towards establishing cell-based interceptive medicine.

Central to LifeTime’s vision and approach is the development and integration of new technologies, such as single-cell multi-omics, high-content imaging, artificial intelligence (AI) and patient-derived experimental disease models. The application of these integrated approaches to medical challenges and their incorporation into both experimental and clinical workflows are expected to directly benefit patients. For example, appropriate single-cell-based biomarkers will give physicians early warning that a cell or tissue is entering a disease trajectory. Understanding disease heterogeneity at the cellular level and knowing the molecular aetiology of a disease will allow researchers to systematically identify drug targets and resistance mechanisms and to define therapeutic approaches, based on a given disease’s molecular or cellular vulnerability. This strategy differs markedly from classical approaches to drug discovery8. The stratification of patients on the basis of underlying disease mechanisms, assessed in situ within single cells, will help physicians to select the most appropriate treatment(s) or to use combination therapies that are tailored to the individual. These will be used first to identify cells that are deviating from the healthy trajectory, to steer them away from disease, and later to reduce the threat of relapse (Fig. 1). This transformative single-cell data-driven approach has the potential to increase the success rates of clinical trials and the efficacy of novel therapeutic interventions in clinics over the next decade.

Overall, the LifeTime strategy is likely to affect both diagnosis and treatment, to greatly improve health and quality of life, and to reduce the societal burden of diseases such as cancer, neurological and neuropsychiatric disorders, infectious diseases, and chronic inflammatory and cardiovascular diseases. Below, we outline the development and implementation of technology at the heart of LifeTime’s approach, describe LifeTime’s mechanism for identifying medical priorities, discuss the required infrastructures in Europe, interactions with industry and innovation, and ethical and legal issues, describe LifeTime’s education and training vision, and estimate the expected impact of the LifeTime approach on medicine and healthcare. LifeTime builds on and will collaborate with related international initiatives that are paving the way by producing reference maps of healthy tissues in the body, such as the Human Cell Atlas (HCA)6 and the NIH Human Biomolecular Atlas Program (HuBMAP)9.

Technology development and integration

Single-cell technologies—particularly transcriptomics—are generating the first reference cell atlases of healthy tissues and organs, and are revealing a previously hidden diversity of cell subtypes and functionally distinct cell states6. Single-cell analyses of patient samples are beginning to provide snapshots of changes in cell composition and pathways that are associated with diseases such as cancer10–15, chronic inflammatory diseases16,17, Alzheimer’s disease18–20, heart failure21, and sepsis22. Because pathophysiological processes within individual cells involve different molecular levels, understanding the underlying mechanisms requires the integration of current single-cell approaches. LifeTime proposes the integration of several approaches7. This includes combining transcriptomics (Fig. 2) with methodologies that provide additional information on chromatin accessibility, DNA methylation, histone modifications, 3D genome organization, and genomic mutations23–25. Future developments will enable the incorporation of single-cell proteomes, lipidomes, and metabolomes, which will add key insights into different cellular states and their roles in health and disease.

In addition to specific cell subtypes and the role of cellular heterogeneity, it is crucial to investigate the surrounding tissue context and organ environment. New spatial ‘-omic’ approaches, particularly spatial transcriptomics, include information on the locations of diseased cells, their molecular makeup and aberrant cell–cell communication within the tissue26–32. Advanced imaging approaches also now enable the systematic spatial mapping of molecular components, in situ, within cells and of cells within tissues28,33–37. The cellular context, with respect to different immune and stromal cell types, extracellular components and signalling molecules that contribute to disease progression, will help to identify the roles of specific cell types and interactions in diseases32,38–40. The implementation of cell lineage tracing approaches41, which link cellular genealogies with phenotypic information about the same cells, may help us to understand how populations of cells develop dynamically to form the specific architecture of a healthy or a diseased tissue.

LifeTime proposes to develop the necessary single-cell methodologies and end-to-end pipelines (Fig. 2), which will be integrated into robust, standardized multi-omics and imaging approaches, and scaled to profile hundreds of thousands of patients’ cells7. This will require an in-depth analysis of longitudinal human samples obtained from patients and cohorts, including European and national clinical trial groups as well as initiatives collecting longitudinal biological material connected to well-annotated clinical information (Fig. 3). Linking these data to clinical outcomes will identify the cellular parameters that are permissive to a therapeutic response, for example, during checkpoint blockade immunotherapy12,42,43 or treatment of multiple myeloma11. By detecting rare drug-resistant cells that are present before11,44 or that emerge during treatment45, therapeutic regimens and combinatorial treatments can be adapted to improve outcomes.

Fig. 1 | Early disease detection and interception by understanding and targeting cellular trajectories through time. a, Cells are programmed to develop and differentiate along many different specific lineage trajectories (blue trajectories) to reach their functional state. When these normal lineage processes go awry, it can cause a cell to deviate from a healthy state and move towards a complex disease space (coloured manifolds defined by multi-dimensional molecular space—including gene expression, protein modifications and metabolism), as shown by red trajectories. b, Many diseases are detected only at a relatively late stage with the onset of symptoms (red trajectory) and when pathophysiological changes can be at an advanced stage (red cells). At this point, cells, tissues and organs have undergone extensive and often irreversible molecular and physiological changes since the initial events that caused them to deviate from a healthy state. Hence, the choice of interventions may be limited and often involves harsh or invasive procedures. c, Understanding the early molecular mechanisms that cause cells to deviate from a healthy to a disease trajectory will provide biomarkers for the early detection of disease, and new drug targets and innovative therapies to intercept diseases before the onset of pathophysiology and the manifestation of symptoms.

Handling these large molecular datasets will require sophisticated and distributed computational and bioinformatics infrastructures (see ‘Implementation and infrastructure’), as well as the development of tools to integrate and ensure the interoperability of different data types, including single-cell multi-omics, medical information and electronic health records. LifeTime will work with ongoing European and national efforts to integrate molecular data into electronic health records and to establish standards and interoperable formats to address specific disease challenges. This process will promote the development of advanced personalized models of disease. To be able to implement routine longitudinal sampling of patients, we need to develop approaches for sampling small biopsies, including liquid biopsies, that will detect individual cells or cell-free DNA released from pathological cells before and during therapy46. Multi-dimensional descriptors of cell states from patients taken at different stages of disease or therapy will be used to derive new biomarker sets or to enhance current panels. Collaboration with ongoing atlas projects, industrial partners and regulatory authorities will be key for benchmarking and deriving the new standards that will enable us to deploy these new methods in the clinic. We hope that this will achieve earlier disease detection and guide the appropriate selection of drug targets and therapies (Fig. 3).

Unlocking the potential of unprecedented amounts of integrated digital information (including molecular data describing how individual cells make decisions) requires AI, in particular machine learning approaches that can identify meaningful molecular patterns and dependencies in the datasets47,48. Although such approaches have proven very useful when applied to medical imaging data and have enabled the identification of subtle disease-associated changes49, medical imaging cannot capture the full complexity of human physiology, nor the status of a disease at the single-cell level. High-content imaging, together with information about gene expression, chromatin states, and protein and metabolic parameters, will contribute to the stratification of disease phenotypes. Machine learning and advanced modelling approaches will be used to integrate and analyse the different layers of cellular activity, and can generate multi-scale and potentially even causal models that will allow us to infer regulatory networks and to predict present and future disease phenotypes at the cellular level47,50–52 (Fig. 2).

The deep integration of machine learning technologies with spatial multi-omics and imaging technologies and data has the potential to usher in a new age of digital pathology to aid decision-making by physicians (Fig. 3). By considering not only anatomical, physiological and morphological aspects, but also multidimensional molecular and cellular data, it will be possible to provide a more granular representation of a patient’s disease state to complement the pathologist’s slides and bulk measurements in tissues (for example, of mRNA or metabolites). We envision, as the final goal, the incorporation of new AI-based decision-aiding systems that will integrate and interpret available molecular, cellular, individual disease trajectory and imaging information. Interpretable and accountable AI systems will also provide the basis for clinical recommendations. The integration of cellular information should lead to a more precise description of a patient’s molecular and physiological history, and will guide early detection, allow predictive prognosis, and guide recommendations for therapeutic interventions to deliver more precise and effective treatments (Fig. 3).
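As a deliberately minimal illustration of the kind of data integration described here, the sketch below standardizes features from two synthetic ‘modalities’, concatenates them and fits a regularized classifier of disease state. It assumes scikit-learn and numpy; the data are random, and the model is a stand-in for the far richer multi-scale models the initiative envisions.

```python
# Minimal multimodal integration sketch: standardize features from two
# modalities, concatenate them, and cross-validate a regularized
# classifier of disease state. Synthetic data, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 300
rna = rng.normal(size=(n_patients, 50))      # e.g. pseudo-bulk expression
imaging = rng.normal(size=(n_patients, 20))  # e.g. morphology features
state = (rna[:, 0] + imaging[:, 0] + rng.normal(size=n_patients)) > 0

X = np.hstack([rna, imaging])                # naive 'early integration'
model = make_pipeline(StandardScaler(),
                      LogisticRegression(C=0.5, max_iter=1000))
print(f"cross-validated accuracy: {cross_val_score(model, X, state, cv=5).mean():.2f}")
```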

Fig. 2 | Hallmarks of the LifeTime approach to disease interception and treatment. The schematic represents the development and integration of key technologies for investigating human diseases, as envisioned by the LifeTime Initiative. Single-cell multi-omics and imaging technologies will be developed for high-throughput applications. Different modalities will be combined to provide insight into underlying mechanisms, based on coordinated changes between different regulatory molecular layers. Insight into cellular genealogies and cellular dynamics will require the integration of lineage tracing tools. Technologies will also need to be scaled for clinical deployment. The integration and analysis of large, longitudinal multi-omics and imaging datasets will require the development of new pipelines and machine learning tools. These include the development of causal inference and interpretative machine learning approaches to create molecular networks for predictive and multiscale disease models. Patient-derived disease models such as organoids will be further developed to improve tissue architecture and the incorporation of physiological processes such as vasculature, nerve innervation and the immune system, to provide models that more faithfully recapitulate disease processes. Improved knowledge of disease mechanisms will require the application of large-scale perturbation tools to organoids. Tissue–tissue and organ–organ interactions will be recreated using microfluidics and organ-on-a-chip technologies to study key systemic interactions in diseases.

Understanding the cellular origin and aetiology of disease from a patient-centred perspective requires systems that faithfully recapitulate key aspects of a patient’s pathophysiology, and render them experimentally tractable to test mechanistic hypotheses and predictions. Organoids are an emerging experimental system that allows aspects of organ development, regeneration and pathophysiology to be modelled3,4,53 (Fig. 2). Derived from adult or pluripotent human stem cells, organoids can capture individual features that are unique to each patient and can be interrogated molecularly in space and time. Importantly, by comparing organoid models from diseased and healthy individuals, unique disease features can be extracted even if the specific genetic cause of a disease is unknown. Therefore, organoid models offer a valuable tool for achieving some of the main goals of LifeTime, especially in cases in which repeated access to patient tissues is limited or impossible (for example, in neurological and neuropsychiatric disorders). Despite their promise, organoids still require substantial development to harness their full potential for disease modelling (Fig. 2).

Fig. 3 | Exploiting the LifeTime dimension to empower disease targeting. Patient-derived samples (such as blood or tissue) or personalized disease models (for example, organoids and experimental disease models) will be profiled longitudinally by single-cell multi-omics to cover the different disease stages. Large-scale multidimensional datasets will provide quantitative, digitalized information about the decision-making processes of cells. These will be analysed using AI and machine learning to arrive at predictive models for disease trajectories, providing cellular and molecular mechanisms of disease onset and progression. Models will be validated using large-scale perturbation analysis and targeted functional studies in disease models, which will be used in an iterative process to improve both computational and disease models.

LifeTime proposes to advance the models to capture the full degree of cellular heterogeneity and tissue-specific structural and metabolic conditions54, and to incorporate key physiological aspects, such as immune responses, vascularization or innervation. Because complex interactions between multiple tissues and organs are involved in many diseases, it will be necessary to develop tissue engineering principles that combine multiple organoids in pathophysiologically relevant crosstalk (‘organoids-on-a-chip’). To optimize translational potential, LifeTime will engage in standardizing, automating and scaling organoid approaches, to allow systematic derivation, propagation and banking of organoids. Such industrialization is also needed for large-scale chemical or genetic perturbations (for example, CRISPR–Cas screens), and for elucidating the genetic bases of disease variability and drug response at population-relevant scales, in both the preclinical and clinical contexts (Fig. 3). The resulting mechanistic dissection, enabled by large-scale perturbations, will be used to validate corresponding AI models of disease interception and progression.

In addition to organoids, in vivo model systems are necessary to translate the science from the bench to humans. A complex biological system is required to study the myriad host–disease and host–pathogen interactions associated with complex diseases, such as infectious diseases, cancer or Alzheimer’s disease.

The use of animal models is important for understanding the complex temporal relationships that occur in diseases, such as those involving the vasculature, immune system and pathogens, as well as neuronal networks in the brain. LifeTime will therefore improve the clinical relevance of animal models and make use of approaches in which patient-derived tissues can be integrated into in vivo models55–59 to study the dynamics of cellular heterogeneity in space and time.

LifeTime, as a community, has the capacity to develop and integrate these technologies, which often require expertise and specialized instrumentation that are located in distinct laboratories. A coordinated effort can achieve the required benchmarking and standardization of technologies, workflows and pipelines. This will also ensure that the data, software and models generated adhere to FAIR (findable, accessible, interoperable, and reusable) principles60 (see ‘Implementation and infrastructure’), are available across national borders, and are in full compliance with international legislation such as the European General Data Protection Regulation. Moreover, LifeTime will ensure that technologies, including AI and organoids, will be developed in an ethically responsible way in collaboration with patients, putting the patient at the centre (see ‘Ethical and legal issues’).
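As a purely illustrative example of what FAIR-compliant bookkeeping might look like in practice, the record below attaches a persistent identifier, licence and provenance to a dataset so that it remains findable, accessible, interoperable and reusable. The schema and all field values are invented for this sketch and are not a LifeTime or Human Cell Atlas standard.

```python
# Illustrative FAIR-style dataset record (schema and values invented for
# this sketch). Persistent IDs, licence and provenance fields are what
# keep the record findable, accessible and reusable.
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetRecord:
    identifier: str     # persistent ID, e.g. a DOI (hypothetical below)
    title: str
    modality: str       # e.g. "scRNA-seq"
    organism: str
    licence: str
    access_url: str
    derived_from: list  # provenance: IDs of upstream datasets

record = DatasetRecord(
    identifier="doi:10.0000/example-dataset",
    title="Longitudinal single-cell profiles, example disease cohort",
    modality="scRNA-seq",
    organism="Homo sapiens",
    licence="CC-BY-4.0",
    access_url="https://example.org/datasets/example-dataset",
    derived_from=["doi:10.0000/example-raw-reads"],
)
print(json.dumps(asdict(record), indent=2))
```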

Identification of medical priorities

LifeTime has initiated a mechanism, called Launchpad, to systematically identify medical challenges that can be addressed through LifeTime’s approach and have a direct effect on patient care. Initially, the focus has been on five disease areas that are a substantial burden to society: cancer, neurological and neuropsychiatric disorders, infectious diseases, chronic inflammatory diseases and cardiovascular diseases. Other disease areas will be continuously monitored (for example, rare Mendelian diseases and metabolic diseases), and research programmes initiated as technologies and infrastructures develop.

The LifeTime Launchpad has defined several criteria to identify the medical challenges. These include: societal impact (including incidence and prevalence, disease severity, economic impact and the pressing need for new and more efficient clinical treatments and early detection), evidence for cellular heterogeneity that limits current clinical avenues, availability of samples from biobanks, relevant preclinical models, existence of patient cohorts including those enabling longitudinal studies, clinical feasibility and ethical considerations, as well as alignment with national and EU funding priorities. Subsequently, multidisciplinary working groups, including clinicians, in each disease area have used these criteria to define the following disease challenges and to develop ten-year roadmaps to address them in the LifeTime SRA7.

Despite cancer broadly covering hundreds of individual tumour types, there are critical knowledge gaps that are common to all cancer entities, including the mechanisms of early dissemination and therapy resistance. Metastatic dissemination of a subpopulation of cancer cells is a leading cause of death in almost all cancer types. Successful treatment of advanced and metastasized forms of cancer remains difficult, despite the development of targeted therapies and immunotherapies, owing to the emergence of drug or therapy resistance. To address these medical priorities, LifeTime recommends focusing on understanding the cell types and states—malignant cells and their microenvironment—that are involved in early stages of cancer dissemination, and the reprogramming of cellular states during disease and their effect on resistance to therapies.

For neurological disorders, a major challenge is a lack of understanding of the early events in disease onset that would enable the development of disease-modifying therapies. The lack of access to longitudinal samples from patients necessitates the establishment of cohorts of patient-derived disease models to understand the cellular heterogeneity associated with disease.

clinical trials to reevaluate drugs that were previously tested without such stratification, and to broaden the drug target portfolio. As seen during the coronavirus disease 2019 (COVID-19) pandemic, it is important to be able to understand infection mechanisms and the host response in order to rapidly identify the most likely effective treatment for an infection. At the same time, the continuous rise of antimicrobial resistance requires the discovery of new therapeutic strategies. A key medical challenge for infectious diseases is to understand the cellular response to infections and to develop precision, immune-based therapeutic strategies to combat infections. Chronic inflammatory diseases impose a high burden owing to their long-term debilitating consequences, which result from the structural destruction of affected organs or tissues. Current therapies treat the symptoms but do not cure or fully control the chronic inflammatory pathophysiology. While different targeted therapies exist, they are expensive and their success is limited by high rates of non-response to treatment. Consequently, there is an urgent need to explore and understand how cellular heterogeneity contributes to the pathology of inflammatory diseases61 and how this relates to the predicted course of disease and the response of a patient to one of the numerous available therapies. Many cardiovascular and metabolic diseases lack effective therapies owing to a lack of knowledge of their underlying causes and the link between abnormal cardiac cell structure or function and pathophysiology. The identified medical priority is to understand the cellular and molecular mechanisms involved, in order to enable early diagnosis and the design of new mechanism-based therapies for precise clinical treatment. The LifeTime disease roadmaps can be divided broadly into three phases7: first, immediate research into the identified medical challenges using established, scaled single-cell technologies, computational tools and disease models; second, the development of new technologies that are required to address specific medical challenges, including the development of spatial multi-omics and imaging approaches and advanced patient-derived model systems for longitudinal analyses; and finally, the application of these next-generation technologies to the longitudinal analyses of patient samples, or patient-derived models, combined with machine learning to generate patient trajectories and predictive models of disease. The resulting predictions and biomarkers will be validated in prospectively collected patient cohorts within clinical trials that will also include longitudinal liquid biopsies. The routine clinical use of predictors and biomarkers for risk stratification of patients and resulting interventions—where feasible—is the pre-final step. The final step is the extension of predictors and biomarkers to the analysis of large longitudinal patient cohorts, such as national cohorts, for developing secondary and tertiary prevention approaches based on the new biomarkers. During the implementation of these roadmaps, the initiative will establish an experimental design working group to develop systematic procedures to ensure that research samples are acquired from diverse cohorts (including age, sex, and ethnicity). This will require the development of strict criteria for the inclusion of samples and to ensure appropriate coverage of critical metadata. 
They will also define standardized procedures for the acquisition and processing of samples from different pathology sites (depending on the disease area). It is envisaged that during disease challenge pilot projects, an experimental design oversight body will determine, using early data, the number of diseases that should be studied as the initiative develops, with recommendations on the sample sizes required to obtain sufficient statistical power.
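The sample-size calculations implied by that recommendation are standard. As a minimal illustrative sketch (not a LifeTime procedure), the snippet below uses the statsmodels power module to ask how many patients per cohort arm are needed to detect a given standardized difference in a biomarker; the effect size, significance level and power target are placeholder values.

```python
# Minimal sketch: per-arm sample size for a two-group comparison of a
# biomarker, using statsmodels' power analysis for an independent t-test.
# effect_size, alpha and power are illustrative placeholders only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.5,  # assumed standardized group difference (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.8,        # target probability of detecting the effect
    ratio=1.0,        # equal allocation between the two arms
)
print(f"required patients per arm: {n_per_arm:.0f}")  # ~64 for these inputs
```

For these inputs the answer is roughly 64 patients per arm; halving the assumed effect size roughly quadruples the required sample, which is why early-data estimates from pilot projects matter.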

Implementation and infrastructure
The scale of the data that will be generated and analysed, the cross-disciplinary and international structure, and the ambition of

LifeTime to pioneer novel analytics using AI place LifeTime in an excellent position to shape the next generation of computational infrastructure for medical and biological data in Europe. This will require close interaction with, and evolution of, the established European infrastructure (Fig. 4), such as the European Open Science Cloud (EOSC) and high-performance computing infrastructures through the European High-Performance Computing (EuroHPC) initiative. LifeTime will also interact with related European Life Sciences Research Infrastructures62 to create added value and to avoid duplication of effort in strategies and tools for sharing and accessing data, and in the development and application of standards. As medicine is inherently decentralized, LifeTime will also help to connect EU medical systems and develop large federated European data infrastructures.

Fragmentation of research across borders, disciplines and timeframes needs to be overcome. The generation of data and development of technology by LifeTime will be harmonized across expert groups and centres, allowing the results to be quickly applied in clinics. Thus, a coordinated approach is required that integrates multidisciplinary expertise in single-cell technologies, data science, and organoid as well as in vivo models across Europe. It must also engage clinicians and patients to achieve medical impact. To address these challenges, LifeTime proposes a multidisciplinary network of LifeTime Centres (Fig. 4) with different, complementary thematic clusters across Europe, each working in close association with hospitals. These connected, flexible innovation nodes will share resources, gather the critical mass necessary for global competitiveness, and be open for collaboration with the entire scientific community. LifeTime Centres should deliver a number of key functions:
∙ Serve as platforms for the development and advancement of breakthrough technologies for single-cell research in -omics and imaging, AI (in particular machine learning), and experimental and computational disease models.
∙ Closely and actively collaborate with patients, clinicians, hospitals and healthcare systems, in some cases with a specific disease focus.
∙ Set standards in data generation, standardization and management, implementing FAIR principles.
∙ Set standards in ethical, legal and societal issues (ELSI) programmes by working in multidisciplinary teams aimed at responsible research and innovation.
∙ Offer opportunities to collaborate on, test and benchmark new methodologies and analysis methods; for example, in adaptive experimental design.
∙ Offer unique opportunities to industry to translate recent knowledge and novel technologies from the laboratory to the market.
∙ Provide an early-access programme for new technologies developed by companies.
∙ Function as open, interconnected education hubs, delivering training in the new technologies to researchers, scientific personnel and clinicians, as well as providing engagement activities for patients and the public.

LifeTime aims to analyse data that are inherently distributed across different clinical centres in different countries, which is a substantial challenge. These data are usually not accessible outside a national or regional clinical care system or specified data 'safe havens'; when they are accessible, accredited systems are often required for storing the data, and information governance may sit at the hospital, federal or international level.
This means that a federated approach is the only way to access and integrate information from the various European healthcare systems. Thus, the LifeTime data and computational network, building on cloud technologies, will provide the necessary capacities to enable federated analytics across the LifeTime Centres, and will provide a technical and legal framework for integrating core information structures, multi-omics assays, imaging, AI and machine learning technologies, and health records (Fig. 4). A joint Data Coordination Centre, following a multi-level approach, will ensure transparent data access control, compatibility and standardization. Within this framework, LifeTime will also coordinate and pioneer open data sharing, reuse and collaboration, including models of access to data before publication.

To start this cooperative LifeTime Centre network, the initiative can build on initial developments and programmes by LifeTime members in a number of European countries; for example, the VIB Single-cell Accelerator Programme in Belgium, the Berlin Cell Hospital/Clinical Single-cell Focus in Germany, the UK's Sanger/EBI/Babraham Single Cell Genomics Centre, and the LifeTime Single-Cell Centre in Poland. To avoid duplication and a lack of standardization, the LifeTime Cell Centre network should be coordinated through an entity or framework that provides the coordination and support needed to achieve the LifeTime vision. Funding for specific research projects that involve one or more LifeTime Centres could come from a portfolio of private and public funding opportunities at both the national and pan-European levels. The network will interact closely with key European efforts and will contribute to EU strategies and programmes.
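To make the federated pattern concrete, here is a minimal sketch (not LifeTime's actual infrastructure) in which each centre releases only aggregate sufficient statistics, such as a count, sum and sum of squares, so a coordinator can compute pooled estimates without patient-level records crossing a governance boundary; the centres and values are invented for illustration.

```python
# Minimal federated-analytics sketch: each centre shares only aggregates
# (count, sum, sum of squares); patient-level records never leave a centre.
import math

def local_summary(values):
    """Computed inside a centre's own governance boundary."""
    return {"n": len(values), "s": sum(values), "ss": sum(v * v for v in values)}

def pooled_mean_std(summaries):
    """Run by the coordinator on the shared aggregates only."""
    n = sum(m["n"] for m in summaries)
    s = sum(m["s"] for m in summaries)
    ss = sum(m["ss"] for m in summaries)
    mean = s / n
    variance = (ss - n * mean * mean) / (n - 1)  # pooled sample variance
    return mean, math.sqrt(variance)

# Hypothetical per-centre measurements (for example, a cell-state score):
centre_a = local_summary([0.8, 1.1, 0.9])
centre_b = local_summary([1.3, 1.0, 1.2, 0.7])
mean, std = pooled_mean_std([centre_a, centre_b])
print(f"pooled mean = {mean:.2f}, pooled s.d. = {std:.2f}")
```

Production systems would layer access control and secure aggregation on top of this pattern; the key property is that raw records stay local and only aggregates travel.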

Interaction with industry and innovation
Collaborations with the private sector will be key for the rapid translation and delivery of technologies, instrumentation, diagnostics and therapies (Fig. 4). Currently, more than 80 companies support LifeTime's vision. These span multiple sectors, as well as industrial associations and networks such as the European Federation of Pharmaceutical Industries and Associations (EFPIA) and the Euro-BioImaging Industry Board (EBIB). The transformation of breakthrough discoveries into solutions that improve the health of European citizens will involve several crucial steps. These include the creation of a unifying framework that fosters and streamlines pre-competitive interactions between academia and industry at the interfaces of computer science, single-cell biology, -omics, imaging, patient-derived disease modelling and precision medicine. A large-scale collaboration platform should be developed across Europe that provides umbrella agreements, regular meetings, dual training of early-career scientists in academia and industry, and exchange programmes. This will enable joint projects between the public and private sectors that span the entire biomedical innovation cycle, from discovery research and technology development to implementation in hospitals and the healthcare industry.

Cross-sectoral collaborations between small, medium-sized and large companies with different development timelines and distinct business models are crucial to stimulate innovation. To expedite the identification of, and investment in, emerging technologies developed in academic and industrial laboratories, successful local initiatives such as tech-watch and accelerator programmes (for example, the VIB Single-cell Accelerator) should be scaled and coordinated at the EU level. LifeTime aims to create a networking and match-making platform for individuals and academic and industry organizations that share the goal of developing and integrating breakthrough technologies and applying them in the clinic to benefit patients. Further measures could foster innovation and entrepreneurship; for example, a pre-seed, pre-incubator funding scheme based on competitive calls could support start-up or technology-transfer ideas. The creation of a dedicated European ecosystem is also essential. This will require additional key measures, such as the development of enabling digital environments and the promotion of early disease interception with all necessary stakeholders (for example, patients, regulators, payers and others), as described in the LifeTime call for action launched in December 2019 (https://lifetime-initiative.eu/make-eu-health-research-count/).

Ethical and legal issues
The implementation of LifeTime's vision raises important ethical questions for all societal groups that are directly affected by the project (patients, clinicians and scientists), and for society in general. LifeTime aims to pioneer a real-time or parallel ELSI programme that will predict, identify, monitor and manage the ethical impact of the biomedical innovations arising from its research and technology development, ensuring that implementation follows ethical guidelines. LifeTime's ELSI programme can serve as a testing ground for other international interdisciplinary initiatives (Fig. 4). Ethical issues will be identified and managed as early as possible, and the programme will ensure that ethical and research-integrity guidance is implemented throughout the entire research process to stimulate positive effects and mitigate negative ones63. Specialists in bioethics, public engagement and the ethics of technology, together with lawyers, have identified LifeTime's ethical and societal priority areas. These include questions related to the derivation, storage and use of organoids; the use of AI; data ownership and management; anonymization of data; equity of access to such revolutionary medical care; the definition of health and illness; and transparent science communication to society64. To initiate a relationship of trust with the public, we will include diverse modes of communication and engagement, for example through art, citizen science and public dialogue, contributing to scientific literacy and promoting individual critical thinking and public participation in decision-making processes.

[Fig. 4 graphic; recoverable labels: a LifeTime Cell Centre Network spanning Regions A–F around a LifeTime coordinating entity; programmes: (1) Cell Centre Network for the European Community, (2) Research programme, (3) Medical and biological data management, (4) Industry and innovation programme, (5) Education and training programme, (6) Ethics and society programme; Centre themes: single-cell multi-omics and imaging, artificial intelligence/machine learning, experimental disease models, hospitals and clinics; community interactions: European infrastructure (ESFRI, EOSC, EuroHPC), international initiatives (4D Nucleome, GA4GH, HCA, HuBMAP, HTAN), and collaborative research calls and service access for individual PIs and clinicians.]

Fig. 4 | Blueprint of the LifeTime Initiative. LifeTime proposes a large-scale research initiative to coordinate national efforts, and to foster collaboration and knowledge exchange between the public and private sectors. LifeTime recommends the implementation of several programmes. (1) A network of Cell Centres to support the European Community. The interdisciplinary centres would complement each other's strengths and expertise in the three LifeTime technology areas and operate in tight association with hospitals, integrating technology development with clinical practice. The connected but geographically distributed nodes would serve as both innovation hubs with strong links to industry and open education and training centres. Community coordination would avoid duplication of effort and increase effectiveness; this model requires funding instruments for a central coordination body. (2) The LifeTime research and technology integration programme includes both technology development and integration and the discovery of disease mechanisms and clinical applications. (3) Medical and biological data management platform. (4) Programmes fostering industry and innovation. (5) Education and training. (6) Ethics and societal engagement.

Education and training
The introduction of interceptive medicine into clinical practice, in parallel with a multidisciplinary research programme, will require capacity building in health and research systems and substantial deployment of technology in clinics. This will lead to a collaborative, fast-developing and interdisciplinary environment in research and in hospitals, which will require new training inputs. To respond to these needs, LifeTime will create an Education and Training Programme, ensuring the sustainable application of new technologies and the implementation of new medical and scientific approaches (Fig. 4). Importantly, this will be done in an integrative scheme that intersects the multiple LifeTime disciplines and areas of action: disruptive technologies applied to medical challenges, technology transfer and innovation, research integrity, data management and stewardship, ethical and societal issues, communication and emotional skills, and the management of medico-scientific and collaborative projects. Each LifeTime training activity will be based on multilateral education: basic researchers will teach other researchers and clinicians about the potential of technological solutions, while clinicians will teach researchers about the clinical needs and biological challenges of the diseases in focus. This will strictly follow the idea of bench to bedside and back. The programme will have an inclusive philosophy to ensure that it can provide training to the wider community, including researchers, clinicians, technical operators, managers and staff of technology platforms, as well as administrators, patients and the lay public. LifeTime envisions the organization of cycles of colloquia and outreach activities to inform the public, the formulation of short-term courses compatible with a culture of lifelong learning and adaptability, and interdisciplinary Masters and PhD programmes. Through education and training, LifeTime will engage and inform society, will develop new professional curricula and will train a new generation of highly skilled medical scientists and support staff, in order to foster scientific and medical excellence in an ethical, responsible and inclusive framework.


Impact on medicine and healthcare
Medicine and healthcare are rapidly expanding pillars of our economy. EU countries collectively spend more than €1,400 billion per year on healthcare for their 500 million citizens. Given these dimensions and the spiralling healthcare costs associated with an ageing population, these numbers will continue to increase unless we can mitigate the damaging effects of ageing. We expect that coupling current health monitoring with early detection and disease interception will have a major economic impact. In Europe, 20% of the population will soon be over 65 years old, with an age distribution that will continue to change until 12% are over 80 years old in 2080 (ref. 65). Given the prevalence and cost of caring for people with degenerative conditions, and the increase in chronic lifestyle-induced diseases, the knowledge and technologies developed by LifeTime are urgently needed to detect these diseases earlier and to avoid their worst manifestations. LifeTime would also have an impact in the era of unexpected pandemics, such as COVID-19, by rapidly determining the cellular and molecular basis of a disease. This would identify potential therapeutic strategies for patient subgroups, as well as providing a starting point for the development of effective new therapies.

One of healthcare's largest outstanding issues is that many patients do not respond to commonly prescribed treatments. Whereas well-controlled randomized clinical trials provide evidence for the statistical utility of a given therapy, in practice many patients must often be treated before a single patient shows a measurable benefit. Other patients may not benefit at all, or may even be harmed66, leading to an economic loss estimated in the hundreds of billions of euros per year. The variable therapeutic responses that originate from the cellular and genetic heterogeneity of cancer and other complex diseases contribute not only to the failure of treatments, but also to the rising cost of drug development, currently estimated at around €1–2 billion per drug. In silico models of disease trajectories generated by LifeTime will enable the integration of personal genetic and lifestyle information into predictive models of disease course. This will allow physicians to determine and implement optimal therapeutic strategies that are tailored to the individual (precision medicine), with sophisticated timing of disease interception. The knowledge gained will also contribute to more appropriate selection of patients for clinical trials.
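The 'many treated for one benefiting' point can be quantified with the number needed to treat (NNT), the reciprocal of the absolute risk reduction (ARR); the response rates below are hypothetical and chosen only for illustration.

```latex
% Number needed to treat (NNT): patients who must receive a therapy for one
% additional patient to benefit, relative to control. Illustrative rates:
% 35% respond under the new therapy, 30% under standard care.
\[
  \mathrm{NNT} \;=\; \frac{1}{\mathrm{ARR}}
             \;=\; \frac{1}{p_{\mathrm{treat}} - p_{\mathrm{control}}}
             \;=\; \frac{1}{0.35 - 0.30} \;=\; 20 .
\]
```

Stratifying patients by the cellular mechanisms that drive their disease aims precisely at shrinking this number.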





Outlook summary
Recent advances in key single-cell technologies, AI and patient-based experimental systems, such as induced pluripotent stem cells and organoids, have set the stage for their integration and deployment to improve the mechanistic molecular understanding, prediction and treatment of disease onset and progression. Patients will benefit from cell-based medicine through the earlier detection of diseases, at a stage where they can be effectively intercepted. The integrated technologies will enable the selection, monitoring and, if necessary, modification of therapeutic strategies for an individual, based on high-resolution cellular information, to improve clinical outcomes. Within the next decade, the molecular mechanistic information obtained has the potential to revolutionize drug discovery processes and clinical trial design, and eventually to be incorporated into clinicians' daily decision-making. As the LifeTime community continues to grow, new individuals, institutions and companies are encouraged to join and contribute to establishing a European platform that implements single-cell and data-driven medicine to address the growing burden of complex and chronic diseases.




1. Claussnitzer, M. et al. A brief history of human disease genetics. Nature 577, 179–189 (2020).
2. Karczewski, K. J. & Snyder, M. P. Integrative omics for health and disease. Nat. Rev. Genet. 19, 299–310 (2018).
3. Clevers, H. Modeling development and disease with organoids. Cell 165, 1586–1597 (2016).
4. Lancaster, M. A. & Knoblich, J. A. Organogenesis in a dish: modeling development and disease using organoid technologies. Science 345, 1247125 (2014).
5. Tanay, A. & Regev, A. Scaling single-cell genomics from phenomenology to mechanism. Nature 541, 331–338 (2017).
6. Regev, A. et al. The human cell atlas. eLife 6, e27041 (2017).
7. The LifeTime Initiative. LifeTime Strategic Research Agenda. https://lifetime-initiative.eu/wp-content/uploads/2020/08/LifeTime-Strategic-Research-Agenda.pdf (2020).
8. Yofe, I., Dahan, R. & Amit, I. Single-cell genomic approaches for developing the next generation of immunotherapies. Nat. Med. 26, 171–177 (2020).
9. HuBMAP Consortium. The human body at cellular resolution: the NIH Human Biomolecular Atlas Program. Nature 574, 187–192 (2019).
10. Guo, X. et al. Global characterization of T cells in non-small-cell lung cancer by single-cell sequencing. Nat. Med. 24, 978–985 (2018).
11. Ledergor, G. et al. Single cell dissection of plasma cell heterogeneity in symptomatic and asymptomatic myeloma. Nat. Med. 24, 1867–1876 (2018).
12. Li, H. et al. Dysfunctional CD8 T cells form a proliferative, dynamically regulated compartment within human melanoma. Cell 176, 775–789.e18 (2019).
13. Puram, S. V. et al. Single-cell transcriptomic analysis of primary and metastatic tumor ecosystems in head and neck cancer. Cell 171, 1611–1624.e24 (2017).
14. Tirosh, I. et al. Dissecting the multicellular ecosystem of metastatic melanoma by single-cell RNA-seq. Science 352, 189–196 (2016).
15. van Galen, P. et al. Single-cell RNA-seq reveals AML hierarchies relevant to disease progression and immunity. Cell 176, 1265–1281.e24 (2019).
16. Der, E. et al. Tubular cell and keratinocyte single-cell transcriptomics applied to lupus nephritis reveal type I IFN and fibrosis relevant pathways. Nat. Immunol. 20, 915–927 (2019).
17. Zhang, F. et al. Defining inflammatory cell states in rheumatoid arthritis joint synovial tissues by integrating single-cell transcriptomics and mass cytometry. Nat. Immunol. 20, 928–942 (2019).
18. Grubman, A. et al. A single-cell atlas of entorhinal cortex from individuals with Alzheimer's disease reveals cell-type-specific gene expression regulation. Nat. Neurosci. 22, 2087–2097 (2019).
19. Keren-Shaul, H. et al. A unique microglia type associated with restricting development of Alzheimer's disease. Cell 169, 1276–1290.e17 (2017).
20. Mathys, H. et al. Single-cell transcriptomic analysis of Alzheimer's disease. Nature 570, 332–337 (2019).
21. Wang, L. et al. Single-cell reconstruction of the adult human heart during heart failure and recovery reveals the cellular landscape underlying cardiac function. Nat. Cell Biol. 22, 108–119 (2020).
22. Reyes, M. et al. An immune-cell signature of bacterial sepsis. Nat. Med. 26, 333–340 (2020).
23. Argelaguet, R. et al. Multi-omics profiling of mouse gastrulation at single-cell resolution. Nature 576, 487–491 (2019).
24. Clark, S. J. et al. scNMT-seq enables joint profiling of chromatin accessibility DNA methylation and transcription in single cells. Nat. Commun. 9, 781 (2018).
25. Rooijers, K. et al. Simultaneous quantification of protein–DNA contacts and transcriptomes in single cells. Nat. Biotechnol. 37, 766–772 (2019).
26. Chen, W. T. et al. Spatial transcriptomics and in situ sequencing to study Alzheimer's disease. Cell 182, 976–991.e19 (2020).
27. Giladi, A. et al. Dissecting cellular crosstalk by sequencing physically interacting cells. Nat. Biotechnol. 38, 629–637 (2020).
28. Moffitt, J. R. et al. Molecular, spatial, and functional single-cell profiling of the hypothalamic preoptic region. Science 362, eaau5324 (2018).
29. Nitzan, M., Karaiskos, N., Friedman, N. & Rajewsky, N. Gene expression cartography. Nature 576, 132–137 (2019).
30. Ståhl, P. L. et al. Visualization and analysis of gene expression in tissue sections by spatial transcriptomics. Science 353, 78–82 (2016).
31. van den Brink, S. C. et al. Single-cell and spatial transcriptomics reveal somitogenesis in gastruloids. Nature 582, 405–409 (2020).
32. Vickovic, S. et al. High-definition spatial transcriptomics for in situ tissue profiling. Nat. Methods 16, 987–990 (2019).
33. Bintu, B. et al. Super-resolution chromatin tracing reveals domains and cooperative interactions in single cells. Science 362, eaau1783 (2018).
34. Cardozo Gizzi, A. M. et al. Microscopy-based chromosome conformation capture enables simultaneous visualization of genome organization and transcription in intact organisms. Mol. Cell 74, 212–222.e5 (2019).
35. Chen, K. H., Boettiger, A. N., Moffitt, J. R., Wang, S. & Zhuang, X. RNA imaging. Spatially resolved, highly multiplexed RNA profiling in single cells. Science 348, aaa6090 (2015).
36. Mateo, L. J. et al. Visualizing DNA folding and RNA in embryos at single-cell resolution. Nature 568, 49–54 (2019).
37. Medaglia, C. et al. Spatial reconstruction of immune niches by combining photoactivatable reporters and scRNA-seq. Science 358, 1622–1626 (2017).
38. Jackson, H. W. et al. The single-cell pathology landscape of breast cancer. Nature 578, 615–620 (2020).
39. Keren, L. et al. A structured tumor-immune microenvironment in triple negative breast cancer revealed by multiplexed ion beam imaging. Cell 174, 1373–1387.e19 (2018).
40. Maniatis, S. et al. Spatiotemporal dynamics of molecular pathology in amyotrophic lateral sclerosis. Science 364, 89–93 (2019).
41. Baron, C. S. & van Oudenaarden, A. Unravelling cellular relationships during development and regeneration using genetic lineage tracing. Nat. Rev. Mol. Cell Biol. 20, 753–765 (2019).
42. Helmink, B. A. et al. B cells and tertiary lymphoid structures promote immunotherapy response. Nature 577, 549–555 (2020).
43. Krieg, C. et al. High-dimensional single-cell analysis predicts response to anti-PD-1 immunotherapy. Nat. Med. 24, 144–153 (2018).

44. Kim, C. et al. Chemoresistance evolution in triple-negative breast cancer delineated by single-cell sequencing. Cell 173, 879–893.e13 (2018).
45. Rambow, F. et al. Toward minimal residual disease-directed therapy in melanoma. Cell 174, 843–855.e19 (2018).
46. Corcoran, R. B. & Chabner, B. A. Application of cell-free DNA analysis to cancer treatment. N. Engl. J. Med. 379, 1754–1765 (2018).
47. Eraslan, G., Avsec, Ž., Gagneur, J. & Theis, F. J. Deep learning: new computational modelling techniques for genomics. Nat. Rev. Genet. 20, 389–403 (2019).
48. Lähnemann, D. et al. Eleven grand challenges in single-cell data science. Genome Biol. 21, 31 (2020).
49. Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44–56 (2019).
50. Argelaguet, R. et al. Multi-Omics Factor Analysis—a framework for unsupervised integration of multi-omics data sets. Mol. Syst. Biol. 14, e8124 (2018).
51. Efremova, M. & Teichmann, S. A. Computational methods for single-cell omics across modalities. Nat. Methods 17, 14–17 (2020).
52. Pearl, J. & Mackenzie, D. The Book of Why: The New Science of Cause and Effect (Penguin, 2019).
53. Amin, N. D. & Paşca, S. P. Building models of brain disorders with three-dimensional organoids. Neuron 100, 389–405 (2018).
54. Knoblich, J. A. Lab-built brains. Sci. Am. 316, 26–31 (2016).
55. Bleijs, M., van de Wetering, M., Clevers, H. & Drost, J. Xenograft and organoid model systems in cancer research. EMBO J. 38, e101654 (2019).
56. Byrne, A. T. et al. Interrogating open issues in cancer precision medicine with patient-derived xenografts. Nat. Rev. Cancer 17, 254–268 (2017).
57. Espuny-Camacho, I. et al. Hallmarks of Alzheimer's disease in stem-cell-derived human neurons transplanted into mouse brain. Neuron 93, 1066–1081.e8 (2017).
58. Hasselmann, J. et al. Development of a chimeric model to study and manipulate human microglia in vivo. Neuron 103, 1016–1033.e10 (2019).
59. Mancuso, R. et al. Stem-cell-derived human microglia transplanted in mouse brain to study human disease. Nat. Neurosci. 22, 2111–2116 (2019).
60. Wilkinson, M. D. et al. The FAIR guiding principles for scientific data management and stewardship. Sci. Data 3, 160018 (2016).
61. Schultze, J. L., The SYSCID Consortium & Rosenstiel, P. Systems medicine in chronic inflammatory diseases. Immunity 48, 608–613 (2018).
62. Life Science RI. European Life Science Research Infrastructures. https://lifescience-ri.eu/home.html (2020).
63. Sugarman, J. & Bredenoord, A. L. Real-time ethics engagement in biomedical research: ethics from bench to bedside. EMBO Rep. 21, e49919 (2020).
64. Torres-Padilla, M. E. et al. Thinking 'ethical' when designing a new biomedical research consortium. EMBO J. 39, e105725 (2020).
65. European Commission. People in the EU: who are we and how do we live? https://ec.europa.eu/eurostat/documents/3217494/7089681/KS-04-15-567-EN-N.pdf/8b2459fe-0e4e-4bb7-bca7-7522999c3bfd (Eurostat, 2015).
66. What happened to personalized medicine? Nat. Biotechnol. 30, 1 (2012).

Acknowledgements We acknowledge all participants that have attended and contributed to LifeTime meetings and workshops through many presentations and discussions. We thank J. Richers for artwork and A. Sonsala, A. Tschernycheff and C. Lozach for administrative support. LifeTime has received funding from the European Union's Horizon 2020 research and innovation framework programme under grant agreement 820431.
Author contributions All authors contributed to the writing of the article and provided comments and feedback. They all approved submission of the article for publication. The individuals listed at the end of the paper are members of Working Groups that contributed to the writing of the LifeTime Strategic Research Agenda (listed in full in the Supplementary Information). Please note that the complete LifeTime Community is much broader and includes many associates and supporters that are actively contributing to and advocating for LifeTime (further information can be found at https://lifetime-initiative.eu).

Competing interests C.B. is an inventor on several patent applications in genome technology and cofounder of Aelian Biotechnology, a single-cell CRISPR screening company. H.C. is a non-executive board member of Roche Holding, Basel. A.P. holds European and US patents on 'Genome Architecture Mapping' (EP 3230465 B1, US 10526639 B2). W.R. is a consultant and shareholder of Cambridge Epigenetix. T.V. is co-inventor on licensed patents WO/2011/157846 (methods for haplotyping single cells), WO/2014/053664 (high-throughput genotyping by sequencing low amounts of genetic material) and WO/2015/028576 (haplotyping and copy number typing using polymorphic variant allelic frequencies). All other authors declare no competing interests.

Additional information Supplementary information is available for this paper at https://doi.org/10.1038/s41586-020-2715-9. Correspondence and requests for materials should be addressed to N.R., G.A. or S.A.G.
Peer review information Nature thanks Michael Snyder, Ali Torkamani and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Reprints and permissions information is available at http://www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. © The Author(s) 2020

1Berlin Institute for Medical Systems Biology, Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany. 2Charité-Universitätsmedizin, Berlin, Germany. 3Berlin Institute of Health (BIH), Berlin, Germany. 4German Center for Cardiovascular Research (DZHK), Partner Site Berlin, Berlin, Germany. 5Institut Curie, CNRS, PSL Research University, Sorbonne Université, Nuclear Dynamics Unit, Equipe Labellisée Ligue contre le cancer, Paris, France. 6VIB Center for Brain and Disease Research, Leuven, Belgium. 7Department of Human Genetics, KU Leuven, Leuven, Belgium. 8Department of Immunology, Weizmann Institute of Science, Rehovot, Israel. 9Centre for Genomic Regulation (CRG), Barcelona Institute of Science and Technology, Barcelona, Spain. 10CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences, Vienna, Austria. 11Department of Laboratory Medicine, Medical University of Vienna, Vienna, Austria. 12Ludwig Boltzmann Institute for Rare and Undiagnosed Diseases, Vienna, Austria. 13Department of Medical Humanities, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands. 14Institute of Human Genetics, UMR 9002, CNRS and University of Montpellier, Montpellier, France. 15Department of Experimental Oncology, IEO, European Institute of Oncology IRCCS, Milan, Italy. 16Hubrecht Institute, Royal Netherlands Academy of Arts and Sciences (KNAW), Utrecht, The Netherlands. 17University Medical Center Utrecht, Utrecht, The Netherlands. 18Oncode Institute, Utrecht, The Netherlands. 19The Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands. 20Department of Neurosciences, KU Leuven, Leuven, Belgium. 21UK Dementia Research Institute at UCL, University College London, London, UK.

22 Department of Pediatric Oncology/Hematology, Charité-Universitätsmedizin Berlin, Berlin, Germany. 23Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany. 24Institut Curie, PSL Research University, Paris, France. 25Institute of Bioorganic Chemistry, Polish Academy of Sciences, Poznan, Poland. 26Institute of Computing Science, Poznan University of Technology, Poznan, Poland. 27Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland. 28Faculty of Natural Sciences, University of Basel, Basel, Switzerland. 29Cardiovascular and Metabolic Sciences, Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin, Germany. 30Department of Molecular Biology and Genetics (MBG), Aarhus University, Aarhus, Denmark. 31Interdisciplinary Nanoscience Centre (iNANO), Aarhus University, Aarhus, Denmark. 32Institute of Molecular Biotechnology of the Austrian Academy of Sciences (IMBA), Vienna, Austria. 33Medical University of Vienna, Vienna, Austria. 34Division of Molecular Genetics, German Cancer Research Center (DKFZ), Heidelberg, Germany. 35Division of Molecular Neurobiology, Department of Medical Biochemistry and Biophysics, Karolinska Institutet, Stockholm, Sweden. 36Science for Life Laboratory, Stockholm, Sweden. 37Laboratory for Molecular Cancer Biology, VIB Center for Cancer Biology, KU Leuven, Leuven, Belgium. 38Department of Oncology, KU Leuven, Leuven, Belgium. 39European Molecular Biology Laboratory, European Bioinformatics Institute, Wellcome Genome Campus, Cambridge, UK. 40Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK. 41Wellcome Sanger Institute, Wellcome Genome Campus, Cambridge, UK. 42CNAG-CRG, Centre for Genomic Regulation, Barcelona Institute of Science and Technology, Barcelona, Spain. 43Universitat Pompeu Fabra, Barcelona, Spain. 44ICREA, Barcelona, Spain. 45Department of Internal Medicine, Radboud Center for Infectious Diseases, Radboud University Medical Center, Nijmegen, The Netherlands. 46Radboud Institute for Molecular Life Sciences, Radboud University Medical Center, Nijmegen, The Netherlands. 47Life and Medical Sciences Institute (LIMES), University of Bonn, Bonn, Germany. 48Centre de Biochimie Structurale, CNRS UMR 5048, INSERM U1054, Université de Montpellier, Montpellier, France. 49VIB Technology Watch, Ghent, Belgium. 50 Department of Molecular Medicine, University of Padua School of Medicine, Padua, Italy. 51 IFOM, The FIRC Institute of Molecular Oncology, Padua, Italy. 52Institute for Biology, Humboldt University of Berlin, Berlin, Germany. 53Epigenetics Programme, Babraham Institute, Cambridge, UK. 54Centre for Trophoblast Research, University of Cambridge, Cambridge, UK. 55 Department of Translational Research, Institut Curie, PSL Research University, Paris, France. 56 Institute of Clinical Molecular Biology, Kiel University, Kiel, Germany. 57University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany. 58German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany. 59PRECISE, Platform for Single Cell Genomics and Epigenomics at the German Center for Neurodegenerative Diseases and the University of Bonn, Bonn, Germany. 60Division of Computational Genomics and Systems Genetics, German Cancer Research Center (DKFZ), Heidelberg, Germany. 61Genome Biology Unit, European Molecular Biology Laboratory, Heidelberg, Germany. 62Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel. 
63Department of Oncology and Hemato-oncology, University of Milan, Milan, Italy. 64Human Technopole, Milan, Italy. 65Biomedical Research Foundation, Academy of Athens, Athens, Greece. 66Institute of Computational Biology, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany. 67Department of Mathematics, Technical University of Munich, Munich, Germany. 68Institute of Epigenetics and Stem Cells (IES), Helmholtz Zentrum München - German Research Center for Environmental Health, Munich, Germany. 69Faculty of Biology, Ludwig-Maximilians Universität, Munich, Germany. 70Barcelona Supercomputing Center (BSC), Barcelona, Spain. 71CNRS UMR3244, Institut Curie, PSL University, Paris, France. 204These authors contributed equally: Nikolaus Rajewsky, Geneviève Almouzni, Stanislaw A. Gorski. *A list of affiliations appears at the end of the paper. ✉e-mail: [email protected]; [email protected]; [email protected]


LifeTime Community Working Groups
Lavinia Alberi72,73, Stephanie Alexander23, Theodore Alexandrov74,75, Ernest Arenas76, Claudia Bagni77,78, Robert Balderas79, Andrea Bandelli80, Burkhard Becher81, Matthias Becker47,58,59, Niko Beerenwinkel82,83, Monsef Benkirane84, Marc Beyer58,59, Wendy A. Bickmore85, Erik E. A. L. Biessen86,87, Niklas Blomberg88, Ingmar Blumcke89, Bernd Bodenmiller90, Barbara Borroni91, Dimitrios T. Boumpas65,92,93, Thomas Bourgeron94, Sarion Bowers41, Dries Braeken95, Catherine Brooksbank39, Nils Brose96, Hilgo Bruining97, Jo Bury98, Nicolo Caporale15,63,64, Giorgio Cattoretti99, Nadia Chabane100, Hervé Chneiweiss101,102,103, Stuart A. Cook104,105,106,107, Paolo Curatolo108, Marien I. de Jonge46,109, Bart Deplancke110, Bart De Strooper6,20,21, Peter de Witte111, Stefanie Dimmeler112, Bogdan Draganski113,114, Anna Drews58,59, Costica Dumbrava115, Stefan Engelhardt116, Thomas Gasser117,118, Evangelos J. Giamarellos-Bourboulis92,119, Caroline Graff120,121, Dominic Grün122,123, Ivo G. Gut42,43, Oskar Hansson124,125, David C. Henshall126, Anna Herland127, Peter Heutink118,128, Stephane R. B. Heymans129,130,131, Holger Heyn42,43, Meritxell Huch132, Inge Huitinga133,134, Paulina Jackowiak25, Karin R. Jongsma13, Laurent Journot135, Jan Philipp Junker1, Shauna Katz24, Jeanne Kehren136, Stefan Kempa1, Paulus Kirchhof137,138,139,140, Christine Klein141, Natalia Koralewska25, Jan O. Korbel61, Malte Kühnemund142, Angus I. Lamond143, Elsa Lauwers6,20, Isabelle Le Ber144, Ville Leinonen145,146, Alejandro López-Tobón15,63,64, Emma Lundberg147, Astrid Lunkes68, Henrike Maatz29, Matthias Mann148,149, Luca Marelli15,150,151, Vera Matser39, Paul M. Matthews152,153, Fatima Mechta-Grigoriou154, Radhika Menon155, Anne F. Nielsen31, Massimiliano Pagani151,156, R. Jeroen Pasterkamp157, Asla Pitkänen158, Valentin Popescu1, Cyril Pottier159,160, Alain Puisieux24, Rosa Rademakers159,160, Dory Reiling161, Orly Reiner162, Daniel Remondini163, Craig Ritchie164, Jonathan D. Rohrer165, Antoine-Emmanuel Saliba166, Raquel Sanchez-Valle167, Amedeo Santosuosso168,169,170,171, Arnold Sauter172, Richard A. Scheltema173,174, Philip Scheltens175, Herbert B. Schiller176, Anja Schneider58,177, Philip Seibler141, Kelly Sheehan-Rooney61, David J. Shields178, Kristel Sleegers159,160, August B. Smit179, Kenneth G. C. Smith180,181, Ilse Smolders182, Matthis Synofzik117,118, Wai Long Tam49, Sarah A. Teichmann41,183, Maria Thom184,185, Margherita Y. Turco54,186, Heleen M. M. van Beusekom187, Rik Vandenberghe188, Silvie Van den Hoecke49, Ibo van de Poel189, Andre van der Ven45, Julie van der Zee159,160, Jan van Lunzen190,191, Geert van Minnebruggen98, Alexander van Oudenaarden16,17,18, Wim Van Paesschen192, John C. van Swieten193, Remko van Vught155, Matthijs Verhage194,195, Patrik Verstreken6,20, Carlo Emanuele Villa15,63,64, Jörg Vogel166,196, Christof von Kalle3, Jörn Walter197, Sarah Weckhuysen159,160,198, Wilko Weichert199, Louisa Wood200, Anette-Gabriele Ziegler201,202 & Frauke Zipp203
72Department of Medicine, University of Fribourg, Fribourg, Switzerland. 73Swiss Integrative Center for Human Health SA (SICHH), Fribourg, Switzerland. 74Structural and Computational Biology Unit, European Molecular Biology Laboratory, Heidelberg, Germany. 75Skaggs School of Pharmacy and Pharmaceutical Sciences, University of California San Diego, La Jolla, CA, USA. 76Department of Medical Biochemistry and Biophysics, Karolinska Institutet, Stockholm, Sweden. 77Department of Fundamental Neurosciences, University of Lausanne, Lausanne, Switzerland. 78Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy. 79Becton Dickinson, San Jose, CA, USA. 80Science Gallery International, Dublin, Ireland. 81Unit of Inflammation Research, Institute of Experimental Immunology, University of Zurich, Zurich, Switzerland. 82Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland. 83Swiss Institute of Bioinformatics, Lausanne, Switzerland. 84Institut de Génétique Humaine, Université de Montpellier, Laboratoire de Virologie Moléculaire CNRS-UMR9002, Montpellier, France. 85MRC Human Genetics Unit, Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh, UK. 86Department of Pathology, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, The Netherlands. 87Institute for Molecular Cardiovascular Research, RWTH University Hospital Aachen, Aachen, Germany. 88ELIXIR Hub, Wellcome Genome Campus, Cambridge, UK. 89Neuropathologisches Institut, Universitätsklinikum, Erlangen, Germany. 90Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland. 91Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy. 92Fourth Department of Internal Medicine, School of Medicine, National & Kapodistrian University of Athens, Athens, Greece. 93University of Cyprus Medical School, Nicosia, Cyprus. 94Human Genetics and Cognitive Functions Unit, Institut Pasteur, UMR 3571, CNRS, Université de Paris, Paris, France. 95Imec, Leuven, Belgium. 96Department of Molecular Neurobiology, Max Planck Institute of Experimental Medicine, Göttingen, Germany. 97Department of Child and Adolescent Psychiatry, Amsterdam UMC, Amsterdam, The Netherlands. 98Flanders Institute for Biotechnology (VIB), Ghent, Belgium. 99Department of Medicine & Surgery, Università degli studi di Milano-Bicocca, Milan, Italy. 100Centre cantonal autisme, Département de psychiatrie, CHUV, Allières, Lausanne, Switzerland. 101Institut National de la Santé et de la Recherche Medicale (INSERM), Paris, France. 102Sorbonne Universités, Paris, France. 103Centre National de la Recherche Scientifique (CNRS), Paris, France. 104National Heart and Lung Institute, Imperial College London, London, UK. 105MRC-London Institute of Medical Sciences, Hammersmith Hospital Campus, London, UK. 106Program in Cardiovascular and Metabolic Disorders, Duke-National University of Singapore, Singapore, Singapore. 107National Heart Research Institute Singapore (NHRIS), National Heart Centre Singapore, Singapore, Singapore. 108Department of System Medicine, University of Rome Tor Vergata, Rome, Italy. 109Department of Laboratory Medicine, Radboud Center for Infectious Diseases, Radboud University Medical Center, Nijmegen, The Netherlands. 110Institute of Bioengineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland. 111Department of Pharmaceutical and Pharmacological Sciences, University of Leuven, Leuven, Belgium. 112Institute for Cardiovascular Regeneration, Goethe University, Frankfurt, Germany. 113Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland. 114Department of Neurology, Max-Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany. 115Communication Networks, Content & Technology, European Commission, Brussels, Belgium. 116Institute of Pharmacology and Toxicology, Technische Universität München, Munich, Germany. 117Department for Neurodegenerative Diseases, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany. 118German Center for Neurodegenerative Diseases, Tübingen, Germany. 119Hellenic Institute for the Study of Sepsis, Athens, Greece. 120Department of NVS, Division of Neurogeriatrics, Karolinska Institutet, Stockholm, Sweden. 121Unit of Hereditary Dementia, Karolinska University


Hospital-Solna, Stockholm, Sweden. 122Max-Planck-Institute of Immunobiology and Epigenetics, Freiburg, Germany. 123Centre for Integrative Biological Signaling Studies, University of Freiburg, Freiburg, Germany. 124Clinical Memory Research Unit, Lund University, Lund, Sweden. 125Memory Clinic, Skåne University Hospital, Malmö, Sweden. 126FutureNeuro SFI Research Centre, Royal College of Surgeons in Ireland, Dublin, Ireland. 127Division of Micro- and Nanosystems, KTH Royal Institute of Technology, Stockholm, Sweden. 128Hertie Institute for Clinical Brain Research, Tübingen, Germany. 129Department of Cardiology, Cardiovascular Research Institute Maastricht (CARIM), Maastricht University Medical Centre, Maastricht, The Netherlands. 130Department of Cardiovascular Research, University of Leuven, Leuven, Belgium. 131Netherlands Heart Institute (ICIN), Utrecht, The Netherlands. 132Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany. 133Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands. 134Netherlands Institute for Neuroscience, Amsterdam, The Netherlands. 135Montpellier GenomiX (MGX), Institut de Génomique Fonctionnelle, Montpellier, France. 136Bayer AG Pharmaceuticals, Berlin, Germany. 137Institute of Cardiovascular Sciences, University of Birmingham, Birmingham, UK. 138Department of Cardiology, University Heart and Vascular Center Hamburg, Hamburg, Germany. 139Sandwell and West Birmingham and University Hospitals Birmingham NHS Trusts, Birmingham, UK. 140German Center for Cardiovascular Research (DZHK), Partner Site Hamburg/Kiel/Lübeck, Hamburg, Germany. 141Institute of Neurogenetics, University of Lübeck, Lübeck, Germany. 142CARTANA, Stockholm, Sweden. 143Centre for Gene Regulation and Expression, University of Dundee, Dundee, UK. 144Department of Neurology, Hôpital La Pitié Salpêtrière, Paris, France. 145Neurocenter, Neurosurgery, Kuopio University Hospital and Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland. 146Unit of Clinical Neuroscience, Neurosurgery, University of Oulu and Medical Research Center Oulu, Oulu University Hospital, Oulu, Finland. 147Science for Life Laboratory, KTH - Royal Institute of Technology, Stockholm, Sweden. 148Department of Proteomics and Signal Transduction, Max Planck Institute of Biochemistry, Martinsried, Germany. 149Proteomics Program, Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, Copenhagen, Denmark. 150Centre for Sociological Research, KU Leuven, Leuven, Belgium. 151Department of Medical Biotechnology and Translational Medicine, University of Milan, Milan, Italy. 152Department of Brain Sciences, Imperial College London, London, UK. 153UK Dementia Research Institute at Imperial College London, London, UK. 154Institut Curie, Stress and Cancer Laboratory, Equipe labélisée par la Ligue Nationale contre le Cancer, PSL Research University, Paris, France. 155MIMETAS, Leiden, The Netherlands. 156IFOM, The FIRC Institute of Molecular Oncology, Milan, Italy. 157Department of Translational Neuroscience, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands. 158A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland. 159VIB Center for Molecular Neurology, Antwerp, Belgium. 160Department of Biomedical Sciences, University of Antwerp, Antwerp, Belgium. 161District Court, Amsterdam, The Netherlands and Court of Appeal, The Hague, The Netherlands.
162Department of Molecular Genetics, Weizmann Institute of Science, Rehovot, Israel. 163Department of Physics and Astronomy, Bologna University, Bologna, Italy. 164Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, Scotland, UK. 165Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK. 166 Helmholtz Institute for RNA-based Infection Research (HIRI), Helmholtz-Center for Infection Research (HZI), Würzburg, Germany. 167Alzheimer’s Disease and Other Cognitive Disorders Unit, Fundació Clínic per a la Recerca Biomèdica, Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Universitat de Barcelona, Barcelona, Spain. 168European Center for Law, Science and new Technologies (ECLT), University of Pavia, Pavia, Italy. 169Department of Law, University of Pavia, Pavia, Italy. 170Institute of Advanced Studies (IUSS), Pavia, Italy. 171World Commission on the Ethics of Scientific Knowledge and Technology (COMEST-UNESCO), Paris, France. 172Office of Technology Assessment at the German Parliament, Berlin, Germany. 173 Biomolecular Mass Spectrometry and Proteomics, Bijvoet Center for Biomolecular Research and Utrecht Institute for Pharmaceutical Sciences, University of Utrecht, Utrecht, The Netherlands. 174Netherlands Proteomics Center, Utrecht, The Netherlands. 175Alzheimer Center, Amsterdam University Medical Center, Amsterdam, The Netherlands. 176Institute of Lung Biology and Disease, German Center for Lung Research (DZL), Helmholtz Zentrum München, Munich, Germany. 177Department of Neurodegenerative Diseases and Geriatric Psychiatry, University Bonn, Bonn, Germany. 178Oncology R&D, Pfizer Inc, San Diego, CA, USA. 179 Department of Molecular and Cellular Neurobiology, Center for Neurogenomics and Cognitive Research, Amsterdam Neuroscience, VU University Amsterdam, Amsterdam, The Netherlands. 180Department of Medicine, University of Cambridge, Cambridge, UK. 181 Cambridge Institute of Therapeutic Immunology and Infectious Disease, Jeffrey Cheah Biomedical Centre, University of Cambridge, Cambridge, UK. 182Department of Pharmaceutical Sciences, Center for Neurosciences (C4N), Vrije Universiteit Brussel, Brussels, Belgium. 183 Department of Physics, Cavendish Laboratory, Cambridge, UK. 184Division of Neuropathology, National Hospital for Neurology and Neurosurgery, London, UK. 185 Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK. 186Department of Pathology, University of Cambridge, Cambridge, UK. 187 Department of Cardiology, Erasmus MC University Medical Center, Rotterdam, The Netherlands. 188Department of Neurology, University Hospital Leuven, KU Leuven, Leuven, Belgium. 189Department of Values, Technology and Innovation, Delft University of Technology, Delft, The Netherlands. 190ViiV Healthcare, London, UK. 191University Medical Center, Hamburg, Germany. 192Department of Neurosciences, University Hospital Leuven, KU Leuven, Leuven, Belgium. 193Department of Neurology, Erasmus Medical Centre, University Medical Center Rotterdam, Rotterdam, The Netherlands. 194Department of Functional Genomics, Center for Neurogenomics and Cognitive Research, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands. 195Department of Clinical Genetics, Center for Neurogenomics and Cognitive Research, Amsterdam University Medical Center, Amsterdam, The Netherlands. 
196Institute of Molecular Infection Biology, University of Würzburg, Würzburg, Germany. 197Department of Genetics, Saarland University, Saarbrücken, Germany. 198Division of Neurology, Antwerp University Hospital, Antwerp, Belgium. 199Institute of Pathology, Technical University Munich, Munich, Germany. 200Babraham Institute, Babraham Research Campus, Cambridge, UK. 201 Institute of Diabetes Research, Helmholtz Zentrum München, Munich, Germany. 202Technical University Munich, at Klinikum rechts der Isar, Munich, Germany. 203Department of Neurology, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany. A full list of members and their affiliations appears in the Supplementary Information.

Article

A blue ring nebula from a stellar merger several thousand years ago
https://doi.org/10.1038/s41586-020-2893-5
Received: 4 May 2020
Accepted: 1 September 2020
Published online: 18 November 2020

Keri Hoadley1,10 ✉, D. Christopher Martin1,10, Brian D. Metzger2,3,10, Mark Seibert4,10, Andrew McWilliam4, Ken J. Shen5, James D. Neill1, Gudmundur Stefansson6,7,8, Andrew Monson7,8 & Bradley E. Schaefer9

Stellar mergers are a brief but common phase in the evolution of binary star systems1,2. These events have many astrophysical implications; for example, they may lead to the creation of atypical stars (such as magnetic stars3, blue stragglers4 and rapid rotators5), they play an important part in our interpretation of stellar populations6 and they represent formation channels of compact-object mergers7. Although a handful of stellar mergers have been observed directly8,9, the central remnants of these events were shrouded by an opaque shell of dust and molecules10, making it impossible to observe their final state (for example, as a single merged star or a tighter, surviving binary11). Here we report observations of an unusual, ring-shaped ultraviolet (‘blue’) nebula and the star at its centre, TYC 2597-735-1. The nebula has two opposing fronts, suggesting a bipolar outflow of material from TYC 2597-735-1. The spectrum of TYC 2597-735-1 and its proximity to the Galactic plane suggest that it is an old star, yet it has abnormally low surface gravity and a detectable long-term luminosity decay, which is uncharacteristic for its evolutionary stage. TYC 2597-735-1 also exhibits Hα emission, radial-velocity variations, enhanced ultraviolet radiation and excess infrared emission—signatures of dusty circumstellar disks12, stellar activity13 and accretion14. Combined with stellar evolution models, the observations suggest that TYC 2597-735-1 merged with a lower-mass companion several thousand years ago. TYC 2597-735-1 provides a look at an unobstructed stellar merger at an evolutionary stage between its dynamic onset and the theorized final equilibrium state, enabling the direct study of the merging process.

The blue ring nebula (Fig. 1a) is a rare far-ultraviolet-emitting object discovered by the Galaxy Evolution Explorer (GALEX)15. It has not yet been observed in any other part of the electromagnetic spectrum (Extended Data Fig. 1, Supplementary Information). It is ring-shaped and smooth, extending about 8′ across the sky at a slightly inclined (15°), face-on view. Like other extended, far-ultraviolet sources16, molecular hydrogen (H2), which fluoresces throughout the far-ultraviolet (λ

[Figure caption fragment (panels a and b), beginning lost in extraction:] a, … (λ > 24 μm, IRAS; solid circles) provide upper limits only. A stellar black-body continuum with temperature T = 5,850 K is also presented (dashed black line). Synthetic stellar models that best match the inferred stellar properties of TYC 2597-735-1 (grey spectra and greyed region) do not account for the infrared excess and suggest that a far-ultraviolet excess is present. Far-ultraviolet and infrared excesses are both frequently observed in systems actively accreting matter from warm, gaseous and dusty disks, such as T Tauri protostars12. Models of warm, dusty circumstellar disks reproduce the observed infrared excess: light blue dotted line, Tdust ≈ 600 K, about 0.2–3 au, assuming a disk inclination angle of about 15°; solid blue line, Tdust ≈ 1,200–300 K, about 0.2–1.5 au, assuming a disk inclination angle of around 15°. b, TYC 2597-735-1 exhibits Hα emission, an unusual trait for evolved stars. The Hα line profile shows variability over short timescales. There is an enhanced blue edge to the emission, a signature of infalling material traditionally interpreted as accretion flows or disk winds14. The Hα emission, excess infrared emission, enhanced far-ultraviolet radiation and radial-velocity variations (Extended Data Fig. 5) suggest that TYC 2597-735-1 is actively accreting material from the disk, creating the observed infrared excess emission.

with a binary-star merger. To test this scenario and estimate the initial state of the system, we use the stellar evolution code MESA (Modules for Experiments in Stellar Astrophysics)24 to explore the effect that a stellar merger has on long-term stellar properties25 (Supplementary Information). We find that a low-mass companion (Mc ≈ 0.1M☉) reasonably reproduces the effective temperature, luminosity and surface gravity of TYC 2597-735-1 at a post-merger age of tage ≈ 1,000 yr, accounting for the position of TYC 2597-735-1 in Teff–log(g) space (Extended Data Fig. 3). These models also predict that TYC 2597735-1 was approximately 0.1 B-mag brighter a century ago, which we observe in historical DASCH (Digital Access to a Sky Century at Harvard) archive records of TYC 2597-735-1 (Supplementary Information, Extended Data Fig. 6). The best-fitting models are those where the merger happens after the primary begins to evolve off its main sequence (Extended Data Fig. 7). Such timing may not be coincidental if the companion was dragged into the star through tidal interaction (Fig. 3a): the timescale for tidal orbital decay depends sensitively on the primary radius (τtide ∝ R★−5)26, accelerating the orbital decay as the primary reaches its subgiant phase. Indeed, tidally induced mergers may explain the dearth of short-period planets around evolved A stars27. The events leading up to and following the merger of TYC 2597-735-1 and its companion shape the system we see today. As TYC 2597-735-1 and its companion approached sufficiently closely, the former overflowed its Roche lobe onto the latter, initiating the merger (Fig. 3b). Numerical simulations demonstrate that the earliest phase of the merger process

The events leading up to and following the merger of TYC 2597-735-1 and its companion shape the system we see today. As TYC 2597-735-1 and its companion approached sufficiently closely, the former overflowed its Roche lobe onto the latter, initiating the merger (Fig. 3b). Numerical simulations demonstrate that the earliest phase of the merger process results in the ejection of matter through the outer L2 Lagrange point in the equatorial plane of the binary28. Most of this matter remains gravitationally bound around the star system, forming a circumbinary disk. The companion, unable to accommodate the additional mass, is dragged deeper into the envelope of the primary in a runaway process11 (Fig. 3c). This delayed dynamical phase is accompanied by the ejection of a shell of gas. A portion of the ejected material is collimated by the circumbinary disk into a bipolar outflow29.

The balance of the mass lost during primary–companion interactions remains as a circumstellar disk, which spreads out and cools over time, eventually reaching sufficiently low temperatures to form dust. We see evidence of this relic disk around TYC 2597-735-1 today as an infrared excess. A simple analytic model, which follows the spreading evolution of the gaseous disk due to internal viscosity over the thousands of years since the merger (Supplementary Information), is broadly consistent with both the present-day gas accretion rate (≤1.5 × 10−7M☉ yr−1), estimated from Hα, and a lower limit on the present-day dust disk mass (Mdisk,dust ≳ 5 × 10−9M☉), obtained by fitting the infrared spectral energy distribution of TYC 2597-735-1 (Supplementary Information). Accretion of disk material onto TYC 2597-735-1 could account for its observed stellar activity (for example, Hα emission and far-ultraviolet excess). Angular momentum added to the envelope of the star by the merger, and subsequent accretion of the disk, would also increase the surface rotation velocity of TYC 2597-735-1. We find the de-projected surface rotation velocity of TYC 2597-735-1 to be approximately 25 km s−1 (Supplementary Information, Extended Data Fig. 8), larger than expected for a star that has just evolved off the main sequence (less than 10 km s−1)30.

The bipolar ejecta shell expands away from the stellar merger, cooling and satisfying within weeks the conditions for molecular formation and solid condensation (Fig. 3d), as seen directly in the observed ejecta of luminous red novae31. The mass ejected, inferred from merger simulations and modelling the light curves of luminous red novae, is typically32 around 0.01M☉–0.1M☉, consistent with our lower mass limit of the ultraviolet nebula (0.004M☉). As the nebula expands and sweeps up interstellar gas, a reverse shock crosses through the ejecta shell, heating electrons in its wake. These electrons excite the H2 formed in the outflow, which fluoresces in the far-ultraviolet (Fig. 3e).

Although dust is almost always observed in the ejecta of stellar mergers31, we find no evidence of dust in the ultraviolet nebula (for example, ultraviolet or optical reddening). We speculate that either the dust was destroyed in the reverse shock33 or the bipolar ejecta shell has thinned out sufficiently that dust is currently undetectable. Consistent with the latter, this ultraviolet nebula marks the oldest observed stellar merger so far, being at least roughly 3–10 times older than the previously oldest stellar-merger candidate, CK Vulpeculae (1670), which is still shrouded by dust34. The system was caught at an opportune time: old enough to reveal the central remnant, yet young enough that the merger-generated nebula has not dissolved into the interstellar medium. The discovery of an ultraviolet nebula introduces a new way of identifying otherwise-hidden late-stage stellar mergers.
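To see that a merger-generated nebula of this angular size is naturally "thousands of years" old, one can combine the ~8′ extent with the 1.93-kpc distance quoted in Extended Data Fig. 1 and an assumed ejection speed. The 400 km s−1 used below is an illustrative assumption; the excerpt states only that the outflow moves at least at the system's escape velocity.

```python
import numpy as np

PC = 3.086e16                      # metres per parsec
YR = 3.156e7                       # seconds per year

d_pc = 1930.0                      # distance to TYC 2597-735-1 (1.93 kpc)
half_angle_arcmin = 4.0            # nebula ~8' across -> ~4' radius
r_pc = d_pc * np.radians(half_angle_arcmin / 60.0)   # small-angle radius

v_exp = 400e3                      # m/s; assumed outflow speed (illustrative)

age_yr = r_pc * PC / v_exp / YR
print(f"nebula radius ~ {r_pc:.2f} pc, kinematic age ~ {age_yr:,.0f} yr")
```

With these inputs the radius is about 2.2 pc and the kinematic age about 5,500 yr, consistent with the several-thousand-year post-merger timescales discussed in the text.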
With 1–10 of these objects expected to be observable in the Milky Way (Supplementary Information), future far-ultraviolet telescopes may uncover more late-stage stellar mergers. As the only known merger system not shrouded by dust, TYC 2597-735-1 provides a unique opportunity to study post-merger morphology. For example, the close separation of the initial stellar binary, which has properties broadly similar to those of protoplanetary disks, could enable the formation of second-generation planets35. However, given the relatively short time (less than roughly 100 million years) until TYC 2597-735-1 reaches the end of its nuclear-burning life, its potential window of habitability may be drastically reduced compared to main-sequence stars.

Fig. 3 | Schematic of the merger events responsible for the current state of TYC 2597-735-1 and its ultraviolet nebula (not to scale). a, (Top view.) As the primary star (mass Mp; red) evolves off the main sequence and its envelope expands (red arrows), its companion (mass Mc; yellow) is slowly dragged inwards (grey dotted trajectory). b, (Top view.) Over the course of many orbits, the primary overflows its Roche lobe and deposits mass onto its companion. The companion, unable to hold on to this excess mass, spills it over into the common Lagrange point (L2). The companion begins to spiral into the primary. c, (Side view.) The companion plunges into the primary. Additional mass is ejected, shaped by the circumstellar disk formed by the L2 overflow (blue). d, (Side view.) A bipolar outflow (purple arrows), ejected at speeds of at least the escape velocity of the system, expands and adiabatically cools, quickly forming dust and molecules. The primary puffs up (red arrows) and brightens from the excess energy it received by consuming the companion. e, (Side view.) Over the next several thousand years, TYC 2597-735-1 slowly settles back to its equilibrium state. TYC 2597-735-1 displays activity, attributed to accretion flows fed by its remnant circumstellar disk (Hα emission, radial velocity and far-ultraviolet excess emission). The ejected outflow sweeps up interstellar material, initiating a reverse shock that clears the dust and excites H2. The forward shock is seen in ultraviolet and Hα emission outlining the nebula today, whereas the reverse shock is revealed by the far-ultraviolet glow of H2 fluorescence.

Online content Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-020-2893-5.

1. Sana, H. et al. Binary interaction dominates the evolution of massive stars. Science 337, 444–446 (2012).
2. Temmink, K. D., Toonen, S., Zapartas, E., Justham, S. & Gänsicke, B. T. Looks can be deceiving. Underestimating the age of single white dwarfs due to binary mergers. Astron. Astrophys. 636, A31 (2020).
3. Schneider, F. R. N. et al. Stellar mergers as the origin of magnetic massive stars. Nature 574, 211–214 (2019).
4. Davies, M. B., Piotto, G. & de Angeli, F. Blue straggler production in globular clusters. Mon. Not. R. Astron. Soc. 349, 129–134 (2004).
5. Leiner, E., Mathieu, R. D., Vanderburg, A., Gosnell, N. M. & Smith, J. C. Blue lurkers: hidden blue stragglers on the M67 main sequence identified from their Kepler/K2 rotation periods. Astrophys. J. 881, 47 (2019).
6. Wang, L., Kroupa, P., Takahashi, K. & Jerabkova, T. The possible role of stellar mergers for the formation of multiple stellar populations in globular clusters. Mon. Not. R. Astron. Soc. 491, 440–454 (2020).
7. Belczynski, K. et al. The origin of the first neutron star–neutron star merger. Astron. Astrophys. 615, A91 (2018).
8. Bond, H. E. et al. An energetic stellar outburst accompanied by circumstellar light echoes. Nature 422, 405–408 (2003).
9. Kulkarni, S. R. et al. An unusually brilliant transient in the galaxy M85. Nature 447, 458–460 (2007).
10. Tylenda, R. & Kamiński, T. Evolution of the stellar-merger red nova V1309 Scorpii: spectral energy distribution analysis. Astron. Astrophys. 592, A134 (2016).
11. Ivanova, N. et al. Common envelope evolution: where we stand and how we can move forward. Astron. Astrophys. Rev. 21, 59 (2013).

12. Adams, F. C., Lada, C. J. & Shu, F. H. Spectral evolution of young stellar objects. Astrophys. J. 312, 788–806 (1987).
13. Figueira, P., Santos, N. C., Pepe, F., Lovis, C. & Nardetto, N. Line-profile variations in radial-velocity measurements. Two alternative indicators for planetary searches. Astron. Astrophys. 557, A93 (2013).
14. Lima, G. H. R. A., Alencar, S. H. P., Calvet, N., Hartmann, L. & Muzerolle, J. Modeling the Hα line emission around classical T Tauri stars using magnetospheric accretion and disk wind models. Astron. Astrophys. 522, A104 (2010).
15. Martin, D. C. et al. The Galaxy Evolution Explorer: a space ultraviolet survey mission. Astrophys. J. Lett. 619, L1–L6 (2005).
16. Martin, D. C. et al. A turbulent wake as a tracer of 30,000 years of Mira's mass loss history. Nature 448, 780–783 (2007).
17. Gaia Collaboration. VizieR Online Data Catalog: Gaia DR2, I/345 https://vizier.u-strasbg.fr/viz-bin/VizieR?-source=I/345 (2018).
18. Ness, M. et al. ARGOS - III. Stellar populations in the Galactic bulge of the Milky Way. Mon. Not. R. Astron. Soc. 430, 836–857 (2013).
19. Edwards, S. et al. Forbidden line and H alpha profiles in T Tauri star spectra: a probe of anisotropic mass outflows and circumstellar disks. Astrophys. J. 321, 473–495 (1987).
20. Sahai, R., Findeisen, K., Gil de Paz, A. & Sánchez Contreras, C. Binarity in cool asymptotic giant branch stars: a GALEX search for ultraviolet excesses. Astrophys. J. 689, 1274–1278 (2008).
21. Fukui, Y. et al. Molecular outflows in protostellar evolution. Nature 342, 161–163 (1989).
22. Kamath, D., Wood, P. R., Van Winckel, H. & Nie, J. D. A newly discovered stellar type: dusty post-red giant branch stars in the Magellanic Clouds. Astron. Astrophys. 586, L5 (2016).
23. Bujarrabal, V. et al. High-resolution observations of IRAS 08544–4431. Detection of a disk orbiting a post-AGB star and of a slow disk wind. Astron. Astrophys. 614, A58 (2018).
24. Paxton, B. et al. Modules for Experiments in Stellar Astrophysics (MESA): pulsating variable stars, rotation, convective boundaries, and energy conservation. Astrophys. J. Suppl. Ser. 243, 10 (2019).
25. Metzger, B. D., Shen, K. J. & Stone, N. Secular dimming of KIC 8462852 following its consumption of a planet. Mon. Not. R. Astron. Soc. 468, 4399–4407 (2017).
26. Goldreich, P. & Soter, S. Q in the Solar System. Icarus 5, 375–389 (1966).
27. Johnson, J. A. et al. Retired A stars and their companions: exoplanets orbiting three intermediate-mass subgiants. Astrophys. J. 665, 785–793 (2007).
28. Pejcha, O., Metzger, B. D. & Tomida, K. Cool and luminous transients from mass-losing binary stars. Mon. Not. R. Astron. Soc. 455, 4351–4372 (2016).
29. MacLeod, M., Ostriker, E. C. & Stone, J. M. Bound outflows, unbound ejecta, and the shaping of bipolar remnants during stellar coalescence. Astrophys. J. 868, 136 (2018).
30. de Medeiros, J. R., Da Rocha, C. & Mayor, M. The distribution of rotational velocity for evolved stars. Astron. Astrophys. 314, 499–502 (1996).
31. Kamiński, T. et al. Submillimeter-wave emission of three Galactic red novae: cool molecular outflows produced by stellar mergers. Astron. Astrophys. 617, A129 (2018).
32. Pejcha, O., Metzger, B. D., Tyles, J. G. & Tomida, K. Pre-explosion spiral mass loss of a binary star merger. Astrophys. J. 850, 59 (2017).
33. Martínez-González, S. et al. Supernovae within pre-existing wind-blown bubbles: dust injection versus ambient dust destruction. Astrophys. J. 887, 198 (2019).
34. Kamiński, T. et al. Organic molecules, ions, and rare isotopologues in the remnant of the stellar-merger candidate, CK Vulpeculae (Nova 1670). Astron. Astrophys. 607, A78 (2017).
35. Schleicher, D. R. G. & Dreizler, S. Planet formation from the ejecta of common envelopes. Astron. Astrophys. 563, A61 (2014).

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© The Author(s), under exclusive licence to Springer Nature Limited 2020


Data availability
All GALEX imaging and grism data of TYC 2597-735-1 and its ultraviolet nebula are publicly available from the Mikulski Archive for Space Telescopes (MAST) in raw and reduced formats (http://galex.stsci.edu/GalexView/ or https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html). All Keck–LRIS and Keck–HIRES data for TYC 2597-735-1 are publicly available from the Keck Observatory Archive (https://koa.ipac.caltech.edu/cgi-bin/KOA/nph-KOAlogin). TYC 2597-735-1 raw photometric light-curve frames, plates and light curves from 1895 to 1985 are publicly available as part of the DASCH programme (https://projects.iq.harvard.edu/dasch). Data for the more recent photometry used in the light-curve construction are available from the corresponding author on request. All other photometric data for TYC 2597-735-1 were obtained from publicly archived ground- and space-based imaging and surveys, stored on the SIMBAD Astronomical Database (http://simbad.u-strasbg.fr/simbad/) and the NASA/IPAC Infrared Science Archive (https://irsa.ipac.caltech.edu/frontpage/). The relevant data products from the Habitable-zone Planet Finder Spectrograph (HPF) campaign for TYC 2597-735-1 are publicly available at https://github.com/oglebee-chessqueen/BlueRingNebula.git.
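For readers who prefer programmatic access, the same GALEX holdings can be retrieved from MAST with astroquery. This is a sketch assuming a standard astroquery installation, not part of the paper's pipeline; the search radius and download directory are arbitrary choices, and an unfiltered product list can be large.

```python
# Minimal sketch of querying MAST for GALEX observations of the target;
# the archive can equally be browsed at the URLs given above.
from astroquery.mast import Observations

obs = Observations.query_object("TYC 2597-735-1", radius="10 arcmin")
galex = obs[obs["obs_collection"] == "GALEX"]
print(f"{len(galex)} GALEX observations found")

# Download all associated products (may be large; filter before downloading
# in real use).
products = Observations.get_product_list(galex)
Observations.download_products(products, download_dir="blue_ring_nebula")
```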

Code availability
We used MESA24 for a portion of our analysis. Although MESA is readily available for public use, we used a custom subroutine and MESA inline code to produce the TYC 2597-735-1 merger evolution model, publicly available at https://github.com/oglebee-chessqueen/BlueRingNebula.git. We used the ATLAS9 pre-set grid of synthetic stellar spectra36 to fit the TYC 2597-735-1 spectral energy distribution to representative stellar spectra. All synthetic stellar spectra are publicly available at https://www.stsci.edu/hst/instrumentation/reference-data-for-calibration-and-tools/astronomical-catalogues/castelli-and-kurucz-atlas. Portions of our analysis used community-developed core Python packages for astronomy, photutils37 and astropy38.

36. Castelli, F. & Kurucz, R. L. in Modelling of Stellar Atmospheres (eds Piskunov, N., Weiss, W. W. & Gray, D. F.) A20 (IAU, 2003).
37. Bradley, L. et al. photutils: photometry tools (2016).
38. The Astropy Collaboration. The astropy project: building an open-science project and status of the v2.0 core package. Astron. J. 156, 123 (2018).
39. Afşar, M. et al. A spectroscopic survey of field red horizontal-branch stars. Astron. J. 155, 240 (2018).

Acknowledgements This research is based on observations made with GALEX, obtained from the MAST data archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA contract NAS 5–26555. Some of the data presented were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership between the California Institute of Technology, the University of California and NASA. This research made use of the Keck Observatory Archive, which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute, under contract with NASA, and made possible by the financial support of the W. M. Keck Foundation. We recognize and acknowledge the very important cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are fortunate to have the opportunity to conduct observations from this mountain. Some of the data presented were obtained at the Palomar Observatory. This research made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. We thank V. Scowcroft for obtaining Spitzer/IRAC photometry of TYC 2597-735-1. Funding for APASS was provided by the Robert Martin Ayers Sciences Fund. The DASCH data from the Harvard archival plates were partially supported by National Science Foundation (NSF) grants AST-0407380, AST-0909073 and AST-1313370. The American Association of Variable Star Observers has been helpful for finder charts, comparison star magnitudes and recruiting skilled observers, including S. Dufoer, K. Menzies, R. Sabo, G. Stone, R. Tomlin and G. Walker. These results are based on observations obtained with the HPF on the Hobby–Eberly Telescope (HET), which is named in honour of its principal benefactors, William P. Hobby and Robert E. Eberly. These data were obtained during HPF's engineering and commissioning period. We thank the resident astronomers and telescope operators at the HET for the execution of our observations with HPF. We thank C. Cañas for providing an independent verification of the HPF SERVAL pipeline using a CCF-based method to calculate the radial velocities, which resulted in radial velocities fully consistent with the SERVAL-based radial velocities presented here. The HET is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universität München and Georg-August-Universität Göttingen. The HET collaboration acknowledges support and resources from the Texas Advanced Computing Center. This work was partially supported by funding from the Center for Exoplanets and Habitable Worlds, which is supported by the Pennsylvania State University, the Eberly College of Science and the Pennsylvania Space Grant Consortium. We thank A. Gil de Paz for obtaining the narrow-band-filter Hα imagery, J. Johnson for commissioning TYC 2597-735-1 radial-velocity measurements as part of the California Planet Finder programme, and A. Howard for leading Keck–HIRES spectra and performing the primary radial-velocity reduction on all HIRES data. K.H. acknowledges support from a David and Ellen Lee Postdoctoral Fellowship in Experimental Physics at Caltech, and thanks L. Hillenbrand and E. Hamden for discussions about aspects of this work. B.D.M. acknowledges support from the Hubble Space Telescope (number HST-AR-15041.001-A) and the NSF (number 80NSSC18K1708). K.J.S. received support from the NASA Astrophysics Theory Program (NNX17AG28G). G.S. and A.Mo. acknowledge support from NSF grants AST-1006676, AST-1126413, AST-1310885, AST-1517592, AST-1310875 and AST-1907622, the NASA Astrobiology Institute (NNA09DA76A) and PSARC in their pursuit of precision radial velocities in the near-infrared with HPF. We acknowledge support from the Heising-Simons Foundation via grants 2017-0494 and 2019-1177. Computations for this research were performed at the Pennsylvania State University's Institute for Computational and Data Sciences. G.S. acknowledges support by NASA HQ under the NASA Earth and Space Science Fellowship Program through grant NNX16AO28H, and is a Henry Norris Russell Fellow.

Author contributions K.H. and B.D.M. organized and wrote the main body of the paper. K.H. and M.S. performed the data reduction and analysis of the GALEX data, investigated the source of the ultraviolet emission, quantified the mass of the far-ultraviolet nebula, and led the analysis of the Hα emission and variability of TYC 2597-735-1. B.D.M. led all theoretical and analytic interpretation efforts of the ultraviolet nebula origins and TYC 2597-735-1 in the context of stellar mergers and present-day luminous red novae. D.C.M. and M.S. led the GALEX programme that led to the detection of the ultraviolet nebula in 2004 and all subsequent follow-up observations of the nebula with GALEX; both contributed to the overall interpretation of the observational data. D.C.M. contributed to the organization and writing of the paper. M.S. led the radial-velocity analysis and the interpretation and analysis of the infrared excess in the spectral energy distribution of TYC 2597-735-1, modelled this distribution (stellar and dust infrared excess components), and coordinated all ground-based observations of the blue ring nebula and TYC 2597-735-1 at Palomar Observatory and W. M. Keck Observatory. K.H. also helped in the interpretation and analysis of the infrared excess in the spectral energy distribution of TYC 2597-735-1. A.Mc. derived the physical parameters, performed the model atmosphere chemical abundance analysis of TYC 2597-735-1, and participated in discussions of observations, analysis and interpretation. K.J.S. performed the MESA calculations and participated in discussions of observations, analysis and interpretation. J.D.N. handled the data analysis, reported the result of the velocity structure of the Hα shock observed with Keck–LRIS, and participated in discussions of observations, analysis and interpretation. G.S. performed the HET–HPF radial-velocity and differential line-width indicator extractions and provided expertise on the interpretation of the combined radial-velocity datasets. A.Mo. coordinated HET–HPF observations, and performed and reduced all TMMT B-band observations. B.E.S. extracted and analysed the long-term light-curve data from May 1897 to September 2019.

Competing interests The authors declare no competing interests.

Additional information
Supplementary information is available for this paper at https://doi.org/10.1038/s41586-020-2893-5.
Correspondence and requests for materials should be addressed to K.H.
Peer review information Nature thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Reprints and permissions information is available at http://www.nature.com/reprints.

Extended Data Fig. 1 | TYC 2597-735-1 and its ultraviolet nebula in different bandpasses. From left to right, top to bottom: GALEX far-ultraviolet (FUV), GALEX near-ultraviolet (NUV), DSS-II B band, DSS-II R band, Palomar Hale 200-inch COSMIC Hα narrow band, 2MASS J band, 2MASS H band, 2MASS K band, and WISE 3.4 μm (W1), 4.6 μm (W2), 12 μm (W3) and 22 μm (W4). A reference line for 1 arcmin is included in the GALEX far-ultraviolet image. At a distance of 1.93 kpc, 1 arcmin corresponds to 0.56 pc. Each image covers 10′ × 10′. All images are scaled by asinh(⋅) to accentuate any faint, diffuse emission. The GALEX near-ultraviolet image has been scaled to show the western shock of the blue ring nebula, which makes the brighter near-ultraviolet stars (including TYC 2597-735-1) more enhanced.
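The asinh(⋅) scaling mentioned in the caption is a standard stretch that behaves linearly near zero and logarithmically for bright pixels, so diffuse emission and bright stars remain visible in the same frame. A minimal sketch on synthetic data; the softening parameter is an assumption tuned per image.

```python
import numpy as np

def asinh_stretch(img, soft=1.0):
    """Arcsinh intensity scaling: ~linear for |img| << soft,
    ~logarithmic for |img| >> soft."""
    return np.arcsinh(img / soft)

# Example: a faint ring on a noisy background (synthetic data).
yy, xx = np.mgrid[-64:64, -64:64]
r = np.hypot(xx, yy)
img = 5e-3 * np.exp(-((r - 40) / 6) ** 2) + np.random.normal(0, 1e-3, r.shape)
img_scaled = asinh_stretch(img, soft=1e-3)   # ready for display
```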


Extended Data Fig. 2 | The source of emission in the far-ultraviolet nebula. a, GALEX low-resolution far-ultraviolet grism imaging reveals that the blue ring nebula emits light only around 1,600 Å. Together with the lack of near-ultraviolet radiation, this points to H2 fluorescence as the main source of emission from the blue ring nebula. b, Synthetic models of H2 fluorescence change the distribution of light produced by H2 in the far-ultraviolet, depending on the source of excitation (examples shown are Lyα photon pumping (top spectrum) and electron-impact excitation (bottom spectrum)). We convolved high-resolution synthetic H2 fluorescence spectra with the GALEX grism spectral resolution to produce the plots shown. Pumping by Lyα photons (which probably come directly from TYC 2597-735-1) creates peaks in the distribution near 1,450 Å that are not seen by GALEX. Electron-impact fluorescence produces a spectral distribution that better matches where GALEX sees the far-ultraviolet emission being produced.
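The convolution step described in the caption, degrading a high-resolution synthetic H2 spectrum to the GALEX grism resolution, can be sketched with a Gaussian kernel. The resolving power (R ≈ 100) and the placeholder spectrum below are assumptions for illustration, not the paper's actual model inputs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

wav = np.arange(1300.0, 1800.0, 0.1)     # Angstrom grid (0.1 A sampling)
highres = np.random.rand(wav.size)       # stand-in for a synthetic H2 spectrum

R = 100.0                                # assumed grism resolving power
fwhm_A = 1550.0 / R                      # ~15.5 A at band centre
sigma_pix = fwhm_A / 2.355 / 0.1         # FWHM -> Gaussian sigma, in pixels

lowres = gaussian_filter1d(highres, sigma_pix)   # grism-resolution spectrum
```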

Extended Data Fig. 3 | TYC 2597-735-1 is an outlier when compared with other moderately evolved stars of similar mass. A large sample of moderately evolved stars39 demonstrates that the effective temperature Teff and surface gravity g of TYC 2597-735-1 are not consistent with the majority of other stars following similar evolutionary tracks. If the present-day observable properties of TYC 2597-735-1 are a consequence of a previous stellar merger, as our MESA models suggest, then we expect that TYC 2597-735-1 is currently puffed up more than usual and will continue to relax over the next thousands of years to better match the trend of evolving stars in Teff–log(g) space (Extended Data Fig. 7).


Extended Data Fig. 4 | Stellar Hα emission properties of TYC 2597-735-1. a, TYC 2597-735-1 exhibits Hα emission, an unusual trait for evolved stars. The Hα line profile shows variability over short timescales. There is an enhanced blue edge to the emission, a signature of gaseous accretion or disk winds14. The Hα emission suggests that TYC 2597-735-1 is actively accreting matter, possibly from the disk that creates its observed infrared excess emission. b, The Hα bisector velocities at different parts of the Hα emission line profile as a function of time (day; MJD, modified Julian date). Different-coloured diamonds (see legend) represent different flux levels in the line profile probed to determine the bisector value. The points in the line profile plotted show the most dramatic shifts away from the line centre. The line peak bisector is also shown (purple), to demonstrate the day-by-day variability observed in the line profile. The dashed grey line represents no velocity shift from the Hα wavelength centre. Except for the line peak, which fluctuates around about 0 km s−1 shifts, the line profile tends towards negative bisector velocity values, providing evidence of a blueshifted enhancement.

Extended Data Fig. 5 | Radial velocity of TYC 2597-735-1. All uncertainties are taken as the standard deviation in each data point. a, The best-fitting Keck–HIRES period using the iodine-cell calibration technique. Telluric calibration points show the discrepancy between the two methods. The Keck–HIRES radial-velocity (RV) signal suggests a period of about 13.75 days for a companion that produces a radial-velocity amplitude of 196 m s−1. b, The bisector velocity span (BVS) as a function of the Keck–HIRES radial-velocity signal shows an anticorrelation trend. c, HET–HPF differential line width (dLW) as a function of radial velocity, highlighting clear variations in the differential line width as a function of radial velocity, which is observed to vary from −250 m s−1 to 250 m s−1 in the HPF radial-velocity data. d, Ca ii infrared triplet (IRT) indices from HET–HPF show strong correlation with differential line width. L1, 8,500 Å; L2, 8,545 Å; L3, 8,665 Å; all line indices are normalized to the average index of L2. e, We show the range of mass the companion could have (assuming the Keck–HIRES iodine-cell radial-velocity signal is the result of a companion), on the basis of its 13.7-day orbital period (a ≈ 0.1 au; blue vertical line). We also show the minimum mass required to eject a collimated, biconical outflow with the velocity of the blue ring nebula (BRN; purple lower limit), owing to the conversion of gravitational energy to kinetic energy as its orbit decays from infinity to a ≈ 0.1 au. We put this hypothetical companion into context with other objects, including Jupiter (MJ; orange line), brown dwarfs (MBD; yellow shaded region) and M stars (0.1M☉; red line). The current radial extent of TYC 2597-735-1 (about 10R☉) is shaded green.


Extended Data Fig. 6 | Light curve of TYC 2597-735-1 since 1895. A full description of the process of generating the century-long light curve of TYC 2597-735-1 is provided in the Supplementary Information. The uncertainty in the binned magnitudes is the root-mean-square scatter divided by the square root of the number of plates used per bin. The rough trend of the light curve of TYC 2597-735-1 shows a total B-mag decay of 0.11–0.12 mag between 1895 and 2015, consistent with 0.09–0.1 mag per century. This falls in the range of secular decay predicted by the MESA models for the case study of the stellar merger history of TYC 2597-735-1.
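A minimal sketch of the binning rule stated in the caption (bin uncertainty = r.m.s. scatter divided by the square root of the number of plates per bin); the bin width is an arbitrary choice for illustration.

```python
import numpy as np

def bin_light_curve(t_yr, mag, bin_width_yr=5.0):
    """Bin plate magnitudes in time; per the caption, each bin's uncertainty
    is the r.m.s. scatter divided by sqrt(plates in the bin)."""
    edges = np.arange(t_yr.min(), t_yr.max() + bin_width_yr, bin_width_yr)
    rows = []
    for b in range(1, edges.size):
        sel = (t_yr >= edges[b - 1]) & (t_yr < edges[b])
        if sel.sum() > 1:                      # need >1 plate for a scatter
            m = mag[sel]
            rows.append((t_yr[sel].mean(), m.mean(),
                         m.std(ddof=1) / np.sqrt(m.size)))
    return np.array(rows)   # columns: epoch (yr), mean B mag, uncertainty
```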

Extended Data Fig. 7 | Evolution of a stellar merger between a 2M☉ primary star and a 0.1M☉ companion. MESA evolutionary models were created to look at how the energy injected into the primary star changes its observed characteristics over time. The coloured lines represent mergers at different evolutionary stages of the primary as it evolves towards the red-giant branch. The horizontal dotted black lines represent the observed parameters for TYC 2597-735-1. This model outcome represents one scenario that helps to justify a history of TYC 2597-735-1 that includes a stellar merger creating the blue ring nebula 1,000 years later (vertical dashed line).


Extended Data Fig. 8 | Demonstration of the velocity line-profile fitting to an unblended Fe i line (5,569.6 Å). Left, rotational velocity fit only (vsin(i); red line). The U-shaped rotational velocity profile alone does not capture the line wings of the Fe i line of TYC 2597-735-1 (black). Middle, macroturbulence velocity fit only (ζ; red line). Although the fit to the line wings is improved, the line core is too narrow. Right, convolved rotational plus macroturbulent velocity profiles provide a better fit to the observed Fe i line (fit, red line; data, black line).
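The fitting procedure in the caption convolves a rotational broadening kernel with a macroturbulence kernel. A sketch is given below; for simplicity it uses a Gaussian stand-in for the radial-tangential macroturbulence profile, and the limb-darkening coefficient and ζ value are assumptions (the vsin(i) ≈ 25 km s−1 is taken from the main text).

```python
import numpy as np

def rotation_kernel(v, vsini, eps=0.6):
    """Classical rotational broadening kernel with linear limb darkening eps;
    this is the U-shaped profile referred to in the caption."""
    x = v / vsini
    inside = np.clip(1.0 - x**2, 0.0, None)
    k = np.where(np.abs(x) < 1.0,
                 (2*(1-eps)*np.sqrt(inside) + 0.5*np.pi*eps*inside)
                 / (np.pi*vsini*(1 - eps/3)),
                 0.0)
    return k / k.sum()

def gauss_kernel(v, zeta):
    """Gaussian stand-in for the macroturbulence broadening term."""
    g = np.exp(-0.5 * (v / zeta) ** 2)
    return g / g.sum()

v = np.arange(-60.0, 60.1, 0.5)                   # km/s grid
profile = np.convolve(rotation_kernel(v, 25.0),   # vsini ~ 25 km/s
                      gauss_kernel(v, 10.0),      # assumed zeta
                      mode="same")                # combined line kernel
```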


Observation of gauge invariance in a 71-site Bose–Hubbard quantum simulator

https://doi.org/10.1038/s41586-020-2910-8

Received: 19 March 2020

Bing Yang1,2,3,4,8, Hui Sun1,2,3,4, Robert Ott5, Han-Yi Wang1,2,3,4, Torsten V. Zache5, Jad C. Halimeh5,6,7, Zhen-Sheng Yuan1,2,3,4 ✉, Philipp Hauke5,6,7 ✉ & Jian-Wei Pan1,2,3,4 ✉

Accepted: 2 September 2020

Published online: 18 November 2020

The modern description of elementary particles, as formulated in the standard model of particle physics, is built on gauge theories1. Gauge theories implement fundamental laws of physics by local symmetry constraints. For example, in quantum electrodynamics Gauss’s law introduces an intrinsic local relation between charged matter and electromagnetic fields, which protects many salient physical properties, including massless photons and a long-ranged Coulomb law. Solving gauge theories using classical computers is an extremely arduous task2, which has stimulated an effort to simulate gauge-theory dynamics in microscopically engineered quantum devices3–6. Previous achievements implemented density-dependent Peierls phases without defining a local symmetry7,8, realized mappings onto effective models to integrate out either matter or electric fields9–12, or were limited to very small systems13–16. However, the essential gauge symmetry has not been observed experimentally. Here we report the quantum simulation of an extended U(1) lattice gauge theory, and experimentally quantify the gauge invariance in a many-body system comprising matter and gauge fields. These fields are realized in defect-free arrays of bosonic atoms in an optical superlattice of 71 sites. We demonstrate full tunability of the model parameters and benchmark the matter–gauge interactions by sweeping across a quantum phase transition. Using high-fidelity manipulation techniques, we measure the degree to which Gauss’s law is violated by extracting probabilities of locally gauge-invariant states from correlated atom occupations. Our work provides a way to explore gauge symmetry in the interplay of fundamental particles using controllable large-scale quantum simulators.

Quantum electrodynamics (QED), the paradigmatic example of a gauge-invariant quantum field theory, has fundamentally shaped our understanding of modern physics. Gauge invariance in QED—described as a local U(1) symmetry of the Hamiltonian—ties electric fields E and charges ρ to each other through Gauss’s law, ∇ ⋅ E = ρ . The standard model of particle physics, including, for example, quantum chromodynamics, has been designed on the basis of this principle of gauge invariance. However, despite impressive feats17,18, it remains extremely difficult for classical computers to solve the dynamics of gauge theories3–6. Quantum simulation offers the tantalizing prospect of sidestepping this difficulty by microscopically engineering gauge-theory dynamics in table-top experiments, based on, for example, trapped ions, superconducting qubits and cold atoms7–16. In the quest for experimentally realizing gauge-theory phenomena, a large quantum system is essential to mitigate finite-size effects irrelevant to the theory in the thermodynamic limit. Moreover, while Gauss’s law in QED holds fundamentally, it is merely approximate when engineered in present-day cold-atom experiments keeping both fermionic matter and dynamical

gauge fields explicitly15,16. Thus, it is a crucial challenge to determine the reliability of gauge invariance in large-scale quantum simulators19. Here we verify Gauss’s law in a many-body quantum simulator. To this end, we devise a mapping from a Bose–Hubbard model (BHM) describing ultracold atoms in an optical superlattice to a U(1) lattice gauge theory with fermionic matter. We exploit the formalism of quantum link models (QLMs)3,20, which incorporate salient features of QED, in particular Coleman’s phase transition in one spatial dimension (1D) at topological angle θ = π (ref. 21). Here, gauge-invariant ‘matter–gauge field’ interactions emerge through a suitable choice of Hubbard parameters, effectively penalizing unwanted processes. Experimentally, we prepare large arrays of atoms in high-fidelity staggered chains, realize the quantum phase transition by slowly ramping the lattice potentials, and observe the characteristic dynamics via probing of site occupancies and density–density correlations. In our model, Gauss’s law constrains boson occupations over sets of three adjacent sites in the optical lattice. By tracking the coherent evolution of the state in these elementary units, we detect the degree of local violation of Gauss’s law.

1Hefei National Laboratory for Physical Sciences at Microscale, University of Science and Technology of China, Hefei, China. 2Department of Modern Physics, University of Science and Technology of China, Hefei, China. 3Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany. 4CAS Centre for Excellence and Synergetic Innovation Centre in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, China. 5Institute for Theoretical Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany. 6Department of Physics, University of Trento, Trento, Italy. 7Kirchhoff Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany. 8Present address: Institut für Experimentalphysik, Universität Innsbruck, Innsbruck, Austria. ✉e-mail: [email protected]; [email protected]; [email protected]


Quantum phase transition
Our target model is a U(1) gauge theory on a 1D spatial lattice with ℓ = 0, 1, …, N − 1 sites, described by the Hamiltonian (see Methods)

$$\hat H_{\mathrm{QLM}} = \sum_{\ell}\left[-\frac{i\tilde t}{2}\left(\hat\psi_{\ell}\hat S^{+}_{\ell,\ell+1}\hat\psi_{\ell+1} - \mathrm{h.c.}\right) + m\,\hat\psi^{\dagger}_{\ell}\hat\psi_{\ell}\right]. \qquad (1)$$

Using the QLM formalism, the gauge field is represented by spin-1/2 operators Ŝ^z_{ℓ,ℓ+1} on links connecting neighbouring lattice sites, Ê_{ℓ,ℓ+1} ≡ (−1)^{ℓ+1} Ŝ^z_{ℓ,ℓ+1}, corresponding to an electric field coarse-grained to two values (red and blue arrows in Fig. 1). Further, h.c. denotes the hermitian conjugate. Using staggered fermions22, matter fields ψ̂_ℓ represent particles and antiparticles on alternating sites, with alternating electric charge Q̂_ℓ = (−1)^ℓ ψ̂†_ℓψ̂_ℓ. By tuning the fermion rest mass m, we can drive the system across a quantum phase transition from a charge-dominated disordered phase to an ordered phase, characterized by the spontaneous breaking of charge and parity (C/P) symmetries21,23; see Fig. 1a. During the transition, owing to the term proportional to t̃ (the gauge–matter coupling strength), particle–antiparticle pairs annihilate, accompanied by the correct adjustment of the electric field according to Gauss's law. Gauss's law requires the generators of the U(1) gauge transformations,

$$\hat G_{\ell} = (-1)^{\ell+1}\left(\hat S^{z}_{\ell,\ell+1} + \hat S^{z}_{\ell-1,\ell} + \hat\psi^{\dagger}_{\ell}\hat\psi_{\ell}\right), \qquad (2)$$

to be conserved quantities for each matter site ℓ. We choose, as is usual, to work in the charge-neutral sector, where the state |ψ⟩ fulfils Σ_ℓ Q̂_ℓ|ψ⟩ = 0, and in the Gauss's law sector specified by Ĝ_ℓ|ψ⟩ = 0, ∀ℓ. Ensuring adherence to this local conservation law is the main experimental challenge, as it intrinsically constrains matter and electric fields across three neighbouring sites (see Fig. 1b). We simulate this QLM with ultracold bosons in a 1D optical superlattice as sketched in Fig. 1c (see Methods for details). The experiment is governed by the BHM

$$\hat H_{\mathrm{BHM}} = \sum_{j}\left[-J\left(\hat b^{\dagger}_{j}\hat b_{j+1} + \mathrm{h.c.}\right) + \frac{U}{2}\hat n_{j}(\hat n_{j}-1) + \varepsilon_{j}\hat n_{j}\right], \qquad (3)$$

where b̂†_j, b̂_j are creation and annihilation operators, n̂_j = b̂†_j b̂_j, J is the tunnelling strength, and U is the on-site interaction. The energy offset ε_j = (−1)^j δ/2 + jΔ consists of a linear tilt Δ to suppress long-range tunnelling along the 1D chain, and a staggered superlattice potential δ. Here, the even sites j of the superlattice correspond to the matter sites ℓ in the lattice gauge theory, while we identify odd sites j with link indices ℓ, ℓ + 1. Choosing δ ≫ J and on-site interaction U ≈ 2δ effectively constrains the system to the relevant subspace limited to the number states |0⟩, |2⟩ on odd (gauge) sites and |0⟩, |1⟩ on even (matter) sites. On this subspace, we can hence identify the operators as Ŝ^+_{ℓ,ℓ+1} ≃ (b̂†_{j=2ℓ+1})²/√2 and similarly ψ̂_ℓ Ŝ^+_{ℓ,ℓ+1} ψ̂_{ℓ+1} ≃ b̂_{2ℓ}(b̂†_{2ℓ+1})² b̂_{2ℓ+2}/√2 (using a Jordan–Wigner transformation for the matter sites), see Methods. This term can be physically realized by atoms on neighbouring matter sites combining into a doublon (that is, two indistinguishable atoms residing in one site). The rest mass corresponds to m = δ − U/2, which enables us to cross the phase transition by tuning m through zero. The strength of the gauge-invariant coupling (t̃ ≈ 8√2 J²/U ≈ 70 Hz at resonance m ≈ 0) is much larger than the dissipation rate, enabling a faithful implementation in a large many-body system.

The experiment starts with a quasi two-dimensional Bose–Einstein condensate of about 100,000 87Rb atoms in the x–y plane. We implement a recently demonstrated cooling method in optical lattices to create a Mott insulator with a filling factor of 0.992(1) (ref. 24). Figure 2a shows a uniform area containing 10,000 lattice sites, from which a region of interest (ROI) with 71 × 36 sites is selected for simulating the gauge theory. A lattice along the y axis with depth 61.5(4)Er isolates the system into copies of 1D chains. Here, Er = h²/(2mRb λs²) is the recoil energy, with λs = 767 nm the wavelength of a 'short lattice' laser, h the Planck constant, and mRb the atomic mass. The near-unity filling enables the average length of defect-free chains to be longer than the 71 sites. Even without a quantum gas microscope, the size of our many-body system is confirmed by counting the lattice sites with single-site resonance imaging (see Methods). Along the x direction, another lattice, with wavelength λl = 2λs (the 'long lattice'), is employed to construct a superlattice that divides the trapped atoms into odd and even sites. Two different configurations of the superlattice are used here.

Fig. 1 | Quantum simulation of a U(1) lattice gauge theory. a, A quantum phase transition separates a charge-proliferated phase from a C/P symmetry-breaking phase where the electric field (triangles) passes unhindered through the system (sketched at particle rest mass m → −∞ and +∞, respectively). The Feynman diagram, depicted as wavy lines, describes the gauge-invariant annihilation of particles and antiparticles (circles with charges) with a coupling strength of t̃. The transition leads to two opposite configurations in terms of the directions of the electric field. b, Gauss's law strongly restricts the permitted gauge-invariant configurations of charges and neighbouring electric fields. The matter field consists of antiparticle and particle sites. The mapping from QLM to BHM is sketched in the shaded diagrams, where the eigenvalues in Gauss's law are labelled below each site. c, Simulation of the model on a 71-site Bose–Hubbard system consisting of ultracold atoms in an optical superlattice. See main text for nomenclature. We sweep through the quantum phase transition by controlling the Hubbard parameters over time t. Particle–antiparticle annihilation is realized by atoms initially residing on even (shallow) sites binding into doublons on odd (deep) sites. The upper and lower panels depict the initial and final state, respectively. Insets are their corresponding atomic densities.
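The local conservation expressed by equation (2) can be checked directly on a toy version of equations (1)–(3). The sketch below builds the quantum-link Hamiltonian for three matter sites with hardcore-boson matter (Jordan–Wigner phases do not affect this occupation-number bookkeeping check) and verifies that a bulk Gauss generator commutes with it; the system size and couplings are arbitrary choices, not the experimental values.

```python
import numpy as np
from functools import reduce

bdag = np.array([[0.0, 0.0], [1.0, 0.0]], dtype=complex)  # create one quantum
b = bdag.conj().T                                          # annihilate
n = bdag @ b                                               # number operator
sz = n - 0.5 * np.eye(2)            # link spin: doublon present = spin up
I2 = np.eye(2, dtype=complex)

# Qubit layout: m0, g01, m1, g12, m2  (matter, link, matter, link, matter)
def embed(op, site, nq=5):
    ops = [I2] * nq
    ops[site] = op
    return reduce(np.kron, ops)

t_tilde, mass = 1.0, 0.4
H = mass * (embed(n, 0) + embed(n, 2) + embed(n, 4))
for ml, g, mr in [(0, 1, 2), (2, 3, 4)]:
    # psi_l S+_{l,l+1} psi_{l+1}: annihilate both matter sites, raise the link
    hop = embed(b, ml) @ embed(bdag, g) @ embed(b, mr)
    H += -0.5j * t_tilde * (hop - hop.conj().T)

# Gauss generator at the bulk matter site (qubit 2, flanked by links 1 and 3);
# the overall (-1)^(l+1) prefactor is dropped, as it cannot affect [H, G] = 0.
G = embed(sz, 1) + embed(sz, 3) + embed(n, 2)
print("norm of [H, G] =", np.linalg.norm(H @ G - G @ H))   # ~1e-16
```

The same bookkeeping shows why the three Fock states |…010…⟩, |…200…⟩ and |…002…⟩ of Fig. 1b are the only gauge-invariant configurations of a matter site and its two links.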


Fig. 2 | Probing the many-body dynamics. a, Experimental sequence. Starting from a near perfect Mott insulator in the 'short' lattice, the initial staggered state is prepared by removing the atoms on odd sites. We drive the phase transition by ramping the mass m = δ − U/2 and the tunnelling J. Afterwards, the occupation probabilities p^(m/g)(n) are identified for even and odd sites by engineering the atomic states with measurement schemes (i)–(iii), see Methods section 'State preparation and detection'. b, c, Time-resolved observation of the C/P-breaking phase transition. As revealed by the probabilities (b), atoms initially residing on even sites (upper panel) bind into doublons on odd sites (lower panel), corresponding to an annihilation of particles on matter sites and a deviation of the electric field, quantified by Σ_ℓ (−1)^ℓ ⟨Ê_{ℓ,ℓ+1}(t) − Ê_{ℓ,ℓ+1}(0)⟩/(2N). c, Experimental observables and their correspondence in the QLM. Measured results agree well with theoretical predictions (solid curves) from the time-adaptive density matrix renormalization group (t-DMRG) method, where our numerics takes into account spatial inhomogeneity and sampling over noisy experimental parameters (see Methods). Error bars and shaded regions, s.d. The dashed lines represent the exact evolution of the ideal QLM (see Methods).

First, to manipulate quantum states in isolated double wells, which we use for state initialization and readout, the superlattice phase is controlled to match the positions of the intensity maxima of the short and long lattices. Second, in contrast, when performing the phase transition, overlapping the intensity minima of the lattices enables the production of identical tunnelling strength between neighbouring sites.

To prepare the initial state, we selectively address and flip the hyperfine state of the atoms residing on odd sites24, followed by their removal using resonant light. The remaining atoms on the even sites of the 1D chains correspond to an overall charge-neutral configuration. They form the ground state of our target gauge theory, equation (1), at m → −∞ in the Ĝ_ℓ|ψ⟩ = 0 sub-sector.

The phase transition is accessed by slowly tuning the superlattice structure in terms of the Hubbard parameters. The linear potential Δ = 57 Hz per site (formed by the projection of gravity) as well as the main contribution to the staggered potential δ = 0.73(1) kHz (arising from the depth of the long lattice) are kept constant during the 120-ms transition process. This ramp speed has been chosen to minimize both non-adiabatic excitations when crossing the phase transition and undesired heating effects. As shown in Fig. 2a, the tunnelling strength J/U is ramped from 0.014 up to 0.065 and back to 0.019. Simultaneously, we linearly lower the z-lattice potential to ramp the on-site interaction U from 1.82(1) kHz to 1.35(1) kHz. This ramp corresponds to driving the system from a large and negative m, through its critical point at m ≈ 0, to a large and positive value deep within the C/P-broken phase.

To probe the system dynamics, we ramp up the lattice barriers after evolution time t and extract the probability distributions p_j^(m/g)(n) of the occupation number n. With our optical resolution of about 1 μm, in situ observables average the signal over a small region around site j. Our measurements distinguish between even matter sites (m) and odd gauge-field sites (g). We illustrate the procedure for p_j^(g)(n).

To extract it for n ≤ 3, we combine the three schemes sketched in Fig. 2a ((i)–(iii); see Methods for a detailed application of (i)–(iii) to obtain the p_j^(m/g)(n)). (i) The mean occupation of gauge-field sites is recorded by in situ absorption imaging after applying a site-selective spin flip in the superlattice, which gives n̄_j^(g) = Σ_n n p_j^(g)(n) with natural numbers n. (ii) We use a photoassociation (PA) laser to project the occupancy into odd or even parity. Unlike selecting out doublons via Feshbach resonances25,26, the PA-excited molecule decays spontaneously and gains kinetic energy to escape from the trap. After this parity projection, the residual atomic density is n_c^(g) = Σ_n mod2(n) p_j^(g)(n). (iii) A further engineering of atoms in double wells allows us to measure the probabilities of occupancies larger than two. We first clean the matter sites and then split the atoms into double wells. After a subsequent parity projection via illumination with PA light, the remaining atomic density is n_c^(g) + 2p_j^(g)(2). From the population, we find that high-energy excitations, such as n = 3, are negligible throughout our experiment.

As the data for p_j^(m/g)(n) in Fig. 2b, c show, after the ramp through the phase transition, on average 80(±3)% of the atoms have left the even sites and 39(±2)% of double occupancy is observed on the odd sites (we checked the coherence and reversibility of the process by ramping back from the final state, see Methods). This corresponds to the annihilation of 78(±5)% of particle–antiparticle pairs. From the remaining 22(±5)% of particles that have not annihilated, we estimate the average size of ordered domains after the ramp to be 9 ± 2 sites. The formation of ordered domains can be further confirmed by measuring density–density correlations C(i, j) = ⟨n̂_i n̂_j⟩ (refs. 27–29). We extract the correlation functions in momentum space after an 8-ms time of flight. For a bosonic Mott state with unity filling, the correlation function shows a bunching effect at momentum positions of ±2ħk, where k = 2π/λs is the wave vector. Two more peaks at ±ħk appear in the correlation function of our initial state owing to the staggered distribution, as shown in Fig. 3a.
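Schemes (i)–(iii) determine the full distribution p_j(n) for n ≤ 3 because, together with normalization, they form a linear system. A minimal sketch with made-up (self-consistent) measurement values, not the experimental numbers:

```python
import numpy as np

# Measured quantities: (i) mean occupation, (ii) density after parity
# projection, (iii) density after doublon splitting + parity projection.
n_bar, n_c, n_split = 0.94, 0.12, 0.90     # hypothetical values

# Linear system in (p0, p1, p2, p3):
#   p0 + p1 +  p2 +  p3 = 1          (normalization)
#        p1 + 2p2 + 3p3 = n_bar      (i)
#        p1        +  p3 = n_c       (ii)
#        p1 + 2p2 +  p3 = n_split    (iii)
A = np.array([[1, 1, 1, 1],
              [0, 1, 2, 3],
              [0, 1, 0, 1],
              [0, 1, 2, 1]], dtype=float)
rhs = np.array([1.0, n_bar, n_c, n_split])
p = np.linalg.solve(A, rhs)
print("p(0..3) =", p.round(3))             # -> [0.49, 0.10, 0.39, 0.02]
```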





Fig. 3 | Density–density correlation. a, Left, idealized sketches of the initial (top) and final (bottom) state. The domain length of the final state equals the distance between two unconverted atoms, which are removed from the system before measurement. Right, measured interference patterns in the initial and final states (averaged over 523 and 1,729 images, respectively). The x lattice defining the 1D chains is tilted by 4° relative to the imaging plane. b, Single-pixel sections along the x direction through the centre of the patterns in a. In the final state, additional peaks at ±0.5ħk appear, indicating the emergence of a new ordering.
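The ordering signature in Fig. 3b can be reproduced on idealized occupation patterns: the staggered initial chain has a density period of 2 sites (λs), while perfectly ordered doublons recur every 4 sites (2λs), putting new spectral weight at half the wavevector. A sketch assuming defect-free patterns:

```python
import numpy as np

L = 72
n_init = np.tile([1, 0], L // 2)          # atoms on even sites (period 2)
n_final = np.tile([0, 0, 2, 0], L // 4)   # ordered doublons (period 4)

def structure_factor(n):
    S = np.abs(np.fft.rfft(n - n.mean())) ** 2
    k = np.fft.rfftfreq(n.size)           # in cycles per site
    return k, S

for label, n in [("initial", n_init), ("final", n_final)]:
    k, S = structure_factor(n)
    def at(f):
        return S[np.argmin(np.abs(k - f))]
    print(f"{label}: S(k=0.25) = {at(0.25):7.1f}, S(k=0.50) = {at(0.50):7.1f}")
# The period-4 final state puts weight at k = 0.25 cycles per site, i.e. at
# half the initial wavevector -- the +-0.5 hbar*k peaks seen in Fig. 3b.
```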


The width of these peaks is mainly determined by the spatial resolution of the absorption imaging. The correlation function of the final state in Fig. 3 shows two broader peaks at ±0.5ħk, which indicates the emergence of a new ordering with a doubled spatial period. The finite correlation length ξ of the final state broadens the interference pattern. Assuming exponential decay of density–density correlations, C(i, j) ∝ exp(−|i − j|/ξ), we obtain the correlation length of the final state as ξ = 4.4 (+2.0/−1.0) sites (see Methods). Thus, we can achieve many-body regions with spontaneously broken C/P symmetry.

Finally, we quantify the violation of Gauss's law, for which we monitor the probabilities p_{|…n_{j−1}n_j n_{j+1}…⟩} of the three allowed gauge-invariant Fock states sketched in Fig. 1b, |…n_{j−1}n_j n_{j+1}…⟩ = |…010…⟩, |…200…⟩ and |…002…⟩, with j even. To achieve this, we have developed a method to measure the density correlations between neighbouring lattice sites within double wells. Unlike the approach in Fig. 2a, which does not give access to correlations between sites, here we distinguish different states by their dynamical features (Fig. 4a). In particular, we use the characteristic tunnelling frequency to distinguish the target states from the others. For example, to detect the state |…010…⟩, we perform tunnelling sequences between double wells in two mirrored superlattice configurations (setting the parameters to J/h = 68.9(5) Hz and U/h = 1.71(1) kHz to avoid frequency overlap between different processes). The tunnelling frequency 2J/h for the state |10⟩ in a double well is one order of magnitude higher than the superexchange frequency 4J²/(hU) for the states |20⟩ or |11⟩. Thus, the oscillation amplitudes at frequency 2J/h yield the probabilities p_{|…01n_{j+1}…⟩} and p_{|…n_{j−1}10…⟩}. In addition, the probability p_{|…n_{j−1}1n_{j+1}…⟩} equals p^(m)(1) (see Fig. 2b, c). With these, we can deduce a lower bound p_{|…010…⟩} ≥ p_{|…01n_{j+1}…⟩} + p_{|…n_{j−1}10…⟩} − p_{|…n_{j−1}1n_{j+1}…⟩}. We obtain the population of the states |…002…⟩ and |…200…⟩ in a similar fashion (see Methods). From these measurements, we can obtain the degree of gauge violation ϵ(t), defined as the spatial average of 1 − ⟨ψ(t)|P̂_ℓ|ψ(t)⟩, where P̂_ℓ projects the system state |ψ(t)⟩ onto the local gauge-invariant subspace. As shown in Fig. 4b, throughout our entire experiment the summed probabilities of the gauge-invariant states remain close to 1. Thus, our many-body quantum simulator retains gauge invariance to an excellent degree, even during and after a sweep through a quantum phase transition.
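The lower bound used above is ordinary inclusion-exclusion: both measured events require a singly occupied matter site, so P(A ∩ B) ≥ P(A) + P(B) − P(A ∪ B) ≥ P(A) + P(B) − P(matter = 1). With illustrative (not measured) numbers:

```python
# Inclusion-exclusion bound for the probability of |...010...>.
p_01x = 0.62   # P(left link empty AND matter singly occupied), any right
p_x10 = 0.58   # P(matter singly occupied AND right link empty), any left
p_x1x = 0.70   # P(matter singly occupied) = p^(m)(1)

p_010_lower = p_01x + p_x10 - p_x1x
print(f"p(|...010...>) >= {p_010_lower:.2f}")   # -> 0.50
```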

Fig. 4 | Fulfilment of Gauss's law. a, Correlated measurements detect gauge-invariant states |…n_{j−1}n_j n_{j+1}…⟩, j even, within gauge–matter–gauge three-site units. For probing |…010…⟩ (left), we first flip the hyperfine levels of the atoms on odd sites. Then, we change the superlattice into two kinds of double-well structures and monitor the tunnelling of the middle atoms. For |…002…⟩ and |…200…⟩ (right), we split the doublons into two sites and mark them by the hyperfine levels. Their state populations correlate to the oscillation amplitudes of tunnelling dynamics. b, The state populations of the gauge-invariant states are plotted in the upper graph, where the initial and final phases of the QLM are sketched in the shaded diagrams. From these probabilities, we extract the gauge violation ϵ(t) = 1 − (p_{|…010…⟩} + p_{|…002…⟩} + p_{|…200…⟩}), as shown in the bottom graph. While the inversion between the Fock states after the phase transition is stronger in the ideal QLM (exact numerics, orange and blue curves), fulfilment of Gauss's law and a high level of gauge invariance is retained throughout. The experimental results are in quantitative agreement with t-DMRG calculations for our isolated Bose–Hubbard system (red curve). Error bars and shading, s.d.

In conclusion, we have developed a fully tunable many-body quantum simulator for a U(1) gauge theory and demonstrated that it faithfully implements gauge invariance, the essential property of lattice gauge theories. Future extensions may give access to other symmetry groups and gauge theories in higher dimensions. The main challenge for the latter is to combine the model with a plaquette term that has been demonstrated previously in the present apparatus13. Importantly, our results enable the controlled analysis of gauge theories far from equilibrium, which is notoriously difficult for classical computers3–6. A plethora of target phenomena offers itself for investigation, including false vacuum decay30,31, dynamical transitions related to the topological θ-angle32–34, and thermal signatures of gauge theories under extreme conditions35.

Online content Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-020-2910-8.

1. Weinberg, S. The Quantum Theory of Fields: Volume 1, Foundations (Cambridge Univ. Press, 2005).
2. Gattringer, C. & Lang, C. Quantum Chromodynamics on the Lattice: An Introductory Presentation (Springer, 2009).
3. Wiese, U.-J. Ultracold quantum gases and lattice systems: quantum simulation of lattice gauge theories. Ann. Phys. 525, 777–796 (2013).
4. Zohar, E., Cirac, J. I. & Reznik, B. Quantum simulations of lattice gauge theories using ultracold atoms in optical lattices. Rep. Prog. Phys. 79, 014401 (2016).
5. Dalmonte, M. & Montangero, S. Lattice gauge theory simulations in the quantum information era. Contemp. Phys. 57, 388–412 (2016).
6. Bañuls, M. et al. Simulating lattice gauge theories within quantum technologies. Eur. Phys. J. D 74, 1–42 (2020).
7. Clark, L. W. et al. Observation of density-dependent gauge fields in a Bose–Einstein condensate based on micromotion control in a shaken two-dimensional lattice. Phys. Rev. Lett. 121, 030402 (2018).
8. Görg, F. et al. Realization of density-dependent Peierls phases to engineer quantized gauge fields coupled to ultracold matter. Nat. Phys. 15, 1161–1167 (2019).
9. Martinez, E. A. et al. Real-time dynamics of lattice gauge theories with a few-qubit quantum computer. Nature 534, 516–519 (2016).
10. Bernien, H. et al. Probing many-body dynamics on a 51-atom quantum simulator. Nature 551, 579–584 (2017).
11. Surace, F. M. et al. Lattice gauge theories and string dynamics in Rydberg atom quantum simulators. Phys. Rev. X 10, 021041 (2020).
12. Kokail, C. et al. Self-verifying variational quantum simulation of lattice models. Nature 569, 355–360 (2019); correction 580, E9 (2020).
13. Dai, H.-N. et al. Four-body ring-exchange interactions and anyonic statistics within a minimal toric-code Hamiltonian. Nat. Phys. 13, 1195–1200 (2017).
14. Klco, N. et al. Quantum-classical computation of Schwinger model dynamics using quantum computers. Phys. Rev. A 98, 032331 (2018).
15. Schweizer, C. et al. Floquet approach to Z2 lattice gauge theories with ultracold atoms in optical lattices. Nat. Phys. 15, 1168–1173 (2019).
16. Mil, A. et al. A scalable realization of local U(1) gauge invariance in cold atomic mixtures. Science 367, 1128–1130 (2020).
17. Calzetta, E. A. & Hu, B.-L. B. Nonequilibrium Quantum Field Theory (Cambridge Univ. Press, 2008).
18. Berges, J. Introduction to nonequilibrium quantum field theory. AIP Conf. Proc. 739, 3–62 (2004).


19. Halimeh, J. C. & Hauke, P. Reliability of lattice gauge theories. Phys. Rev. Lett. 125, 030503 (2020).
20. Chandrasekharan, S. & Wiese, U. J. Quantum link models: a discrete approach to gauge theories. Nucl. Phys. B 492, 455–471 (1997).
21. Coleman, S. More about the massive Schwinger model. Ann. Phys. 101, 239–267 (1976).
22. Susskind, L. Lattice fermions. Phys. Rev. D 16, 3031–3039 (1977).
23. Rico, E., Pichler, T., Dalmonte, M., Zoller, P. & Montangero, S. Tensor networks for lattice gauge theories and atomic quantum simulation. Phys. Rev. Lett. 112, 201601 (2014).
24. Yang, B. et al. Cooling and entangling ultracold atoms in optical lattices. Science 369, 550–553 (2020).
25. Winkler, K. et al. Repulsively bound atom pairs in an optical lattice. Nature 441, 853–856 (2006).
26. Jördens, R., Strohmaier, N., Gunter, K., Moritz, H. & Esslinger, T. A Mott insulator of fermionic atoms in an optical lattice. Nature 455, 204–207 (2008).
27. Altman, E., Demler, E. & Lukin, M. D. Probing many-body states of ultracold atoms via noise correlations. Phys. Rev. A 70, 013603 (2004).
28. Fölling, S. et al. Spatial quantum noise interferometry in expanding ultracold atom clouds. Nature 434, 481–484 (2005).
29. Simon, J. et al. Quantum simulation of antiferromagnetic spin chains in an optical lattice. Nature 472, 307–312 (2011).
30. Hauke, P., Marcos, D., Dalmonte, M. & Zoller, P. Quantum simulation of a lattice Schwinger model in a chain of trapped ions. Phys. Rev. X 3, 041018 (2013).
31. Yang, D. Y. et al. Analog quantum simulation of (1+1)-dimensional lattice QED with trapped ions. Phys. Rev. A 94, 052321 (2016).
32. Zache, T. et al. Dynamical topological transitions in the massive Schwinger model with a θ term. Phys. Rev. Lett. 122, 050403 (2019).
33. Huang, Y.-P., Banerjee, D. & Heyl, M. Dynamical quantum phase transitions in U(1) quantum link models. Phys. Rev. Lett. 122, 250401 (2019).
34. Magnifico, G. et al. Symmetry-protected topological phases in lattice gauge theories: topological QED2. Phys. Rev. D 99, 014503 (2019).
35. Berges, J., Floerchinger, S. & Venugopalan, R. Thermal excitation spectrum from entanglement in an expanding quantum string. Phys. Lett. B 778, 442–446 (2018).

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© The Author(s), under exclusive licence to Springer Nature Limited 2020

Methods

Target model
Our experiment is motivated by the lattice Schwinger model of QED in one spatial dimension in a Kogut–Susskind Hamiltonian formulation36,

Ĥ_QED = (a/2) ∑_ℓ [Ê²_{ℓ,ℓ+1} + m(−1)^ℓ ψ̂†_ℓψ̂_ℓ] − (i/2a) ∑_ℓ [ψ̂†_ℓ Û_{ℓ,ℓ+1} ψ̂_{ℓ+1} − h.c.],   (4)

with lattice spacing a, gauge coupling e, and where we have set ħ and c to unity for notational brevity. Gauge links and electric fields fulfil the commutation relations [Ê_{ℓ,ℓ+1}, Û_{m,m+1}] = eδ_{ℓ,m}Û_{ℓ,ℓ+1}, while fermion field operators obey canonical anti-commutation relations {ψ̂†_ℓ, ψ̂_m} = δ_{ℓm}. Here, we use 'staggered fermions'22, which are an elegant way to represent oppositely charged particles and antiparticles using a single set of spin-less fermionic operators, at the expense of alternating signs on even and odd sites. Gauge transformations are expressed in terms of the local Gauss's-law operators

Ĝ_ℓ = Ê_{ℓ,ℓ+1} − Ê_{ℓ−1,ℓ} − e[ψ̂†_ℓψ̂_ℓ + ((−1)^ℓ − 1)/2].   (5)

These generate local U(1) transformations parametrized by real numbers α_ℓ, under which an operator Ô transforms as Ô′ = V̂†ÔV̂, with V̂ = exp[i∑_ℓ α_ℓ Ĝ_ℓ]. Explicitly, the matter and gauge fields transform according to ψ̂′_ℓ = exp(−ieα_ℓ)ψ̂_ℓ, Û′_{ℓ,ℓ+1} = exp(−ieα_ℓ)Û_{ℓ,ℓ+1}exp(ieα_{ℓ+1}) and Ê′_{ℓ,ℓ+1} = Ê_{ℓ,ℓ+1}. In the absence of external charges, a physical state |ψ(t)⟩ is required to be invariant under a gauge transformation, that is, V̂|ψ(t)⟩ = |ψ(t)⟩. Thus, gauge invariance under Hamiltonian time evolution is equivalent to Ĝ_ℓ|ψ(t)⟩ = 0, ∀ℓ, t; that is, [Ĝ_ℓ, Ĥ_QED] = 0 and the Ĝ_ℓ are conserved charges. In our experiments, we achieve explicit probing of this local conservation law (see Fig. 4b and further below).
Using the QLM formalism20, we represent U(1) gauge fields in our analogue quantum simulator by spin-1/2 operators, Û_{ℓ,ℓ+1} → (2/√3)Ŝ⁺_{ℓ,ℓ+1} ((2/√3)Ŝ⁻_{ℓ,ℓ+1}) for odd (even) ℓ, as well as Ê_{ℓ,ℓ+1} → e(−1)^{ℓ+1}Ŝᶻ_{ℓ,ℓ+1}. In this spin-1/2 QLM representation, the electric-field energy term proportional to Ê² is a constant energy offset and hence drops out. Therefore, without loss of generality, we may set e → 1 in the following. With the rather untypical sign conventions in the above QLM definition, and an additional particle–hole transformation on every second matter site (ψ̂_ℓ ↔ ψ̂†_ℓ, ℓ odd), the alternating signs of the staggered fermions are cancelled, yielding a simpler homogeneous model30,31. The Hamiltonian takes the form of equation (1) and Gauss's law is represented by equation (2).
For large negative values of the mass, m → −∞, the ground state is given by fully occupied fermion sites and an alternating electric field Ê. For large positive masses, by contrast, the absence of fermions is energetically favourable. In this configuration there are no charges, and hence the electric fields are aligned, in a superposition of all pointing to the left and all pointing to the right (see Fig. 1). In between these two extreme cases, the system hosts a second-order quantum phase transition, commonly termed Coleman's phase transition21,37. While the quantum link Hamiltonian, equation (1), is invariant under a transformation37 of parity (P) and charge conjugation (C), the ground state does not always respect these symmetries: the vacuum state for m → −∞ is C- and P-invariant, but the respective vacua in the m → ∞ phase are C- and P-broken. An order parameter for the transition is given by the staggered change of the electric fields with respect to the initial configuration, ∑_ℓ ⟨Ê_{ℓ,ℓ+1}(t) − Ê_{ℓ,ℓ+1}(0)⟩/(2N).

Mapping to Bose–Hubbard simulator
Starting from the Hamiltonian of equation (1), we employ a Jordan–Wigner transformation with alternating minus signs,

ψ̂†_ℓ = (−1)^ℓ exp[iπ ∑_{ℓ′=0}^{ℓ−1} ((−1)^{ℓ′} σ̂ᶻ_{ℓ′} + 1)/2] σ̂⁺_ℓ,   (6a)

ψ̂_ℓ = (−1)^ℓ exp[−iπ ∑_{ℓ′=0}^{ℓ−1} ((−1)^{ℓ′} σ̂ᶻ_{ℓ′} + 1)/2] σ̂⁻_ℓ,   (6b)

ψ̂†_ℓψ̂_ℓ = (σ̂ᶻ_ℓ + 1)/2,   (6c)

replacing the fermionic operators ψ̂_ℓ/ψ̂†_ℓ by local spin-1/2 operators σ̂±_ℓ and non-local strings involving σ̂ᶻ_{ℓ′<ℓ}. We further identify the eigenstates of σ̂ᶻ_ℓ with two bosonic harmonic-oscillator eigenstates |0⟩_ℓ and |1⟩_ℓ. Projecting to the subspace ℋ_ℓ = span{|0⟩_ℓ, |1⟩_ℓ}, we then realize the spin operators in terms of bosonic creation/annihilation operators â†/â as follows:

σ̂⁻_ℓ = P_ℓ â_ℓ P_ℓ,   (7a)

σ̂⁺_ℓ = P_ℓ â†_ℓ P_ℓ,   (7b)

σ̂ᶻ_ℓ = P_ℓ (2â†_ℓâ_ℓ − 1) P_ℓ,   (7c)

where P_ℓ is the projector onto ℋ_ℓ. The bosonic commutation relations [â_ℓ, â†_ℓ] = 1, when restricted to ℋ_ℓ, imply the required algebra of the Pauli matrices, [σ̂ᶻ_ℓ, σ̂±_ℓ] = ±2σ̂±_ℓ and [σ̂⁺_ℓ, σ̂⁻_ℓ] = σ̂ᶻ_ℓ. Similarly, we identify the two eigenstates of the 'gauge' spins Ŝᶻ_{ℓ,ℓ+1} with two eigenstates |0⟩_{ℓ,ℓ+1} and |2⟩_{ℓ,ℓ+1} associated with further bosonic operators, d̂_{ℓ,ℓ+1} and d̂†_{ℓ,ℓ+1}, located at the links. Projecting to the subspace ℋ_{ℓ,ℓ+1} = span{|0⟩_{ℓ,ℓ+1}, |2⟩_{ℓ,ℓ+1}}, we have

Ŝ⁻_{ℓ,ℓ+1} = (1/√2) P_{ℓ,ℓ+1} (d̂_{ℓ,ℓ+1})² P_{ℓ,ℓ+1},   (8a)

Ŝ⁺_{ℓ,ℓ+1} = (1/√2) P_{ℓ,ℓ+1} (d̂†_{ℓ,ℓ+1})² P_{ℓ,ℓ+1},   (8b)

Ŝᶻ_{ℓ,ℓ+1} = (1/2) P_{ℓ,ℓ+1} (d̂†_{ℓ,ℓ+1}d̂_{ℓ,ℓ+1} − 1) P_{ℓ,ℓ+1},   (8c)

fulfilling the desired angular-momentum algebra. With these replacements, the Hamiltonian, equation (1), becomes

Ĥ_QLM = P ∑_ℓ [m â†_ℓâ_ℓ + (t̃/(2√2)) (â_ℓ (d̂†_{ℓ,ℓ+1})² â_{ℓ+1} + h.c.)] P,   (9)

where P = ∏_ℓ P_ℓ P_{ℓ,ℓ+1}. In the main text, the projection P is implied in the notation Â ≃ B̂, which abbreviates the equality Â = PB̂P for two operators Â and B̂. We emphasize that even though equation (9) is written in terms of bosonic operators, the projectors together with the Jordan–Wigner transform ensure, at the level of the Hamiltonian and diagonal observables, the equivalence with the original lattice gauge theory including fermionic matter.
The Hamiltonian in equation (9) is generated effectively in our Bose–Hubbard system through a suitable tuning of the parameters described in equation (3). As a preceding step, matter sites are identified with even sites of the optical superlattice (â_ℓ → b̂_{j=2ℓ}, ℓ = 0, …, N − 1) and gauge links with odd sites of the superlattice (d̂_{ℓ,ℓ+1} → b̂_{j=2ℓ+1}, ℓ = 0, …, N − 2). For our quantum simulator, we have N = 36 matter sites and 35 gauge

links, yielding a total of 71 bosonic sites. The principle for generating the gauge-invariant dynamics is conveniently illustrated in a three-site building block consisting of optical-lattice sites j = 0, 1, 2, as shown in Extended Data Fig. 1. The system is initialized in a state where all matter sites are singly occupied while gauge links are empty; that is, the system starts in the boson occupation state |101⟩. By choosing U, δ ≫ J and U ≈ 2δ, the two states |101⟩ and |020⟩ form an almost degenerate energy manifold α (not the absolute ground-state manifold of the BHM). This manifold is well separated from the states |110⟩ and |011⟩ shown in the middle of Extended Data Fig. 1. Hence, direct tunnelling of bosons into (and out of) the deep gauge-link well is energetically off-resonant and suppressed. The effective dynamics between the states within the manifold α is then described by degenerate perturbation theory38, leading to the Hamiltonian of equation (9) acting on the subspace indicated by the projectors P. An explicit calculation yields the effective coupling

t̃ = √2 J² [1/(δ + Δ) + 1/(U − δ + Δ) + 1/(δ − Δ) + 1/(U − δ − Δ)],   (10)

which close to resonance (U ≈ 2δ) reduces to the simple relation t̃ → 8√2 J²/U. A key ingredient for generating the term proportional to t̃ in this manner was the particle–hole transformation30,31. It enabled us to rewrite this term, which is usually interpreted as a kinetic hopping term, as the simultaneous motion of two bosons on neighbouring matter sites into the gauge link in between (and back). Note that our approach of constraining the dynamics to an energy manifold α of the total Hilbert space differs from previous work, in which the authors proposed to implement gauge symmetry in the ground-state manifold by adding a term proportional to ∑_x Ĝ²_x to the Hamiltonian19,30,39,40. The above result, in which we include couplings of our initial-state manifold to other manifolds at order J, is valid both for the building block and for the extended system close to resonance. In our many-body system, at this order in perturbation theory, the mass is represented by the energy imbalance of on-site interaction and staggering, m = δ − U/2, such that the gauge-invariant particle creation/annihilation becomes resonant once the fermions are massless. In the chosen parameter configuration, occupations other than the desired |0⟩, |1⟩ (even sites) and |0⟩, |2⟩ (odd sites) are highly suppressed, as we confirmed through numerics and direct measurement (see Fig. 2b). We also include a linear tilt potential to suppress the tunnelling of atoms to their next-nearest-neighbour sites, for example |02001⟩ ↔ |02100⟩ (here j = 0, …, 4), as such processes are also generated at second order and would break gauge invariance.
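As a numerical sanity check of equation (10), the following minimal sketch (our own illustration, not the authors' code) exactly diagonalizes the two-boson, three-site building block. The Bose–Hubbard form assumed here, with tunnelling J, on-site interaction U, a staggered offset δ that deepens the gauge-link site and a linear tilt Δ per site, follows the verbal description above; the splitting of the |101⟩/|020⟩ manifold should reproduce t̃ up to higher-order corrections.

import itertools
import numpy as np

# Parameters quoted in the Methods (all in Hz)
J, U, Delta = 105.0, 1170.0, 57.0
delta = U / 2 + 2 * J**2 / U      # shifted resonance, U = 2*delta - 4J^2/U

# Fock basis: two bosons on sites j = 0, 1, 2 (site 1 is the gauge link)
basis = [s for s in itertools.product(range(3), repeat=3) if sum(s) == 2]
index = {s: i for i, s in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for s in basis:
    i = index[s]
    # diagonal: on-site interaction, staggered link offset, linear tilt
    H[i, i] = (U / 2) * sum(n * (n - 1) for n in s) - delta * s[1] \
              + Delta * sum(j * n for j, n in enumerate(s))
    # nearest-neighbour tunnelling -J (b_j^dag b_{j+1} + h.c.)
    for j in (0, 1):
        if s[j + 1] > 0:
            t = list(s); t[j] += 1; t[j + 1] -= 1
            amp = -J * np.sqrt(s[j + 1] * (s[j] + 1))
            k = index[tuple(t)]
            H[k, i] += amp
            H[i, k] += amp

E = np.linalg.eigvalsh(H)
# The two lowest eigenstates are the detuned deep-well states ~|110>, |011>;
# the dressed manifold states (|101> +- |020>)/sqrt(2) are eigenstates 2 and 3.
splitting = E[3] - E[2]
t_eff = np.sqrt(2) * J**2 * (1 / (delta + Delta) + 1 / (U - delta + Delta)
                             + 1 / (delta - Delta) + 1 / (U - delta - Delta))
print(f"ED manifold splitting: {splitting:.1f} Hz")
print(f"equation (10):         {t_eff:.1f} Hz "
      f"(resonance limit 8*sqrt(2)*J^2/U = {8 * np.sqrt(2) * J**2 / U:.1f} Hz)")

The two numbers should agree at the few-per-cent level, with the residual difference coming from couplings beyond second order.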

State preparation and detection
The experiment begins with a quasi-two-dimensional quantum gas of ~8.6 × 10⁴ atoms prepared by adiabatically loading a nearly pure Bose–Einstein condensate into a single well of a pancake-shaped standing wave. The pancake trap is generated by interfering two blue-detuned laser beams at wavelength λ_s = 767 nm, which provides the confinement along the z axis. We implement staggered-immersion cooling of the quantum gas to create a Mott insulator with near-unity filling24. The cooling is performed within an optical superlattice in which the atoms are separated into superfluid and Mott-insulator phases with a staggered structure. The superlattice potential can be written as

V(x) = V_s cos²(kx) − V_l cos²(kx/2 + φ).   (11)

Here, V_s and V_l are the depths of the short and long lattices, respectively. The relative phase φ determines the superlattice configuration and is controlled by changing the relative frequency of the two lasers. At φ = 0, the atoms on odd and even sites of the double wells experience the same trap potential. In the cooling stage, we keep the phase at φ = 7.5(7) mrad to generate a staggered energy difference between the odd and even sites. After cooling, the final temperature (T_f) of the n = 2 Mott-insulator sample is k_B T_f = 0.046(10)U. Then, we freeze the atomic motion and remove the high-entropy atoms. From this low-entropy sample, a Mott insulator with 99.2(±0.1)% single occupancy is prepared by separating the atom pairs within the double-well superlattice. Figure 2a shows such a two-dimensional sample with a homogeneous regime containing 10⁴ lattice sites.
The technique of site-selective addressing is used extensively in our experiment24,41. For the Mott insulator, all the atoms are prepared in the hyperfine level |↓⟩ = |F = 1, m_F = −1⟩. We define another pseudospin state as |↑⟩ = |F = 2, m_F = −2⟩. When the direction of the bias field is along x and the phase of the electro-optical modulator is set to π/3, the energy splitting between |↓⟩ and |↑⟩ differs by 28 kHz between the odd and even sites. We shape the microwave pulse and perform a rapid adiabatic passage to selectively flip the hyperfine states of atoms on odd or even sites, achieving an efficiency of 99.5(±0.3)%. For state initialization, we flip the atomic levels on odd sites and then remove these atoms with a resonant laser pulse. This site-selective addressing is also employed for state readout, as shown in Fig. 1b. Combining these techniques with absorption imaging, we record the atomic densities of odd and even sites successively in a single experimental sequence.
We use a parity projection of the atom number to probe the distribution of site occupancies. The basic idea is to remove atom pairs by exciting them to an unstable molecular state via a photoassociation (PA) process. The laser frequency is 13.6 cm⁻¹ red-detuned from the D2 line of the ⁸⁷Rb atom, driving a transition to the v = 17 vibrational state in the 0_g⁻ channel. The decay rate of the atom pairs is 5.6(2) kHz at a laser intensity of 0.67 W cm⁻². After applying this PA light for 20 ms, the recorded atom loss equals the fraction of atoms in pairs. For detecting filling numbers beyond double occupancy, we first rearrange the atoms in the double wells and then detect the number parity with PA collisions. As shown in Fig. 2a, we remove the atoms occupying the even sites and then separate the atoms on odd sites into double wells. If the occupancy is more than two, we observe additional atom loss after applying the PA light. The remaining atom number after this operation is n̄_t^{(g)} = n̄_c^{(g)} + 2p^{(g)}(2). From these measurements, we obtain upper bounds on the probabilities of highly excited states with, for example, three or four atoms per site. Here, we consider excitations of up to three atoms, with probability p^{(m/g)}(3). Hence, the probabilities for matter or gauge sites derived from these detections are:

p^{(m/g)}(0) = 1 − ½[n̄_c^{(m/g)} + n̄_t^{(m/g)}],

p^{(m/g)}(1) = n̄_c^{(m/g)} − ½[n̄^{(m/g)} − n̄_t^{(m/g)}],

p^{(m/g)}(2) = ½[n̄_t^{(m/g)} − n̄_c^{(m/g)}],

p^{(m/g)}(3) = ½[n̄^{(m/g)} − n̄_t^{(m/g)}].   (12)

Here, n̄^{(m/g)}, n̄_c^{(m/g)} and n̄_t^{(m/g)} denote the mean atom numbers measured without PA light, after parity projection, and after first splitting the doublons, respectively.

These probabilities refer to the observables given by the detection methods (i)–(iii) in the main text.
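For concreteness, this inversion can be written in a few lines; the following sketch (our illustration, with variable names of our choosing) recovers p(0)–p(3) of equation (12) from the three measured mean atom numbers n̄, n̄_c and n̄_t:

import numpy as np

def occupation_probabilities(n_bar, n_c, n_t):
    """Invert equation (12): from the raw mean atom number n_bar, the
    parity-projected number n_c and the number n_t measured after first
    splitting doublons, recover the probabilities of 0-3 atoms per site."""
    p0 = 1 - 0.5 * (n_c + n_t)
    p1 = n_c - 0.5 * (n_bar - n_t)
    p2 = 0.5 * (n_t - n_c)
    p3 = 0.5 * (n_bar - n_t)
    return np.array([p0, p1, p2, p3])

# consistency check with a made-up occupation distribution
p_true = np.array([0.05, 0.85, 0.08, 0.02])
n_bar = p_true @ np.array([0, 1, 2, 3])   # full atom number
n_c   = p_true @ np.array([0, 1, 0, 1])   # pairs removed by PA light
n_t   = p_true @ np.array([0, 1, 2, 1])   # doublons split first, then PA
print(occupation_probabilities(n_bar, n_c, n_t))   # -> [0.05 0.85 0.08 0.02]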

Imaging individual sites
We develop a technique to detect individual atoms at any specific site of the 1D optical lattice, without requiring a quantum-gas microscope. By lifting the degeneracy of the transition frequency at each lattice site, we can flip the atomic state with a locally resonant microwave pulse. The potential used for shifting the energy levels is provided by a homogeneous magnetic field gradient. We set the magnetic axis along the x direction with a 7.3 G bias field, and meanwhile apply a ~70 G cm⁻¹ gradient field along this axis. In such a magnetic field, the energy splitting of the |↓⟩ → |↑⟩ transition changes by 5.6 kHz per lattice site. Addressing individual sites is realized by flipping the atomic internal level from |↓⟩ to |↑⟩ with a square π pulse. Afterwards, the atom number on the corresponding site is recorded on a CCD camera with in situ absorption imaging. This method enables the imaging of atoms with a spatial resolution better than the optical resolution of our imaging system; instead, the resolution is determined by the energy splitting between lattice sites and the Fourier broadening of the microwave transition. To achieve such high precision, we improve the stability of the magnetic field and of the position of the optical lattice. At an arbitrary microwave frequency, the position of the flipped stripe changes from shot to shot with a standard deviation of 0.11 μm. We set the Rabi frequency of the transition to 1.9 kHz to make sure that the Fourier broadening is smaller than the splitting between neighbouring sites. The number occupations of the lattice sites are measured by scanning the microwave frequency. The frequency starts at 6.819104 GHz and ends at 6.819532 GHz, covering 75 sites of the optical lattice. To benchmark our method, we perform this site-resolved imaging on a staggered state, as shown in Extended Data Fig. 2. Two essential features are captured by our measurement. First, the detected atomic density oscillates with the same period as the site occupancy of the staggered state. Second, the central position of the flipped atoms follows the staircase behaviour of the discrete lattice sites. With this site-resolved imaging technique, we can clearly locate each individual lattice site in the 1D chain.
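As a quick consistency check (ours) of the quoted numbers, the scan range divided by the number of covered sites reproduces the per-site splitting:

# Frequency-to-site mapping for the site-resolved addressing (our check).
f_start, f_stop = 6.819104e9, 6.819532e9   # Hz, scan limits quoted above
n_sites = 75                               # sites covered by the scan
span_per_site = (f_stop - f_start) / n_sites
print(f"{span_per_site:.0f} Hz per site")  # ~5,700 Hz, close to the 5.6 kHz/site splitting

def site_of(frequency):
    """Nearest lattice site addressed by a given microwave frequency."""
    return round((frequency - f_start) / span_per_site)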

Building blocks and state reversibility
Before driving the phase transition, the elementary parameters of the Hubbard model are calibrated precisely. The lattice depths are measured by applying a parametric excitation to the ground-band atoms. We then derive the Wannier functions of the atoms at the relevant lattice depths, from which the on-site interaction U and the tunnelling J are obtained from overlap integrals of the Wannier functions. The linear potential Δ = 57 Hz per site is formed by the projection of gravity along the x axis. The staggered offset δ is generated by the long lattice at a superlattice phase of φ = π/2.
We investigate the building block of our model and observe the coherent dynamics. To prepare a sample with isolated units, we quench the short lattice to 11.8(1)E_r for 3 ms starting from the staggered initial state. During this short time, some of the atoms start to bind into doublons and enter the odd sites. We remove the majority of atoms residing on the even sites, thereby creating a dilute sample of isolated building blocks. Afterwards, the superlattice is reshaped into the configuration φ = π/2 and the dynamics of the atoms is monitored at the resonant condition with U = 1.17(1) kHz and J = 105(1) Hz. The atoms oscillate between the states |020⟩ and |101⟩ via second-order hopping, forming an effective two-level system. In this building block, the self-energy correction shifts the resonant point to U = 2δ − 4J²/U. Extended Data Fig. 3a shows a Rabi oscillation with negligible decay in this dilute sample, indicating excellent coherence of the system. The amplitude of the oscillation is determined by the preparation fidelity of the building block.
Another characteristic feature of a coherent adiabatic transition is its reversibility. Figure 2 shows a quantum phase transition from the charge-dominated phase to the C/P-broken phase. In our 71-site sample, we compensate part of the residual potential of the blue-detuned lattice with a red-detuned dipole trap, which reduces the spatial inhomogeneity. The coherence of the system allows us to recover the particle–antiparticle phase in another 120 ms. We ramp the mass m and the tunnelling J in the reverse of the curves given in Fig. 2a, thereby decreasing m/t̃ from 11.6 to −39.8 in order to return to the charge-dominated phase. The occupancy of even sites recovers to 0.66(3) (Extended Data Fig. 4); the deviation from full recovery is attributed mainly to the non-adiabaticity of the ramping process.

Numerical calculations
The dynamics of our 71-site quantum simulator can hardly be computed up to the times of interest using classical numerical methods. However, we can calculate results at smaller system sizes and then check for convergence. To understand the quantum phase transition, we use exact diagonalization (ED) to calculate the QLM, and the time-adaptive density matrix renormalization group (t-DMRG) method to simulate the dynamics governed by the BHM.
To compute the dynamics in the ideal QLM, we use the mass m and coupling strength t̃ as deduced from the Hubbard parameters. The time-dependent dynamics in the QLM fully obeys Gauss's law. Using this conservation law to restrict our calculations to the implemented Hilbert space, we perform numerically exact diagonalizations for system sizes ranging from L = 8 up to L = 52 sites (see Extended Data Fig. 5a). Owing to finite-size effects, the dynamics for smaller systems (such as L = 8) shows strong oscillations after crossing the critical point. We find that the non-adiabaticity caused by the ramping reduces the fidelity of our final state with increasing system size, owing to the closure of the minimal gap at the critical point. The discrepancy between the curves for L = 40 and L = 52 is of the order of 10⁻³, indicating the volume convergence of our calculations. In Fig. 4b, the orange and blue curves for the state populations are the ED results for system size L = 52.
We apply t-DMRG42,43 to calculate the full dynamics of the 1D Bose–Hubbard chain. For our simulations, convergence is achieved with a time step of 10⁻⁴ s, a truncation threshold of 10⁻⁶ per time step, and a maximum occupation of 2 bosons per site. Finite-size effects are also investigated for several chain lengths. Similar to the behaviour of the QLM, the dynamics becomes smooth with increasing chain length. Extended Data Fig. 5b shows volume convergence between the results for system sizes L = 32 and L = 40 sites. The theoretical predictions in Fig. 2c and Fig. 4b are obtained with system size L = 32. Moreover, some imperfections of our system are taken into account in our t-DMRG calculations. Owing to the inhomogeneity of the Gaussian-shaped y-lattice, the on-site interaction at the edge of the 71-site chain is about 10 Hz smaller than at the central site. Also, fluctuations of the depth of the long lattice lead to an uncertainty of about ±4.5 Hz in the staggered energy δ. Including these influences in our model, we estimate experimental observables with ±1σ confidence intervals (equal to the standard deviations).
These two numerical methods show consistent behaviour at the converged system sizes, which means that the discrepancies between our experiments and numerical calculations are not caused by finite-size effects. We attribute the remaining deviations to heating due to off-resonant excitations of the atoms by the optical-lattice beams. Although the correlation length is 4.4^{+2.0}_{−1.0} sites and the domain size is 9(2) sites, both the ED and t-DMRG calculations converge only once the system size is above about 40 sites, showing the essential role of many-body effects in the observed phenomena.

Density–density correlations
Constrained by the finite resolution of our microscope, we are not able to extract density–density correlations from the in situ images. However, we can measure the correlation function by mapping the atomic distributions into momentum space. After a free expansion, the relation between the initial momentum k_x and the real-space position x is k_x = m_Rb x/t.
One characteristic momentum, corresponding to the unity-filling Mott insulator, is k_x = 2ħk, which is related to the real-space position x₀ = ht/(m_Rb λ/2). Then, the correlation function for a long chain with N_ddc sites is

C_k(x) = 1 + (1/N²_ddc) ∑_{i,j} exp[−i2πx(i − j)/x₀] ⟨n_i n_j⟩.   (13)

Here, the position x can be discretized into the pixels of the imaging plane. From this relation, we readily find that interference patterns emerge at multiples of x₀/d, where d is the periodicity of an ordered site occupation. Hence, the initial state with d = 2 has first-order peaks at k_x = ħk, and states with d = 4 would have peaks at k_x = 0.5ħk. To detect the density–density correlations, we release the cloud and let it expand in the x–z plane for 8 ms. The lattice depth along the y axis is 25.6(2)E_r, which blocks the crosstalk between different 1D chains. Loosening the confinement along the z axis strongly reduces the interaction between atoms, but also degrades the optical resolution. The characteristic length is x₀ = 105 μm, which allows us to observe the new ordering with the microscope. We find that the initial size of the sample is much smaller than that of the cloud after expansion. The exposure time for the absorption imaging is 10 μs, thereby making photon shot noise the major source of fluctuations in the signal. The pattern in Fig. 3a is obtained by calculating the correlation function as defined in equation (13). For each image, the density correlation is the autocorrelation function. When we have a set of images, we perform this procedure using two different routes28. One is calculating the autocorrelation function for each image and then averaging them. The other is first averaging the images and then calculating the autocorrelation function once, which is used for normalizing the signal. We thereby obtain the normalized density–density correlation. This method enables the extraction of correlations from noisy signals and is also robust to the cloud shape. The patterns in Fig. 3a are averaged over 523 and 1,729 images, respectively. In the horizontal direction of the imaging plane, some stripes appear around y = 0, which are caused by fluctuations of the atomic centre and of the total atom number28. Unlike in the in situ images, atoms outside the region of interest still contribute to signals in momentum space.
The correlation length is obtained from the width of the interference peak. For an entirely ordered state, such as the initial state, the amplitude of the density correlation is inversely proportional to the atom number, and the width is determined by the imaging resolution. However, spontaneous symmetry breaking in the phase transition induces the formation of domains. At finite correlation length ξ, the peak width becomes broader. Assuming the correlation function decays exponentially in this 1D system, we can deduce ξ from the peak width. To extract the peak width, we first subtract the background profile from the correlation function. The background is a single-pixel section through the pattern centre, whose direction is at −4° with respect to the horizontal plane. As shown in Extended Data Fig. 6, we apply a Lorentzian fit to the curve and find a width of 4.5 ± 1.1 μm. The peaks at ±ħk and ±2ħk have widths of 2.0(2) μm and 1.9(4) μm, respectively, which correspond to the imaging resolution. Considering the broadening due to the optical resolution, we obtain the correlation length ξ = 4.4^{+2.0}_{−1.0} sites.
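The two-route normalization described above can be written compactly. The following sketch (our illustration, using FFT-based autocorrelations) computes the normalized density–density correlation from a stack of absorption images:

import numpy as np

def autocorrelation(img):
    """2D autocorrelation via the Wiener-Khinchin theorem (circular)."""
    F = np.fft.fft2(img)
    return np.fft.fftshift(np.fft.ifft2(np.abs(F)**2).real)

def normalized_density_correlation(images):
    """Route 1: average the per-image autocorrelations.
    Route 2: autocorrelation of the averaged image, used for normalization.
    The ratio suppresses the smooth cloud profile and fluctuations that are
    uncorrelated between images, such as photon shot noise."""
    images = np.asarray(images, dtype=float)
    corr_each = np.mean([autocorrelation(img) for img in images], axis=0)
    corr_mean = autocorrelation(images.mean(axis=0))
    return corr_each / corr_mean

# usage: stack of time-of-flight absorption images, shape (n_images, ny, nx)
# pattern = normalized_density_correlation(stack)   # peaks at multiples of x0/d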

Potential violations of Gauss's law
In our setup, potential gauge-violation terms arising from coherent processes are suppressed owing to suitably engineered energy penalties. We can estimate the effect of these error terms in a three-site building block consisting of two (initially occupied) matter sites and the gauge link in between, described by the initial state |…n_j n_{j+1} n_{j+2}…⟩ = |…101…⟩, j even. The main cause of gauge violation stems from the desired matter–gauge-field coupling, which requires a second-order process of strength ∝ J²/δ involving the gauge-violating bare tunnelling J. Similarly to a detuned Rabi oscillation, the population of the gauge-violating states |…110…⟩ and |…011…⟩ is of the order of (J/δ)². At the highest coupling strength, which we reach at t = 60 ms, we have J/δ = 0.13; that is, gauge-violating states acquire at most a few per cent of the population. Rather than an incoherent dynamics that would lead to an accumulation of gauge violation over time, this bare tunnelling is a coherent process, which strongly mitigates the growth of the induced gauge violation. In Fig. 4b, the oscillations in the t-DMRG calculations are caused by such a detuned tunnelling process. We further theoretically calculate the gauge violation of our system at long evolution times, as shown in Extended Data Fig. 7b. The gauge violation does not increase substantially even when the ramping time is about one order of magnitude longer than our experimental timescale.
At the same order of perturbation theory, direct tunnelling between matter sites can occur, with a coupling strength of the order of J²/δ. This second-order tunnelling is energetically suppressed by U ± 2Δ when we consider the initial filling and the linear potential. The coherent oscillation in Fig. 3a indicates that the atoms perform only the desired conversion between matter and gauge-field sites, and otherwise reside in their respective building blocks. Likewise, the staggered and linear potentials suppress any long-range transport, which is also confirmed by the nearly constant size of the atomic cloud measured in absorption imaging. If all gauge-violating many-body states experience such energy penalties, a deformed symmetry emerges that is perturbatively close to the ideal original one44, and which indefinitely suppresses gauge-violating processes19. In the present case, the leading gauge-violating processes are energetically penalized, but violations at distant sites may in principle compensate each other energetically. This may lead to a slow leakage out of the gauge-invariant subspace through higher-order processes. On the experimentally relevant timescales, however, these processes are irrelevant.
Our theoretical calculations, which are based on unitary time evolution, capture only coherent sources of gauge violation such as those mentioned above. As the agreement with our measured data suggests (see Fig. 4b), coherent processes, especially the first-order tunnelling J, contribute substantially to the weak gauge violation ϵ(t). In addition, dissipative processes that violate Gauss's law may appear. Pure dephasing processes that couple to the atom density commute with Gauss's law and thus do not lead to gauge violations. In contrast, atom loss might affect gauge invariance. The lifetime characterizing atom loss in optical lattices is about 10 s, two orders of magnitude longer than the duration of our sweep through the phase transition. Finally, the finite lifetime of Wannier–Stark states45 caused by the lattice tilt is also much longer than experimentally relevant times. Our direct measurements of the violation of local gauge invariance corroborate the weakness of the various potential error sources over our experimental timescales.
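As a rough numerical illustration of this suppression argument (our estimate, modelling a single matter–link bond as a two-level system with coupling J detuned by δ):

# Detuned-Rabi bound (our estimate) on the population of a gauge-violating
# state reached by bare tunnelling J against the staggered offset delta.
J_over_delta = 0.13                # largest ratio reached, at t = 60 ms
p_max = 4 * J_over_delta**2 / (4 * J_over_delta**2 + 1)  # Omega^2/(Omega^2 + delta^2), Omega = 2J
print(f"peak gauge-violating population per bond < {p_max:.3f}")   # a few per cent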

Measurement of Gauss's law violations
We can verify the local fulfilment of Gauss's law, without the need for full state tomography, by measuring the probabilities of the gauge-invariant states. Considering the relevant three-site units, we can couple the central site to its left or right neighbour by isolating the atoms into double wells. Sensing is then achieved by an atom that can discriminate the filling number of its neighbouring site through the subsequent dynamics. For the state |10⟩ in a double well, the atom can tunnel and evolve to the state |01⟩. This frequency differs dramatically from that of the state |20⟩, which would tunnel as a pair with a strength of 4J²/U. Since we mark the atoms with hyperfine levels, atoms in the state |↓↑⟩ would exchange their hyperfine states in a superexchange process. As shown in Extended Data Fig. 8, we observe the dynamics of these states by initially preparing them in double wells. The superexchange frequency is 4J²/(hU) = 11 Hz. However, the atom pairs cannot tunnel freely because this dynamics requires a further stabilization of the superlattice phase φ. Even though the superexchange interaction can drive the evolution, such a process does not contribute to the oscillation amplitude at the frequency 2J/h. In addition, the state |0n_j⟩ does not contribute to the desired signal because we flip the hyperfine levels of the atoms on odd sites before implementing the tunnelling sequence. We fit the oscillation with the function y = y₀ + A e^{−t/τ} sin(2πft + ϕ₀), in which the frequency f, initial phase ϕ₀, offset y₀ and damping time τ are held fixed in the fitting. The signal is thus identified not only by the atom population but also by the characteristic frequency. Therefore, we can establish relations between the oscillation amplitudes and the state probabilities.
Extended Data Figure 9 shows the measurements for determining the population of the gauge-invariant states. As illustrated in Fig. 4a, we monitor the tunnelling oscillations in four different sequences. After an evolution time t, the state detection begins by ramping the short lattice to 51.3(4)E_r. Then we tune the superlattice phase φ from π/2 to 0 or π and consequently divide the atoms into isolated double wells. In the procedure for detecting the state |010⟩, we address and flip the hyperfine level of atoms residing on the odd sites to |↑⟩, thereby marking the sites by their hyperfine levels. Afterwards, we quench the depths of the short and long lattices to 18.7(1)E_r and 10.0(1)E_r simultaneously. The atoms tunnel from even to odd sites within each double well, and the corresponding expectation value is recorded by absorption imaging. As shown in Extended Data Fig. 9a, b, the oscillation amplitudes are almost equal to the fractions of even-site atoms. For detecting the states |…002…⟩ and |…200…⟩, the procedure consists of more operations because the doublons cannot tunnel easily. Before splitting the doublons, we remove the atoms residing on the even sites to ensure a 99.3(±0.1)% efficiency of atom splitting. For instance, the state |12⟩ in a double well would disturb the separation of doublons and also influence the subsequent signal. Next, we perform the state-flip operation and tune the superlattice phase from 0 (π) to π (0) to reach the other configuration. In these double wells, the oscillations corresponding to atom tunnelling are shown in Extended Data Fig. 9c, d. However, we must exclude the probability of other kinds of states, such as |…012…⟩, because we remove the central particle and thereby project such a state onto |…002…⟩. To clarify the process by which we derive the final probability, the states that may contribute to the signals are listed in a square array in Extended Data Fig. 10. Using seven experimental observables, we can extract the populations of the states |002⟩ and |200⟩. Other high-energy excitations, such as four particles per site, are also eliminated in this calculation. After error propagation, the errors of the total probabilities arise mainly from the shot noise of the absorption imaging. In Fig. 4b, the probabilities of the state |010⟩ at t = 0 and 30 ms represent the gauge-invariant terms with smaller errors.
Through these measurements, we are thus able to measure the probabilities of the states |…n_{j−1} n_j n_{j+1}…⟩ = |…010…⟩, |…002…⟩ and |…200…⟩, j even, from which we can compute the local projectors onto the gauge-invariant states, P̂_ℓ = |010⟩⟨010| + |002⟩⟨002| + |200⟩⟨200|, where ℓ = j/2 denotes the central matter site. These measurements enable us to certify the adherence to gauge invariance in our U(1) lattice-gauge quantum simulator.
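Because f, ϕ₀, y₀ and τ are held fixed, the least-squares estimate of the amplitude A is linear and has a closed form; a short sketch (ours) of this amplitude-only fit:

import numpy as np

def fit_amplitude(t, y, f, phi0, y0, tau):
    """Least-squares amplitude of y(t) = y0 + A*exp(-t/tau)*sin(2*pi*f*t + phi0)
    with all other parameters fixed: the optimal A is the projection of
    (y - y0) onto the damped-sine template."""
    template = np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi0)
    return np.dot(y - y0, template) / np.dot(template, template)

# example with the values quoted for Extended Data Fig. 9
t = np.linspace(0, 0.03, 200)                   # s
f, tau = 1 / 7.2e-3, 96e-3                      # 7.2 ms period, 96 ms decay
y = 0.5 + 0.8 * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)  # synthetic data
print(fit_amplitude(t, y, f=f, phi0=0.0, y0=0.5, tau=tau))    # -> 0.8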

Data availability
Data for figures that support the current study are available at https://doi.org/10.7910/DVN/3RXD5F. Source data are provided with this paper.

Code availability
The codes used for the theoretical curves are available at https://doi.org/10.7910/DVN/3RXD5F.

36. Kogut, J. & Susskind, L. Hamiltonian formulation of Wilson's lattice gauge theories. Phys. Rev. D 11, 395–408 (1975).
37. Pichler, T., Dalmonte, M., Rico, E., Zoller, P. & Montangero, S. Real-time dynamics in U(1) lattice gauge theories with tensor networks. Phys. Rev. X 6, 011023 (2016).
38. Lewenstein, M., Sanpera, A. & Ahufinger, V. Ultracold Atoms in Optical Lattices: Simulating Quantum Many-body Systems (Oxford Univ. Press, 2012).
39. Zohar, E. & Reznik, B. Confinement and lattice quantum-electrodynamic electric flux tubes simulated with ultracold atoms. Phys. Rev. Lett. 107, 275301 (2011).
40. Banerjee, D. et al. Atomic quantum simulation of dynamical gauge fields coupled to fermionic matter: from string breaking to evolution after a quench. Phys. Rev. Lett. 109, 175302 (2012).
41. Yang, B. et al. Spin-dependent optical superlattice. Phys. Rev. A 96, 011602 (2017).
42. Schollwöck, U. The density-matrix renormalization group in the age of matrix product states. Ann. Phys. 326, 96–192 (2011).
43. McCulloch, I. P. Matrix Product Toolkit. https://people.smp.uq.edu.au/IanMcCulloch/mptoolkit/ (2015).
44. Chubb, C. T. & Flammia, S. T. Approximate symmetries of Hamiltonians. J. Math. Phys. 58, 082202 (2017).
45. Glück, M., Kolovsky, A. R. & Korsch, H. J. Wannier–Stark resonances in optical and semiconductor superlattices. Phys. Rep. 366, 103–182 (2002).

Acknowledgements We thank J. Berges, Q. J. Chen, Y. J. Deng, S. Jochim and W. Zheng for discussions. We thank Z. Y. Zhou and G. X. Su for their help with the experimental measurements. This work is part of and supported by the National Key R&D Program of China (grant 2016YFA0301603), NNSFC grant 11874341, the Fundamental Research Funds for the Central Universities (special funds for promoting the construction of world-class universities and disciplines), the Anhui Initiative in Quantum Information Technologies, the DFG Collaborative Research Centre 'SFB 1225 (ISOQUANT)', the ERC Starting Grant StrEnQTh (project-ID 804305), Q@TN – Quantum Science and Technology in Trento, and the Provincia Autonoma di Trento.

Author contributions B.Y., Z.-S.Y., P.H. and J.-W.P. conceived the research; P.H. conceived the theoretical idea; B.Y., Z.-S.Y. and J.-W.P. designed the experiment; B.Y., H.S. and H.-Y.W. performed the experiments and analysed the data; R.O., T.V.Z., J.C.H. and P.H. developed the theory together with B.Y.; and R.O., T.V.Z. and J.C.H. did the numerical simulations. All authors contributed to manuscript preparation.

Competing interests The authors declare no competing interests.

Additional information
Supplementary information is available for this paper at https://doi.org/10.1038/s41586-020-2910-8.
Correspondence and requests for materials should be addressed to Z.-S.Y., P.H. or J.-W.P.
Peer review information Nature thanks Bryce Gadway, Erez Zohar and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Reprints and permissions information is available at http://www.nature.com/reprints.


Extended Data Fig. 1 | Level structure of a three-site building block (matter–gauge–matter). The energy manifold of interest is given by the state on the left, which represents a particle pair, and the state on the right, in which the particles have annihilated while changing the gauge-field configuration on the link in between. In the middle, we show the detuned intermediate states and processes by which these 'physical' (that is, gauge-invariant) states are coupled. See main text for the nomenclature used in this figure.

Extended Data Fig. 2 | Single-site resolved imaging. a, Sketch of a staggered-filled 1D chain used as our initial state, which begins at the first site (left) and ends at the 75th site (right). The energy levels are split by a linear magnetic gradient field. Therefore, the internal states |↓⟩ (wavefunction in blue) and |↑⟩ (wavefunction in orange) of each site can be coupled by a locally resonant microwave field. b, Spectroscopic measurement of the site occupation. At each frequency, we average the data over five repetitions and integrate the signal along the y axis. For simplicity, we show only the first site and a few sites at the end of the chain. The blue circles are the central positions of the atomic densities along x. According to the spatial position of the image, we plot the staircase structure of the lattice sites in cyan. The lower panel shows the atomic density averaged over the measurements in the upper panel, where the amplitude is normalized to the maximum atomic density. The sinusoidal fit (orange dashed line) shows the positions of the sites and the staggered structure.


Extended Data Fig. 3 | Dynamics in building blocks. a, Observation of coherent evolution in the building blocks, as sketched above the data. We monitored the dynamics at two different coupling strengths. The solid curves are sinusoidal fits, which give oscillation frequencies of 89.3(3) Hz (upper plot) and 57.1(3) Hz (lower plot). Error bars, s.d. The small atom number in the dilute sample leads to larger statistical errors as compared to our many-body experiment reported in the main text. As these data show, the decay of oscillations is insignificant over a range of coupling strengths t̃, even for values larger than the greatest coupling strength used in the phase transition (t̃ = 70 Hz). b, The oscillation frequency and J²/U have an almost linear relation, which is in excellent quantitative agreement with the theoretical prediction based on the BHM (solid curve).

Extended Data Fig. 4 | Quantum phase transition and revival. Over 240 ms, we ramp the mass as follows: first, from negative to positive; second, from positive to negative, back to the symmetry-unbroken charge-proliferated phase. Error bars, s.d. The recovery of the atoms on even sites indicates the reversibility of this phase transition. The solid curve is a guide for the eye.


Extended Data Fig. 5 | Numerical simulations of the phase-transition dynamics, calculated by ED (a) and t-DMRG (b) methods. We monitor the evolution of the deviation of the electric field, which corresponds to the double occupancy ('doublons') of the odd sites. a, Simulations of the ideal, fully gauge-invariant QLM, using ED calculations under periodic boundary conditions. b, Simulations of a 1D Bose–Hubbard system modelling our experiment, using the t-DMRG method. The insets show the differences between results for different system sizes and the curve at the largest size (L = 52 for ED and L = 40 for t-DMRG), demonstrating finite-size convergence well below the range of experimental errors.

Extended Data Fig. 6 | Correlation length. The density–density correlation of the final state is plotted against the distance in momentum space. We select the central region with two interference peaks, where the background has been subtracted from the signal. The solid curve is a Lorentzian fit to the data. The inset shows the relation between the peak width and the correlation length; the solid curve is calculated for a 1D system with N_ddc = 100 sites. The red point represents the correlation length of the final state as shown in Fig. 3a, where the error bars are s.d.


Extended Data Fig. 7 | Ramping speed and gauge violation. a, The phase transition is driven by ramping the mass m and the effective coupling t̃. We start from a large negative value of m/t̃, retain a stronger coupling around the critical point, and end up with a large positive mass. b, Gauge violation versus total ramping time, calculated with the t-DMRG method in systems with 16 (red), 24 (blue) and 32 (orange) optical-lattice sites. Using the same shape of the ramping curve as in a, we change the ramping speed by constraining the total ramping time. The square points are the maxima of ϵ(t) throughout the dynamics, while the circles represent the gauge violations of the final states. Owing to the coherence of our many-body system, ϵ(t) reaches its maximum around the critical point and decreases after crossing it (see Fig. 4b).

Extended Data Fig. 8 | Dynamics in double wells. Under the same superlattice configuration, we measure the evolution of three different states in double wells. The initial states are |10⟩ (blue squares), |11⟩ (orange triangles) and |20⟩ (red circles). The state |10⟩ oscillates with almost the full amplitude. The superexchange interaction drives the spin exchange process as expected. In contrast, the atom population remains constant for the state |20⟩. Error bars, s.d. The solid curves are exponentially damped sinusoidal fittings, where the frequency, phase and decay rate are fixed. We find that the oscillation amplitude of tunnelling is almost three orders of magnitude larger than the other two fitting values.


Extended Data Fig. 9 | Detecting the gauge-invariant states. We divide the atoms into double wells and then measure atom tunnelling within each two-site unit. a–d, The dynamics of tunnelling for four different experimental sequences, as sketched in the insets. Five different moments during the phase transition (t = 0, 30, 60, 90, 120 ms) are selected for detecting the gauge-invariant states. Error bars, s.d. We fit the data with a sinusoidal damping function, which has a period of 7.2 ms and an exponential decay constant of 96 ms. The amplitudes of the oscillations in a–d refer to A^{(1)}_{|10⟩}, A^{(1)}_{|01⟩}, A^{(2)}_{|10⟩} and A^{(2)}_{|01⟩}, respectively. These amplitudes are then used for calculating the state probabilities, where the error bars are s.d.

Extended Data Fig. 10 | Resolving the population of the states. For the detection of the states |002⟩ and |200⟩, we extract their probabilities from several measurements. There are 64 states that may contribute to the oscillations, which are listed from |000⟩ to |333⟩ as an 8 × 8 square array (left). The amplitudes with which these states enter our detection procedures are given by distinct colours (key at bottom right). For example, the state |002⟩ in the third column of the first row contributes only to the first observable A^{(2)}_{|01⟩} + A^{(2)}_{|10⟩}, with a factor of 1, while the state |013⟩ at the end of the first row is recorded by all these observables with the colour-denoted factors. We use seven terms to deduce the lower bound for the probabilities as p_{|…002…⟩} + p_{|…200…⟩} ≥ A^{(2)}_{|01⟩} + A^{(2)}_{|10⟩} + A^{(1)}_{|01⟩} + A^{(1)}_{|10⟩} − n̄_c^o − 0.5n̄^e − 1.5n̄_c^e. Such a relation can be read off from the chequerboard diagram.


The growth equation of cities

https://doi.org/10.1038/s41586-020-2900-x

Vincent Verbavatz1,2 & Marc Barthelemy1,3 ✉

Received: 13 February 2020
Accepted: 1 September 2020
Published online: 18 November 2020

The science of cities seeks to understand and explain regularities observed in the world’s major urban systems. Modelling the population evolution of cities is at the core of this science and of all urban studies. Quantitatively, the most fundamental problem is to understand the hierarchical organization of city population and the statistical occurrence of megacities. This was first thought to be described by a universal principle known as Zipf’s law1,2; however, the validity of this model has been challenged by recent empirical studies3,4. A theoretical model must also be able to explain the relatively frequent rises and falls of cities and civilizations5, but despite many attempts6–10 these fundamental questions have not yet been satisfactorily answered. Here we introduce a stochastic equation for modelling population growth in cities, constructed from an empirical analysis of recent datasets (for Canada, France, the UK and the USA). This model reveals how rare, but large, interurban migratory shocks dominate city growth. This equation predicts a complex shape for the distribution of city populations and shows that, owing to finite-time effects, Zipf’s law does not hold in general, implying a more complex organization of cities. It also predicts the existence of multiple temporal variations in the city hierarchy, in agreement with observations5. Our result underlines the importance of rare events in the evolution of complex systems11 and, at a more practical level, in urban planning.

Constructing a science of cities has become a crucial task for our societies, which are growing ever more concentrated in urban systems. Better planning could be achieved with a better understanding of city growth and of how it affects society and the environment12. Various important aspects of cities, such as urban sprawl, infrastructure development and transport planning, depend on the population evolution over time, and multiple theoretical attempts have been made to understand this crucial phenomenon.

Growth of cities and Zipf's law
So far, most research on city growth has been done with the idea that the stationary state for a set of cities is described by Zipf's law. This law is considered to be a cornerstone of urban economics and geography3, and states that the population distribution of urban areas in a given territory (or country) displays a Pareto law with exponent equal to 2 or, equivalently, that the city populations sorted in decreasing order versus their ranks follow a power law with exponent 1. This alleged regularity through time and space is probably the most striking fact in the science of cities and for more than a century has triggered intense debate and many studies1,2,5,10,13–28. This result characterizes the hierarchical organization of cities, and in particular it quantifies the statistical occurrence of large cities. Zipf's law implies that in any country, the city with the largest population is generally twice as large as the next largest, and so on. It is a signature of the very large heterogeneity of city sizes and shows that cities are not governed by optimal considerations that would lead to one unique size but, on the contrary, that city sizes are broadly distributed and follow some sort of hierarchy16. The empirical

value of the Pareto exponent informs us about the hierarchical degree of a system of cities: a large value of the exponent corresponds to a more equally distributed population among cities and, vice versa, for small exponent values the corresponding system of cities is very heterogeneous, with a few megacities. Studies in economics have suggested that Zipf's law is the result of economic shocks and random growth processes6–8. Gabaix10 proved in a seminal paper that Gibrat's law of random growth9, which assumes a population growth rate independent of the size of the city, can lead to a Zipf law with exponent 1, at the expense of the additional and untested assumption that cities cannot become too small. This model remains the most accepted paradigm for understanding city growth. Since then, it has also been understood using simplified theoretical models (without any empirical arguments) that migrations from other cities or countries are a determining factor in explaining random growth29. However, although most of these theoretical approaches focus on explaining Zipf's law with exponent 1, recent empirical studies3,4, supported by an increasing number of data sources, have questioned the existence of such a universal power law and have shown that Zipf's exponent can vary around 1 depending on the country, the time period, the definition of cities used or the fitting method13,21,30,31 (we illustrate this in Extended Data Fig. 1, showing that no universal result for the population distribution is observed), leading to the idea that there is no reason to think that Zipf's law holds in all cases32. Beyond understanding the stationary distribution of urban populations lies the problem of their temporal evolution. As already noted5, the huge number of studies regarding population distribution contrasts with the few analyses of the time evolution of cities.
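For readers unfamiliar with the rank–size formulation, a toy illustration (ours; synthetic data, not the datasets analysed here) of how a Pareto population distribution with density exponent 2 translates into a rank–size slope of −1:

import numpy as np

rng = np.random.default_rng(0)

# Populations with survival function P(S > s) ~ s^(-1) (density exponent 2):
sizes = np.sort(rng.pareto(1.0, size=10_000) + 1)[::-1]
ranks = np.arange(1, sizes.size + 1)

# Zipf's law: rank ~ size^(-1), i.e. slope -1 on a log-log rank-size plot.
slope, _ = np.polyfit(np.log(sizes), np.log(ranks), 1)
print(f"rank-size slope: {slope:.2f}")   # close to -1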

1 Institut de Physique Théorique, Université Paris-Saclay, CNRS, CEA, Gif-sur-Yvette, France. 2École des Ponts ParisTech, Champs-sur-Marne, France. 3Centre d’Etude et de Mathématique Sociales, CNRS/EHESS, Paris, France. ✉e-mail: [email protected]


[Fig. 1 graphic: for each country, the left panel plots the migration-rate ratio I_ij/I_ji against the population ratio S_i/S_j on log–log axes with power-law fits y = x^μ (μ = 0.98–1.0, R² = 0.71–0.93), and the right panel plots the right-cumulative distribution of ζ_i together with Lévy stable and normal laws for comparison.]

Fig. 1 | Migration flow analysis. a–d, Analysis for France (a), the USA (b), the UK (c) and Canada (d). Left, migration-rate ratio versus the ratio of populations. The straight line is a power-law fit that gives an exponent equal to one. Right, empirical right-cumulative distribution function of renormalized migration flows ζ_i compared to Lévy (continuous red lines) and normal distributions (green dashed lines). See Extended Data Fig. 4 for the left-cumulative distribution function.

As discussed in that same work5, cities and civilizations rise and fall many times on a large range of time scales, and Gabaix’s model is both quantitatively and qualitatively unable to explain these specific chaotic dynamics.

Therefore, a model able to simultaneously explain observations about the stationary population distribution and the temporal dynamics of systems of cities is missing. In particular, we are not at this point able to identify the causes of the diversity of empirical


Table 1 | Estimates of parameter α

Dataset | MLE | Kolmogorov–Smirnov test | Log-moments | Hill
France, 2003–2008 | 1.43 ± 0.07 | 1.2…

Table 2 | Estimates of parameters for the four datasets

Dataset | γ | ν | β = ν + γ/α | β (measured)
France, 2003–2008 | 0.55 ± 0.06 | 0.4 ± 0.3 | 0.8 ± 0.4 | 0.75 ± 0.07
USA, 2013–2017 | 0.41 ± 0.05 | 0.4 ± 0.4 | 0.7 ± 0.5 | 0.93 ± 0.07
UK, 2012–2016 | 0 | 0.7 ± 0.3 | 0.7 ± 0.3 | 0.51 ± 0.05
Canada, 2012–2016 | 0 | 0.5 ± 0.4 | 0.5 ± 0.4 | 0.78 ± 0.06
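A quick arithmetic check (our sketch) of the relation β = ν + γ/α in Table 2, using the France row together with the maximum-likelihood estimate of α from Table 1:

from math import sqrt

alpha, d_alpha = 1.43, 0.07   # Table 1, France (MLE)
gamma, d_gamma = 0.55, 0.06   # Table 2, France
nu, d_nu       = 0.4, 0.3

beta = nu + gamma / alpha
# first-order error propagation, assuming independent uncertainties
d_beta = sqrt(d_nu**2 + (d_gamma / alpha)**2 + (gamma * d_alpha / alpha**2)**2)
print(f"beta = {beta:.2f} +/- {d_beta:.1f}")   # ~0.78 +/- 0.3, consistent with Table 2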
