Artificial Intelligence for Healthy Longevity 3031351754, 9783031351754

This book reviews the state-of-the-art efforts to apply machine learning and AI methods for healthy aging and longevity


English Pages 327 [328] Year 2023


Table of contents:
Preface
Contents
Contributors
Part I Biomarkers of Aging and Health
1 AI in Longevity
1.1 Statistical Models of Aging
1.1.1 Deep Learning Basic Principles
1.2 Deep Learning Aging Clocks
1.3 Deep Learning Applications in Medicine
1.3.1 Clinical Practice
1.3.2 Drug Development
References
2 Automated Reporting of Medical Diagnostic Imaging for Early Disease and Aging Biomarkers Detection
2.1 Longevity Medicine and Radiology
2.2 AI and Radiology: State of the Art
2.3 Proof of Concept
2.4 Future of AI in Longevity Medicine
References
3 Risk Forecasting Tools Based on the Collected Information for Two Types of Occupational Diseases
3.1 Development of a Mathematical Model Based on Risk Calculation Methodology Using Artificial Intelligence Tools
3.2 Overview of the Methods Used
3.3 Feature Description of Objects
3.4 Separation of Features by the Type of Their Values
3.5 Separation of Features by Meaning
3.6 The Type of the Problem to Be Solved and the Target Variables
3.7 Trained Models and Their Results
3.8 Linear Regression Model
3.9 Determining the Risk of Sensorineural Hearing Loss
3.10 Determination of the Risk of Vibration Disease (Local Vibration)
3.11 Determination of the Risk of Vibration Disease (General Vibration)
3.12 The Decision Tree Model
3.13 Determining the Risk of Sensorineural Hearing Loss
3.14 Determination of the Risk of Vibration Disease (Local Vibration)
3.15 Determination of the Risk of Vibration Disease (General Vibration)
3.16 Random Forest Model
3.17 Determining the Risk of Sensorineural Hearing Loss
3.18 Determination of the Risk of Vibration Disease (Local Vibration)
3.19 Determination of the Risk of Vibration Disease (General Vibration)
3.20 Predictive Risk Model
3.21 Conclusions and Model Selection
3.22 An Example of an Explicitly Interpreted DSS Fragment Based on an Automated Generated and Optimized Model
3.23 Justification of Risk Prediction Error Estimation
3.24 The First Approach to Estimating the Risk Prediction Error
3.25 The Second Approach to Estimating the Error of the Risk Forecast
3.26 Conclusions
References
4 Obtaining Longevity Footprints in DNA Methylation Data Using Different Machine Learning Approaches
4.1 Introduction
4.2 Biological Age Regression with Machine Learning Models
4.2.1 Baseline Models
4.2.2 One-Tissue Epigenetic Clocks
4.2.3 Pan-Tissue Epigenetic Clocks
4.3 Machine Learning for Age-Related Diseases Classification
4.3.1 Cancer Classification
4.3.2 Phenotype Classification
4.3.3 Case–Control Classification
4.4 Unsupervised Learning for Cancer Differentiating
4.5 Conclusions
References
5 The Role of Assistive Technology in Regulating the Behavioural and Psychological Symptoms of Dementia
5.1 Introduction
5.2 Methodology
5.3 Results
5.3.1 Assistive Technologies and Dementia
5.3.2 Assistive Technology to Aid Communication
5.3.3 Assistive Technology to Aid Motor Behaviour
5.3.4 Assistive Technology to Aid Inappropriate Behaviours
5.3.5 Assistive Technology—Smart Homes
5.3.6 Assistive Technology—Further Artificial Intelligence
5.4 Discussion
5.5 Conclusion
References
6 Epidemiology, Genetics and Epigenetics of Biological Aging: One or More Aging Systems?
6.1 Introduction
6.2 An Overview of Aging Clocks Based on Biometric and Molecular Data
6.2.1 Telomere Length
6.2.2 DNA Methylation Aging Clocks
6.2.3 Blood Biomarker-Based Aging Clocks
6.2.4 Neuroimaging Based Brain Aging Clocks
6.3 Overlap of Aging Clocks
6.3.1 Epidemiological Overlap
6.3.2 Biological Overlap
6.4 Conclusions
References
7 Temporal Relation Prediction from Electronic Health Records Using Graph Neural Networks and Transformers Embeddings
7.1 Introduction
7.2 Methods
7.2.1 Graph Construction
7.2.2 Masked Language Modeling
7.2.3 Model
7.3 Results
7.4 Discussion and Future Work
References
8 In Silico Screening of Life-Extending Drugs Using Machine Learning and Omics Data
8.1 Introduction
8.2 Methods
8.2.1 Data Collection
8.2.2 Lines of Invertebrate Models and Keeping Conditions
8.2.3 Statistics and Reproducibility
8.2.4 Model
8.3 Results
8.3.1 Model Validation on Synthetic Data
8.3.2 Lifespan Tests
8.4 Discussion
References
9 An Overview of Kernel Methods for Identifying Genetic Association with Health-Related Traits
9.1 Introduction
9.2 Introduction to the Kernels Methods for Genomic Data Analysis
9.2.1 The Kernel Functions
9.2.2 The Main Idea of the Kernel Methods
9.2.3 The Kernel Trick
9.2.4 Distance Induced by the Function Kernel
9.3 Kernel Machine Regression for Multi-marker genetic Association Testing
9.3.1 Kernel Linear and Logistic Regression Models for Genetic Association Test
9.3.2 Kernel Linear Regression Models
9.3.3 Kernel Logistic Regression Model
9.3.4 Rare-Variants Association Genetic Test
9.3.5 Connection with Other Multi-markers Association Tests
9.4 Selection of Variables for Gene-Set Analysis Using Kernels Methods
9.5 Kernel Methods for Association Genetics Test with Multiples Phenotypes
9.5.1 Genetic Association Test for Multiple Phenotypes Analysis Based on Kernel Methods
9.5.2 Multivariate Kernel Machine Regression (MKMR)
9.5.3 Multi-trait Sequence Kernel Association Test (MSKAT)
9.5.4 Gene Association with Multiple Traits (GAMuT)
9.6 Kernel Methods for Censored Survival Outcomes in Genetics Association Studies
9.7 Conclusions
References
10 Artificial Intelligence Approaches for Skin Anti-aging and Skin Resilience Research
10.1 Introduction
10.2 Genetics
10.2.1 Skin Aging Genes
10.2.2 Long-Lived Individuals and Twin Studies
10.2.3 Telomere Shortening
10.2.4 Variety of Aging Clocks
10.3 Molecular Aging
10.3.1 Omics Approaches in the Aging Research
10.4 Drug Discovery for Skin Resilience
10.5 Microbiome
10.6 Skin 3D Models
10.7 Biophysical Markers
10.8 Skin Imaging
10.8.1 Skin Image and Video Processing
10.8.2 Morphology Features and aging Patterns
10.8.3 AI Systems for Facial Imaging
10.8.4 Discoloration
10.8.5 Skin Texture
10.8.6 Nails and Hair
10.8.7 Estimation of Age
10.8.8 Simulation of Aging
10.9 Psychology
10.10 The Future of Skin Anti-aging Is Personalized
References
Part II Perspectives and Challenges in Machine Learning Research of Aging and Longevity
11 AI in Genomics and Epigenomics
11.1 AI to Diagnose Monogenic Diseases
11.1.1 AI Helps to Call Genomic Variants from Massive Parallel Sequencing Data
11.1.2 AI for Clinical Interpretation of Genomic Variants
11.1.3 Interpretation of Genomic Data and Clinical Description of Patient’s Phenotype
11.2 Interpretation of Epigenetic Changes in Aging
References
12 The Utility of Information Theory Based Methods in the Research of Aging and Longevity
12.1 Introduction
12.2 Definitions and Applications of Information-Theoretical Methods for the Research of Aging and Aging-Related Ill Health
12.3 The Application of Information-Theoretical Methods for the Evaluation of Biological and Biomedical Boundaries or Thresholds
12.4 Using Information-Theoretical Methods for Risk Group Attribution
12.5 Utilizing Information-Theoretical Methods for Genomic Sequence Analysis
12.6 Conclusion
References
13 AI for Longevity: Getting Past the Mechanical Turk Model Will Take Good Data
References
14 Leveraging Algorithmic and Human Networks to Cure Human Aging: Holistic Understanding of Longevity via Generative Cooperative Networks, Hybrid Bayesian/Neural/Logical AI and Tokenomics-Mediated Crowdsourcing
14.1 Introduction
14.2 Aging as a Complex Network Process
14.3 The Generative Cooperative Network (GCN)
14.4 Emergent Signs in GCN
14.5 How Emergent Signs Interact with Dependent Typing in the GCN
14.6 Data Absorption in GCN
14.7 BayesExpert in GCN
14.8 How BayesExpert can Combine Separate Studies into Coherent Wholes
14.9 Model Combination via Quadratic Programming
14.10 Hand-Crafted Bayes Net Model of Individual Aging
14.11 OpenCog’s BioAtomspace
14.12 Types of Atoms in the BioAtomspace
14.13 Tokenomic Incentivization
14.14 Strategies for Addressing Longevity via the Crowdsourced GCN Meta-Model
References
Author Index
Subject Index


Healthy Ageing and Longevity 19 Editor-in-Chief: Suresh I. S. Rattan

Alexey Moskalev · Ilia Stambler · Alex Zhavoronkov
Editors

Artificial Intelligence for Healthy Longevity

Healthy Ageing and Longevity Volume 19

Editor-in-Chief
Suresh I. S. Rattan, Department of Molecular Biology & Genetics, Aarhus University, Aarhus-C, Denmark

Editorial Board
Mario Barbagallo, University of Palermo, Palermo, Italy
Ufuk Çakatay, Istanbul University-Cerrahpasa, Istanbul, Türkiye
Vadim E. Fraifeld, Ben-Gurion University of the Negev, Beer-Sheva, Israel
Tamás Fülöp, Faculté de Médecine et des Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada
Jan Gruber, Yale-NUS, Singapore, Singapore
Kunlin Jin, Pharmacology and Neuroscience, University of North Texas Health Science Center, Fort Worth, TX, USA
Sunil Kaul, Central 5-41, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
Gurcharan Kaur, Department of Biotechnology, Guru Nanak Dev University, Amritsar, Punjab, India
Eric Le Bourg, Centre de Recherches sur la Cognition Animale, Université Paul-Sabatier, Toulouse, France
Guillermo Lopez Lluch, Andalusian Ctr. for Dev. Biology, Pablo de Olavide University, Sevilla, Spain
Alexey Moskalev, Komi Scientific Center, Russian Academy of Sciences, Syktyvkar, Komi Republic, Russia
Jan Nehlin, Department of Clinical Research, Copenhagen University Hospital, Hvidovre, Denmark
Graham Pawelec, Immunology, University of Tübingen, Tübingen, Germany
Syed Ibrahim Rizvi, Department of Biochemistry, University of Allahabad, Allahabad, Uttar Pradesh, India
Jonathan Sholl, CNRS, University of Bordeaux, Bordeaux, France
Ilia Stambler, Movement for Longevity & Quality of Life, Vetek (Seniority) Association, Rishon Lezion, Israel
Katarzyna Szczerbińska, Medical Faculty, Jagiellonian University Medical College, Kraków, Poland
Ioannis P. Trougakos, Faculty of Biology, National and Kapodistrian University of Athens, Zografou, Athens, Greece
Renu Wadhwa, Central 5-41, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
Maciej Wnuk, Biotechnology, University of Rzeszow, Rzeszow, Poland

Rapidly changing demographics worldwide, towards an increased proportion of the elderly in the population and increased life expectancy, have brought issues such as "why we grow old", "how we grow old", "how long can we live", "how to maintain health", "how to prevent and treat diseases in old age", "what are the future perspectives for healthy ageing and longevity", and so on, to the centre stage of the scientific, social, political, and economic arena. Although the descriptive aspects of ageing are now well established at the level of species, populations, individuals, and within an individual at the tissue, cell and molecular levels, the implications of such detailed understanding with respect to the aim of achieving healthy ageing and longevity are ever-changing and challenging issues. This continuing success of gerontology, and especially of biogerontology, is attracting the attention of both well-established academicians and the younger generation of students and researchers in biology, medicine, bioinformatics, bioeconomy, sports science, and nutritional sciences, along with sociologists, psychologists, politicians, public health experts, and the health-care industry, including the cosmeceutical, food, and lifestyle industries. Books in this series cover topics related to the issues of healthy ageing and longevity. The series provides not only exhaustive reviews of the established body of knowledge, but also critical evaluations of the ongoing research and development with respect to theoretical and evidence-based practical and ethical aspects of interventions towards maintaining, recovering and enhancing health and longevity.

Alexey Moskalev · Ilia Stambler · Alex Zhavoronkov Editors

Artificial Intelligence for Healthy Longevity

Editors Alexey Moskalev School of Systems Biology George Mason University Fairfax, VA, USA

Ilia Stambler Movement for Longevity and Quality of Life Vetek (Seniority) Association Rishon Lezion, Israel

Alex Zhavoronkov Insilico Medicine, Inc. Pak Shek Kok, Hong Kong

ISSN 2199-9007 ISSN 2199-9015 (electronic) Healthy Ageing and Longevity ISBN 978-3-031-35175-4 ISBN 978-3-031-35176-1 (eBook) https://doi.org/10.1007/978-3-031-35176-1 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

There is a growing scientific consensus that a potentially promising strategy for extending healthy and productive longevity, through the prevention of aging-related diseases and disabilities, is to directly and therapeutically treat the degenerative aging processes that constitute the main risks or underlying causes of these diseases. Yet, there is no consensus in sight regarding the precise measurements of aging processes that are required in order to predict or detect aging-related diseases early and to test the effectiveness and safety of anti-aging interventions. This lack of a common understanding is in no small part due to the fact that aging is a tremendously complex, multifaceted process, with literally anything possibly changing with age, involving a tremendous number of variables and determinants that need to be integrated. Hence, the problem of data analysis in therapeutic aging research and practice may appear daunting. Here, methods of machine learning, or "artificial intelligence" (AI), are hoped to come to the rescue, as they are expected to be able to analyze and integrate vast amounts of data and thus help develop comprehensive and reliable metrics of aging. These would enable researchers both to predict the aging health trajectory and to evaluate the effectiveness of preventive and curative treatments for aging-related ill health. Yet, these methods, in turn, present considerable challenges, not only scientific but also social. A geroprotective intervention should not only reduce mortality, but also reverse biomarkers of aging, prevent multimorbidity, improve quality of life, be non-toxic, and have fewer side effects. There is a clear need to develop and implement artificial intelligence in big data processing in aging and longevity research. AI powered by big data analysis will allow us to predict new patterns, biomarkers, targets or chemicals. The reliability of digital tools is limited by the quality of the data.
There are obvious problems with collecting and validating big medical data. This is a demanding task because it is expensive and time-consuming. Imprecise data sources cause algorithms trained on one dataset to fail in other settings or to make unreliable predictions. Insofar as original big data can be obtained by different methods and in different subjects, the data must be harmonized before they can be merged or compared. In addition, the collection and storage of big medical data come with a number of ethical and security issues. We are still at the beginning of the journey. We need criteria for biomarkers of aging and geroprotectors, standards for collecting big medical data, and incentives for market participants to collect and exchange them. Together, a protocol for collecting and analyzing big data on the health of older persons could be developed. This multichapter collection addresses these questions and represents state-of-the-art efforts to apply machine learning and AI methods for healthy aging and longevity research, diagnosis and therapy development. The book examines the methods of machine learning and their application in the analysis of big medical data and medical images, and in the creation of algorithms for assessing biological age and the effectiveness of geroprotective medications. Thus, the book creates a unique synergy of two highly prominent and promising emerging fields: healthy longevity and artificial intelligence. It emphasizes the manifold promise of applying AI for the achievement of healthy longevity, in both diagnosis and therapy: in creating precision and personalized diagnostics based on massive data analysis at all levels of biological organization, from the molecular level to the entire organism, as well as in creating and testing novel therapeutics (geroprotective therapies) for ameliorating and even reversing degenerative aging processes and extending healthy life, screening existing treatments as well as designing novel ones based on structural analysis. At the same time, the book highlights the challenges facing the application of AI for healthy longevity: in terms of fundamental definitions, theoretical justifications, algorithmic and data challenges, including questions of data quality and reproducibility, as well as societal challenges, involving questions of data openness, sharing, standardization and the equitable and beneficial distribution of results.
This volume, written by world-leading experts working at the intersection of AI and healthy longevity, thus aims to contribute to exploring and providing solutions to these challenges, while maximally realizing the promises and benefits of using AI for healthy longevity. The book is broadly divided into two parts, though the overarching themes of those parts often overlap. The first part presents state-of-the-art original works and reviews on biomarkers of aging and health. The discussions include such areas as the application of deep learning for developing integrative aging clocks, medical diagnostic imaging for the discovery of biomarkers of aging and early detection of diseases, AI-based risk forecasting tools for occupational diseases, exploring longevity footprints in DNA methylation data, and the application of AI in assistive technologies for regulating the behavioral and psychological symptoms of dementia. Further topics involve the epidemiology, genetics and epigenetics of biological aging in systemic analysis, AI research on particular organ systems, such as skin anti-aging and resilience research, predictions from electronic health records, methods for identifying genetic associations with health-related traits, and in silico screening of life-extending drugs using machine learning and omics data. The second part provides a broad overview of the perspectives and challenges in machine learning research of aging and longevity. These include broad perspectives and challenges of using AI in genomics and epigenomics research and the utility of information theory-based methods in the research of aging and longevity, including discussions of the challenges of theoretical justification, of black-box and ad-hoc heuristic methodologies, and of human biases, data quality and standardization, as well as exploring algorithmic and human networks to improve the holistic understanding of aging and longevity. Both parts provide insights for the practical development of AI-based aging diagnosis and testing, including the prospects and challenges involved. The contributions are in depth, but aim to be accessible and valuable not only for specialists in AI and longevity research, but also for a wider readership, including gerontologists, geriatricians, medical specialists and students from diverse fields, basic scientists, public and private research entities, and policy-makers interested in potential interventions in degenerative aging processes using advanced computational tools. The book aims to create a balanced and comprehensive overview of the application methodology that will be of interest to professionals as well as novices in the field. The promises and challenges of AI in helping to achieve healthy longevity for the population are so significant that stimulating awareness of them, even at a basic level, is profoundly beneficial, as it may encourage more interest and engagement in the field. The synergy between AI and longevity research is only at its beginning. Many challenges and problems still need to be resolved for this synergy to bring real value to health care, including the need to establish agreed, theoretically and empirically grounded evaluation criteria for biomarkers of aging and geroprotectors, standards and incentives for data collection and sharing, and the ethical dissemination of research results. With greater awareness and engagement in these topics, we can hope that the health benefits will arrive sooner for our rapidly aging society.

Fairfax, USA
Rishon Lezion, Israel
Pak Shek Kok, Hong Kong

Alexey Moskalev
Ilia Stambler
Alex Zhavoronkov

Contents

Part I Biomarkers of Aging and Health

1 AI in Longevity
  Fedor Galkin and Alex Zhavoronkov

2 Automated Reporting of Medical Diagnostic Imaging for Early Disease and Aging Biomarkers Detection
  Anna E. Andreychenko and Sergey Morozov

3 Risk Forecasting Tools Based on the Collected Information for Two Types of Occupational Diseases
  Marc Deminov, Petr Kuztetsov, Alexander Melerzanov, and Dmitrii Yankevich

4 Obtaining Longevity Footprints in DNA Methylation Data Using Different Machine Learning Approaches
  Alena Kalyakulina, Igor Yusipov, and Mikhail Ivanchenko

5 The Role of Assistive Technology in Regulating the Behavioural and Psychological Symptoms of Dementia
  Emily A. Hellis and Elizabeta B. Mukaetova-Ladinska

6 Epidemiology, Genetics and Epigenetics of Biological Aging: One or More Aging Systems?
  Alessandro Gialluisi, Benedetta Izzi, Giovanni de Gaetano, and Licia Iacoviello

7 Temporal Relation Prediction from Electronic Health Records Using Graph Neural Networks and Transformers Embeddings
  Óscar García Sierra, Alfonso Ardoiz Galaz, Miguel Ortega Martín, Jorge Álvarez Rodríguez, and Adrián Alonso Barriuso

8 In Silico Screening of Life-Extending Drugs Using Machine Learning and Omics Data
  Alexander Fedintsev, Mikhail Syromyatnikov, Vasily Popov, and Alexey Moskalev

9 An Overview of Kernel Methods for Identifying Genetic Association with Health-Related Traits
  Vicente Gallego

10 Artificial Intelligence Approaches for Skin Anti-aging and Skin Resilience Research
  Anastasia Georgievskaya, Daniil Danko, Richard A. Baxter, Hugo Corstjens, and Timur Tlyachev

Part II Perspectives and Challenges in Machine Learning Research of Aging and Longevity

11 AI in Genomics and Epigenomics
  Veniamin Fishman, Maria Sindeeva, Nikolay Chekanov, Tatiana Shashkova, Nikita Ivanisenko, and Olga Kardymon

12 The Utility of Information Theory Based Methods in the Research of Aging and Longevity
  David Blokh, Joseph Gitarts, Eliyahu H. Mizrahi, Nadya Kagansky, and Ilia Stambler

13 AI for Longevity: Getting Past the Mechanical Turk Model Will Take Good Data
  Leonid Peshkin and Dmitrii Kriukov

14 Leveraging Algorithmic and Human Networks to Cure Human Aging: Holistic Understanding of Longevity via Generative Cooperative Networks, Hybrid Bayesian/Neural/Logical AI and Tokenomics-Mediated Crowdsourcing
  Deborah Duong, Ben Goertzel, Matthew Iklé, Hedra Seid, and Michael Duncan

Author Index
Subject Index

Contributors

Anna E. Andreychenko Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia; ITMO University, St. Petersburg, Russia Adrián Alonso Barriuso Dezzai by MMG, Madrid, Spain; Data Science Laboratory, Universidad Rey Juan Carlos, Madrid, Spain Richard A. Baxter Phase Plastic Surgery, Mountlake Terrace, WA, USA David Blokh The Geriatric Medical Center “Shmuel Harofe”, Beer Yaakov, Affiliated to Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel Nikolay Chekanov Artificial Intelligence Research Institute (AIRI), Moscow, Russia Hugo Corstjens Novigo+, Maaseik, Belgium Daniil Danko HautAI OU, Tallinn, Estonia Giovanni de Gaetano Department of Epidemiology and Prevention, IRCCS NEUROMED, Pozzilli, Italy Marc Deminov National Association of Medical Informatics, Moscow, Russia Michael Duncan Rejuve.AI, Rodney Bay, Saint Lucia; SingularityNET Foundation, Amsterdam, The Netherlands Deborah Duong Rejuve.AI, Rodney Bay, Saint Lucia; SingularityNET Foundation, Amsterdam, The Netherlands Alexander Fedintsev Institute of Biology of Komi Science Center of Ural Branch of Russian Academy of Sciences, Syktyvkar, Russia Veniamin Fishman Artificial Intelligence Research Institute (AIRI), Moscow, Russia


Alfonso Ardoiz Galaz Dezzai by MMG, Madrid, Spain; Universidad Complutense de Madrid, Madrid, Spain Fedor Galkin Deep Longevity, Hong Kong, China Vicente Gallego DEZZAI, Madrid, Spain Anastasia Georgievskaya HautAI OU, Tallinn, Estonia Alessandro Gialluisi Department of Epidemiology and Prevention, IRCCS NEUROMED, Pozzilli, Italy; Department of Medicine and Surgery, University of Insubria, Varese, Italy Joseph Gitarts Efi Arazi School of Computer Science, Reichman University, Herzliya, Israel Ben Goertzel Rejuve.AI, Rodney Bay, Saint Lucia; SingularityNET Foundation, Amsterdam, The Netherlands Emily A. Hellis School of Psychology and Vision Sciences, University of Leicester, Leicester, UK Licia Iacoviello Department of Epidemiology and Prevention, IRCCS NEUROMED, Pozzilli, Italy; Department of Medicine and Surgery, University of Insubria, Varese, Italy Matthew Iklé Rejuve.AI, Rodney Bay, Saint Lucia; SingularityNET Foundation, Amsterdam, The Netherlands Mikhail Ivanchenko Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University, Nizhny Novgorod, Russia Nikita Ivanisenko Artificial Intelligence Research Institute (AIRI), Moscow, Russia Benedetta Izzi Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid, Spain Nadya Kagansky The Geriatric Medical Center “Shmuel Harofe”, Beer Yaakov, Affiliated to Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel Alena Kalyakulina Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University, Nizhny Novgorod, Russia Olga Kardymon Artificial Intelligence Research Institute (AIRI), Moscow, Russia Dmitrii Kriukov Skolkovo Institute of Science and Technology, Moscow, Russia Petr Kuztetsov National Association of Medical Informatics, Moscow, Russia; Project Office “Digital Transformation in Occupational Medicine” of the Research Institute of Occupational Medicine, Moscow, Russia


Miguel Ortega Martín Dezzai by MMG, Madrid, Spain; Universidad Complutense de Madrid, Madrid, Spain Alexander Melerzanov “Applied Genetics” Center, Moscow Institute of Physics and Technology (National Research University), Moscow, Russia; “Laboratory of Innovative Technologies and Artificial Intelligence in Public Health”, Semashko National Research Institute of Russian Academy of Science, Moscow, Russia Eliyahu H. Mizrahi The Geriatric Medical Center “Shmuel Harofe”, Beer Yaakov, Affiliated to Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel Sergey Morozov Osimis S.A., Liège, Belgium Alexey Moskalev Institute of Biology of Komi Science Center of Ural Branch of Russian Academy of Sciences, Syktyvkar, Russia; School of Systems Biology, George Mason University, Fairfax, USA Elizabeta B. Mukaetova-Ladinska School of Psychology and Vision Sciences, University of Leicester, Leicester, UK Leonid Peshkin Systems Biology, Harvard Medical School, Boston, MA, USA Vasily Popov Laboratory of Metagenomics and Food Biotechnology, Voronezh State University of Engineering Technologies, Voronezh, Russia Jorge Álvarez Rodríguez Dezzai by MMG, Madrid, Spain Hedra Seid SingularityNET Foundation, Amsterdam, The Netherlands Tatiana Shashkova Artificial Intelligence Research Institute (AIRI), Moscow, Russia Óscar García Sierra Dezzai by MMG, Madrid, Spain; Universidad Complutense de Madrid, Madrid, Spain Maria Sindeeva Artificial Intelligence Research Institute (AIRI), Moscow, Russia Ilia Stambler The Geriatric Medical Center “Shmuel Harofe”, Beer Yaakov, Affiliated to Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; Vetek (Seniority) Association—The Movement for Longevity and Quality of Life, Rishon Lezion, Israel; Department of Science, Technology and Society, Bar Ilan University, Ramat Gan, Israel Mikhail Syromyatnikov Laboratory of Metagenomics and Food Biotechnology, Voronezh State University of Engineering Technologies, Voronezh, Russia Timur Tlyachev HautAI OU, Tallinn, Estonia


Dmitrii Yankevich Federal Research and Clinical Center of Intensive Care Medicine and Rehabilitology, Moscow, Russia Igor Yusipov Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University, Nizhny Novgorod, Russia Alex Zhavoronkov Deep Longevity, Hong Kong, China; Insilico Medicine, Hong Kong, China; Buck Institute for Research on Aging, Novato, CA, USA

Part I Biomarkers of Aging and Health

Chapter 1
AI in Longevity
Fedor Galkin and Alex Zhavoronkov

Abstract Deep learning models are powerful digital tools that can analyze all kinds of biodata and provide insights that cannot be obtained with other machine learning techniques. An abundance of publicly available data sets promotes the development of neural networks that may be applied in aging research, clinical practice, and drug design. New deep-learning algorithms are quickly adopted by bioinformatics to create analytical and generative solutions that deepen our understanding of the aging process. Although medical AI applications lack regulation, the public and politicians worldwide are acutely aware of the benefits AI tools can yield in the healthcare sector, thus generating demand for legislative action. Despite this organizational issue, deep-learning models are sufficiently advanced to be used as a foundation for a new age of highly personalized, quantitative, longevity-focused medicine.

Keywords Health · Longevity · Aging · Artificial Intelligence · Aging clocks · Biogerontology · Drug discovery · Deep learning · Life extension · Machine learning

F. Galkin · A. Zhavoronkov (B) Deep Longevity, Hong Kong, China; e-mail: [email protected]
A. Zhavoronkov Insilico Medicine, Hong Kong, China; Buck Institute for Research on Aging, Novato, CA 94945, USA
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_1

1.1 Statistical Models of Aging

Aging is a multifaceted process that has fascinated humanity since ancient times. In Hesiod's "Theogony," old age is represented by the malevolent spirit Geras, son of Nyx (the goddess of night) and brother of the Moirae (enforcers of fate; Hesiod, C8th-C7th BC). Described as "loathsome" and "dreaded even by the gods" by Homer, Geras did not develop a cult following, unlike one of his brothers, Thanatos (the god of death; Homer, C7th-C4th BC). In this depiction, it is clear that old age has been
recognized as a powerful enemy of humankind since its earliest days. However, only recently have we gathered enough knowledge to start to comprehend what aging is beyond mythological concepts. In the quest to understand and ultimately push against the "loathsome Geras," mathematical models of aging have played a crucial role. Although the first half of the twentieth century saw great progress in building an evolutionary and mechanistic understanding of aging drivers, this process could not be effectively quantified until recently. One of the first attempts to define biological age in humans was carried out in 1960 using a battery of 25 physiological and psychological factors hypothesized to evolve with age (Clark 1960). This study did not feature an aging clock, as in our current understanding; however, an attempt was made to identify the factors that can be reliably associated with the pace of aging. The factors most closely related to aging were blood pressure and lens accommodation. Interestingly, this study also examined the effects of certain psychological features on age, such as reading preference, reasoning ability, and fear of death. Although this model can be used to classify people based on their rate of aging, it offers little insight into the physiological drivers that may be modified to delay aging-related decline.

In 1988, one of the first aging clocks based on linear regression and principal component analysis was published (Nakamura et al. 1988). It features a panel of seven blood biomarkers and four physiological metrics selected for their correlation with chronological age. The model provides an output of "years" of biological age, an intuitive unit illustrating how far an individual has progressed along the typical aging path. Similar approaches were implemented in more recent aging clocks, such as the epigenetic aging clock published in 2013, which ignited a new wave of biogerontological research (Horvath 2013).
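A Nakamura-style clock can be sketched with standard tools: standardize age-correlated biomarkers, extract the first principal component, and rescale it into "years" of biological age. The sketch below uses purely synthetic data, and the rescaling convention is one of several reasonable choices, not the published method.

```python
# Principal-component "biological age" in the spirit of Nakamura et al. (1988).
# All data here are synthetic and illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
age = rng.uniform(30, 80, n)

# Seven synthetic "blood biomarkers" that drift with age plus noise.
biomarkers = 0.1 * age[:, None] + rng.normal(0, 1, (n, 7))

z = StandardScaler().fit_transform(biomarkers)
pc1 = PCA(n_components=1).fit_transform(z).ravel()
if np.corrcoef(pc1, age)[0, 1] < 0:  # orient the axis so it increases with age
    pc1 = -pc1

# Rescale the component so the score reads in "years" of biological age.
biological_age = age.mean() + pc1 * (age.std() / pc1.std())
```

Residuals of `biological_age` against chronological age then serve as a crude measure of age acceleration.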
Since 2013, numerous aging models have been published following similar methodologies applied to various types of biodata (Galkin et al. 2020a). All these aging clocks, including those described in the previous century, rely on detecting linear age-related trends in study samples. Most commonly, elastic net regression is used to approximate chronological age (the target variable) as a linear combination of a fixed-length input vector. More recently, aging clocks have been trained to approximate frailty scores or mortality risk. These models are called "second-generation" aging clocks to reflect the fact that the target variables they estimate are more relevant in the context of aging research than chronological age. Nonetheless, these aging clocks may still rely on chronological age. For example, in PhenoAge, published in 2018, the target variable is called "phenotypical age" and is derived from a mortality risk measure based on blood biomarkers and age (Levine et al. 2018). In GrimAge from 2019, chronological age is used as an input variable to estimate time to death (Lu et al. 2019). This approach has been shown to outperform first-generation models in a setting designed to measure aging clock associations with frailty measures, such as reaction time and walking speed (McCrory et al. 2020).
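The elastic net recipe described above can be sketched in a few lines of scikit-learn. The data below are synthetic stand-ins for methylation features; the code illustrates the general training procedure, not any published clock.

```python
# Sketch of a first-generation aging clock: elastic net regression that
# approximates chronological age as a linear combination of input features.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 500, 200
age = rng.uniform(20, 90, n_samples)

# Simulate feature values: a few drift linearly with age, the rest are noise.
X = rng.normal(0, 1, (n_samples, n_features))
X[:, :10] += 0.05 * age[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(X, age, random_state=0)
clock = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X_tr, y_tr)

predicted_age = clock.predict(X_te)
age_acceleration = predicted_age - y_te  # positive => "older" than expected
print(f"Mean absolute error: {np.abs(age_acceleration).mean():.1f} years")
```

The L1 component of the penalty drives most of the 200 coefficients to zero, which is why published clocks of this kind report compact CpG panels.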

Such aging clocks are commonly used in research settings to detect the anti-aging properties of certain interventions or to compare the pace of aging in different populations to assess the internal and external drivers of aging. The main hypothesis for the application of aging clocks is that the decrease in the biological age measures they produce represents a proportional reversal of aging-related damage sustained by the organism. However, many recent reports suggest that this assumption may not hold true for all research settings. Aging is a complicated phenomenon involving processes at every level of biological organization, but current aging models struggle to grasp this complexity. Different aging clock implementations are more sensitive to specific hallmarks of aging or may be influenced by the level of mitotic activity in sampled tissues (Bell et al. 2019; Kabacik et al. 2022). This drawback is further illustrated by the low correlation between the measures that various implementations generate when supplied with an identical sample (Vetter et al. 2022). Thus, "first-" and "second-generation" aging clocks seem to represent only certain aging drivers and might be unfit for validating the anti-aging properties of therapies targeting drivers with different origins.

Some of these issues with aging clocks may be alleviated by applying different methodologies at the training stage. One of the core assumptions in the clocks described above is the constant pace of aging. The original epigenetic clock published by Steve Horvath in 2013 features a logarithmic transformation to account for a steeper aging rate in early life; however, it may be that the pace of aging is not constant throughout adulthood (Horvath 2013). Linear regression models (e.g., elastic net) might not be suitable for approximating such a non-monotonic function.
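The logarithmic transformation mentioned above can be written down directly: Horvath (2013) log-transforms age below an adult threshold of 20 years and treats it linearly above. The sketch below is a paraphrase of the published calibration, not the official implementation.

```python
import numpy as np

ADULT_AGE = 20  # the adult threshold used in Horvath (2013)

def horvath_transform(age):
    """Log-linear age transform: logarithmic below the adult threshold
    (fast early-life epigenetic change), linear above it."""
    age = np.asarray(age, dtype=float)
    return np.where(
        age <= ADULT_AGE,
        np.log(age + 1) - np.log(ADULT_AGE + 1),
        (age - ADULT_AGE) / (ADULT_AGE + 1),
    )

def inverse_transform(y):
    """Map a model's output on the transformed scale back to years."""
    y = np.asarray(y, dtype=float)
    return np.where(
        y <= 0,
        np.exp(y + np.log(ADULT_AGE + 1)) - 1,
        y * (ADULT_AGE + 1) + ADULT_AGE,
    )
```

The clock is fit to `horvath_transform(age)`, and predictions are mapped back to years with the inverse; the transform is continuous and monotonic, so the round trip is exact.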
Another drawback associated with the application of linear methods in biogerontology stems from their shared assumption that a factor’s effect on the pace of aging is constant across the population. For example, in an elastic net epigenetic clock, increasing the methylation of promoter X always increases the predicted biological age due to the positive coefficient assigned based on the effect of this promoter averaged across all training samples. The context that other features provide is ignored because all features are considered independent. Meanwhile, the biological reality may be that gene X is involved in a wide network of feedback loops, some combination of which may create a context in which high methylation of X attenuates the aging processes. Removing these limitations of current technology requires a more robust method that is designed to handle non-linear trends and co-dependent features. From this perspective, the deep learning algorithms described below offer a unique opportunity to overcome the current challenges faced by the biogerontological community.

1.1.1 Deep Learning Basic Principles

Deep learning is a term describing a variety of machine learning techniques that utilize artificial neurons—mathematical objects that receive a predefined set of weighted inputs and produce a single output via a non-linear function. Neurons are typically organized into a network with distinct layers, with each layer receiving the previous layer's output for processing. Deep neural networks are intended to emulate biological neural structures, extracting high-level information by recombining and amplifying signals from their inputs. The mathematical functions determining a neuron's output are analogous to the action potential of biological neurons. Because deep neural networks feature a large number of weighted connections that need to be trained, they require much more data than simpler algorithms to function properly. Apart from the classical regression and classification problems that may be solved with less sophisticated models, neural networks can be trained to compress data, modify it, or even create a synthetic sample with the desired properties.
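These principles can be made concrete with a minimal sketch: an artificial neuron as a weighted sum passed through a non-linear function, and a feed-forward network as layers consuming each other's outputs. Sizes and weights below are illustrative.

```python
import numpy as np

def relu(z):
    """A common non-linear activation function."""
    return np.maximum(0.0, z)

def neuron(inputs, weights, bias, activation=relu):
    """One artificial neuron: weighted inputs -> non-linearity -> one output."""
    return activation(np.dot(weights, inputs) + bias)

def forward(x, layers):
    """A layered network: each layer processes the previous layer's output."""
    for W, b, act in layers:
        x = act(W @ x + b)
    return x

rng = np.random.default_rng(1)
layers = [
    (rng.normal(size=(8, 4)), np.zeros(8), relu),         # hidden layer
    (rng.normal(size=(1, 8)), np.zeros(1), lambda z: z),  # linear output
]
y = forward(rng.normal(size=4), layers)  # a single scalar prediction
```

Training consists of adjusting every weight matrix `W` and bias `b` to minimize a loss, which is why the data requirements grow with the number of connections.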

1.2 Deep Learning Aging Clocks

Implementing aging clocks with deep learning algorithms may help resolve the many drawbacks present in existing solutions. One of the greatest advantages neural networks possess is their ability to handle co-dependent variables and non-linear cases. Provided a sufficiently large training sample, a neural network is expected to pick up patterns associated with a particular age group and produce more accurate age predictions. A variety of activation functions, architectures, and training procedures offer customization options that are able to overcome the limitations imposed on other algorithms by their assumptions. While most linear aging clocks are trained to assess the pace of aging based on epigenetic data obtained from arrays or sequencing, deep aging clocks can process a much larger number of data types: clinical blood tests (Mamoshina et al. 2018a), facial images (Bobrov et al. 2018), transcriptomes (Mamoshina et al. 2018b), and gut metagenomes (Galkin et al. 2020b). Similar to linear aging clocks, these deep learning aging clocks have been validated in many settings to find significant associations with clinical conditions.

Recently, deep and linear aging clocks have been utilized to study the effects of COVID-19 on the pace of aging, which has been hypothesized to affect multiple aging processes in the cell (Polidori et al. 2021). Epigenetic linear models have produced conflicting results. Franzen et al. (2021) showed that epigenetic age is unaffected by the virus, while Pang et al. (2022) found that the effect of the virus depends on the patient's age. The divergent results in the latter study were explained by the differences in blood cell composition between younger and older adults, blood being the primary specimen type in epigenetic studies.
However, a deep aging clock trained on clinical blood tests, a more regular and easy-to-collect data type, demonstrated that the pace of aging was increased in patients with lethal outcomes compared to recovered patients (Galkin et al. 2021). The conflicting results for the epigenetic clocks highlight the need for a multimodal approach to aging quantification that provides perspectives unaffected by biases inherent in a single data type. Certain progress has been made in the field of deep aging clocks to develop such models. Linear aging clocks indicate a connection between mental health and biological age in multiple studies, highlighting that lifetime accumulated stress, trauma, social pressure, and exposure to violence may have long-lasting effects on the epigenetic pace of aging (Zannas et al. 2015; Boks et al. 2015; Jovanovic et al. 2017; Yang et al. 2020; Anderson et al. 2021). However, the models utilized in these studies are not trained on psychological data and thus are unfit to serve as the foundation for a multimodal solution that combines mental and physical health factors. Concurrently, deep learning models were employed to describe the lifetime trends of human psychology and to define psychological age based on survey data (Zhavoronkov et al. 2020). Unlike linear models, which may be used to derive sample-level insights, the interpretation of deep learning models' output provides a superior level of personalization. Feature importance analysis methods, such as Shapley values and LIME (Local interpretable model-agnostic explanations), may be applied to understand the inner workings of deep learning models, commonly considered "black boxes." These methods provide the ability to assess the contribution of each feature in the model to its output and pinpoint the sources of accelerated aging in a person, be it psychological or biological aging. Other deep learning methods, such as self-organizing maps characterized by competitive learning, can be applied to the same objects to create recommendation engines that navigate users toward mental and physical profiles characterized by higher longevity potential (Galkin et al. 2022a). Recent findings demonstrate that this progress in the field of psychological aging may be leveraged to improve physical longevity. An individual's emotional state has been shown to affect the pace of biological aging, as observed with a clinical blood aging clock (Galkin et al. 2022b).
More specifically, the detrimental effects of such factors as feeling hopeless or unhappy or suffering from restless sleep are of the same magnitude as smoking or having low access to healthcare services. As psychological surveys and blood panels are relatively easy to collect (compared to -omics specimens), a combination of a psychological and a hematologic aging clock is a likely candidate for the first multimodal aging clock, which would be a more reliable yardstick than other aging clocks that rely on a single aging dimension.

However, despite the many advantages of neural networks as a method, they possess certain drawbacks that make their immediate practical applications still unachievable. Chief among them is the requirement for a much larger volume of training data. In a network architecture in which each neuron is connected to a neuron in a subsequent layer, the number of parameters to be optimized greatly exceeds that of a linear model. However, in the age of big data and public-access repositories, such as the Gene Expression Omnibus, this limitation may be readily overcome. Despite the modern abundance of data, the same requirements for data consistency apply to deep learning as to other machine learning techniques. Whichever method is chosen, domain adaptation must still be performed before aging clock training starts in order to make data obtained from different platforms comparable.

The high degree of personalization achieved by deep learning models renders this technology suitable for practical applications, such as automated initial screening, assessment of the impact of national policies on population health spans, health insurance, geroprotector development, and trials. Other machine learning methods
result in models that commonly assume that each new sample belongs to the training cohort. Consequently, they extrapolate the global trends in a sample that may represent a region of the set in which the trend is not observed. Such models may still be used to provide useful insights, provided they are applied to groups of subjects with a composition similar to the training set; however, caution is required when they are applied to analyze the pace of aging at the individual level. In contrast, deep learning clocks are able to handle datasets with a segmented structure, provided that each segment is sufficiently represented. Furthermore, feature importance algorithms enabled for neural networks can highlight the sources of aging acceleration with this local data structure in mind. In the context of epigenetics, deep learning clocks return a different list of gene promoters suspected of accelerating aging in a person, while an elastic net-based clock has a fixed list of epigenetic aging contributors. Deep learning aging clocks are a rapidly developing field of research with high potential for commercialization. Some aging clocks have been patented as diagnostic and drug development tools based on transcriptomic (Aliper et al. 2019), proteomic (Aliper et al. 2020b), and gut flora (Aliper et al. 2020a) biomarkers of aging. Neural networks’ inherent properties strongly position this technology as a foundation for the shift toward a personalized and longevity-focused healthcare system.
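The per-individual feature attribution described above can be illustrated from first principles. The sketch below computes exact Shapley values for a toy three-feature "risk" model with an interaction term; all feature names are invented for illustration, and real tools such as the SHAP library approximate these values for full-scale networks, where exact enumeration is infeasible.

```python
# Exact Shapley values for a toy model, computed from the definition.
from itertools import combinations
from math import factorial

FEATURES = ["methylation_X", "blood_glucose", "restless_sleep"]

def model(present):
    """Toy risk score with an interaction: methylation_X only matters
    in the context of restless_sleep (features are not independent)."""
    score = 0.0
    if "blood_glucose" in present:
        score += 1.0
    if "methylation_X" in present and "restless_sleep" in present:
        score += 2.0
    return score

def shapley(feature):
    """Average marginal contribution of `feature` over all orderings."""
    others = [f for f in FEATURES if f != feature]
    n, value = len(FEATURES), 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            value += weight * (model(set(subset) | {feature}) - model(set(subset)))
    return value

contributions = {f: shapley(f) for f in FEATURES}
```

The interaction's effect is split evenly between the two features that create it, and the contributions sum to the model's output on the full feature set, which is the property that makes Shapley values attractive for per-person aging reports.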

1.3 Deep Learning Applications in Medicine

1.3.1 Clinical Practice

The medical utility of deep learning goes far beyond aging clock development. Although other embodiments of this methodology are not directly linked to aging research, they have the potential to contribute to the digital healthcare infrastructure and facilitate widespread longevity-focused approaches. By 2021, more than 3000 papers had been published on the topic of deep learning in medical therapeutics, in which the most prevalent use case is image analysis (Nogales et al. 2021). Convolutional neural networks have been trained to analyze MRI scans and detect malignant tumors with high accuracy (Febrianto et al. 2020; Jung et al. 2022). The same network architecture may also be applied to other data types, such as gene expression, to diagnose tumors (Mostavi et al. 2020). For certain implementations, neural networks have been shown to outperform other methods of diagnosis, including established clinical biomarkers and image inspections by professionals (Morrow and Sormani 2020; Mitchell et al. 2020; Feng et al. 2022). Although such models cannot replace specialists, they have the potential to greatly increase the throughput of healthcare systems worldwide by decreasing the inspection time and providing a second opinion when getting multiple experts together is impossible. Some neural networks have been demonstrated to outperform radiologists in the task of early breast tumor detection (Lotter et al. 2021). Automating such models to process data from patients undergoing non-oncological screening can save many lives. However, there is reasonable concern that AI-assisted diagnostics can lead to overdiagnosis and thus to unnecessary treatment of incidental findings of uncertain hazard (Bulliard and Chiolero 2015; Houssami 2017). Moreover, the widespread adoption of deep learning tools in healthcare may create a statistical landscape that is incompatible with previously collected data that is still used to guide decision-making at all levels of society. These concerns emphasize that despite the apparent benefits of deep learning approaches, extra caution should be practiced before such approaches are adopted in clinical practice.

Most jurisdictions agree on a cautious approach to deep learning innovations in healthcare. Despite the growing public sentiment that AI tools can enhance the healthcare sector's efficiency, surveys in the United States (US) show that people are aware of the risks AI adoption entails in this industry (Esmaeilzadeh 2020). The highest perceived risks associated with clinical AI applications are communication barriers, the chance of malfunction, and the lack of regulation. In a European Parliament report published in 2022, seven policies are proposed to ensure the safe deployment of AI models in healthcare (European Parliament 2022). The measures include transferring existing AI norms to healthcare use cases, educating the public, and introducing AI passports intended to increase transparency. The passport should include information on model limitations and assumptions, intended use cases, training and validation reports, and periodic evaluation results.
Although the legislation is still being prepared, several solutions are available to consumers and organizations willing to explore anti-aging deep learning tools. Young.AI is a consumer-focused web platform offering free access to various aging clocks developed by Deep Longevity, a Hong Kong–based tech startup. FuturSelf.AI is a web application featuring a psychological aging clock that can provide individuals with lifestyle tips from an AI-driven recommendation engine (Galkin et al. 2022a). For businesses, Senoclock (https://www.deeplongevity.com/senoclock) is available if they want to obtain personalized pace of aging reports for their clients using deep learning clocks. Because the reports might contain clinically actionable items of information, they are intended only to aid a professional consultation, not replace it.
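The convolutional networks behind the image-analysis results discussed above are built from a single core operation. The sketch below shows valid-mode 2D cross-correlation (the operation a convolutional layer applies) detecting a vertical edge in a synthetic image; it is purely illustrative and not tied to any of the cited models.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image
    and record the weighted sum at every position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector responds strongly at the boundary of a bright
# region -- the kind of low-level feature the early layers of a CNN learn.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # bright right half
edge_kernel = np.array([[-1.0, 1.0]] * 3)  # 3x2 vertical-edge filter
response = conv2d(image, edge_kernel)
```

In a trained network, thousands of such kernels are learned from data, and deeper layers combine their responses into tumor-shaped or tissue-specific features.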

1.3.2 Drug Development

Apart from screening and diagnostics, deep learning carries the promise of improving another essential process in the modern healthcare landscape. According to U.S. FDA statistics from 1991 to 2010, more than 90% of preclinical drug candidates eventually fail clinical trials (Takebe et al. 2018). A more recent analysis of clinical trials based on 2000–2010 data from the US, Europe, and Japan showed that the average approval rate of phase I candidates is 13%, while the approval of novel modalities, such as
cell or gene therapies, is as low as 4% (Yamaguchi et al. 2021). The average approval rate has remained stagnant for decades, while research costs in drug development have increased (Paul et al. 2010). According to estimates from 2020, the median cost of delivering a single product to the market stands at USD 985 million, with actual costs ranging from USD 315 million to USD 2.8 billion (Wouters et al. 2020). As the pool of "low-hanging fruit" in drug discovery diminishes, identifying novel targets and testing novel therapies require increased spending and larger teams.

Deep learning has the potential to fast-track certain stages of drug development and thus reduce research costs. In 2019, the generative neural network GENTRL was demonstrated to propose a lung fibrosis drug candidate that passed preclinical tests in vitro and in vivo (Zhavoronkov et al. 2019). The entire project took only 21 days, an unprecedented speed for the industry. A higher success rate of AI-assisted trials might lead to a new era of drug development, allowing pharmaceutical companies to efficiently test novel hypotheses and expand their research activities. Lower entry costs to the industry are also expected to promote competition and benefit consumers. With the advent of aging clocks, which offer a way to quantify the pace of aging, this new approach to drug discovery might result in an unprecedented expansion of the pool of known geroprotectors.

Compliance with Ethical Standards

Conflict of Interest AZ and FG are employed by Deep Longevity Limited, a Hong Kong startup developing deep learning aging clocks and part of Endurance RP, a publicly traded company (HKG: 0575). AZ is employed by Insilico Medicine, an AI startup developing digital solutions for drug discovery.

References

Aliper AM, Galkin F, Zavoronkovs A (2020a) Aging markers of human microbiome and microbiomic aging clock
Aliper AM, Putin E, Zavoronkovs A (2019) Deep transcriptomic markers of human biological aging and methods of determining a biological aging clock
Aliper AM, Putin E, Zavoronkovs A (2020b) Deep proteome markers of human biological aging and methods of determining a biological aging clock
Anderson JA, Johnston RA, Lea AJ, Campos FA, Voyles TN, Akinyi MY, Alberts SC, Archie EA, Tung J (2021) High social status males experience accelerated epigenetic aging in wild baboons. Elife 10:e66128
Bell CG, Lowe R, Adams PD, Baccarelli AA, Beck S, Bell JT, Christensen BC, Gladyshev VN, Heijmans BT, Horvath S, Ideker T, Issa J-PJ, Kelsey KT, Marioni RE, Reik W, Relton CL, Schalkwyk LC, Teschendorff AE, Wagner W, Zhang K, Rakyan VK (2019) DNA methylation aging clocks: challenges and recommendations. Genome Biol 20(1):249. https://doi.org/10.1186/s13059-019-1824-y
Bobrov E, Georgievskaya A, Kiselev K, Sevastopolsky A, Zhavoronkov A, Gurov S, Rudakov K, del Pilar Bonilla Tobar M, Jaspers S, Clemann S (2018) PhotoAgeClock: deep learning algorithms for development of non-invasive visual biomarkers of aging. Aging 10(11):3249–3259. https://doi.org/10.18632/aging.101629
Boks MP, van Mierlo HC, Rutten BPF, Radstake TRDJ, De Witte L, Geuze E, Horvath S, Schalkwyk LC, Vinkers CH, Broen JCA, Vermetten E (2015) Longitudinal changes of telomere length and epigenetic age related to traumatic stress and post-traumatic stress disorder. Psychoneuroendocrinology 51:506–512. https://doi.org/10.1016/j.psyneuen.2014.07.011
Bulliard J-L, Chiolero A (2015) Screening and overdiagnosis: public health implications. Public Health Rev 36(1):8. https://doi.org/10.1186/s40985-015-0012-1
Clark JW (1960) The aging dimension: a factorial analysis of individual differences with age on psychological and physiological measurements. J Gerontol 15:183–187. https://doi.org/10.1093/geronj/15.2.183
Esmaeilzadeh P (2020) Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives. BMC Med Inform Decis Mak 20(1):170. https://doi.org/10.1186/s12911-020-01191-1
European Parliament. Directorate General for Parliamentary Research Services (2022) Artificial intelligence in healthcare: applications, risks, and ethical and societal impacts. Publications Office, LU
Febrianto DC, Soesanti I, Nugroho HA (2020) Convolutional neural network for brain tumor detection. IOP Conf Ser: Mater Sci Eng 771(1):012031. https://doi.org/10.1088/1757-899X/771/1/012031
Feng X, Provenzano FA, Small SA, for the Alzheimer's Disease Neuroimaging Initiative (2022) A deep learning MRI approach outperforms other biomarkers of prodromal Alzheimer's disease. Alzheimer's Res Therapy 14(1):45. https://doi.org/10.1186/s13195-022-00985-x
Franzen J, Nüchtern S, Tharmapalan V, Vieri M, Nikolić M, Han Y, Balfanz P, Marx N, Dreher M, Brümmendorf TH, Dahl E, Beier F, Wagner W (2021) Epigenetic clocks are not accelerated in COVID-19 patients. Int J Mol Sci 22(17):9306. https://doi.org/10.3390/ijms22179306
Galkin F, Kochetov K, Keller M, Zhavoronkov A, Etcoff N (2022a) Optimizing future well-being with artificial intelligence: self-organizing maps (SOMs) for the identification of islands of emotional stability. Aging 14(12):4935–4958. https://doi.org/10.18632/aging.204061
Galkin F, Kochetov K, Koldasbayeva D, Faria M, Fung HH, Chen AX, Zhavoronkov A (2022b) Psychological factors substantially contribute to biological aging: evidence from the aging rate in Chinese older adults. Aging 14(18):7206–7222. https://doi.org/10.18632/aging.204264
Galkin F, Mamoshina P, Aliper A, de Magalhães JP, Gladyshev VN, Zhavoronkov A (2020a) Biohorology and biomarkers of aging: current state-of-the-art, challenges and opportunities. Ageing Res Rev 60:101050. https://doi.org/10.1016/j.arr.2020.101050
Galkin F, Mamoshina P, Aliper A, Putin E, Moskalev V, Gladyshev VN, Zhavoronkov A (2020b) Human gut microbiome aging clock based on taxonomic profiling and deep learning. iScience 23(6):101199. https://doi.org/10.1016/j.isci.2020.101199
Galkin F, Parish A, Bischof E, Zhang J, Mamoshina P, Zhavoronkov A (2021) Increased pace of aging in COVID-related mortality. Life 11(8):730. https://doi.org/10.3390/life11080730
Hesiod (C8th-C7th BC) Theogony, trans. Evelyn-White
Homer (C7th-C4th BC) Hymn 5 to Aphrodite, trans. Evelyn-White
Horvath S (2013) DNA methylation age of human tissues and cell types. Genome Biol 14(10):R115. https://doi.org/10.1186/gb-2013-14-10-r115
Houssami N (2017) Overdiagnosis of breast cancer in population screening: does it make breast screening worthless? Cancer Biol Med 14(1):1–8. https://doi.org/10.20892/j.issn.2095-3941.2016.0050
Jovanovic T, Vance LA, Cross D, Knight AK, Kilaru V, Michopoulos V, Klengel T, Smith AK (2017) Exposure to violence accelerates epigenetic aging in children. Sci Rep 7:8962. https://doi.org/10.1038/s41598-017-09235-9
Jung Y, Kim T, Han M-R, Kim S, Kim G, Lee S, Choi YJ (2022) Ovarian tumor diagnosis using deep convolutional neural networks and a denoising convolutional autoencoder. Sci Rep 12(1):17024. https://doi.org/10.1038/s41598-022-20653-2
Kabacik S, Lowe D, Fransen L, Leonard M, Ang S-L, Whiteman C, Corsi S, Cohen H, Felton S, Bali R, Horvath S, Raj K (2022) The relationship between epigenetic age and the hallmarks of aging in human cells. Nat Aging 2(6):484–493. https://doi.org/10.1038/s43587-022-00220-0
Levine ME, Lu AT, Quach A, Chen BH, Assimes TL, Bandinelli S, Hou L, Baccarelli AA, Stewart JD, Li Y, Whitsel EA, Wilson JG, Reiner AP, Aviv A, Lohman K, Liu Y, Ferrucci L, Horvath S (2018) An epigenetic biomarker of aging for lifespan and healthspan. Aging 10(4):573–591. https://doi.org/10.18632/aging.101414
Lotter W, Diab AR, Haslam B, Kim JG, Grisot G, Wu E, Wu K, Onieva JO, Boyer Y, Boxerman JL, Wang M, Bandler M, Vijayaraghavan GR, Gregory Sorensen A (2021) Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nat Med 27(2):244–249. https://doi.org/10.1038/s41591-020-01174-9
Lu AT, Quach A, Wilson JG, Reiner AP, Aviv A, Raj K, Hou L, Baccarelli AA, Li Y, Stewart JD, Whitsel EA, Assimes TL, Ferrucci L, Horvath S (2019) DNA methylation GrimAge strongly predicts lifespan and healthspan. Aging 11(2):303–327. https://doi.org/10.18632/aging.101684
Mamoshina P, Kochetov K, Putin E, Cortese F, Aliper A, Lee W-S, Ahn S-M, Uhn L, Skjodt N, Kovalchuk O, Scheibye-Knudsen M, Zhavoronkov A (2018a) Population specific biomarkers of human aging: a big data study using South Korean, Canadian, and Eastern European patient populations. J Gerontol: Ser A 73(11):1482–1490. https://doi.org/10.1093/gerona/gly005
Mamoshina P, Volosnikova M, Ozerov IV, Putin E, Skibina E, Cortese F, Zhavoronkov A (2018b) Machine learning on human muscle transcriptomic data for biomarker discovery and tissue-specific drug target identification. Front Genet 9:242. https://doi.org/10.3389/fgene.2018.00242
McCrory C, Fiorito G, Hernandez B, Polidoro S, O'Halloran AM, Hever A, Ni C, Lu AT, Horvath S, Vineis P, Kenny RA (2020) GrimAge outperforms other epigenetic clocks in the prediction of age-related clinical phenotypes and all-cause mortality. J Gerontol A Biol Sci Med Sci 76(5):741–749. https://doi.org/10.1093/gerona/glaa286
Mitchell JR, Kamnitsas K, Singleton KW, Whitmire SA, Clark-Swanson KR, Ranjbar S, Rickertsen CR, Johnston SK, Egan KM, Rollison DE, Arrington J, Krecke KN, Passe TJ, Verdoorn JT, Nagelschneider AA, Carr CM, Port JD, Patton A, Campeau NG, Liebo GB, Eckel LJ, Wood CP, Hunt CH, Vibhute P, Nelson KD, Hoxworth JM, Patel AC, Chong BW, Ross JS, Boxerman JL, Vogelbaum MA, Hu LS, Glocker B, Swanson KR (2020) Deep neural network to locate and segment brain tumors outperformed the expert technicians who created the training data. J Med Imaging (Bellingham) 7(5):055501. https://doi.org/10.1117/1.jmi.7.5.055501
Morrow JM, Sormani MP (2020) Machine learning outperforms human experts in MRI pattern analysis of muscular dystrophies. Neurology 94(10):421–422. https://doi.org/10.1212/WNL.0000000000009053
Mostavi M, Chiu Y-C, Huang Y, Chen Y (2020) Convolutional neural network models for cancer type prediction based on gene expression. BMC Med Genomics 13(5):44. https://doi.org/10.1186/s12920-020-0677-2
Nakamura E, Miyao K, Ozeki T (1988) Assessment of biological age by principal component analysis. Mech Ageing Dev 46(1):1–18. https://doi.org/10.1016/0047-6374(88)90109-1
Nogales A, García-Tejedor ÁJ, Monge D, Vara JS, Antón C (2021) A survey of deep learning models in medical therapeutic areas. Artif Intell Med 112:102020. https://doi.org/10.1016/j.artmed.2021.102020
Wouters OJ, McKee M, Luyten J (2020) Estimated research and development investment needed to bring a new medicine to market, 2009–2018. JAMA 323(9). https://doi.org/10.1001/jama.2020.1166
Pang APS, Higgins-Chen AT, Comite F, Raica I, Arboleda C, Went H, Mendez T, Schotsaert M, Dwaraka V, Smith R, Levine ME, Ndhlovu LC, Corley MJ (2022) Longitudinal study of DNA methylation and epigenetic clocks prior to and following test-confirmed COVID-19 and mRNA vaccination. Front Genet 13
Paul SM, Mytelka DS, Dunwiddie CT, Persinger CC, Munos BH, Lindborg SR, Schacht AL (2010) How to improve R&D productivity: the pharmaceutical industry's grand challenge. Nat Rev Drug Discov 9(3):203–214. https://doi.org/10.1038/nrd3078
Polidori MC, Sies H, Ferrucci L, Benzing T (2021) COVID-19 mortality as a fingerprint of biological age. Ageing Res Rev 67:101308. https://doi.org/10.1016/j.arr.2021.101308
Takebe T, Imai R, Ono S (2018) The current status of drug discovery and development as originated in United States academia: the influence of industrial and academic collaboration on drug discovery and development. Clin Transl Sci 11(6):597–606. https://doi.org/10.1111/cts.12577
Vetter VM, Kalies CH, Sommerer Y, Spira D, Drewelies J, Regitz-Zagrosek V, Bertram L, Gerstorf D, Demuth I (2022) Relationship between 5 epigenetic clocks, telomere length, and functional capacity assessed in older adults: cross-sectional and longitudinal analyses. J Gerontol A Biol Sci Med Sci 77(9):1724–1733. https://doi.org/10.1093/gerona/glab381
Yamaguchi S, Kaneko M, Narukawa M (2021) Approval success rates of drug candidates based on target, action, modality, application, and their combinations. Clin Transl Sci 14(3):1113–1122. https://doi.org/10.1111/cts.12980
Yang R, Wu GWY, Verhoeven JE, Gautam A, Reus VI, Kang JI, Flory JD, Abu-Amara D, Hood L, Doyle FJ, Yehuda R, Marmar CR, Jett M, Hammamieh R, Mellon SH, Wolkowitz OM (2020) A DNA methylation clock associated with age-related illnesses and mortality is accelerated in men with combat PTSD. Mol Psychiatry:1–11. https://doi.org/10.1038/s41380-020-0755-z
Zannas AS, Arloth J, Carrillo-Roa T, Iurato S, Röh S, Ressler KJ, Nemeroff CB, Smith AK, Bradley B, Heim C et al (2015) Lifetime stress accelerates epigenetic aging in an urban, African American cohort: relevance of glucocorticoid signaling. Genome Biol 16(1):1–12
Zhavoronkov A, Ivanenkov YA, Aliper A, Veselov MS, Aladinskiy VA, Aladinskaya AV, Terentiev VA, Polykovskiy DA, Kuznetsov MD, Asadulaev A, Volkov Y, Zholus A, Shayakhmetov RR, Zhebrak A, Minaeva LI, Zagribelnyy BA, Lee LH, Soll R, Madge D, Xing L, Guo T, Aspuru-Guzik A (2019) Deep learning enables rapid identification of potent DDR1 kinase inhibitors. Nat Biotechnol 37(9):1038–1040. https://doi.org/10.1038/s41587-019-0224-x
Zhavoronkov A, Kochetov K, Diamandis P, Mitina M (2020) PsychoAge and SubjAge: development of deep markers of psychological and subjective age using artificial intelligence. Aging (Albany NY) 12(23):23548–23577. https://doi.org/10.18632/aging.202344

Chapter 2

Automated Reporting of Medical Diagnostic Imaging for Early Disease and Aging Biomarkers Detection

Anna E. Andreychenko and Sergey Morozov

Abstract The chapter describes the value of medical imaging for longevity medicine, the current approach to medical imaging reporting, and the potential of automation to harvest all the information present in the images. The current state of the art of AI in radiology is discussed, together with the practical results of an unprecedentedly large-scale external validation of AI performance for the most common use cases in radiology during 2020–2021.

Keywords Radiology · Medical imaging · Computer vision · Artificial intelligence · Imaging biomarkers

2.1 Longevity Medicine and Radiology

Medical imaging represents one of the largest sectors of healthcare. It provides screening, diagnostics and outcome monitoring for a wide range of diseases. The world is experiencing a surge in the amount of radiology equipment, and the number of imaging studies grows accordingly. Radiography, computed tomography, magnetic resonance imaging, mammography, hybrid imaging and other methods of nuclear medicine have become an integral component of clinical procedures and have acquired formal status in numerous standards and guidelines (Scatliff and Morris 2014; Lapi and McConathy 2021).

A. E. Andreychenko
Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, 24 Petrovka Str., Bldg. 1, Moscow 127051, Russia

S. Morozov
Osimis S.A., Quai du Banning 6, 4000 Liège, Belgium

A. E. Andreychenko (B)
ITMO University, St. Petersburg, Russia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_2


The management of clinical resources and the value of radiology have become crucial for all the involved stakeholders. Managers and scientists from around the world are elaborating the best ways to organize radiology services, and these innovators now find inspiration in the opportunities offered by digital transformation (Gore 2020; Hwang et al. 2021).

The pressing challenge of clinical relevance and the resource-saving mindset have led to an inevitable decrease in the number of screening studies, which is inappropriate, as preventive care should not suffer from the newly emerged constraints. Medical screening targets the most high-profile conditions, such as infectious diseases (tuberculosis, etc.) and malignant neoplasms. At the same time, chronic non-communicable diseases (including their risk factors and predictors) remain under the radar. Admittedly, it is economically and biologically unreasonable to conduct screening tests that involve ionizing radiation to detect risk factors, but not the condition itself. What can be done in this situation?

The detection of early signs and risk factors that affect aging and life expectancy is critical from the social and demographic perspective. In addition, the clinical importance of some signs is yet to be discovered. On the other hand, hunting for and interpreting findings that are either questionable or at best predictors of a real disease means nothing but a heavier burden on the radiologist's shoulders: image reading becomes more time-consuming and therefore drains more resources. It also does not seem practical to increase the number of screening imaging studies. Apart from even more expenses, this would entail more risks for patients (including emotional distress, unreasonable invasive interventions for diagnosis verification and additional radiation exposure).

We believe that in terms of healthcare and longevity, preventive medicine must rely on the value-based approach. For this reason, the detection of various predictors and risk factors must draw on the least possible amount of resources (including the radiologists' effort and time). The opportunistic screening concept involves searching for early signs and risk factors of diseases in imaging studies conducted for some other clinical purpose. For example, an abdominal CT for pancreatitis also shows bone mineral density, lymph node sizes, etc. To minimize the extra costs and labour, it makes sense to automate the evaluation of any non-essential parameters by using artificial intelligence (AI) technologies.

Many experts have something to say about the capabilities and specifics of AI use in radiology. However, things that look good on paper rarely do so in real life; a lot of the blame lies with immature technology, inconsistent data and a lack of clinical utility (Ranschaert et al. 2019; Gusev et al. 2022). Medical screening, which involves the widespread use of standard studies, holds great potential for AI technologies. Medical imaging has a tremendous impact on clinical decision-making and the selection of the most appropriate personal plans for patients. Substantial infrastructure and personnel challenges hamper the implementation of medical imaging in low-income and middle-income countries (Lapi and McConathy 2021). Given the limited availability of resources, AI technologies could save the day and, if nothing else, secure wider screening opportunities for everyone.


We believe that in terms of healthcare and longevity, preventive medicine must rely on AI-based opportunistic screening. Special algorithms must check routine diagnostic images for radiomic and other signs and risk factors associated with disease development, aging, and the factors that critically affect life expectancy. Besides radiology, other diagnostic information on the health status of individuals should be made available in order to develop comprehensive prognostic models of people's health. One source of this information could be instrumental diagnostics such as the ECG and its fully AI-based interpretation (Siontis et al. 2021); another could be AI-based analysis of so-called m-Health data, where wearable devices facilitate passive monitoring of people's health (Khan and Alotaibi 2020).

2.2 AI and Radiology: State of the Art

AI and radiology have a relatively long history together. Back in 1994, Charles Kahn published a review (Kahn 1994) describing how AI could be applied in radiology at all stages, from a value-based and patient-centred selection of the diagnostic procedure to the automation of image interpretation and results reporting. AI has been applied to medical image analysis since the middle of the twentieth century (Lodwick et al. 1963; Winsberg et al. 1967), when relatively simple expert systems based solely on empirical human knowledge were developed. After that, the complexity of such models increased to include statistical information from the gathered data as well. Finally, with the breakthrough in computing and in the training of deep neural networks (the so-called deep learning part of artificial intelligence), it became possible to build systems that demand minimal human involvement beyond a supervised training stage and are capable of extracting new knowledge from the data alone.

Despite the relatively long history of attempts to automate image interpretation in radiology, it has not been widely accepted in clinical practice and has not shown prominent benefits (Lehman et al. 2015). Nevertheless, the outstanding success of deep neural networks in traditional image analysis (e.g. AlexNet, Inception, etc.) has drawn attention to medical imaging and given a new impulse to the wide development of computer-aided diagnosis systems based on artificial intelligence (Meskó and Görög 2020). Due to the high level of digitization, the prevalence of medical imaging and the steady growth in the number of acquired images (McDonald et al. 2015), pathology and radiology have become the leading areas of AI development in healthcare since 2014 (Meskó and Görög 2020).
The majority of AI models trained to analyse medical images are based on neural networks for traditional images that are then adapted to classify medical images by means of transfer learning (Guan and Loew 2018; Raghu et al. 2019; Morid et al. 2021). This approach was chosen in the first place because of the very limited amount of available labelled medical imaging datasets (Willemink et al. 2020) compared to traditional image datasets (Deng et al. 2009). However, recent studies have shown that much smaller neural networks trained solely on medical imaging datasets might be accurate enough for certain medical diagnostic tasks (Alzubaidi et al. 2021). A smaller neural network with several-fold fewer parameters could be beneficial for practical deployment, since its decision will be easier to interpret than the decision of a network with tens or even hundreds of millions of parameters. This is especially relevant when an AI's decision might deviate from the one commonly accepted by physicians: in order to be approved, the AI must be able to explain its decision in a human-accessible form. In addition, a major potential benefit of AI based on deep learning compared to rule-based CAD systems is the ability of continuous learning and self-adaptation to new data. However, to assure the safety and robustness of adaptive AI models in healthcare, their performance must be well interpreted and transparent.

Despite escalating research in the area of AI in radiology over the last decade, a recent review (Nagendran et al. 2020) revealed that only a very limited number of studies showed clinically acceptable results. Similarly, few of the many AI solutions in radiology brought to the market by larger companies and multiple start-ups are actually deployed in routine clinical practice these days (Tadavarthi et al. 2020). Even the pandemic, which became an additional trigger for the vast development of AI solutions for COVID-19 diagnosis worldwide, did not lead to a prominent success of AI in radiology (Roberts et al. 2021).

There are several possible reasons for this obviously postponed triumph of AI in medical imaging in comparison with traditional imaging and video. First, there is a very limited amount of properly annotated medical imaging datasets (Li et al. 2021). Since AI performance depends strongly on the data the AI was trained on, an unambiguous correspondence between the intended clinical use case and the labels in the dataset is necessary.
Second, the use cases and the appropriate tests to assess the AI's ability to perform its task have to be clearly defined and conducted with the involvement of all stakeholders (Wiens et al. 2019). Third, the generalizability of AI models in radiology and the potential influence of confounding factors have yet to be thoroughly investigated (Futoma et al. 2020). To unify the reporting of AI models, several expert societies are currently developing guidelines that should be used during development, testing and selection in order to ensure clinically acceptable quality, such as TRIPOD-ML, CONSORT-AI (Liu et al. 2020), SPIRIT-AI (Cruz Rivera et al. 2020), CLAIM (Mongan et al. 2020) and ÉCLAIR (Omoumi et al. 2021).

Regardless of the current limitations of AI solutions in radiology, their further development is in high demand because of a strong need for image interpretation. With today's widespread digitalization and the increased availability of medical imaging, the number of studies being performed is increasing and the load on radiologists is growing accordingly. To maintain the quality of interpretation of medical images, intelligent technologies for analysing digital medical images are being actively developed and are already beginning to be implemented; they can potentially reduce the workload for radiologists and the time needed to prepare a report by automating some steps in the process of interpreting medical images. Such automatic image analysis systems can not only support and structure the traditional interpretation process, in which the radiologist describes significant findings and transmits an opinion to the attending physician for diagnosis; in the future, using the accompanying clinical, laboratory and demographic data, intelligent systems will be able to diagnose and predict disease development.

The development of the latter type of systems belongs to the areas of radiomics and radiogenomics. The part "radio" in the names of these research directions implies the identification of the most significant quantitative characteristics on radiological images (inaccessible to assessment "by eye" by a radiologist), which, together with omics data (i.e. the accompanying clinical and genetic picture of patients), can be used to make a differential diagnosis. Today one of the promising areas of radiomics is the differentiation of brain tumours using MRI data, which in the future may replace the invasive and, in some cases, risky brain biopsy procedure to clarify the diagnosis. For this reason, radiomics and radiogenomics systems are sometimes also referred to as "virtual biopsy". However, the development of reliable AI systems in radiomics and radiogenomics requires huge amounts of reliable data and the formation of a scientific evidence base for their application in scientific and clinical trials.

Potential applications of AI and computer vision in radiology cover all stages of the medical imaging process. AI is able to facilitate proper image planning and raw data reconstruction in order to increase the diagnostic value of the resulting images, while reducing imaging time and preserving patient safety and comfort. Pre-analysis of the acquired images by AI can help to triage studies and optimize the workload of radiologists to ensure proper and timely patient care. At the image interpretation stage, AI can assist radiologists in the detection, characterization, reporting and automated monitoring of pathological processes.
Detection implies marking specific areas in the images that are radiological signs of pathological processes, in order to speed up image interpretation or draw the radiologist's attention to subtle changes that could otherwise be missed by the human eye. Characterization involves the qualitative and quantitative description of such findings, which is necessary in order to differentiate the lesion and define further patient management. Reporting is a textual description of all the relevant findings revealed by the radiologist, which is then sent to a clinician for reference in diagnosis and therapy planning. An identified pathology or preclinical biomarker can be followed up and/or revealed in time by longitudinal monitoring that evaluates relevant changes occurring naturally or caused by treatment.

Currently, the majority of AI solutions in radiology focus on lung cancer detection and diagnosis, breast cancer, prostate cancer and neurological diseases, but many other applications, such as musculoskeletal (MSK), chest and abdominal imaging, are being explored by AI developers as well (van Leeuwen et al. 2021). Beyond the established radiological use cases for routine diagnostic procedures, AI offers a unique opportunity to search automatically, on a population scale, for additional preclinical biomarkers that are either outside the radiologists' interpretation scope and/or too subtle for the human eye (Graffy et al. 2019; Zaworski et al. 2021; Pickhardt et al. 2021). Such an application is aimed at opportunistic screening for future adverse health problems or accelerated aging.


2.3 Proof of Concept

We investigated the applicability of AI-powered opportunistic screening during the Moscow Experiment on the use of computer vision in medical imaging (Andreychenko et al. 2022). "The Experiment on the use of innovative computer vision technologies for the medical image analysis and subsequent application in the Moscow healthcare system" (www.mosmed.ai) was launched by the Moscow Government in 2019. This is the largest prospective multicentre clinical trial of AI technologies in a real-world scenario. The Experiment involved AI developers and their legal distributors. The organizational, scientific and methodological framework of the Experiment was set by the Moscow Healthcare Department, which authorized the Moscow Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies to hold the Experiment. The technical component of the Experiment was implemented by the Moscow Department of Information Technologies. To support and motivate legal entities to present their AI services, the Moscow Government provided grants for research projects.

The Experiment was conducted by means of the Unified Radiological Information Service of the automated United Medical Information and Analytical System of Moscow (hereinafter referred to as URIS UMIAS). URIS UMIAS is a digital platform of the Moscow radiology service that brings together the workstations of X-ray technicians and radiologists, interconnects diagnostic equipment and accumulates information about each device, study, report and user (Polishchuk et al. 2018). More than 1300 diagnostic units and machines are connected to URIS UMIAS, which is almost 100% of the Moscow radiology equipment stock. Since 2021, both medical staff and patients have been able to enjoy the benefits offered by URIS UMIAS: they can download their own images and reports via a special portal on the Moscow Government's official website.
By the end of 2021, URIS UMIAS had accumulated more than 11 million radiology studies, including computed tomography (CT), magnetic resonance imaging, radiography, mammography and hybrid studies. The Experiment provides 13 use cases for AI in radiology departments. Fifty AI services from 22 companies took part in the Experiment over the last two years (2021–2022).

The Experiment was approved by the Independent Ethics Committee of the Moscow Society of Radiology. The principles behind the Experiment correspond to those of the joint European and North American multisociety statement on the ethics of artificial intelligence in radiology (Geis et al. 2019). Patients signed a special informed consent form for voluntary participation, and the investigators paid special attention to patient awareness. The Experiment was registered in the Clinical Trials Database with the assigned ID number NCT04489992. General methodological and technical issues of the Experiment were published elsewhere (Andreychenko et al. 2022).


Here is a brief description of the Experiment's intervention approach. When a patient sought medical care at a public healthcare facility, a standard examination was carried out, during which an attending physician determined whether a radiological exam was needed and formed a referral for a study. The assigned study was conducted following the established procedure, and the results were automatically uploaded to URIS UMIAS. For the imaging modalities covered by the Experiment, automatic routing to an AI service was configured. From a diagnostic device, the study results of a particular modality were routed to only one AI service in a given period of time, where they were automatically processed. The results of this processing were sent back to URIS UMIAS in the DICOM-SR format, where they landed at the radiologist's workstation as an additional series. All images in this series were labelled "For research purposes only". AI services offered a probability of the presence of a particular pathology, not an unambiguous answer. Accordingly, a radiologist always had the final say in interpreting and reporting the radiological study results. When writing a study report in URIS UMIAS, a radiologist had to decide whether to open the additional series and whether the AI-service results should be taken into consideration in full or in part. A decision to check the additional series was made voluntarily and depended on the individual radiologist's opinion.

In 2021 the list of conditions and risk factors that can be automatically detected in chest CT images was expanded with the following:

1. Osteoporosis. Vertebral compression fractures of over 25% (grades 2–3 according to the semi-quantitative Genant classification); decrease in bone mineral density in Th11–L3 (optimally L1–L2) according to the ACR 2018 criteria and the ISCD 2019 official position.
2. Coronary calcium. Calcifications on coronary angiogram; total Agatston score > 1.
3. Paracardial fat. Paracardial fat volume (pathognomonic) ≥ 200 ml.
4. Aortic lesion. Change in aortic diameter: widened ascending thoracic aorta > 40 mm along the short axis (aortic dilatation; if > 50 mm, an aneurysm must be considered); widened descending thoracic aorta > 30 mm along the short axis (aortic dilatation; if > 40 mm, an aneurysm must be considered); local narrowing of the thoracic aorta > 20% against previous measurements along the short axis (coarctation).

Borderline values were considered pathological. This method opened up a new way to automatically assess risks and diseases that affect aging and, especially, life expectancy. These include ischemic heart disease, myocardial infarction, heart failure, aortic aneurysm, falls and other complications of osteoporosis, etc. In 2021, five AI services for automated opportunistic screening from two companies took part in the Experiment: Genant-IRA, Agatston-IRA, Aorta-IRA, CardiacFat-IRA (Russia) and Zebra HealthVCF (Israel-USA).
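Once the corresponding measurements have been extracted from a chest CT by the computer-vision models, the criteria above reduce to simple threshold rules. A minimal sketch of such rules (the function and field names are ours, purely illustrative, and not part of the Experiment's software; the coarctation rule is omitted because it needs prior measurements):

```python
# Hypothetical encoding of the 2021 opportunistic-screening criteria as
# threshold rules over measurements assumed to be extracted upstream.

def screen_chest_ct(m: dict) -> list:
    """Return risk-factor flags for one chest CT study, given measurements."""
    flags = []
    # 1. Osteoporosis: Genant grade 2-3 vertebral compression fracture
    #    (height loss over 25%)
    if m.get("genant_grade", 0) >= 2:
        flags.append("osteoporosis")
    # 2. Coronary calcium: total Agatston score > 1
    if m.get("agatston_score", 0) > 1:
        flags.append("coronary_calcium")
    # 3. Paracardial fat: volume >= 200 ml
    if m.get("paracardial_fat_ml", 0) >= 200:
        flags.append("paracardial_fat")
    # 4. Aortic lesion: widened ascending (> 40 mm) or descending (> 30 mm)
    #    thoracic aorta along the short axis
    if (m.get("ascending_aorta_mm", 0) > 40
            or m.get("descending_aorta_mm", 0) > 30):
        flags.append("aortic_lesion")
    return flags

print(screen_chest_ct({"agatston_score": 57, "paracardial_fat_ml": 215}))
```

In the production setting described above, such flags would be serialized as a DICOM-SR series rather than returned as a Python list; borderline values, per the Experiment's convention, count as pathological.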


All the AI services began working under the study protocol in June 2021, except for Aorta-IRA, which joined the others in September. During the project, the AI service developers received 237,774 chest CT images for evaluation; 214,043 (90.0%) of them were successfully processed. The losses can be explained by various technical aspects beyond the scope of this paper. Each AI service received CT images from 28 to 96 CT scanners installed in 16–53 primary care outpatient facilities and municipal clinical hospitals. The scatter in the number of scanners and hospitals is attributed to the patient flows, equipment load and the epidemiological situation.

The most interesting aspect here is the diagnostic accuracy of the automated detection of the risk factors that matter for preventive care and longevity. During the Experiment we performed a two-stage evaluation of the diagnostic accuracy through a multicentre diagnostic study of the AI services:

• technical test: AI service integrated into URIS UMIAS (n = 5);
• clinical test (stage 1, retrospective study): reference labelled datasets;
• clinical test (stage 2, prospective study): reports from the radiologists who conducted the studies at the medical facilities.

Reference datasets were prepared and labelled according to the original methodology (Kulberg et al. 2020; Morozov et al. 2020). The sample size was 451 cases, grouped into 5 sets (56–100 cases), one for each purpose. The radiologists' reports (as a reference test) were analysed automatically using the original software "MedLabel—automated analysis of medical reports"1 based on natural language processing technologies. Diagnostic accuracy metrics (sensitivity, specificity and area under the ROC curve (AUC)) were then calculated and compared (Morozov et al. 2019). When computing the diagnostic metrics, the activation threshold was set by maximizing the Youden index.
For the key results of the retrospective evaluation of the AI services' diagnostic accuracy, see Table 2.1. Under the conditions of the Moscow Experiment, the developers were asked to present information about the diagnostic accuracy of their solutions based on their internal test outcomes. Most tests performed with the reference datasets showed a substantial improvement over these claimed indicators. The claimed baseline metrics were already high enough that the further improvement was not considered statistically significant; nevertheless, it attested to the reliability of the AI services in terms of adaptability and precision. In most cases, the AI service validation using the reference datasets yielded reasonably high metric values: the AUC, accuracy, sensitivity and specificity values ranged between 0.91–0.99, 0.91–0.99, 0.86–1.0 and 0.95–1.0, respectively.
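The evaluation procedure described above, picking the activation threshold by the maximum Youden index J = sensitivity + specificity − 1, can be sketched in a few lines of NumPy. This is a generic illustration on toy scores, not the Experiment's actual evaluation code:

```python
import numpy as np

def diagnostic_metrics(y_true, scores):
    """Choose the activation threshold maximizing the Youden index,
    then report sensitivity/specificity at that threshold plus AUC."""
    best = None
    for t in np.unique(scores):          # candidate thresholds, ascending
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1)); fn = np.sum(~pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0)); fp = np.sum(pred & (y_true == 0))
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        j = sens + spec - 1              # Youden index
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    # AUC via the rank (Mann-Whitney) formulation: probability that a
    # random positive scores higher than a random negative
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])
    return {"threshold": best[1], "sensitivity": best[2],
            "specificity": best[3], "auc": auc}

m = diagnostic_metrics(np.array([0, 0, 0, 1, 1, 1]),
                       np.array([0.1, 0.2, 0.6, 0.4, 0.8, 0.9]))
print(m)
```

The Youden criterion weights misses and false alarms equally; for opportunistic screening, where false positives burden radiologists, a spec-weighted criterion could be substituted in the same loop.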

1 Certificate of state registration of the computer program "MedLabel—automated analysis of medical reports" No. 2020664321, application date: 27 October 2020, registration date: 11 November 2020.

Table 2.1 Diagnostic accuracy metrics of the AI services to detect risk factors that affect preventive care and longevity (retrospective phase)

AI service        AUC                          Accuracy                     Sensitivity                  Specificity
                  Claimed  Actual              Claimed  Actual              Claimed  Actual              Claimed  Actual
Osteoporosis_1a   0.956    0.99 (0.98–1)       0.93     0.99 (0.94–1.0)     0.917    1.0 (0.93–1.0)      0.923    0.98 (0.89–1.0)
Osteoporosis_2a   0.95     0.91 (0.85–0.97)    0.89     0.91 (0.83–0.96)    0.90     0.86 (0.72–0.95)    0.87     0.95 (0.85–0.99)
Agatston          0.872    0.98 (0.95–1.0)     0.854    0.96 (0.9–0.99)     0.844    0.96 (0.87–1.0)     0.86     0.96 (0.86–0.99)
CardiacFat        0.94     0.99 (0.99–1)       0.88     0.96 (0.9–0.99)     0.89     0.94 (0.83–0.99)    0.88     0.98 (0.88–1.0)
Aorta             0.97     0.99 (0.97–1.0)     0.89     0.96 (0.91–1.0)     0.88     0.93 (0.83–1.0)     0.9      1.0 (1.0–1.0)

a Note: Since the comparison of the AI services is not within the scope of this chapter, all the results of AI were de-identified


The worst accuracy was demonstrated by AI service #2 when detecting osteoporosis signs: none of the indicators rose above the lower threshold, including a sensitivity of only 0.86. We believe the situation can be explained by a sub-optimal calibration of this AI service for the given information system and population (originally, the AI service in question was designed for a completely different kind of population). This became an interesting finding, as it pinpointed the need for additional AI training to secure more reliable outcomes of opportunistic screening in the given population in a real-world environment.

AI service #1 demonstrated higher sensitivity and specificity for osteoporosis than for coronary calcium, paracardial fat and change in aortic diameter; however, no statistically significant difference was registered. The AI outcomes showed lower sensitivity for paracardial fat (0.94) and aortic diameter (0.93), yet the corresponding specificity values were higher (0.98 and 1.0, respectively). This can be explained by the features of the morphometric task and the clinical rationale, i.e. a notable reduction in false-positive results, which create nothing but an unnecessary burden for radiologists.

The proportion of false-positive results for osteoporosis, coronary calcium and paracardial fat was 0.02–0.05%, 0.04% and 0.02%, respectively. No false-positive results were detected for the aortic diameter measurement. The level of false-negative results for the studies of the cardiovascular system was just as negligible, ranging from 0.04 to 0.07%. Yet for osteoporosis the situation turned out completely different: AI service #2 produced 0.14% false-negative results, while AI service #1 ended up without any.
After a joint assessment of the metrics, we calculated the average AI diagnostic accuracy for the detection of the risk factors predisposing to the conditions and diseases that affect aging and, especially, longevity. The average AUC, accuracy, sensitivity and specificity values were 0.972 ± 0.034, 0.956 ± 0.029, 0.938 ± 0.051 and 0.974 ± 0.019, respectively. The corresponding mode and median values were 0.99, 0.96, 0.94 and 0.98, respectively.

The accuracy data acquired during the first stage were used to verify whether an AI service was qualified to work with real-world data. After the transition to routine data flows, the AI services experienced a decline in accuracy (see Table 2.2).

Table 2.2 Comparison of the diagnostic accuracy metrics of the AI services to detect risk factors that affect preventive care and longevity

AI service        AUC (95% CI)
                  Retrospective stage    Prospective stage
Osteoporosis_1    0.99 (0.98–1)          0.943 (0.879–1.000)
Osteoporosis_2    0.91 (0.85–0.97)       0.773 (0.706–0.839)
Agatston          0.98 (0.95–1.0)        0.819 (0.729–0.908)
CardiacFat        0.99 (0.99–1)          0.683 (0.579–0.785)
Aorta             0.99 (0.97–1.0)        0.769 (0.633–0.904)
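The pooled averages quoted above, and the retrospective-to-prospective decline per service, can be reproduced directly from the point estimates in Tables 2.1 and 2.2 (a quick NumPy check; the quoted ±0.034 for AUC appears to truncate a sample standard deviation of 0.035):

```python
import numpy as np

# Actual (retrospective) point estimates per AI service, in the order
# Osteoporosis_1, Osteoporosis_2, Agatston, CardiacFat, Aorta
auc  = np.array([0.99, 0.91, 0.98, 0.99, 0.99])
acc  = np.array([0.99, 0.91, 0.96, 0.96, 0.96])
sens = np.array([1.00, 0.86, 0.96, 0.94, 0.93])
spec = np.array([0.98, 0.95, 0.96, 0.98, 1.00])

for name, x in [("AUC", auc), ("accuracy", acc),
                ("sensitivity", sens), ("specificity", spec)]:
    # mean +/- sample standard deviation, as quoted in the text
    print(f"{name}: {x.mean():.3f} +/- {x.std(ddof=1):.3f}")

# Relative AUC decline between the retrospective and prospective stages
# (point estimates from Table 2.2)
prosp = np.array([0.943, 0.773, 0.819, 0.683, 0.769])
decline = (auc - prosp) / auc
print(np.round(decline * 100, 1))  # per-service decline, %
```

The decline stays under 20% for the osteoporosis and coronary calcium services of AI service #1, consistent with the stability discussion that follows.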


AI service #1 demonstrated the most stable and precise outcomes for the detection of osteoporosis signs (prospective AUC 0.943, 95% CI 0.879–1.000) and coronary calcium, a predictor of ischemic heart disease (prospective AUC 0.819, 95% CI 0.729–0.908). The difference between these and the retrospective values did not exceed 20%, and the AUC value was considered sufficient. For the remaining cases, the decline in accuracy was more prominent. However, several limitations must be taken into account here. First, only the single opinion of the radiologist directly reporting the study results, not verified by other means (expert consensus, peer review, etc.), was used as the "ground truth", and there were no mechanisms for analysing the reports' content to exclude those containing human biases and defects. Second, the inclusion of some risk factors in the report (e.g. paracardial fat) has still not been made mandatory and so far remains outside the scope of the guidelines. For this reason, some reports may miss the required data; in such cases, correct AI results would be interpreted as false-positive, which negatively affects the overall accuracy.

Nevertheless, the slight decline in accuracy seen during the prospective phase did not develop into something critical that would put the entire concept of automated opportunistic screening in question. The outcomes allowed us to develop a further project roadmap, which includes additional training for the algorithms and dedicated methodological and educational activities with the clinicians.

Radiologists believe that the processing speed, i.e. the time spent on automated evaluation, is one of the most important factors reflecting real AI service performance.
Amidst the Moscow Experiment that took place in the URIS UMIAS environment, the AI services demonstrated the following time ranges when evaluating one chest CT scan: osteoporosis—from 21 to 133 s; coronary calcium, paracardial fat and aortic diameter—from 31 to 251 s. We believe it is safe to say that the performance of the algorithms integrated into the URIS UMIAS environment was sufficiently high. The outcomes completely matched our expectations, meaning that the performance capabilities secured smooth operation and strong commitment from the radiology community. The acquired data proves that it is possible to use AI technologies for automated opportunistic screening. Detection of findings and changes with higher accuracy allows automating the process. This means that aside from findings’ detection and reading, the AI services are capable of providing standard recommendations based on the presence or severity of the risk factors. Given the achieved diagnostic accuracy, the AI operation (which includes automated recommendations) would not place an additional burden on the radiologists by producing false-positive results. On the other hand, the solution would add value for the imaging studies since the clinicians would be able to offer their patients more personalized strategies for disease prevention and lifestyle adjustments. Needless to say that real-world implementation would rely heavily on the quality and robustness of particular AI-based software. As mentioned above, when the integration into the Radiological Information Service follows the protocol, the operation speed of the AI services varies insignificantly. However, notable differences were observed in the diagnostic accuracy dominion. When accessing new data, the precision of AI algorithms may also be sub-par. Therefore, the developers should not only

26

A. E. Andreychenko and S. Morozov

seek to incorporate the best standards and practices into their solutions but also fine-tune the calibration using data from the population where the algorithm is supposed to work. Thus, the AI-based opportunistic screening concept is generally viable. It would contribute greatly to the early diagnosis and timely management of pathologies that are related to aging and critically affect life expectancy. The Experiment included only chest CT scans and covered biomarkers of cardiovascular disease (coronary calcium, paracardial fat and aortic dilation) and imaging predictors of bone fractures (osteoporosis). However, medical imaging provides access to a significantly larger range of possible imaging biomarkers, especially by means of MRI, as this modality can be safely used multiple times on the same person. These imaging biomarkers, however, need to be extracted automatically (Brui et al. 2020; Li et al. 2022) to be used effectively in longevity medicine research, since radiologists do not report them routinely and quantitatively. Information extracted by direct image analysis can be supplemented by radiomics models (Mirón Mombiela and Borrás 2022) that link textural information from images with clinical and demographic data. Applying radiomics models personalizes the extracted biomarkers and thus increases their value for longevity medicine.
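The confidence intervals quoted earlier (e.g. prospective AUC 0.943, 95% CI 0.879–1.000) can in principle be obtained by percentile bootstrap resampling. The chapter does not state which interval method was actually used, so the following is only an illustrative sketch on synthetic scores; every name and value here is an assumption, not the study's code.

```python
# Illustrative sketch: point AUC plus a percentile-bootstrap 95% CI,
# one common way to produce intervals like those reported in the text.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point AUC and a (1 - alpha) percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return point, lo, hi

# Synthetic example: scores that separate the two classes reasonably well.
rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(200), np.ones(200)])
s = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 200)])
auc, lo, hi = auc_with_ci(y, s)
```

With real data, `y` would hold the ground-truth labels derived from the radiology reports and `s` the per-study scores returned by the AI service.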

2.4 Future of AI in Longevity Medicine

Aging is recognized as one of the main causes of disease prevalence in developed countries, including cardiovascular diseases, cancer and neurological disorders (Niccoli and Partridge 2012), i.e. the diseases that cause the majority of deaths and disability among adults. There are established biomarkers of aging based on laboratory tests which, however, do not discriminate between body organs and systems (Bai 2018). Here medical imaging could be a valuable supplement, providing organ-specific (Good et al. 2001; Albrecht et al. 2014) biological age assessment (Jylhävä et al. 2017). Until the pandemic, most research on aging imaging biomarkers focused on the brain (Good et al. 2001; Franke et al. 2010; Armstrong et al. 2019), mainly because of the relatively wide availability and safety of brain MRI scans. The COVID-19 pandemic has caused massive population screening by chest CT, which brings novel opportunities for aging imaging biomarker research linked to sarcopenia (Boutin et al. 2015), osteoporosis (Kuo and Chen 2017; Cheng et al. 2021) and cardiovascular diseases (Thanassoulis et al. 2012). Undoubtedly, AI and computer vision are going to have a prominent impact on the future development and application of medical imaging. The volume of medical imaging data collected routinely allows AI to develop in the direction of establishing new knowledge and the large-scale assessment of population health and the factors affecting it. Automated, machine-based medical image interpretation will facilitate population-based studies investigating the causes of human aging and therapies to halt aging. Nevertheless, a wide and successful deployment of AI in medical

2 Automated Reporting of Medical Diagnostic Imaging for Early Disease …

27

imaging requires, in the first place, a strong collaboration between AI developers and healthcare providers, and properly curated and labelled datasets that address both clinical application and machine learning needs. The datasets must not be limited to imaging data but should include patient demographic and clinical data. They should be sufficiently diverse to diminish possible confounding factors in the development of complex, weakly interpretable AI models. Such datasets should be collected with the emerging approaches of the digital twin (Kamel Boulos and Zhang 2021) and the metaverse (Koo 2021) in mind.

Compliance with Ethical Standards

Conflict of Interest The authors declare that they have no conflict of interest.

References

Albrecht E, Sillanpää E, Karrasch S et al (2014) Telomere length in circulating leukocytes is associated with lung function and disease. Eur Respir J 43:983–992. https://doi.org/10.1183/09031936.00046213
Alzubaidi L, Duan Y, Al-Dujaili A et al (2021) Deepening into the suitability of using pre-trained models of ImageNet against a lightweight convolutional neural network in medical imaging: an experimental study. PeerJ Comput Sci 7:e715. https://doi.org/10.7717/peerj-cs.715
Andreychenko AE, Logunova TA, Gombolevskiy VA et al (2022) A methodology for selection and quality control of the radiological computer vision deployment at the megalopolis scale. medRxiv 2022.02.12.22270663. https://doi.org/10.1101/2022.02.12.22270663
Armstrong NM, An Y, Beason-Held L et al (2019) Sex differences in brain aging and predictors of neurodegeneration in cognitively healthy older adults. Neurobiol Aging 81:146–156. https://doi.org/10.1016/j.neurobiolaging.2019.05.020
Bai X (2018) Biomarkers of aging. Adv Exp Med Biol 1086:217–234. https://doi.org/10.1007/978-981-13-1117-8_14
Boutin RD, Yao L, Canter RJ, Lenchik L (2015) Sarcopenia: current concepts and imaging implications. Am J Roentgenol 205:W255–W266. https://doi.org/10.2214/AJR.15.14635
Brui E, Efimtcev AY, Fokin VA et al (2020) Deep learning-based fully automatic segmentation of wrist cartilage in MR images. NMR Biomed 33:e4320. https://doi.org/10.1002/nbm.4320
Cheng X, Zhao K, Zha X et al (2021) Opportunistic screening using low-dose CT and the prevalence of osteoporosis in China: a nationwide, multicenter study. J Bone Miner Res 36:427–435. https://doi.org/10.1002/jbmr.4187
Cruz Rivera S, Liu X, Chan AW et al (2020) Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med 26:1351–1363. https://doi.org/10.1038/s41591-020-1037-7
Deng J, Dong W, Socher R et al (2009) ImageNet: a large-scale hierarchical image database. IEEE Comput Vis Pattern Recognit
Franke K, Ziegler G, Klöppel S, Gaser C (2010) Estimating the age of healthy subjects from T1-weighted MRI scans using kernel methods: exploring the influence of various parameters. Neuroimage 50:883–892. https://doi.org/10.1016/j.neuroimage.2010.01.005
Futoma J, Simons M, Panch T et al (2020) The myth of generalisability in clinical research and machine learning in health care. Lancet Digit Heal 2:e489–e492. https://doi.org/10.1016/S2589-7500(20)30186-2


Geis JR, Brady AP, Wu CC et al (2019) Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Can Assoc Radiol J 70:329–334. https://doi.org/10.1016/j.carj.2019.08.010
Good CD, Johnsrude IS, Ashburner J et al (2001) A voxel-based morphometric study of ageing in 465 normal adult human brains. Neuroimage 14:21–36. https://doi.org/10.1006/nimg.2001.0786
Gore JC (2020) Artificial intelligence in medical imaging. Magn Reson Imaging 68:A1–A4. https://doi.org/10.1016/J.MRI.2019.12.006
Graffy PM, Liu J, O'Connor S et al (2019) Automated segmentation and quantification of aortic calcification at abdominal CT: application of a deep learning-based algorithm to a longitudinal screening cohort. Abdom Radiol (NY) 44:2921–2928. https://doi.org/10.1007/s00261-019-02014-2
Guan S, Loew M (2018) Breast cancer detection using transfer learning in convolutional neural networks. In: Proceedings of applied imagery pattern recognition workshop, 1–8 Oct 2017. https://doi.org/10.1109/AIPR.2017.8457948
Gusev A, Morozov S, Lebedev G et al (2022) Development of artificial intelligence in healthcare in Russia. In: Lim C-P, Chen Y-W, Vaidya A et al (eds) Handbook of artificial intelligence in healthcare, vol 2: practicalities and prospects. Springer International Publishing, Cham, pp 259–279
Hwang EJ, Goo JM, Yoon SH et al (2021) Use of artificial intelligence-based software as medical devices for chest radiography: a position paper from the Korean Society of Thoracic Radiology. Korean J Radiol 22. https://doi.org/10.3348/kjr.2021.0544
Jylhävä J, Pedersen NL, Hägg S (2017) Biological age predictors. EBioMedicine 21:29–36. https://doi.org/10.1016/j.ebiom.2017.03.046
Kahn E (1994) Artificial intelligence in radiology: decision support. Radiographics 849–861
Kamel Boulos MN, Zhang P (2021) Digital twins: from personalised medicine to precision public health. J Pers Med 11. https://doi.org/10.3390/jpm11080745
Khan ZF, Alotaibi SR (2020) Applications of artificial intelligence and big data analytics in mhealth: a healthcare system perspective. J Healthc Eng 2020. https://doi.org/10.1155/2020/8894694
Koo H (2021) Training in lung cancer surgery through the metaverse, including extended reality, in the smart operating room of Seoul National University Bundang Hospital, Korea. J Educ Eval Health Prof 18:33
Kulberg NS, Gusev MA, Reshetnikov RV et al (2020) Methodology and tools for creating training samples for artificial intelligence systems for recognizing lung cancer on CT images. Heal Care Russ Fed 64:343–350 (In Russ). https://doi.org/10.46563/0044-197X-2020-64-6-343-350
Kuo T-R, Chen C-H (2017) Bone biomarker for the clinical assessment of osteoporosis: recent developments and future perspectives. Biomark Res 5:18. https://doi.org/10.1186/s40364-017-0097-4
Lapi SE, McConathy JE (2021) Global access to medical imaging and nuclear medicine. Lancet Oncol 22:425–426. https://doi.org/10.1016/S1470-2045(21)00070-X
Lehman CD, Wellman RD, Buist DSM et al (2015) Diagnostic accuracy of digital screening mammography with and without computer-aided detection. JAMA Intern Med 175:1828–1837. https://doi.org/10.1001/jamainternmed.2015.5231
Li J, Zhu G, Hua C et al (2021) A systematic collection of medical image datasets for deep learning
Li Z, Chen K, Yang J et al (2022) Deep learning-based CT radiomics for feature representation and analysis of aging characteristics of Asian bony orbit. J Craniofac Surg 33:312–318. https://doi.org/10.1097/SCS.0000000000008198
Liu X, Cruz Rivera S, Moher D et al (2020) Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med 26:1364–1374. https://doi.org/10.1038/s41591-020-1034-x
Lodwick GS, Keats TE, Dorst JP (1963) The coding of roentgen images for computer analysis as applied to lung cancer. Radiology 81:185–200. https://doi.org/10.1148/81.2.185


McDonald RJ, Schwartz KM, Eckel LJ et al (2015) The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Acad Radiol 22:1191–1198. https://doi.org/10.1016/j.acra.2015.05.007
Meskó B, Görög M (2020) A short guide for medical professionals in the era of artificial intelligence. npj Digit Med 3. https://doi.org/10.1038/s41746-020-00333-z
Mirón Mombiela R, Borrás C (2022) The usefulness of radiomics methodology for developing descriptive and prognostic image-based phenotyping in the aging population: results from a small feasibility study. Front Aging. https://doi.org/10.3389/fragi.2022.853671
Mongan J, Moy L, Kahn CE (2020) Checklist for artificial intelligence and medical imaging (CLAIM). Radiol Artif Intell. https://doi.org/10.1148/ryai.2020200029
Morid MA, Borjali A, Del Fiol G (2021) A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 128:104115. https://doi.org/10.1016/j.compbiomed.2020.104115
Morozov SP, Andreychenko AE, Blokhin IA et al (2020) MosMedData: data set of 1110 chest CT scans performed during the COVID-19 epidemic. Digit Diagnostics 1:49–59. https://doi.org/10.17816/DD46826
Morozov SP, Vladzymyrskyy AV, Klyashtornyy VG et al (2019) Clinical acceptance of software based on artificial intelligence technologies (radiology). https://arxiv.org/abs/1908.00381
Nagendran M, Chen Y, Lovejoy CA et al (2020) Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies in medical imaging. BMJ 368:1–12. https://doi.org/10.1136/bmj.m689
Niccoli T, Partridge L (2012) Ageing as a risk factor for disease. Curr Biol 22:R741–R752. https://doi.org/10.1016/j.cub.2012.07.024
Omoumi P, Ducarouge A, Tournier A et al (2021) To buy or not to buy—evaluating commercial AI solutions in radiology (the ECLAIR guidelines)
Pickhardt PJ, Summers RM, Garrett JW (2021) Automated CT-based body composition analysis: a golden opportunity. Korean J Radiol 22:1934–1937
Polishchuk NS, Vetsheva NN, Kosarin SP et al (2018) Unified radiological information service as a key element of organizational and methodical work of research and practical center of medical radiology. Radiol Pract 1:6–17 (In Russ)
Raghu M, Zhang C, Kleinberg J, Bengio S (2019) Transfusion: understanding transfer learning for medical imaging. Adv Neural Inf Process Syst 32
Ranschaert ER, Morozov S, Algra PR (eds) (2019) Artificial intelligence in medical imaging. Springer, Cham
Roberts M, Driggs D, Thorpe M et al (2021) Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat Mach Intell 3:199–217. https://doi.org/10.1038/s42256-021-00307-0
Scatliff JH, Morris PJ (2014) From Röntgen to magnetic resonance imaging: the history of medical imaging. N C Med J 75:111–113. https://doi.org/10.18043/ncm.75.2.111
Siontis KC, Noseworthy PA, Attia ZI, Friedman PA (2021) Artificial intelligence-enhanced electrocardiography in cardiovascular disease management. Nat Rev Cardiol 18:465. https://doi.org/10.1038/s41569-020-00503-2
Tadavarthi Y, Vey B, Krupinski E et al (2020) The state of radiology AI: considerations for purchase decisions and current market offerings. Radiol Artif Intell 2:e200004. https://doi.org/10.1148/ryai.2020200004
Thanassoulis G, Peloso GM, Pencina MJ et al (2012) A genetic risk score is associated with incident cardiovascular disease and coronary artery calcium: the Framingham heart study. Circ Cardiovasc Genet 5:113–121. https://doi.org/10.1161/CIRCGENETICS.111.961342
van Leeuwen KG, Schalekamp S, Rutten MJCM et al (2021) Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. Eur Radiol 31:3797–3804. https://doi.org/10.1007/s00330-021-07892-z
Wiens J, Saria S, Sendak M et al (2019) Do no harm: a roadmap for responsible machine learning for health care. Nat Med 15–18. https://doi.org/10.1038/s41591-019-0548-6


Willemink MJ, Koszek WA, Hardell C et al (2020) Preparing medical imaging data for machine learning. https://doi.org/10.1148/radiol.2020192224
Winsberg F, Elkin M, Macy J et al (1967) Detection of radiographic abnormalities in mammograms by means of optical scanning and computer analysis. Radiology 89:211–215. https://doi.org/10.1148/89.2.211
Zaworski C, Cheah J, Koff MF et al (2021) MRI-based texture analysis of trabecular bone for opportunistic screening of skeletal fragility. J Clin Endocrinol Metab 106:2233–2241. https://doi.org/10.1210/clinem/dgab342

Chapter 3

Risk Forecasting Tools Based on the Collected Information for Two Types of Occupational Diseases

Marc Deminov, Petr Kuztetsov, Alexander Melerzanov, and Dmitrii Yankevich

Abstract The article presents the construction of algorithms for monitoring and predicting the risk of occupational diseases (sensorineural hearing loss and vibration disease caused by exposure to local and general vibration), using data from the clinical and instrumental examination of patients.

Keywords Labor health · Employee health risk estimation · Sensorineural hearing loss · Vibration disease · Effects of vibration on health · Health risk forecasting · Information and statistical methods of forecasting

M. Deminov (B) Skolkovo Innovation Center, Health Modeling Technologies (LLC), Moscow 143026, Russia e-mail: [email protected] M. Deminov · P. Kuztetsov National Association of Medical Informatics, Verkhnyaya Krasnoselskaya, 20C1, Moscow 107140, Russia P. Kuztetsov Project Office “Digital Transformation in Occupational Medicine” of the Research Institute of Occupational Medicine, 31 Budennogo Ave., Moscow 105275, Russia A. Melerzanov “Applied Genetics” Center Moscow Institute of Physics and Technology (National Research University), 9 Institutsky Ln, Dolgoprudny, Moscow 141700, Russia “Laboratory of Innovative Technologies and Artificial Intelligence in Public Health” Semashko National Research Institute of Russian Academy of Science, 12 Vorontsovo Pole, Moscow 105064, Russia D. Yankevich Federal Research and Clinical Center of Intensive Care Medicine and Rehabilitology, Solnechnogorsky District, Lytkino Village, 777, Moscow 141534, Russia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_3


32

M. Deminov et al.

The World Health Organization identifies workers' health problems caused by acoustic and vibration exposure as important, requiring study and specific quantitative methods of assessment and forecasting (WHO 2021). Measurements and research in this area are carried out in many countries (Sliwinska-Kowalska 2020; Roberts et al. 2018; Ntlhakana et al. 2020; Dobie 2008; Mahbub et al. 2020; Ekman 2021), with the industry standards for predicting health risks from acoustic and vibration impacts (ISO 1999:2013; ISO 2631-5:2018) serving as the main starting point. Risk assessment and prognosis of occupational diseases, as well as the formation of therapeutic and preventive measures, are carried out in order to protect health and preserve the ability to work, and to prevent and promptly detect occupational diseases among workers engaged in work with harmful and (or) dangerous production factors, as well as in cases provided for by the legislation of the Russian Federation (GOST R ISO 1999, 2017). Employees engaged in certain types of work undergo mandatory medical examinations. The assessment when applying for a job serves the following purposes:

• determining whether the health status of a person entering a job matches the work assigned to them;
• early detection and prevention of diseases (periodic medical examinations);
• dynamic health monitoring;
• timely detection of diseases, including initial forms of occupational diseases;
• early detection of the impact of harmful and (or) hazardous production factors on workers' health;
• formation of risk groups for the development of occupational diseases.

More than 15 million people throughout the Russian Federation fall under these requirements annually, which in turn creates a significant demand for highly qualified occupational pathologists distributed evenly across the territory of the Russian Federation, together with the costs of their education and maintenance.

3.1 Development of a Mathematical Model Based on Risk Calculation Methodology Using Artificial Intelligence Tools In order to optimize and objectify the process of passing medical examinations and decision-making, it is extremely necessary to have an automated system to support the adoption of medical decisions by a professional pathologist to assess the risk and forecast the onset of occupational disease, including the formation of therapeutic and preventive measures aimed at minimizing the likelihood of its occurrence. At the moment, there is no such system in the Russian Federation, while the medical community expresses interest in its creation and dissemination, including for the objectification of the assessment of quality control of medical examinations.

3 Risk Forecasting Tools Based on the Collected Information for Two …

33

Based on the above, the purpose of this project is to develop a decision support system for the occupational pathologist (hereinafter referred to as the SPP) based on a software product that includes an automated model for determining risk and predicting the future state of hearing (risk group) using Artificial Intelligence technologies (hereinafter referred to as AI). The model analyses medical indicators obtained from an employee questionnaire and instrumental diagnostics, including the level and type of acoustic exposure in the workplace, as well as other factors and parameters. Solving the particular problem of automated determination of the risk of sensorineural hearing loss, as well as the risk of vibration disease, will form the basis for developing similar tools for other nosological profiles. To solve the task, the project's scientific group defined the following sequence of actions:

Step 1. Formulation and preliminary steps of the problem solution.

1. Collect a structured sample of more than 500 employees, with medical indicators obtained from an employee questionnaire and instrumental diagnostics, and other data on each employee, in machine-readable form (for example, MS Excel tables);
2. Bring the data to a format suitable for further automated processing: remove typos and run format-and-logic checks on each data column to identify anomalies;
3. Classify employees into 5 risk groups (negligible risk; low; medium; high; very high) for the onset of the disease under study, in accordance with the current recommendations for occupational pathologists on identifying risk groups by key parameters. This classification is carried out with the help of an expert physician, reproducing the logic of their everyday practice; the risk group is recorded line by line against each employee.
4.
Thus, at the output we obtain an expert-labelled sample for subsequent processing with AI technologies. This data-preparation operation can be implemented in a software tool or in Excel.

Step 2. Building an automated data analysis model

5. To explore the data and build the model automatically, powerful and, at the same time, accessible software tools were used: scikit-learn, pandas, NumPy, seaborn, streamlit (Python programming language) and other open-source data libraries. These libraries were chosen because all the basic methods and functions needed for exhaustive research in data analysis are already implemented in them (Fig. 3.1). Python libraries have absorbed almost all modern analytical mathematical capabilities and provide researchers and developers with a wide range of convenient, ready-to-assemble functional software elements: a quick (so-called "seamless") transition from model development to its implementation in prototype and then in industrial code; powerful visualization tools (see, for example, the seaborn library); and debugging support (Fig. 3.2).

Fig. 3.1 Examples of visualization of results obtained using analytical methods implemented in the scikit-learn software library package. Almost all of the most effective modern analytical technologies and methods are available in this library package

6. Upload the specified data to the test development environment.
7. For further work on the data, built-in tools for factor analysis, clustering, regression and categorical analysis using decision-tree and random-forest methods were used to identify the most informative features determining the classification into risk groups. Experiments with several techniques were required. The following methods were used:

a. The principal component method, varying the number of leading linear combinations of principal components, i.e. the proportion of information retained for analysis
b. Comparison of regression models on the f2 metric
c. Selective enumeration of feature combinations to build models predicting a worker's group


Fig. 3.2 Visualization of the capabilities of libraries in Python

d. Classifier methods based on continuous variables (multilayer neural networks, convolutional networks, etc.)
e. Classifier methods based on categorical and continuous variables (decision trees, random forest, etc.)
f. Validation of the constructed model by the AUC of the ROC curve, using repeated random splits into training (70% of the data), validation (15%) and test (15%) samples
g. Based on the results of the stages described above, selection of the most accurate (i.e. least-error) numerical and analytical models for automated risk-group determination; they are described in detail in the mathematical report section.

8. Analytical prognostic models of health groups are constructed based on forecasts of the medical indicators that serve as input to the risk-group model; an assessment of the accuracy of the prognostic model is provided, together with its justification.
9. Expert evaluation of the obtained predictive models was carried out and the optimal ones were determined.

The applied methods belong to the field commonly referred to by the collective term "Artificial Intelligence"; this work used AI methods such as unsupervised learning ("learning without a teacher", i.e. cluster analysis), factor analysis, regression analysis, and other information-statistical and analytical methods that depend substantially on computational algorithms. This approach can be illustrated by the first definition of artificial intelligence, given by John McCarthy in 1956 at the Dartmouth College conference. According to McCarthy, "AI researchers are free to use methods that are not observed in humans, if necessary to solve specific problems."
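Validation step (f) above, repeated 70/15/15 random splits scored by the ROC-curve AUC, can be sketched with scikit-learn as follows. The data here are synthetic stand-ins of the stated sample size, not the real examination features, and the model settings are illustrative assumptions:

```python
# Sketch of method (f): repeated random splits into 70% training, 15%
# validation and 15% test, with a random-forest classifier scored by ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 892-row examination table.
X, y = make_classification(n_samples=892, n_features=20, random_state=0)

aucs = []
for seed in range(5):  # repeated random splits
    # First cut: 70% training, 30% held out.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.30, random_state=seed)
    # Second cut: the held-out 30% becomes 15% validation + 15% test.
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.50, random_state=seed)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(X_train, y_train)
    aucs.append(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))

mean_auc = float(np.mean(aucs))
```

In the project itself the validation set would drive model selection (hyperparameters, feature subsets), with the test AUC reported only for the chosen model.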


A leading researcher in the subject area, Gennady Osipov (President of the Russian Association of Artificial Intelligence, permanent member of the European Coordinating Committee on Artificial Intelligence (ECCAI), PhD, professor), gives the following description of AI: "artificial intelligence is an experimental science. The experimental nature of artificial intelligence lies in the fact that, by creating certain computer concepts and models, the researcher compares their behaviour with each other and with examples of the solution of the same tasks by a specialist, and modifies them on the basis of this comparison, trying to achieve the best match of results." This work was carried out in full compliance with the above-mentioned paradigm of artificial intelligence. The following sections describe the features of the objects with their detailed properties and characteristics, analyse the types of tasks to be solved and the available target variables, describe the models and the results of their work with the data obtained, and present conclusions. The work was carried out on data of the FGBNU Research Institute of Labour Health, presented as a table with 892 rows and 132 columns. Each row describes one subject of the study, i.e. a patient. The first three columns are the target features that we want to predict automatically.
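Separating the three target columns from the descriptive features of such a table is a simple slicing operation in pandas. The DataFrame below is a synthetic stand-in of the stated shape (892 rows, 132 columns, targets first), since the real examination data are not public, and the column names are illustrative:

```python
# Sketch: split a table of the stated shape into targets and features.
import numpy as np
import pandas as pd

# Synthetic stand-in for the real table: 892 rows x 132 columns.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((892, 132)),
                  columns=[f"col_{i}" for i in range(132)])

targets = df.iloc[:, :3]    # risks: hearing loss, local and general vibration
features = df.iloc[:, 3:]   # the remaining 129 medical descriptors
```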

3.2 Overview of the Methods Used

In recent years, computing systems and personal computers have come into widespread use, and a digitalization program is being actively implemented in the country. As a result, sufficient amounts of data are being accumulated to build effective, modern decision-support systems that help employees solve problems, sometimes excluding human participation entirely. This process inevitably reaches medicine, a field in which measurements have been carried out throughout its history in order to obtain the necessary result. In recent years, methods of modern data analysis have been used increasingly in medicine. This applies both to specialized studies analysing images, video streams and audio tracks, and to general problems of processing arrays of tabular data. Such algorithms can be used to analyse tabular data and obtain estimates from them; their capabilities are demonstrated in this paper. Any research begins with an analysis of the existing features that describe the objects; this is the first part of this work. Next, we consider the specifics of the task and choose ways to solve it. After that, we successively examine various machine learning algorithms and analyse their results. After discussing the results, we attempt a long-term prediction of changes in risks for a particular employee. The final part presents conclusions and observations.


3.3 Feature Description of Objects

Each object corresponds to 129 features that make up its medical description. Before building a model, the presented data must be checked for correctness. In this study, missing values were found for some of the features: for example, there are empty cells in the columns describing vibration sensitivity at different frequencies. In all cases, the missing values were replaced by the average value of the attribute in question. In addition, all categorical features are presented in the form of letter classes. In this form the features cannot be used directly as input to mathematical models, so the letter designations must be converted to numerical ones. In this paper, category "a" was replaced by the number 0 and category "b" by 1; if an attribute has other classes, they were replaced by numbers accordingly. The features divide naturally into groups. Examples of such a division and an analysis of the available features are given below.
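The two preprocessing rules just described, mean imputation for gaps and letter-to-number encoding for categories, can be sketched in pandas as follows. The column names and values are illustrative, not taken from the real dataset:

```python
# Sketch of the preprocessing rules described in this section:
# 1) fill missing numeric values with the per-column mean;
# 2) map letter categories ("a", "b", ...) to consecutive integers.
import pandas as pd

df = pd.DataFrame({
    "vibration_sensitivity_125hz_right": [10.0, None, 14.0, 12.0],
    "smoking": ["a", "b", "a", "b"],
})

# Rule 1: mean imputation for numeric features.
num_cols = df.select_dtypes("number").columns
df[num_cols] = df[num_cols].fillna(df[num_cols].mean())

# Rule 2: encode letter classes as integers (a -> 0, b -> 1, ...).
for col in df.select_dtypes("object").columns:
    codes = {v: i for i, v in enumerate(sorted(df[col].unique()))}
    df[col] = df[col].map(codes)
```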

3.4 Separation of Features by the Type of Their Values

Most of the features in the data under consideration are categorical. Categorical features describe the belonging of the research object to a certain type or class. In our data, most of the categorical features are binary, representing two classes: most often the presence of a complaint, or the observation of a certain property or behaviour of the patient. Examples of such features are shown in Table 3.1.

Numerical features describe objects in the form of a numerical value. These features can take either a fixed set of values (features such as year of birth, work experience, etc.) or an infinitely large set of real values. In our task we work with medical data, so the acceptable range of values is known for almost every attribute: from the general meaning of an attribute it is possible to understand in which range its values should lie. When receiving data from a source, these criteria make it possible to check the correctness of the data provided or to find anomalies in the data. As with any set of numerical values, various statistical functions can be applied to numerical features to find important parameters of the data: the mean, the variance and other distribution parameters that give a general idea of the feature under consideration. Table 3.2 presents a list of numerical features as an example. Next, we consider the division of features by meaning, where the descriptions presented above are examined in more detail.
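The statistical summary described above (mean, variance, value ranges) and the range-based anomaly check can be sketched in pandas. The column name, its values and the plausible temperature range are assumptions for illustration only:

```python
# Sketch: per-feature statistics and a range check for anomaly detection.
import pandas as pd

temps = pd.Series([30.1, 31.5, 29.8, 30.9, 55.0],
                  name="thermometry_right_hand_celsius")

summary = temps.describe()           # count, mean, std, min, quartiles, max
mean, var = temps.mean(), temps.var()

# Range check from the medical meaning of the attribute: hand skin
# temperature far outside an assumed 20-45 degrees Celsius is flagged.
anomalies = temps[(temps < 20) | (temps > 45)]
```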


Table 3.1 Examples of features: the Python scikit-learn library allows the researchers to monitor categorical, numerical, textual and other features during debugging and testing of the program

№   Feature name
1   Sex
2   Pain in the feet and shins
3   Pain in the lumbar spine
4   Hyperhidrosis of the hands
5   Hyperhidrosis of the feet
6   Perforation Mt-2
7   Traumatic brain injuries
8   Smoking
9   Harmful factors
10  1.1.3
11  1.1.4.3.1
12  1.1.4.3.2
13  1.1.4.3.3
14  1.1.4.5
15  1.1.4.8.2
16  1.2.1
17  1.2.10
18  1.2.14.1
19  1.2.2
20  1.2.21.1
21  1.2.25
22  1.2.30.1
3.5 Separation of Features by Meaning

It is more convenient to present a general description of the objects by dividing their features into groups by meaning. Considering each group separately then makes it possible to understand in more detail what data the developed models will have to work with. This stage is also very useful because it lets the researchers find patterns and insights in the data that can significantly improve the result. Examples of the division of features by meaning:

• general information (socio-demographic and formal information about work),
• pain and complaints,
• thermometry of the hands,
• vibration sensitivity,
• medical factors (binary),
• hearing threshold at various frequencies,

3 Risk Forecasting Tools Based on the Collected Information for Two …

Table 3.2 Examples of numerical features

№ | Numerical feature name
1 | Risk of sensorineural hearing loss
2 | Risk local vibration
3 | Risk general vibration
4 | Year of birth
5 | Work experience
6 | The year of the current employment
7 | Thermometry of the right hand, degrees Celsius
8 | Thermometry of the left hand, degrees Celsius
9 | Thermometry of the right foot, degrees Celsius
10 | Thermometry of the left foot, degrees Celsius
11 | Vibration sensitivity at 125 Hz on the right, dB
12 | Vibration sensitivity at 125 Hz on the left, dB
13 | Vibration sensitivity at 32 Hz on the right
14 | Vibration sensitivity at 63 Hz on the right
15 | Vibration sensitivity at 250 Hz on the right
16 | Vibration sensitivity at 32 Hz on the left
17 | Vibration sensitivity at 63 Hz on the left
18 | Vibration sensitivity at 250 Hz on the left
19 | Hearing threshold at 250 Hz (dB)
20 | Hearing threshold at 500 Hz (dB)
21 | Hearing threshold at 1000 Hz (dB)
22 | Hearing threshold at 2000 Hz (dB)
23 | Hearing threshold at 3000 Hz (dB)
24 | Hearing threshold at 4000 Hz (dB)
25 | Hearing threshold at 6000 Hz (dB)
26 | Hearing threshold at 250 Hz (dB)
27 | Hearing threshold at 500 Hz (dB)
28 | Hearing threshold at 1000 Hz (dB)
29 | Hearing threshold at 2000 Hz (dB)
30 | Hearing threshold at 3000 Hz (dB)
31 | Hearing threshold at 4000 Hz (dB)
32 | Hearing threshold at 6000 Hz (dB)


Fig. 3.3 Distribution of the studied objects by age

• harmful factors,
• harmful work.

Figures 3.3, 3.4, 3.5, 3.6, 3.7 and 3.8 show the distributions and histograms of the patients by feature. Examining these graphs, and similar ones for the other distinguished features, gives an idea of the average patient we will examine. This stage of the work is very important, because it is here that we can see the peculiarities and anomalies in the data.

3.6 The Type of the Problem to Be Solved and the Target Variables

The target variables represent different risk groups for occupational diseases. For the training samples, the target variables are determined by expert evaluation, or from data of long-term prospective occupational pathology medical and medico-social observations (covering the period on which the forecast is founded) comparable to the given anticipation period of the risk forecast.


Fig. 3.4 Distribution of the studied objects by work experience

Fig. 3.5 Gender distribution of the studied objects


Fig. 3.6 Thermometry of the hands (distribution)

Fig. 3.7 Thermometry of the feet (distribution)

Each target variable represents a risk group taking a value from 1 to 5; thus, each patient is assigned to one of the five risk groups of occupational disease. The task can be treated as a classical classification problem, but then the comparability of the risk groups is not taken into account, although in reality it is known that the higher the group, the higher the risk. This information is used in the development of the models: with the standard approach of dividing the data into 5 groups, the possibility of comparing the level of risk among patients of the same group, and of comparing the groups with each other, is lost. Therefore, the type of the problem being solved is changed from a classification problem to a regression problem, given


Fig. 3.8 Vibration sensitivity at 125 Hz (distribution)

that the target variables of different risk groups are comparable to each other. This allows a gradation between employees who have the same risk group, arranging them in ascending or descending order of their existing risks. Figures 3.9, 3.10 and 3.11 show the types of target variables and the proportions of the risk groups in our training sample with respect to occupational sensorineural hearing loss and vibration disease. In all three tasks, the presented samples are extremely unbalanced. This must be taken into account when splitting the data into training and test samples and when analyzing the results obtained and the indicative metrics.

3.7 Trained Models and Their Results

Below, various models are given with brief descriptions and the results they showed on the tasks being solved. In each case, the available data is divided into training and test samples (datasets) in a ratio of 4:1: training takes place on the training dataset, and verification of the presented results is performed on the deferred (test) sample. In


Fig. 3.9 Distribution of the results of the assessment of the risk of sensorineural hearing loss obtained by expert means

Fig. 3.10 Distribution of risks of vibration disease (local vibration) obtained by expert means

each case, various metrics are considered that characterize the correctness of the predictions of the obtained models.
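The 4:1 split can be done with scikit-learn's train_test_split. On imbalanced data such as ours, a stratified split (shown here on synthetic data; the class proportions are invented) is one standard way to keep the rare risk groups represented in the test sample:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical data: X (features) and y (risk groups 1..5, heavily imbalanced)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.choice([1, 2, 3, 4, 5], size=500, p=[0.6, 0.2, 0.1, 0.06, 0.04])

# 4:1 split; stratify=y keeps every risk group represented in both parts
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```

Without stratification, a rare group could by chance be absent from the deferred sample, making its metrics meaningless.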

3.8 Linear Regression Model

In this model, a coefficient is selected for each feature so that the target variable is expressed through a general formula:

3 Risk Forecasting Tools Based on the Collected Information for Two …

45

Fig. 3.11 Distribution of vibration disease risks (general vibration) obtained by expert means

$$ \hat{y}^{(k)} = w_0 + \sum_{i=1}^{n_{\mathrm{features}}} w_i x_i^{(k)}, $$

where $\hat{y}^{(k)}$ is the predicted risk for object $k$, $x_i^{(k)}$ is its $i$-th feature, and $w_0, w_i$ are the learned weights.

The advantages of this model are that the resulting formula is simple and clear: from its form and coefficients it is evident how each feature affects the result. The disadvantage of this approach is that it is quite simplified; for optimal results and for analysis of the coefficients in the formula, preprocessing of the features is needed.

3.9 Determining the Risk of Sensorineural Hearing Loss

For the first target feature, the model showed the following metrics on the deferred sample (Table 3.3). Here and below, the standard abbreviations are used: accuracy score, the proportion of correct answers of the algorithm; MAE, the mean absolute error; MSE, the mean squared error. It can be seen that the model trains successfully, and even with these basic model-construction techniques its accuracy correlates reasonably with the size of the training sample.

Table 3.3 Metrics of the sensorineural hearing loss linear regression model

Model metrics type | Metrics values
Accuracy score | 0.7877
Mean absolute error (MAE) | 0.3227
Mean squared error (MSE) | 0.1784
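A minimal sketch of this pipeline and of the three metrics on synthetic data (the values will not match the table, since the chapter's patient records are not available; treating a rounded regression output as the predicted risk group is our assumption about how accuracy is computed here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the patient data: risk groups 1..5 driven by two features
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 6))
y = np.clip(np.round(3 + 0.8 * X[:, 0] - 0.5 * X[:, 1]), 1, 5)

# Standardize the features so the learned weights are directly comparable
Xs = StandardScaler().fit_transform(X)
model = LinearRegression().fit(Xs, y)
pred = model.predict(Xs)

accuracy = float(np.mean(np.clip(np.round(pred), 1, 5) == y))  # exact group hits
mae = mean_absolute_error(y, pred)
mse = mean_squared_error(y, pred)
```

Because the inputs are standardized, the signs and magnitudes of `model.coef_` can be read off directly as feature contributions, which is exactly how the weight tables below are interpreted.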

Table 3.4 Top five features with the highest values of weights

Top5 "+" features | Values of weights
The year of the current employment | 0.2468
Work experience | 0.2359
Hearing threshold at 4000 Hz (dB) | 0.1998
The hearing threshold at the frequency of 4000 Hz (dB) | 0.1245
The hearing threshold at the frequency of 2000 Hz (dB) | 0.0952

Table 3.5 Top five features with the smallest values of weights

Top5 "−" features | Values of weights
The hearing threshold at the frequency of 1000 Hz | −0.0948
27.6 | −0.0524
Hearing threshold at 250 Hz (dB) | −0.0444
Lower limb right (no changes—a, there are changes—b) | −0.0410
Untreated ear diseases: chronic purulent otitis | −0.0368

For example, Table 3.4 lists the five features with the highest weights, and Table 3.5 the five features with the smallest (most negative) weights. These weights show the contribution of each feature describing the object to the result. The features are standardized beforehand (the mean is subtracted and the result is divided by the standard deviation), so it is correct to compare such coefficients directly.
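Given a fitted model's weight vector, the top positive and negative features (as in Tables 3.4 and 3.5) can be extracted by sorting; the names and values here are illustrative, not the trained model's:

```python
import numpy as np

# Hypothetical standardized-feature weights from a fitted linear model
feature_names = ["work_experience", "hearing_4000hz", "hearing_1000hz",
                 "thermometry_left_hand", "age"]
weights = np.array([0.236, 0.200, -0.095, -0.058, 0.120])

order = np.argsort(weights)                      # ascending by weight
top_positive = [feature_names[i] for i in order[::-1][:2]]  # largest weights
top_negative = [feature_names[i] for i in order[:2]]        # most negative weights
```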

3.10 Determination of the Risk of Vibration Disease (Local Vibration)

For the second target variable, the linear regression model showed the following metrics (Table 3.6). Table 3.7 lists the five features with the highest weights, and Table 3.8 the five features with the smallest negative weights. These weights show the contribution of each feature describing the object to the result; since the features are standardized beforehand (the mean is subtracted and the result is divided by the standard deviation), the coefficients can be compared directly.

Table 3.6 Metrics of the vibration disease linear regression model (local vibration)

Model metrics type | Metrics values
Accuracy score | 0.4692
Mean absolute error (MAE) | 0.6513
Mean squared error (MSE) | 0.6827

Table 3.7 Top five features with the highest values of weights

Top5 "+" features | Values of weights
3.4.1 | 0.1901
The year of the current employment | 0.1200
1.2.2 | 0.1071
Paint brushes (Normal color—a, Marble-cyanotic—b) | 0.1033
Work experience | 0.0974

Table 3.8 Top five features with the smallest values of weights

Top5 "−" features | Values of weights
Thermometry of the left hand, degrees Celsius | −0.5786
Thermometry of the right hand, degrees Celsius | −0.5786
1.3.5 | −0.1540
Pain in the hands/and forearms (yes—a, no—b) | −0.1028
3.4.2 | −0.1009

3.11 Determination of the Risk of Vibration Disease (General Vibration)

For the third target variable, similar metrics and results are provided. The linear regression model showed the following metrics (Table 3.9). Table 3.10 lists the five features with the highest weights, and Table 3.11 the five features with the smallest negative weights. As before, the features are standardized, so the coefficients are directly comparable.

Table 3.9 Metrics of the vibration disease linear regression model (general vibration)

Model metrics type | Metrics values
Accuracy score | 0.5474
Mean absolute error (MAE) | 0.5423
Mean squared error (MSE) | 0.4683

Table 3.10 Top five features with the highest values of weights

Top5 "+" features | Values of weights
1.2.37 | 0.0646
Hearing threshold at 250 Hz (dB) | 0.0519
The upper limb to the left (there are no changes—a, there are changes—b) | 0.0515
Perforation Mt-2 | 0.0510
2 | 0.0500

Table 3.11 Top five features with the smallest values of weights

Top5 "−" features | Values of weights
The year of the current employment | −0.3297
Work experience | −0.3182
Thermometry of the left hand, degrees Celsius | −0.2801
Thermometry of the right foot, degrees Celsius | −0.2189
Thermometry of the left foot, degrees Celsius | −0.2165

Based on these first results, we can say that the first task is solved better than the other two: the accuracy of the solution to the first problem is satisfactory, while the quality of the solutions to the second and third problems is low. This model may be too simple for such a set of features and may fail to capture the patterns inherent in them.

3.12 The Decision Tree Model

The decision tree model is an algorithm that produces an answer by making decisions at several levels, at each of which it checks the object against a certain condition on a selected attribute. Schematically, the algorithm can be represented as a binary tree: to get an answer, one descends from its root to one of the leaves. The big advantage of this algorithm is its intuitive clarity.
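A sketch of such a depth-limited regression tree on synthetic data (scikit-learn's DecisionTreeRegressor; the data, feature count, and depth limit are invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic risk groups 1..5, driven only by feature 0
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = np.clip(np.round(3 + X[:, 0]), 1, 5)

# Limiting max_depth keeps the tree small enough to read, as in the figures below
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
pred = np.clip(np.round(tree.predict(X)), 1, 5)  # rounded back to risk groups
```

Each internal node of the fitted tree is a threshold test on one feature; descending from the root to a leaf answers the risk query for one patient.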

Table 3.12 Metrics of the sensorineural hearing loss decision tree model

Model metrics type | Metrics values
Accuracy score | 0.8994
Mean absolute error (MAE) | 0.1184
Mean squared error (MSE) | 0.0884

3.13 Determining the Risk of Sensorineural Hearing Loss

For the first target feature, the model showed the following metrics on the deferred sample (Table 3.12). A part of the calculated decision tree is shown in Fig. 3.12. In this model, the maximum depth of the tree is limited; the accuracy indicated above and the other metrics are thereby determined by only a small part of the original features. Table 3.13 lists the features on which the greatest number of decisions were made during the run of the algorithm. We see that some column names repeat; most likely, each repeated feature refers to the left and right sides.

3.14 Determination of the Risk of Vibration Disease (Local Vibration)

For the second target value, the decision tree model showed the following metrics (Table 3.14). The completed decision tree is shown in Fig. 3.13, and Table 3.15 lists the top features by which this decision tree works. Again, we see that the defining parameters for decision-making in this case are a limited set of the initial features.

3.15 Determination of the Risk of Vibration Disease (General Vibration)

For the third target value, similar metrics and results are provided. The decision tree model showed the following metrics (Table 3.16). The general view of the model (the resulting tree), with the features and the conditions of transition to the lower levels, is shown in Fig. 3.14. Table 3.17 lists the features on which the algorithm focuses while descending to the leaves of the tree and making the resulting decision.

Fig. 3.12 Visual representation of the decision tree—a model that gives the most accurate results for the task of predicting the risk of sensorineural hearing loss


Table 3.13 A list of the features on the basis of which the greatest number of decisions were made for estimation of the risk of sensorineural hearing loss

Feature | Importance
The hearing threshold at the frequency of 4000 Hz (dB) | 0.4105
Hearing threshold at 4000 Hz (dB) | 0.3856
The hearing threshold at the frequency of 2000 Hz (dB) | 0.0700
Hearing threshold at a frequency of 500 Hz (dB) | 0.0422
The hearing threshold at the frequency of 500 Hz (dB) | 0.0375
The hearing threshold at the frequency of 250 Hz (dB) | 0.0275
Lower limb right (no changes—a, there are changes—b) | 0.0069
The hearing threshold at the frequency of 6000 Hz (dB) | 0.0069
The mucous membrane of the posterior pharyngeal state | 0.0066
Hearing threshold at a frequency of 1000 Hz (dB) | 0.0062

Table 3.14 Metrics of the vibration disease decision tree model (local vibration)

Model metrics type | Metrics values
Accuracy score | 0.8491
Mean absolute error (MAE) | 0.1799
Mean squared error (MSE) | 0.1885

We see that this simple but effective algorithm shows significantly better results than the linear regression model, while the interpretability of the results and the logic of decision-making remain just as clear.

3.16 Random Forest Model

This method is based on constructing an ensemble of decision tree algorithms; the resulting quality should be better than the quality of the individual algorithms. In addition, this algorithm makes it possible to evaluate the quality of the available features and their importance in decision-making.
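A sketch on synthetic data (scikit-learn's RandomForestRegressor; the data and ensemble size are invented). The `feature_importances_` attribute gives exactly the kind of per-feature importance scores reported in the tables below:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic risk groups: feature 0 matters most, feature 1 less, the rest are noise
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))
y = np.clip(np.round(3 + X[:, 0] - 0.5 * X[:, 1]), 1, 5)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = forest.feature_importances_  # one score per feature, summing to 1
```

Averaging over many trees both stabilizes the predictions and smooths the importance estimates relative to a single decision tree.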

Fig. 3.13 Visual representation of the decision tree—a model that gives the most accurate results for the task of predicting the risk of vibration disease (local vibration)



Table 3.15 A list of the features on the basis of which the greatest number of decisions were made for estimation of the risk of vibration disease (local vibration)

Feature | Importance
Thermometry of the right hand, degrees Celsius | 0.4629
Thermometry of the left hand, degrees Celsius | 0.3472
Hyperhidrosis of the hands | 0.0769
Pain in the hands/and forearms (yes—a, no—b) | 0.0257
Attacks of whiteness/blueness of the fingers | 0.0163
Hypalgesia and/or hypesthesia of the feet | 0.0155
Numbness and/or paresthesia of the hands | 0.0107
Age | 0.0095
Symptom of white spot and/or Bogolepovs test (negative—a, positive—b) | 0.0056
The hearing threshold at the frequency of 2000 Hz (dB) | 0.0050
Vibration sensitivity at 250 Hz on the left hand | 0.0049
The hearing threshold at the frequency of 250 Hz (dB) | 0.0045
Thermometry of the right foot, degrees Celsius | 0.0026
The left lower limb (no changes—a, there are changes—b) | 0.0022
Numbness and/or paresthesia of the feet | 0.0020
Paint brushes (Normal color—a, Marble-cyanotic—b) | 0.0017
Hearing threshold at 250 Hz (dB) | 0.0016
1.1 | 0.0014
The hearing threshold at the frequency of 1000 Hz (dB) | 0.0013
Vibration sensitivity at 63 Hz on the left hand | 0.0010
The hearing threshold at the frequency of 4000 Hz (dB) | 0.0009
Work experience | 0.0007

Table 3.16 Metrics of the vibration disease decision tree model (general vibration)

Model metrics type | Metrics values
Accuracy score | 0.9218
Mean absolute error (MAE) | 0.1058
Mean squared error (MSE) | 0.0837

3.17 Determining the Risk of Sensorineural Hearing Loss

For the first target feature, the model showed the following metrics on the deferred sample (Table 3.18). The random forest model makes it possible to evaluate how important the features that describe the objects are; the most important features for the first task are listed in Table 3.19.


Fig. 3.14 Visual representation of the decision tree—a model that gives the most accurate results for the task of predicting the risk of vibration disease (general vibration)

Table 3.17 A list of the features on the basis of which the greatest number of decisions were made for estimation of the risk of vibration disease (general vibration)

Feature | Importance
Thermometry of the left foot, degrees Celsius | 0.2726
Thermometry of the right foot, degrees Celsius | 0.2702
Thermometry of the right hand, degrees Celsius | 0.2426
Thermometry of the left hand, degrees Celsius | 0.1959
Vibration sensitivity at 250 Hz on the left | 0.0098
Hearing threshold at 250 Hz (dB) | 0.0028
The mucous membrane of the posterior pharyngeal state | 0.0027
The hearing threshold at the frequency of 4000 Hz (dB) | 0.0019
1.1 | 0.0015


Table 3.18 Metrics of the sensorineural hearing loss random forest model

Model metrics type | Metrics values
Accuracy score | 0.9218
Mean absolute error (MAE) | 0.1310
Mean squared error (MSE) | 0.0612

Table 3.19 The most important features for the sensorineural hearing loss random forest model

Feature | Importance | Loss reduction sum
The hearing threshold at 4000 Hz (dB) | 0.3338 | 0.3338
The hearing threshold at the frequency of 4000 Hz (dB) | 0.3291 | 0.6629
The hearing threshold at the frequency of 500 Hz (dB) | 0.0406 | 0.7035
The hearing threshold at the frequency of 500 Hz (dB) | 0.0385 | 0.7421
The hearing threshold at the frequency of 2000 Hz (dB) | 0.0338 | 0.7758
The hearing threshold at the frequency of 2000 Hz (dB) | 0.0181 | 0.7940
Untreated ear diseases: chronic purulent otitis | 0.0157 | 0.8096
The hearing threshold at the frequency of 250 Hz (dB) | 0.0110 | 0.8206
The hearing threshold at the frequency of 6000 Hz (dB) | 0.0102 | 0.8308
Thermometry of the left foot, degrees Celsius | 0.0093 | 0.8402
Somatic diseases: diseases of the circulatory system | 0.0085 | 0.8486
Vibration sensitivity at 32 Hz on the left, dB | 0.0082 | 0.8569
The hearing threshold at the frequency of 3000 Hz (dB) | 0.0071 | 0.8639
Vibration sensitivity at 125 Hz on the left, dB | 0.0071 | 0.8710
Vibration sensitivity at 63 Hz on the left, dB | 0.0064 | 0.8774

Let's look at how the cumulative share of the relative loss decrease grows. This value can be interpreted as the fraction of decisions we can make using only the first few features; in other words, the proportion of the information needed to solve the problem, and contained in the data, that we actually use (Fig. 3.15). These data highlight the information that is worth paying attention to first when making decisions, and allow us to identify a set of features that gives the decision-making system maximum awareness for this type of task.
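The cumulative "loss reduction sum" column, and the number of top features needed to reach a given share, can be computed directly; the importances below are the first rows of Table 3.19:

```python
import numpy as np

# Importances sorted in descending order (first rows of Table 3.19)
importances = np.array([0.3338, 0.3291, 0.0406, 0.0385, 0.0338])

cumulative = np.cumsum(importances)  # the "loss reduction sum" column
# How many top features cover at least 70 % of the total loss reduction?
top_k = int(np.searchsorted(cumulative, 0.70)) + 1
```

Here the first two hearing-threshold features alone cover about 66 % of the loss reduction, and three features suffice to pass 70 %.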

3.18 Determination of the Risk of Vibration Disease (Local Vibration) For the second target feature, the model showed the following metrics on a deferred sample (Table 3.20).


Fig. 3.15 Dependence of the normalized accuracy of the prediction of the risk class of the model on the number of features taken into account in the model (sensorineural hearing loss)

Table 3.20 Metrics of the vibration disease random forest model (local vibration)

Model metrics type | Metrics values
Accuracy score | 0.9218
Mean absolute error (MAE) | 0.1460
Mean squared error (MSE) | 0.0977

Below are the top 15 most important features for decision-making when determining the second target feature (Table 3.21). Figure 3.16 shows how the total share of information about the patient grows with the first important features. It is these features that deserve attention first when making decisions on the risk of local vibration.


Table 3.21 The most important features for the vibration disease random forest model (local vibration)

Feature | Importance | Loss reduction sum
Thermometry of the right hand, degrees Celsius | 0.4022 | 0.4022
Thermometry of the left hand, degrees Celsius | 0.3715 | 0.7737
Numbness and/or paresthesia of the hands | 0.0373 | 0.8110
Hyperhidrosis of the hands | 0.0286 | 0.8397
Paint brushes (Normal color—a, Marble-cyanotic—b) | 0.0162 | 0.8559
Pain in the hands/and forearms (yes—a, no—b) | 0.0159 | 0.8717
Attacks of whiteness/blueness of the fingers | 0.0157 | 0.8874
Age | 0.0128 | 0.9002
Symptom of white spot and/or Bogolepovs test | 0.0105 | 0.9107
Hypalgesia and/or hypesthesia of the feet | 0.0076 | 0.9183
Vibration sensitivity at 125 Hz on the left, dB | 0.0043 | 0.9226
The hearing threshold at the frequency of 4000 Hz (dB) | 0.0036 | 0.9262
Vibration sensitivity at 250 Hz on the left, dB | 0.0032 | 0.9294
Vibration sensitivity at 250 Hz on the right, dB | 0.0032 | 0.9326
Work experience | 0.0030 | 0.9357

Fig. 3.16 Dependence of the normalized accuracy of the prediction of the vibration disease risk class by the model on the number of features taken into account in the model (local vibration)


3.19 Determination of the Risk of Vibration Disease (General Vibration)

For the third target feature, the model showed the following metrics on the deferred sample (Table 3.22). Below are the top 15 most important features for decision-making when determining the third target feature (Table 3.23); Fig. 3.17 shows the growth of the total "importance" of the features as their number increases (taking the top of the most important features). It is these features that deserve attention first when making decisions on the risk of general vibration. We see that this algorithm has allowed us to further improve the results obtained.

Table 3.22 Metrics of the vibration disease random forest model (general vibration)

Model metrics type | Metrics values
Accuracy score | 0.9596
Mean absolute error (MAE) | 0.0738
Mean squared error (MSE) | 0.0363

Table 3.23 The most important features for the vibration disease random forest model (general vibration)

Feature | Importance | Loss reduction sum
Thermometry of the left foot, degrees Celsius | 0.2479 | 0.2479
Thermometry of the right foot, degrees Celsius | 0.2465 | 0.4944
Thermometry of the right hand, degrees Celsius | 0.2280 | 0.7225
Thermometry of the left hand, degrees Celsius | 0.1923 | 0.9147
Age | 0.0109 | 0.9256
The hearing threshold at the frequency of 4000 Hz (dB) | 0.0062 | 0.9318
Vibration sensitivity at a frequency of 32 Hz | 0.0046 | 0.9364
The hearing threshold at a frequency of 6000 Hz (dB) | 0.0039 | 0.9403
Vibration sensitivity at 63 Hz on the right, dB | 0.0033 | 0.9436
Tinnitus (constant) (yes—a, no—b) | 0.0033 | 0.9468
Hypalgesia and/or hypesthesia of the feet | 0.0029 | 0.9497
Vibration sensitivity at 250 Hz on the left, dB | 0.0025 | 0.9523
Vibration sensitivity at 250 Hz on the right, dB | 0.0025 | 0.9548
The hearing threshold at the frequency of 250 Hz (dB) | 0.0023 | 0.9571
Work experience | 0.0023 | 0.9594


Fig. 3.17 Dependence of the normalized accuracy of the prediction of the vibration disease risk class by the model on the number of features taken into account in the model (general vibration)

3.20 Predictive Risk Model

For a correct formulation of the risk forecasting problem, and for a meaningful study of forecasts, data are needed that contain a time series for each patient. In other words, the cross-section of parameters that we have at the moment must be reproduced repeatedly over time. A prediction for a future moment relies, to a certain extent, on information about the current moment (or on information from past moments). At this stage we do not have such a reference point on which the created model could be based, and it is needed not only for prediction but also for training the model itself.

For now, this shortcoming can be partly compensated: we can change certain features that naturally change with the passage of time and look at the model's output. At the same time, we are forced to assume that all other features, not explicitly dependent on time, remain unchanged. In reality, the other features that describe the object can and will change: a person's working conditions may change, and certain symptoms and observations may appear. All of this depends on the length of service and working conditions and changes over time. To take such subtle effects into account, it is necessary to assess how these features change over time. For such models we return to the question of the data provided, because to train such algorithms, time series with varying parameters are

Table 3.24 Risk of sensorineural hearing loss forecasted by a linear regression model

Year | Risk estimation
0 | 2.132
5 | 2.306
10 | 2.480
15 | 2.653
20 | 2.827
25 | 3.001

needed. This conclusion is supported by the fact that, judging by the results of the random forest and decision tree models, the main contribution to the result comes from specific medical parameters, which depend on time only indirectly. However, as a first approximation we can consider how the risk changes in the linear regression model. For example, take the problem of determining the risk of sensorineural hearing loss solved with the linear regression model, and consider a person in a certain risk group. We then increase his features of seniority and age, which grow over time, and estimate in how many years the risk will rise to the next level. In the model we trained, the weights for the age and seniority features are positive, which matches the expectation that the longer a person works and the more experience he has, the greater the risk of an occupational disease. We change the age and seniority features and monitor how the model's output changes (keeping in mind that other features can also change over time, but in this approximation we neglect this). The resulting forecast is shown in Table 3.24: over 25 years of work with the current features, the risk grows to 3. It is worth noting once again that during this time other features describing the employee are likely to change, and to predict the risk correctly we must take such changes into account. As an alternative, a statistical regression assessment of the changes in the features could be used to compute time-varying parameters for further transmission to the model.
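Under the stated assumption (only seniority and age change, all other feature contributions frozen), the forecast reduces to a linear extrapolation. The weights below are illustrative stand-ins, not the trained model's:

```python
# Hypothetical weights of a fitted linear risk model; only experience and age
# are assumed to grow with time, everything else stays frozen in the constant.
w0 = 2.132            # model output for this employee today (illustrative)
w_exp, w_age = 0.024, 0.011  # illustrative per-year weights, both positive

def risk_after(years: float) -> float:
    """First-approximation forecast: shift experience and age by `years`."""
    return w0 + (w_exp + w_age) * years

forecast = {t: round(risk_after(t), 3) for t in (0, 5, 10, 15, 20, 25)}
```

With these stand-in weights the risk grows linearly from about 2.1 today to about 3.0 after 25 years, mirroring the shape (though not the exact numbers) of Table 3.24.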

3.21 Conclusions and Model Selection

The study presents the results of using various mathematical models to determine the risks of sensorineural hearing loss and of vibration disease (local and general vibration). The results show that the problem is solvable, and the accuracy of the results obtained can be called good. Tables 3.25, 3.26 and 3.27 show comparative indicators of all the presented models.

Table 3.25 Metrics of sensorineural hearing loss risk models

Accuracy\model | Linear regression | Decision tree | Random forest
Accuracy (the proportion of exact solutions of the model) | 0.788 | 0.894 | 0.933
MAE (mean absolute error) | 0.325 | 0.154 | 0.127
MSE (mean squared error) | 0.183 | 0.099 | 0.060

Table 3.26 Metrics of vibration disease risk models (local vibration)

Accuracy\model | Linear regression | Decision tree | Random forest
Accuracy (the proportion of exact solutions of the model) | 0.464 | 0.838 | 0.927
MAE (mean absolute error) | 0.651 | 0.186 | 0.148
MSE (mean squared error) | 0.682 | 0.183 | 0.094

Table 3.27 Metrics of vibration disease risk models (general vibration)

Accuracy\model | Linear regression | Decision tree | Random forest
Accuracy (the proportion of exact solutions of the model) | 0.542 | 0.927 | 0.966
MAE (mean absolute error) | 0.543 | 0.095 | 0.077
MSE (mean squared error) | 0.466 | 0.061 | 0.036

We see that the random forest model performed best in all three tasks (Table 3.25 covers the risk of sensorineural hearing loss). Among the advantages of this model is that it requires minimal preprocessing of the features. The decision tree model also showed good results. Nevertheless, the results of this study could be improved by devoting more time to analyzing the existing features, generating new features, tuning the hyperparameters of the algorithms used, and considering new metrics and ways of applying the results obtained. The results of this work, though encouraging, do not yet allow us to speak with confidence about applying these models to full risk forecasting over time; for now we can only give an estimate of these parameters. The paper indicates ways to resolve this: above all, appropriate data are needed that show how the features of objects change over time. Even in their current form, however, the models shown can suggest optimal ways of working with staff and their healthcare.


M. Deminov et al.

Fig. 3.18 Fragment of the decision tree model (example)

3.22 An Example of an Explicitly Interpreted DSS Fragment Based on an Automatically Generated and Optimized Model

Here is an example of a constructed decision tree, shown to a depth of 2 (Fig. 3.18). The upper-level branching criteria can be treated as decision rules and expert conditions to be used in decision making. The first block contains the condition X[70] ≤ 24.5, which tells us to look at the 70th feature of the object description — in this case, "The threshold of hearing at a frequency of 4000 Hz (dB)". Depending on the result of the comparison, we descend to either the left or the right branch. At the next level, the left branch checks the 77th feature, while the right branch checks the 70th feature again, against a new threshold value. Descending further, we eventually arrive at an answer that determines the risk in question. Alternatively, we can stop at whatever level we need (or have available) and make the required assessment there.
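The extraction of such upper-level decision rules from a shallow tree can be sketched as follows. The data is synthetic; only feature index 70 (the 4000 Hz hearing threshold) is taken from the text:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.uniform(0, 60, size=(500, 80))     # 80 synthetic object features
y = (X[:, 70] > 24.5).astype(int)          # risk driven by feature 70

# A depth-2 tree, as in Fig. 3.18, yields a handful of readable split rules
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=[f"X[{i}]" for i in range(80)])
print(rules)
```

The printed rules show the root split on X[70], which can be read off directly as an expert condition of the kind described above.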

3.23 Justification of Risk Prediction Error Estimation

To estimate the error, we use the predictions of the trained models on held-out data.


3.24 The First Approach to Estimating the Risk Prediction Error

In the first approach, for each resulting risk level we estimate the average absolute deviation of the predictions from the true values within the predicted class. After recording the resulting error level, we can state an interval in which the true risk value lies with a high degree of probability. This approach can be described formally. Let $\hat{y}_j$ denote the predicted risk value for object $j$ and $y_j$ the true value of the risk. Then for risk $r = i$ the prediction error is determined as follows:

$$\mathrm{err}_i = \frac{\sum_{j:\,\hat{y}_j = i} \left| y_j - \hat{y}_j \right|}{\sum_j I\left[ \hat{y}_j = i \right]}$$

After making these calculations, we can report the error of the obtained estimate, also using other statistics of the obtained deviation moduli. Note that the larger the held-out sample on which we carry out these calculations, the better this estimate will be.
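This per-risk-level error translates directly into code. The helper name and toy data below are ours, for illustration only:

```python
import numpy as np

def per_level_error(y_true, y_pred):
    """err_i: mean |y_j - y_hat_j| over objects with predicted risk level i."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    errors = {}
    for i in np.unique(np.round(y_pred)):
        mask = np.round(y_pred) == i
        errors[int(i)] = float(np.abs(y_true[mask] - y_pred[mask]).mean())
    return errors

# Toy held-out sample: one object with true level 1 is predicted as level 2
y_true = [0, 1, 1, 2, 2, 3]
y_pred = [0, 1, 2, 2, 2, 3]
print(per_level_error(y_true, y_pred))
# → {0: 0.0, 1: 0.0, 2: 0.3333333333333333, 3: 0.0}
```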

3.25 The Second Approach to Estimating the Error of the Risk Forecast

The second approach complements the first. Here we estimate the resulting error using subsamples of the objects' features, which is especially useful when information about the objects is limited. The error calculations follow the same logic, using only the corresponding feature subspaces; the error interval naturally narrows as the number of features available for the object under study increases. Using an estimate of feature importance (from any of the presented models), we can identify several sets of features responsible for, say, a basic, an extended, and a detailed description of the object, and then estimate the resulting errors for each set. With this logic of combining features, the extended set includes the basic one, and the detailed set includes both. In general, we can distinguish $N$ such groups of features $[g_1, g_2, \ldots, g_N]$; then for each of these groups, at risk level $i$, we obtain an error vector $[e_1, e_2, \ldots, e_N]$, each component of which is expressed analogously to the first approach:

$$e_i^j = \frac{\sum_{k:\,\hat{y}_k(x_j) = i} \left| y_k - \hat{y}_k(x_j) \right|}{\sum_k I\left[ \hat{y}_k(x_j) = i \right]}$$


Here $\hat{y}_k(x_j)$ is the prediction of a model for an object whose description contains only the features from group $g_j$. It is also worth noting that these estimates require an even larger dataset for the resulting error estimates to be accurate. The present study reports results obtained using the first approach.
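Under the stated assumptions, the second approach can be sketched by retraining the model on nested feature groups and recording the error component for each. The group boundaries and data below are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(600, 12))
y = np.clip(np.round(3 * X[:, 0] + X[:, 5]), 0, 3)   # ordinal risk 0-3
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# Nested groups: basic within extended within detailed, as in the text
groups = {"basic": range(4), "extended": range(8), "detailed": range(12)}

errors = {}
for name, cols in groups.items():
    cols = list(cols)
    model = RandomForestRegressor(random_state=0).fit(X_tr[:, cols], y_tr)
    pred = model.predict(X_te[:, cols])
    mask = np.round(pred) == 2        # error component e_i^j at level i = 2
    errors[name] = (float(np.abs(y_te[mask] - pred[mask]).mean())
                    if mask.any() else float("nan"))
print(errors)
```

With real data one would expect the error to shrink from the basic to the detailed group, since each richer description carries more information about the object.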

3.26 Conclusions

This study presented several data-based models for estimating and forecasting health risks at workplaces with high levels of noise and vibration. Feature extraction and a comparison of model quality metrics were described. Similar approaches can be applied to the analysis of personalized medical and lifestyle data in various areas and tasks. In particular, some of the methods developed in the study have been implemented as part of the large-scale "Health Heuristics" project aimed at personalized management of quality of life and health risks (Krut'ko et al. 2021).

Compliance with Ethical Standards

Conflict of Interest The authors declare that they have no conflict of interest.

Ethics Approval The original study was performed with the approval of the Ethics Committee of the Institute of Occupational Medicine (Moscow).

References

Dobie RA (2008) The burdens of age-related and occupational noise-induced hearing loss in the United States. Ear Hear 29(4):565–577. https://doi.org/10.1097/AUD.0b013e31817349ec

Ekman L, Lindholm E, Brogren E, Dahlin LB (2021) Normative values of the vibration perception thresholds at finger pulps and metatarsal heads in healthy adults. PLoS ONE 16(4):e0249461. https://doi.org/10.1371/journal.pone.0249461

GOST R ISO 1999–2017. National standard of the Russian Federation. Acoustics. Estimation of noise-induced hearing loss. Group T34 (in Russian). https://docs.cntd.ru/document/1200157242. Accessed 1 Aug 2022

ISO 1999:2013. Acoustics—estimation of noise-induced hearing loss. ICS: 13.140 Noise with respect to human beings

ISO 2631-5:2018. Mechanical vibration and shock—evaluation of human exposure to whole-body vibration—Part 5: method for evaluation of vibration containing multiple shocks

Krut'ko VN, Deminov MM, Briko NI, Mitrokhin OV, Chichua DT (2021) Problems of health and quality of life management: intelligent digital platform "Health Heuristics". Natl Health Care (Russia) 2(2):55–63 (in Russian). https://doi.org/10.47093/2713-069X.2021.2.2.55-63. Accessed 1 Aug 2022

Mahbub MH, Hase R, Yamaguchi N, Hiroshige K, Harada N, Bhuiyan ANH, Tanabe T (2020) Acute effects of whole-body vibration on peripheral blood flow, vibrotactile perception and balance in older adults. Int J Environ Res Public Health 17(3):1069. https://doi.org/10.3390/ijerph17031069


Ntlhakana L, Nelson G, Khoza-Shangase K (2020) Estimating miners at risk for occupational noise-induced hearing loss: a review of data from a South African platinum mine. S Afr J Commun Disord 67(2):e1–e8. https://doi.org/10.4102/sajcd.v67i2.677

Roberts B, Seixas NS, Mukherjee B, Neitzel RL (2018) Evaluating the risk of noise-induced hearing loss using different noise measurement criteria. Ann Work Expo Health 62(3):295–306. https://doi.org/10.1093/annweh/wxy001

Sliwinska-Kowalska M (2020) New trends in the prevention of occupational noise-induced hearing loss. Int J Occup Med Environ Health 33(6):841–848. https://doi.org/10.13075/ijomeh.1896.01600

WHO (2020) Deafness and hearing loss. WHO Fact sheets: https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss. Accessed 1 Aug 2022

Chapter 4

Obtaining Longevity Footprints in DNA Methylation Data Using Different Machine Learning Approaches

Alena Kalyakulina, Igor Yusipov, and Mikhail Ivanchenko

Abstract Aging as an irreversible process is characterized by a progressive decline in many body functions and increased vulnerability to various diseases. Epigenetic modifications, in particular DNA methylation, have been shown to correlate with aging and age-related diseases in many aspects. In this review, we survey studies grouped by the machine learning methods they employ: supervised learning (regression and classification) and unsupervised learning (clustering). We show how the development of these methods has affected, first, the estimation of epigenetic age, a lifespan indicator, together with its practical applicability and accuracy, and, second, the search for epigenetic footprints associated with various diseases. We also emphasize the practical utility of such approaches for predicting various age-related outcomes.

Keywords DNA methylation · Longevity · Age-related diseases · Machine learning · Epigenetics

4.1 Introduction

Aging is almost ubiquitous in living organisms. Age strongly correlates with mortality, the development of a wide range of diseases, and quality of life. Instructively, single features often characterize age poorly, whereas a combination of many can show good predictive power (Zhavoronkov and Mamoshina 2019). Human aging is a complex process associated with a gradual decline in body function and productivity, leading to multiple diseases and inevitably ending in death. Lifestyle changes can help slow the decline in body function and keep the body in the best possible condition for a given chronological age, which is commonly referred to as "healthy aging" (Zhavoronkov et al. 2019). In contrast to chronological age, the rate of biological aging varies from person to person and may predict different

A. Kalyakulina · I. Yusipov · M. Ivanchenko (B)
Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University, Nizhny Novgorod, Russia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_4



A. Kalyakulina et al.

aspects of aging at different stages of life. In late adulthood, accelerated biological aging is associated with illness, morbidity, inflammation and mortality, while in middle age it can predict decline in cognitive and physical abilities and compromised longevity (Belsky et al. 2015, 2018; Franceschi et al. 2018; Bergsma and Rogaeva 2020). Reliable estimation of biological age can help in predicting disease onset in asymptomatic carriers and developing preventive strategies, as well as in evaluating geroprotective strategies and interventions (Belsky et al. 2018).

Advances in biological data acquisition and processing techniques have led to marked progress in the discovery of molecular markers of aging, including epigenetic biomarkers. One of the key epigenetic mechanisms is DNA methylation, which can influence gene expression without changing the DNA sequence itself. DNA methylation consists in the binding of a methyl group to cytosine in cytosine-guanine dinucleotides (CpG sites). Changes in methylation patterns can be associated with many different factors: lifestyle, environmental exposure, various diseases, aging, and many others (Christensen et al. 2009; Bell et al. 2019). Microarray-based technologies, such as the Illumina HumanMethylation450 and HumanMethylationEPIC arrays (Bibikova et al. 2011; Moran et al. 2016), use pairs of probes to measure the intensity of methylated and unmethylated alleles at each CpG site across all cells of a tissue sample, thus assessing the proportion of methylated DNA copies in a large number of samples.

Machine learning refers to data analysis tools that can extract complex dependencies from data, even when there is no prior knowledge of these dependencies or they are too complex to specify explicitly.
The rapid development of machine learning techniques, combined with increasing computational power and the availability of large public datasets, has led to growing interest and research in biomedical data analysis using machine learning and deep learning (Zhavoronkov and Mamoshina 2019). The main types of machine learning are supervised learning, unsupervised learning, and reinforcement learning; deep architectures with multi-level cascades are usually referred to as deep learning.

Supervised learning is usually applied to classification problems, in which each sample must be assigned to one of several groups (i.e. a categorical value is predicted), and to regression problems, in which the relationship between dependent and independent variables must be determined (i.e. a continuous value is predicted). Prediction of human biological age from DNA methylation data is a classical example of a regression problem in the reviewed context. The best-known epigenetic clocks use linear models to construct cumulative estimates of methylation levels in age-associated CpGs (Bocklandt et al. 2011; Garagnani et al. 2012; Horvath 2013; Hannum et al. 2013; Weidner et al. 2014; Field et al. 2018). One of their advantages is the ability to measure multi-tissue or cell-specific aging (Bergsma and Rogaeva 2020). The recent development of epigenetic clock models is related, first, to increasing the practicality of age predictors by using as few loci as possible (Xiao et al. 2019) and, second, to increasing stability and accuracy by using nonlinear and deep models (Zhavoronkov and Mamoshina 2019; Zhavoronkov et al. 2019; Galkin et al. 2021; de Lima Camillo et al. 2022). Acceleration of epigenetic age is associated with neurodegenerative diseases such as Alzheimer's disease (Levine et al. 2015), Parkinson's disease (Horvath and Ritz


2015), Huntington's disease (Horvath et al. 2016) and amyotrophic lateral sclerosis (Zhang et al. 2017, 2020a), as well as with HIV (Gross et al. 2016; Levine et al. 2016) and Werner and Down syndromes (Horvath et al. 2015a; Maierhofer et al. 2017). In contrast, the epigenetic age of long-livers is less than their chronological age (Horvath et al. 2015b). Since acceleration of epigenetic age is closely related to various age-associated diseases, another machine learning task common to the field is classification between diseased and healthy subjects, or between different subtypes of the same disease (usually cancer). With the accumulation of high-dimension, low-sample-size data (a small number of samples with a large number of features), dimensionality reduction and feature selection as preprocessing steps before classification are increasingly in demand (Piao and Ryu 2017; Ghaddar and Naoum-Sawaya 2018; Batbaatar et al. 2020).

In turn, unsupervised learning is most commonly used to cluster samples with different phenotypes or to reduce the dimensionality of the input data. Dimensionality reduction algorithms, such as principal component analysis, use linear or nonlinear combinations of existing features to create new ones (Si et al. 2016). More recently, a deep learning framework with variational autoencoders was proposed to learn latent DNA methylation representations, demonstrating the ability to track representative differential methylation patterns among clinical tumor subtypes (Titus et al. 2018; del Amor et al. 2021).

Finally, reinforcement learning is trained to reach a complex goal in many steps (Zhavoronkov and Mamoshina 2019). Such methods are widely used in drug development but remain less popular in application to epigenetic data. Here we would mention DeepCpG, a computational approach for predicting methylation states in individual cells (Angermueller et al. 2017).
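Before turning to specific clocks, the supervised regression setting just described can be made concrete with a toy ElasticNet "clock". Everything below is simulated (sample size, effect sizes, and site counts are invented); real clocks are trained on Illumina array data from large cohorts:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(3)
n, p = 300, 200                              # samples x CpG sites
age = rng.uniform(20, 80, size=n)
beta = rng.normal(0.5, 0.1, size=(n, p))     # synthetic methylation beta values
beta[:, :10] += 0.005 * (age - 50)[:, None]  # 10 CpGs drift with age
beta = np.clip(beta, 0, 1)                   # beta values live in [0, 1]

clock = ElasticNetCV(cv=5, random_state=0).fit(beta, age)
pred = clock.predict(beta)
mae = float(np.mean(np.abs(pred - age)))
print(f"in-sample MAE: {mae:.2f} years; "
      f"CpGs retained: {int(np.sum(clock.coef_ != 0))}")
```

The penalized fit keeps only a subset of CpGs with nonzero coefficients, which is exactly how the linear clocks above end up with their compact site lists (353 for Horvath, 71 for Hannum).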

4.2 Biological Age Regression with Machine Learning Models

4.2.1 Baseline Models

The first prototypical DNA methylation clocks were constructed using saliva (Bocklandt et al. 2011): 88 CpG sites were identified that correlated with chronological age in 34 male identical twins. For the "epigenetic-aging-signature" (Koch and Wagner 2011), containing 19 CpGs, a high mean age-prediction error (11 years) was obtained. Florath et al. analyzed blood DNA in three steps, selecting 17 CpGs to build a regression model for age prediction with an average error of only 2.6 years (Florath et al. 2014). However, the disadvantage of this clock is the use of a single tissue and a very narrow age range (50–75 years) (Bergsma and Rogaeva 2020). The best-known DNA aging clocks constructed using methylation data were developed at about the same time by Horvath (2013) and Hannum et al. (2013). Both works estimate human chronological age from Illumina microarray data using ElasticNet,


a regularized linear regression method. Horvath's model uses methylation data of different tissues from two platforms, Illumina 450k and Illumina 27k; the resulting model dimension is 353 CpG sites. The Hannum model, in turn, considers only blood methylation data from Illumina 450k, with a resulting dimensionality of 71 CpG sites. It is worth noting that the CpG sites used by these two models overlap only weakly (Galkin et al. 2021). Despite the differences, these aging clocks show similar performance (Horvath and Raj 2018): a median absolute error of 3.6 years for the Horvath clock and a root mean square error of 3.9 years for the Hannum clock. These results opened an entire field of research in human age prediction from DNA methylation.

Horvath's model is widely used as the basic model of the pan-tissue epigenetic clock (Chen et al. 2019; Olova et al. 2019; Fahy et al. 2019; Fitzgerald et al. 2021). The ElasticNet used in this model is also a popular approach for developing epigenetic clocks (Levine et al. 2018; Thompson et al. 2018; Lu et al. 2019; Horvath et al. 2020; Sugrue et al. 2021; Vijayakumar and Cho 2022). Instructively, linear epigenetic aging clocks, in particular (Weidner et al. 2014; Lin et al. 2016; Levine et al. 2018), give comparable accuracy for different resulting sets of CpG sites (Galkin et al. 2020); however, the age accelerations predicted by these clocks have been shown to be only weakly correlated with each other (Belsky et al. 2018).

Several predictors of chronological age based on blood and saliva DNA methylation analysis were reported in Zhang et al. (2019). A predictor of chronological age with an error of only about 2 years was developed using a training cohort with a wide age range, with an additional correction for cellular composition. In non-blood tissues, the results are comparable to the Horvath clock. However, linear regression can have a high error rate and fail to capture nonlinear feature interactions (de Lima Camillo et al.
2022). Inflating the feature space to take into account pairwise interactions is not manageable due to the high dimensionality of DNA methylation data (de Lima Camillo et al. 2022). Thus, it is likely that many important interactions between different CpG sites are overlooked in linear epigenetic clocks. The development of machine learning and artificial intelligence methods has made it possible to build models capable of capturing complex dependencies and interactions between features.

4.2.2 One-Tissue Epigenetic Clocks

Commonly, epigenetic aging clocks use methylation data from a single tissue; whole blood, various types of blood cells, and saliva are most often used to construct one-tissue clocks. There are many methods to obtain methylation data, both whole-genome (Illumina 27k, 450k, EPIC) and targeted (pyrosequencing, massively parallel sequencing, SNaPshot, EpiTYPER) (Aliferi and Ballard 2022). While whole-genome data are more commonly used for research purposes despite their high cost, targeted low-dimensional data are cheaper to obtain and can be used for niche applications such as forensic genetics (Zbieć-Piekarska et al. 2015a, b; Park


et al. 2016; Freire-Aradas et al. 2016; Parson 2018; Aliferi et al. 2018; Jung et al. 2019).

As mentioned earlier, the original approaches to age prediction based on DNA methylation used linear regression. However, the growing amount of collected data and the discovery of deeper nonlinear relationships between variables necessitate the use of machine learning techniques. One option for developing epigenetic clocks with nonlinear models was the gradient boosting regression algorithm used in Li et al. (2018) and Zhang et al. (2021). These works consider blood methylation data for 1889 and 1191 samples, respectively, divided into independent training and test subsets. For CpG sites pre-selected using Pearson correlation with age, the mean absolute deviation on the test data was 4.06 years for 6 CpG sites in Li et al. (2018) and 3.90 years for 111 CpG sites in Zhang et al. (2021).

The same approach to feature selection was applied in Zaguia et al. (2022), where just six CpG sites highly correlated with age were identified. Four age prediction models were developed using multiple linear regression, support vector regression, gradient boosting regression, and random forest regression. Random forest regression showed the best performance, with a mean absolute deviation of 4.85 years for the test subset of healthy people and 9.53 years for the test subset of people with various diseases, thus demonstrating sensitivity to health status.

A more specific selection of CpG sites by correlation coefficient was used in Fan et al. (2022). A meta-analysis of the correlation coefficients in a Southern Han Chinese cohort of 7084 individuals was performed to select five age-associated genes (ELOVL2, C1orf132, TRIM59, FHL2, and KLF14) and the corresponding 34 CpG sites. Four age prediction models were compared: stepwise regression, support vector regression, and random forest regression for women and men separately, as well as for pooled data.
The optimized random forest regression model achieved a mean absolute deviation of only 1.15 years for ages 1–60. Gender was found to have only a weak influence on age prediction.

Different machine learning approaches for predicting chronological age from blood methylation data were compared in Lau and Fung (2020). In addition to the common multiple linear regression, random forest regression, support vector machines, and neural networks with one and two hidden layers were used. The authors also considered several methods for feature selection: LASSO—least absolute shrinkage and selection operator (Tibshirani 1996), ElasticNet (Zou and Hastie 2005), SCAD—smoothly clipped absolute deviation (Fan and Li 2001), and forward selection (Kutner et al. 2005). All methods selected fewer than 100 significant CpG sites for building machine learning models. The best mean absolute deviation on the test subset (3.57 years) was shown by multiple linear regression for 56 CpG sites selected by SCAD; the best root mean square error (4.68 years) was also shown by multiple linear regression, for 38 CpG sites selected by LASSO.

Another paper (Li et al. 2021) attempted to outperform linear models using a specially designed Correlation Pre-Filtered Neural Network (CPFNN), which pre-filters input features using Spearman correlation. CPFNN outperformed the linear Horvath and Hannum models, LASSO-regularized neural networks, and Dropout neural networks by at least 1 year in mean absolute error (2.7 years) for blood DNA methylation data.
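The two-step recipe that recurs in these works (pre-filter CpGs by correlation with age, then fit a nonlinear regressor on the selected sites) can be sketched as follows. All data and counts here are illustrative, not taken from any of the cited studies:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n, p = 400, 500
age = rng.uniform(1, 60, size=n)
beta = rng.normal(0.5, 0.1, size=(n, p))      # synthetic beta values
beta[:, :20] += 0.005 * (age - 30)[:, None]   # 20 age-correlated CpGs
beta = np.clip(beta, 0, 1)

# Step 1: rank CpGs by |Pearson correlation| with age, keep the top 34
corr = np.array([np.corrcoef(beta[:, j], age)[0, 1] for j in range(p)])
top = np.argsort(-np.abs(corr))[:34]

# Step 2: random forest regression on the selected sites
model = RandomForestRegressor(random_state=0).fit(beta[:, top], age)
mad = float(np.mean(np.abs(model.predict(beta[:, top]) - age)))
print(f"in-sample mean absolute deviation: {mad:.2f} years")
```

In practice the correlation filter is computed on the training subset only, and the deviation is reported on an independent test subset, as in the works above.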


The linear and gradient boosting models described above estimate age quite accurately, but the number of features they consider is still limited. To study the causes of age acceleration (the difference between real age and methylation age) and its relation to mortality risk (Soriano-Tárraga et al. 2018; Liu et al. 2018), the MethylNet model was developed (Levy et al. 2020). This deep learning model is first trained using variational autoencoders, whose layers are used to extract biologically relevant features in a new embedding space. The encoder is followed by prediction layers, which can solve both regression and classification problems. The age predicted by MethylNet showed good agreement with actual age, with a mean absolute error of 3.0 years on the test data.

Recently, deep learning models have been introduced in the field. An example is the DeepMAge aging clock (Galkin et al. 2021), the first deep learning model developed specifically for age prediction from DNA methylation data. This model used deep feature selection (Li et al. 2016) and gradient-based feature selection (Leray and Gallinari 1999) to find the most significant CpG sites. The final model is a regressor neural network with 4 hidden layers of 512 neurons each, applied to 1000 CpG sites. These CpG sites overlap quite well with the 353 CpG sites of the Horvath model, with 121 CpG sites in the intersection. The median absolute error was 2.77 years and the mean absolute error 3.80 years on an independent control sample. In addition to healthy subjects, the model was also tested on subjects with various diseases. Interestingly, DeepMAge was the first among such models to predict a higher age for people with diseases, thus demonstrating biological significance.
The development of methods for building one-tissue epigenetic clocks has made it possible not only to achieve high prediction accuracy but also to reveal higher age acceleration in people with various diseases. Such clocks can be used for early diagnostics: an increase of epigenetic age relative to chronological age can be an indicator of emerging problems in the human body, and timely detection of such a difference makes it possible to perform additional tests and start treatment in time. The most popular type of data used for such clocks is DNA methylation of whole blood or of different blood cells. However, a single tissue may not fully reflect the various processes occurring in the human body that affect health status and, therefore, the predicted age. To further investigate the interaction between methylation statuses in different tissues and their influence on biological age, pan-tissue epigenetic clocks were developed.

4.2.3 Pan-Tissue Epigenetic Clocks

The first pan-tissue epigenetic clock was developed by Horvath (2013), with a median absolute error of 3.6 years using regularized linear regression. More recent work (Vijayakumar and Cho 2022) also used the ElasticNet model, for 6761 CpG sites, with a median absolute error of 2.8 years on test data for 20 different tissues. Another development of Horvath's work was a study (Xu et al. 2019) proposing a predictor of


human age based on a gradient boosting regressor and using DNA methylation data from 14 different non-blood tissues. Principal component analysis was used to filter outliers. As for many one-tissue epigenetic clocks, the CpG sites best correlated with age (by Pearson correlation coefficient) were used to build the model, but in this case their overlap across all tissues was considered; 13 such age-related CpG sites were found in all tissues. A comparison of the gradient boosting regressor with three other models (Bayesian ridge, multiple linear regression, and support vector regression) showed better results for the first method: on the independent test sample, the gradient boosting regressor reaches a mean absolute deviation of 6.08 years, which is still worse than Horvath's result.

Having previously shown its successful applicability to the analysis of one-tissue DNA methylation data (Levy et al. 2020; Galkin et al. 2021), deep learning is a good candidate for application to pan-tissue data; moreover, the advantages of using neural networks for pan-tissue data have not been sufficiently explored. The paper (de Lima Camillo et al. 2022) presents AltumAge, a deep neural network that uses 20,318 CpG sites common to the Illumina 27k, 450k, and EPIC arrays to predict the age of all tissues. It is hypothesized that a neural network using CpG sites without pre-filtering can better predict pan-tissue age because of its ability to detect nonlinear interactions between features and to use the information contained in those CpG sites that would not pass filtering by age correlation or ElasticNet (Horvath 2013). AltumAge uses a multilayer perceptron analogous to Levy et al. (2020) and Galkin et al. (2021), showing excellent accuracy in age prediction, with a mean absolute error of only 2.153 years. AltumAge also significantly improves on the Horvath model: for the selected 353 CpG sites, its mean absolute error is 2.425 years.
Compared to Horvath’s model, AltumAge has fewer tissue types with a high mean absolute error. The Horvath model has a high error for breast, uterine endometrium, dermal fibroblasts, skeletal muscle, and heart (Horvath 2013). In AltumAge, the errors were much lower for these tissues. Also, AltumAge has better performance in the elderly, which may be important for identifying biomarkers of age-related diseases. The reason for this may be the wide pool of features used without additional pre-filtering (de Lima Camillo et al. 2022). The AltumAge model also supports the ability to interpret its predictions, showing the relationship of each feature to age.
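As a toy counterpart to these deep clocks, a small multilayer perceptron can be fitted to synthetic pan-tissue-style data. This is not AltumAge or DeepMAge; the architecture, data, and effect sizes are invented purely for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n, p = 500, 100
age = rng.uniform(20, 90, size=n)
beta = rng.normal(0.5, 0.1, size=(n, p))      # synthetic beta values
beta[:, :15] += 0.005 * (age - 55)[:, None]   # 15 informative CpGs
beta = np.clip(beta, 0, 1)

X = StandardScaler().fit_transform(beta)      # standardize inputs for the MLP
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, age)
mae = float(np.mean(np.abs(mlp.predict(X) - age)))
print(f"in-sample MAE: {mae:.2f} years")
```

Unlike a penalized linear clock, the hidden layers can, in principle, pick up interactions between CpGs, which is the motivation given for AltumAge's design.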

4.3 Machine Learning for Age-Related Diseases Classification

The concept of biological age determined by the epigenetic clock can explain the differences in biological status between individuals of the same chronological age (Andrews et al. 2017). A growing number of works confirm that the difference between biological age (DNA methylation age) and chronological age (the real age of an individual) is closely related to age-related disorders (Xiao et al. 2019). Developments in machine learning methods have made it possible to apply them


to DNA methylation data to solve problems of classifying various age-associated diseases.

4.3.1 Cancer Classification

Cancer is one of the leading causes of death in many countries (Cao et al. 2017) and reduces the chances of longevity (Andersen et al. 2005). Although a significant proportion of long-livers live with age-related diseases, most of them have the disability related to such diseases delayed toward the end of life (Hitt et al. 1999), suggesting an ability to minimize the impact of these diseases later in life. However, the mechanism by which long-livers minimize this impact is poorly understood. An evaluation of the age of cancer diagnosis among long-livers found that their average age of diagnosis was 80.5 years, compared to 63.2 years in the general population (SEER 2004). Some cancers were found to be very rare among long-livers, suggesting that certain cancers may be incompatible with living to a ripe old age (Andersen et al. 2005). Cancer can also be seen as a natural ceiling on human life expectancy: the incidence of the disease increases starting at about the middle of a person's maximum life span (Campisi 2003; Tørring 2017). Considering the above, early diagnosis of cancer can help to increase longevity and quality of life.

One of the most common machine learning tasks for DNA methylation data is the classification of different cancer types using the TCGA database (Weinstein et al. 2013). In particular, Zheng and Xu (2020) solve the problem of multiclass classification of 18 cancers of different tissues and organs from the GDC repository (GDC) with a multilayer perceptron, while Ding et al. (2019) classify 21 cancer types, and the MethylNet architecture considers 32 such types (Levy et al. 2020). For both models, accuracy and predictive value are higher than 0.97. Two models, MethylSPWNet and MethylCapsNet, which are extensions of MethylNet, show even higher results (accuracy over 0.98) for 38 CNS tumor subtypes (Levy et al. 2021).
MethylCapsNet allows grouping CpGs into capsules and then dynamically routing the capsules for prediction and interpretation of results. MethylSPWNet combines groups of CpGs through one locally connected layer; the resulting layer nodes represent biologically relevant units that are passed through a multilayer perceptron for prediction. For smaller subsets of TCGA (separately for breast, kidney, lung, and other tissues), classification results are also high (List et al. 2014; Hao et al. 2017; Celli et al. 2018; Dong et al. 2019; Jurmeister et al. 2019; Ma et al. 2020; Lian et al. 2020; Pu et al. 2021). Good tumor classification quality is observed not only for TCGA data but also for other datasets, including GEO (Clough and Barrett 2016) (Wei et al. 2006; Capper et al. 2018; Maros et al. 2020).

The Deep2Met model takes DNA methylation levels as input to a five-layer convolutional neural network to predict whether a patient with colorectal cancer has metastasis or not (Albaradei et al. 2019). The proposed model achieved an area under the precision-recall curve (AUPR), which estimates performance for unbalanced classes, of 96.99%. The authors state that the Deep2Met results

4 Obtaining Longevity Footprints in DNA Methylation Data Using …

75

show the ability to diagnose colorectal cancer based on the methylation profiles of individual patients (Albaradei et al. 2019; Nguyen et al. 2021). Two papers (Titus et al. 2018; Wang and Wang 2019) developed pipelines to classify breast cancer and lung cancer, respectively. After using a variational autoencoder to explore hidden features of the input data, the authors performed dimensionality reduction and then trained logistic regression classifiers to assign samples to subtypes with high accuracy (Nguyen et al. 2021). A novel class-incremental learning approach called Deep Generative Feature Replay, consisting of incremental feature selection and a scholar network, was proposed in Batbaatar et al. (2020). The scholar network contains a deep generative feature model and a neural network classifier. Incremental feature selection adaptively selects the highest-ranking features for each task. Variational autoencoders were used to pre-train the generative models on the selected features for further analysis. A simple neural network was used to classify 2728 cancer samples into 12 categories, achieving over 95% accuracy (Batbaatar et al. 2020). Genetic and epigenetic changes jointly determine tumor initiation and progression (Zhou et al. 2016). Aberrant levels of DNA methylation can be observed in various tumor cell types (Zhang et al. 2020c). Altered DNA methylation has been described as a major cancer-causing event (Campan et al. 2011) and can be divided into hypomethylation and hypermethylation. Excessive activation of proto-oncogenes caused by DNA hypomethylation is the main dysfunctional process during oncogenesis (Renaud et al. 2015, 2016; Good et al. 2018). Divergent methylation patterns are closely related to cell differentiation (Farlik et al. 2016). Even in the same cell line, methylation patterns can be dynamic at different stages (Kaaij et al. 2013; Petell et al. 2016), which is typical for tumor cells.
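The two-stage pipeline described above (dimensionality reduction of beta values, then a logistic regression classifier on the latent features) can be sketched as follows. This is a minimal illustration on synthetic data: the matrix sizes, the effect size, and the use of PCA as a lightweight stand-in for a trained variational autoencoder encoder are all assumptions, not details taken from the cited studies.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for a methylation matrix: 200 samples x 500 CpGs with
# beta values in [0, 1]; two hypothetical subtypes differ in mean
# methylation at the first 50 sites.
n_samples, n_cpgs = 200, 500
y = rng.integers(0, 2, size=n_samples)
X = rng.beta(2, 2, size=(n_samples, n_cpgs))
X[:, :50] += 0.2 * y[:, None]          # shift informative CpGs for class 1
X = np.clip(X, 0.0, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Stage 1: dimensionality reduction (PCA stands in here for the trained
# encoder); stage 2: logistic regression on the latent features.
clf = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On real data, the encoder would be trained separately on the full methylation matrix; only the second stage changes with the downstream task (subtype, metastasis status, etc.).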
According to their organs and tissues of origin, tumors can be divided into different subtypes with different patterns of whole-genome methylation (Sahm et al. 2017). For example, hypomethylation of the mucin gene MUC5AC is considered a feature of colorectal cancer (Renaud et al. 2015, 2016). Another study shows that BRCA1, an important tumor suppressor gene, is closely associated with breast and ovarian cancer when its promoter is hypermethylated (Evans et al. 2018).

4.3.2 Phenotype Classification

Other factors influencing longevity are habits and lifestyle. Smoking is often associated with the most common causes of death in the elderly and contributes to the high mortality and disability rates associated with many of the chronic diseases common in this age group (Bratzler et al. 2002). Smoking is a deterrent to successful aging because it is a source of oxidative stress, a potentially dangerous mechanism of health effects (Tafaro et al. 2004). Stopping smoking at any age significantly prolongs life (Taylor et al. 2002). Obesity and overweight in adulthood are also associated with significantly reduced life expectancy and increased

76

A. Kalyakulina et al.

early mortality. This decrease is similar to that observed with smoking. Obesity in adulthood is a predictor of death in old age (Peeters et al. 2003). Therefore, a second common type of classification problem is the determination of phenotype from DNA methylation data, particularly the determination of smoking status (e.g., current, former, or never smoker). Smoking is closely related to changes in DNA methylation (Christiansen et al. 2021). Many differentially methylated positions (DMPs) associated with smoking are well known (Guida et al. 2015; Joehanes et al. 2016). In the largest adult smoking methylation study to date (Joehanes et al. 2016), Illumina 450k methylation profiles for 15,907 blood samples were used to identify 2623 smoking-related DMPs. Many of these DMPs had already been associated with smoking in previous studies (Breitling et al. 2011; Zeilinger et al. 2013). The MethylNet architecture mentioned above achieved an accuracy of 73% in classifying smokers and nonsmokers (Levy et al. 2020). For the three smoking statuses (current, former, and never smoker), sensitivity and specificity reached up to 99% according to Bollepalli et al. (2019). Studies of DNA methylation performed in different populations suggest a link between DNA methylation and obesity (Xu et al. 2013; He et al. 2019; Gallardo-Escribano et al. 2020; Chen et al. 2021). Changes in gene methylation can alter gene transcription, leading to abnormal gene expression and ultimately obesity (Rodríguez-Rodero et al. 2017). The accuracy of obesity status classification is similar to that of smoking status, about 70% (Lee et al. 2022).
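A minimal sketch of smoking-status classification from methylation data might combine univariate DMP ranking with a linear classifier. The synthetic cohort below, the number of informative CpGs, and the effect size are invented for illustration; real pipelines such as those cited above select CpGs from EWAS-derived DMP lists rather than an in-sample F-test.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical cohort: 300 blood samples x 1000 CpGs; "smokers" (y = 1)
# are hypomethylated at 30 invented smoking-associated DMPs.
n_samples, n_cpgs, n_dmps = 300, 1000, 30
y = rng.integers(0, 2, size=n_samples)
X = rng.beta(5, 5, size=(n_samples, n_cpgs))
X[:, :n_dmps] -= 0.15 * y[:, None]     # smokers lose methylation at DMPs
X = np.clip(X, 0.0, 1.0)

# Rank CpGs with a univariate F-test (a simple stand-in for a published
# DMP list), keep the top 50, then fit a linear classifier; placing the
# selector inside the pipeline keeps cross-validation honest.
model = make_pipeline(SelectKBest(f_classif, k=50),
                      LogisticRegression(max_iter=1000))
accuracy = cross_val_score(model, X, y, cv=5).mean()
print(f"5-fold CV accuracy: {accuracy:.2f}")
```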

4.3.3 Case–Control Classification

The next task is classifying cases and controls for particular diseases. Many diseases are associated with differential DNA methylation (Robertson 2005; Gluckman et al. 2009; Soubry et al. 2013; Aref-Eshghi et al. 2018a; Rauschert et al. 2020). Epigenetics is closely related to environmental influences and is therefore potentially better suited than genetics alone for diagnosing and treating diseases (Berdasco and Esteller 2019). Because epigenetics can mediate between adverse conditions early in life and disease onset later in life, it can play a potential role in early diagnosis (Ong et al. 2015). It has been shown that adverse conditions early in life, such as starvation (Jang and Serra 2014) or maternal smoking during pregnancy (Joubert et al. 2016; Rauschert et al. 2019), can indirectly program child development at the epigenetic level (Bianco-Miotto et al. 2017). Examples of machine learning applications using epigenetic data include, among others, classification of cardiovascular diseases (Dogan et al. 2018; Zhao et al. 2022), developmental syndromes of the nervous system (Aref-Eshghi et al. 2018b; Haghshenas et al. 2020), and mental and neurological disorders (Park et al. 2020). Cardiovascular disease is one of the most common causes of death worldwide (World Health Organization 2022). Early diagnosis can significantly reduce the severity of its consequences, but despite advances in screening, diagnosis remains a major problem. In Cugliari et al. (2019), serious cases of cardiovascular disease (fatal and nonfatal myocardial infarction events,

4 Obtaining Longevity Footprints in DNA Methylation Data Using …

77

coronary revascularization) and cases of sudden death from an unspecified cardiac event were considered. Classically, cardiovascular risk is assessed using phenomenological variables such as blood pressure, body weight, smoking status, sex, and age. Cugliari et al. (2019) obtained more accurate predictions by combining such data with DNA methylation as inputs to a Random Forest model, achieving an ROC-AUC of 0.74. One of the diseases associated with aging is Alzheimer’s disease, a progressive form of dementia (Henderson 2007). Existing treatments can slow the progression of symptoms, so it is important to diagnose the disease early. In Mahendran and Durai Raj Vincent (2022), a deep-learning-based classification model with a specific approach to feature selection was used. Four feature selection models (LASSO regression, SVM, AdaBoost, Random Forest) were compared; AdaBoost showed the best result. An extended deep recurrent neural network (EDRNN) was implemented and compared with other classification models, including a convolutional neural network (CNN), a recurrent neural network (RNN), and a deep recurrent neural network (DRNN). The results showed a significant improvement in the classification accuracy of the proposed model (over 89%) compared to the other methods (Mahendran and Durai Raj Vincent 2022). Mental disorders range from mood disorders, such as bipolar disorder and depression, to psychotic disorders such as schizophrenia, as well as personality and eating disorders. Two types of mental disorders were studied in Xiong et al. (2020): major depressive disorder and bipolar disorder. Diagnosing these disorders is difficult and problematic in many respects. Statistical procedures were used to identify the differentially methylated loci that contribute most to diagnosis through DNA methylation. Principal component analysis (PCA) was performed to project the 6000 selected CpGs onto 50 principal components.
Several binary and multiclass classification models were then tested on the top principal components: logistic regression, random forest, neural network, and fuzzy clustering. Predictions of depression achieve excellent accuracy: almost always 100% for all models with appropriate data projection using PCA. For bipolar disorder, accuracy is about 90% for the binary model. Schizophrenia is another common psychiatric disorder. To train a case–control classifier based on blood DNA methylation, Gunasekara et al. (2021) examined correlated regions of systemic interindividual epigenetic variation (CoRSIVs) covered by the Illumina HumanMethylation450 array (Gunasekara and Waterland 2019). DNA methylation data in whole blood were used for a classification algorithm with feature selection based on sparse partial least squares discriminant analysis (SPLS-DA) (Chung and Keles 2010; Chun and Keleş 2010). The CoRSIV-based model classified 303 individuals as cases with a positive predictive value of 80% (Gunasekara et al. 2021). Another paper addresses the problem of distinguishing schizophrenia patients from healthy controls (Zhang et al. 2020b). The authors developed a classification method based on deep learning. They propose a feature selection method based on an attention mechanism that embeds a weight-constrained layer into the network structure to obtain a sparse representation of DNA methylation data. A deep autoencoder is used to reduce the dimensionality to two dimensions, and a linear SVM performs the final classification. This algorithm achieves 99% accuracy on test data (Zhang et al. 2020b).
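The PCA-then-classify scheme used for the mood-disorder data can be sketched as follows. The synthetic case-control matrix, the 50-component projection, and the two classifiers compared are illustrative assumptions, not a reproduction of Xiong et al. (2020).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Hypothetical case-control data: 150 samples x 600 pre-selected CpGs;
# cases shift a block of 40 correlated sites.
n_samples, n_cpgs = 150, 600
y = rng.integers(0, 2, size=n_samples)
X = rng.beta(3, 3, size=(n_samples, n_cpgs))
X[:, :40] += 0.18 * y[:, None]
X = np.clip(X, 0.0, 1.0)

# Project onto 50 principal components, then compare two classifiers,
# mirroring the pipeline described in the text.
for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(random_state=0))]:
    pipe = make_pipeline(PCA(n_components=50), clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: CV accuracy {score:.2f}")
```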


All the above studies solve the problem of classifying different diseases by identifying disease-specific epigenetic markers. Such studies can aid the early diagnosis of serious diseases, facilitating timely treatment and increasing the chances of longevity.

4.4 Unsupervised Learning for Cancer Differentiation

Another type of machine learning widely used to analyze methylation data is unsupervised learning, specifically clustering. Because of their non-Gaussian characteristics, traditional clustering methods based on the Gaussian distribution are not well suited to DNA methylation data (Ma et al. 2014; Si et al. 2016). To bring the range of methylation data closer to a Gaussian distribution, an M-value method using a logit transform has been proposed (Du et al. 2010). DNA methylation can also be modeled by a mixture of beta distributions (Du et al. 2010; Laurila et al. 2011; Ma and Leijon 2011; Ma and Teschendorff 2013) fitted by approximate methods such as variational Bayes inference (Bishop 2016) and Gibbs sampling (Liu 1994). In addition, the high dimensionality of methylation data creates many problems in the analysis, which is why dimensionality reduction is performed before cluster analysis. However, traditional methods are usually based on the assumption of a Gaussian distribution, so the statistical properties of DNA methylation data cannot be effectively described by such dimensionality reduction methods (Si et al. 2016). A deep autoencoding neural network model is used in Si et al. (2016) for the dimensionality reduction task (Schmidhuber 2015). The network consists of several stacked binary restricted Boltzmann machines (Hinton and Salakhutdinov 2006); the input and output of the model are treated as probability values and are naturally bounded in [0, 1], properties similar to those of DNA methylation data. Si et al. (2016) investigate DNA methylation data from cancer and healthy samples. First, they test the dimensionality reduction effect and show that low-dimensional features can effectively distinguish cancer from normal samples.
Then, unsupervised cluster analysis is performed using the features extracted from the deep neural network model, with final clustering by self-organizing feature maps. The results demonstrate that cancer and healthy samples can be efficiently clustered into different groups based on the deep neural network features, with an error rate of less than 3%. Wang and Wang (2018) use a variational autoencoder model for lung cancer (lung adenocarcinoma and lung squamous cell carcinoma) DNA methylation data and present initial results on a biologically relevant methylome embedding, a lower-dimensional latent space. This work uses lung cancer data from the TCGA database (Weinstein et al. 2013). A one-versus-rest logistic regression classifier on the encoded latent features classifies cancer subtypes quite well: classification accuracy was 0.92 for lung adenocarcinoma tumor samples, 0.75 for normal lung adenocarcinoma samples, 0.99 for lung squamous cell carcinoma tumor samples, and 1.00 for normal lung squamous cell carcinoma samples (Wang and Wang 2018).
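The M-value transform mentioned above (Du et al. 2010) maps beta values from [0, 1] to an unbounded, more nearly Gaussian scale via a base-2 logit; a minimal implementation with its inverse:

```python
import numpy as np

def beta_to_m(beta, eps=1e-6):
    """Logit (base-2) M-value transform of methylation beta values.

    M = log2(beta / (1 - beta)); eps guards against beta of exactly 0 or 1.
    """
    b = np.clip(beta, eps, 1 - eps)
    return np.log2(b / (1 - b))

def m_to_beta(m):
    """Inverse transform back to the [0, 1] beta scale."""
    return 2.0 ** m / (2.0 ** m + 1)

beta = np.array([0.1, 0.5, 0.9])
m = beta_to_m(beta)
print(m)                     # symmetric around 0; beta = 0.5 maps to M = 0
print(m_to_beta(m))          # round-trips to the original beta values
```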


The previously mentioned MethylNet model (Levy et al. 2020) also uses variational autoencoders to predict cancer type. MethylNet can serve as a mechanism to identify sources of disease heterogeneity and to capture tissue-specific features. Hidden profiles obtained for cancer subtypes showed clustering in high agreement with known differences between cancer types, and subtypes tended to be misclassified only within their superclass. Thus, MethylNet not only makes accurate classification predictions but also extracts latent features that are highly faithful to tissue biology (Levy et al. 2020). Another paper (del Amor et al. 2021) proposes a deep embedded refined clustering method for breast cancer differentiation based on CpG-island DNA methylation data. The proposed approach consists of two steps. The first step reduces the dimensionality of the methylation data with an autoencoder. The second step applies a clustering algorithm based on soft assignment in the latent space provided by the autoencoder. The proposed method achieves a clustering accuracy of 0.99 and an error rate of 0.73% on 137 breast tissue samples, while on another methylation database the accuracy was 0.93 and the error rate 6.57% on 45 samples. Thus, unsupervised learning approaches demonstrate fairly high accuracy in differentiating between different cancers as well as between subtypes of the same cancer. Such approaches can be useful for early disease diagnosis as well as for more precise determination of the disease type and prescription of the most appropriate treatment.
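The autoencoder-plus-soft-assignment scheme can be illustrated with a small sketch. Here PCA stands in for the autoencoder bottleneck, and the soft assignment uses the Student's t kernel of standard deep embedded clustering; the data, dimensions, and cluster count are invented for illustration rather than taken from del Amor et al. (2021).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Two synthetic "tumor subtypes" in methylation space.
n_samples, n_cpgs = 120, 400
labels = np.repeat([0, 1], n_samples // 2)
X = rng.beta(2, 2, size=(n_samples, n_cpgs))
X[labels == 1, :60] += 0.25
X = np.clip(X, 0.0, 1.0)

# Step 1: dimensionality reduction (PCA stands in for the trained
# autoencoder bottleneck).
Z = PCA(n_components=10, random_state=0).fit_transform(X)

# Step 2: centroids from k-means, then the soft assignment
# q_ij ~ (1 + ||z_i - mu_j||^2)^-1 used by deep embedded clustering.
centroids = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z).cluster_centers_
d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
q = 1.0 / (1.0 + d2)
q /= q.sum(axis=1, keepdims=True)      # rows sum to 1: soft cluster membership

hard = q.argmax(axis=1)
agreement = max((hard == labels).mean(), (hard != labels).mean())
print(f"cluster/label agreement: {agreement:.2f}")
```

In the full method, the soft assignments q would also serve as the target for refining the encoder, which the sketch omits.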

4.5 Conclusions

This chapter presents an overview of the different types of machine learning methods applied to various tasks of searching for longevity footprints in DNA methylation data (Fig. 4.1). The most representative type is supervised learning, which solves two main problems in the context of longevity and methylation data: regression of biological age and classification of age-associated diseases. Age acceleration relative to a person’s chronological age reflects the real state of health. Positive age acceleration (biological age greater than chronological age) may be the consequence of many negative factors, such as illness, stress, insomnia, and bad habits. At the same time, negative age acceleration (biological age less than chronological age) can be achieved through a healthy lifestyle, a balanced diet, and regular physical activity. One of the most common approaches to feature selection in this task is the selection of age-associated CpG sites. A large number of epigenetic clocks, models of human biological age, are constructed using multiple linear regression (most commonly ElasticNet). Many methods are used to solve both classification and regression problems: gradient boosting decision trees, random forests, support vector machines, and various neural networks (convolutional, feedforward, recurrent). The development of deep learning has also made it possible to apply such architectures to estimating human biological age. Examples of such models are, in particular, DeepMAge (Galkin et al. 2021) and AltumAge (de Lima Camillo et al.


2022). The second problem solved by supervised machine learning methods is the classification of age-associated diseases. The most common tasks of this kind are classification of cancers, of phenotypes, or of cases against controls. Deep architectures are also used for classification tasks: MethylSPWNet, MethylCapsNet (Levy et al. 2021), and Deep2Met (Albaradei et al. 2019). The second type of machine learning is unsupervised learning. When applied to methylation data, such methods are most commonly used for cancer differentiation by clustering. A common element of most approaches is a preliminary dimensionality reduction step; the most common techniques here are principal component analysis and autoencoders. Of particular note is the MethylNet model (Levy et al. 2020), which implements a complex structure capable of solving all the abovementioned problems using both supervised and unsupervised machine learning. Other types of machine learning, such as reinforcement learning, are underrepresented in methylation data analysis. One model, DeepCpG (Angermueller et al. 2017), solves the problem of predicting the methylation status of CpG sites not represented in the data under study.
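The ElasticNet clock construction mentioned above can be sketched on synthetic data. The cohort size, the number of age-associated CpGs, and their drift rate below are invented for illustration and are far smaller than in published clocks.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic cohort: 400 samples x 800 CpGs; 25 CpGs drift linearly with
# age, the rest are noise -- a toy version of the data behind published
# ElasticNet clocks.
n_samples, n_cpgs, n_age_cpgs = 400, 800, 25
age = rng.uniform(20, 90, size=n_samples)
X = rng.beta(2, 2, size=(n_samples, n_cpgs))
X[:, :n_age_cpgs] += 0.004 * (age[:, None] - 55)   # slow drift with age
X = np.clip(X, 0.0, 1.0)

X_tr, X_te, a_tr, a_te = train_test_split(X, age, test_size=0.25,
                                          random_state=0)

# ElasticNet with cross-validated regularisation: a sparse linear model
# of the kind used to build epigenetic clocks.
clock = ElasticNetCV(l1_ratio=0.5, cv=5, max_iter=5000, random_state=0)
clock.fit(X_tr, a_tr)

mae = np.abs(clock.predict(X_te) - a_te).mean()
print(f"held-out MAE: {mae:.1f} years; CpGs kept: {(clock.coef_ != 0).sum()}")
```

The L1 component of the penalty drives most coefficients to zero, which is why published clocks typically retain only a few hundred CpGs out of hundreds of thousands.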

Fig. 4.1 A schematic representation of the different types of machine learning methods, the problems they solve, and the approaches used to find longevity footprints in DNA methylation data


Funding Lobachevsky University funding in the framework of “Priority-2030” State program, project No. N-470-99.

Compliance with Ethical Standards Conflict of Interest The authors declare that they have no conflict of interest.

References

Albaradei S, Thafar M, Van Neste C et al (2019) Metastatic state of colorectal cancer can be accurately predicted with methylome. In: Proceedings of the 2019 6th international conference on bioinformatics research and applications. ACM, Seoul Republic of Korea, pp 125–130
Aliferi A, Ballard D (2022) Predicting chronological age from DNA methylation data: a machine learning approach for small datasets and limited predictors. In: Guan W (ed) Epigenome-wide association studies. Springer US, New York, NY, pp 187–200
Aliferi A, Ballard D, Gallidabino MD et al (2018) DNA methylation-based age prediction using massively parallel sequencing data and multiple machine learning models. Forensic Sci Int Genet 37:215–226. https://doi.org/10.1016/j.fsigen.2018.09.003
Andersen SL, Terry DF, Wilcox MA et al (2005) Cancer in the oldest old. Mech Ageing Dev 126:263–267. https://doi.org/10.1016/j.mad.2004.08.019
Andrews C, Nettle D, Larriva M et al (2017) A marker of biological age explains individual variation in the strength of the adult stress response. R Soc Open Sci 4:171208. https://doi.org/10.1098/rsos.171208
Angermueller C, Lee HJ, Reik W, Stegle O (2017) DeepCpG: accurate prediction of single-cell DNA methylation states using deep learning. Genome Biol 18:67. https://doi.org/10.1186/s13059-017-1189-z
Aref-Eshghi E, Bend EG, Hood RL et al (2018a) BAFopathies’ DNA methylation epi-signatures demonstrate diagnostic utility and functional continuum of Coffin–Siris and Nicolaides–Baraitser syndromes. Nat Commun 9:4885. https://doi.org/10.1038/s41467-018-07193-y
Aref-Eshghi E, Rodenhiser DI, Schenkel LC et al (2018b) Genomic DNA methylation signatures enable concurrent diagnosis and clinical genetic variant classification in neurodevelopmental syndromes. Am J Hum Genet 102:156–174.
https://doi.org/10.1016/j.ajhg.2017.12.008
Batbaatar E, Park KH, Amarbayasgalan T et al (2020) Class-incremental learning with deep generative feature replay for DNA methylation-based cancer classification. IEEE Access 8:210800–210815. https://doi.org/10.1109/ACCESS.2020.3039624
Bell CG, Lowe R, Adams PD et al (2019) DNA methylation aging clocks: challenges and recommendations. Genome Biol 20:249. https://doi.org/10.1186/s13059-019-1824-y
Belsky DW, Caspi A, Houts R et al (2015) Quantification of biological aging in young adults. Proc Natl Acad Sci USA 112:E4104–E4110. https://doi.org/10.1073/pnas.1506264112
Belsky DW, Moffitt TE, Cohen AA et al (2018) Eleven telomere, epigenetic clock, and biomarker-composite quantifications of biological aging: do they measure the same thing? Am J Epidemiol 187:1220–1230. https://doi.org/10.1093/aje/kwx346
Berdasco M, Esteller M (2019) Clinical epigenetics: seizing opportunities for translation. Nat Rev Genet 20:109–127. https://doi.org/10.1038/s41576-018-0074-2
Bergsma T, Rogaeva E (2020) DNA methylation clocks and their predictive capacity for aging phenotypes and healthspan. Neurosci Insights 15:2633105520942221. https://doi.org/10.1177/2633105520942221
Bianco-Miotto T, Craig JM, Gasser YP et al (2017) Epigenetics and DOHaD: from basics to birth and beyond. J Dev Orig Health Dis 8:513–519. https://doi.org/10.1017/S2040174417000733


Bibikova M, Barnes B, Tsan C et al (2011) High density DNA methylation array with single CpG site resolution. Genomics 98:288–295. https://doi.org/10.1016/j.ygeno.2011.07.007
Bishop CM (2016) Pattern recognition and machine learning, softcover reprint of the original 1st edition 2006 (corrected at 8th printing 2009). Springer New York, New York, NY
Bocklandt S, Lin W, Sehl ME et al (2011) Epigenetic predictor of age. PLoS ONE 6:e14821. https://doi.org/10.1371/journal.pone.0014821
Bollepalli S, Korhonen T, Kaprio J et al (2019) EpiSmokEr: a robust classifier to determine smoking status from DNA methylation data. Epigenomics 11:1469–1486. https://doi.org/10.2217/epi-2019-0206
Bratzler DW, Oehlert WH, Austelle A (2002) Smoking in the elderly—it’s never too late to quit. J Okla State Med Assoc 95:185–191; quiz 192–193
Breitling LP, Yang R, Korn B et al (2011) Tobacco-smoking-related differential DNA methylation: 27K discovery and replication. Am J Hum Genet 88:450–457. https://doi.org/10.1016/j.ajhg.2011.03.003
Campan M, Moffitt M, Houshdaran S et al (2011) Genome-scale screen for DNA methylation-based detection markers for ovarian cancer. PLoS ONE 6:e28141. https://doi.org/10.1371/journal.pone.0028141
Campisi J (2003) Cancer and ageing: rival demons? Nat Rev Cancer 3:339–349. https://doi.org/10.1038/nrc1073
Cao B, Bray F, Beltrán-Sánchez H et al (2017) Benchmarking life expectancy and cancer mortality: global comparison with cardiovascular disease 1981–2010. BMJ j2765. https://doi.org/10.1136/bmj.j2765
Capper D, Jones DTW, Sill M et al (2018) DNA methylation-based classification of central nervous system tumours. Nature 555:469–474. https://doi.org/10.1038/nature26000
Celli F, Cumbo F, Weitschek E (2018) Classification of large DNA methylation datasets for identifying cancer drivers. Big Data Res 13:21–28.
https://doi.org/10.1016/j.bdr.2018.02.005
Chen L, Dong Y, Bhagatwala J et al (2019) Effects of vitamin D3 supplementation on epigenetic aging in overweight and obese African Americans with suboptimal vitamin D status: a randomized clinical trial. J Gerontol Ser A 74:91–98. https://doi.org/10.1093/gerona/gly223
Chen N, Miao L, Lin W et al (2021) Integrated DNA methylation and gene expression analysis identified S100A8 and S100A9 in the pathogenesis of obesity. Front Cardiovasc Med 8:631650. https://doi.org/10.3389/fcvm.2021.631650
Christensen BC, Houseman EA, Marsit CJ et al (2009) Aging and environmental exposures alter tissue-specific DNA methylation dependent upon CpG island context. PLoS Genet 5:e1000602. https://doi.org/10.1371/journal.pgen.1000602
Christiansen C, Castillo-Fernandez JE, Domingo-Relloso A et al (2021) Novel DNA methylation signatures of tobacco smoking with trans-ethnic effects. Clin Epigenet 13:36. https://doi.org/10.1186/s13148-021-01018-4
Chun H, Keleş S (2010) Sparse partial least squares regression for simultaneous dimension reduction and variable selection. J R Stat Soc Ser B Stat Methodol 72:3–25. https://doi.org/10.1111/j.1467-9868.2009.00723.x
Chung D, Keles S (2010) Sparse partial least squares classification for high dimensional data. Stat Appl Genet Mol Biol 9:Article 17. https://doi.org/10.2202/1544-6115.1492
Clough E, Barrett T (2016) The gene expression omnibus database. In: Mathé E, Davis S (eds) Statistical genomics. Springer New York, New York, NY, pp 93–110
Cugliari G, Benevenuta S, Guarrera S et al (2019) Improving the prediction of cardiovascular risk with machine-learning and DNA methylation data. In: 2019 IEEE conference on computational intelligence in bioinformatics and computational biology (CIBCB). IEEE, Siena, Italy, pp 1–4
de Lima Camillo LP, Lapierre LR, Singh R (2022) A pan-tissue DNA-methylation epigenetic clock based on deep learning. npj Aging 8:4.
https://doi.org/10.1038/s41514-022-00085-y
del Amor R, Colomer A, Monteagudo C, Naranjo V (2021) A deep embedded refined clustering approach for breast cancer distinction based on DNA methylation. Neural Comput Appl. https://doi.org/10.1007/s00521-021-06357-0


Ding W, Chen G, Shi T (2019) Integrative analysis identifies potential DNA methylation biomarkers for pan-cancer diagnosis and prognosis. Epigenetics 14:67–80. https://doi.org/10.1080/15592294.2019.1568178
Dogan MV, Grumbach IM, Michaelson JJ, Philibert RA (2018) Integrated genetic and epigenetic prediction of coronary heart disease in the Framingham Heart Study. PLoS ONE 13:e0190549. https://doi.org/10.1371/journal.pone.0190549
Dong R, Yang X, Zhang X et al (2019) Predicting overall survival of patients with hepatocellular carcinoma using a three-category method based on DNA methylation and machine learning. J Cell Mol Med 23:3369–3374. https://doi.org/10.1111/jcmm.14231
Du P, Zhang X, Huang C-C et al (2010) Comparison of beta-value and M-value methods for quantifying methylation levels by microarray analysis. BMC Bioinform 11:587. https://doi.org/10.1186/1471-2105-11-587
Evans DGR, van Veen EM, Byers HJ et al (2018) A dominantly inherited 5′ UTR variant causing methylation-associated silencing of BRCA1 as a cause of breast and ovarian cancer. Am J Hum Genet 103:213–220. https://doi.org/10.1016/j.ajhg.2018.07.002
Fahy GM, Brooke RT, Watson JP et al (2019) Reversal of epigenetic aging and immunosenescent trends in humans. Aging Cell 18:e13028. https://doi.org/10.1111/acel.13028
Fan J, Li R (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 96:1348–1360. https://doi.org/10.1198/016214501753382273
Fan H, Xie Q, Zhang Z et al (2022) Chronological age prediction: developmental evaluation of DNA methylation-based machine learning models. Front Bioeng Biotechnol 9:819991. https://doi.org/10.3389/fbioe.2021.819991
Farlik M, Halbritter F, Müller F et al (2016) DNA methylation dynamics of human hematopoietic stem cell differentiation. Cell Stem Cell 19:808–822. https://doi.org/10.1016/j.stem.2016.10.019
Field AE, Robertson NA, Wang T et al (2018) DNA methylation clocks in aging: categories, causes, and consequences.
Mol Cell 71:882–895. https://doi.org/10.1016/j.molcel.2018.08.008
Fitzgerald KN, Hodges R, Hanes D et al (2021) Potential reversal of epigenetic age using a diet and lifestyle intervention: a pilot randomized clinical trial. Aging (Albany NY) 13:9419–9432. https://doi.org/10.18632/aging.202913
Florath I, Butterbach K, Müller H et al (2014) Cross-sectional and longitudinal changes in DNA methylation with age: an epigenome-wide analysis revealing over 60 novel age-associated CpG sites. Hum Mol Genet 23:1186–1201. https://doi.org/10.1093/hmg/ddt531
Franceschi C, Garagnani P, Parini P et al (2018) Inflammaging: a new immune–metabolic viewpoint for age-related diseases. Nat Rev Endocrinol 14:576–590. https://doi.org/10.1038/s41574-018-0059-4
Freire-Aradas A, Phillips C, Mosquera-Miguel A et al (2016) Development of a methylation marker set for forensic age estimation using analysis of public methylation data and the Agena Bioscience EpiTYPER system. Forensic Sci Int Genet 24:65–74. https://doi.org/10.1016/j.fsigen.2016.06.005
Galkin F, Mamoshina P, Aliper A et al (2020) Biohorology and biomarkers of aging: current state-of-the-art, challenges and opportunities. Ageing Res Rev 60:101050. https://doi.org/10.1016/j.arr.2020.101050
Galkin F, Mamoshina P, Kochetov K et al (2021) DeepMAge: a methylation aging clock developed with deep learning. Aging Dis 12:1252–1262. https://doi.org/10.14336/AD.2020.1202
Gallardo-Escribano C, Buonaiuto V, Ruiz-Moreno MI et al (2020) Epigenetic approach in obesity: DNA methylation in a prepubertal population which underwent a lifestyle modification. Clin Epigenet 12:144. https://doi.org/10.1186/s13148-020-00935-0
Garagnani P, Bacalini MG, Pirazzini C et al (2012) Methylation of ELOVL2 gene as a new epigenetic marker of age. Aging Cell 11:1132–1134. https://doi.org/10.1111/acel.12005
GDC genomic data commons data portal. https://portal.gdc.cancer.gov/. Accessed 26 May 2022


Ghaddar B, Naoum-Sawaya J (2018) High dimensional data classification and feature selection using support vector machines. Eur J Oper Res 265:993–1004. https://doi.org/10.1016/j.ejor.2017.08.040
Gluckman PD, Hanson MA, Buklijas T et al (2009) Epigenetic mechanisms that underpin metabolic and cardiovascular diseases. Nat Rev Endocrinol 5:401–408. https://doi.org/10.1038/nrendo.2009.102
Good CR, Panjarian S, Kelly AD et al (2018) TET1-mediated hypomethylation activates oncogenic signaling in triple-negative breast cancer. Cancer Res 78:4126–4137. https://doi.org/10.1158/0008-5472.CAN-17-2082
Gross AM, Jaeger PA, Kreisberg JF et al (2016) Methylome-wide analysis of chronic HIV infection reveals five-year increase in biological age and epigenetic targeting of HLA. Mol Cell 62:157–168. https://doi.org/10.1016/j.molcel.2016.03.019
Guida F, Sandanger TM, Castagné R et al (2015) Dynamics of smoking-induced genome-wide methylation changes with time since smoking cessation. Hum Mol Genet 24:2349–2359. https://doi.org/10.1093/hmg/ddu751
Gunasekara CJ, Waterland RA (2019) A new era for epigenetic epidemiology. Epigenomics 11:1647–1649. https://doi.org/10.2217/epi-2019-0282
Gunasekara CJ, Hannon E, MacKay H et al (2021) A machine learning case–control classifier for schizophrenia based on DNA methylation in blood. Transl Psychiatry 11:412. https://doi.org/10.1038/s41398-021-01496-3
Haghshenas S, Bhai P, Aref-Eshghi E, Sadikovic B (2020) Diagnostic utility of genome-wide DNA methylation analysis in mendelian neurodevelopmental disorders. Int J Mol Sci 21:E9303. https://doi.org/10.3390/ijms21239303
Hannum G, Guinney J, Zhao L et al (2013) Genome-wide methylation profiles reveal quantitative views of human aging rates. Mol Cell 49:359–367. https://doi.org/10.1016/j.molcel.2012.10.016
Hao X, Luo H, Krawczyk M et al (2017) DNA methylation markers for diagnosis and prognosis of common cancers. Proc Natl Acad Sci USA 114:7414–7419.
https://doi.org/10.1073/pnas.1703577114
He F, Berg A, Imamura Kawasawa Y et al (2019) Association between DNA methylation in obesity-related genes and body mass index percentile in adolescents. Sci Rep 9:2079. https://doi.org/10.1038/s41598-019-38587-7
Henderson VW (2007) Alzheimer’s disease and other neurological disorders. Climacteric 10:92–96. https://doi.org/10.1080/13697130701534097
Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313:504–507. https://doi.org/10.1126/science.1127647
Hitt R, Young-Xu Y, Silver M, Perls T (1999) Centenarians: the older you get, the healthier you have been. Lancet 354:652. https://doi.org/10.1016/S0140-6736(99)01987-X
Horvath S (2013) DNA methylation age of human tissues and cell types. Genome Biol 14:R115. https://doi.org/10.1186/gb-2013-14-10-r115
Horvath S, Raj K (2018) DNA methylation-based biomarkers and the epigenetic clock theory of ageing. Nat Rev Genet 19:371–384. https://doi.org/10.1038/s41576-018-0004-3
Horvath S, Ritz BR (2015) Increased epigenetic age and granulocyte counts in the blood of Parkinson’s disease patients. Aging (Albany NY) 7:1130–1142. https://doi.org/10.18632/aging.100859
Horvath S, Garagnani P, Bacalini MG et al (2015a) Accelerated epigenetic aging in Down syndrome. Aging Cell 14:491–495. https://doi.org/10.1111/acel.12325
Horvath S, Pirazzini C, Bacalini MG et al (2015b) Decreased epigenetic age of PBMCs from Italian semi-supercentenarians and their offspring. Aging (Albany NY) 7:1159–1170. https://doi.org/10.18632/aging.100861
Horvath S, Langfelder P, Kwak S et al (2016) Huntington’s disease accelerates epigenetic aging of human brain and disrupts DNA methylation levels. Aging (Albany NY) 8:1485–1512. https://doi.org/10.18632/aging.101005

4 Obtaining Longevity Footprints in DNA Methylation Data Using …

A. Kalyakulina et al.


Chapter 5

The Role of Assistive Technology in Regulating the Behavioural and Psychological Symptoms of Dementia Emily A. Hellis and Elizabeta B. Mukaetova-Ladinska

Abstract In this paper, we review the types of assistive technologies developed to aid people with dementia and assist them with the behavioural and psychological symptoms of dementia (BPSD). By reviewing current research, our aim was to understand the effectiveness of such technologies in regulating BPSD, and at the same time to consider the issues arising from the use of assistive technologies in these vulnerable individuals. A systematic analysis was used in which 76 studies were identified, with 44 of them included in this review. Reviewing the literature revealed a large movement from basic assistive technologies to much more advanced home systems and artificial intelligence. By reflecting upon the ethical issues in relation to assistive technologies and dementia care, as well as the extent to which quality of life is improved through the regulation of these symptoms, we hope to guide future research in this area.

Keywords Older people · Dementia · Behavioural and psychological symptoms of dementia · Dementia care · Assistive technology · Artificial intelligence

5.1 Introduction

The demographic projection of the aging population poses concerns in relation to age-related diseases and, as such, the care provided for older individuals (Robinson et al. 2013). Currently, among the most prevalent age-related diseases are the dementias. Dementia has been defined as 'a syndrome of cognitive impairment that affects memory, cognitive abilities and behaviour' (World Health Organisation 2019). In addition, dementia significantly interferes with an individual's capability to undertake daily activities, such as dressing appropriately and communicating with others, and is accompanied by significant memory problems.

E. A. Hellis · E. B. Mukaetova-Ladinska (B) School of Psychology and Vision Sciences, University of Leicester, University Road, LE1 7RH Leicester, UK e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_5


E. A. Hellis and E. B. Mukaetova-Ladinska

Individuals with dementia do not experience identical symptoms, but most will experience a range of behavioural and psychological symptoms during the progression of the disease. These symptoms can be classified into five domains: cognitive/perceptual (delusions, hallucinations), motor (e.g., pacing, wandering, repetitive movements, physical aggression), verbal (e.g., yelling, calling out, repetitive speech, verbal aggression), emotional (e.g., euphoria, depression, apathy, anxiety, irritability), and vegetative (disturbances in sleep and appetite) (Finkel 2000; Cloak and Al Khalili 2022). These symptoms have often been associated with poor outcomes for people with dementia (PwD) and their caregivers, including distress among patients, long-term hospitalization, misuse of medication, and increased health care costs (Cerejeira et al. 2012). With up to 90% of people with dementia presenting with some of these symptoms (Cerejeira et al. 2012), it seems appropriate to explore ways in which relief can be provided. It is often the behavioural and psychological symptoms of dementia (BPSD) that are most difficult to regulate, and these symptoms are often the reason that PwD are institutionalised (Schneider et al. 1990). In addition to being a predictor of institutionalisation, BPSD are also associated with caregiver distress and anxiety (Huang et al. 2012; Feast et al. 2016; Romero-Moreno et al. 2016). Thus, it is no surprise that dementia is said to be one of the major factors affecting the quality of life of older generations, and it is arguably one of the main challenges facing care providers for older adults, primarily due to the great deal of support and assistance required as the disease progresses.
Currently, there is no cure for dementia, but a reduction in BPSD appears to have a positive impact in alleviating pressures on caregivers and improving the quality of life of both PwD and their caregivers (Dyer et al. 2018). Often, PwD become frustrated by everyday tasks, causing their caregivers a great deal of anxiety: they may worry about leaving the individual alone. This may be due to concerns that PwD may forget to take medication, put themselves in harmful situations (e.g. wandering or using domestic appliances unsupervised), or be at a greater risk of falling and sustaining injuries. This inability or reduced capacity to undertake everyday tasks may decrease the individual's quality of life, as well as their self-esteem, often leading to social isolation (Sachdev 2021). Frequently, PwD experience difficulties with communication, leading to further isolation, which has a negative emotional impact on the individual (Mahendra et al. 2018) and further escalates BPSD. Mahendra et al. (2018) suggested that these communication difficulties experienced by PwD often reflect underlying impairments of executive functioning, memory and attention. Although PwD retain sound and speech production, difficulties with expressive language and the auditory comprehension of abstract language are often present. These are associated with a progressive decline in the ability to initiate or maintain conversation, again leading to further social withdrawal and a negative emotional impact. These communication difficulties and BPSD are challenging, and the link between quality of life and BPSD varies from person to person (Feast et al. 2016), thus necessitating further support and


assistance for PwD in order to decrease the distress caused to them, their families and other people involved in their care. The support required for a PwD is costly, with a predicted average cost of £58,900 per PwD per year by 2040 (Wittenberg et al. 2020). A recent longitudinal study conducted by Henderson et al. (2019) found that unpaid care accounted for 87% of total costs, at around 36 h per week, with additional service costs for the 3-month period totalling between £3484 and £8609 per individual. The authors concluded that 'policy makers should recognize the high costs of unpaid care for people with dementia, who do not always get the support that they need or would like to receive'. We are much in agreement with this statement. These figures, along with statistics showing that there will be 1 million PwD living in the UK by 2025 (Blott 2021), indicate the need for an urgent review of dementia care. This emphasises the need for the development of more cost-effective resources, in order to ascertain whether symptoms of dementia and caregiver burden can be reduced. Technology is one tool that can be used to help improve the independent living, safety, and autonomy of PwD. In some instances, it has been advocated as an even more cost-effective option than residential care (Carswell et al. 2009), which in turn can relieve caregiver burden and may delay the need for 24-h care. At this stage, it is important to highlight the concept of reducing the burden on carers, as it is seemingly a key aim in the development of therapeutic, pharmacological, and technological interventions for dementia. There is a wide body of research surrounding assistive technology (AT) and dementia, with a primary focus on developing new ATs to enable caregivers to provide a higher-quality and more sustainable level of care.
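The cost figures cited above can be put on a common annual footing with some back-of-envelope arithmetic. The sketch below uses the numbers from Henderson et al. (2019) as given in the text; the simple annualisation (four 3-month periods, 52 weeks) is our own illustrative simplification, not a calculation from the study.

```python
# Back-of-envelope arithmetic on the dementia-care cost figures cited above.
# Purely illustrative; annualisation is a naive simplification.

service_cost_3mo_low, service_cost_3mo_high = 3484, 8609  # GBP per 3-month period
unpaid_hours_per_week = 36                                # hours of unpaid care

# Naive annualisation: four 3-month periods per year
service_cost_year_low = 4 * service_cost_3mo_low
service_cost_year_high = 4 * service_cost_3mo_high

# Unpaid care hours accumulated over a year
unpaid_hours_per_year = unpaid_hours_per_week * 52

print(f"Annual service costs: £{service_cost_year_low:,}–£{service_cost_year_high:,}")
print(f"Unpaid care: {unpaid_hours_per_year:,} hours per year")
```

Even this crude tally (roughly £14,000–£34,000 of annual service costs plus nearly 1900 hours of unpaid care per year) illustrates why more cost-effective resources are sought.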
AT is an umbrella term for customisable systems and devices that aim to enhance, simplify or sustain an individual's independence and functional capabilities. Enabling participation in areas of life that were previously difficult or impossible to access can increase people's well-being (Van Niekerk et al. 2018). For PwD, therefore, the primary aim of AT is to enable them to continue leading purposeful and independent lives by aiding their social and physical functions, decreasing the likelihood of social isolation and a negative emotional state (Hung et al. 2019). AT is thus seen as the future of dementia care. The aim of this article is to review the current research on AT and its role in regulating the BPSD of PwD. By reviewing current research, we aim to understand the effectiveness of such technologies in regulating symptoms, whilst also considering any issues with AT use in and by vulnerable individuals.

5.2 Methodology To assess and interpret current research in this domain, we used a systematic study selection process. For this, we followed the general consensus that defines an AT as a device that aids an individual to complete a task that they would not usually be able to do, or a device that enables an individual to carry out a task without difficulty


Fig. 5.1 Search terms used to identify relevant references:

- Participant identification terms: Dementia; Alzheimer's; Elderly; Old Age; Dementia Symptoms
- Technology terms: Assistive Technology; Technology; Communication Technology; IT; Artificial Intelligence; AI; Smart Homes; Assistive Aids
- Other terms: Safety; Symptom Regulation; Symptom Relief; BPSD; Psychological Symptoms; Wandering; Agitation; Aggression; Inappropriate Behaviour

and in a safer way (Evans et al. 2015), with the primary aim being for the individual to continue leading a purposeful and independent life, by aiding their social and physical functions (Hung et al. 2019). The search was conducted in stages: planning, conducting, and reporting. The planning stage involved determining search terms and inclusion/exclusion criteria, which then enabled the review to be conducted and results reported. Search terms are presented in Fig. 5.1, whereas the systematic literature searches were carried out across a number of databases (see Fig. 5.2).

Fig. 5.2 Systematic search process. Initial titles were identified across five databases: Google Scholar (n = 34), ResearchGate (n = 27), PsycINFO (n = 7), Sage Journals (n = 5) and PubMed (n = 3), giving 76 initial titles. Of these, 12 were rejected as irrelevant, leaving 64 abstracts for review; a further 18 were rejected as irrelevant, leaving 46 full papers for review.
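The search-and-screening protocol above reduces to simple bookkeeping. The sketch below combines the grouped terms of Fig. 5.1 into a boolean query and tallies the screening flow of Fig. 5.2; the query syntax (terms OR'ed within a group, groups AND'ed together) is our assumption for illustration, not the review's stated method.

```python
# Hypothetical sketch of the search protocol: build a boolean query from
# grouped terms (Fig. 5.1) and tally the screening flow (Fig. 5.2).

def build_query(*term_groups):
    """OR the terms within each group, AND the groups together.
    Multi-word terms are quoted as phrases."""
    def fmt(term):
        return f'"{term}"' if " " in term else term
    return " AND ".join(
        "(" + " OR ".join(fmt(t) for t in group) + ")" for group in term_groups
    )

participant_terms = ["Dementia", "Alzheimer's", "Elderly", "Old Age"]
technology_terms = ["Assistive Technology", "Artificial Intelligence", "Smart Homes"]
query = build_query(participant_terms, technology_terms)
# e.g. (Dementia OR Alzheimer's OR Elderly OR "Old Age") AND ("Assistive Technology" OR ...)

# Screening-flow bookkeeping, with counts taken from Fig. 5.2
per_database = {"Google Scholar": 34, "ResearchGate": 27, "PsycINFO": 7,
                "Sage Journals": 5, "PubMed": 3}
initial_titles = sum(per_database.values())     # 76 initial titles
abstracts_reviewed = initial_titles - 12        # 12 titles rejected as irrelevant
full_papers_reviewed = abstracts_reviewed - 18  # 18 abstracts rejected as irrelevant
```

Note that the per-database counts do sum to the 76 initial titles reported, and the two rejection steps reproduce the 64 abstracts and 46 full papers reviewed.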


5.3 Results

5.3.1 Assistive Technologies and Dementia

This literature review identified several different ATs that are currently being investigated or used to help manage behavioural and psychological symptoms in PwD. Based on their intended use, they can be grouped into several categories: technologies to aid communication, technologies to aid motor behaviour, technologies to address inappropriate behaviour, Smart Home technologies, and artificial intelligence.

5.3.2 Assistive Technology to Aid Communication As dementia progresses, PwD’s ability to communicate dramatically declines. This inability to communicate and recognise family members, friends and carers often has a negative impact on the individual’s social interaction and may cause loneliness and negative emotions. A common aspect of BPSD is apathy, which involves a decrease in motivations and social engagements, diminished reactivity and emotional indifference (Gilmore-Bykovskyi 2018). These symptoms may be present in up to 83% of individuals suffering from dementia (Baharudin et al. 2019). Since inability to communicate effectively brings great distress to persons with dementia, ATs to aid and regulate this symptom have been developed. These technologies have been summarised in Table 5.1. The concept of Morris et al. (2004) ‘Social Memory Aid’ has been applied to the development of other communication aids, for example these principles have been used in the development of the COGNOW system (Meiland et al. 2007). This system is part of a larger heavily user-driven project supported by the European FP6 research programme, with an aim to develop innovative solutions to help people in the early stages of dementia in maintaining independence. During development the project relied on significant clinical advice and guidance. The system was based on an AT that could be combined with other technologies such as a smart home to provide a variety of assistance. The aim of this technology was to improve the quality of life of PwD by enabling them to make calls independently (Meiland et al. 2007). In a latter study by the same group (Meiland et al. 2012), the usability of the system, known as COGKNOW Day Navigator (CDN), was investigated in a smaller number of PwDs (12–16 people) who participated in 3 cycles, over one year period in 3 research centres. 
Although PwD and their carers overall valued the CDN as user-friendly and useful, the effectiveness of the system in daily life was limited. The authors attributed the latter to the insufficient duration of the testing period, caused by delays in development, and to some instability of the final prototype. Current literature discusses augmentative and alternative communication (AAC) and its benefits in compensating for the deterioration in communication witnessed in PwD. AAC harnesses evidence-based strategies and practices to offer support


E. A. Hellis and E. B. Mukaetova-Ladinska

Table 5.1 Assistive technologies to aid communication

- Morris et al. (2004). AT: social memory aid. Functional description: facial recognition and name recall aided through cues and questions. Benefits: aids social interaction and improves quality of life. Issues: out-of-date technology, though its principles can be applied to the development of other communication aids.
- Meiland et al. (2007). AT: COGKNOW. Functional description: combines visual, written and verbal cues for recall in relation to telephone calls. Benefits: can be combined with other technologies, e.g. smart homes, to provide a variety of assistance; relieves the caregiver. Issues: potentially too technologically advanced for older adults.
- Evans et al. (2015) and Goodall et al. (2021). AT: AAC devices. Functional description: devices designed to enhance and support the PwD’s natural communication abilities. Benefits: reduces social isolation. Issues: current devices use non-electronic systems and are relatively simplistic.
- May et al. (2019). AT: computerised AAC devices. Functional description: systems which analyse, support and generate human language. Benefits: aids social interaction; reduces feelings of loneliness; useful through all stages of the disease. Issues: affordability.

Abbreviations: AAC augmentative and alternative communication; AT assistive technology

with communication to PwD through both aided and unaided systems (Fried-Oken et al. 2015). These systems vary in form, from picture communication cards or books, similar to those of Morris et al. (2004), to more complex computerised devices, including assistive devices designed to enhance and support the PwD’s natural communication abilities with the aim of reducing social isolation (Evans et al. 2015; Goodall et al. 2021). Currently, most AAC interventions aimed at PwD use non-electronic systems; however, there is emerging evidence that the latest AACs focus on computerised systems which analyse, support and even generate language (May et al. 2019) for use by PwD. These novel AAC developments have the potential not only to enhance the communication skills of PwD but also to offer a promising novel approach to neuropsychological assessment of speech, through the possibility of analysing distinct speech properties, including length, hesitancy, empty content and grammar (Pakhomov et al. 2010), all known to be affected in the course of dementia; they can thus be used for both assessment and monitoring of cognitive deterioration in PwD.
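The kind of speech-property analysis described above can be illustrated with a short sketch. This is a hypothetical example, not code from Pakhomov et al. (2010): the hesitation-marker list and feature names are assumptions made for illustration, and real clinical pipelines use far richer, validated linguistic features.

```python
import re

# Hypothetical filler-word list; real analyses use validated marker sets.
HESITATION_MARKERS = {"um", "uh", "er", "erm"}

def speech_features(transcript: str) -> dict:
    """Crude proxies for three of the speech properties mentioned in the
    text: utterance length, hesitancy and lexical diversity (a rough
    stand-in for 'empty content')."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    n = len(tokens)
    if n == 0:
        return {"length": 0, "hesitancy": 0.0, "type_token_ratio": 0.0}
    hesitations = sum(1 for t in tokens if t in HESITATION_MARKERS)
    return {
        "length": n,                               # words per utterance
        "hesitancy": hesitations / n,              # share of filler words
        "type_token_ratio": len(set(tokens)) / n,  # lexical-diversity proxy
    }
```

Tracking such features over repeated sessions could, in principle, support the monitoring role described above, although any clinical use would require validated instruments.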

5 The Role of Assistive Technology in Regulating the Behavioural …


5.3.3 Assistive Technology to Aid Motor Behaviour

One of the more common BPSD is wandering. This poses concerns for both PwD and their caregivers, since PwD are themselves at risk of injury: they may fall while wandering and sustain additional medical consequences, including hip fracture and delirium (acute confusion). Due to cognitive impairment, they may be disoriented in place, i.e. not know where they are or how to navigate their way back to familiar places, and may talk about attending to former obligations, such as going to work, look for someone, or elope. All of these can cause the PwD distress and expose them to risks, such as getting lost or sustaining physical injuries. This is associated with increased caregiver distress, cost of care and negative emotions, and raises the likelihood of institutionalised care for PwD.

Wandering has been heavily researched, with most articles concluding that it is exacerbated at night-time, which increases the risk of hazards in the dark. Excessive wandering also seems to be related to the environment the PwD is in: specifically, a quiet environment is linked to a reduction in agitation (a common cause of wandering) and hence a reduction in wandering behaviour (Carswell et al. 2009). A summary of the assistive technologies to aid motor behaviour is presented in Table 5.2.

ATs for motor behaviour aim to track movements via electronic tags and pressure sensors and are used for the benefit of caregivers. Greiner et al. (2007) discussed technology which addressed wandering in persons with dementia using electronic tags in a dementia care unit. The majority of wandering was identified during the night-time, and carers found that understanding movement patterns in this way improved their level of care for PwD; a reduction in falls was also observed.
Pressure sensors and ‘tagged’ shoes have been discussed in a further study conducted on PwD residing in residential dementia settings (Miura et al. 2008). Results indicated an improvement in wandering behaviours in the majority of patients due to quicker caregiver intervention. It is noteworthy that the piezoelectric pressure mats (which the authors considered for safety) should only be placed below carpets, meaning they may not be appropriate for all home or residential settings. There is an increase in both the development and practical use of these types of devices and systems, particularly in the form of intelligent AT devices. Recent research by Kernebeck et al. (2019) introduced the concept of a smart watch/sensor bracelet worn by PwD which includes an accelerometer for wandering behaviours, a gyroscope for location detection, skin temperature sensing, photoplethysmography to monitor heart rate and Bluetooth beacon recording. This smart watch is linked to an app which can be used on a smartphone or tablet and is readily available for a caregiver to use. The work by Kernebeck et al. (2019) enables linking this smart watch information to general practitioners to enable more informed caregiving. In addition, these devices have the potential to decrease costs associated with dementia care, such as injuries due to falls when wandering and additional healthcare costs


Table 5.2 Assistive technologies to aid motor behaviour

- Greiner et al. (2007). AT: electronic tags. Functional description: tracks movements of PwD. Benefits: enables caregivers to understand movement patterns; improves level of care; reduction in falls. Issues: none documented.
- Miura et al. (2008). AT: pressure sensors; shoe tags; piezoelectric pressure mats. Functional description: alerts caregivers to pressure changes indicating movement. Benefits: allows movement tracking; alerts caregivers to movement. Issues: piezoelectric pressure mats should only be placed below carpets, so not appropriate for all home/residential settings.
- Kernebeck et al. (2019). AT: smart watch. Functional description: tracks movement, location, physiological symptoms and Bluetooth beacon recording. Benefits: allows movement tracking; prevents falls; monitors vital signs; links to a smartphone application for caregivers; future aim of sending data to general practitioners. Issues: still in developmental stages, not readily available for home use; raises ethical concerns regarding privacy and data sharing.
- Sposaro et al. (2010). AT: iWander smartphone application. Functional description: monitors movement. Benefits: monitors and tracks movements of PwD; assists with wandering behaviour and prevents falls; relieves caregiver burden; maintains functional independence of PwD. Issues: potentially too complicated for older adults to use.
- Lacey and Rodriguez-Losada (2008). AT: Guido robotic walker. Functional description: robotic mobility aid which provides verbal cues to aid navigation through the environment. Benefits: personal, adaptable mobility aid; aids visually impaired and frail individuals. Issues: affordability and availability.


linked to fall-associated injuries in PwD. It is important to note that this AT is still in its development stages and is not yet widely available to PwD and their caregivers, or to 24-h healthcare facilities (i.e. nursing or residential homes catering for PwD). An example of an AT to aid motor behaviour that is currently in use is a smartphone application called ‘iWander’ (Sposaro et al. 2010), designed to monitor movement, helping keep track of PwD and assisting with wandering behaviour. It has been shown to have a positive impact on caregiver burden and stress, whilst also maintaining the functional independence of PwD. Motor behaviour has also been aided by developments in the robotics field of AT. One prototype robotic walker, the smart walker ‘Guido’ (Lacey and Rodriguez-Losada 2008), acts as a personal mobility aid and helps visually impaired and frail individuals to navigate through the environment using verbal cues to avoid objects. A more recently produced smart walker, ‘ASBGo++’, has been developed to autonomously adapt to the user’s needs and the level of assistance required (Moreira et al. 2019). Although these ATs are still in their developmental stages, they could be incredibly useful in dementia care, particularly in assisting more frail and vulnerable individuals to walk with a much-decreased risk of falling.
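The tracking-and-alerting logic common to iWander-style applications can be sketched as a simple geo-fence check. This is an illustrative sketch only: the published iWander algorithm is probabilistic and more sophisticated, and the function names, safe-zone radius and night-time window below are assumptions made for the example (the tighter night-time zone reflects the reports above that wandering is exacerbated at night).

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def wandering_alert(fix, home, radius_m=200.0, hour=None, night=(22, 6)):
    """Flag a GPS fix outside the safe zone around 'home'; night-time
    fixes use a tighter zone (hypothetical thresholds)."""
    dist = haversine_m(fix[0], fix[1], home[0], home[1])
    is_night = hour is not None and (hour >= night[0] or hour < night[1])
    limit = radius_m / 2 if is_night else radius_m
    return dist > limit
```

In a real deployment the alert would trigger a caregiver notification rather than simply returning a flag, and the thresholds would be tuned per individual.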

5.3.4 Assistive Technology to Aid Inappropriate Behaviours

The inappropriate behaviours detailed in BPSD have been subdivided into physically aggressive behaviour (kicking, hitting, biting), physically non-aggressive behaviour (pacing, inappropriate dressing), verbally aggressive behaviour (swearing, screaming), and verbally non-aggressive behaviour (agitation, repetition). These inappropriate behaviours are among the more difficult symptoms to regulate; the technological advances developed to aid with them are described below and summarised in Table 5.3.

Additional technologies have been developed to increase the independence of PwD. The assistive dressing system of Burleson et al. (2015) leads to increased autonomy, a sense of achievement and potentially an increased quality of life owing to a new level of independence for PwD. The COACH video monitoring system allows individuals to carry out daily activities independently with the use of tracking jewellery and an algorithm linked to a video monitor to guide PwD through hand washing (Mihailidis et al. 2008). Similarly, the ‘Friendly Restroom Project’ (2002–2005) was designed to enable a greater level of independence, autonomy and privacy for people with dementia (Molenbroek and Bruin 2011). This EU-funded project’s goal was to provide recommendations for improving the toilet area, focussing in particular on the special needs of elderly and disabled people, by performing several user studies and exploring the potential of ATs. The project’s prototype was based on anthropometrics (research that includes measurements of the human body) and ergonomics


Table 5.3 Assistive technologies to aid with inappropriate behaviours

- Burleson et al. (2015). AT: assistive dressing system. Functional description: uses sensors to provide cues and prompts so the individual can complete the dressing process independently. Benefits: increased autonomy; independent achievement; increased QoL. Issues: obtrusive system.
- Mihailidis et al. (2008). AT: COACH. Functional description: video monitoring system which allows PwD to carry out ADLs independently through tracking jewellery and an algorithm linked to a video monitor that guides the PwD through appropriate hand washing. Benefits: increases independence of the PwD; provides relief for caregivers. Issues: ethical concerns regarding privacy with a video monitoring system.
- Molenbroek and Bruin (2011). AT: Friendly Restroom project. Functional description: combination of methods and technologies. Benefits: increases independence of PwD. Issues: privacy concerns.

(which involves using anthropometric data when designing products to improve user experience) studies. The data obtained from 255 participants provided knowledge about variations in human characteristics and behaviour in a toilet environment, and about variation in product specifications and its effect on human behaviour. The methods and technologies involved included contactless smart card technologies with read–write capabilities, a voice activation interface, motion control and sensor systems, mechanical engineering and robotic techniques, mathematical modelling, as well as ergonomic research and designs inspired by philosophy, gerontechnology and the medical and social sciences. End-users and secondary users, as well as caretakers and rehabilitation professionals, were involved in all stages of the research and problem-solving processes of the prototype development. As these technologies increase independence and restore a level of dignity, they simultaneously help in the regulation of PwD agitation and aggression (Gedde et al. 2021). It has been suggested that the levels of frustration and agitation experienced in dementia are due to being unable to complete daily tasks and often lead to inappropriate behaviours such as physical and verbal aggression. This leads us to consider the premise of indirectly regulating, through these assistive systems, the behavioural and psychological symptoms that manifest as inappropriate behaviours.

5.3.5 Assistive Technology—Smart Homes

One of the larger systems within the field of AT is the concept of a ‘Smart Home’. These are private or residential settings which have been modified with a combination


of assistive technologies so that PwD can live more independently. Previous research has shown that individuals who are able to reside in their own homes rather than in a residential setting have an increased quality of life, as well as greater life expectancy, compared with their institutionalised counterparts, making this an important area of technology to consider. The Smart Home technologies discussed are summarised in Table 5.4.

Intelligent sensors, described by Su et al. (2018), are used as a monitoring and alarm system within the smart home. Following on from this, a bathroom monitoring system has been produced for use within a smart home (Zhang et al. 2021); it assists with the use of normal bathroom appliances through a statistical programme that monitors and cues usage. Such systems are more commonly used in residential care homes, for example the system described by Aloulou et al. (2013), which encompasses many components to make a smart living environment for PwD. These technologies can be personalised to each PwD, encouraging a much more independent lifestyle whilst regulating BPSD.

A recent study explored the usability of sensor technology in care homes via the design and deployment of a ‘Smart Home In a Box (SHIB)’ approach to monitor PwD wellbeing (Garcia-Constantino et al. 2021). The SHIB included the installation of sensors (thermal, contact, passive infrared and audio level sensors) in the rooms of PwD residing in a care home. The findings indicate that the straightforward use of this technology, in both residential care settings and PwD homes, is largely hampered by (care) home buildings not having been originally designed for appropriate integration with ambient sensors. Although widely used in care homes, these technologies offer safety and home monitoring, enabling PwD to remain in a familiar environment whilst functioning independently.
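The rule-based reasoning that such smart-home systems perform over ambient sensor streams can be sketched as follows. The sensor names, event format and thresholds are invented for illustration and do not come from the SHIB study or any of the systems cited above.

```python
from datetime import datetime

def assess_events(events, night=(22, 6)):
    """Return simple wellbeing flags from a list of
    (timestamp, sensor, value) ambient-sensor events.
    Sensor names and thresholds are hypothetical."""
    flags = []
    for ts, sensor, value in events:
        is_night = ts.hour >= night[0] or ts.hour < night[1]
        if sensor == "front_door_contact" and value == "open" and is_night:
            # Night-time door opening: a common wandering indicator.
            flags.append((ts, "night-time exit: possible wandering"))
        elif sensor == "audio_level_db" and value > 80:
            # Sustained loud audio: a crude distress proxy.
            flags.append((ts, "raised audio level: possible distress"))
    return flags
```

A deployed system would feed such flags to a caregiver app or nursing station rather than returning them, and would combine several sensor modalities before alerting.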
Examples of this come from ‘The Aware Home Research Initiative’, which aims to enhance the quality of life of PwD and regulate BPSD whilst maintaining independence in their own home (Jones 2011). Much research surrounding ‘smart homes’ concerns systems still under development, and there is little evidence of successful use in a community setting. Promisingly, research conducted by Amiribesheli and Bouchachia (2018) has identified gaps in developmental technologies and produced a ‘smart home’ prototype. Furthermore, the Gator Tech Smart House project is an example of successful integration into a space specifically designed for older and cognitively disabled people (Helal and Bull 2019). However, people do not only live in a house/home but also in a community, and thus scaling up the platform technology approach to a planned living community is needed.

5.3.6 Assistive Technology—Further Artificial Intelligence

Artificially intelligent AT may be considered in conjunction with smart homes. Recent research has enabled the development of artificially intelligent robots which aim to improve the quality of life of individuals with dementia through social care (Bemelmans et al. 2012). There may be potential for robots to be present in the

Table 5.4 Assistive technologies—smart homes

- Su et al. (2018). AT: intelligent sensors. Functional description: intelligent sensors which control/monitor several domains: thermostats; water; appliances; body temperature; magnetic door switches to monitor movement; electronic tags for commonly misplaced objects; microphones; pressure sensors. Benefits: alerts caregivers to provide necessary care; reduces risk of falls; aids independence of PwD. Issues: obtrusive system; ethical concerns regarding misuse of the system; privacy concerns.
- Zhang et al. (2021). AT: bathroom monitoring system. Functional description: intelligent system to assist with the use of normal bathroom appliances through a statistical programme that monitors and cues usage. Benefits: promotes independence of PwD. Issues: privacy concerns.
- Aloulou et al. (2013). AT: residential system. Functional description: multifaceted system which incorporates many components to make a smart living environment, including pressure sensors to aid with wandering behaviours, shower sensors which feed back and send cues to the PwD enabling them to shower independently, water tap sensors, fall detection sensors in the bathroom, and a prediction system which anticipates the needs of the PwD within the next hour. Benefits: can be personalised to the PwD; promotes independence; provides caregiver relief. Issues: mainly used in residential environments; may not be financially viable to adapt for home use.
- Garcia-Constantino et al. (2021). AT: Smart Home In a Box (SHIB). Functional description: ambient sensors (thermal, contact, passive infrared and audio level sensors) to aid the PwD. Benefits: provides detailed information to the caregiver; straightforward to use in the correct setting. Issues: residential care homes were not originally designed for the appropriate integration of ambient sensors, and many houses may not be either.
- Amiribesheli and Bouchachia (2018). AT: smart home prototype. Functional description: smart home which considers the specific requirements of the PwD and their caregivers. Benefits: beneficial at all stages of dementia and can be adapted as the disease develops. Issues: in developmental stages, awaiting real-world deployment.
- Helal and Bull (2019). AT: the Gator Tech Smart House project. Functional description: smart home with integrated assistive technologies: a programmable space specifically designed for the individual. Benefits: enables PwD to stay in their familiar home environment. Issues: queries over the potential to upscale this platform to communities to maintain the community environment.


home environment owing to their ability to guide individuals around their home, prompt medication and monitor health (Robinson et al. 2013). Examples of these artificially intelligent assistive technologies are given in Table 5.5.

Social robots in dementia care are used in animal-assisted therapy (AAT), where they substitute for the use of animals in therapy sessions. AAT is a form of non-pharmacological intervention for non-cognitive symptoms and behaviour in people with dementia. The most commonly used social robot for this is PARO (Shibata 2006), a social companion robot with the appearance, movement and sound of a baby seal. PARO has been investigated in a range of studies with older people and found to be an effective non-pharmacological approach for managing dementia-related mood and behaviour problems. Lane et al. (2016) reported that PARO is best presented to residents who are relatively calm and approachable, as opposed to those actively exhibiting behaviour or mood problems. More importantly, up to 95% of PwD in a hospital environment demonstrated beneficial PARO interactions, the most frequent being speaking and petting (Kelly et al. 2021). Similarly, in community-dwelling adults with dementia, PARO showed benefits in improving facial expressions (affect) and communication with staff (social interaction) at day care centres, with care recipients with less cognitive impairment responding significantly better to the PARO intervention (Liang et al. 2017). In addition, PARO has been suggested as an alternative means of managing pain in people with dementia in long-term care facilities (Pu et al. 2020). Overall, PARO use indicates three domains of outcome measures that benefit from interventions using the robot: quality of life, BPSD and medical treatment, across community-dwelling PwD, inpatients and PwD in long-term facilities.
However, there may be cultural influences that contribute to the dementia-severity response to the PARO intervention, and these need to be explored further in larger multicultural studies.

NAO (pronounced ‘now’) is a white humanoid robot, 58 cm tall, with sensors for movement, touch, sonar, sound and vision; it can talk and sing. In a pilot study with bi-weekly sessions over 3 months involving patients with moderate to severe dementia in nursing homes and day centres, the NAO robot was compared to an animal-shaped robot in therapeutic sessions, and improvements were seen in apathy, irritability and neuropsychiatric symptoms with NAO (Valentí Soler et al. 2015); however, cognitive decline was documented in patients receiving these sessions. A recent interdisciplinary project led by professionals from theatre arts, social work and engineering designed and delivered an intervention integrating theatre and social robotics. This intervention incorporated Shakespearean text and the social robot NAO, performing and encouraging older adults with and without cognitive impairment to perform (Fields et al. 2021). In this study, all participants, irrespective of their cognitive performance, reported improvements in mood, loneliness and depression, albeit these changes were greater in people without cognitive impairment. This study is in agreement with previous studies on the benefits of art in dementia (reviewed in Emblad and Mukaetova-Ladinska 2021), extending the use of social robots to participatory art for use in PwD.

Further social robots to consider are telepresence robots, such as the human-size robots Giraff and VGo. They allow the operator to connect with people anywhere

Table 5.5 Artificially intelligent assistive technologies

- Shibata (2006). AT: PARO social robot. Functional description: a social companion robot with the appearance, movement and sound of a baby seal; programmable behaviour and sensors for posture, touch, sound and light; its eyes can open and close; it moves its neck (laterally and up-and-down), anterior flippers and tail; emits short, sharp squeals like a real seal. Benefits: effective non-pharmacological approach for managing dementia-related mood and behaviour problems. Issues: not explored extensively in community and home settings.
- Valentí Soler et al. (2015). AT: NAO social robot. Functional description: a white humanoid robot, 58 cm tall, with sensors for movement, touch, sonar, sound and vision; it can talk and sing; its robotic voice can be replaced with mp3 recordings of a child-like human voice that is easier for patients to understand; it can move its neck and arms, walk or dance; software was developed to allow the robot to act out a script (i.e. speech, music and movements for therapy sessions). Benefits: a pilot study showed benefits over a 3-month period of bi-weekly sessions, with improvements in apathy, irritability and neuropsychiatric symptoms (not seen with the comparative animal-shaped robot). Issues: the pilot study documented cognitive decline in patients having therapeutic sessions with the humanoid robot.
- Moyle et al. (2014). AT: “Giraff” robot and “VGo” communications telepresence robot. Functional description: human-size telepresence robots which enable an operator to connect with a person or group anywhere in the world, with each party able to see the other as well as the surroundings; the operator can remotely drive the robot from their computer, allowing visual access to the hospital or residential care setting. Benefits: studies have shown high engagement with the Giraff robot and positive evaluations; family members reported the opportunity to reduce social isolation. Issues: potential financial issues for wider community use; invasive in a home setting.
- Obayashi et al. (2020). AT: “Cota” and “Palro” communication robots. Functional description: Cota is a communication robot with high-level communication capabilities (e.g. conversational scenarios can be rewritten and upgraded) and can be linked to a monitoring system, i.e. a bedside infrared camera, which sends alerts to caregivers as well as the central nursing station; Palro communicates and interacts with people more freely, with a greater degree of freedom and vocabulary (it performs music and can lead physical exercises in addition to its conversational functions). Benefits: reduces social isolation and supports communication needs of PwD; provides relief for caregivers; alerts caregivers; potential for preventing falls. Issues: potential financial issues for wider community use; invasive in a home setting; minimal evidence of use in a community/home setting.
- Otake-Matsuura et al. (2021). AT: PICMOR (photo-integrated conversation moderated by robots). Functional description: an integrative intervention programme supporting the preparation of conversation topics, time management, turn-taking in conversations and reflection on topics. Benefits: themes are designed to trigger activities that produce new episodic memories and enhance attention and/or planning functions; increased verbal fluency. Issues: use of the system by PwD requires further investigation prior to deployment in this population.
- Yilmaz (2019). AT: “iCarus” intelligent ambient assisted living system. Functional description: a geo-net system forms a safe area within which the PwD can wander safely; when a wandering episode occurs, actions determined via rule-based context reasoning are executed: the system guides the PwD to a safe place and notifies the caregiver and, if necessary, the emergency services. Benefits: enables PwD to live as “normally” as possible post diagnosis by aiding navigation during wandering episodes; caregivers can extend the functionality of the system by modifying the rules to cater for the needs of the PwD. Issues: potential for wrongful use, raising ethical concerns.




in the world, with remote control of the robot to view the person/people and their surroundings in, for example, a hospital or residential setting (‘We like to think of it as Skype on wheels’, the authors have commented). In a study of 5 dyads of patients and their family members, the use of the telepresence robot was evaluated positively, and families reported that the Giraff robot offered the opportunity to reduce social isolation (Moyle et al. 2014).

Communication robots (com-robots) have similarly been used in some recent studies to offer one-to-one intervention to PwD. Obayashi et al. (2020) reported that older participants (aged ≥ 80 years) with more advanced dementia benefited more from the intervention with com-robots than their younger counterparts did. The study used two types of com-robot: Cota and Palro. Cota was used to remind residents of scheduled activities, while Palro was employed more as part of the activities and interacted with participants during recreation. Examples of robot-initiated conversation noted in the study varied from greetings (e.g. ‘Good morning’, ‘What’s your name?’) to questions about hobbies or medications (e.g. ‘Have you taken your medications?’).

A most recent study described the use of Photo-Integrated Conversation Moderated by Robots (PICMOR) (Otake-Matsuura et al. 2021). PICMOR consists of three phases: preparation, conversation, and recall. In the preparation phase, participants use a smartphone with a specially developed application to take photos related to the topic of discussion, which are uploaded to the online PICMOR database. The themes are designed to trigger activities that produce new episodic memories and enhance attention and/or planning functions. In the second (main) phase, the moderated conversation phase, participants are cued to talk when their photos are displayed on the screen.
First, all participants describe their own photos; in the second round, they discuss each other’s photos. In the recall phase, participants complete memory tasks using a specially developed tablet application in which photos previously displayed during the conversation are randomly shown; participants are asked to indicate who took each photo by selecting the name on the touch panel in this randomised controlled trial (Otake-Matsuura et al. 2021). Although this study did not involve PwD, the improvement in communication alone is a promising indication that it may find use in dementia and in regulating some of the BPSD generated by poor communication, i.e. agitation, aggression and irritability. The group conversation generated by PICMOR may improve participants’ verbal fluency, since participants have more opportunity to provide their own topics and to ask and answer questions, which results in exploring larger vocabularies. PICMOR is available and accessible to community-living older adults; however, the usability of the system for people with dementia requires further investigation prior to its use in this population.

Further systems to consider are rehabilitation and socially assistive robots. These robots are programmed to perform physical tasks for PwD or to make tasks easier for the individual so that they can increase their independence, leading to a reduction in frustration and potentially a decrease in inappropriate behaviours. Primarily these technologies have focussed on assisting PwD in activities of daily living (ADLs), from robotic kettles, vacuums, spoons and cleaning robots (Mann 2005; Yi-Lin 2005;


Foulk 2007), to systems that can heat food, provide social cues and organise clothing (Graf et al. 2004; Mordoch et al. 2013; Robinson et al. 2014). These have been perceived by both people with dementia and their caregivers as a useful solution to the social isolation and communication issues described in the BPSD (Pino et al. 2015), and they represent a promising means of assisting with the communication problems seen in dementia. Furthermore, Yilmaz (2019) has suggested an ambient assisted living system named ‘iCarus’, which controls a geo-net aimed at guiding PwD to a safe place during a wandering episode whilst informing caregivers. While these technologies may be incredibly useful in the future of dementia care, it is important to remember the social aspect of caring for an individual with dementia: robotics should not be regarded as a replacement for person-centred care, and PwD still require human contact.

5.4 Discussion

The overall aim of this literature review was to investigate to what extent AT can help regulate the behavioural and psychological symptoms of dementia and promote PwD independence and quality of life. From reviewing the literature, it is clear that the field of AT is extensive. Due to the increased demand for such technologies in dementia care, there have been vast improvements in the technical sophistication of these resources. In addition, research has demonstrated a clear need for AT in the care of people with dementia, since BPSD are an integral part of the disease. They are arguably the main source of poor quality of life for both dementia sufferers and their caregivers, require additional professional and (non)pharmacological interventions, and influence the negative outcomes of the disease, including PwD institutionalisation and mortality.

The cost-effectiveness of AT for dementia care has recently been questioned in a large UK randomised controlled trial, where it was deployed predominantly to help with reminding/prompting and safety issues in community-dwelling individuals with mild to moderate cognitive impairment (Howard et al. 2021). This is in contrast to the documented improvement in communication and disturbed behaviour in more profoundly impaired PwD living in 24-h settings (Valentí Soler et al. 2015), who may well respond better to these technological advances.

This review has divided these technologies into several categories: technologies to aid communication, technologies to aid motor behaviour, technologies to aid inappropriate behaviour, Smart Home technologies and artificial intelligence. Each of these subdivisions and the assistive technologies within them can be deemed useful for PwD. However, no single technology is appropriate for all individuals with dementia or for all stages of the disease.
As such, it would be appropriate and useful for each of these technologies to be tested in subjects with different types of dementia, different degrees of cognitive impairment and different care settings (home or 24-h settings), in order to guide the development of future technologies. It is also important to consider cultural differences in the uptake of assistive technologies in dementia care. For example, the robot ‘PARO’ was developed

5 The Role of Assistive Technology in Regulating the Behavioural …

109

in Japan and a similar robot ‘Guidance’ has since been tested on individuals with cognitive impairments in New Zealand (Yu et al. 2015), where it met with little to no acceptance. This may reflect cultural differences in the type and frequency of BPSD, which may in turn be explained by differences in interpretation, measures, and delivery of care services. Alongside cultural differences, the ethical implications of these technologies must also be considered. Upon reviewing the literature, it became clear that many studies do not consider the privacy of PwD, which is a key ethical concern. Additionally, many studies did not consider PwD's consent to the use of the technology, particularly for smart home monitoring systems in which the individual may not be aware they are being monitored. The promoters of surveillance devices such as tagged shoes and smart watches, as well as sensors and cameras within smart homes, have met substantial opposition from those who perceive them as ‘contrary to freedom’. The Mental Welfare Commission of Scotland, in guidance issued in 2015, stated that ‘efforts must always be taken to support someone to make a decision whenever this is possible. This may include taking extra time to explain what is being proposed, involving advocacy, and using communication aids to help promote discussion and understanding’. These decisions could be made in advance, while PwD still have mental capacity, so that their wishes can be followed in long-term management once they lack the capacity to decide for themselves. Another key concern is legal issues, particularly the conformity of technology to human rights and the sharing of data. This is of particular concern when one considers the potentially damaging effects of such technology used in the wrong way, for example monitoring systems being misused to aid criminal activities such as abduction. In line with this, it is clear that researchers should be considering contractual obligations.
None of the technologies reviewed considered who would be held responsible if the technology were to fail and an individual was harmed. The rapid development of AT needs to be regulated so that it serves those who need it, is used constructively, and is not misused against basic human needs. Many of these technologies are aimed at reducing caregiver burden and increasing the independence of PwD. However, the development of AT seems to be moving towards minimal human contact, which may be extremely detrimental for PwD, especially in terms of regulating BPSD. It should be made clear to those developing such technologies that the underlying principle is for these technologies to act as catalysts rather than as substitutes for human interaction. Turning to issues with the technologies themselves, a potential problem is that, although they provide relief for PwD and their caregivers, they may in fact be too complex for an older user, which suggests that the development of robotic technologies may be advantageous. Research indicates that individuals are more motivated to follow instructions from a robot than from a computer or smartphone application (Belanche et al. 2020; Balakrishnan and Dwivedi 2021). This may be because PwD develop stronger affinities with robots and can interact with them, as opposed to an application on a screen. This would suggest that AT in robotic form offers great potential for health interventions, health maintenance and BPSD regulation in PwD. At present there is little research with real-world applications of

110

E. A. Hellis and E. B. Mukaetova-Ladinska

such technologies due to feasibility and economic factors; however, this does show promise for future research in this domain. Urgent change is required in the policies and services surrounding dementia care to increase the current and future benefits of AT for PwD's care and for the regulation of BPSD. This includes considering the longitudinal needs of PwD and the dynamic changes of the illness whilst, most importantly, understanding how to deploy the technical advances effectively to meet these needs. If we continue to view dementia in a purely clinical manner, with symptoms addressed by medications and hospital care, we are not addressing the majority of the concerns and needs of PwD, enabling them to continue living as independently as possible in a familiar environment, or addressing the financial costs associated with preventable institutionalisation and hospitalisation. Assistive technologies have the ability to empower PwD and their caregivers. Unfortunately, accessibility to the more advanced technologies remains limited, and as such adoption remains low. Not only are there cultural, ethical and legal implications of such technologies, but there are also financial concerns, in that many of these technologies may not be classified as medical devices within a publicly funded system such as the National Health Service (NHS) and cannot, therefore, be prescribed by healthcare professionals. Pressing for changes in policy to allow PwD and their caregivers access to such services and technologies will be the future of dementia care.

5.5 Conclusion

The assistive technology used in dementia care has progressed significantly in the last 15 years, moving from basic assistive aids such as alarmed medication boxes and necklaces or bracelets with a built-in alarm to alert caregivers (Mihailidis et al. 2008) to assisted dressing systems, smart homes, and artificially intelligent technologies including augmentative and alternative communication (AAC) (Robinson et al. 2013; Fried-Oken et al. 2015; Yousaf et al. 2019). In addition, online services are attracting investment, with the development of mobile applications to aid PwD and their caregivers in their home settings (Yousaf et al. 2019). It is important to note that, although there has been a great deal of progress in the field of AT, the earlier technologies are still as relevant today as when first developed. This may be due to the vast symptomatology of the disease and the individuality of the experience of such symptoms, suggesting that basic technologies such as medication trackers remain useful to those in the early stages of the disease, whilst more advanced technologies, such as assisted dressing systems and navigation systems, may be more useful to those in the later stages (Burleson et al. 2015; Meiland et al. 2012). Although AT has already proven useful in dementia care, both for PwD and for their caregivers, in regulating BPSD both directly and indirectly, ethical and legal issues remain. Randomised controlled trials are now required to work through the practical, ethical, legal and cultural issues raised by such technologies.


This will also influence policy and service adjustments in dementia care, formalising the acceptance of individualised assistive technologies as the future of dementia care.

Compliance with Ethical Standards The authors declare no financial support for the study. This study is based on a review of research papers published in peer-reviewed journals and does not involve direct research on animals and/or humans.

Conflict of Interest The authors declare that they have no conflict of interest.

References

Aloulou H, Mokhtari M, Tiberghien T, Biswas J, Phua C, Kenneth Lin JH, Yap P (2013) Deployment of assistive living technology in a nursing home environment: methods and lessons learned. BMC Med Inform Decis Mak 13(1):1–17
Amiribesheli M, Bouchachia H (2018) A tailored smart home for dementia care. J Ambient Intell Humaniz Comput 9(6):1755–1782
Baharudin AD, Din NC, Subramaniam P, Razali R (2019) The associations between behavioral-psychological symptoms of dementia (BPSD) and coping strategy, burden of care and personality style among low-income caregivers of patients with dementia. BMC Public Health 19(4):1–12
Balakrishnan J, Dwivedi YK (2021) Conversational commerce: entering the next stage of AI-powered digital assistants. Ann Oper Res 1–35
Belanche D, Casaló LV, Flavián C, Schepers J (2020) Service robot implementation: a theoretical framework and research agenda. Serv Ind J 40(3–4):203–225
Bemelmans R, Gelderblom GJ, Jonker P, de Witte L (2012) Socially assistive robots in elderly care: a systematic review into effects and effectiveness. J Am Med Dir Assoc 13(2):114–120
Blott J (2021) Smart homes for the future of dementia care. Lancet Neurol 20(4):264
Burleson W, Lozano C, Ravishankar V, Rowe J, Mahoney E, Mahoney D (2015) Assistive dressing system: a capabilities study for personalized support of dressing activities for people living with dementia. iproc 1(1):e13
Carswell W, McCullagh PJ, Augusto JC, Martin S, Mulvenna MD, Zheng H, Wang HY, Wallace JG, McSorley K, Taylor B, Jeffers WP (2009) A review of the role of assistive technology for people with dementia in the hours of darkness. Technol Health Care 17(4):281–304
Cerejeira J, Lagarto L, Mukaetova-Ladinska EB (2012) Behavioral and psychological symptoms of dementia. Front Neurol 3:73
Cloak N, Al Khalili Y (2022) Behavioral and psychological symptoms in dementia. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK551552/
Dyer SM, Harrison SL, Laver K, Whitehead C, Crotty M (2018) An overview of systematic reviews of pharmacological and non-pharmacological interventions for the treatment of behavioral and psychological symptoms of dementia. Int Psychogeriatr 30(3):295–309
Emblad SY, Mukaetova-Ladinska EB (2021) Creative art therapy as a non-pharmacological intervention for dementia: a systematic review. J Alzheimers Dis Rep 5(1):353–364
Evans J, Brown M, Coughlan T, Lawson G, Craven MP (2015) A systematic review of dementia focused assistive technology. In: Kurosu M (ed) Human-computer interaction: interaction technologies. HCI 2015. Lecture notes in computer science, vol 9170. Springer, Cham, pp 406–417
Feast A, Orrell M, Charlesworth G, Melunsky N, Poland F, Moniz-Cook E (2016) Behavioural and psychological symptoms in dementia and the challenges for family carers: systematic review. Br J Psychiatry 208(5):429–434


Fields N, Xu L, Greer J, Murphy E (2021) Shall I compare thee… to a robot? An exploratory pilot study using participatory arts and social robotics to improve psychological well-being in later life. Aging Ment Health 25(3):575–584
Finkel S (2000) Introduction to behavioural and psychological symptoms of dementia (BPSD). Int J Geriatr Psychiatry 15(S1):S2–S4
Foulk E (2007) Lonely robots ignored by elderly luddites. The New Zealand Herald. www.nzherald.co.nz/technology/lonely-robots-ignored-by-elderly-luddites/5BMBRUBNXVU2T7ISBKDSTCXTZY/
Fried-Oken M, Mooney A, Peters B (2015) Supporting communication for patients with neurodegenerative disease. NeuroRehabilitation 37(1):69–87
Garcia-Constantino MF, Orr C, Synnott J, Shewell CP, Ennis A, Cleland I, Nugent C, Rafferty J, Morrison G, Larkham L, McIlroy S, Selby A (2021) Design and implementation of a smart home in a box to monitor the wellbeing of residents with dementia in care homes. Front Digit Health 3:798889
Gedde MH, Husebo BS, Erdal A, Puaschitz NG, Vislapuu M, Angeles RC, Berge LI (2021) Access to and interest in assistive technology for home-dwelling people with dementia during the COVID-19 pandemic (PAN.DEM). Int Rev Psychiatry 33(4):404–411
Gilmore-Bykovskyi A (2018) Commentary on apathy as a model for investigating behavioral and psychological symptoms in dementia. J Am Geriatr Soc 66(Suppl 1):S13–S16
Goodall G, Taraldsen K, Serrano JA (2021) The use of technology in creating individualized, meaningful activities for people living with dementia: a systematic review. Dementia 20(4):1442–1469
Graf B, Hans M, Schraft RD (2004) Care-O-bot II—development of a next generation robotic home assistant. Auton Robot 16(2):193–205
Greiner C, Makimoto K, Suzuki M, Yamakawa M, Ashida N (2007) Feasibility study of the integrated circuit tag monitoring system for dementia residents in Japan. Am J Alzheimers Dis Other Demen 22(2):129–136
Helal S, Bull CN (2019) From smart homes to smart-ready homes and communities. Dement Geriatr Cogn Disord 47(3):157–163
Henderson C, Knapp M, Nelis SM, Quinn C, Martyr A, Wu Y-T, Jones IR, Victor CR, Pickett JA, Hindle JV, Jones RW, Kopelman MD, Matthews FE, Morris RG, Rusted JM, Thom JM, Clare L, IDEAL Programme Team (2019) Use and costs of services and unpaid care for people with mild-to-moderate dementia: baseline results from the IDEAL cohort study. Alzheimers Dement (NY) 5:685–696
Howard R, Gathercole R, Bradley R, Harper E, Davis L, Pank L, Lam N, Talbot E, Hooper E, Winson R, Scutt B, Ordonez Montano V, Nunn S, Lavelle G, Bateman A, Bentham P, Burns A, Dunk B, Forsyth K, Fox C, Poland F, Leroi I, Newman S, O’Brien J, Henderson C, Knapp M, Woolham J, Gray R (2021) The effectiveness and cost-effectiveness of assistive technology and telecare for independent living in dementia: a randomised controlled trial. Age Ageing 50(3):882–890
Huang S-S, Lee M-C, Liao YC, Wang W-F, Lai T-J (2012) Caregiver burden associated with behavioral and psychological symptoms of dementia (BPSD) in Taiwanese elderly. Arch Gerontol Geriatr 55(1):55–59
Hung L, Gregorio M, Mann J, Horne N, Wallsworth C, Berndt A, Chaudhury H (2019) Exploring the perception of patients with dementia about a social robot PARO in a hospital setting. Innov Aging 3(Suppl 1):S195
Jones BD (2011) The aware home research initiative research and testbed plans. https://www.youtube.com/watch?v=XmF4FScUSmI. Accessed 13 July 2022
Kelly PA, Cox LA, Petersen SF, Gilder RE, Blann A, Autrey AE, MacDonell K (2021) The effect of PARO robotic seals for hospitalized patients with dementia: a feasibility study. Geriatr Nurs 42(1):37–45
Kernebeck S, Holle D, Pogscheba P, Jordan F, Mertl F, Huldtgren A, Bader S, Kirste T, Teipel S, Holle B, Halek M (2019) A tablet app- and sensor-based assistive technology intervention for informal caregivers to manage the challenging behavior of people with dementia (the insideDEM study): protocol for a feasibility study. JMIR Res Protoc 8(2):e11630
Lacey GJ, Rodriguez-Losada D (2008) The evolution of guido. IEEE Robot Autom Mag 15(4):75–83
Lane GW, Noronha D, Rivera A, Craig K, Yee C, Mills B, Villanueva E (2016) Effectiveness of a social robot, “Paro”, in a VA long-term care setting. Psychol Serv 13(3):292–299
Liang A, Piroth I, Robinson H, MacDonald B, Fisher M, Nater UM, Skoluda N, Broadbent E (2017) A pilot randomized trial of a companion robot for people with dementia living in the community. J Am Med Dir Assoc 8(10):871–878
Mahendra N, Hickey EM, Bourgeois MS (2018) Cognitive-communicative characteristics: profiling types of dementia. In: Hickey EM, Bourgeois MS (eds) Dementia: person-centered assessment and intervention. Routledge/Taylor & Francis Group
Mann WC (ed) (2005) Smart technology for aging, disability, and independence: the state of the science. Wiley
May AA, Dada S, Murray J (2019) Review of AAC interventions in persons with dementia. Int J Lang Commun Disord 54(6):857–874
Meiland FJM, Reinersmann A, Bergvall-Kareborn B, Craig D, Moelaert F, Mulvenna MD, Nugent C, Scully T, Bengtsson JE, Dröes RM (2007) COGKNOW: development of an ICT device to support people with dementia. J Inf Technol Healthcare 5(5):324–334
Meiland FJM, Bouman AIE, Sävenstedt S, Bentvelzen S, Davies RJ, Mulvenna MD, Nugent CD, Moelaert F, Hettinga ME, Bengtsson JE, Dröes R-M (2012) Usability of a new electronic assistive device for community-dwelling persons with mild dementia. Aging Ment Health 16(5):584–591
Mihailidis A, Boger JN, Craig T, Hoey J (2008) The COACH prompting system to assist older adults with dementia through handwashing: an efficacy study. BMC Geriatr 8(1):28
Miura M, Ito S, Takatsuka R, Kunifuji S (2008) Aware group home enhanced by RFID technology. In: Lovrek I, Howlett RJ, Jain LC (eds) Knowledge-based intelligent information and engineering systems. KES 2008. Lecture notes in computer science, vol 5178. Springer, Berlin, Heidelberg, pp 847–854
Molenbroek JFM, De Bruin R (2011) Overview of the FRR project; designing the toilet of the future. In: Molenbroek JFM, Mantas M, Bruin R (eds) A friendly rest room: developing toilets of the future for disabled and elderly people. IOS Press
Mordoch E, Osterreicher A, Guse L, Roger K, Thompson G (2013) Use of social commitment robots in the care of elderly people with dementia: a literature review. Maturitas 74(1):14–20
Moreira R, Alves J, Matias A, Santos C (2019) Smart and assistive walker–ASBgo: rehabilitation robotics: a smart–walker to assist ataxic patients. Adv Exp Med Biol 1170:37–68
Morris M, Lundell J, Dishman E (2004) Catalyzing social interaction with ubiquitous computing: a needs assessment of elders coping with cognitive decline. In: CHI’04 extended abstracts on human factors in computing systems, pp 1151–1154
Moyle W, Jones C, Cooke M, O’Dwyer S, Sung B, Drummond S (2014) Connecting the person with dementia and family: a feasibility study of a telepresence robot. BMC Geriatr 14(1):1–11
Obayashi K, Kodate N, Masuyama S (2020) Measuring the impact of age, gender and dementia on communication-robot interventions in residential care homes. Geriatr Gerontol Int 20(4):373–378
Otake-Matsuura M, Tokunaga S, Watanabe K, Abe MS, Sekiguchi T, Sugimoto H, Kishimoto T, Kudo T (2021) Cognitive intervention through photo-integrated conversation moderated by robots (PICMOR) program: a randomized controlled trial. Front Robot AI 8:633076
Pakhomov SVS, Smith GE, Chacon D, Feliciano Y, Graff-Radford N, Caselli R, Knopman DS (2010) Computerized analysis of speech and language to identify psycholinguistic correlates of frontotemporal lobar degeneration. Cogn Behav Neurol 23(3):165–177
Pino M, Boulay M, Jouen F, Rigaud A-S (2015) “Are we ready for robots that care for us?” Attitudes and opinions of older adults toward socially assistive robots. Front Aging Neurosci 7:141


Pu L, Moyle W, Jones C, Todorovic M (2020) The effect of using PARO for people living with dementia and chronic pain: a pilot randomized controlled trial. J Am Med Dir Assoc 21(8):1079–1085
Robinson H, MacDonald B, Kerse N, Broadbent E (2013) Suitability of healthcare robots for a dementia unit and suggested improvements. J Am Med Dir Assoc 14(1):34–40
Robinson H, MacDonald B, Broadbent E (2014) The role of healthcare robots for older people at home: a review. Int J Soc Robot 6(4):575–591
Romero-Moreno R, Losada A, Márquez-González M, Mausbach BT (2016) Stressors and anxiety in dementia caregiving: multiple mediation analysis of rumination, experiential avoidance, and leisure. Int Psychogeriatr 28(11):1835–1844
Sachdev PS (2021) Restriction of activities, social isolation, and dementia. Int Psychogeriatr 33(11):1125–1127
Schneider L, Pollock V, Lyness S (1990) A meta-analysis of controlled trials of neuroleptic treatment in dementia. J Am Geriatr Soc 35(5):553–563
Shibata T (2006) Therapeutic robot “Paro” for robot therapy. J Robot Soc Jpn 24(3):319–322
Sposaro F, Danielson J, Tyson G (2010) iWander: an android application for dementia patients. In: 2010 annual international conference of the IEEE engineering in medicine and biology, Sept 2010. IEEE, pp 3875–3878
Su CF, Fu LC, Chien YW, Li TY (2018) Activity recognition system for dementia in smart homes based on wearable sensor data. In: 2018 IEEE symposium series on computational intelligence (SSCI), 18–21 Nov 2018. IEEE, Bangalore, India, pp 463–469
Valentí Soler M, Agüera-Ortiz L, Olazarán Rodríguez J, Mendoza Rebolledo C, Pérez Muñoz A, Rodríguez Pérez I, Osa Ruiz E, Barrios Sánchez A, Herrero Cano V, Carrasco Chillón L, Felipe Ruiz S, López Alvarez J, León Salas B, Cañas Plaza JM, Martín Rico F, Abella Dago G, Martínez Martín P (2015) Social robots in advanced dementia. Front Aging Neurosci 7:133
Van Niekerk K, Dada S, Tönsing K, Boshoff K (2018) Factors perceived by rehabilitation professionals to influence the provision of assistive technology to children: a systematic review. Phys Occup Ther Pediatr 38(2):168–189
Wittenberg R, Hu B, Jagger C, Kingston A, Knapp M, Comas-Herrera A, King D, Rehill A, Banerjee S (2020) Projections of care for older people with dementia in England: 2015 to 2040. Age Ageing 49(2):264–269
World Health Organization (2019) Global action plan on physical activity 2018–2030: more active people for a healthier world. World Health Organization
Yi-Lin S (2005) Other devices and high technology solutions. In: Mann WC (ed) Smart technology for aging, disability, and independence: the state of the science. Wiley
Yilmaz Ö (2019) An ambient assisted living system for dementia patients. Turk J Electr Eng Comput Sci 27(3):2361–2378
Yousaf K, Mehmood Z, Saba T, Rehman A, Munshi AM, Alharbey R, Rashid M (2019) Mobile-health applications for the efficient delivery of health care facility to people with dementia (PwD) and support to their carers: a survey. Biomed Res Int 2019:7151475
Yu R, Hui E, Lee J, Poon D, Ng A, Sit K, Ip K, Yeung F, Wong M, Shibata T, Woo J (2015) Use of a therapeutic, socially assistive pet robot (PARO) in improving mood and stimulating social interaction and communication for people with dementia: study protocol for a randomized controlled trial. JMIR Res Protoc 4(2):e45
Zhang Y, D’Haeseleer I, Coelho J, Vanden Abeele V, Vanrumste B (2021) Recognition of bathroom activities in older adults using wearable sensors: a systematic review and recommendations. Sensors (Basel) 21(6):2176

Chapter 6

Epidemiology, Genetics and Epigenetics of Biological Aging: One or More Aging Systems? Alessandro Gialluisi, Benedetta Izzi, Giovanni de Gaetano, and Licia Iacoviello

Abstract Vast progress has been made in the last decade in the development of markers of biological aging (BA)—namely, estimators of the discrepancy between the hypothetical underlying age and the chronological age of an organism—which can serve as effective healthy aging and public health screening tools. In particular, the spread of artificial intelligence applications in the biomedical sciences has also made a substantial contribution to the field of biogerontology, with the advent of novel and more accurate clocks based on supervised machine learning algorithms. Despite this promising approach, very little is known about the overlap across the different aging clocks, both in terms of mortality risk prediction and in terms of genetic, epigenetic and, more generally, biological underpinnings. We will attempt to untangle these aspects by (i) providing a brief introduction to the BA clocks developed so far, especially those based on supervised machine learning approaches, and then reviewing studies investigating their (ii) epidemiological and (iii) genetic/epigenetic overlap. Genes lying at the intersection of the two overlaps may represent robust candidates as potential molecular targets for the development and validation of future rejuvenation and anti-aging therapies.

Keywords Biological aging · DNA methylation clocks · Telomere length · Brain age · Blood age · Mortality · Genetics · Epigenetics

A. Gialluisi (B) · G. de Gaetano · L. Iacoviello Department of Epidemiology and Prevention, IRCCS NEUROMED, Pozzilli, Italy e-mail: [email protected] A. Gialluisi · L. Iacoviello Department of Medicine and Surgery, University of Insubria, Varese, Italy B. Izzi Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid, Spain © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_6



6.1 Introduction

Biological Age (BA) is conceived as the actual underlying age of an organism and can differ from Chronological Age (CA, Fig. 6.1). This makes it useful to compute BA acceleration estimators, also known as aging clocks, based on different biomedical data sources. These are usually defined as parameters representing the discrepancy (or, put more simply, the difference) between biological and chronological age (Δ age = BA − CA). Overall, these tools make it possible to define biological aging trajectories and may help identify ways to modify these trajectories through a personalized approach, in a “precision healthy aging” perspective (Gialluisi et al. 2019). Such estimators have often been computed through multivariable regression models including one or a few bodily measures as predictors, such as blood, spirometry or blood pressure measures (e.g. Klemera and Doubal 2006; Yamaguchi et al. 2012). More recently, supervised machine learning algorithms applied to a wider range of molecular and biometric data have been deployed, as in Hannum et al. (2013), Horvath (2013), Putin et al. (2016), Mamoshina et al. (2016, 2018), Cole et al. (2017a, 2018b), Bobrov et al. (2018), Sun et al. (2019) and Gialluisi et al. (2021a), showing very good accuracy in predicting chronological age (Gialluisi et al. 2019). Supervised machine learning is an umbrella term for a group of algorithms which, using a variable number of features (input variables), learn to predict a label (or outcome), which can be either categorical (for classification algorithms) or continuous (for regression algorithms). This is accomplished through a phase in which the algorithm is trained to predict the label as accurately as possible in a training set—i.e. trying to minimize the loss function—and a phase in which the accuracy and generalizability of the model are tested in an independent dataset, i.e. the test set (external validation phase).
Unlike classical (linear) methods, this approach makes it possible to model complex (non-linear) relationships between several features and the label, at the cost of a frequent “black box effect”, namely the inability to clearly identify the criteria and functions built by the algorithm to carry out the classification/regression task (Gialluisi et al. 2019). Although BA acceleration is significantly associated with the main determinants of disease, such as lifestyle (e.g. diet, smoking, drinking habits) and socioeconomic factors (e.g. education and financial income), a notable proportion (> 50%) of the total Δ age variance remains unexplained, and genetic/epigenetic factors are hypothesized to exert a fundamental influence on the biological aging rate (Gialluisi et al. 2021a, b). In spite of these interesting findings, few studies have attempted to identify genetic and epigenetic overlaps among different aging clocks (Bergsma and Rogaeva 2020; Gialluisi et al. 2021b; Li et al. 2022), an aspect which remains largely unexplored in the field of biogerontology (Li et al. 2020b). Here, we will review in more detail the massive amount of biological information produced by these and other studies, exploring the overlap among different aging systems at the epidemiological, genetic and epigenetic levels.
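The train/test workflow and the Δ age estimator described above can be sketched in a few lines. This is an illustrative toy example on synthetic data: the features, sample sizes and the simple linear model are placeholders, not any of the published clocks.

```python
# Toy aging clock: train a model to predict chronological age (CA) on a
# training set, validate on a held-out test set, then compute the BA
# acceleration estimator delta_age = BA - CA. All data are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))                       # stand-ins for blood biomarkers
age = 45 + 5 * X[:, 0] + rng.normal(0, 3, size=n) # synthetic chronological age

# Training phase (minimize the loss) and external validation phase
X_tr, X_te, age_tr, age_te = train_test_split(X, age, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, age_tr)
ba = model.predict(X_te)                          # biological age estimates

delta_age = ba - age_te                           # > 0 suggests accelerated aging
mae = mean_absolute_error(age_te, ba)
print(f"test-set MAE: {mae:.2f} years")
```

In the published clocks the feature matrix would hold real biomarkers (blood parameters, methylation fractions, imaging-derived measures) and the model would typically be a regularized or deep learning regressor rather than plain least squares.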


Fig. 6.1 Potential biological aging trajectories during the course of life. Biological aging can be computed as the discrepancy between biological and chronological age (Δ age = BA − CA), from diverse sources of biomedical data. Δ age < 0 indicates that the organism is younger than expected at the time of measurement, while Δ age > 0 suggests an accelerated biological aging, potentially resulting in a higher risk of developing age-related chronic conditions. For this reason, BA acceleration estimates (“aging clocks”) can be used as public health screening tools in the general population. Image adapted from Gialluisi et al. (2021b)

6.2 An Overview of Aging Clocks Based on Biometric and Molecular Data

To estimate biological aging, different algorithms have been developed using diverse sources of biomedical, instrumental and molecular data. These include organ-specific measures, such as spirometry (e.g. Yamaguchi et al. 2012), electrocardiography (e.g. Lima et al. 2021), structural neuroimaging (e.g. Cole and Franke 2017; Cole 2020), electroencephalography (e.g. Sun et al. 2019) and photographic images of the human skin (e.g. Bobrov et al. 2018), as well as markers of organismal BA such as blood measures (e.g. Klemera and Doubal 2006; Mamoshina et al. 2016, 2018) and, in particular, epigenetic markers (e.g. Hannum et al. 2013; Horvath 2013). We will briefly review the most investigated and novel aging clocks, based both on classical regression methods and on supervised machine learning algorithms. While diverse molecular BA markers other than those reported below have been developed (Jylhävä et al. 2017; Cole et al. 2018a; Galkin et al. 2020; Li et al. 2022), we will describe here only those most investigated in terms of epidemiological and biological overlap with other aging clocks.


6.2.1 Telomere Length

Telomeres—nucleoprotein caps at the ends of all eukaryotic chromosomes—tend to shorten at each cell division within somatic cells, a phenomenon known as telomere attrition (Harley et al. 1990). For this reason, telomere length (TL) tends to decrease as an organism ages and—being conceived as a mitotic clock and commonly measured in peripheral blood mononuclear cells (PBMCs)—was one of the first molecular aging biomarkers discovered (Zglinicki and Martin-Ruiz 2005). TL is associated with both genetic (see below) and non-genetic factors, such as socioeconomic status, smoking, and oxidative and psychological stress (Cole et al. 2018a). It is also associated with an increased risk of clinical events and all-cause mortality, which makes it a good BA acceleration predictor (Cawthon et al. 2003; Wilbourn et al. 2018), although these associations are not always robust (Sanders and Newman 2013). Compared to other aging clocks, TL does not correlate as strongly with CA (Pearson’s r = − 0.3) (Müezzinler et al. 2013; Jylhävä et al. 2017), and seems not to be the best predictor of mortality risk among the many BA acceleration estimators tested (see Sect. 6.3 below for details) (Cole et al. 2018b). Hence, in the last decade, more accurate and powerful epigenetic clocks have been developed and investigated.

6.2.2 DNA Methylation Aging Clocks

6.2.2.1 Basic Concepts About DNA Methylation

Among epigenetic modifications, one of the most investigated and best characterized is the addition of a methyl group to CpG dinucleotides—where a cytosine precedes a guanine in the 5′–3′ direction—to create 5-methylcytosine (5mC), in a process called DNA methylation (DNAm) (Reale et al. 2022). This plays an essential role in regulating transcription at the level of promoters, enhancers and other genomic regions (Reale et al. 2022). The human genome contains approximately 28 million such CpG sites, the majority of which are methylated to a variable extent across different cell types, individuals, and conditions. This is usually assessed through experimental techniques such as genome-wide methylation arrays and reduced representation bisulfite sequencing, which measure the proportion of cells in a tissue that are methylated at a given CpG site, the so-called CpG methylation fraction (or state) (Li et al. 2022). More interestingly from our perspective, many of these CpG sites show a consistent increase or decrease in their methylation fraction over time, which is the property exploited by “epigenetic clocks” to measure the biological aging rate of organisms. These clocks are usually based on regularized linear—typically Elastic Net—regression models, which make it possible to predict CA while selecting a reduced number of CpGs that best predict age variation and accounting for collinearity biases (Li et al. 2022), although deep learning based algorithms with comparable performance have also been proposed recently (Galkin et al. 2021). Elastic Net is a shrinkage method which combines both

6 Epidemiology, Genetics and Epigenetics of Biological Aging: One …

119

Lasso (least absolute shrinkage and selection) and Ridge regression models, which are suited to handle highly dimensional datasets where the number of predictors is notably higher than the sample size of the dataset (p >> n) and the predictors are likely to be highly correlated (Engebretsen and Bohlin 2019; Caulton et al. 2022). Exploiting this algorithm, about 20 DNAm clocks have been proposed in the recent years, which can be generally classified into first and second generation epigenetic clocks (see Bergsma and Rogaeva 2020 for a review). First generation clocks, also known as chronological clocks, are built to estimate CA as accurately as possible (Hannum et al. 2013; Horvath 2013), while second generation clocks, also known as biological clocks, are trained to estimate as accurately as possible phenotypes and clinical risks related to aging, e.g. mortality (Levine et al. 2018; Lu et al. 2019). While the former often show more accurate estimations of CA, the latter better predict aging phenotypes and health outcomes during aging (Chen et al. 2016; Levine et al. 2018; Lu et al. 2019; Bergsma and Rogaeva 2020; Li et al. 2022).
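The penalized-regression machinery behind these clocks can be illustrated with a minimal sketch: a toy coordinate-descent Elastic Net fitted to simulated methylation data in the p >> n regime. Everything below (solver, feature counts, penalty values, the five age-tracking "CpGs") is an illustrative assumption, not any published clock.

```python
import math
import random

def elastic_net_fit(X, y, alpha=1.0, l1_ratio=0.5, n_iter=50):
    """Toy coordinate-descent Elastic Net minimizing
    (1/2n)||y - Xb||^2 + alpha*(l1_ratio*||b||_1 + (1-l1_ratio)/2*||b||_2^2).
    Expects centered y and centered columns of X; returns coefficients."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    resid = list(y)                               # y - X@b, with b = 0
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) / n for j in range(p)]
    l1, l2 = alpha * l1_ratio, alpha * (1.0 - l1_ratio)
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * resid[i] for i in range(n)) / n + col_sq[j] * b[j]
            # soft-thresholding: the L1 part zeroes weak CpGs (sparsity),
            # the L2 part in the denominator stabilizes correlated CpGs
            new_bj = math.copysign(max(abs(rho) - l1, 0.0), rho) / (col_sq[j] + l2)
            if new_bj != b[j]:
                delta = new_bj - b[j]
                for i in range(n):
                    resid[i] -= X[i][j] * delta
                b[j] = new_bj
    return b

random.seed(42)
n, p = 80, 300                                    # p >> n, as with CpG arrays
age = [random.uniform(20.0, 80.0) for _ in range(n)]
X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
for i in range(n):                                # five "CpGs" drift with age
    for j in range(5):
        X[i][j] += 0.05 * age[i]
col_means = [sum(X[i][j] for i in range(n)) / n for j in range(p)]
for i in range(n):                                # center columns (no intercept)
    for j in range(p):
        X[i][j] -= col_means[j]
y_bar = sum(age) / n
coef = elastic_net_fit(X, [a - y_bar for a in age])
n_selected = sum(1 for c in coef if abs(c) > 1e-8)
```

Only a subset of the 300 simulated CpGs retains a nonzero coefficient, mirroring how published clocks keep tens to hundreds of CpGs out of arrays with hundreds of thousands.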

6.2.2.2 First Generation (Chronological) Clocks

The first (chronological) epigenetic clocks developed were based on the methylation levels of only a few (<10) CpGs found to correlate strongly with age (e.g. Garagnani et al. 2012). However, such DNAm clocks cannot be reliably used across multiple tissues and are often characterized by high Mean Absolute Errors (MAE > 10 years) (Koch and Wagner 2011), or are inconsistently accurate across different DNAm measurement techniques (Declerck and Vanden Berghe 2018). The most investigated first generation clocks are the Hannum and the Horvath clocks, which were developed independently in 2013 (Hannum et al. 2013; Horvath 2013). For years these have been among the most robust BA estimators available, with high Pearson correlations with CA (0.96 for the Horvath and 0.91 for the Hannum clock) and small MAEs (3.6 and 4.9 years, respectively), along with high predictivity of incident mortality risk (Hannum et al. 2013; Horvath 2013; Marioni et al. 2015). These two clocks are only moderately correlated (Lu et al. 2018), having just six overlapping CpG sites (Jylhävä et al. 2017). This limited overlap and covariance is probably due to the fact that the Hannum clock is based on 71 CpGs measured in peripheral blood cells and is strongly dependent on the composition of the blood sample, while the Horvath clock is based on DNAm signals at 353 CpGs from 51 different human tissues/cell types, and is therefore independent of age-related changes in blood cell composition. Due to their construction, these two clocks are thought to tag different aging processes: immunosenescence for the Hannum clock (for which it is also known as extrinsic epigenetic age acceleration) and an intrinsic cell aging process conserved across cell types for the Horvath clock (from which an intrinsic epigenetic age acceleration was derived, by regressing BA estimates against CA) (Chen et al. 2016). More generally, this concept can be extended to all DNAm clocks: tissue-specific clocks may better reflect age-related, tissue-specific diseases, while multi-tissue clocks are thought to better capture innate aging processes (Bergsma and Rogaeva 2020).
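The age-acceleration idea mentioned above, i.e. regressing BA estimates on CA and taking the residuals, can be sketched in a few lines (plain Python; the four-subject ages are invented for illustration):

```python
def age_acceleration(bio_age, chrono_age):
    """Residuals of biological age regressed on chronological age via
    simple least squares; positive values indicate accelerated aging."""
    n = len(chrono_age)
    mx, my = sum(chrono_age) / n, sum(bio_age) / n
    sxx = sum((x - mx) ** 2 for x in chrono_age)
    sxy = sum((x - mx) * (y - my) for x, y in zip(chrono_age, bio_age))
    slope = sxy / sxx
    intercept = my - slope * mx
    return [y - (intercept + slope * x) for x, y in zip(chrono_age, bio_age)]

ca = [40.0, 50.0, 60.0, 70.0]    # invented chronological ages
ba = [42.0, 49.0, 65.0, 68.0]    # invented clock estimates
acc = age_acceleration(ba, ca)   # residuals sum to zero by construction
```

Working on residuals, rather than on the raw difference BA − CA, removes any systematic over- or under-estimation of age by the clock in the cohort at hand.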

A. Gialluisi et al.

6.2.2.3 Second Generation (Biological) Clocks

Among second generation (biological) DNAm clocks, the most prominent examples are DNAmPhenoAge (Levine et al. 2018) and GrimAge (Lu et al. 2019). DNAmPhenoAge is a 513-CpG predictor trained on PhenoAge, a measure developed to predict incident mortality risk in the North American population by applying a penalized linear regression model to the nine circulating biomarkers most predictive of mortality and to subjects' chronological age (Levine et al. 2018). GrimAge instead represents a linear combination of CA, sex, and DNAm surrogate biomarkers (1030 CpGs) for seven plasma proteins and smoking pack-years, trained and selected to best predict mortality, and it outperforms all the above-mentioned clocks (Lu et al. 2019; Hillary et al. 2020). Moreover, GrimAge was the only epigenetic clock predicting death risk independently of lifestyles (McCrory et al. 2021).

6.2.2.4 Third Generation (Pace of Aging) Clocks

More recently, a third generation of epigenetic clocks has been proposed, aimed at quantifying the rate of biological aging—namely the pace at which aging proceeds—rather than merely the extent of aging accumulated up to the time of measurement, as both first- and second-generation clocks do. This was accomplished through the Pace of Aging (PoA), a measure exploiting longitudinal blood marker measurements across several assessment waves in the third and fourth decades of life of participants of the Dunedin cohort (New Zealand), tagging the multi-system integrity of different organs (Belsky et al. 2015). Belsky and colleagues developed two epigenetic surrogates of PoA, DunedinPoAm (Belsky et al. 2020) and DunedinPACE (Belsky et al. 2022), by applying penalized regressions to DNAm measures collected over 12- and 20-year follow-ups, respectively. Both clocks are interpreted as years of biological aging per year of chronological aging, with values > 1 suggesting an accelerated pace of aging and values < 1 indicating a decelerated aging rate. These clocks predicted all-cause mortality risk, morbidity, disability and aging-related decline well, with DunedinPACE even showing influences independent of other epigenetic clocks (see below) (Belsky et al. 2020, 2022).
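The "years of biological aging per chronological year" reading can be illustrated with a toy computation (the per-biomarker decline rates below are invented, and the real PoA combines many standardized biomarker slopes per participant):

```python
def pace_of_aging(slopes_by_subject):
    """Toy Pace-of-Aging: each subject carries per-biomarker rates of
    change (decline coded as positive), here already standardized; we
    average them and scale so the cohort mean pace is 1.0, i.e. one
    'biological year' per chronological year."""
    per_subject = [sum(s) / len(s) for s in slopes_by_subject]
    cohort_mean = sum(per_subject) / len(per_subject)
    return [p / cohort_mean for p in per_subject]

# invented standardized decline rates (three biomarkers x three subjects)
slopes = [
    [1.2, 0.9, 1.1],   # declining faster than the cohort average
    [0.8, 1.0, 0.6],   # declining more slowly
    [1.0, 1.1, 1.4],
]
pace = pace_of_aging(slopes)  # > 1 = accelerated, < 1 = decelerated pace
```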

6.2.3 Blood Biomarker-Based Aging Clocks

Biological clocks based on circulating marker levels have gained renewed attention in recent years, thanks to the relatively low costs and routine assessment of biochemical and blood count measures—which make them widely applicable in large populations—and to the spread of machine learning applications to biomedical data (Gialluisi et al. 2019). Deep Neural Networks (DNNs) represent the most prominent example of supervised machine learning algorithms in the field, typically
characterized by an input layer, a variable number of hidden "decision" layers and an output layer, imitating the human brain in structure and function (Zhavoronkov et al. 2019). DNNs are able to capture hidden underlying features and to learn complex representations of highly multidimensional data (Mamoshina et al. 2016), automatically select the features most relevant to predictions (Zhavoronkov and Mamoshina 2019), and have often proven to be among the best performing algorithms in blood-based BA estimation (Bae et al. 2021). This way, for each vector of input features provided (e.g. the blood test of a given subject), the algorithm returns an accurately predicted age (BA) value (Zhavoronkov et al. 2019), although such models are population-specific (Mamoshina et al. 2018; Gialluisi et al. 2019). In a pioneering study, Putin and colleagues used anonymized blood biochemistry records of 41 markers, plus sex, from 62,419 subjects of the general Russian population to estimate BA through an ensemble of DNNs (Putin et al. 2016). This algorithm predicted BA quite accurately, with a standard coefficient of determination (R2, the fraction of variance in chronological age explained by the model) of 0.83, a Pearson correlation of 0.91, and a MAE of 5.55 years (Putin et al. 2016). Mamoshina and colleagues (Mamoshina et al. 2018) later trained similar algorithms on population-specific datasets, using samples from three ethnically different populations, namely South Koreans (N = 65,760), Eastern Europeans (N = 55,920) and Canadians (N = 20,699), with sex and 19 blood features as predictors. These models showed good predictivity of chronological age when trained and tested on the same population (R2 = 0.49–0.69, MAE = 5.59–6.36 years) (Mamoshina et al. 2018). The aging acceleration (Δ age) derived from these BA estimates was associated with incident all-cause mortality in a US and a Canadian cohort (Mamoshina et al. 2018).
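The accuracy metrics quoted throughout this section (MAE, Pearson's r, R2) are easy to compute from any set of predictions; a minimal stdlib sketch, with invented ages, is:

```python
import math

def mae(pred, true):
    """Mean Absolute Error, in years when inputs are ages."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def pearson_r(pred, true):
    """Pearson correlation between predicted and chronological age."""
    n = len(true)
    mp, mt = sum(pred) / n, sum(true) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    den = math.sqrt(sum((p - mp) ** 2 for p in pred) *
                    sum((t - mt) ** 2 for t in true))
    return num / den

def r_squared(pred, true):
    """Coefficient of determination: fraction of variance in
    chronological age explained by the predictions."""
    mt = sum(true) / len(true)
    ss_res = sum((t - p) ** 2 for p, t in zip(pred, true))
    ss_tot = sum((t - mt) ** 2 for t in true)
    return 1.0 - ss_res / ss_tot

true_age = [35.0, 48.0, 61.0, 74.0]   # invented chronological ages
pred_age = [40.0, 45.0, 63.0, 70.0]   # invented model predictions
```

Note that R2 as defined here is computed from residuals, so for a miscalibrated model it can be much lower than the square of Pearson's r, which is insensitive to systematic offsets.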
A similar algorithm was later built by our group in an Italian population cohort, the Moli-sani study (N = 23,858), with comparable accuracy (MAE = 6.00 years; r = 0.76; R2 = 0.57) (Gialluisi et al. 2021a). Beyond replicating associations of the resulting Δ age with mortality, we observed novel associations with the risk of first hospitalization, for all and for specific causes, and with measures of mental and physical wellbeing. Deep learning architectures (guided autoencoders) were recently used to compute an inflammatory age (iAge) based on 50 circulating cytokines, chemokines and growth factors. The resulting discrepancy with CA was markedly negative in centenarians, suggesting a notable BA deceleration, and was associated with multi-morbidity and immunosenescence (Sayed et al. 2021), further validating the clock. Alternative algorithms have also been proposed to build blood-based BA acceleration estimators aimed at predicting mortality as accurately as possible, which should represent the first and main aim of an aging clock (Pyrkov and Fedichev 2019; Li et al. 2022). MORTAL-bioage, developed to predict death risk from blood markers and CA through Cox models (Pyrkov and Fedichev 2019), and Phenotypic Age (or PhenoAge), based on Gompertz models (Liu et al. 2018), represent prominent examples in the field. As with second generation epigenetic clocks, this underlines how training biological aging predictors using biological rather than chronological age as a label—or, alternatively, aging phenotypes or surrogates—could notably
improve the efficacy of aging clocks as public health tools, since it confers a better ability to predict clinical outcomes (Levine et al. 2018; Bergsma and Rogaeva 2020; Li et al. 2022).
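The Gompertz logic behind mortality-trained estimators can be illustrated with a toy inversion of the hazard function: a person's biological age is the age at which an average person would carry the same mortality risk. The baseline parameters A and B below are assumed toy values, not the published PhenoAge coefficients.

```python
import math

# Illustrative Gompertz baseline hazard h(t) = A * exp(B * t);
# A and B are invented for this sketch.
A, B = 1e-4, 0.085

def gompertz_hazard(age):
    """Mortality hazard of an 'average' person at a given age."""
    return A * math.exp(B * age)

def biological_age(individual_hazard):
    """Invert the baseline: the age at which an average person would
    carry this individual's mortality risk."""
    return math.log(individual_hazard / A) / B

ca = 60.0
# suppose blood markers imply a 50% higher hazard than the age-60 baseline
ba = biological_age(1.5 * gompertz_hazard(ca))   # a few years above 60
```

Because the hazard grows exponentially with age, even a substantially elevated risk translates into a biological age only a few years above the chronological one.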

6.2.4 Neuroimaging-Based Brain Aging Clocks

While cognitive performance has for years represented a reliable measure of brain aging, more recently accurate BA estimation methods have been proposed that use brain imaging features, like structural Magnetic Resonance Imaging (MRI) (Cole et al. 2017a, 2018a), multimodal neuroimaging sources (Cole 2020), positron emission tomography (PET) signals (Goyal et al. 2018) or electroencephalograms (EEG) (Al Zoubi et al. 2018; Sun et al. 2019). The most prominent example in the field is brainPAD, a predicted brain age measure derived from Gaussian Process Regression (GPR) applied to gray and white matter structural features (Cole et al. 2017b, 2018b). In a Scottish aging cohort, this measure significantly predicted aging phenotypes and all-cause mortality, even better than "molecular" aging clocks like TL and Horvath's DNAm age (Cole et al. 2018b). BrainPAD is characterized by good accuracy (R2 ≥ 0.91 and MAE = 4.16 years) and high consistency across different MRI scanners (1.5 T vs. 3 T), and is also quite robust to the use of raw—rather than segmented—neuroimaging data (R2 = 0.88, MAE = 4.65 years) (Cole et al. 2017a). More recently, a multimodal estimation algorithm based on Lasso regressions including different types of MRI data showed further improved accuracy in brain age prediction (Pearson's r = 0.78 and MAE = 3.55 years) (Cole 2020).

6.3 Overlap of Aging Clocks

Despite converging evidence that the above-mentioned clocks may tag shared molecular bases, they have seldom been analyzed together in population studies, and their shared underlying mechanisms remain under-investigated. Below we provide an overview of studies dissecting the overlap across diverse aging clocks, both in terms of influence on clinical risk (epidemiological overlap) and in terms of genetic/epigenetic bases (biological overlap).

6.3.1 Epidemiological Overlap

Studies comparing different aging clocks in the same cohort have mostly analyzed a few molecular and functional markers, and quite consistently observed partly shared and partly independent influences on aging phenotypes and mortality, in line with
moderate reciprocal correlations, which only partly depend on their shared variance with CA (Marioni et al. 2016; Kim et al. 2017; Cole et al. 2018b; Zhang et al. 2018; Murabito et al. 2018; Gialluisi et al. 2019; Gao et al. 2019; Li et al. 2020b). Marioni et al. (2016) investigated TL and Hannum DNAm age acceleration jointly in a Scottish cohort (N > 1300), reporting only moderate correlations. This may explain why these clocks represented independent fractions of variance in CA—with the Hannum clock's R2 being tenfold higher than that of TL—and independently predicted all-cause mortality in a longitudinal setting: a one Standard Deviation (SD) increase in Hannum DNAm age was associated with a 25% increase in death risk, and a one SD increase in TL with an 11% decrease in mortality (Marioni et al. 2016). In a small US aging population (N = 262; 60–103 years), Kim and colleagues instead tested another DNAm aging parameter—the Horvath clock—against a composite frailty index, observing that the latter outperformed the former in predicting death risk, and that the former lost its association after adjustment for CA and leukocyte cell fractions, as well as in multivariable models including both aging clocks (Kim et al. 2017). Similarly, a 10-CpG methylation-based mortality clock (MRscore) (Zhang et al. 2017) independently predicted incident death risk—showing a 91% increase of risk per SD increase—when modelled jointly with the same frailty index and Horvath's DNAm clock in survival models, in a longitudinal German study (N > 2300; 50–75 years; ~14-year follow-up) (Zhang et al. 2018). Likewise, in a US aging cohort (534 males, 55–85 years), MRscore showed associations with incident all-cause, cardiovascular and cancer death which were independent of TL, DNAmPhenoAge and Horvath's DNAm age acceleration, outperforming these clocks (Gao et al. 2019).
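Per-SD effect sizes like those above come from rescaling a Cox model coefficient; the conversion is a one-liner. The coefficient and SD below are invented to reproduce a roughly 25% per-SD increase, and are not taken from Marioni et al.

```python
import math

def hr_per_sd(beta_per_unit, sd):
    """Rescale a Cox log-hazard coefficient (per unit of the clock)
    to a hazard ratio per one SD increase: exp(beta * SD)."""
    return math.exp(beta_per_unit * sd)

# an assumed coefficient of 0.045 per year of DNAm-age acceleration and
# a cohort SD of 5 years give a hazard ratio of about 1.25 per SD
hr = hr_per_sd(0.045, 5.0)
```

Reporting effects per SD makes clocks on different scales (years, unitless scores, telomere kilobases) directly comparable.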
In line with this evidence, in the (US) Framingham Heart Study Offspring Cohort (N = 2471; 66 ± 9 years), DunedinPACE associations with health-span endpoints were not heavily affected by adjustment for the Horvath, Hannum, or PhenoAge clocks, while in models adjusted for GrimAge the associations with mortality, cardiovascular disease, and disability were attenuated but remained statistically significant (Belsky et al. 2022). In the same cohort, consistent evidence for a certain complementarity in predicting mortality and age-related (cardiovascular) disease risk was found when Murabito and colleagues compared the (blood-based) Klemera–Doubal biological aging estimator (Klemera and Doubal 2006) and a composite circulating inflammatory index, along with intrinsic and extrinsic DNAm age acceleration estimators (Murabito et al. 2018). BrainPAD was also analyzed jointly with other aging acceleration clocks, in the (Scottish) Lothian Aging cohort (N = 669, mean age 73 years), where it predicted all-cause mortality independently of Horvath's DNAm age acceleration, but not of TL (Cole et al. 2018b). More recently, our group reported independent influences of PhenoAge acceleration and of an independent blood-based biological aging measure on both mortality and first hospitalization risk in an adult Italian cohort (N = 4772; > 35 years) (Gialluisi et al. 2021a). In the last two years, a few studies have attempted to combine a larger number of aging clocks from very diverse sources into a single prediction model (Li et al. 2020b; Nie et al. 2022). Li and colleagues analyzed longitudinal trajectories of nine BA acceleration markers—including DNAm (Horvath's, Hannum's, DNAmPhenoAge, and
GrimAge) clocks, TL, and other composite indices of frailty, blood/urine markers and cognitive/mood status—in a Swedish population cohort (N = 845) (Li et al. 2020b). Over a 20-year follow-up, Horvath's DNAmAge, DNAmGrimAge, and the frailty index significantly predicted all-cause mortality in a multivariable survival model (Li et al. 2020b). Interestingly, molecular and functional estimators were weakly correlated, in line with previous evidence (Belsky et al. 2018). Nie and colleagues (Nie et al. 2022) used a similar approach to carry out a study of 4066 young Chinese volunteers, over 403 multi-omic features, including metabolomic, clinical biochemistry, immune repertoire, body composition, physical fitness, EEG, facial skin and gut microbiome features, aimed at quantifying BA acceleration indices in different domains. They identified different aging rates within the same organism, and clusters of subjects with different multi-organ aging patterns. More importantly, they determined that combining different markers/BAs boosted power in predicting both all-cause and cardiovascular mortality (Nie et al. 2022).

6.3.2 Biological Overlap

Very little is known about the genetic, epigenetic and biological overlap among different aging clocks. This represents an interesting prospective field of investigation, since moderate to high heritability has been reported for TL (0.44–0.86) (Njajou et al. 2007; Broer et al. 2013), DNAm (0.34–0.55) and brainPAD (≥ 0.5) (Cole et al. 2017a) clocks, as well as for longevity (Giuliani et al. 2018), a trait strictly related to biological aging. Moreover, cross-time phenotypic correlations between epigenetic clocks measured at different time points were reported to be mediated by shared genetic influences (Jylhävä et al. 2019), in line with the partial overlap of both CpGs and genes encompassed by DNAm clocks, and with similar methylation patterns across them (described below) (Bergsma and Rogaeva 2020; Li et al. 2022). In this section, we disentangle the genetic and epigenetic underpinnings of different BA acceleration indices, trying to gain insights into their shared molecular mechanisms.

6.3.2.1 Genomic Sites Shared Among Different DNA Methylation Clocks

It is well known that there are peculiar methylation patterns and trends across the genome during aging, with hypo- and hypermethylation affecting genomic regions that differ widely in location and function (Reale et al. 2022). Indeed, as the organism ages, DNA methylation levels change unevenly, with a general trend towards global hypomethylation and a locus-specific tendency to hypermethylation (Horvath and Raj 2018). More specifically, genomic regions like the DNA repeats of subtelomeres and CpG islands—namely CpG site-rich regions of roughly 1 kb, which
are often closely associated with mammalian gene promoters—tend to gain methylation. When methylation occurs in these CG-rich promoters, the transcription of the genes they regulate can be repressed (Reale et al. 2022). In general, hypermethylation of age-associated genes results in a reduction of their expression, although this relationship is not unequivocal, and elevated DNAm over the gene body has been associated with high gene expression levels (Wezyk et al. 2018). On the contrary, most interspersed elements, gene-poor regions (so-called "gene deserts"), GC-poor promoters and enhancers show age-related hypomethylation. When this takes place in regulatorily active regions, it is usually associated with upregulation of transcription (Bergsma and Rogaeva 2020). These patterns in the aging methylome characterize molecular mechanisms underlying typical aging-related diseases and phenotypes. As an example, genes involved in the immune (T-)cell response and differentiation have been widely demonstrated to undergo a hypermethylation-mediated downregulation of transcription, which ultimately leads to reduced immunocompetence and response to infections in the elderly population (Tserel et al. 2015). Similarly, age-related hypermethylation of genes that are crucial during development and act as tumor suppressors is directly associated with cancer risk during the course of life and has been suggested to play a key role in neoplastic formation (see Peleg 2022; Reale et al. 2022 for reviews). Overall, the similarities between age-related methylation patterns and the epigenetic mechanisms underlying chronic health conditions suggest the methylome is a key component of aging, and further support the deployment and use of multiple independent epigenetic clocks, even ones based on methylation signals from different tissues.
Still, although age-related CpGs are overrepresented in the vicinity of Polycomb-binding regions and promoters—which represent key regulators of gene expression—little overlap in terms of CpGs has been reported among the different DNAm clocks developed so far (Bergsma and Rogaeva 2020). While this may be expected for epigenetic clocks based on a small number of CpGs (e.g. Zhang et al. 2018), it is not so for clocks with tens or hundreds of CpGs, such as Hannum's and Horvath's. A potential explanation might be the different cell types and tissues in which the underlying DNAm measurements were performed or, considering all clocks, the different paradigms used for their construction (chronological vs. biological clocks). These factors may also explain why some epigenetic clocks capture certain age-related phenotypes and clinical risks better than others (Bergsma and Rogaeva 2020). To gain a deeper mechanistic understanding of the 20 epigenetic clocks developed so far, Levine and colleagues (Levine and Higgins-Chen 2022) recently analyzed all 5717 CpG sites they encompass—both shared and unshared—clustering them by covariance and by the range of methylation levels across chronological ages. They identified twelve modules of CpG sites with peculiar biological characteristics, like a monotonic loss of DNAm as CA increases, strong directional changes or an exponential change in methylation levels during development, or peculiar sensitivity to epigenetic reprogramming in vitro. A consensus clustering of these modules showed two main clusters: one with non-significant or weakly negative associations with
mortality, strong acceleration in tumor versus normal tissue, and moderate rejuvenation during reprogramming; and the other strongly predictive of death risk, with moderate acceleration in tumor versus normal tissue and stronger resetting upon reprogramming (Levine and Higgins-Chen 2022). The limited overlap among epigenetic clocks is also observed in terms of the genes they encompass, although some genes are more represented than others (Li et al. 2022). Among these, Bergsma and Rogaeva (2020) identified 28 genes encompassed by more than three DNAm clocks, 21 of which show a consistent direction of age-related methylation trend across the different estimators. The most represented genes were EDARADD, ELOVL2 and KLF14, as later confirmed by a more recent work (Li et al. 2022). EDARADD (EDAR-associated death domain), a gene involved in immune response regulation and cancer, is hypomethylated in five clocks and will be discussed further below, since it is implicated in the shared genetic underpinnings of different aging clocks. ELOVL2 (Elongation Of Very Long Chain Fatty Acids Protein 2) represents a very interesting candidate, since it shows consistent age-related hypermethylation trends across all tissues (Bell et al. 2019) and encodes an enzyme involved in lipid homeostasis and retinal function; the latter was demonstrated to recover when the age-related hypermethylation of the ELOVL2 promoter was reversed in murine in vivo models (Garagnani et al. 2012; Chen et al. 2020a). KLF14 (Kruppel-like factor 14) undergoes hypermethylation—suggesting downregulation—during aging, which has been associated with abnormalities in DNA repair, cell cycle control and apoptosis in familial early-onset Alzheimer Disease (Wezyk et al. 2018). Moreover, it has been implicated in immunosenescence (Peleg 2022).
These genes may represent promising candidates for functional validation studies aimed at better understanding how they contribute to age-related disease onset, and possibly at developing anti-aging therapies in the future.

6.3.2.2 Shared Genetic Influences on Diverse Aging Clocks

So far, few studies have attempted to identify the genetic underpinnings of the aging clocks developed, mostly through Genome Wide Association Scans (GWAS). These are large-scale studies in which millions of common genetic variants throughout the genome—namely Single Nucleotide Polymorphisms (SNPs) or small insertions/deletions (indels)—are tested for association with a given phenotype in huge samples (usually ranging from tens of thousands to millions of subjects). Such studies have identified many significant associations of common genetic variants with different BA acceleration estimators, such as TL (Li et al. 2020a), DNAm age (Van Dongen et al. 2016; Lu et al. 2018; Gibson et al. 2019) and brainPAD (Jonsson et al. 2019; Kaufmann et al. 2019), each with a small effect size on their variation. Furthermore, they have led to interesting findings in the field. First, they suggested genetic correlations of DNAm age acceleration with lifestyle/socioeconomic factors and longevity (McCartney et al. 2020), later confirmed by independent evidence that polygenic scores influencing different BAs predicted longevity, although with small effect sizes (Nie et al. 2022). Second, they suggested an overlap between brain
age acceleration in healthy subjects and polygenic risk of several neuropsychiatric and neurodegenerative disorders (Kaufmann et al. 2019). Third, they indicated many genetic links among aging clocks, both in terms of shared co-heritability and in terms of pleiotropic genes influencing one or more aging clocks. Indeed, in a recent review, we computed genetic correlations (or SNP-based co-heritability) among all the biological aging clocks tested so far in GWAS, namely TL (Codd et al. 2013), epigenetic age acceleration (Hannum, Horvath, GrimAge and DNAmPhenoAge) (McCartney et al. 2020), blood-based markers like PhenoAge and Blood(Bio)Age (Kuo et al. 2020b), and brainPAD (Kaufmann et al. 2019). This revealed low to moderate positive correlations among DNAm clocks—in line with previous evidence (Gibson et al. 2019; McCartney et al. 2020)—and additional significant correlations of these with blood marker-based estimators and TL. This patchy co-heritability pattern suggests the existence of partly shared and partly independent genetic influences on the diverse clocks (Gialluisi et al. 2021b). To identify such pleiotropic influences, we performed a multivariate association analysis of the above-mentioned aging clocks, identifying more than 2000 associated variants and 252 genes enriched for such multivariate associations (Gialluisi et al. 2021b). Associated variants were mostly intergenic or intronic (Fig. 6.2), while associated loci mostly clustered in specific regions of chromosomes 1, 6, 10, 12, 17 and 19 (Fig. 6.3). A network analysis of the enriched genes revealed a significant excess of high-confidence interactions (Fig. 6.4), suggesting the existence of a global molecular network among the gene products, and highlighted some local networks of interest.
These included the interaction involving the apolipoproteins APOE, APOC1 and APOA5, implicated in triglyceride and cholesterol transport and metabolism (Dominiczak and Caslake 2011), cardiovascular disease (Zhou et al. 2018) and dementia risk (Kuo et al. 2020a); and the interplay among CRP (C-reactive protein), TNF (tumor necrosis factor), SELP (P-selectin, a marker of platelet activation) and IL6R (interleukin-6 receptor), highlighting the importance of inflammation during aging—a phenomenon known as "inflammaging" (Franceschi et al. 2018). Pathway-based enrichments substantiated these findings, providing further evidence for the implication of lipid and carbohydrate homeostasis (Gialluisi et al. 2021b).

Fig. 6.2 Single variant functional annotations most represented among significant multivariate associations with aging clocks. The histogram reports the proportion of SNPs significantly associated (or in high LD with independent loci associated) with biological aging clocks at the multivariate level which have a corresponding functional annotation assigned by ANNOVAR (v 17-07-2017) (Wang et al. 2010). Bar colors indicate −log2(enrichment) of the indicated functional annotation, compared to all SNPs in the selected reference panel (1000 Genomes, phase 3)

Fig. 6.3 Genomic loci presenting multivariate associations with different aging clocks. Summary results per genomic risk locus of the multivariate association analysis with different aging clocks, as performed by Gialluisi et al. (2021b), are reported

6.3.2.3 Intersecting Genetic and Epigenetic Overlaps Among Diverse Aging Clocks

For the purpose of this chapter, we intersected the list of genes enriched for multivariate associations with aging clocks (Gialluisi et al. 2021b) with the genes encompassed by three or more DNAm clocks (Bergsma and Rogaeva 2020; Li et al. 2022), identifying six genes meeting both criteria: EDARADD, KLF14, SCGN, NHLRC1, SELP and ASPA (Table 6.1). Since these genes represent natural candidates for further validation at the molecular level, we briefly review them below. Overall, the evidence reported here makes the search for shared genetic underpinnings of BA acceleration estimators very promising, even using "classical" tools like GWAS, as suggested elsewhere (Giuliani et al. 2018).

EDARADD

The ectodysplasin-A receptor-associated adaptor protein (EDARADD; chromosome 1q42.3-q43; OMIM #606603) gene encodes a protein that interacts with the ectodysplasin-A receptor (EDAR) through a specific death domain. EDAR is part of the tumor necrosis factor (TNF) receptor family. In mammals, two EDARADD isoforms (A and B) exist, both activating the NF-κB pathway (Sadier et al. 2015). Numerous studies have shown that NF-κB transcription factors are involved in the regulation of genes controlling cell survival and proliferation, and are involved in several types of cancer (Shishodia and Aggarwal 2004; Courtois and Gilmore 2006; Charbonneau et al. 2014; Wang et al. 2014; He et al. 2018). Mutations affecting EDARADD have been described in anhidrotic ectodermal dysplasia, an ectodermal differentiation disorder involving aberrant development of the exocrine sweat glands, teeth and hair, and other related pathological phenotypes (Bal et al. 2007; Chassaing et al. 2010; Suda et al. 2010; Masui et al. 2011; Wohlfart et al. 2016; Chen et al. 2017; Podzus et al. 2017). Moreover, this gene is also involved in innate immunity regulation and in cytokine signaling (Gibson et al. 2019), and is thought to be overexpressed with aging as its DNAm level decreases (Bergsma and Rogaeva 2020).

KLF14

Kruppel-like factor 14 (KLF14; 7q32.2; #609393) belongs to a family of evolutionarily conserved transcription factors with zinc finger domains that are known to regulate a variety of cellular processes such as proliferation, differentiation, metabolism, and apoptosis (Lomberk and Urrutia 2005).

130

A. Gialluisi et al.

Fig. 6.4 Interaction network of genes enriched for associations with aging clocks. The reported network represents both direct (physical) and indirect (functional) associations as inferred from the STRING v11.0 database (Szklarczyk et al. 2019), among the genes enriched for multivariate associations with multiple aging clocks in Gialluisi et al. (2021b). Image courtesy of Gialluisi et al. (2021b)

Table 6.1 Promising candidate genes for future functional studies on anti-aging therapies

Gene | OMIM # | Chr region | Genomic coordinates (GRCh38) | Molecular functions | Associated diseases
EDARADD | 606603 | 1q42.3-q43 | 1:236,394,286–236,484,930 | Involved in the NF-κB pathway | Anhidrotic ectodermal dysplasia
KLF14 | 609393 | 7q32.2 | 7:130,730,697–130,734,207 | Zinc finger transcription factor involved in proliferation, differentiation, metabolism and apoptosis | Metabolic syndrome, type 2 diabetes, obesity, atherosclerosis
SCGN | 609202 | 6p22.2 | 6:25,652,215–25,701,783 | Ca2+ sensor protein involved in insulin metabolism | Insulinoma
NHLRC1 | 608072 | 6p22.3 | 6:18,120,440–18,122,677 | E3 ubiquitin ligase possibly involved in glycogen metabolism | Lafora disease
SELP | 173610 | 1q24.2 | 1:169,588,849–169,630,124 | P-selectin, involved in platelet, endothelial cell and leukocyte interactions with beta-integrins | Thrombo-inflammatory conditions
ASPA | 608034 | 17p13.2 | 17:3,474,110–3,503,405 | Involved in the hydrolysis of N-acetylaspartate (NAA) to acetate and aspartate | Canavan disease

Genes encompassing three or more epigenetic clocks (Bergsma and Rogaeva 2020; Li et al. 2022) and enriched for associations with different biological aging clocks (Gialluisi et al. 2021b) are reported, along with their main molecular function and associated diseases

6 Epidemiology, Genetics and Epigenetics of Biological Aging: One … 131


KLF14 is the latest identified KLF member and was found to be induced by TGF-β and expressed in an imprinted manner in intraembryonic and extraembryonic tissues (Parker-Katiraee et al. 2007; Truty et al. 2009). More specifically, in mammals it is expressed in muscle, brain, heart, fat, and liver (Gonzalez et al. 2013), and it acts as a transcription factor through different mechanisms, such as the formation of transcriptional inhibition complexes and regulation through its DNA-binding zinc finger domain (Chasman et al. 2009; Truty et al. 2009). Owing to a significantly increased number of nonsynonymous changes distinguishing it from the other members of the KLF family, KLF14 is the first known example of an imprinted gene undergoing accelerated evolution in the human genome (Parker-Katiraee et al. 2007). In addition, a gender-specific effect has been observed for KLF14 regulation (Small et al. 2018). A large body of evidence suggests an important role of KLF14 in regulating lipid and glucose metabolism. In subcutaneous fat, KLF14 was found to be associated with the expression level of multiple genes affecting body mass index (BMI), obesity, cholesterol, insulin, and glucose levels (Small et al. 2011). Several studies have demonstrated that specific regions at KLF14 act as activators of other genomic loci involved in adipose tissue metabolism and in the context of type 2 diabetes (T2D) (de Assuncao et al. 2014; Anunciado-Koza et al. 2016; Lotta et al. 2017; Small et al. 2018). Indeed, several GWAS also showed that variants at the KLF14 locus are associated with HDL-C levels, metabolic syndrome, T2D, and atherosclerosis (Sladek et al. 2007; Teslovich et al. 2010; Ohshige et al. 2011; Small et al. 2011; Chen et al. 2012; Elouej et al. 2016; Nair et al. 2016). Some recent studies have also suggested a role for KLF14 in the regulation of the immune system and in tumorigenesis (Chen et al. 2020b). Its promoter hypermethylation trend during aging, and hence its downregulation, has been implicated in the loss of immunocompetence through repression of Foxp3 (Peleg 2022), as well as in familial early-onset AD, affecting DNA repair and the cell cycle, and potentially favoring the hypermethylation of TRIM59, another gene incorporated in DNAm clocks which contributes to pro-apoptotic signaling in AD (Wezyk et al. 2018).

SCGN
SCGN (secretagogin; 6p22.2; #609202) encodes a Ca2+ sensor protein expressed in several tissues, localized intracellularly in neuroendocrine cells and also present in the extracellular milieu (Wagner et al. 2000; Gartner et al. 2001). Several reports have demonstrated a role for SCGN in the regulation of insulin secretion, as well as its interaction with SNAP-25 (Rogstam et al. 2007), an exocytosis component acting on actin remodeling to control the focal adhesion of insulin granules (Yang et al. 2016), and other SCGN-mediated events (Kobayashi et al. 2016). SCGN also regulates the expression of corticotropin-releasing hormone (CRH) and matrix metalloprotease-2 (Romanov et al. 2015; Hanics et al. 2017). Emerging evidence points to a role for SCGN in pancreatic beta-cells that is independent of exocytosis, modulating insulin function extracellularly and regulating systemic metabolism (Sharma et al. 2019). This leads to the hypothesis that SCGN may be implicated in some forms of insulinoma, a pancreatic neuroendocrine tumor leading to overproduction of insulin. Interestingly,


its product is co-expressed and interacts with the well-known TAU protein in pancreatic islet (Langerhans) cells, suggesting that these cells may represent a site of tauopathies external to the central nervous system (Maj et al. 2008).

NHLRC1
NHLRC1 (NHL Repeat-Containing Protein 1; 6p22.3; #608072) encodes malin, an E3 ubiquitin ligase that is able to polyubiquitinate protein targeting to glycogen (PTG) by interacting with the adaptor protein laforin (Gentry et al. 2005; Cheng et al. 2007; Vilchez et al. 2007; Solaz-Fuster et al. 2008; Worby et al. 2008; Roma-Mateo et al. 2012). Because PTG targets protein phosphatase 1 (PP1), which is involved in the dephosphorylation and activation of glycogen synthase, malin has been suggested to play a role in glycogen metabolism (Printen et al. 1997). Although the experimental data supporting this hypothesis are inconsistent and the involvement of malin in glycogen metabolism remains unresolved (DePaoli-Roach et al. 2010), mutations in NHLRC1 are known genetic causes of Lafora disease, a fatal glycogen-storage disorder that manifests as severe epilepsy and is characterized by aberrant glycogen inclusions in nearly all tissues (Nitschke et al. 2018).

SELP
SELP (Selectin P; 1q24.2; #173610) encodes a well-known marker of platelet and endothelial cell activation, belonging to the family of selectins, which are known to play important roles in several inflammatory conditions (Ley 2003). Upon cell stimulation, P-selectin is mobilized from α-granules in platelets as well as from Weibel–Palade bodies in endothelial cells, and is then transported to the cell surface to act in response to inflammatory signals (Liu et al. 2010). P-selectin directly mediates vascular inflammation, facilitating leukocyte adhesion to the vessel wall through the formation of hetero-conjugates between platelets and polymorphonuclear leukocytes (PMNs) or monocytes (Theilmeier et al. 1999). Therefore, P-selectin plays an important role in thrombus formation (Falati et al. 2003) and is directly linked to organ dysfunction and infarctions (Jackson et al. 2009). Notably, SELP expression was recently found to be significantly increased in resting platelets of COVID-19 patients, as was the number of circulating platelet-leukocyte aggregates (Manne et al. 2020).

ASPA
ASPA (Aspartoacylase; 17p13.2; #608034) has been linked with Canavan disease, a severe autosomal recessive disorder with more than 50 causal variants described to date (Hoshino and Kubota 2014). Canavan disease involves a progressive neurodegenerative process characterized by swelling and spongy degeneration of brain white matter. This disorder is a form of leukodystrophy, thus affecting white matter integrity and in particular myelination (Lotun et al. 2021). Mostly expressed in oligodendrocytes (Baslow et al. 1999), ASPA functions as a homodimer and catalyzes the hydrolysis of N-acetylaspartate (NAA) to acetate and aspartate (Klugmann et al. 2003; Moore et al. 2003). Reduced or absent ASPA activity in oligodendrocytes


results in increased levels of NAA, a typical feature of Canavan disease (Hagenfeldt et al. 1987).

6.4 Conclusions

Overall, the evidence reviewed above suggests some considerations. First, the development of aging clocks, be they biological or chronological, based on supervised machine learning algorithms applied to a diverse range of biometric data represents a turning point in the field, since it has notably improved the prediction of underlying biological aging processes and of incident clinical risks in populations. Moreover, it has increased the power to investigate the biological bases of aging.

Second, in this perspective each organism should be seen as a multi-aging system (or mosaic) with diverse aging domains or organs aging at different rates, where systemic (organismal) clocks co-exist and partly overlap with organ/tissue-specific counterparts.

Third, combining the estimators tagging such aging domains into single multivariable models may help improve the accuracy of mortality and aging phenotype prediction, and hence their efficacy as public health markers.

Fourth, these aging domains partly share underlying biological, genetic and epigenetic mechanisms, which warrant further investigation as molecular targets for anti-ageing interventions. Although effective anti-aging (or, to be more realistic, age-delaying) therapies in humans are still far from being developed, the first promising in vivo and in vitro evidence (Reale et al. 2022), along with the potential molecular targets discussed above, suggests that this goal may not be so far from being achieved. Notwithstanding this, we should not forget the importance of environmental factors in delaying biological aging: the adoption of healthy lifestyles remains, at present, the first and most effective anti-aging intervention.

Acknowledgements We thank Dr Maria Benedetta Donati and Dr Chiara Cerletti for a critical review of the present chapter.

Compliance with Ethical Standards

Conflict of Interest The authors declare that they have no conflict of interest.

References

Al ZO, Wong CK, Kuplicki RT et al (2018) Predicting age from brain EEG signals—a machine learning approach. Front Aging Neurosci 10:1–12. https://doi.org/10.3389/fnagi.2018.00184
Anunciado-Koza RP, Manuel J, Koza RA (2016) Molecular correlates of fat mass expansion in C57BL/6J mice after short-term exposure to dietary fat. Ann N Y Acad Sci 1363:50–58. https://doi.org/10.1111/nyas.12958


Bae C, Im Y, Lee J et al (2021) Comparison of biological age prediction models using clinical biomarkers commonly measured in clinical practice settings: AI techniques vs. traditional statistical methods. Front Anal Sci 1:1–12. https://doi.org/10.3389/frans.2021.709589
Bal E, Baala L, Cluzeau C et al (2007) Autosomal dominant anhidrotic ectodermal dysplasias at the EDARADD locus. Hum Mutat 28:703–709. https://doi.org/10.1002/humu.20500
Baslow MH, Suckow RF, Sapirstein V, Hungund BL (1999) Expression of aspartoacylase activity in cultured rat macroglial cells is limited to oligodendrocytes. J Mol Neurosci 13:47–53. https://doi.org/10.1385/JMN:13:1-2:47
Bell CG, Lowe R, Adams PD et al (2019) DNA methylation aging clocks: challenges and recommendations. Genome Biol 1–24
Belsky DW, Caspi A, Houts R et al (2015) Quantification of biological aging in young adults. Proc Natl Acad Sci USA 112:E4104–E4110. https://doi.org/10.1073/PNAS.1506264112
Belsky DW, Moffitt TE, Cohen AA et al (2018) Eleven telomere, epigenetic clock, and biomarker-composite quantifications of biological aging: do they measure the same thing? Am J Epidemiol 187:1220–1230. https://doi.org/10.1093/aje/kwx346
Belsky DW, Caspi A, Arseneault L et al (2020) Quantification of the pace of biological aging in humans through a blood test, the DunedinPoAm DNA methylation algorithm. Elife 9:1–56. https://doi.org/10.7554/eLife.54870
Belsky DW, Caspi A, Corcoran DL et al (2022) DunedinPACE, a DNA methylation biomarker of the pace of aging. Elife 11:1–26. https://doi.org/10.7554/eLife.73420
Bergsma T, Rogaeva E (2020) DNA methylation clocks and their predictive capacity for aging phenotypes and healthspan. https://doi.org/10.1177/2633105520942221
Bobrov E, Georgievskaya A, Kiselev K et al (2018) PhotoAgeClock: deep learning algorithms for development of non-invasive visual biomarkers of aging. Aging (Albany NY) 10:3249–3259
Broer L, Codd V, Nyholt DR et al (2013) Meta-analysis of telomere length in 19,713 subjects reveals high heritability, stronger maternal inheritance and a paternal age effect. Eur J Hum Genet 21:1163–1168. https://doi.org/10.1038/ejhg.2012.303
Caulton A, Dodds KG, Mcrae KM et al (2022) Development of epigenetic clocks for key ruminant species
Cawthon RM, Smith KR, O'Brien E et al (2003) Association between telomere length in blood and mortality in people aged 60 years or older. Lancet 361:393–395. https://doi.org/10.1016/S0140-6736(03)12384-7
Charbonneau B, Block MS, Bamlet WR et al (2014) Risk of ovarian cancer and the NF-κB pathway: genetic association with IL1A and TNFSF10. Cancer Res 74:852–861. https://doi.org/10.1158/0008-5472.CAN-13-1051
Chasman DI, Pare G, Mora S et al (2009) Forty-three loci associated with plasma lipoprotein size, concentration, and cholesterol content in genome-wide analysis. PLoS Genet 5:e1000730. https://doi.org/10.1371/journal.pgen.1000730
Chassaing N, Cluzeau C, Bal E et al (2010) Mutations in EDARADD account for a small proportion of hypohidrotic ectodermal dysplasia cases. Br J Dermatol 162:1044–1048. https://doi.org/10.1111/j.1365-2133.2010.09670.x
Chen X, Li S, Yang Y et al (2012) Genome-wide association study validation identifies novel loci for atherosclerotic cardiovascular disease. J Thromb Haemost 10:1508–1514. https://doi.org/10.1111/j.1538-7836.2012.04815.x
Chen BH, Marioni RE, Colicino E et al (2016) DNA methylation-based measures of biological age: meta-analysis predicting time to death. Aging (Albany NY) 8:1844–1865. https://doi.org/10.18632/aging.101020
Chen YT, Liu HC, Han D et al (2017) Association between EDAR polymorphisms and nonsyndromic tooth agenesis in the Chinese Han population. Chin J Dent Res 20:153–159. https://doi.org/10.3290/j.cjdr.a38770
Chen D, Chao DL, Rocha L et al (2020a) The lipid elongation enzyme ELOVL2 is a molecular regulator of aging in the retina. Aging Cell 1–13. https://doi.org/10.1111/acel.13100


Chen X, Shi W, Zhang H (2020b) The role of KLF14 in multiple disease processes. BioFactors 46:276–282. https://doi.org/10.1002/biof.1612
Cheng A, Zhang M, Gentry MS et al (2007) A role for AGL ubiquitination in the glycogen storage disorders of Lafora and Cori's disease. Genes Dev 21:2399–2409. https://doi.org/10.1101/gad.1553207
Codd V, Nelson CP, Albrecht E et al (2013) Identification of seven loci affecting mean telomere length and their association with disease. Nat Genet 45:422–427. https://doi.org/10.1038/ng.2528
Cole JH (2020) Multimodality neuroimaging brain-age in UK biobank: relationship to biomedical, lifestyle, and cognitive factors. Neurobiol Aging 92:34–42. https://doi.org/10.1016/j.neurobiolaging.2020.03.014
Cole JH, Franke K (2017) Predicting age using neuroimaging: innovative brain ageing biomarkers. Trends Neurosci 40:681–690. https://doi.org/10.1016/j.tins.2017.10.001
Cole JH, Poudel RPK, Tsagkrasoulis D et al (2017a) Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. NeuroImage 163:115–124. https://doi.org/10.1016/j.neuroimage.2017.07.059
Cole JH, Underwood J, Caan MWA et al (2017b) Increased brain-predicted aging in treated HIV disease. Neurology 88:1349–1357. https://doi.org/10.1212/WNL.0000000000003790
Cole JH, Marioni RE, Harris SE, Deary IJ (2018a) Brain age and other bodily 'ages': implications for neuropsychiatry. Mol Psychiatry 1–16. https://doi.org/10.1038/s41380-018-0098-1
Cole JH, Ritchie SJ, Bastin ME et al (2018b) Brain age predicts mortality. Mol Psychiatry 23:1385–1392. https://doi.org/10.1038/mp.2017.62
Courtois G, Gilmore TD (2006) Mutations in the NF-κB signaling pathway: implications for human disease. Oncogene 25:6831–6843. https://doi.org/10.1038/sj.onc.1209939
de Assuncao TM, Lomberk G, Cao S et al (2014) New role for Kruppel-like factor 14 as a transcriptional activator involved in the generation of signaling lipids. J Biol Chem 289:15798–15809. https://doi.org/10.1074/jbc.M113.544346
Declerck K, Vanden Berghe W (2018) Back to the future: epigenetic clock plasticity towards healthy aging. Mech Ageing Dev 174:18–29. https://doi.org/10.1016/J.MAD.2018.01.002
DePaoli-Roach AA, Tagliabracci VS, Segvich DM et al (2010) Genetic depletion of the malin E3 ubiquitin ligase in mice leads to Lafora bodies and the accumulation of insoluble laforin. J Biol Chem 285:25372–25381. https://doi.org/10.1074/jbc.M110.148668
Dominiczak MH, Caslake MJ (2011) Apolipoproteins: metabolic role and clinical biochemistry applications. Ann Clin Biochem 48:498–515. https://doi.org/10.1258/acb.2011.011111
Elouej S, Rejeb I, Attaoua R et al (2016) Gender-specific associations of genetic variants with metabolic syndrome components in the Tunisian population. Endocr Res 41:300–309. https://doi.org/10.3109/07435800.2016.1141945
Engebretsen S, Bohlin J (2019) Statistical predictions with glmnet. Clin Epigenet 1:10–12
Falati S, Liu Q, Gross P et al (2003) Accumulation of tissue factor into developing thrombi in vivo is dependent upon microparticle P-selectin glycoprotein ligand 1 and platelet P-selectin. J Exp Med 197:1585–1598. https://doi.org/10.1084/jem.20021868
Franceschi C, Garagnani P, Morsiani C et al (2018) The continuum of aging and age-related diseases: common mechanisms but different rates. Front Med 5:61. https://doi.org/10.3389/fmed.2018.00061
Galkin F, Mamoshina P, Aliper A et al (2020) Human gut microbiome aging clock based on taxonomic profiling and deep learning. iScience 23:101199. https://doi.org/10.1016/j.isci.2020.101199
Galkin F, Mamoshina P, Kochetov K et al (2021) DeepMAge: a methylation aging clock developed with deep learning. Aging Dis 12:1252–1262. https://doi.org/10.14336/AD.2020.1202
Gao X, Colicino E, Shen J et al (2019) Comparative validation of an epigenetic mortality risk score with three aging biomarkers for predicting mortality risks among older adult males. Int J Epidemiol 48:1958–1971. https://doi.org/10.1093/ije/dyz082


Garagnani P, Bacalini MG, Pirazzini C et al (2012) Methylation of ELOVL2 gene as a new epigenetic marker of age. Aging Cell 11:1132–1134. https://doi.org/10.1111/ACEL.12005
Gartner W, Lang W, Leutmetzer F et al (2001) Cerebral expression and serum detectability of secretagogin, a recently cloned EF-hand Ca2+-binding protein. Cereb Cortex 11:1161–1169. https://doi.org/10.1093/cercor/11.12.1161
Gentry MS, Worby CA, Dixon JE (2005) Insights into Lafora disease: malin is an E3 ubiquitin ligase that ubiquitinates and promotes the degradation of laforin. Proc Natl Acad Sci USA 102:8501–8506. https://doi.org/10.1073/pnas.0503285102
Gialluisi A, Di Castelnuovo A, Donati MB et al (2019) Machine learning approaches for the estimation of biological aging: the road ahead for population studies. Front Med 6. https://doi.org/10.3389/fmed.2019.00146
Gialluisi A, Di Castelnuovo A, Costanzo S et al (2021a) Exploring domains, clinical implications and environmental associations of a deep learning marker of biological ageing. Eur J Epidemiol. https://doi.org/10.1007/s10654-021-00797-7
Gialluisi A, Santoro A, Tirozzi A et al (2021b) Epidemiological and genetic overlap among biological aging clocks: new challenges in biogerontology. Ageing Res Rev 72:101502. https://doi.org/10.1016/J.ARR.2021.101502
Gibson J, Russ TC, Clarke TK et al (2019) A meta-analysis of genome-wide association studies of epigenetic age acceleration. PLoS Genet 15. https://doi.org/10.1371/journal.pgen.1008104
Giuliani C, Garagnani P, Franceschi C (2018) Genetics of human longevity within an eco-evolutionary nature-nurture framework. Circ Res 123:745–772. https://doi.org/10.1161/CIRCRESAHA.118.312562
Gonzalez CR, Vallcaneras SS, Calandra RS, Gonzalez Calvar SI (2013) Involvement of KLF14 and egr-1 in the TGF-β1 action on Leydig cell proliferation. Cytokine 61:670–675. https://doi.org/10.1016/j.cyto.2012.12.009
Goyal MS, Blazey TM, Su Y et al (2018) Persistent metabolic youth in the aging female brain. Proc Natl Acad Sci USA 1–5. https://doi.org/10.1073/pnas.1815917116
Hagenfeldt L, Bollgren I, Venizelos N (1987) N-acetylaspartic aciduria due to aspartoacylase deficiency—a new aetiology of childhood leukodystrophy. J Inherit Metab Dis 10:135–141. https://doi.org/10.1007/BF01800038
Hanics J, Szodorai E, Tortoriello G et al (2017) Secretagogin-dependent matrix metalloprotease-2 release from neurons regulates neuroblast migration. Proc Natl Acad Sci USA 114:E2006–E2015. https://doi.org/10.1073/pnas.1700662114
Hannum G, Guinney J, Zhao L et al (2013) Genome-wide methylation profiles reveal quantitative views of human aging rates. Mol Cell 49:359–367. https://doi.org/10.1016/j.molcel.2012.10.016
Harley CB, Futcher AB, Greider CW (1990) Telomeres shorten during ageing of human fibroblasts. Nature 345:458–460. https://doi.org/10.1038/345458a0
He HJ, Bing H, Liu G (2018) TSR2 induces laryngeal cancer cell apoptosis through inhibiting NF-κB signaling pathway. Laryngoscope 128:E130–E134. https://doi.org/10.1002/lary.27035
Hillary RF, Stevenson AJ, McCartney DL et al (2020) Epigenetic measures of ageing predict the prevalence and incidence of leading causes of death and disease burden. Clin Epigenet 1–12
Horvath S (2013) DNA methylation age of human tissues and cell types. Genome Biol 14:R115. https://doi.org/10.1186/gb-2013-14-10-r115
Horvath S, Raj K (2018) DNA methylation-based biomarkers and the epigenetic clock theory of ageing. Nat Rev Genet. https://doi.org/10.1038/s41576-018-0004-3
Hoshino H, Kubota M (2014) Canavan disease: clinical features and recent advances in research. Pediatr Int 56:477–483. https://doi.org/10.1111/ped.12422
Jackson SP, Nesbitt WS, Westein E (2009) Dynamics of platelet thrombus formation. J Thromb Haemost 7(Suppl 1):17–20. https://doi.org/10.1111/j.1538-7836.2009.03401.x
Jonsson BA, Bjornsdottir G, Thorgeirsson TE et al (2019) Deep learning based brain age prediction uncovers associated sequence variants. bioRxiv 595801. https://doi.org/10.1101/595801
Jylhävä J, Pedersen NL, Hägg S (2017) Biological age predictors. EBioMedicine 21:29–36. https://doi.org/10.1016/j.ebiom.2017.03.046


Jylhävä J, Hjelmborg J, Soerensen M et al (2019) Longitudinal changes in the genetic and environmental influences on the epigenetic clocks across old age: evidence from two twin cohorts. EBioMedicine 40:710–716. https://doi.org/10.1016/j.ebiom.2019.01.040
Kaufmann T, Van Der MD, Doan NT et al (2019) Common brain disorders are associated with heritable patterns of apparent aging of the brain. Nat Neurosci. https://doi.org/10.1038/s41593-019-0471-7
Kim S, Myers L, Wyckoff J et al (2017) The frailty index outperforms DNA methylation age and its derivatives as an indicator of biological age. GeroScience 39:83–92. https://doi.org/10.1007/s11357-017-9960-3
Klemera P, Doubal S (2006) A new approach to the concept and computation of biological age. Mech Ageing Dev 127:240–248. https://doi.org/10.1016/j.mad.2005.10.004
Klugmann M, Symes CW, Klaussner BK et al (2003) Identification and distribution of aspartoacylase in the postnatal rat brain. NeuroReport 14:1837–1840. https://doi.org/10.1097/00001756-200310060-00016
Kobayashi M, Yamato E, Tanabe K et al (2016) Functional analysis of novel candidate regulators of insulin secretion in the MIN6 mouse pancreatic beta cell line. PLoS ONE 11:e0151927. https://doi.org/10.1371/journal.pone.0151927
Koch CM, Wagner W (2011) Epigenetic-aging-signature to determine age in different tissues. Aging (Albany NY) 3:1018–1027. https://doi.org/10.18632/AGING.100395
Kuo CL, Pilling LC, Atkins JL et al (2020a) ApoE e2 and aging-related outcomes in 379,000 UK biobank participants. Aging (Albany NY) 12:12222–12233. https://doi.org/10.18632/aging.103405
Kuo CL, Pilling LC, Liu Z et al (2020b) Genetic associations for two biological age measures point to distinct aging phenotypes. medRxiv 1–37. https://doi.org/10.1101/2020.07.10.20150797
Levine ME, Higgins-Chen A (2022) Clock work: deconstructing the epigenetic clock signals in aging, disease, and reprogramming
Levine ME, Lu AT, Quach A et al (2018) An epigenetic biomarker of aging for lifespan and healthspan. Aging (Albany NY) 10:573–591. https://doi.org/10.1101/276162
Ley K (2003) The role of selectins in inflammation and disease. Trends Mol Med 9:263–268. https://doi.org/10.1016/S1471-4914(03)00071-6
Li C, Stoma S, Lotta LA et al (2020a) Genome-wide association analysis in humans links nucleotide metabolism to leukocyte telomere length. Am J Hum Genet 106:389–404. https://doi.org/10.1016/j.ajhg.2020.02.006
Li X, Ploner A, Wang Y et al (2020b) Longitudinal trajectories, correlations and mortality associations of nine biological ages across 20-years follow-up. Elife 9:1–20. https://doi.org/10.7554/eLife.51507
Li A, Koch Z, Ideker T (2022) Epigenetic aging: biological age prediction and informing a mechanistic theory of aging. J Intern Med 1–12. https://doi.org/10.1111/joim.13533
Lima EM, Ribeiro AH, Paixão GMM et al (2021) Deep neural network-estimated electrocardiographic age as a mortality predictor. Nat Commun 12. https://doi.org/10.1038/s41467-021-25351-7
Liu Z, Miner JJ, Yago T et al (2010) Differential regulation of human and murine P-selectin expression and function in vivo. J Exp Med 207:2975–2987. https://doi.org/10.1084/jem.20101545
Liu Z, Kuo P-L, Horvath S et al (2018) A new aging measure captures morbidity and mortality risk across diverse subpopulations from NHANES IV: a cohort study. PLOS Med 15:e1002718. https://doi.org/10.1371/journal.pmed.1002718
Lomberk G, Urrutia R (2005) The family feud: turning off Sp1 by Sp1-like KLF proteins. Biochem J 392:1–11. https://doi.org/10.1042/BJ20051234
Lotta LA, Gulati P, Day FR et al (2017) Integrative genomic analysis implicates limited peripheral adipose storage capacity in the pathogenesis of human insulin resistance. Nat Genet 49:17–26. https://doi.org/10.1038/ng.3714


Lotun A, Gessler DJ, Gao G (2021) Canavan disease as a model for gene therapy-mediated myelin repair. Front Cell Neurosci 15:1–13. https://doi.org/10.3389/fncel.2021.661928
Lu AT, Xue L, Salfati EL et al (2018) GWAS of epigenetic aging rates in blood reveals a critical role for TERT. Nat Commun 9. https://doi.org/10.1038/s41467-017-02697-5
Lu AT, Quach A, Wilson JG et al (2019) DNA methylation GrimAge strongly predicts lifespan and healthspan. Aging (Albany NY) 11:303–327. https://doi.org/10.18632/aging.101684
Maj M, Gartner W, Ilhan A et al (2008) Expression of TAU in insulin-secreting cells and its interaction with the calcium-binding protein secretagogin. https://doi.org/10.1677/JOE-09-0341
Mamoshina P, Vieira A, Putin E, Zhavoronkov A (2016) Applications of deep learning in biomedicine. Mol Pharm 13:1445–1454. https://doi.org/10.1021/acs.molpharmaceut.5b00982
Mamoshina P, Kochetov K, Putin E et al (2018) Population specific biomarkers of human aging: a big data study using South Korean, Canadian and Eastern European patient populations. J Gerontol Ser A 73:1482–1490. https://doi.org/10.1093/gerona/gly005
Manne BK, Denorme F, Middleton EA et al (2020) Platelet gene expression and function in patients with COVID-19. Blood 136:1317–1329. https://doi.org/10.1182/blood.2020007214
Marioni RE, Shah S, McRae AF et al (2015) DNA methylation age of blood predicts all-cause mortality in later life. Genome Biol 16:25. https://doi.org/10.1186/s13059-015-0584-6
Marioni RE, Harris SE, Shah S et al (2016) The epigenetic clock and telomere length are independently associated with chronological age and mortality. Int J Epidemiol 45:424–432. https://doi.org/10.1093/ije/dyw041
Masui Y, Farooq M, Sato N et al (2011) A missense mutation in the death domain of EDAR abolishes the interaction with EDARADD and underlies hypohidrotic ectodermal dysplasia. Dermatology 223:74–79. https://doi.org/10.1159/000330557
McCartney DL, Min JL, Richmond RC et al (2020) Genome-wide association studies identify 137 loci for DNA methylation biomarkers of ageing. bioRxiv 1–50
McCrory C, Fiorito G, Hernandez B et al (2021) GrimAge outperforms other epigenetic clocks in the prediction of age-related clinical phenotypes and all-cause mortality. J Gerontol Ser A 76:741–749. https://doi.org/10.1093/GERONA/GLAA286
Moore RA, Le Coq J, Faehnle CR, Viola RE (2003) Purification and preliminary characterization of brain aspartoacylase. Arch Biochem Biophys 413:1–8. https://doi.org/10.1016/s0003-9861(03)00055-9
Müezzinler A, Zaineddin AK, Brenner H (2013) A systematic review of leukocyte telomere length and age in adults. Ageing Res Rev 12:509–519
Murabito JM, Zhao Q, Larson MG et al (2018) Measures of biologic age in a community sample predict mortality and age-related disease: the Framingham offspring study. J Gerontol Ser A Biol Sci Med Sci 73:757–762. https://doi.org/10.1093/gerona/glx144
Nair AK, Piaggi P, McLean NA et al (2016) Assessment of established HDL-C loci for association with HDL-C levels and type 2 diabetes in Pima Indians. Diabetologia 59:481–491. https://doi.org/10.1007/s00125-015-3835-x
Nie C, Li Y, Li R et al (2022) Distinct biological ages of organs and systems identified from a multi-omics study. Cell Rep 38:110459. https://doi.org/10.1016/j.celrep.2022.110459
Nitschke F, Ahonen SJ, Nitschke S et al (2018) Lafora disease—from pathogenesis to treatment strategies. Nat Rev Neurol 14:606–617. https://doi.org/10.1038/s41582-018-0057-0
Njajou OT, Cawthon RM, Damcott CM et al (2007) Telomere length is paternally inherited and is associated with parental lifespan. Proc Natl Acad Sci USA 104:12135–12139. https://doi.org/10.1073/pnas.0702703104
Ohshige T, Iwata M, Omori S et al (2011) Association of new loci identified in European genome-wide association studies with susceptibility to type 2 diabetes in the Japanese. PLoS ONE 6:e26911. https://doi.org/10.1371/journal.pone.0026911
Parker-Katiraee L, Carson AR, Yamada T et al (2007) Identification of the imprinted KLF14 transcription factor undergoing human-specific accelerated evolution. PLoS Genet 3:e65. https://doi.org/10.1371/journal.pgen.0030065


Peleg S (2022) How to slow down the ticking clock: age-associated epigenetic alterations and related interventions to extend life span
Podzus J, Kowalczyk-Quintas C, Schuepbach-Mallepell S et al (2017) Ectodysplasin A in biological fluids and diagnosis of ectodermal dysplasia. J Dent Res 96:217–224. https://doi.org/10.1177/0022034516673562
Printen JA, Brady MJ, Saltiel AR (1997) PTG, a protein phosphatase 1-binding protein with a role in glycogen metabolism. Science 275:1475–1478. https://doi.org/10.1126/science.275.5305.1475
Putin E, Mamoshina P, Aliper A et al (2016) Deep biomarkers of human aging: application of deep neural networks to biomarker development. Aging (Albany NY) 8:1021–1033. https://doi.org/10.18632/aging.100968
Pyrkov TV, Fedichev PO (2019) Biological age is a universal marker of aging, stress, and frailty. bioRxiv 578245. https://doi.org/10.1101/578245
Reale A, Tagliatesta S, Zardo G, Zampieri M (2022) Counteracting aged DNA methylation states to combat ageing and age-related diseases. Mech Ageing Dev 206:111695. https://doi.org/10.1016/j.mad.2022.111695
Rogstam A, Linse S, Lindqvist A et al (2007) Binding of calcium ions and SNAP-25 to the hexa EF-hand protein secretagogin. Biochem J 401:353–363. https://doi.org/10.1042/BJ20060918
Roma-Mateo C, Sanz P, Gentry MS (2012) Deciphering the role of malin in the Lafora progressive myoclonus epilepsy. IUBMB Life 64:801–808. https://doi.org/10.1002/iub.1072
Romanov RA, Alpar A, Zhang MD et al (2015) A secretagogin locus of the mammalian hypothalamus controls stress hormone release. EMBO J 34:36–54. https://doi.org/10.15252/embj.201488977
Sadier A, Lambert E, Chevret P et al (2015) Tinkering signaling pathways by gain and loss of protein isoforms: the case of the EDA pathway regulator EDARADD. BMC Evol Biol 15:129. https://doi.org/10.1186/s12862-015-0395-0
Sanders JL, Newman AB (2013) Telomere length in epidemiology: a biomarker of aging, age-related disease, both, or neither? Epidemiol Rev 35:112–131. https://doi.org/10.1093/epirev/mxs008
Sayed N, Huang Y, Nguyen K et al (2021) An inflammatory aging clock (iAge) based on deep learning tracks multimorbidity, immunosenescence, frailty and cardiovascular aging. Nat Aging 1:598–615. https://doi.org/10.1038/s43587-021-00082-y
Sharma AK, Khandelwal R, Sharma Y (2019) Veiled potential of secretagogin in diabetes: correlation or coincidence? Trends Endocrinol Metab 30:234–243. https://doi.org/10.1016/j.tem.2019.01.007
Shishodia S, Aggarwal BB (2004) Nuclear factor-κB: a friend or a foe in cancer? Biochem Pharmacol 68:1071–1080. https://doi.org/10.1016/j.bcp.2004.04.026
Sladek R, Rocheleau G, Rung J et al (2007) A genome-wide association study identifies novel risk loci for type 2 diabetes. Nature 445:881–885. https://doi.org/10.1038/nature05616
Small KS, Hedman AK, Grundberg E et al (2011) Identification of an imprinted master trans regulator at the KLF14 locus related to multiple metabolic phenotypes. Nat Genet 43:561–564. https://doi.org/10.1038/ng.833
Small KS, Todorcevic M, Civelek M et al (2018) Regulatory variants at KLF14 influence type 2 diabetes risk via a female-specific effect on adipocyte size and body composition. Nat Genet 50:572–580. https://doi.org/10.1038/s41588-018-0088-x
Solaz-Fuster MC, Gimeno-Alcaniz JV, Ros S et al (2008) Regulation of glycogen synthesis by the laforin-malin complex is modulated by the AMP-activated protein kinase pathway. Hum Mol Genet 17:667–678. https://doi.org/10.1093/hmg/ddm339
Suda N, Bazar A, Bold O et al (2010) A Mongolian patient with hypohidrotic ectodermal dysplasia with a novel P121S variant in EDARADD. Orthod Craniofac Res 13:114–117. https://doi.org/10.1111/j.1601-6343.2010.01484.x
Sun H, Paixao L, Oliva JT et al (2019) Brain age from the electroencephalogram of sleep. Neurobiol Aging 74:112–120. https://doi.org/10.1016/j.neurobiolaging.2018.10.016

6 Epidemiology, Genetics and Epigenetics of Biological Aging: One …






Chapter 7

Temporal Relation Prediction from Electronic Health Records Using Graph Neural Networks and Transformers Embeddings

Óscar García Sierra, Alfonso Ardoiz Galaz, Miguel Ortega Martín, Jorge Álvarez Rodríguez, and Adrián Alonso Barriuso

Abstract Temporal relation extraction is a key factor in many Natural Language Processing (NLP) tasks and, particularly, in clinical text mining. Previous studies mostly use linguistic rules, Machine Learning classifiers or Neural Networks for temporal relation extraction. Motivated by the existence of corpora annotated with temporal relations, and by the rise of graphs and Transformers in NLP, we propose a pipeline based on Transformers and Graph Neural Networks that predicts temporal links between events, entities and temporal expressions. So far, we have analyzed the prediction of BEFORE and OVERLAP links from the i2b2 corpus using different types of BERT embeddings, achieving AUCs of 97% and 87%, respectively.

Keywords Temporal link prediction · Graphs · GNNs · Transformers

Ó. G. Sierra · A. A. Galaz · M. O. Martín · J. Á. Rodríguez · A. A. Barriuso (B)
Dezzai by MMG, Madrid, Spain
e-mail: [email protected]

Ó. G. Sierra · A. A. Galaz · M. O. Martín
Universidad Complutense de Madrid, Madrid, Spain

A. A. Barriuso
Universidad Rey Juan Carlos, Madrid, Spain
Data Science Laboratory, Universidad Rey Juan Carlos, Madrid, Spain

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_7

7.1 Introduction

Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech (Chowdhury 2003). NLP applications range from Machine Translation and automatic correctors to question answering systems, summarizers and other Machine Learning (ML) systems. Most of these applications can be used both in the general domain and in specific ones. This is the case for the clinical field, where large amounts of unexplored texts exist and different NLP systems, such as Named Entity Recognition, Information Retrieval or chatbots, have been used.

In the medical field, Machine Learning techniques are particularly relevant. The key idea behind ML is that it allows making predictions on unlabeled data after training a model on labeled data. In medicine in general, and longevity in particular, being able to make predictions is almost priceless. This motivates us to employ ML and NLP techniques to analyze how we can make temporal, and thus longevity-related, predictions from large amounts of labeled medical data. We therefore consider that if we can efficiently sort the temporal knowledge in a clinical text, which is where this work comes into action, then we would be able to understand the past and to predict, as far as possible, the future.

In NLP, temporal events and expressions have been the basis on which temporal relation tasks have traditionally been performed. In the medical domain, where there is an immense number of texts almost ready to be exploited, a clinical event is anything relevant to the clinical timeline, including verbs, clinical concepts, etc. Time expressions (TIMEXs) give information about when something happened, how long it lasted or how often it occurred. Finally, temporal relations (for example "before", "after" or "overlap") indicate whether two EVENTs, two TIMEXs, or an EVENT and a TIMEX are related to each other in the clinical timeline (Alfattni et al. 2020). Because of the growing interest in the mining of medical texts over the last decade, numerous tasks and resources focused on the treatment of temporal information in this domain have appeared.
The different tasks and corpora share certain means and objectives, but each one also introduces certain peculiarities in the form of new relations, events, etc. The traditional TimeML scheme was adapted by the i2b2 shared task (Sun et al. 2013) and the i2b2 temporal relation corpus, released in 2012. The TempEval task has had several editions over the last fifteen years, and the THYME corpus (Styler et al. 2014) is nowadays one of the most used resources (Alfattni et al. 2020). Different systems have been proposed for the prediction of temporal relations in the clinical domain. Early rule-based approaches used rules derived from lexical and grammatical features and clinical knowledge (Zhou et al. 2008) or syntactic information (Bethard et al. 2015). Despite recent advances in other fields, rule-based systems have not been abandoned. Both in the i2b2 task and in the Clinical TempEval tasks from SemEval, various Machine Learning techniques were used for temporal relation extraction, from MaxEnt (Jung and Stent 2013) or Bayesian classifiers to SVMs or CRFs (Chikka 2016; MacAvaney et al. 2017), along with rules, heuristics or Decision Trees. Alicante et al. (2016) approached the problem in Italian through clustering and the Expectation-Maximization algorithm.


Over the past five years multiple neural network approaches have appeared, from Convolutional Neural Networks (Dligach et al. 2017; Lin et al. 2017) to Recurrent Neural Networks (Han et al. 2019c), LSTMs (Dligach et al. 2017; Maharana 2017; Galvan et al. 2018), bidirectional LSTMs (Tourille et al. 2017; Goyal and Durrett 2019) and GRUs with attention mechanisms (Liu et al. 2019). More recently, Language Models pretrained with Transformers (Vaswani et al. 2017) achieved state-of-the-art results in temporal link prediction (Guan et al. 2020; Han et al. 2019a, b, 2020; Wei et al. 2019). Underlying the trend toward Language Models is the issue of embeddings. Dligach et al. (2017) made use of character and word embeddings, while Maharana (2017) employed word, positional and biomedical embeddings, and Goyal and Durrett (2019) added LSTMs to learn time embeddings from synthetic data. Liu et al. (2019) used 300-dimensional GloVe-6B word embeddings as input, based on their preliminary experiments with embeddings trained on the biomedical domain and the THYME corpus. Recently, several attempts have been made to adapt BERT embeddings to the medical domain. This is the case of Bio_ClinicalBERT (Alsentzer et al. 2019) and BlueBert (Peng et al. 2019), which were trained on different resources, such as medical papers and the MIMIC-III corpus. Graphs have also been used for the temporal relation task. Bramsen et al. (2006) constructed a directed acyclic graph that encodes temporal relations. Nikfarjam et al. (2013) built a temporal directed graph based on parse tree dependencies. More recently, Zhou et al. (2020) used temporal graphs combined with the Transformer architecture. Nikfarjam et al. (2013) also utilized a hybrid model that combined SVMs with a graph-based inference mechanism within a single sentence, based on frequent patterns and parse dependency relations (Alfattni et al. 2020).
Leaving the syntactic component behind, Jeblee and Hirst (2018) generated a list of temporally ordered events from a graph created from the THYME corpus. Graph Neural Networks (GNNs) are one of the hottest topics in Deep Learning. Specifically, the link prediction task tries to forecast relations between non-connected nodes from the parameters learned during training. Although much has been written about the use of GNNs to predict relationships between nodes (Zhang and Chen 2018) and about the combination of graphs with Transformers and the attention mechanism (Hu et al. 2020; Yun et al. 2019), their use in temporal relation extraction in general, and in the clinical domain in particular, remains almost unexplored, to the best of our knowledge. For link prediction, two main systems have been used: Graph Autoencoders (GAE) (Kipf and Welling 2016) and SEAL (Zhang and Chen 2018).

In this work we built a graph from the i2b2 corpus using contextual embeddings from BERT as node attributes, and compared various types of clinical BERT embeddings. We then trained a Relational Graph Convolutional Network (R-GCN) (Schlichtkrull et al. 2018) model to perform link prediction. Our contributions are the following:

• We proved the power of Graph Neural Networks in temporal relation prediction tasks.
• We proved the efficiency of using BERT embeddings as node embeddings in link prediction tasks.
• We compared embeddings from different BERT models in order to find the best ones, around which we will develop our future models.

Fig. 7.1 Graph creation process

7.2 Methods

Our approach is based on training a Graph Neural Network on graphs built from the i2b2 Electronic Health Records (EHRs) in order to predict missing links in new graphs. In GNNs, the link prediction task is often approached as a comparison between pairs of nodes that are connected by an edge and pairs that are not.

7.2.1 Graph Construction

EHRs from the i2b2 are annotated with 9 different types of links between EVENTs, temporal expressions (TIMEXs) and SECTIMEs (the date and time of arrival and departure of the patient). To simplify, we used the BEFORE and OVERLAP links from 302 EHRs. Figure 7.1 shows the graph creation process. In order to turn every EHR into a graph, we used the Python library Deep Graph Library1 (DGL) (Wang et al. 2019), with EVENTs, TIMEXs and SECTIMEs as the nodes and the BEFORE and OVERLAP links as the edges. In addition, we added a third type of link not included in the i2b2, which we considered would help the model's predictions: the SAME_SENTENCE relation connects all nodes whose entities appear in the same sentence of the raw-text EHR. By doing this we ensured that, at prediction time, we had a relation that we could automatically include in new graphs in which the BEFORE and OVERLAP relations are still missing.
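As an illustration, the SAME_SENTENCE edges described above can be derived mechanically from the sentence index of each annotated entity. The following is a minimal sketch (the entity-to-sentence mapping and the function name are ours, not part of the i2b2 annotations):

```python
from itertools import combinations

def same_sentence_edges(sentence_of):
    """Connect every pair of entities annotated in the same sentence,
    producing the extra SAME_SENTENCE relation added to the graph.
    `sentence_of` maps an entity id to its sentence index."""
    by_sentence = {}
    for entity, sent in sentence_of.items():
        by_sentence.setdefault(sent, []).append(entity)
    edges = []
    for entities in by_sentence.values():
        # undirected pairs; sorting gives a deterministic ordering
        edges.extend(combinations(sorted(entities), 2))
    return edges

# toy example: three entities, two of them in sentence 0
print(same_sentence_edges({"e1": 0, "e2": 0, "e3": 1}))  # → [('e1', 'e2')]
```

In the real pipeline these pairs become one of the three edge types of the DGL graph.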

1 https://www.dgl.ai/.


We aimed to simplify the design of our graph, so we made all of our nodes the same type ("entity"), and then used node features to store information about their real type (EVENT, TIMEX, SECTIME) in a one-hot-encoded vector. Our graph was nevertheless heterogeneous since, despite having only one node type, it had three edge types. For the purpose of preserving contextual information from the raw-text clinical report, we added to our nodes BERT contextual embeddings extracted from each token of the plain-text reports. For multi-token entities, we used the mean of the embeddings of their components. In the end, our node attributes had 771 dimensions: 3 dimensions from the one-hot-encoded vector referring to the entity type and 768 dimensions from the BERT embeddings vector.
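The 771-dimensional node attributes can be assembled as sketched below (function and variable names are ours; the random vectors stand in for the 768-dimensional BERT token embeddings used in the actual pipeline):

```python
import numpy as np

EMB_DIM = 768                                  # BERT hidden size
ENTITY_TYPES = ["EVENT", "TIMEX", "SECTIME"]   # one-hot part (3 dims)

def node_features(entity_types, token_embeddings):
    """Concatenate a 3-dim one-hot entity type with a 768-dim BERT
    embedding; multi-token entities are mean-pooled over their tokens."""
    feats = []
    for etype, tokens in zip(entity_types, token_embeddings):
        one_hot = np.zeros(len(ENTITY_TYPES))
        one_hot[ENTITY_TYPES.index(etype)] = 1.0
        emb = np.stack(tokens).mean(axis=0)    # mean of token embeddings
        feats.append(np.concatenate([one_hot, emb]))
    return np.stack(feats)

# toy example with random stand-ins for BERT token embeddings
rng = np.random.default_rng(0)
X = node_features(
    ["EVENT", "TIMEX"],
    [[rng.standard_normal(EMB_DIM)],                                  # 1 token
     [rng.standard_normal(EMB_DIM), rng.standard_normal(EMB_DIM)]],   # 2 tokens
)
print(X.shape)  # → (2, 771), i.e. 3 + 768
```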

7.2.2 Masked Language Modeling

In addition to testing the effectiveness of the BERT embeddings used as attributes, we aimed to check whether the quality of these embeddings improved after carrying out Masked Language Modeling (MLM) on the original models. We therefore compared the embeddings of 4 models. We got the first two directly from the Hugging Face hub.2 These were originally trained on clinical papers and EHRs:

1. emilyalsentzer/Bio_ClinicalBERT3
2. bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-164

It should be noted that this second model uses an embedding dimension of 1024, and not 768 like the first one. As said before, in order to test the effect of MLM, the last two models were the result of performing MLM on the first of the previous models, Bio_ClinicalBERT:

3. emilyalsentzer/Bio_ClinicalBERT with Masked Language Modeling performed on Google Colab
4. emilyalsentzer/Bio_ClinicalBERT with Masked Language Modeling performed on our servers

For Masked Language Modeling we masked 15% of the tokens from the i2b2 dataset. We used a batch size of 64, the AdamW optimizer and a learning rate of 1e-5. We trained these models for just 2 epochs, since our metrics started to decrease after that.
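The token-masking step can be sketched as follows. This is a simplified scheme that always substitutes [MASK] on the selected 15%; the standard BERT recipe additionally keeps or randomizes a fraction of the selected tokens, and the [MASK] id shown is the one used by common BERT vocabularies:

```python
import numpy as np

MASK_ID = 103      # [MASK] id in standard BERT vocabularies (assumption)
MLM_PROB = 0.15    # fraction of tokens masked, as in our setup

def mask_tokens(input_ids, rng):
    """Select ~15% of positions for MLM; return (masked_ids, labels)
    where labels are -100 (ignored by the loss) at unmasked positions."""
    ids = np.asarray(input_ids)
    selected = rng.random(ids.shape) < MLM_PROB
    labels = np.where(selected, ids, -100)   # loss only on masked tokens
    masked = np.where(selected, MASK_ID, ids)
    return masked, labels
```

During fine-tuning, the (masked, labels) pairs are fed to the model and the cross-entropy loss is computed only at the masked positions.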

2 https://huggingface.co/models.
3 https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT.
4 https://huggingface.co/bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16.


Fig. 7.2 Diagram of the R-GCN model for link prediction

7.2.3 Model

We built our link prediction model following the DGL guidelines.5 The link prediction task is often approached as a binary classification problem, trying to predict whether two nodes are connected by an edge or not. We used a Relational Graph Convolutional Network (R-GCN) as proposed in Schlichtkrull et al. (2018). The key difference between an R-GCN and a classic Graph Convolutional Network is that in an R-GCN, edges can represent different relations. In this case, link prediction is done by reconstructing an edge with an autoencoder architecture, using a parameterized score function. As seen in Fig. 7.2, our R-GCN model was composed of two R-GCN layers. The first R-GCN layer worked as the input layer: it took in features (BERT embeddings in our case) and projected them to a hidden space. As explained in the DGL documentation,6 for each node an R-GCN computes outgoing messages using the node representation and a weight matrix associated with the edge type, aggregates incoming messages and creates new node representations. As in the guidelines, we followed the negative sampling methodology, which consists in comparing the scores between nodes connected by an edge against the scores between arbitrary pairs of nodes, assuming that those connected by an edge will get a higher score than those which are not. To do this, during the training loop we constructed the negative graph, which contained k negative examples of each positive edge. To calculate the score between edges we used the dot product predictor proposed by DGL, which computes the dot product between the node embeddings. Following this method, the score of nodes connected by an edge (C and B in Fig. 7.2) should be higher than the score of those that are not connected (for instance, B and A in Fig. 7.2). Before training our model, we split our 302 graphs into a train (90%) and a test (10%) set. To avoid memory issues while training, we used a batch size of 20 graphs, which we batched together. It should be noted that in DGL these batched graphs do not mix with each other,7 and each input graph becomes a separate component of the batched graph.

5 https://docs.dgl.ai/en/0.6.x/guide/training-link.html.
6 https://docs.dgl.ai/en/0.6.x/tutorials/models/1_gnn/4_rgcn.html.
7 https://docs.dgl.ai/en/0.6.x/generated/dgl.batch.html.
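As a rough illustration of these two pieces, a relation-specific graph convolution and the dot-product scorer, here is a minimal NumPy sketch. It is a simplification of the DGL/PyTorch implementation we actually used, with degree normalization and a self-loop term in the spirit of Schlichtkrull et al. (2018); all names and the toy graph are ours:

```python
import numpy as np

def rgcn_layer(h, edges_by_rel, w_rel, w_self):
    """One simplified R-GCN layer: each relation r has its own weight
    matrix W_r; a node sums degree-normalized messages W_r h_u from its
    in-neighbors per relation, plus a self-loop term h W_0, then ReLU."""
    out = h @ w_self
    for rel, (src, dst) in edges_by_rel.items():
        msg = h[src] @ w_rel[rel]                     # per-edge messages
        deg = np.bincount(dst, minlength=h.shape[0])  # in-degree under rel
        for e, d in enumerate(dst):
            out[d] += msg[e] / deg[d]                 # 1 / c_{v,r} normalization
    return np.maximum(out, 0.0)

def dot_scores(h, src, dst):
    """Edge score = dot product of the two endpoint embeddings."""
    return np.einsum("ij,ij->i", h[src], h[dst])

# toy heterogeneous graph: 3 nodes, 2 relation types
h = np.eye(3)                                         # 3-dim node features
edges = {"before": (np.array([0]), np.array([1])),
         "same_sentence": (np.array([1, 2]), np.array([2, 1]))}
w = {r: np.eye(3) for r in edges}                     # identity weights for clarity
h1 = rgcn_layer(h, edges, w, np.eye(3))
print(dot_scores(h1, np.array([0]), np.array([1])))   # → [1.]
```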


During the training loop we iteratively constructed the negative graph and computed the margin loss. Since the DGL guidelines do not use batched graphs, we modified the original negative-graph function so that every subgraph only received negative examples from its own subgraph. Lastly, since DGL only allows us to predict one type of relationship at a time, we trained our model on the prediction of BEFORE and OVERLAP links separately. Since our goal was to compare 4 different types of embeddings, we fixed some hyperparameters around which we established the comparison. We used k = 5 to create 5 negative examples for every positive edge. We used hidden and output dimensions of 1024 for the Bio_ClinicalBERT models and 1280 for the BlueBert one. We made use of the Adam optimizer. We trained the model in Google Colab for 50 epochs, as said before, with a training batch size of 20 graphs. In the future we will optimize these hyperparameters on the chosen model.
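The negative-sampling step and the margin loss can be sketched as follows (function names are ours; DGL's link-prediction guide implements the same idea over batched graphs):

```python
import numpy as np

def corrupt_edges(num_nodes, pos_dst, k, rng):
    """Draw k negative destinations per positive edge by sampling random
    nodes; in our pipeline, sampling stays within each subgraph."""
    return rng.integers(0, num_nodes, size=(len(pos_dst), k))

def margin_loss(pos_scores, neg_scores, margin=1.0):
    """Each of the k negative scores should fall at least `margin`
    below its corresponding positive score (hinge on the difference)."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]   # shape (E, 1)
    neg = np.asarray(neg_scores, dtype=float)            # shape (E, k)
    return np.clip(margin - pos + neg, 0.0, None).mean()

# two positive edges, k = 2 negatives each
print(margin_loss([2.0, 2.0], [[0.0, 3.0], [1.0, 1.0]]))  # → 0.5
```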

7.3 Results

We used the model's margin loss to track performance during training; this loss reflects the prediction errors made over the training examples. To evaluate the performance of our model on the test set (10% of the graphs) we used the AUC (Area Under the ROC Curve). The AUC gives the probability that a positive example gets a higher score than a negative one, which makes it suitable for link prediction tasks. Following our training by batches for 50 epochs, the loss of our 4 models decreased during training, and the eval AUC increased progressively on the test set for both types of links. After 50 epochs, both the BEFORE and the OVERLAP AUC reached what we consider remarkable levels.

Table 7.1 Train loss and evaluation AUC results

Temporal relation        Before                  Overlap
Model \ metric           Train loss   Eval AUC   Train loss   Eval AUC
1. Bio_ClinicalBERT      1.67         0.960      2.30         0.846
2. Blue_Bert             1.13         0.965      2.11         0.852
3. Colab MLM             1.03         0.969      2.28         0.861
4. Server MLM            0.83         0.976      2.19         0.872

As seen in Table 7.1, comparing the two original models, Blue_Bert reached a somewhat lower loss and a slightly higher AUC than Bio_ClinicalBERT; we attribute this to the larger size of its embeddings. Moreover, if we compare the performance of the original Bio_ClinicalBERT model with our MLM models, the latter two improved both the train loss and the eval AUC of the original. Notably, the model trained on our servers (using 4 Nvidia Tesla V100 24 GB) reached the best loss and AUC values for both types of relationships out of the four models. Additionally, this fourth model was the only one to exceed 97% and 87% eval AUC for the BEFORE and OVERLAP links, respectively.
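Concretely, the AUC we report can be computed as the fraction of (positive, negative) score pairs ranked correctly, with ties counted as half. A small self-contained sketch (the pairwise O(P·N) form is for clarity; rank-based formulas are preferable at scale):

```python
import numpy as np

def link_auc(pos_scores, neg_scores):
    """AUC = probability that a random positive edge outscores a
    random negative edge; ties contribute 0.5."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# 3 of 4 pairs ranked correctly plus one tie counted as half
print(link_auc([3.0, 2.0], [1.0, 2.0]))  # → 0.875
```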

7.4 Discussion and Future Work

We consider that both of our hypotheses have been confirmed by the results in the previous section. First, Graph Neural Networks and BERT embeddings proved to be an effective combination, offering impressive results in the prediction of temporal relationships in the clinical domain, which opens many lines of future work. Second, BERT embedding performance improved after carrying out Masked Language Modeling on the original models. The large gap between the BEFORE and OVERLAP results follows the pattern indicated by much of the related work: the OVERLAP relationship remains one of the greatest challenges in temporal relation prediction today, being much harder to predict than the BEFORE relation, which is more closely tied to the linear nature of time in text and therefore easier to predict.

The main limitation of our analysis is that we cannot establish a clear comparison with previous work on the prediction of temporal relationships from the i2b2 dataset, for two main reasons. First, our study, being initial, focused only on two of the eight relationships annotated in the dataset, considering them the most basic and an appropriate starting point. Second, the metrics used in previous temporal link prediction studies (accuracy, recall and F-score) can introduce noise when predicting links in graph data, since most edges are negative edges; DGL therefore recommends using AUC to evaluate these models. Despite this, we consider the performance of the model both during training and evaluation remarkable. The continuous improvement of the model during the training loop and its results on the evaluation graphs make us optimistic about future possibilities, since they open a great field in which to continue investigating.

In the future we aim to choose the best model, which at the moment seems to be the fourth one (embeddings from Bio_ClinicalBERT after performing Masked Language Modeling on the i2b2 dataset on our servers), in order to continue optimizing it and to combine it with our own medical NER system as a preliminary step to building the timeline from any given Electronic Health Record.

In conclusion, we consider that the nature of graphs and the power of BERT embeddings adapted to the clinical domain offer great potential when working with temporal relations, this being, to our knowledge, the first work that explores it. This prior analysis of the temporality of an EHR can be truly valuable when dealing with longevity. Generating the timeline of a patient's history through their medical record can help not only to know about their past but also to make predictions about their future. This benefit increases when we talk about several patients: having a large number of clinical texts, with their corresponding diseases, treatments and adverse effects all temporally ordered, can help to predict and therefore favor the longevity of new patients.

Compliance with Ethical Standards

Conflict of Interest The authors declare that they have no conflict of interest.

Ethics Approval This study is based on anonymous EHRs from the "Evaluating temporal relations in clinical text: 2012 i2b2 Challenge" dataset,8 which we have not made public. As its organizers state: "The data use portion of this work was approved by the institutional review boards at the Massachusetts Institute of Technology, Partners Healthcare, and SUNY Albany."

References

Alfattni G, Peek N, Nenadic G (2020) Extraction of temporal relations from clinical free text: a systematic review of current approaches. J Biomed Inform 108:103488
Alicante A, Corazza A, Isgro F, Silvestri S (2016) Unsupervised entity and relation extraction from clinical records in Italian. Comput Biol Med 72:263–275
Alsentzer E, Murphy JR, Boag W, Weng W-H, Jin D, Naumann T, McDermott M (2019) Publicly available clinical BERT embeddings. arXiv preprint arXiv:1904.03323
Bethard S, Derczynski L, Savova G, Pustejovsky J, Verhagen M (2015) SemEval-2015 task 6: clinical TempEval. SemEval@NAACL-HLT
Bramsen P, Deshpande P, Lee YK, Barzilay R (2006) Inducing temporal graphs. In: Proceedings of the 2006 conference on empirical methods in natural language processing, pp 189–198
Chikka VR (2016) Cde-iiith at semeval-2016 task 12: extraction of temporal information from clinical documents using machine learning techniques. In: Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pp 1237–1240
Chowdhury GG (2003) Natural language processing. Ann Rev Inf Sci Technol 37(1):51–89
Dligach D, Miller T, Lin C, Bethard S, Savova G (2017) Neural temporal relation extraction. In: Proceedings of the 15th conference of the European chapter of the association for computational linguistics: volume 2, short papers, pp 746–751
Galvan D, Okazaki N, Matsuda K, Inui K (2018) Investigating the challenges of temporal relation extraction from clinical text. Louhi@EMNLP, pp 55–64
Goyal T, Durrett G (2019) Embedding time expressions for deep temporal ordering models. arXiv preprint arXiv:1906.08287
Guan H, Li J, Xu H, Devarakonda M (2020) Robustly pre-trained neural model for direct temporal relation extraction. arXiv preprint arXiv:2004.06216
Han R, Hsu I, Yang M, Galstyan A, Weischedel R, Peng N (2019a) Deep structured neural network for event temporal relation extraction. arXiv preprint arXiv:1909.10094
Han R, Liang M, Alhafni B, Peng N (2019b) Contextualized word embeddings enhanced event temporal relation extraction for story understanding. ArXiv abs/1904.11942
Han R, Ning Q, Peng N (2019c) Joint event and temporal relation extraction with shared representations and structured prediction. arXiv preprint arXiv:1909.05360
Han R, Zhou Y, Peng N (2020) Domain knowledge empowered structured neural net for end-to-end event temporal relation extraction. arXiv preprint arXiv:2009.07373
Hu Z, Dong Y, Wang K, Sun Y (2020) Heterogeneous graph transformer. Proc Web Conf 2020:2704–2710

8 https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/.


Jeblee S, Hirst G (2018) Listwise temporal ordering of events in clinical notes. Louhi@EMNLP
Jung H, Stent A (2013) ATT1: temporal annotation using big windows and rich syntactic and semantic features. In: Second joint conference on lexical and computational semantics (*SEM), volume 2: proceedings of the seventh international workshop on semantic evaluation (SemEval 2013), pp 20–24
Kipf T, Welling M (2016) Variational graph auto-encoders. ArXiv abs/1611.07308
Lin C, Miller T, Dligach D, Bethard S, Savova G (2017) Representations of time expressions for temporal relation extraction with convolutional neural networks. BioNLP 2017:322–327
Liu S, Wang L, Chaudhary V, Liu H (2019) Attention neural model for temporal relation extraction. In: Proceedings of the 2nd clinical natural language processing workshop, pp 134–139
MacAvaney S, Cohan A, Goharian N (2017) GUIR at SemEval-2017 task 12: a framework for cross-domain clinical temporal information extraction. In: Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017), pp 1024–1029
Maharana A (2017) Extraction of clinical timeline from discharge summaries using neural networks
Najafabadipour M, Zanin M, Rodríguez-González A, Gonzalo-Martín C, García BN, Calvo V, Bermudez JLC, Provencio M, Menasalvas E (2019) Recognition of time expressions in Spanish electronic health records. In: 2019 IEEE 32nd international symposium on computer-based medical systems (CBMS). IEEE, pp 69–74
Nikfarjam A, Emadzadeh E, Gonzalez G (2013) Towards generating a patient's timeline: extracting temporal relationships from clinical notes. J Biomed Inform 46:S40–S47
Peng Y, Yan S, Lu Z (2019) Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474
Schlichtkrull M, Kipf T, Bloem P, Berg R, Titov I, Welling M (2018) Modeling relational data with graph convolutional networks, pp 593–607
Styler WF, Bethard S, Finan S, Palmer M, Pradhan S, De Groen PC, Erickson B, Miller T, Lin C, Savova G (2014) Temporal annotation in the clinical domain. Trans Assoc Comput Linguist 2:143–154
Sun W, Rumshisky A, Uzuner O (2013) Evaluating temporal relations in clinical text: 2012 i2b2 Challenge. J Am Med Inform Assoc 20(5):806–813
Tourille J, Ferret O, Neveol A, Tannier X (2017) Neural architecture for temporal relation extraction: a Bi-LSTM approach for detecting narrative containers. In: Proceedings of the 55th annual meeting of the association for computational linguistics (vol 2: short papers), pp 224–230
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) Attention is all you need. arXiv preprint arXiv:1706.03762
Veličković P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y (2017) Graph attention networks. arXiv preprint arXiv:1710.10903
Wang M, Zheng D, Ye Z, Gan Q, Li M, Song X, Zhang Z (2019) Deep graph library: a graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315
Wei Q, Ji Z, Si Y, Du J, Wang J, Tiryaki F, Wu S, Tao C, Roberts K, Xu H (2019) Relation extraction from clinical narratives using pre-trained language models. In: AMIA annual symposium proceedings. American Medical Informatics Association, p 1236
Yun S, Jeong M, Kim R, Kang J, Kim HJ (2019) Graph transformer networks. NeurIPS
Zhang M, Chen Y (2018) Link prediction based on graph neural networks. Adv Neural Inf Process Syst 31:5165–5175
Zhou L, Parsons S, Hripcsak G (2008) The evaluation of a temporal reasoning system in processing clinical discharge summaries. J Am Med Inform Assoc 15(1):99–106
Zhou Y, Yan Y, Han R, Caufield JH, Chang K-W, Sun Y, Ping P, Wang W (2020) Clinical temporal relation extraction with probabilistic soft logic regularization and global inference. arXiv preprint arXiv:2012.08790

Chapter 8

In Silico Screening of Life-Extending Drugs Using Machine Learning and Omics Data

Alexander Fedintsev, Mikhail Syromyatnikov, Vasily Popov, and Alexey Moskalev

Abstract Geroprotectors are compounds that extend the lifespan of model organisms. More than 200 geroprotectors are currently known; however, only a few of them are ready to be tested in clinical trials, so the discovery of new geroprotectors is an important problem. Here we propose a novel approach to geroprotector discovery based on machine learning and omics data.

Keywords Machine learning · Transcriptome · Geroprotector · Aging

8.1 Introduction

Aging of the population puts a significant burden on the economy and on the healthcare system in particular. It was estimated that a slowdown in aging that increases life expectancy by one year is worth 38 trillion US dollars (Scott et al. 2021). Thus, the discovery of drugs or other remedies that increase life expectancy and delay the onset of aging-related diseases is crucial for society. Drugs or other remedies that extend the lifespan of model organisms are called geroprotectors; more than 200 of them are now known and are continuously indexed at geroprotectors.org (Moskalev et al. 2015). However, only a handful of those were shown to fulfill all

A. Fedintsev · A. Moskalev (B)
Institute of Biology of Komi Science Center of Ural Branch of Russian Academy of Sciences, Syktyvkar 167982, Russia
e-mail: [email protected]

M. Syromyatnikov · V. Popov
Laboratory of Metagenomics and Food Biotechnology, Voronezh State University of Engineering Technologies, Voronezh, Russia

A. Moskalev
School of Systems Biology, George Mason University, Fairfax, USA

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_8

proposed criteria for the evaluation of geroprotectors (Moskalev et al. 2016). These criteria are:

1. Increased lifespan
2. Amelioration of human aging biomarkers
3. Acceptable toxicity
4. Minimal side effects at therapeutic dosage
5. Improving health-related quality of life
6. Evolutionary conservatism of target or mechanism of action
7. Reproducibility of geroprotective effects on different model organisms
8. Simultaneous influence on several aging-associated causes of death in mammals
9. Increase in stress resistance

Criteria 1–5 are primary; criteria 6–9 are secondary but also important. It is believed that geroprotectors that fulfill these criteria are more likely to be successfully translated to the clinic. Since the authors identified only 11 compounds (acarbose, deprenyl, d-glucosamine, dihydroergocristine methanesulfonate, ellagic acid, fenofibrate, glutathione, metformin, spermidine, tyrosol, and vinpocetine) that fulfill all criteria, more candidate geroprotectors are needed: most likely, the majority of these 11 compounds will fail clinical trials or show only modest results. Having more candidates not only increases the likelihood of discovering a very potent geroprotector that could on its own significantly extend human life expectancy; it is also useful for seeking a potent combination of geroprotectors, an approach that has shown very promising results in model organisms. For example, a combination of three drugs extended the lifespan of C. elegans by 96%, while the maximal life extension achieved by a single drug in that study was about 35% (Admasu et al. 2018). The discovery of new geroprotectors is slow and expensive: even relatively short-lived nematodes such as C. elegans have a median lifespan of about a month in good conditions. Thus, improved prediction of drug performance prior to lengthy experimentation would greatly speed up the discovery process. Numerous machine learning methods have been proposed to solve this problem (Dönertaş et al. 2019). Some of them rely on similarity to already known geroprotectors (Liu et al. 2016; Barardo et al. 2017); others utilize genetic data to identify ligands whose binding is most likely to affect the aging process (Ziehm et al. 2017). Similarity-based methods may be very precise: indeed, molecules very similar to known geroprotectors likely have similar targets and similar properties.
However, the recall of this class of methods is expected to be quite limited: because an algorithm is tied to a specific set of known geroprotectors, it will be unable to find novel geroprotectors that have a unique molecular structure or that bind to ligands not previously identified as having a link to aging. Another proposed approach could help to mitigate this problem. This approach is based on analysis of the transcriptome (Spindler and Mote 2007). It is established that aging is associated with changes in the gene expression profile (de Magalhães et al. 2009). Therefore, it is plausible that a drug that shifts the transcriptomic signature toward a younger state could have geroprotective properties.

However, it is not always possible to obtain old cells from human donors and use them for screening thousands of chemical compounds. Luckily, there are publicly available expression data from cancer cell lines treated with various drugs. Our hypothesis is that, although cancer cells differ in many ways from aged cells, geroprotectors may still have a similar influence on the gene expression of these cells. These transcriptomic signatures, however, are unknown and may be quite complex. To learn these signatures, we propose an approach based on a binary classification task. We also propose a specially designed ML model that can deal with high-dimensional data, and we report our results on synthetic datasets as well as a real-world evaluation of predicted substances on an invertebrate model, the bumblebee Bombus terrestris. The bumblebee Bombus terrestris is an economically important pollinator, one of the best-studied social insects to date, and an emerging model species in quantitative and population genetics (Woodard et al. 2015). Bumblebees are a convenient model for gerontological research due to the simplicity of their maintenance in laboratory conditions, as well as the short life cycle of male and worker bumblebees. Invert sugar syrup is used as a carbohydrate food for bumblebees, which makes it easy to test potential water-soluble geroprotectors on these insects.

8.2 Methods

8.2.1 Data Collection

We obtained transcriptomic data for the cell line HL60 from the Connectivity Map (CMAP) project (Lamb et al. 2006) and selected 44 compounds that have geroprotective properties according to the Geroprotectors database (http://geroprotectors.org) (Moskalev et al. 2015). We then took several files with expression data per compound as positive samples (n = 70 in total). For the negative samples, we selected 64 compounds that did not show any life extension in a screening on C. elegans (Ye et al. 2014) and were present in CMAP; in total, this gave 65 negative samples. Expression data were normalized using Robust Multi-array Averaging (RMA) (Irizarry et al. 2003). Table 8.1 lists the selected compounds.
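As a concrete illustration, the labeling step described above can be sketched as follows. The helper function, column names and sample ids here are our own hypothetical choices, not taken from the CMAP files:

```python
import pandas as pd

def label_samples(samples: pd.DataFrame, gero: set, non_gero: set) -> pd.DataFrame:
    """Keep only samples treated with a compound of known status and
    attach a binary label (1 = geroprotector, 0 = inactive compound)."""
    out = samples.copy()
    out["label"] = out["compound"].map(
        lambda c: 1 if c in gero else (0 if c in non_gero else pd.NA))
    # Samples whose compound is in neither list are dropped.
    return out.dropna(subset=["label"]).astype({"label": int})

# Tiny demonstration with made-up sample ids; the real inputs would be
# the CMAP sample annotations and the two curated compound lists.
samples = pd.DataFrame({
    "sample_id": ["s1", "s2", "s3"],
    "compound": ["metformin", "ouabain", "some-other-drug"],
})
labeled = label_samples(samples, gero={"metformin"}, non_gero={"ouabain"})
```

The resulting label column then serves as the target of the binary classifier, with the RMA-normalized expression values as features.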

8.2.2 Lines of Invertebrate Models and Keeping Conditions

Males of the bumblebee Bombus terrestris, used within 1 day of hatching from pupae, were employed in the experiments. Bumblebees were housed 10 individuals per cylindrical cage (diameter 14 cm, height 7 cm). In total, 3 cages (30 bumblebees) were used for each experimental variant. Cages with bumblebees were placed in laboratory insectariums with an automated microclimate maintenance system. In the insectariums,

Table 8.1 List of geroprotectors

Group: Geroprotectors (Moskalev et al. 2015)
Compounds: acetylsalicylic acid, allantoin, alpha-estradiol, bacitracin, bezafibrate, bisoprolol, caffeic acid, canadine, chloramphenicol, chlorprothixene, ciclosporin, colecalciferol, cyproterone, demeclocycline, doxazosin, doxycycline, enalapril, fenofibrate, flurbiprofen, genistein, kaempferol, kinetin, L-methionine sulfoximine, lisinopril, lithocholic acid, LY-29400, melatonin, metformin, metoprolol, minocycline, myricetin, naringenin, nitrendipine, pergolide, promethazine, quercetin, ramipril, rifampicin, sirolimus, thioridazine, trichostatin A, trimethadione, valproic acid, wortmannin

Group: Non-geroprotectors (Ye et al. 2014)
Compounds: 3-hydroxy-DL-kynurenine, acetohexamide, altretamine, arcaine, biotin, bumetanide, cantharidin, carbamazepine, cefaclor, corticosterone, cortisone, dantrolene, diphenhydramine, disopyramide, domperidone, droperidol, estrone, felodipine, fludrocortisone, fluspirilene, flutamide, fulvestrant, fusaric acid, gabapentin, ganciclovir, gossypol, haloperidol, ketoconazole, ketoprofen, lansoprazole, leflunomide, lumicolchicine, methazolamide, n-acetyl-l-aspartic acid, neostigmine bromide, nifedipine, niflumic acid, nilutamide, nimodipine, ofloxacin, ouabain, pancuronium bromide, picrotoxinin, pinacidil, pirenperone, pivmecillinam, podophyllotoxin, primidone, progesterone, proglumide, propantheline bromide, propofol, ribavirin, riluzole, rolipram, sr-95531, sulfabenzamide, sulfacetamide, sulfaphenazole, sulindac, tolbutamide, triamterene, trimethoprim, vigabatrin

the temperature was maintained at 24–25 °C and the relative humidity at 50–55%. The air inside the insectariums was continuously pumped through an air preparation chamber with filtering and sterilizing elements. The carbohydrate food for bumblebees was 62% inverted sugar syrup. Sugar syrup was poured into feeders whose wicks were in contact with the perforated bottom of the cages. The studied substances were added to the sugar syrup. Dead bumblebees were counted every three days.

8.2.3 Statistics and Reproducibility

Differences between survival curves were analyzed by the log-rank test (Harrington and Fleming 1982). The 50th (median lifespan) and 90th percentiles of lifespan were estimated. Fisher's exact test (Fisher 1922) was applied to test the differences

in median lifespan and in the 90th percentile of lifespan, following the recommendations of Wang et al. (2004). Holm's method (Holm 1979) was used in all multiple comparisons. Prediction accuracy of the ML models was estimated using 10-fold cross-validation across 100 random partitions of the data into 10 folds. The Wilcoxon signed-rank test with Holm's correction was used to estimate the significance of differences.
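Two non-parametric ingredients used here, Fisher's exact test on lifespan percentiles and Holm's step-down correction, can be sketched in a few lines. The contingency table below (counting animals that outlive the control median) is one common reading of the Wang et al. (2004) procedure, not a quote of it, and the lifespan numbers are invented:

```python
import numpy as np
from scipy.stats import fisher_exact

def holm_correction(pvals):
    """Holm's step-down adjustment (Holm 1979): multiply the k-th smallest
    p-value by (m - k + 1) and enforce monotonicity, capping at 1."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

def median_lifespan_pvalue(control, treated):
    """Fisher's exact test on the share of animals outliving the control median."""
    med = np.median(control)
    table = [[int(np.sum(control > med)), int(np.sum(control <= med))],
             [int(np.sum(treated > med)), int(np.sum(treated <= med))]]
    return fisher_exact(table)[1]

control = np.array([34, 37, 37, 40, 42, 43, 46, 46])   # toy lifespans in days
treated = np.array([40, 43, 46, 46, 49, 49, 52, 55])
raw = [median_lifespan_pvalue(control, treated), 0.04, 0.01]  # plus two toy p-values
adjusted = holm_correction(raw)
```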

8.2.4 Model

Our goal was to build a binary classifier that predicts the probability that a compound is a geroprotector given expression data. Ensemble methods (e.g., random forest) show quite good results in classifying very high-dimensional expression data (Díaz-Uriarte and Alvarez de Andrés 2006). The random generalized linear model (RGLM), introduced by Song et al. (2013), can be even superior. RGLM is an ensemble predictor based on bootstrap aggregation (bagging) of generalized linear models whose features (covariates) are selected using forward regression according to the AIC criterion. RGLM combines bagging with a random subspace method, similarly to a Random Forest model, but outperforms it in expression data classification. RGLM also outperforms the tree predictor (also known as recursive partitioning, Rpart), linear discriminant analysis (LDA), diagonal linear discriminant analysis (DLDA), k-nearest neighbors (KNN), the support vector machine (SVM) and the shrunken centroid (SC) classifier. However, there are two concerns about RGLM:

1. RGLM uses forward selection to build a GLM for each bag. Forward selection is not the best alternative, since it tends to overfit and is biased (Song et al. 2013).
2. It uses an equal voting scheme, but since many random subspaces contain only irrelevant noisy features, this might not be optimal.

We developed a modified version of RGLM (MRGLM) with two major improvements:

1. L1-regularization instead of forward selection
2. Weighted voting, with a weight for each classifier.

8.2.4.1 Lasso Regularization Instead of Forward Selection

One of the reasons why RGLM uses the forward feature selection procedure is interpretability: the number of times a feature is selected in the forward GLM across bags (timesSelectedByForwardRegression) can serve as a feature importance measure similar to that of the Random Forest model. But this interpretability comes at an unacceptably high cost: forward variable selection (like other stepwise variable selection methods) often greatly overfits the data, which results in unstable and inaccurate predictors (Derksen and Keselman 1992). The overfitting problem is especially important in the

case of high-dimensional data: since the number of features is large, the chance that the forward procedure selects a wrong feature increases dramatically. Overfitting occurs when a model describes random error or noise instead of the underlying relationship; it generally occurs when a model is excessively complex, for example when it has too many parameters relative to the number of observations. A common way to reduce overfitting is regularization. Regularization refers to the process of introducing additional information, usually in the form of a penalty for complexity, such as restrictions on smoothness or bounds on the vector space norm. Regularization is related to the parsimony principle (Ockham's razor): models should be as simple as possible, but not simpler (Chen and Haykin 2002). One of the most common regularization methods is L1-regularization (lasso). The lasso is a regression method, introduced by Robert Tibshirani in 1996 (Tibshirani 1996), that performs both variable selection and regularization. Variable selection is performed by shrinking some coefficients exactly to zero. In the case of ordinary least squares, the objective of the lasso is to solve

$$\min_{\beta_0,\,\beta}\;\frac{1}{N}\sum_{i=1}^{N}\left(y_i-\beta_0-x_i^{T}\beta\right)^{2}\quad\text{subject to}\quad\sum_{j=1}^{p}\left|\beta_j\right|\le t$$

Penalized regression methods are widely used in high-dimensional settings, often with highly sparse underlying models. For example, in genetic association studies, very few genetic markers are expected to be associated with the phenotype of interest. In such cases, sparse regression techniques such as the lasso can identify a small subset of relevant predictors and provide good predictive accuracy. In common scenarios, the lasso performs better than modern regularization methods such as SCAD and the adaptive lasso because it does not depend on complex pre-selection procedures (Benner et al. 2010). It was shown that when the number of samples is relatively small (< 150), the lasso has greater AUC than variable selection methods such as stepwise selection, stability selection, Bolasso and bootstrap ranking (Guo et al. 2015). The elastic net, lasso, adaptive lasso and adaptive elastic net all had similar accuracies but outperformed ridge regression and ridge regression BLUP in a genomic selection task (Ogutu et al. 2012). Also, the least angle regression (LARS) algorithm with lasso regularization outperformed Best Linear Unbiased Prediction (BLUP) and a Bayesian method (BayesA) in a genomic selection task (Usai et al. 2009). In addition to countering overfitting, lasso regression is faster than the stepwise forward selection procedure.
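The variable-selection behavior is easy to see on synthetic data. In this toy run (our own construction, using scikit-learn's Lasso), only a handful of the 200 coefficients remain non-zero:

```python
import numpy as np
from sklearn.linear_model import Lasso

# 100 samples, 200 features, only the first 3 truly informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))
beta = np.zeros(200)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.5, size=100)

# The L1 penalty shrinks most coefficients exactly to zero,
# so fitting doubles as variable selection.
model = Lasso(alpha=0.2).fit(X, y)
selected = np.flatnonzero(model.coef_)
```

The three informative features survive while the vast majority of the noise features are dropped, which is exactly the property exploited inside each MRGLM bag.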

8.2.4.2 Weighted Voting

As mentioned above, the equal voting scheme may not be the best choice for the random subspace method on high-dimensional data, because most subspaces

may contain many noisy features, and the individual classifiers developed from these subspaces may not be very informative. Intuitively, models with better classification accuracy should be assigned more weight, and different approaches have been used to achieve this; for example, weights can be established using various heuristics (Ahn et al. 2007; Li and Zhao 2009). We propose the following weighting scheme:

$$w_i=\frac{1}{C}\,\log\frac{1-e_i}{e_i},$$

where C is a normalization constant (all weights should sum to 1) and e_i is the out-of-bag (OOB) estimate of the error of the i-th classifier. The motivation for this weighting scheme is that both bagging and the random subspace method decrease the dependence between base learners; assuming independence of the base learners, this weighting scheme is proven to be optimal (Li and Zhao 2009).
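A minimal sketch of MRGLM as described above, bagging plus random feature subspaces, an L1-regularized logistic GLM per bag, and OOB-error-based voting weights, might look like the following. Hyperparameter names and defaults are our own assumptions, not the settings used in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class MRGLM:
    """Bagged L1-regularized logistic GLMs on random feature subspaces,
    combined with weights w_i proportional to log((1 - e_i) / e_i)."""

    def __init__(self, n_bags=50, subspace_size=50, C=1.0, seed=0):
        self.n_bags, self.subspace_size, self.C = n_bags, subspace_size, C
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n, p = X.shape
        members = []
        for _ in range(self.n_bags):
            rows = self.rng.integers(0, n, n)                  # bootstrap sample
            oob = np.setdiff1d(np.arange(n), rows)             # out-of-bag rows
            cols = self.rng.choice(p, min(self.subspace_size, p), replace=False)
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=self.C)
            clf.fit(X[rows][:, cols], y[rows])
            err = np.mean(clf.predict(X[oob][:, cols]) != y[oob]) if len(oob) else 0.5
            err = np.clip(err, 1e-3, 0.5)      # keep log((1-e)/e) finite and >= 0
            members.append((clf, cols, np.log((1 - err) / err)))
        total = sum(w for *_, w in members) or 1.0
        self.members_ = [(c, f, w / total) for c, f, w in members]  # weights sum to 1
        return self

    def predict_proba(self, X):
        return sum(w * clf.predict_proba(X[:, cols])[:, 1]
                   for clf, cols, w in self.members_)

    def predict(self, X):
        return (self.predict_proba(X) >= 0.5).astype(int)

# Quick smoke test on synthetic high-dimensional data.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 100))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = MRGLM(n_bags=30, seed=2).fit(X, y)
train_acc = float(np.mean(model.predict(X) == y))
```

Clipping the OOB error at 0.5 sets the weight of a worse-than-chance member to zero, so the weighted vote is a convex combination of member probabilities.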

8.3 Results

8.3.1 Model Validation on Synthetic Data

Prior to applying this model to the real CMAP data, we tested the model on public datasets of high-dimensional expression data (Table 8.2) and compared it with some of the best ML models for tabular data: RGLM, Random Forest and gradient boosting (Xgboost). In all cases MRGLM showed equal or superior performance (Table 8.3).

Table 8.2 Public expression datasets for model testing

Dataset                  Number of samples   Number of features   Binary outcome
Singh et al. (2002)      102                 12,600               Prostate tumor versus non-tumor
Chowdary et al. (2006)   104                 22,283               Breast cancer versus prostate cancer
Khan et al. (2001)       63                  2,308                Most prevalent class versus others
Chin et al. (2006)       118                 22,215               Breast cancer versus non-cancer
Golub et al. (1999)      72                  7,129                Most prevalent class versus others

Table 8.3 Comparison of the model with other models

Model     Singh    Chowdary   Khan     Chin     Golub
MRGLM     0.979    1.0        1.0      0.901    1.0
RGLM      0.969*   0.99*      1.0      0.888*   1.0
RF        0.923*   0.92*      0.863*   0.846*   0.887*
Xgboost   0.952*   0.99*      0.899    0.915*   0.91*

* Differences are significant (p < 0.05)

8.3.2 Lifespan Tests

To predict compounds with anti-aging activity, we fitted our MRGLM model using the normalized training data from CMAP (see the Methods section) and then used the trained model to generate scores for the remaining compounds in the CMAP database. We selected 8 previously untested compounds with the highest scores and conducted a lifespan test using Bombus terrestris as a model organism with different concentrations of the selected compounds. The results are summarized in Table 8.4. Four compounds were found to significantly extend the lifespan of the bumblebees: Clidinium bromide (+13.5% at 10 μM), (±)-Mevalonolactone (+13.5%, +18.9% and +13.5% at 1, 10 and 100 μM, respectively) and 8-Azaguanine (+18.9% at 100 μM), while Fluphenazine dihydrochloride significantly increased the median lifespan (+13.5% and +24.3% at 1 and 10 μM, respectively) as well as the 90th percentile (+10.9% at 1 and 10 μM). After applying correction for multiple comparisons, only the results for Fluphenazine dihydrochloride, (±)-Mevalonolactone and 8-Azaguanine remained statistically significant.
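The relative differences quoted here follow directly from the medians and 90th percentiles reported in Table 8.4 (control median 37 days, control 90th percentile 46 days). A two-line check:

```python
control_median, control_p90 = 37, 46          # control values from Table 8.4

dM = lambda m: round(100 * (m - control_median) / control_median, 1)
d90 = lambda q: round(100 * (q - control_p90) / control_p90, 1)

# Reproduce the percentages quoted in the text.
assert dM(42) == 13.5 and dM(44) == 18.9 and dM(46) == 24.3
assert d90(51) == 10.9 and d90(49) == 6.5
```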

8.4 Discussion

In this proof-of-principle study, we show that, by using expression data together with machine learning, it is possible in principle to predict compounds able to modulate aging, at least in invertebrates. To the best of our knowledge, we are the first to propose a modification of the RGLM model that is significantly better than the original model and two other classical ML models on several gold-standard datasets. The proposed model is faster to train and more accurate than the original RGLM model and could be used in many other tasks that involve high-dimensional data. With the help of the proposed model, we identified three compounds that significantly extend the lifespan of the bumblebee Bombus terrestris: Fluphenazine dihydrochloride, (±)-Mevalonolactone and 8-Azaguanine. Interestingly, Fluphenazine dihydrochloride significantly extended not only the median lifespan but also the 90th percentile.

Table 8.4 M = median lifespan, 90% = 90th percentile, dM = relative difference between the median lifespans of the treatment and control groups, d90 = relative difference between the 90th percentiles, n = sample size

Group                            Concentration in sugar syrup (μM)   M       dM (%)   90%    d90 (%)   n
Control                          –                                   37      –        46     –         49
Chlorpromazine hydrochloride     1                                   42      13.5     47     2.2       49
                                 10                                  39.5    6.8      46     0         50
                                 100                                 42      13.5     48     4.3       50
Clidinium bromide                1                                   42      13.5     46     0         50
                                 10                                  42*     13.5     48     4.3       49
                                 100                                 42      13.5     47     2.2       50
Fluphenazine dihydrochloride     1                                   42*     13.5     51*    10.9      50
                                 10                                  46***   24.3     51**   10.9      50
                                 100                                 42      13.5     46     0         50
(±)-Mevalonolactone              1                                   42*     13.5     47     2.2       50
                                 10                                  44**    18.9     48     4.3       50
                                 100                                 42*     13.5     48     4.3       50
Prochlorperazine dimaleate       1                                   42      13.5     47     2.2       50
                                 10                                  42      13.5     48     4.3       50
                                 100                                 37      0        46     0         50
Trifluoperazine dihydrochloride  1                                   42      13.5     46     0         50
                                 10                                  34*     −8.1     46     0         50
                                 100                                 37      0        46     0         50
8-Azaguanine                     1                                   37      0        46     0         50
                                 10                                  42      13.5     46     0         50
                                 100                                 44**    18.9     49     6.5       50
N6,2′-O-Dibutyryladenosine       10                                  42      13.5     46     0         50
                                 100                                 42      13.5     47     2.2       50

* p < 0.05, ** p < 0.01, *** p < 0.001

Fluphenazine dihydrochloride is an antipsychotic drug. Its pharmacological effect is believed to result from blocking dopamine receptors on dopamine-containing neurons. Fluphenazine dihydrochloride is also an inhibitor of tyrosylprotein sulfotransferases, transmembrane proteins that catalyze the transfer of a sulfuryl group from the donor 3′-phosphoadenosine-5′-phosphosulfate (PAPS) to the side chains of select tyrosine residues of proteins transiting the secretory pathway (Zhou et al. 2017). (±)-Mevalonolactone is the lactone form of mevalonic acid, a precursor in an important metabolic pathway (the mevalonate pathway) that plays a central role in cell biochemistry. This pathway converts mevalonate into a number of biologically important molecules

such as cholesterol, dolichol, heme A and ubiquinone. In addition, mevalonic acid is a precursor in the biosynthesis of the terpene family (Goldstein and Brown 1990; Buhaescu and Izzedine 2007). 8-Azaguanine is a purine antagonist in various biological systems; it was the first purine analog shown to inhibit experimental tumors in mice. Purine analogues are highly relevant in a biological context and are used as drugs in the therapy of a number of diseases. In summary, we have proposed a supervised classification algorithm for predicting lifespan-extending drugs on a large scale, narrowing down the set of candidate drugs that need to be verified by wet-lab experiments.

Compliance with Ethical Standards

Conflict of Interest The authors declare that they have no conflict of interest.

References Admasu TD, Chaithanya Batchu K, Barardo D et al (2018) Drug synergy slows aging and improves healthspan through IGF and SREBP lipid signaling. Dev Cell 47:67-79.e5. https://doi.org/10. 1016/J.DEVCEL.2018.09.001/ATTACHMENT/7432AAB3-9BFA-4202-98F7-ACA31C1D8 778/MMC7.XLSX Ahn H, Moon H, Fazzari MJ et al (2007) Classification by ensembles from random partitions of high-dimensional data. Comput Stat Data Anal 51:6166–6179. https://doi.org/10.1016/J.CSDA. 2006.12.043 Barardo DG, Newby D, Thornton D, et al (2017) Machine learning for predicting lifespan-extending chemical compounds. Aging (Albany NY) 9:1721. https://doi.org/10.18632/AGING.101264 Benner A, Zucknick M, Hielscher T et al (2010) High-dimensional Cox models: the choice of penalty as part of the model building process. Biom J 52:50–69. https://doi.org/10.1002/BIMJ. 200900064 Buhaescu I, Izzedine H (2007) Mevalonate pathway: a review of clinical and therapeutical implications. Clin Biochem 40:575–584. https://doi.org/10.1016/J.CLINBIOCHEM.2007.03.016 Chen Z, Haykin S (2002) On different facets of regularization theory. Neural Comput 14:2791–2846. https://doi.org/10.1162/089976602760805296 Chin K, DeVries S, Fridlyand J et al (2006) Genomic and transcriptional aberrations linked to breast cancer pathophysiologies. Cancer Cell 10:529–541. https://doi.org/10.1016/J.CCR.2006.10.009 Chowdary D, Lathrop J, Skelton J et al (2006) Prognostic gene expression signatures can be measured in tissues collected in RNAlater preservative. J Mol Diagn 8:31. https://doi.org/10. 2353/JMOLDX.2006.050056 de Magalhães JP, Curado J, Church GM (2009) Meta-analysis of age-related gene expression profiles identifies common signatures of aging. Bioinformatics 25:875. https://doi.org/10.1093/BIOINF ORMATICS/BTP073 Derksen S, Keselman HJ (1992) Backward, forward and stepwise automated subset selection algorithms: Frequency of obtaining authentic and noise variables. Br J Math Stat Psychol 45:265–282. 
https://doi.org/10.1111/J.2044-8317.1992.TB00992.X Díaz-Uriarte R, Alvarez de Andrés S (2006) Gene selection and classification of microarray data using random forest. BMC Bioinformatics 7:1–13. https://doi.org/10.1186/1471-2105-7-3/FIG URES/1

Dönerta¸s HM, Fuentealba M, Partridge L, Thornton JM (2019) Identifying potential ageingmodulating drugs in silico. Trends Endocrinol Metab 30:118. https://doi.org/10.1016/J.TEM. 2018.11.005 Fisher RA (1922) On the interpretation of χ 2 from contingency tables, and the calculation of P. J R Stat Soc 85:87. https://doi.org/10.2307/2340521 Goldstein JL, Brown MS (1990) Regulation of the mevalonate pathway. Nature 343:425–430. https:/ /doi.org/10.1038/343425A0 Golub TR, Slonim DK, Tamayo P et al (1999) Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286:527–531. https://doi.org/10. 1126/SCIENCE.286.5439.531 Guo P, Zeng F, Hu X et al (2015) Improved variable selection algorithm using a LASSO-type penalty, with an application to assessing hepatitis b infection relevant factors in community residents. PLoS ONE 10:134151. https://doi.org/10.1371/JOURNAL.PONE.0134151 Harrington DP, Fleming TR (1982) A class of rank test procedures for censored survival data. Biometrika 69:553–566. https://doi.org/10.1093/BIOMET/69.3.553 Holm S (1979) A simple sequentially rejective multiple test procedure. Scand J Stat 6:65–70 Irizarry RA, Hobbs B, Collin F et al (2003) Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics 4:249–264. https://doi.org/10.1093/ BIOSTATISTICS/4.2.249 Khan J, Wei JS, Ringnér M et al (2001) (2001) Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat Med 76(7):673–679. https:// doi.org/10.1038/89044 Lamb J, Crawford ED, Peck D et al (2006) The connectivity map: using gene-expression signatures to connect small molecules, genes, and disease. Science 313:1929–1935. https://doi.org/10. 1126/SCIENCE.1132939 Li X, Zhao H (2009) Weighted random subspace method for high dimensional data classification. Stat Interface 2:153. 
https://doi.org/10.4310/SII.2009.V2.N2.A5 Liu H, Guo M, Xue T et al (2016) Screening lifespan-extending drugs in Caenorhabditis elegans via label propagation on drug-protein networks. BMC Syst Biol 10:509–519. https://doi.org/10. 1186/S12918-016-0362-4/FIGURES/8 Moskalev A, Chernyagina E, Tsvetkov V et al (2016) Developing criteria for evaluation of geroprotectors as a key stage toward translation to the clinic. Aging Cell 15:407–415. https://doi. org/10.1111/ACEL.12463 Moskalev A, Chernyagina E, de Magalhães JP et al (2015) Geroprotectors.org: a new, structured and curated database of current therapeutic interventions in aging and age-related disease. Aging (Albany NY)7:616. https://doi.org/10.18632/AGING.100799 Ogutu JO, Schulz-Streeck T, Piepho HP (2012) Genomic selection using regularized linear regression models: ridge regression, lasso, elastic net and their extensions. BMC Proc 6 Suppl 2. https:/ /doi.org/10.1186/1753-6561-6-S2-S10 Scott AJ, Ellison M, Sinclair DA (2021) The economic value of targeting aging. Nat Aging 17(1):616–623. https://doi.org/10.1038/s43587-021-00080-0 Singh D, Febbo PG, Ross K et al (2002) Gene expression correlates of clinical prostate cancer behavior. Cancer Cell 1:203–209. https://doi.org/10.1016/S1535-6108(02)00030-2 Song L, Langfelder P, Horvath S (2013) Random generalized linear model: a highly accurate and interpretable ensemble predictor. BMC Bioinformatics 14:5. https://doi.org/10.1186/14712105-14-5 Spindler SR, Mote PL (2007) Screening candidate longevity therapeutics using gene-expression arrays. Gerontology 53:306–321. https://doi.org/10.1159/000103924 Tibshirani R (1996) regression shrinkage and selection via the lasso. J R Stat Soc Ser B 58:267–288. https://doi.org/10.1111/J.2517-6161.1996.TB02080.X Usai MG, Goddard ME, Hayes BJ (2009) LASSO with cross-validation for genomic selection. Genet Res (camb) 91:427–436. https://doi.org/10.1017/S0016672309990334

Wang C, Li Q, Redden DT et al (2004) Statistical methods for testing effects on “maximum lifespan.” Mech Ageing Dev 125:629–632. https://doi.org/10.1016/J.MAD.2004.07.003 Woodard SH, Lozier JD, Goulson D et al (2015) Molecular tools and bumble bees: revealing hidden details of ecology and evolution in a model system. Mol Ecol 24:2916–2936. https://doi.org/10. 1111/MEC.13198 Ye X, Linton JM, Schork NJ et al (2014) A pharmacological network for lifespan extension in Caenorhabditis elegans. Aging Cell 13:206–215. https://doi.org/10.1111/ACEL.12163 Zhou W, Wang Y, Xie J, Geraghty RJ (2017) A fluorescence-based high-throughput assay to identify inhibitors of tyrosylprotein sulfotransferase activity. Biochem Biophys Res Commun 482:1207– 1212. https://doi.org/10.1016/J.BBRC.2016.12.013 Ziehm M, Kaur S, Ivanov DK et al (2017) Drug repurposing for aging research using model organisms. Aging Cell 16:1006–1015. https://doi.org/10.1111/ACEL.12626

Chapter 9

An Overview of Kernel Methods for Identifying Genetic Association with Health-Related Traits

Vicente Gallego

Abstract In recent years, technological developments have made the amount of available information grow very rapidly, which has led to a greater variety in the typology of data. This increase can be observed in our knowledge of human genetics, knowledge that is highly relevant to the improvement of human health and to disease prevention. An important goal for health professionals is therefore to study the effect of genetic variants on human health and to detect the genetic variants associated with health-related traits. The kernel methodology has proven to be a very useful tool for this task, offering a framework with very good properties for the analysis of genomic data. Among these properties, we highlight the ability to analyze non-linear relationships between a large number of genetic variants and phenotypes of interest in human health, such as diseases, symptoms, or treatments. In this overview, we highlight the key role of kernel methodology in genetic association studies of multiple markers with traits relevant to human health and quality-of-life improvement. In addition, we introduce the kernel methodology for genetic association studies with health phenotypes, as well as the main kernel-based models for identifying genetic variants associated with different types of health phenotypes, for both single and multiple health phenotypes.

Keywords Genetic association analysis · Kernel methods · Human health · Mixed model · Genomic data · Machine learning · Variance component test · Pleiotropy · Multivariate phenotypes · Kernel machine regression

9.1 Introduction

V. Gallego (B), DEZZAI, Madrid, Spain. e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_9

Due to the great technological advances of recent years, information on human genetics has increased enormously, leading to much better knowledge of it. This increased knowledge of human genetics has had a significant impact on medical research and


health improvement (Carvallo 2017). With the publication of the human genome sequence within the Human Genome Project, a valuable resource was generated for researchers to understand the fundamentals of genetics in the health context (Khoury et al. 2000). The cost of DNA sequencing technologies has also fallen considerably while their performance has increased. This is reflected in research breakthroughs in new treatments, better genome-based drugs, and disease prevention, as well as in human lifespan. Ultimately, advances in genomic technologies are having a profound impact on medical research and pharmacology.

The human lifespan is mainly determined by environmental factors, although human family studies indicate that approximately 20–30% of the variation in lifespan is accounted for by genetic factors (Iachine et al. 2006). Genetic studies and their relationship to human health are therefore an important research framework for health professionals, and in turn improve health and quality of life.

Within genetics, we highlight the discovery of genetic variants and their effect on human characteristics. Genetic variants contribute to the different expressions of phenotypes, or observable traits (National Academies of Sciences et al. 2017). The differences between individuals, in all disease processes, are consequences of the diversity in our DNA composition; a disorder such as a type of cancer or diabetes may therefore be related to these genetic variants. Consequently, one of the main objectives in genetics is the identification of sets of genetic variants (usually single nucleotide polymorphisms, SNPs) that are associated with a phenotype. An improved understanding of the relationships between biomarkers and human traits such as diseases can provide important information for health professionals for disease prevention and improvements in treatment.
The identification of genetic markers and/or genes in the human genome correlated with phenotypes, such as complex or common diseases, patients' possible responses to different treatments, or the characteristic features of a disease, is very important for health professionals. The detection of these genetic variants can open lines of research for improving quality of life in society through disease prevention and better treatments. Kernel methodology comprises a set of models that bring significant improvements to the identification of sets of genetic variants and/or genes associated with common or complex diseases or potential treatments. Specifically, the kernel methodology provides an advantageous framework for the challenges of large-scale genomic data analysis.

Note that, from an analytical point of view, the detection of these genetic relationships presents contextual challenges due to the characteristics of the genomic data. One of these characteristics is that genomic data are high-dimensional: the relationships between many genetic variants and human traits, such as disease status, are analyzed, and in genetic association studies the number of genetic variants is usually larger than the number of individuals in the sample. Another key characteristic of genetic association studies concerns the individual effect: the individual genetic effect of each variant is very small, so the analysis of the joint effect of a set of these genetic variants on the phenotypes is of real interest.


In addition, there is a nonrandom association between nearby genetic variants, called Linkage Disequilibrium (LD), such that within a population alleles are more frequently associated with the alleles of neighboring polymorphisms (Sheikh et al. 2017); genetic variants near one another are linked by LD, forming blocks of genetic variants (Reich et al. 2001). Finally, we note that the proportion of significant genetic variants is very low in relation to the total number of genetic variants in genetic association problems, which results in low power of statistical tests to identify the sets of genetic markers associated with a disease or a quantitative medical measure. To all these characteristics of genomic data must be added the non-linear joint effects of groups of genetic variants on the phenotype, so the kernel methodology offers a strong alternative for detecting genetic variants associated with health phenotypes and for genomic data analysis.

Owing to its framework based on the kernel function, the kernel methodology has been demonstrated to be a good tool for analysing non-linear relationships in data. This type of relationship is the most common in real life, and specifically in genetics. We also note that the kernel methodology permits working with all types of data, vectorial or non-vectorial (Shawe-Taylor et al. 2004). In addition, the kernel method has been demonstrated to be a good approach for large-scale data analysis, solving the problem of having more variables than individuals in the sample (Wang et al. 2015), a frequent situation in genomic data analysis, as mentioned above. Finally, kernel methods provide the opportunity to study the joint effect of a set of genetic variants on the phenotype of interest. This is important because the individual effect is very low and, above all, because of the correlation between genetic variants due to Linkage Disequilibrium (LD).
Moreover, the power of genetic association tests with disease or quantitative medical measures tends to remain robust even when the fraction of causal genetic variants is small, and kernel methods are very attractive for large-scale genetic studies because they are computationally flexible and feasible (Larson et al. 2019).

The chapter is organized as follows. In Sect. 9.2, the kernel methodology is introduced; we present the main functional concepts of kernel methods, as well as the different kernels used for genomic data analysis. Section 9.3 presents the kernel models in the regression framework for genetic association tests with a single phenotype of interest; these phenotypes can be a quantitative medical measure (quantitative variable) or disease status (binary variable). Section 9.4 presents a kernel-based measure of variable importance (KVI) to order SNPs, or sets of SNPs, according to their joint genetic effect on disease status. Section 9.5 presents recent advances in kernel models for genetic association with multiple phenotypes. Section 9.6 summarizes recent advances in genetic association testing for censored data within the kernel framework. The chapter concludes with a discussion of the kernel methodology for genomic data analysis and possible new research directions.


9.2 Introduction to Kernel Methods for Genomic Data Analysis

We consider kernel methods because of their favourable framework for analysing the relationship between health phenotypes and high-dimensional genomic data. Kernel methods are a set of algorithms used to provide the best possible situation for decision-making. The different types of kernel algorithms perform inference and prediction from observed data (Zoppis et al. 2019), in our case genomic data. Inference and prediction are performed through pattern analysis: finding and studying general types of relations within the genomic data. From this pattern analysis, the kernel algorithms build models to identify sets of genetic variants associated with health traits or phenotypes, such as a quantitative health measure or disease status, to predict future observations from a given genotype, or to rank genetic variants according to their importance.

Genomic data are normally formed by a set of p genetic variants, commonly SNPs (single nucleotide polymorphisms). An SNP is composed of two alleles, a (the minor allele) and A (the major allele), each with its allele frequency in the population. The three possible genotypes of an SNP are AA, the major homozygote; aA, the heterozygote; and aa, the minor homozygote, coded as 0, 1 and 2, respectively, according to the number of minor alleles present. Given a set of p SNPs, let $G_i = (G_i^1, G_i^2, \ldots, G_i^p)$ be the genotypes for subject i $(i = 1, \ldots, n)$, with $G_i^j$ taking values in $\{0, 1, 2\}$ $(j = 1, \ldots, p)$.
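As a small illustration of this 0/1/2 coding, the following Python sketch (our own toy example, not from any published package) counts the copies of the minor allele in each genotype:

```python
import numpy as np

def encode_genotypes(allele_pairs, minor_allele):
    """Code each genotype by the number of copies of the minor allele:
    major homozygote AA -> 0, heterozygote aA -> 1, minor homozygote aa -> 2."""
    return np.array([sum(1 for allele in pair if allele == minor_allele)
                     for pair in allele_pairs])

# Hypothetical SNP with minor allele 'a': genotypes for four subjects
pairs = [("A", "A"), ("a", "A"), ("a", "a"), ("A", "a")]
print(encode_genotypes(pairs, "a"))  # -> [0 1 2 1]
```

Repeating this over p SNPs yields the n × p genotype matrix used throughout the chapter.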

9.2.1 The Kernel Functions

Kernel methods project the genotypes $G_i$ of each individual i from the input space $\mathcal{G}$ to the Hilbert space F generated by a given positive semi-definite kernel function $K(\cdot, \cdot)$. The kernel function is defined by the inner product in F between the projections of the genotypes of two individuals, $\langle\phi(G_i), \phi(G_j)\rangle_F = K(G_i, G_j)$, where the mapping function $\phi: \mathcal{G} \rightarrow F$ may be unknown. The kernel thus computes the inner product of pairs of projected genotypes within the Hilbert space induced by the kernel function. In this way, the kernel function converts the genomic information of a pair of subjects into a quantitative value representing the genomic similarity between the two subjects (Smola and Schölkopf 1998; Shawe-Taylor et al. 2004; Schölkopf and Smola 2002; Cristianini et al. 2002). Therefore, from the kernel function a positive semi-definite symmetric matrix is obtained, called the kernel matrix, containing the genomic similarity between subjects. Note that the kernel matrix can be used statistically as a covariance matrix. All this makes the kernel methodology a good tool for the challenges posed by large-scale genomic data analysis. By representing the kernel function as a measure of similarity, all genomic information of interest is contained in the kernel matrix.


We note that genomic data typically contain more genetic variants than the sample size; the kernel matrix addresses this problem because all genomic information of interest is contained within it. Furthermore, the representation of the kernel function as genomic similarity allows us to test the joint effect of a set of genetic variants in genetic association studies. As mentioned above, the individual effect of each genetic variant is minimal and, in addition, genetic variants are correlated due to Linkage Disequilibrium, so the joint genetic effect is an important feature of genetic association tests (Gallego et al. 2017) (Fig. 9.1).

Fig. 9.1 Summary of the stages involved in the application of Kernel Methods

For genetic association studies, we highlight the IBS (Identity by State) kernel and the polygenic kernel. The IBS kernel counts the number of alleles shared identical by state between two subjects:

$$K_{IBS}(G_i, G_j) = \frac{\sum_{k=1}^{p} IBS(G_i^k, G_j^k)}{2p}$$

where

$$IBS(G_i^k, G_j^k) = 2I(G_i^k = G_j^k) + I(|G_i^k - G_j^k| = 1)$$

is the number of alleles shared identical by state between subjects i and j at SNP k.

The polygenic kernel is a linear kernel applied to standardized genotype data:

$$K_P(G_i, G_j) = \frac{1}{p}\, \tilde{G}_i \tilde{G}_j^t$$

where $\tilde{G}_i^k = \frac{G_i^k - 2\pi_k}{\sqrt{2\pi_k(1-\pi_k)}}$ and $\pi_k$ is the minor allele frequency of SNP k (Larson and Schaid 2013).

Note that complex diseases can often be influenced by rare variants, which play an important role in genetic association. Rare genetic variants are defined by a minor allele frequency (MAF) lower than 0.05. For association tests involving rare genetic variants as possible causal variants, the weighted linear kernel is the suggested kernel. In the weighted linear kernel, the weights $w_k$ for each SNP $k = 1, \ldots, p$ are defined by $w_k = \sqrt{Beta(MAF_k; a_1, a_2)}$, with suggested parameters $a_1 = 1$ and $a_2 = 25$ for the beta density function (Wu et al. 2011). Among the weighted kernels, the weighted IBS kernel is also well known. The weighted linear kernel and the weighted IBS kernel are defined as:

$$K(G_i, G_j) = \sum_{k=1}^{p} w_k\, G_i^k G_j^k \quad \text{and} \quad K(G_i, G_j) = \sum_{k=1}^{p} w_k\, IBS(G_i^k, G_j^k)$$

where IBS is the genomic similarity between subjects i and j, defined by the number of shared alleles. In addition, we note that new kernels can be constructed from already defined kernel functions by applying several basic operations, so that kernels suitable for specific applications can be obtained. These new kernels must satisfy the property of being symmetric and positive semi-definite (Shawe-Taylor et al. 2004; Zoppis et al. 2019). We highlight the construction of new kernels through Hadamard (or element-wise) products of two kernel matrices. These kernel products have proven useful for detecting gene interaction effects. For example, the interaction kernel matrix $K_I = K_P \circ K_P$, where $K_P$ is the polygenic kernel and $\circ$ is the Hadamard or element-wise product, is used to detect genetic interactions between pairs of SNPs (Gallego et al. 2017).
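The kernels above are straightforward to compute from a genotype matrix. The following Python sketch (our own illustration, with MAFs estimated from the sample and toy genotypes) builds the IBS kernel, the polygenic kernel, and the Hadamard interaction kernel $K_I = K_P \circ K_P$:

```python
import numpy as np

def ibs_kernel(G):
    """IBS kernel: average number of alleles shared identical by state.
    G is an n x p matrix of 0/1/2 genotype codes."""
    n, p = G.shape
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            diff = np.abs(G[i] - G[j])
            ibs = 2.0 * (diff == 0) + 1.0 * (diff == 1)  # 2, 1 or 0 shared alleles
            K[i, j] = ibs.sum() / (2 * p)
    return K

def polygenic_kernel(G):
    """Linear kernel on genotypes standardized by minor allele frequency."""
    n, p = G.shape
    maf = G.mean(axis=0) / 2.0                       # estimated pi_k
    Gt = (G - 2 * maf) / np.sqrt(2 * maf * (1 - maf))
    return Gt @ Gt.T / p

# Toy genotype matrix: 3 subjects, 3 SNPs
G = np.array([[0, 1, 2], [1, 1, 0], [2, 0, 1]])
K_ibs = ibs_kernel(G)
K_p = polygenic_kernel(G)
K_int = K_p * K_p   # Hadamard product: interaction kernel K_I = K_P o K_P
```

Each resulting matrix is symmetric and positive semi-definite, as required of a kernel matrix.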

9.2.2 The Main Idea of the Kernel Methods

Another important challenge in genetic association studies is the identification of non-linear relationships. The kernel methodology analyses these non-linear relationships within genomic data, offering an alternative to explicit non-linear modelling. This alternative is given by applying very powerful linear algorithms in the higher-dimensional space into which the points of the input space are projected.

The main idea of the kernel methods consists of two steps. First, the original genomic data $G_1, G_2, \ldots, G_n$ are projected into a Hilbert space F through a non-linear function $\phi$, as mentioned above. In the Hilbert space F, the projected genomic data will be linearly separable with high probability, making it easier to discern their relationships. This Hilbert space is induced by the kernel function, defined by the inner product of the transformed genomic data, $\langle\phi(G_i), \phi(G_j)\rangle_F = K(G_i, G_j)$. The second step consists of applying linear algorithms to the projected genomic data $\phi(G_1), \phi(G_2), \ldots, \phi(G_n)$ in the Hilbert space F, leading to the analysis of non-linear relationships between the original genomic data $G_1, G_2, \ldots, G_n$. We note that both the Hilbert space F and the non-linear function $\phi$ can remain unknown, because all the information necessary to apply a kernel algorithm is in the kernel function, through the inner product. Therefore, the kernel methods can be seen as an elegant framework for studying non-linear relationships in genomic data by using linear algorithms in Hilbert space (Fig. 9.2).

Fig. 9.2 Kernel methods use nonlinear functions to project input points into a feature space where linear algorithms are applied

9.2.3 The Kernel Trick

Any linear algorithm that can be expressed in terms of inner products can be "kernelized", i.e., adapted to the kernel methodology, by replacing the inner product with a kernel function (generally referred to as the kernel trick) (Zoppis et al. 2019). In this way, versions of linear algorithms are built within the kernel framework for pattern analysis in the non-linear domain. Among the kernel algorithms, we highlight Support Vector Machines (SVM) (Vapnik 2013), Kernel Principal Component Analysis (KPCA) (Schölkopf et al. 1997), and both linear and logistic regression models within the kernel framework.
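To make the kernel trick concrete, the following toy Python sketch kernelizes ridge regression: the dual solution $\alpha = (K + \lambda I)^{-1} y$ involves the data only through inner products, so any kernel can be substituted. The Gaussian kernel and the toy data are our own illustration, unrelated to the genomic kernels above:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(K, y, lam=1.0):
    """Dual ridge regression: alpha = (K + lam I)^{-1} y.
    Only the kernel matrix is needed -- the kernel trick."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def kernel_ridge_predict(alpha, K_new):
    """Prediction f(x) = sum_j alpha_j K(x, x_j)."""
    return K_new @ alpha

# Tiny illustration: fit a nonlinear target with a linear (dual) algorithm
X = np.linspace(-1, 1, 20).reshape(-1, 1)
y = X.ravel() ** 2
alpha = kernel_ridge_fit(rbf_kernel(X, X, gamma=10.0), y, lam=1e-3)
pred = kernel_ridge_predict(alpha, rbf_kernel(X, X, gamma=10.0))
```

Swapping `rbf_kernel` for, say, the polygenic or IBS kernel changes the notion of similarity without touching the fitting algorithm.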


9.2.4 Distance Induced by the Kernel Function

Many of the most important statistical and machine learning procedures are based on measures of distance or dissimilarity. As the kernel function can be interpreted as a similarity measure between individuals, all distance-based methods can be applied within the kernel framework.

In the kernel methods, the genotypes $G_i$ of each individual i are projected from the input space $\mathcal{G}$ to the Hilbert space F generated by a given positive semi-definite kernel function $K(\cdot, \cdot)$. In the Hilbert space F induced by the kernel function, the kernel computes the inner product of pairs of genomic projections, $\langle\phi(G_i), \phi(G_j)\rangle_F = K(G_i, G_j)$. Therefore, the Hilbert space F is defined by the inner product, without it being necessary to know F explicitly.

The norm of $\phi(G)$ induced by the inner product is defined as $\|\phi(G)\| = \sqrt{\langle\phi(G), \phi(G)\rangle}$, and therefore the length of the line joining the points $\phi(G_i)$ and $\phi(G_j)$ of F can be obtained from the norm. Then, the genetic distance between two subjects i, j can be defined as:

$$d(G_i, G_j) = \|\phi(G_i) - \phi(G_j)\| = \sqrt{\langle\phi(G_i) - \phi(G_j),\, \phi(G_i) - \phi(G_j)\rangle}$$
$$= \sqrt{\langle\phi(G_i), \phi(G_i)\rangle + \langle\phi(G_j), \phi(G_j)\rangle - 2\langle\phi(G_i), \phi(G_j)\rangle}$$
$$= \sqrt{K(G_i, G_i) + K(G_j, G_j) - 2K(G_i, G_j)}$$

Therefore, the distance induced by the kernel function in the Hilbert space F is defined by $d_{ij} = \sqrt{K(G_i, G_i) + K(G_j, G_j) - 2K(G_i, G_j)}$, with $D = (d_{ij})$ $(i, j = 1, \ldots, n)$ the matrix of genetic distances between individuals.

In addition, we introduce the centre of mass $\phi_C = \frac{1}{n}\sum_{i=1}^{n} \phi(G_i)$ in the Hilbert space F, an important element because of its usefulness in many distance-based algorithms. The squared distance between an individual i, genotyped by $G_i$, and the centre of mass C is given by

$$d^2(\phi(G_i), C) = K(G_i, G_i) - \frac{2}{n}\sum_{j=1}^{n} K(G_i, G_j) + \frac{1}{n^2}\sum_{s,j=1}^{n} K(G_s, G_j)$$
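These identities translate directly into code. The following Python sketch (our own illustration) computes the kernel-induced distance matrix and the squared distances to the centre of mass from a kernel matrix alone; with a linear kernel, the induced distance reduces to the Euclidean distance, which gives a simple check:

```python
import numpy as np

def kernel_distance_matrix(K):
    """Distance induced by a kernel: d_ij = sqrt(K_ii + K_jj - 2 K_ij)."""
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2 * K
    return np.sqrt(np.maximum(d2, 0.0))  # clip tiny negatives from rounding

def squared_distance_to_centre(K):
    """Squared distance of each projected point to the centre of mass:
    d^2(phi(G_i), C) = K_ii - (2/n) sum_j K_ij + (1/n^2) sum_{s,j} K_sj."""
    return np.diag(K) - 2.0 * K.mean(axis=1) + K.mean()

# Check with a linear kernel on two 2-D points: induced distance is Euclidean
X = np.array([[0.0, 0.0], [3.0, 4.0]])
K = X @ X.T
D = kernel_distance_matrix(K)   # D[0, 1] == 5.0
```

Note that neither function ever needs the projections $\phi(G_i)$ themselves; the kernel matrix suffices.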

9.3 Kernel Machine Regression for Multi-marker Genetic Association Testing

In the area of human health genetics, a main goal is the identification of sets of genetic variants associated with phenotypes related to human health. For this purpose, multi-marker association tests are proposed in which the phenotype of interest can be either a disease status (binary variable) or a quantitative health measure (continuous variable).


As mentioned in Sect. 9.2, algorithms within the framework of regression models can be adapted to kernel methods. As an alternative to multiple linear or logistic regression for these genetic association studies, semi-parametric regression models are proposed. These models are composed of a parametric and a non-parametric part: the parametric part handles the environmental information to be related to the health phenotype, while the non-parametric part models the genetic data to determine the relationship between the genetic variants and the health phenotype of interest. In the non-parametric part, the kernel methodology is applied: the genetic multi-markers are modelled with kernels (Kwee et al. 2008; Wu et al. 2010). In this way, both the joint effect and the non-linear effects of genetic variants on the health phenotype are analysed.

The estimation of both the environmental and genetic effects on the phenotype of interest in this semi-parametric model can be solved using the Least-Squares Kernel Machine (LSKM) model for high-dimensional data. Note that the LSKM model can be represented as a specific form of a linear mixed model; this connection arises because the estimating equations of the regularized LSKM model are equivalent to those of the mixed-effects model fit. The connection between LSKM and linear mixed models is used for testing the multi-marker association with the health phenotype through a variance component test, with the genetic effect treated as a random effect depending on the kernel matrix (Liu et al. 2008).

9.3.1 Kernel Linear and Logistic Regression Models for Genetic Association Testing

Consider an association study involving n genotyped individuals, where $G_i$ is the genotype of subject i $(i = 1, \ldots, n)$ for a set of p SNPs, as in Sect. 9.2. Let $X_i$ be a vector of measured environmental covariates for subject i.

9.3.2 Kernel Linear Regression Models

Let $Y_i$ denote the quantitative trait value for subject i, for example a quantitative medical measure. A semiparametric regression model is proposed to relate $Y_i$ to the large-scale genomic data $G_i$, including $X_i$ as environmental covariates. This semiparametric model can be written as:

$$Y_i = X_i^T\beta + h(G_i) + e_i$$


where $\beta$ is a vector of regression coefficients describing the effects of the measured environmental covariates $X_i$, modelled parametrically, and $e_i$ follows a normal distribution with mean 0 and standard deviation $\sigma$, describing a random subject-specific environmental effect. Finally, $h(G_i)$ is an unknown non-parametric function corresponding to the effect of the SNP set, which can be expressed as a linear combination of the kernel function, $h(G_i) = \sum_{j=1}^{n} \alpha_j K(G_i, G_j)$ (dual representation). We note that through h, both the joint effect of the set of genetic variants on the quantitative medical measure and their non-linear relationships are analysed (Kwee et al. 2008).

The parameter $\beta$ and the genetic effect $h(G_i)$ of the semiparametric model $Y_i = X_i^T\beta + h(G_i) + e_i$ can be estimated through the regularized Least-Squares Kernel Machine (LSKM) model for high-dimensional data (Liu et al. 2007). As mentioned before, the LSKM model can be represented by a specific linear mixed model, because the estimating equations for the regularized LSKM model are equivalent to fitting the mixed-effects model (Liu et al. 2007). The linear mixed model can be written in matrix form as:

$$Y = X\beta + h + E$$

where Y denotes the quantitative medical measure, X is a matrix representing the environmental fixed effects with $\beta$ the vector of regression coefficients measuring their effect, and E follows a multivariate normal distribution with zero mean and variance $\sigma^2 I$, representing the subject-specific random effect. h is a random-effect vector representing the genetic effect; it follows a multivariate normal distribution with mean 0 and variance–covariance matrix $\frac{\sigma^2}{\lambda} K$, where K is the kernel matrix containing the genetic similarity between individuals and $\lambda$ denotes the smoothing parameter (Kwee et al. 2008). We note that several statistical packages, such as R or SAS, provide standard functions for fitting this type of linear mixed model.
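The following Python sketch illustrates the ridge-type form of the kernel machine estimate. It is a simplified two-step version (our own illustration), estimating $\beta$ first and then shrinking the residual through the kernel; the full LSKM of Liu et al. (2007) estimates $\beta$ and h jointly:

```python
import numpy as np

def lskm_sketch(y, X, K, lam):
    """Simplified least-squares kernel machine estimate (illustrative only):
    1) estimate beta from the covariate-only model (OLS),
    2) shrink the residual through the kernel: h_hat = K (K + lam I)^{-1} r.
    The full LSKM estimates beta and h jointly from the same kernel system."""
    n = len(y)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta_hat
    alpha = np.linalg.solve(K + lam * np.eye(n), r)  # dual coefficients
    return beta_hat, K @ alpha
```

As $\lambda \to 0$ the estimate $\hat{h}$ approaches the residual; larger $\lambda$ shrinks the genetic effect toward zero, mirroring the role of the smoothing parameter in the mixed-model representation.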

9.3.2.1 Association Genetic Test with Quantitative Trait

The non-parametric function $h(G_i) = \sum_{j=1}^{n} \alpha_j K(G_i, G_j)$ in the semiparametric model determines the relationship between the SNP set and the quantitative medical measure. Then, to determine whether a set of genetic variants is associated with a quantitative medical measure, the hypothesis test with null hypothesis $H_0: h(G_i) = 0$ is considered.

To test this hypothesis, the connection between LSKM and the linear mixed model is exploited. h is a random effect that follows a multivariate normal distribution with mean 0 and covariance matrix $\tau K$, with $\tau = \sigma^2/\lambda$. As K is a kernel matrix, i.e., a positive semi-definite matrix, $H_0: h(G_i) = 0$ is equivalent to $\tau = 0$. Therefore, to test the association of the SNP set with the quantitative medical measure, a variance component test is proposed with $H_0: \tau = 0$, $h \sim N(0, \tau K)$, and the score statistic Q given by (Kwee et al. 2008):

$$Q = \frac{1}{2\hat{\sigma}^2}\left(Y - X\hat{\beta}\right)^t K \left(Y - X\hat{\beta}\right)$$

where $\hat{\beta}$ and $\hat{\sigma}$ are the maximum likelihood estimates of $\beta$ and $\sigma$ in the regression model under the null hypothesis, i.e., without the genetic term, $Y_i = X_i^T\beta$. To approximate the distribution of Q, the Satterthwaite procedure is used, so that the distribution of Q is approximated by a scaled chi-square, $\kappa\chi^2_\nu$, with scale parameter $\kappa$ and degrees of freedom $\nu$ (Zhang and Lin 2003). The Satterthwaite procedure is based on the mean (first moment) and the variance (second moment) of the Q distribution under the null hypothesis, denoted by $\mu_Q$ and $\tilde{I}_{\tau\tau}$, respectively. So that:

$$\mu_Q = \frac{tr(P_0 K)}{2}$$

where $P_0 = D_0 - D_0 X (X^t D_0 X)^{-1} X^t D_0$ is the projection matrix under the null hypothesis, with $D_0$ the identity matrix, and

$$\tilde{I}_{\tau\tau} = I_{\tau\tau} - \frac{I_{\tau\sigma}^2}{I_{\sigma\sigma}} \quad \text{with} \quad I_{\tau\tau} = \frac{tr(K P_0 K P_0)}{2}, \quad I_{\tau\sigma} = \frac{tr(P_0 K P_0)}{2} \quad \text{and} \quad I_{\sigma\sigma} = \frac{tr(P_0 P_0^t)}{2}$$

Once the values of $\mu_Q$ and $\tilde{I}_{\tau\tau}$ are obtained, the significance test $H_0: \tau = 0$ is solved by comparing the value of the resulting scaled score statistic $Q/\kappa$ to a chi-square distribution with $\nu$ degrees of freedom, where

$$\kappa = \frac{\tilde{I}_{\tau\tau}}{2\mu_Q} \quad \text{and} \quad \nu = \frac{2\mu_Q^2}{\tilde{I}_{\tau\tau}}$$
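The score test and its Satterthwaite approximation can be transcribed almost line by line into numpy. The sketch below (our own illustration, not a substitute for dedicated software) returns Q and the approximate p-value for a quantitative trait:

```python
import numpy as np
from scipy.stats import chi2

def kernel_score_test(y, X, K):
    """Variance component score test of H0: tau = 0 with the Satterthwaite
    approximation, following the moment formulas above (D0 = identity)."""
    n = len(y)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / n              # ML estimate under H0
    Q = resid @ K @ resid / (2 * sigma2_hat)

    P0 = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)  # projection, D0 = I
    mu_Q = np.trace(P0 @ K) / 2
    I_tt = np.trace(K @ P0 @ K @ P0) / 2
    I_ts = np.trace(P0 @ K @ P0) / 2
    I_ss = np.trace(P0 @ P0.T) / 2
    I_tt_tilde = I_tt - I_ts ** 2 / I_ss

    kappa = I_tt_tilde / (2 * mu_Q)
    nu = 2 * mu_Q ** 2 / I_tt_tilde
    return Q, chi2.sf(Q / kappa, df=nu)         # p-value from scaled chi-square
```

Here `X` must include an intercept column, and `K` is any of the kernel matrices of Sect. 9.2.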

9.3.3 Kernel Logistic Regression Model

Let $Y_i$ denote the disease status for subject i, where $Y_i = 1$ denotes a case and $Y_i = 0$ denotes a control $(i = 1, \ldots, n)$. Consider the logistic regression model

$$logit(\mu_i) = \log\left(\frac{\mu_i}{1-\mu_i}\right) = X_i\beta + G_i\alpha$$

where $\mu_i = P(Y_i = 1)$.


The vector $\alpha = (\alpha_1, \ldots, \alpha_p)$ contains the genetic effects of each SNP in the set. This logistic regression model could be used to study genetic association with disease status. However, if this model were used, neither the non-linear relationships nor the joint effect would be considered, and we know that SNPs are correlated through Linkage Disequilibrium, forming blocks of SNPs, as discussed in Sect. 9.2. In addition, the recurring problem of having more SNPs than subjects in the sample would not be tackled. To address these issues, a semi-parametric logistic regression model is proposed, as in the previous case of testing genetic association with a quantitative trait related to disease. Therefore, a semi-parametric logistic regression model, $logit(\mu_i) = X_i^T\beta + h(G_i)$ with $\mu_i = P(Y_i = 1)$, is proposed to relate the disease status $Y_i$ to the large-scale genomic data $G_i$ and the environmental covariates $X_i$. As in the quantitative case, the genetic part of the model $h(G_i)$ can be expressed as a linear combination of the kernel function, $h(G_i) = \sum_{j=1}^{n} \alpha_j K(G_i, G_j)$, or in matrix form $h = K\alpha$, with K the $n \times n$ kernel matrix defined as the genetic similarity matrix between subjects. Thus, the joint effect of the set of genetic variants is analysed and the non-linear relationships with disease status are considered (Liu et al. 2008).

The connection between LSKM and the linear mixed model can be extended to a generalized linear mixed model. The connection between the semiparametric logistic regression model and a Generalized Linear Mixed Model (GLMM) arises from the equivalence between the kernel machine estimators and the estimators of the logistic mixed model via Penalized Quasi-Likelihood (PQL) (Liu et al. 2008).
The connection between the kernel framework and the GLMM is exploited, and the relationship between disease status and genomic data is studied through the Generalized Linear Mixed Model expressed as

$$logit(\mu) = \log\left(\frac{\mu}{1-\mu}\right) = X\beta + h$$

where h, containing all the genetic information, is treated as a random effect with a normal distribution with mean zero and covariance matrix $\tau K$. The estimation of the genetic effect $\hat{h}$ can be performed with the kernel machine estimator or with the estimators of the logistic mixed model via Penalized Quasi-Likelihood (PQL), using existing PQL-based mixed model software such as SAS GLIMMIX and R glmmPQL.

9.3.3.1 Association Genetic Test with Binary Trait

To test whether the SNP set is associated with disease status, a test with null hypothesis $H_0: h = 0$ is proposed. By the connection between the kernel framework and the GLMM, h is a random effect with $h \sim N(0, \tau K)$, so testing $H_0: h = 0$ is equivalent to testing $H_0: \tau = 0$. The score statistic Q for the variance component test is given by Zhang and Lin (2003):

$$Q = \frac{(Y - \hat{\mu})^t K (Y - \hat{\mu})}{2}$$

where $logit(\hat{\mu}) = X\hat{\alpha}$, with $\hat{\alpha}$ estimated under the null hypothesis, i.e., without a genetic effect. The p-value is calculated by comparing the resulting value of Q with a scaled chi-square distribution with scale parameter $\kappa$ and degrees of freedom $\nu$. Through the Satterthwaite procedure, the distribution of Q is approximated by a scaled chi-square, $\kappa\chi^2_\nu$. The values of $\kappa$ and $\nu$ are given by:

$$\kappa = \frac{\tilde{I}_{\tau\tau}}{2\mu_Q} \quad \text{and} \quad \nu = \frac{2\mu_Q^2}{\tilde{I}_{\tau\tau}}$$

where $\mu_Q = \frac{tr(P_0 K)}{2}$, with $D_0 = diag(\hat{\mu}_i(1 - \hat{\mu}_i))$, $i = 1, \ldots, n$, $P_0 = D_0 - D_0 X (X^t D_0 X)^{-1} X^t D_0$, and $\tilde{I}_{\tau\tau} = I_{\tau\tau} - \frac{I_{\tau\sigma}^2}{I_{\sigma\sigma}}$ with $I_{\tau\tau} = \frac{tr(K P_0 K P_0)}{2}$, $I_{\tau\sigma} = \frac{tr(P_0 K P_0)}{2}$ and $I_{\sigma\sigma} = \frac{tr(P_0 P_0^t)}{2}$.

Note that to calculate the p-value, for both the continuous and dichotomous cases, we have described the Satterthwaite method based on the first and second moments of the Q distribution under the null hypothesis. In this way, the distribution of the Q statistic under the null hypothesis is approximated by a scaled chi-square distribution. However, there are other procedures, such as the Davies method, which obtains the exact distribution of Q under the null hypothesis as a mixture of chi-square distributions with one degree of freedom (Wu et al. 2013).
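The binary case differs from the quantitative one only in the null fit (a logistic model) and in $D_0 = diag(\hat{\mu}_i(1-\hat{\mu}_i))$. The following Python sketch (our own illustration; the null fit is a standard Newton/IRLS iteration, not a specific published routine) mirrors the formulas above:

```python
import numpy as np
from scipy.stats import chi2

def logistic_null_fit(y, X, n_iter=25):
    """Fit logit(mu) = X alpha by Newton/IRLS (null model, no genetic effect)."""
    alpha = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1 / (1 + np.exp(-X @ alpha))
        W = mu * (1 - mu)
        alpha += np.linalg.solve(X.T * W @ X, X.T @ (y - mu))
    return 1 / (1 + np.exp(-X @ alpha))

def kernel_score_test_binary(y, X, K):
    """Score test of H0: tau = 0 for a binary trait with Satterthwaite p-value,
    using D0 = diag(mu_hat (1 - mu_hat)) as in the formulas above."""
    mu = logistic_null_fit(y, X)
    Q = (y - mu) @ K @ (y - mu) / 2

    D0 = np.diag(mu * (1 - mu))
    P0 = D0 - D0 @ X @ np.linalg.solve(X.T @ D0 @ X, X.T @ D0)
    mu_Q = np.trace(P0 @ K) / 2
    I_tt = np.trace(K @ P0 @ K @ P0) / 2
    I_ts = np.trace(P0 @ K @ P0) / 2
    I_ss = np.trace(P0 @ P0.T) / 2
    I_tt_tilde = I_tt - I_ts ** 2 / I_ss
    kappa = I_tt_tilde / (2 * mu_Q)
    nu = 2 * mu_Q ** 2 / I_tt_tilde
    return Q, chi2.sf(Q / kappa, df=nu)
```

For the exact null distribution, the Davies method mentioned above would replace the final scaled chi-square comparison.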

9.3.4 Rare-Variants Association Genetic Test

In recent years, interest in detecting genetic associations of rare variants with disease status, or with quantitative measures related to disease, has increased. Rare genetic variants, defined as alleles with a frequency of less than 0.05, can play key roles in influencing complex diseases and quantitative measures related to them. As mentioned in Sect. 9.2, the weighted linear kernel is given by

$$K(G_i, G_j) = \sum_{k=1}^{p} w_k\, G_i^k G_j^k$$

where the weights $w_k$ for each SNP $k = 1, \ldots, p$ are defined by $w_k = \sqrt{Beta(MAF_k; a_1, a_2)}$, with suggested parameters $a_1 = 1$ and $a_2 = 25$ for the beta density function (Wu et al. 2011). The weighted linear kernel is the best-known kernel used for the identification of rare variants associated with phenotypes.

SKAT (Sequence Kernel Association Test) is a regression procedure to identify rare variants associated with a disease status or a quantitative health-related measure within the framework of kernel methods. SKAT is based on the kernel linear and logistic regression models for genetic association testing described above, adapted for rare variants. Specifically, SKAT, for both the quantitative (kernel linear regression) and binary (kernel logistic regression) cases, is the same kernel model within the regression framework, with the kernel $K(G_i, G_j)$ taken to be the weighted linear kernel.

From the SKAT model, we show its relationship with the individual-variant test statistic. The score statistic Q for the variance component test in SKAT is given by

$$Q = \frac{(Y - \hat{\mu})^t K (Y - \hat{\mu})}{2}$$

where $K = GWG^t$, with $W = diag(w_1, \ldots, w_p)$, is the weighted linear kernel (i.e., the genetic effect), and $\hat{\mu}$ is the estimate of $\mu$ under the null hypothesis, i.e., with a null genetic effect. Then Q can also be written as $Q = \frac{1}{2}\sum_{j=1}^{p} w_j S_j^2$, with $S_j = G_j^t(y - \hat{\mu})$ the score statistic for testing the effect of the individual variant j; that is, Q is the weighted sum of the individual score statistics.
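The beta-density weights and the decomposition of Q into individual-variant scores can be sketched as follows (our own illustration, consistent with the identity $\frac{1}{2}(y-\hat{\mu})^t GWG^t (y-\hat{\mu}) = \frac{1}{2}\sum_j w_j S_j^2$ above):

```python
import numpy as np
from scipy.stats import beta

def skat_weights(maf, a1=1.0, a2=25.0):
    """Beta-density weights up-weighting rarer variants (Wu et al. 2011)."""
    return np.sqrt(beta.pdf(maf, a1, a2))

def skat_Q(y, mu_hat, G, w):
    """Q as the weighted sum of the individual-variant score statistics:
    Q = (1/2) sum_j w_j S_j^2 with S_j = G_j^t (y - mu_hat)."""
    S = G.T @ (y - mu_hat)     # one score per variant
    return 0.5 * np.sum(w * S ** 2)
```

Because Beta(1, 25) is decreasing on (0, 1), the rarest variants in the set receive the largest weights.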

9.3.5 Connection with Other Multi-marker Association Tests

For case–control studies, two powerful genetic association tests have been proposed: Kernel Logistic Regression (KLR) and Genomic Distance-Based Regression (GDBR). Note that GDBR is an NPMANOVA (Non-Parametric Multivariate Analysis of Variance) procedure, also known as PERMANOVA, applied to genetic association studies (Wessel and Schork 2006; Anderson 2001). The two tests, KLR and GDBR, are equivalent when there are no environmental covariates in KLR (Pan 2011).

In the kernel logistic regression model, as shown above, the significance test is resolved with a p-value obtained by comparing the resulting value of the score statistic Q to a scaled chi-square distribution or to a mixture of one-degree-of-freedom chi-squares, the approximate or exact distribution of Q under the null hypothesis. In the case of GDBR, the significance test is solved by permutation, since the distribution of the pseudo F-statistic is unknown.

The pseudo F-statistic in GDBR is expressed from the genetic distance matrix $D = (d_{ij})$, where $d_{ij}$ measures how far apart the SNP sets $G_i$ and $G_j$ are $(i, j = 1, \ldots, n)$. In Sect. 9.2 we showed that the kernel function can be interpreted as a

9 An Overview of Kernel Methods for Identifying Genetic Association …

179

measure of genomic similarity between individuals, ⟨φ(G_i), φ(G_j)⟩_F = K(G_i, G_j). Therefore, the distance induced by the kernel function in the Hilbert space F is defined by d_ij² = K(G_i, G_i) + K(G_j, G_j) − 2K(G_i, G_j), with D = (d_ij) the genomic distance matrix between individuals. Then, the pseudo-F statistic in GDBR is written as (Anderson 2001):

F = (SST − SSW) / SSW

where SST = (1/n) Σ_{i,j∈S} d_ij² is the total sum of squares and SSW = (1/n_0) Σ_{i,j∈S^0} d_ij² + (1/n_1) Σ_{i,j∈S^1} d_ij² is the within-group or residual sum of squares.

An alternative expression of the F-statistic can be given from the centered similarity matrix G, known as the Gower matrix. Such a Gower matrix G is obtained through G = (I − 11^t/n) A (I − 11^t/n), with A = (a_ij) = (−(1/2) d_ij²) = −(1/2) D², I an n × n identity matrix and 1 an n × 1 vector of ones. Then, the test statistic can also be written as

F = tr(H G H) / tr((I − H) G (I − H))

where tr(M) is the trace of a matrix M and H is the projection or hat matrix, i.e., H = Y (Y^t Y)^{−1} Y^t, with Y an n × 1 vector of the centered disease status values Y_i − Ȳ (McArdle and Anderson 2001). Note that if G is a positive semi-definite matrix, then G is equal to the centered kernel matrix K_C, with the genomic distance D the induced distance in the Hilbert space F (Chen and Li 2013; Zhao et al. 2015); K_C is known as the distance-based kernel (Larson et al. 2019). Therefore, the KMR and GDBR genetic association tests are equivalent under the condition of no covariates, because of the connection between the GDBR F-statistic and the KMR score test when the same positive semi-definite matrix is used as the kernel matrix K in KMR and as G in GDBR.
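To make the Gower-matrix form of the pseudo-F statistic concrete, the sketch below (our illustration with made-up data; the function name is ours) builds G from a distance matrix D and evaluates F = tr(HGH)/tr((I − H)G(I − H)):

```python
import numpy as np

def gdbr_pseudo_f(D, status):
    """GDBR pseudo-F from an (n, n) distance matrix D and 0/1 disease status."""
    n = len(status)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix I - 11^t/n
    Gw = J @ (-0.5 * D**2) @ J                   # Gower matrix G = J A J, A = -D^2/2
    Y = (status - status.mean()).reshape(-1, 1)  # centered disease status values
    H = Y @ np.linalg.inv(Y.T @ Y) @ Y.T         # hat matrix Y (Y^t Y)^{-1} Y^t
    I = np.eye(n)
    return float(np.trace(H @ Gw @ H) / np.trace((I - H) @ Gw @ (I - H)))
```

In GDBR the p-value is then obtained by permuting the status labels and recomputing F, since the null distribution of the statistic is unknown.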

9.4 Selection of Variables for Gene-Set Analysis Using Kernel Methods

The identification of genetic variants associated with diseases is a major challenge in genetic epidemiology. For most complex diseases, a large part of the genetic component is still unknown, the so-called missing heritability: many genetic variants have been detected and validated, but many others are still missing. Note that the set of possible genetic variants associated with common or complex diseases is very large. Furthermore, the proportion of causal genetic markers is very small relative to the total number of genetic markers.

180

V. Gallego

Therefore, a selection of a subset of promising genetic markers prior to the detection of associated genetic variants may be of interest. A measure of variable importance would be a suitable tool for the selection of promising genetic markers. Such a measure must account for the joint genetic effect, non-linear relationships, and epistasis, given genomic data characteristics such as the correlation between genetic markers due to linkage disequilibrium and very low individual genetic effects. Finally, multi-marker genetic tests such as kernel logistic regression or GDBR, shown in Sect. 9.3, are procedures that identify genetic variants as a block by testing the joint effect of a set of genetic variants. In interpreting the results of these tests, a question then arises as to which genetic variant may be driving the identified association. The kernel-based measure of variable importance (KVI) is proposed as a measure of importance for sets of genetic variants or sets of genes, providing the contribution of a SNP, or of groups of SNPs, to the joint effect of a set of genetic variants. Thus, KVI can be used to rank individual genetic markers, blocks of genetic markers formed by LD, or genes. KVI is based on the evaluation of the discrimination between cases (Y = 1) and controls (Y = 0) in the Hilbert space F induced by the kernel functions.

As in Sect. 9.2, given a set of p SNPs, let G_i = (G_i^1, G_i^2, …, G_i^p) be the genotypes for subject i (i = 1, …, n), with G_i^j taking values in {0, 1, 2} (j = 1, …, p). We denote by I^1 ⊂ {1, 2, …, n} the subset of indices corresponding to case individuals, and by I^0 the complement of I^1, which provides the indices for the control individuals. Also, we denote by C^1 = (1/n_1) Σ_{i∈I^1} φ(G_i) and by C^0 = (1/n_0) Σ_{i∈I^0} φ(G_i) the centres of mass of cases and controls, respectively. The following measure of discrimination, S, is defined by:

S = S(G^1, G^2, …, G^p) = d²(C^1, C^0) / [ Σ_{i∈I^1} d²(φ(G_i), C^1) + Σ_{i∈I^0} d²(φ(G_i), C^0) ]

where d²(C^1, C^0) is the squared distance between the centres of mass of cases and controls, defined as:

d²(C^1, C^0) = (1/n_1²) Σ_{i,j∈I^1} K(G_i, G_j) − (2/(n_1 n_0)) Σ_{i∈I^1} Σ_{j∈I^0} K(G_i, G_j) + (1/n_0²) Σ_{i,j∈I^0} K(G_i, G_j)

and d²(φ(G_i), C^l) is the squared distance between an individual i ∈ I^l and its centre of mass C^l (l = 0, 1); for its definition see Sect. 9.2. The numerator of S measures the separation of the two centres of mass, and the denominator of S measures the separation of cases and controls from their respective centres of mass (Gallego et al. 2017). Thus, large values of S indicate good discrimination between cases and controls. We propose the kernel-based measure of importance for the j-th SNP as

K V I_j = S / S_j^∗ , j = 1, 2, …, p

where S_j^∗ is the measure of discrimination between cases and controls, S, with the effect of SNP j removed from the analysis by randomly permuting its values. The permutation is repeated several times; that is, for each j = 1, …, p, S_j^∗ = (1/R) Σ_{r=1}^R S_j^{(r)}, where S_j^{(r)} is the measure of discrimination between cases and controls, S, with the values of the subjects for the original G^j randomly permuted and, therefore, the effect of SNP j removed. Large values of K V I_j indicate an important contribution of SNP j to the joint effect of the set of SNPs. If SNP j contributes to the joint association of the set of SNPs evaluated, the discrimination measure with the permuted SNP, S_j^∗, will be lower than the original discrimination measure S and, consequently, the measure K V I_j will be larger than 1. We note that the random permutation can also be performed on a group of SNPs instead of a single SNP, so the K V I_j measure may be used both for ranking SNPs separately and for ranking sets of SNPs that form LD blocks or genes.
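A minimal implementation of S and KVI_j might look as follows (our own sketch on simulated genotypes; the function names are ours, and the plain linear kernel is used only as an example):

```python
import numpy as np

def discrimination_s(K, y):
    """S = d^2(C1, C0) / (within-case + within-control squared distances),
    computed entirely from the kernel matrix K and labels y in {0, 1}."""
    i1, i0 = np.where(y == 1)[0], np.where(y == 0)[0]
    K11, K00, K10 = K[np.ix_(i1, i1)], K[np.ix_(i0, i0)], K[np.ix_(i1, i0)]
    d2_centres = K11.mean() + K00.mean() - 2.0 * K10.mean()
    # distances of each individual to its own group's centre of mass
    within1 = np.sum(np.diag(K11) - 2.0 * K11.mean(axis=1) + K11.mean())
    within0 = np.sum(np.diag(K00) - 2.0 * K00.mean(axis=1) + K00.mean())
    return d2_centres / (within1 + within0)

def kvi(G, y, j, R=20, seed=0):
    """KVI_j = S / S_j*, with S_j* averaged over R random permutations of SNP j."""
    rng = np.random.default_rng(seed)
    linear = lambda M: M @ M.T               # example kernel: linear
    s = discrimination_s(linear(G), y)
    s_perm = []
    for _ in range(R):
        Gp = G.copy()
        Gp[:, j] = rng.permutation(Gp[:, j])  # remove the effect of SNP j
        s_perm.append(discrimination_s(linear(Gp), y))
    return s / np.mean(s_perm)
```

A signal SNP yields KVI_j > 1, while a noise SNP stays near 1, matching the interpretation given above.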

9.5 Kernel Methods for Genetic Association Tests with Multiple Phenotypes

Genetic association studies show empirical evidence suggesting that a set of genetic variants may be associated with multiple phenotypes. One example is the phenomenon known as pleiotropy, which occurs when a single gene influences several unrelated phenotypes. Note that several phenotypes, such as high-density lipoprotein (HDL) cholesterol levels, low-density lipoprotein (LDL) cholesterol levels, total cholesterol (TC) and triglyceride (TG) concentrations, are known to be important risk factors for coronary artery disease (CAD) and are therapeutic targets for CAD drug development. Therefore, it is of interest to identify a genetic variant set that may be associated with this set of phenotypes. In the case of multiple phenotypes, we must emphasize the desirability of analyzing the phenotypes jointly, due to the possible correlation between them.

9.5.1 Genetic Association Tests for Multiple Phenotype Analysis Based on Kernel Methods

Among the multiple-phenotype association tests based on kernel methodology, we highlight the Multivariate Kernel Machine Regression (MKMR) (Maity et al. 2012) and the Multi-trait Sequence Kernel Association Test (MSKAT) (Liu and Lin 2018), both within the regression framework. We also highlight a model based on the Kernel Distance Covariance (KDC) framework called Gene Association with Multiple Traits (GAMuT) (Broadaway et al. 2016). In order to present the three methods for multiple-phenotype association testing, let Y_i = (Y_1i, …, Y_Mi) be the observed values of the M health phenotypes for subject i (i = 1, …, n). As in previous sections, given a set of p SNPs, let G_i = (G_i^1, G_i^2, …, G_i^p) be the genotypes for subject i, with G_i^j ∈ {0, 1, 2} (j = 1, …, p) recording the number of minor alleles.

9.5.2 Multivariate Kernel Machine Regression (MKMR)

Multivariate Kernel Machine Regression (MKMR) is a kernel model within the regression framework proposed to study the genetic association of a set of p genetic variants with a set of M health-related phenotypes. As in the Kernel Machine Regression (KMR) and Kernel Logistic Regression (KLR) models for a single health phenotype, MKMR relates genomic data and covariates to the set of health phenotypes through a semiparametric model and solves the genetic association test by exploiting its relationship with a multivariate linear model within the mixed-models framework. To relate the set of health phenotypes Y_i to the covariates X_i and the genomic data G_i, defined as in Sect. 9.2, the semiparametric model

Y_ti = X_i^T β_t + h_t(G_i) + e_ti, i = 1, …, n; t = 1, …, M

is proposed. The correlation between the M health phenotypes is captured by the errors (e_1i, …, e_Mi), which follow a Normal(0, Σ) distribution. The vector β_t contains the unknown coefficients for the effect of the covariates X on phenotype t, and h_t(G_i) is an unknown function providing the genetic effect of the set of p SNPs of interest. Note that its dual representation is given by h_t(G_i) = Σ_{j=1}^n α_tj K_t(G_i, G_j), expressed in matrix form as h_t = K_t α_t, with K_t the n × n kernel matrix defined as a matrix of genetic similarity between subjects. K_t is a kernel matrix for phenotype t (t = 1, …, M), but usually the same kernel matrix is used for each phenotype. To test whether there is a genetic effect on the set of health phenotypes, we are interested in the null hypothesis H_0: h_1(·) = … = h_M(·) = 0. This model, expressed in matrix form, is given by:

Y = X^T β + h + e

where e follows a Normal(0, Σ̃) distribution, with Σ̃ an M × M block matrix equal to Σ ⊗ I_n; Y is defined as Y = (Y_11, …, Y_1n, …, Y_M1, …, Y_Mn); h = (h_1(G_1), …, h_1(G_n), …, h_M(G_1), …, h_M(G_n)); X = diag(X_1, …, X_M); and β = (β_1^T, …, β_M^T)^T.

Then, there is a connection between the semiparametric MKMR model and the multivariate linear mixed model, because their estimating equations are identical after deriving the penalized log-likelihood of the former (Maity et al. 2012; Harville 1977). Exploiting this connection, the analysis of the relationships of the set of phenotypes with the genomic data and the covariates is performed through the multivariate linear mixed model expressed as:

Y = X^T β + h

where h ~ Normal(0, K Λ), with K = diag(K_1, …, K_M) and Λ = diag(τ_1, …, τ_M) ⊗ I_n. For testing the association of the genetic marker set with the set of health phenotypes, the null hypothesis H_0: h_1(·) = … = h_M(·) = 0 is equivalent to H_0: τ_1 = … = τ_M = 0. The score statistic for MKMR is given by:

Q = (Y − X β̂)^t V_0^{−1} K V_0^{−1} (Y − X β̂)

with β̂ the estimate of β under the null hypothesis and V_0 denoting V = Σ̃ + K Λ also evaluated under the null hypothesis, i.e., with a null genetic effect. The distribution of the test statistic Q is a mixture of chi-squared random variables with weights given by the eigenvalues of K. Several methods have been suggested to obtain the distribution of Q, among them the Davies method for its exact distribution (Davies 1980) and moment matching for an approximation (Duchesne and De Micheaux 2010).
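The MKMR score statistic can be sketched numerically as below (our illustration, not the authors' code; all names are hypothetical), stacking the phenotypes as in the matrix form above and taking V_0 = Σ ⊗ I_n under the null:

```python
import numpy as np
from scipy.linalg import block_diag

def mkmr_score_q(resid, K_blocks, Sigma):
    """Q = r^t V0^{-1} K V0^{-1} r for stacked null-model residuals r.

    resid    : (M*n,) residuals stacked phenotype by phenotype
    K_blocks : list of M (n, n) kernel matrices, K = diag(K_1, ..., K_M)
    Sigma    : (M, M) covariance of the phenotype errors
    """
    n = K_blocks[0].shape[0]
    K = block_diag(*K_blocks)
    V0 = np.kron(Sigma, np.eye(n))    # Sigma ⊗ I_n under the null
    u = np.linalg.solve(V0, resid)    # V0^{-1} r
    return float(u @ K @ u)
```

With Σ = I the statistic reduces to Σ_t r_t^t K_t r_t, a sum of single-phenotype score statistics, which is a useful sanity check.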

9.5.3 Multi-trait Sequence Kernel Association Test (MSKAT)

MSKAT (Multi-trait Sequence Kernel Association Test) is a multi-marker association test to identify rare genetic variants associated with multiple phenotypes. MSKAT is an extension of the genetic association test for a single phenotype (SKAT). To relate the SNP set to the set of health phenotypes, the proposed model can be written as Y_ti = X_i^T β_t + G_i γ_t + e_ti, with error variance σ_t², where γ_t is the length-p vector of effect sizes of the set of SNPs and β_t is the coefficient vector whose length equals the number of covariates to be adjusted for. The correlation matrix between the M phenotypes is given by Σ = (ρ_tl), t, l ∈ {1, …, M}. The goal is to test whether the joint effect of the genetic variants is null; therefore, we are interested in testing the null hypothesis H_0: γ_1 = … = γ_M = 0. The score statistic for the joint effect of the variant set on the multiple health phenotypes is defined by (Liu and Lin 2018):

Q = Σ_{j=1}^p w_j² S_j Σ̂^{−1} S_j^T

where S_j = (S_j1, …, S_jM) with S_jt = Σ_{i=1}^n G_i^j (y_it − μ̂_it) σ̂_t^{−1}. The weights w_j are determined by the variant MAF; for example, w_j can be the weights mentioned above for SKAT. The variance estimates σ_t² (σ̂_t²), the correlations between phenotypes Σ = (ρ_tl), t, l ∈ {1, …, M} (Σ̂ = (ρ̂_tl)), and the means μ_it (μ̂_it) are all estimated under the null hypothesis. The distribution of the MSKAT score statistic Q under the null hypothesis is obtained asymptotically as a weighted sum of independent degree-1 chi-squares, where the weights are the eigenvalues of the matrix W R W. The matrix R = (τ_j τ_l r_jl), j, l ∈ {1, …, p}, is defined through τ_j = (Σ_{i=1}^n (G_i^j − Ĝ_i^j)²)^{1/2} and the correlation of the genotype residuals for the j-th and l-th variants, r_jl = Σ_{i=1}^n (G_i^j − Ĝ_i^j)(G_i^l − Ĝ_i^l) / (τ_j τ_l). We note that Ĝ^j are the predicted values of the regression of variant j on the covariates X. Note that with Q in MSKAT the weighted linear kernel is used implicitly, so only the weighted linear kernel can be used in MSKAT; thus, MSKAT is used for the identification of rare genetic variants, whereas any kernel matrix can be used in MKMR.
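As a numerical sketch (ours; the variable names are hypothetical), the MSKAT statistic combines the per-variant, per-phenotype scores S_jt with the inverse phenotype correlation matrix:

```python
import numpy as np

def mskat_q(Y, G, Mu, sigma, Sigma_hat, w):
    """MSKAT score statistic Q = sum_j w_j^2 S_j Sigma_hat^{-1} S_j^T.

    Y, Mu     : (n, M) phenotypes and their null-model means
    G         : (n, p) genotype matrix
    sigma     : (M,) residual standard deviations under the null
    Sigma_hat : (M, M) phenotype correlation matrix
    w         : (p,) MAF-based variant weights
    """
    R = (Y - Mu) / sigma     # standardized residuals, (n, M)
    S = G.T @ R              # S[j, t] = sum_i G_i^j (y_it - mu_it) / sigma_t
    Si = np.linalg.inv(Sigma_hat)
    return float(sum(w[j] ** 2 * S[j] @ Si @ S[j] for j in range(G.shape[1])))
```

Because Σ̂ is positive definite, each term w_j² S_j Σ̂^{−1} S_j^T is non-negative, so Q ≥ 0, as expected for a score statistic of this form.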

9.5.4 Gene Association with Multiple Traits (GAMuT)

The Gene Association with Multiple Traits (GAMuT) method is a test for the association of large-scale genomic data with high-dimensional phenotype data. GAMuT provides a nonparametric test of independence between a set of phenotypes and a set of genetic markers within a framework called Kernel Distance Covariance (KDC). GAMuT assesses the independence between the pairwise similarities of a set of phenotypes and the pairwise similarities of a set of genetic variants. Note that both the number of phenotypes and the number of genetic variants can be arbitrary, and both continuous and/or categorical phenotypes can be assessed. The GAMuT procedure is an independence test between the elements of two similarity matrices corresponding to genotypes and phenotypes (Broadaway et al. 2016). For the genetic similarity matrix, any of the kernel matrices shown in Sect. 9.2 can be used, from the IBS kernel and the polygenic kernel to the weighted linear kernel for rare-variant analysis. The choice of phenotypic similarity for a set of phenotypes can be quite flexible. We denote by P(Y_i, Y_j) the kernel function providing the phenotypic similarity between individuals i and j; the phenotypic linear kernel, for example, is defined as P(Y_i, Y_j) = 1 + Σ_{t=1}^M Y_ti Y_tj. Once the similarity matrices K and P are defined for genotypes and phenotypes, respectively, GAMuT tests the independence between the elements of the two matrices. Let H = I − 1_n 1_n^t / n be a centering matrix; each of the two matrices is centered as K_C = H K H and P_C = H P H. GAMuT is then the independence test between the two matrices, with statistic

T_GAMuT = (1/n) trace(P_C K_C)

The distribution of T_GAMuT under the null hypothesis follows a weighted sum of independent χ_1² variables, where the weights are the pairwise products of the eigenvalues of P_C and K_C:

(1/n²) Σ_{i,j} λ_{P_i} λ_{K_j} χ_1²

where λ_P and λ_K are the eigenvalues of P_C and K_C, respectively.
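The GAMuT statistic and its null-distribution weights are simple to compute once the two similarity matrices are built; the following is an illustrative sketch (our function names, simulated inputs):

```python
import numpy as np

def gamut(P, K):
    """Return T_GAMuT = trace(Pc Kc)/n and the chi^2_1 mixture weights
    lambda_Pi * lambda_Kj / n^2 for its null distribution."""
    n = P.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Pc, Kc = H @ P @ H, H @ K @ H
    T = np.trace(Pc @ Kc) / n
    lam_P = np.linalg.eigvalsh(Pc)           # eigenvalues of the centered matrices
    lam_K = np.linalg.eigvalsh(Kc)
    weights = np.outer(lam_P, lam_K).ravel() / n**2
    return float(T), weights
```

A p-value would then be obtained by evaluating the tail of the weighted chi-square mixture, e.g. with Davies' method.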

9.6 Kernel Methods for Censored Survival Outcomes in Genetic Association Studies

In recent years, the human lifespan has increased due, among other reasons, to improved medical treatments. The identification of a SNP set associated with disease progression is an interesting target for the improvement of treatments. Score tests within the kernel framework for the association of a SNP set with disease progression have been proposed (Cai et al. 2011; Lin et al. 2011; Chen et al. 2014). These scores are built within the standard proportional hazards model framework and test the effect of a SNP set on the risk of developing a clinical event, in order to identify genetic marker sets associated with disease progression. Let Y_i = (t_i, δ_i), i = 1, …, n, be independent time-to-event observations with observation time t_i and event/censoring indicator δ_i. Then, with the covariates X_i and genomic data G_i defined as in Sect. 9.2, the Cox proportional hazards model is

λ_i(t) = λ_0(t) e^{X_i β + h(G_i)}

where λ_0(t) is the baseline hazard function, β contains the effects of the covariates, and h(G_i) is the genetic effect, which can be expressed as a linear combination of the kernel function, h(G_i) = Σ_{j=1}^n α_j K(G_i, G_j), or in matrix form h = Kα, with K the n × n kernel matrix defined as a genetic similarity matrix between subjects. A connection similar to that shown in Sect. 9.3, between kernel machine regression and mixed-effects models, is used: the above Cox proportional hazards model can be represented as a mixed-effects model written as

λ_i(t) = λ_0(t) e^{X_i β + h_i}

where β is a vector of fixed effects for the covariates and h is a random effect for the genotypes with mean 0 and variance τ K⁻, where K⁻ is the Moore–Penrose generalized inverse of K. Therefore, as in Sect. 9.3, to test the genetic association with the risk of developing a clinical event (or with disease progression), the null hypothesis of interest is H_0: τ = 0. The score statistic for the variance component, Q = R̂^t K R̂, is then proposed, where R̂ is the vector of estimated martingale residuals under the null hypothesis. As in Sect. 9.3, the distribution of Q under the null hypothesis is a mixture of chi-squared distributions, or it can be approximated by a scaled χ² through the Satterthwaite procedure, based on the first and second moments of the distribution of Q under the null hypothesis.
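Under a null model with no covariates, the martingale residuals reduce to δ_i minus the Nelson–Aalen cumulative hazard at t_i, which allows a compact sketch of the score statistic (our illustration with hypothetical names; a real analysis would fit the null Cox model with covariates first):

```python
import numpy as np

def null_martingale_residuals(t, d):
    """M_i = delta_i - H0(t_i), with H0 the Nelson-Aalen cumulative hazard
    estimated from the data themselves (null model, no covariates)."""
    n = len(t)
    H0 = np.array([
        # sum the hazard increments delta_k / (# at risk at t_k) over t_k <= t_i
        sum(d[k] / np.sum(t >= t[k]) for k in range(n) if t[k] <= t[i])
        for i in range(n)
    ])
    return d - H0

def survival_score_q(t, d, K):
    """Variance-component score statistic Q = R^t K R."""
    R = null_martingale_residuals(t, d)
    return float(R @ K @ R)
```

With the convention above the residuals sum to zero exactly, and Q is non-negative whenever K is positive semi-definite.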

9.7 Conclusions

The kernel methodology provides a set of appropriate algorithms for the identification of significant genetic variants associated with health-related phenotypes. The use of kernel methodology has had a significant impact on genetic association studies, changing the paradigm for approaching genetic association tests by adapting to the characteristics of large-scale genomic data analysis. An advantage of kernel methods is that the genetic association with the phenotype of interest is tested through the joint genetic effect, making the results robust even when the fraction of causal variants is small. That multi-marker tests within the kernel framework identify genetic variants as a block, without revealing which genetic marker may be driving the identified association, could be viewed as a limitation; for this issue, the kernel-based measure of variable importance (KVI) provides a ranking of genetic markers according to the contribution of a genetic variant to the joint effect of a set of genetic variants. KVI can also be used to rank individual genetic markers, blocks of genetic markers formed by LD, or genes, offering a way to select markers prior to the genetic association test. We highlight the connection of kernel methods within the regression framework with mixed-effects models. Because of this connection, the algorithms of the kernel methodology offer simple and computationally efficient significance tests for genetic association with phenotypes. Furthermore, the kernel methodology offers an elegant and computationally efficient way to study non-linear relationships through powerful linear algorithms. Thus, semiparametric regression models can be "kernelized", allowing estimation of the effect of covariates and of the genetic effect of genetic marker structures correlated due to LD. In addition, since any linear algorithm that can be expressed in terms of an inner product can be kernelized, a wide variety of study designs could be proposed for high-dimensional genomic data.


Compliance with ethical standards

Conflict of Interest The author declares that he has no conflict of interest.

References

Anderson MJ (2001) A new method for non-parametric multivariate analysis of variance. Austral Ecol 26(1):32–46
Broadaway KA, Cutler DJ, Duncan R, Moore JL, Ware EB, Jhun MA et al (2016) A statistical approach for testing cross-phenotype effects of rare variants. Am J Hum Genet 98(3):525–540
Cai T, Tonini G, Lin X (2011) Kernel machine approach to testing the significance of multiple genetic markers for risk prediction. Biometrics 67(3):975–986
Carvallo P (2017) Conceptos sobre genética humana para la comprensión e interpretación de las mutaciones en cáncer y otras patologías hereditarias. Revista Médica Clínica Las Condes 28(4):531–537
Chen H, Lumley T, Brody J, Heard-Costa NL, Fox CS, Cupples LA, Dupuis J (2014) Sequence kernel association test for survival traits. Genet Epidemiol 38(3):191–197
Chen J, Li H (2013) Kernel methods for regression analysis of microbiome compositional data. In: Topics in applied statistics. Springer, pp 191–201
Cristianini N, Shawe-Taylor J, Elisseeff A, Kandola JS (2002) On kernel-target alignment. Adv Neural Inf Process Syst 14:367–373
Davies RB (1980) The distribution of a linear combination of χ2 random variables. J R Stat Soc Ser C (Appl Stat) 29(3):323–333
Duchesne P, De Micheaux PL (2010) Computing the distribution of quadratic forms: further comparisons between the Liu–Tang–Zhang approximation and exact methods. Comput Stat Data Anal 54(4):858–862
Gallego V, Oller R, Luz Calle M (2017) Kernel-based measure of variable importance for genetic association studies. Int J Biostat 13(2)
Gower JC (1966) Some distance properties of latent root and vector methods used in multivariate analysis. Biometrika 53(3–4):325–338
Harville DA (1977) Maximum likelihood approaches to variance component estimation and to related problems. J Am Stat Assoc 72(358):320–338
Iachine I, Skytthe A, Vaupel JW, McGue M, Koskenvuo M, Kaprio J, Pedersen NL, Christensen K et al (2006) Genetic influence on human lifespan and longevity. Hum Genet 119(3):312–321
Khoury MJ, Burke W, Thomson EJ et al (2000) Genetics and public health in the 21st century: using genetic information to improve health and prevent disease. Number 40. OUP, USA
Kwee LC, Liu D, Lin X, Ghosh D, Epstein MP (2008) A powerful and flexible multilocus association test for quantitative traits. Am J Hum Genet 82(2):386–397
Larson NB, Schaid DJ (2013) A kernel regression approach to gene-gene interaction detection for case-control studies. Genet Epidemiol 37(7):695–703
Larson NB, Chen J, Schaid DJ (2019) A review of kernel methods for genetic association studies. Genet Epidemiol 43(2):122–136
Lin X, Cai T, Wu MC, Zhou Q, Liu G, Christiani DC, Lin X (2011) Kernel machine SNP-set analysis for censored survival outcomes in genome-wide association studies. Genet Epidemiol 35(7):620–631
Liu Z, Lin X (2018) Multiple phenotype association tests using summary statistics in genome-wide association studies. Biometrics 74(1):165–175
Liu D, Lin X, Ghosh D (2007) Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models. Biometrics 63(4):1079–1088
Liu D, Ghosh D, Lin X (2008) Estimation and testing for the effect of a genetic pathway on a disease outcome using logistic kernel machine regression via logistic mixed models. BMC Bioinform 9(1):292
Maity A, Sullivan PF, Tzeng J (2012) Multivariate phenotype association analysis by marker-set kernel machine regression. Genet Epidemiol 36(7):686–695
McArdle BH, Anderson MJ (2001) Fitting multivariate models to community data: a comment on distance-based redundancy analysis. Ecology 82(1):290–297
National Academies of Sciences, Engineering, Medicine et al (2017) An evidence framework for genetic testing
Pan W (2011) Relationship between genomic distance-based regression and kernel machine regression for multi-marker association testing. Genet Epidemiol 35(4):211–216
Reich DE, Cargill M, Bolk S, Ireland J, Sabeti PC, Richter DJ, Lavery T, Kouyoumjian R, Farhadian SF, Ward R et al (2001) Linkage disequilibrium in the human genome. Nature 411(6834):199–204
Schölkopf B, Smola A, Müller KR (1997) Kernel principal component analysis. In: International conference on artificial neural networks. Springer, pp 583–588
Schölkopf B, Smola AJ (2002) Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press
Shawe-Taylor J, Cristianini N (2004) Kernel methods for pattern analysis. CUP
Sheikh A, O'Hehir RE, Holgate ST (2017) Middleton's allergy essentials
Smola AJ, Schölkopf B (1998) Learning with kernels, vol 4. Citeseer
Vapnik V (2013) The nature of statistical learning theory. Springer, New York
Wang X, Xing EP, Schaid DJ (2015) Kernel methods for large-scale genomic data analysis. Brief Bioinformatics 16(2):183–192
Wessel J, Schork NJ (2006) Generalized genomic distance-based regression methodology for multilocus association analysis. Am J Hum Genet 79(5):792–806
Wu MC, Kraft P, Epstein MP, Taylor DM, Chanock SJ, Hunter DJ, Lin X (2010) Powerful SNP-set analysis for case-control genome-wide association studies. Am J Hum Genet 86(6):929–942
Wu MC, Lee S, Cai T, Li Y, Boehnke M, Lin X (2011) Rare-variant association testing for sequencing data with the sequence kernel association test. Am J Hum Genet 89(1):82–93
Wu MC, Maity A, Lee S, Simmons EM, Harmon QE, Lin X, Engel SM, Molldrem JJ, Armistead PM (2013) Kernel machine SNP-set testing under multiple candidate kernels. Genet Epidemiol 37(3):267–275
Zhang D, Lin X (2003) Hypothesis testing in semiparametric additive mixed models. Biostatistics 4(1):57–74
Zhao N, Chen J, Carroll IM, Ringel-Kulka T, Epstein MP, Zhou H, Zhou JJ, Ringel Y, Li H, Wu MC (2015) Testing in microbiome-profiling studies with MiRKAT, the microbiome regression-based kernel association test. Am J Hum Genet 96(5):797–807
Zoppis I, Mauri G, Dondi R (2019) Kernel machines: introduction

Chapter 10

Artificial Intelligence Approaches for Skin Anti-aging and Skin Resilience Research Anastasia Georgievskaya, Daniil Danko, Richard A. Baxter, Hugo Corstjens, and Timur Tlyachev

Abstract The skin is a complex organ whose functioning is affected by both environmental and intrinsic factors, making it a perfect model for studying the aging process at many different levels of analysis. Multi-dimensional data obtained in the course of aging-related research are difficult to analyze. However, with the use of artificial intelligence (AI), datasets at the molecular, genetic and biophysical information levels become more insightful and help strengthen skin resilience. AI also plays a major role in the visualization and simulation of skin and its derivatives (hair and nails). AI-driven technologies thus contribute to advances in skin aging research, including method development and data acquisition, evaluation and interpretation. AI supports the development of new drugs, optimizes treatment recommendations and aids in substantiating the effectiveness of personalized approaches. This chapter outlines some future prospects of the application of AI in the areas of personalization and inclusiveness for both skin research and clinical practice.

Keywords Artificial intelligence · Skin aging · Skin resilience · Multi-dimensional data · Facial imaging · Skin research · Clinical practice · Personalization

A. Georgievskaya (B) · D. Danko · T. Tlyachev HautAI OU, Tallinn, Estonia e-mail: [email protected] R. A. Baxter Phase Plastic Surgery, Mountlake Terrace, WA, USA H. Corstjens Novigo+, Maaseik, Belgium © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_10

189

190

A. Georgievskaya et al.

10.1 Introduction

We would like to open this chapter by addressing the question of why skin aging research is so important. The quality and appearance of our skin, particularly the skin on the face, serve as the first and most obvious reflection of biological age, and they influence the way we are perceived by others (Little et al. 2011). In fact, many people associate skin appearance with perceived wellness and health (Jones et al. 2001). Maintaining healthy, resilient skin means ensuring one's skin remains in optimal condition so that its systems are balanced and function flawlessly, just as in young, healthy individuals. Skin with high resilience has the ability to protect the body effectively from adverse conditions, not to mention the significant role that healthy, resilient skin plays in the psychology of aging. Skin serves as the outermost barrier between our body and the adverse effects of the environment. We suggest that skin can serve as a model for the study of aging at very different levels, ranging from molecules and metabolic pathways to the tissue and organism levels. We propose that the "skin aging exposome" (the combination of external and internal factors affecting a human individual, together with the response of the human body to these factors, that leads to biological and clinical signs of skin aging; Krutmann et al. 2017) and the fundamental intrinsic aging processes (Gladyshev 2016) can be studied in skin as a model object using artificial intelligence approaches. We are sure that the readers of this book are already very familiar with the skin's structure, but we have included a brief introductory overview. The skin consists of a multi-layered epithelium, the epidermis, and an underlying connective tissue called the dermis. The epidermis consists of keratinocytes, Langerhans cells, mastocytes, Merkel cells, and melanocytes. The epidermis also contains associated structures, such as the hair follicles and sebaceous glands.
The epidermis plays a crucial role in the skin's barrier strength and water-holding capacity. The dermis consists of fibroblasts, macrophages, mast cells, T and B cells, as well as lymphatics, microvascular vessels and nerve endings. The dermis is characterized by an extracellular matrix network of collagen, elastin and proteoglycans, which underlies its biophysical properties, such as elasticity and firmness. A comprehensive review of the anatomy of the skin can be found in Gilaberte et al. (2016). Current research characterizes the skin not merely as a barrier with a sensory function, but as a complex biological factory that participates in cell signaling, metabolism and protein synthesis, and as a vital component of the nervous, immune and endocrine systems. Skin aging is a complex phenomenon that cannot be explained by a single causal factor. Accelerated skin aging has been linked to environmental factors such as sun exposure, air pollution, and elevated ambient temperature. Lifestyle factors connected with skin aging include sleep quality, nutrition and perceived stress levels (Krutmann et al. 2017). Furthermore, it has been suggested that skin aging is associated with the general aging of the body and with the onset and development of age-related diseases, such as neurodegenerative, cardiovascular, skeletal, and endocrine diseases (Zouboulis et al. 2019; Franco et al. 2022; Gunn et al. 2013).
10 Artificial Intelligence Approaches for Skin Anti-aging and Skin …

191

The fact that our skin is the largest organ exposed to the environment makes it an excellent model of the intrinsic and extrinsic aging processes. Compared to other systems, the skin is relatively easy to sample through both non-invasive and invasive techniques, including swabs, tape stripping, and skin biopsies. Topical sensor measurements (Cho et al. 2022) and imaging techniques of varying resolutions (Kollias and Stamatas 2002) offer an enormous variety of data for researchers. As more datasets are generated, the need arises for a new generation of analysis tools capable of revealing the complex underlying interconnections between the mechanisms at work in the skin and the decline in skin function that leads to skin aging. Recently, artificial intelligence techniques have gained popularity in the analysis of complex aging datasets (Zhavoronkov et al. 2021), supporting the use of new tools for data interpretation and knowledge discovery in general skin and skin aging research (Shen et al. 2017; Holzscheck et al. 2020). High-quality skin datasets of meaningful size are collected by private and governmental organizations, and open-access databases can offer a great variety of information. The availability of such data is essential for accelerating digital health transformation and new fundamental discoveries. New data is collected constantly, and existing datasets are used to test emerging technical approaches and for re-analysis. Unfortunately, there are a number of limitations to the use of such data: privacy and security, data ownership, local regulations, bureaucratic processes, and cost barriers can be significant hurdles to leveraging such datasets freely (Khan et al. 2021). We further assert that the following holds for open data generally and for skin research data in particular:

1. Datasets are produced within a specific experimental framework or research objective and are thus tied to a particular test, method, or body site. Their type, format, labeling and metadata may limit re-use for other purposes.
2. Datasets may lack proper control subsets.
3. Datasets often lack diversity, containing samples acquired from a small population group (Rawlings 2006).

Since it is unlikely that any dataset will satisfy every researcher's criteria unless it is produced for a very specific task (which implies high production cost and time), AI algorithms will play a critical role in bridging the gap between different types of datasets and data sources. AI can help us to qualitatively transform the available data and combine the available data sources into practically usable, multi-aspect and standardized datasets. Going forward, researchers worldwide will need to work out a quality standard for data that allows its reuse in tasks outside of the original scope. This chapter reviews the rationale for the use of AI in skin aging analysis in clinical and research studies as well as in consumer-facing settings, evaluates the various methodologies currently in use, and elaborates on future directions.

192

A. Georgievskaya et al.

10.2 Genetics

10.2.1 Skin Aging Genes

A set of SNPs (single nucleotide polymorphisms) associated with the manifestation of pigmentation, sagging/laxity and wrinkling/crow's feet has been described, primarily for females of different cultural backgrounds. In Chinese Han subjects (Gao et al. 2017), an SNP in the AHR gene was associated with crow's feet, an SNP in BNC2 with pigment spots on the arms, and an SNP in COL1A2 with laxity of the eyelids. In a Korean population study, an SNP in FCRL5 was associated with an increased risk of wrinkles, and an SNP in OCA2 with a decreased risk of pigmentation (Oh Kim et al. 2021). In another Korean population study (Lee et al. 2022), researchers found 46 novel highly associated SNPs for melanin, gloss, hydration, wrinkles, and elasticity. In females of Caucasian descent (Le Clerc et al. 2013; Jacobs et al. 2015), the STXBP5L and FBOX40 genes were suggested to be associated with photoaging. We are still quite far from identifying the genes responsible for overall skin wellness, including such "sophisticated" parameters as skin radiance and glow (Petitjean et al. 2007), which can be described and measured by clinical criteria but remain hard to explain in plain terms. Therefore, the causality between genetic predispositions and skin properties needs refinement and additional research in diverse population groups to improve objectivity. The relationships between carrying a particular single nucleotide polymorphism and the pace of skin aging have been studied, although not fully (Flood et al. 2019). An important aim of future research will be understanding the mechanisms determining phenotypic changes, and AI approaches can be an advantageous strategy for this ambitious task.

10.2.2 Long-Lived Individuals and Twin Studies

The connection between the rate of intrinsic skin aging and genetically determined longevity status has been demonstrated in several twin studies. In a study carried out by Gunn et al. (2013), it was found that the offspring of long-lived nonagenarian (aged between 90 and 99 years) siblings (Westendorp et al. 2009) exhibited fewer wrinkles on the upper inner arm. This site is relatively hidden from sun exposure throughout the lifetime, suggesting that the observations are linked to intrinsic aging. Male offspring also looked 1.4 years younger than a same-aged control group. Another study of same-sex twins aged between 70 and 99 years revealed a significant correlation between perceived age and survival, physical and cognitive functioning, as well as leukocyte telomere length (Christensen et al. 2009). Perceived age assessment has a high automation potential with AI, since it is possible to train neural network algorithms to make such predictions from digital images. Perceived age prediction can be further enhanced by using short videos, which provide cues not visible in a static photo, such as skin and facial dynamics and kinetics. HautAI Skin SaaS® (Estonia) offers perceived age prediction as commercial software.

10.2.3 Telomere Shortening

One of the hallmarks of intrinsic aging (López-Otín et al. 2013) is progressive telomere shortening resulting from cell division. Both intrinsic skin aging and photoaging disrupt the telomere loop and contribute to its shortening, eventually leading to a DNA damage response mediated by the p53 tumor suppressor protein. Kosmadaki and Gilchrest (2004) suggested that a better understanding of telomeres and telomerase should greatly expand management options for aging skin.

10.2.4 Variety of Aging Clocks

To assess the physiological state of a cell, researchers around the world have offered various conceptual approaches, in particular the so-called "aging clocks". These can be based, among others, on DNA methylation (methylome analysis) (Horvath and Raj 2018), telomere length (Vaiserman and Krasnienkov 2020), gene expression (Holzscheck et al. 2021), altered cell protein profiles and metabolic changes (e.g., inflammation (Pilkington et al. 2021)) (Rutledge et al. 2022), the accumulation of ROS (Rinnerthaler et al. 2015), and other molecules. One of the most elaborate "clocks" for estimating human biological age is based on epigenetic changes. Over the course of a human life, methylation of certain genome regions occurs. Specific genomic locations, the so-called CpG dinucleotides (dinucleotides in which a cytosine nucleotide is followed by a guanine nucleotide), are used as markers of the aging process. Today, many open-access DNA methylation databases have been created (Edgar et al. 2002), which allows researchers to apply more complex methods for processing data arrays using machine learning and artificial intelligence, which in turn opens up a wider field for the application of aging biomarkers in cell research, endocrinology, medicine, demography, and many other areas. Determining the rate of aging processes is a new and rapidly growing field of science. Some of the seminal work on this topic was published independently in 2013 by Horvath (2013) and by Hannum et al. (2013). A biological age metric based on DNA methylation can rely on the assessment of the methylation of specific CpGs, either in specific sample types (blood, liver, brain, fibroblasts, saliva, cartilage, etc.) (Bocklandt et al. 2011; Xu et al. 2019) or regardless of tissue type, in which case a single method is applicable to various tissues and cells (Vijayakumar and Cho 2022). Today, by using machine learning approaches, it is possible to achieve high accuracy in determining biological age (Zaguia et al. 2022). Various methodologies are used, including random forest regression, support vector regression (Fan et al. 2021), multiple linear regression (Galkin et al. 2021), and gradient boosting regression (Li et al. 2018). Optimized deep learning frameworks are being developed for the automatic analysis of heterogeneous data (Levy et al. 2020). AI-driven multivariate models allow researchers to increase the accuracy of biological age prediction to a mean absolute error (MAE) of 4.0 years, compared with an MAE of 7.5–11.0 years for univariate models (Becker et al. 2020; Freire-Aradas et al. 2022). To assess the rate of intrinsic and extrinsic aging and the biological age of the skin, researchers have developed a highly accurate skin-specific methylome analysis algorithm based on data from more than 500 human skin samples (Boroni et al. 2020). Interestingly, the researchers emphasize the value of using in vitro models in the study of skin aging.
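Once trained, linear methylation clocks of the Horvath/Hannum type reduce to a weighted sum of CpG beta values plus an intercept. The sketch below illustrates only this application step; the CpG identifiers, weights and intercept are hypothetical placeholders, not coefficients from any published clock (real clocks use hundreds of fitted CpG coefficients).

```python
# Minimal sketch of applying a trained linear "methylation clock".
# All CpG ids and coefficients below are hypothetical, for illustration only.

CLOCK_INTERCEPT = 35.0
CLOCK_WEIGHTS = {  # CpG id -> regression coefficient (years per unit of beta)
    "cg_hypothetical_01": 40.0,
    "cg_hypothetical_02": -25.0,
    "cg_hypothetical_03": 12.5,
}

def predict_age(betas: dict) -> float:
    """Predict biological age from CpG beta values (methylation fractions, 0..1)."""
    return CLOCK_INTERCEPT + sum(w * betas[cpg] for cpg, w in CLOCK_WEIGHTS.items())

sample = {"cg_hypothetical_01": 0.60,
          "cg_hypothetical_02": 0.20,
          "cg_hypothetical_03": 0.40}
age = predict_age(sample)  # 35 + 24 - 5 + 5 = 59.0 years
```

Training such a clock amounts to fitting these weights, typically with a penalized regression (e.g., elastic net) over many samples with known chronological ages.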

10.3 Molecular Aging

During the aging process, many complex metabolic and genetic abnormalities accumulate in skin cells and the matrix: primarily ROS accumulation, mTOR pathway upregulation, TGF-β pathway downregulation, the generation of a chronic pro-inflammatory environment, etc. Such changes may hamper skin function and lead to an imbalance in the synthesis of biomolecules such as carbohydrates, lipids and proteins. With aging, skin cells reorganize and metabolic rearrangements of the main signaling pathways occur, changing skin cell function. While not exclusive to skin, the TGF-β and mTOR pathways undergo significant shifts during aging. The transforming growth factor-β (TGF-β) pathway is known to be involved in a wide range of biological functions, from cell proliferation and wound healing to cancer (Morikawa et al. 2016). In response to TGF-β activation, fibroblasts are able to regulate extracellular matrix homeostasis, collagen synthesis and degradation. It is known that in aging skin, the TGF-β pathway is significantly downregulated (Quan et al. 2010). The mammalian target of rapamycin (mTOR), a well-known target for longevity interventions, is also attracting close attention in skin aging studies. mTOR plays a key role in coordinating signal transduction and cellular response. Thus, the role of different molecules (e.g., rapamycin, resveratrol, etc.) in the mitigation of various skin aging processes via suppression of the age-related increase in mTOR signaling (Antikainen et al. 2017) is widely studied (Castilho et al. 2009; Ghosh et al. 2010; Li et al. 2014; Lee et al. 2021). The accumulation of reactive oxygen species (ROS) is not exclusive to skin, but it is a particularly pronounced feature of skin aging, as it is mainly driven by extrinsic stress factors (e.g., UV light) that provoke the generation of these harmful molecules (Rinnerthaler et al. 2015).
ROS are highly reactive molecules that, if produced in excessive amounts, can lead to mitochondrial dysfunction and DNA damage.


Accumulating DNA damage and the decreased synthesis of vital proteins can lead to a vicious circle of pathological intercellular communication. A shift in the balance between ROS production and antioxidant defense systems may then lead to accelerated aging. Age-related alterations in cell–cell signaling fuel DNA damage accumulation and also lead to age-related tissue degeneration and dysfunction. The degree of decline in DNA repair has tissue-specific characteristics (Chen et al. 2020). Aging skin is characterized by inflammaging: a change in the balance of the immune system and inflammatory processes, as well as the formation of a pro-inflammatory environment (Pilkington et al. 2021). Thus, the synthesis of pro-inflammatory proteins increases (Ma et al. 2020) and the level of cytokines changes (Kinn et al. 2015). At the cellular level, skin aging changes are not limited to inflammaging. A phenomenon known as "stem cell exhaustion" is also present in skin: dysregulation of metabolic pathways and defense systems creates a modified metabolic environment in the skin and leads to dysregulation of the stem cell cycle and, as a consequence, its exhaustion (Castilho et al. 2009; Shyh-Chang et al. 2013).

10.3.1 Omics Approaches in Aging Research

Modern methodologies allow us to approach the study of complex biological systems at a new level. For example, Holzscheck et al. (2020) used machine learning on multidimensional multi-omics data to describe the order in which the hallmarks of aging manifest and the changes in the signaling pathways (including inflammaging) of human skin tissue over time. In another study (Nie et al. 2022), researchers implemented AI methods to develop a new approach to biological age assessment based on multi-omics data (metabolome, microbiome, and proteome), skin biophysical features, and physical fitness examinations. With the help of deep neural networks, it was possible to reveal four stages of molecular skin aging based on the analysis of metabolomic shifts. Researchers have also proposed potential protein targets and candidate molecules for possible longevity treatments (Yeh et al. 2021). In addition to the obvious changes in the cellular components of tissues undergoing age-related transformations, it is important to understand that there is also a change in the active milieu of the extracellular matrix, which is crucial for skin cell development and the skin's physical properties (Birch 2018). Age-related modifications provoke cell senescence and stem cell exhaustion (Fedintsev and Moskalev 2020). Using AI-driven algorithms, researchers managed to associate 30 genes and several metabolic pathways with age-related extracellular matrix stiffness (Pun et al. 2022).


10.4 Drug Discovery for Skin Resilience

We suggest that a possible strategy for skin wellness and anti-aging is the development of protocols for preserving skin resilience. Maintaining healthy skin homeostasis with an adequate defense function could be an advantage, as it allows natural skin mechanisms to be used to manage the aging process. Kennedy et al. (2020) demonstrated prima facie efficacy of AI in the prediction of longevity-extending peptides. The researchers assembled data from public datasets, peer-reviewed scientific papers and patents and, utilizing a deep learning approach, were able to predict the anti-aging properties of a plant-derived peptide. They provided in vitro and ex vivo experimental evidence, such as an increase in keratinocyte proliferation and migration and enhanced collagen formation, as well as an anti-wrinkle effect in a pilot clinical study. BASF launched a naturally derived ingredient called PeptAIde™ (2022a), which claims to combat low-grade inflammation. Using the power of AI, numerous peptides were screened for their ability to help prevent the release of inflammatory mediators, such as TNFα. Using in silico prediction and a machine learning platform, they evaluated trillions of data points to identify the plant-based peptides with the greatest impact on chronic low-grade inflammation in the skin and scalp. The final cosmetic ingredient consists of four multifunctional plant-based peptides, each between 12 and 17 amino acids in length. Its activity was clinically proven on both skin and hair, showing an improvement in skin smoothness and firmness and a reduction in redness and sensitivity. Ashland introduced a natural biofunctional extract from Santalum album, which was developed using AI (2022b). The technology was used to predict the potential biological activity on the skin based on 17 components of Santalwood™.
Bioinformatics predicted multiple biological pathways, including skin olfactory receptors, skin barrier regeneration and cell longevity. Based on these data, the material is expected to reduce cellular senescence and mitigate the effects of air pollution. The active ingredient showed clinical benefits for skin regeneration, firmness, luminosity and wrinkles. Phylogene deployed AI to identify components with relevant and substantial but still hidden activity (Alizés 2022). They analyzed data generated during untargeted proteomics experiments on skin biopsies treated (or not) with the active ingredient(s). Using AI, the proteome expression pattern allowed the identification of pathways affected by the treatment. This led to the identification of the potential mechanism of action and its translation into functional analysis and potential claims. No prior assumption was made about biological activity, and this technology can be used to uncover the activities of newly developed materials or formulations. It can be used equally well for existing products in need of repositioning. Glycation is a spontaneous and irreversible non-enzymatic complex network of reactions between a carbohydrate and a molecule with a free amino group. It is considered to be one of the fundamental mechanisms involved in aging. The identification of glycation sites is important in order to understand this mechanism and develop anti-glycation strategies. Identifying glycation sites through experimentation requires substantial time and cost, so computational techniques are an attractive alternative (Alkuhlani et al. 2020). Approaches using support vector machines, artificial neural networks (Que-Salinas et al. 2022), convolutional neural networks, recurrent neural networks and long short-term memory (LSTM) networks have been presented (Alkuhlani et al. 2020). There have been some successes in predicting glycation sites, but additional efforts are needed to improve the overall performance of these models so that the formation of these irreversible protein modifications can be slowed down.
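Glycation-site predictors of the kind cited above typically start by encoding a fixed-width sequence window around each candidate lysine as a numeric feature vector that a classifier (SVM, neural network, etc.) can consume. The sketch below shows this common preprocessing step only; the window width and padding symbol are illustrative choices, not parameters of any specific published tool.

```python
# Encode a fixed-width window around each lysine (K) as a one-hot vector,
# the typical first step before training a glycation-site classifier.
# Window half-width and 'X' padding are illustrative choices.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAD = "X"  # padding residue encoded as an all-zero one-hot vector

def one_hot(residue: str) -> list:
    return [1.0 if residue == aa else 0.0 for aa in AMINO_ACIDS]

def lysine_windows(seq: str, half_width: int = 3):
    """Yield (position, feature_vector) for every K in the sequence."""
    for i, res in enumerate(seq):
        if res != "K":
            continue
        window = [
            seq[j] if 0 <= j < len(seq) else PAD
            for j in range(i - half_width, i + half_width + 1)
        ]
        yield i, [x for w in window for x in one_hot(w)]

features = list(lysine_windows("MKTAYKLLG"))
# Two lysines (positions 1 and 5); each feature vector has 7 * 20 = 140 entries.
```

Each labeled window (glycated or not, from experimental data) would then become one training example for the classifier.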

10.5 Microbiome

The microbiome is recognized as playing an important role in human wellness and health. Both the gut and the skin are densely populated by different microbes, including bacteria, fungi and viruses. Studies suggest a close connection between a healthy gut microbiome and healthy skin free of acne, atopic dermatitis and psoriasis, achieved primarily through the gut's role in immune system modulation (O'Neill et al. 2016; Salem et al. 2018). Furthermore, the pace of skin aging may also be associated with the microbiome's ability to modulate immune responses (Li et al. 2020). Dimitriu et al. (2019) demonstrated that demographic, lifestyle, and physiological factors were most associated with skin microbiome composition. Microbiome studies can be challenging and include many pitfalls, ranging from data sampling through cultivation and sequencing to data analysis (Boxberger et al. 2021). Therefore, further research needs to be conducted in order to understand the mechanisms by which the skin's microbiome affects the skin's condition. Changes in the human microbiome have been observed in various conditions, and the links between the microbiome and human health are being intensively investigated. It has been shown that an explainable artificial intelligence model can predict phenotypes, such as skin hydration, age, menopausal status and smoking status, from changes in the microbiome's composition (Carrieri et al. 2021). The study focused on personalized care and well-being, but the results have broader applicability in healthcare.

10.6 Skin 3D Models

Aging research at the tissue level requires models that mimic features of the tissue in question. Consequently, skin research, including the development of new cosmetics and anti-aging products, requires appropriate experimental systems. Early testing of new materials in human clinical studies is not an option for ethical and safety reasons. Therefore, in vitro test systems that mimic the physiological properties of real skin are needed. Two-dimensional (2D) skin cell cultures have been used for decades and are still widely used. Although these 2D cell culture systems have played a key role in advancing our understanding of molecular signaling, cell morphology, and active substance discovery, not all results and conclusions from them are translatable to physiological in vivo systems. Mechanical forces, barrier function, spatial orientation, nutrient and signaling gradients and cell-to-cell communication are critical factors missing in 2D models. The development of 3D models addresses most of these limitations, but at the cost of increased complexity. 3D bioprinting works in a similar way to standard 3D printing, but it uses bioink, consisting of living cells and biomaterials, to form tissue constructs with a high degree of repeatability, flexibility, and accuracy. Although powerful, it is a complex process that requires the adjustment of multiple parameters. Machine learning principles have been applied in 3D bioprinting and have proven useful in determining bioink composition and printability, optimizing numerous parameters during the actual printing process, and detecting and classifying imperfections of the model (Shin et al. 2022). Currently, these techniques are mainly used in the context of personalized medicine and the development of organ- and tissue-like structures, but the technology is also within reach for the development of 3D skin models for dermocosmetic and aging research (Olejnik et al. 2022). Future steps in this area include the use of Big Data and digital twins, eventually contributing to digital bioprinting and in silico experimentation (An et al. 2021). Skin color distribution is known to contribute to perceived age. A study by Lu et al. (2021) reported that both Western European and Chinese observers use skin color and lightness to rate attractiveness, healthiness, and perceived age.
However, skin coloration cues are subtle and not universal, and they are utilized differently within the two ethnic groups, reflecting different aesthetic preferences in Eastern and Western cultures. Nagasawa et al. (2022) used neural networks to reproduce a skin mockup with a multilayered structure determined by mapping the absorbance of melanin and hemoglobin, the main pigments that make up skin tone. The multilayered structure, with different pigments in each layer, contributed greatly to the accurate reproduction of skin tones. Another type of 3D cell culture method is microfabricated systems, also referred to as "organ-on-a-chip". These are models that mimic the physiological and mechanical functions of human organs. The advantages of such models include the incorporation of microfluidic channels, which are responsible for the supply of nutrients and the removal of waste products, and which allow time- and concentration-dependent exposure of the cells to the compounds under study. They also allow multiple organs to be connected. Chong et al. (2022) presented results on a multicellular coculture array representing liver hepatocytes, antigen-presenting cells and a dermal and epidermal skin compartment. The results were integrated with a machine-learning algorithm to predict the skin-sensitizing potential of systemic drugs. This would not be possible in traditional in vitro 3D skin models.


10.7 Biophysical Markers

Clinical scales are widely used in skin aging research. More than 100 skin aging scales exist, but their measurement properties have been validated only to a limited extent. Moreover, the development of new biophysical instrumental methods has greatly expanded the number of quantifiable parameters for both in vivo and in vitro research. Establishing correct models to interpret such complex multiscale datasets is necessary. Cho et al. (2022) used machine learning techniques to predict a skin age index based on in vivo biophysical properties, such as skin elasticity, skin color and hydration. The cforest prediction model was validated in a clinical study in which a significant decrease in the predicted skin age index was observed after six weeks of application of an anti-aging product. Our skin is constantly exposed to external stressors, of which UV is one of the most significant, and proper use of sunscreen is important for maintaining healthy skin. Xiao and Chen (2022) reported on the effect of different sunscreens on trans-epidermal water loss (TEWL) and skin water content. Using machine learning algorithms to process capacitive images of the skin, they showed that measurements of TEWL and skin water content can be used to identify subtle changes in the skin's state after sunscreen application. This approach opens an avenue for investigating the effects of different cosmetic products on skin status in different target groups. Shim et al. (2019) reported on the use of machine learning to predict the sun protection factor (SPF) and UVA protection grade of sunscreens. They reported a good correlation between predicted and in vivo data, but also found that the correlation improved when the presence of pigment, the amount of pigment-grade TiO2, the type of formulation and the type of product were included as additional parameters.
Besides a potential gain in time and resources in developing and testing a sunscreen formulation, this technique also leads to additional insights into the relationships between formulation characteristics and photoprotective properties.
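The idea of improving SPF prediction by adding formulation descriptors as extra predictors can be sketched with a simple linear fit. The data below are synthetic and the feature names hypothetical; Shim et al. (2019) used machine-learning models, so dependency-free ordinary least squares stands in here only to illustrate the multi-feature regression setup.

```python
# Illustrative sketch (synthetic data, hypothetical features): a linear model
# predicting in vivo SPF from an in vitro reading plus a formulation covariate.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_ols(X, y):
    """Least-squares coefficients for y ~ X (rows of X include a leading 1)."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# Synthetic panel: [1, in vitro SPF reading, % pigment-grade TiO2] -> in vivo SPF
X = [[1, 20, 0], [1, 30, 2], [1, 40, 0], [1, 50, 4], [1, 25, 1], [1, 45, 3]]
y = [2 + 0.9 * r[1] + 3.0 * r[2] for r in X]  # exact linear ground truth
coef = fit_ols(X, y)  # recovers [2.0, 0.9, 3.0] up to floating-point error
```

With real panel data the fit would not be exact, and comparing the residual error with and without the formulation column quantifies the gain the study reports.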

10.8 Skin Imaging

Visual characteristics of the skin are illustrative markers of skin properties and reflect the pace of aging. Imaging methods are therefore ubiquitously used to study skin at various levels. Imaging can measure skin properties directly in visible light of different polarizations (e.g., the level of pigmentation) (Zortea et al. 2014), as well as indirectly, for example by measuring signals from skin molecules with near-infrared (NIR) Raman spectroscopy or by fluorescence spectroscopy of porphyrins under UVA irradiation (Kollias and Stamatas 2002). Because of its non-invasive nature, skin imaging can be used for both in vitro and in vivo studies.


In the following, we primarily focus on the analysis of images and videos captured in regular (non-polarized) visible light, as the most widespread and accessible setting. However, similar approaches can be applied to data acquired under different lighting conditions.

10.8.1 Skin Image and Video Processing

Anti-aging science has become increasingly sophisticated in recent years. As new cosmeceuticals and skin care treatments benefit from these advances, it is up to clinicians to objectively measure their outcomes whenever possible. Invasive measurements of skin age are impractical on a day-to-day basis. The challenge is how to bring non-invasive measurements into the clinic without reliance on expensive equipment. The features of healthy, youthful skin include elasticity, hydration, pore size and wrinkles (topography), radiance, and evenness of tone (Fink et al. 2018); such parameters are well observable in visible light, and their assessment can therefore be performed with a smartphone. Deep learning (DL) methods are utilized for skin image and video analysis for different purposes, from melanoma detection for medical purposes to age estimation. These approaches are based on convolutional neural networks or the recently introduced vision transformers (Dosovitskiy et al. 2020): deep learning architectures that divide the input image into several fixed-size patches and apply a self-attention mechanism, which learns both local and global features in images and identifies the parts of the image most significant for a particular task. Despite recent advances in semi-supervised learning (Doersch et al. 2015; Pathak et al. 2016; Zhai et al. 2019) and the existence of pre-trained models, fine-tuning or training DL models still requires a lot of labeled data. In this section, we provide a brief overview of the existing and potential applications of AI for extracting skin aging features and patterns from images.
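The patch-splitting step that vision transformers apply before self-attention can be illustrated in a few lines. The sketch below uses a toy grayscale image represented as nested lists; a real pipeline would operate on image tensors and add a learned linear projection and positional embeddings to each patch.

```python
# Sketch of the vision-transformer patchify step: split an image into
# non-overlapping fixed-size patches, each flattened into one token vector.

def patchify(image, patch):
    """Split an H x W image (list of rows) into flattened patch x patch tokens."""
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    tokens = []
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            tokens.append(
                [image[top + i][left + j] for i in range(patch) for j in range(patch)]
            )
    return tokens

img = [[r * 8 + c for c in range(8)] for r in range(8)]  # toy 8x8 "image"
tokens = patchify(img, 4)
# 4 tokens (a 2x2 grid of patches), each of length 16
```

Self-attention then operates on this token sequence, which is what lets the model relate distant regions of a facial image to one another.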

10.8.2 Morphology Features and Aging Patterns

Due to recent advances in facial landmark detection for 2D and 3D facial analysis (Kartynnik et al. 2019; Deng et al. 2020; Wood et al. 2022), applying AI to facial morphology has become relatively straightforward and robust. Facial landmarks are a set of points on the face that mark facial features, such as the nose, mouth, eyes, etc. Based on landmark locations and the distances between them, one can evaluate different morphology features, e.g., facial asymmetry, eye shape, face oval type, facial proportions, etc. (Alzahrani et al. 2019), which can be used for various cosmetic product and treatment recommendations (Alzahrani et al. 2021).


Windhager et al. (2019) modeled the sex-specific trajectories of average facial aging based on landmarks alone. They discovered that age-related change in face shape is similar for men and women up to about 50 years of age and diverges thereafter. They collected 88 surface scans of human faces for analysis and manually labeled the landmarks, a time-consuming task. In such cases, the application of DL models for landmark detection could potentially speed up, automate and generalize the analysis. Another promising application of AI is predicting how different anti-aging treatments might affect facial aging. In a study by Tanikawa and Yamashiro (2021), deep learning techniques were applied to predict how 3D facial morphology would change after orthognathic surgery and treatment. They approached this task with the following schema: first, they extracted 3D landmark coordinates from a 3D facial tomography image and then fed them to a neural network in order to predict the coordinates after treatment.
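A morphology feature of the kind mentioned above, facial asymmetry, reduces to simple geometry once landmark coordinates are available. The sketch below mirrors right-side landmarks across a vertical midline and averages their distance to the matching left-side landmarks; the landmark names, coordinates and midline are hypothetical, and real pipelines would obtain them from detectors such as those cited above.

```python
# Sketch of a landmark-based asymmetry score: mirror right-side landmarks
# across a vertical midline and measure their distance to the left-side ones.
# All coordinates below are hypothetical illustration values.
import math

def asymmetry(left_pts, right_pts, midline_x):
    """Mean distance between left landmarks and mirrored right landmarks."""
    total = 0.0
    for (lx, ly), (rx, ry) in zip(left_pts, right_pts):
        mirrored_rx = 2 * midline_x - rx
        total += math.dist((lx, ly), (mirrored_rx, ry))
    return total / len(left_pts)

left = [(30, 40), (25, 60)]   # e.g. left eye corner, left mouth corner
right = [(70, 40), (75, 61)]  # matching right-side landmarks
score = asymmetry(left, right, midline_x=50.0)
# mirrored right points: (30, 40) and (25, 61) -> distances 0 and 1 -> mean 0.5
```

A perfectly symmetric face scores 0; larger values indicate greater asymmetry, and such scores can feed the recommendation systems described above.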

10.8.3 AI Systems for Facial Imaging

Recent work by Georgievskaya (2022) studied selfie images using AI-based computer vision algorithms for facial feature evaluation in different age groups of males and females. The strongest correlations with age were revealed for wrinkles, skin uniformness, pores, and sagging in both the male and female groups. The Pearson correlation coefficients between chronological age and the uniformness, sagging, and pores parameters were −0.42, −0.40, and −0.35, respectively, and the coefficient between chronological age and the wrinkle score was −0.56. In this study, the Haut.AI® Skin Metrics Report SaaS platform was used: a tool that enables the software to be accessed from any location with an appropriate internet connection, with detailed results available instantly (see Fig. 10.1). Originally developed for the evaluation of large data sets to facilitate skin care product development, a version allowing single-subject evaluations has since been developed, enabling clinicians and patients to track outcomes and guide treatment strategies. Flament et al. (2022) used big data analysis of selfie images and reported differences in the pace of facial aging between European and Chinese women in a study powered by nine AI algorithms. Volume- and texture-related skin metrics increased linearly with age in European women, compared with lower scores and a more gradual increase in the younger age classes in Chinese women. In Chinese women, pigmentation signs increased steadily between 18 and 40 years, plateaued between 40 and 60 years, then increased again after 60, compared with lower scores and a slower, more regular increase with age in European women.
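The age-vs-metric associations reported above are plain Pearson correlations, which can be computed without any external library. The data below are synthetic illustration values, not figures from the cited study.

```python
# Dependency-free Pearson correlation, as used for the age-vs-skin-metric
# associations above. The sample data are synthetic illustration values.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ages = [25, 35, 45, 55, 65]
uniformness = [0.9, 0.8, 0.6, 0.5, 0.3]  # hypothetical score falling with age
r = pearson(ages, uniformness)           # strongly negative, close to -1
```

A negative coefficient, as in this toy example, matches the reported pattern of skin quality scores declining with chronological age.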


A. Georgievskaya et al.

Fig. 10.1 An illustrative demonstration of a Haut.AI® Skin Metrics Report SaaS system result. An input image (left) is analyzed with AI-based algorithms, which return scores for more than 15 skin conditions, both for the whole face and for facial areas separately, e.g., cheek or forehead. Moreover, the system produces detection masks of skin concerns, e.g., wrinkles/lines, sagging, pigmentation, etc., that are overlaid on the original image (right image). In this example, we present the detection of fine lines (pink), deep lines (green), pigmentation spots and uniformity (brown), and pores (blue)

10.8.4 Discoloration

It is well established that some skin diseases are related to age, e.g., actinic keratosis, irregular pigmentation, and other pigmentation disorders. Among other things, accumulating oxidative stress, mitochondrial abnormalities, DNA damage, telomere shortening, and hormonal shifts provoke melanocyte dysfunction and/or hyperfunction, leading to hypomelanosis (e.g., vitiligo) or hypermelanosis (e.g., UV-induced melanogenesis, melasma), respectively (Lee 2021). Most deep learning applications have focused on melanoma detection (Adegun and Viriri 2020; Codella et al. 2016; Li and Shen 2018; Hosseinzadeh Kassani and Hosseinzadeh Kassani 2019), because this type of skin cancer accounts for the large majority (about 75%) of skin cancer deaths (Narayanan et al. 2010). As a result of this strong research interest, most advances have been achieved for this purpose. Nonetheless, there are several deep learning applications for other types of pigmentation analysis (Adegun and Viriri 2020; Li and Shen 2018; Hosseinzadeh Kassani and Hosseinzadeh Kassani 2019) that can identify aging patterns in the skin. For example, Liu et al. (2020) developed a deep learning system for the analysis of clinical photographs and patient metadata that can distinguish between 26 of the most common skin conditions, some of which are related to aging processes, e.g., actinic keratosis and lentigo.

10.8.5 Skin Texture

It is well established that skin texture features, such as wrinkles, laxity, loss of elasticity and skin dryness, are related to age due to altered extracellular matrix synthesis and the modified barrier function of aged skin (Krutmann et al. 2021). Traditional image processing methods, such as image filtering and texture analysis, are typically used for skin texture evaluation and age estimation (Cula et al. 2013; Batool and Chellappa 2014, 2015; Ng et al. 2015a, 2015b). For example, Ng et al. (2015c) used image filtering to extract wrinkle features from facial images that were later used for age prediction. The model demonstrated a mean absolute error (MAE) of 4.87 years on the FERET dataset (Phillips et al. 2000). However, these methods have limitations: for instance, such techniques are not robust to the presence of hair, eyebrows and eyelashes, which can lead to false positive detections. Thus, more complex, deep learning-based methods have great potential. To address the lack of high-quality labeled data needed for training deep learning models, Kim et al. (2022) proposed a semi-automatic labeling strategy. First, a texture map is extracted from the original image and non-wrinkle textures are removed from the map by multiplying it with a roughly labeled wrinkle mask. The map is then thresholded, and the resulting ground truth maps are used to train the deep learning model. The results were validated on facial images obtained from real skin diagnosis devices, where the model demonstrated higher accuracy than algorithms based on image processing.
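Line filters of the kind used in these traditional pipelines are often built from second-order (Hessian) Gaussian derivatives. The sketch below is a generic, simplified ridge-strength filter of that family—a stand-in for illustration, not the exact filter of any cited study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_strength(img, sigma=2.0):
    """Per-pixel response to dark, line-like structures (e.g., wrinkles)."""
    # Second-order Gaussian derivatives approximate the Hessian at scale sigma.
    hyy = gaussian_filter(img, sigma, order=(2, 0))  # d2/dy2
    hxx = gaussian_filter(img, sigma, order=(0, 2))  # d2/dx2
    hxy = gaussian_filter(img, sigma, order=(1, 1))  # d2/dxdy
    # Larger eigenvalue of the 2x2 Hessian [[hyy, hxy], [hxy, hxx]].
    half_trace = (hyy + hxx) / 2.0
    root = np.sqrt(((hyy - hxx) / 2.0) ** 2 + hxy ** 2)
    lam_max = half_trace + root
    # A dark line is an intensity valley, so its largest eigenvalue is positive.
    return np.clip(lam_max, 0.0, None)

# Toy "skin patch": uniform brightness with one dark horizontal wrinkle.
patch = np.ones((64, 64))
patch[32, :] = 0.0
response = ridge_strength(patch)
```

On this toy patch the response peaks along row 32 and is near zero elsewhere; note that the limitation discussed above applies equally here, since hairs and eyelashes are also dark line-like structures.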

10.8.6 Nails and Hair

Although various well-known age-related changes and disorders of the hair and nails can be observed visually (Van Neste and Tobin 2004; Singh et al. 2005; Trüeb and Tobin 2010), there is still a lack of AI applications that can identify them. Some examples addressing hair loss have been reported (Gallucci et al. 2020; Lee et al. 2020; Sacha et al. 2021); for example, deep learning models have been trained to count hairs or to evaluate hair loss. The deep learning-based ScalpEye system (Chang et al. 2020) evaluates hair problems, such as hair loss, dandruff, folliculitis, and oily hair, based on scalp hair imaging and microscope images. At the same time, deep learning methods have demonstrated an ability to detect and classify several nail diseases and conditions (Nijhawan et al. 2017; Han et al. 2018). However, there is no quantitative analysis of how the hair and nail conditions measured by AI systems differ across ages and ethnic groups; such information could improve our understanding of the visual aspects of the aging process. In terms of quantitative analysis of hair properties with AI methods, it is worth mentioning the Skin and Hair software-as-a-service (SaaS) platform developed by HautAI OÜ: the HautAI® Hair Metrics Report Software allows analysis of eight hair conditions, including gray hair detection, volume and frizz, which could potentially be applied to hair aging studies.

10.8.7 Estimation of Age

Deep convolutional neural networks (CNNs) have been widely used for age estimation from facial images since the ChaLearn competition of 2015 (Escalera et al. 2015), where the best solution utilized the VGG-16 CNN architecture and outperformed both other computer vision models and a human reference (Rothe et al. 2015). Nowadays, various companies provide age estimation services for a broad range of industrial applications, from security to age-oriented advertising. However, there has been much discussion about the biases of such systems, which stem from the limited data used to train them (Kärkkäinen and Joo 2021). For a range of applications in the cosmetics, aesthetics, longevity and wellness industries, such as product or treatment recommendations, the question of why the model predicts a particular age label is just as important as the predicted age itself. Despite the substantial progress made in AI for facial age estimation, only a few studies (Pei et al. 2019; Wang et al. 2022) focus on the interpretability and explainability of age predictions, or on identifying the features that have the most significant impact on model predictions. The majority of such methods utilize attention mechanisms to identify the most influential features. In 2018, Bobrov et al. (2018) proposed the first non-invasive AI visual biomarker of age, PhotoAgeClock, a deep learning method that uses images of the corners of the eye to predict a person's age. Besides the high accuracy achieved (MAE of 2.3 years on the validation dataset), the authors also provided an analysis to identify the features that most influence the predictions and the robustness of the model's output. They reported that eye corner wrinkles and skin pigmentation were the features most predictive of facial age.
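At their core, the attention mechanisms mentioned above reduce to softmax-weighted pooling of regional features, where the learned weights double as an importance map. A toy numeric sketch follows; all feature values, weight vectors and region labels below are invented for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Four face regions, each summarized by a 3-dim feature vector (synthetic).
patches = np.array([
    [0.1, 0.2, 0.0],   # cheek
    [0.9, 0.8, 0.7],   # eye corner (wrinkle-rich in this toy example)
    [0.2, 0.1, 0.1],   # forehead
    [0.0, 0.3, 0.2],   # chin
])
v = np.array([1.0, 1.0, 1.0])      # attention scoring vector (illustrative)
w = np.array([30.0, 20.0, 10.0])   # regression head weights (illustrative)
bias = 20.0

attn = softmax(patches @ v)        # one importance weight per region
pooled = attn @ patches            # attention-weighted feature vector
predicted_age = float(pooled @ w + bias)

# The largest attention weight marks the region that drove the prediction.
most_influential = int(np.argmax(attn))
```

In a trained network both `v` and the patch features are learned, and inspecting `attn` is what yields statements such as "eye corner wrinkles were most predictive of age."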

10 Artificial Intelligence Approaches for Skin Anti-aging and Skin …


Due to a lack of datasets, there is little research on AI for age estimation based on images of other parts of the human skin. Georgievskaya et al. (2020) introduced a deep learning approach for age estimation based on both hand and face images. The authors trained deep learning models for hand and face images separately and then performed an outlier analysis, examining images of individuals whose hand age was much higher than their face age and vice versa. In both cases, visual inspection of the images confirmed that face and hand images can indeed look different in terms of aging. This is a promising insight that can be used for product recommendations in the cosmetics industry, e.g., recommending an anti-aging treatment for a particular skin area. Moreover, this study demonstrates that evaluation of skin age is a complex problem and should not be limited to face analysis in AI systems.

10.8.8 Simulation of Aging

Generative AI methods are also utilized for the simulation of aging effects. Zhang et al. (2017) and Zeng et al. (2018) proposed a conditional adversarial autoencoder model that learns a face manifold and allows the generation of face images with or without the "aging" effect, all while preserving the personalized features of the face. In Antipov et al. (2017), Tang et al. (2018) and Zhu et al. (2020), various conditional generative adversarial network frameworks were proposed that also preserve identity information and include an age classifier that forces the model to generate new face images with target ages. It is worth mentioning that recent advances in diffusion-based models have demonstrated a promising ability to generate photo-realistic images that outperform generative adversarial networks (Ho et al. 2020; Dhariwal and Nichol 2021), making them an attractive solution for generating images with different aging patterns. Diffusion models consist of two parts: the first gradually adds Gaussian noise to original images, while the second is trained to recover the images and is used to generate new data from noise. Several methods have been proposed to utilize diffusion architectures for conditional photo generation (Nichol et al. 2021; Ho and Salimans 2022) and could potentially be utilized for the simulation of photo-realistic aging effects.
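The first of the two parts described above has a closed form: x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, where ᾱ_t is the cumulative product of (1 − β) over the noise schedule. The sketch below implements only this forward (noising) process with an illustrative linear β schedule; the learned reverse denoiser is omitted:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # illustrative linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)     # cumulative signal-retention factors

def forward_diffuse(x0, t, eps):
    """Sample q(x_t | x_0) in closed form for a given noise draw eps."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

x0 = np.array([1.0, -1.0, 0.5])              # a toy "image"
eps = np.zeros_like(x0)                      # zero noise, to expose the mean path

early = forward_diffuse(x0, t=0, eps=eps)    # nearly the original signal
late = forward_diffuse(x0, t=T - 1, eps=eps) # signal almost fully destroyed
```

Training the second part means teaching a network to predict `eps` from the noised sample and `t`; sampling then runs that predictor backwards from pure noise, which is where conditioning on a target age would enter.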

10.9 Psychology

Appearance contributes significantly to well-being. Appearance is subjective, and is influenced by our real body image, existing beauty standards and our self-perception. Fogel and Greenberg (2015) also highlighted the role of the relative importance of appearance and the degree of dissatisfaction with appearance.


While it does not directly impact one's biological aging per se, the recapitulation of the appearance of youthful vigor contributes positively to quality of life. A 2017 review of the psychology of facelift patients found a satisfaction rate of more than 95%, with "improvement seen in positive changes in their life, increased self-confidence and self-esteem, decreased self-consciousness about their appearance, and overall improvement in quality of life" (Sarcu and Adamson 2017). A positive outlook has been shown to be associated with longevity (Kim et al. 2017; Lee et al. 2019), so it is possible that a longevity benefit can be derived from surgery, anti-aging and skin resilience-inducing interventions as well. This view is supported, albeit indirectly, by a concept known as the "socioemotional selectivity theory" (SST), which holds that subjective age predicts late-life health outcomes (Carstensen et al. 1999; Löckenhoff and Carstensen 2004; Uotinen et al. 2005). The longer one expects to live (the time-horizon view), the younger one behaves, and the younger one's self-perception of age. SST suggests that time horizons are plastic and modifiable with behavioral interventions.

Patients are more likely to undergo procedures if they have information about post-procedure results. In plastic surgery, simulations are commonly used in clinical practice, research (Lambros 2020) and training (Turner et al. 2020). AI methods have been used to simulate the outcomes of real surgical procedures and to set patients' expectations about post-operative results (Chartier et al. 2022; Chinski et al. 2022). In consumer skin research, the concept of good-looking skin lies at the intersection of objective measures, based on biophysical measurements (Cho et al. 2022) or imaging methods (Flament et al. 2022; Georgievskaya 2022), expert grading (Bazin et al. 2017) and self-perception (Niu et al. 2020). The latter two are subjective and prone to biases (Dehon and Brédart 2001; Kitagami et al. 2010).
In modern times, the widespread and frequent use of social media such as Instagram has led to growing anxiety around self-image, fueled by the unrealistic expectations that social media conveys (Crystal et al. 2020). Sherlock and Wagstaff (2019) demonstrated that viewing beauty and fitness images significantly decreased self-rated attractiveness, and that the magnitude of this decrease correlated with anxiety, depressive symptoms, self-esteem, and body dissatisfaction in a group of frequent Instagram users. Expert grading is a common practice in skin aging research, but it uses a limited variety of grading categories and can inherit the biases of the expert. Georgievskaya (2022) suggested that the use of gender- and age-specific scales developed using AI approaches and big data allows patients to be compared with their reference groups in a reproducible way and, as a result, to set realistic expectations about achievable skin results and realistic skin scores relevant to their demographic group. Algorithms, however, can also be biased and can inherit biases from their creators (Kordzadeh and Ghasemaghaei 2022). Algorithmic bias in skin research, anti-aging and resilience is a topic that should be taken very seriously and addressed through sound development practices, monitoring principles and fairness tests.


10.10 The Future of Skin Anti-aging Is Personalized

It is clear from the above that AI is contributing to advances in skin aging research, including method development; data acquisition, evaluation and interpretation; and drug development and treatment recommendation. The tremendous increase in the amount of (reliable) data allows for detail like never before. However, the rate of skin aging can vary greatly from one individual to another, requiring a personalized approach. For example, it has been reported that the progressive development of perceivable features related to the skin aging process proceeds differently in people from different ethnic backgrounds, in part due to hereditary genetic traits (Vashi et al. 2016). A high concentration of epidermal melanin makes darkly pigmented individuals more vulnerable to dyspigmentation, while a thicker and more compact dermis makes facial lines less noticeable. The aging process also differs between men and women, with reported differences in hydration, transepidermal water loss, sebum, microcirculation, pigmentation, thickness and pH (Rahrovan et al. 2018). Differences in the exposome to which individuals are continuously exposed, as well as in each individual's response to it, are further factors that warrant rigorous personalization of age-related treatments. These considerations lead inevitably to a strategy of customization and personalization in the study and treatment of age-related processes in our skin. The concept of personalization in health, disease and well-being is not new, but until now it has been difficult to realize due to a lack of technical knowledge and information. AI has taken on the role of the indispensable enabler of personalized skin aging research.
Moreover, these state-of-the-art tools will drive developments in which measurements of the skin serve as biomarkers and risk assessments for pathological conditions, such as cardiovascular disease, diabetes and neurodegenerative disorders (Ni et al. 2021; Hosseini et al. 2021; Koníčková et al. 2022). These rather new areas of exploration may further boost research on skin aging and have a significant impact on how aging and age-related defects of our skin and body can be diagnosed and monitored. Because the skin is an organ equally exposed to the environment and to internal body systems, we suggest it is a practical subject for the study of body aging. Deep learning algorithms surpass previously available methods for discerning complex relationships, interpreting data and generating new insights and knowledge. We argue that AI approaches to skin studies can be advantageous in aging research. A plethora of factors, both extrinsic and intrinsic, affect the skin's condition, and their interplay is ongoing. Therefore, the future of skin research lies in the use of multidimensional data, including omics, health records, and lifestyle and behavioral data, along with easily accessible skin imaging data.

Compliance with Ethical Standards

Conflict of Interest Anastasia Georgievskaya, Daniil Danko, and Timur Tlyachev are employed at HautAI OU. Hugo Corstjens acts as the HautAI OU scientific advisor. Richard A. Baxter is employed at Phase Plastic Surgery.


References Adegun AA, Viriri S (2020) Deep Learning-Based System for Automatic Melanoma Detection. IEEE Access 8:7160–7172 Alizés ID (2022) Phylogene Cosmetics. https://www.phylogene.com/index.php?pagendx=323&pro ject=phylogene_en. Accessed 3 Nov 2022 Alkuhlani A, Gad W, Roushdy M, Salem A-BM (2020) Artificial Intelligence for Glycation Site Prediction. In: IEICE Information and Communication Technology Forum. unknown Alzahrani T, Al-Nuaimy W, Al-Bander B (2021) Integrated multi-model face shape and eye attributes identification for hair style and eyelashes recommendation. Computation (basel) 9:54 Alzahrani T, Al-Nuaimy W, Al-Bander B (2019) Hybrid feature learning and engineering based approach for face shape classification. In: 2019 International Conference on Intelligent Systems and Advanced Computing Sciences (ISACS). IEEE An J, Chua CK, Mironov V (2021) Application of Machine Learning in 3D Bioprinting: Focus on Development of Big Data and Digital Twin. Int J Bioprint 7:342 Antikainen H, Driscoll M, Haspel G, Dobrowolski R (2017) TOR-mediated regulation of metabolism in aging. Aging Cell 16:1219–1233 Antipov G, Baccouche M, Dugelay J-L (2017) Face aging with conditional generative adversarial networks. In: 2017 IEEE International Conference on Image Processing (ICIP). unknown, pp 2089–2093 Ashland (2022b) https://www.ashland.com/industries/personal-and-home-care/skin-and-sun-care/ santalwood-biofunctional. Accessed 3 Nov 2022 BASF (2022a) In: PeptAIdeTM 4.0—A new naturally derived active ingredient from BASF that protects skin and hair against silent inflammation. https://www.basf.com/global/en/media/newsreleases/2020/10/p-20-334.html. Accessed 3 Nov 2022a Batool N, Chellappa R (2014) Detection and inpainting of facial wrinkles using texture orientation fields and Markov random field modeling. 
IEEE Trans Image Process 23:3773–3788 Batool N, Chellappa R (2015) Fast detection of facial wrinkles based on Gabor features using image morphology and geometric constraints. Pattern Recogn 48:642–658 Bazin R, Flament F, Qiu H (2017) Skin Aging Atlas: Volume 5, Photo-aging Face & Body Becker J, Mahlke NS, Reckert A et al (2020) Age estimation based on different molecular clocks in several tissues and a multivariate approach: an explorative study. Int J Legal Med 134:721–733 Birch HL (2018) Extracellular Matrix and Ageing. Subcell Biochem 90:169–190 Bobrov E, Georgievskaya A, Kiselev K et al (2018) PhotoAgeClock: deep learning algorithms for development of non-invasive visual biomarkers of aging. Aging 10:3249–3259 Bocklandt S, Lin W, Sehl ME et al (2011) Epigenetic predictor of age. PLoS ONE 6:e14821 Boroni M, Zonari A, Reis de Oliveira C et al (2020) Highly accurate skin-specific methylome analysis algorithm as a platform to screen and validate therapeutics for healthy aging. Clin Epigenetics 12:105 Boxberger M, Cenizo V, Cassir N, La Scola B (2021) Challenges in exploring and manipulating the human skin microbiome. Microbiome 9:125 Carrieri AP, Haiminen N, Maudsley-Barton S et al (2021) Explainable AI reveals changes in skin microbiome composition linked to phenotypic differences. Sci Rep 11:4565 Carstensen LL, Isaacowitz DM, Charles ST (1999) Taking time seriously. A theory of socioemotional selectivity. Am Psychol 54:165–181 Castilho RM, Squarize CH, Chodosh LA et al (2009) mTOR mediates Wnt-induced epidermal stem cell exhaustion and aging. Cell Stem Cell 5:279–289 Chang W-J, Chen L-B, Chen M-C et al (2020) ScalpEye: A Deep Learning-Based Scalp Hair Inspection and Diagnosis System for Scalp Health. IEEE Access 8:134826–134837 Chartier C, Watt A, Lin O, et al (2022) BreastGAN: Artificial Intelligence-Enabled Breast Augmentation Simulation. Aesthet Surg J Open Forum 4:ojab052 Chen Y, Geng A, Zhang W et al (2020) Fight to the bitter end: DNA repair and aging. 
Ageing Res Rev 64:101154


Chinski H, Lerch R, Tournour D et al (2022) An Artificial Intelligence Tool for Image Simulation in Rhinoplasty. Facial Plast Surg 38:201–206 Cho C, Lee E, Park G et al (2022) Evaluation of facial skin age based on biophysical properties in vivo. J Cosmet Dermatol 21:3546–3554 Chong LH, Ching T, Farm HJ et al (2022) Integration of a microfluidic multicellular coculture array with machine learning analysis to predict adverse cutaneous drug reactions. Lab Chip 22:1890–1904 Christensen K, Thinggaard M, McGue M et al (2009) Perceived age as clinically useful biomarker of ageing: cohort study. BMJ 339:b5262 Codella N, Nguyen Q-B, Pankanti S, et al (2016) Deep Learning Ensembles for Melanoma Recognition in Dermoscopy Images. arXiv [cs.CV] Crystal DT, Cuccolo NG, Ibrahim AMS et al (2020) Photographic and Video Deepfakes Have Arrived: How Machine Learning May Influence Plastic Surgery. Plast Reconstr Surg 145:1079– 1086 Cula GO, Bargo PR, Nkengne A, Kollias N (2013) Assessing facial wrinkles: automatic detection and quantification. Skin Res Technol 19:e243–e251 Dehon H, Brédart S (2001) An “Other-Race” Effect in Age Estimation from Faces. Perception 30:1107–1113 Deng J, Guo J, Ververas E, et al (2020) RetinaFace: Single-Shot Multi-Level Face Localisation in the Wild. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Dhariwal P, Nichol A (2021) Diffusion models beat GANs on image synthesis. arXiv [cs.LG] Dimitriu PA, Iker B, Malik K, et al (2019) New Insights into the Intrinsic and Extrinsic Factors That Shape the Human Skin Microbiome. MBio 10.: https://doi.org/10.1128/mBio.00839-19 Doersch C, Gupta A, Efros AA (2015) Unsupervised Visual Representation Learning by Context Prediction. 2015 IEEE International Conference on Computer Vision (ICCV) Dosovitskiy A, Beyer L, Kolesnikov A, et al (2020) An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 
arXiv [cs.CV] Edgar R, Domrachev M, Lash AE (2002) Gene Expression Omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res 30:207–210 Escalera S, Gonzàlez J, Baró X, Guyon I (2015) ChaLearn looking at people 2015 new competitions: Age estimation and cultural event recognition. In: 2015 International Joint Conference on Neural Networks (IJCNN). unknown, pp 1–8 Fan H, Xie Q, Zhang Z et al (2021) Chronological Age Prediction: Developmental Evaluation of DNA Methylation-Based Machine Learning Models. Front Bioeng Biotechnol 9:819991 Fedintsev A, Moskalev A (2020) Stochastic non-enzymatic modification of long-lived macromolecules - A missing hallmark of aging. Ageing Res Rev 62:101097 Fink B, Liebner K, Müller AK et al (2018) Hair color and skin color together influence perceptions of age, health, and attractiveness in lightly-pigmented, young women. Int J Cosmet Sci 40(3):303– 312 Flament F, Jacquet L, Ye C et al (2022) Artificial Intelligence analysis of over half a million European and Chinese women reveals striking differences in the facial skin ageing process. J Eur Acad Dermatol Venereol 36:1136–1142 Flood KS, Houston NA, Savage KT, Kimball AB (2019) Genetic basis for skin youthfulness. Clin Dermatol 37:312–319 Fogel BS, Greenberg DB (2015) Psychiatric Care of the Medical Patient. Oxford University Press, USA Franco AC, Aveleira C, Cavadas C (2022) Skin senescence: mechanisms and impact on whole-body aging. Trends Mol Med 28:97–109 Freire-Aradas A, Girón-Santamaría L, Mosquera-Miguel A et al (2022) A common epigenetic clock from childhood to old age. Forensic Sci Int Genet 60:102743 Galkin F, Mamoshina P, Kochetov K et al (2021) DeepMAge: A Methylation Aging Clock Developed with Deep Learning. Aging Dis 12:1252–1262


Gallucci A, Znamenskiy D, Pezzotti N, Petkovic M (2020) Hair counting with deep learning. 2020 International Conference on Biomedical Innovations and Applications (BIA) Gao W, Tan J, Hüls A et al (2017) Genetic variants associated with skin aging in the Chinese Han population. J Dermatol Sci 86:21–29 Georgievskaya A (2022) Artificial Intelligence Confirming Treatment Success: The Role of Genderand Age-Specific Scales in Performance Evaluation. Plast Reconstr Surg 150:34S-40S Georgievskaya A, Tlyachev T, Krutmann J, et al (2020) 14086 A new multimodal age prediction image analysis method from hands images of different age groups by neural network model. J Am Acad Dermatol 83:AB18 Ghosh HS, McBurney M, Robbins PD (2010) SIRT1 negatively regulates the mammalian target of rapamycin. PLoS ONE 5:e9199 Gilaberte Y, Prieto-Torres L, Pastushenko I, Juarranz Á (2016) Anatomy and Function of the Skin. Nanoscience in Dermatology 1–14 Gladyshev VN (2016) Aging: progressive decline in fitness due to the rising deleteriome adjusted by genetic, environmental, and stochastic processes. Aging Cell 15:594–602 Gunn DA, de Craen AJM, Dick JL et al (2013) Facial appearance reflects human familial longevity and cardiovascular disease risk in healthy individuals. J Gerontol A Biol Sci Med Sci 68:145–152 Han SS, Park GH, Lim W et al (2018) Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PLoS ONE 13:e0191493 Hannum G, Guinney J, Zhao L et al (2013) Genome-wide methylation profiles reveal quantitative views of human aging rates. Mol Cell 49:359–367 Ho J, Salimans T (2022) Classifier-Free Diffusion Guidance. arXiv [cs.LG] Ho J, Jain A, Abbeel P (2020) Denoising diffusion probabilistic models. 
Adv Neural Inf Process Syst Holzscheck N, Söhle J, Kristof B et al (2020) Multi-omics network analysis reveals distinct stages in the human aging progression in epidermal tissue. Aging 12:12393–12409 Holzscheck N, Falckenhayn C, Söhle J et al (2021) Modeling transcriptomic age using knowledgeprimed artificial neural networks. NPJ Aging Mech Dis 7:15 Horvath S (2013) DNA methylation age of human tissues and cell types. Genome Biol 14:R115 Horvath S, Raj K (2018) DNA methylation-based biomarkers and the epigenetic clock theory of ageing. Nat Rev Genet 19:371–384 Hosseinzadeh Kassani S, Hosseinzadeh Kassani P (2019) A comparative study of deep learning architectures on melanoma detection. Tissue Cell 58:76–83 Hosseini MS, Razavi Z, Ehsani AH et al (2021) Clinical significance of non-invasive skin autofluorescence measurement in patients with diabetes: a systematic review and meta-analysis. EClinicalMedicine 42:101194 Jacobs LC, Hamer MA, Gunn DA et al (2015) A Genome-Wide Association Study Identifies the Skin Color Genes IRF4, MC1R, ASIP, and BNC2 Influencing Facial Pigmented Spots. J Invest Dermatol 135:1735–1742 Jones BC, Little AC, Penton-Voak IS, et al (2001) Facial symmetry and judgements of apparent health: Support for a “good genes” explanation of the attractiveness–symmetry relationship, 22(6), pp.417–429. Evolution and human behavior Kärkkäinen K, Joo J (2021) FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp 1547–1557 Kartynnik Y, Ablavatski A, Grishchenko I, Grundmann M (2019) Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs. arXiv [cs.CV] Kennedy K, Cal R, Casey R et al (2020) The anti-ageing effects of a natural peptide discovered by artificial intelligence. Int J Cosmet Sci 42:388–398


Khan SM, Liu X, Nath S et al (2021) A global review of publicly available datasets for ophthalmological imaging: barriers to access, usability, and generalisability. Lancet Digit Health 3:e51–e66 Kim ES, Hagan KA, Grodstein F et al (2017) Optimism and Cause-Specific Mortality: A Prospective Cohort Study. Am J Epidemiol 185:21–29 Kim S, Yoon H, Lee J, Yoo S (2022) Semi-automatic Labeling and Training Strategy for Deep Learning-based Facial Wrinkle Detection. 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS) Kinn PM, Holdren GO, Westermeyer BA et al (2015) Age-dependent variation in cytokines, chemokines, and biologic analytes rinsed from the surface of healthy human skin. Sci Rep 5:10472 Kitagami S, Yamada Y, Nagai M (2010) An own-age bias and an own-gender bias in face recognition. The Proceedings of the Annual Convention of the Japanese Psychological Association 74:3EV051–3EV051 Kollias N, Stamatas GN (2002) Optical non-invasive approaches to diagnosis of skin diseases. J Investig Dermatol Symp Proc 7:64–75 Koníˇcková D, Menšíková K, Tuˇcková L et al (2022) Biomarkers of neurodegenerative diseases: biology, taxonomy, clinical relevance, and current research status. Biomedicines 10:1760 Kordzadeh N, Ghasemaghaei M (2022) Algorithmic bias: review, synthesis, and future research directions. Eur J Inf Syst 31:388–409 Kosmadaki MG, Gilchrest BA (2004) The role of telomeres in skin aging/photoaging. Micron 35:155–159 Krutmann J, Bouloc A, Sore G et al (2017) The skin aging exposome. J Dermatol Sci 85:152–161 Krutmann J, Schikowski T, Morita A, Berneburg M (2021) Environmentally-Induced (Extrinsic) Skin Aging: Exposomal Factors and Underlying Mechanisms. J Invest Dermatol 141:1096–1103 Lambros V (2020) Facial Aging: A 54-Year, Three-Dimensional Population Study. 
Plast Reconstr Surg 145:921–928 Le Clerc S, Taing L, Ezzedine K et al (2013) A Genome-Wide Association Study in Caucasian Women Points Out a Putative Role of the STXBP5L Gene in Facial Photoaging. J Invest Dermatol 133:929–935 Lee LO, James P, Zevon ES et al (2019) Optimism is associated with exceptional longevity in 2 epidemiologic cohorts of men and women. Proc Natl Acad Sci U S A 116:18357–18362 Lee S, Lee JW, Choe SJ et al (2020) Clinically Applicable Deep Learning Framework for Measurement of the Extent of Hair Loss in Patients With Alopecia Areata. JAMA Dermatol 156:1018–1020 Lee SG, Shin JG, Kim Y et al (2022) Identification of Genetic Loci Associated with Facial Wrinkles in a Large Korean Population. J Invest Dermatol 142:2824–2827 Lee J-E, Oh J, Song D, et al (2021) Acetylated Resveratrol and Oxyresveratrol Suppress UVBInduced MMP-1 Expression in Human Dermal Fibroblasts. Antioxidants (Basel) 10.: https:// doi.org/10.3390/antiox10081252 Lee A-Y (2021) Skin Pigmentation Abnormalities and Their Possible Relationship with Skin Aging. Int J Mol Sci 22.: https://doi.org/10.3390/ijms22073727 Levy JJ, Titus AJ, Petersen CL et al (2020) MethylNet: an automated and modular deep learning approach for DNA methylation analysis. BMC Bioinformatics 21:108 Li J, Kim SG, Blenis J (2014) Rapamycin: one drug, many effects. Cell Metab 19:373–379 Li Z, Bai X, Peng T et al (2020) New Insights Into the Skin Microbial Communities and Skin Aging. Front Microbiol 11:565549 Li Y, Shen L (2018) Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network. Sensors 18.: https://doi.org/10.3390/s18020556 Li X, Li W, Xu Y (2018) Human Age Prediction Based on DNA Methylation Using a Gradient Boosting Regressor. Genes 9.: https://doi.org/10.3390/genes9090424 Little AC, Jones BC, DeBruine LM (2011) Facial attractiveness: evolutionary based research. Philosophical Transactions of the Royal Society b: Biological Sciences 366:1638–1659


Liu Y, Jain A, Eng C et al (2020) A deep learning system for differential diagnosis of skin diseases. Nat Med 26:900–908 Löckenhoff CE, Carstensen LL (2004) Socioemotional selectivity theory, aging, and health: the increasingly delicate balance between regulating emotions and making tough choices. J Pers 72:1395–1424 López-Otín C, Blasco MA, Partridge L et al (2013) The Hallmarks of Aging. Cell 153:1194–1217 Lu Y, Yang J, Xiao K et al (2021) Skin coloration is a culturally-specific cue for attractiveness, healthiness, and youthfulness in observers of Chinese and western European descent. PLoS ONE 16:e0259276 Ma J, Liu M, Wang Y et al (2020) Quantitative proteomics analysis of young and elderly skin with DIA mass spectrometry reveals new skin aging-related proteins. Aging 12:13529–13554 Morikawa M, Derynck R, Miyazono K (2016) TGF-β and the TGF-β Family: Context-Dependent Roles in Cell and Tissue Physiology. Cold Spring Harb Perspect Biol 8.: https://doi.org/10.1101/ cshperspect.a021873 Nagasawa K, Yamamoto S, Arai W, et al (2022) Fabrication of a Human Skin Mockup with a Multilayered Concentration Map of Pigment Components Using a UV Printer. J Imaging Sci Technol 8.: https://doi.org/10.3390/jimaging8030073 Narayanan DL, Saladi RN, Fox JL (2010) Ultraviolet radiation and skin cancer. Int J Dermatol 49:978–986 Ng C-C, Yap MH, Costen N, Li B (2015b) Wrinkle Detection Using Hessian Line Tracking. IEEE Access 3:1079–1088 Ng C-C, Yap MH, Costen N, Li B (2015a) Automatic Wrinkle Detection Using Hybrid Hessian Filter. Computer Vision -- ACCV 2014 609–622 Ng C-C, Yap MH, Costen N, Li B (2015c) Will Wrinkle Estimate the Face Age? 2015c IEEE International Conference on Systems, Man, and Cybernetics Ni J, Hong H, Zhang Y et al (2021) Development of a non-invasive method for skin cholesterol detection: preclinical assessment in atherosclerosis screening. 
Biomed Eng Online 20:52 Nichol A, Dhariwal P, Ramesh A, et al (2021) GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. arXiv [cs.CV] Nie C, Li Y, Li R et al (2022) Distinct biological ages of organs and systems identified from a multi-omics study. Cell Rep 38:110459 Nijhawan R, Verma R, Ayushi, Mittal A (2017) An Integrated Deep Learning Framework Approach for Nail Disease Identification. In: 2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). unknown, pp 197–202 Niu G, Sun L, Liu Q et al (2020) Selfie-Posting and Young Adult Women’s Restrained Eating: The Role of Commentary on Appearance and Self-Objectification. Sex Roles 82:232–240 O’Neill CA, Monteleone G, McLaughlin JT, Paus R (2016) The gut-skin axis in health and disease: A paradigm with therapeutic implications. BioEssays 38:1167–1176 Oh Kim J, Park B, Yoon Choi J et al (2021) Identification of the Underlying Genetic Factors of Skin Aging in a Korean Population Study. J Cosmet Sci 72:63–80 Olejnik A, Semba JA, Kulpa A et al (2022) 3D Bioprinting in Skin Related Research: Recent Achievements and Application Perspectives. ACS Synth Biol 11:26–38 Pathak D, Krahenbuhl P, Donahue J, et al (2016) Context Encoders: Feature Learning by Inpainting. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Pei W, Dibeklioglu H, Baltrusaitis T, Tax DMJ (2019) Attended End-to-end Architecture for Age Estimation from Facial Expression Videos. IEEE Trans Image Process. https://doi.org/10.1109/ TIP.2019.2948288 Petitjean A, Sainthillier J-M, Mac-Mary S et al (2007) Skin radiance: how to quantify? Validation of an optical method. Skin Res Technol 13:2–8 Phillips PJ, Moon H, Rizvi SA, Rauss PJ (2000) The FERET evaluation methodology for facerecognition algorithms. IEEE Trans Pattern Anal Mach Intell 22:1090–1104 Pilkington SM, Bulfone-Paus S, Griffiths CEM, Watson REB (2021) Inflammaging and the Skin. 
J Invest Dermatol 141:1087–1095

10 Artificial Intelligence Approaches for Skin Anti-aging and Skin …

213

Pun FW, Leung GHD, Leung HW et al (2022) Hallmarks of aging-based dual-purpose disease and age-associated targets predicted using PandaOmics AI-powered discovery engine. Aging 14:2475–2506 Quan T, Shao Y, He T et al (2010) Reduced expression of connective tissue growth factor (CTGF/ CCN2) mediates collagen loss in chronologically aged human skin. J Invest Dermatol 130:415– 424 Que-Salinas U, Martinez-Peon D, Reyes-Figueroa AD, et al (2022) On the Prediction of In Vitro Arginine Glycation of Short Peptides Using Artificial Neural Networks. Sensors 22.: https:// doi.org/10.3390/s22145237 Rahrovan S, Fanian F, Mehryan P et al (2018) Male versus female skin: what dermatologists and cosmeticians should know. Int J Women’s Dermatol 4:122–130 Rawlings AV (2006) Ethnic skin types: are there differences in skin structure and function? Int J Cosmet Sci 28:79–93 Rinnerthaler M, Bischof J, Streubel MK et al (2015) Oxidative stress in aging human skin. Biomolecules 5:545–589 Rothe R, Timofte R, Van Gool L (2015) DEX: Deep EXpectation of Apparent Age from a Single Image. In: 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). pp 252–257 Rutledge J, Oh H, Wyss-Coray T (2022) Measuring biological age using omics data. Nat Rev Genet. https://doi.org/10.1038/s41576-022-00511-7 Sacha JP, Caterino TL, Fisher BK et al (2021) Development and qualification of a machine learning algorithm for automated hair counting. Int J Cosmet Sci 43(Suppl 1):S34–S41 Salem I, Ramser A, Isham N, Ghannoum MA (2018) The Gut Microbiome as a Major Regulator of the Gut-Skin Axis. Front Microbiol 9:1459 Sarcu D, Adamson P (2017) Psychology of the Facelift Patient. Facial Plast Surg 33:252–259 Shen Y, Stanislauskas M, Li G et al (2017) Epigenetic and genetic dissections of UV-induced global gene dysregulation in skin cells through multi-omics analyses. 
Sci Rep 7:42646 Sherlock M, Wagstaff DL (2019) Exploring the relationship between frequency of Instagram use, exposure to idealized images, and psychological well-being in women. Psychol Pop Media Cult 8:482–490 Shim J, Lim JM, Park SG (2019) Machine learning for the prediction of sunscreen sun protection factor and protection grade of UVA. Exp Dermatol 28:872–874 Shin J, Lee Y, Li Z, et al (2022) Optimized 3D Bioprinting Technology Based on Machine Learning: A Review of Recent Trends and Advances. Micromachines (Basel) 13.: https://doi.org/10.3390/ mi13030363 Shyh-Chang N, Daley GQ, Cantley LC (2013) Stem cell metabolism in tissue development and aging. Development 140:2535–2547 Singh G, Haneef N, Uday A (2005) Nail changes and disorders among the elderly. Indian J Dermatol Venereol Leprol 71:386 Tang X, Wang Z, Luo W, Gao S (2018) Face Aging with Identity-Preserved Conditional Generative Adversarial Networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Tanikawa C, Yamashiro T (2021) Development of novel artificial intelligence systems to predict facial morphology after orthognathic surgery and orthodontic treatment in Japanese patients. Sci Rep 11:15853 Trüeb RM, Tobin D (2010) Aging Hair. Springer Science & Business Media Turner AE, Abu-Ghname A, Davis MJ et al (2020) Role of Simulation and Artificial Intelligence in Plastic Surgery Training. Plast Reconstr Surg 146:390e–391e Uotinen V, Rantanen T, Suutama T (2005) Perceived age as a predictor of old age mortality: a 13-year prospective study. Age Ageing 34:368–372 Vaiserman A, Krasnienkov D (2020) Telomere Length as a Marker of Biological Age: State-of-theArt, Open Issues, and Future Perspectives. Front Genet 11:630186

214

A. Georgievskaya et al.

Van Neste D, Tobin DJ (2004) Hair cycle and hair pigmentation: dynamic interactions and changes associated with aging. Micron 35:193–200 Vashi NA, de Castro Maymone MB, Kundu RV (2016) Aging differences in ethnic skin. J Clin Aesthet Dermatol 9:31–38 Vijayakumar KA, Cho G-W (2022) Pan-tissue methylation aging clock: Recalibrated and a method to analyze and interpret the selected features. Mech Ageing Dev 204:111676 Wang H, Sanchez V, Li C-T (2022) Improving Face-Based Age Estimation With Attention-Based Dynamic Patch Fusion. IEEE Trans Image Process 31:1084–1096 Westendorp RGJ, van Heemst D, Rozing MP et al (2009) Nonagenarian siblings and their offspring display lower risk of mortality and morbidity than sporadic nonagenarians: The Leiden Longevity Study. J Am Geriatr Soc 57:1634–1637 Windhager S, Mitteroecker P, Rupi´c I et al (2019) Facial aging trajectories: A common shape pattern in male and female faces is disrupted after menopause. Am J Phys Anthropol 169:678–688 Wood E, Baltrusaitis T, Hewitt C, et al (2022) 3D face reconstruction with dense landmarks Xiao P, Chen D (2022) The Effect of Sun Tan Lotion on Skin by Using Skin TEWL and Skin Water Content Measurements. Sensors 22.: https://doi.org/10.3390/s22093595 Xu Y, Li X, Yang Y et al (2019) Human age prediction based on DNA methylation of non-blood tissues. Comput Methods Programs Biomed 171:11–18 Yeh S-J, Lin J-F, Chen B-S (2021) Multiple-Molecule Drug Design Based on Systems Biology Approaches and Deep Neural Network to Mitigate Human Skin Aging. Molecules 26.: https:// doi.org/10.3390/molecules26113178 Zaguia A, Pandey D, Painuly S et al (2022) DNA Methylation Biomarkers-Based Human Age Prediction Using Machine Learning. Comput Intell Neurosci 2022:8393498 Zeng J, Ma X, Zhou K (2018) CAAE : Improved CAAE for Age Progression/Regression. IEEE Access 6:66715–66722 Zhai X, Oliver A, Kolesnikov A, Beyer L (2019) S4L: Self-Supervised Semi-Supervised Learning. 
2019 IEEE/CVF International Conference on Computer Vision (ICCV) Zhang Z, Song Y, Qi H (2017) Age Progression/Regression by Conditional Adversarial Autoencoder. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Zhavoronkov A, Bischof E, Lee K-F (2021) Artificial intelligence in longevity medicine. Nature Aging 1:5–7 Zhu H, Huang Z, Shan H, Zhang J (2020) Look Globally, Age Locally: Face Aging With an Attention Mechanism. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) Zortea M, Schopf TR, Thon K et al (2014) Performance of a dermoscopy-based computer vision system for the diagnosis of pigmented skin lesions compared with visual evaluation by experienced dermatologists. Artif Intell Med 60:13–26 Zouboulis CC, Bechara FG, Dickinson-Blok JL et al (2019) Hidradenitis suppurativa/acne inversa: a practical framework for treatment optimization—systematic review and recommendations from the HS ALLIANCE working group. J Eur Acad Dermatol Venereol 33:19–31

Part II

Perspectives and Challenges in Machine Learning Research of Aging and Longevity

Chapter 11

AI in Genomics and Epigenomics

Veniamin Fishman, Maria Sindeeva, Nikolay Chekanov, Tatiana Shashkova, Nikita Ivanisenko, and Olga Kardymon

Abstract Genetics is an important factor determining predisposition to the development of many diseases and influencing a person's overall life expectancy. We have already discussed the genetic basis of polygenic diseases; in this chapter we focus on monogenic diseases. There are currently two main challenges in the genetic diagnosis of monogenic diseases. The first is the detection of an individual's genomic variants using high-throughput sequencing data. The second is the interpretation of the detected variants, i.e. understanding their functional and clinical significance. Both tasks require the analysis of large datasets, and significant advances in this area have recently been achieved using ML methods. We begin this chapter with a brief introduction to current techniques for sequencing individual genomes. Here we highlight the limitations of current methods and discuss AI-based approaches that increase the specificity and sensitivity of sequencing data analysis. We separately discuss the challenges associated with detecting different types of genomic variants: single nucleotide variants, large copy-number variations affecting thousands or millions of base pairs, and balanced chromosomal rearrangements such as inversions and translocations. Following the review of current tools for genomic variant detection, we describe state-of-the-art methods of data interpretation. We first focus on the mechanisms underlying the pathogenic effects of protein-coding variants and on AI-based predictive tools that score the pathogenic effects of amino acid substitutions. Next, we briefly explain the complexity of epigenetic regulation and show how modern AI-based approaches allow the interpretation of genetic variants in noncoding regions of the human genome. Finally, we highlight new trends focused on the integration of genomic data with clinical descriptions of patient phenotypes. In the last section, we describe genetic and epigenetic changes associated with aging and show how scoring these changes with AI-based tools enables the development of epigenetic aging clocks and the determination of individual age-associated risks.

Keywords Genomic variants · NGS · Medical genetics · Epigenetics · Chromatin · Methylation

V. Fishman · M. Sindeeva · N. Chekanov · T. Shashkova · N. Ivanisenko · O. Kardymon (B)
Artificial Intelligence Research Institute (AIRI), Moscow, Russia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_11


11.1 AI to Diagnose Monogenic Diseases

11.1.1 AI Helps to Call Genomic Variants from Massive Parallel Sequencing Data

Each person carries about 20–30 thousand rare variants in the genome. In this chapter, we define rare genetic variants as variants with a population frequency of less than 0.5%, although it should be noted that this is a rather arbitrary threshold. A small number of variants may be ultra-rare, carried by only one or a few people on Earth: for example, the 3–4 de novo mutations that arise in the germ cells of the parents or during early embryogenesis. In the case of polygenic traits (which include most common human diseases), individual variants usually do not have a sufficient effect size to produce clinically significant changes in the phenotype. However, in the case of monogenic diseases, the discovery of a single rare or ultra-rare genomic variant may be sufficient to explain the pathological phenotype. To search for such variants, genome and exome sequencing approaches are traditionally used, generating large amounts of data. In the following sections, we describe methods of genome sequencing and the tools used to search for somatic and germ-line genomic variants in sequencing data.
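As a toy illustration of the frequency threshold above, the sketch below filters annotated variants by population allele frequency. The variant record layout and the `pop_af` field are assumptions for illustration; in practice such frequencies come from population databases such as gnomAD.

```python
def rare_variants(variants, af_threshold=0.005):
    """Keep only 'rare' variants under the chapter's (arbitrary) 0.5%
    population-frequency threshold. Each variant is a dict with a
    hypothetical 'pop_af' annotation; a missing frequency is treated
    as 0 (never observed in the population, i.e. possibly ultra-rare)."""
    return [v for v in variants if v.get("pop_af", 0.0) < af_threshold]
```
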

11.1.1.1 Detection of Germ-Line Single Nucleotide Variants and INDELs

A standard practice for sequencing large quantities of genomic material is so-called shotgun sequencing. In this method, the nucleic acids in the analyzed sample are randomly sheared (for example, with ultrasound) into chunks small enough to be read individually by a sequencing device. These reads, each spanning one to two hundred nucleotides, are typically produced in the millions (or, in the case of human whole-genome sequencing, in the billions). Multiplied by the read length, these numbers usually far exceed the size of the original genomic material, commonly by 50–100 times. Because of the sequencer's inner technical workings, each read nucleotide might have been assigned erroneously; the probability of such errors is usually estimated by the sequencer itself and reported alongside the read nucleotides as their quality. Other sources of error include the laboratory operations required to prepare a sample for sequencing, such as DNA amplification. Reading a genomic region over and over again can therefore produce just enough coverage to compensate for these errors.

A naive approach to variant calling searches for consistently encountered mismatches (relative to a reference genome) in multiple reads aligned over the same genomic position. For a diploid genome, there are two expected ratios for these mismatches: 50% of the coverage (a heterozygote) and 100% of the coverage (a homozygote). Yet in a real sequenced sample one will rarely encounter these ideal ratios, owing to the randomness of shotgun sequencing, introduced errors, and uneven mappability (effective coverage may be lower in areas of the genome enriched in repeats and mutations, as reads are harder to align there). A good example here is VarScan2 (Koboldt et al. 2012), which applies a Fisher's exact test to the read counts supporting each allele, compared against the distribution expected from errors alone. As an alternative, one can choose a Bayesian method such as the one implemented in samtools/bcftools (Li 2011), which computes genotype likelihoods from the observed reads and their respective base qualities.

The simplest variant calling algorithms work surprisingly well for single-nucleotide variants in easier genomic contexts, but they have trouble capturing indels and clusters of multiple closely spaced variants. The freebayes algorithm (Garrison and Marth 2012) generalizes Bayesian methods to infer haplotypes and phase nearby variants against each other. Variant phasing provides an important piece of information in the context of rare diseases, as it becomes possible to tell whether two heterozygous variants are in cis (forming a haplotype on the same chromosome copy) or in trans (forming a compound heterozygous state, with each variant altering a different chromosome copy and thus triggering a recessive disease just as a homozygous mutation would). Recent iterations of GATK's HaplotypeCaller (Van der Auwera and O'Connor 2020) also employ haplotype phasing through local reassembly of reads with a De Bruijn graph and PairHMM allele likelihood estimation. These algorithms greatly enhance the quality of alignments and therefore yield more precise calls for complex variants. Another two important stages in the GATK pipeline are BQSR and VQSR: base and variant quality score recalibration, respectively.
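As an illustration of the allele-count approaches above, the following sketch computes naive diploid genotype likelihoods from observed bases and their Phred qualities. It is a simplified stand-in for the bcftools model, not its actual implementation; in particular, spreading the error probability uniformly over the three other bases (e/3) is an assumption.

```python
import math

def genotype_likelihoods(bases, quals, ref="A", alt="G"):
    """Naive diploid genotype log10-likelihoods at one position.

    bases: observed bases from reads covering the position.
    quals: matching Phred base qualities.
    Returns log10 likelihoods for ref/ref ('0/0'), ref/alt ('0/1'),
    and alt/alt ('1/1'); the argmax is the called genotype.
    """
    ll = {"0/0": 0.0, "0/1": 0.0, "1/1": 0.0}
    for b, q in zip(bases, quals):
        e = 10 ** (-q / 10)                    # per-base error probability
        p_ref = (1 - e) if b == ref else e / 3  # P(observed base | true = ref)
        p_alt = (1 - e) if b == alt else e / 3  # P(observed base | true = alt)
        ll["0/0"] += math.log10(p_ref)
        ll["0/1"] += math.log10(0.5 * p_ref + 0.5 * p_alt)
        ll["1/1"] += math.log10(p_alt)
    return ll
```

With ten "A" and ten "G" reads at Q30, the heterozygous genotype dominates, mirroring the 50% mismatch ratio discussed above.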
BQSR builds a model of covariation from the observed reads and a set of prior known variants (using several metrics: a base's quality, its position in the read, whether it matches a known variant, etc.), then adjusts the base qualities so that downstream tools like HaplotypeCaller produce even more accurate likelihood estimates. Conversely, VQSR is performed after variant calling. It too uses a prior known set of "true" variants (such as common polymorphisms) and considers a multitude of variant annotations and statistics, such as allele coverages and qualities, a strand bias test, etc., to build a Gaussian mixture model able to assign a score to each variant. This score is then used as a single final filtering threshold for variants. GATK's approaches work best in large-scale analyses such as cohort genotyping studies, or in continuous clinical practice settings where many samples are sequenced according to the same protocols on the same sequencers.

It is also possible to use neural networks for the variant calling task. DeepVariant (Poplin et al. 2018) produces genotype likelihoods with a trained convolutional neural network. It uses an Inception v3 image recognition architecture adapted to classify pileup images of genomic regions. A pileup image here is a tensor encoding the reads covering the probed variant, with six channels: read base, base quality, mapping quality, strand of alignment, allele support status, and base-reference difference status. The network's task is remarkably similar to what scientists actually do when they want to investigate a variant manually: instead of calculating basic statistics and probabilities at the immediate variant position, people literally look at the supporting reads and visually assess the validity of the genotype, perhaps somewhat intuitively. DeepVariant turned out to be highly accurate: multiple benchmarks found its precision on par with GATK's highly specialized design. It performs even better when applied to data from non-Illumina sequencers such as PacBio and Oxford Nanopore, which exhibit significantly different error profiles. AI tools are also important for decoding the raw signals of PacBio and Oxford Nanopore devices into sequences of nucleotide bases, a procedure called basecalling, which will be discussed later.

A common critique of DeepVariant is that it uses a general-purpose image recognition engine under the hood. While providing enough precision with the right training, its resource consumption is far from optimal. The developers of Clairvoyante (Luo et al. 2019) created a specialized architecture with 13 times fewer parameters. Its input tensors do not encode reads at all, but instead consider four differently computed nucleotide counts (coverage of each allele, insertions, deletions, and alternative alleles) in the vicinity of the candidate variant (16 base pairs in each direction plus the position itself). Scaled Exponential Linear Units were used as the activation function to avoid batch normalization and its computational burden, and L2 regularization was applied in all layers of the network. The fourth level of the network also had dropout regularization. The last two levels (fourth and fifth) were fully connected; the fourth produced the alternative alleles' probabilities, and the fifth added zygosity, variant type, and indel length to the result. Clairvoyante achieved a substantial speedup and had a much smaller memory footprint than DeepVariant and other callers. It had two limitations, however. First, multiallelic positions (e.g. where there is no reference allele because both of the sample's alleles are different non-reference nucleotides) were not supported.
More importantly, the maximum indel length was limited to 4 base pairs, which is far from applicable in a clinical environment (for comparison, GATK consistently calls indels of 50 bp and even longer). This was fixed in Clair (Luo et al. 2019) by the same team. Clair replaced the first two layers with bidirectional long short-term memory layers. Its outputs are also much richer, allowing multiallelic calls and longer indels, and are able to cross-validate each other. Clair has 50% more parameters and is 10–20% slower than Clairvoyante, but still significantly outperforms all competitive ML callers.

A recommended practice in rare disease case studies is trio analysis. Instead of sequencing just the affected individual, one can additionally sequence their parents. This makes it possible to utilize inheritance patterns and case-control status, empowering further clinical analysis and at the same time simplifying it by removing a priori non-informative variants. Usually, trio filtering is applied after performing variant calling in all samples independently, which is in fact suboptimal, as some variants might be omitted in some samples by barely missing quality thresholds. An extension of DeepVariant called DeepTrio (Ip et al. 2020) aims to overcome that effect by performing simultaneous variant calling in trios.

One peculiar application for ML in variant calling has nothing to do with diseases of our times, but rather helps trace the evolutionary patterns and population structures of extinct species such as Neanderthals or woolly mammoths. Ancient DNA can be extracted from bones and other unearthed remains and sequenced just like any modern DNA. It will be degraded, though, torn apart and fractured, and will exhibit specific substitutions due to the chemical properties of nucleotides. For example, deamination events near fragment ends lead to post-replicative mutations, as cytosine converts to uracil (which is complementary to adenine instead of guanine) and adenine converts to hypoxanthine (complementary to cytosine instead of thymine). Abnormal distributions of these events are even used as a contamination control measure, telling whether the sample contains modern DNA. Variant calling in this setting also inevitably has to respect them. ARIADNA (Kawash et al. 2018) employs boosted regression tree models that combine traditional metrics (coverage, quality, etc.) with specific ancient DNA features: distance from the read end, C-to-T substitutions, and neighboring mutation rates. Another difficulty lies in constructing training and testing datasets, as there is no ground truth available for ancient genomes. The authors of ARIADNA used potential variant locations common to four woolly mammoth genomes as true positives, while locations occurring in only one genome were deemed false positives. A couple of Neanderthal genomes and simulated ancient DNA data were also used for benchmarking. As expected, GATK failed the benchmarks, erroneously taking ancient DNA-related events for proper mutations. An older non-ML tool, snpAD (Prüfer et al. 2017), developed specifically for ancient DNA genotyping, was found to overcompensate and reported about 30% fewer variants than ARIADNA. The volume of Neanderthal variants found by ARIADNA was also closer to the numbers typically observed in modern humans.
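The ancient-DNA-specific features used by ARIADNA can be illustrated with a toy per-site extractor. The feature names and encoding below are illustrative assumptions, not ARIADNA's actual implementation.

```python
def ancient_dna_features(ref_seq, read_seq, pos, read_len):
    """Toy feature vector for one mismatch site, in the spirit of the
    ancient-DNA features described above: distance of the site from the
    nearest read end, plus flags for the two deamination-driven
    substitution signatures (C>T, and G>A on the opposite strand)."""
    ref, obs = ref_seq[pos], read_seq[pos]
    return {
        "end_dist": min(pos, read_len - 1 - pos),   # deamination clusters at read ends
        "C>T": int(ref == "C" and obs == "T"),      # cytosine -> uracil, read as T
        "G>A": int(ref == "G" and obs == "A"),      # same event seen on the other strand
    }
```

A boosted tree model would then combine such features with conventional metrics like coverage and base quality to separate damage artifacts from true variants.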

11.1.1.2 Detection of Single Nucleotide Variants and INDELs in Cancer Samples

One specific application of genetic variant callers is the detection of mutations in cancer samples. This is a more difficult task due to the heterogeneity of tumors. Tumor cells accumulate mutations very quickly, but, as in the case of hereditary diseases, most mutations do not affect the course of the disease and do not give the cell any advantage. Therefore, most mutations are present in only a small proportion of tumor cells. However, as soon as an anticancer treatment is applied, this variety of mutations triggers Darwinian selection. As a result, rare carriers of mutations that are harmful from the patient's point of view, but beneficial from the point of view of protecting the cell from therapeutic agents, very quickly crowd out all other cells, and the tumor becomes resistant to the drug. Therefore, in order to fight cancer successfully, it is important to evaluate the entire spectrum of mutations present in tumor cells. To this end, artificial intelligence-based methods are being developed that increase sensitivity in the detection of mutations and identify rare events characteristic of a small proportion of tumor cells.

The first algorithms developed for this purpose were based on "classical" approaches of mathematical statistics. For example, one of the best known, MuTect (Cibulskis et al. 2013), as well as the SomaticSniper (Larson et al. 2012) and JointSNVMix2 (Roth et al. 2012) tools, uses Bayesian statistics to distinguish mutations present in a tumor sample from artifacts. VarScan2, already mentioned above, and the similar tool VarDict use Fisher's exact test as the statistical criterion. Despite the claimed high accuracy of these methods, each of them is applicable only in certain specific cases. For example, MuTect imposes strict filters on variants found simultaneously in both tumor and control tissue samples. Although for many samples this approach reduces the number of false positives related to germline variants, it is unacceptable in the analysis of oncological samples that contain an admixture of surrounding tissues, which is typical, for example, for liquid cancers.

To create a universal algorithm for detecting mutations in cancer samples, machine learning methods are actively used. For example, the SomaticSeq method (Fang et al. 2015) generates a set of candidate mutations using five different algorithms based on "classical" statistics. This approach increases the sensitivity of the method and makes it suitable for a variety of tumor types. To remove false positives from the resulting set, SomaticSeq aggregates 72 different characteristics of the candidate variants using a stochastic boosting machine learning algorithm. Subsequent development of this approach made it possible to abandon "classical" statistical models for the search for candidate mutations entirely and move on to direct analysis of sequencing data using deep learning algorithms (Sahraeian et al. 2019). Other ML algorithms, such as SNooPer (Spinella et al. 2016) or DeepVariant (Sahraeian et al. 2019), are built on similar principles.

The search for mutations in cancer cells has made it possible to find the mutational drivers of oncological diseases. Many of these driver mutations present in tumor cells have been known for a long time and serve as "markers" of a particular type of cancer. In some cases, the detection of a driver mutation makes it possible to choose a targeted therapy that is effective against this particular tumor.
For example, the FDA has approved the use of the drugs gefitinib, erlotinib, afatinib, osimertinib, and dacomitinib when mutations are detected in the epidermal growth factor receptor (EGFR) gene in lung cancer. The list of known mutations that are actionable (that is, affecting treatment tactics in any way) or druggable (that is, allowing the selection of a specific pharmacological drug) is quite broad; it includes variants in the genes EGFR/ErbB1, HER2/ErbB2, c-Met, RET, ALK, ROS1, NTRK, c-Kit, PDGFR, FGFR1, FGFR2, FGFR3, RAS, BRAF, MEK, mTOR, AKT, PTEN, PIK3CA, CDK4, IDH1, IDH2, BRCA1/2, ATM, and ERα (Danesi et al. 2021).
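The tumor-versus-normal comparison behind VarScan2 and VarDict, mentioned earlier in this section, can be sketched as a one-sided Fisher's exact test on allele counts. This is a simplified illustration, not either tool's actual code.

```python
from math import comb

def fisher_right_tail(ref_n, alt_n, ref_t, alt_t):
    """One-sided Fisher's exact test on a 2x2 table of allele counts:
    is the alternate-allele fraction in the tumor sample significantly
    higher than in the matched normal? Computes the right tail of the
    hypergeometric distribution directly with math.comb."""
    n = ref_n + alt_n + ref_t + alt_t   # total reads
    row_t = ref_t + alt_t               # tumor reads
    col_alt = alt_n + alt_t             # alt-supporting reads overall
    denom = comb(n, row_t)
    # P(X >= alt_t) for X ~ Hypergeometric(n, col_alt, row_t)
    p = 0.0
    for k in range(alt_t, min(col_alt, row_t) + 1):
        p += comb(col_alt, k) * comb(n - col_alt, row_t - k) / denom
    return p
```

A tumor with 15/50 alt reads against a clean 0/50 normal yields a very small p-value, flagging a candidate somatic mutation, while equal alt fractions in both samples do not.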

11.1.1.3 Detection of Imbalanced Structural Variants

It should be noted that pathogenic mutations are not limited to point events, i.e. substitutions, deletions, or insertions of one or a few nucleotides. A large number of genomic variants are associated with major structural rearrangements involving tens, hundreds, thousands, and sometimes even millions of nucleotides. In oncological diseases, many cases can be cited where the driver mutations are gene fusions, deletions, or amplifications. One well-known example is the fusion of two genes, BCR and ABL. The BCR-ABL fusion is found in most patients with chronic myelogenous leukemia, and in some patients with acute lymphoblastic leukemia or acute myelogenous leukemia (Westbrook et al. 1992). Each of these genes encodes a protein that regulates cell division, and the structure of these proteins is fine-tuned to prevent excessive division. However, the product formed by the fusion turns out to be overactive and stimulates cells toward excessive division and malignant transformation. Importantly, the BCR-ABL translocation is treatable: a complete cure is possible in more than 80% of cases. A discussion of methods for developing pharmacological agents against the BCR-ABL fusion product and similar mutated proteins is beyond the scope of this chapter; we note, however, that these methods often involve AI-based approaches, and we refer the reader to other chapters of this book for details.

The example of the BCR-ABL gene fusion shows that structural rearrangements, like point mutations, can give rise to pathogenic genetic variants. However, detecting structural variants in the human genome is much more difficult than detecting point mutations. The difficulty in detecting chromosomal rearrangements stems from the peculiarities of the sequencing technologies applied. The most widely used technology platform today is Illumina, and, as mentioned above, this technology "reads" short sequences of about 100 nucleotides. Illumina sequencing requires that the long human genomic DNA (about 3 billion nucleotides) be fragmented into many short pieces before sequencing. In order to read through the site of a chromosomal rearrangement with an Illumina sequencer, it is necessary to find, among the millions of short fragments, those that lie near the breakpoints and contain sequences on both breakpoint flanks long enough to ensure read alignment to the genome. Given the random nature of DNA fragmentation, this is a fairly unlikely event, requiring hundreds of millions of molecules to be read. In addition, the junctions of rearranged chromosome segments are often located within repeated sequences.
Such sequences cannot be unambiguously mapped using short Illumina reads, so in practice a direct search for chimeric DNA fragments in Illumina sequencing data is rarely used and, even when used, can serve only as an auxiliary method. To make the search for structural rearrangements more efficient, alternative approaches to sequencing and data analysis are applied. For example, unbalanced chromosomal rearrangements (deletions, duplications, and amplifications) lead to a change in the number of DNA fragments within the rearranged region. The copy number of a genomic region can be estimated from its genomic coverage, i.e. by counting the number of reads aligned to that region. However, genomic coverage depends on more than the number of DNA fragments. The efficiency of amplification, which is used in almost all sample preparation methods, other enzymatic processes (for example, ligation of DNA adapters before sequencing), and the accuracy and efficiency of mapping (which, in turn, depends on the uniqueness of genomic fragments) strongly bias the representation of different genomic regions. The use of targeted enrichment, for example in exome sequencing, leads to an even greater bias in the representation of different DNA fragments. This problem is partially solved by comparing the genomic coverage of loci in the studied sample with a control prepared in the same way. However, even under the same sample preparation protocol, samples differ in the efficiency of targeted enrichment, the number of PCR cycles, and other parameters that cannot be completely unified in the experiment; therefore, to search for unbalanced chromosomal rearrangements, it is necessary to remove the sample-specific features of the coverage.

V. Fishman et al.

the experiment; therefore, to search for unbalanced chromosomal rearrangements, the sample-specific features of the coverage must be removed. Artificial intelligence methods are often used for the coverage normalization needed to detect chromosomal rearrangements. For example, in the XHMM (Fromer et al. 2012) and CONIFER (Krumm et al. 2012) methods, the variance of genomic coverage associated with the properties of individual samples is first removed from the data. To do this, a matrix is constructed in which the objects are the loci and the features are their coverage in each of the samples. This matrix is transformed by principal component analysis to remove the variance associated with sample preparation. The coverage profile normalized in this way is analyzed using hidden Markov models (HMM) to find loci with different relative copy numbers: zero (homozygous deletions), 0.5 (heterozygous deletions), 1 (normal copy number of a diploid locus), 1.5 (heterozygous duplications), or 2 (homozygous duplications). In general, most copy number variant (CNV) detection algorithms work in two stages: first, the coverage is normalized taking into account sample- and locus-specific features; second, the genome is segmented into regions of different copy numbers. For the XHMM method discussed above, the first stage is implemented using principal component analysis (PCA) and the second using an HMM. The advantage of PCA is that it does not require the researcher to formulate a hypothesis about which artifacts affect locus coverage. On the other hand, PCA captures only linear relationships, while the dependence of locus coverage on several factors, including the main source of coverage bias, GC content, is non-linear (Benjamini and Speed 2012).
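As an illustration, the first, PCA-based normalization stage can be sketched in a few lines. This is a toy example with invented dimensions; real tools such as XHMM operate on exome-wide read-depth matrices and choose the number of removed components by a variance criterion.

```python
import numpy as np

def pca_normalize_coverage(coverage, n_components=1):
    # Center each locus across samples, then zero out the top singular
    # components, which absorb batch effects shared by many loci
    # (the XHMM/CONIFER-style idea, simplified).
    X = coverage - coverage.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    S = S.copy()
    S[:n_components] = 0.0          # drop the dominant, artifactual variance
    return U @ np.diag(S) @ Vt

rng = np.random.default_rng(0)
signal = rng.normal(size=(100, 10))                       # residual coverage variation
bias = np.outer(rng.normal(size=100), rng.normal(size=10)) * 3.0  # rank-1 batch artifact
observed = signal + bias                                  # loci x samples coverage matrix
normalized = pca_normalize_coverage(observed, n_components=1)
```

After removing the top component, the segmentation stage (an HMM in XHMM) operates on `normalized` rather than on the raw coverage.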
One way to normalize such factors is to study the distribution of biases and select special functions that approximate the observed dependencies. Following this approach, GC content is normalized using non-linear functions in the CODEX (Jiang et al. 2015) and CODEX2 (Jiang et al. 2018) algorithms. However, given the effectiveness of advanced machine learning methods in approximating various non-linear relationships, it is not surprising that a number of tools use them to find CNVs. In particular, the MFCNV (Zhao et al. 2020) algorithm uses a small fully connected neural network with three layers to classify the copy number of a genomic locus based on four parameters: (1) the read coverage of the locus; (2) its GC content; (3) read quality characteristics for each nucleotide of the locus (provided by the Illumina instrument); and (4) the similarity between the coverage of the studied locus and its neighbors (this parameter is motivated by the fact that the coverage of neighboring loci should differ at a CNV boundary). The dudeML algorithm (Hill and Unckless 2019) analyzes the coverage of the locus, the coverage variance in the vicinity of the locus, and the number of chimeric reads using a random forest classifier. The CNV-RF algorithm (Onsongo et al. 2016) works in a similar way. Some tools use ML algorithms to increase the sensitivity and specificity of results obtained with other algorithms. For example, CN-Learn (Pounraja et al. 2019) uses the features of the locus (GC content, mappability, etc.) together with the output of four CNV detection tools (CANOES (Backenroth et al. 2014), CODEX (Jiang et al. 2015), XHMM (Fromer et al. 2012), and CLAMMS (Packer et al. 2016)) as input

11 AI in Genomics and Epigenomics


parameters to a random forest classifier, and the DeepCNV algorithm (Glessner et al. 2021) uses deep learning-based approaches to filter out false positives from the output of the PennCNV (K. Wang et al. 2007) algorithm. Despite the progress of tools based on ensemble learning and deep neural networks, the most commonly used algorithms today still combine PCA coverage normalization with subsequent HMM-based segmentation of the genome into regions of different copy numbers. Such widely used tools as Excavator (Magi et al. 2013), ExomeDepth (Plagnol et al. 2012), DECoN (Fowler et al. 2016), and many others rely on HMMs for genomic segmentation. Finally, we note that no algorithm developed to date provides sufficiently high precision and recall, so the current recommendation is to use several algorithms together (Gabrielaite et al. 2021).
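To give a sense of how small such per-locus classifiers can be, the following is a minimal numpy sketch of a three-layer fully connected network in the spirit of MFCNV. The weights here are random placeholders and the class layout is invented for illustration; the real tool learns its parameters from labelled loci.

```python
import numpy as np

def mlp_forward(x, params):
    # Forward pass of a small three-layer fully connected network:
    # two ReLU hidden layers followed by a softmax over copy-number classes.
    h1 = np.maximum(0, x @ params["W1"] + params["b1"])
    h2 = np.maximum(0, h1 @ params["W2"] + params["b2"])
    logits = h2 @ params["W3"] + params["b3"]
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
# Four input features per locus, as in MFCNV: read coverage, GC content,
# a base-quality summary, and similarity to neighbouring loci.
params = {
    "W1": rng.normal(size=(4, 16)),  "b1": np.zeros(16),
    "W2": rng.normal(size=(16, 16)), "b2": np.zeros(16),
    "W3": rng.normal(size=(16, 3)),  "b3": np.zeros(3),  # e.g. loss / normal / gain
}
probs = mlp_forward(rng.normal(size=(5, 4)), params)  # 5 loci -> class probabilities
```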

11.1.1.4 Detection of Inversions and Balanced Translocations

Detection of balanced chromosomal rearrangements, such as inversions and translocations, is even more difficult than the search for CNVs. Such rearrangements cannot be detected using coverage information, since coverage does not change in this case. The only source of information in Illumina data is reads spanning translocation breakpoints, which, as mentioned above, are in some cases unavailable for mapping. Several non-standard methods of sample preparation, such as mate-pair (Redin et al. 2017) or Hi-C (Mozheiko and Fishman 2019; Fishman et al. 2018) libraries, can partially solve this problem. The Hi-C protocol is especially promising for resolving complex chromosomal rearrangements, as it is extremely sensitive and provides high resolution when the optimized protocol is used (Gridina et al. 2021). Despite some progress in balanced translocation detection achieved using Illumina-based sequencing, alternative sequencing techniques, such as the Oxford Nanopore and PacBio technologies, have recently gained attention as promising methods for structural variant analysis. Both methods make it possible to read long DNA fragments, from thousands to millions of base pairs, thus largely solving the problem of repeated sequences. Both technologies use artificial intelligence methods to analyze sequencing data. The essence of the Oxford Nanopore (ONT) method is that denatured DNA is pulled by a motor protein through a pore whose diameter is only slightly larger than the size of individual nucleotides. A current of ions constantly flows through the pore embedded in the membrane. Threading DNA through the pore hinders the ion current to some extent, and the change in the current depends on the nucleotide sequence of the threaded fragment. At each moment in time there are five nucleotides of DNA inside the pore, so each of the 1024 possible nucleotide combinations leads to a specific change in the ion current.
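The figure of 1024 follows directly from the four-letter DNA alphabet:

```python
from itertools import product

# Five bases sit in the pore at once, so the basecaller must
# distinguish 4**5 distinct sequence contexts.
pentamers = ["".join(p) for p in product("ACGT", repeat=5)]
print(len(pentamers))  # 1024
```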


Due to the small size and availability of the devices (the Oxford Nanopore MinION sequencer, unlike the huge Illumina instruments, is not much larger than a conventional USB stick), the ease of preparing genomic libraries, the ability to sequence both DNA and RNA, and, of course, the read length, this technology is attracting more and more attention. However, an important limitation of the Oxford Nanopore method remains its low sequencing accuracy, about 1% errors per nucleotide (an order of magnitude higher than in the Illumina technology). To decode the current profile into a nucleotide sequence, ONT uses the Guppy neural network. The use of a GPU allows this tool to significantly speed up the conversion of the primary current signal into a nucleotide sequence; the performance gain is so large that several years ago ONT dropped support for Albacore, the CPU-based counterpart of Guppy. Moreover, ONT developers provide tools for training or retraining alternative neural network models using a more complex architecture or a specific set of training data (https://github.com/nanoporetech/sloika). As was recently shown, the use of such models makes it possible to increase the accuracy of nucleotide sequence determination from ~99.3 to 99.9% (Wick, Judd, and Holt 2019). PacBio is an alternative technology for reading long DNA fragments. It is based on the zero-mode waveguide (ZMW), a special device that allows light to be focused in a very small (zeptoliter-scale) volume. The sequencing reaction involves a DNA template and polymerase, as well as nucleotides, each carrying a specific fluorescent dye. The light is focused in such a way that only the area where the DNA polymerase is fixed falls into the detection zone. The nucleotide attached by the DNA polymerase enters the detection area, which makes it possible to read the fluorescence signal and thus determine the complementary nucleotide of the DNA template.
After that, the dye-linker-pyrophosphate product is cleaved from the nucleotide and diffuses out of the ZMW, ending the fluorescence pulse. One of the main limitations of PacBio, as of the Oxford Nanopore technology, is the transformation of the device signal (in this case, changes in the fluorescence level) into a DNA sequence. To address this problem, PacBio uses circular consensus sequencing: sequenced DNA molecules are ligated into rings that are processed by DNA polymerase. Moving along the ring, the polymerase sequences each template molecule several times. The resulting data are averaged, i.e., a consensus sequence is built for each circular DNA molecule, which makes it possible to compensate for errors in individual rounds of reading the ring. Consensus building is an important step in the analysis of sequencing data that significantly affects the accuracy of the data obtained. Until recently, this step was performed using HMMs (Chin et al. 2013), but PacBio, in collaboration with developers from Google, recently released a preprint (Baid et al. 2021) describing the GATE neural network based on transformer layers. The use of a neural network to process the signals detected by the sequencer has significantly improved the accuracy of PacBio sequencing: the yield of reads with an error rate of less than 0.1% increased by 27% due to the use of the GATE neural network.
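The idea of consensus building can be illustrated with a toy per-position majority vote; the actual PacBio callers use HMMs or transformer networks rather than simple voting, but the principle of averaging independent errors across passes is the same.

```python
from collections import Counter

def consensus(reads):
    # Per-position majority vote across repeated passes of the same
    # circular template (all passes assumed already aligned and equal length).
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

# Three noisy passes of one molecule; errors are independent,
# so the vote recovers the true sequence at every position.
passes = ["ACGTTACA", "ACGTAACA", "ACCTTACA"]
result = consensus(passes)
```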


Thus, machine learning-based methods are actively used in advanced sequencing technologies, providing more accurate detection of single nucleotide variants, deletions, duplications, and, importantly, balanced chromosomal rearrangements, which are invisible to conventional Illumina technology.

11.1.1.5 Detecting Genetic Mutations Using Morphological Features

Typically, DNA sequencing allows detection of pathogenic mutations, prediction of the molecular phenotype, and selection of targeted therapy. However, recent advances in computer vision have turned the diagnostic procedure the other way around, making it possible to infer pathogenic mutations directly from the morphology of samples. Studies have applied computer vision to the simultaneous classification of a tumor and the detection of gene mutations from images of histological sections. A convolutional neural network (CNN) model predicts, from the slide image, the presence of mutations in the most frequently mutated genes (STK11, EGFR, FAT1, SETBP1, KRAS, and TP53) in adenocarcinoma with AUCs from 0.733 to 0.856, as measured on a held-out population (Coudray et al. 2018). Similar results were obtained for images of histological sections of hepatocellular carcinoma: mutations in the most frequently mutated genes CTNNB1, FMN2, TP53, and ZFX4 can be predicted with external AUCs from 0.71 to 0.89 (M. Chen et al. 2020). In the case of breast cancer, a CNN allows the identification of key prognostic molecular markers such as estrogen receptor (ER), progesterone receptor (PR), and Her2 status with AUCs of 0.89 (ER), 0.81 (PR), and 0.79 (Her2) (Rawat et al. 2020). Detection of mutations based on morphological features is not limited to oncology. Germline mutations can be predicted using the recently developed Face2Gene application (https://www.face2gene.com/). Many congenital diseases cause specific abnormalities of facial features. Based on this observation, the Face2Gene model was trained on a set of known disease-causing mutations paired with corresponding photographs of the affected children. This allowed building a classification network that discriminates disease-causing genes, with sensitivity exceeding 80–90% for some genes (Latorre-Pellicer et al. 2020).
Although morphology-based prediction is still far from substituting the classical approach of variant detection by sequencing, it can be used to prioritize sequencing targets, suggest the clinical significance of detected mutations, and bring the attention of clinicians to particular genes or syndromes.

11.1.2 AI for Clinical Interpretation of Genomic Variants

As was noted above, tens of thousands of rare variants can be found in the genome of each person. Even if they can be identified, it is extremely difficult to determine which of these variants are associated with pathology. In this section, we will discuss the tools that are used in the clinical interpretation of genomic variants. We will separately consider the problem of interpreting variants in protein-coding sequences


and in non-coding regions of the genome, since the biological mechanisms that explain the manifestations of mutations differ significantly for these cases.

11.1.2.1 Interpretation of Variants in Protein-Coding Genes

At first glance, the interpretation of genetic variants in protein-coding sequences may seem like a trivial task. The genetic code, deciphered more than half a century ago, makes it easy to determine how a mutation affects the amino acid composition of a protein. In some cases, for example nonsense or synonymous substitutions, it is also easy to predict how the function of the protein will change as a result of the change in its amino acid sequence. However, for most genomic variants the consequences are not so obvious. An amino acid substitution can be either a neutral or a pathogenic event, depending on which domain of the protein is changed and how the change affects the structure of the protein or protein complex. According to the ClinVar database, the functional role of more than 50% of known genetic variants in the development of pathology is currently unknown (Landrum et al. 2016). In this regard, a special role in the interpretation of mutations in coding regions is played by structural approaches, which assess the effect of nonsynonymous substitutions on the three-dimensional packing of the protein. In some cases, such approaches make it possible to propose a hypothesis about the functional role of a variant at the molecular level. Moreover, knowledge of the structure and of the molecular mechanisms underlying the pathogenicity of mutations may allow the development of targeted therapies that compensate for the effect of these mutations. Today, there are a number of examples where the pathogenic effect of a mutation that destabilizes a protein and leads to its incorrect functioning could be compensated by low-molecular-weight compounds (Chiti and Kelly 2022). One such example is the KRAS gene, mutations in which are common in a wide range of cancers.
The structure-based design of compounds targeting the mutant form KRAS G12C has revealed a number of compounds capable of fixing the mutant protein in an inactive conformation and inducing tumor regression (Janes et al. 2018). The destabilizing effect of the non-synonymous Y220C substitution in the p53 sequence, associated with the development of many oncological diseases, can be compensated by small-molecule compounds that bind to the binding site formed as a result of the Y220C mutation (Bauer et al. 2020). Many pathogenic mutations are located at the interfaces of protein-protein interactions, thereby deregulating the signaling pathways in which these proteins are involved. Structural information allows efficient interpretation of the functional significance of many such substitutions (Wang et al. 2012; Xiong et al. 2022). Despite the large number of experimentally determined three-dimensional structures of human proteins, the three-dimensional packing of many protein domains and protein complexes remains undeciphered, which greatly complicates the task of predicting the functional effects caused by changes in amino acid composition for many clinically significant mutations.


The development of approaches to predicting protein structures using artificial intelligence methods makes it possible to close this gap. DeepMind's AlphaFold2 software has shown high accuracy in the Critical Assessment of Protein Structure Prediction (CASP) challenge, greatly exceeding the accuracy of other methods and achieving a prediction accuracy comparable to experimental 3D structure determination (Jumper et al. 2021). This method includes a number of innovations in deep machine learning and is trained on the experimentally determined structures and sequences of proteins accumulated over the past 50 years in the Protein Data Bank. AlphaFold relies on both primary protein sequence analysis and multiple sequence alignment analysis to identify evolutionarily coupled amino acid residues. The idea of this approach is that if two residues are close in space and interact with each other, then their substitutions during evolution can be correlated in order to preserve the structure or function of the protein (de Juan et al. 2013). AlphaFold both borrows methods used in natural language processing and implements a number of rotation-invariant functions, which together allow the network to efficiently learn the relationship between sequence and three-dimensional protein packing. Moreover, DeepMind collaborated with EMBL-EBI to predict structural models of the entire human proteome and launched a global initiative to predict structures for the entire known protein space (Varadi et al. 2022). There is no doubt that AlphaFold and similar machine learning-based tools will lead to tremendous advances in the interpretation of genomic variants in protein-coding regions. However, today they have a number of limitations.
Knowledge of the three-dimensional packing of a protein does not always make it possible to unambiguously interpret the functional role of substitutions and, as a rule, requires additional tools, including tools for assessing the effect of substitutions on protein stability (Marabotti et al. 2021). Large multidomain proteins, protein complexes, and highly mobile protein fragments are currently predicted with limited accuracy. High-precision prediction of binding sites for small molecules, cofactors, metals, or RNA/DNA also has yet to be realized in AI-based approaches. Since the problem of accurately predicting the effect of mutations on protein structure and the corresponding protein interactions is still not completely solved, alternative approaches to studying the functional significance of amino acid substitutions are also being developed. In particular, by analyzing the evolution of proteins, one can evaluate how certain mutations affect the viability of an organism. It can be expected that substitutions in evolutionarily conserved regions of a protein will, on average, have a greater effect on its function than substitutions in non-conserved regions. However, in each specific case the effect of a substitution will be determined by the degree of conservation of the region, the flanking amino acid sequence, and the physical properties of the altered amino acids. Therefore, the interpretation of data on evolutionary conservation is a non-trivial task, for which machine learning approaches have been actively used recently. For example, the 3Cnet (Won et al. 2021) algorithm receives as input alignments of amino acid sequences from different species, on the basis of which it predicts the functional significance


of amino acid substitutions. The PrimateAI (Sundaram et al. 2018) algorithm works in a similar way, aggregating data on the alignment of amino acid sequences of orthologous proteins across species, amino acid substitutions frequently observed in primate populations, and the physicochemical properties of amino acids. All these data are fed to a convolutional neural network, which predicts a pathogenicity score for the observed amino acid substitution. Many other "aggregators" are built on a similar principle, simultaneously analyzing the properties of amino acids, their evolutionary conservation, and the genomic characteristics of the region under study in order to predict the consequences of nonsynonymous substitutions in a protein (Rentzsch et al. 2019; Ioannidis et al. 2016; Shihab et al. 2013; Adzhubei et al. 2010; Kumar et al. 2009). These algorithms use a wide range of machine learning methods, from linear regression to deep neural networks.
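A crude version of such a conservation signal can be computed directly from a multiple sequence alignment, for example as one minus the normalized per-column Shannon entropy. This is only a toy stand-in for the much richer evolutionary features used by tools like 3Cnet or PrimateAI.

```python
import math
from collections import Counter

def column_conservation(alignment):
    # Score each alignment column as 1 - normalized Shannon entropy:
    # 1.0 for a fully conserved column, approaching 0 for a uniform one.
    scores = []
    for col in zip(*alignment):
        counts = Counter(col)
        n = len(col)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        if len(counts) > 1:
            scores.append(1.0 - h / math.log2(len(counts)))
        else:
            scores.append(1.0)  # single residue observed: fully conserved
    return scores

# Toy alignment: position 0 is invariant, position 2 is the most variable.
msa = ["MKV", "MRV", "MKL", "MKI"]
scores = column_conservation(msa)
```

A substitution at a high-scoring position would, under this heuristic, be flagged as more likely deleterious than one at a low-scoring position.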

11.1.2.2 Interpretation of Non-coding Variants

The task of interpreting variants in non-coding sequences differs significantly from interpreting mutations in protein-coding regions. Even the question of whether a particular sequence can code for a protein already requires complex research, often using artificial intelligence methods. For example, it was recently shown that the 5'-untranslated regions of protein-coding genes may contain alternative open reading frames, disruption of which influences translation of the main reading frame and thus causes pathologies. Interestingly, in some cases the initiating codon for such frames is not the canonical ATG but one of its modifications, such as GAG, in combination with a Kozak sequence. Thus, interpretation of genomic variants in the sequence upstream of the main reading frame requires annotation of translation initiation sites. For this task, modern machine learning language models such as DNA-BERT have recently been successfully used, which made it possible to construct a map of alternative 5'-upstream reading frames for the human genome (Sindeeva et al. 2022). Another example of clinically relevant non-coding mutations is mutations in splice sites, the sequences determining the boundaries between the protein-coding fragments of the gene body (exons) and the non-coding intronic sequences. A splice-site mutation may cause intron retention in the coding sequence, which changes the amino acid composition of the protein. As with Kozak sequences, splice sites do not have an unambiguous sequence motif (the GU and AG dinucleotides located at the ends of the intron are a necessary but not sufficient condition for the formation of a splice site). For a long time, the search for mutations that affect splicing was hampered by the lack of an algorithm for predicting splice sites, and testing hypotheses about splicing disorders required laborious experiments.
However, in 2019 a group of developers from Illumina published an article in the journal Cell describing the SpliceAI neural network (Jaganathan et al. 2019). Trained on the sequences of splice sites experimentally found in the human genome, the SpliceAI convolutional neural network is able to accurately predict the formation of new, or the disruption of existing, splice sites as a result of mutations. A feature of the SpliceAI architecture is the use


of dilated convolutions. Dilated convolutions allow the network to aggregate information from a large section of the analyzed sequence: in the case of SpliceAI, the prediction of a splice site is based on the analysis of several thousand base pairs. This is critical for the analysis of splice sites, whose location depends on the secondary structure of the mRNA, often formed by spatial interactions between regions separated by a large sequence distance. There are also numerous non-coding regulatory elements outside gene bodies that affect gene function. Due to the wide variety of epigenetic mechanisms, predicting the effects of genomic variants in such regions remains one of the most challenging tasks in modern genetics. For regulatory sequences there is no unambiguous "code", as in the case of the amino acid coding system, on the basis of which one could interpret changes in nucleotide composition. The effect of nucleotide substitutions on the function of regulatory elements such as expression enhancers depends on the set of transcription factors that bind to them. Transcription factors, in turn, can have a variety of DNA-binding domains, and even for proteins with the same DNA-binding domain, the DNA-binding profile can vary significantly due to the action of cofactors. In addition, many transcription factors can interact with each other and with other components of the epigenetic regulation system: histone modifications, DNA methylation, three-dimensional chromatin folding, etc. There are some tools that can score the effects of genomic variants on one specific epigenetic mechanism; for example, AI-based algorithms recently developed by us allow prediction of changes in genome architecture caused by structural variants (Belokopytova et al. 2019; Belokopytova and Fishman 2020). However, these tools do not capture the whole complexity of epigenetic regulation in humans.
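Returning to the dilated convolutions mentioned above for SpliceAI: the receptive field of a stack of 1D convolutions grows with the sum of the dilation rates, so a doubling dilation schedule yields exponential context growth with depth. A small sketch of this arithmetic (illustrative only; SpliceAI's actual architecture also uses residual blocks):

```python
def receptive_field(kernel_size, dilations):
    # Each dilated layer widens the receptive field by (k - 1) * dilation.
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Eight layers with kernel 3 and doubling dilations already see
# hundreds of bases; SpliceAI-scale stacks reach thousands.
context = receptive_field(3, [1, 2, 4, 8, 16, 32, 64, 128])  # 511
```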
A more general approach based on machine learning techniques has recently been proposed to decipher the epigenetic code of non-coding elements. For this, the large database of genome-wide profiles of gene expression and epigenetic modifications accumulated by the ENCODE consortium was used. For each of the genome-wide profiles, the authors trained a neural network capable of predicting the experimentally measured data (including gene expression) from the DNA sequence. By feeding altered DNA sequences into the model, it is possible to predict changes in the expression profile or in transcription factor binding. The architecture of the neural networks used in such models has improved significantly in recent years: whereas the first studies used relatively simple convolutional neural networks (the ExPecto (Zhou et al. 2018) and Basenji (Kelley et al. 2018) algorithms), recent work uses numerous transformer layers to analyze a wide context (tens of thousands of base pairs) around the locus of interest (the Enformer (Avsec et al. 2021) algorithm). One of the shortcomings of the above epigenetic models is the lack of cell specificity. Since epigenetic mechanisms can differ significantly between cell types, a separate model must be trained for each of them, and this requires experimental data describing the target epigenetic characteristic. Therefore, with this approach it is impossible to obtain a prediction for a cell type in which the target characteristic has not been experimentally measured. Recently, a solution to this problem was proposed in the framework of the DeepCT algorithm (Sindeeva et al. 2022), in


which many different epigenetic properties measured for hundreds of human cell types are used to train one integral model. This approach, called transfer learning, has also proven itself in training AI algorithms outside the biological sciences, for example in building language models. Due to transfer learning, DeepCT, first, finds correlations between different epigenetic characteristics of cells; second, it captures the relationship between the nucleotide composition of loci and their epigenetic properties; and, third, it clusters cell types similar in their properties. Based on the learned patterns, the DeepCT model is able to reconstruct unmeasured epigenetic profiles from available ones, as well as predict the effects of genomic variants for any human cell type. AI-based approaches to non-coding variant interpretation can be employed to analyze causal relationships in GWAS data. As was mentioned in Chap. 9, thousands of markers (SNPs) have been associated with aging and healthspan/lifespan using the GWAS approach (Deelen et al. 2019). But this information is not enough to give a biological interpretation of exactly how these markers affect molecular mechanisms (Cano-Gamez and Trynka 2020). It has been shown that trait-associated SNPs are three times more likely than random SNPs to be associated with gene expression. One way of finding such associations between SNPs and expression changes is based on co-expression analysis (Porcu et al. 2019; Momozawa et al. 2018; Zhu et al. 2016). However, this requires simultaneously genotyping and measuring expression in a large collection of samples. The biggest dataset of such eQTL data was collected by the Genotype-Tissue Expression (GTEx) project. It contains information on 54 tissue/cell types, with about 1000 markers tested per gene.
Another initiative, the BLUEPRINT project, measured the transcriptome, together with DNA methylation and histone modifications, in the most abundant cell types of peripheral blood from 197 individuals (Chen et al. 2016). Information about gene expression in immune cells can be found in the dataset provided by the CEDAR project. It contains data on six types of circulating immune cells (CD4+ T lymphocytes, CD8+ T lymphocytes, CD19+ B lymphocytes, CD14+ monocytes, CD15+ granulocytes, and platelets), as well as on tissues obtained by biopsy of the ileum, colon, and rectum. But even together these datasets do not cover all genes and tissues, and this is one of the limitations of such methods. ML-based approaches like Enformer make it possible to infer how a SNP affects the expression of nearby genes, i.e., to find causal GWAS SNPs that influence gene expression first of all and, as a consequence, the investigated traits. These ML-based methods significantly outperform naive approaches in which SNPs are assigned to the nearest gene, which often leads to false positive results. Thus, epigenetic inference using ML techniques improves the interpretation of GWAS data and can be employed in the development of polygenic risk scores.
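The way such sequence-based models score a variant can be sketched as follows: one-hot encode the reference and alternative alleles (the standard input scheme these models consume) and compare the model's predictions. Here a toy G/C-counting function stands in for a trained network; real tools like Enformer predict expression tracks from long sequence windows.

```python
import numpy as np

def one_hot_dna(seq):
    # One-hot encode DNA; ambiguous bases (e.g. N) stay as all-zero rows.
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4))
    for i, b in enumerate(seq.upper()):
        if b in idx:
            out[i, idx[b]] = 1.0
    return out

def variant_effect(model, ref_seq, alt_seq):
    # Score a SNP as the change in predicted signal between alleles;
    # `model` is any callable mapping a one-hot sequence to a scalar.
    return model(one_hot_dna(alt_seq)) - model(one_hot_dna(ref_seq))

# Toy "model": GC content as a stand-in for a trained expression predictor.
toy_model = lambda x: float(x[:, 1].sum() + x[:, 2].sum())
effect = variant_effect(toy_model, "ACGTA", "ACGCA")  # a T>C substitution
```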


11.1.3 Interpretation of Genomic Data and Clinical Description of Patient's Phenotype

It is pertinent to note that clinical interpretation of genetic variants is impossible without information about the patient's phenotype. Typically, the process of molecular diagnosis proceeds as follows. First, the patient's DNA is sequenced and genomic variants are called using one of the aforementioned approaches. Next, the detected variants are filtered based on their clinical significance. Although AI-based methods predicting changes in protein structure or gene expression can help prioritize variants during filtration, it is essential to compare the patient's phenotype with the phenotype previously described in humans with the same gene disrupted. This comparison is typically performed manually, limiting the speed of analysis and making the whole diagnostic process somewhat subjective. An increasing number of clinics around the world maintain medical records in the electronic health record (EHR) format, which has made it possible to start using NLP methods for recognizing and analyzing medical records, phenotyping patients, and predicting the development or presence of diseases (Juhn and Liu 2020; Zeng et al. 2019). By adding an NLP-based EHR model to the genomic data interpretation pipeline, it becomes possible to reduce the time needed to find the cause of a disease from several months to several hours (De La Vega et al. 2021; Linder et al. 2021; Clark et al. 2019).

11.2 Interpretation of Epigenetic Changes in Aging

We devote most of this chapter to discussing the genetic variants that distinguish people from each other. However, from the genetic point of view, a child and an old man, the sick and the healthy, are very similar to each other. We are born and die with the same set of genes, accumulating only a small (compared to the genome size) number of individual mutations over a lifetime. The accumulation of mutations certainly plays an important role in the aging process. However, it is not only about mutations: even without mutations, the same genes can work in completely different ways. For example, each human cell carries exactly the same set of genes (not counting sex cells, immune cells, and some other exceptions), but the cells are completely different from each other. This is because genes work differently in different cells. The science that studies non-inherited changes in how genes work is called epigenetics, and over the past decades this science has led to several important discoveries in the field of health and longevity. In this part of the chapter, we describe which epigenetic changes correlate best with the aging process. The search for such epigenetic markers is an important task, since the aging process itself is a serious risk factor for many diseases: dementia, diabetes, cancer, cardiovascular diseases, etc. (Jaul and Barron 2017).


V. Fishman et al.

Many biological characteristics correlate strongly with chronological age: telomere length (Galkin et al. 2020), DNA methylation level (Salameh et al. 2020), and others. Such biomarkers can be combined into more complex composite biomarkers suitable for early diagnosis of diseases, as well as for predicting the “biological” age of an organism (Salameh et al. 2020). Aging clocks are models that predict age from a set of biomarkers, trained on available datasets of healthy people (or tissues) labeled with chronological age. These models can use biomarkers of various modalities, for example photographs of the corner of the eye (Salameh et al. 2020), MRI images (Cole et al. 2017), a panel of biochemical blood test parameters (Putin et al. 2016), or methylation levels measured at several CpG sites (Horvath 2013; Hannum et al. 2013; Levine et al. 2018). Given a set of biomarkers for a particular person and an aging clock model, it is possible to predict that person’s “biological age”. The difference between the prediction and the actual chronological age is called age acceleration. Below we focus on aging clocks based on genetic and epigenetic biomarkers.

Telomeres (regions of DNA located at the ends of chromosomes) are essential for DNA replication and shorten with each replication cycle due to the phenomenon of terminal under-replication. Because of this, telomere length is one of the factors limiting the maximum number of cell divisions. However, the mechanisms underlying age-related telomere shortening are not well established (Vaiserman and Krasnienkov 2021). Since telomere shortening can also be caused by life-course stress exposures, e.g. inflammation and oxidative stress (Aviv 2008), regarding telomeric aging as a mitotic clock-like process and building an accurate age predictor on it faces serious obstacles, and so far no aging clocks based on telomeric aging have been proposed. There is, however, a link between decreased telomere length and the onset of some age-related diseases, such as cancer (Shammas 2011), Alzheimer’s disease (Liu et al. 2016), and coronary heart disease (Haycock et al. 2014).

DNA methylation patterns (the attachment of a methyl group to cytosine within a CpG dinucleotide) change during aging and may reflect a fundamental mechanism of human aging (Slieker et al. 2016; Fraga et al. 2005). Many epigenetic clocks are based on DNA methylation data at a set of CpG dinucleotide locations. Such models predict age (also called DNA methylation age) in different tissues and at different stages of life (Horvath 2013; Hannum et al. 2013). The biological mechanisms underlying the methylation changes measured by aging clocks are not well defined, but recent GWAS have found associations between epigenetic age acceleration and variants in genes related to metabolism, the immune system, and aging (Lu et al. 2016, 2017).

AI models for predicting DNA methylation age mainly rely on ML methods and were first introduced by Hannum et al. (2013) and Horvath (2013). The Hannum clock employs an elastic net regression model to predict chronological age from whole-blood methylation at 71 CpG sites, together with clinical parameters such as gender and body mass index (BMI). The dataset consisted of n = 656 samples, with methylation recorded as the frequency of methylation of a given
CpG marker across the population of blood cells taken from a single individual. This model achieves an RMSE of 4.9 years on the test set and was trained to predict well only for whole-blood-derived samples. In contrast to the Hannum clock, the Horvath clock (Horvath 2013) estimates DNA methylation age using methylation levels at 353 CpG sites. Unlike many other methylation clocks, the Horvath clock can predict age in most human tissues and cell types. Its elastic net regression model was trained to predict chronological age on a dataset of more than 7000 samples assaying DNA methylation levels in 51 different tissues and cell types. The model achieves a median absolute error of 3.6 years on the test set (averaged over all available tissues) and performs well both in heterogeneous tissues (e.g. whole blood, peripheral blood mononuclear cells, cerebellar samples, occipital cortex, buccal epithelium, colon, adipose, liver, lung, saliva, uterine cervix) and in individual cell types. Moreover, the Horvath clock can measure methylation age even at the developmental stage: fetal tissues, embryonic stem cells, and induced pluripotent stem cells produce a DNA methylation age between −1 and 0 years. Unlike these early models, the DNAm PhenoAge model (Levine et al. 2018) is trained to approximate not chronological age but a surrogate measure of “phenotypic age”. The authors were motivated by the fact that some phenotypic aging measures derived from clinical biomarkers have been shown to be better indicators of remaining life expectancy than chronological age (Levine 2012), suggesting that they may capture individual-level differences in biological aging rates. Thus, nine clinical markers and chronological age are combined into a measure of phenotypic age, which the elastic net regression model is trained to predict using 513 CpG sites.
The dataset consisted of n = 456 samples with methylation levels measured in whole blood cells. Although DNAm PhenoAge was developed using methylation data from whole blood, it correlates strongly with chronological age in a wide variety of tissues and cells (r = 0.71 across all tissues/cells). The GrimAge clock (Lu et al. 2019) is constructed in two stages: first defining surrogate DNAm biomarkers of physiological risk factors and stress factors (including several plasma proteins and a DNAm-based estimator of smoking pack-years), and then constructing a single composite biomarker called DNAm GrimAge. The authors observe that in some instances the DNAm-based surrogate biomarkers (e.g. for smoking pack-years) are better predictors of mortality than the actual observed (self-reported) biomarker. A total of 1030 CpG sites are used in the elastic net regression model, trained on whole-blood DNA methylation levels from n = 2356 samples. The model predicts time-to-death, as opposed to the aforementioned models, which use age (chronological or phenotypic) as the target. In this case, age acceleration (AgeAccelGrim) is defined as the difference between the observed DNAm GrimAge and its expected value. So how can these aging clocks be used and compared? DNA methylation age makes it possible to identify individuals who show substantial deviations from their actual chronological age, and such accelerated biological aging has been associated with mortality (Levine et al. 2018; Lu et al. 2019) and with future onset and mortality across several
types of cancer (Lu et al. 2019; Horvath 2013; Zheng et al. 2016; Levine et al. 2015a, b), cardiovascular diseases (Horvath 2013; Levine et al. 2018), Alzheimer’s disease (Levine et al. 2018), and others. Below is a short overview of the epigenetic clocks’ associations with some of these diseases. In cancer, despite global hypomethylation, CpG islands are hypermethylated. The common occurrence of DNA hypermethylation across all types of cancer makes it a valuable biomarker for early cancer detection (Anglim et al. 2008). It has been shown that age acceleration measured by the DNA methylation-based epigenetic clocks mentioned above is associated with increased cancer risk and shorter cancer survival, after adjusting for major cancer risk factors (Dugué et al. 2018; Levine et al. 2018; Lu et al. 2019). A definitive Alzheimer’s disease (AD) diagnosis is only possible by examining brain tissue after death. However, if available treatments are administered in the early stages, guided by early detection of clinical biomarkers of AD, their effectiveness can be greatly increased (Salameh et al. 2020). Several clinical and epidemiological aspects of AD indicate a role for epigenetic factors in its etiology. In monozygotic twins discordant for AD, significantly reduced levels of DNA methylation were observed in temporal neocortex neuronal nuclei of the affected twin (Mastroeni et al. 2009). Another study suggests that a broad spectrum of epigenetic pathways, including DNA methylation, histone modification, and non-coding RNAs, appears to be aberrant in AD (Wang et al. 2008). It has been shown that Horvath clock age acceleration of the dorsolateral prefrontal cortex correlates with several neuropathological measurements (e.g. diffuse plaques, neuritic plaques, and amyloid load), as well as with decline in global cognitive functioning, episodic memory, and working memory among individuals with AD (Levine et al. 2015a, b).
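As a minimal illustration of the age-acceleration quantity used in these studies, the snippet below computes both the raw difference between predicted epigenetic age and chronological age, and a residual-based variant (deviation from the value expected given chronological age, in the spirit of AgeAccelGrim). All numeric values are made up for illustration; they are not clock outputs from any real cohort.

```python
import numpy as np

# Hypothetical chronological ages and epigenetic-clock predictions
chronological = np.array([34.0, 52.0, 61.0, 70.0])
dnam_age = np.array([31.5, 58.0, 60.2, 78.4])

# Raw age acceleration: predicted minus chronological age
raw_accel = dnam_age - chronological

# Residual-based acceleration: deviation from the linear trend of
# DNAm age on chronological age (observed minus expected value)
slope, intercept = np.polyfit(chronological, dnam_age, 1)
resid_accel = dnam_age - (slope * chronological + intercept)
```

Positive values suggest faster epigenetic aging than peers of the same chronological age; the residual-based form removes any systematic offset of the clock.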
Type 2 diabetes (T2D) is a multifactorial disease with genetic, epigenetic, and environmental factors contributing to its cause. Identifying the pre-diabetic stage and preventing or treating T2D early are highly important, since chronically high glucose levels can damage several of the body’s systems (e.g. causing cardiovascular problems, neuropathy, nephropathy, and retinopathy) (Salameh et al. 2020). DNA methylation-based biomarkers can help expand current screening methods to improve T2D diagnosis. It has been shown that AgeAccelGrim is associated with T2D (Lu et al. 2019). The Horvath clock has also shown association with T2D-related phenotypes, speaking to the ability of DNA methylation to serve as a potential mediator of the relationship between aging and the phenotypes associated with age-related disease, or alternatively as a biomarker (Grant et al. 2017). Cardiovascular diseases (CVD) are a group of disorders of the heart and blood vessels. The exact cause of CVD is not clear, nor is the epigenetics of CVD well studied (de la Rocha et al. 2020), but there are several known risk factors, including age, diabetes, smoking, obesity, and high blood pressure; the impact of these risk factors themselves has been studied extensively. Indeed, CVD has been associated with elevated global DNA methylation levels (Kim et al. 2010). The Horvath, PhenoAge, and GrimAge clocks have all been reported to be associated with CVD (Levine et al. 2018; Lu et al. 2019; Lind et al. 2018). Several studies
suggest that the same association cannot be established for the Hannum clock, which shows either weaker or no association (Lind et al. 2018; Perna et al. 2016). The field of epigenetic clock models is currently dominated by regularized regression models based on DNA methylation (Galkin et al. 2020). Deep learning techniques may reduce this dominance by accurately estimating age from transcriptomic and blood test data (Putin et al. 2016; Mamoshina et al. 2018). For DNA methylation-based models, however, training DNNs with a large number of parameters on all the features measured by modern DNAm screening platforms (e.g. the Illumina HumanMethylationEPIC platform profiles > 850,000 sites) would require datasets of non-existent magnitude; this requirement could be reduced by performing feature selection first (Galkin et al. 2020). One advantage of applying deep learning to this task is the possibility of training multi-purpose DNN models that predict not only biological age but also other biological characteristics (e.g. time-to-disease for various diseases). Another advantage is the ability to create multimodal models, e.g. ones combining DNAm data, blood test data, and/or medical or photographic images as inputs to increase prediction accuracy.

Compliance with ethical standards

Conflict of Interest The authors declare that they have no conflict of interest.
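The elastic net regression approach shared by the Hannum, Horvath, PhenoAge, and GrimAge clocks can be sketched as follows. This is a toy illustration on synthetic data, not a reproduction of any published clock: the site count, sample size, hyperparameters, and noise levels are invented, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_samples, n_sites = 200, 500

# Synthetic cohort: chronological ages and methylation beta-values
age = rng.uniform(20, 80, n_samples)
weights = np.zeros(n_sites)
weights[:20] = rng.uniform(0.003, 0.01, 20)   # 20 age-informative CpG sites
beta = 0.1 + np.outer(age, weights) + rng.normal(0, 0.01, (n_samples, n_sites))
beta = np.clip(beta, 0.0, 1.0)                # methylation fractions in [0, 1]

# Elastic net combines an L1 penalty (sparse site selection)
# with an L2 penalty (stability under correlated sites)
clock = ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=50000)
clock.fit(beta[:150], age[:150])              # train split

pred = clock.predict(beta[150:])              # held-out samples
mae = float(np.median(np.abs(pred - age[150:])))
n_selected = int(np.sum(clock.coef_ != 0))    # CpG sites retained by the clock
```

The L1 component is what drives the small published site counts (71, 353, 513, 1030) out of the hundreds of thousands of profiled CpGs; the L2 component keeps the fit stable when neighboring sites are highly correlated.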

References

Adzhubei IA, Schmidt S, Peshkin L et al (2010) A method and server for predicting damaging missense mutations. Nat Methods 7:248–249. https://doi.org/10.1038/nmeth0410-248
Anglim PP, Alonzo TA, Laird-Offringa IA (2008) DNA methylation-based biomarkers for early detection of non-small cell lung cancer: an update. Mol Cancer 7:81. https://doi.org/10.1186/1476-4598-7-81
Aviv A (2008) The epidemiology of human telomeres: faults and promises. J Gerontol Ser A 63:979–983. https://doi.org/10.1093/gerona/63.9.979
Avsec Ž, Agarwal V, Visentin D et al (2021) Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18:1196–1203. https://doi.org/10.1038/s41592-021-01252-x
Backenroth D, Homsy J, Murillo LR et al (2014) CANOES: detecting rare copy number variants from whole exome sequencing data. Nucleic Acids Res 42:e97. https://doi.org/10.1093/nar/gku345
Baid G, Cook DE, Shafin K et al (2021) DeepConsensus: gap-aware sequence transformers for sequence correction. Preprint, 08(31):458403
Bauer MR, Krämer A, Settanni G et al (2020) Targeting cavity-creating p53 cancer mutations with small-molecule stabilizers: the Y220X paradigm. ACS Chem Biol 15:657–668. https://doi.org/10.1021/acschembio.9b00748
Belokopytova P, Fishman V (2020) Predicting genome architecture: challenges and solutions. Front Genet 11:617202. https://doi.org/10.3389/fgene.2020.617202
Belokopytova PS, Nuriddinov MA, Mozheiko EA et al (2019) Quantitative prediction of enhancer–promoter interactions. Genome Res 30:72–84. https://doi.org/10.1101/gr.249367.119


Benjamini Y, Speed TP (2012) Summarizing and correcting the GC content bias in high-throughput sequencing. Nucleic Acids Res 40:e72. https://doi.org/10.1093/nar/gks001
Cano-Gamez E, Trynka G (2020) From GWAS to function: using functional genomics to identify the mechanisms underlying complex diseases. Front Genet 11
Chen L, Ge B, Casale FP et al (2016) Genetic drivers of epigenetic and transcriptional variation in human immune cells. Cell 167:1398–1414.e24. https://doi.org/10.1016/j.cell.2016.10.026
Chen M, Zhang B, Topatana W et al (2020) Classification and mutation prediction based on histopathology H&E images in liver cancer using deep learning. Npj Precis Oncol 4:1–7. https://doi.org/10.1038/s41698-020-0120-3
Chin C-S, Alexander DH, Marks P et al (2013) Nonhybrid, finished microbial genome assemblies from long-read SMRT sequencing data. Nat Methods 10:563–569. https://doi.org/10.1038/nmeth.2474
Chiti F, Kelly JW (2022) Small molecule protein binding to correct cellular folding or stabilize the native state against misfolding and aggregation. Curr Opin Struct Biol 72:267–278. https://doi.org/10.1016/j.sbi.2021.11.009
Cibulskis K, Lawrence MS, Carter SL et al (2013) Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nat Biotechnol 31:213–219. https://doi.org/10.1038/nbt.2514
Clark MM, Hildreth A, Batalov S et al (2019) Diagnosis of genetic diseases in seriously ill children by rapid whole-genome sequencing and automated phenotyping and interpretation. Sci Transl Med 11:eaat6177. https://doi.org/10.1126/scitranslmed.aat6177
Cole JH, Poudel RPK, Tsagkrasoulis D et al (2017) Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. Neuroimage 163:115–124. https://doi.org/10.1016/j.neuroimage.2017.07.059
Coudray N, Ocampo PS, Sakellaropoulos T et al (2018) Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat Med 24:1559–1567. https://doi.org/10.1038/s41591-018-0177-5
Danesi R, Fogli S, Indraccolo S et al (2021) Druggable targets meet oncogenic drivers: opportunities and limitations of target-based classification of tumors and the role of molecular tumor boards. ESMO Open 6:100040. https://doi.org/10.1016/j.esmoop.2020.100040
de Juan D, Pazos F, Valencia A (2013) Emerging methods in protein co-evolution. Nat Rev Genet 14:249–261. https://doi.org/10.1038/nrg3414
de la Rocha C, Zaina S, Lund G (2020) Is any cardiovascular disease-specific DNA methylation biomarker within reach? Curr Atheroscler Rep 22:62. https://doi.org/10.1007/s11883-020-00875-3
De La Vega FM, Chowdhury S, Moore B et al (2021) Artificial intelligence enables comprehensive genome interpretation and nomination of candidate diagnoses for rare genetic diseases. Genome Med 13:153. https://doi.org/10.1186/s13073-021-00965-0
Deelen J, Evans DS, Arking DE et al (2019) A meta-analysis of genome-wide association studies identifies multiple longevity genes. Nat Commun 10:3669. https://doi.org/10.1038/s41467-019-11558-2
Dugué P-A, Bassett JK, Joo JE et al (2018) DNA methylation-based biological aging and cancer risk and survival: pooled analysis of seven prospective studies. Int J Cancer 142:1611–1619. https://doi.org/10.1002/ijc.31189
Fang LT, Afshar PT, Chhibber A et al (2015) An ensemble approach to accurately detect somatic mutations using SomaticSeq. Genome Biol 16:197. https://doi.org/10.1186/s13059-015-0758-2
Fishman VS, Salnikov PA, Battulin NR (2018) Interpreting chromosomal rearrangements in the context of 3-dimentional genome organization: a practical guide for medical genetics. Biochem Mosc 83:393–401
Fowler A, Mahamdallie S, Ruark E et al (2016) Accurate clinical detection of exon copy number variants in a targeted NGS panel using DECoN. Wellcome Open Res 1:20. https://doi.org/10.12688/wellcomeopenres.10069.1

11 AI in Genomics and Epigenomics


Fraga MF, Ballestar E, Paz MF et al (2005) Epigenetic differences arise during the lifetime of monozygotic twins. Proc Natl Acad Sci 102:10604–10609. https://doi.org/10.1073/pnas.0500398102
Fromer M, Moran JL, Chambert K et al (2012) Discovery and statistical genotyping of copy-number variation from whole-exome sequencing depth. Am J Hum Genet 91:597–607. https://doi.org/10.1016/j.ajhg.2012.08.005
Gabrielaite M, Torp MH, Rasmussen MS et al (2021) A comparison of tools for copy-number variation detection in germline whole exome and whole genome sequencing data. Cancers 13:6283. https://doi.org/10.3390/cancers13246283
Galkin F, Mamoshina P, Aliper A et al (2020) Biohorology and biomarkers of aging: current state-of-the-art, challenges and opportunities. Ageing Res Rev 60:101050. https://doi.org/10.1016/j.arr.2020.101050
Garrison E, Marth G (2012) Haplotype-based variant detection from short-read sequencing. arXiv:1207.3907 [q-bio]
Glessner JT, Hou X, Zhong C et al (2021) DeepCNV: a deep learning approach for authenticating copy number variations. Brief Bioinform 22:bbaa381. https://doi.org/10.1093/bib/bbaa381
Grant CD, Jafari N, Hou L et al (2017) A longitudinal study of DNA methylation as a potential mediator of age-related diabetes risk. GeroScience 39:475–489. https://doi.org/10.1007/s11357-017-0001-z
Gridina M, Mozheiko E, Valeev E et al (2021) A cookbook for DNase Hi-C. Epigenetics Chromatin 14:15. https://doi.org/10.1186/s13072-021-00389-5
Hannum G, Guinney J, Zhao L et al (2013) Genome-wide methylation profiles reveal quantitative views of human aging rates. Mol Cell 49:359–367. https://doi.org/10.1016/j.molcel.2012.10.016
Haycock P, Heydon E, Kaptoge S et al (2014) Leucocyte telomere length and risk of cardiovascular disease: systematic review and meta-analysis. BMJ 349:g4227. https://doi.org/10.1136/bmj.g4227
Hill T, Unckless RL (2019) A deep learning approach for detecting copy number variation in next-generation sequencing data. G3 Genes Genomes Genetics 9:3575–3582. https://doi.org/10.1534/g3.119.400596
Horvath S (2013) DNA methylation age of human tissues and cell types. Genome Biol 14:3156. https://doi.org/10.1186/gb-2013-14-10-r115
Ioannidis NM, Rothstein JH, Pejaver V et al (2016) REVEL: an ensemble method for predicting the pathogenicity of rare missense variants. Am J Hum Genet 99:877–885. https://doi.org/10.1016/j.ajhg.2016.08.016
Ip EKK, Hadinata C, Ho JWK, Giannoulatou E (2020) dv-trio: a family-based variant calling pipeline using DeepVariant. Bioinformatics 36:3549–3551. https://doi.org/10.1093/bioinformatics/btaa116
Jaganathan K, Kyriazopoulou Panagiotopoulou S, McRae JF et al (2019) Predicting splicing from primary sequence with deep learning. Cell 176:535–548.e24. https://doi.org/10.1016/j.cell.2018.12.015
Janes MR, Zhang J, Li L-S et al (2018) Targeting KRAS mutant cancers with a covalent G12C-specific inhibitor. Cell 172:578–589.e17. https://doi.org/10.1016/j.cell.2018.01.006
Jaul E, Barron J (2017) Age-related diseases and clinical and public health implications for the 85 years old and over population. Front Public Health 5:335. https://doi.org/10.3389/fpubh.2017.00335
Jiang Y, Oldridge DA, Diskin SJ, Zhang NR (2015) CODEX: a normalization and copy number variation detection method for whole exome sequencing. Nucleic Acids Res 43:e39. https://doi.org/10.1093/nar/gku1363
Jiang Y, Wang R, Urrutia E et al (2018) CODEX2: full-spectrum copy number variation detection by high-throughput DNA sequencing. Genome Biol 19:202. https://doi.org/10.1186/s13059-018-1578-y


Juhn Y, Liu H (2020) Artificial intelligence approaches using natural language processing to advance EHR-based clinical research. J Allergy Clin Immunol 145:463–469. https://doi.org/10.1016/j.jaci.2019.12.897
Jumper J, Evans R, Pritzel A et al (2021) Highly accurate protein structure prediction with AlphaFold. Nature 596:583–589. https://doi.org/10.1038/s41586-021-03819-2
Kawash JK, Smith SD, Karaiskos S, Grigoriev A (2018) ARIADNA: machine learning method for ancient DNA variant discovery. DNA Res 25:619–627. https://doi.org/10.1093/dnares/dsy029
Kelley DR, Reshef YA, Bileschi M et al (2018) Sequential regulatory activity prediction across chromosomes with convolutional neural networks. Genome Res 28:739–750. https://doi.org/10.1101/gr.227819.117
Kim M, Long TI, Arakawa K et al (2010) DNA methylation as a biomarker for cardiovascular disease risk. PLoS ONE 5:e9692. https://doi.org/10.1371/journal.pone.0009692
Koboldt DC, Zhang Q, Larson DE et al (2012) VarScan 2: somatic mutation and copy number alteration discovery in cancer by exome sequencing. Genome Res 22:568–576. https://doi.org/10.1101/gr.129684.111
Krumm N, Sudmant PH, Ko A et al (2012) Copy number variation detection and genotyping from exome sequence data. Genome Res 22:1525–1532. https://doi.org/10.1101/gr.138115.112
Kumar P, Henikoff S, Ng PC (2009) Predicting the effects of coding non-synonymous variants on protein function using the SIFT algorithm. Nat Protoc 4:1073–1081. https://doi.org/10.1038/nprot.2009.86
Landrum MJ, Lee JM, Benson M et al (2016) ClinVar: public archive of interpretations of clinically relevant variants. Nucleic Acids Res 44:D862–868. https://doi.org/10.1093/nar/gkv1222
Larson DE, Harris CC, Chen K et al (2012) SomaticSniper: identification of somatic point mutations in whole genome sequencing data. Bioinformatics 28:311–317. https://doi.org/10.1093/bioinformatics/btr665
Latorre-Pellicer A, Ascaso Á, Trujillano L et al (2020) Evaluating Face2Gene as a tool to identify Cornelia de Lange syndrome by facial phenotypes. Int J Mol Sci 21:1042. https://doi.org/10.3390/ijms21031042
Levine M (2012) Modeling the rate of senescence: can estimated biological age predict mortality more accurately than chronological age? J Gerontol A Biol Sci Med Sci 68(6):667–674. https://doi.org/10.1093/gerona/gls233
Levine ME, Hosgood HD, Chen B et al (2015a) DNA methylation age of blood predicts future onset of lung cancer in the women’s health initiative. Aging 7:690–700. https://doi.org/10.18632/aging.100809
Levine ME, Lu AT, Bennett DA, Horvath S (2015b) Epigenetic age of the pre-frontal cortex is associated with neuritic plaques, amyloid load, and Alzheimer’s disease related cognitive functioning. Aging 7:1198–1211. https://doi.org/10.18632/aging.100864
Levine ME, Lu AT, Quach A et al (2018) An epigenetic biomarker of aging for lifespan and healthspan. Aging 10:573–591. https://doi.org/10.18632/aging.101414
Li H (2011) A statistical framework for SNP calling, mutation discovery, association mapping and population genetical parameter estimation from sequencing data. Bioinformatics 27:2987–2993. https://doi.org/10.1093/bioinformatics/btr509
Lind L, Ingelsson E, Sundström J et al (2018) Methylation-based estimated biological age and cardiovascular disease. Eur J Clin Invest 48(2):e12872. https://doi.org/10.1111/eci.12872
Linder JE, Bastarache L, Hughey JJ, Peterson JF (2021) The role of electronic health records in advancing genomic medicine. Annu Rev Genomics Hum Genet 22:219–238. https://doi.org/10.1146/annurev-genom-121120-125204
Liu M, Huo YR, Wang J et al (2016) Telomere shortening in Alzheimer’s disease patients. Ann Clin Lab Sci 46:260–265
Lu AT, Hannon E, Levine ME et al (2016) Genetic variants near MLST8 and DHX57 affect the epigenetic age of the cerebellum. Nat Commun 7:10561. https://doi.org/10.1038/ncomms10561
Lu AT, Hannon E, Levine ME et al (2017) Genetic architecture of epigenetic and neuronal ageing rates in human brain regions. Nat Commun 8:15353. https://doi.org/10.1038/ncomms15353


Lu AT, Quach A, Wilson JG et al (2019) DNA methylation GrimAge strongly predicts lifespan and healthspan. Aging 11:303–327. https://doi.org/10.18632/aging.101684
Luo R, Sedlazeck FJ, Lam T-W, Schatz MC (2019) A multi-task convolutional deep neural network for variant calling in single molecule sequencing. Nat Commun 10:998. https://doi.org/10.1038/s41467-019-09025-z
Magi A, Tattini L, Cifola I et al (2013) EXCAVATOR: detecting copy number variants from whole-exome sequencing data. Genome Biol 14:R120. https://doi.org/10.1186/gb-2013-14-10-r120
Mamoshina P, Volosnikova M, Ozerov IV et al (2018) Machine learning on human muscle transcriptomic data for biomarker discovery and tissue-specific drug target identification. Front Genet 9
Marabotti A, Scafuri B, Facchiano A (2021) Predicting the stability of mutant proteins by computational approaches: an overview. Brief Bioinform 22:bbaa074. https://doi.org/10.1093/bib/bbaa074
Mastroeni D, McKee A, Grover A et al (2009) Epigenetic differences in cortical neurons from a pair of monozygotic twins discordant for Alzheimer’s disease. PLoS ONE 4:e6617. https://doi.org/10.1371/journal.pone.0006617
Momozawa Y, Dmitrieva J, Théâtre E et al (2018) IBD risk loci are enriched in multigenic regulatory modules encompassing putative causative genes. Nat Commun 9:2427. https://doi.org/10.1038/s41467-018-04365-8
Mozheiko EA, Fishman VS (2019) Detection of point mutations and chromosomal translocations based on massive parallel sequencing of enriched 3c libraries. Russ J Genet 55:1273–1281
Onsongo G, Baughn LB, Bower M et al (2016) CNV-RF is a random forest-based copy number variation detection method using next-generation sequencing. J Mol Diagn 18:872–881. https://doi.org/10.1016/j.jmoldx.2016.07.001
Packer JS, Maxwell EK, O’Dushlaine C et al (2016) CLAMMS: a scalable algorithm for calling common and rare copy number variants from exome sequencing data. Bioinformatics 32:133–135. https://doi.org/10.1093/bioinformatics/btv547
Perna L, Zhang Y, Mons U et al (2016) Epigenetic age acceleration predicts cancer, cardiovascular, and all-cause mortality in a German case cohort. Clin Epigenetics 8:64. https://doi.org/10.1186/s13148-016-0228-z
Plagnol V, Curtis J, Epstein M et al (2012) A robust model for read count data in exome sequencing experiments and implications for copy number variant calling. Bioinformatics 28:2747–2754. https://doi.org/10.1093/bioinformatics/bts526
Poplin R, Chang P-C, Alexander D et al (2018) A universal SNP and small-indel variant caller using deep neural networks. Nat Biotechnol 36:983–987. https://doi.org/10.1038/nbt.4235
Porcu E, Rüeger S, Lepik K et al (2019) Mendelian randomization integrating GWAS and eQTL data reveals genetic determinants of complex and clinical traits. Nat Commun 10:3300. https://doi.org/10.1038/s41467-019-10936-0
Pounraja VK, Jayakar G, Jensen M et al (2019) A machine-learning approach for accurate detection of copy-number variants from exome sequencing. Genome Res. https://doi.org/10.1101/gr.245928.118
Prüfer K, de Filippo C, Grote S et al (2017) A high-coverage Neandertal genome from Vindija Cave in Croatia. Science 358:655–658. https://doi.org/10.1126/science.aao1887
Putin E, Mamoshina P, Aliper A et al (2016) Deep biomarkers of human aging: application of deep neural networks to biomarker development. Aging 8:1021–1033. https://doi.org/10.18632/aging.100968
Rawat RR, Ortega I, Roy P et al (2020) Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images. Sci Rep 10:7275. https://doi.org/10.1038/s41598-020-64156-4
Redin C, Brand H, Collins RL et al (2017) The genomic landscape of balanced cytogenetic abnormalities associated with human congenital anomalies. Nat Genet 49:36–45. https://doi.org/10.1038/ng.3720


Rentzsch P, Witten D, Cooper GM et al (2019) CADD: predicting the deleteriousness of variants throughout the human genome. Nucleic Acids Res 47:D886–D894. https://doi.org/10.1093/nar/gky1016
Roth A, Ding J, Morin R et al (2012) JointSNVMix: a probabilistic model for accurate detection of somatic mutations in normal/tumour paired next-generation sequencing data. Bioinformatics 28:907–913. https://doi.org/10.1093/bioinformatics/bts053
Sahraeian SME, Liu R, Lau B et al (2019) Deep convolutional neural networks for accurate somatic mutation detection. Nat Commun 10:1041. https://doi.org/10.1038/s41467-019-09027-x
Salameh Y, Bejaoui Y, El Hajj N (2020) DNA methylation biomarkers in aging and age-related diseases. Front Genet 11:171. https://doi.org/10.3389/fgene.2020.00171
Shammas MA (2011) Telomeres, lifestyle, cancer, and aging. Curr Opin Clin Nutr Metab Care 14:28–34. https://doi.org/10.1097/MCO.0b013e32834121b1
Shihab HA, Gough J, Cooper DN et al (2013) Predicting the functional, molecular, and phenotypic consequences of amino acid substitutions using hidden Markov models. Hum Mutat 34:57–65. https://doi.org/10.1002/humu.22225
Sindeeva M, Chekanov N, Avetisian M et al (2022) Cell type-specific interpretation of non-coding variants using deep learning-based methods. 12(31):474623
Slieker RC, van Iterson M, Luijk R et al (2016) Age-related accrual of methylomic variability is linked to fundamental ageing mechanisms. Genome Biol 17:191. https://doi.org/10.1186/s13059-016-1053-6
Spinella J-F, Mehanna P, Vidal R et al (2016) SNooPer: a machine learning-based method for somatic variant identification from low-pass next-generation sequencing. BMC Genomics 17:912. https://doi.org/10.1186/s12864-016-3281-2
Sundaram L, Gao H, Padigepati SR et al (2018) Predicting the clinical impact of human mutation with deep neural networks. Nat Genet 50:1161–1170. https://doi.org/10.1038/s41588-018-0167-z
Vaiserman A, Krasnienkov D (2021) Telomere length as a marker of biological age: state-of-the-art, open issues, and future perspectives. Front Genet 11
Van der Auwera GA, O’Connor BD (2020) Genomics in the cloud: using Docker, GATK, and WDL in Terra, 1st edn. O’Reilly Media, Sebastopol
Varadi M, Anyango S, Deshpande M et al (2022) AlphaFold protein structure database: massively expanding the structural coverage of protein-sequence space with high-accuracy models. Nucleic Acids Res 50:D439–D444. https://doi.org/10.1093/nar/gkab1061
Wang K, Li M, Hadley D et al (2007) PennCNV: an integrated hidden Markov model designed for high-resolution copy number variation detection in whole-genome SNP genotyping data. Genome Res 17:1665–1674. https://doi.org/10.1101/gr.6861907
Wang S-C, Oelze B, Schumacher A (2008) Age-specific epigenetic drift in late-onset Alzheimer’s disease. PLoS ONE 3:e2698. https://doi.org/10.1371/journal.pone.0002698
Wang X, Wei X, Thijssen B et al (2012) Three-dimensional reconstruction of protein networks provides insight into human genetic disease. Nat Biotechnol 30:159–164. https://doi.org/10.1038/nbt.2106
Westbrook C, Hooberman A, Spino C et al (1992) Clinical significance of the BCR-ABL fusion gene in adult acute lymphoblastic leukemia: a cancer and leukemia group B study (8762). Blood 80:2983–2990. https://doi.org/10.1182/blood.V80.12.2983.2983
Wick RR, Judd LM, Holt KE (2019) Performance of neural network basecalling tools for Oxford Nanopore sequencing. Genome Biol 20:129. https://doi.org/10.1186/s13059-019-1727-y
Won D-G, Kim D-W, Woo J, Lee K (2021) 3Cnet: pathogenicity prediction of human variants using multitask learning with evolutionary constraints. Bioinformatics 37:4626–4634. https://doi.org/10.1093/bioinformatics/btab529
Xiong D, Lee D, Li L et al (2022) Implications of disease-related mutations at protein-protein interfaces. Curr Opin Struct Biol 72:219–225. https://doi.org/10.1016/j.sbi.2021.11.012

11 AI in Genomics and Epigenomics

243

Zeng Z, Deng Y, Li X et al (2019) Natural language processing for EHR-based computational phenotyping. IEEE/ACM Trans Comput Biol Bioinform 16:139–153. https://doi.org/10.1109/ TCBB.2018.2849968 Zhao H, Huang T, Li J, et al (2020) MFCNV: a new method to detect copy number variations from next-generation sequencing data. Front Genet 11 Zheng Y, Joyce BT, Colicino E et al (2016) Blood epigenetic age may predict cancer incidence and mortality. EBioMedicine 5:68–73. https://doi.org/10.1016/j.ebiom.2016.02.008 Zhou J, Theesfeld CL, Yao K et al (2018) Deep learning sequence-based ab initio prediction of variant effects on expression and disease risk. Nat Genet 50:1171–1179. https://doi.org/10.1038/ s41588-018-0160-6 Zhu Z, Zhang F, Hu H et al (2016) Integration of summary data from GWAS and eQTL studies predicts complex trait gene targets. Nat Genet 48:481–487. https://doi.org/10.1038/ng.3538

Chapter 12

The Utility of Information Theory Based Methods in the Research of Aging and Longevity

David Blokh, Joseph Gitarts, Eliyahu H. Mizrahi, Nadya Kagansky, and Ilia Stambler

Abstract This work surveys some diagnostic applications of identification methods based on information theory. It emphasizes the advantages of the information-theoretical methodology over other identification methods, such as the heuristic methods of deep learning and neural networks. It illustrates the utility of the information-theoretical methodology with several original applications: establishing physiologically meaningful thresholds, predicting risks for individual age-related diseases and multimorbidity, and evaluating the structuredness of gene nucleotide sequences. We believe that, in the future, the theoretically grounded information-theoretical methodology will assume an ever-increasing role in the studies of aging and longevity and in their practical clinical applications. With increasing awareness and wider application of its unique capabilities, it may become a common assistive tool for data analysis and decision making by researchers and clinicians.

Keywords Aging · Longevity · Information theory · Normalized mutual information · Artificial intelligence · Multimorbidity · Physiological threshold · Risk prediction · Nucleotide sequence structuredness

D. Blokh · E. H. Mizrahi · N. Kagansky · I. Stambler (B)
The Geriatric Medical Center “Shmuel Harofe”, Beer Yaakov, Affiliated to Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
e-mail: [email protected]

J. Gitarts
Efi Arazi School of Computer Science, Reichman University, Herzliya, Israel

I. Stambler
Vetek (Seniority) Association—The Movement for Longevity and Quality of Life, Rishon Lezion, Israel
Department of Science, Technology and Society, Bar Ilan University, Ramat Gan, Israel

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_12


12.1 Introduction

The first applications of mathematics in biology and medicine appeared at the end of the nineteenth century (Galton 1888). However, unlike physics, where mathematics was used to build theoretically based models, that is, models based on the fundamental laws of physics, in biology and medicine mathematics was used mainly to analyze experimental and epidemiological data. Until the middle of the twentieth century, data analysis was based on mathematical statistics. In the second half of the twentieth century, identification problems appeared in data analysis (diagnostics, forecasting, assignment to a risk group). In solving identification problems, two difficulties arose: (1) the evaluation of non-linear relationships between random variables; (2) the construction of identification algorithms. Solving these problems within the framework of mathematical statistics turned out to be impossible. Heuristics, that is, methods with no theoretical justification, were then used to evaluate the relationships and to build the algorithms. For example, Cramer's coefficient was used for evaluating relationships (Cramer 1991), and neural networks were used for constructing algorithms (McCulloch and Pitts 1943; Kleene 1956). We consider it unacceptable to use methods without a theoretical justification for medical data analysis, just as, in medicine generally, it is considered unacceptable to use diagnostic or treatment methods that lack a theoretical justification. In other words, in constructing medical identification (diagnostic) algorithms, it may be unjustified to use the common heuristic methods, such as logistic regression, neural networks, or deep learning (Li et al. 2011).
We believe that, just as with the heuristic (ad hoc) use of medications without theoretical substantiation, so too with heuristic (ad hoc) methods of data analysis: when such a method fails, it is difficult (or impossible) to determine whether the failure is due to the method used or to the specific (possibly idiosyncratic) data being analyzed. The need for theoretically justified methods is especially pressing considering the current deficit of standardization and sharing of data, and the few shared clinical evaluation criteria for aging-related ill health (Stambler and Moskalev 2021). In contrast, with the development of information theory (Shannon and Weaver 1949; Quastler 1958; Renyi 1959), it became possible to create a unified, theoretically substantiated approach both for evaluating non-linear relationships between random variables and for constructing identification algorithms. This may be considered the principal advantage of the information-theoretical approach for medical data analysis. There are theoretically justified identification (diagnostic) methods, for example, those based on linear programming (Mangasarian et al. 1995; Blokh et al. 2006), but they lack theoretically justified ways of evaluating the relationships between parameters. A system for evaluating the relationship between the identification parameters and the object of identification is a necessary component of any identification


(diagnostic) system. Presumably, one of the reasons for the inefficiency of “Artificial Intelligence” (“Deep Learning”) in the analysis of coronavirus data (The Alan Turing Institute 2022) and of other medical conditions (Pearl 2019; Freeman et al. 2021; Goldfarb and Teodoridis 2022) is the lack, in the deep learning method, of a theoretically grounded way to evaluate the relationship between markers and disease. Therefore, the present review describes data analysis methods that utilize theoretically justified approaches based on information theory. The review consists of four sections:

1. Basic concepts of information theory.
2. Construction of optimal biological parameter boundaries.
3. Construction of information-theoretic identification algorithms for disease risk prediction.
4. Application of information theory in genetics.

The first section contains formal definitions of the basic concepts of information theory necessary for the analysis of random data. At present, a theoretically substantiated measure of relationship exists only for discrete random variables, namely, the normalized mutual information. The second section describes a method for constructing optimal boundaries of random variables, that is, boundaries such that, after discretization by them, the normalized mutual information between the random variables is maximal. This method may be helpful for establishing therapeutically and clinically meaningful biological or physiological thresholds. In the third section, we provide an algorithm of recognition (diagnostics, identification), namely the nearest neighbour rule with a weighted Hamming distance.
This algorithm can provide diagnostic rules from multiple diagnostic parameters for individual diseases, as well as for multiple diseases (multimorbidity) at the same time, while taking into consideration the exact informative value of each parameter as well as their synergistic relations, where the total effect can differ from the simple sum of the effects of the particular parameters. The fourth section describes the application of information theory to the study of gene sequences, in particular the evaluation of the “structuredness” of sequences. This capability can provide an additional type of biomarker for aging research.
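As a concrete illustration of the nearest neighbour rule with a weighted Hamming distance, here is a minimal sketch of our own (not the authors' implementation). The function names and toy weights are assumptions; in practice the weights would reflect each parameter's informative value, such as its normalized mutual information with the diagnosis.

```python
def weighted_hamming(a, b, weights):
    """Weighted Hamming distance between two discretized parameter
    vectors: sum the weight of every position where they differ."""
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

def nearest_neighbour_diagnose(case, known_cases, diagnoses, weights):
    """Assign `case` the diagnosis of its nearest known case under
    the weighted Hamming distance (nearest neighbour rule)."""
    distances = [weighted_hamming(case, c, weights) for c in known_cases]
    return diagnoses[distances.index(min(distances))]

# Hypothetical toy data: three binary diagnostic parameters, weighted
# by their (assumed) informative value for the disease.
weights = [0.5, 0.3, 0.2]
known = [(0, 0, 1), (1, 1, 0)]
labels = ["healthy", "diseased"]
print(nearest_neighbour_diagnose((0, 0, 0), known, labels, weights))  # prints "healthy"
```

Because the weights differ per parameter, a mismatch on a highly informative parameter moves a case further away than a mismatch on a weakly informative one.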

12.2 Definitions and Applications of Information-Theoretical Methods for the Research of Aging and Aging-Related Ill Health

When discussing the application of artificial intelligence for healthy longevity, it is necessary to consider the definitions of both artificial intelligence and healthy longevity. Is artificial intelligence to be equated with machine learning? If so, the drawbacks of machine learning methodologies would make a very deficient version of “artificial


intelligence.” The drawbacks are many, particularly in relation to medical applications, and especially in relation to aging and longevity. As indicated above, the lack of theoretical substantiation is one of the most significant. Methods of data analysis are often called “artificial intelligence.” We believe such a sweeping use of the term is overreaching, devaluing the concept of “intelligence” and not reflecting the content matter. “Artificial” commonly implies “similar to natural” or “imitating natural.” Historically, once the workings of “natural organs” (such as the heart or kidney) were understood, it became possible to create corresponding “artificial organs” that imitated them. Insofar as the workings of the human brain, that is, the biological processes of intelligence or thought creation, are not understood, it is impossible to speak of their imitation, and accordingly it makes little sense to speak of “artificial intelligence.” Practically, the common current use of the term “artificial intelligence” is mostly operational, without a theoretical foundation. These limitations of the term's usage, and the lack of clear definitions, need to be kept in mind in the relevant discussions. As for the definition of “healthy longevity,” it can be operationally understood as the absence of age-related frailty, as commonly defined in geriatric medicine, that is, an active and functional state of older adults characterized by a decreased risk of future poor clinical outcomes: diminished development of disability, dementia, falls, hospitalization or institutionalization, and decreased mortality (Fried and Walston 1999).
Hence, the ability of computational models, including those often branded as “artificial intelligence,” to provide early quantitative evaluation of frailty risks is also of great medical and economic significance, which likewise necessitates minimizing the potential drawbacks, biases and inadequacies of such methodologies. The healthcare benefits of even a minor improvement in the ability of early diagnosis or risk prediction, necessary for early preventive treatment of aging-related diseases, could thus be immense. To accomplish this goal, improved diagnostic capabilities and methodologies are needed for the aging process itself, capable of reliably estimating a person's physiological and biological age and the effects of interventions on that age (Stambler 2017a; Stambler et al. 2022). We argue that information-theory-based approaches, utilizing such measures as entropy and mutual information, may provide powerful methodological tools for the solution of these problems. First of all, information theory may allow a more reliable estimation of biological and physiological correlates (biomarkers) of aging, owing to its ability to estimate non-linear correlations between parameters using mutual information measures. The a priori reliance on linear statistical correlations in the search for such biomarkers has failed to produce practically applicable results (Butler et al. 2004; Ferrucci et al. 2020). Information-theoretical measures may provide new means to intensify and facilitate this search. Moreover, preclinical diagnosis requires the simultaneous analysis of a large number of parameters of various kinds, including continuous parameters with both Gaussian and non-Gaussian distributions, as well as discrete and ranked parameters. Presently, the only theoretically grounded method for the simultaneous analysis of multiple parameters of different kinds is information theory (Blokh and Stambler 2017a, b).


Crucially, the existing approaches commonly assume normal (Gaussian) distributions and linear relations of parameters. Hence, they mainly employ linear statistical measures of correlation, such as the correlation coefficient or linear regression. However, such measures do not correspond to physiological realities, where the relations between parameters are non-linear, including the non-linear alterations with age. Hence, the currently used methods are ill-suited to evaluate physiological age and aging-ameliorating, healthspan-extending interventions. The main advantage of the information-theoretical methodology is that it provides an integrated approach that can take into consideration the non-linear interrelation of a multitude of parameters, biomarkers and intervention factors, by using information-theoretical rather than linear statistical measures. Generally, the use of information-theoretical measures can provide the following capabilities for diagnostic modeling, in particular for aging-related diseases (Blokh and Stambler 2015a, 2017a):

1. Evaluation of the influence (or correlation) of diagnostic parameters, biomarkers and risk factors on the actual emergence of disease, that is, establishing causal relations. Such abilities are of critical importance for population health (Lim et al. 2012).
2. Optimal discretization of diagnostic parameters, that is, finding physiologically meaningful thresholds, boundaries and balances (Nicolis and Prigogine 1990).
3. Evaluation of the joint influence of a group of parameters, crucial for multi-parametric biological systems (Lim et al. 2012; Blokh and Stambler 2015b).
4. Partitioning of a group of parameters by the amount of information contained in them, and selection of a subgroup containing the greatest amount of information about all the parameters of the group under study, thus selecting the most meaningful and economical diagnostic parameters (Preckova et al. 2012; Molina-Pena and Alvarez 2012; Blokh et al. 2009; Blokh 2013).
5. Evaluation of the informational variability (heterogeneity or dissimilarity) of a group of parameters, a potential measure of system adaptation and homeostasis (Radtke et al. 2009; Lipsitz and Goldberger 1992; Blokh and Stambler 2014).
6. Description of the dynamic behavior of biological systems and comparison between the dynamic behaviors of different systems, thereby evaluating the dynamics of disease progress and the success or failure of particular treatments, using the time series methodology (Hornero et al. 2009; Liu et al. 2014).

These capabilities are of immediate significance for medical diagnostics. Yet more far-reaching capabilities may be envisioned, even though they may be a longer way off. Information theory can serve as a universal methodology to assess health and disease status in relation to age, unifying a variety of model systems and focusing on age-related changes as the root cause of a variety of chronic age-related diseases and health impairments. Information theory may provide the following specific methodological capabilities, currently not available in any other system (Stambler and Blokh 2017):

1. The current health metrics mainly employ statistical measures. Yet statistical measures are often inadequate, insofar as in biological systems the relations between parameters are often non-linear. In contrast, information-theoretical methods allow for the estimation (measurement) of complex non-linear relations between parameters, and hence for the inclusion of a wider range of data in making health decisions.
2. Currently, the results from different study models are described in incompatible terms that do not permit easy mutual inference. In contrast, the common terms and measures of information theory, such as entropy and mutual information, can serve as a universal language to describe, in a unified way, any number of diverse models and results.
3. Currently, the degree of mutual applicability between animal model systems and humans, as well as between diverse human samples, is uncertain. In contrast, the evaluation of mutual information between different model systems can be used as a standardized and convenient estimate of their mutual applicability.
4. Currently, the effects of various treatments on human health are often examined in a disconnected manner, without knowing the precise interactions of the various treatments. The information-theoretical measures of correlation (such as normalized mutual information) can be employed to test the effects of single treatment factors, or of their combinations (such as drugs, genes and lifestyle factors), on the health span and the disease status. By the precise quantitative evaluation of the influence of such factors, both synergistic positive and antagonistic adverse effects of treatment interactions can be determined.
5. The current systems lack the formal ability to select the most informative (and hence the most clinically useful) parameters. Using information-theoretical methods, the most informative single parameters, or groups of parameters with the highest influence on the health span and disease status, can be selected. The selection of the most informative parameters, such as those that contain information about the other selected parameters, will allow for a more economical, convenient and efficient diagnostic system. This can save time and expenditure on unnecessary testing by eliminating the less informative parameters from the outset.
6. The current statistical systems are largely heuristic. In contrast, in information-theoretical diagnostic systems, mutual information provides an exact estimate of the similarity between various model systems. Therefore, it may be possible to predict the efficacy of a yet untested drug or treatment using the estimates of its similarity (mutual information) with other tested drugs and treatments, along with the similarity of the model systems to which they are applied. Such an approach may save on unnecessary animal and human testing and facilitate the development of new drugs and treatments.
7. The current health assessment systems lack a unified standard or frame of reference. An information-theory-based combined metric for measuring health status may be based on the convenient and standardized evaluation of system stability, using information-theoretical measures such as entropy and mutual information. The current systems are mainly based on static, average or median population values. The information-theoretical measures of system stability,


assessing dynamic changes in a particular system, can be self-referential, and hence truly personalized.
8. The current systems do not permit a formal assessment of changes in system stability due to treatments. In contrast, information theory may permit estimating the effects of particular drugs and treatments, or their combinations, on the stability of a particular system in the short and/or long term, by calculating the system alterations at the input and output caused by the particular treatments. This may provide a common measure of health status and of the effects of interventions, for the short and the long term.

These capabilities are based on the known abilities of information theory: (1) to estimate non-linear relations; (2) to describe diverse systems in common terms of entropy change; (3) to estimate the degree of similarity or difference between various systems; (4) to examine the combined effects of different parameters on a parameter of choice; (5) to select the most informative parameters; (6) to predict outcomes, as has already been demonstrated by the wide use of information theory in diagnosis (Blokh and Stambler 2017a); (7) to estimate the general system stability; (8) to estimate changes in system stability, heterogeneity, regulation and information loss in response to external stimuli (Blokh and Stambler 2017a).

Briefly, the information-theoretical approach involves the following conceptual and methodological apparatus. The main concept of information theory is entropy. Let X be a discrete random variable with the distribution

X: x_1, x_2, …, x_n
P: p_1, p_2, …, p_n

The entropy of the random variable X is

H(X) = − ∑_{i=1}^{n} p_i log p_i

where p_i is the probability of the outcome x_i occurring. Entropy has the following properties:

1. 0 ≤ H(X).
2. H(X) = 0 if and only if the “random” variable X has only a single outcome, that is to say, a certain event.
3. Of all random variables with an equal number of outcomes, the random variable whose outcomes have equal probability has the highest entropy. That is to say, of all random variables with n outcomes, the random variable Y has the greatest entropy if the probability of each of its outcomes equals 1/n.
4. Let X and Y be discrete random variables, and XY the product of those random variables. Then H(XY) = H(X) + H(Y) if and only if X and Y are independent.

These properties permit considering entropy as a measure of the uncertainty of a random variable.


Simply put, information theory provides the theoretical basis for the measurement of information. Information increases our knowledge of some phenomena and thus decreases the degree of uncertainty about those phenomena. The concept of entropy allows researchers to precisely measure the degree of uncertainty of the random variables describing some phenomenon, event, experiment or system. For n equally probable outcomes, the degree of uncertainty depends on the number of outcomes n. Thus, if there is only one outcome (n = 1), then the event is certain; in other words, its uncertainty is zero. With increasing n, the uncertainty, or the difficulty of predicting a particular event, increases. The measures of entropy are formally simple and have been routinely included in statistical analysis packages, such as SPSS (Acton and Miller 2009), or can be conveniently customized or programmed.

In many cases, an adequate measure of uncertainty for a discrete random variable is the normalized Shannon entropy, whose maximal value does not depend on the number of outcomes of the random variable. Let X be a discrete random variable. The normalized Shannon entropy of X is

S(X) = H(X) / log n = − (∑_{i=1}^{n} p_i log p_i) / log n

The normalized Shannon entropy has the following properties:

1. 0 ≤ S(X) ≤ 1.
2. S(X) = 0 if and only if the “random” variable X has only one outcome, a certain event.
3. S(X) = 1 if and only if the probabilities of all the outcomes of the random variable X are equal, the maximally uncertain event.

Thus, entropy and normalized Shannon entropy are the information-theoretical measures of uncertainty of a random variable. The uncertainty can also be interpreted as “variability” or “irregularity.”

It should be noted that the great potential utility of information theory for the description of biological and biomedical systems is, to a great extent, the result of its ability to describe separate systems (e.g. experiments, events, phenomena) as well as to study them together as a unified system. This ability derives from a property of entropy. Consider two independent systems, S1 and S2. The system S1 has n equally probable states, and the system S2 has m equally probable states. Then the system S, combining the systems S1 and S2, has n·m equally probable states. The entropies (uncertainties) of those systems are, correspondingly:

H(S1) = − ∑_{i=1}^{n} (1/n) log2(1/n) = log2 n

H(S2) = − ∑_{i=1}^{m} (1/m) log2(1/m) = log2 m

H(S) = − ∑_{i=1}^{n·m} (1/(n·m)) log2(1/(n·m)) = log2(n·m)

Yet log2(n·m) = log2 n + log2 m. That is to say, the entropy (uncertainty) of the unified system is equal to the sum of the entropies (uncertainties) of the original independent systems. Simply put, combining systems under investigation adds to the uncertainty of our knowledge. This additivity is a fundamental property of entropy, derived from the property of the logarithm, and it allows researchers to construct adequate models of biological systems.

We will now briefly describe the information-theoretical measures of correlation (or influence) between random variables. Let X and Y be discrete random variables. The mutual information between the variables X and Y is

I(X; Y) = H(X) + H(Y) − H(XY)

where H(XY) is the entropy of the product of the random variables X and Y. Mutual information has the following properties:

1. 0 ≤ I(X; Y).
2. I(X; Y) = 0 if and only if the random variables X and Y are independent (no correlation between the variables).
3. I(X; Y) = I(Y; X).

To estimate the influence of random variables, and to compare such influences, a more adequate measure is the normalized mutual information (NMI). Let X and Y be discrete random variables. The normalized mutual information (also termed the “uncertainty coefficient”) is

R(X; Y) = I(X; Y) / H(Y) = (H(X) + H(Y) − H(XY)) / H(Y)

The normalized mutual information has the following properties:

1. 0 ≤ R(X; Y) ≤ 1.
2. R(X; Y) = 0 if and only if the random variables X and Y are independent (no correlation between the variables).
3. R(X; Y) = 1 if and only if there is a functional relation (correlation or influence) between X and Y.
4. R(X_1; Y) ≤ R(X_1, X_2; Y) and R(X_2; Y) ≤ R(X_1, X_2; Y). That is to say, the mutual influence of two random variables X_1 and X_2 on a random variable Y is greater than or equal to the influence of either of the random variables X_1 and X_2 on Y alone.


The normalized mutual information (or uncertainty coefficient) is also routinely available in statistical packages, or can be conveniently programmed and/or customized. For example, in the SPSS statistical package, the uncertainty coefficient is obtained through the procedure: SPSS → Analyze → Descriptive Statistics → Crosstabs → Statistics → Uncertainty Coefficient (Rovai et al. 2014). The current work reviews some of the applications of information-theoretical analysis to the investigation of aging and aging-related diseases. This research area is particularly suitable for the application of information theory methods, as aging processes are multi-parametric, continuous parameters coexist side by side with discrete parameters, and the relations between the parameters are, as a rule, non-linear. These constraints prohibit, or make difficult, the study of aging and aging-related diseases by biostatistical methods, and thus necessitate the use of information-theoretical analysis (Blokh and Stambler 2017a).
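Since, as noted, these measures can be conveniently programmed, here is a minimal Python sketch of the definitions in this section (illustrative code of our own, using base-2 logarithms and probabilities estimated from the observed counts of discrete outcomes):

```python
from collections import Counter
from math import log2

def entropy(xs):
    """H(X) = -sum p_i * log2(p_i), with probabilities estimated
    from the observed frequencies of the discrete outcomes."""
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X; Y) = H(X) + H(Y) - H(XY), where XY is the paired variable."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def normalized_mutual_information(xs, ys):
    """R(X; Y) = I(X; Y) / H(Y), the uncertainty coefficient."""
    return mutual_information(xs, ys) / entropy(ys)

x = [0, 0, 1, 1]
print(normalized_mutual_information(x, x))             # functional relation: 1.0
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # independent: 0.0
```

On real data, such plug-in estimates are biased for small samples; the sketch only mirrors the definitions given above, including the properties R = 1 for a functional relation and I = 0 for independence.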

12.3 The Application of Information-Theoretical Methods for the Evaluation of Biological and Biomedical Boundaries or Thresholds

One of the crucial applications of information-theoretical methodologies may be the evaluation of biological and medical boundaries and/or thresholds. The importance of being able to establish such boundaries, especially in aging research, may derive from the evolutionary theory of aging. The existence of functional temporal thresholds in the organism's life course may correspond with the two major evolutionary theories of aging: the “Mutation Accumulation” theory and the “Antagonistic Pleiotropy” theory. Both theories posit a distinction between an earlier and a later period of an organism's life course, which may have distinct implications for fitness and aging, and presumably also for the corresponding differences in health. In this context, a brief reminder of the history of the evolutionary theories of aging is in order (Stambler 2017b). In 1952, the British immunologist Peter Medawar developed the general theory of “Mutation Accumulation” (Medawar 1952). According to this theory, only the genes expressed earlier in life, during the reproductive period, are selected for and important from the evolutionary perspective. What happens after the reproductive period is largely irrelevant for evolutionary success; hence the late-acting mutations accumulate and cause the damage of aging. In 1957, the American evolutionary biologist George Williams added an important specification (Williams 1957). According to Williams, it is not just the mere accumulation of late-acting mutations that causes senescence: the very same genes that aid survival and reproduction in an early period of life history can be damaging and cause senescence in a later period. This concept came to be known as the “Antagonistic Pleiotropy” theory of aging. A large number of observations seem to support it.
In Williams’s reasoning, for example, the rapid accumulation of


calcium (“calcification”) early in life can be beneficial for bone and muscle development, and hence for increased stamina. Later in life, however, enhanced calcium deposition can contribute to atherosclerosis. Another hypothetical example Williams uses is that “a gene that favored erythrocyte longevity might be far from ideal for the maximization of oxygen-carrying capacity” (Williams 1957). Other examples can be adduced along those lines. Thus, high levels of testosterone may give an edge in sexual competition, yet later in life may contribute to prostate growth. At an even more fundamental level, oxidative phosphorylation in the respiratory chain is what sustains life, yet the free radicals formed in the process cause aging damage. In another instance, the shortening of telomeres plays a part in cell differentiation and prevents cancer, but it also imposes a cell replication limit and thus leads to aging and death. Further evidence for the validity of the antagonistic pleiotropy theory has continued to accumulate for diverse animal models as well as for humans (Ezcurra et al. 2018; Smith et al. 2013). The propositions of the above two theories were formalized in 1966 by the British evolutionary biologist William Hamilton (1966). Using Fisher's reproductive value, or the “Malthusian parameter,” he showed that there is always a greater selective premium on early rather than late reproduction, since the probability of surviving to a certain age declines with age. These equations apply both to the “Mutation Accumulation” and the “Antagonistic Pleiotropy” theories. According to the British evolutionary biologist Brian Charlesworth, “it is at present hard to be sure which of the two most likely important mechanisms by which this property of selection influences senescence (accumulation of late-acting deleterious mutations or fixation of mutations with favorable early effects and deleterious late effects) plays the more important role” (Charlesworth 2000).
Yet, for the purposes of the present discussion, both types of evolutionary theory of aging imply a distinction between earlier and later periods in life history, and a hypothetical temporal threshold at which the fitness effects of the earlier period become displaced by the deteriorative aging effects of the later period. Information-theoretical methodology may assist in establishing such thresholds. Establishing them may have not only fundamental implications for the understanding of the aging processes, but also practical implications for personalized and preventive medical treatments. Thus, establishing such thresholds may assist in the early detection of age-related diseases; in determining the range of early intervention, before the threshold is reached; and in preventive interventions aiming to postpone or even eliminate the temporal thresholds. Below is an illustrative example of establishing temporal thresholds for the emergence of several major aging-related diseases. The example is intended as a methodological pointer. With more data, especially longitudinal data on the human life course, more clinically significant thresholds may be evaluated using information theory. For this example, illustrating the methodology, we used the data of geriatric patients hospitalized at the Shmuel Harofe Geriatric Medical Center in Beer Yaakov, Israel. The patients' data were accessed retrospectively, according to the principles of

256

D. Blokh et al.

the Declaration of Helsinki. The Institutional Review Board of Shmuel Harofe Geriatric Medical Center approved the study (IRB Approval No. 53, date of approval: 13/07/2017). The patient data used in this research were anonymized before the study. We analyzed a total of 2211 geriatric patients, aged 61–103, including 1290 females and 921 males. We utilized 17 routinely used diagnostic parameters: Age (years), Blood Glucose (mg/dL), Urea (mg/dL), Creatinine (mg/dL), Cl (mmol/L), Na (mmol/L), K (mmol/L), Uric Acid (mg/dL), Ca (mg/dL), Phosphorus (mg/dL), Total Proteins (g/dL), Albumin (g/dL), Triglycerides (mg/dL), Cholesterol (mg/dL), B12 (pg/mL), Ferritin (ng/mL), WBC (10^3/µL). These diagnostic parameters were related to 14 diseases and medical conditions, with the following numbers of patients for each specific disease or condition: Diabetes Mellitus (DM)—1260, Hypertension (HTN)—1693, Lipidemia—881, Ischemic Heart Disease (IHD)—600, Myocardial Infarction (MI)—330, Congestive Heart Failure (CHF)—439, Atrial Fibrillation (AF)—565, Cerebrovascular Accident (CVA)—541, Chronic Renal Failure (CRF)—525, Heart Failure (HF)—381, Anemia—619, Dementia—589, Chronic Obstructive Pulmonary Disease (COPD)—364, Malignancy—431. The method of finding physiologically meaningful thresholds involves the optimal discretization of diagnostic parameters (Nicolis and Prigogine 1990). Questions of discretization (or boundary setting) commonly arise when continuous parameters are considered alongside discrete ones, or when the existing discretization of a parameter does not match the parameter’s biological significance or the setting of the problem. An optimal discretization of a parameter X relative to a discrete parameter Y is a discretization at which the uncertainty coefficient c(X, Y) assumes its maximal value.
Simply put, to calculate the boundary of a parameter, we find the minimal and maximal values of the parameter, then step through a set of candidate boundaries in that interval, and select the boundary at which the normalized mutual information is maximal. This point, with the maximum value of normalized mutual information, determines how strongly the diagnostic parameter is related to a disease, and hence yields the most diagnostically meaningful boundary, the one that best “discriminates” healthy from diseased subjects. That is to say, the boundary calculated in this way provides the most informative indication of a change of a parameter as correlated with a disease. In other words, with parameter values above the boundary, the state is most distinct from the state below the boundary, for example indicative of the emergence of a pathology or a risk state. In establishing the thresholds of biological parameters, sometimes it may be preferable to establish a single boundary, and sometimes it may be preferable to establish meaningful intervals. In other words, in some cases it makes sense to speak of parameters “above and below the norm,” where “the norm” is understood as an approximate dividing point, hence two categories. In other cases, it may be more appropriate to see the norm as an extended interval, while values outside that interval are seen as abnormal, hence three categories. Or else, there may be a middle interval where the distinction between diseased and healthy subjects is difficult, but becomes clearer outside that interval. Here “abnormality” is understood
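The boundary search just described can be sketched in a few lines. The snippet below is a minimal illustration, not the authors’ implementation; the number of scan steps and the normalization of mutual information by H(Y) (the uncertainty coefficient) are assumptions:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy (bits) of a discrete label array
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def normalized_mi(x_disc, y):
    # Uncertainty coefficient c(X, Y) = I(X; Y) / H(Y) for discrete arrays
    x_disc, y = np.asarray(x_disc), np.asarray(y)
    hy = entropy(y)
    if hy == 0.0:
        return 0.0
    # encode the joint distribution of (X, Y) as a single integer label
    joint = x_disc * (y.max() + 1) + y
    return (entropy(x_disc) + hy - entropy(joint)) / hy

def optimal_boundary(x, y, steps=100):
    # Scan candidate boundaries between min(x) and max(x) and keep
    # the one that maximizes the normalized mutual information with y.
    candidates = np.linspace(x.min(), x.max(), steps)[1:-1]
    scores = [normalized_mi((x > b).astype(int), y) for b in candidates]
    i = int(np.argmax(scores))
    return candidates[i], scores[i]
```

For a parameter such as creatinine, x would hold the raw values and y the 0/1 disease indicator; the returned boundary is the candidate at which the NMI is maximal.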

12 The Utility of Information Theory Based Methods in the Research …

257

simply as an enhanced correlation with the disease, while “normality” is a higher correlation with health status, as shown by the maximal values of the normalized mutual information. Using our data on geriatric patients, we illustrate the creation of “point boundaries” (Table 12.1) and “interval boundaries” (Table 12.2) using sets of selected parameters with the highest values of normalized mutual information. Generally, the results agree with clinical intuitions. Thus, the strongest indications of Chronic Renal Failure are associated with the markers Creatinine, Urea and Uric Acid, with normalized mutual information values of 0.2894, 0.1586 and 0.1197 respectively. This is expected, as these are common parameters related to renal function. The respective boundaries of 1.19, 57 and 6.5 may indicate a significant increase of risk above these values. Blood glucose is most strongly correlated with Diabetes Mellitus, as should be expected (NMI = 0.1084). Blood glucose is also strongly correlated with multimorbidity (multiple diseases), though besides diabetes it has rather low correlation (NMI) values with other diseases. The glucose boundary of 135 may represent an increased risk. The relative normalcy interval for blood glucose is broader, between 106 and 155 (NMI = 0.1291). Above this interval, the risk may increase strongly. Yet, of greatest interest for the present discussion of aging is the relationship of age to the emergence of diseases. In particular, the present method enables researchers to establish the age boundaries that may indicate an increased risk for particular diseases. Thus (quite expectedly), age was shown to be a better correlate of dementia (a paradigmatic “age-related disease”) than of other diseases, with the highest normalized mutual information of 0.035. The optimal age boundary was found to be 76 years (NMI = 0.035). For hypertension, too, the boundary age was found to be 76 years, with NMI = 0.0247.
These are the most confident boundaries for risk prediction, with the highest level of NMI. That is to say, for example, in the case of dementia, 76 is the minimal age at which the patient may be confidently diagnosed with dementia. Under 76, there may be more doubt and less confidence. Notably, these age boundaries for dementia and hypertension differ from the age boundaries for other diseases: e.g. about 80 years for Ischemic Heart Disease, Myocardial Infarction and Atrial Fibrillation; though these boundaries show rather low values of normalized mutual information (0.0026, 0.0024 and 0.016 respectively). It may be difficult to mechanistically explain why the boundaries for the increasing risks of dementia and hypertension, at least for the present cohort, lie at 76 years and not at some other age, and why other age boundaries exist for other diseases. Though it is also not easy to explain why the WHO convention designates people at and over the “round number” of 75 years as “late elderly,” and moreover why the WHO designation for “premature mortality” is under 70 years, but not older (WHO 2022). In contrast to the accepted WHO thresholds that rather arbitrarily classify people according to their chronological age, the present information-theory-based boundaries may establish an evidence base for meaningful distinctions of “later” versus “earlier” aging periods, based on personalized health metrics and risks.

Table 12.1 Optimal point boundaries and maximal values of normalized mutual information (NMI) at the optimal boundaries for diagnostic parameters versus diseases

[Table 12.1 spans several pages and lists, for each diagnostic parameter (Age, Glucose, Urea, Creatinine, Cl, Uric acid, Ca, Total proteins, Albumin, Triglycerides, Cholesterol, WBC) crossed with each of the 14 diseases and conditions (DM, HTN, Lipidemia, IHD, MI, CHF, AF, CVA, CRF, HF, Anemia, Dementia, COPD, Malignancy), the optimal point boundary and the NMI value at that boundary. Representative entries cited in the text: Creatinine 1.19 mg/dL (NMI = 0.2894), Urea 57 mg/dL (NMI = 0.1586) and Uric acid 6.5 mg/dL (NMI = 0.1197) for CRF; Glucose 135 mg/dL (NMI = 0.1084) for DM; Age 76 years for Dementia (NMI = 0.0350) and for HTN (NMI = 0.0247); Age about 80 years for IHD, MI and AF (NMI = 0.0026, 0.0024 and 0.016, respectively).]

Table 12.2 Optimal interval boundaries and maximal values of normalized mutual information (NMI) at the optimal boundaries for diagnostic parameters versus diseases

[Table 12.2 spans several pages and lists, for each diagnostic parameter crossed with each of the 14 diseases and conditions, the lower and upper boundaries of the optimal interval and the NMI value at those boundaries. Representative entries cited in the text: Glucose 106–155 mg/dL for DM (NMI = 0.1291); Age 72–86 years for Dementia; Age 75–76 years for HTN (NMI = 0.0265).]


If we consider the period boundaries of relative normalcy, then for dementia, the interval is from 72 to 86 years. That is to say, up to the age of 72 there is almost no dementia, and over 86 there are few cases, possibly due to high mortality (Table 12.2). For hypertension, the interval ranges from 75 to 76 years, that is, there is almost no difference from a single point boundary, and the values of normalized mutual information are similar: 0.0247 for the point boundary and 0.0265 for the period boundary. Thus, even in this limited sample, it can be seen that risks for different diseases may differ with respect to the time of their appearance and the duration of the risk. Generally, the existence of temporal boundaries of vulnerability during an organism’s life course is suggested by the evolutionary theories of aging. Yet several types of periodicity have been indicated by various studies, for example step-wise, undulating or U-shaped changes in gene expression during a life history (Lehallier et al. 2019; Burgers et al. 2011). The discrepancies in the timing and types of life history periods, even in the same organism, may be related to the heterochronicity (heterochrony) of aging, i.e. the fact that different systems age at different rates and often in different modes (Nie et al. 2022). Unfortunately, the evolutionary theory of aging, in its present state, whether relying on the “Mutation Accumulation,” “Antagonistic Pleiotropy” or other premises, is unable to provide concrete quantitative predictions for clinical practice in humans. The present methodology may be a small step toward providing such predictive capability. Still, vast additional research will be needed to provide a periodization that could assist the early diagnosis and preventive treatment of aging-related conditions, especially one based on longitudinal observations of diverse cohorts.
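Such interval (“period”) boundaries can be searched analogously to point boundaries, by scanning ordered pairs of candidate thresholds and coding each value as below, inside, or above the interval. The sketch below is illustrative only; the grid resolution and the normalization of mutual information by H(Y) are assumptions, not the authors’ exact procedure:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy (bits) of a discrete label array
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def normalized_mi(x_disc, y):
    # Uncertainty coefficient c(X, Y) = I(X; Y) / H(Y)
    x_disc, y = np.asarray(x_disc), np.asarray(y)
    hy = entropy(y)
    if hy == 0.0:
        return 0.0
    joint = x_disc * (y.max() + 1) + y   # unique integer per (x, y) pair
    return (entropy(x_disc) + hy - entropy(joint)) / hy

def optimal_interval(x, y, steps=40):
    # Scan ordered pairs of candidate boundaries; values are coded
    # 0 (below the interval), 1 (inside) or 2 (above), and the pair
    # maximizing the normalized mutual information with y is kept.
    grid = np.linspace(x.min(), x.max(), steps)
    best = (grid[1], grid[2], -1.0)
    for i in range(1, steps - 2):
        for j in range(i + 1, steps - 1):
            codes = np.digitize(x, [grid[i], grid[j]])
            score = normalized_mi(codes, y)
            if score > best[2]:
                best = (grid[i], grid[j], score)
    return best  # (lower boundary, upper boundary, NMI)
```

With a disease indicator that is low inside a “normalcy” interval and high outside it, the returned pair approximates the interval boundaries of relative normalcy.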

12.4 Using Information-Theoretical Methods for Risk Group Attribution

Beside establishing the most informative diagnostic boundaries based on maximal normalized mutual information, a task of immediate practical significance is to use information-theoretical measures to attribute a subject to a risk group for a particular disease. This is the goal of preventive medicine, and information-theoretical measures are well suited to this task. For attributing a subject to a risk group for a particular disease or group of diseases (multimorbidity), normalized mutual information is a convenient and advantageous measure. In contrast to the evaluation of parameter correlations using the linear correlation coefficient, normalized mutual information enables investigators to determine non-linear associations of the diagnostic parameters of interest with the presence of a disease or a group of diseases. Moreover, the normalized mutual information value provides the exact amount of information (or informative value) that each diagnostic parameter contains about the presence of diseases. The formulas for the calculation of normalized mutual information were presented in the Definitions section above, according


to the standard procedure. A normalized mutual information approaching zero signifies a weaker correlation, whereas one approaching unity signifies a stronger correlation between evaluation parameters (Renyi 1959; Zvarova and Studeny 1997). It is important to emphasize that normalized mutual information measures the precise amount of information that each evaluation parameter contains about the presence of a disease. Based on such exact quantities of information, or informative weights/values, of all the parameters, it is possible to create a diagnostic rule to evaluate a person’s risk of developing a particular disease or group of diseases at a particular age, categorizing subjects into risk groups according to the strength of the correlation. In constructing the algorithm for risk group categorization, we do not use common heuristic methods such as logistic regression, neural networks, or deep learning (Li et al. 2011). When applying mathematics in medicine, we believe that the use of heuristic approaches (algorithms) should, as far as possible, be avoided as theoretically unsubstantiated. Rather, we use the theoretically grounded method of information theory. As the algorithm for assigning a patient to a risk group, we use the nearest neighbor rule with the weighted Hamming distance (Blokh et al. 2008; Hamming 1986; Huang et al. 2018)

d(v, z) = ∑_{j=1}^{n} 2^{j−1} |v_j − z_j|,

where v = (v_1, v_2, …, v_n) and z = (z_1, z_2, …, z_n) are n-dimensional binary vectors, and the weights 2^0, 2^1, …, 2^(n−1) are defined by the corresponding normalized mutual information. Under the present approach, the initial parameters are transformed into binary parameters, while each pattern is a set of n-dimensional binary vectors. For each parameter, the normalized mutual information estimates the correlation of this parameter with a disease or medical condition, or a group of diseases and medical conditions (multimorbidity). Thus, the greater the normalized mutual information, the greater the correlation, and the greater the weight this parameter obtains in the weighted Hamming distance. The attribution to a risk group is determined by the vector w = (w_1, w_2, …, w_n) of the patterns, found at the minimal distance from the vector z = (z_1, z_2, …, z_n) of the corresponding tested patient. That is to say, if the vector w, such that

d(w, z) = min d(v, z) = min ∑_{j=1}^{n} 2^{j−1} |v_j − z_j|,

where the minimum is taken over the set of all vectors v = (v_1, v_2, …, v_n) of the two patterns, belongs to pattern 1, then the vector z = (z_1, z_2, …, z_n) is attributed to the group of patients corresponding to pattern 1. And if it belongs to pattern 2, then the vector z = (z_1, z_2, …, z_n) likewise belongs to the group of patients corresponding to pattern 2.
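As an illustrative sketch (not the authors’ code), the weighted Hamming distance and the nearest neighbor rule can be written as follows; the ordering of coordinates by ascending NMI, so that the most informative marker carries the largest weight 2^(n−1), is an assumption consistent with the weighting described above:

```python
import numpy as np

def weighted_hamming(v, z):
    # d(v, z) = sum over j of 2^(j-1) * |v_j - z_j| for binary vectors.
    # Coordinates are assumed ordered by increasing normalized mutual
    # information, so the most informative marker gets weight 2^(n-1).
    v, z = np.asarray(v), np.asarray(z)
    weights = 2 ** np.arange(len(v))          # 2^0, 2^1, ..., 2^(n-1)
    return int(np.sum(weights * np.abs(v - z)))

def nearest_neighbor_label(z, patterns, labels):
    # Nearest neighbor rule: assign z the label of the training
    # pattern vector at minimal weighted Hamming distance.
    distances = [weighted_hamming(v, z) for v in patterns]
    return labels[int(np.argmin(distances))]
```

For example, a tested vector equal to a pattern-1 vector in its highest-weight coordinates will be attributed to pattern 1 even if it differs in several low-weight coordinates.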


The theoretical justification of the nearest neighbor rule with the weighted Hamming distance was presented earlier (Blokh et al. 2008; Huang et al. 2018). To illustrate the rule, and to facilitate its use in the clinic, we often present it as a decision tree (Blokh 1987). The application and the algorithm for the construction of decision trees and analogous approaches have been presented earlier (Blokh et al. 2008, 2007, 2021; Podgorelec et al. 2002). The decision tree can be useful in clinical evaluation practice for two main reasons: a decision tree diagnostic model closely follows the description of clinical decision making, and it can be easily theoretically justified and interpreted. In the present work, we built decision rules regarding the risk for major aging-related diseases and their combination (multimorbidity), using a small set of diagnostic parameters routinely available to treating physicians. In this way, we were able to demonstrate the potential common applicability of the information-theoretical methodology for assessing the risk of age-related diseases. Yet, it should be noted that the present sample of geriatric patients from a single geriatric hospital may not faithfully represent the risks in the general population; it is rather provided as an illustration of the method that may be representative of, and applicable to, geriatric settings. It should further be noted that in order to build an algorithm for attribution to a risk group for a single disease, such as hip fracture, with satisfactory results, a limited number of parameters (e.g. five or six) may be sufficient (Blokh et al. 2021). With such a limited number of parameters, it is very convenient to represent the algorithm in the form of a decision tree. However, in many cases, more parameters are needed to build an algorithm that attributes a subject to a risk group with satisfactory results for multimorbidity (three or more diseases).
This may be due to the increasing interrelation of the factors involved, which generally makes the estimation of multimorbidity more difficult, so that it is often excluded from clinical assessment. A higher number of parameters also makes the visual presentation of a decision tree difficult. It should be emphasized that for the application of the information-theoretical model, all the data must be discretized. Here, the discretization thresholds (boundaries) were determined according to the algorithm for evaluating physiological boundaries by maximizing normalized mutual information (see the previous section). Following the data discretization and the calculation of the normalized mutual information values for all the parameters under consideration, we select the most informative parameters, those with the highest values of normalized mutual information, for the construction of the diagnostic model. With these most informative parameters, we construct the diagnostic model using the weighted Hamming distance. There is currently no general consensus on the clinical evaluation and selection criteria for counting several diseases as a multimorbidity (Stambler and Moskalev 2021; Blokh et al. 2017, 2019; Hafezparast et al. 2021; Johnston et al. 2019). Commonly, a patient is considered multimorbid with at least two or three chronic diseases. In the present evaluation, we consider a multimorbidity as composed of three chronic diseases and conditions diagnosed at the geriatric clinic. If the subject has at least 3 diseases, the


subject is categorized as multimorbid (MM) and coded as 1. If fewer than 3, the subject is categorized as non-multimorbid (NMM) and coded as 0. The diagnostic process consists of three steps:

1. In the training set, we establish the optimal boundaries by the maximum values of normalized mutual information.
2. According to the boundaries, we arrange the markers and binarize them according to whether their values lie above or below the boundary. In this way, we obtain the binary vectors of the training set and the binary vectors of the test (diagnostic) set.
3. For the tested vector (a patient to be diagnosed), we find the closest vector from the training set. In this way, the diagnosis is performed by the “nearest neighbor rule.”

Several combinations of diseases were considered in this way. Here we present as an example the multimorbidity set composed of three medical conditions: Hypertension (HTN), Ischemic Heart Disease (IHD) and Malignancy. Table 12.3 illustrates an example of diagnosis, showing data for a single reference subject from the training set versus a single diagnosed subject from the test set. For this example of a multimorbidity test, Table 12.3 shows the 12 biomarkers with the highest normalized mutual information, arranged in descending order of normalized mutual information. The table also shows the values of normalized mutual information for each parameter and the corresponding optimal parameter boundary values. Examples are then presented for a single training set (reference) subject and a single test set (diagnosed) subject, showing their respective parameter values and the corresponding binarization values in relation to the boundary. The subject is binary-coded 0 if the parameter value is less than or equal to the boundary, and 1 if it is greater than the boundary. Females are assigned the code 1, and males are coded as 0.
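The binarization of step 2, and the multimorbidity coding described above (at least three chronic diseases coded as MM = 1), can be sketched as follows; the function names are illustrative, not from the chapter:

```python
import numpy as np

def binarize(values, boundaries):
    # Step 2: code each marker 0 if its value is less than or equal to
    # the optimal boundary, and 1 if it is greater (sex is coded
    # directly: female 1, male 0).
    return (np.asarray(values) > np.asarray(boundaries)).astype(int)

def multimorbidity_code(n_chronic_diseases):
    # A subject with at least 3 chronic diseases is multimorbid (MM = 1),
    # otherwise non-multimorbid (NMM = 0).
    return int(n_chronic_diseases >= 3)
```

With the diagnosed subject of Table 12.3, for example, Creatinine 1.09 against the boundary 1.08 yields code 1, while Age 81 against the boundary 87.56 yields code 0.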
The full columns of binary codes represent the binary vectors of the reference and the diagnosed subject, respectively. The column of binary codes for the diagnosed subject shows the binary vector nearest to the reference subject’s binary vector. Note that the codes for the first 5 “most important markers” (the 5 upper rows, showing the most “weighty” diagnostic parameters with the greatest values of normalized mutual information) are the same for the diagnosed subject and the reference subject. That is to say, the diagnosed subject’s values are close to the value pattern of the reference. The training set included 120 subjects: 60 multimorbid and 60 non-multimorbid. The test set included 500 subjects: 33 multimorbid subjects, of whom 20 were diagnosed correctly, and 467 non-multimorbid subjects, of whom 243 were diagnosed correctly. Thus, the diagnostic test results yielded a Sensitivity of (20/33) × 100% ≈ 60.6% and a Specificity of (243/467) × 100% ≈ 52%. The parameters indicated to have the highest informative value for diagnosing multimorbidity (the highest normalized mutual information) fit clinical intuitions. All these parameters are known to have not just particular symptom-specific or disease-specific effects, but systemic effects on health status, and they serve as systemic indicators of health problems. Thus, sex is known to be discriminative


Table 12.3 An example of multimorbidity test

Biomarker           NMI      Boundary   Reference subject     Diagnosed subject
                                        (training set)        (test set)
                                        value   binary code   value   binary code
Sex                 0.0462   0.5        1       1*            1       1*
ALT U/L             0.0356   22.56      12      0*            16      0*
CK U/L              0.0341   105.25     117     1*            155     1*
Uric acid mg/dL     0.0326   7.3        5.1     0*            6.1     0*
Cl mmol/L           0.0321   102.51     100     0*            102     0*
Creatinine mg/dL    0.0296   1.08       0.91    0             1.09    1
CRP mg/L            0.0285   102.03     202     1             18      0
Age                 0.0217   87.56      88      1             81      0
LDH U/L             0.0214   320.26     378     1             522     1
Glucose mg/dL       0.0192   98.85      145     1             94      0
Cholesterol mg/dL   0.0176   171.71     97      0             162     0
TSH µIU/mL          0.0169   2.09       1.17    0             2.56    1

The multimorbidity here is composed of Hypertension (HTN), Ischemic Heart Disease (IHD) and Malignancy. The table shows, for each biomarker, the normalized mutual information (NMI) and the optimal boundary, together with the parameter values of a single reference subject from the training set and a single diagnosed subject from the test set, and their binary codes after binarization (0 if the value is less than or equal to the boundary, 1 if above it).
*The same binarization code for the reference and the diagnosed subject shows that the two parameter values lie on the same side of the binarization boundary, which indicates a closeness of the reference and diagnosed patterns.

for life expectancy, general cardiovascular causes of morbidity and mortality, and frailty. Age (aging) is generally determinative of old-age multimorbidity and frailty. The ALT test (alanine aminotransferase), indicative of liver cell damage, may suggest general intoxication, heart and kidney insufficiency, and infections. The tests for Uric Acid and Creatinine, indicative of kidney damage or dehydration due to different diseases, may also suggest general intoxication. The CK test (creatine kinase level), indicative of muscle (especially heart) or brain damage, may indicate general or multiple injury. In addition, Lactate Dehydrogenase (LDH) may indicate multiple tissue injury due to various causes, such as infections, malignancy, or traumatic or ischemic injury. The CRP test (C-reactive protein) may indicate


systemic inflammation. Blood glucose, cholesterol and chloride (Cl) may indicate risks for cardiovascular diseases as well as for metabolic syndrome and multimorbidity (Blokh et al. 2017, 2019). Moreover, thyroid stimulating hormone (TSH) has systemic effects on aging metabolism. The information-theoretical method helps bring all those systemic parameters into a single diagnostic system for multiple aging-related diseases. Yet, it should additionally be emphasized that with another choice of diseases composing a multimorbidity, or when diagnosing a single disease using the same methodology, the set of the most informative diagnostic parameters may be different. In the future, it may also be possible to select the multimorbidity composition as being determinative for some major unifying clinical outcome, including the diseases that are the main causes of morbidity and mortality and consequently the most determinative for lifespan.

12.5 Utilizing Information-Theoretical Methods for Genomic Sequence Analysis

We would like to conclude by providing yet another example of applicability, emphasizing the tremendous potential of information-theoretical methodology in genomics and other types of omics research. Below we present some rationale and a methodological direction, as extensive exemplification would go beyond the feasible scope of this work (Blokh et al. 2020). The problem of analyzing symbolic sequences appears in many areas of research, such as “big data” (Wong 2019) and “dynamic systems” (Masoller et al. 2015). The most significant example of a symbolic sequence is the nucleotide sequence. Moreover, a nucleotide sequence is an interesting and important mathematical object. Of special importance is the task of clustering nucleotide sequences (James et al. 2018; Priness et al. 2007; Song et al. 2012; Androulakis et al. 2007). A nucleotide sequence is hereby referred to as a sequence whose elements assume the values A, C, G, T. The mathematical analysis of nucleotide sequences was first suggested by the physicist Gamow (1954). The problem of symbol relations in nucleotide sequences was first discussed by the physicist Yockey in the 1950s (Yockey et al. 1958). About 50 years later, in 2003, the mathematician Gelfand (2004) noted that “the use of mathematics in studying gene sequences is an adequate language.” This implied finding formal (mathematical) properties of gene nucleotide sequences. Yet, insufficient attention has been paid to this subject. The main method of investigating numeric sequences (or discrete numerical time series) is the construction and analysis of autocorrelation functions. However, the principal difference between numeric sequences and nucleotide sequences is that the nucleotides in a sequence take the symbolic values A, C, G, T.
12 The Utility of Information Theory Based Methods in the Research …

269

This means that the statistical apparatus cannot be used for the analysis of such sequences, insofar as statistics does not have theoretically justified measures of correlation between symbolic (discrete) random variables. The impossibility of utilizing theoretically justified statistical methods in genetics has been noted earlier (Scheffe 1999). Therefore, information theory, which has a solid theoretical justification, has been increasingly used in the study of biological data.

Currently, genes are classified according to phenotype: there are groups of genes associated with oncology, diabetes, longevity, etc. To the best of our knowledge, there is no classification of genes according to their "internal" properties, that is, according to the properties of DNA, a classification similar to the table of elements in chemistry, which classifies elements by atomic weight. It is possible that, in the future, researchers will construct a classification of genes by the properties of the corresponding information function of the gene's DNA, that is, by the relationships between DNA elements.

We suggest an algorithm for the construction and analysis of autocorrelation (information) functions of gene nucleotide sequences. Normalized mutual information is used here as a measure of correlation between discrete random variables. The information functions indicate the degree of structuredness of gene sequences. In the future, it may be possible to construct the information functions for diverse gene sequences and find significant differences between the information functions of genes of different types. It may be hypothesized that the features of the information functions of gene nucleotide sequences are related to the phenotypes of these genes.

Earlier, in the Section "Definitions," the concepts and properties of entropy, mutual information and normalized mutual information for discrete random variables were defined. Hereafter, these definitions will be used for the analysis of symbolic sequences. Let x(n) = (x(1), x(2), …, x(n), …) represent a discrete time series having symbolic values.
Let x(n + j) = (x(1 + j), x(2 + j), …, x(n + j), …) be the time series x(n) with a lag j. The auto-mutual information of the time series x(n) with a lag j equals:

I(x(n); x(n + j)) = H(x(n)) + H(x(n + j)) − H(x(n), x(n + j))

The normalized auto-mutual information of the time series x(n) with a lag j equals:

C(x(n); x(n + j)) = I(x(n); x(n + j)) / H(x(n + j)) = [H(x(n)) + H(x(n + j)) − H(x(n), x(n + j))] / H(x(n + j))

270

D. Blokh et al.

The normalized mutual information C(x(n); x(n + j)) is then calculated as a function of the lag j. We shall refer to the function F(j) = C(x(n); x(n + j)) as the information function of the discrete time series x(n).

Properties of the information function F(j):

1. 0 ≤ F(j) ≤ 1;
2. F(j) = 0 if and only if x(n) and x(n + j) are mutually independent;
3. F(j) = 1 if and only if there exists a functional relationship between x(n) and x(n + j).

Let {x1(n), x2(n), …, xk(n)} be a set of discrete time series whose elements are symbols, e.g. gene nucleotide sequences, with n = 1, 2, 3, …, where the maximum value of n for a sequence xi(n), 1 ≤ i ≤ k, equals the number of elements in that nucleotide sequence. The algorithm of distributing a set of time series {x1(n), x2(n), …, xk(n)} consists of three procedures:

1. Construction of an information function matrix;
2. Ranking of the columns of the information function matrix;
3. Application of a multiple comparisons method.

1. Construction of an information function matrix. For each time series xi(n), 1 ≤ i ≤ k, we construct the information function Fi(j), 1 ≤ i ≤ k, 1 ≤ j ≤ m,

where m is the number of lags in the information function. We obtain the k × m matrix [Fi(j)] of the values of the information functions, i.e. a matrix in which each row is the information function of the corresponding time series.

2. Ranking of the columns of the information function matrix. Each row of the matrix [Fi(j)] is the information function of a time series, and each column contains the values of the information functions corresponding to the same lag. For each column of the matrix [Fi(j)], we rank its entries, assigning rank 1 to the smallest entry of the column. We obtain the k × m matrix of ranks [ri(j)], with each column of the matrix containing ranks from 1 to k. We estimate the element interconnection of the i-th time series, as compared to the element interconnection of the other time series, by the sum of all the elements of the i-th row of the matrix [ri(j)]. Such an estimation allows us to use multiple comparisons of rank statistics for the comparison of time series interconnection.

3. Application of a multiple comparisons method. We compare the rank sums using the Newman-Keuls test (Glantz 1994).

An illustrative distribution of the information functions for a selection of genes often investigated in aging and cancer research has been considered earlier (Blokh et al. 2020). The results suggested plausible connections between the information functions of particular genes and some of their known phenotypes. Yet further corroboration of such connections, for these and other genes, will require extensive additional research (Blokh et al. 2020).

The above algorithm represents a new information theory based method for evaluating the level of structuredness of gene sequences (the information function) via the sequences' normalized mutual information. This new method may serve as an additional structural evaluation tool for genomic analysis, and for omics biomarker analysis generally. In the future, it may be possible to associate the gene structuredness evaluated by the present method with the expression and phenotype of the particular genes under consideration. Here we present the methodology for calculating gene structuredness; associating gene structuredness with gene expression and phenotypic function will be the task of future work. This further exemplifies the potential applicability of information theory-based approaches.
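As a concrete illustration, the information function F(j) of a single symbolic sequence can be computed in a few lines of code (a minimal sketch of the procedure described above; the artificial periodic sequence below is illustrative, not a real gene):

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy (in bits) of a symbolic sequence."""
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in Counter(seq).values())

def joint_entropy(a, b):
    """Joint entropy of two aligned symbolic sequences."""
    n = len(a)
    return -sum(c / n * log2(c / n) for c in Counter(zip(a, b)).values())

def information_function(seq, max_lag):
    """F(j) = I(x(n); x(n + j)) / H(x(n + j)) for lags j = 1 .. max_lag."""
    F = []
    for j in range(1, max_lag + 1):
        x, xj = seq[:-j], seq[j:]  # aligned pairs (x(n), x(n + j))
        i_mut = entropy(x) + entropy(xj) - joint_entropy(x, xj)
        h = entropy(xj)
        # a constant lagged sequence is trivially a function of x(n)
        F.append(i_mut / h if h > 0 else 1.0)
    return F

# Illustrative input: a perfectly periodic sequence is maximally structured
periodic = "ACGT" * 50
F = information_function(periodic, max_lag=4)
```

For a perfectly periodic sequence, x(n) determines x(n + j) exactly at every lag, so F(j) stays at its maximum of 1, in line with property 3 above.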

12.6 Conclusion

Here we have presented several examples of the utility of information-theoretical methods in biomedical studies generally, and in the studies of aging and longevity in particular. Generally, this utility derives from the theoretical justification and unique capabilities of information theory, such as the ability to evaluate non-linear relations, cumulative effects and optimal boundaries. We believe that, in the future, the theoretically grounded information-theoretical methodology will assume an ever-increasing role in the studies of aging and longevity and in their practical clinical applications. With increasing awareness and wider application of the unique capabilities of information-theoretical methodology, it may become a common assistive tool for data analysis and decision making by researchers and clinicians.

Compliance with ethical standards

Conflict of Interest The authors declare that they have no conflict of interest.

References

Acton C, Miller R (2009) SPSS for social scientists. Palgrave Macmillan, New York, pp 298–304
Androulakis IP, Yang E, Almon RR (2007) Analysis of time-series gene expression data: methods, challenges, and opportunities. Annu Rev Biomed Eng 9:205–228
Blokh D (2013) Information-theory analysis of cell characteristics in breast cancer patients. Int J Bioinf Biosci 3:1–5
Blokh D, Stambler I (2014) Estimation of heterogeneity in diagnostic parameters of age-related diseases. Aging Dis 5:218–225
Blokh D, Stambler I (2015a) Applying information theory analysis for the solution of biomedical data processing problems. Am J Bioinform 3(1):17–29
Blokh D, Stambler I (2015b) Information theoretical analysis of aging as a risk factor for heart disease. Aging Dis 6(3):196–207
Blokh D, Stambler I (2017a) The application of information theory for the research of aging and aging-related diseases. Prog Neurobiol 157:158–173
Blokh D, Stambler I (2017b) The use of information theory for the evaluation of biomarkers of aging and physiological age. Mech Age Dev 163:23–29


Blokh D, Afrimzon E, Stambler I, Korech E, Shafran Y, Zurgil N, Deutsch M (2006) Breast cancer detection by Michaelis-Menten constants via linear programming. Comput Methods Program Biomed 85:210–213
Blokh D, Stambler I, Afrimzon E, Shafran Y, Korech E, Sandbank J, Orda R, Zurgil N, Deutsch M (2007) The information-theory analysis of Michaelis-Menten constants for detection of breast cancer. Cancer Detect Prev 31:489–498
Blokh D, Zurgil N, Stambler I, Afrimzon E, Shafran Y, Korech E, Sandbank J, Deutsch M (2008) An information-theoretical model for breast cancer detection. Methods Inf Med 47:322–327
Blokh D, Stambler I, Afrimzon E, Platkov M, Shafran Y, Korech E, Sandbank J, Zurgil N, Deutsch M (2009) Comparative analysis of cell parameter groups for breast cancer detection. Comput Methods Program Biomed 94:239–249
Blokh D, Stambler I, Lubart E, Mizrahi EH (2017) The application of information theory for the estimation of old-age multimorbidity. Geroscience 39(5–6):551–556
Blokh D, Stambler I, Lubart E, Mizrahi EH (2019) An information theory approach for the analysis of individual and combined evaluation parameters of multiple age-related diseases. Entropy 21(6):572
Blokh D, Gitarts J, Stambler I (2020) An information-theoretical analysis of gene nucleotide sequence structuredness for a selection of aging and cancer-related genes. Genom Inform 18(4):e41
Blokh D, Stambler I, Gitarts J, Pinco E, Mizrahi EH (2021) Information-theoretical analysis of blood biomarkers for age-related hip fracture risk evaluation. Appl Med Inform 43(1):14–23
Blokh AS (1987) Graph schemes and algorithms. Vishaya Shkola, Minsk
Burgers AMG, Biermasz NR, Schoones JW, Pereira AM, Renehan AG, Zwahlen M, Egger M, Dekkers OM (2011) Meta-analysis and dose-response metaregression: circulating insulin-like growth factor I (IGF-I) and mortality. J Clin Endocrinol Metab 96(9):2912–2920
Butler RN, Sprott R, Warner H, Bland J, Feuers R, Forster M et al (2004) Biomarkers of aging: from primitive organisms to humans. J Gerontol A Biol Sci Med Sci 59(6):B560–567
Charlesworth B (2000) Fisher, Medawar, Hamilton and the evolution of aging. Genetics 156:927–931
Cramer H (1991) Mathematical methods of statistics. Princeton University Press, Princeton (Eighteenth printing, first published in 1946)
Ezcurra M, Benedetto A, Sornda T, Gilliat AF, Au C, Zhang Q et al (2018) C. elegans eats its own intestine to make yolk leading to multiple senescent pathologies. Curr Biol 28(16):2544–2556.e5
Ferrucci L, Gonzalez-Freire M, Fabbri E, Simonsick E, Tanaka T, Moore Z, Salimi S, Sierra F, de Cabo R (2020) Measuring biological aging in humans: a quest. Aging Cell 19(2):e13080
Freeman K, Geppert J, Stinton C, Todkill D, Johnson S, Clarke A et al (2021) Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy. BMJ 374:n1872
Fried LP, Walston J (1999) Frailty and failure to thrive. In: Hazzard WR, Blass JP, Ettinger WH, Halter JB, Ouslander JG (eds) Principles of geriatric medicine and gerontology, 4th edn. McGraw Hill, New York, pp 1387–1402
Galton F (1888) Co-relations and their measurement: chiefly from anthropometric data. Proc R Soc 45:135–145
Gamow G (1954) Possible mathematical relation between deoxyribonucleic acid and proteins. Biol Meddel Kongel Danske Vidensk Selsk 22:1–13
Gelfand IM (2004) Speech at the meeting of royal east research, September 3, 2003. Matematicheskoe Prosveshenie 3:13–14
Glantz SA (1994) Primer of biostatistics, 4th edn. McGraw-Hill, New York
Goldfarb A, Teodoridis F (2022) Why is AI adoption in health care lagging? Brookings Institution, Washington. https://www.brookings.edu/research/why-is-ai-adoption-in-health-care-lagging/. Accessed 1 July 2022


Hafezparast N, Turner EB, Dunbar-Rees R, Vodden A, Dodhia H, Reynolds B (2021) Adapting the definition of multimorbidity: development of a locality-based consensus for selecting included long term conditions. BMC Fam Pract 22:124
Hamilton WD (1966) The moulding of senescence by natural selection. J Theor Biol 12(1):12–45
Hamming RW (1986) Coding and information theory. Prentice Hall, Englewood Cliffs, New Jersey
Hornero R, Abásolo D, Escudero J, Gómez C (2009) Nonlinear analysis of electroencephalogram and magnetoencephalogram recordings in patients with Alzheimer's disease. Philos Trans A Math Phys Eng Sci 367:317–336
Huang Z, Wei Z, Zhang G (2018) RWBD: learning robust weighted binary descriptor for image matching. IEEE TCSVT 28(7):1553–1564
James BT, Luczak BB, Girgis HZ (2018) MeShClust: an intelligent tool for clustering DNA sequences. Nucl Acids Res 46:e83
Johnston MC, Crilly M, Black C, Prescott GJ, Mercer SW (2019) Defining and measuring multimorbidity: a systematic review of systematic reviews. Eur J Public Health 29(1):182–189
Kleene SC (1956) Representation of events in nerve nets and finite automata. In: Shannon CE, McCarthy J (eds) Automata studies (annals of mathematics studies no. 34). Princeton University Press, Princeton, pp 3–41
Lehallier B, Gate D, Schaum N, Nanasi T, Lee SE, Yousef H et al (2019) Undulating changes in human plasma proteome profiles across the lifespan. Nat Med 25:1843–1850
Li J, Burke EK, Qu R (2011) Integrating neural networks and logistic regression to underpin hyper-heuristic search. Knowl Based Syst 24:322–330
Lim SS, Vos T, Flaxman AD, Danaei G, Shibuya K, Adair-Rohani H et al (2012) A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: a systematic analysis for the global burden of disease study 2010. Lancet 380:2224–2260
Lipsitz LA, Goldberger AL (1992) Loss of 'complexity' and aging: potential applications of fractals and chaos theory to senescence. JAMA 267:1806–1809
Liu CJ, Huang CF, Huang RY, Shih CS, Ho MC, Ho HC (2014) Solving reality problems by using mutual information analysis. Math Prob Eng 2014:631706
Mangasarian OL, Street WN, Wolberg WH (1995) Breast cancer diagnosis and prognosis via linear programming. Oper Res 43(4):570–577
Masoller C, Hong Y, Ayad S, Gustave F, Barland S, Pons AJ et al (2015) Quantifying sudden changes in dynamical systems using symbolic networks. New J Phys 17:023068
McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 5:115–133
Medawar PB (1952) An unsolved problem of biology. HK Lewis, London
Molina-Pena R, Alvarez MM (2012) A simple mathematical model based on the cancer stem cell hypothesis suggests kinetic commonalities in solid tumor growth. PLoS ONE 7:e26233
Nicolis G, Prigogine I (1990) Exploring complexity. W.H. Freeman, New York
Nie C, Li Y, Li R, Yan Y, Zhang D, Li T et al (2022) Distinct biological ages of organs and systems identified from a multi-omics study. Cell Rep 38(10):110459
Pearl R (2019) Artificial Intelligence in healthcare: what is versus what will be. Health Manag 19:104–107
Podgorelec V, Kokol P, Stiglic B, Rozman I (2002) Decision trees: an overview and their use in medicine. J Med Syst 26:445–463
Preckova P, Zvarova J, Zvara K (2012) Measuring diversity in medical reports based on categorized attributes and international classification systems. BMC Med Inform Decis Mak 12:31
Priness I, Maimon O, Ben-Gal I (2007) Evaluation of gene-expression clustering via mutual information distance measure. BMC Bioinform 8:111
Quastler H (1958) The domain of information theory in biology. In: Yockey HP (ed) Symposium on information theory in biology, Gatlinburg, Tennessee, October 29–31, 1956. Pergamon Press, New York, pp 187–196


Radtke MA, Midthjell K, Nilsen TI, Grill V (2009) Heterogeneity of patients with latent autoimmune diabetes in adults: linkage to autoimmunity is apparent only in those with perceived need for insulin treatment: results from the Nord-Trøndelag Health (HUNT) study. Diabetes Care 32:245–250
Renyi A (1959) On measures of dependence. Acta Math Acad Sci Hungar 10:441–451
Rovai AP, Baker JD, Ponton MK (2014) Social science research design and statistics: a practitioner's guide to research methods and IBM SPSS analysis, 2nd edn. Watertree Press, Chesapeake, pp 367–370
Scheffe H (1999) The analysis of variance. John Wiley & Sons, Hoboken, New Jersey
Shannon CE, Weaver W (1949) Mathematical theory of communication. University of Illinois Press, Urbana
Smith KR, Hanson HA, Hollingshaus MS (2013) BRCA1 and BRCA2 mutations and female fertility. Curr Opin Obstet Gynecol 25(3):207–213
Song L, Langfelder P, Horvath S (2012) Comparison of co-expression measures: mutual information, correlation, and model based indices. BMC Bioinform 13:328
Stambler I (2017a) Recognizing degenerative aging as a treatable medical condition: methodology and policy. Aging Dis 8(5):583–589
Stambler I (2017b) The historical evolution of evolutionary theories of aging. In: Longevity promotion: multidisciplinary perspectives. Longevity History, Rishon Lezion. http://www.longevityhistory.com/. Accessed 1 July 2022
Stambler I, Moskalev A (2021) Editorial: clinical evaluation criteria for aging and aging-related multimorbidity. Front Genet 12:764874
Stambler I, Alekseev A, Matveyev Y, Khaltourina D (2022) Advanced pathological aging should be represented in the ICD. Lancet Healthy Longev 3(1):E11
Stambler I, Blokh D (2017) The use of information theory for the evaluation of biomarkers of aging and physiological age to predict aging-related diseases and frailty. In: Longevity promotion: multidisciplinary perspectives. Longevity History, Rishon Lezion. http://www.longevityhistory.com/. Accessed 1 July 2022
The Alan Turing Institute (2022) Data science and AI in the age of COVID-19. https://www.turing.ac.uk/sites/default/files/2021-06/data-science-and-ai-in-the-age-of-covid_full-report_2.pdf. Accessed 1 July 2022
WHO (2022) Premature mortality from non-communicable disease. https://www.who.int/data/gho/indicator-metadata-registry/imr-details/3411. Accessed 1 July 2022
Williams GC (1957) Pleiotropy, natural selection and the evolution of senescence. Evolution 11:398–411
Wong KC (2019) Big data challenges in genome informatics. Biophys Rev 11:51–54
Yockey HP, Platzman RL, Quastler H (1958) Symposium on information theory in biology, 1956 Oct 29–31, Gatlinburg, Tennessee. Pergamon Press, New York
Zvarova J, Studeny M (1997) Information theoretical approach to constitution and reduction of medical data. Int J Med Inform 45:65–74

Chapter 13

AI for Longevity: Getting Past the Mechanical Turk Model Will Take Good Data

Leonid Peshkin and Dmitrii Kriukov

Abstract We examine the promise and progress of AI for healthy longevity, discuss the reasons for successes and failures, and anticipate the future. AI is about generalization: extrapolation from high-quality, curated data. Such data largely does not exist today in biology in general, or in aging research in particular. In order for AI to help advance the field, we must define the system in a formal and quantitative way and build platforms for collecting large, consistent sets of measurements under perturbations.

Keywords Regularized regression · Mechanical Turk · Aging clocks · Data mining

Artificial Intelligence is widely expected by the general public to make a huge impact on the field of biomedical research in general and longevity in particular. The enthusiasm, largely driven by sensationalist science journalism and futuristic dystopian literature, pictures AI as a kind of robotic scientist, at least at the intelligence level of a graduate student but easily scaled up, which conducts scientific inquiry, executes laboratory experiments, collects measurements and combines the new data with existing knowledge to advance the state of the art. While AI does play an important role, the present reality is drastically different from that vision: AI remains a highly specialized tool, or rather a toolkit, in the hands of human intelligence. Many of the widely celebrated success stories today are not unlike the story (publicized by Amazon) of the Mechanical Turk (Standage 2003) from the late eighteenth century: an impressive feat of engineering that misled the public by passing off a hidden human expert as an automated chess master. The mechanism is real, and the accomplishments are impressive; they are just not the kind of mechanism and accomplishments the public is led to believe they are!

L. Peshkin (B)
Systems Biology, Harvard Medical School, Boston, MA 02115, USA
e-mail: [email protected]

D. Kriukov
Skolkovo Institute of Science and Technology, Moscow, Russia

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_13


We would argue that the three most impressive and best recognized accomplishments of AI in biology today are not free of Mechanical Turk symptoms. While mammography-based breast cancer risk assessment (Yala et al. 2019), new antibiotic discovery (Stokes et al. 2020) and the prediction of 3D protein structure (Jumper et al. 2021) all brandish "deep learning", critically, they all have something else in common: Big Beautiful Data, painstakingly created, annotated and quality-controlled by a giant effort of natural intelligence, uniquely applied to each task. Why is it that, throughout the centuries, the same result is celebrated much more if it was obtained by a Mechanical Turk than by a human expert? The answer lies not in the result itself but in the minds of gullible enthusiasts and investors mistaking it for a proof of principle, the hope being that now that "we have figured out how to do it", a cornucopia of new results will follow and the success will scale up. This text will help illustrate that, unfortunately, such progress remains a distant future for the field of longevity, while the disheartening disappointment with AI in general has been widely covered in recent articles in The Economist, Forbes and Institutional Investor (Igelsböck 2021; Calvello and Svoboda 2019; Sawhney 2020; Blauw 2019; The Economist 2020; Strickland 2019). Notably, key industry players like Google, Intel, Microsoft and IBM have invested heavily in applying AI to generalize in the areas of radiology, imaging and disease diagnosis, but, as one reporter put it, "the doctors are still waiting" (Strickland 2019). What distinguishes AI from a mere repository of knowledge? We have previously refuted Ned Block's famous argument against the validity of the Turing test: the argument that a giant tape containing all possible responses of the proverbial Aunt Bertha would pass the test while clearly not being genuinely intelligent (Savova and Peshkin 2007).
Essentially, AI has to display a generalization capacity. Artificial Intelligence is above all about artificially reproducing the key aspect of natural intelligence: the ability to observe patterns and to generalize from limited experience onto previously unexperienced contexts. It is about building a model which recapitulates phenomena and thus could replace expensive and extensive experimentation on the real object with simulations, being able to answer "what-if" questions with some precision. Such a model might not ultimately be expressed as a set of crisp, human-readable mathematical formulae, but rather as a large set of tuned parameters in an artificial neural net or some other, not yet invented, representation. In the biology of aging, such a model must provide a way to assess the current state of an organism and forecast its lifespan and healthspan in a stable environment, outside of a major perturbation, and then go further to allow for perturbations and adjust the predictions accordingly.

How would we construct such a model? A dominant direction today is learning from so-called Big Data which was not created with a specific task in mind: datasets which resulted from profiling genomes and phenotypes, medical records, and longitudinal studies, e.g. the Framingham Study (Mahmood et al. 2014). Such data is often combined from numerous disjoint sources, each bringing its own bias, and is generally inappropriate for building causal models. An alternative, preferred approach is to learn models from data obtained under interventions or perturbations (Molinelli et al. 2013). The idea is to collect measurements at the


micro-level (e.g. metabolites, lipids and proteins) and the macro-level (behavior, physical and cognitive fitness), build a model, and then search the vast combinatorial space of interventions for those which drive the system towards some goal state.

It is important to figure out what a model of aging should specifically reflect. There are many proposed frameworks of aging, such as the Hallmarks of Aging, the Strategies for Engineered Negligible Senescence (SENS), and the deleteriome (Gladyshev 2016). Which, if any, of these models reflects the reality of aging? Which of the observed hallmarks of aging, from the molecular to the organism level, are correlates and which are causes of aging is hard to judge. The biology of aging has yet to become an exact science. A notion of "model" per se implies a certain level of quantitative understanding of the modeled phenomenon which allows for extrapolation of the numerical values. Today, there is no agreement in the field on what constitutes a useful quantitative definition of "aging". One such definition is "an increase of the hazard rate (i.e. the probability of dying) with time", admittedly a very mathematical notion: precise, yet not biologically concrete. Other definitions make use of various "clocks": quantitative assessments of the state of molecular, tissue and organism level systems, such as the fraction of insoluble collagen (Vafaie et al. 2013) or the combined state of DNA methylation sites. Some of these measurements merely reflect the aging process while others perhaps drive it. Such clocks can be measured and potentially used to judge how rapidly an organism is aging, or whether it is aging in reverse given some interventions.
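The hazard-rate definition just quoted can be made concrete with the classical Gompertz law, under which the hazard grows exponentially with age (a minimal sketch; the parameter values a and b below are illustrative, only roughly in the range reported for adult human mortality):

```python
import math

# Gompertz hazard: h(t) = a * exp(b * t), i.e. "aging" as a hazard rate
# that increases with time. a is the initial hazard (per year), b the rate
# of increase; both values here are illustrative, not fitted to any data.
a, b = 3e-5, 0.085

def hazard(t):
    """Instantaneous yearly risk of death at age t."""
    return a * math.exp(b * t)

def survival(t):
    """Probability of surviving to age t: S(t) = exp(-(a/b) * (exp(b*t) - 1))."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1))

# Under this law the hazard doubles every ln(2)/b years (~8 years here),
# the well-known "mortality rate doubling time".
doubling_time = math.log(2) / b
```

This is one way to make "an increase of hazard rate with time" operational: an intervention would count as slowing aging if it reduced b, not merely a.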
A model is a seemingly simple linear one, the training is done using what’s known as regularized regression—regression with a penalty for a number of regressors to ensure that as few as possible informative sites are used. The fact that such models can be created is in itself remarkable and means that DNA methylation reflects aging processes. In order for these models to become directly useful, we have to connect methylation to the other variables and most importantly to the interventions like diet and drugs, to understand causal links. Although regularized linear regression remains an overwhelmingly popular way of constructing aging clocks (Xia et al. 2021), there have been attempts to also apply more advanced modern AI approaches more reminiscent of modern AI. For example, training of deep neural networks (DNN) which starts with a pre-selection of features—that is, methylation sites in the case of a methylation clock (Galkin et al. 2021; de Lima Camillo et al. 2022), protein-coding genes in the case of a transcriptomic clock (Holzscheck et al. 2021; Mamoshina et al. 2018), or the intestinal microbiota metagenome in the case of a microbiome clock (Galkin et al. 2020; Gopu et al. 2020). A typical DNN architecture used is a feed-forward network, an architecture that refers us to the notorious Rosenblatt perceptron having simple and robust nonlinear activation function ReLU (rectified linear unit). Characteristically, in most cases, such an architecture, together with reasonable data preprocessing and feature engineering, is quite enough—here we are favored by the universal approximation theorem (Hornik et al. 1989). On the other hand, works are pleasantly surprising,

278

L. Peshkin and D. Kriukov

where they try to endow such an ancient architecture of a neural network with biological properties, introducing a dropout based on the functional connection of genes within an ontological term (Holzscheck et al. 2021). It is worth noting separately attempts to construct aging clocks directly from data with excellent interpretive power, namely blood clocks (Putin et al. 2016; Avchaciov et al. 2020). In these works we are again faced with the use of feed-forward models trying to extract complex non-linear relationships with aging from the parameters of complete blood counts analysis familiar to every therapist. It should be understood that in each of the cases of building aging clocks described above, the very ability to predict chronological age can hardly be beneficial for us— we ourselves are able to predict it with the help of our eyes from a photograph [and sometimes even special clock is trained for this (Bobrov et al. 2018)]. The authors themselves, as a rule, argue that the variability of the predicted chronological age may reflect the “true biological age” of the organism and even make attempts to extract this biological age from the model residuals. For example, a predicted higher biological age compared to the chronological age is called “aging acceleration” and vice versa. Biological age, in contrast to chronological age, has some potential to become a modern diagnostic tool, and a number of authors emphasize this importance (Zhavoronkov et al. 2019; Bell et al. 2019). On the other hand, once trained, an AI model is of great interest to the researcher in itself. Those features that the model used to build an accurate prediction can lead us to understand the mechanisms of aging and suggest new directions for research (at least we believe in this). Is aging a one-way process or is it flexible and amenable to interventions? It is probably both. 
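The regularized-regression recipe behind such clocks can be sketched on synthetic data (an illustrative toy only: a hand-rolled lasso via coordinate descent on fabricated "methylation" values; real clocks use elastic net over thousands of CpG sites and real tissue samples, and all names and parameters below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data: 200 samples x 50 "CpG sites"; only the first 5 sites
# drift linearly with age, the rest are pure noise (fabricated, not real data).
n_samples, n_sites, n_informative = 200, 50, 5
age = rng.uniform(0, 90, n_samples)
slopes = rng.uniform(0.003, 0.008, n_informative)   # methylation change per year
X = rng.normal(0.5, 0.05, (n_samples, n_sites))     # baseline "beta values"
X[:, :n_informative] += np.outer(age, slopes)

def lasso(X, y, lam, n_iter=200):
    """L1-regularized least squares via coordinate descent (soft-thresholding)."""
    Xc, yc = X - X.mean(0), y - y.mean()
    w = np.zeros(X.shape[1])
    col_sq = (Xc ** 2).sum(0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            # partial residual with coordinate j removed, then soft-threshold
            r = yc - Xc @ w + Xc[:, j] * w[j]
            rho = Xc[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w, y.mean() - X.mean(0) @ w

w, intercept = lasso(X, age, lam=50.0)
predicted_age = X @ w + intercept
```

The L1 penalty zeroes out most of the candidate sites, leaving a small sparse panel: exactly the "as few informative sites as possible" behavior of clock calibration described above.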
In most known to us natural circumstances for a given organism it is a one-way process, but it also spears to be flexible and amenable to artificial intervention. What might such interventions be like? Imagine a process not unlike a beauty salon, in which person does their nails and hair and gets an occasional facelift; all of these are tradeoffs, even if people do not recognize it. Beauty treatments make you look younger at the moment, but cosmetics products may poison your skin and accelerate actual aging. There is evidence of such tradeoffs across organisms in nature; extending lifespan in many species can be accomplished at the expense of reproduction, and in cold-blooded organisms. The lifespan can be increased several-fold by just cooling the environment down or slowing down metabolic processes in other ways. If different parts of organisms react differentially to such perturbations, rejuvenation might result from a particular regime of feeding, light cycle and temperature change, and exhausting all combinations to find the optimal ones can only be done in simulation, provided a good model. The notion that lifespan is manipulatable in principle follows from well-known observations. There are closely related species with drastically different lifespans, the germ-line reset, the dependency of lifespan on diet, and other obvious environmental conditions, such as ambient temperature for cold-blooded animals. We got some clues from studying and getting the complete genome of the naked mole rat (Kim et al. 2011), which lives ten times longer than many closely related rodent species. One relatively new model organism in aging is Daphnia—a small aquatic organism that goes from infancy to frailty in one month. An organism is created from a single

13 AI for Longevity: Getting Past the Mechanical Turk Model Will Take …

cell, forms organs, grows, matures, procreates by itself (it is parthenogenetic), then exists for a while in a perfect environment with the right temperature, water quality, nutrients and light, only to decay and fall dead, all in about one month. Although an invertebrate, this organism is not all that different from humans: it has readily recognizable common anatomy and many homologous cell types. The muscle, heart, blood, innate immune system, gut and auxiliary digestive system, eyes, and other sensory organs made of sensory and control neurons are easily observed under an ordinary dissection microscope. There are signs of aging similar to those known in people, such as loss of vision, changes in bone density, hardening of blood vessels, etc. (Cho et al. 2022). With such a short lifespan, we can manipulate the conditions with a variety of perturbations in order to build a detailed understanding and a causal picture of aging, running experiments in large populations with sample sizes that support conclusions at very high statistical significance. Even though there are related initiatives (CITP website), surprisingly such a platform has not been developed yet. We obtained strong feasibility evidence for this approach and recently reported the design of our scalable platform for intervention testing in Daphnia (Cho et al. 2022), where we cover the issues of scalability, feasibility, and, most importantly, the superiority of Daphnia as a model organism over other model organisms, as well as the portability of the outcomes to human aging. Separately, we demonstrated a lack of age-related respiratory changes in Daphnia (Anderson et al. 2022). Demonstrating the potential to explore altered drug-exposure timing, we applied the platform to characterize striking effects of "early life" rapamycin application (Shindyapina et al. 2022); crucially, this work established a complex homologous phenotype in reaction to rapamycin between Daphnia and mice.
Another manuscript (Anderson et al. 2021) pertains to the so-called "inverse Lansing effect", i.e., maternal age and provisioning affecting daughters' longevity. This platform in itself demonstrates the dual application of AI in life science in general and the biology of aging in particular: on the one hand, low-level applications of AI at the instrument level allow the raw data to be collected effectively at scale; on the other, AI is applied to the resulting data at a high level in order to model the system as a whole. The first is exemplified by such tasks as identifying the animals in the field of view and discriminating between the animals and clumps of food or reflections; by telling a live animal from a corpse or assigning sex to an individual. The second is exemplified by building a physiological clock which uses many features of motion and general appearance to model chronological and biological age. Eventually, we plan to build a "health model" in a rigorous sense of the word. For example, we and others already see that calorie intake has a strong effect on the lifespan of Daphnia; yet, even here, we do not have a clear, causal picture. Our approach to teasing out causality builds upon machine learning (Gujral et al. 2014b) and was developed and published with a focus on cancer and metastasis (Gujral et al. 2014a), specifically cell motility. The idea is based upon the same famous class of machine learning models known as "regularized regression", which is also used in making epigenetic clocks. We use poly-specific drugs (a.k.a. polypharmacology) with many well-characterized molecular targets as "twenty questions" (Dutchen 2022; Rata et al. 2020) in order to systematically explore which pathways do and do not affect


L. Peshkin and D. Kriukov

a phenotype—aspects of health and lifespan in our case. Unlike with a drug screen, we do not expect to find a magic pill among thousands or hundreds of thousands of candidates; rather, we use dozens of specifically selected drugs and respective doses to get a complex system to reveal its modularity to us by targeting modules in a disjointed fashion. AI in aging research is still in its early stages; how will it develop in the next decade, and what needs to happen for it to become an optimal tool in the fight against aging and age-related diseases? For AI to become applicable to aging and biology in general, the data standards in biology must catch up to the data standards and quality of traditional AI domains. So far, AI, whether deep or otherwise, has performed impressively within very constrained and homogeneous settings, such as board games. The number and types of objects are well defined, the data is curated, all situations are mostly at the same scale, and so forth. Even in voice recognition, self-driving cars, and robotics, the environments are extremely consistent and curated. Biology is very different in this respect; or, rather, our perception of biology at this point is different. We have not found invariant ways to describe phenomena, so any context is essentially unique. Let's look at one simple question: lifespan. For most species, we only have anecdotal, unreliable data. We would want to know how long species live in a setting where there is no predation and death comes "of old age". That only happens at the zoo, and zoo data is a pretty good source, but making these records consistent across various countries is tricky (Weigl 2005). It is perhaps shocking that even the answer to the question of how long people live has a lot of room for improvement. What's the maximum registered lifespan? Jeanne Calment's famous record of 122 years has recently come into question (Zak 2019). What is the median lifespan?
The answer depends on the country and many other factors; consider that American life expectancy has been slightly falling in recent years (Khazan 2017). It is not even clear whether the overall lifespan distribution is a Gompertz distribution: there is a discussion of a flattening of the hazard rate in old age, which is seen in other species such as flies, but is there no flattening in humans? In some countries and contexts doctors avoid procedures when their perceived risk is high, as failure reflects negatively on the respective medical institution; this would be considered "useless care". In some contexts, social workers push relatives to "pull the plug" to "avoid needless suffering", exaggerating risks and downplaying chances of recovery; nurses and unskilled helpers quietly withdraw medical care and assistance, projecting their own religious beliefs about the afterlife and other superstitions onto what's good for the patient. Such norms are country-specific and naturally skew lifespan statistics; we ignore this component of mortality risk in old age, which has to do with the medical system's bias against elderly patients. Another example involves data on the effects of drugs on lifespan. There is a vast literature, and databases attempt to agglomerate the results (Barardo et al. 2017). However, not all the parameters are reflected, and often the same drug and dose are stated to have different effects in the same species. If you take the literature data (Barardo et al. 2017) seriously, 5% of all pharmacological interventions extend lifespan. About 1500 mostly mild but statistically significant lifespan-increasing drug assays have been cataloged to date, using over 560 drugs in 30 species (Barardo et al.
2017), mostly in worm and fly. This suggests an opportunity to contrast and compare the effects and mechanisms of many pharmacological perturbations, to drastically accelerate and focus the search for anti-aging interventions, as has already been done in the DrugAge project (Barardo et al. 2017; Lucanic et al. 2013). However, a closer look reveals that the data collected there is inconsistent and full of errors. Searching DrugAge for caffeine produces 31 records, where results for worms at a dose of 7.5 mM are reported to decrease lifespan by 11.6% in one study and to increase it by 19.6% in another (both marked "significant"). There are 20 entries for rapamycin in mice, showing both an 8% and a 36.7% median lifespan increase in the same strain at the same 128 ppm dose. Clearly, expecting AI to build models based on such aggregated data is naive. Drugs themselves are surprisingly hard to study because there is no consistent, widely available information on which drug is called what or has which chemical structure and CAS identifier; there are rampant inconsistencies and mis-annotations. Getting the "same drug" from two different vendors leads to different results, because the drugs have different purity, etc. Overall, the vision that we can fund large optimized data factories for producing Big Data in biology and then unleash AI on it to discover how biology works has not paid off, probably for very fundamental reasons. We do not know where to look for invariants in biology, so measuring, e.g., gene and protein expression in a cancer cell line and assuming that your measurement captures the "normal" state and the reaction to perturbation by drugs is naive; many factors influence what things look like (time of day, cell density, and microenvironment are just a few), so tomorrow you can remeasure and get very different results (Haibe-Kains et al. 2013).
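A first automated curation pass of the kind this inconsistency calls for takes only a few lines: group records by (species, drug, dose) and flag assays whose reported effects disagree even in sign. The records below are stand-ins echoing the examples in the text, not actual DrugAge rows:

```python
from collections import defaultdict

# (species, drug, dose, reported lifespan change in %)
records = [
    ("worm", "caffeine", "7.5 mM", -11.6),
    ("worm", "caffeine", "7.5 mM", 19.6),
    ("mouse", "rapamycin", "128 ppm", 8.0),
    ("mouse", "rapamycin", "128 ppm", 36.7),
]

by_assay = defaultdict(list)
for species, drug, dose, effect in records:
    by_assay[(species, drug, dose)].append(effect)

# assays whose reported effects disagree in sign (e.g. caffeine above)
conflicts = {k: v for k, v in by_assay.items() if max(v) > 0 > min(v)}
```

The rapamycin entries disagree only in magnitude, so a real pipeline would also flag effect-size spreads beyond some tolerance, but even the sign check above would catch the caffeine contradiction automatically.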
So, pulling the results of multiple studies from these giant data repositories and hoping to find things in common is doomed to failure. In many cases AI methods are in stealth mode, as part of broader advances in research: automated microfluidics, high-throughput assays, and other automation such as quantitative mass-spec proteomics and single-cell transcriptomics (Klein et al. 2015; Wuhr 2013). These are very promising technologies, but here, again, one has to be very careful with the interpretation of the data, and particularly careful when expecting to merge data from multiple, disjointed studies. In single-cell studies, everything depends on which tissue you work with; how successfully you managed to rapidly dissociate the sample into single cells without killing a lot of cells, particularly in some biased way that masks some cell types entirely; whether (or rather to what extent) cells lyse in the device; and with what parameters you run the device and make and sequence libraries. The same goes for proteomics: it's easy to miss whole classes of proteins, such as low-abundance transcription factors or lipid-soluble proteins. In both cases, splitting the same exact sample in half and running it through a pipeline would often give you dramatically different outcomes. So while there is indeed some degree of automation, and automation is helpful, it is only practical to a very small extent, and even where it is used, it has to be used with extreme care. One example concerns assigning confidence to quantitative mass spectrometry measurements of relative protein abundance. Comparing protein expression across samples is currently done by reporting the most likely value, which does not allow
us to notice significant but small changes or to rank candidate genes for follow-up research in a statistically sound way. We developed a rigorous statistical model (Peshkin et al. 2019b) which allows us to confidently judge a shift of protein expression down to 1%. This is an application of a Bayesian statistical approach borrowed from machine learning, and it will empower many new studies and allow us to re-analyze already published data to reach new findings. We will immediately plug this into our aging project, which involves life-long profiling of protein levels in Daphnia (Peshkin et al. 2019a). In conclusion, the biggest bottleneck holding us back from making rapid progress in the field is the lack of ways to collect ML-ready data (Sagar 2021). We need platforms for the uniform profiling of perturbations. We need consistent, clean information on drugs and lifespan across species. Perhaps most importantly, we need a platform for crowdsourcing efforts and doing citizen science. There is enormous enthusiasm, which is largely wasted in social media groups that discuss food supplements and cheer. I know many bright, generous, and resourceful non-specialists who would be happy to contribute their time to curating data, coding up useful snippets of software, and following instructions to collect samples. Organizing this force of nature will take a TaskRabbit or Mechanical Turk kind of platform. It is hard to believe that to this day, in the United States, someone cannot get their own blood work done "out of curiosity". People are already getting armed with technology to run some tests at home (Brio website, Healthy.io website). If we trust people to spit into a tube to get their DNA sequenced, we can surely develop instructions to run what would essentially amount to a "citizen-run clinical trial" in which interventions are consistently tested.
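The Bayesian judgment of small expression shifts mentioned above can be illustrated with a textbook sketch, not the actual model of Peshkin et al. (2019b): under i.i.d. normal noise with a flat prior, the posterior for the mean log2 fold-change of a protein is a scaled Student-t, yielding a credible interval that can resolve a roughly 1% shift given enough precise replicates.

```python
import numpy as np
from scipy import stats

def shift_credible_interval(log2_ratios, cred=0.95):
    """Credible interval for the mean log2 fold-change of one protein,
    assuming i.i.d. normal noise and a flat (noninformative) prior."""
    r = np.asarray(log2_ratios, dtype=float)
    n, mean, sd = len(r), r.mean(), r.std(ddof=1)
    half = stats.t.ppf((1 + cred) / 2, df=n - 1) * sd / np.sqrt(n)
    return mean - half, mean + half

# eight precise replicates of a ~1% shift (log2(1.01) is about 0.014)
lo, hi = shift_credible_interval(
    [0.013, 0.014, 0.015, 0.014, 0.0145, 0.0135, 0.014, 0.015])
# here the whole interval lies above zero, so the tiny shift is credible
```

With noisier replicates the interval widens to include zero, which is exactly the signal one wants for ranking candidates for follow-up.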
All in all, whether the atmosphere in the field is one of competition or of cooperation really depends on the perception of whether the field is very young and far from mature results and "products", or very close to getting the first drugs and therapies to market. That is mainly why a false sense of accomplishment is very harmful to progress. The field is in its infancy, and it is way too early to worry about marketing, patents and profits and to build walls; we have to find ways to be very open and collaborative.

Compliance with ethical standards

Conflict of Interest The authors declare that they have no conflict of interest.

References

Adzhubei IA, Schmidt S, Peshkin L, Ramensky VE, Gerasimova A, Bork P, Kondrashov AS, Sunyaev SR (2010) A method and server for predicting damaging missense mutations. Nat Methods 7(4):248–249. https://doi.org/10.1038/nmeth0410-248
Anderson CE, Homa C, Jonas-Closs RA, Peshkin LM, Kirschner MW, Yampolsky LY (2021) Inverse Lansing effect: maternal age and provisioning affecting daughters' longevity and male offspring production. Cold Spring Harbor Laboratory, New York

Anderson CE, Ekwudo MN, Jonas-Closs RA, Cho Y, Peshkin L, Kirschner MW, Yampolsky LY (2022) Lack of age-related respiratory changes in Daphnia. Biogerontology 23(1):85–97. https://doi.org/10.1007/s10522-021-09947-6
Avchaciov K, Antoch MP, Andrianova EL, Tarkhov AE, Menshikov LI, Burmistrova O, Gudkov AV, Fedichev PO (2020) Identification of a blood test-based biomarker of aging through deep learning of aging trajectories in large phenotypic datasets of mice. Cold Spring Harbor Laboratory, New York
Barardo D, Thornton D, Thoppil H, Walsh M, Sharifi S, Ferreira S, Anžič A, Fernandes M, Monteiro P, Grum T, Cordeiro R, De-Souza EA, Budovsky A, Araujo N, Gruber J, Petrascheck M, Fraifeld VE, Zhavoronkov A, Moskalev A, de Magalhães JP (2017) The DrugAge database of aging-related drugs. Aging Cell 16(3):594–597. https://doi.org/10.1111/acel.12585
Bell CG, Lowe R, Adams PD, Baccarelli AA, Beck S, Bell JT, Christensen BC, Gladyshev VN, Heijmans BT, Horvath S, Ideker T, Issa J-PJ, Kelsey KT, Marioni RE, Reik W, Relton CL, Schalkwyk LC, Teschendorff AE, Wagner W, Zhang K, Rakyan VK (2019) DNA methylation aging clocks: challenges and recommendations. Genome Biol 20(1):1824. https://doi.org/10.1186/s13059-019-1824-y
Blauw S (2019) Banking on AI to fix all our problems? Hate to disappoint you. In: The Correspondent. https://thecorrespondent.com/71/banking-on-ai-to-fix-all-our-problems-hate-to-disappoint-you/1443955755-75030542. Accessed 28 Jun 2022
Bobrov E, Georgievskaya A, Kiselev K, Sevastopolsky A, Zhavoronkov A, Gurov S, Rudakov K, del Pilar Bonilla Tobar M, Jaspers S, Clemann S (2018) PhotoAgeClock: deep learning algorithms for development of non-invasive visual biomarkers of aging. Aging 10(11):3249–3259. https://doi.org/10.18632/aging.101629
Brio (2022) https://www.getbrio.com. Accessed 28 Jun 2022
Calvello A, Svoboda L (2019) The overplayed, turbohyped, and underwhelming world of artificial intelligence. Institutional Investor, London
Cho Y, Jonas-Closs RA, Yampolsky LY, Kirschner MW, Peshkin L (2022) Intelligent high-throughput intervention testing platform in Daphnia. Aging Cell 21(3):571. https://doi.org/10.1111/acel.13571
CITP (2022) CITP. https://citp.squarespace.com. Accessed 28 Jun 2022
de Lima Camillo LP, Lapierre LR, Singh R (2022) A pan-tissue DNA-methylation epigenetic clock based on deep learning. NPJ Aging 8(1):85. https://doi.org/10.1038/s41514-022-00085-y
Dutchen BS (2022) Striking a chord. In: Harvard Medical School. https://hms.harvard.edu/news/striking-chord. Accessed 28 Jun 2022
Galkin F, Mamoshina P, Aliper A, Putin E, Moskalev V, Gladyshev VN, Zhavoronkov A (2020) Human gut microbiome aging clock based on taxonomic profiling and deep learning. iScience 23(6):101199. https://doi.org/10.1016/j.isci.2020.101199
Galkin F, Mamoshina P, Kochetov K, Sidorenko D, Zhavoronkov A (2021) DeepMAge: a methylation aging clock developed with deep learning. Aging Dis 12(5):1252. https://doi.org/10.14336/ad.2020.1202
Gladyshev VN (2016) Aging: progressive decline in fitness due to the rising deleteriome adjusted by genetic, environmental, and stochastic processes. Aging Cell 15(4):594–602. https://doi.org/10.1111/acel.12480
Gopu V, Cai Y, Krishnan S, Rajagopal S, Camacho FR, Toma R, Torres PJ, Vuyisich M, Perlina A, Banavar G, Tily H (2020) An accurate aging clock developed from the largest dataset of microbial and human gene expression reveals molecular mechanisms of aging. Cold Spring Harbor Laboratory, New York
Gujral TS, Chan M, Peshkin L, Sorger PK, Kirschner MW, MacBeath G (2014a) A noncanonical frizzled2 pathway regulates epithelial-mesenchymal transition and metastasis. Cell 159(4):844–856. https://doi.org/10.1016/j.cell.2014.10.032
Gujral TS, Peshkin L, Kirschner MW (2014b) Exploiting polypharmacology for drug target deconvolution. Proc Natl Acad Sci 111(13):5048–5053. https://doi.org/10.1073/pnas.1403080111

Haibe-Kains B, El-Hachem N, Birkbak NJ, Jin AC, Beck AH, Aerts HJWL, Quackenbush J (2013) Inconsistency in large pharmacogenomic studies. Nature 504(7480):12831. https://doi.org/10.1038/nature12831
Healthy.io (2022) Healthy.io. https://healthy.io. Accessed 28 Jun 2022
Holzscheck N, Falckenhayn C, Söhle J, Kristof B, Siegner R, Werner A, Schössow J, Jürgens C, Völzke H, Wenck H, Winnefeld M, Grönniger E, Kaderali L (2021) Modeling transcriptomic age using knowledge-primed artificial neural networks. NPJ Aging Mech Dis 7(1):68. https://doi.org/10.1038/s41514-021-00068-5
Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward networks are universal approximators. Neural Netw 2(5):359–366. https://doi.org/10.1016/0893-6080(89)90020-8
Igelsböck A (2021) AI: failed promise or a case of unrealistic expectations? Forbes magazine
Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K, Bates R, Žídek A, Potapenko A, Bridgland A, Meyer C, Kohl SAA, Ballard AJ, Cowie A, Romera-Paredes B, Nikolov S, Jain R, Adler J, Back T, Petersen S, Reiman D, Clancy E, Zielinski M, Steinegger M, Pacholska M, Berghammer T, Bodenstein S, Silver D, Vinyals O, Senior AW, Kavukcuoglu K, Kohli P, Hassabis D (2021) Highly accurate protein structure prediction with AlphaFold. Nature 596(7873):583–589. https://doi.org/10.1038/s41586-021-03819-2
Khazan O (2017) Life expectancy declines among Americans for second year. The Atlantic
Kim EB, Fang X, Fushan AA, Huang Z, Lobanov AV, Han L, Marino SM, Sun X, Turanov AA, Yang P, Yim SH, Zhao X, Kasaikina MV, Stoletzki N, Peng C, Polak P, Xiong Z, Kiezun A, Zhu Y, Chen Y, Kryukov GV, Zhang Q, Peshkin L, Yang L, Bronson RT, Buffenstein R, Wang B, Han C, Li Q, Chen L, Zhao W, Sunyaev SR, Park TJ, Zhang G, Wang J, Gladyshev VN (2011) Genome sequencing reveals insights into physiology and longevity of the naked mole rat. Nature 479(7372):223–227. https://doi.org/10.1038/nature10533
Klein AM, Mazutis L, Akartuna I, Tallapragada N, Veres A, Li V, Peshkin L, Weitz DA, Kirschner MW (2015) Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells. Cell 161(5):1187–1201. https://doi.org/10.1016/j.cell.2015.04.044
Lucanic M, Lithgow GJ, Alavez S (2013) Pharmacological lifespan extension of invertebrates. Ageing Res Rev 12(1):445–458. https://doi.org/10.1016/j.arr.2012.06.006
Mahmood SS, Levy D, Vasan RS, Wang TJ (2014) The Framingham Heart Study and the epidemiology of cardiovascular disease: a historical perspective. Lancet 383(9921):999–1008. https://doi.org/10.1016/s0140-6736(13)61752-3
Mamoshina P, Volosnikova M, Ozerov IV, Putin E, Skibina E, Cortese F, Zhavoronkov A (2018) Machine learning on human muscle transcriptomic data for biomarker discovery and tissue-specific drug target identification. Front Genet 9:242. https://doi.org/10.3389/fgene.2018.00242
Molinelli EJ, Korkut A, Wang W, Miller ML, Gauthier NP, Jing X, Kaushik P, He Q, Mills G, Solit DB, Pratilas CA, Weigt M, Braunstein A, Pagnani A, Zecchina R, Sander C (2013) Perturbation biology: inferring signalling networks in cellular systems. PLoS Comput Biol 9(12):e1003290. https://doi.org/10.1371/journal.pcbi.1003290
Peshkin L, Boukhali M, Haas W, Kirschner MW, Yampolsky LY (2019a) Quantitative proteomics reveals remodeling of protein repertoire across life phases of Daphnia pulex. Cold Spring Harbor Laboratory, New York
Peshkin L, Gupta M, Ryazanova L, Wühr M (2019b) Bayesian confidence intervals for multiplexed proteomics integrate ion-statistics with peptide quantification concordance. Mol Cell Proteom 18(10):2108–2120. https://doi.org/10.1074/mcp.tir119.001317
Putin E, Mamoshina P, Aliper A, Korzinkin M, Moskalev A, Kolosov A, Ostrovskiy A, Cantor C, Vijg J, Zhavoronkov A (2016) Deep biomarkers of human aging: application of deep neural networks to biomarker development. Aging 8(5):1021–1033. https://doi.org/10.18632/aging.100968
Rata S, Gruver JS, Trikoz N, Lukyanov A, Vultaggio J, Ceribelli M, Thomas C, Gujral TS, Kirschner MW, Peshkin L (2020) An optimal set of inhibitors for reverse engineering via kinase regularization. Cold Spring Harbor Laboratory, New York

Sagar R (2021) Big data to good data: Andrew Ng urges ML community to be more data-centric and less model-centric. PEOPLE and TECHNOLOGY magazine
Savova V, Peshkin L (2007) Is the Turing test good enough? The fallacy of resource-unbounded intelligence. IJCAI 2007:545–550
Sawhney M (2020) 5 reasons why AI may dazzle only to disappoint. Forbes, Jersey
Shindyapina AV, Cho Y, Kaya A, Tyshkovskiy A, Castro JP, Gordevicius J, Poganik JR, Horvath S, Peshkin L, Gladyshev VN (2022) Rapamycin treatment during development extends lifespan and healthspan. Cold Spring Harbor Laboratory, New York
Standage T (2003) The Turk: the life and times of the famous eighteenth-century chess-playing machine. Berkley Publishing Group, New York
Stokes JM, Yang K, Swanson K, Jin W, Cubillos-Ruiz A, Donghia NM, MacNair CR, French S, Carfrae LA, Bloom-Ackermann Z, Tran VM, Chiappino-Pepe A, Badran AH, Andrews IW, Chory EJ, Church GM, Brown ED, Jaakkola TS, Barzilay R, Collins JJ (2020) A deep learning approach to antibiotic discovery. Cell 180(4):688–702.e13. https://doi.org/10.1016/j.cell.2020.01.021
Strickland E (2019) How IBM Watson overpromised and underdelivered on AI health care. IEEE Spectr 56:24–31
The Economist (2020) An understanding of AI's limitations is starting to sink in. The Economist, London
THELRI.org (2022) Why drugs that work in mice don't work in humans. https://thelri.org/blog-and-news/why-drugs-that-work-in-mice-dont-work-in-humans/. Accessed 28 Jun 2022
Vafaie F, Yin H, O'Neil C, Nong Z, Watson A, Arpino J-M, Chu MWA, Wayne Holdsworth D, Gros R, Pickering JG (2013) Collagenase-resistant collagen promotes mouse aging and vascular cell senescence. Aging Cell 13(1):121–130. https://doi.org/10.1111/acel.12155
Weigl R (2005) Longevity of mammals in captivity: from the living collections of the world—a list of mammalian longevity in captivity
Wuhr MH (2013) Accurate and interference-free multiplexed quantitative proteomics using mass spectrometry. In: Google Patents. https://patents.google.com/patent/US10145818B2/en?oq=US10145818. Accessed 28 Jun 2022
Xia X, Wang Y, Yu Z, Chen J, Han J-DJ (2021) Assessing the rate of aging to monitor aging itself. Age Res Rev 69:101350. https://doi.org/10.1016/j.arr.2021.101350
Yala A, Lehman C, Schuster T, Portnoi T, Barzilay R (2019) A deep learning mammography-based model for improved breast cancer risk prediction. Radiology 292(1):60–66. https://doi.org/10.1148/radiol.2019182716
Zak N (2019) Evidence that Jeanne Calment died in 1934—not 1997. Rejuven Res 22(1):3–12. https://doi.org/10.1089/rej.2018.2167
Zhavoronkov A, Mamoshina P, Vanhaelen Q, Scheibye-Knudsen M, Moskalev A, Aliper A (2019) Artificial intelligence for aging and longevity research: recent advances and perspectives. Age Res Rev 49:49–66. https://doi.org/10.1016/j.arr.2018.11.003

Chapter 14

Leveraging Algorithmic and Human Networks to Cure Human Aging: Holistic Understanding of Longevity via Generative Cooperative Networks, Hybrid Bayesian/Neural/Logical AI and Tokenomics-Mediated Crowdsourcing

Deborah Duong, Ben Goertzel, Matthew Iklé, Hedra Seid, and Michael Duncan

D. Duong (B) · B. Goertzel · M. Iklé · M. Duncan
Rejuve.AI, Rodney Bay, Saint Lucia
e-mail: [email protected]

D. Duong · B. Goertzel · M. Iklé · H. Seid · M. Duncan
SingularityNET Foundation, Amsterdam, The Netherlands

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1_14

Abstract Aging is best understood as a gradual, sometimes punctuated, change in the dynamical regime of a self-organizing network composed of heterogeneous complex processes interacting via complex nonlinear spatiotemporal interactions. Key easily observable aspects such as the "hallmarks of aging" represent specific manifestations of underlying holistic network dynamics. Different aspects of this self-organizing network are apt to be best understood by reference to different datasets and by means of different analytical approaches. However, it seems likely that to create therapies substantially increasing maximum human lifespan in a reliable way, ultimately a holistic understanding of the dynamics of aging across the organism will be required. This leads naturally to a network-based approach to data-analytics and hypothesis-formation and -evaluation, in which holistic models of aging in the organism are automatically assembled from multiple datasets and models addressing various aspects. One key issue in realizing a network-based approach to aging is the process of mathematically combining multiple models; toward this end we propose an assemblage of techniques beginning with a relatively simple quadratic programming based approach, and culminating in a "computational social science" approach in which multiple models collectively form a social network of models. Within that network, symbols and cultures, reflecting complex holistic patterns in the underlying data, may emerge. Another key issue is how to incentivize an appropriately diverse and capable community of individuals or organizations to contribute data and
models to the holistic "Generative Cooperative Network" of models. We propose a tokenomics approach, in which a variety of cryptographic token types are used to incentivize contribution to the GCN. These issues comprise much of the inspiration for the Rejuve.AI project, which is building general mechanisms for GCN and tokenomic incentivization. Rejuve.AI is also creating a set of relatively simple aging-related models to seed the GCN, including:

• A hand-crafted Bayes Net model assessing aspects of an individual's path to healthy longevity, using the logic of the hallmarks of aging to interpret data from questionnaire answers and biosignals (as gathered from users of the Rejuve.AI longevity app).
• A model of longevity-related pathways and networks identified by pattern mining in the BioAtomspace, an integrated genomic and medical knowledge base created within the OpenCog Atomspace knowledge-metagraph framework.
• A model indicating genes, pathways and networks involved in the longevity of the Methuselah flies, long-lived Drosophila melanogaster created via experimental evolution over multiple decades.

Automated integration of insights from these diverse models, based on diverse datasets, will enable prototyping of the overall GCN framework and will serve as a seed for broader growth of the GCN based on contributions from the research and Rejuve.AI app user communities. Growth of our holistic network will enable the formation of a dynamic, multiresolutional mechanistic simulation of the human body that will shed new light on the causes of aging and its treatment.

Keywords Hybrid artificial intelligence · Complex adaptive systems · Multiresolutional simulation · Bayesian networks · Systems biology · Aging · Longevity · Crowdsourcing · Coevolution · Blockchain

14.1 Introduction

The last two decades have seen dramatic progress in our understanding of the human aging process, and also in acceptance of the idea that curing aging may be feasible in the not incredibly distant future. We now have what seems to be a decent understanding of the core factors that cause human bodies to deteriorate, and ultimately die, as they get older. There now also exists a diverse assemblage of tools with strong potential to counteract these factors. In a historical sense, it would be rational to feel that in 2022, the human species has control of its own members' lives and deaths nearly in hand, with just a few more details to be worked out and a few methods to be moved from concept to execution. We believe that the breakthroughs needed to achieve Longevity Escape Velocity (LEV) are probably not going to happen entirely as isolated insights into how to cure particular aspects of aging—though there will likely be some of those—but rather will occur largely in the context of a more holistic, systems-theoretic understanding
of the aging process, and the general process of human biological development over the lifespan. We also suspect that many of the breakthroughs to come in the next few decades will not be the work of humans alone, nor of humans together with standard clinical laboratory informatics tools, but rather will result from human scientists deploying AI systems of progressive sophistication and generality. Eventually we will create Artificial General Intelligence (AGI) that will make biological and therapeutic discoveries on its own. While we work toward creating beneficial AGI, we will also work to leverage the latest narrow AI technology, in combination with the best biotechnology and biological understanding, to prolong healthy human life as best as we can. In this article we describe some of the systems, frameworks and tools we are creating, together with our colleagues, to facilitate the deployment of the best of today's AI technologies to help work toward the cure of aging within a holistic, systems-biology perspective. We present a network-based approach to longevity-related data-analytics and hypothesis-formation and -evaluation, in which holistic models of aging in the organism are automatically assembled from multiple datasets and models. One basic concept underlying the approach outlined here is that, since aging appears to be the result of a variety of different processes acting at a variety of different scales on various different body systems, it seems likely that a diversity of different analytical and simulation and theory-generation techniques will be useful for cracking different aspects of the problem. Different aspects of this overall network are then best understood by different machine learning, reasoning and creative understanding tools. The mathematical combination of different models of biological systems and different AI techniques can be done on a number of different levels of sophistication.
On the simplest level, we are currently using quadratic programming to combine the explanations and predictions provided by different models regarding the contributions to an individual's aging and longevity. On the more advanced side, we have been prototyping a subtler "computational social science" approach rooted in interpretive social science, in which multiple models collectively form a social network of models in which symbols and cultures may emerge, reflecting holistic patterns in the underlying data that go beyond the insight of any single model. The overall framework for combining different processes toward common tasks of prediction, simulation, explanation and hypothesis generation, within which we are implementing these different modes of combination, is called a Generative Cooperative Network (GCN).

We have also put considerable thought and effort into creating systems that incentivize an appropriately diverse and capable community of individuals and organizations to contribute data, algorithms and models to a GCN, forming an overall "anti-aging hive mind." Toward this end, we have conceived a novel tokenomics-based approach, in which a variety of cryptographic token types are used to incentivize contribution to the GCN. Practical implementation of these ideas is occurring within our Rejuve.AI project, a network of individuals interested in their own healthy longevity and also in assisting the broader push toward curing aging. Rejuve.AI has developed tools for helping


D. Duong et al.

individuals estimate their biological age and understand potential risks to their healthspan, and also tools aimed at finding novel aging therapeutics, leveraging diverse data including data uploaded by Rejuve.AI network members. We have created tools to crowdsource, cull, and combine ideas from network members and the community of science toward building a dynamic multiresolutional mechanistic model of the human body.
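The simplest combination mode mentioned above, quadratic programming over the outputs of several models, can be sketched as follows. The "aging clocks", their predictions and the outcome values below are invented stand-ins for illustration, not the production Rejuve.AI pipeline:

```python
import numpy as np
from scipy.optimize import minimize

def combine_models(predictions, target):
    """Blend several models' per-individual predictions with nonnegative
    weights summing to 1, chosen to minimize squared error on known outcomes."""
    P = np.asarray(predictions, dtype=float)   # (n_models, n_individuals)
    y = np.asarray(target, dtype=float)        # (n_individuals,)
    n = P.shape[0]

    def loss(w):
        # quadratic objective in the weight vector w
        return float(np.sum((w @ P - y) ** 2))

    res = minimize(
        loss,
        x0=np.full(n, 1.0 / n),                  # start from the uniform blend
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,                 # nonnegative weights
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

# Three hypothetical aging clocks scored against chronological age
clocks = [[62, 48, 71, 55], [60, 50, 69, 58], [75, 40, 80, 45]]
ages = [61, 49, 70, 56]
weights = combine_models(clocks, ages)
```

Because the feasible region is the probability simplex, the blended error can never be worse than the best single model on the fitting data; the learned weights also serve as a crude credit assignment across the contributing models.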

14.2 Aging as a Complex Network Process

The conceptual perspective underlying all of the work presented and proposed here is one that views aging as a biological systems process. Complex adaptive systems in nature are understood to create and maintain themselves through dynamics involving self-reinforcing positive and negative feedback from other systems at higher, lower and the same levels in "dual network" hierarchy/heterarchies (Goertzel 1993; Salthe 2012). The causes behind the dynamic maintenance of biological systems—constituting system "health"—depend on the virtuous and vicious cycles emanating from and spanning hierarchical levels in the biological meta-system. Capturing these processes is particularly important for aging, which is a breakdown of this maintenance and a loss of resilience against vicious cycles.

AI is deployed in this context toward understanding the underlying causal and network processes behind the loss of biological system maintenance as a system breakdown, and toward the exploration of new maintenance strategies that will work with existing biological systems already partway through their lifespans. To understand these biological processes and how to change them, we analyze datasets and also generate dynamical models simulating these processes at various levels of precision.

The hallmarks of aging, a breakdown of aging into a relatively small number of high-level biological processes that cause the breakdown of maintenance in the human body as it ages, are one view into the vicious cycles of aging (López-Otín et al. 2013). These hallmarks comprise, apparently, the considerable majority of what goes wrong during aging. If we could solve them—taking into account their various dependencies within the overall body system—we would go a long way toward curing aging.
And for an individual, if we can understand how that individual is faring in terms of the list of hallmarks, we can form a decent conceptual picture of how that person may be progressing toward healthy longevity.

As a long-term strategy, we seek to solve aging by creating a dynamic, multiresolutional mechanistic model of the human body, which captures its natural maintenance processes with the resolution needed to explore the effects of changes to the model that can become treatments. That is, we seek to understand nature by mimicking it via machine inference, so that treatment effects may be more readily generated. Given the complexity of this task, we aspire to achieve it not via a single AI or modeling technique (though we have a number of new technical approaches to machine inference and modeling to explore), but most centrally via a framework for combining multiple algorithms and models produced via multiple human and


automated sources, to yield overall synthetic insights and hypotheses. Our Generative Cooperative Network (GCN) framework uses principles of coevolution to combine crowdsourced analytical and generative models into models that get better and better at solving longevity challenges. The GCN itself is expected to increase in complexity and sophistication as more models are integrated into it. In its more advanced versions, it will leverage principles of consensus from symbolic interactionism in natural social systems to coevolve the micro and macro layers of the multiresolutional simulation. For instance, such principles can help the GCN differentiate phenomena into biological systems of objectified biological concepts, which form modules likely to be useful in solving a number of challenges.

Presenting a GCN with challenges that require it to generate approximations of real-world data from other data and from models of underlying causal and network processes will direct the GCN toward producing a mimic of the natural human body, as well as of particular human bodies in disparate states of health. To create this holistic nature mimic, both adaptive (during simulation time) and non-adaptive models are needed that mimic aspects of the real world. Our first non-adaptive descriptions of the real world—to be described in some detail below—take the form of Bayesian networks that describe states of the human body at the clinical level, and how these relate to the states of the human body known to be significant to aging, the hallmarks of aging. Our Bayesian net of longevity will start out as a seed of this descriptive model, to be added onto by the community of science through our BayesExpert model for creating Bayesian networks from the medical literature. As next steps, our growing simulation will be extended to accept crowdsourced generative neural networks such as GANs, VAEs and transformer-based generative models, and finally generative simulation models.
Growth by crowdsourcing diverse contributions within a framework guided by the principles of complex adaptive systems will enable our ensemble of AIs to create a robust multiresolutional simulation of the human body with which to infer treatments for aging.

14.3 The Generative Cooperative Network (GCN)

The process of generative modeling is in a broad sense a form of mimicry, but it is important to distinguish mimicry that replicates underlying processes from mimicry that merely reflects surface-level patterns. If AI is to solve medical problems as complex as human aging, we believe it will have to mimic not only outer appearances, but the inner processes, mechanistic explanations, and emergent structures of living systems. An example of surface-level AI mimicry is the Generative Adversarial Network (GAN), which uses principles of coevolution to direct and scaffold learning to generate fake pictures that are indistinguishable from real ones (Goodfellow et al. 2014). In the GAN architecture, one neural net tries to generate fake data that is indistinguishable from real data, while another model tries to tell the real instances from the fakes.


These two networks are trained in conjunction, so that as one gets better, so does the other—each serving as "scaffolding" for the other's learning.

The GCN takes scaffolding to the multiagent level. The GCN is a framework of multiple intelligent agents that scaffold each other's learning through coevolution, but in a cooperative mode that is capable of reproducing inner dynamics as well as directly mimicking surface appearances, to create a causal simulation. The GCN encompasses principles of causation and emergence in a framework of continuous improvement based not only on the ingestion of new observations and experience, but on an endogenous process of symbol emergence and interpretation that fosters innovation. In focusing on interpretation, the GCN's sign-formation process manifests the two principles of decentralization and autonomy, which are important both for effective AI and for the creation of an ethical and economically just data and processing ecosystem (the latter being a key tenet of web3 overall) (Owocki et al. 2022). In the GCN we capture the natural, non-hierarchical process of co-creation in the emergence of symbols. The meaning of symbols is a consensus based on autonomous perception, where concepts are learned based on individual utility rather than by copying.
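The adversarial scaffolding idea can be caricatured in a few lines of code. The toy below (not the image GANs cited above) pits a two-parameter generator against a logistic discriminator on one-dimensional data, with hand-derived gradients; all architecture choices and constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

REAL_MU, REAL_SIGMA = 4.0, 1.0   # the "real data" distribution to imitate

# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr_d, lr_g, batch = 0.1, 0.03, 64

for _ in range(2000):
    z = rng.standard_normal(batch)
    x_real = REAL_MU + REAL_SIGMA * rng.standard_normal(batch)
    x_fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr_d * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake), i.e. try to fool D
    d_fake = sigmoid(w * x_fake + c)
    a += lr_g * np.mean((1 - d_fake) * w * z)
    b += lr_g * np.mean((1 - d_fake) * w)

# After training, the generator mean E[a*z + b] = b should sit near REAL_MU
```

Each player's improvement sharpens the gradient signal available to the other, which is exactly the "scaffolding" dynamic the GCN generalizes from two players to a whole market of agents.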

14.4 Emergent Signs in GCN

Living systems, both social and biological, use signaling to coordinate lower-level phenomena into emergent upper-level systems. The GCN uses that signaling as the mechanism of self-organization and open-endedness to build an ever-improving multiresolutional mechanistic simulation of the human body. The emergence of coordination through signaling promotes open-ended self-organization in four ways.

First, signs are slippable, "fuzzy", or open to interpretation. This makes it possible for innovations to be discovered and accepted before they come to be indicated more specifically. Tabula rasa agent AIs which have not yet converged, injected into a pre-existing society of more converged agents, have more capacity to interpret such signs in ways that are newly useful to society.

Second, agents act on other agents through perceived role-signs, rather than as individuals, so that successful interrelations have a network-coordinating effect, rather than just a private one. The slippability of the first way allows a diversity of agents to be categorized into any one role.

Third, signs create a functional space, or culture, that guides new agents to perform the activities that have worked for agents in the past. This learnable space accumulates past innovations as a scaffold for agents while at the same time remaining open to new innovation.

Fourth, interpretable, implicit signs can become explicit signs with exact instructions once they are so ingrained and certain in a system that it becomes efficient to no longer have to guess them, freeing up implicit understanding for innovation.

These are four endogenous mechanisms of open-endedness, which make the emergent systems receptive to exogenous mechanisms of open-endedness such as the availability of many different challenges to shape the system. We will explore how multiple challenges can grow a data-driven complex adaptive system simulation through the data absorption technique.


One way to view the GCN's signaling dynamics is as a software emulation of the social emergence of symbols from subsymbols through symbolic interaction. Smolensky developed the idea of the emergence of symbols from subsymbols in the context of neural networks (Smolensky 1988), using the example of a network that could detect whether a room is a kitchen based on whether it contained items such as a stove or a refrigerator. His point was that a concept like "kitchen" is a statistical entity processed from many instances of kitchens that cannot be expressed in crisp rules, because any one thing in a kitchen may be absent while the room would still be thought of as a kitchen. This slippability of concept is the first signaling mechanism of open-ended self-organization listed above. However, one important thing Smolensky did not address was the social construction of the concept of kitchen, covered by the second and third mechanisms. Our case is similar to Smolensky's kitchen: we have emergent concepts, symbolized by a sign, where agents displaying that sign have a variety of implementations of that concept, with no definitive criterion other than that their interpretation is good for their own utility. That all cognition is social, and all evolution is coevolution, is a point often missed in cognitive science, but it is essential to the networked, holistic view that models the creative process in these natural systems, and it is key both to the model's ability to complexify and mimic nature and to the open-endedness we seek for our AGIs.

To create a dynamic multiresolutional mechanistic simulation, GCN agents compete for simulated tokens in a simulated market. They win these tokens when they solve challenge problems related to the system they are simulating, in this case the human body. They are presented with many challenges of many levels of difficulty.
To solve a challenge, each agent has at its disposal a repository of models and data, in this case contributed by the community of science. Agents can put these models together themselves to respond to a challenge, or they can challenge other agents to put models together in return for tokens, or both. Each agent also has an entire inductive AI mind, currently a CMA-ES algorithm (Hansen 2008), though it could be a genetic algorithm or a neural network, whose job is to optimize tokens. The only way to get tokens is to win challenges, whether the challenges that the humans start the simulation out with or the challenges that other agents offer. The agent makes all of its decisions with its AI mind: decisions which include what models to use from the repository and in what order, what task to offer an agent displaying a given sign and for how much, what task to accept and for how much, and what sign to display.

Agents which challenge other agents and accept challenges together become a team, and these agents are paid tokens only if, as a team, they win challenges that the humans put forth, with the highest-scoring team being the sole winner per challenge. The way that agents are actually chosen for a team is what gives the signs meaning. To choose another agent, the sign it displays is compared to the sign sought, and the agent with the sign closest to the sought sign is picked, given that there is a price overlap and agreement on task. Winning teams receive prize tokens that are distributed according to their learned price agreements. Different challenges are presented to the agents thousands of times, all of which shapes the way agents learn to coordinate. For details on how models are composed, see the SingularityNET simulation software. The decentralized competitive/cooperative


market aspect of the GCN essentially re-uses the SingularityNET simulation software described in (Duong 2018). This simulation software was created to model the same sort of decentralized AI marketplace as in Rejuve.AI's tokenomics (Rejuve.AI 2021).

The signs start out arbitrary but come to have meaning because both the displayer and the seeker of a sign induce its meaning based on what will get themselves (the perceiver) the most tokens. This is the first way signs encourage open-ended emergence: by being "open to interpretation." The fact that agents choose other agents based on a sign that both sides induce makes that sign come to mean the agent qualities that caused it to be chosen. Because induction is based on a utility function of the number of tokens made by winning challenges, those qualities are what an agent contributed to its team to make it win. As a side effect, the price that emerges is an assignment of credit.

The mechanism by which signs come to have meaning is this: suppose there is a successful solution to a problem and a payout to an agent A, who in turn pays an agent B to which it has delegated a task, who in turn pays an agent C to which B has delegated another task. Since the transaction was successful in getting them tokens, all three agents will want to repeat it, by looking for the same signs and displaying the same signs as they did before. They might choose each other again, but they also might choose someone else displaying the sign. Say agent D displays B's sign, and A chooses it. Agent D will now be under selective pressure to do the same things that B, the other agent with the same sign, did, including delegating C's task to someone with C's sign, because if D does all those things, it will be on a winning team and win tokens. So B and D display a similar sign and have similar behaviors, and that sign comes to indicate a role. Note, however, that the sign represents requirements, not implementation.
D can do C's job itself if it is better at it than C, or it can hire an agent who would do it a completely different way, if that helps the team win. Note also that recognition of agents according to their roles, and not as individuals, is the second way signs encourage open-ended emergence. Because of roles, every success between two individual agents becomes an opportunity for all agents, some of which may, now or at a later point in a task's evolution, be more suited to a task than the agent that first innovated it.

The signs come to mean a particular specialization, because that is the best way for an agent to use its limited resources toward getting the most tokens, given that there is no penalty for being on multiple teams. Signs come to mean specialized knowledge about a subset of models found to be modular, as agents differentiate into specializations. Because the challenge problems are problems of modeling the human body, a sign comes to represent an emergent biological concept that is modular, in that there are more interconnections within it than outside of it, and useful in solving problems. For example, agents can specialize in the immune system, and subspecialize in particular ailments of the immune system or pathways related to it; however, we do not tell them which of the body's systems to form concepts about. Rather, they form concepts according to what solves problems, and can even put together new concepts, such as new pathways, in the process. The variables that generally go within the modules of the same sign—that are not exactly the same


set for every specialist wearing that sign—are the subsymbols of that concept, so that the symbols arise from subsymbols in a social process, based on what works best in an agent culture. As the simulation progresses, the meanings of symbols are incremented toward many stakeholders' utilities over time, and come to be Nash equilibria of the stakeholders, a Schelling point with which to coordinate social action. In terms of the symbolic interactionist paradigm, this is the process of objectification of symbols, where what the individual once decided freely becomes a social pressure on the individual. Author Goertzel has called this process the principle of individuation vs. self-transcendence, from the point of view of the individual, but it is also a mechanistic solution to micro-macro integration in sociology, because it involves true emergence of the social, institutional level from the individual interaction level. The emergence of coordination through signaling is the emergence needed to mechanistically form and transcend levels in a multiresolutional simulation. Once symbols have arisen by consensus, pattern mining in this sort of network via symbolic systems like OpenCog Hyperon can then lead to socially useful symbols that can express compositionally new ideas through logic operations.

In the CMA-ES program these signs are float vectors. In a vector space of these signs, vectors that are closer to each other are functionally closer, indicating more similar agent role requirements, and those farther apart are functionally farther. This is because when agents sign innovations to each other, they have to start out signing for the concepts that their innovations are replacing, until the agents that they are signing to prefer that those signs be differentiated.
To keep diversity up, new agents that have not converged upon a strategy may enter an existing agent culture where signs have a shared meaning, and learn some of the language of that culture, displaying different signs along a path in the vector space, incrementally, until each optimizes its own income based on what it can do best and what skills are in demand. This path in function space is how agent culture scaffolds new, not-yet-converged agents to the point that they can innovate. This functional space is the third way that signs encourage open-ended emergence. The path offers smooth learning to get to the next needed skill; that is, it is scaffolded and thus accessible by evolutionary computation. These paths are important because they supply the exploitation part of the exploration-exploitation dilemma. The exploration part comes from the diversity imparted to the system by the new, non-converged agents that still have "neuroplasticity". The old agents likely have AI minds that have already converged, and have thus become rigid and inflexible. However, because the new agents have not yet converged, they can create on top of the paths figured out by their ancestors, and because all agents respond to basic utility, they can improve their society, finding better ways to solve problems. Unconverged agents are the continued diversity on which open-ended evolution is built, and those that find the sweet spot between exploitation and exploration will get the most tokens through innovation; indeed, they are drawn toward this innovative "edge of chaos" because it gets them the most tokens.
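The basic teammate-selection step described in this section (comparing each displayed sign vector to the sought sign and picking the closest agent whose asking price fits the budget) can be sketched in miniature. Agent names, sign vectors and prices below are invented for illustration:

```python
import numpy as np

def choose_teammate(sought_sign, max_price, candidates):
    """Pick the candidate whose displayed sign is closest (Euclidean distance)
    to the sign being sought, among those whose asking price fits the budget.

    candidates: list of (name, sign_vector, asking_price) tuples.
    Returns the chosen name, or None if no price overlap exists.
    """
    sought = np.asarray(sought_sign, dtype=float)
    affordable = [(name, np.asarray(sign, dtype=float), price)
                  for name, sign, price in candidates if price <= max_price]
    if not affordable:
        return None  # no agreement on price, so no delegation happens
    return min(affordable,
               key=lambda item: float(np.linalg.norm(item[1] - sought)))[0]

# Hypothetical agents advertising role signs in a 3-D sign space
agents = [
    ("immune_specialist", [0.9, 0.1, 0.0], 5.0),
    ("pathway_modeler",   [0.8, 0.2, 0.1], 3.0),
    ("metabolism_agent",  [0.0, 0.9, 0.4], 2.0),
]
picked = choose_teammate([0.85, 0.15, 0.05], max_price=4.0, candidates=agents)
# "immune_specialist" is nearest in sign space but priced out of the budget,
# so the next-closest affordable agent is picked
```

In the actual system both the displayed signs and the sought signs are themselves parameters being optimized by each agent's CMA-ES mind, which is what lets meanings drift and differentiate over time.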


These emergent self-organizing dynamics are subtle and complex, and are layered on top of the subtlety and complexity of the machine learning and reasoning algorithms and models chosen by, and living inside, the individual agents. However, we believe this multilayered complexity is likely to be necessary to grapple with the multilayered complexity of the biological processes involved in aging. The aging body itself is a complex society composed of multiple subsystems, each playing multiple roles with regard to the others, and learning and adapting its roles in a mutually recursive way with other subsystems. There is good reason to suspect that the best route to understanding the aging body will be via an AI framework manifesting an at least roughly analogous architecture of complex self-organizing "socio-cultural" multi-agency. In fact, this reflects a potentially general principle of general intelligence: for a system to be generally intelligent in a world demonstrating certain symmetries, algebraic properties or other persistent abstract patterns, while leveraging highly restricted energetic, spatial and temporal resources, it will usually be necessary for the system itself to internally display some version of these same symmetries/algebraic properties/persistent abstract patterns. This was partially formalized as the "Mind-World Correspondence Principle" in category-theoretic terms in (Goertzel 2013).

14.5 How Emergent Signs Interact with Dependent Typing in the GCN

In the GCN with implicit typing, agents develop a culture, a functional semantic space that scaffolds other agents, including new agents that have not yet converged, along a path that leads them to the solutions that other agents have found in the past. However scaffolded, and however reachable by evolutionary computation, traveling along such a path is done by trial and error. Agents have classified themselves into types, the signs of which exist in a functional semantic space. But the sign is limited in that it must basically be memorized; it is only rewarded when it is learned correctly, relative to other agents. This sort of approach can carry us a significant distance toward modeling longevity-related data, but it is likely to reach its limits. In order for the agent culture to contain open-ended intelligence, it cannot learn everything by trial and error: rather, the emergent type ontology will need to somehow be made explicit and carry with it explicit instructions on requirements. This becoming-explicit is the fourth way in which signs encourage open-ended emergence. For this we leverage the AI-DSL (Goertzel and Geisweiller 2020) strategy for agent typing. Hyperon's pattern miner will assist in finding what it is about the agents displaying a role sign that enables their teams to make a profit. PLN inference will express this in AI-DSL, which Hyperon will use to compose the answer from user-contributed models, and formally verify exactly what those models do (Goertzel 2014). Knowledge of exact function is important for both AI and medical ethics.


The implicit (emergent sign) and explicit AI-DSL methods that GCN agents use are complementary and help each other. The implicit sign method focuses selective pressure on agents long enough for choices to be objectified into institutions, so that they are consistent and widespread enough for explication. Implicit signs supply the explicit algorithms with enough examples of emergent capabilities in the ecosystem to infer upon. Explication takes away some of the burden of memorizing implicit signs by trial and error for new agents, so signs can indicate emerging requirements while explicit rules indicate requirements that have already become objectified institutions. Agents and the signs that they display will be fed to the explicit algorithm, which will use Hyperon's pattern mining to interpret each implicit sign's explicit meaning through an examination of the behaviors of the agents that display the sign. Once explicit, the Hyperon formalization of the sign is a directive that is implementable by agents new to an agent ecosystem, which no longer need to learn the meaning of those particular signs by trial and error (Fig. 14.1).

Fig. 14.1 A consensus forms on the biological concepts in a biological system in a GCN market, where teams form to meet challenges. In this example those teams are accessing Hyperon models, but they can access and compose any user-contributed models in a repository. These challenges shape them into the dynamic concepts of biological systems. They can be read into an embedding space for drug discovery


14.6 Data Absorption in GCN

The GCN illustrates the importance of coevolution to the emergence and dynamic maintenance of signaling systems, whether social or biological. In the GCN, the emergent upper-level signaling system is the institutional level, where an institution is an agreement on the meaning of signs. We can measure whether emergence is strong by measuring the degree to which signals coordinate behaviors, and measure system degradation by the lack thereof. Indeed, the last of the nine hallmarks of aging, "altered intercellular communication", is the system degradation of aging. The data absorption technique (Duong 2013) is the use of coevolution to mimic a particular existing signaling system, in its particular state of coordination, in order to test interventions on that dynamic system.

Coevolution is central to the principle of feedback-based emergence. It is necessary for the emergence of institutions from symbolic interaction: in the GCN, signs come to have shared meanings because they are induced by multiple coordinating agents. It is also necessary for the emulation of signaling-based systems as we find in biology, which likewise arise from symbiotic coordination of multiple subsystems. Furthermore, in the data absorption technique, coevolution offers a way to find causal relationships and treatments in a signaling system. In our proposed use of data absorption for biology within the GCN, some of the challenges that test an algorithm can test internal consistency by ensuring that the upper level emerges from the lower level in accordance with the data.

Homeostatic mechanisms break down, and as a result new self-reinforcing systems bring the body to a new state. For example, in atherosclerosis, it may be that the liver is already having trouble clearing LDL at the same time that the intestines are absorbing more of it because of inefficiencies due to aging. There are also more reactive oxygen species (ROS) in the tissues.
This results in a cascade that gets worse: the ROS injure the epithelium of the arterial wall, and LDL particles there become oxidized by the ROS, creating plaques which eventually rupture, causing a blockage and perhaps a stroke. The stroke results in more dysregulation of oxygen, and the oxygen deprivation causes mitochondria to release more ROS, further damaging blood vessels.

To use the data absorption technique to make a causal simulation that takes the body from a healthy state to the described inflammatory state, we would start with a repository of models that mimic the adaptive behaviors of the system components. We would have, for example, macrophages that surround oxidized LDL, form foam cells, and cause plaques to form. The adaptive GCN agents can pick the models that match a dataset best, say one that shows the existence of plaques, oxygen, and LDL. But there is a period before all the GCN agents have chosen their parts of the system and gotten them to work together in which no data is generated, before self-reinforcement exists. The way we handle this is to "prime" the system by presenting to the agents the signaling data of the system without its being part of a self-reinforcing loop yet. Table 14.1 shows the data absorption algorithm for the atherosclerosis example.

In implementing data absorption in the GCN we would use some agents that never change their strategies during the simulation, and others that do adapt, in recognition


Table 14.1 Data absorption in the atherosclerosis example

1. Start with a repository of adaptive agents that model the reactive behavior of the components of the cholesterol deregulation
2. Have many "data" agents that are not part of a self-reinforcing system automatically interact with adaptive agents, with ROS and LDL present
3. Adaptive agents try to generate the data through a self-reinforcing system by trial and error of selecting models from the repository
4. Remove data agents when adaptive agents have reproduced the data

of the fact that it does not matter if some agents never act according to personal utility, because it would not make the adaptive agents react to them any differently than to agents that are internally adaptive. Adaptive agents adapt to those around them without considering whether others' behaviors stem from their own individual utility or are merely external mimicry. As long as the distribution of their behaviors is what one would expect from members of their class, they can still seed a society that adapts to, and thus explains, the distribution from agent utility, or lower-level rules, and thus from causes. These seeding agents that do not themselves adapt can be removed once the feedbacks that make a multiresolutional simulation have stabilized.
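The four steps of Table 14.1 can be caricatured in a few lines. Here the "vicious cycle" is collapsed into a single runaway feedback gain on plaque burden, and the repository, signal and gain values are invented for illustration:

```python
import numpy as np

# Step 1: a repository of candidate adaptive models; here, feedback gains for
# how plaque burden responds to its current level (values are illustrative)
repository = {"damped": 0.8, "neutral": 1.0, "runaway": 1.2}

# Step 2: "data agents" replay an observed trajectory of the signal without
# themselves being part of any feedback loop (this is the priming)
observed = [1.0]
for _ in range(10):
    observed.append(observed[-1] * 1.2)   # the vicious cycle present in the data

# Step 3: the adaptive agent tries models from the repository against the
# primed signal and keeps whichever reproduces it best
def replay_error(gain, data):
    sim = [data[0]]
    for _ in range(len(data) - 1):
        sim.append(sim[-1] * gain)
    return float(np.mean((np.array(sim) - np.array(data)) ** 2))

best = min(repository, key=lambda name: replay_error(repository[name], observed))

# Step 4: remove the data agents; the chosen model now drives the loop itself
gain = repository[best]
state = observed[0]
for _ in range(10):
    state *= gain
```

Once the seeded loop regenerates the observed trajectory on its own, interventions (here, swapping in a different gain) can be tested against the now self-sustaining dynamic rather than against static data.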

14.7 BayesExpert in GCN

The BayesExpert Bayesian network software will facilitate the contribution of models that describe the theories and data of the community of science to the GCN, and will play an important part in applying the data absorption technique to capture the systems of the human body. Bayesian nets will be one of the ways in which non-adaptive data is presented to agents to prime the system into self-reinforcement. Our BayesExpert Bayesian network model creator is capable of filling the GCN's repositories with real-world data obtained from randomized controlled trials, the gold standard in science. Through the BayesExpert GUI in the SingularityNET marketplace, scientists can submit their medical literature search results, clinical trial results, and theories of causation supported by those results. The statistical language of the BayesExpert dependency rule is that of the medical literature, including sensitivity and specificity, relative risk, and their confidence intervals. These rules express dependent relations as one would expect in a Bayesian net, translated from these statistics. In addition, the scientist can use "and" and "or" from standard logic as well as other functions like "avg" (average) or "if-then-else" to more fully describe the relationships between variables. If those models are useful in solving longevity-related problems as part of GCN ensembles, scientists will receive compensation in tokens (Duong 2020).
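A dependency rule built from published statistics might be translated into conditional-probability entries roughly as follows. The cap at 1 and the noisy-OR reading of "or" are our simplifying assumptions for this sketch, not necessarily BayesExpert's exact semantics:

```python
def rr_to_conditional(baseline_risk, relative_risk):
    """Translate a published relative risk into the conditional probability a
    Bayesian-network node needs: P(outcome | exposed) is RR times
    P(outcome | unexposed), capped at 1 so the table stays a valid
    probability distribution."""
    return min(relative_risk * baseline_risk, 1.0)

def or_combine(p_a, p_b):
    """A noisy-OR style combination for a dependency rule joining two
    independent risk factors with 'or': the outcome occurs unless
    both factors fail to produce it."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

# Hypothetical study: baseline 10-year risk of 5%, exposure carries RR = 2.4
p_exposed = rr_to_conditional(0.05, 2.4)
# Two independent exposures joined with "or" in a dependency rule
p_either = or_combine(0.12, 0.08)
```

The harder part, filling in table cells for variable combinations no study ever tested jointly, is what the quadratic-programming machinery of the next section addresses.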

300

D. Duong et al.

14.8 How BayesExpert can Combine Separate Studies into Coherent Wholes

The main technical problem we have to solve in the design of BayesExpert is how best to estimate the probability of a variable given combinations of all the input variables that have never been tested in combination; in other words, how to turn a dependency rule into a Conditional Probability Table. We do this by means of quadratic programming with hard and soft constraints based on the rules of probability. Our algorithm is a way to tell how much the models agree with each other, a form of mutual validation. What we do is find consonant sets of data and relations that make each other more plausible, particularly where the relative risk statistics are concerned. Intuitively, if we change the relative risks by small normalized fractions of their 95% bounds, and the result is feasible in quadratic programming, then the relations and their data, that is the conditional probability tables and discrete distribution tables, fit the data of the studies when adjusted to each other by only a small amount. However, if they have to change within larger bounds to be feasible, then the studies disagree with each other more and are also less likely to be valid together. We record the amount by which we have to change the upper–lower window in the inequalities, to serve as a validation score (Duong 2022). A variable combination can be wrong in several ways. For one thing, the data may not be matched between variables with the same names, or the logical ands and ors that map the variables can be wrong. The dependency structure can be wrong. Populations in studies expressed in variables many links away in the net may not match, or there may be human errors in transcription of the relative risks. The studies behind the relative risks may be wrong, or the data may be faked.
However, all of these errors show up in the validation score, a fact which makes possible the automated machine combination of the GCN, the culling of crowd-sourced studies that do not agree, and guidance on what to include and where the bugs are in manual nets. We have successfully used the validation score to help debug typos and cull studies in the handwritten longevity net. One thing, however, is not an error and thus cannot be found by the validation score: the absence of a variable. The job of the conditional probability table is to mathematically combine the given risks, not to adjust those risks to an outcome; in other words, this is not a prediction algorithm. We describe only known risks of variables, which are correct in combination as long as they are correct alone. This is true even when more important risk factors for the outcome are not included in the combination. Those risks unknown to the model were simply not delineated, which does not make the combined relative risks incorrect. Therefore networks made with BayesExpert should be validated by measuring the combined relative risk they calculate on a hold-out set of data, rather than by measuring outcomes. That is appropriate because this method does not predict, something that machine learning can do better because it can include all risks. We only include known risk, and because of this, can explain what that risk is better than

14 Leveraging Algorithmic and Human Networks to Cure Human Aging …

301

a machine learning technique can. This explanation is the function of BayesExpert in the longevity app, while machine learning is used for prediction tasks. Additionally, only including known risks is good for describing the known world in the GCN's simulation, because we can condition risk on the state of the variables within the simulation. Unknown factors, as in a machine learning technique, would misattribute the risks in a causal simulation. The way the Bayes net will help solve problems is that it will help choose causal models by how well they agree with the space of relative-risk relationships it defines (Table 14.2).

Table 14.2 An example of a dependency rule, where dependencies in the science literature can be encoded, using relative risk or sensitivity/specificity and their confidence intervals. The input variables, from two separate studies, are in the middle, while output variable values and their priors are at the bottom. There is one CPT per rule in BayesExpert

14.9 Model Combination via Quadratic Programming

The algorithm to convert a dependency rule to a CPT and to measure the consonance it has with the rest of the net is:

1. Modularize the future CPT so model contributors do not have to worry about how other variables are constructed.
   • To ensure there is no double counting of variables' effects, we calibrate the dependency rule.
     – For each variable with a relative risk (RR) in the dependency rule, we test with the variable and without it, to see the natural RR that occurs without including the dependency directly. This amount is subtracted off the RR before it is run, replacing the RR due to all causes with the direct RR due to the variable, not affecting the outcome through another present variable.
     – If after calibration the direct RR is close enough to 1, the variable is eliminated as a dependency.
   • To ensure each part of the net fits in with the results of the previous part of the net, we calculate the priors of the input variables from the net rather than from a dataset.
     – To convert the RRs and sensitivity/specificity to individual probabilities, priors are needed from the previous CPTs along the DAG for every variable, and these are calculated from the subnet so far.


2. Convert the relative risks and sensitivity/specificity probabilities to individual probabilities for every input variable of the dependency rule (and future CPT). That is, calculate P(a|b), P(¬a|b), P(a|¬b), and P(¬a|¬b). In the ensuing discussion, we will always use variables b (with or without subscripts) to represent the input or given variables, and variables a to represent the corresponding output(s). We will also denote prior and posterior probabilities by P̂ and P, respectively. To find relative risks corresponding to particular variable states, we solve the following equations. By the term "good" in the equations, we mean the healthy state(s) of a variable. For example, if there are three classes of obesity while only one normal weight, we consider the normal weight to be "good".

   1. rr_i = P(a|b_i) / P(a|good), where G is the set of all possible good input states
   2. P(a|good) = Σ_{G_k∈G} P(a|G_k) · P̂(G_k) / Σ_{G_k∈G} P̂(G_k)
   3. P(a) = P(a|b_i) · P̂(b_i) + P(a|¬b_i) · [1 − P̂(b_i)]
   4. P(a) = Σ_{b_i∈S} P(a|b_i) · P̂(b_i), where S is the set of all possible given input states
   5. P(a|b_i) + P(¬a|b_i) = 1
   6. P(a|¬b_i) + P(¬a|¬b_i) = 1

Solution:

   P(a|good) = P̂(a) / Σ_{b_i∈S} P̂(b_i) · rr_i
   P(a|b_i) = P(a|good) · rr_i
   P(¬a|b_i) = 1 − P(a|b_i)
   P(a|¬b_i) = P̂(a) · [P̂(b_i) · rr_i − Σ_{b_k∈S} P̂(b_k) · rr_k] / {[P̂(b_i) − 1] · Σ_{b_k∈S} P̂(b_k) · rr_k}
   P(¬a|¬b_i) = 1 − P(a|¬b_i)

To do this for sensitivity and specificity, we solve these equations, where TP, TN, FP, and FN represent the True Positive, True Negative, False Positive, and False Negative rates, respectively:

   1. sensitivity = TP / (TP + FN)
   2. specificity = TN / (TN + FP)
   3. P̂(b) = TP + FP


   4. P̂(a) = FN + TP

Therefore:

   TP = sensitivity · P̂(a)
   TN = specificity · (1 − P̂(a))
   FN = P̂(a) − TP
   FP = (1 − P̂(a)) − TN

Solution:

   P(a|b) = TP / (TP + FP)
   P(a|¬b) = FN / (TN + FN)
   P(¬a|b) = FP / (TP + FP)
   P(¬a|¬b) = TN / (TN + FN)

3. Use the laws of probability on the combinations of input values to compute the rows of the Conditional Probability Tables. Set up the objective function, constraint inequalities, and constraint equalities for quadratic programming, using the individual probabilities calculated in step 2 and the prevalence of the input characteristics in combination from the Bayesian Network DAG assembled so far. The unknown, which is the row of the CPT, is the output of the quadratic programming.

Suppose the number of input characteristics (e.g. age group, gender, blood pressure) is N. Let {f_n | n = 1, 2, 3, …, N} represent variable functions with a common domain D, the individuals x of a population. Denote the range of function f_n by R_n. Partition each range R_n = ∪_{i=1}^{I_n} R_{ni}, where I_n is the number of elements in the partition of R_n, so that {R_{ni} | i = 1, 2, …, I_n} forms a disjoint cover for R_n. Finally, let S̄ represent the complement of set S, and n(S) represent the number of elements in S.

If we had perfect information, we would have

   p(f_n(x) ∈ R_{ni} | f_m(x) ∈ R_{mj}) − Σ_k Σ_{l≠m} p(f_n(x) ∈ R_{ni} | f_l(x) ∈ R_{lk}) · p(f_l(x) ∈ R_{lk}) = 0,

for every n and m. Since our data is obtained from multiple sources and studies, however, and is imperfect, our goal will instead be to minimize the sum of the squares of the errors of all these estimates.
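As a concreteness check, the closed-form conversions of step 2 above can be transcribed directly into Python. This is our own hedged sketch of those formulas, not BayesExpert's actual code:

```python
def rr_to_conditionals(rr, prior_b, prior_a):
    """Convert relative risks rr[i] (of outcome a for input state b_i,
    relative to the "good" state) plus priors P^(b_i) and P^(a) into
    individual conditional probabilities, per the solution above."""
    denom = sum(pb * r for pb, r in zip(prior_b, rr))
    p_a_good = prior_a / denom                      # P(a|good)
    p_a_b = [p_a_good * r for r in rr]              # P(a|b_i)
    p_a_notb = [                                    # P(a|not b_i)
        prior_a * (pb * r - denom) / ((pb - 1.0) * denom)
        for pb, r in zip(prior_b, rr)
    ]
    return p_a_good, p_a_b, p_a_notb

def sens_spec_to_conditionals(sensitivity, specificity, prior_a):
    """Convert sensitivity/specificity plus the outcome prior P^(a)
    into the four conditional probabilities via the confusion matrix."""
    tp = sensitivity * prior_a
    tn = specificity * (1.0 - prior_a)
    fn = prior_a - tp
    fp = (1.0 - prior_a) - tn
    return {"P(a|b)": tp / (tp + fp), "P(a|~b)": fn / (tn + fn),
            "P(~a|b)": fp / (tp + fp), "P(~a|~b)": tn / (tn + fn)}
```

Either conversion reproduces the law of total probability, P(a|b_i) · P̂(b_i) + P(a|¬b_i) · [1 − P̂(b_i)] = P̂(a), which is a convenient sanity check on an implementation.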


To do this, for each n, m, and j, fix p(f_n(x) ∈ R_{ni} | f_m(x) ∈ R_{mj}) and p(f_m(x) ∈ R_{mj}), and calculate the sum of the squares of these errors:

   sse_{n,m} = Σ_k Σ_{l≠m} [p(f_n(x) ∈ R_{ni} | f_m(x) ∈ R_{mj}) − p(f_n(x) ∈ R_{ni} | f_l(x) ∈ R_{lk}) · p(f_l(x) ∈ R_{lk})]².

We have N(N − 1) equations of this form. We next find values for each p(f_n(x) ∈ R_{ni} | f_l(x) ∈ R_{lk}) term with l ≠ m given by all of the sse_{n,m} equations of this form, and calculate the sum of the squares of the errors from these equations to find

   SSE_1 = Σ_{n=1}^{N} Σ_{m≠n} sse_{n,m}.

We perform similar calculations to find corresponding expressions for sse_{n,m} for probabilities of the forms

   sse_{n,m} = Σ_k Σ_{l≠m} [p(f_n(x) ∈ R_{ni} | f_m(x) ∉ R_{mj}) − p(f_n(x) ∈ R_{ni} | f_l(x) ∈ R_{lk}) · p(f_l(x) ∈ R_{lk})]²,

   sse_{n,m} = Σ_k Σ_{l≠m} [p(f_n(x) ∉ R_{ni} | f_m(x) ∈ R_{mj}) − p(f_n(x) ∉ R_{ni} | f_l(x) ∈ R_{lk}) · p(f_l(x) ∈ R_{lk})]²,

and

   sse_{n,m} = Σ_k Σ_{l≠m} [p(f_n(x) ∉ R_{ni} | f_m(x) ∉ R_{mj}) − p(f_n(x) ∉ R_{ni} | f_l(x) ∈ R_{lk}) · p(f_l(x) ∈ R_{lk})]²,

to obtain similar sums of the squares of errors SSE_2, SSE_3, and SSE_4, respectively. We then form SSE_TOTAL = SSE_1 + SSE_2 + SSE_3 + SSE_4. Putting everything together, we obtain the following quadratic programming problem.

Minimize: SSE_TOTAL

Subject to the following set of constraints:
   • All probabilities p clearly satisfy 0 ≤ p ≤ 1.
   • The constraints imposed on all probabilities p by the relative risk and sensitivity/specificity confidence intervals required for consonant data sets.
   • p(a) + p(¬a) = 1.
   • P(a|b) · p(b) + P(a|¬b) · p(¬b) = p(a).

4. Iteratively run the quadratic programming to find the smallest feasible window. Make the above sse equalities into inequalities by adding and subtracting a


proportion of the confidence interval for the relative risk (RR) for each equation (this also applies to specificity).

   • First normalize the confidence intervals of the RRs of variables that affect a condition (within one CPT) by converting them all to a 95% confidence interval, and then finding their relative sizes, with the largest 95% confidence interval being the largest.
   • Multiply the window by these relative sizes to make sure each variable is adjusted by the same proportion of its CI bounds. (The window is the lower and upper bound of the conditional probabilities in the quadratic programming inequalities.)
   • Use a binary search to make the window as small as possible (making it as close to the RR of the studies as possible) while still feasible according to the quadratic programming algorithm.
   • Convert the smallest feasible conditional probability window back to RR confidence intervals, to find what confidence interval the conditional probabilities fall within. This is your validation score:

   (RR + CI) / RR = (p(a|b) + smallest_feasible_window) / p(a|b).
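The binary search in step 4 can be sketched as follows; `is_feasible` here is a stand-in for an actual quadratic-programming solve, so this shows the shape of the procedure rather than the production code:

```python
def smallest_feasible_window(is_feasible, lo=0.0, hi=1.0, tol=1e-6):
    """Binary-search the smallest window half-width for which the
    quadratic program is feasible. `is_feasible(width)` stands in for
    a QP solve with the conditional-probability bounds widened by
    `width` times each variable's normalized CI size; feasibility is
    assumed monotone in the width."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if is_feasible(mid):
            hi = mid      # feasible: try a tighter window
        else:
            lo = mid      # infeasible: widen the window
    return hi

# Toy stand-in: suppose the QP becomes feasible once the window reaches
# 0.125 of the CI. (A real check would run the QP solver each time.)
w = smallest_feasible_window(lambda width: width >= 0.125)
```

The returned window is then converted back to a multiple of the RR confidence interval via the equation above, giving the validation score.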

14.10 Hand-Crafted Bayes Net Model of Individual Aging

We have "hand-crafted" an initial BayesExpert model for our Rejuve.AI app to give users information relevant to their longevity while the network of scientist contributors is forming (Rejuve.AI 2022). This hand-crafted, literature-based Bayes network combines many clinical studies related to the nine-plus-one hallmarks of aging with actions users can take to improve their chances of preventing age acceleration due to each of the hallmarks. The Pomegranate Bayesian net that BayesExpert generates from the hand-coded rules has 315 nodes (Schreiber 2018). Of those, 162 are discrete distributions, the "leaves" of the Bayesian network, that derive their priors from 85,585 NHANES data contributors who took blood tests (National Center for Health Statistics 2022). The remaining 153 nodes are conditional probability tables, each created from the rules: 63 were created from dependency rules, and 90 are logic rules using and, or, and avg to organize the data. The relations in the dependency rules come from 62 meta-analyses and systematic reviews from reputable medical journals, based on 1500 gold standard Randomized Controlled Trials on a total of 12.5 million subjects. Their priors also come from the NHANES data. At present, users enter health data through surveys that are very similar to the NHANES questions and through wearables. We collect 12 wearable device signals, which prefill survey answers in the app and are sent to an ADTK anomaly detector. The anomaly detector applies thresholds based on the medical literature and also


detects autoregression, interquartile-range, and level-shift anomalies in the signals (Arundo Analytics 2020). Signal anomalies are sent to the Bayesian Network to be interpreted, together with the survey data, through the rules, to find what about the data entered most helped and harmed the hallmarks-of-aging risk scores. The user is then sent both scientific and accessible literature about the most important things they are doing that seem reasonably likely to change the risk to the hallmark, as inferred by the Bayesian network. Each study relation (such as a relative risk) in the network has a validation score that tells how much the window of its probability value had to change in order to have a feasible result in quadratic programming, where a feasible result indicates a match of the data. The present net has an average validation score of 0.056, with a median of 0.008 and a standard deviation of 0.099. The average score means that the probability had to change by about five percent, a low amount, while the median under one percent indicates a good match overall. This validation score will also be used by the GCN's fitness function to automatically find more consonant sets, and the conditionals that make them consonant, that is, those with lower validation scores. The GCN will explore the conditions in which cliques of agreeing crowdsourced studies help solve problems better than other cliques of studies (Fig. 14.2).
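As an illustration of one of these detector types, an interquartile-range check can be written in a few lines of plain Python. This is a sketch of the rule such a detector applies, not ADTK's own implementation, and the sample signal is made up:

```python
def iqr_anomalies(signal, k=1.5):
    """Flag indices whose values fall outside [Q1 - k*IQR, Q3 + k*IQR],
    the classic interquartile-range rule, using linear-interpolation
    quantiles on the sorted signal."""
    s = sorted(signal)

    def quantile(q):
        pos = q * (len(s) - 1)
        i, frac = int(pos), q * (len(s) - 1) - int(pos)
        return s[i] if frac == 0 else s[i] * (1 - frac) + s[i + 1] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, x in enumerate(signal) if x < lo or x > hi]

# Hypothetical resting-heart-rate samples with one spike at index 6.
hr = [70, 72, 71, 69, 73, 70, 120, 71]
spikes = iqr_anomalies(hr)
```

In the app, flagged indices like these would be passed on to the Bayesian network for interpretation alongside the survey answers.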

14.11 OpenCog's BioAtomspace

OpenCog is a framework primarily intended for AGI R&D, centered on a distributed metagraph knowledge store called the Atomspace, and oriented toward fostering cooperation of multiple AI algorithms, perhaps representing diverse AI paradigms, in updating and leveraging a single set of Atomspaces toward common goals. A new version called OpenCog Hyperon is under intensive development, with design goals including greatly increased scalability as well as improved usability. The OpenCog BioAtomspace (Mozi 2022) is a metagraph knowledge base of biomedical knowledge, created by importing and integrating a number of different existing open data sources. The BioAtomspace is the GCN's general-purpose background knowledge store, and serves as the collective long-term symbolic memory that multiple agents in the GCN can write to as well as read from. The BioAtomspace presents a valuable resource for AI algorithms that are capable of biasing their operations based on structured background knowledge. The basic semantics of the BioAtomspace metagraph is divided into MoleculeNodes and GeneNodes. MoleculeNodes represent items listed in catalogs of biomolecules including RefSeq (Pruitt et al. 2002), ChEBI (Hastings et al. 2016), and UniProt (The UniProt Consortium 2021). GeneNodes represent the gene as a basic unit of abstract scientific/linguistic conceptual representation, using HGNC (European Bioinformatics Institute 2022) as the primary source. From these units, more complicated concepts can be constructed to represent the contents of the Gene Ontology (Gene Ontology Consortium 2021), the Reactome (Gillespie et al. 2022) pathway database, and the BioGRID (Oughtred et al.


Fig. 14.2 The Markov blanket of the Telomere Attrition Hallmark node in Rejuve.AI's longevity Bayesian net currently used in the Rejuve.AI app. There are 9 other hallmarks and many other inputs. The extent to which we can be confident that the structure of the Markov blanket is correct is the extent to which the meta-analyses have controlled for confounding, since the studies are of gold standard Randomized Controlled Trials or use statistical instruments on observational data. When more data is available, we will be able to learn the structure of the network through techniques such as Pearl's inductive causation

2020) protein–protein interaction database. This dichotomy allows the representation of “grounding” of biological concepts like “genes” in empirical scientific data about the molecules and processes on which they are based.


To elaborate a little further, we have three datasets in the BioAtomspace used for this application.

1. Gene Ontology. Contains information about genes, the GO terms of which a gene is a member, and the parental relationships between GO terms. Say we have Gene A and GO terms G1, G2, G3; then we have the following relationships in the BioAtomspace (using standard OpenCog classic link/node notation):

2. Pathways. Two pathway datasets have been imported into the BioAtomspace: the Small Molecule Pathway Database (SMPDB) and the Reactome pathway database. SMPDB contains small molecules (ChEBI) and proteins (UniProt) together with the SMPDB pathway ID (SMPDB-ID) of the pathways in which they occur. We can have the following sample relationship:

The Reactome pathway database (Physical Entity Identifier mapping) contains genes, small molecules (ChEBI) and proteins (UniProt) together with the Reactome pathway ID of the pathways in which they occur. The parental relationship between Reactome pathways is also included. A sample relationship in the BioAtomspace is as follows:


3. BioGRID Protein Interaction. Contains interactions between genes and the gene-to-protein mapping. Given two genes A and B and proteins P1 and P2, we can have the following sample relations in the BioAtomspace:

During the gene annotation process in the annotation service, we infer protein–protein interactions from the facts that genes interact with each other and that genes express proteins, through which proteins can interact with each other. That means we can find the following relationship from the above example:


However, such information is not stored in the BioAtomspace; rather, it is created every time the user requests a protein–protein interaction for a given gene set.
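This on-the-fly inference can be sketched as follows, with hypothetical gene and protein identifiers; the real system operates on Atomspace links, not Python dictionaries:

```python
def infer_ppi(gene_interactions, expresses):
    """Infer protein-protein interactions on the fly: if genes A and B
    interact (from BioGRID) and A expresses P1 while B expresses P2,
    then P1 and P2 are taken to interact. Identifiers are illustrative."""
    pairs = set()
    for a, b in gene_interactions:
        for p1 in expresses.get(a, []):
            for p2 in expresses.get(b, []):
                pairs.add(frozenset((p1, p2)))
    return pairs

# Hypothetical example mirroring the text: genes A and B interact,
# and each expresses one protein.
ppi = infer_ppi(
    gene_interactions=[("GeneA", "GeneB")],
    expresses={"GeneA": ["Uniprot:P1"], "GeneB": ["Uniprot:P2"]},
)
```

Computing these pairs per request, rather than storing them, keeps the BioAtomspace limited to the asserted source data.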

14.12 Types of Atoms in the BioAtomspace

Atoms in the BioAtomspace are of multiple types, among which the most important are ConceptNode, GeneNode and MoleculeNode. GO terms and pathway IDs (Reactome_ID or SMPDB_ID) are of type ConceptNode, while proteins (UniProt ID) and small molecules (ChEBI_ID) are represented as atoms of type MoleculeNode. GeneNodes are used, for example, to represent the fact that genes can express proteins, which have MoleculeNode type and a name starting with the Uniprot prefix, such as:

Pathways are represented as concepts and can have, for instance, genes, proteins and small molecules as members. E.g. the links

would indicate that the Reactome pathway "R-HSA-5684264" has as members the gene "MAP2K4", the protein "Uniprot:Q8NFZ5" and the small molecule "ChEBI:15422". A gene can also have a membership relationship to a GO term from the Gene Ontology dataset, and it can interact with other genes from the BioGRID dataset.
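A rough sketch of what such a membership query looks like, using the identifiers from the example above; the real store is a metagraph of MemberLinks, not a Python dict:

```python
# Pathway membership sketched as a mapping from pathway ID to member
# entities, using the identifiers from the example in the text.
members = {
    "R-HSA-5684264": {"MAP2K4", "Uniprot:Q8NFZ5", "ChEBI:15422"},
}

def pathways_containing(entity, members):
    """Return the IDs of all pathways listing `entity` as a member."""
    return {pid for pid, ms in members.items() if entity in ms}
```

In the Atomspace the same query is a pattern match over MemberLinks rather than a dictionary scan, but the semantics are the same.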


ConceptNodes are used for any concepts, like a GO term ID, pathway ID, name, etc.

The use of this sort of knowledge in symbolic-reasoning-based AI is fairly straightforward. We have experimented with this a fair bit using OpenCog's Probabilistic Logic Networks (PLN) framework, and we believe that agents leveraging PLN as a technique for transfer learning between datasets on considerably different populations will be very valuable contributors to the GCN. As a concrete example, we have near-term plans to explore PLN for transfer learning to port longevity insights from Genescient's long-lived flies to the human domain.

However, the BioAtomspace can also be leveraged without too much extra work within neural networks. A number of algorithms can be used to extract numerical embedding vectors from nodes or links in the BioAtomspace, which has been explored in (Goertzel et al. 2020). One of the objectives of our current primary AGI R&D framework, OpenCog Hyperon, is "Cognitive Synergy", that is, having AIs that can help with each other's internal states. Toward this end, we have used several embedding algorithms to represent the BioAtomspace, including a deepwalk tailored to a weighted hypergraph and small datasets, and a Kernel Principal Components Analysis of the properties of BioAtomspace nodes inferred by PLN. Neural networks can support PLN inference through such embeddings by guiding its reasoning. For example, we have shown that our deepwalk embeddings can perform analogies like the popular neural-net "King, Queen, Man, Woman" example, meaning that a simple subtraction of the vectors shows that the concept of King minus the concept of Man (i.e., royalty) is the same as the concept of Queen minus the concept of Woman in the neural semantic space.
We have shown that the same is true for nodes in the BioAtomspace; for example, the concept of "B cell proliferation" minus "T cell proliferation" is the same as the concept of "B cell differentiation" minus "T cell differentiation." Such vector arithmetic could direct PLN logic so as to prevent a combinatorial explosion.
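With toy vectors, the analogy arithmetic looks like this; the actual embeddings are learned by the deepwalk over the BioAtomspace, not hand-assigned:

```python
# Toy embedding vectors chosen so the "B cell minus T cell" offset is
# shared across contexts; real vectors come from the deepwalk embedding.
emb = {
    "B cell proliferation":   [1.0, 1.0, 0.0],
    "T cell proliferation":   [0.0, 1.0, 0.0],
    "B cell differentiation": [1.0, 0.0, 1.0],
    "T cell differentiation": [0.0, 0.0, 1.0],
}

def sub(u, v):
    """Component-wise vector subtraction."""
    return [a - b for a, b in zip(u, v)]

# The analogy holds if the offset is the same in both contexts.
offset1 = sub(emb["B cell proliferation"], emb["T cell proliferation"])
offset2 = sub(emb["B cell differentiation"], emb["T cell differentiation"])
```

A shared offset of this kind is what would let a reasoner reuse one learned relation ("B cell vs. T cell") across many biological contexts instead of enumerating them.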

14.13 Tokenomic Incentivization

Our approach to longevity science recognizes the fact that a complex heterogeneous network of datasets and AI processes is needed to solve the complex heterogeneous network phenomena of aging. To implement this approach, we must incentivize


a complex heterogeneous network of people to contribute these data sets and AI processes. Therefore, Rejuve.AI incentivizes network contributions by applying principles of self-organizing economics to the advantage of contributors, disrupting present models of compensation and making our decentralized network self-perpetuating. We will attract participants by offering them self-sovereignty through ownership of the products of their own labor, and self-determination through opportunities for self-placement in a self-reinforcing economic system. Our tokenomics model is what incentivizes app users, data scientists, and research scientists to contribute their data and models to our GCN ecosystem. The tokenomic infrastructure that incentivizes them is based on a concept of economics that goes back to Locke: that laborers should own the products of their labor, whether that labor is of their bodies or minds. This idea has appeared in various economic theories of equity throughout history, but has not had the economic infrastructure to support it until the advent of AI to determine contribution fairly and blockchain to track it forever. AI and blockchain together make possible economic worlds other than the exploitation of labor by arbitrage. This is in contrast to current data-economy practices of either paying data contributors once for data, before knowing the amount of profit that data brings, or not paying them at all, through dark patterns often unbeknownst to the data contributor. Our tokenomic infrastructure guarantees that health and intellectual contributions are the property of members in our network forever, and that they own forever at least half of what they have already produced, with the option to own all of it if they can afford to. Rejuve.AI enforces this by tracking individual contributions to end products with our AI, and then compensating those contributors in proportion with every sale of an end product by means of blockchain's distributed ledger.
In doing so we disrupt the "company" itself of the modern data economy. Our focus on composing small contributions allows people to invest relatively small amounts of time, perhaps in addition to employment that buys their labor, until the network is large and opportunities for contribution so numerous that there is no longer a need to sell their labor and let others reap the profits from it. Because we focus on what contributions have proven able to do, we can let go of the human bias in employment that guesses what contributors might be able to do. Rather, AI disrupts economic relations based on tribalistic bias and establishes relations based on the objective usefulness of already completed work, blind to human bias as in double-blind clinical trials, blind auditions, and blind justice. Our composition algorithms make available all the talents of those in socially and economically disadvantaged groups that have not been given a fair chance in the modern data economy. The network is designed to generate opportunity dependably, disrupting rampant biased "placement", and thus definition by another, in addition to ownership of one's products by another. Thus our tokenomics infrastructure truly enables the concept of self-sovereignty, of truly owning one's self.

Network members have a Data NFT that keeps track of the permissions given to others to use their data products, whether they be health data or intellectual property. If those data products are used to create commercial products, data contributors will receive compensation in proportion to the usefulness of their data product to the composed commercial product. Those proportions are kept track of in a product


NFT, which is created upon product inception (for example, at the point a lab files a patent). The product NFT is sharded according to contribution, and those shards are used to compensate data and model contributors with every product sale. Data contributors can sell up to half of their shards per product, to get paid in advance if they are in need of immediate funds, have low risk tolerance, or if someone else believes in the product they contributed to more than they do. We allow owning the product of another's labor only up to 50%, in acknowledgement that many people cannot wait decades for pharmaceutical products nor afford the risk of their success (Table 14.3).

Products are hypothesized by our AI based on composed network data and models. However, to seed the process and pay people even before a product is conceived and before a Product NFT is minted, our tokenomics model presents scientific challenges in the longevity space to the network, at many levels of difficulty. Network members stake tokens in "NFT challenges" in pools where they not only earn interest, but also earn a coupon towards purchasing the product NFT of the product that won the challenge. Although it is possible for individuals or preformed teams to win these challenges on their own, the GCN additionally composes data and model contributions into solutions that are more than the sum of their parts, and creates entries to the challenges. Once a challenge is won, a patent is filed if applicable and a product NFT is created. The data contributors that together made the product will be given shards, which they will be able to sell right away to challenge pool stakers who hold coupons to receive them, seeding an initial floor for their price. Anticipation of coupons incentivizes labs, data, and model contributors to join challenges, because they know they will be compensated early (when a patent is filed or a product is conceived) as well as later when a product is sold.
When labs join those challenges, they pay the data contributors for use of their data, sending money back even earlier than product conception. Sending money back before product sales, through product NFT shard sales and coupons for those sales, makes the economy self-reinforcing.

Thus, our tokenomics works on the macro level much like our GCN market works on the micro level. The fair proportion of the data and model contributions is based on the emergent price of the model in the inner market of the GCN, and is in fact the way this multiagent reinforcement learning system assigns credit. GCN's algorithm to choose models depends only on their usefulness in solving a variety of inner challenges, which may be many more than the external macro challenges and are repeated many more times as is needed by a reinforcement learning system. Thus the fair assignment of credit, and the blind choice of small data products for composition into models, facilitate dependable opportunity for small-scale individual contribution that promotes both the ownership of the products of one's labor, whether of the mind or body, and a level playing field.

Table 14.3 Tokens in the Rejuve.AI tokenomics system

Token             | Purpose                                                        | Allocation               | When minted
RJV utility token | Currency of the network                                        | Limited supply           | Token sale on open market
Data NFT          | Tracks permissions to use an individual's data and models      | One per data contributor | First data contribution
Product NFT       | Tracks proportionate contribution of data to products          | One per product, sharded | Product conception
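The shard mechanics described above can be sketched as follows. Contributor names and the shard count are hypothetical; in the network, the contribution weights emerge from the GCN's internal market rather than being given directly:

```python
def mint_product_nft(contributions, total_shards=1000):
    """Shard a product NFT in proportion to each contributor's measured
    usefulness. Per the 50% self-ownership rule, contributors may sell
    at most half of their shards. All names and counts are illustrative."""
    total = sum(contributions.values())
    shards = {who: round(total_shards * c / total)
              for who, c in contributions.items()}
    sellable = {who: n // 2 for who, n in shards.items()}
    return shards, sellable

# Hypothetical contributors whose usefulness weights came out 3:1.
shards, sellable = mint_product_nft({"alice": 3.0, "bob": 1.0})
```

Each subsequent product sale would then be split across contributors in proportion to the shards they still hold.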

14.14 Strategies for Addressing Longevity via the Crowdsourced GCN Meta-Model

Among the many things currently unclear about human aging, perhaps the most frustratingly glaring is: we do not have a clear idea how many different things will need to be fixed in parallel in order to have a dramatic impact on maximum human healthspan.

At one extreme is the possibility of a relatively simple solution, such as flipping a "gene switch": turning on genes that control the expression of other genes. As an indication of some possible hope in this direction, there is evidence of a "midlife switch" that changes the RNA expression of aging-related genes at about age 60 (Timmons et al. 2019). I.e., yes, there are nine or ten or however many hallmarks of aging, each with their own complex sub-cases and stories, but given the underlying interdependencies between the various processes underlying these hallmarks, there could potentially be a small number of genomic, chemical or cellular processes playing a major role in triggering multiple hallmarks. Identifying these via analytics would then point to the discovery of therapeutics addressing this small set of aging triggers. In this case one could see the advent of a "longevity pill" or "longevity injection" or similar, in a quite literal way.

At the other extreme is the philosophy that there is really no such thing as aging, there are only diseases of aging. It is possible there is no small or small-ish set of triggers underlying the various aging hallmarks, but instead that the reality is just that a lot of different things wear out and go wrong as people get old. Definitely there are interdependencies, such that each thing going wrong tends to accelerate the going-wrong of other things, so that at some point a rapid-fire cascade of failures across the organism is reached.
But if none of the things-that-go-wrong plays a critical causal role in the overall network of interdependent screw-ups, then the solution to aging is simply to address the various age-associated diseases one after another.

Working toward both of these approaches concurrently makes perfect sense, both scientifically and medically. Partial progress toward removing broad-based aging triggers is likely to help with some age-associated diseases; and of course curing age-associated diseases can compound with partial fixes to broad-based aging triggers. Concretely: incremental gains made by manipulating the aging switch could delay chronic conditions, while solving chronic conditions like cancer would allow otherwise carcinogenic treatments for aging itself to be explored.

14 Leveraging Algorithmic and Human Networks to Cure Human Aging …


Our own work with Genescient on analyzing DNA, RNA and other data from their long-lived flies is largely inspired by the "broad-based aging trigger" philosophy: the hope is that a relatively small number of pathways can be found that are heavily causal for aging in flies, and that transfer learning can then help us port this conclusion to humans, leading to impactful human longevity therapeutics. However, the same datasets can also be helpful for understanding age-associated disease, and in our prior work we have leveraged Methuselah fly genomics to better understand the pathways behind Parkinson's and Alzheimer's in humans (Matsagas et al. 2017).

The data gathered from the Rejuve.AI app can likewise be useful across the spectrum of approaches to curing aging. Clinical and genomic data uploaded by members through the app may serve as contributions to the data pool associated with a GCN directed toward finding broad-based aging triggers. At the same time, this sort of data, along with simpler data from fitness trackers, smartphone peripherals and questionnaire answers, may be valuable for understanding how the path to aging or healthy longevity varies across individuals, which may be particularly useful in the context of common age-associated diseases. For instance, tracking various activities and biological indicators, together with clinical and gene-expression data, from network members moving through the stages of Alzheimer's could yield critical data on how to roll back mild Alzheimer's rather than letting it progress to the moderate stage; the advice in this regard would almost surely depend on particular aspects of a person's body and life.

Compliance with Ethical Standards

Conflict of Interest The authors declare that they have no conflict of interest.

References

Arundo Analytics (2020) Anomaly detection toolkit. https://adtk.readthedocs.io/en/stable/. Accessed 5 Oct 2022
Duong D (2013) The data absorption technique: coevolution to automate the accurate representation of social structure. In: CSSSA 2013, Santa Fe, NM. http://computationalsocialscience.org/wp-content/uploads/2013/08/Duong2013.pdf. Accessed 13 July 2022
Duong D (2018) SingularityNET's first simulation is open to the community. https://blog.singularitynet.io/singularitynets-first-simulation-is-open-to-the-community-37445cb81bc4. Accessed 13 July 2022
Duong D (2020) Bayesian network and anomaly detection service. https://github.com/Rejuve/bayesnet. Accessed 13 July 2022
Duong D (2022) Bayes expert: crowdsourcing the community of science. https://github.com/Rejuve/bayesnet/blob/master/Rejuve%20BayesExpert.pdf. Accessed 5 Oct 2022
European Bioinformatics Institute (2022) HUGO gene nomenclature committee. https://www.genenames.org. Accessed 5 Oct 2022
Gene Ontology Consortium (2021) The gene ontology resource: enriching a GOld mine. Nucl Acids Res 49:325
Gillespie M et al (2022) The reactome pathway knowledgebase 2022. Nucl Acids Res 50:687
Goertzel B (1993) The evolving mind. Routledge, Philadelphia


D. Duong et al.

Goertzel B, Geisweiller N (2020) AI-DSL: toward a general-purpose description language for AI agents. https://blog.singularitynet.io/ai-dsl-toward-a-general-purpose-description-language-for-ai-agents-21459f691b9e. Accessed 13 July 2022
Goertzel B et al (2014) Probabilistic logic networks: a new conceptual, mathematical and computational framework for uncertain inference. https://wiki.opencog.org/w/PLNBook. Accessed 13 July 2022
Goertzel B (2013) A mind-world correspondence principle. In: IEEE symposium on computational intelligence for human-like intelligence
Goertzel B et al (2020) Embedding vector differences can be aligned with uncertain intensional logic differences. arXiv:2005.12535v1 [cs.AI]
Goodfellow I et al (2014) Generative adversarial networks. arXiv:1406.2661 [stat.ML]
Hansen N (2008) The CMA evolution strategy. https://cma-es.github.io/. Accessed 13 July 2022
Hastings J et al (2016) ChEBI in 2016: improved services and an expanding collection of metabolites. Nucl Acids Res 44:214
Lopez-Otin C et al (2013) The hallmarks of aging. Cell 153:1194–1217
Matsagas K et al (2017) Multipath natural product supplement suppresses dementia symptoms in amyloid-β and tau transgenic drosophila. J Biol Med Res 1:10
Mozi (2022) Bioatomspace import scripts. https://github.com/MOZI-AI/knowledge-import. Accessed 5 Oct 2022
National Center for Health Statistics (2022) National health and nutrition examination survey. https://www.cdc.gov/nchs/nhanes/index.htm. Accessed 5 Oct 2022
Oughtred R et al (2020) The BioGRID database: a comprehensive biomedical resource of curated protein, genetic, and chemical interactions. Protein Sci 30:187
Owocki K et al (2022) GreenPilled: how crypto can regenerate the world. Blurb Inc., San Francisco
Pruitt K et al (2002) The reference sequence (RefSeq) database. In: McEntyre J et al (ed) The NCBI handbook. https://www.ncbi.nlm.nih.gov/books/NBK21091/. Accessed 5 Oct 2022
Rejuve.AI (2021) The decentralized AI-powered longevity research network white paper 0.9, pp 46–64. https://rejuve.ai/wp-content/uploads/2023/02/Rejuve-Network-Whitepaper-1.11.pdf. Accessed 14 June 2023
Rejuve.AI (2022) Longevity bayes. https://github.com/Rejuve/bayesnet/blob/master/sn_bayes/longevity_bayes.py. Accessed 5 Oct 2022
Salthe S (2012) Hierarchical structures. Axiomathes 22
Schreiber J (2018) Bayesian networks. https://pomegranate.readthedocs.io/en/latest/BayesianNetwork.html. Accessed 5 Oct 2022
Smolensky P (1988) On the proper treatment of connectionism. Behav Brain Sci 11:1–23
The UniProt Consortium (2021) UniProt: the universal protein knowledgebase in 2021. Nucl Acids Res 49:480
Timmons J et al (2019) Longevity-related molecular pathways are subject to midlife "switch" in humans. Aging Cell 18:129

Author Index

A
Andreychenko, Anna E., 15

B
Barriuso, Adrián Alonso, 143
Baxter, Richard A., 189
Blokh, David, 245

C
Chekanov, Nikolay, 217
Corstjens, Hugo, 189

D
Danko, Daniil, 189
Deminov, Marc, 31
Duncan, Michael, 287
Duong, Deborah, 287

F
Fedintsev, Alexander, 153
Fishman, Veniamin, 217

G
Gaetano de, Giovanni, 115
Galaz, Alfonso Ardoiz, 143
Galkin, Fedor, 3
Gallego, Vicente, 165
Georgievskaya, Anastasia, 189
Gialluisi, Alessandro, 115
Gitarts, Joseph, 245
Goertzel, Ben, 287

H
Hellis, Emily A., 91

I
Iacoviello, Licia, 115
Iklé, Matthew, 287
Ivanchenko, Mikhail, 67
Ivanisenko, Nikita, 217
Izzi, Benedetta, 115

K
Kagansky, Nadya, 245
Kalyakulina, Alena, 67
Kardymon, Olga, 217
Kriukov, Dmitrii, 275
Kuztetsov, Petr, 31

M
Martín, Miguel Ortega, 143
Melerzanov, Alexander, 31
Mizrahi, Eliyahu H., 245
Morozov, Sergey, 15
Moskalev, Alexey, 153
Mukaetova-Ladinska, Elizabeta B., 91

P
Peshkin, Leonid, 275
Popov, Vasily, 153

R
Rodríguez, Jorge Álvarez, 143

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Moskalev et al. (eds.), Artificial Intelligence for Healthy Longevity, Healthy Ageing and Longevity 19, https://doi.org/10.1007/978-3-031-35176-1


S
Seid, Hedra, 287
Shashkova, Tatiana, 217
Sierra, Óscar García, 143
Sindeeva, Maria, 217
Stambler, Ilia, 245
Syromyatnikov, Mikhail, 153

T
Tlyachev, Timur, 189

Y
Yankevich, Dmitrii, 31
Yusipov, Igor, 67

Z
Zhavoronkov, Alex, 3

Subject Index

A
Age-related diseases, 67, 73, 74, 91, 126, 190, 234, 236, 245, 249, 255, 257, 265, 280
Aging, 3–10, 16, 17, 19, 21, 24, 26, 67–69, 75, 77, 91, 115, 116, 119–126, 129, 132, 134, 153, 154, 160, 189–201, 203–205, 207, 217, 232–236, 245–249, 254, 255, 257, 263, 265, 267, 268, 270, 271, 275–280, 282, 287–291, 296, 298, 305, 306, 311, 314, 315
Aging clocks, 4–10, 69, 70, 72, 115–118, 120–124, 126–130, 134, 193, 234, 235, 277, 278
Artificial Intelligence, 16, 17, 20, 32, 33, 35, 36, 70, 91, 95, 101, 108, 115, 189–191, 193, 197, 221, 224, 225, 229, 230, 247, 248, 275, 276
Assistive technology, 91, 93, 95–106, 108, 110, 111

B
Bayesian Networks, 291, 299, 303, 305, 306
Behavioural and Psychological Symptoms of Dementia, 91, 92, 108
Biogerontology, 5, 115, 116
Biological aging, 7, 68, 115–118, 120, 121, 123, 124, 127, 131, 134, 206, 235
Blockchain, 312
Blood age, 127
Brain age, 122, 127

C
Chromatin, 231
Clinical practice, 3, 8, 9, 17, 18, 189, 206, 219, 263
Coevolution, 291–293, 298
Complex Adaptive Systems, 290–292
Computer vision, 19, 20, 26, 201, 204, 227
Crowdsourcing, 282, 291

D
Data mining, 275
Deep learning, 3, 5–10, 17, 18, 68, 69, 72, 73, 77, 79, 118, 121, 145, 194, 196, 200–205, 207, 225, 237, 245–247, 264, 276
Dementia, 77, 91–93, 95–97, 99–101, 103–105, 107, 108, 110, 129, 233, 248, 256–263
Dementia care, 91, 93, 97, 99, 104, 108, 110, 111
DNA methylation, 67–80, 118, 124, 193, 231, 232, 234–237, 277
DNA methylation clocks, 69, 124
Drug discovery, 10, 196, 297

E
Effects of vibration on health, 31
Employee health risk estimation, 60
Epigenetics, 4–8, 67–73, 75–79, 115–122, 124–127, 129, 131, 134, 193, 217, 231–234, 236, 237, 279

F
Facial imaging, 201



G
Genetic association analysis, 165
Genetics, 19, 70, 75, 76, 115, 116, 118, 122, 124, 126, 127, 129, 133, 134, 154, 155, 158, 165–170, 172–186, 189, 192, 194, 207, 217, 218, 221, 223, 227, 228, 231, 233, 234, 236, 247, 269, 293
Genomic data, 165–171, 173, 176, 180, 182–186, 217, 233, 315
Genomic variants, 217, 218, 222, 227–233
Geroprotector, 7, 10, 153–157
Graph Neural Networks (GNNs), 143, 145, 146, 150
Graphs, 40, 143, 145–150, 219

H
Health, 7, 17, 19, 26, 32, 35, 64, 71, 72, 75, 76, 79, 91, 92, 104, 109, 110, 115, 117, 119, 122, 123, 125, 134, 146, 150, 154, 165–168, 172, 173, 178, 182, 183, 186, 190, 191, 197, 206, 207, 233, 246, 247, 249–251, 254, 257, 266, 279, 280, 290, 291, 305, 312
Health risk forecasting, 64
Human health, 165, 166, 172, 197, 250
Hybrid artificial intelligence, 288

I
Imaging biomarkers, 26
Information and statistical methods of forecasting, 31
Information theory, 245–252, 254, 255, 264, 269–271

K
Kernel Machine Regression, 172, 181–183, 185
Kernel methods, 165, 167–173, 178, 181, 185, 186

L
Labor health, 36
Life extension, 154, 155
Longevity, 3, 7–10, 15–17, 22–24, 26, 68, 74, 75, 78–80, 124, 126, 144, 150, 151, 192, 194–196, 204, 206, 233, 245, 247, 248, 255, 269, 271, 275, 276, 279, 288–291, 296, 299–301, 305, 307, 311, 313–315

M
Machine learning, 3, 5, 7, 27, 36, 67–71, 73, 74, 76, 78–80, 115–117, 120, 134, 143, 144, 153, 154, 160, 172, 193–196, 198, 199, 222, 224, 227, 229–231, 247, 277, 279, 282, 289, 296, 300, 301
Mechanical Turk, 275, 276, 282
Medical genetics, 165, 167, 174
Medical imaging, 15–20, 26, 27
Methylation, 5, 68–72, 75–80, 118, 119, 123–126, 193, 234, 235, 277
Mixed model, 173, 174, 176, 182, 183
Mortality, 4, 67, 68, 72, 75, 76, 108, 115, 118–124, 126, 134, 203, 235, 248, 257, 263, 267, 268, 280
Multi-dimensional data, 121, 189, 195, 207
Multimorbidity, 245, 247, 257, 263–268
Multiresolutional Simulation, 291, 295, 299
Multivariate Phenotypes, 182, 183

N
NGS, 203, 256
Normalized mutual information, 247, 250, 253, 254, 256–258, 260, 263–267, 269, 270
Nucleotide sequence structuredness, 245

O
Older people, 104

P
Personalization, 7, 189, 207
Physiological threshold, 247
Pleiotropy, 181, 254, 255, 263

R
Radiology, 15–20, 25, 276
Regularized regression, 237, 277, 279
Risk prediction, 62, 63, 115, 247, 248, 257

S
Sensorineural hearing loss, 31, 33, 39, 43–45, 49–51, 53, 55, 56, 60, 61

Skin aging, 189–195, 197, 199, 200, 206, 207
Skin research, 189, 191, 197, 206, 207
Skin resilience, 189, 196, 206
Systems biology, 289

T
Telomere length, 118, 192, 193, 234

Temporal link prediction, 145, 150
Transcriptome, 6, 154, 232
Transformers, 143, 145, 200, 226, 231, 291

V
Variance component test, 173, 174, 178
Vibration disease, 31, 33, 43–49, 51–59, 61