Biomedical Visualisation: Volume 3 [1st ed.] 978-3-030-19384-3;978-3-030-19385-0

This edited book explores the use of technology to enable us to visualise the life sciences in a more meaningful and engaging way.


English · Pages XVI, 141 [151] · Year 2019


Table of contents:
Front Matter ....Pages i-xvi
The Use of Ultrasound in Educational Settings: What Should We Consider When Implementing this Technique for Visualisation of Anatomical Structures? (Ourania Varsou)....Pages 1-11
Interactive 3D Visualisation of the Mammalian Circadian System (Allison Sugden, Maria Gardani, Brian Loranger, Paul M. Rea)....Pages 13-39
Utilising Anatomical and Physiological Visualisations to Enhance the Face-to-Face Student Learning Experience in Biomedical Sciences and Medicine (Christian Moro, Sue Gregory)....Pages 41-48
Anatomy Visualizations Using Stereopsis: Current Methodologies in Developing Stereoscopic Virtual Models in Anatomical Education (Dongmei Cui, Jian Chen, Edgar Meyer, Gongchao Yang)....Pages 49-65
Statistical Shape Models: Understanding and Mastering Variation in Anatomy (Felix Ambellan, Hans Lamecker, Christoph von Tycowicz, Stefan Zachow)....Pages 67-84
Towards Advanced Interactive Visualization for Virtual Atlases (Noeska Smit, Stefan Bruckner)....Pages 85-96
An Experiential Learning-Based Approach to Neurofeedback Visualisation in Serious Games (Ryan Murdoch)....Pages 97-109
Visual Analysis for Understanding Irritable Bowel Syndrome (Daniel Jönsson, Albin Bergström, Isac Algström, Rozalyn Simon, Maria Engström, Susanna Walter et al.)....Pages 111-122
Immersive Technology and Medical Visualisation: A Users Guide (Neil McDonnell)....Pages 123-134
A Showcase of Medical, Therapeutic and Pastime Uses of Virtual Reality (VR) and How (VR) Is Impacting the Dementia Sector (Suzanne Lee)....Pages 135-141


Advances in Experimental Medicine and Biology 1156

Paul M. Rea Editor

Biomedical Visualisation Volume 3

Advances in Experimental Medicine and Biology Volume 1156 Editorial Board IRUN R. COHEN, The Weizmann Institute of Science, Rehovot, Israel ABEL LAJTHA, N.S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA JOHN D. LAMBRIS, University of Pennsylvania, Philadelphia, PA, USA RODOLFO PAOLETTI, University of Milan, Milan, Italy NIMA REZAEI, Children’s Medical Center Hospital, Tehran University of Medical Sciences, Tehran, Iran

More information about this series at http://www.springer.com/series/5584

Paul M. Rea Editor

Biomedical Visualisation Volume 3

Editor Paul M. Rea Anatomy Facility, Thomson Building, School of Life Sciences, College of Medical, Veterinary and Life Sciences University of Glasgow Glasgow, UK

ISSN 0065-2598     ISSN 2214-8019 (electronic) Advances in Experimental Medicine and Biology ISBN 978-3-030-19384-3    ISBN 978-3-030-19385-0 (eBook) https://doi.org/10.1007/978-3-030-19385-0 © Springer Nature Switzerland AG 2019 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Technologies have been adopted in the life sciences, medicine, dentistry, surgery and allied health professions at an exponential rate over recent years. The way we view and examine data now is significantly different from what was done perhaps 10 or 20 years ago. With the growth, development and improvement of imaging and data visualisation techniques, the way we are able to interact with data is more engaging than it has ever been. These technologies have been used not only to improve visualisation in the biomedical fields but also to change how we engage our future generations of practitioners while they are students within our educational environment. Never before have we had such a wide range of tools and technologies available to engage our end-stage user. It is therefore a perfect time to bring this work together, to showcase and highlight the excellent investigative work going on globally. This book showcases the work that our global colleagues are researching, ultimately to improve student and patient education, understanding and engagement. By sharing best practice and innovation, we can truly aid our global development in understanding how best to use technology for the benefit of society as a whole. Glasgow, UK

Paul M. Rea


Acknowledgements

I would like to sincerely thank every author who has contributed to this third volume of Biomedical Visualisation. By sharing our innovative approaches, we can truly benefit students, faculty, researchers, industry and beyond in our quest for the best uses of technologies and computers in the life sciences, medicine and the allied health professions. In doing so, we can improve global engagement with, and understanding of, best practice in the use of these technologies for everyone. Thank you!


About the Book

Following on from the success of the first two volumes, Biomedical Visualisation, Volume 3, showcases and highlights the innovative use of technologies in enabling and enhancing our understanding of the life sciences, medicine, allied health professions and beyond, to the benefit of students, faculty, researchers and patients alike. The aim of this book is to provide easy access to the wide range of tools and technologies that can be used in the age of computers to improve how we visualise and interact with resources for education and understanding related to the human body, with a particular focus in this volume on anatomy and clinical applications using immersive technologies and serious games.

Chapters 1–6 Anatomical Visualisation and Education The first six chapters have an anatomical focus and examine a wide range of digital technologies and how they can be used, applied and adapted to enhance anatomical education. The first of these chapters examines the history and development of ultrasound and its applications in an educational setting, as well as point-of-care ultrasound (POCUS) at the bedside for more rapid diagnosis of clinical conditions as they present. The second chapter presents an innovative way to combine several software packages to create a fully interactive educational and training package for an area that can be challenging to learn about from the macro to the micro level – the circadian rhythm – along with a transferable workflow methodology that could be used in many fields where the aim is to develop interactive training packages. The third chapter reviews tools and technologies that can be used to enhance off-campus learning, up to the current range of visualisation technologies such as virtual, augmented and mixed reality systems. The fourth chapter discusses how scanning methodologies, such as CT imagery, can be used to make stereoscopic models, highlighting case examples of how to develop anatomical visualisation using stereopsis. The fifth chapter describes a novel way to reconstruct 3D anatomy from imaging datasets and how to build statistical 3D shape models (SSMs); this is also examined in a clinical context, showing how it can lay the foundation for advanced diagnostic disease scoring. The sixth chapter examines how to apply interactive visualisations of atlas information in the creation of a virtual resource, and recent advances in providing next-generation interfaces.


Chapters 7–8 Clinical Applications Using Serious Games and Visual Data Analysis The seventh and eighth chapters discuss clinically relevant applications: the seventh uses serious games in neurofeedback for mental health education, while the eighth takes the condition of irritable bowel syndrome and utilises interactive visual data analysis to generate an environment for hypothesis creation and reasoning based on scientific information.

Chapters 9–10 Immersive Technologies The final two chapters examine very current immersive technologies – both virtual and augmented reality – in medical and biomedical visualisation. The final chapter looks closely at the application of virtual reality in the dementia sector, an area of increasing importance with our improved life expectancy.


Contents

The Use of Ultrasound in Educational Settings: What Should We Consider When Implementing this Technique for Visualisation of Anatomical Structures?......................................... 1 Ourania Varsou Interactive 3D Visualisation of the Mammalian Circadian System.................................................................................... 13 Allison Sugden, Maria Gardani, Brian Loranger, and Paul M. Rea Utilising Anatomical and Physiological Visualisations to Enhance the Face-to-Face Student Learning Experience in Biomedical Sciences and Medicine.................................................... 41 Christian Moro and Sue Gregory Anatomy Visualizations Using Stereopsis: Current Methodologies in Developing Stereoscopic Virtual Models in Anatomical Education............................................. 49 Dongmei Cui, Jian Chen, Edgar Meyer, and Gongchao Yang Statistical Shape Models: Understanding and Mastering Variation in Anatomy.............................................................................. 67 Felix Ambellan, Hans Lamecker, Christoph von Tycowicz, and Stefan Zachow Towards Advanced Interactive Visualization for Virtual Atlases................................................................................... 85 Noeska Smit and Stefan Bruckner An Experiential Learning-Based Approach to Neurofeedback Visualisation in Serious Games............................... 97 Ryan Murdoch Visual Analysis for Understanding Irritable Bowel Syndrome.......... 111 Daniel Jönsson, Albin Bergström, Isac Algström, Rozalyn Simon, Maria Engström, Susanna Walter, and Ingrid Hotz


Immersive Technology and Medical Visualisation: A Users Guide.......................................................................................... 123 Neil McDonnell A Showcase of Medical, Therapeutic and Pastime Uses of Virtual Reality (VR) and How (VR) Is Impacting the Dementia Sector................................................................................ 135 Suzanne Lee


About the Editor

Paul M. Rea is a medically qualified clinical anatomist, a senior lecturer and a licensed teacher of anatomy. He has an MSc (by research) in craniofacial anatomy/surgery, a PhD in neuroscience, a Diploma in Forensic Medical Science (DipFMS) and an MEd with Merit (Learning and Teaching in Higher Education), with his dissertation examining digital technologies in anatomy. He is an elected fellow of the Royal Society for the Encouragement of Arts, Manufactures and Commerce (FRSA), an elected fellow of the Royal Society of Biology (FRSB), a senior fellow of the Higher Education Academy, a professional member of the Institute of Medical Illustrators (MIMI) and a fully registered medical illustrator with the Academy for Healthcare Science. Paul has published widely and presented at many national and international meetings, including invited talks. He sits on the Executive Editorial Committee for the Journal of Visual Communication in Medicine, is an associate editor for the European Journal of Anatomy, and reviews for 24 different journals/publishers. He is the public engagement and outreach lead for anatomy, coordinating collaborative projects with the Glasgow Science Centre, the NHS and the Royal College of Physicians and Surgeons of Glasgow. He is also a STEM ambassador and has visited numerous schools to undertake outreach work. His research involves a long-standing strategic partnership with the School of Simulation and Visualisation, The Glasgow School of Art. This has led to multimillion-pound investment in creating world-leading 3D digital datasets to be used in undergraduate and postgraduate teaching to enhance learning and assessment. This successful collaboration resulted in the creation of the world's first taught MSc in Medical Visualisation and Human Anatomy, combining anatomy and digital technologies, accredited by the Institute of Medical Illustrators. This degree, now in its eighth year, has graduated almost 100 people and created college-wide, industry, multi-institutional and NHS research-linked projects for students. Furthermore, he is the pathway leader for this degree.


Contributors

Isac  Algström Department of Science and Technology, Linköping University, Linköping, Sweden Felix Ambellan  Zuse Institute Berlin, Berlin, Germany Albin  Bergström Department of Science and Technology, Linköping University, Linköping, Sweden Stefan Bruckner  Department of Informatics, University of Bergen, Bergen, Norway Jian  Chen  Department of Anesthesiology, Division of Pain Management, University of Mississippi Medical Center, Jackson, MS, USA Dongmei  Cui Department of Neurobiology and Anatomical Sciences, Division of Clinical Anatomy, University of Mississippi Medical Center, Jackson, MS, USA Maria Engström  Department of Medical and Health Sciences, Linköping University, Linköping, Sweden Maria Gardani  School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow, UK Sue Gregory  University of New England, Armidale, NSW, Australia Ingrid Hotz  Department of Science and Technology, Linköping University, Linköping, Sweden Daniel  Jönsson Department of Science and Technology, Linköping University, Linköping, Sweden Hans Lamecker  Zuse Institute Berlin, Berlin, Germany 1000 Shapes GmbH, Berlin, Germany Suzanne Lee  Digital Innovation and Strategy/Pivotal Reality, Glasgow, UK Brian  Loranger School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK Neil  McDonnell School of Humanities, College of Arts, University of Glasgow, Glasgow, UK


Edgar  Meyer Department of Neurobiology and Anatomical Sciences, Division of Clinical Anatomy, University of Mississippi Medical Center, Jackson, MS, USA Christian Moro  Faculty of Health Sciences and Medicine, Bond University, Gold Coast, QLD, Australia Ryan Murdoch  Graduate of Glasgow School of Art’s School of Simulation and Visualization, The Glasgow School of Art, Glasgow, UK Paul M. Rea  Anatomy Facility, Thomson Building, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK Rozalyn  Simon Department of Medical and Health Sciences, Linköping University, Linköping, Sweden Noeska  Smit Department of Informatics, University of Bergen, Bergen, Norway Mohn Medical Imaging and Visualization Centre, Haukeland University Hospital, Bergen, Norway Allison  Sugden Anatomy Facility, Thomson Building, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK Christoph von Tycowicz  Zuse Institute Berlin, Berlin, Germany Ourania Varsou  University of Glasgow, School of Life Sciences, Anatomy Facility, Glasgow, UK Susanna  Walter Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden Gongchao  Yang Department of Neurobiology and Anatomical Sciences, Division of Clinical Anatomy, University of Mississippi Medical Center, Jackson, MS, USA Academic Information Services, University of Mississippi Medical Center, Jackson, MS, USA Stefan Zachow  Zuse Institute Berlin, Berlin, Germany 1000 Shapes GmbH, Berlin, Germany


The Use of Ultrasound in Educational Settings: What Should We Consider When Implementing this Technique for Visualisation of Anatomical Structures? Ourania Varsou

Abstract

Ultrasound is a well-established medical imaging technique, with pioneering work conducted by Professor Ian Donald and his colleagues at the University of Glasgow from the mid-1950s onwards in introducing it as a diagnostic tool in the field of obstetrics and gynaecology. Since then, ultrasound has been used extensively in clinical and research settings. Few imaging techniques have undergone such a fast and thriving evolution since their development. Nowadays, diagnostic ultrasound benefits from two-dimensional (2D), three-dimensional (3D) and four-dimensional (4D) imaging, and a variety of Doppler modes, with technologically advanced transducers (probes) producing images of high anatomical fidelity. In the future, there may even be a place for ultrasound in molecular imaging, allowing for visualisation at the microscale. Ultrasound is characterised by real-time non-invasive scanning, relative ease of administration, and lack of ionising radiation. All of these features make ultrasound an appealing option in educational settings for learning topographic anatomy and potentially enhancing future clinical practice for vocational learners. Sophisticated, but relatively inexpensive, portable handheld devices have also contributed to point-of-care ultrasound (POCUS) becoming the norm for bedside and pre-hospital scanning. It has been argued that ultrasound will become the next stethoscope for healthcare professionals. For this to become a reality, however, training is required to increase familiarity with knobology, the correct use of the machine and transducers, and accurate interpretation of anatomy followed by identification of pathologies. The above require the incorporation of ultrasound teaching in undergraduate curricula, outwith the realm of opportunistic bedside learning, accompanied by consideration of ethical topics, such as the management of incidental findings, and careful evaluation of its pedagogical impact cross-sectionally and longitudinally.

Keywords  Ultrasound · Undergraduate curriculum · Medicine · Incidental findings · Point-of-care ultrasound (POCUS)

O. Varsou (*)  University of Glasgow, School of Life Sciences, Anatomy Facility, Glasgow, UK



1  Introduction

Medical ultrasound, which is also referred to as ultrasonography or sonography, is a well-established imaging technique that has been used extensively for several decades in clinical practice, for either diagnostic or therapeutic purposes, and in research settings.

Diagnostic ultrasound utilises short pulses of sound waves at high frequencies – typically 2 to 15 MHz, inaudible to the human ear – for the visualisation of internal anatomical structures (Schellpfeffer 2013), hence aiding the diagnostic process or guiding invasive medical procedures. Interestingly, the origins of diagnostic ultrasound stem from maritime history with SONAR (sound, navigation, and ranging), which was conceived by the French physicist Paul Langevin for locating underwater objects such as submarines (Kurjak 2000; McNay and Fleming 1999). The basic physical principles underpinning diagnostic ultrasound are similar to animal echolocation (Fig. 1), also known as bio-sonar, used for instance by bats, which emit sound waves that bounce off the surfaces of nearby objects and then detect the reflected echoes (Jones 2005). This process allows their brain to build a multidimensional acoustic image or to focus on distinctive features of prey, such as movement (Simmons et al. 1979). In a similar fashion, high-frequency sound waves travel through the human body, and at different tissue interfaces (e.g. soft tissue and fluid) some of these waves are reflected back to the transducer, producing different images (Schellpfeffer 2013).

Fig. 1  Simplified explanation of animal echolocation
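The same echo principle is what lets a scanner place each reflector at the correct depth. As background – this is standard pulse-echo ranging rather than something stated in the chapter, and it assumes the conventional calibration speed of sound in soft tissue of roughly 1540 m/s – the depth d of an interface follows from the round-trip time t of its echo:

$$ d = \frac{c\,t}{2}, \qquad c \approx 1540~\text{m/s} $$

The factor of two accounts for the pulse travelling to the interface and back; an echo returning after about 65 µs, for example, places a reflector at roughly 5 cm.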

Specifically, electrical signals are converted into the emitted sound waves as they pass through the synthetic piezoelectric crystals of an ultrasound transducer; conversely, the reflected sound waves are converted back into electrical signals by the piezoelectric crystals, and these are then used to generate a visual image (Schellpfeffer 2013). This is known as the piezoelectric effect in ultrasound (Fig. 2).

Professor Ian Donald and his colleagues at the University of Glasgow, including the engineer Tom Brown and Dr John MacVicar, conducted pioneering work from the mid-1950s onwards in establishing ultrasound as an essential diagnostic tool in clinical medicine. Their work specifically discussed applications in the field of obstetrics and gynaecology, with a focus on studying pregnancy and foetal development, and on establishing differential diagnoses for women presenting with a grossly distended abdomen, distinguishing benign from malignant conditions (Kurjak 2000; McNay and Fleming 1999). Diagnostic ultrasound is one of the few clinical imaging techniques to have undergone such a fast and thriving evolution in a relatively short period since its development, especially in relation to non-invasive prenatal diagnosis, making it an almost irreplaceable tool in obstetrics (Kurjak 2000). More recently, there has also been a surge in the use of ultrasound in pedagogical settings as a teaching adjunct for learning topographic anatomy, with a focus on exploring what constitutes non-pathological human form and on understanding the functional relationships of surrounding anatomical structures. This use of educational ultrasound is the main focus of this chapter.

2  Transducers, Scanning Techniques and Modes

Diagnostic and educational ultrasound benefit from 2D, 3D and 4D imaging (i.e. the addition of time to the dimensionality of 3D scans, and hence visualisation of movement), and several different Doppler modes, with a range of technologically advanced transducers (e.g. linear, curvilinear, phased array and intracavity) capable of producing images of excellent quality with high anatomical fidelity. For instance, linear array transducers use high frequencies to produce a rectangular image within a narrow field of view (i.e. a true image of the area scanned) characterised by good resolution.

Fig. 2  Simplified explanation of the piezoelectric effect in diagnostic ultrasound

Linear transducers tend to be combined with low depth settings, making them best suited to visualising superficial structures; deeper structures (i.e. beyond about 5 cm) in the abdomen and pelvis are typically examined with lower-frequency, wider-footprint curvilinear transducers (Soni et al. 2014). It is worth noting that in ultrasound the frequency of the transducer is directly proportional to the resolution, with the trade-off that frequency is inversely proportional to depth of penetration (Schellpfeffer 2013). When starting to incorporate ultrasound teaching into a curriculum, choosing the most suitable transducer can be daunting, and also expensive if several different types are to be used. A linear transducer may be a good starting point in educational settings due to its ability to produce direct rectangular images of the areas scanned, allowing learners to easily compare such 2D images with 2D anatomical diagrams. Even though linear transducers are typically chosen in clinical practice for visualising superficial structures, as diagnosis is not the primary goal in undergraduate curricula they can still be used for anatomy learning, with the depth of the image manually adjusted up to a certain level. On the other hand, novice learners may find the image of linear transducers restrictive in terms of field of view, which may in turn complicate their understanding of anatomical structures; hence there may be a preference for the flexibility of the much wider footprint of curvilinear transducers. This is an area requiring further pedagogical research to recognise and tackle potential barriers, such as identifying and using the most optimal transducer, for learning sonographic anatomy.
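To make the frequency trade-off concrete – a back-of-the-envelope illustration using standard values, not figures from the chapter – axial resolution is limited by the wavelength λ = c/f, while attenuation in soft tissue grows roughly in proportion to frequency (a commonly quoted figure is on the order of 0.5 dB per cm per MHz):

$$ \lambda = \frac{c}{f}, \qquad \lambda_{10\,\mathrm{MHz}} \approx \frac{1540~\text{m/s}}{10^{7}~\text{Hz}} \approx 0.15~\text{mm}, \qquad \lambda_{3\,\mathrm{MHz}} \approx 0.51~\text{mm} $$

A 10 MHz linear transducer therefore resolves finer detail but loses signal quickly with depth, whereas a 3 MHz curvilinear transducer trades resolution for the penetration needed in the abdomen and pelvis.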

Each transducer has an orientation marker (Fig. 3) that typically corresponds to the left side of the ultrasound screen – with the exception of echocardiography, where images are flipped and the indicator is placed on the right side (Moore and Copel 2011) – helping operators orient themselves while scanning. It is recommended that the orientation marker faces towards the right side of the person being scanned (i.e. the operator's left side when scanning from the right-hand side of the bed) for the horizontal plane (Fig. 3a), or towards their head for the longitudinal plane (Fig. 3b). This conventional approach can be adapted, however, depending on the area being scanned (e.g. longitudinal visualisation of the kidneys requires alignment with their long axes, which lie in an oblique coronal plane). Scanning in the horizontal plane produces cross-sectional images, as if visualising structures from the end of the bed. Superficial structures are at the top of the ultrasound screen, closest to the transducer, with deeper structures further down. The area of interest should be maintained in the centre of the ultrasound screen (Soni et al. 2014). To achieve this, it is best to hold the transducer like a pen or chopsticks, predominantly with the first three fingers, stabilised with the remaining two fingers resting on the body of the person being scanned. For novice learners, it is easier to start with visualisation of anatomical structures in the horizontal or longitudinal planes, accompanied by anatomical illustrations, enabling them to develop a better understanding of the anatomy. The operator of the scanner and the tutor, if different, should be familiar with the knobology of the machines and well versed in the functionality of the transducers, enabling smooth delivery of teaching (Griksaitis et al. 2014).

Two-dimensional imaging, also known as B mode (i.e. brightness mode), is the most commonly used type of ultrasound scanning; it continuously captures moving structures in real time, producing a 2D cross-sectional image (Fig. 4) (Schellpfeffer 2013). Anatomy can be visualised in the horizontal, longitudinal, coronal and oblique planes, enabling a multitude of views (Moore and Copel 2011) while accommodating the needs of novice, intermediate and advanced learners. B mode can also be used to assess physiological changes, such as dilation of the internal jugular vein during the Valsalva manoeuvre (Fig. 5), allowing better delineation of this blood vessel and hence aiding its catheterisation. As B mode is the most intuitive mode, it also tends to be the best for novice learners who are still familiarising themselves with the knobology of ultrasound machines and human anatomy.


Fig. 3  Conventional approach for scanning in the horizontal (a) and longitudinal (b) planes

In Doppler imaging, the movement of human tissue (e.g. blood flow within a vessel) leads to a change in the observed frequency of the reflected sound waves (Soni et al. 2014). To appreciate the Doppler effect, think of how the pitch of the siren from a moving ambulance rises and intensifies as it approaches a parked car, then falls and eventually fades away as the ambulance passes and moves away. In the same way, movement towards an ultrasound transducer produces reflected sound waves of a higher frequency, and movement away leads to reflected sound waves of a lower frequency (Schellpfeffer 2013; Soni et al. 2014). This frequency change between the emitted and reflected sound waves is known as the Doppler shift (Soni et al. 2014) and can be transformed into a sound and/or a waveform, with additional colour mapping in which brightness corresponds to velocity (Schellpfeffer 2013).
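For readers who want the quantitative version – this is standard ultrasound physics rather than something given in the chapter – the Doppler shift f_D produced by a reflector such as blood moving at speed v, at an angle θ to the beam, with transmitted frequency f₀ and speed of sound c, is approximately:

$$ f_D = \frac{2 f_0 v \cos\theta}{c} $$

The cos θ term explains why flow running perpendicular to the beam yields little or no Doppler signal, and why operators angle the transducer slightly when interrogating a vessel.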


Fig. 4  Two-dimensional anatomical illustration (a) and B mode ultrasound image (b) of the anterolateral neck. CCA common carotid artery, IJV internal jugular vein

Fig. 5  Two-dimensional anatomical illustration (a) and B mode ultrasound image (b) of the anterolateral neck while performing the Valsalva manoeuvre. CCA common carotid artery, IJV internal jugular vein

Duplex and triplex scanning refer to the combination of 2D ultrasound with one or two types of Doppler imaging, respectively (Schellpfeffer 2013). In educational settings, assessing flow and distinguishing between a large neighbouring artery (e.g. common carotid artery) and vein (e.g. internal jugular vein) is an excellent way of demonstrating how Doppler imaging works.

In this example, red and blue (Fig. 6a, b, c), or a waveform above and below the baseline (Fig. 6c), indicate blood flow towards (i.e. artery) and away (i.e. vein) from the ultrasound transducer, respectively. Waveform and colour can, however, be customised and inverted by the operator; it is therefore important to check, prior to starting a scanning session, that the conventional settings are in place to avoid confusion.


Fig. 6  Doppler ultrasound image of the anterolateral neck with colour flow in duplex horizontal (a) and longitudinal scanning (b), and colour flow along with waveform in triplex longitudinal scanning (c). CCA common carotid artery, IJV internal jugular vein

Ultrasound allows visualisation of almost all regions of the human body and hence can easily be tailored to the needs of different vocational or science curricula and the intended learning outcomes of anatomy teaching sessions. One exception to bear in mind, in terms of reduced visualisation and hence potentially compromised anatomy, is solid structures such as human bone, through which the sound waves do not pass (i.e. all emitted sound waves are fully reflected), resulting in an acoustic shadow distal to the structure (Ihnatsenka and Boezaart 2010). For instance, the acoustic shadow of a rib may compromise visualisation of the heart while scanning in the horizontal plane. Such artefacts do not preclude the use of ultrasound for regions like the one described above. Instead, it is worth anticipating them as part of the lesson plan, with the aim of either allowing extra time for scanning or including additional scanning views. It would also be sensible to expose learners, especially those on vocational degrees, to ultrasound artefacts from an early stage of their training, with the aim of potentially enhancing their future scanning and interpretation skills.
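The shadowing behind bone has a simple physical basis, worth stating as background (standard acoustics, not taken from the chapter itself): the fraction of incident intensity reflected at an interface between tissues of acoustic impedances Z₁ and Z₂ (where Z = ρc, density times sound speed) is

$$ R = \left(\frac{Z_2 - Z_1}{Z_2 + Z_1}\right)^{2} $$

Most soft-tissue boundaries have closely matched impedances, so only a small fraction of the energy reflects and the rest continues deeper – which is what makes imaging possible. At soft tissue–bone or tissue–air interfaces the mismatch is large, so almost all of the energy is reflected and everything distal to the interface appears as shadow.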


3  Educational Context

Point-of-care ultrasound is increasingly becoming standard practice for bedside and even pre-hospital scanning, performed in real time by non-specialist but trained healthcare providers utilising portable ultrasound devices. The aim of POCUS is to perform a goal-directed examination aiding the diagnostic process, providing guidance for an invasive procedure and hence increasing safety (Moore and Copel 2011; Soni et al. 2014), or to act as part of a screening programme, with widening access being a topic of current research aiming to determine its sensitivity in relation to different conditions (Moore and Copel 2011). This rapid expansion in POCUS is mainly attributed to the evolution of ultrasound from large, cumbersome machines into easily transportable handheld devices, the increased quality of scans characterised by high anatomical fidelity, and the reduction in associated costs that has made the devices affordable and therefore accessible (Moore and Copel 2011). All these advancements have led to the notion that ultrasound will evolve over time into the next stethoscope (Moore and Copel 2011; Wittenberg 2014), and hence there has been a growing demand for the provision of ultrasonographic training at the undergraduate level, under the argument that early exposure would be educationally beneficial for future healthcare professionals.

For the purpose of visualising anatomical structures and developing a better understanding of the underlying physiology, ultrasound is effective in improving knowledge and increasing confidence amongst learners, who also perceive its addition to teaching favourably (Tarique et al. 2018). However, it is important to consider ultrasound as an adjunct to be used alongside traditional approaches for teaching topographic anatomy, and not as the only means of learning structures and their associated functions. Recently, it has been argued that hands-on ultrasound training is associated with faster and more accurate image interpretation, most likely due to active facilitation of pattern recognition (Knudsen et al. 2018), with teaching in small groups being a preferred option for learners as it offers them more time with the machines (Patel et al. 2017). Regarding procedural skills and potentially aiding future clinical practice, there is a need to better delineate the precise impact of early educational ultrasound delivered at the undergraduate level by collecting primary research data, especially prospectively in relation to patient outcomes (Feilchenfeld et al. 2017).

The educational logistics of integrating ultrasound into undergraduate curricula is another aspect worth considering, as there is considerable variability amongst different institutions (Tarique et al. 2018). Even though there is an argument that curricula differ moderately from each other, having their own unique features, when it comes to vocational degrees there is a minimum level of knowledge and a standardised skill set that learners have to develop. Instead of discussing the different approaches to how ultrasound has been implemented, it is perhaps more sensible to highlight that it would be prudent to form a consensus panel of experts that could provide regular updates on best practice guidelines for teaching this skill, and also on how it could be assessed to demonstrate the competency levels necessary for health-related curricula (Dinh et al. 2016; Tarique et al. 2018). It is also essential that the integration of ultrasound into the curriculum is carefully planned and implemented, allowing for maximum learning (Griksaitis et al. 2014).

4  Ethical Considerations

When incorporating ultrasound into either vocational or science curricula, it is important to consider how incidental findings will be managed and streamlined if live volunteers are invited for scanning for demonstration purposes (Griksaitis et al. 2014). In educational settings, incidental findings could be defined as unexpected findings with potential health implications – including false positives, most likely resulting from ultrasound artefacts – identified in the person being scanned. A standardised mechanism for their management is essential to mitigate undue distress (Siegel-Richman and Kendall 2017) for everyone involved – the educationalist, the ultrasound scanner operator if different, the person being scanned, and the learners – and could include a variety of the following recommendations: (i) consent processes aligned with the teaching policies of each institution; (ii) not drawing attention to a potential incidental finding during the actual teaching session, and maintaining confidentiality at all times; (iii) preliminary scanning sessions taking place prior to teaching, allowing for identification of potential incidental findings in a private setting; (iv) standardised one-to-one debriefing sessions following discovery of potential incidental findings; and (v) follow-up protocols aligned with the insurance policies of each institution. It is also worth considering who will be scanned: volunteers from the student body involving peer-examination, which is common practice in medicine; simulated patients with no known pathologies; volunteer patients with known pathologies; or volunteers from the general public recruited in a similar fashion to research participants are all viable options. In all of the above, it is important to keep in mind that this should be a voluntary process, and students should be actively given the opportunity to decline partaking as a live model for demonstration purposes (Siegel-Richman and Kendall 2017). It is also imperative to appreciate that the role of the educationalist is not to diagnose any potential incidental findings; instead, their role is to offer guidance for follow-up without providing false reassurance. The existing literature on the potential biological effects – thermal and mechanical – of diagnostic ultrasound, along with the relevant safety guidelines, is a helpful resource that can also be consulted within an educational context.

Potential bio-effects can be mitigated by implementing safety protocols that could include the following: (i) regular monitoring of the mechanical index (MI) and thermal index (TI) of the machines by a designated individual, accompanied by fixed tamper-proof settings; (ii) limiting exposure duration by sensibly applying the ALARA (as low as reasonably achievable) principle; (iii) avoiding scanning of areas that could be deemed high risk (e.g. the eye); (iv) preventing unsupervised use of the machines; and (v) providing initial and ongoing training to all scanner operators and tutors. In all cases, it would be sensible to discuss such safety protocols with the corresponding risk advisor of each institution, along with local experts from the fields of medical physics, bio-ethics and pedagogy, while taking into consideration existing risk assessment policies. An alternative approach, which would eliminate the issues surrounding potential incidental findings and bio-effects, is the substitution of live volunteers with ultrasound training phantoms. These, however, have drawbacks, including purchasing and maintenance costs, a fixed lifespan, and a lack of the anatomical variability that is an important skill to master in sonographic pattern recognition. Inexpensive and easy-to-make gelatin-based phantoms have been discussed in the published literature, especially for learning how to perform ultrasound-guided invasive procedures (Richardson et al. 2015), and could be used to overcome some of the above issues. Another option is the scanning of preferably soft-fixed or unfixed cadavers (i.e. fresh frozen), accompanied by dissection to verify anatomy, with such approaches having been implemented in regional anaesthesia (Sawhney et al. 2017). Overall, this is a thriving area for future research exploration, with the unique element of being at the interface of sonographic and educational anatomy. Consensus recommendations in relation to educational ultrasound are needed on safety protocols and the management of potential incidental findings. Drawing experience from existing guidelines on human research and liaising with experts from different fields will allow for a truly multidisciplinary and multifaceted perspective on such topics (Table 1).
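For context on point (i): the on-screen mechanical index is defined, in its usual regulatory form, from the derated peak rarefactional pressure p_r (in MPa) and the transducer centre frequency f_c (in MHz) – a standard definition included here as background, not taken from the chapter:

$$ \mathrm{MI} = \frac{p_r}{\sqrt{f_c}} $$

The thermal index is, analogously, the ratio of the acoustic power in use to the power estimated to raise tissue temperature by 1 °C. Keeping both indices low and exposure times short is the practical expression of the ALARA principle in a teaching session.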


Table 1  Summary of strengths and limitations of educational ultrasound and important considerations for curriculum implementation

Strengths: Invaluable teaching adjunct for learning topographic anatomy; Real-time non-invasive scanning; Variety of readily available scanning modes (2D, 3D, 4D, and Doppler); Relative ease of administration; Lack of ionising radiation

Limitations: Training trainers at a proficient level to teach ultrasound; Availability of ultrasound for machines borrowed from clinical settings; Set-up and on-going maintenance costs for dedicated machines; Potential bio-effects (i.e. mechanical and thermal); Availability of teaching time in curricula

Considerations: Live volunteers versus soft-fixed/unfixed cadavers versus ultrasound phantoms; Incidental findings policy; ALARA safety protocols; Secure storage of machines; Point(s) of integration in the curriculum; Hands-on versus simulated teaching; Large versus small group teaching

5  Summary

Educational ultrasound is definitely on the rise, and more higher education institutions across the globe are formally incorporating the teaching of ultrasonography into their undergraduate curricula with the aim of enhancing scientific and clinical knowledge. Although ultrasound is an invaluable teaching adjunct for learning topographic anatomy, it is important to continue collecting high-quality primary research data on its exact impact in relation to future clinical practice, which requires longitudinal studies following up learners prospectively over time. It is also imperative to carefully consider the logistical and ethical aspects that accompany ultrasound when integrating this technique within educational courses. Undoubtedly, this is a rapidly expanding area and one to keep abreast of as an educationalist.

Acknowledgements  Dr. Ourania Varsou would like to thank Dr. Filip Zmuda for producing the excellent illustrations that accompany the text of this chapter, Mr. Ian Gordon for taking the photos of Fig. 3, and Miss Katie Turnbull for assisting with the literature search.

References

Dinh VA, Lakoff D, Hess J, Bahner DP, Hoppmann R, Blaivas M, Khandelwal S (2016) Medical student core clinical ultrasound milestones: a consensus among directors in the United States. J Ultrasound Med 35(2):421–434
Feilchenfeld Z, Dornan T, Whitehead C, Kuper A (2017) Ultrasound in undergraduate medical education: a systematic and critical review. Med Educ 51(4):366–378
Griksaitis MJ, Scott MP, Finn GM (2014) Twelve tips for teaching with ultrasound in the undergraduate curriculum. Med Teach 36(1):19–24
Ihnatsenka B, Boezaart AP (2010) Ultrasound: basic understanding and learning the language. Int J Shoulder Surg 4(3):55–62
Jones G (2005) Echolocation. Curr Biol 15(13):R484–R488
Knudsen L, Nawrotzki R, Schmiedl A, Mühlfeld C, Kruschinski C, Ochs M (2018) Hands-on or no hands-on training in ultrasound imaging: a randomized trial to evaluate learning outcomes and speed of recall of topographic anatomy. Anat Sci Educ 11(6):575–591
Kurjak A (2000) Ultrasound scanning – Prof. Ian Donald (1910–1987). Eur J Obstet Gynecol Reprod Biol 90(2):187–189
McNay MB, Fleming JE (1999) Forty years of obstetric ultrasound 1957–1997: from A-scope to three dimensions. Ultrasound Med Biol 25(1):3–56
Moore CL, Copel JA (2011) Point-of-care ultrasonography. N Engl J Med 364(8):749–757
Patel SG, Benninger B, Mirjalili SA (2017) Integrating ultrasound into modern medical curricula. Clin Anat 30(4):452–460
Richardson C, Bernard S, Dinh VA (2015) A cost-effective, gelatin-based phantom model for learning ultrasound-guided fine-needle aspiration procedures of the head and neck. J Ultrasound Med 34(8):1479–1484
Sawhney C, Lalwani S, Ray BR, Sinha S, Kumar A (2017) Benefits and pitfalls of cadavers as learning tool for ultrasound-guided regional anesthesia. Anesth Essays Res 11(1):3–6

Schellpfeffer MA (2013) Ultrasound imaging in research and clinical medicine. Birth Defects Res C Embryo Today 99(2):83–92
Siegel-Richman Y, Kendall JL (2017) Incidental findings in student ultrasound models: implications for instructors. J Ultrasound Med 36(8):1739–1743
Simmons JA, Fenton MB, O'Farrell MJ (1979) Echolocation and pursuit of prey by bats. Science 203(4375):16–21


Soni NJ, Arntfield R, Kory P (2014) Point of care ultrasound e-book. Elsevier Health Sciences
Tarique U, Tang B, Singh M, Kulasegaram KM, Ailon J (2018) Ultrasound curricula in undergraduate medical education: a scoping review. J Ultrasound Med 37(1):69–82
Wittenberg M (2014) Will ultrasound scanners replace the stethoscope? BMJ 348:g3463

Interactive 3D Visualisation of the Mammalian Circadian System Allison Sugden, Maria Gardani, Brian Loranger, and Paul M. Rea

Abstract

The daily fluctuations that govern an organism's physiology and behaviour are referred to as the circadian rhythm. Dramatic changes in our internal or external environment can affect these fluctuations by causing them to shift abnormally. Chronic readjustment in circadian rhythmicity can lead to health defects that extend throughout the organism. These patterns have been known to affect nearly every facet of our health, from our mental state to our physiological wellbeing. Thus, it is important for healthcare professionals from a range of backgrounds to comprehend these connections early on in their education and incorporate this knowledge into patient guidance and treatment. Traditionally, the teaching of the circadian rhythm is undertaken by didactic teaching, 2-dimensional (2D) diagrams, and biochemical processes shown from a fixed perspective. There has been a surge in technologies used to develop educational products, but the field of the circadian rhythm has been lagging behind. Therefore, the purpose of this study was to create an interactive learning application for the end-stage user, incorporating industry-standard and widely available software packages. Using a mixture of 3DS Max, Photoshop, MeshLab, Mudbox, Unity and Pro Tools, we created a fully interactive package incorporating educational resources and an interactive self-test quiz section. Here, we demonstrate a simple workflow methodology that can be used in the creation of a fully interactive learning application for the circadian rhythm and its wider effects on the human body. With a small-scale study based on feedback demonstrating positive results, and with limited resources in this field, there is enormous potential for this to be applied in the educational and wider public engagement environment related to the circadian rhythm. Indeed, this also provides an excellent framework and platform for the development of educational resources for any field that needs modernising and updating with modern technological advances, engaging a wider audience.

Keywords  3D model · Animation · Circadian rhythm · Interactive learning application · Molecular science · Suprachiasmatic nucleus

A. Sugden  Anatomy Facility, Thomson Building, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK; School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
M. Gardani  School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow, UK
B. Loranger  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
P. M. Rea (*)  Anatomy Facility, Thomson Building, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK. e-mail: [email protected]


1  Introduction

Molecular life science has made great strides throughout the past few decades due to breakthroughs in biotechnology and scientific discovery. Yet teaching this material and engaging students in its complexities, including those of the circadian rhythm, remains a challenge. The circadian rhythm, literally meaning "approximately a day", refers to the endogenous biological functions that synchronise the body with the 24-h environmental night/day cycles. The circadian mechanism exists at a cellular level and can be synchronised by environmental cues, a process referred to as 'entrainment' (Albrecht 2012). It consists of a complex interplay of photic (Paul et al. 2009) and non-photic stimuli, or zeitgebers, enabling a sense of change in the time of day, anticipation of seasonal changes, and behaviours that aid an organism's survival, such as migration, hibernation and reproduction (Foster and Kreitzman 2004). In mammals, light is perceived only through the retina, but photosensitivity is not limited to rods and cones; it extends to intrinsically photoreceptive retinal ganglion cells (ipRGCs; Berson et al. 2002; Hattar et al. 2002; Foster and Kreitzman 2004; Paul et al. 2009; Pickard and Sollars 2012). Indeed, the circadian cycle is complex and involves a variety of anatomical structures, such as the optic nerve and chiasm (Rea 2014), the suprachiasmatic nucleus (SCN; Kamer and Merrow 2013) and the retinohypothalamic tract (Albrecht 2012). As well as the gross anatomical structures, a series of events occurs at the molecular, cellular and hormonal level in the circadian cycle, along with external, or non-photic, processes such as body temperature changes, food availability and activity (Sisson 2010; Albrecht 2012; Mohawk et al. 2012; Kamer and Merrow 2013). Indeed, in modern-day living, social interactions, travel, night work and varying degrees of exercise can also impact the routine of the circadian rhythm (Eastman et al. 1995; Atkinson et al. 2007; Fuchikawa et al. 2016). This multifactorial process – spanning molecular and cellular events, gross anatomy and the wider implications for body organs and systems, all influenced by our lifestyle – is therefore a highly complex topic.

According to Cimer (2012), two of the main reasons students find difficulty in learning biological concepts are the abstract subject matter and the teaching environments provided. If students are not happy with the course or are not fully engaged by the material, they will lose interest in the subject. The abstract nature, complexity and wide diversity of molecular biology constitute one of the primary reasons why students continue to find this discipline difficult (Tibell and Rundgren 2010). Cimer (2012) demonstrated that topics involving endocrine systems, hormones, cell division, genes and chromosomes were some of the most difficult for students to grasp. Difficulties have also been reported in learning macromolecular structures, cellular coupled reactions, biochemical pathways and cellular signalling, all of which are present in circadian biology (Tibell and Rundgren 2010). The second argument made by students is the importance of a student-centred learning environment. Given the high demand for university-level education, there has been a drastic expansion in class sizes. According to Lipinge (2013), it is common to have classes consisting of 100 students or more. Yet large classrooms may not provide the best learning environment for students and are historically lecture-oriented, allowing for minimal student involvement and interaction (Lipinge 2013). Thus, implementing alternative modes of teaching, such as interactive learning applications, could improve student engagement and facilitate further learning without the need for more classroom instruction. This modality would also allow students to progress through the course material at their own pace.


Constructing an accurate educational visualisation resource of the circadian mechanism would emphasise the salient characteristics of the anatomy involved and also illustrate the relationships within the biological phenomenon. Indeed, a number of digital tools and applications have been developed to enhance student understanding of biological processes (Jastrow and Hollinderbaumer 2004; Ma et al. 2012; Welsh et al. 2014; Manson et al. 2015; Rea 2016; Raffan et al. 2017) and have been implicated in enhancing learning and understanding (Vavra et al. 2011). Therefore, we set out to create a workflow methodology for designing and developing an educational learning platform that would help elucidate and simplify the topic, and engage the user in the processes of mammalian circadian mechanisms. This would enable the user to access educational and training materials on complex mechanisms in a visually appealing format. It would also provide a "recipe" to follow in the creation of digital resources, not just for the circadian rhythm and its applications in the human body, but for a number of educational areas known to be challenging to teach.

Table 1  List of models from the application that were obtained from BodyParts3D, and the scenes where they are located

2

Materials

3.1

2.1

Model Acquisition

When developing an electronic application it is important to conceptualize the design and interface to fit the user’s preference. Two types of graphic organisers, storyboarding and paper prototyping can accomplish this, and can be reviewed in Table 3. When designing the initial layout for the interactive learning application, it was necessary to employ aspects of both organisational methods. As a large portion of the program was to include animations to convey concepts of circadian anatomy, it was important to use a storyboard approach that users could better visualise the details of the animation. The paper prototype method was also relevant to consider due to the interactive nature of the intended application. Figure 1 illustrates the steps taken to create the prototype, and Fig.  2 demonstrates the final scenes of the paper prototype.

BodyParts3D © (The Database Center for Life Science licensed under CC Attribution-Share Alike 2.1 Japan) was used for the selection of 3D models to be used in the application. This database provides a complete source of 3D digital human anatomy that is freely available and accessible under their licensing. Table  1 details the models that were used from BodyParts3D and the scene where they are located.

2.2

Software

The software used in the creation of the interactive learning application is presented in Table 2. All of the software was obtained through Glasgow School of Art using educational licensing.

Scene 1&5

2 3

4 5

3

Model type Epidermis All brain regions Spinal cord Optic tract Suprachiasmatic nucleus (remodelled) Optic chiasm Hypothalamus Retina (remodelled as rough endoplasmic reticulum) Epidermis Brain and spinal cord Heart Lungs Liver Thyroid gland

Methods

A five-step process was adopted to create the interactive learning application. Each step within the workflow pipeline through to development, including the software used, is summarized in Table 3.

Prototype and Initial Testing

A. Sugden et al.

16

Table 2  Software that was used in the creation of the interactive learning application, with a brief description of its key features Software Autodesk 3D Studio Max © Autodesk Incorporated 2016 (http://www.autodesk. co.uk/products/3ds-max/overview) Unity3D Version 5.1.3f1 © 2015 Unity Technologies ApS. (http://unity3d.com/)

Company Autodesk; CA, USA

Adobe Photoshop CS6 © 1990–2015 Adobe Systems Incorporated (http://www.adobe. com/uk/products/photoshop.html) MeshLab © 2005–2013 (http://meshlab. sourceforge.net/)

Adobe; CA, USA

Autodesk Mudbox 2016 © Autodesk Incorporated 2015 (http://www.autodesk. com/products/mudbox/overview) Avid Pro Tools © Avid Technology, Incorporated 2016 (http://www.avid.com/ products)

Unity Technologies; CA, USA

ISTI (Pisa, Italy) – CNR (Rome, Italy) Autodesk; CA, USA

Avid; MA, USA

Table 3  Comparison between the two types of graphic organisers Organiser Storyboard

Paper prototype

Definition A sequence of drawings that covey the design of each scene within an animation. Usually requires detailed drawings of scene views to convey visual concepts Hand-sketched paper representations of all windows, menus and dialog boxes that are necessary to the intended application (Snyder 2003). It usually lacks visual detail and is only focused on navigation, workflow, terminology and functionality (Travis 2012)

The prototype testing was conducted as a pilot study gathering provisional data from four students. Two studied Psychology at the University of Glasgow and had no previous experience of developing or critiquing application design. The other two studied at the School of Simulation and Visualisation at the Glasgow School of Art; they were familiar with application development but had no prior knowledge of circadian science. This provided a small but diverse range of critics from both science and technology backgrounds.


The techniques used to conduct the prototype test consisted of an observational study detailing the users' reactions to the application, and a short survey. The students were presented with each scene of the prototype and allowed to interact with it as if they were working with the digital application. The facilitator (AS) acted as the "computer", manipulating the pieces of paper to simulate how the interface would behave. In addition, students were asked to provide verbal opinions of the interface and suggestions to increase its usability. The feedback provided by participants was used throughout the development process of the interactive learning application.

3.2 3D Model Development

Models that were not present in BodyParts3D had to be created using 3D modelling software. The models that needed to be developed are presented in Table 4.

3.2.1 Eye

The eye was modelled using four simple spheres and textured using Autodesk 3D Studio Max. The mental ray textures used in the creation of all of the following eye structures are illustrated in Table 5.


Fig. 1  The flow diagram to demonstrate the steps taken to create the paper prototype model for the initial testing of the application

The soft selection tool was used to create the typical elongation of the front aspect of the cornea by lengthening the front vertex of the sphere (Fig. 3). The model representing the lens within the eye (Fig. 4) was made by duplicating the model of the cornea and manipulating it by scaling, rotating and placing it to fit. The sclera, iris and pupil of the eye were built using one sphere that was slightly smaller than, and placed within, the cornea. Manipulations such as the soft selection, pinch and extrude tools were applied to make the pupil extend within the eye, which provided more depth when looking through the pupil. The iris and sclera were based on the human eye material of Clint DiClementi (Clint 2007). A bump map was also applied to the model using the same texture to emphasise the veins on the outer surface of the eye. The retina (Fig. 4) was created by duplicating this structure, resizing it and manipulating it to fit the internal orbit of the eye. The microscopic cells that make up the retina, including the rods, cones, bipolar cells and ipRGCs, were created using Benjamin Cummings (2007) as a reference for anatomical accuracy. The cells were modelled using standard shapes and can be seen in Fig. 5; Table 5 highlights the types of mental ray texture used in 3DS Max.

3.2.2 Suprachiasmatic Nucleus

The hair-like projections typically found on the suprachiasmatic nucleus (SCN; Takahashi 2014) were not present on the BodyParts3D model; it therefore had to be remodelled separately. This task was undertaken using the freeform branch tool in 3DS Max, and the unedited and edited models of the SCN are shown in Fig. 6.

3.2.3 Neurons

The neurons within the SCN were designed using 3DS Max, based on reference images from Penn State Biology Department's online database. To start with, a simple sphere was created, using the push and pull tool to bulge and indent the shape and give it a more organic appearance. This was then duplicated, and the newly created structure was resized to encompass the original structure, thus forming the cell body of the neuron (Fig. 7). The freeform branch tool was then used to create the dendrites. An extended primitive hose shape was created and attached to the nucleus using the compound objects ProBoolean mechanism. The hose shape was then manipulated to better resemble the axon and myelin sheath of an anatomical neuron. Finally, the polys of the cell body, without the axon, were duplicated and scaled down to cap the other end of the axon, creating the terminal branches. Table 5 highlights the types of mental ray texture used in 3DS Max.

3.2.4 Nucleus of Neuron

The anatomical structures of the nucleus were built using a combination of 3DS Max and Mudbox. The models included the nucleolus, rough and smooth endoplasmic reticulum (ER), and mitochondria. The nucleolus was built using a simple sphere, with holes created in the wall of the sphere by implementing the chamfer vertices mechanism. Cylinders were attached together and distorted to make the smooth ER, and the mitochondrion was created using a capsule-shaped structure. Due to its many layers of folding, the rough ER was more challenging to create and had to be built using the modified retina model from BodyParts3D. By importing the model into Mudbox, we were able to mould the edges of the retina to resemble those of the rough ER. This model was then brought back into 3DS Max and scaled to fit the nucleolus (Fig. 8).


Fig. 2  The initial design of the interactive learning application using a paper prototype design


Table 4  Models that had to be created for the interactive learning application

Scene | Models
Start Menu, 1 & 2 | Eye; inner eye (including retina); rods, cones and retinal ganglion cells
3 | Suprachiasmatic nucleus (remodelled)
4 & 5 | Neurons; internal neuronal structures
6 | Adipose tissue



Table 5  Detail of each model that was built using a type of mental ray texture within 3DS Max

Model | Type of mental ray texture used | Reason for use
Cornea of the eye | Glass | Provides a glossy, wet material appearance that reflects and refracts light. An environmental reflection was also applied using a high dynamic range image (HDRI) to give the cornea a realistic external reflection
Retina | Car paint | Provides the retina with a red material tint and a yellow reflection, representative of an anatomical retina. To achieve a more aesthetic appeal, specular levels, bump map settings and opacity levels were adjusted
Rods & cones | Glass | Allows for a more organic-looking material to be created. Coloured materials were applied to designate the difference between each type of retinal cell
Neuron | Frosted glass | Provides a wet, organic effect, while decreasing transparency for better visualisation of the model in a dark environment

3.2.5 Content and Audio Production

The application was to include written and spoken content. To cater for a wider range of students, general circadian knowledge was detailed in the narration, which was geared more towards undergraduate level, while more in-depth knowledge and research was included in the written page content, which was geared towards graduate-level students. The page content and narration for the application were researched and written with reference to recent literature. The narration was recorded, and soft background music was added for each scene. Refer to Table 6 for further details of the audio production.

3.3 Interactive Application Design

The interactive component of the application was designed in the Unity game engine using the C# coding language; the flow chart for this is shown in Fig. 9. In addition, an audio controller was included in the design to allow the user to stop, restart and move through the narrations. The first step of the interactive application design process was importing the acquired and created models. Due to the excessive polygon count of some of the larger models, it was necessary to use a retopology method to reduce the polygon count. Using MeshLab's 'Quadric Edge Collapse Decimation' tool, the polys were reduced for each model with a single click of a button. The epidermis, lungs, heart and liver models were also created and retopologised to highlight the effects of the circadian rhythm on all major body regions.

3.3.1 Animations

Animations were developed to enhance usability and aesthetic appearance, and are summarised in Table 7. All animations were created using the Mecanim animation tool and state machine function within Unity.
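As a hedged sketch of this Mecanim pattern, the following hypothetical component fires a trigger parameter on an Animator state machine when a UI button is pressed, causing the corresponding camera fly-through clip to play ("FlyToNextScene" is an illustrative trigger name, not one taken from the chapter):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of button-driven Mecanim navigation: the Animator's state machine
// holds the camera fly-through clips, and a trigger moves between states.
public class CameraFlythrough : MonoBehaviour
{
    public Animator cameraAnimator;  // Animator component on the main camera
    public Button nextSceneButton;   // UI button that starts the fly-through

    void Start()
    {
        // When the button is clicked, fire the Animator trigger; the state
        // machine then transitions to, and plays, the matching clip.
        nextSceneButton.onClick.AddListener(
            () => cameraAnimator.SetTrigger("FlyToNextScene"));
    }
}
```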

3.4 Quiz

Using the MaterialUI (Invex Games 2014) buttons and code provided by an online tutorial (Brackeys 2016), a simple true/false quiz was incorporated into the interactive learning application.


Fig. 3  Model of cornea before (left) and after (right) textures were applied

Fig. 4  Model of inner eye structures showing the outer eye and retina

Fig. 5  Model of retinal cells demonstrating rods, cones and the cell wall


Fig. 6  The unedited version of the SCN imported from BodyParts 3D (left). The final model of the SCN after manipulation (right)

Fig. 7  The initial phase of modelling the neuron, with the axon and dendrites

The quiz was designed with a rising and setting sun moving slowly across a desert terrain in the background as the user takes the quiz. The sun was created using a light bloom effect (Grey Houston 2015), and the twinkling stars were created using a particle system.
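The chapter does not reproduce the quiz code itself; the sketch below, in the spirit of the Brackeys (2016) tutorial cited above, shows one way a true/false quiz can be wired up in Unity. The class and field names are illustrative assumptions:

```csharp
using UnityEngine;
using UnityEngine.UI;

// A quiz question: the statement shown and whether it is actually true.
[System.Serializable]
public class Question
{
    public string text;
    public bool answerIsTrue;
}

// Minimal true/false quiz: displays each question in turn and checks
// the user's button press against the stored answer.
public class QuizController : MonoBehaviour
{
    public Question[] questions;   // filled in via the Inspector
    public Text questionLabel;     // UI Text element showing the question
    private int index;

    void Start() { ShowQuestion(); }

    void ShowQuestion()
    {
        questionLabel.text = questions[index].text;
    }

    // Wired to the True and False buttons (pass true for the "True" button).
    public void Answer(bool userSaysTrue)
    {
        bool correct = (userSaysTrue == questions[index].answerIsTrue);
        Debug.Log(correct ? "Correct" : "Incorrect");
        index = (index + 1) % questions.Length;  // advance to the next question
        ShowQuestion();
    }
}
```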

3.5 Surveys

3.5.1 Student Survey

To conduct a pilot study on the interactive learning application, three surveys were designed using the online survey provider SurveyMonkey©. The target sample group of nine students comprised students of Psychology (end-stage users) and students from the School of Simulation and Visualisation (digital knowledge and experience).


Fig. 8  Progress of rough endoplasmic reticulum model using modified retina in Mudbox

Table 6  How the audio was created

Type of audio | How it was obtained
Narration | Recorded using Avid Pro Tools, a Zoom H4n voice recorder, a microphone and a Foley studio. Edited using the AudioSuite plug-ins provided with the Avid software
Background audio | Online download from YouTube©; music created by author Grey Houston (2015)

As the application was intended to enhance students' knowledge of circadian biology, it was important to evaluate its effectiveness before and after use. Therefore, the pre-evaluation survey was given immediately before using the application, and the post-evaluation survey was given to the students immediately after they used it. Table 8 details the main sections that constitute the pre- and post-surveys. Within the content achievement quiz, the following questions were presented:

1. The "master circadian pacemaker" of the mammalian system is located in the...
2. Which is the strongest zeitgeber in nature?
3. Intrinsically photosensitive retinal ganglion cells are efficient in conveying...
4. Where is the molecular circadian clock located?
5. What other activator gene binds to CLOCK to initiate the transcription phase of the molecular clock?
6. What are the two proteins that are unique to the secondary feedback loop?
7. How are signals from the suprachiasmatic nucleus transmitted to peripheral tissue?
8. What can cause a phase shift in an individual's metabolic circadian rhythm?
9. What activity has been shown to cause a phase delay in melatonin?

3.5.2 Instructor Survey

As academic staff will be the ones providing this supplementary resource, they need to be familiar with it, and their opinions are invaluable in shaping it. Three members of faculty were available to examine the interactive learning application. While they utilised the digital tool, they were observed to capture real-time feedback. Following exposure to and interaction with the interactive learning application, they were presented with a 12-question (Likert-scaled) survey and open-ended questions; the results of these are presented within the results section.


Fig. 9  Summary of the flow of various scenes and movement between each one

3.5.3 Interactive Learning Application Pilot Study

The pilot test for the interactive learning application was conducted by presenting the resource to participants in a controlled environment. Participants in this small-scale study were selected using opportunity sampling, whereby the students and instructors who were most available were asked to test the application. A matched-pairs experimental design was used, grouping the participants based on their level of experience and allocating each of them as either student or instructor. Nine student participants and two instructors completed the surveys. In the student group, three studied a taught postgraduate degree in medical visualisation and human anatomy (between the Glasgow School of Art and the University of Glasgow), while the other six were at various stages of their degrees in Psychology at the School of Psychology, University of Glasgow. All instructor participants were from the School of Psychology at the University of Glasgow.

All responses from the participants were anonymised through the use of the online surveying method. None of the participants' personal data was collected, and the users were not required to answer every question. Participants were briefed before the survey, and it was made clear to each of them that, by completing the survey, they were consenting to their responses being used for further analysis. No formal ethical approval was required. The results were collected during scheduled meetings based on the participants' availability. A pre-evaluation questionnaire was issued prior to use of the application, and a post-evaluation questionnaire at the end.

4 Results

4.1 Prototype Testing

The first issue stated by the four students initially reviewing the development process was the lack of instruction on how to navigate the interface. Three of the students stated that they


would like something to draw their attention to clickable objects within the scene. The second issue was the layout of the interface: the initial design had navigational buttons placed in different areas in each scene. All participants stated that this was a drawback and suggested that all recurring buttons should have a designated place in each scene. Relating this back to cognitive load theory, an unorganised interface could cause further strain on working memory; reorganising these buttons could therefore reduce the cognitive load and increase the usability of the application.

Table 7  Details of the types of animation and the specific mechanisms by which each was created, with the reasoning for this

Type of animation | Model and location | Purpose | How it was created
Camera movements | Main camera: movement between each scene | To create a smooth and immersive camera fly-through effect between each scene; to transition smoothly between two different scene views | An animator controller and animation clip were created for every camera movement, and the navigation was linked together within the same state machine. Using code, the specific animation clips were linked to buttons, which would allow the camera to fly to the next important location within the 3D space. Transitions between scene views were made possible by incorporating box colliders that would trigger a 'load scene' mechanism using code
Detail-oriented animations | The eye: Start Menu scene | To grab the user's attention and provide aesthetic appeal | Coded to rotate around a fixed position as if it were watching the movements of the mouse (Fig. 10a, b)
Detail-oriented animations | Neuron impulse: Scene 4 | To convey the anatomical phenomenon of a neuron sending an impulse signal along the axon | The impulse was created using the SE Natural Bloom & Dirty Lens shader and camera effect by Sonic Ether. The shader was applied to a simple sphere, the HDR camera setting was enabled and the particle diffuse settings were adjusted to get the correct amount of bloom. Finally, the sphere was animated to travel down the neuron's axon. This method was duplicated and applied to other neurons within the scene (Fig. 10c)
Utilitarian animations | Retinal structures: Scene 2; SCN: Scene 3; Peripheral tissues: Scene 6 | To draw the user's attention to clickable 3D objects within each scene; a glow effect was created on 3D objects | To create a pulsing glow effect, the object's material diffuse setting was adjusted while recording the animation. The animation was also coded to stop once the object had been clicked on by the user (Fig. 10d). Every movement for each object within the scene was animated using one animation clip and timed to play in unison with the narration using code
Story-type animation | Molecular circadian clock mechanism: Scene 5 | To display the clock mechanism outlined in 'Molecular Clockwork' within the literature review | All of the molecular components, including CLOCK, BMAL1, per, cry, ribosomes, proteins and DNA transcriptions, were created using basic GameObjects in Unity. The materials for the objects were designed in Adobe Photoshop. The basic outline of the animation was created with reference to the online lecture by Takahashi (2014)
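The pulsing glow described in Table 7 was produced by recording an animation clip on the material's diffuse setting; a code-driven equivalent, shown here purely as a sketch under the assumption of a standard material with a colour property, could be written as follows:

```csharp
using UnityEngine;

// Sketch of a pulsing-glow cue on a clickable object that stops once the
// object is clicked. A Collider must be present for OnMouseDown to fire.
public class ClickableGlow : MonoBehaviour
{
    public Color baseColor = Color.white;
    public Color glowColor = Color.yellow;
    private Renderer rend;
    private bool clicked;

    void Start() { rend = GetComponent<Renderer>(); }

    void Update()
    {
        if (clicked) return;
        // PingPong yields a value that rises and falls between 0 and 1,
        // producing a steady pulse between the base and glow colours.
        float t = Mathf.PingPong(Time.time, 1f);
        rend.material.color = Color.Lerp(baseColor, glowColor, t);
    }

    void OnMouseDown()
    {
        clicked = true;
        rend.material.color = baseColor;  // settle on the base colour
        // ...open the scene's information panel here (hypothetical hook)...
    }
}
```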


Table 8  The three main sections of the student surveys

Survey | Purpose | Questions | Provided
Diagnostic survey (DS) | To determine the student's level of confidence in circadian biology concepts before and after using the application | 3 questions using a Likert scale ranging from 1–5 (1 being the least confident and 5 being the most confident) | Pre-evaluation and post-evaluation
Content achievement quiz (CAQ) | To assess the student's knowledge of specific circadian concepts before and after using the application | 9 multiple-choice questions | Pre-evaluation and post-evaluation
Application attitude survey (AAS) | To understand the student's opinion of the application after use | 12 questions using a Likert scale ranging from 1–5 (1 being 'strongly disagree' and 5 being 'strongly agree') | Post-evaluation

Fig. 10 (a) Scene 2 view of retina, rods and cones, and ipRGCs (b) Scene 3 view of SCN (c) Scene 4 view of neurons within SCN (d) Scene 6 view of peripheral tissues


Both psychology students suggested that the in-text information be written in a paragraph format, whilst the students with digital expertise preferred a bullet-point approach. This further highlights the variance between different learning preferences, and how these may be subject-specific: an important feature that should be taken into account when designing the application.

4.2 Interactive Learning Application

Following the development and construction process, the final interactive learning application was completed. In consecutive order, the final "product" was as follows:

1. Eye created using 3DS Max (Figs. 11 and 12)
2. Cellular level of rods, cones, bipolar cells and ipRGCs (Fig. 13)
3. Suprachiasmatic nucleus (SCN) created in 3DS Max (Fig. 14)
4. Neurons and nucleus modelled in 3DS Max (Figs. 15 and 16)
5. Interactive application demonstrating:
(a) Options from the start screen (Fig. 17)
(b) Entrainment pathway and educational learning materials (Fig. 18)
(c) SCN as part of the final animation, integrated with surrounding anatomy (Fig. 19)
(d) Individual communicating neurons within the SCN, plus related text (Fig. 20)
(e) Molecular mechanisms of circadian rhythmicity (Fig. 21)
(f) Peripheral tissue involvement in the circadian rhythm (Fig. 22)
(g) Quiz section (Fig. 23)
(h) Final interactive application, demonstrating the scenes where users learn about the peripheral tissues involved in the circadian rhythm and the issues that arise due to dysfunction of the circadian cycle

Fig. 11  Final render of the eye created using 3DS Max – anterior view

Fig. 12  Final render of the internal structures of the eye, including the retina and lens, created using 3DS Max

4.3 Student Feedback

4.3.1 Diagnostic Survey

The results for this are presented in Fig. 24 and Table 9. Even with a variety of student users, there was a clear improvement between the pre- and post-evaluation questionnaires in relation to confidence levels in visualising anatomical structures, the circadian rhythm and pathologies of circadian dysfunction.

4.4 Content Achievement Quiz

The content achievement quiz (multiple-choice questions) assessed how effective the application was in improving students' knowledge of circadian biology. Figure 25 shows the comparison between the pre- and post-evaluation scores for each participant. According to the collated data, eight out of nine students experienced an increase of 10–20% in content understanding after interacting with the learning application. One student received the same score for both the pre- and post-quiz. Participants 3 and 8 each skipped one question in the pre-quiz, which was counted as incorrect; however, both participants answered the skipped question incorrectly in the post-quiz.

4.4.1 Application Attitude Survey

The answers, based on a set of 12 questions linked to the interface, the quality of the animations and the overall usefulness of the application, are provided in Fig. 26 and Table 10. There was a positive response to all questions, and no participant disagreed or strongly disagreed with any statement. One respondent skipped the fourth statement, regarding the intuitiveness of the application.

4.4.2 Additional Feedback

An open-ended section at the end of the survey allowed participants to comment on anything related to the application. Two-thirds of respondents commented positively on the application, with a third of participants mentioning that the narration playing along with the dynamic animation, such as the molecular clock (Scene 5), was a strong addition. The interactive learning application was also found to be informative (one-third of respondents) and easy to use (just over one-fifth of users; 22%).


Fig. 13  Final render of cells within the retina, including the cell wall (blue), rods (yellow), cones (purple), bipolar cells (pink) and ipRGCs (green)

Fig. 14  Final render of the recreated suprachiasmatic nucleus model from 3DS Max

The major disadvantage of the application was that it was found to be incredibly detailed and lengthy, with a third of users stating that some scenes had a vast amount of text without enough dynamic visual stimuli. Greater detail on how to operate the application was requested by 11% of users. Suggestions for improvement stated that more animations would benefit the application, along with greater interactivity, including providing a score at the end of the quiz to give it a more competitive edge. One-third of participants did not complete this section.

4.5 Instructor Feedback

The instructors' feedback was based on their interpretation of the application's usefulness as a supplementary teaching tool for use in class or in a visual learning environment.


Fig. 15 Neurons modelled in 3DS Max

Fig. 16 Nucleus modelled in 3DS Max

One instructor was unable to complete the survey but provided detailed verbal feedback, while the other instructors provided feedback using the survey. Figure 27 and Table 11 provide a breakdown of the responses. Both instructors who completed the survey, members of faculty at the School of Psychology at the University of Glasgow, agreed with most of the statements. One participant selected "somewhat agree" for the statement "I was satisfied with the overall function of the application". None of the respondents disagreed or strongly disagreed with any of the Likert-scale statements.

In the open-ended responses, the most common positive feedback related to the animations and overall aesthetics, with all instructors mentioning the visual appeal of the application. One respondent also noted positively that the research included was very recent. On the negative side, one respondent pointed out that occasional minor errors were present in the information, along with a lack of detail in the research and a lack of emphasis on specific concepts.


Fig. 17  Main menu where users can navigate to any scene within the application

Fig. 18  Entrainment pathways and educational materials

Two instructors stated that the font size was too small within the application. Finally, one suggestion was made to have the page content appear as the narration discussed it; the respondent emphasised that students would probably begin reading the page content while the narration was playing and not engage with the audio information.

During the observational study, it was found that all of the instructors began interacting with the interface while the narration was playing. They also took more time to explore the interface thoroughly than the student participants did. For example, while none of the students manipulated the audio slider or opened the 'references' window, all three of the instructor participants used these functions.


Fig. 19  Suprachiasmatic nucleus as part of the final animation, integrated with surrounding anatomy

Fig. 20  Individual communicating neurons within the suprachiasmatic nucleus, plus related text

5 Discussion

In this study, we have demonstrated a clearly structured approach to creating a full interactive learning application in the field of circadian rhythm biology. By selecting widely available software from major manufacturers, the methodology presented here, combining image editing, 3D modelling, animation and coding, has resulted in the creation of an interactive learning application. This approach, combining a variety of tools at key development stages, demonstrates the ease with which a package can be developed to help our student learners with the complexities of the circadian rhythm.

Teaching the circadian rhythm and encouraging engagement with molecular science has long been a challenge for educators, and remains so. Cimer (2012) stated that one of the two main reasons students find difficulty in learning biological concepts is that the subject matter can appear abstract, and that teaching environments need to be examined to ensure they are more effective in engaging students.


Fig. 21  Molecular mechanisms of the circadian rhythmicity

Fig. 22  Peripheral tissue involvement of the circadian rhythm

Indeed, daily use of technology has become a societal norm, with individuals in constant contact with the digital world. This has sparked a transition from traditional forms of teaching to more dynamic modalities, which encompass a wider range of learning styles and information input. Specifically, in education, Guze (2015) has shown that there are now a variety of methodologies used to support students, including computer-assisted learning, mobile device usage, digital games and simulations. Digital animations have also become increasingly popular. Traditionally, animations were difficult to create, required expensive resources and were often inaccurate. In contrast, technological advances now facilitate the creation of animations of professional quality using relatively inexpensive software.


Fig. 23  Quiz section

Fig. 24  Chart demonstrating student confidence levels pre- and post-evaluation on the three main categories

The growth of technology over time is reflected in the production value of these animations. For instance, the accuracy of anatomical structures and their physiological processes can be perfected to the point of hyperrealism using advanced animation techniques. This provides students with an accurate visualisation of the mechanism, which can lead to better conceptualisation of the material and greater knowledge acquisition.

One of the most influential reasons to use animation for didactic purposes is its ability to convey ample information in a short amount of time. This can consist of visual, written or auditory information, or a combination of the three. A theoretical perspective known as dual-code theory (DCT) emphasises the use of visual material in combination with linguistic information. The use of DCT to teach complex or abstract concepts in science has been shown by many studies to be an effective pedagogical approach from primary through to post-secondary education. This theoretical perspective provides insight into how visual perception affects


Table 9  Data for total users pre- and post-evaluation on the three main categories of confidence items

Fig. 25  Individual anonymised participants' pre- and post-use quiz scores for the interactive learning application

memory, and can be implemented to enhance learning and understanding (Vavra et al. 2011). In addition, the cognitive theory of multimedia learning has demonstrated that the capacity of working memory is maximised when the visual/pictorial processing channel is combined with the auditory/verbal processing channel, an idea that is also mirrored in DCT (Brame 2015). An example of this can be seen when providing an animation with narration, where the learner has a better chance of encoding the information from working memory into long-term memory. Therefore, the interactive learning application we have created combines the anatomical and physiological perspectives with visual and auditory methods to enhance learning opportunities. This includes ensuring that the tutorial covers the key features of the internal clock, namely the photic and non-photic stimuli, as previously discussed.


Fig. 26  Attitudinal survey on the interactive learning application

Table 10  Numbers of participants who answered each of the categories of questions on the five-point Likert scale, with a weighted average

Statement | Strongly disagree (1) | Disagree (2) | Somewhat agree (3) | Agree (4) | Strongly agree (5) | Total | Weighted average
The application is easy to use | 0 | 0 | 0 | 4 | 5 | 9 | 4.56
The layout is visually appealing | 0 | 0 | 0 | 2 | 7 | 9 | 4.78
The buttons are placed in a spot that enhances the usability | 0 | 0 | 0 | 5 | 4 | 9 | 4.44
The animations are intuitive | 0 | 0 | 0 | 3 | 5 | 8 | 4.63
The instructions are clear and understandable for each scene | 0 | 0 | 2 | 5 | 2 | 9 | 4.00
I would recommend this application to a friend | 0 | 0 | 0 | 4 | 5 | 9 | 4.56
I believe this would be a useful tool for students | 0 | 0 | 0 | 3 | 6 | 9 | 4.67
Using this application was more effective than paper-based resources | 0 | 0 | 1 | 3 | 5 | 9 | 4.44
This application has enhanced my knowledge of circadian rhythm | 0 | 0 | 1 | 4 | 4 | 9 | 4.33
Using the application was enjoyable | 0 | 0 | 1 | 3 | 5 | 9 | 4.44
I would use the application as a study tool | 0 | 0 | 1 | 2 | 6 | 9 | 4.56
I would use computer-based applications more often if they were available in other topics | 0 | 0 | 0 | 2 | 7 | 9 | 4.78
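The weighted averages in Tables 10 and 11 follow the standard Likert calculation: each response count is multiplied by its scale value and the sum is divided by the number of respondents. As a check, the first row of Table 10 works out as:

```latex
\bar{w} = \frac{\sum_{i=1}^{5} i \, n_i}{\sum_{i=1}^{5} n_i}
        = \frac{(4 \times 4) + (5 \times 5)}{9}
        = \frac{41}{9} \approx 4.56
```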

In addition, the suprachiasmatic nucleus, thought to be the "master pacemaker" of the circadian rhythm, has been one of the key features of the tutorial. This nucleus consists of two small nerve clusters located in an area at the base of the brain called the anterior hypothalamus (Vitaterna et al. 2001). This paired structure contains about 20,000 neurons in each half, and each of these neurons produces its own circadian oscillation (Herbert 1994). In 1972, the chronobiologists Stephan and Zucker proved the importance of the SCN when complete destruction of these nuclei in hamsters led to a loss of circadian rhythms and an inability of the animals to synchronise to environmental cues (Herbert 1994). Another groundbreaking discovery, made by Ralph et al. (1990), demonstrated that if SCN tissue was transplanted into a host animal with a


Fig. 27  Instructor application assessment of the interactive learning application

Table 11  Data table showing the instructor application evaluation scores on the five-point Likert scale

Statement | Strongly disagree (1) | Disagree (2) | Somewhat agree (3) | Agree (4) | Strongly agree (5) | Total | Weighted average
I would recommend this application to my students | 0 | 0 | 0 | 1 | 1 | 2 | 4.50
I believe this will help me better explain circadian concepts to my students | 0 | 0 | 0 | 1 | 1 | 2 | 4.50
Using this application was enjoyable | 0 | 0 | 0 | 2 | 0 | 2 | 4.00
I would provide more computer-based applications to my students if they were available in other topics | 0 | 0 | 0 | 2 | 0 | 2 | 4.00
This application could be useful at the undergraduate level | 0 | 0 | 0 | 1 | 1 | 2 | 4.50
This application could be useful at the graduate level | 0 | 0 | 0 | 1 | 1 | 2 | 4.50
I was satisfied with the overall function of the application | 0 | 0 | 1 | 0 | 1 | 2 | 4.00

damaged SCN, the host would exhibit the circadian rhythms of the donor (Ralph et al. 1990; Vitaterna et al. 2001). This demonstrated the independent nature of the neurons within the SCN and their ability to orchestrate clock rhythms in peripheral tissues. The complexity of the circadian rhythm also arises from activities at the cellular level. Therefore, key to this interactive learning application were the genes and the transcriptional-translational feedback loop mechanism, and its interaction with the light of day (Vitaterna et al. 2001; Ko and Takahashi 2006; Kramer and Merrow 2013). These were also designed to be as visually engaging as possible, enhancing the user's transfer of information from the sensory to the working, and then to the long-term, memory (Mayer 2003; Moreno and Mayer 2007; Brame 2015).

The development of this interactive learning application has many uses. The project was developed, in the first instance, for students of an undergraduate psychology degree. However, it can be used by a wide variety of students, both undergraduate and postgraduate, wherever knowledge and understanding of the circadian rhythm has to be developed. It also has a much wider application, as the circadian rhythm has much wider effects on the human body. Metabolism is controlled by the SCN and involves input from local peripheral systems in tissues such as the liver, pancreas, skeletal muscle, intestine and adipose tissue (Offermanns and Rosenthal 2008; Mohawk et al. 2012; Shi and


Zheng 2013; Takahashi 2014). Therefore, the wider effects of the circadian rhythm on the rest of the body have also been included in the interactive learning application. In addition, the circadian rhythm's relation to mood disorders, jet lag and shift work has huge societal and economic implications. Ensuring access to an easy-to-use and understandable educational package, like the one we have created, is paramount to understanding the circadian rhythm and its wider implications for the body. This will have a wide reach, not just to our medical, scientific and psychology communities for education and training, but also to the wider public and those affected by disruptions to the circadian rhythm, such as shift workers and the airline industry.

6 Future Development

We have undertaken a small-scale analysis of the usability and functionality of this interactive learning application. However, to assess the wider implications of its use, a larger-scale usability study needs to be developed. In addition, it may be beneficial to assess users with a range of spatial abilities to ensure the application suits a variety of learner types. We have, in our opinion and that of a small set of users, created a successful interactive learning application. However, by incorporating further dynamic and interactive components, users' comprehension of anatomical and mechanistic circadian functions could be enhanced. Indeed, adding more of these types of tools and interactivity could lend itself to an even more engaging and user-friendly resource. A beneficial advancement of this study would be the development of the interactive learning application as a downloadable application for smartphones, laptops and other Internet-enabled devices. As with any package of this nature, it could be enhanced with more interactive features, content, extended quiz sections, clinical-based


scenarios and similar. However, this study has demonstrated the feasibility of creating such a resource using widely available software.

7 Conclusion

The purpose of this study was to show how widely available, industry-standard software packages could be used to create an interactive learning application about the circadian cycle. We have shown, through a clear presentation of the workflow methodology, how to create such a package for the end-stage user wanting to learn more about the circadian cycle and its wider effects on the human body. We have conducted a small-scale pilot study identifying opinions on, and the usability of, the application from both staff and students. To fully assess its effectiveness in the learning, and wider, environments, a large-scale study will clearly have to be conducted as an extension to this work. However, through this process and the procedures stated, we have clearly defined a way to create interactive learning applications combining 3D technologies, sound and animations. This ensures the package is attractive and engaging, yet easy to use and follow. The amalgamation of a variety of software packages and created assets in the creation of an interactive learning application is not limited to this field. This approach and methodology can be used to create any type of interactive learning application that requires a visual representation of the material to engage the end-stage user.

References

Albrecht U (2012) Timing to perfection: the biology of central and peripheral circadian clocks. Neuron 74(2):246–260
Atkinson G, Edwards B, Reilly T, Waterhouse J (2007) Exercise as a synchroniser of human circadian rhythms: an update and discussion of the methodological problems. Eur J Appl Physiol 99(4):331–341
Benjamin Cummings of Pearson Education Inc © (2007) https://cdn-images-1.medium.com/max/800/1∗PtvcscbQBA-zyaueyOaPw.jpeg. Accessed 30 Jan 2019

Berson DM, Dunn FA, Takao M (2002) Phototransduction by retinal ganglion cells that set the circadian clock. Science 295(5557):1070–1073
Brackeys (2016) How to make a quiz game in unity (E01. UI) – tutorial [video]. Available at: https://www.youtube.com/watch?v=g_Ff1SPhidg. Accessed 30 Jan 2019
Brame C (2015) Effective educational videos. [Online] Vanderbilt University Center for Teaching. Available at: https://cft.vanderbilt.edu/guides-sub-pages/effective-educational-videos/. Accessed 30 Jan 2019
Cimer A (2012) What makes biology learning difficult and effective: students' views. Educ Res Rev 7(3):61–71
Clint DiClementi © (2007) http://tutorials.render-test.com/eyemodeling.html. Accessed 30 Jan 2019
Eastman CI, Hoese EK, Youngstedt SD, Liu L (1995) Phase-shifting human circadian rhythms with exercise during the night shift. Physiol Behav 58(6):1287–1291
Foster R, Kreitzman L (2004) Rhythms of life. Profile Books Ltd, London
Fuchikawa T, Eban-Rothschild A, Nagari M, Shemesh Y, Bloch G (2016) Potent social synchronization can override photic entrainment of circadian rhythms. Nat Commun. http://www.nature.com/articles/ncomms11662. Accessed 30 Jan 2018
Grey Houston (2015) https://www.youtube.com/results?search_query=Grey+Houston+. Accessed 30 Jan 2019
Guze PA (2015) Using technology to meet the challenges of medical education. Trans Am Clin Climatol Assoc 126:260–270
Hattar S, Liao HW, Takao M, Berson DM, Yau KW (2002) Melanopsin-containing retinal ganglion cells: architecture, projections, and intrinsic photosensitivity. Science 295(5557):1065–1070
Herbert J (1994) The suprachiasmatic nucleus. The mind's clock. J Anat 184(2):431
Jastrow H, Hollinderbaumer A (2004) On the use and value of new media and how medical students assess their effectiveness in learning anatomy. Anat Rec Part B New Anat 280(1):20–29
Ko CH, Takahashi JS (2006) Molecular components of the mammalian circadian clock. Hum Mol Genet 15(2):R271–R277
Kramer A, Merrow M (2013) Circadian clocks. Handb Exp Pharmacol 217(1):229–400
Lipinge SM (2013) Challenges of large class teaching at the university: implications for continuous staff development activities, pp 105–120. [Online] Available at: http://people.math.sfu.ca/-vjungic/Lipinge.pdf. Accessed 30 Jan 2019
Ma M, Bale K, Rea P (2012) Constructionist learning in anatomy education: what anatomy students can learn through serious games development. In: Ma M et al (eds) Serious games development and applications, Lecture notes in computer science, LNCS 7528. Springer-Verlag, Berlin Heidelberg, pp 43–58. isbn:978-3-642-33687-4

Manson A, Poyade M, Rea P (2015) A recommended workflow methodology in the creation of an educational and training application incorporating a digital reconstruction of the cerebral ventricular system and cerebrospinal fluid circulation to aid anatomical understanding. BMC Med Imaging 15:44. https://doi.org/10.1186/s12880-015-0088-6. pmid:26482126
Mayer RE (2003) The promise of multimedia learning: using the same instructional design methods across different media. Learn Instr 13:125–139
Mohawk JA, Green CB, Takahashi JS (2012) Central and peripheral circadian clocks in mammals. Annu Rev Neurosci 35:445–462
Moreno R, Mayer R (2007) Interactive multimodal learning environments. Educ Psychol Rev 19:309–326
Offermanns S, Rosenthal W (2008) Encyclopedia of molecular pharmacology, 2nd edn. Springer, New York/Berlin/Heidelberg
Paul KN, Saafir TB, Tosini G (2009) The role of retinal photoreceptors in the regulation of circadian rhythms. Rev Endocr Metab Disord 10(4):271–278
Pickard GE, Sollars PJ (2012) Intrinsically photosensitive retinal ganglion cells. Rev Physiol Biochem Pharmacol 162:59–90
Raffan H, Guevar J, Poyade M, Rea PM (2017) Canine neuroanatomy: development of a 3D reconstruction and interactive application for undergraduate veterinary education. PLoS One 12(2):e0168911. https://doi.org/10.1371/journal.pone.0168911
Ralph MR, Foster RG, Davis FC, Menaker M (1990) Transplanted suprachiasmatic nucleus determines circadian period. Science 247(4945):975–978
Rea P (2014) Clinical anatomy of the cranial nerves, 1st edn. Academic/Elsevier, Amsterdam
Rea P (2016) Advances in anatomical and medical visualisation. In: Pinheiro M, Pereira D (eds) Handbook of research on engaging digital natives in higher education settings. IGI Global, Pennsylvania. Chapter 11, pp 244–264
Shi M, Zheng X (2013) Interactions between the circadian clock and metabolism: there are good times and bad times. Acta Biochim Biophys Sin Shanghai 45(1):61–69
Sisson M (2010) Circadian rhythms: zeitgebers, entrainment, and non-photic stimuli. [Online] Available at: http://www.marksdailyapple.com/circadianrhythms-zeitgebers-entrainment-and-non-photicstimuli/#axzz4DfYaLIDz. Accessed 30 Jan 2019
Snyder C (2003) Paper prototyping: the fast and easy way to design and refine user interfaces. Morgan Kaufmann, San Francisco, pp 5–20
Takahashi J (2014) Circadian clocks: clock genes, cells and circuits. [Video] Available at: https://www.youtube.com/watch?v=whDMhPHOJZM. Accessed 30 Jan 2019
Tibell LA, Rundgren CJ (2010) Educational challenges of molecular life science: characteristics and implications for education and research. CBE Life Sci Educ 9(1):25–33

Travis D (2012) 7 myths about paper prototyping. [Online] UserFocus. Available at: http://www.userfocus.co.uk/articles/paperprototyping.html. Accessed 30 Jan 2019
Vavra KL, Janjic-Watrich V, Loerke K, Phillips L, Norris S, Macnab J (2011) Visualization in science education. Alberta Sci Educ J 41(1):22–30


Vitaterna MH, Takahashi JS, Turek FW (2001) Overview of circadian rhythms. Alcohol Res Health 25(2):85–93 Welsh E, Anderson P, Rea P (2014) A novel method of anatomical data acquisition using the Perceptron ScanWorks V5 scanner. Int J Recent Innovation Trends Comput Commun 2(8):2265–2276. ISSN:2321-8169

Utilising Anatomical and Physiological Visualisations to Enhance the Face-to-Face Student Learning Experience in Biomedical Sciences and Medicine

Christian Moro and Sue Gregory

Abstract

The introduction of online learning and interactive technology into tertiary education has enabled biomedical science and medical faculties to provide students with quality resources for off-campus study. This encompasses online self-directed learning, interactive blogs, quizzes, recordings of lectures and other resources. In addition, textbooks are now supplemented with interactive online learning tools, meaning that the student now has more accessibility than ever to engage with content. However, in biomedical sciences and medicine, technology has also enhanced the in-classroom experience. Anatomical and physiological visualisations in virtual, augmented and mixed reality provide students with an unprecedented ability to explore virtual content in class, while learning remains structured by the facilitator and teaching team. This chapter will provide insights into the past use of technology to enhance off-campus learning, and then focus on the range of visualisations utilised within the laboratory or classroom in order to facilitate learning in biomedical sciences and medicine, including: augmented reality; virtual reality; mixed reality and holograms; 3D printing; simulated dissections and anatomy simulation tables; and "smart" tablets and touchscreen devices.

Keywords

Virtual reality · Augmented reality · Educational technology · Mixed reality · Hololens · Medical education

C. Moro (*) Faculty of Health Sciences and Medicine, Bond University, Gold Coast, QLD, Australia. e-mail: [email protected]

S. Gregory, University of New England, Armidale, NSW, Australia. e-mail: [email protected]

1 The Phenomenal Rise of Off-Campus/Online Learning

The rise of Massive Open Online Courses (MOOCs) has meant that thousands of students no longer need to physically attend a university in order to receive instruction. Over 900 universities offer 11,400 MOOCs, with enrolments of over 100 million students (Shah 2018). Of these, it is estimated that over 50% are studying within a Science, Technology, Engineering or Mathematics (STEM) discipline (Shah 2018). The motivation for this phenomenal rise appears varied, arising from desires to modernise,


redesign curriculum and capitalise on promotional opportunities (O'Connor 2014). There are other driving factors for universities (providers) to roll out MOOCs, with governments increasingly creating policy and funding to stimulate development in the online market. For example, the Australian Trade and Investment Commission's international education roadmap strategises for the enrolment of 110 million international students by 2025 (Australian Government 2016) and increasingly incentivises educational providers and industry wishing to enter this space. However, educators have genuine concerns over the rise of MOOCs. There is a lack of consistent engagement between students and instructors in any MOOC, and students appear somewhat reluctant even to communicate with each other to the extent that they do in face-to-face or traditional online classes (Hew et al. 2018). Student expectations within a MOOC are highly varied, meaning that students often do not know what to anticipate when enrolling (Littlejohn et al. 2016). Instructors are also concerned about the future of MOOCs for online learning, with many considering them far less effective than face-to-face learning (Lowenthal et al. 2018). The trend extends far beyond single courses, with university-hosted MOOC-based online degrees already enrolling over 9000 students. This means that students increasingly have the option to complete their entire tertiary educational journey online, without any face-to-face instruction. The methods employed when using MOOCs as educational tools are varied. Some MOOC studies are increasingly drawing on sources of information outside the set course material, such as content posted on social media (Bicen 2017). However, with course content and interaction with the instructor being two of the major factors enticing students to remain enrolled within a MOOC (Hone and El Said 2016), the use of off-site or outside information risks lowering retention rates. This is an important consideration, as student retention remains an important challenge for MOOC developers and providers. Some studies have suggested that up to 90% of students enrolled in MOOCs drop out (Alraimi et al. 2015).


However, this number may be unrepresentative, as some students may enrol in a MOOC simply to "browse" (Reich 2014), or may be there solely for the content and less interested in completing any required assessments (Greene et al. 2015). Although the retention rates are alarmingly low, retention may not be a suitable measurement for assessing MOOCs compared with traditional university subjects (DeBoer et al. 2014). Beyond MOOCs, the general trend of educational technology appears to be moving towards facilitating and enabling off-campus learning. Technologies surrounding self-directed learning, interactive blogs, quizzes, recordings of lectures and other resources are increasingly invested in by universities, industry, publishers and Educational Technology (EdTech) providers. This allows scalability of resources, accessibility of information and self-paced learning experiences for students regardless of location, even if this trend challenges librarians who deal with the copyright and licencing of these products (Gore 2014). At the other end of the spectrum, educational technology is increasingly being employed to enhance face-to-face learning on campus. This fusion of sessions facilitated by both technology and an instructor has seen much slower adoption than online learning, yet is able to bring many of the benefits of online learning, such as the self-paced and individualised environment, into the classroom. This is particularly important in anatomy and physiology education in medicine and the biomedical sciences, where students require an understanding of the human body in a 3D space. Technology shown to be effective in this space includes augmented, virtual and mixed reality, holograms, 3D printing, simulated dissections and simulation tables, as well as interactive tablets and touchscreen devices. The use of technology in teaching has also facilitated a genuine trend in higher education and medical health education away from the traditional practice of didactic lectures and tutorials, towards group-based (Moro and McLean 2017), self-directed (Murad et al. 2010) and online education (Clark and Mayer 2016). There is also a range of upcoming technological innovations which may positively benefit


teaching. This includes mixed reality (MR) through devices such as the Microsoft HoloLens. For students, technology enables access to course content at any place or time; for educators, it expands their ability to teach well beyond the classroom or lecture hall setting (Goh 2016). These are exciting trends in educational technology. Technology also enables educators to teach using multiple modes, which is particularly important in anatomy and health care education (Estai and Bunt 2016).

2 History of Off-Campus/Distance/Online Education

Learning from a distance (or off-campus) in the 1970s was through products received in the post (mail), such as printed reading materials and assessment tasks utilising audio cassette tapes, which housed dialogue from the lecturer (Maroto-Alfar and Durán-Gutiérrez 2016). Digital resources came to the fore in the late twentieth century for online education through the use of chat rooms (which began in the 1970s), discussion boards (the earliest beginning in 1985), blogs (as early as 1994) and wikis (in 1994, with Wikipedia in 2001), all enhancing online interaction between student and lecturer and between students themselves. Social media was also used as a tool to engage students with their learning, such as LinkedIn in 2002, Facebook in 2004 and YouTube in 2005. More immersive technologies started evolving and being added to the list of tools used by lecturers in the mid-2000s, for example through the use of a virtual world such as Second Life, which was created in 1994 but only became available for commercial use in 2003 and for educational purposes in 2006. There are now more than 200 virtual worlds, and many institutions create their own for use with their students. The success of virtual worlds for teaching and learning is well documented for both online and on-campus students. Virtual worlds were used as a low-cost space substituting for the real world (Gregory and Tynan 2009). Online students felt like they were there, in the same space, at the same time and with their peers.


An extensive pilot study from 2008 to 2011 of 3576 students demonstrated the positive impact of using a virtual world for teaching and learning, with results indicating that voluntary virtual-world groups academically out-performed the non-virtual-world group of initial teacher education students: 79.3% of those who chose to learn using the virtual world attained a grade of 75% or higher, compared with 46.5% of those who chose not to study using the virtual world (Gregory 2013). Since this 2008–2011 research, digital technologies have advanced, and virtual classrooms can now be built to monitor how this type of teaching and learning progresses through various other technologies. Figure 1 provides an image of a virtual class being undertaken by an initial teacher education student with a classroom of non-player characters (bots), who are programmed to respond to the teacher. Through a comprehensive research project on initial teacher education students' perceptions of learning in a virtual world between 2008 and 2011, in which over 40,000 lines of text were analysed, several themes emerged: students felt that learning through a virtual world was engaging; it was an important component of communication for learning; the ability to be anonymous was important for communicating without bias; the tyranny of distance was overcome to a large extent; and the students felt a stronger sense of interaction and collaboration (Gregory 2012). However, they did feel that there were distractions in the virtual world that took them off task and that, sometimes, the technology let them down.

3 Technology Today

Since the era of virtual-world education, augmented, virtual and mixed reality, along with a variety of other technologies, have been created to provide an even more immersive experience for students. To provide a better sense of these technologies, the following section briefly defines each one.

Fig. 1  Virtual class of bots (non-player characters) being taught by an initial teacher education student through their avatar

• In virtual reality (VR), the user's senses (sight, hearing and motion) are fully immersed in a synthetic environment that mimics the properties of the real world through high-resolution, high-refresh-rate head-mounted displays, stereo headphones and motion-tracking systems (Moro et al. 2017a, b). This technology enables an individualised learning experience, even in a busy or noisy laboratory or teaching environment.
• In augmented reality (AR), through the use of a camera and screen (e.g. a smartphone or tablet), digital models are superimposed onto the real world. The user is then able to interact with both the real and virtual elements of their surrounding environment (Birt et al. 2018).
• Three-dimensional (3D) displays utilise high-resolution screens on tablets and smartphones to visualise pseudo-3D models and environments. The user interacts with digital aspects on the screen and manipulates objects using a mouse or finger gestures.
• Mixed reality (MR), a continuum of these innovative technologies, combines the real and virtual worlds through head-mounted see-through displays and strong processing power, which allows for visualisations at different and multiple scales, and the design and implementation of comparative mixed reality pedagogy across multiple disciplines.

4 Modern Technology's Impact on Medicine and Biomedical Science Learning Experiences in Anatomy and Physiology

As described, technology has largely been employed to enhance off-campus learning in universities, but there is increasing interest within faculties in investing in technology that enhances the face-to-face experience. Students are no longer content with a single lecturer orating content in front of a large audience, and instead expect interactive, engaging learning in which they can fully participate with educators and staff. Achieving this goal is difficult, however, when managing a large cohort. Technology can bridge the gap between a single instructor acting as the sole provider of information and students interacting with various modes of delivery. Providing a portion of lesson content through technology-enhanced modes enables students to


have a self-directed, structured and interactive lesson, even within a large class. When used by students during normal sessions, modern educational devices and technologies may also negate some of the negative impacts of large class sizes and support students' perception of receiving an individualised learning experience (Cash et al. 2017; Monks and Schmidt Robert 2011). One mode of learning useful in biomedical sciences and medicine is virtual reality. The learner can be entirely immersed in a virtual space with depictions of human anatomical structures or physiological processes (Fig. 2). Because virtual reality blocks out outside interference, distractions from other students or the classroom environment are removed, and the student can focus on the learning at hand. Virtual reality therefore maintains a consistent learning environment, regardless of the size of the class or the number of other students working in the vicinity. Virtual reality is increasingly being used to provide students with anatomical knowledge. One point for educators to consider with virtual reality, however, is the impact of motion sickness on some learners during or after its use (Moro et al. 2017b).


Augmented reality, on the other hand, does not cause motion sickness or adverse effects (Moro et al. 2017a). By interacting with the object through a camera on a smartphone or tablet, the user remains in their physical environment and visualises any renderings via the screen (Fig. 3). Students can wear headphones to receive audio content, allowing the instructor to provide lessons or instructions that are interactive and self-directed for the student's investigation. The interactivity of this technology, such as removing or adding layers to an anatomical model or feature, can set the pace of learning within AR and enables a self-directed speed of instruction for each student within a diverse cohort (Birt et al. 2018). AR can also be used to simulate real-life surgical or medical procedures, such as intubation, suturing or phlebotomy. The most recent development in educational technology is mixed reality. This is a fusion of virtual and augmented reality and requires specialised headsets. The most utilised of these is the Microsoft HoloLens (Fig. 4), which allows students to view holographic visualisations of human anatomical structures in front of them. These devices can be connected together, with learners able to work with the

Fig. 2 Students learning within virtual reality in an anatomy and physiology class



Fig. 3 Students utilising augmented reality through a tablet, where the 3D-printed marker becomes a visualised model of the brain on the tablet screen

Fig. 4 A student views holograms through the HoloLens within an anatomy class revising the human brain

educator to dissect a model or navigate through features of the human body. Because the room or environment the user is in remains visible, unlike in virtual reality, the issues of dizziness or cybersickness so often reported in virtual reality are largely minimised.

The use of mixed reality headsets also leaves the user's hands free to write notes, ask questions or interact with the models displayed on the screen. Anatomical and physiological visualisations in virtual, augmented and mixed reality provide


students with an unprecedented ability to explore virtual content in class, while learning remains structured by the facilitator and teaching team. They also allow simple and rapid switching of models or visualisation modes, which is very useful for depicting anatomical variations or demonstrating structures being dissected in real time. A range of other technologies is also being introduced to enhance the on-campus learning experience in medicine and biomedical science teaching sessions. These include 3D printing, which can be used to show anatomical variations or be tailored to depict a wide range of diseases or patient characteristics (Garcia et al. 2018). Alternatively, virtual dissection boards, such as the Anatomage Table, can visualise the dissection process using imagery instead of cadavers or donated human specimens. Increasing numbers of biomedical and medical departments utilise technology-enhanced learning to supplement their face-to-face lessons, and it is the individual learners who have benefited.

5 Conclusion

While technology has previously been utilised by universities mainly to facilitate off-campus learning, recent years have seen its introduction into on-campus learning experiences. This has not only enabled educators to provide structured, modern and individualised sessions, but also given students alternative ways to learn during teacher-directed sessions. It has included the introduction of virtual, augmented and mixed reality into anatomy and physiology labs, as well as 3D printing, and even virtual dissections through interactive tables and screens. This use of educational technology within laboratory or learning sessions is the start of an exciting time, in which students are likely to be taught not solely by a single academic, but through a mixture of real-life and virtual modes of learning, all working concurrently to provide a modern, interactive learning experience.


References

Alraimi KM, Zo H, Ciganek AP (2015) Understanding the MOOCs continuance: the role of openness and reputation. Comput Educ 80:28–38. https://doi.org/10.1016/j.compedu.2014.08.006
Australian Government (2016) National Strategy for International Education 2025 (AIE2025). http://nsie.education.gov.au/
Bicen H (2017) Determining the effect of using social media as a MOOC tool. Proc Comput Sci 120:172–176. https://doi.org/10.1016/j.procs.2017.11.225
Birt J, Stromberga Z, Cowling M, Moro C (2018) Mobile mixed reality for experiential learning and simulation in medical and health sciences education. Information 9:31
Cash CB, Letargo J, Graether SP, Jacobs SR (2017) An analysis of the perceptions and resources of large university classes. CBE Life Sci Educ 16:ar33. https://doi.org/10.1187/cbe.16-01-0004
Clark RC, Mayer RE (2016) E-learning and the science of instruction: proven guidelines for consumers and designers of multimedia learning. Wiley, San Francisco
DeBoer J, Ho AD, Stump GS, Breslow L (2014) Changing "course": reconceptualizing educational variables for massive open online courses. Educ Res 43:74–84. https://doi.org/10.3102/0013189x14523038
Estai M, Bunt S (2016) Best teaching practices in anatomy education: a critical review. Ann Anat 208:151–157. https://doi.org/10.1016/j.aanat.2016.02.010
Garcia J, Yang Z, Mongrain R, Leask RL, Lachapelle K (2018) 3D printing materials and their use in medical education: a review of current technology and trends for the future. BMJ Simul Technol Enhanc Learn 4:27–40. https://doi.org/10.1136/bmjstel-2017-000234
Goh PS (2016) eLearning or technology enhanced learning in medical education—hope, not hype. Med Teach 38:957–958. https://doi.org/10.3109/0142159X.2016.1147538
Gore H (2014) Massive Open Online Courses (MOOCs) and their impact on academic library services: exploring the issues and challenges. New Rev Acad Librariansh 20:4–28. https://doi.org/10.1080/13614533.2013.851609
Greene JA, Oswald CA, Pomerantz J (2015) Predictors of retention and achievement in a massive open online course. Am Educ Res J 52:925–955. https://doi.org/10.3102/0002831215584621
Gregory S (2012) Learning in a virtual world: student perceptions and outcomes. In: Moyle KW, Wijngaards G (eds) Student reactions to learning with technologies: perceptions and outcomes, vol 1. IGI Global, Hershey, pp 91–116
Gregory S (2013) Comparison of students learning in a virtual world. In: Jerry P, Tavares-Jones N, Gregory S (eds) The hype cycle upswing: the resurgence of virtual worlds. Inter-Disciplinary Press, Oxford, pp 123–134
Gregory S, Tynan B (2009) Introducing Jass Easterman: my Second Life learning space. In: Atkinson R, McBeath A (eds) Same places, different spaces. Proceedings ascilite Auckland. The University of Auckland, Auckland University of Technology, and ascilite, Auckland, pp 377–386
Hew KF, Qiao C, Tang Y (2018) Understanding student engagement in large-scale open online courses: a machine learning facilitated analysis of student's reflections in 18 highly rated MOOCs. Int Rev Res Open Distrib Learn 19. https://doi.org/10.19173/irrodl.v19i3.3596
Hone KS, El Said GR (2016) Exploring the factors affecting MOOC retention: a survey study. Comput Educ 98:157–168. https://doi.org/10.1016/j.compedu.2016.03.016
Littlejohn A, Hood N, Milligan C, Mustain P (2016) Learning in MOOCs: motivations and self-regulated learning in MOOCs. Internet High Educ 29:40–48. https://doi.org/10.1016/j.iheduc.2015.12.003
Lowenthal P, Snelson C, Perkins R (2018) Teaching Massive, Open, Online, Courses (MOOCs): tales from the front line. Int Rev Res Open Distrib Learn 19. https://doi.org/10.19173/irrodl.v19i3.3505
Maroto-Alfar S, Durán-Gutiérrez Y (2016) Responsive web design: experience at the National Distance University of Costa Rica. In: Dyson LE, Ng W, Fergusson J (eds) Mobile learning futures – sustaining quality research and practice in mobile learning: 15th World conference on mobile and contextual learning, mLearn. The University of Technology, Sydney, Sydney, pp 183–190
Monks J, Schmidt Robert M (2011) The impact of class size on outcomes in higher education. 11. https://doi.org/10.2202/1935-1682.2803
Moro C, McLean M (2017) Supporting students' transition to university and problem-based learning. Med Sci Educ 27:353–361. https://doi.org/10.1007/s40670-017-0384-6
Moro C, Stromberga Z, Raikos A, Stirling A (2017a) The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anat Sci Educ 10:549–559. https://doi.org/10.1002/ase.1696
Moro C, Štromberga Z, Stirling A (2017b) Virtualisation devices for student learning: comparison between desktop-based (Oculus Rift) and mobile-based (Gear VR) virtual reality in medical and health science education. 33. https://doi.org/10.14742/ajet.3840
Murad MH, Coto-Yglesias F, Varkey P, Prokop LJ, Murad AL (2010) The effectiveness of self-directed learning in health professions education: a systematic review. Med Educ 44:1057–1068. https://doi.org/10.1111/j.1365-2923.2010.03750.x
O'Connor K (2014) MOOCs, institutional policy and change dynamics in higher education. High Educ 68:623–635. https://doi.org/10.1007/s10734-014-9735-z
Reich J (2014) MOOC completion and retention in the context of student intent
Shah D (2018) By the numbers: MOOCS in 2017. https://www.class-central.com/report/mooc-stats-2017/. Accessed 25 Sept 2018

Anatomy Visualizations Using Stereopsis: Current Methodologies in Developing Stereoscopic Virtual Models in Anatomical Education

Dongmei Cui, Jian Chen, Edgar Meyer, and Gongchao Yang

Abstract

Technology for developing three-dimensional (3D) virtual models in anatomical sciences education has seen great improvement in recent years. The various data used for creating stereoscopic virtual models have also been constantly improving. This paper focuses specifically on the methodologies of creating stereoscopic virtual models and on the techniques and materials used in developing them, drawing on both our previous studies and other published literature. The presentation and visualization of stereoscopic models are highlighted, and the benefits and limitations of stereoscopic models are discussed. The practice of making 3D measurements of the lengths, angles, and volumes of models can potentially be used to help predict typical measurement parameters of anatomical structures and for the placement of surgical instruments. Once stereoscopic virtual models have been constructed, their visualization and presentation can be implemented in anatomy education and clinical surgical training.

Keywords

Stereoscopic models · 3D models · Imaging · Measurement · Anatomy · Anatomical education

D. Cui (*) · E. Meyer: Department of Neurobiology and Anatomical Sciences, Division of Clinical Anatomy, University of Mississippi Medical Center, Jackson, MS, USA. e-mail: [email protected]
J. Chen: Department of Anesthesiology, Division of Pain Management, University of Mississippi Medical Center, Jackson, MS, USA
G. Yang: Department of Neurobiology and Anatomical Sciences, Division of Clinical Anatomy, and Academic Information Services, University of Mississippi Medical Center, Jackson, MS, USA

1 Introduction

1.1 Literature Review

Digital technology has revolutionized a number of industries, including gaming, entertainment, marketing, business, and education. In anatomical sciences education specifically, digital technology has found its usefulness in the form of computerized three-dimensional (3D) models. The 3D anatomical models and simulation programs that currently exist utilize a variety of display formats, including monoscopic, stereoscopic, autostereoscopic, and mixed reality visualizations (Hackett and Proctor 2016). These mixed reality visualizations include a spectrum ranging from virtual reality to augmented reality (Milgram and Kishino 1994; Milgram et al. 1995). This spectrum has experienced modifications with advances in technology, as the visualizations are now more specific and complex with considerations of the operator, operating system, and operating environment (Billinghurst et al. 2015; Rouse et al. 2015). Virtual reality (VR) involves the complete immersion of the viewer in a digital environment through special headsets, while augmented reality (AR) involves the projection of digital images into real space whereby the viewer can ambulate freely around the images. One very recent study exploring the development of an AR anatomical model of the canine head for veterinary education discusses the advantages and disadvantages of VR and AR (Christ et al. 2018). Digital tools that utilize virtual reality with anatomical models include programs like Oculus, and programs that utilize augmented reality include HoloLens and Visible Body. Monoscopic displays incorporate virtual anatomical models that are rotated in three dimensions but presented on a two-dimensional screen. Programs that use these monoscopic displays include Essential Anatomy, zSpace, and Anatomage, among others. Stereoscopic displays include virtual anatomical models that appear to "pop out" from the viewing screen with the aid of special eyewear. In contrast, autostereoscopic displays allow for the same presentation of models, but without the need for special eyewear. Amira® is one of a number of programs that allow anatomical models to be presented stereoscopically. Several studies using Amira® software feature the creation of stereoscopic models, such as structures of the head and neck (Nguyen and Wilson 2009; Brewer et al. 2012; Cui et al. 2016), the paranasal sinuses and cervical vertebrae (Chen et al. 2017), the larynx (Hu et al. 2009), and the female pelvis (Sergovich et al. 2010). This paper will focus on the various techniques of constructing stereoscopic anatomical models using Amira® software, version 5.6 (FEI Corp., Hillsboro, OR).

1.2 Purpose of Using Stereoscopic Virtual Models

The aforementioned "popping out" effect characteristic of stereoscopic displays is caused by the perception of two different, yet corresponding, views as one image (Heron and Lages 2012; Gaida et al. 2014). In reality, these two views are received by the photoreceptors within the left and right retinae, respectively, and the differences between the views create binocular disparity, which gives rise to the depth perception referred to as stereopsis (Julesz 1960; Heron and Lages 2012). Thus, humans, who are equipped with two eyes, are afforded not only binocular vision but also a stereoscopic perspective of their surroundings. The study and utilization of stereoscopic visualizations have become exceedingly popular (McIntire et al. 2014), even in the field of anatomy, owing to interest in the purpose of stereopsis and its applications to virtual anatomical models. Although the essentiality of stereopsis has been disputed (Fielder and Moseley 1996), stereopsis provides several utilities. These advantages include the ability to measure depth directly (Bishop 1996; Westheimer 1994), to process and understand complex visual objects more quickly (Wickens et al. 1994), and to perform kinesthetic tasks guided by visual observations (Servos et al. 1992; Fielder and Moseley 1996; Melmoth et al. 2009; O'Connor et al. 2010; Bloch et al. 2015). All three of these advantages are extremely relevant for the visualization of virtual 3D models, especially those depicting regions of high complexity. The lattermost advantage is also especially relevant for virtual 3D models that allow viewers to engage in haptic interactions, either via a touchscreen or a control device (e.g., mouse, joystick, etc.).

1.3 Significance

In this fast-paced world of ever-evolving virtual technology, the looming question is whether virtual anatomy is important in anatomical education. According to survey responses from course directors in the United States, the number of total course hours in medical gross anatomy decreased by more than half between 1955 and 2009 (Drake et al. 2009), though there were relatively insignificant changes between 2009 and 2013 (Drake et al. 2014). These reduced hours include those spent by medical students in cadaveric labs, where many gain insights into the 3D relationships of anatomical structures. Virtual models could potentially help compensate for this decrease in hours with learning tools that still allow students to understand 3D relationships of structures in their own learning time. The convenience of such a resource is important for student-directed learning. Such convenience is especially critical for medical students, who still have to learn a large amount of information (Yeh and Park 2015; Lujan and DiCarlo 2006; Tarek 1999; Onion and Slade 1995) in a short amount of time (Leveritt et al. 2016; Ahmed et al. 2010; Terrell 2006; McKeown et al. 2003; Miller et al. 2002), often outside the classroom and before class meetings.

2 Stereopsis and 3D Models: A Description

Stereopsis is a form of 3D viewing; the word combines stereo, from the Greek for "solid," and opsis, from the Greek for "sight." Stereopsis occurs naturally when normal human eyes view a three-dimensional subject: it results from binocular vision, in which two slightly different images are projected onto the viewer's retinae. In most cases, however, when humans view projected images on a screen, they see them as flat two-dimensional images, a visualization format known as monoscopic viewing. Such a view is quite different from how humans see real 3D subjects, such as a coffee cup. Stereoscopic 3D displays allow viewers to see projected images in a more authentic 3D fashion. The stereoscopic views of anatomical models discussed in this paper were created via the addition of linearly polarized filters in front of projectors, and viewers wore 3D glasses with polarized lenses that matched the polarization axes of the filters. Thus, when images passed through these lenses, stereoscopic depth was created, with two slightly different images projected to the retinae of the viewer's eyes. Viewers therefore saw, in three dimensions, images that moved and floated in front of their eyes when the models were rotated. The 3D models were created from stacks of sliced images, similar to the construction of a cylindrical block from a stack of donuts, so the models have an inner surface and an outer surface as well as the dimensions of length, width, and height. The created 3D models can be rotated in three dimensions and magnified or reduced in size on the computer screen with zoom-in and zoom-out options in a monoscopic display format, but the models are not automatically visualized stereoscopically. Viewing the 3D models in a stereoscopic display format requires a stereoscopic projection system and glasses with special lenses for the viewers; in this format, viewers can see models and images "popping out" of the screen and feel as though they were able to hold and touch the 3D models.

3 Methods for Constructing Stereoscopic Models

3.1 Volume Rendering

The basic technique for constructing a stereoscopic model is segmentation, which refers to the process of delineating an anatomical structure from radiographic images (CT or MR images, etc.) containing millions of pixels of similar intensity. The process of segmentation relies on signal intensity transitions and knowledge-based anatomical areas within intensity fields that are relatively homogeneous. The most popular method for segmentation is volume rendering, in which the selection criterion is based on the relatively similar intensity of a structure. The intensity window is specified as a range of thresholds, which allows the areas of structures of the same intensity to be selected in all slices (Fig. 1a). An example slice of CTA data shows similarly selected structures, indicated here in purple (Fig. 1a). Each tissue type has its unique density, and varying the intensity threshold range allows various types of tissue to be selected for segmentation. For example, vascular structures, bone, and soft tissue have different tissue densities; therefore, with the volume rendering method, a variety of 3D model structures have been created (Fig. 2, from Cui et al. 2016 ASE).
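To make the intensity-window idea concrete, the following is a minimal sketch in Python with NumPy (our illustration only, not Amira's implementation; the synthetic volume and the bone window values are assumptions for demonstration):

```python
# Minimal sketch of intensity-window (threshold) selection, the principle
# behind the volume rendering segmentation described above. Not Amira's
# code: the synthetic volume and the bone window are assumed values.
import numpy as np

def threshold_segment(volume, lower, upper):
    """Boolean mask of voxels whose intensity lies within [lower, upper]."""
    return (volume >= lower) & (volume <= upper)

# Synthetic CT-like volume of Hounsfield-style intensities
rng = np.random.default_rng(0)
ct_volume = rng.integers(-1000, 2000, size=(100, 256, 256)).astype(np.int16)

# Select voxels in a window roughly matching dense bone (assumed window)
bone_mask = threshold_segment(ct_volume, 300, 1900)
print(f"Selected {bone_mask.sum()} of {bone_mask.size} voxels")
```

Because every voxel inside the window is selected, tissues of similar density anywhere in the volume are picked up too, which is exactly the limitation that the surface rendering and semi-auto combined methods below address.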

Fig. 1 Examples of three different rendering methods used for selecting structures in the radiographic data during the segmentation process. All structures having similar intensities were selected using volume rendering segmentation (a); only arteries were selected using the surface rendering method (b); arteries and some other structures were selected using the semi-auto combined rendering method (c)

Fig. 2 Vascular structures with solid, transparent bone and soft tissue. The volume rendering models of the vascular structures with solid bone (a), vascular structures with transparent bone (b), and vascular structures with transparent bone, soft tissues, and skin (c). [Photo credit: reproduced with permission from Anat Sci Educ 9:179–185 (2016)]


3.2 Surface Rendering

The surface rendering method is a manual segmentation technique in which the selection of a structure is based on anatomical identification by the user. Segmentation with the surface rendering method is performed via a preprocessing step of determining the surface area from the selected images in each slice (Fig. 1b). In surface rendering, the premises are that the given image contains data pertaining to certain tangible structures in the human body and that each structure can be visualized by its surface as estimated from the image (Udupa and Goncalves 1993; Martin et al. 2013). This technique uses brush, lasso, or blow tools to manually select the desired structure in each image slice. The total number of slices used for model creation depends on the size and length of the structure. Once the structure is fully segmented, the software assembles the individual sections into a 3D structure through surface generation (Cui et al. 2016). The advantage of the surface rendering method is that each structure can be separated into an individual layer through the creation of a unique label field for that structure, so finished structures in the model can be assembled or disassembled flexibly during visualization. However, surface rendering is time consuming, because completing a single structure takes many hours of work (Fig. 3b, from Cui et al. 2016 ASE). For this reason, most model reconstructions, especially models used in the clinic, are accomplished through the volume rendering method. Nevertheless, the surface rendering method creates precise structures with the capability of excluding unrelated structures during segmentation.

Fig. 3 Example of two models of blood vessels in the head and neck. The model on the left (a) was created using the volume rendering method; the model on the right (b) was created using the surface rendering technique for segmentation. Of note, vasculature in panel a is hidden by the voxels occupied by equally dense tissue which, in this case, is bone marrow. [Photo credit: reproduced with permission from Anat Sci Educ 9:179–185 (2016)]
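As a hedged illustration of the surface-generation step just described, the sketch below uses scikit-image's marching cubes as a stand-in for Amira's surface engine (an assumed substitute for demonstration; the spherical label field replaces a structure traced slice by slice with the brush or lasso tools):

```python
# Sketch of surface generation from a label field. scikit-image's marching
# cubes stands in for Amira's surface engine (an assumed substitute); the
# sphere replaces a structure traced slice by slice with brush/lasso tools.
import numpy as np
from skimage import measure

z, y, x = np.ogrid[-32:32, -32:32, -32:32]
label_field = (x**2 + y**2 + z**2 < 24**2).astype(np.uint8)

# Convert the stacked 2D selections into a triangle mesh
verts, faces, normals, values = measure.marching_cubes(label_field, level=0.5)
print(f"Surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```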


3.3 Semi-auto Combined Rendering

Volume and surface rendering techniques are commonly used in 3D reconstruction with Amira®. Both of these techniques are employed in an additional method known as semi-auto combined rendering. The volume rendering method is used to create structures with high-contrast attenuation compared to the background, e.g. bone vs. soft tissue in non-contrast CT images. The volume rendering technique provides a rapid overview of the anatomy, but it is not able to discriminate structures that have similar densities in the slices. The surface rendering technique, on the other hand, is based on structure identification, and it requires detailed anatomical knowledge to make the model. The final product created by surface rendering is selected manually, slice by slice, a process which takes much more time than the volume rendering method. In order to create high-quality models efficiently, a technique of semi-auto combined rendering was developed in previous studies (Chen et al. 2017). The basic idea of semi-auto combined rendering is to use volume rendering to quickly construct the object and then de-select unwanted structures using surface rendering. With respect to vertebral reconstructions, a mask window was initially set up using attenuation thresholds (windowing), and high-density bony structures such as the skull, vertebrae, sternum, and clavicles were selected for volume rendering. The magic wand was used to select the area of interest (e.g., the atlas [C1]). In order to separate adjacent anatomical structures with similar density (masking window threshold), the lasso tool was used to de-select unwanted areas. Final rendering with this semi-auto combined method generated high-resolution anatomical structures that were properly segmented from one another. A key element of this rendering was the selection of an ideal threshold to maximally pick up the masking area for the area of interest. Semi-auto combined rendering can increase the speed of the creation process and improve the quality of the models (Fig. 4).
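A hedged sketch of this combined idea follows, with scikit-image's flood fill standing in for the magic wand and a hand-drawn mask for the lasso (an analogy for illustration, not the authors' Amira workflow; the synthetic slice, intensities, and coordinates are all invented):

```python
# Illustrative analogy for semi-auto combined rendering (not Amira code):
# a magic-wand-style flood fill from a seed, then lasso-style de-selection
# of an unwanted connected piece of similar density.
import numpy as np
from skimage.segmentation import flood

# Synthetic slice: two touching high-density regions standing in for C1 and
# an adjacent C2 fragment of similar attenuation (all values invented).
ct_slice = np.zeros((128, 128), dtype=np.int16)
ct_slice[30:60, 30:90] = 1200   # "C1"
ct_slice[60:90, 50:90] = 1150   # touching "C2" fragment

# Magic-wand analogue: grow from a seed inside "C1" within a tolerance;
# the similar-density fragment is captured as well.
wand = flood(ct_slice, (45, 45), tolerance=100)

# Lasso analogue: de-select the unwanted piece with a hand-drawn mask
unwanted = np.zeros_like(wand)
unwanted[60:90, :] = True
selection = wand & ~unwanted
print(selection.sum(), "pixels kept for the C1 label field")
```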

3.4 Measurement

Several measurements can be collected with Amira®, including distance, angle, and volume. A distance can be measured as either a 2D or a 3D length. If a distance is measured as a 2D length, it does not change as the 3D model is manipulated; the 2D-length measurement therefore does not represent the true distance if the 3D object is enlarged or reduced, and it is only correct if the 3D object is moved within a 2D plane, such as from side to side. In order to measure the true spatial distance over an object, e.g. the distance between the two pupils, the 3D-length measurement has to be used. By selecting the origin and end point of the intended object, a straight line is drawn and assigned an arbitrary-unit value. The true distance is then calculated by comparison with the measured length of a known distance on the CT or MRI DICOM scale. Taking the distance between the two pupils as an example: if the 3D-length value is 20, the 3D DICOM-scale value is 10, and the true distance of the DICOM scale is 1 cm, then the true distance between the two pupils is (20/10) × 1 cm = 2 cm. The benefit of using the 3D length is that when a 3D object is manipulated on the screen, the 3D length scales consistently with the object's displayed size and viewing angle. 3D-angle measurement is straightforward: an origin point, a mid-point, and an end point are selected to measure the angle, e.g. the angle from the two pupils to the tip of the nose, where the origin and end points are the pupils and the mid-point is the tip of the nose. Amira® also provides volume measurement, e.g. of the lateral cerebral ventricle. By selecting the surface of an object (Neck-head-arteries) and attaching the command "Surface Area Volume" to it, the volume of the object is shown in a spreadsheet automatically (see Fig. 5). Alternatively, "Material Statistics" can be attached to the label field, and the results will be shown in the spreadsheet. Different measurements serve different purposes: the 2D length is the scale recommended for measuring flat images; the 3D length is used for measuring dynamic 3D objects; and volume measurement can be used for determining the hollow space of an object.
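The scale arithmetic and the angle computation above can be written out explicitly. The following minimal sketch is our own illustration, with the numbers taken from the interpupillary example, hypothetical point coordinates for the angle, and a hypothetical voxel count for the volume:

```python
# Worked sketch of the measurement arithmetic described above (numbers
# from the interpupillary example; point coordinates are hypothetical).
import numpy as np

def true_distance_cm(length_3d, scale_3d, scale_true_cm):
    """True distance = (measured 3D length / DICOM-scale length) x scale size."""
    return length_3d / scale_3d * scale_true_cm

print(true_distance_cm(20, 10, 1.0))  # -> 2.0 cm between the pupils

def angle_deg(origin, mid, end):
    """3D angle at `mid`, e.g. at the nose tip between the two pupils."""
    v1 = np.asarray(origin, float) - np.asarray(mid, float)
    v2 = np.asarray(end, float) - np.asarray(mid, float)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(angle_deg((-3, 0, 0), (0, -2, 1), (3, 0, 0)))  # hypothetical points

# Volume from a segmented label field: voxel count times voxel size
voxel_mm3 = 0.35 * 0.35 * 0.75    # voxel dimensions quoted in Sect. 4.1
print(12_500 * voxel_mm3, "mm^3")  # 12,500 is a hypothetical voxel count
```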


Fig. 4 Volume rendering of the skull (a); selection of the C2 vertebra with the magic wand (b); attempted selection of part of the C1 vertebra (blue arrow) with the magic wand, where an undesired piece of C1 (red arrow) is also selected (c); de-selection with the lasso tool, leaving only the desired part of C1 selected (d)

4 Data Used for Creating Stereoscopic Models

4.1 Computed Tomographic (CT) Images

Non-contrast CT images are discussed in this section. Most 3D virtual models in anatomy education are created using CT scans and MRI from the Visible Human Dataset (Spitzer et al. 1996; Ackerman 1998; Tam 2010; Yeung et al. 2011) or from cadaveric material (Nguyen and Wilson 2009). The example CT images used in Fig. 6a were acquired by the Department of Radiology at the University of Mississippi Medical Center with a Siemens SOMATOM Definition CT scanner (Siemens, Erlangen, Germany) using routine high-resolution imaging techniques, resulting in voxel dimensions of 0.35 × 0.35 mm in the axial plane and 0.75 mm in the craniocaudal dimension. Raw data were saved as de-identified Digital Imaging and Communications in Medicine (DICOM) format files (Fig. 6a). DICOM files can be uploaded directly to Amira®, combined in one digital folder in the pool window, for further rendering. One advantage of CT images is that, as X-ray attenuation-based scans, they directly reflect tissue density: organs with higher density appear brighter than organs with lower density. Bone, for example, can be easily selected and reconstructed by volume rendering, which is especially useful for orthopedic and pain-procedure models. However, organs with similar densities (e.g., nerves and vessels) are difficult to differentiate by segmentation; most often, the differentiation requires an anatomist to recognize and separate the objects by manual segmentation in the label window. CT images provide good contrast between bone and soft tissue, and they are particularly good for the reconstruction of bone models. Unlike MRI, however, CT has poor definition of soft tissue.
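Outside Amira, the same kind of de-identified DICOM series can be assembled into a voxel volume in a few lines. This hedged sketch uses the pydicom library (our choice for illustration; the directory name is hypothetical):

```python
# Hedged sketch: stacking a de-identified DICOM series into a numpy volume
# with pydicom (illustrative; "ct_series/" is a hypothetical directory).
import numpy as np
import pydicom
from pathlib import Path

# Order slices by their z-position so the stack is anatomically ordered
files = sorted(
    Path("ct_series").glob("*.dcm"),
    key=lambda f: float(pydicom.dcmread(f).ImagePositionPatient[2]),
)
slices = [pydicom.dcmread(f) for f in files]
volume = np.stack([s.pixel_array for s in slices])

# Physical voxel size from the DICOM header (e.g. 0.35 x 0.35 x 0.75 mm)
dy, dx = (float(v) for v in slices[0].PixelSpacing)
dz = float(slices[0].SliceThickness)
print(volume.shape, (dz, dy, dx))
```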

4.2 Magnetic Resonance Images (MRI)

The non-contrast MRI images are discussed in this section. MRI is also commonly used for developing anatomical 3D stereoscopic models (Adams and Wilson 2011; Manson et al. 2015).


Fig. 5 Example of measurement of 2D distance (49.95), 3D distance (36.20 and 55.44), and 3D angles (81.2° and 126.1°). The measurement results of the "surface area volume" are shown in a spreadsheet (top left portion of the screen) and a histogram plot

The MRI images were acquired by the Department of Radiology with a Siemens 3 Tesla Skyra MRI scanner with voxel dimensions of 0.25 × 0.25 mm in the axial plane and 0.55 mm in the craniocaudal dimension, using a routine three-dimensional (3D) Time-Of-Flight (TOF) acquisition protocol. Raw data were saved as de-identified Digital Imaging and Communications in Medicine (DICOM) format files. DICOM files can be uploaded directly to Amira®, combined in one digital folder in the pool window, for further rendering (Fig. 6c). MRI images are based on radiofrequency signals generated by hydrogen atoms under an external magnetic field; hydrogen atoms are abundant in water and fat. Different tissues have independent relaxation processes after excitation; therefore, the images used for reconstruction can be T1-weighted or T2-weighted. The advantage of MRI images over CT images is that the user can differentiate structures that have similar densities on CT but different intensities on MRI, e.g. the different nuclei of the basal ganglia. Also, with digital software generating specific MRI sequences, 3D reconstruction can be based on diffusion MRI, gradient-echo MRI, etc., for different clinical or anatomical purposes. Compared to CT images, MRI images are inferior for reconstructing models of lungs and bone in either live or cadaveric specimens. MRI data have greater soft tissue contrast than CT images (de Crespigny et al. 2008), and they can be used in the construction of 3D anatomy models of soft tissue-rich structures such as the brain and the cerebral ventricular system (Adams and Wilson 2011).

Fig. 6 Example of different data used for creating stereoscopic models. Computed Tomographic (CT) images (a); Computed Tomographic Angiography (CTA) images (b); Magnetic Resonance (MR) images (c); and Magnetic Resonance Angiography (MRA) images (d)

4.3 Computed Tomographic Angiography Images

Computed Tomographic Angiography (CTA) images, which are commonly used for assessing vessel anatomy, serve as a diagnostic tool in a variety of clinical settings, such as vessel evaluation for intracranial aneurysms and subsequent planning of therapeutic interventions (Tomandl et al. 2004). CTA image acquisition requires administration of intravenous iodinated contrast at a rapid rate, and the imaging is timed to optimize contrast in the arteries, thereby making the arteries easier to identify and evaluate (Cui et al. 2016). CTA data have been used for creating 3D models for preoperative surgical planning in the clinical setting (Rosson et al. 2011; Chae et al. 2014), and CTA-based 3D models have been used in anatomical education (Cui et al. 2016; Govsa et al. 2017). The example CTA data (Fig. 6b) were acquired with a Siemens SOMATOM Definition CT scanner (Siemens, Erlangen, Germany) with voxel dimensions of 0.35 × 0.35 mm in the axial plane and 0.75 mm in the craniocaudal dimension, using routine CT angiography (CTA) techniques, including intravenous iodinated contrast administration and bolus timing for an optimal arterial phase. Raw data were saved as DICOM (Digital Imaging and Communications in Medicine) format files. Three-dimensional virtual models were created using Amira® software, version 5.6 (Cui et al. 2016; Fig. 3, from Cui et al. 2016 ASE).

4.4 Magnetic Resonance Angiography Images

Magnetic Resonance Angiography (MRA) has been used in the clinical setting to detect vascular aneurysms and vessel pathology. It uses the magnetic field properties and spin characteristics of the hydrogen protons of blood (Kiruluta and González 2016). Compared to computed tomographic angiography (CTA), the contrast-enhanced MRA method has the advantages of a large field of view, the lack of a potentially nephrotoxic contrast agent, and the lack of ionizing radiation (Grist 2000). In other words, because MRA can be performed without iodinated or nephrotoxic contrast, it avoids potential nephrotoxic harm and reduces the risk of an allergic reaction related to iodinated contrast media. The example MRA data used in Fig. 6d were provided by the Department of Radiology, University of Mississippi Medical Center (UMMC). The MRA images were acquired with a Siemens 3 Tesla Skyra MRI scanner with voxel dimensions of 0.25 × 0.25 mm in the axial plane and 0.55 mm in the craniocaudal dimension, using a routine three-dimensional (3D) Time-Of-Flight (TOF) acquisition protocol. The raw data were saved as DICOM (Digital Imaging and Communications in Medicine) format files (Fig. 6d). Although 3D models from MRA data have been commonly used in clinical settings to help identify vessel abnormalities such as atherosclerotic disease, most models were automatically segmented using volume rendering due to time considerations.

4.5 Slice Photographic Images

The development of slice photographic images was first systematically described by Ruthensteiner (Ruthensteiner 2007). Ribbon-forming sectioning is much thinner than paraffin sectioning, and it is the preferred method for generating slices of histological tissues for microscope slides. The histological procedures for preparing permanent-mount microscope slides, including relaxation, fixation, embedding, trimming, transferring, mounting, and staining, are all described in the literature (Ruthensteiner 2007). The main part of this section describes the treatment of slice photographic images to create a 3D model. About 250–350 section images of 0.5–2 μm thickness are suitable for the stacking file in Amira®. Specimen sections are photographed digitally, preferably using a microscope camera; only a greyscale image is necessary for digital processing. The stacking file generated in the pool window in Amira® requires aligned images to restore the true 3D structure; it is therefore very important to have the section edge next to the specimen during embedding and trimming, in addition to ribbon-forming, in order to facilitate the later 3D reconstruction. All image files need to be named alphabetically so that the stacking file can be generated; TIF and BMP files are both recognized by Amira®. The slices can be aligned with the Amira® align-slices module. After the alignment, further processing of the images includes segmentation through the surface and/or volume rendering methods discussed in other sections. A 3D microscopic model of a renal corpuscle was developed from serial histological sections for histology learning using Amira® software by the Jeremy Roth group in recent years; the details of the procedure for developing the model, including the histological specimen processing, image pre-treatment, and rendering processes, are described in their paper published in Anatomical Sciences Education (Roth et al. 2015).
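As a small illustration of the stacking step (file names are hypothetical; the alignment itself is left to tools such as Amira's align-slices module):

```python
# Sketch of the stacking-file step: load alphabetically named greyscale
# section photographs into one volume. File names are hypothetical, and
# zero-padding keeps alphabetical order equal to anatomical order.
import numpy as np
import imageio.v3 as iio
from pathlib import Path

files = sorted(Path("sections").glob("*.tif"))  # e.g. s0001.tif ... s0300.tif
stack = np.stack([iio.imread(f) for f in files])
print(f"{stack.shape[0]} sections of {stack.shape[1]}x{stack.shape[2]} px")
```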

5 Stereoscopic Visualization

5.1 Stereoscopic Projection System

An example stereoscopic projection system used with the Amira® software includes a high-speed Dell Precision T7600 computer workstation (Dell Inc., Round Rock, TX) with an NVIDIA Quadro K6000 video card (NVIDIA Corp., Santa Clara, CA). The system also incorporates a pair of high-definition InFocus IN3128HD digital projectors (InFocus Corp., Wilsonville, OR). A linearly polarized filter (polarizer film; Edmund Optics, Barrington, NJ) was placed in the light path of each projector, and the axes of polarization of the two filters were offset by 90° (Fig. 7, from Cui et al. 2017 ASE). The projectors were supported and aligned using a Da-Lite commercial stacker (Da-Lite Corp., Warsaw, IN). Images were projected onto a silver screen (Da-Lite Model C, silver matte finish; Da-Lite Corp., Warsaw, IN) with a metallic surface that does not depolarize light waves as normal projection screens do. Individuals who viewed 3D models in stereoscopic presentation using the projection system were required to wear 3D glasses with polarized lenses (Cui et al. 2016).

Fig. 7 Three-dimensional projection and viewing. Two high-definition LCD projectors were controlled by the Dell T7600 workstation. One projected an image of the virtual model that would be seen by the right eye (R), and the other projected the slightly different image that would be seen by the left eye (L). A linear polarizing filter was placed in front of each projector; the axes of polarization of the two filters were separated by 90°. The observer wore special polarizing glasses in which the polarizing angle of the left lens matched that of the "left" projector, and the polarizing angle of the right lens matched the polarizing angle of the "right" projector. These glasses allow the observers to see the "right" image with their right eye and the "left" image with their left eye, resulting in the perception of three-dimensionality. [Photo credit: reproduced with permission from Anat Sci Educ 10:34–45 (2017)]

5.2 Visualization

The visualization of stereoscopic viewing is achieved via the two eyes seeing slightly different images, such that when the brain puts these two images together, a profound impression of three-dimensionality results (Poggio and Poggio 1984). In the example in Fig. 7, the dual projection system produced a "left-eye image" and a "right-eye image" of the computer-constructed model. The two images were projected via a pair of high-definition digital projectors and linearly polarized filters; the axes of polarization of the two filters were offset by 90°. Individuals wear 3D glasses with polarized lenses that match the polarization axes of the projector filters, so the observer's left eye sees only the projected left-eye image and the observer's right eye sees only the projected right-eye image (Fig. 7, from Cui et al. 2017 ASE). A comfortable viewing area with dim light provides a virtual reality environment for stereoscopic viewing. The 3D models can be rotated and zoomed in and out, and a flexible view of internal or external structures can be achieved via a translate tool.
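The geometry behind the two projected images can be sketched in a few lines: a pinhole projection computed twice, with the camera centre shifted horizontally by half the interpupillary distance for each eye (a minimal illustration with invented numbers, not the Amira rendering pipeline):

```python
# Minimal stereo-pair sketch: project 3D points twice through a pinhole
# camera whose centre is offset horizontally for the left and right eye.
# All numbers are illustrative; this is not the Amira rendering pipeline.
import numpy as np

def project(points, eye_x, focal=1.0, view_dist=10.0):
    """Perspective projection for a camera at (eye_x, 0, -view_dist)."""
    shifted = points - np.array([eye_x, 0.0, -view_dist])
    return focal * shifted[:, :2] / shifted[:, 2:3]   # x' = f X/Z, y' = f Y/Z

ipd = 0.065  # typical adult interpupillary distance in metres
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)

left_img = project(cube, -ipd / 2)
right_img = project(cube, +ipd / 2)
# Horizontal disparity between the two views encodes depth: nearer points
# (smaller Z) show larger disparity, which the brain fuses as stereopsis.
print(right_img[:, 0] - left_img[:, 0])
```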

5.3 Presentation and Exploration

The virtual 3D models can be explored via stereoscopic presentation. The orthoslice module allows radiographic slices to interact with a 3D model generated from the same radiographic data. Each radiographic slice can be superimposed on the 3D model, which accurately indicates the location of structures on the radiographic images and their 3D relationships with the model (Fig. 8, from Cui et al. 2016 ASE). Annotations can be added to label structures with names to facilitate students' learning. The presentation of the 3D models can take three formats: (1) images from the 3D models can be easily captured via snapshot for PowerPoint presentations, posters, and publications; (2) movie clips can be produced via MovieMaker within the software program for educational purposes; and (3) stereoscopic presentation, the most exciting of the three, requires a stereoscopic projection system and 3D glasses for the viewers to experience the 3D virtual reality. In this display format, the models appear to float in the center of the room, and viewers feel as though they are almost able to touch them. The models are virtually displayed in stereo mode with a flexible view and 360° of free rotation in all axes, and structures comprising the models can be easily reassembled and disassembled with or without annotation.

Fig. 8 A surface rendered model of arteries with a superimposed computed tomography angiography slice. Two different views are presented: a raw computed tomography angiography slice overlaid on the model of blood vessels (panel c) at the same level as a and b, with the external carotid artery indicated by blue arrows and the internal carotid artery indicated by yellow arrows. [Photo credit: reproduced with permission from Anat Sci Educ 9:179–185 (2016)]

5.4 Implementation

The anatomical structures of 3D stereoscopic models can be implemented for education and learning purposes for medical, dental, and allied health students, as well as undergraduate and graduate students, depending on the level and purpose of the learners. 3D stereoscopic anatomical models can be used to teach anatomy and to evaluate students' learning and spatial abilities (Luursema et al. 2006, 2008; Hilbelink 2009; Nguyen and Wilson 2009; Aziz et al. 2012; Brewer et al. 2012; Anderson et al. 2013; Cui et al. 2017; Luursema et al. 2017). Some studies reported a positive impact on students' performance, and some showed no significant improvements. 3D stereoscopic models have also been utilized for assessments of students' short-term and long-term retention using a pelvic model in an abstract publication (Meyer et al. 2017). Volume rendering stereoscopic models have recently been used as guides for surgical procedures and training (Rozen et al. 2008, 2009; Kawanishi et al. 2013; Bloch et al. 2015; Schoenthaler et al. 2016; Unger et al. 2016; Chen et al. 2017). The use of stereoscopic anatomical models for exploring the cervical vertebrae injection procedure is one example (Fig. 9, from Chen et al. 2017 ASE).

Fig. 9 (a) Cartoon of lateral fluoroscopic view of the C1 (atlas) and C2 (axis) joint injection, where the pink line indicates the injection needle; (b) lateral view, right side, the atlanto-occipital joint (black arrow) formed between the atlas (blue) and occipital bone (part of the skull, white), the atlanto-axial joint (white arrow) formed between the axis (green) and atlas (blue), the right vertebral artery (black arrowhead), and the cervical spinal cord (yellow); (c) lateral oblique view, right side, the atlanto-axial joint (white arrow), the cervical spinal cord (transparent yellow), and the second cervical nerve root (black arrowhead); (d) anterior view, with transparent skull, the atlanto-axial joint (white arrow), C1 (blue) and C2 (green), cervical spinal cord (yellow), the right vertebral artery (red); (e) C1 (blue) and C2 (green), the cervical spinal cord (transparent yellow and white arrowhead), the transverse foramina (two white arrows), and the groove of the posterior arch (black asterisk); (f) inferior view of skull, anterior to posterior orientation, C1 (transparent blue), the right vertebral artery (red and black arrowhead). [Photo credit: reproduced with permission from Anat Sci Educ 10:598–606 (2017)]


Stereoscopic models can be further improved through the development of validated models based on experts' opinions and evaluations (Meyer et al. 2018).

6 Discussion

6.1 The Benefits

Stereoscopic anatomical models can be developed by researchers and educators, and the models can be used for anatomical education and clinical training, as well as for graduate student research projects. The 3D images, videos, and stereoscopic presentations can be used in lecture presentations or virtual laboratory sessions. The capability of radiographic images to interact with 3D models in stereoscopic viewing provides students with a new opportunity to visualize the spatial relationships among the radiographic images (CT, MRI, CTA, and MRA) and 3D structures in different orientations. Annotations of 3D structures can rotate with the model during the presentation, which helps students associate the names of the structures represented in the models with the radiographic images while the model is being rotated. The length, angle, and volume measurements of the 3D model provide a tool for quantitative study of the spatial orientation of anatomical structures represented in models and their relationships with the associated radiographic data. The measurements can be useful for predicting parameters of anatomical structures in the planning of surgical procedures. Studies have shown evidence that stereoscopic models with 3D virtual projection systems can enhance students' learning of anatomical structures, and that students favor a 3D teaching method (Kockro et al. 2015; Cui et al. 2017).

6.2 The Limitations

One limitation of our technique is that the Amira® software requires a very high-speed computer with large storage space for running very large files. The software also requires an expensive, high-end video card to provide good image performance. Furthermore, the software requires an expensive, annually renewed license covering maintenance, updates, and technical assistance. In addition, the multiple projectors and associated equipment require a relatively high level of technical expertise to set up and align properly. These requirements make the 3D system generally unsuitable for personal use; it is better suited to an institutional environment. The two types of 3D image development also have their respective limitations. The volume rendering segmentation technique can produce a model relatively quickly, but it is limited in its ability to distinguish between nearby anatomical structures, and it sometimes includes unwanted material. In addition, it produces large files, with the result that the models cannot be rotated as rapidly and smoothly as models produced by the surface rendering technique. The surface rendering technique, in contrast, requires that an operator manually trace the desired structure in each slice, thus producing a model of relatively small file size that includes only exactly what the developer wants. This model can be rotated quickly and smoothly during presentation, but the technique's drawback is that it requires an operator who is familiar with the relevant anatomy in CT and MR slices, and it takes much more time to develop. A combined semi-automatic rendering technique can produce models of relatively small file size that rotate quickly and smoothly during presentation. However, even this technique still requires a level of anatomical knowledge and skill on the part of the developer, and it also takes a considerable amount of time.

6.3 Future Directions

Stereoscopic virtual models may play an important role in helping students to understand the spatial relationships of human structures in anatomy education. Future directions include determining: (1) how to effectively develop higher-quality, elaborate, realistic stereoscopic models for virtual teaching and learning; (2) how to better implement and utilize stereoscopic models in anatomical education via different learning environments; and (3) how to evaluate students' learning in anatomy education, including students' short-term and long-term learning retention.

Acknowledgments The authors are grateful to Drs. Andrew D. Smith, Tracy C. Marchant, John T. McCarty, Anson L. Thaggard, and Jud Storrs (Dept. of Radiology, University of Mississippi Medical Center) for providing the de-identified radiographic images. Special thanks to Dr. James C. Lynch for his valuable comments and suggestions on this paper. The authors thank Drs. Michael N. Lehman, James C. Lynch, Timothy D. Wilson, and Allan Sinning for their great support. The authors also thank Mr. Jerome Allison for his technical support. The authors have no related conflicts of interest to disclose.

References

Ackerman MJ (1998) The visible human project: a resource for anatomical visualization. Stud Health Technol Inform 52:1030–1032. https://doi.org/10.1109/ITAB.1997.649392
Adams CM, Wilson TD (2011) Virtual cerebral ventricular system: an MR-based three-dimensional computer model. Anat Sci Educ 4:340–347. https://doi.org/10.1002/ase.256
Ahmed K, Rowland S, Patel V et al (2010) Is the structure of anatomy curriculum adequate for safe medical practice? Surgeon 8:318–324. https://doi.org/10.1016/j.surge.2010.06.005
Anderson P, Chapman P, Ma M et al (2013) Real-time medical visualization of human head and neck anatomy and its applications for dental training and simulation. Curr Med Imaging Rev 9:298–308. https://doi.org/10.2174/15734056113096660004
Aziz MA, McKenzie JC, Wilson JS et al (2012) The human cadaver in the age of biomedical informatics. Anat Rec 269:20–32. https://doi.org/10.1002/AR.10046
Billinghurst M, Clark A, Lee G (2015) A survey of augmented reality. Found Trends Hum Comput Interact 8:73–272. https://doi.org/10.1561/1100000049
Bishop PO (1996) Stereoscopic depth perception and vertical disparity: neural mechanisms. Vis Res 36(13):1969–1972. https://doi.org/10.1016/0042-6989(95)00243-X
Bloch E, Uddin N, Gannon L et al (2015) The effects of absence of stereopsis on performance of a simulated surgical task in two-dimensional and three-dimensional viewing conditions. Br J Ophthalmol 99:240–245. https://doi.org/10.1136/bjophthalmol-2013-304517
Brewer DN, Wilson TD, Eagleson R et al (2012) Evaluation of neuroanatomical training using a 3D visual reality model. Stud Health Technol Inform 173:85–91. https://doi.org/10.3233/978-1-61499-022-2-85
Chae MP, Lin F, Spychal R et al (2014) 3D-printed haptic "reverse" models for preoperative planning in soft tissue reconstruction: a case report. Microsurgery 35:148–153. https://doi.org/10.1002/micr.22293
Chen J, Smith AD, Khan MA et al (2017) Visualization of stereoscopic anatomic models of the paranasal sinuses and cervical vertebrae from the surgical and procedural perspective. Anat Sci Educ 10:598–606. https://doi.org/10.1002/ase.1702
Christ R, Gulien J, Poyade M et al (2018) Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality. PLoS One 13:e0195866. https://doi.org/10.1371/journal.pone.0195866
Cui D, Lynch JC, Smith AD et al (2016) Stereoscopic vascular models of the head and neck: a computed tomography angiography visualization. Anat Sci Educ 9:179–185. https://doi.org/10.1002/ase.1537
Cui D, Wilson TD, Rockhold RW et al (2017) Evaluation of the effectiveness of 3D vascular stereoscopic models in anatomy instruction for first year medical students. Anat Sci Educ 10:34–45. https://doi.org/10.1002/ase.1626
de Crespigny A, Bou-Reslan H, Nishimura MC et al (2008) 3D micro-CT imaging of the postmortem brain. J Neurosci Methods 171:207–213. https://doi.org/10.1016/j.jneumeth.2008.03.006
Drake RL, McBride JM, Lachman N et al (2009) Medical education in the anatomical sciences: the winds of change continue to blow. Anat Sci Educ 2:253–259. https://doi.org/10.1002/ase.117
Drake RL, McBride JM, Pawlina W (2014) An update on the status of anatomical sciences education in United States medical schools. Anat Sci Educ 7:321–325. https://doi.org/10.1002/ase.1468
Fielder AR, Moseley MJ (1996) Does stereopsis matter in humans? Eye (Lond) 10(pt 2):233–238
Gaida D, Garipoli G, Bonanomi C et al (2014) Assessing stereo blindness and stereo acuity on digital displays. Displays 25:206–212. https://doi.org/10.1016/j.displa.2014.05.010
Govsa R, Ozer MA, Sirinturk S et al (2017) Creating vascular models by postprocessing computed tomography angiography images: a guide for anatomical education. Surg Radiol Anat 39:905–910. https://doi.org/10.1007/s00276-017-1822-2
Grist TM (2000) MRA of the abdominal aorta and lower extremities. J Magn Reson Imaging 11(1):32–43
Hackett M, Proctor M (2016) Three-dimensional display technologies for anatomical education: a literature review. J Sci Educ Technol 25:641–654. https://doi.org/10.1007/s10956-016-9619-3
Heron S, Lages M (2012) Screening and sampling in studies of binocular vision. Vis Res 62:228–234. https://doi.org/10.1016/j.visres.2012.04.012
Hilbelink AJ (2009) A measure of the effectiveness of incorporating 3D human anatomy into an online undergraduate laboratory. Br J Educ Technol 40:664–672. https://doi.org/10.1111/j.1467-8535.2008.00886.x
Hu A, Wilson T, Ladak H et al (2009) Three-dimensional educational computer model of the larynx: voicing a new direction. Arch Otolaryngol Head Neck Surg 135:677–681
Julesz B (1960) Binocular depth perception of computer-generated patterns. Bell Syst Tech J 39:1125–1162. https://doi.org/10.1002/j.1538-7305.1960.tb03954.x
Kawanishi Y, Fujimoto Y, Kumagai N et al (2013) Evaluation of two- and three-dimensional visualization for endoscopic endonasal surgery using a novel stereoendoscopic system in a novice: a comparison on a dry laboratory model. Acta Neurochir 155:1621–1627. https://doi.org/10.1007/s00701-013-1757-2
Kiruluta AJM, González RG (2016) Magnetic resonance angiography: physical principles and applications. Handb Clin Neurol 135:137–149. https://doi.org/10.1016/B978-0-444-53485-9.00007-6
Kockro RA, Amaxopoulou C, Killeen T et al (2015) Stereoscopic neuroanatomy lectures using a three-dimensional virtual reality environment. Ann Anat 201:91–98. https://doi.org/10.1016/j.aanat.2015.05.006
Leveritt S, McKnight G, Edwards K et al (2016) What anatomy is clinically useful and when should we be teaching it? Anat Sci Educ 9:468–475. https://doi.org/10.1002/ase.1596
Lujan HL, DiCarlo SE (2006) Too much teaching, not enough learning: what is the solution? Adv Physiol Educ 30:17–22. https://doi.org/10.1152/advan.00061.2005
Luursema J-M, Verwey WB, Kommers P et al (2006) Optimizing conditions for computer-assisted anatomical learning. Interact Comput 18:1123–1138. https://doi.org/10.1016/j.intcom.2006.01.005
Luursema JM, Verwey WB, Kommers P et al (2008) The role of stereopsis in virtual anatomical learning. Interact Comput 20:455–460. https://doi.org/10.1016/j.intcom.2008.04.003
Luursema JM, Vorstenbosch M, Kooloos J (2017) Stereopsis, visuospatial ability, and virtual reality in anatomy learning. Anat Res Int 17:1493135–1493137. https://doi.org/10.1155/2017/1493135
Manson A, Poyade M, Rea P (2015) A recommended workflow methodology in the creation of an educational and training application incorporating a digital reconstruction of the cerebral ventricular system and cerebrospinal fluid circulation to aid anatomical understanding. BMC Med Imaging 15:44. https://doi.org/10.1186/s12880-015-0088-6
Martin CM, Roach VA, Nguyen N, Rice CL, Wilson TD (2013) Comparison of 3D reconstructive technologies used for morphometric research and the translation of knowledge using a decision matrix. Anat Sci Educ 6:393–403. https://doi.org/10.1002/ase.1367
McIntire JP, Havig PR, Geiselman EE (2014) Stereoscopic 3D displays and human performance: a comprehensive review. Displays 35:18–26. https://doi.org/10.1016/j.displa.2013.10.004
McKeown PP, Heylings DJ, Stevenson M et al (2003) The impact of curricular change on medical students' knowledge of anatomy. Med Educ 37:954–961
Melmoth DR, Finlay AL, Morgan MJ et al (2009) Grasping deficits and adaptations in adults with stereo vision losses. Invest Ophthalmol Vis Sci 50:3711–3720. https://doi.org/10.1167/iovs.08-3229
Meyer ER, James AM, Cui D (2017) A pilot study examining the impact of two-dimensional computer images and three-dimensional stereoscopic images of the pelvic muscles and neurovasculature on short-term and long-term retention of anatomical information for first year medical students. FASEB J 31:580.8
Meyer ER, James AM, Cui D (2018) Hips don't lie: expert opinions guide the validation of a virtual 3D pelvis model for use in anatomy education and medical training. HAPS Educ 22:5–61. https://doi.org/10.21692/haps.2018.023
Milgram P, Kishino F (1994) A taxonomy of mixed reality visual displays. IECE Trans Inf Syst E77-D(11):1321–1329
Milgram P, Takemura H, Utsumi A et al (1995) Augmented reality: a class of displays on the reality-virtuality continuum. Telemanipulator Telepresence Technol:2351. https://doi.org/10.1117/12.197321
Miller SA, Perrotti W, Silverthorn DU et al (2002) From college to clinic: reasoning over memorization is key for understanding anatomy. Anat Rec 269:69–80. https://doi.org/10.1002/ar.10071
Nguyen N, Wilson TD (2009) A head in virtual anatomy: development of a dynamic head and neck model. Anat Sci Educ 2:294–301. https://doi.org/10.1002/ase.115
O'Connor AR, Birch EE, Anderson S et al (2010) The functional significance of stereopsis. Invest Ophth Vis Sci 51:2019–2023. https://doi.org/10.1167/iovs.09-4434
Onion C, Slade P (1995) Depth of information processing and memory for medical facts. Med Teach 17:307–314. https://doi.org/10.3109/01421599509008321
Poggio GF, Poggio T (1984) The analysis of stereopsis. Ann Rev Neurosci 7:52–412. https://doi.org/10.1146/annurev.ne.07.030184.002115
Rosson GD, Shridharani SM, Magarakis M et al (2011) Three-dimensional computed tomographic angiography to predict weight and volume of deep inferior epigastric artery perforator flap for breast reconstruction. Microsurgery 31:510–516. https://doi.org/10.1002/micr.20910
Roth JA, Wilson TD, Sandig M (2015) The development of a virtual 3D model of the renal corpuscle from serial histological sections for e-learning environments. Anat Sci Educ 8:574–583. https://doi.org/10.1002/ase.1529
Rouse R, Engberg M, JafariNaimi N et al (2015) MRX: an interdisciplinary framework for mixed reality experience design and criticism. Digit Creat 26:175–181. https://doi.org/10.1080/14626268.2015.1100123
Rozen WM, Ashton MW, Grinsell D et al (2008) Establishing the case for CT angiography in the preoperative imaging of abdominal wall perforators.

Anatomy Visualizations Using Stereopsis: Current Methodologies in Developing Stereoscopic Virtual… Microsurgery 28:306–313. https://doi.org/10.1002/ micr.20496 Rozen WM, Ashton MW, Whitaker IS et  al (2009) The financial implications of computed tomographic angiography in DIEP flap surgery: a cost analysis. Microsurgery 29:168–169. https://doi.org/10.1002/ micr.20594 Ruthensteiner B (2007) Soft part 3D visualization by serial sectioning and computer reconstruction. In: Geiger DL, Ruthensteiner B (eds) Micromolluscs: methodological challenges—exciting results methodological. https://doi.org/10.11646/zoosymposia.1.1.8 Schoenthaler M, Schnell D, Wilhelm K et  al (2016) Stereoscopic (3D) versus monoscopic (2D) laparoscopy: comparative study of performance using advanced HD optical systems in a surgical simulator model. World J  Urol 34:471–477. https://doi. org/10.1007/s00345-015-1660-y Sergovich A, Johnson M, Wilson TD (2010) Explorable three-dimensional digital model of the female pelvis, pelvic contents, and perineum for anatomical education. Anat Sci Educ 3:127–133. https://doi. org/10.1002/ase.135 Servos P, Goodale MA, Jakobson LS (1992) The role of binocular vision in prehension: a kinetic analysis. Vis Res 32:1513–1521. https://doi. org/10.1016/0042-6989(92)90207-Y Spitzer V, Ackerman MJ, Scherzinger AL et al (1996) The visible human male: A technical report. J  Am Med Inform Assoc 3:118–130 Tam MD (2010) Building virtual models by postprocessing radiology images: a guide for anatomy faculty. Anat Sci Educ 3:261–266. https://doi.org/10.1002/ ase.175

65

Tarek M (1999) The multidimensional learning model: a novel cognitive psychology-based model for computer assisted instruction in order to improve learning in medical students. Med Educ Online 4:4302. https:// doi.org/10.3402/meo.v4i.4302 Terrell M (2006) Anatomy of learning: instructional design principles for the anatomical sciences. Anat Rec 289B:252–260. https://doi.org/10.1002/ar.b.20116 Tomandl BF, Köstner NC, Schempershofe M et al (2004) CT angiography of intracranial aneurysms: a focus on postprocessing. Radiographics 24:637–655. https:// doi.org/10.1148/rg.243035126 Udupa JK, Goncalves RJ (1993) Imaging transforms for visualizing surfaces and volumes. J  Digit Imaging 6:213–236 Unger B, Tordon B, Pisa J et al (2016) Importance of stereoscopy in haptic training of novice temporal bone surgery. Stud Health Technol Inform 220:439–445. https://doi.org/10.3233/978-1-61499-625-5-439 Westheimer G (1994) The Ferrier lecture, 1992. Seeing depth with two eyes: stereopsis. Proc R Soc Lond B 257:205–214 Wickens CD, Merwin DH, Lin EL (1994) Implications of graphic enhancements for the visualization of scientific data: dimensional integrality, stereopsis, motion and mesh. Hum Factors 36:44–61. https://doi. org/10.1177/001872089403600103 Yeh DD, Park YS (2015) Improving learning efficiency of factual knowledge in medical education. J  Surg Educ 72:882–889. https://doi.org/10.1016/j. jsurg.2015.03.012 Yeung JC, Fung K, Wilson TD (2011) Development of a computer-assisted cranial nerve simulation from the visible human dataset. Anat Sci Educ 4:92–97. https:// doi.org/10.1002/ase.190

Statistical Shape Models: Understanding and Mastering Variation in Anatomy Felix Ambellan, Hans Lamecker, Christoph von Tycowicz, and Stefan Zachow

Abstract

In our chapter we describe how to reconstruct three-dimensional anatomy from medical image data and how to build Statistical 3D Shape Models from many such reconstructions, yielding a new kind of anatomy that allows not only quantitative analysis of anatomical variation but also visual exploration and educational visualization. Future digital anatomy atlases will show not only a static (average) anatomy but also its normal or pathological variation in three or even four dimensions, hence illustrating growth and/or disease progression. Statistical Shape Models (SSMs) are geometric models that describe a collection of semantically similar objects in a very compact way. SSMs represent the average shape of many three-dimensional objects as well as their variation in shape. The creation of SSMs requires a correspondence mapping, which can be achieved e.g. by parameterization with

F. Ambellan · C. von Tycowicz Zuse Institute Berlin, Berlin, Germany e-mail: [email protected]; [email protected] H. Lamecker · S. Zachow (*) Zuse Institute Berlin, Berlin, Germany 1000 Shapes GmbH, Berlin, Germany e-mail: [email protected]; [email protected]

a respective sampling. If a corresponding parameterization over all shapes can be established, variation between individual shape characteristics can be mathematically investigated. We will explain what Statistical Shape Models are and how they are constructed. Extensions of Statistical Shape Models will be motivated for articulated coupled structures. In addition to shape, the appearance of objects will also be integrated into the concept. Appearance is a visual feature independent of shape that depends on observers or imaging techniques. Typical appearances are for instance the color and intensity of a visual surface of an object under particular lighting conditions, or measurements of material properties with computed tomography (CT) or magnetic resonance imaging (MRI). A combination of (articulated) Statistical Shape Models with statistical models of appearance leads to articulated Statistical Shape and Appearance Models (a-SSAMs). After giving various examples of SSMs for human organs, skeletal structures, faces, and bodies, we will briefly describe clinical applications where such models have been successfully employed. Statistical Shape Models are the foundation for the analysis of anatomical cohort data, where characteristic shapes are correlated to demographic or epidemiologic data. SSMs consisting of several thousands of


objects offer, in combination with statistical methods or machine learning techniques, the possibility to identify characteristic clusters, thus being the foundation for advanced diagnostic disease scoring.

Keywords

Statistical shape analysis · Medical image segmentation · Data reconstruction · Therapy planning · Automated diagnosis support

1 Statistical Anatomy

What do anatomists or clinicians mean by saying a structure, i.e. an organ, is normal? What does 'normal' mean when the term is used in anatomical descriptions? Is it "typical", "common", "average", or any other attribution to the most frequently observed features of relevance? Less frequently or rarely observed features, on the contrary, are denoted as "abnormal", "unusual", or "atypical". Such a terminology resulting from observations of polymorphisms indicates that the term normality is based on statistical criteria. The Latin word 'normalis' means conforming to a rule or pattern, where 'norma' is used in descriptive anatomy to indicate the standard or normal appearance of a structure (Moore 1989). For example, 'Norma lateralis' is used when describing the skull to depict its typical lateral appearance. To recognize anatomical variations it is necessary to identify patterns in size, form, relative position or orientation, or even in appearance or function. A fluctuation of such patterns within a commonly experienced range is considered as normal (natural) variation. However, anatomical variation does not only occur across subjects but also within the same subjects during growth and aging or caused by pathological changes. Occurrences beyond certain limits up to extremes are classified as anomalies or malformations (Sañudo et al. 2003). Terms for dysmorphia often reflect the exceeding of such limits by prefixes such as 'hypo', 'hyper', 'micro', 'brachy' etc. (Jones et al. 2013). The concept of "healthy" and "diseased" also fits into the notion of normal. The term malformation or anomaly, for instance, may become applicable when the structural change of an organ has a negative (up to life-threatening) influence on its function. Hence, to establish a canon of normality with respect to an anatomical structure, one has to thoroughly investigate its range of morphological variation first to improve diagnosis and therapeutic performance.

Since an existing range of variation in anatomy is a priori unknown, its definition depends on the number of observations made. A range of variation can be narrow or wide, depending on the choice of samples that is considered for comparison as well as on the anatomical variability itself (cf. Fig. 1). A well-chosen and sufficiently large sample set with a Gaussian distribution of patterns can be regarded as covering a "normal" range of variation. Only if the sample set is representative can the results of the statistical evaluation be used to draw conclusions about the population as a whole and thus to make general statements. In product ergonomics and clothing, statistical analysis of body measurements is widespread. The entire field of anthropometry illustrates what the anatomy of the future could look like, if one is able to precisely measure anatomical structures in a similar way. In biology, very early attempts in this direction were made by D'Arcy Wentworth Thompson (1917), who tried to formalize growth and form in a mathematical way. A large number of anatomical structures is also the foundation for building so-called anatomical atlases (Bookstein 1986; Toga 1998). Such atlases are represented by an averaged anatomy that can be regarded as a common denominator for adding additional information that is collected from various samples. In biology the use of anatomical atlases is quite common, to integrate information that is derived from several specimens into a common reference system (Rybak et al. 2010).


Fig. 1  Various liver shapes (right), averaged liver (left) created from 120+ liver shapes. (Lamecker et al. 2002)

Fig. 2  Geometric reconstructions of anatomical structures, i.e. heads from CT data

The advance in three-dimensional (3D) imaging techniques has opened a new field of research for descriptive anatomy (Sañudo et al. 2003). From tomographic image data, anatomical structures can be reconstructed three-dimensionally with high geometric accuracy (Zachow et al. 2007) (cf. Fig. 2). The process of reconstructing anatomical 3D models from measurement data requires a so-called segmentation of the data, which is tedious, time-consuming, and labor-intensive. Automated methods allow a more efficient processing of large numbers of data sets (Kainmüller et al. 2007, 2009; Seim et al. 2010; Tack et al. 2018; Ambellan et al. 2019), whereby often more than one anatomical structure can be segmented within a single tomographic data set (Fig. 3, right). The amount of acquired tomographic image data is already extremely large and it is constantly

increasing. However, there is no central administration of the image data, and access for statistical analyses is not possible from an organisational or data protection point of view. Attempts are being made to counter this problem by means of so-called epidemiological longitudinal studies (Osteoarthritis Initiative [OAI], UK Biobank, Study of Health in Pomerania [SHIP], German National Cohort [GNC], etc.). The respective data of such studies will open up new possibilities for statistical anatomy. On this basis, large quantities of anatomical structures can be geometrically generated, which can then be statistically analyzed with regard to their variation in shape as well as to the correlation between shape and other attributes (age, weight, sex, smoking status, etc.). To derive an average shape from a sample and to determine shape variability, one needs a concept of correspondence as well as


Fig. 3  Model-based segmentation. Left: A liver from CT data (Kainmüller et al. 2007). Right: A knee joint from MRI data. (Seim et al. 2010)

Fig. 4  When the tip of the nose (left) is wrongly set into correspondence with a point on the cheek (right), the average of the two heads reveals an implausible correspondence (center)

a measure of distance between shapes. Mathematical details for shape analysis will be given in the following paragraph.

1.1 Statistical Shape Models

While humans possess an intuitive perception of shape and similarity thereof, these notions have to be formalized in order to be processed algorithmically. A first step in performing statistical analysis on shapes is therefore to convert the geometric information of an anatomy into a discrete representation thereof, e.g. a finite subset of its points or polygonal meshes describing an object’s boundary (cf. Fig. 2, center). Given two or more discrete shapes, one of the fundamental problems in shape analysis is to find a meaningful relation between semantic entities and thus the entire parametrization (see Fig.  4). Such correspondence can be hard to estimate as it not only requires an understanding of the structure at local

and global scales but also needs to take semantic information about anatomical entities or functionality into account. Due to this complexity, a plethora of methods following different approaches has been proposed over the last decades; see e.g. van Kaick et al. (2011) and the references therein. Most (semi-)automatic approaches actually phrase the correspondence problem as a registration between the involved shapes (see Fig. 5). For image-derived shapes, we can exploit rich local descriptors integrating color and texture (Grewe and Zachow 2016), whereas purely shape-based descriptors are generally less distinctive. In the latter case, correspondence estimation is frequently based on the matching of a (sparse) set of features that provide a notion of similarity, and/or the proximity of points after (potentially non-rigid) alignment. A common approach to compute a dense point-wise matching is to extend a sparse correspondence defined only for a small number of


Fig. 5  Matching of a facial surface S to the reference R: parametrizations ΦS and ΦR are computed, and photometric as well as geometric features are mapped to the plane. The dense correspondence mapping ΨΦS→ΦR accurately registers photographic and geometric features from S and R

Fig. 6  Example of a consistent surface decomposition of two pelvises, where each pair of patches is set into correspondence via a common parametrization. (Lamecker 2008)

homologous elements, i.e. elements with the same structure in terms of geometry, function, and appearance. In particular, extending sparse correspondences significantly reduces the computational complexity and allows one to incorporate expert knowledge (Lamecker et al. 2002, 2004) (cf. Fig. 6). A specialized form of correspondence involves a group of shapes simultaneously, such that the group information can serve as an additional constraint in the solution search (Davies et al. 2002). These population-based approaches employ group-wise optimization concerning the quality of the resulting statistical model (e.g. in terms of entropy) and thus enjoy widespread application in the shape analysis community. Once the discretized shapes have been put into correspondence, they can be interpreted as elements in a high-dimensional space. Points in this so-called configuration space not only represent the geometric form of an object but also its scale, position, and orientation within the 3D space they are embedded in. By removing these similarity transformations, we derive the concept of shape space (Kendall et al. 2009), which is amenable to statistical shape analysis. It is, however, this

last step that introduces curvature to shape space, yielding a non-trivial geometric structure. Contrary to flat spaces, shortest connecting paths in shape space are not straight lines but curved trajectories called geodesics (see Fig. 7). Whereas this non-linearity ensures consistency, e.g. by preventing bias due to misalignment of shape data, it also impedes the application of classical statistical tools. As a fully intrinsic treatment of the analysis problem can be computationally demanding, a common approach is to approximate it using extrinsic distances. For data with a large spread in shape space or within regions of high curvature, such linearization will introduce distortions that degrade the statistical power (von Tycowicz et al. 2018). Among the many methods for capturing the geometric variability in a population, Principal Component Analysis (PCA) and its manifold extensions remain the workhorse for the construction of Statistical Shape Models (SSMs). The resulting models encode the probability of occurrence of a certain shape in terms of a mean shape and a hierarchy of major modes explaining the main trends of shape variation (see Fig. 8).
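To make the construction tangible, the following minimal sketch (an illustration in Python/NumPy, not the implementation used in the works cited above) computes a linear PCA-based SSM from a set of training shapes. It assumes the shapes are already in dense point-to-point correspondence and rigidly aligned, e.g. by generalized Procrustes analysis, and the number of retained modes is an arbitrary choice:

    import numpy as np

    def build_ssm(shapes, n_modes=5):
        # shapes: (n_samples, n_points, 3), in dense correspondence and aligned
        n = shapes.shape[0]
        X = shapes.reshape(n, -1)                # one configuration vector per shape
        mean = X.mean(axis=0)                    # the mean shape
        # right singular vectors of the centered data matrix are the principal
        # modes of variation; singular values encode their spread
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        modes = Vt[:n_modes]                     # (n_modes, 3 * n_points)
        variances = s[:n_modes] ** 2 / (n - 1)   # variance explained per mode
        return mean, modes, variances

Projecting a new, corresponded shape x onto the modes, b = (x.ravel() - mean) @ modes.T, yields the compact coefficient vector that serves as shape descriptor in the applications discussed later in this chapter.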


Fig. 7  Visualization of shortest paths, i.e. geodesics, connecting two body shapes w.r.t. the flat ambient space (red) and a curved shape space (green). The latter contains only valid shape instances whereas the former develops artifacts, e.g. shrinkage of the arms

Fig. 8  Mean pelvic shape from seven instances (left) and most dominant modes of variation within a population of 150 pelvises (right)

An appealing feature of such a shape modeling approach is that the shape model itself has a generative power. Since all shape instances are in dense correspondence with respect to their geometric representation, a morphing between all shapes contained in such an SSM becomes possible (Gomes et  al. 1999). This means that any weighted combination of shape instances of the SSM leads to new but plausible shapes that are not contained in the training data (cf. Fig. 9).
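Under the same assumptions as in the sketch above, both the generation of new plausible instances and the morphing between two known ones reduce to simple arithmetic on the mode coefficients (again purely illustrative):

    def synthesize(mean, modes, variances, b):
        # b: mode weights in units of standard deviations, typically within +/- 3
        offsets = (np.asarray(b) * np.sqrt(variances)) @ modes
        return (mean + offsets).reshape(-1, 3)

    def morph(mean, modes, b_start, b_end, t):
        # b_start, b_end: raw projection coefficients of two training shapes;
        # t runs from 0 (first shape) to 1 (second shape)
        b = (1.0 - t) * np.asarray(b_start) + t * np.asarray(b_end)
        return (mean + b @ modes).reshape(-1, 3)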

1.2 Longitudinal Shape Analysis

Processes such as disease progression or recovery, growth, or aging (Fig.  10) are inherently time-dependent, requiring measurements at multiple time points to be sufficiently described. Clinical research therefore increasingly relies on longitudinal studies that track biological shape changes over time within and across individuals to gain insight into such dynamic processes.


Fig. 9  Interpolation in pelvic shape space generates anatomically plausible shape instances of pelvises

Fig. 10  Mean shape trajectories allow for an interpolation of facial aging effects. (Grewe et al. 2013)

While approaches for the analysis of time series of scalar data are well understood and routinely employed in the statistics and medical imaging communities, generalizations to complex data such as shapes are at an early stage of research. Methods designed for cross-sectional data analysis do not consider the inherent correlation of repeated measurements of the same individual, nor do they inform how a subject relates to a comparable healthy or disease-specific population. Integrating longitudinal shape measurements into an SSM allows one to statistically analyse the temporal evolution of anatomical structures as well as to vividly visualize it using morphing.

Longitudinal analysis requires a common framework based on the use of hierarchical models that include intra-individual changes in the response variable and thereby have the ability to differentiate between cohort and temporal effects. One eligible class of statistical methods is that of mixed-effects models (Gerig et al. 2016), which describe the correlation in subject-specific measurements along with the mean response of a population over time. At the individual level, continuous trajectories have to be estimated from sparse and potentially noisy samples. To this end, subject-specific spatiotemporal regression models are employed. They provide a way to describe


the data at unobserved times (i.e. shape changes between observation times and, within certain limits, also at future times) and to compare trends across subjects in the presence of unbalanced data (e.g. due to dropouts). A common approach is to approximate the observed temporal shape data by geodesics in shape space and, based on these, to estimate overall trends within groups. Geodesic models are attractive as they feature a compact representation (similar to the slope and intercept terms in linear regression) and therefore allow for computationally efficient inference (Nava-Yazdani et al. 2018).
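In a flat coefficient space, such a subject-specific trajectory can be summarized, analogously to slope and intercept in linear regression, by a least-squares fit through the SSM coefficients observed at the individual visits. The sketch below is this linearized stand-in only; the cited works perform the regression with geodesics in curved shape space:

    def fit_linear_trajectory(times, coeffs):
        # times: (n_obs,) observation times; coeffs: (n_obs, n_modes) per visit
        t = np.asarray(times, dtype=float)
        B = np.asarray(coeffs, dtype=float)
        A = np.stack([np.ones_like(t), t], axis=1)    # design matrix [1, t]
        params, *_ = np.linalg.lstsq(A, B, rcond=None)
        b0, v = params                                # intercept and velocity
        return b0, v     # shape coefficients at t = 0 and change per unit time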

1.3 Articulated Statistical Shape Models

In functional analysis it is often necessary to consider not only a single anatomical shape but a shape ensemble of different interacting structures, since they are in a spatial relationship that is crucial for the respective function. Well-known shape ensembles in musculoskeletal anatomy are joint structures, e.g. the hip or knee joint (Fig. 11). A common method to model such joint structures statistically are so-called articulated Statistical Shape Models (a-SSMs) (Boisvert et al. 2008; Klinder et al. 2008; Kainmüller et al. 2009; Bindernagel et al. 2011; Agostini et al. 2014). An a-SSM consists of an SSM for each involved anatomical structure of the joint as well as an analytical joint model that describes the degrees of freedom of the joint motion. A standard approach for modeling a hip joint (Fig. 11, left) is a ball-and-socket model, which is completely determined by its rotational center, a global frame, and its orientation. The connection to the statistical part of the model is established via the coordinates of the rotational center, which are included in the shape statistics such that the center becomes a component of the SSM that is always placed at a plausible location. Other examples of joint models are hinge joints for the knee or the elbow (Fig. 11, center), often coupled with additional degrees of freedom for rotation and/or translation, or bicondylar joints, as for the temporomandibular joint (Fig. 11, right). The charming aspect of an articulable ensemble of statistical shape models lies in the fact that shape and joint positions can be varied independently of each other, whereby the relationship between articulation and statistical variation of anatomical relations always leads to a plausible result. In addition, the degrees of freedom of joints can themselves be modeled statistically to analyse motion patterns within a population sample.
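The independence of shape and articulation can be illustrated by rigidly rotating a shape instance about the statistically placed joint center. The following sketch of a ball-and-socket articulation uses SciPy's rotation utilities and is an illustration only, not the a-SSM formulation of the cited works:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def articulate(points, center, rotvec):
        # points: (n_points, 3) shape instance; center: (3,) joint center taken
        # from the SSM; rotvec: axis-angle rotation (the three rotational
        # degrees of freedom of the ball-and-socket joint)
        R = Rotation.from_rotvec(rotvec).as_matrix()
        return (points - center) @ R.T + center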

1.4 Statistical Appearance Models

An organ or an anatomical structure varies not only in its shape, but also in its internal structure and appearance. For example, bone can have different degrees of mineralization, or the appearance of the skin can differ. In medical imaging, different tissue types also yield different measurements depending on the imaging modality. In addition to the statistical examination of shape, there are therefore good reasons to also consider the internal structure and the respective appearance when examining anatomical variation (Fig. 12).

Fig. 11  Examples of articulated SSMs: hip, knee, jaw. (Kainmüller et al. 2009; Bindernagel et al. 2011)


Fig. 12  Statistical variation in shape (Lamecker et al. 2006a), articulation, and bone mineral density. (Yao 2002)

Fig. 13  An osteoarthritic knee (left) showing pathological shape and appearance features (grey areas) versus a healthy control (right)

Models combining shape and image statistics are known as Statistical Shape and Appearance Models (SSAMs). Such models play an important role in diagnostics, where it has to be acknowledged that shape statistics alone is not in every case the solution to a problem (Mukhopadhyay et al. 2016). If we consider, e.g., knee osteoarthritis (see Fig. 13), and here especially the assessment of femoral/tibial cartilage degeneration, we note that the cartilage interface shows macerations before denudations emerge from them. These macerations can be seen in MRI, as the cartilage soaks up synovial fluid and appears brighter than usual. If one relies on shape knowledge alone

there is no chance to notice this clear sign of disease progression, i.e. shape statistics remains blind to inflammatory processes. However, it is possible to sample appearance patterns within the tibial and femoral head such that a statistical analysis similar to the PCA-based one on shapes can be performed on appearances to resolve these ambiguities. Another example is the statistical evaluation of bone mineral density (BMD). SSAMs allow one to analyse the relationship between BMD, bone shape, and demographic parameters such as age or sex within a large population.
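A simple way to couple both statistics, sketched here under the same correspondence assumptions as before, is to run the PCA on concatenated shape and per-point intensity vectors; the weighting factor that balances the incommensurable units of the two blocks is a free parameter of this illustration:

    def build_ssam(shapes, intensities, weight=1.0):
        # shapes: (n_samples, n_points, 3); intensities: (n_samples, n_points)
        # image intensities sampled per point, e.g. from MRI or CT
        X = np.hstack([shapes.reshape(len(shapes), -1), weight * intensities])
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt, s ** 2 / (len(X) - 1)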


2 Applications for Statistical Shape Models

A tremendous number of applications has arisen in the field of statistical shape modelling within the last decades, and it is very likely that new ones will emerge in the future (Lamecker et al. 2005; Lamecker and Zachow 2016). Since covering them all would exceed this chapter's scope, we will, in the following, focus on some prominent examples with respect to anatomical shapes (Fig. 14).

2.1 Imaging and Metrology

Except in the case of Computer Aided Design (CAD), shapes are typically represented in a discrete form by a collection of point measurements that are distributed over the surface of an object. Shape measurements can be taken stereo-photogrammetrically, tactilely, or by tomographic methods (CT, MRI), and they can be either dense or sparse, thus capturing more or less detail of a measured object. In addition, measurements may be disturbed by measurement errors and artifacts. To reconstruct an object's shape from such measurements, robust algorithms are required that are able to cope with measurement errors, sparsity, or incompleteness.

With the help of (articulated) Statistical Shape and Appearance Models (a-SSAMs), geometric priors (i.e. anatomically plausible deformable templates) are given, which are a valuable resource for the reconstruction of shapes from measurement data. This has been successfully demonstrated with automated geometry reconstruction approaches using a-SSAMs (Seim et al. 2010; Kainmüller et al. 2009; Tack et al. 2018; Ambellan et al. 2019), which in turn speed up the extension of the respective SSMs. However, the benefit of using prior geometric knowledge for reconstructing shapes from measurements becomes even more valuable in cases where measurements contain severe disturbances or do not completely describe an object because the measuring field is too small, the object is not fully covered by the field of view, or the anatomy of interest is simply not fully accessible to the measurement (Vidal-Migallon et al. 2015; Wilson et al. 2017; Bernard et al. 2017). In the extreme, 3D shapes may even be reconstructed from very sparse measurements, provided the respective geometric prior is powerful enough to extrapolate the missing information. An example would be a geometric 3D reconstruction of anatomical structures from a few 2D radiographs, or even a single one (Ehlke et al. 2013).

Fig. 14  Examples of statistical shape models: neurocranium, bony orbit, midface, and mandible. (Zachow 2015)


Fig. 15  SSM-based 3D reconstruction of anatomy from 2D X-ray images. (Lamecker et al. 2006a; Dworzak et al. 2010; von Berg et al. 2011)

Fig. 16  Concept of SSAM-based 3D reconstruction of anatomy from a single radiograph. (Ehlke et al. 2013)

If two or more radiographs of the same subject are given and the imaging setup, i.e. the spatial relationship between the acquired images (source, patient, detector), is known, an SSM can be fitted to the image data in such a way that its projections (for example, silhouettes) best match the boundaries within the given images (Fig. 15). This concept can be found in today's full-body stereo-radiographic imaging systems, which are becoming an alternative to tomographic imaging in particular orthopedic applications. In cases where only a single radiograph is available, as is often the case in functional imaging using fluoroscopy or in orthopedics for imaging weight-bearing situations, (a-)SSAMs offer a valuable resource for a 3D reconstruction of anatomy from the given measurements (Fig. 16). The matching between the deformable template and the image data using SSAMs relies not only on the silhouettes but also

on the appearance of the complete anatomical structure within the images. That way, both shape and appearance are used to robustly drive an algorithm to select the best matching shape and pose from the statistical model. However, it remains to be said that such 3D reconstruction from sparse measurements always requires a representative statistical model to faithfully approximate the imaged anatomy. Shape knowledge in combination with medical X-ray imaging (e.g. C-arm technology) also opens up new possibilities in dose reduction, since image acquisitions can be designed in such a way that a few well-chosen perspectives might already be sufficient to reconstruct the anatomy of interest. This becomes especially useful for dose-critical applications such as image-based positioning for radiotherapy, intraoperative, or functional imaging.
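Conceptually, such a reconstruction is an optimization over the low-dimensional mode coefficients. The sketch below minimizes a silhouette discrepancy plus a shape-prior penalty; the project callable, which would render a silhouette from a candidate 3D shape for the known imaging setup, is hypothetical here, as is the choice of regularization weight:

    from scipy.optimize import minimize

    def fit_to_silhouette(mean, modes, variances, target, project, reg=0.1):
        # target: observed 2D silhouette; project: hypothetical renderer mapping
        # an (n_points, 3) shape to a silhouette comparable with target
        def cost(b):
            pts = (mean + (b * np.sqrt(variances)) @ modes).reshape(-1, 3)
            data_term = np.sum((project(pts) - target) ** 2)
            return data_term + reg * np.sum(b ** 2)  # keep the shape statistically likely
        res = minimize(cost, np.zeros(len(modes)), method="Powell")
        return res.x                                 # fitted mode coefficients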


2.2 Shape Analysis

With an increasing amount of data from medical imaging and epidemiological studies, as well as intensified initiatives to make such data available for research, new possibilities of morphological population analysis arise. Large longitudinal databases, in addition, offer the unique opportunity to investigate the connection between changes in anatomical shape documented through imaging at different time points and disease states rated by domain experts. Shape analysis by means of SSMs serves here as a valuable tool that provides a complexity-reduced, compact encoding for a large set of shapes. In particular, employing the coefficients representing the shapes within the basis of principal modes of variation yields highly discriminative statistical descriptors that are able to capture characteristic changes in shape (see Fig. 17). This encoding in turn is well suited for the application of established analysis methods, e.g. employing concepts of machine learning. On the one hand, unsupervised learning can be applied to infer hidden structures and patterns in the shape data without relying on clinical variables. For example, a clustering approach could help to identify disease-specific subgroups within the data that can improve shape-based risk assessment and treatment planning (Bruse et al. 2017). In particular, clustering on SSM-based shape descriptors from a population diagnosed with coarctation of the aorta identified subgroups in aortic arch shape confirming the current clinical classification

scheme (normal/crenel or gothic) and even revealed a new shape class related to age (Gundelwein et al. 2018). On the other hand, in a supervised framework, labels such as disease states can be employed to train classifier systems (von Tycowicz et al. 2018) that facilitate computer-aided diagnostics of anatomical dysmorphisms (Fig. 17). Furthermore, statistical shape descriptors can be used to support clinical decision-making processes as well as the development of disease scoring mechanisms that operate fully automatically.
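Because each subject is thereby reduced to a short coefficient vector, off-the-shelf classifiers apply directly. The sketch below cross-validates a linear classifier on such descriptors using randomly generated stand-in data; it is a generic illustration, not the Riemannian model of von Tycowicz et al. (2018):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(100, 5))       # stand-in for per-subject SSM coefficients
    labels = (coeffs[:, 0] > 0).astype(int)  # stand-in for expert disease ratings

    clf = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(clf, coeffs, labels, cv=5).mean()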

2.3 Product Design

For products like implants or customized instrumentation it is of utmost importance to meet the anatomy-related morphological needs of a patient as precisely as possible, since otherwise the outcome of a medical intervention may not fulfill the expectations (Fig. 18). In fact, there is evidence that in total knee arthroplasty patient dissatisfaction is at least partially related to a mismatch between the preoperative shape of the distal femur and its shape postoperatively, either due to the shape of the femoral component or its positioning (Akbari Shandiz 2015, 2018). In contrast to individualized design, manufacturers and users (i.e. surgeons) are also interested in having a set of standard implants with the widest possible range of applications in terms of fit. Both individual design and population-

Fig. 17  Low-dimensional visualization (middle) of an SSM-derived shape descriptor used to separate healthy (left) and severely diseased knees (right), where each point represents a subject's femoral shape


Fig. 18  Digital design and positioning of the tibial component of a knee implant. (Galloway et al. 2013)

Fig. 19 Representative digital shape instances of the bony orbit derived from an SSM (top) and the corresponding physical prototypes for population-based design of orbital implants. (Kamer et al. 2006)

based design benefit from shape knowledge. Since SSMs help us to parametrize the morphological variation of anatomies, and hence to visualize and understand it better, they offer, in combination with modern manufacturing techniques such as 3D printing, an immense opportunity to meet society's need for mass customization through a population-based design process (Fig. 19).

2.4 Therapy Planning

Although the word 'normal' is probably an inappropriate one to apply to the human body (Griffiths 2012), we note that SSMs may help to improve anaplastology by restoring what is 'normal' patient-specific anatomy. With the help of extensive shape knowledge, which is represented by SSMs, it is possible to plausibly complete pathological morphologies, e.g. fractured or

surgically resected regions (Zachow et al. 2010). In addition, SSMs may serve as an objective for plastic and reconstructive surgery to assess malformations (Zachow et al. 2005) and to surgically correct them with respect to normally developed anatomical structures (Fig. 20). Statistical anatomy is also extremely valuable when an objective is missing and constructive rather than reconstructive surgery is required. This is particularly true for congenital malformations such as craniosynostosis or other syndromes associated with skull development, where craniofacial (re)construction is necessary in children to surgically correct disfiguring defects. A reference for cranial remodeling would be the heads of unaffected children. Hence, an SSM of many neurocraniums has been generated and fitted to the unaffected regions of the head of an individual patient suffering from craniosynostosis (Hochfeld et al. 2014). The model was then fabricated, sterilized, and intraoperatively used as a template for


Fig. 20  Reconstruction of mandibular dysplasia using statistical shape modeling. (Zachow et al. 2005)

Fig. 21  Reshaping of an infant's skull based on statistical shape analysis. (Lamecker et al. 2006b) (photos taken by F. Hafner, Charité Berlin)

reshaping the forehead of the patient (Fig. 21). Such model-based planning and intervention reduces the time of surgery, and thus the duration of anesthesia as well as the risk of complications.

2.5 Diagnosis and Follow-Up

Medical diagnostics is based on a conceptual understanding of healthy (normal) anatomical structures and their deviating (pathological) properties. A comprehensive database of anatomical shapes and appearances, in combination with an appropriate classification of the associated health status, provides the basis for a profound radiological assessment. The automated segmentation of medical image data using a-SSAMs in combination with machine learning opens up new and efficient possibilities for computer-aided diagnosis (Tack and Zachow 2019). Well-trained neural networks (i.e. data-driven algorithms) can

propose a classification based on such a database and thus serve as diagnostic decision support. In combination with the assessment of radiological experts, the procedures learn with each new case, so that they continuously represent the expert knowledge. As the number of cases increases, the pre-classification will correspond more and more to the expert opinion and, ideally, in a large number of cases only needs to be confirmed by the radiologist. Since the amount of medical image data is continuously increasing and the time required for radiological diagnosis is a valuable resource, computer-assisted diagnosis systems will make radiological diagnostics more efficient in the future and allow human competence in the assessment of anomalies to focus on cases of doubt. The analysis of extremely large databases can be carried out as often as required in order to retrieve requested cases within a defined range of variation for queries on disease patterns, or to analyse the data over and over again with regard


to new disease patterns that have been recently learned by the algorithms. Such automated procedures form the basis for radiological screening and thus the future discipline of radiomics. A fundamental understanding of the diversity of anatomical shapes and the possibility of quantitative shape analysis serves not only diagnostics but also the evaluation of therapeutically induced changes. For a subsequent verification of the effectiveness of a therapeutic treatment or to check whether the planned procedure has been correctly implemented, a comparison between the preoperative condition, the planning, and the therapeutic result is necessary. A morphological comparison requires a plausible dense correspondence which is inherently given by recent algorithms for shape analysis. The application of such methods within a follow-up serves not only to monitor success but also for documentation and quality assurance in future evidence-based medicine.

2.6 Education and Training

By studying anatomy, students must become aware that there is often a broad spectrum of "normal" in the shape or appearance of anatomical structures (Bergmann et al. 1988). Therefore, students must learn how to distinguish between normal and abnormal variations. Classical anatomical atlases or physical anatomical models usually show shapes of healthy structures and their relationship to each other on the basis of just one example. The range of variation occurring in a population is typically not illustrated due to a lack of precise knowledge. Also, the graphical possibilities to illustrate the range of variation of anatomical shapes and positional relationships are limited. In the best case, there are images or physical models of extremely deviant forms, whereas what exactly the "norm" means remains undefined. Communicating the importance of anatomical variation to students is still considered challenging. There is currently no systematic approach to the morphological evaluation of anatomical


diversity of shapes. This is where new digital possibilities come into play. Statistical 3D shape models can be visualized vividly and with high quality by computer graphics, as in the illustrations shown in this chapter. The possibilities are extremely diverse, from photorealistic to strongly stylized. 3D organ models can be decomposed into anatomical substructures that can be displayed individually or together. Structures can be viewed, measured, and annotated from all sides, or arbitrarily cut to reveal inner substructures. With SSAMs, even virtual medical image data such as X-rays or tomograms can be generated to communicate varying appearances with respect to imaging. A visualization can take place either on a 2D screen or in 3D, whereas virtual reality techniques may enable an immersive viewing effect. With the help of augmented reality techniques, shapes can also be superimposed on real images in order to carry out visual comparisons. However, the special feature for communicating the diversity of shapes is morphing, where the entire shape space of an anatomical structure can be explored by interactively varying shape parameters with an immediate visual response. The shape parameters themselves are chosen to be as compact as possible in order to keep the number of degrees of freedom as low as possible. Typically, the shapes can be varied using the main modes of variation resulting from principal component analysis. The animated representation of the shape variation reveals the respective shape spectrum of an anatomical structure to the observer. Statistical shape models that have been generated from a very large and representative amount of training data, such as from epidemiological studies, provide reliable representations of average shapes for anatomical structures as well as of their variations within one or more standard deviations up to anomalous shapes. The statistical model can be continuously extended with each newly added shape instance. Any shape that can be generated in the respective shape space of the SSM can be visualized, or manufactured as a physical model using appropriate manufacturing techniques. Such models can then also be employed for model-based training of medical procedures.
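Reusing the synthesize sketch from Sect. 1.1, such an educational morphing animation amounts to sweeping one mode weight through its plausible range while all others are held at zero (the rendering step is left abstract):

    # animate the first mode over +/- 3 standard deviations of the population
    weights = np.zeros(5)                  # assuming the 5-mode model from above
    for t in np.linspace(-3.0, 3.0, 61):
        weights[0] = t
        frame = synthesize(mean, modes, variances, weights)
        # hand frame, an (n_points, 3) array, to the viewer or printing pipeline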


2.7 Clinical Research based on Shape Analysis

There is increasing evidence that studying shape rather than derived scalar measurements such as volume provides quantitative measures that are not only statistically significant but also anatomically relevant and intuitive. In particular, a scalar description can only capture one of the many aspects of a full structural characterization. Also, the analysis of individual clinical variables using independent models for each variable does not account for correlations between the measures. In contrast, statistical shape modeling allows one to account for all shape features and their correlations at once, without the need to predefine discrete shape measurements. This advocates the use of SSMs for extracting clinically relevant information as required for modern precision medicine strategies. A major theme in shape-based clinical research is to determine whether the morphological changes found in one group are significantly different from those found in another. For example, one might ask if the cardiac anatomy of patients with chronic regurgitation evolves differently than that of healthy aging subjects. As SSMs provide an estimate of the probability density function that underlies the observed shapes, group testing can be performed using suitable multivariate distances as test statistics. In this context, permutation tests allow one to build statistically powerful tests in a nonparametric fashion that do not require the strong assumptions underlying traditional parametric approaches. Beyond the statistical framework, SSMs provide a generative shape model that allows one to explore (within certain limits) the shapes belonging to an object class under study (see Fig. 9). For instance, the visualization of shape changes that are most dominant within a population, or that show a high correlation to clinical variables, could help to develop an intuition about underlying mechanisms. Ideally, this would spur the development of novel hypotheses, which could be tested against new data and, hence, lead to improved knowledge.
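Returning to the group testing described above, a permutation test on SSM coefficients can be sketched in a few lines. The statistic below is the Euclidean distance between the group mean coefficients, an illustrative choice; studies may prefer Hotelling-type or other multivariate statistics:

    def permutation_test(group_a, group_b, n_perm=10000, seed=0):
        # group_a, group_b: (n_i, n_modes) SSM coefficients of the two cohorts
        rng = np.random.default_rng(seed)
        a_arr, b_arr = np.asarray(group_a), np.asarray(group_b)
        data = np.vstack([a_arr, b_arr])
        n_a = len(a_arr)
        observed = np.linalg.norm(a_arr.mean(axis=0) - b_arr.mean(axis=0))
        hits = 0
        for _ in range(n_perm):
            perm = rng.permutation(len(data))
            a, b = data[perm[:n_a]], data[perm[n_a:]]
            if np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)) >= observed:
                hits += 1
        return (hits + 1) / (n_perm + 1)   # permutation p-value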

3 Future Implications of Statistical Anatomy

Statistical 3D shape models form the basis for a wide range of possible applications. The examples described above demonstrate the possibilities of shape analysis for medical applications alone. SSMs also provide an interesting foundation for various other questions, such as in anthropology, biometry, evolutionary biology, biomechanics, and many more. In runners, for example, it was investigated whether unusually long heel bones (calcanei) give the calf muscles a better leverage effect and whether such runners are therefore more successful (Ingraham 2018). In forensics, it would be conceivable to use SSMs to infer the external shape of the head from the shape of the skull. Products around the human body can be better tailored by means of shape analysis. The spectrum extends into the entertainment sector, where character designs can be created more intuitively and more diversely through statistical modelling than has been the case to date.

Acknowledgements  The authors gratefully acknowledge the financial support by the German research foundation (DFG) within the research center MATHEON (Germany's Excellence Strategy – MATH+: The Berlin Mathematics Research Center, EXC-2046/1 – project ID: 390685689), the German federal ministry of education and research (BMBF) within the research network on musculoskeletal diseases, grant no. 01EC1408B (Overload/PrevOP) and grant no. 01EC1406E (TOKMIS), the research program "Medical technology solutions for digital health care", grant no. 13GW0208C (ArtiCardio), as well as the BMBF research campus MODAL.

References

Agostini V, Balestra G, Knaflitz M (2014) Segmentation and classification of gait cycles. IEEE Trans Neural Syst Rehabil Eng 22(5):946–952
Akbari Shandiz M (2015) Component placement in hip and knee replacement surgery: device development, imaging and biomechanics. Doctoral dissertation, University of Calgary
Akbari Shandiz M, Boulos P, Saevarsson SK, Ramm H, Fu CK, Miller S, Zachow S, Anglin C (2018) Changes in knee shape and geometry resulting from total knee arthroplasty. Proc Inst Mech Eng H J Eng Med 232(1):67–79

Statistical Shape Models: Understanding and Mastering Variation in Anatomy Ambellan F, Tack A, Ehlke M, Zachow S (2019) Automated segmentation of knee bone and cartilage combining statistical shape knowledge and convolutional neural networks: Data from the Osteoarthritis Initiative. Med Image Anal 52:109–118 Bergmann RA, Thompson SA, Afifi AK, Saadeh FA (1988) Compendium of human anatomic variation. Urban & Schwarzenberg. https://www.anatomyatlases.org Bernard F, Salamanca L, Thunberg J, Tack A, Jentsch D, Lamecker H, Zachow S, Hertel F, Goncalves J, Gemmar P (2017) Shape-aware surface reconstruction from sparse 3D point-clouds. Med Image Anal 38:77–89 Bindernagel M, Kainmüller D, Seim H, Lamecker H, Zachow S, Hege HC (2011) An articulated statistical shape model of the human knee. In: Bildverarbeitung für die Medizin, pp 59–63 Boisvert J, Cheriet F, Pennec X, Labelle H, Ayache N (2008) Geometric variability of the scoliotic spine using statistics on articulated shape models. IEEE Trans Med Imaging 27(4):557–568 Bookstein FL (1986) Size and shape spaces for landmark data in two dimensions. Stat Sci 1(2):181–222 Bruse JL, Zuluaga MA, Khushnood A, McLeod K, Ntsinjana HN, Hsia TY, Taylor AM, Schievano S (2017) Detecting clinically meaningful shape clusters in medical image data: metrics analysis for hierarchical clustering applied to healthy and pathological aortic arches. IEEE Trans Biomed Eng 64(10):2373–2383 Davis RH, Twining CJ, Cootes TF, Waterton JC, Taylor CJ (2002) A minimum description length approach to statistical shape modelling. IEEE Trans Med Imaging 21:525–537 Dworzak J, Lamecker H, von Berg J, Klinder T, Lorenz C, Kainmüller D, Hege HC, Zachow S (2010) 3D reconstruction of the human rib cage from 2D projection images using a statistical shape model. Int J Comput Assist Radiol Surg 5(2):111–124 Ehlke M, Ramm H, Lamecker H, Hege HC, Zachow S (2013) Fast generation of virtual X-ray images for reconstruction of 3D anatomy. IEEE Trans Visual Comput Graph 19(12):2673–2682 Galloway F, Kahnt M, Ramm H, Worsley P, Zachow S, Nair P, Taylor M (2013) A large scale finite element study of a cementless osseointegrated tibial tray. J Biomech 46(11):1900–1906 Gerig G, Fishbaugh J, Sadeghi N (2016) Longitudinal modeling of appearance and shape and its potential for clinical use. Med Image Anal 33:114–121 German National Cohort. German federal and local state governments and the Helmholtz Association. https:// nako.de/informationen-auf-englisch Gomes J, Darsa L, Costa B, Velho L (1999) Warping and morphing of graphical objects. Morgan Kaufmann Publishers, San Francisco Grewe CM, Zachow S (2016) Fully automated and highly accurate dense correspondence for facial surfaces. In: European conference on computer vision, pp 552–568


Griffiths I (2012) Choosing running shoes: the evidence behind the recommendations. http://www.sportspodiatryinfo.co.uk/choosing-running-shoes-the-evidence-behind-the-recommendations
Gundelwein L, Ramm H, Goubergrits L, Kelm M, Lamecker H (2018) 3D shape analysis for coarctation of the aorta. In: International workshop on shape in medical imaging, pp 73–77
Hochfeld M, Lamecker H, Thomale UW, Schulz M, Zachow S, Haberl H (2014) Frame-based cranial reconstruction. J Neurosurg Pediatr 13(3):319–323
The Osteoarthritis Initiative, National Institutes of Health, USA. https://oai.nih.gov/
Ingraham L (2018) You might just be weird: the clinical significance of normal – and not so normal – anatomical variations. https://www.painscience.com/articles/anatomical-variation.php
Jones KL, Jones MC, Del Campo M (2013) Smith's recognizable patterns of human malformation, 7th edn. Elsevier/Saunders, London
Kainmüller D, Lange T, Lamecker H (2007) Shape constrained automatic segmentation of the liver based on a heuristic intensity model. In: MICCAI workshop 3D segmentation in the clinic: a grand challenge, pp 109–116
Kainmüller D, Lamecker H, Zachow S, Hege HC (2009) An articulated statistical shape model for accurate hip joint segmentation. In: IEEE Engineering in Medicine and Biology Society annual conference, pp 6345–6351
Kamer L, Noser H, Lamecker H, Zachow S, Wittmers A, Kaup T, Schramm A, Hammer B (2006) Three-dimensional statistical shape analysis – a useful tool for developing a new type of orbital implant? AO Development Institute, New Products Brochure 2/06, pp 20–21
Kendall DG, Barden D, Carne TK, Le H (2009) Shape and shape theory. Wiley, New York
Klinder T, Wolz R, Lorenz C, Franz A, Ostermann J (2008) Spine segmentation using articulated shape models. In: International conference on medical image computing and computer-assisted intervention, pp 227–234
Lamecker H (2008) Variational and statistical shape modeling for 3D geometry reconstruction. Doctoral dissertation, Freie Universität Berlin
Lamecker H, Zachow S (2016) Statistical shape modeling of musculoskeletal structures and its applications. In: Computational radiology for orthopaedic interventions. Springer, pp 1–23
Lamecker H, Lange T, Seebaß M (2002) A statistical shape model for the liver. In: International conference on medical image computing and computer-assisted intervention, pp 421–427
Lamecker H, Seebaß M, Hege HC, Deuflhard P (2004) A 3D statistical shape model of the pelvic bone for segmentation. In: Medical imaging 2004: image processing, vol 5370, pp 1341–1352
Lamecker H, Zachow S, Haberl H, Stiller M (2005) Medical applications for statistical shape models. Computer Aided Surgery around the Head, Fortschritt-Berichte VDI – Biotechnik/Medizintechnik 17(258):1–61

Lamecker H, Wenckebach TH, Hege HC (2006a) Atlas-based 3D-shape reconstruction from X-ray images. In: IEEE 18th International conference on pattern recognition, pp 371–374
Lamecker H, Zachow S, Hege HC, Zockler M, Haberl H (2006b) Surgical treatment of craniosynostosis based on a statistical 3D-shape model: first clinical application. Int J Comput Assist Radiol Surg 1(Suppl 7):253–254
Moore KL (1989) Meaning of "normal". Clin Anat 2(4):235–239
Mukhopadhyay A, Victoria OSM, Zachow S, Lamecker H (2016) Robust and accurate appearance models based on joint dictionary learning: data from the osteoarthritis initiative. In: International workshop on patch-based techniques in medical imaging, pp 25–33
Nava-Yazdani E, Hege H-C, von Tycowicz C, Sullivan T (2018) A shape trajectories approach to longitudinal statistical analysis. Technical report, ZIB-Report 18-42
Rybak J, Kuß A, Lamecker H, Zachow S, Hege HC, Lienhard M, Singer J, Neubert K, Menzel R (2010) The digital bee brain: integrating and managing neurons in a common 3D reference system. Front Syst Neurosci 4:1–30
Sañudo JR, Vázquez R, Puerta J (2003) Meaning and clinical interest of the anatomical variations in the 21st century. Eur J Anat 7(1):1–3
Seim H, Kainmüller D, Lamecker H, Bindernagel M, Malinowski J, Zachow S (2010) Model-based auto-segmentation of knee bones and cartilage in MRI data. In: MICCAI workshop medical image analysis for the clinic, pp 215–223
Study of Health in Pomerania. Forschungsverbund Community Medicine at Greifswald Medical School. http://www2.medizin.uni-greifswald.de/cm/fv/ship
Tack A, Zachow S (2019) Accurate automated volumetry of cartilage of the knee using convolutional neural networks: data from the osteoarthritis initiative. In: IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), accepted for publication
Tack A, Mukhopadhyay A, Zachow S (2018) Knee menisci segmentation using convolutional neural networks: data from the Osteoarthritis Initiative. Osteoarthr Cartil 26(5):680–688
Thompson DAW (1917) On growth and form. Cambridge University Press, Cambridge
Toga AW (1998) Brain warping. Elsevier, Amsterdam
van Kaick O, Zhang H, Hamarneh G, Cohen-Or D (2011) A survey on shape correspondence. Comput Graphics Forum 30(6):1681–1707
Vidal-Migallon I, Ramm H, Lamecker H (2015) Reconstruction of partial liver shapes based on a statistical 3D shape model. In: Shape symposium, Delémont, Switzerland, p 22
von Berg J, Dworzak J, Klinder T, Manke D, Kreth A, Lamecker H, Zachow S, Lorenz C (2011) Temporal subtraction of chest radiographs compensating pose differences. In: Medical imaging 2011: image processing, 79620U
von Tycowicz C, Ambellan F, Mukhopadhyay A, Zachow S (2018) An efficient Riemannian statistical shape model using differential coordinates: with application to the classification of data from the osteoarthritis initiative. Med Image Anal 43:1–9
Wilson DAJ, Anglin C, Ambellan F, Grewe CM, Tack A, Lamecker H, Dunbar M, Zachow S (2017) Validation of three-dimensional models of the distal femur created from surgical navigation point cloud data for intraoperative and postoperative analysis of total knee arthroplasty. Int J Comput Assist Radiol Surg 12(12):2097–2105
Yao J (2002) A statistical bone density atlas and deformable medical image registration. Doctoral dissertation, Johns Hopkins University
Zachow S, Lamecker H, Elsholtz B, Stiller M (2005) Reconstruction of mandibular dysplasia using a statistical 3D shape model. In: Computer Assisted Radiology and Surgery (CARS), pp 1238–1243
Zachow S, Zilske M, Hege HC (2007) 3D reconstruction of individual anatomy from medical image data: segmentation and geometry processing. In: Proceedings of the 25th ANSYS conference and CADFEM users' meeting. ZIB Preprint 07-41, available at https://opus4.kobv.de/opus4-zib/files/1044/ZR_07_41.pdf
Zachow S, Kubiack K, Malinowski J, Lamecker H, Essig H, Gellrich NC (2010) Modellgestützte chirurgische Rekonstruktion komplexer Mittelgesichtsfrakturen [Model-based surgical reconstruction of complex midfacial fractures]. In: Proceedings of Biomedical Technology Conference (BMT), pp 107–108

Towards Advanced Interactive Visualization for Virtual Atlases

Noeska Smit and Stefan Bruckner

Abstract

An atlas is generally defined as a bound collection of tables, charts or illustrations describing a phenomenon. In an anatomical atlas, for example, a collection of representative illustrations and text describes anatomy for the purpose of communicating anatomical knowledge. The atlas serves as a reference frame for comparing and integrating data from different sources by spatially or semantically relating collections of drawings, imaging data, and/or text. In the field of medical image processing, atlas information is often constructed from a collection of regions of interest, which are based on medical images that are annotated by domain experts. Such an atlas may be employed, for example, for automatic segmentation of medical imaging data.

N. Smit (*) Department of Informatics, University of Bergen, Bergen, Norway Mohn Medical Imaging and Visualization Centre, Haukeland University Hospital, Bergen, Norway e-mail: [email protected] S. Bruckner Department of Informatics, University of Bergen, Bergen, Norway e-mail: [email protected]

The combination of interactive visualization techniques with atlas information opens up new possibilities for content creation, curation, and navigation in virtual atlases. With interactive visualization of atlas information, students are able to inspect and explore anatomical atlases in ways that were not possible with the traditional method of presenting anatomical atlases in book format, such as viewing the illustrations from other viewpoints. With advanced interaction techniques, it becomes possible to query the data that forms the basis for the atlas, thus empowering researchers to access a wealth of information in new ways. So far, atlas-based visualization has been employed mainly for medical education, as well as biological research. In this survey, we provide an overview of current digital biomedical atlas tasks and applications and summarize relevant visualization techniques. We discuss recent approaches for providing next-generation visual interfaces to navigate atlas data that go beyond common text-based search and hierarchical lists. Finally, we reflect on open challenges and opportunities for the next steps in interactive atlas visualization.

Keywords: Biomedical visualization · Virtual atlases · Interactive visualization · Atlases · Visualization



1 Introduction

Since the sixteenth century, the word ‘atlas’ has been used to describe a collection of geographical maps. In the medical context, an atlas of human anatomy refers to a collection of illustrations and descriptive text that captures knowledge on the morphological structure of the human body. An example of such an atlas is Netter’s Atlas of Human Anatomy (Netter 2017), which depicts the human body in hand-painted illustrations, annotated radiological images, and quick look-up tables. The main aim of such an atlas is to improve the understanding of anatomy and how it applies to medicine. Anatomical atlases are an important reference in both medical education and clinical practice, providing information on shape, position, and structural relations. With the advent of increased computing power, it became feasible to construct virtual atlases. These virtual atlases can be used in the traditional sense, as a digital collection of texts and illustrations, but also enable more advanced representations of human anatomy, for instance by constructing virtual three-dimensional reference models of standard anatomy. Such models allow for additional interaction techniques, such as rotation, zooming, and showing and hiding of structures, which were not possible with traditional illustrations. The digital nature of such atlases offers several advantages over traditional printed atlases. First, this opens up additional content creation methods beyond the limitations of printed materials, such as 3D reconstruction. Second, virtual atlases allow for novel methods of content curation, where additions to the atlas can be made continuously. Finally, virtual atlas information can be combined with complex (visual) querying techniques, empowering researchers to access a wealth of information via simple interactions. These advantages have given rise to the creation of a multitude of diverse virtual atlases in the biomedical domain, for example the Allen Brain Atlas (Jones et al. 2009), the DigiMouse atlas (Dogdas et al. 2007), and an atlas of the adult human brain transcriptome (Hawrylycz

et al. 2012). For an overview of atlases in developmental biology, please refer to the survey of online atlases and gene expression resources for model organisms by Clarkson (2016). Given the wealth of virtual atlases now available, there is an opportunity to employ advanced visualization and interaction techniques that go beyond traditional atlas use as a static reference collection. In this work, we present a characterization of tasks and applications within the context of virtual biomedical atlases. Subsequently, we provide an overview of advanced visualization techniques that are applicable to atlas visualization, followed by a description of potential interaction and navigation strategies. We briefly describe relevant technology that enables interactive atlas visualization and conclude with an outlook on open challenges and opportunities. Our aim with this work is twofold: we seek to raise awareness among atlas curators of advanced data analysis and visualization techniques, and we hope to highlight open research questions and challenges in atlas visualization for visualization researchers.

2 Biomedical Atlas Tasks and Applications

There is a wide variety of tasks and application areas that virtual atlases may support. A well-known application originating from the traditional use of the anatomical atlas is to use a virtual atlas for educational purposes. Preim and Saalfeld (2018) presented a comprehensive survey on virtual human anatomy education systems. While the authors do not explicitly focus on virtual atlases in this survey, they do mention that most of the virtual anatomy systems were described as a digital atlas. The authors characterize the sources of spatial information that may be collected in such an educational digital atlas as (commercial) 3D models, radiological imaging data, cadaver data, and segmentations. The Virtual Surgical Pelvis, for example, consists of cadaver data, segmentations, 3D models, and knowledge from histological analysis, and has so far mainly been employed as an educational resource


(Smit et al. 2016). There are also commercial platforms available aimed at anatomy education via a web interface. Examples are the Biodigital Human (Qualter et al. 2012) and ZygoteBody (Kelc 2012). Typically the 3D models are developed in-house and as such are protected intellectual property, but Zygote also sells their assets. In addition to education, a virtual atlas may also support data analysis and image processing. An example of this is to use an atlas dataset for image segmentation, for instance in segmentation of MR brain scans (Cabezas et al. 2011). Through registration of the atlas to an unseen dataset, the unseen dataset can be segmented based on the mapped atlas information. This approach can also be modified to work with multiple atlases (Aljabar et al. 2009). When registering an atlas to patient-specific imaging data, it may be used to construct patient-specific models for treatment planning purposes (Smit et al. 2017) (see Fig. 1). Generally, the atlas is used to transfer knowledge to an unseen dataset; however, the reverse is also possible. By registering additional datasets to a virtual atlas, an atlas may be further enriched with additional information. For example, by registering patient-specific information to the atlas, pathology such as a tumor can be visualized with the atlas as an anatomical reference (Kikinis et al. 1996). The atlas space may also be used as a common frame of reference, for instance to bring gene activity imaging data together in an idealized expert-defined atlas (Walter et al. 2010). Virtual atlas information can also be employed for simulation and prediction. An atlas dataset


can form a basis for biomechanical simulations, for instance to compensate for brain shift (Dumpuri et al. 2007). Furthermore, virtual atlases may be used for consolidation and summarization of research data. The adult human brain transcriptome atlas (Hawrylycz et al. 2012) is an example of such an atlas that caters to researchers as a baseline for studies of (ab)normal human brain function. The Allen Human Brain Atlas (Shen et al. 2012) similarly aims to boost brain research by bringing together structure, function, and gene expression data. In addition to these diverse tasks that virtual atlases may support, there is also a wide range of supported application domains. On the medical side, a virtual atlas may describe anatomy, physiology, pathology, variation, development, or a mixture of multiple aspects. In the biological domain, digital atlases similarly range from describing gene expression, neural circuitry, and cell types to development.
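To make the registration-based label transfer described above concrete, the following minimal sketch uses SimpleITK (one of several suitable toolkits) to register an atlas image to an unseen patient scan and then resample the atlas annotations through the recovered transform. The file names, the rigid transform, and the optimizer settings are illustrative assumptions only, not a recipe taken from the works cited; practical pipelines typically add deformable registration stages on top of this.

```python
# Label propagation from an atlas to an unseen scan: register the atlas
# image to the patient image, then resample the atlas labels through the
# resulting transform. A minimal sketch; file names are placeholders.
import SimpleITK as sitk

fixed = sitk.ReadImage("patient_mri.nii.gz", sitk.sitkFloat32)   # unseen dataset
moving = sitk.ReadImage("atlas_mri.nii.gz", sitk.sitkFloat32)    # atlas intensities
labels = sitk.ReadImage("atlas_labels.nii.gz")                   # expert annotations

# Initialise with a rigid transform aligning the volume centres.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)
transform = reg.Execute(fixed, moving)

# Nearest-neighbour interpolation keeps the label values intact.
mapped_labels = sitk.Resample(labels, fixed, transform,
                              sitk.sitkNearestNeighbor, 0, labels.GetPixelID())
sitk.WriteImage(mapped_labels, "patient_segmentation.nii.gz")
```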

3 Visualization Techniques

When virtual atlas data features a spatial aspect, there are many standard and advanced visualization techniques that can be employed to visualize this data efficiently. In the survey by Clarkson (2016), an overview of common design patterns for graphic representation of anatomy is presented.

Fig. 1  The Virtual Surgical Pelvis atlas is mapped to a patient-specific MRI scan (left), allowing for the creation of patient-specific models (right). (Smit et al. 2017)


Fig. 2 A 2D slice-based visualization of the Virtual Surgical Pelvis atlas visualized in the browser showing cryosection and segmentation information (Smit et al. 2016). The axial (red outline), coronal (green outline), and sagittal (blue outline) plane are visible. Slices can be selected by moving the crosshair in one of the views, which updates the other views to the slices indicated by color-coordinated lines

Fig. 3  A 3D surface visualization of the Virtual Surgical Pelvis atlas visualized in the browser (Smit et al. 2016). The slice planes from Fig. 2 are also visible in the 3D surface view for reference

3.1 Standard Visualization Techniques

There are two standard visualization techniques that are currently regularly employed for virtual atlas data: 2D slice-based visualization and 3D surface visualization. In 2D slice-based visualization, a 3D volume is sectioned in the axial, coronal, or sagittal plane, and presented as a collection of 2D images. A single slicing direction may be presented, or a combination of all three orthogonal views can be employed. An example of the latter is visible in Fig. 2. Here, to navigate through the stack of slices, users can drag a crosshair around to control the two other views indicated by the colored lines. The Allen Brain Reference Atlases, such as

the Human Brain Atlas (Hawrylycz et al. 2012), also offer a slice-based view, but they offer a single anatomical plane and visualize the slices next to each other in juxtaposition. A slider is then used to navigate to slices of interest. It is also possible to pick arbitrary slicing directions, a feature which is available for instance in the eMouseAtlas (Armit et al. 2012). This complicates the interaction by adding three degrees of freedom (pitch, yaw, roll) to select an appropriate slicing plane, but can be essential if the subject of interest is not aligned with the standard orthogonal planes. When the atlas features 3D models, they can be shown in a surface visualization. In Fig. 3, we see the Virtual Surgical Pelvis atlas in a 3D surface visualization. Here, the surfaces feature textures which employ colors that are either representative for tissue color, or standard in anatomy communication. Such an anatomical standard is to use red for arteries, blue for veins, and yellow for nerves. The textures themselves are meant to communicate the type of tissue that is visualized, for instance with a veined appearance for the organs, and a striped appearance for the musculature. The Allen Mouse Brain Connectivity Atlas (Oh et al. 2014) is also visualized in a surface visualization in the browser: the Allen Brain Explorer. However, here the structures are not textured, and the colors are varying in hue in such a way that structure classes are visually separable, and groups of similar structures can easily be identified. Many of the more comprehensive virtual atlases feature an aggregation of multiple datasets. In such cases, a summarization visualization can be used to provide an overview of the fused information. For instance, a representative average of the atlas dataset can be presented for navigation purposes, as is visible in the AFQ-Browser tool (Yeatman et al. 2018). Here, a general 3D model of major fiber tracts is presented to the user for selection of bundles of interest.
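The crosshair-linked orthogonal views described above reduce, computationally, to indexing a 3D intensity volume along each axis at the crosshair position. The sketch below illustrates this with NumPy and Matplotlib; the randomly generated volume and the chosen voxel indices are placeholders standing in for real atlas data.

```python
# The three linked views of Fig. 2 amount to indexing a 3D volume along
# each axis at the crosshair position. Array is ordered (z, y, x).
import numpy as np
import matplotlib.pyplot as plt

volume = np.random.rand(128, 256, 256)       # placeholder for CT/MRI/cryosection data
z, y, x = 64, 128, 128                       # crosshair position in voxel indices

axial    = volume[z, :, :]                   # red-outlined view in Fig. 2
coronal  = volume[:, y, :]                   # green-outlined view
sagittal = volume[:, :, x]                   # blue-outlined view

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, img, title in zip(axes, (axial, coronal, sagittal),
                          ("axial", "coronal", "sagittal")):
    ax.imshow(img, cmap="gray", origin="lower")
    ax.set_title(title)
    ax.axis("off")
plt.show()
```

An oblique slicing plane of the kind offered by the eMouseAtlas would additionally rotate the sampling grid by the chosen pitch, yaw, and roll before interpolation.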

3.2 Advanced Visualization Techniques

In addition to standard visualization techniques, more recent visualization research presents novel techniques that have good potential for interactive atlas visualization, yet are currently under-explored. The cumbersome task of creating 3D models may be avoided by direct visualization of volumetric data using a volume rendering approach. Direct volume rendering (Levoy 1988) is currently not employed in atlas visualization, but can be a highly effective way of visualizing volumetric data without the need for generating explicit surface models. One advantage of directly visualizing the volumetric data is that the full three-dimensional information can be represented. While previously considered to be prohibitively expensive, advances in the performance


of Graphics Processing Units (GPUs) have led to the availability of advanced volume rendering techniques even on low-end systems. For instance, a direct volume rendering approach is used in the BrainGazer project (Bruckner et al. 2009) to render volumetric confocal microscopy data. As discussed in the comprehensive survey by Jönsson et al. (2014), in recent years a number of interactive volume rendering methods that can even incorporate global illumination effects such as ambient occlusion (Hernell et al. 2010), multiple scattering (Kniss et al. 2003), or refraction (Magnus and Bruckner 2018), have been presented. While such methods can be utilized to generate visually appealing results, they may not necessarily be ideally suited for the purpose of visualizing atlas data. In illustrative visualization, on the other hand, rendering techniques are inspired by scientific illustrations. Here, the focus is not on rendering structures as realistically as possible, but rather on adapting the visual representation in such a way that essential information is emphasized. This reduces visual clutter, which can become an issue in comprehensive virtual atlases that feature a multitude of structures. The challenge of reproducing the clarity and aesthetic quality of traditional illustrations such as those found in medical textbooks has been one of the main drivers in illustrative visualization, and several sophisticated techniques for the visualization of surface and volume data have been developed (see Fig. 4). One way to classify these methods is according to the level of abstraction that individual approaches operate on. Low-level abstraction techniques tend to focus on the appearance of structures and include approaches that aim to reproduce particular artistic styles. Lawonn et al. (2018) present a comprehensive survey on illustrative visualization for 3D surface models, in which they categorize techniques into silhouettes and contours, feature lines, hatching, stippling, and shading. Likewise, for volumetric data several powerful methods for mimicking various rendering styles including stippling (Lu et al. 2002), line drawing (Burns et al. 2005), and many other artistic techniques, have been developed.


Fig. 4 Examples of illustrative visualization techniques for different types of volumetric data available in the VolumeShop framework. (Bruckner and Gröller 2005)

Style transfer functions (Bruckner and Gröller 2007), for instance, enable the specification of object appearance based on the image of a sphere shaded in the desired manner. High-level abstraction techniques, on the other hand, are concerned with what is visible and recognizable in the scene. This class of methods, also referred to as smart visibility (Viola and Gröller 2005), aims to reveal otherwise hidden or poorly visible objects by selectively displacing or altering the visual prominence of occluding structures. Examples include approaches such as cutaways and ghosting (Feiner and Seligmann 1992; Bruckner et al. 2006; Diepstraten et al. 2003), where an occluding object is removed or its opacity

is reduced, or exploded views (Bruckner and Gröller 2006; Li et al. 2008). Viola and Isenberg (2018) further expand on the idea of abstraction in illustrative visualization. A concept closely related to abstraction is the notion of focus+context, where both low-level and high-level abstraction techniques are employed in order to emphasize particular structures (e.g., the results of a current selection or query) while still presenting them in relation to their surroundings. Focus+context approaches typically employ the concept of an importance function (Viola et al. 2005) to characterize the relevance of an object or region. Such importance functions have been, for instance, used to steer


interactive cutaways (Krüger et al. 2006), close-ups (Taerum et al. 2006), peel-aways (Correa et al. 2006), or lenses (Tominski et al. 2017). For biomedical data, the importance function is typically defined based on a geometric region in the data or by a segmentation mask. More advanced ideas aim to provide fine-grained control over the mapping of data attributes to visual styles. Rautek et al. (2007), for instance, presented a system based on fuzzy logic, which allows users to formulate rules for data and illustration semantics, while Svakhine et al. (2005) proposed the use of multi-level motifs that encapsulate domain knowledge of illustration styles. While such approaches could potentially enable a more tailored experience, the difficulty of specifying and maintaining appropriate rule bases has to date prevented the widespread adoption of these methods. In the context of atlas visualization, illustrative abstraction may be a suitable approach to visualize information at different scales. In an exploratory user study, Kuß et al. (2010) evaluated the use of different illustrative enhancements for the visualization of filament-surface relationships in 3D brain models. They conclude that the best results are achieved using a combination of line coloring and intersection glyph display. Swoboda et al. (2017) make heavy use of abstraction in the information and interaction design of their neuronal atlas interface. In collaboration with artists, they propose a highly reduced spatial visualization in order to avoid visual clutter. Additional information is presented in the form of glyphs which also convey quantitative information and are used as a central interaction element to provide details on demand. In uncertainty visualization, uncertainty in the data coming from a variety of sources is visually communicated in order to give a faithful representation of the underlying data. Potter et al. (2012) present a taxonomy of uncertainty visualization approaches. In the context of atlas visualization, uncertainty visualization techniques may be employed to visually encode variability, for instance when visualizing a statistical atlas of bone anatomy (Chintalapani et al. 2007). While approaches such as average volumes are frequently used to characterize variation, more advanced techniques may be beneficial. Raj et al. (2016), for example, evaluated the use of 3D contour boxplots in the construction and analysis of anatomical atlases and showed that they provide superior information about shape variability. Comparative visualization deals with the challenge of making visual comparisons of data. Approaches for comparative visualization in general are categorized into juxtaposition, superposition and explicit encodings (Gleicher et al. 2011). In juxtaposition, visualizations are placed side by side, while in superposition visualizations are placed on top of each other. In explicit encoding, the differences between the datasets are explicitly visualized. With respect to atlas visualization, a comparison between multiple selections of atlas data is often desirable. In the AFQ-Browser tool (Yeatman et al. 2018) for example, comparisons of cohort selections are made by a combination of juxtaposition for multiple fiber tract selections, and superposition to display individual cohort members. Kim et al. (2017) provide an extensive survey of comparative visualization techniques for spatial data.
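Several of the techniques above build on direct volume rendering, whose core is emission-absorption compositing along viewing rays (Levoy 1988). The following sketch shows front-to-back compositing with orthographic rays along one volume axis; the transfer function is a toy mapping invented for illustration, not one used by any of the systems cited.

```python
# Emission-absorption compositing along parallel rays, the core of direct
# volume rendering (Levoy 1988). Orthographic rays run along the z axis.
import numpy as np

def transfer_function(scalar):
    """Map normalised scalars to (r, g, b, alpha); illustrative only."""
    rgba = np.zeros(scalar.shape + (4,))
    rgba[..., 0] = scalar                            # red increases with density
    rgba[..., 2] = 1.0 - scalar                      # blue decreases with density
    rgba[..., 3] = np.clip(scalar - 0.3, 0.0, 1.0)   # low values stay transparent
    return rgba

def render(volume):
    """Front-to-back compositing: C += T * a * c, T *= (1 - a)."""
    rgba = transfer_function(volume)
    h, w = volume.shape[1], volume.shape[2]
    color = np.zeros((h, w, 3))
    transmittance = np.ones((h, w, 1))
    for z in range(volume.shape[0]):                 # march from front slice to back
        a = rgba[z, ..., 3:4]
        color += transmittance * a * rgba[z, ..., :3]
        transmittance *= (1.0 - a)
    return color

volume = np.clip(np.random.rand(64, 128, 128), 0, 1)  # stand-in for real data
image = render(volume)                                 # (128, 128, 3) composited image
```

Illustrative and focus+context variants can be obtained by modulating the opacity term before compositing, for example with an importance mask that suppresses occluding structures.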

4 Interaction and Navigation Strategies

To access the wealth of information virtual atlases offer, good interaction and navigation strategies are essential. Clarkson (2016) offers a comprehensive overview of textual and graphical design patterns for querying gene expression databases. Hierarchical navigation techniques are often employed to browse atlas information. Typically, structural information is presented in a nested list, where groups of structures or individual structures can be selected and deselected. Examples of such hierarchical navigation tools are visible in both the Online Anatomical Human web application (Smit et al. 2016) and the Allen Mouse Brain Connectivity Atlas (Oh et al. 2014) visualized in the Allen Brain Explorer. Selections via hierarchical menus can be used to make structures visible or invisible, or to retrieve


more detailed information on the selection. A benefit of using a hierarchical list is that groups may be collapsed or unfolded such that the user can pick an appropriate level of detail for his/her investigation. In cases where there is no underlying hierarchy in the items or there is only a limited number of items, a list may be offered instead as a compact representation. In addition to querying via hierarchical or list menus, text-based search can be a powerful addition for information retrieval. This is especially useful when combined with auto-completion to suggest search terms that may be relevant based on the textual input so far. A combination of a hierarchical menu and text-based search can be especially powerful when the number of structures and groups of structures is very large. Rather than searching explicitly for specific information, similarity search can be used to find information that is either semantically or spatially close to selected information. This can be an additional strategy to navigate large amounts of data, as well as to navigate to additional resources linked from the atlas. Appropriate similarity criteria need to be decided upon when offering such a search feature. An example of such a criterion could be spatial proximity. Besides the more traditional textual search and navigation strategies, visual queries can be an intuitive way to search directly from within a visualization. A straightforward example of this is querying a structure by clicking on it in a graphical representation. Smit et al. (2012) also allow atlas querying via a selection sphere. A 3D sphere can be placed in the surface visualization, and all information present inside the sphere, for instance anatomical landmarks and related literature, will be retrieved. In addition to providing traditional search and browsing facilities, BrainGazer (Bruckner et al. 2009) allows for several types of interactive visual queries based on distance and structural information, and subsequent work extended this approach to support more advanced shape-based object retrieval (Trapp et al. 2013).
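As an illustration of the spatial variant of such queries, the sketch below implements a selection-sphere lookup in the spirit of Smit et al. (2012): every annotated landmark within a given radius of a user-placed point is retrieved from a k-d tree. The landmark names, coordinates, sphere position, and radius are fabricated for the example.

```python
# A selection-sphere query: return every annotated landmark whose position
# lies inside a sphere placed in the scene. Landmark table is made up.
import numpy as np
from scipy.spatial import cKDTree

landmarks = {                              # name -> position in atlas space (mm)
    "ischial spine (left)":  ( 42.0, -18.5, 10.2),
    "ischial spine (right)": (-41.3, -17.9, 10.6),
    "sacral promontory":     (  0.4, -95.1, 60.8),
}
names = list(landmarks)
tree = cKDTree(np.array([landmarks[n] for n in names]))

sphere_center = (40.0, -20.0, 11.0)        # where the user dropped the sphere
sphere_radius = 8.0                        # mm

for i in tree.query_ball_point(sphere_center, r=sphere_radius):
    print("selected:", names[i])           # -> ischial spine (left)
```

The same index structure supports proximity-based similarity search, returning whatever annotations, literature links, or related structures are attached to the retrieved positions.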

5 Technology

To enable storage and querying of virtual atlas information, the technology stack must be chosen to adequately handle the specific requirements an atlas may have. There is a plethora of database technologies available, and the best fit depends on the atlas specifics. When at the start of the data acquisition the type and characteristics of the data are already known, a traditional relational database may be a good fit. If, however, it is not yet possible to state the exact format of the data that will be a part of the atlas in advance, a so-called schema-less database may be employed, which is considered a promising approach for clinical data storage (Lee et al. 2013). Another technology decision must be made with respect to designing the atlas interface as a desktop application, for the Web, or as a combination of the two. While traditionally desktop applications were needed to utilize advanced graphics processing power, currently many web technologies have become available that allow for interactive visualization in the browser. Examples of this are the WebGL standard, which now allows for volume rendering in the browser (Congote et al. 2011), and the Three.js framework (Danchilla 2012). Yeatman et al. (2018), the authors of the AFQ-Browser tool, argue that browser-based tools will be increasingly employed for high-dimensional data exploration, scientific communication, data aggregation across labs, and data publication. Commercial tools such as the Biodigital Human (Qualter et al. 2012) platform currently also provide content API access as well as a mobile SDK to support mobile and web developers.
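One pragmatic middle ground between the relational and schema-less options discussed above is a relational table with a free-form JSON column for per-structure attributes. The sketch below uses Python's built-in sqlite3 purely as an illustration; the table layout and the example records are assumptions, not a description of any existing atlas backend.

```python
# Fixed columns for what is known up front (identity, hierarchy), plus a
# JSON text column for attributes that differ per structure.
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE structure (
                   id INTEGER PRIMARY KEY,
                   name TEXT NOT NULL,
                   parent_id INTEGER REFERENCES structure(id),
                   metadata TEXT)""")      # free-form JSON document

con.execute("INSERT INTO structure VALUES (1, 'pelvis', NULL, ?)",
            (json.dumps({"source": "cryosection"}),))
con.execute("INSERT INTO structure VALUES (2, 'obturator nerve', 1, ?)",
            (json.dumps({"color": "yellow", "mesh": "obturator.obj"}),))

# Hierarchical navigation: fetch the children of a selected structure.
for sid, name, meta in con.execute(
        "SELECT id, name, metadata FROM structure WHERE parent_id = ?", (1,)):
    print(sid, name, json.loads(meta))     # -> 2 obturator nerve {...}
```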

6 Open Challenges and Opportunities

There are still many open challenges and opportunities for interactive atlas visualization. These challenges and opportunities lie both on the side of atlas creators and on the side of visualization researchers. For atlas creators, it may be worthwhile to employ more advanced data analysis


and visualization techniques, while for visualization researchers, there are open research challenges that atlases present which require the development of new methods. As the volume, variety, and complexity of data to be represented in atlases constantly increases, novel solutions for efficient and effective exploration are needed. Visual analytics, defined as “the science of analytical reasoning facilitated by visual interactive interfaces” (Thomas and Cook 2006), has grown out of the fields of information visualization and scientific visualization in computer science with a specific focus on enabling the analysis of large amounts of heterogeneous data, integrating techniques from visualization, interaction, and automatic data analysis. It is characterized by a strong emphasis on enabling the formulation and validation of hypotheses, facilitated by a combination of human knowledge and intellect with automated techniques. Interactive visualization acts as a high-throughput channel used to make this human-machine interface as efficient as possible. Partly due to its origins in U.S. national security, visual analytics research has mostly focused on abstract data (Cammarano et al. 2007), i.e., points located in a high-dimensional space without any particular a-priori preferences among the dimensions. The aim of interactive visual analysis is to provide users with insight into the meaning of the data. Using multiple, interactively linked views of the same data set allows the user to productively combine different aspects of the available information. The visual information-seeking mantra – overview first, zoom and filter, then details on demand – as defined by Shneiderman (1996), is frequently used as a guiding principle. Weaver (2004) showed that the use of multiple linked views can assist the analysis of complex phenomena but requires careful coordination. The concept of linking and brushing allows the user to select an area or parameter range of interest by interactively placing selections on a rendering. Other views and interactions are linked to the selections and focus on information related to the selected subset. Hauser (2006) states that as soon as a notion of interest in some subset of the data is established, we can visualize the selection in full detail while reducing the amount of visual

93

information about the remaining data. One example of the power of visual analytics in the context of medical data visualization is the work of Termeer et al. (2007), who present an interactive system for the investigation of cardiac models augmented with patient-specific late enhancement MRI data. In the context of atlas data, we believe that similar interactive analysis mechanisms could greatly expand the power and flexibility of existing interfaces. As atlas data is becoming richer and more heterogeneous, the analysis and visualization of such data also becomes more challenging. In future interactive atlas visualization platforms, it could therefore be worthwhile to provide data science facilities and tools directly integrated into an interactive visual analysis interface, such that these large and heterogeneous datasets can be analyzed and visualized more effectively. Examples of such techniques are clustering (Xu and Wunsch 2005) and dimensionality reduction (van der Maaten et al. 2009). Integrating data science tools could provide more insight into the complex data sets that an atlas may constitute and may lift the purpose of an atlas from use as a descriptive resource to use as a research tool. Furthermore, as the curation of atlas data takes place at multiple scales, all the way from the organism level to detailed DNA acquisitions, and in multiple domains, interlinking across these scales and domains would be an interesting avenue for visualization research, extending upon the notion of seamless transitions between visual representations (Miao et al. 2018). Many of the current atlas interfaces are set up in such a way that there is a general 3D model visible, and additional information can be retrieved via hierarchical menus. Interactive visualization of variation and distributions within data collections is currently under-explored. While the AFQ-Browser (Yeatman et al. 2018) features tractography information from a cohort, the mean and variation are not explicitly visualized. The uncertainty visualization techniques described in Sect. 3.2 could play a crucial role here to showcase variability and distribution. In recent years, more and more emphasis has been placed on the advantages of open science. By having platforms and data openly available,


there is an increase in transparency and reproducibility, which facilitates a more efficient scientific process (Molloy 2011). In this light, many of the atlases are also freely available, as, for instance, the atlases of the Allen Institute are. To further strengthen such initiatives, standardization of atlas formats would allow easier integration and exchange of information between different initiatives worldwide. With the movement towards open access of publicly funded research data, there are now a multitude of publicly available datasets. These data collections are often stored in larger repositories dedicated to a specific theme, such as for example the Cancer Imaging Archive (Clark et al. 2013). It would further enrich atlases if they could integrate stronger links to these general data repositories and specifically to closely related datasets within such repositories.
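As a small illustration of the data science facilities suggested above, the following sketch projects per-subject feature vectors to two dimensions and clusters them with scikit-learn; the synthetic features stand in for measurements registered to an atlas, and the two-group structure is engineered for the example.

```python
# Dimensionality reduction for an overview plot plus clustering to reveal
# groups within a cohort registered to the atlas. Features are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 1.0, (50, 20)),   # cohort group A
                      rng.normal(3.0, 1.0, (50, 20))])  # cohort group B

embedding = PCA(n_components=2).fit_transform(features)  # 2D scatter-plot coordinates
clusters = KMeans(n_clusters=2, n_init=10,
                  random_state=0).fit_predict(features)

print(embedding.shape)        # (100, 2)
print(np.bincount(clusters))  # roughly 50/50 if the groups separate
```

In an interactive atlas interface, brushing such an embedding could be linked back to the spatial views, highlighting the corresponding structures or subjects.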

7 Conclusion

We have presented an overview of digital biomedical atlas tasks and applications along with relevant visualization and interaction techniques for interactive atlas visualization. There are still many challenges and research opportunities for atlas developers and visualization researchers alike. We hope that this chapter can form a solid foundation and reference for both of these target audiences to further advance the field towards advanced interactive visualization for virtual atlases.

Acknowledgments The research presented in this paper was supported by the Trond Mohn Foundation under grant ID BFS2016TMT01 and the MetaVis project (#250133) funded by the Research Council of Norway.

References

Aljabar P, Heckemann RA, Hammers A, Hajnal JV, Rueckert D (2009) Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy. NeuroImage 46(3):726–738. https://doi.org/10.1016/j.neuroimage.2009.02.018 Armit C, Venkataraman S, Richardson L, Stevenson P, Moss J, Graham L, Ross A, Yang Y, Burton N,

Rao J et al (2012) eMouseAtlas, EMAGE, and the spatial dimension of the transcriptome. Mamm Genome 23(9–10):514–524. https://doi.org/10.1007/s00335-012-9407-1 Bruckner S, Gröller ME (2005) VolumeShop: an interactive system for direct volume illustration. In: Proceedings of IEEE visualization, pp 671–678. https://doi.org/10.1109/VISUAL.2005.1532856 Bruckner S, Gröller ME (2006) Exploded views for volume data. IEEE Trans Vis Comput Graph 12(5):1077–1084. https://doi.org/10.1109/TVCG.2006.140 Bruckner S, Gröller ME (2007) Style transfer functions for illustrative volume rendering. Comput Graphics Forum 26(3):715–724. https://doi.org/10.1111/j.1467-8659.2007.01095.x Bruckner S, Grimm S, Kanitsar A, Gröller ME (2006) Illustrative context-preserving exploration of volume data. IEEE Trans Vis Comput Graph 12(6):1559–1569. https://doi.org/10.1109/TVCG.2006.96 Bruckner S, Soltészová V, Groller E, Hladuvka J, Buhler K, Jai YY, Dickson BJ (2009) Braingazer – visual queries for neurobiology research. IEEE Trans Vis Comput Graph 15(6):1497–1504. https://doi.org/10.1109/TVCG.2009.121 Burns M, Klawe J, Rusinkiewicz S, Finkelstein A, DeCarlo D (2005) Line drawings from volume data. ACM Trans Graph 24(3):512–518. https://doi.org/10.1145/1073204.1073222 Cabezas M, Oliver A, Lladó X, Freixenet J, Cuadra MB (2011) A review of atlas-based segmentation for magnetic resonance brain images. Comput Methods Prog Biomed 104(3):e158–e177. https://doi.org/10.1016/j.cmpb.2011.07.015 Cammarano M, Dong X, Chan B, Klingner J, Talbot J, Halevey A, Hanrahan P (2007) Visualization of heterogeneous data. IEEE Trans Vis Comput Graph 13(6):1200–1207. https://doi.org/10.1109/TVCG.2007.70617 Chintalapani G, Ellingsen LM, Sadowsky O, Prince JL, Taylor RH (2007) Statistical atlases of bone anatomy: construction, iterative improvement and validation. In: International conference on medical image computing and computer-assisted intervention, Springer, pp 499–506. https://doi.org/10.1007/978-3-540-75757-3_61 Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M et al (2013) The Cancer imaging archive (TCIA): maintaining and operating a public information repository. J Digit Imaging 26(6):1045–1057. https://doi.org/10.1007/s10278-013-9622-7 Clarkson MD (2016) Representation of anatomy in online atlases and databases: a survey and collection of patterns for interface design. BMC Dev Biol 16(1):18. https://doi.org/10.1186/s12861-016-0116-y Congote J, Segura A, Kabongo L, Moreno A, Posada J, Ruiz O (2011) Interactive visualization of volumetric data with WebGL in real-time. In: Proceedings of the 16th international conference on 3D web technology, ACM, pp 137–146. https://doi.org/10.1145/2010425.2010449

Correa C, Silver D, Chen M (2006) Feature aligned volume manipulation for illustration and visualization. IEEE Trans Vis Comput Graph 12(5):1069–1076. https://doi.org/10.1109/TVCG.2006.144 Danchilla B (2012) Three.Js framework. In: Beginning WebGL for HTML5, Springer, pp 173–203. https://doi.org/10.1007/978-1-4302-3997-0_7 Diepstraten J, Weiskopf D, Ertl T (2003) Interactive cutaway illustrations. Comput Graphics Forum 22(3):523–532. https://doi.org/10.1111/1467-8659.t01-3-00700 Dogdas B, Stout D, Chatziioannou AF, Leahy RM (2007) Digimouse: a 3D whole body mouse atlas from CT and cryosection data. Phys Med Biol 52(3):577. https://doi.org/10.1088/0031-9155/52/3/003 Dumpuri P, Thompson RC, Dawant BM, Cao A, Miga MI (2007) An atlas-based method to compensate for brain shift: preliminary results. Med Image Anal 11(2):128–145. https://doi.org/10.1016/j.media.2006.11.002 Feiner SK, Seligmann DD (1992) Cutaways and ghosting: satisfying visibility constraints in dynamic 3D illustrations. Vis Comput 8(5):292–302. https://doi.org/10.1007/BF01897116 Gleicher M, Albers D, Walker R, Jusufi I, Hansen CD, Roberts JC (2011) Visual comparison for information visualization. Inf Vis 10(4):289–309. https://doi.org/10.1177/1473871611416549 Hauser H (2006) Generalizing focus+context visualization, Springer, pp 305–327. https://doi.org/10.1007/3-540-30790-7_18 Hawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, Shen EH, Ng L, Miller JA, Van De Lagemaat LN, Smith KA, Ebbert A, Riley ZL et al (2012) An anatomically comprehensive atlas of the adult human brain transcriptome. Nature 489(7416):391. https://doi.org/10.1038/nature11405 Hernell F, Ljung P, Ynnerman A (2010) Local ambient occlusion in direct volume rendering. IEEE Trans Vis Comput Graph 16(4):548–559. https://doi.org/10.1109/TVCG.2009.45 Jones AR, Overly CC, Sunkin SM (2009) The Allen brain atlas: 5 years and beyond. Nat Rev Neurosci 10(11):821. https://doi.org/10.1038/nrn2722 Jönsson D, Sundén E, Ynnerman A, Ropinski T (2014) A survey of volumetric illumination techniques for interactive volume rendering. Comput Graphics Forum 33(1):27–51. https://doi.org/10.1111/cgf.12252 Kelc R (2012) Zygote body: a new interactive 3-dimensional didactical tool for teaching anatomy. WebmedCentral ANATOMY 3(1):WMC002903. https://doi.org/10.9754/journal.wmc.2012.002889 Kikinis R, Shenton ME, Iosifescu DV, McCarley RW, Saiviroonporn P, Hokama HH, Robatino A, Metcalf D, Wible CG, Portas CM et al (1996) A digital brain atlas for surgical planning, model-driven segmentation, and teaching. IEEE Trans Vis Comput Graph 2(3):232–241. https://doi.org/10.1109/2945.537306 Kim K, Carlis JV, Keefe DF (2017) Comparison techniques utilized in spatial 3D and 4D data visualizations: a survey and future directions. Comput Graph 67:138–147. https://doi.org/10.1016/j.cag.2017.05.005 Kniss J, Premoze S, Hansen C, Shirley P, McPherson A (2003) A model for volume lighting and modeling. IEEE Trans Vis Comput Graph 9(2):150–162. https://doi.org/10.1109/TVCG.2003.1196003 Krüger J, Schneider J, Westermann R (2006) Clearview: an interactive context preserving hotspot visualization technique. IEEE Trans Vis Comput Graph 12(5):941–948. https://doi.org/10.1109/TVCG.2006.124 Kuß A, Gensel M, Meyer B, Dercksen VJ, Prohaska S (2010) Effective techniques to visualize filament-surface relationships. Comput Graphics Forum 29:1003–1012. https://doi.org/10.1111/j.1467-8659.2009.01703 Lawonn K, Viola I, Preim B, Isenberg T (2018) A survey of surface-based illustrative rendering for visualization. Comput Graphics Forum. https://doi.org/10.1111/cgf.13322 Lee KKY, Tang WC, Choi KS (2013) Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage. Comput Methods Prog Biomed 110(1):99–109. https://doi.org/10.1016/j.cmpb.2012.10.018 Levoy M (1988) Display of surfaces from volume data. IEEE Comput Graph Appl 8(3):29–37. https://doi.org/10.1109/38.511 Li W, Agrawala M, Curless B, Salesin D (2008) Automated generation of interactive 3D exploded view diagrams. ACM Trans Graph 27(3):101:1–101:7. https://doi.org/10.1145/1360612.1360700 Lu A, Morris CJ, Ebert DS, Rheingans P, Hansen C (2002) Non-photorealistic volume rendering using stippling techniques. In: Proceedings of IEEE visualization, pp 211–218. https://doi.org/10.1109/VISUAL.2002.1183777 Magnus JG, Bruckner S (2018) Interactive dynamic volume illumination with refraction and caustics. IEEE Trans Vis Comput Graph 24(1):984–993. https://doi.org/10.1109/TVCG.2017.2744438 Miao H, De Llano E, Isenberg T, Gröller ME, Barišić I, Viola I (2018) DimSUM: dimension and scale unifying map for visual abstraction of DNA origami structures. Comput Graphics Forum 37(3):403–413. https://doi.org/10.1111/cgf.13429 Molloy JC (2011) The open knowledge foundation: open data means better science. PLoS Biol 9(12):e1001195. https://doi.org/10.1371/journal.pbio.1001195 Netter FH (2017) Atlas of human anatomy E-book. Elsevier Health Sciences Oh SW, Harris JA, Ng L, Winslow B, Cain N, Mihalas S, Wang Q, Lau C, Kuan L, Henry AM et al (2014) A mesoscale connectome of the mouse brain. Nature 508(7495):207. https://doi.org/10.1038/nature13186 Potter K, Rosen P, Johnson CR (2012) From quantification to visualization: a taxonomy of uncertainty visualization approaches. In: Uncertainty quantification in scientific computing, Springer, pp 226–249. https://doi.org/10.1007/978-3-642-32677-6_15

Preim B, Saalfeld P (2018) A survey of virtual human anatomy education systems. Comput Graph 71:132–153. https://doi.org/10.1016/j.cag.2018.01.005 Qualter J, Sculli F, Oliker A, Napier Z, Lee S, Garcia J, Frenkel S, Harnik V, Triola M (2012) The biodigital human: a web-based 3D platform for medical visualization and education. Stud Health Technol Inform 173:359–361. https://doi.org/10.3233/978-1-61499-022-2-359 Raj M, Mirzargar M, Preston JS, Kirby RM, Whitaker RT (2016) Evaluating shape alignment via ensemble visualization. IEEE Comput Graph Appl 36(3):60–71. https://doi.org/10.1109/MCG.2015.70 Rautek P, Bruckner S, Gröller E (2007) Semantic layers for illustrative volume rendering. IEEE Trans Vis Comput Graph 13(6):1336–1343. https://doi.org/10.1109/TVCG.2007.70591 Shen EH, Overly CC, Jones AR (2012) The Allen human brain atlas: comprehensive gene expression mapping of the human brain. Trends Neurosci 35(12):711–714. https://doi.org/10.1016/j.tins.2012.09.005 Shneiderman B (1996) The eyes have it: a task by data type taxonomy for information visualizations. In: Proceedings of the IEEE symposium on visual languages, pp 336–343. https://doi.org/10.1109/VL.1996.545307 Smit N, Kraima A, Jansma D, de Ruiter M, Botha C (2012) A unified representation for the model-based visualization of heterogeneous anatomy data. In: EuroVis short papers, pp 85–89. https://doi.org/10.2312/PE/EuroVisShort/EuroVisShort2012/085-089 Smit N, Hofstede CW, Kraima A, Jansma D, de Ruiter M, Eisemann E, Vilanova A (2016) The Online Anatomical Human: web-based anatomy education. In: Proceedings of the 37th annual conference of the European Association for computer graphics: education papers, Eurographics Association, pp 37–40. https://doi.org/10.2312/eged.20161025 Smit N, Lawonn K, Kraima A, DeRuiter M, Sokooti H, Bruckner S, Eisemann E, Vilanova A (2017) Pelvis: atlas-based surgical planning for oncological pelvic surgery. IEEE Trans Vis Comput Graph 23(1):741–750. https://doi.org/10.1109/TVCG.2016.2598826 Svakhine N, Ebert DS, Stredney D (2005) Illustration motifs for effective medical volume illustration. IEEE Comput Graph Appl 25(3):31–39. https://doi.org/10.1109/MCG.2005.60 Swoboda N, Moosburner J, Bruckner S, Yu JY, Dickson BJ, Bühler K (2017) Visualization and quantification for interactive analysis of neural connectivity in drosophila. Comput Graphics Forum 36(1):160–171. https://doi.org/10.1111/cgf.12792

Taerum T, Sousa MC, Samavati F, Chan S, Mitchell JR (2006) Real-time super resolution contextual close-up of clinical volumetric data. In: Proceedings of EuroVis, pp 347–354. https://doi.org/10.2312/VisSym/EuroVis06/347-354 Termeer M, Bescós JO, Breeuwer M, Vilanova A, Gerritsen F, Gröller E (2007) CoViCAD: comprehensive visualization of coronary artery disease. IEEE Trans Vis Comput Graph 13(6):1632–1639. https://doi.org/10.1109/TVCG.2007.70550 Thomas JJ, Cook KA (2006) A visual analytics agenda. IEEE Comput Graph Appl 26(1):10–13. https://doi.org/10.1109/MCG.2006.5 Tominski C, Gladisch S, Kister U, Dachselt R, Schumann H (2017) Interactive lenses for visualization: an extended survey. Comput Graphics Forum 36(6):173–200. https://doi.org/10.1111/cgf.12871 Trapp M, Schulze F, Bühler K, Liu T, Dickson BJ (2013) 3D object retrieval in an atlas of neuronal structures. Vis Comput 29(12):1363–1373. https://doi.org/10.1007/s00371-013-0871-8 van der Maaten L, Postma E, van den Herik J (2009) Dimensionality reduction: a comparative review. J Mach Learn Res 10:66–71. doi:10.1.1.112.5472 Viola I, Gröller E (2005) Smart visibility in visualization. In: Computational aesthetics, pp 209–216. https://doi.org/10.2312/COMPAESTH/COMPAESTH05/209-216 Viola I, Isenberg T (2018) Pondering the concept of abstraction in (illustrative) visualization. IEEE Trans Vis Comput Graph 24(9):2573–2588. https://doi.org/10.1109/TVCG.2017.2747545 Viola I, Kanitsar A, Gröller ME (2005) Importance-driven feature enhancement in volume visualization. IEEE Trans Vis Comput Graph 11(4):408–418. https://doi.org/10.1109/TVCG.2005.62 Walter T, Shattuck DW, Baldock R, Bastin ME, Carpenter AE, Duce S, Ellenberg J, Fraser A, Hamilton N, Pieper S et al (2010) Visualization of image data from cells to organisms. Nat Methods 7(3s):S26. https://doi.org/10.1038/nmeth.1431 Weaver C (2004) Building highly-coordinated visualizations in improvise. In: Proceedings of IEEE InfoVis, pp 159–166. https://doi.org/10.1109/INFVIS.2004.12 Xu R, Wunsch D (2005) Survey of clustering algorithms. IEEE Trans Neural Netw 16(3):645–678. https://doi.org/10.1109/TNN.2005.845141 Yeatman JD, Richie-Halford A, Smith JK, Keshavan A, Rokem A (2018) A browser-based tool for visualization and analysis of diffusion MRI data. Nat Commun 9(1):940. https://doi.org/10.1038/s41467-018-03297-7

An Experiential Learning-Based Approach to Neurofeedback Visualisation in Serious Games

Ryan Murdoch

Abstract

This study explores brain-computer interfacing, its possible use in serious or educational games and frameworks. Providing real-time feedback regarding cognitive states and behaviours can be a powerful tool for mental health education and games can offer unique and engaging environments for these neurofeedback experiences. We explore how EEG neurofeedback systems can be affordably created for further research and experimentation and suggest design choices that may assist in developing effective experiences of this nature.

Keywords: Brain-computer interfacing · Neurofeedback · Serious games · Meditation

1 Introduction

R. Murdoch (*) Graduate of Glasgow School of Art’s School of Simulation and Visualization, The Glasgow School of Art, Glasgow, UK e-mail: [email protected]

Video games present an exciting and experimental multidisciplinary frontier from which many fields can learn and benefit. Two of those fields are visualisation and education. Visualisation and

graphics are at the very core of modern video games, with games pioneering 3D digital visualisation and continuing to push technology to its limits. The relationship between games and education may be less obvious, but at their heart games are experiences all about learning. A good game is easy to play and therefore easy to learn, and the way players are taught is often inventive and engaging. As a result, games present a powerful vehicle for teaching; games designed with this in mind are known as serious games. In the pedagogical world of serious games, the approaches often considered are those related to ‘learning by doing’, such as the experiential theory of learning proposed by Kolb in 1984. The theory focuses on learning as four stages: experiencing information, reflecting upon it, formulating abstract concepts and interactively experimenting to test the validity of those concepts. Games, as a medium, offer a unique way to experience and interact with data, and the degree to which this is true is only increasing as a result of emerging technologies. Virtual reality offers a way to immerse users in a fully 3D stereoscopic world of learning that they can interact with naturally and intuitively. Off-the-shelf brain-computer interfacing technology is democratising the previously medical field of electroencephalography, allowing game developers and researchers to include user brain activity in educational games and virtual reality experiences.


This research hopes to outline an approach for aiding user learning through novel visualisation of, and interaction with, biometric data in the context of experience-based educational games, evaluating and exploring the potential of readily available consumer technologies for neurofeedback and visualisation. In particular, it focuses on the profound effect of “brain-control” in video games and VR experiences as a means to engage users with tacit knowledge of cognitive ability, and on how this can be leveraged for mental health education and treatment.

2 Brain-Computer Interfacing

In 1973 Jacques Vidal first discussed brain-computer interfacing when he asked if “observable electrical brain signals could be put to work as carriers of information in man-computer communication”, wondering whether it would be possible to control “prosthetic device or spaceships” with our minds alone. This may sound fantastical, but today prosthetic limbs are being controlled by brain-computer interfaces (Coffey et al. 2014), and by the 1970s human physiology had been monitored by and connected to machines for some 50 years: the polygraph or lie detector, an early example of machine-based monitoring of the body’s signals, was invented in 1921. Today many examples of brain-monitoring techniques exist, including fMRI and electroencephalography (the approach focused on in this study).

2.1 EEG

EEG or electroencephalography is a method of using electrodes placed on the scalp to measure electrical brain activity, pioneered by Hans Berger in 1924. EEG is a particularly useful technique in that it is non-invasive; it has seen vast use in medicine and is often considered ‘one of the most useful tools in the diagnosis of epilepsy and other neurological disorders’ (Miranda and Brouse 2005). When neurons in the brain communicate information to one another, the small electrical signals can be detected; together, some 10^6 neurons create a measurable reading of a few microvolts that can be monitored this way. Despite being filtered by the meninges, skull and scalp, there is a latency of less than a millisecond in this method of monitoring (Woodman 2010). From these small electrical impulses, evaluations of different frequencies and signals can be used to understand various cognitive states and brain activity. However, eye blinks, eye movements, heartbeat and other bodily functions and movements can create artefacts in the EEG signal.
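A quantity commonly derived from these frequency evaluations, for example in neurofeedback applications, is relative band power. The sketch below estimates relative alpha-band (8–12 Hz) power with Welch's method; the synthetic signal and the 256 Hz sampling rate are assumptions standing in for a live EEG stream from a consumer headset.

```python
# Relative alpha-band power, a common neurofeedback measure, estimated
# from a power spectral density computed with Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = (20e-6 * np.sin(2 * np.pi * 10 * t)   # 10 Hz alpha component
       + 5e-6 * np.random.randn(t.size))    # broadband noise, blink artefacts etc.

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 12)
relative_alpha = psd[alpha].sum() / psd.sum()
print(f"relative alpha power: {relative_alpha:.2f}")   # feedback value in [0, 1]
```

In practice the artefacts mentioned above would first be reduced by filtering or artefact-rejection steps before such a feedback value is computed.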

2.2 BCI in Art and Popular Culture

Brain-computer interfacing has long been a fascination of science fiction writers and directors. It has featured in Star Trek, in novels like “Neuromancer”, the 1984 science fiction tale by William Gibson, and in films such as RoboCop (1987), The Matrix (1999) and more recently Chappie (2015). Despite wacky depictions in science fiction, many brain-computer interfacing pioneers have been using BCI in their art and work since the 1960s. In 1965 Alvin Lucier (1982) connected EEG to amplifiers to shake percussion in his musical work “Music for Solo Performer”. Nina Sobell used two participants monitored by EEG to move shapes on a television set, creating her collaborative ‘Brain Wave Drawings’ in 1972. Today people continue to use brain-computer interfaces creatively, such as in ‘Enheduanna – A Manifesto of Falling’ (Zioga et al. 2017), a ‘brain-computer cinema performance’ with interaction between a performer and two audience members using BCI.

2.3 BCI in Video Games

With BCI solutions, in particular those that make use of EEG and ERP, becoming increasingly accessible and affordable to developers there has


been an increase in research into BCI in video games over the last 20 years. Examples include ‘Brainball’ (Hjelm and Browall 2000), which allowed players to relax to move a metal ball on a game table, with a visualization of their relaxation as measured by EEG displayed to the players on a projected screen. Krepki et al. (2007) created a series of simple 2D BCI-controlled games. Players wore multi-electrode caps and their EEG feedback was used to allow them to interact with various game scenarios, exploring BCI’s “path towards multimedia applications”. Bos et al. (2010) explored using user-focused human-computer interaction approaches in the domain of BCI games, discussing the increasing availability of BCI solutions and their possible use in games and new mediums: “Recently, the focal point has shifted to a new group of potential users: the general population. From a commercial perspective, such a large target group could be a very lucrative endeavour. Especially gamers are great potential users, as they are early adopters, willing to learn a new modality if it could prove advantageous” (Bos et al. 2010). Bonnet et al. (2013) experimented with multiuser BCI games, finding that a multiplayer element improved players’ engagement in the game and reflecting on the trend of moving from more medical BCI uses towards games and media over the preceding decade. It can be seen that BCI in video games is an increasing area of academic interest, providing for technical and artistic creativity through the medium of video games.
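A Brainball-style mapping from a relaxation measure to game state can be sketched in a few lines: smooth a noisy per-frame score and nudge the ball whenever the smoothed value crosses a threshold. The score stream, thresholds, and update rates below are invented for illustration and are not taken from the systems cited.

```python
# A Brainball-style control loop: a smoothed relaxation score (for example
# an alpha/beta power ratio) moves the ball when the player relaxes.
import random

def relaxation_stream():
    while True:
        yield random.uniform(0.0, 2.0)      # stand-in for a live EEG band ratio

ball_position = 0.0                         # 0 = centre of the game table
smoothed = 1.0
for _, score in zip(range(600), relaxation_stream()):   # ~10 s at 60 Hz
    smoothed = 0.95 * smoothed + 0.05 * score           # exponential smoothing
    if smoothed > 1.1:                                  # clearly relaxed
        ball_position += 0.01                           # ball moves away
    elif smoothed < 0.9:                                # clearly tense
        ball_position -= 0.01
print(f"ball position after 10 s: {ball_position:+.2f}")
```

The smoothing step matters for playability: raw EEG-derived scores fluctuate frame to frame, and feeding them directly into game state produces jittery, discouraging feedback.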

2.4 Neurofeedback: A Tool for Learning

Neurofeedback is the process of providing a user with visual or sonified feedback on their brain activity. This use of BCI was initially limited to experimental use in medical fields, displaying potential in treating ADHD in children (Duric et al. 2012) and epilepsy (Sterman and Egner 2006). With a notable increase in interest for BCI, and so neurofeedback, in multimedia applications, there has been criticism from the more established fields: "However, the ease in the development of new software programs for feedback functionality and displays, together with the entry into the field of a diverse group of professionals and semi-professionals, has led to an unfortunate lack of consensus on methodology and standards of practice. In turn this has contributed to reluctance by the academic and medical communities to endorse the field" (Sterman and Egner 2006).

Despite the criticism of the more creative side of neurofeedback, meaningful work has been carried out within the domain of serious games. Friedrich et al. (2015) created a serious game focused on neurofeedback for children with autism spectrum disorder, monitoring the mu signal via EEG. Parents reported improved behaviour in the day-to-day life of the young participants, supporting neurofeedback as a powerful learning tool with potential in serious games. Hinterberger (2011) combined EEG feedback with electrocardiogram (ECG) heart monitoring to create 'The Sensorium: a multimodal neurofeedback environment', projecting a biofeedback-augmented light show onto the walls of the test room accompanied by a reactive soundscape. This visual and sonified neurofeedback was created to help users interact with their physiological functions; participants reported "an increase in contentment, relaxation, happiness and inner harmony" after using the neurofeedback experience. 'MeditAid', developed by Sas and Chopra (2015), is a wearable neurofeedback device that provides audio feedback on states of meditation. It uses dynamic audio to aid users in the practice of deep meditation, employing 'binaural beats', a method of brain entrainment that creates a psychoacoustic frequency to encourage similar frequencies neurologically. Brain entrainment methods have been shown to be beneficial for those "suffering from cognitive functioning deficits, stress, pain, headache/migraines, PMS, and behavioural problems" (Huang 2008). MeditAid was shown to benefit the "deepening of meditative states", helping users engage with meditation and mindfulness, "particularly novice meditators" (Sas and Chopra 2015).

3 Serious Games

"When we think of games, we think of fun. When we think of learning we think of work. Games show us this is wrong" (Gee 2005). Games indeed are a fun way to learn; they can also be an extremely effective way to learn, and for a reason that is often overlooked. Every game has a set of rules, and in video games there are often a great number of these rules, which the player is required to learn quickly. Game developers quickly learned that games that were difficult to pick up and play were less engaging, and that as games got more complicated, clever ways of getting players to learn how to play them were required. Tricks like using colours and graphics to draw players' attention to what they should be doing, tutorials built into the early stages of the game to explain game mechanics, or audio to make important actions stand out were adopted, continually tweaked and developed as games progressed. Serious games cover a vast number of disciplines, from pedagogy, psychology and user experience to game art, audio, design and narrative (cf. Lim et al. 2014).

3.1 Models for Learning

Mitgutsch (2011) describes serious games as "designing a spoonful of sugar and filling it with 'serious' content"; pedagogy and learning models are the tools that help get the serious into the sugar. These models of learning form a bridge between the fun world of games and the much more rigid world of education and learning, allowing the two to attempt to work together. There are many models for learning, and many of these models and approaches are relevant to serious games.

3.1.1 Cognitive Learning

Cognitive approaches see learning as behavioural change based on gained information. An example of a heavily cognitive approach to learning is Bloom's Taxonomy (cf. Bloom et al. 1956). It was created to provide educators with a common means of assessing students' learning and is divided into three domains: cognitive, regarding intellectual capacity; affective, considering the learner's motivations and feelings; and psychomotor, covering physical skills and tasks (Catalano et al. 2014).

3.1.2 Experiential Learning

The experiential theory of learning focuses on learning from experience and was proposed by Kolb in 1984. It draws heavily on 'constructivist' ideas of learning, suggesting that learning is a result of active interaction and participation. Kolb's learning theory consists of four stages: "obtaining concrete experience, observing and reflection on the experience, formulating abstract concepts, experimenting to test the validity of these concepts."

3.2 Deep Learning

We can look to learning frameworks in an attempt to guide serious game design, but what is the ultimate goal of serious games? What serious games look to achieve is the transfer of information and learned skills from the game world to the real world. This is known as transformative or deep learning, where information is so truly known it can be re-contextualized and understood in various framings and situations. James Paul Gee highlights the difference between verbal learning and this kind of deep learning by comparing types of understanding. He describes general or verbal understanding as "the ability to explicate one's understanding in terms of other words or general principles but not necessarily an ability to apply this knowledge to actual situations" (Gee 2008). This is compared to 'situated understanding', the ability to apply knowledge across situations, with a customizable and deep understanding of the information. Using serious games, the intention is to create educational experiences, informed by pedagogy and learning approaches, which have real impact and allow for this 'situated understanding' of the chosen learning outcomes.

3.3 A Framework for Design and Analysis

In the literature there are several approaches to serious game design best practice and evaluation, such as those proposed by Catalano et al. (2014) and Mitgutsch (2011). Catalano et al. lay out best practice for designing serious games with learning outcomes in mind, suggesting that 'learning should be situated', meaning that game worlds and environments should be created to "fit the context of use at best"; that is, game environments should reflect or represent the situation in which the learning outcomes put forward by the game would be used. For instance, a game intending to teach players about the medieval period would be well served by an appropriate environment, such as a castle or feudal village. With reference to experiential learning, Catalano et al. make an argument for 'minimizing cognitive load': by keeping the experience light and only including what is absolutely necessary for the player to achieve their goals, experiences can avoid becoming overcomplicated, maintaining player motivation and engagement and not creating unnecessary obstacles to play and therefore learning. This is further supported by "engaging the learner constructively/experientially", using game feedback and experience to support engaging play. Catalano et al. also encourage designers to "facilitate the learning task", supporting the in-game learning with additional materials and information, and to create flexibility, reusability and exploitability in game experiences. By creating a plethora of scenarios and situations within a game, players have more options and therefore greater potential for exploration, engagement and learning. With regard to evaluation, Catalano et al. propose using questionnaires and qualitative interviews to map the game experience to existing models of learning.

Mitgutsch (2011) works with Bateson's (1972) 'hierarchical model of learning levels' to suggest a framework for learning in serious games, inspired by Bateson's focus on learning as change: "The word 'learning' undoubtedly denotes change of some kind. To say what kind of change is a delicate matter" (Bateson 1972). The levels, when applied to games, are:

Level zero: the player's natural reaction to the events of the game, the simplest kind of response. This covers the basic interactions the player has with the game, without contextualizing them within the game, and is supported by general game usability.

Level one: this level supports the contextualization of the information and events gathered at level zero in the broader context of the game. Here players make sense of the meaning of the game and have learned how to play, creating new strategies and approaches and exploring the game, empowered by a new level of understanding.

Level two: the third and most important 'learning' level. Mitgutsch describes this level as the question "What does this mean to me?", which he argues supports deep or transformational learning. By taking the learned skills, knowledge and context of the game and applying them to themselves, their world and their beliefs, the player 'transfers' the knowledge acquired in the game into something meaningful, taking their learning from a low level of understanding to what Gee would refer to as a situated understanding of the intended learning outcomes.

These frameworks and approaches outline the importance of considering learning models, in particular constructivist and experiential learning approaches. They also consider player motivation and engagement, and the accessibility and ease of game experiences. Furthermore, these approaches aim to support and assess true, deep, meaningful or transformational learning within serious games.

4 Meditation

"Meditation can be defined to include any of a family of practices in which the practitioner trains an individual to consciously calm his/her mind in order to realize some benefit or achieve inner peace or harmony" (Chen et al. 2011). This study focuses on serious game theory, and serious games always have an intended educational outcome. Often seen as making boring topics more fun, serious games can also be used to teach things that are difficult to convey with more traditional approaches. Meditation is something that is uniquely experienced and as such is a kind of tacit knowledge that is hard to transfer traditionally. By using serious game theory to structure experiences for teaching meditation, together with novel biometric approaches, we attempt to make meditation a more tangible, approachable and hopefully educational experience.

4.1 Practices

There are many different variations of meditation practiced all around the world; for the sake of simplicity, this study will break meditation down into two comparable practices that can be universally practiced.

4.1.1 Deep Meditation

Deep meditation represents extremely involved states of meditation, often achieved by closing the eyes, attempting to 'clear the mind' and breathing exercises. Mantras and visualization can also aid meditation practice. Meditation can be measured using electroencephalography (Sect. 2.1) through spectral analysis of the EEG data, mostly as an increase in alpha-frequency activity (Banquet 1973).
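As a minimal illustration of that kind of measurement (a sketch, not the pipeline used in the cited studies), alpha-band power can be estimated from a raw EEG trace with a Fourier transform; the 256 Hz sampling rate and the 8-12 Hz band edges are conventional assumptions:

```python
import numpy as np

def alpha_band_power(eeg, fs=256.0):
    """Mean spectral power in the 8-12 Hz alpha band of a 1-D EEG trace."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)  # frequency axis in Hz
    power = np.abs(np.fft.rfft(eeg)) ** 2          # periodogram-style power
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(power[band].mean())

signal = np.random.randn(4 * 256)                  # 4 s of toy data at 256 Hz
print(alpha_band_power(signal))
```

In practice, a rise in this quantity relative to a resting baseline would be read as a move towards a more meditative state.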

4.1.2 Mindfulness Meditation

Mindfulness meditation, on the other hand, can be practiced with eyes open; when using this approach, the meditator tries to bring an increased awareness to the present moment. Individuals can practice mindfulness when walking, listening to music or watching television. Bishop et al. (2004) describe this practice: "We see mindfulness as a process of regulating attention in order to bring a quality of non-elaborative awareness to current experience and quality of relating one's experience within an orientation of curiosity, experiential openness, and acceptance. We further [see] mindfulness as a process of gaining insight into the nature of one's mind and the adoption of a de-centred perspective… on thoughts and feelings so that they can be experienced in terms of their subjective (versus their necessary validity) and transient nature (versus their permanence)." This description shows the practice of mindfulness to be deeply experiential, supporting the idea that this kind of meditation could benefit from experiential learning-based serious game approaches. But what is the value of teaching these practices? Meditation has been shown to offer a wide range of benefits for personal health and mental wellbeing, in particular for those suffering from mental health issues.

4.2 Benefits

Mental health is a very serious issue in the UK. In 2015 it represented the largest portion of the NHS disease burden (28%), affecting one in four people annually; despite this, during that year mental health received only 5.5% of NHS spending (Mentalhealth.org 2015). In 2016, in any given week, one in six people would experience a symptom of mental illness, such as depression, anxiety or severe stress (Mental Health Foundation 2019). Currently, mindfulness meditation is deployed by the NHS to help manage the symptoms of mental illness, with the practice taught through counselling and learning materials. An experiential serious game approach using brain-computer interfacing, as discussed in the next section, could provide an engaging alternative to these approaches. Meditation practices have been shown to benefit mental health and cognitive function widely, reducing and managing the symptoms of anxiety (Chen et al. 2011), addressing 'prefrontal EEG asymmetry in previously depressed individuals' (Barnhofer et al. 2010; Ramel et al. 2004) and providing therapeutic approaches to managing stress (Dooley 2009). Additional benefits include managing pain (Golianu and Waelde 2012), increasing cognitive function and emotional regulation in students (Waters et al. 2017), aiding students with ADHD academically (Singh et al. 2015) and improving attention and focus in general (Jha et al. 2007). It is clear these practices are worth teaching, but the previously mentioned tacit nature of meditation and the difficulty of quantifying it pose challenges. Some approaches have been created to try to measure mindfulness, such as the Freiburg Mindfulness Inventory (FMI) (Walach et al. 2006).

4.3 Current Approaches to Novel Learning

Beyond the approaches previously mentioned, counselling and written learning material, other innovative approaches exist to teach meditation: gamified applications like 'HeadSpace' and 'Stop, Breathe, Think', available on mobile devices, and games such as Deep VR (https://www.exploredeep.com), a virtual reality game controlled by the player's breathing. This study suggests that brain monitoring techniques able to measure meditation (EEG) could create novel learning solutions.

5 Method and Design

To explore the potential of off-the-shelf BCI technology in video games a prototype game was created; this example exists as an attempt to fit such experiences into existing serious game theories and frameworks. As a novel technology, this approach is exploratory, examining what these kinds of games could look like and what kind of educational use they could potentially have. There are several challenges to creating neurofeedback games, and this study suggests accessible solutions for developers and educators to overcome them.

5.1 Bringing Brain-Computer Interfacing to Game Engines

Game engines exist as powerful environments for developers to work within to create games. As an essential part of the game development process, game studios may build their own bespoke engine; other engines, such as Unity and Unreal, are available to independent developers. Unity is accessible, with a free version containing most of its functionality available to any developer. It is also extremely flexible, and for these reasons it was chosen for this study. As discussed, there are many neurofeedback devices available to consumers; one of the most affordable is the NeuroSky MindWave Mobile. The NeuroSky headset measures EEG using a single electrode on the forehead with a grounding electrode clipped to the ear. It is much easier to put on than conventional EEG caps and sends the information via Bluetooth to connected devices and computers, meaning no complicated setup or wiring.

After selecting a game engine to create the game within and a neurofeedback device to capture the player's EEG data, the challenge becomes getting those two components to communicate with one another. The NeuroSky headset processes the EEG data and sends it wirelessly as numerical values. These values represent the varying readings captured by the device, as well as summed values representing cognitive behaviours such as attention, meditation and relaxation. By using algorithms to give numerical values for these behaviours, the EEG data becomes much more accessible to a developer; advanced knowledge of analysing EEG signals is not an obstacle. This does result in depending on NeuroSky's algorithms for the data analysis; however, it makes the data much simpler to feed into an interactive game. Inexpensive Arduino microcontroller boards were used to receive the Bluetooth signal containing the NeuroSky 'meditation' value, display it and send it via USB to the Unity game engine (Fig. 1). Once inside the game engine the value can be utilised in real time, allowing the player to interact with the game through their relaxation. The developed prototype was played on a Windows computer with a keyboard, mouse and EEG headset.

Fig. 1 An overview of the BCI system developed
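As a rough sketch of the receiving end of this pipeline (in Python rather than the C# actually used inside Unity; the port name and the one-value-per-line framing are illustrative assumptions, not NeuroSky's documented protocol):

```python
from typing import Optional
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical; e.g. "COM3" on Windows
BAUD = 9600

def read_meditation(ser: serial.Serial) -> Optional[int]:
    """Parse one 0-100 'meditation' value per line, if a full line arrived."""
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if line.isdigit() and 0 <= int(line) <= 100:
        return int(line)
    return None  # malformed or timed-out read

with serial.Serial(PORT, BAUD, timeout=1.0) as ser:
    while True:
        value = read_meditation(ser)
        if value is not None:
            print("meditation:", value)  # the game engine would consume this
```

The same idea, a polled serial stream of pre-summarised values, is what makes the approach accessible: the game only ever sees a single integer per update.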

5.2 Establishing a Narrative

Storytelling is an important part of game design; it makes us empathise with the game's characters and world. It is a tool with which a game developer can help immerse the player in the game's setting, in the same way the narrative of a movie or book draws us into those worlds. As the game created was intended to teach about meditation and its benefits using neurofeedback, the narrative was created to support this. The game's story follows a boy named Kevin who is anxious about moving; his anxiety manifests as nightmares, leaving him feeling scared and helpless at night. This helps create a scenario that the player can understand and empathise with. The story is presented to the player in the game as 2D illustrations (Fig. 2). The illustrations show Kevin's plight as he falls asleep to the sight of his room morphing into a dangerous labyrinth. With this, the game changes from 2D illustrations to a 3D environment representing Kevin's nightmare. This environment represents the 'level' the player will be interacting with, and it is with this interaction that the player becomes part of the game. The player is encouraged by the game to relax in order to help Kevin overcome his nightmare: a hellish castle filled with dangerous traps (Fig. 3).

5.3 Game Mechanics: The Building Blocks of Interaction

With the game's setting established, we can now focus on how the player will play the game and interact with its world. Mechanics are the rules and constructs created in the game that allow the player to interact with the game's state. In chess there are rules about how different pieces move and how they interact with one another; these rules dictate how the player interacts and progresses within the game. When developing the example game, it was not only important to consider the game's key rules and constructs, but also how the neurofeedback would function as a mechanic.

Fig. 2 In-game 2D illustrations telling the game's story

Fig. 3 An example of the game's 3D gameplay

To navigate Kevin's nightmare, the player can use the arrow keys to move and the spacebar to jump, attempting to avoid moving traps throughout the castle. On contact with the player, a trap will cause Kevin to lose 'health', and after five instances of this Kevin will wake up frightened from his dream. The player's main goal is to reach the exit of each of the three castles that represent the levels in the game. With the game's environments and levels being a manifestation of Kevin's anxiety, the goal was to establish a way for meditation to combat the game's challenges. Two mechanics were created that utilised the player's EEG meditation value: by becoming relaxed and meditative, the player can slow down time in the game world, making the traps easier to avoid and keeping Kevin safe. The player could also open locked doors throughout the castle by having Kevin move near to them and by reaching a high level of meditation as measured by the EEG headset. By making the neurofeedback a mechanic that makes the game easier but is not essential to its completion, the player is not forced to relax but is instead encouraged to and rewarded for doing so. The 'slowing down of time' was clearly highlighted for the player, with sounds changing pitch in the game world to complete the Hollywood slow-motion effect. This was intended to really draw attention to the neurofeedback mechanic and help players understand exactly how relaxed they were at any point in the game.

The narrative helps frame the game's challenge and helps to make sense of why the player's meditation works as a mechanic in the game. The meditation mechanic, in turn, helps the player overcome the game's challenge. Upon completing the game and escaping the castles, Kevin wakes up feeling more in control, thanks to his newly mastered skill of meditation, ready to face the move to his new home. Using game narrative and mechanics this way, we can hope to make neurofeedback an engaging experience and explain why the practiced skill of meditation is beneficial.
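A compact sketch of how such mechanics can be expressed, written as engine-agnostic Python rather than the Unity/C# actually used; the thresholds and the linear mapping are illustrative assumptions:

```python
def time_scale(meditation: int) -> float:
    """Map a 0-100 meditation reading to world speed: a fully relaxed
    player slows the game to half speed, a tense player plays at full speed."""
    m = max(0, min(100, meditation)) / 100.0
    return 1.0 - 0.5 * m            # linear blend between 1.0 and 0.5

def door_unlocked(meditation: int, threshold: int = 80) -> bool:
    """Locked doors open when meditation exceeds a (difficulty-adjustable)
    threshold while Kevin stands nearby."""
    return meditation >= threshold

print(time_scale(90), door_unlocked(90))  # e.g. 0.55 True
```

Exposing the threshold as a parameter is what makes the difficulty adjustment described later in Sect. 6.3 straightforward.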

6 Analysis

6.1 Defining Gameplay

The gameplay classification work carried out by Djaouti et al. (2007) proposes a simple set of 'game bricks' to represent types of game mechanic; by classifying mechanics we can identify them in other games and implement them knowingly in our own. For example, 'move', 'shoot' and 'destroy' are represented as individual mechanic bricks; if we put these elements together we can imagine a game similar to 'Space Invaders', where the player moves, shoots and destroys alien spaceships to complete game outcomes. The player character's gameplay in our prototype can best be described using this simple classification as 'move' and 'avoid': the player is given control of the character's movement and needs to avoid the traps and dangers of the game environment. The player's meditation works as an interesting mechanic; it is certainly novel, which makes for difficult classification. It can form an ideal state to match in order to interact with game objects, such as opening doors by maxing out meditation. It can also act as a kind of directly player-generated resource: by managing it, the player can benefit from choosing to control the game's speed and challenge. A great deal of the gameplay interaction is dictated by the player's continuously updated meditation reading.
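The brick idea can be made concrete with a toy encoding (the set representation here is ours, not Djaouti et al.'s notation):

```python
SPACE_INVADERS = {"move", "shoot", "destroy"}
PROTOTYPE = {"move", "avoid", "manage"}   # 'manage' = the meditation reading

print(SPACE_INVADERS & PROTOTYPE)         # {'move'}: the shared brick
```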

6.2 Gameplay Loops

Games can generally be summarised as having a simple kind of 'gameplay loop' or cycle: a challenge, a reward and subsequent growth. For instance, the player overcomes a challenge, such as defeating an enemy, and is rewarded, perhaps with treasure that enemy had. These rewards allow the player to advance and grow within the context of the game. So, what is the gameplay loop of the designed experience using a 'manage' mechanic? In the designed experience the player must undergo the challenge of managing their 'meditation', as read in real time using BCI, to a certain threshold. Upon completing this challenge or task, the player is rewarded by a visual and sonic change in the game environment, acknowledging the player's successful interaction and achievement of matching the required state. This success allows the player to gain knowledge of what strategies, actions or approaches allowed them to achieve this meditative state, with the intention of letting them hone their meditation 'skills' within the context of the game and letting their understanding of the practice grow. Together with the 'move' and 'avoid' mechanics we get a complete game loop: the player attempts to manage their meditation, moves Kevin and avoids the traps. These player actions are repeated to progress and finally complete the game.
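Sketched as code, under the assumption that the two callbacks stand in for the headset reading and the game's actual feedback (hypothetical names):

```python
import random

def gameplay_loop(read_meditation, on_reward, threshold=80, steps=100):
    """Challenge: hold the meditation reading above the threshold.
    Reward: on_reward() stands in for the visual and sonic change.
    Growth: the player observes which strategies triggered the reward."""
    for _ in range(steps):
        if read_meditation() >= threshold:
            on_reward()

gameplay_loop(lambda: random.randint(0, 100),
              lambda: print("reward: slow-motion and pitched-down audio"))
```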

6.3 Experiential Learning

The design of the gameplay loops can also be considered with regard to Kolb's learning cycle. As the player experiences their meditation through neurofeedback, they can actively experiment and learn, 'obtaining concrete experience'. The game is structured as three interactive levels, allowing the player time to reflect on their experience between each level and consider approaches for the next. This encourages experimentation, with the game prompting players to consider how they can master their 'meditation ability' as the game progresses and the challenge increases. In this way the game can serve not only as a tool for visualizing meditation through neurofeedback, but as a means for players to experiment and challenge their own abilities and beliefs. The game was created with adjustable difficulty, changing the meditation threshold required to be reached; this was to ensure that all players had a better chance to engage in experimentation. Players struggling to relax could lower the difficulty, while players finding it easy could increase the difficulty. Players could also increase the difficulty over time to give themselves more of a challenge, independent of which level they had reached in the game.

6.4 "Levels of Learning" and Game Design

Bateson's "levels of learning", as discussed earlier, serve as a great inspiration when designing a game for educational purposes. We can seek to address each step he describes towards transformative learning through the game's narrative, feedback and mechanics. Level zero could be reached through the player's interaction with the game's neurofeedback as it responds to their base state. Towards level two, the player can begin to contextualize the neurofeedback 'ability' they have within the game, not only in terms of a mechanic they can control, but as part of the game's narrative. As players complete the game practicing their meditation, they help Kevin overcome his anxiety; this aims to demonstrate a real-world application of meditation. Understanding what has been learned in a broader context and being able to apply it in other situations is what defines this final level in Bateson's framework (Fig. 4). The low-level mechanics and feedback of the game serve to encourage the low levels of learning and initial engagement. As the learning levels increase towards making sense of that learned information, the narrative can play a more important role. Though the game's mechanics and environment provide a means to engage and experiment with neurofeedback, its story attempts to demonstrate the value of meditation as a practice when feeling anxious.

Fig. 4 The game mapped to Bateson's framework

7 Conclusion

This exploratory study hopes to suggest that these kinds of learning experiences are possible and to address how they could be designed. Further research into the potential of educational neurofeedback games is required, particularly regarding how they could benefit mental health education. However, it is hoped that with the technology required becoming increasingly accessible, this field will continue to develop. By utilising the power of game engines and the accessibility of commercial EEG headsets, we can create novel neurofeedback games and experiences. By designing these experiences around pedagogical frameworks, we can hope to make them educational, surpassing their novelty and perhaps allowing them to hold some value. With mental health a very real issue today, alternative approaches to education regarding the mind could be massively beneficial. Relaxation, meditation and focus are all detectable with EEG and brain-computer interfacing, yet these cognitive behaviours are difficult to describe and teach. Here serious game approaches combined with neurofeedback could serve as an interactive bridge for learning. Using neurofeedback as an engaging game mechanic and utilising the powerful abilities of narrative, we can hope to give users transferable knowledge, the ultimate goal of serious game approaches.

References

Banquet JP (1973) Spectral analysis of the EEG in meditation. Electroencephalogr Clin Neurophysiol 35(2):143–151
Barnhofer T, Chittka T, Nightingale H, Visser C, Crane C (2010) State effects of two forms of meditation on prefrontal EEG asymmetry in previously depressed individuals. Mindfulness 1(1):21–27
Bateson G (1972) Steps to an ecology of mind: collected essays in anthropology, psychiatry, evolution, and epistemology. University of Chicago Press, Chicago
Bishop SR, Lau M, Shapiro S, Carlson L, Anderson ND, Carmody J, Segal ZV, Abbey S, Speca M, Velting D, Devins G (2004) Mindfulness: a proposed operational definition. Clin Psychol Sci Pract 11(3):230–241
Bloom BS, Engelhart MD, Furst EJ, Hill WH, Krathwohl DR (1956) Taxonomy of educational objectives, handbook I: the cognitive domain, vol 19. David McKay Co Inc, New York, p 56
Bonnet L, Lotte F, Lecuyer A (2013) Two brains, one game: design and evaluation of a multi-user BCI video game based on motor imagery. IEEE Trans Comput Intell AI Games 5(2):185–198. https://doi.org/10.1109/TCIAIG.2012.2237173
Bos DPO, Reuderink B, van de Laar B, Gurkok H, Muhl C, Poel M, Heylen D, Nijholt A (2010) Human-computer interaction for BCI games: usability and user experience. In: Cyberworlds (CW), 2010 international conference on, October, pp 277–281. IEEE
Catalano CE, Luccini AM, Mortara M (2014) Best practices for an effective design and evaluation of serious games
Chen Z, Hood RW Jr, Yang L, Watson PJ (2011) Mystical experience among Tibetan Buddhists: the common core thesis revisited. J Sci Study Relig 50(2):328–338
Coffey AL, Leamy DJ, Ward TE (2014) A novel BCI-controlled pneumatic glove system for home-based neurorehabilitation. In: Engineering in Medicine and Biology Society (EMBC), 2014 36th annual international conference of the IEEE, August, pp 3622–3625. IEEE
Djaouti D, Alvarez J, Jessel JP, Methel G, Molinier P (2007) Towards a classification of video games. In: Artificial and ambient intelligence convention. Artificial Societies for Ambient Intelligence
Dooley C (2009) The impact of meditative practices on physiology and neurology: a review of the literature. Scientia Discipulorum 4(1):3
Duric NS, Assmus J, Gundersen D, Elgen IB (2012) Neurofeedback for the treatment of children and adolescents with ADHD: a randomized and controlled clinical trial using parental reports. BMC Psychiatry 12(1):107
Friedrich EV, Sivanathan A, Lim T, Suttie N, Louchart S, Pillen S, Pineda JA (2015) An effective neurofeedback intervention to improve social interactions in children with autism spectrum disorder. J Autism Dev Disord 45(12):4084–4100
Gee JP (2005) Learning by design: good video games as learning machines. E-learn Digital Media 2(1):5–16
Gee JP (2008) Game-like learning: an example of situated learning and implications for opportunity to learn. In: Assessment, equity, and opportunity to learn, pp 200–221
Golianu B, Waelde L (2012) P02.122. Mindfulness meditation for pediatric chronic pain: effects and precautions. BMC Complement Altern Med 12(1):P178
Hinterberger T (2011) The sensorium: a multimodal neurofeedback environment. Adv Hum Comput Interact 2011:3
Hjelm SI, Browall C (2000) Brainball: using brain activity for cool competition. In: Proceedings of the first Nordic conference on computer-human interaction (NordiCHI 2000), Stockholm, Sweden
Huang-Storms L (2008) Efficacy of neurofeedback for children with histories of abuse and neglect: pilot study and meta-analytic comparison to other treatments. University of North Texas
Jha AP, Krompinger J, Baime MJ (2007) Mindfulness training modifies subsystems of attention. Cogn Affect Behav Neurosci 7(2):109–119
Kolb DA (1984) Experiential learning. Prentice Hall, Englewood Cliffs
Krepki R, Blankertz B, Curio G, Müller K-R (2007) The Berlin brain-computer interface (BBCI) – towards a new communication channel for online control in gaming applications. Multimed Tools Appl 33(1):73–90. https://doi.org/10.1007/s11042-006-0094-3
Lim T, Louchart S, Suttie N, Hauge JB, Stanescu IA, Ortiz IM, Moreno-Ger P, Bellotti F, Carvalho MB, Earp J, Ott M (2014) Narrative serious game mechanics (NSGM) – insights into the narrative-pedagogical mechanism. In: International conference on serious games. Springer, Cham, pp 23–34
Lucier A (1982) Music for solo performer (1965), for enormously amplified brain waves and percussion. Lovely Music, Ltd.
Mentalhealth.org (2015) Fundamental facts about mental health. [Online] Available at: https://www.mentalhealth.org.uk/sites/default/files/fundamental-facts-15.pdf. Accessed 20 July 2017
Mental Health Foundation (2019) Fundamental facts about mental health 2016. [Online] Available at: https://www.mentalhealth.org.uk/publications/fundamental-facts-about-mental-health-2016. Accessed 20 July 2017
Miranda E, Brouse A (2005) Toward direct brain-computer musical interfaces. In: Proceedings of the 2005 conference on new interfaces for musical expression, May. National University of Singapore, pp 216–219
Mitgutsch K (2011) Serious learning in serious games. In: Serious games and edutainment applications. Springer, London, pp 45–58
Ramel W, Goldin PR, Carmona PE, McQuaid JR (2004) The effects of mindfulness meditation on cognitive processes and affect in patients with past depression. Cogn Ther Res 28(4):433–455
Sas C, Chopra R (2015) MeditAid: a wearable adaptive neurofeedback-based system for training mindfulness state. Pers Ubiquit Comput 19(7):1169–1182
Singh A, Yeh CJ, Verma N, Das AK (2015) Overview of attention deficit hyperactivity disorder in young children. Health Psychol Res 3(2):2115. https://doi.org/10.4081/hpr.2015.2115
Sobell N (1972) Brain wave drawings. Lovely Music, Ltd.
Sterman MB, Egner T (2006) Foundation and practice of neurofeedback for the treatment of epilepsy. Appl Psychophysiol Biofeedback 31(1):21
Vidal JJ (1973) Toward direct brain-computer communication. Annu Rev Biophys Bioeng 2(1):157–180
Walach H, Buchheld N, Buttenmüller V, Kleinknecht N, Schmidt S (2006) Measuring mindfulness – the Freiburg Mindfulness Inventory (FMI). Personal Individ Differ 40(8):1543–1555
Waters L, Sun J, Rusk R, Cotton A, Arch A (2017) Positive education. In: Wellbeing, recovery and mental health, p 245
Woodman GF (2010) A brief introduction to the use of event-related potentials in studies of perception and attention. Atten Percept Psychophys 72(8):2031–2046
Zioga P, Chapman P, Ma M, Pollick F (2017) Enheduanna – a manifesto of falling: first demonstration of a live brain-computer cinema performance with multi-brain BCI interaction for one performer and two audience members. Digital Creativity 28(2):103–122

Visual Analysis for Understanding Irritable Bowel Syndrome

Daniel Jönsson, Albin Bergström, Isac Algström, Rozalyn Simon, Maria Engström, Susanna Walter, and Ingrid Hotz

Abstract

The cause of irritable bowel syndrome (IBS), a chronic disorder characterized by abdominal pain and disturbed bowel habits, is largely unknown. It is believed to be related to physical properties in the gut, central mechanisms in the brain, psychological factors, or a combination of these. To understand the relationships within the gut-brain axis with respect to IBS, large numbers of measurements ranging from stool samples to functional magnetic resonance imaging are collected from patients with IBS and healthy controls. As such, IBS is a typical example in medical research where research turns into a big data analysis challenge. In this chapter we demonstrate the power of interactive visual data analysis and exploration to generate an environment for scientific reasoning and hypothesis formulation for data from multiple sources with different character. Three case studies are presented to show the utility of the presented work.

Keywords

Explorative data analytics · Visualization in medicine · Irritable bowel syndrome

D. Jönsson (*) · A. Bergström · I. Algström · I. Hotz
Department of Science and Technology, Linköping University, Linköping, Sweden
e-mail: [email protected]; [email protected]; [email protected]; [email protected]

R. Simon · M. Engström
Department of Medical and Health Sciences, Linköping University, Linköping, Sweden
e-mail: [email protected]; [email protected]

S. Walter
Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
e-mail: [email protected]

1 Introduction

Irritable bowel syndrome (IBS) is a chronic disorder characterized by abdominal pain and disturbed bowel habits. Its cause is largely unknown, but IBS is believed to be related to physical properties in the gut, central mechanisms in the brain, psychological factors, or a combination of these. Understanding the dynamic relationships within the gut-brain axis and inter-relationships between sub-groups of IBS patients requires concurrent analysis of many different patient parameters. Therefore, large amounts of diverse data are collected, including imaging, physiological measurements, and questionnaires. Understanding such data, finding correlates between the parameters and comparing patient groups is a challenging task. The challenges associated with this kind of diverse and rich data set are not specific to IBS research but apply to other medical applications as well.


There are advanced tools for specific data analysis tasks; CONN, for example, is a functional connectivity toolbox for functional magnetic resonance imaging (fMRI) data. However, few tools support interactive analysis of multiple data types at the same time. Typically, the different data types are processed sequentially by switching between tools. The goal of this project is to utilize interactive data exploration concepts to exploit the data more efficiently and to simplify the data analysis tasks and work-flow. We demonstrate the usefulness of these concepts for the specific case of IBS research. For the study underlying this report, data from 97 patients with and without IBS have been used. The primary goal in this study is to understand the difference between patient groups. The contributions of our work can be summarized as follows:

–– Change of the data analysis work-flow from a sequential analysis of the data to an interactive parallel work-flow for medical data analysis.
–– Real-time computation of statistical measures supporting interactive patient group configuration with immediate visual feedback.
–– Connecting 3D spatial renderings with statistical plots for interactive data exploration and hypothesis formulation for the comparison of patient groups.

2 Irritable Bowel Syndrome (IBS)

IBS is a group of symptoms that can cause abdominal pain or discomfort and altered bowel habits, which in turn can substantially reduce quality of life and work productivity. It is a chronic condition that affects 7–12% of the general population. Due to a lack of clear etiology, and therefore of biomarkers, in IBS, diagnosis relies on the integration of clinical symptoms and comorbidities. Though the cause of IBS is largely unknown, it is believed that the condition is dependent on contributing factors from both the gut and the brain, see Chey et al. (2015).

The developing model of bidirectional communication between the central nervous system and the enteric nervous system is generally referred to as the gut-brain axis. In terms of IBS research, this results in the fusion of multiple data sets ranging from psychological questionnaires to blood tests. As no individual factor seems to account for more than a fraction of what is a broadly heterogeneous disease state, it is widely acknowledged that a summation of multiple factors is a more likely representation of IBS etiology. As more data points are introduced to the model, clearer subgroups begin to take shape, helping to clarify differences in disease origin and potentially improving future patient outcomes. In a complex model of this kind, traditional approaches toward data analysis are limiting. In cases like IBS, the researcher needs to be able to investigate the model in a more fluid manner, allowing for multiple variables to influence subgroup formation. The aim of the application presented here is to increase our understanding of this multifactorial disease model with respect to IBS.

2.1 The Data

IBS data collection typically begins with the determination of inclusion and exclusion criteria based on symptoms and other clinical variables. Typically, data from one or more control groups are also included, for example healthy controls or patient reference groups. From these groups a variety of data hypothesized to be associated with IBS, such as psychometric, chemometric, neurometric and microbiomemetric data, are collected. Our specific data sets in this case, illustrated in Fig. 1, are as follows:

Neurometric data. fMRI records the activity of different regions in the brain for every participant. The raw fMRI data are preprocessed, resulting in aligned and smoothed brain volume activity data. From these data, correlations, i.e., functional connectivity, between different areas are computed. For this project the connectivity between the medial prefrontal cortex and the remaining areas of the brain was of special interest. The fMRI scans have an approximate resolution of 100³ voxels.

Chemometric and microbiomemetric data. These are sets of independent gut and peripheral measurements, mostly numerical, such as stool samples, blood samples, and colon biopsies.

Psychometric data. These are questionnaires and psychiatric evaluations from each patient, which result in numerical and categorical data that do not have a direct spatial reference, here referred to as abstract data.

Context data. As anatomical reference for the neurometric data, structural MRI reference brains are provided. These serve as the basis for registration and spatial normalization of the fMRI volume data. As additional context information, a brain atlas, a semantic segmentation of the brain, is available.

Fig. 1 Illustration of the variety and types of data sources that are relevant for IBS research, such as psychometric, chemometric, neurometric and microbiomemetric data

Each of these data categories varies in terms of reliability, commensurability, resolution, and dimensionality. In our case, the number of parameters for one individual in psychometric and chemometric data alone totals around 200, while the addition of neurometric and metagenomic microbiome data increases this by at least two orders of magnitude, even when simplified.

2.2 Common Data Analysis Work-Flow

The collection of the parameters above results in a large data set with widely varying characteristics. Analysis of such a large data set is a tedious and time-consuming task, involving going back and forth between many different analysis tasks and methods, making it difficult to find causal relations.

Between-Group Comparison and Statistical Correction
As a first step, IBS patients are usually compared to healthy controls to determine if there are clear between-group differences in any of the individually collected metrics. Depending on the research question, this comparison is done with simple two-sample t-tests or ANOVAs between any of the individual chemometric, microbiomemetric, psychometric, or neurometric data. For this, at least two analysis packages are required: one that can handle scalar data and one that can handle volume fMRI data. For scalar data, the results are presented in the form of tables with p-values and t-statistics. For volume fMRI data, the results are presented as images that are thresholded at a certain p-value cutoff and typically colour-coded according to the t-statistics. For such a large number of parameters for each patient, all statistical tests need additional correction for multiple comparisons. In the most restrictive correction method, the Bonferroni correction, the chosen significance level is divided by the number of hypotheses tested. Even if there are less restrictive methods available, correction for multiple comparisons typically results in the disregard of half of the collected data due to the sheer number of associations in such a large model. Following correction, we then cross-correlate measures with significant between-group differences to see if there are additional relationships between surviving data sets, for example, to determine if there is a relationship between anxiety and depression.
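As a worked example (assuming the conventional significance level α = 0.05 and the roughly 200 psychometric and chemometric parameters per individual mentioned in Sect. 2.1), the Bonferroni-corrected per-test threshold would be

$$ \alpha_{\text{corrected}} = \frac{\alpha}{m} = \frac{0.05}{200} = 2.5 \times 10^{-4}, $$

so an individual association would need p < 0.00025 to survive correction.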

Patient Group Specification
Patient groups are usually defined a priori based on symptoms or other standardized clinical variables. In research, patient groups can also be defined based on a certain hypothesis, for example 'IBS patients with high anxiety perceive pain differently compared to patients with low anxiety'. According to this method, the hypotheses should be based on previously reported data and/or reasoning. One challenge with this approach is the difficulty of finding something unexpected, not hypothesized, which might be apparent with an intuitive visualization method like the one described here.

Integrating Brain Activity with Chemometric, Microbiomemetric and Psychometric Data
Determining the difference between two groups is only a first step towards understanding neurometric data. The next step is to understand how neurometric data are related to other chemometric, microbiomemetric, and psychometric measures. Here, it is possible to analyze peripheral data as covariates of interest in relation to the neurometric data, which aids in identifying regions of the brain that are associated with these other measures. That is to say, it is possible to determine if the connectivity in a certain region of the brain is related to any of the measures from chemometric, microbiomemetric, and psychometric data. Another possibility is to divide the participants into subgroups based on any of the peripheral measures and investigate subgroup differences in neurometric data. Here, standard methods reach their limits because it is both very time-consuming to divide the material into different subgroups and difficult to make exploratory subgroup divisions.

Every step mentioned above is time-consuming, static, dimensionally limiting, and aimed at the reduction rather than the summation of data which we know contributes to the overall model of IBS. In addition, this manual processing and integration of data sets can result in researchers overlooking or eliminating essential data which could help better define the IBS gut-brain axis model. As our understanding of IBS is still unclear and patients are considerably heterogeneous, a more exploratory approach is necessary, one which would allow for the fluid investigation of potential patient subgroups.

3 Interactive Visual Analysis Work-Flow for IBS

A careful analysis of the current work-flow, tasks, and the most severe bottlenecks resulted in the following set of requirements for the analysis environment:

–– Visualization of abstract patient and neurometric data.
–– Interactive patient group selection.
–– Visual comparison of patient groups and their attributes.
–– Statistical analysis for understanding group differences.
–– Knowledge interchangeability with existing analysis tools.

The design of the new analysis environment was then a participatory process between domain scientists and visualization experts, guided by these requirements. A schematic representation of the tool is depicted in Fig. 2.

Visual Analysis for Understanding Irritable Bowel Syndrome Gut and peripheral measurements

Group selection

115 Functional volume correlation

Anatomical volumes

Multidimensional filtering

Patient groups

Visual group comparison

Welch's t-test

Interaction

Statistical group computation

Group average

Statistically significantly different regions

Parallel coordinates

Slice views

Fused volume rendering

Patient overview and details

Neurometric details

Neurometric overview

Anatomical context

Fig. 2  Schematic representation of the analysis steps involved in the visual analysis work-flow for IBS. A key component is the fluid interaction between group selec-

tion and visual group comparison enabled by linking the different views of the data

Fig. 3 The presented application allows concurrent visual analysis of neurometric and other clinical measurements of varying characteristics and type. The bottom view visualizes each patient and its clinical measurements as a line. A group of patients with similar attributes can be specified by changing the range of the different parameters, i.e., vertical bars. The top views visualize the patient group's neurometric data as well as its difference compared to another group

The first step in the work-flow is identifying the top parameter candidates resulting from a statistical correlation analysis using the work of Whitfield-Gabrieli and Nieto-Castanon (2012). Next, the top parameter candidates are fed into the visual analysis tool together with brain correlation measurements of all patients. Figure 3 illustrates the process of integrating all these measurements into the visual analysis tool. Three different linked views are used to show different aspects of the data. Firstly, an overview of the difference between groups' brain measurements is provided through volume renderings. Here, statistical analysis is used to reveal distinguishing regions to allow for visual difference comparison between groups. Secondly, details of group brain connectivity as well as the anatomical regions in the brain are provided through three axis-aligned slice views. Lastly, a parallel coordinates plot is used to visualize and filter patient groups based on the available input parameters, e.g., psychological and physiological measurements. As the group selection changes, the group's brain measurements in the first and second views are updated, thereby enabling interactive analysis. The following sub-sections describe the details of these three components.

3.1 Visualization of the Neurometric Data

The neurometric data consists of three-dimensional scalar fields, commonly referred to as volumes. In our case, the neurometric data describes the functional connectivity in the brain. These volumes are visualized using three axis-aligned slice views (left side in Fig. 4) and direct volume rendering (right side in Fig. 4). An MRI template brain provides the context; it is treated as an additional emission-absorption component (see Sect. 3.2) in the volume. An atlas volume is further used to support contextual queries related to specific regions of the brain. Details about the numerical value and index of the selected voxel are shown using a textual representation. The voxel index is used as a link to other tools if more detailed statistical analysis is desired, as mentioned in the requirements.

Due to the central use of the direct volume rendering technique, we first briefly introduce this technique. Next, we describe how the processing and visualization of these data have been performed for patient groups in order to analyze their functional connectivity as well as differences in functional connectivity between two groups.

Fig. 4 Neurometric data visualized through orthogonal slice views in axial, coronal, and sagittal planes (left panels) and fused volume rendering (right panels), where (a) shows the group's average connectivity from the medial prefrontal cortex, providing a rough understanding of the brain network; red color visualizes positive connectivity and blue color negative connectivity, and (b) shows the statistically significantly different regions between healthy controls and IBS patients using Welch's t-test, p = 0.05. Here red color visualizes areas where one group has higher connectivity than the other and blue color visualizes lower connectivity

3.2 Direct Volume Rendering

By mapping the volume data into optical properties and simulating light transport in the volume it is possible to see three-dimensional structures in the same way as in the real world. There are two fundamental optical properties used during light transport simulation, light emission and absorption. Emission produces a color by describing how much light a point in the volume is emitting. Absorption provides occlusion cues by describing the amount of light energy removed at a given point. Visualizing the volume data from a given view point, the position $\mathbf{x}_c$ of a virtual camera, requires accumulating all the light reaching the camera and thereby the screen. Mathematically, this is done by integrating the radiance $L$, the amount of energy flowing per unit area, in the direction towards the camera. Starting from an initial position $\mathbf{x}_0$ and going in the direction towards the camera, $\omega = (\mathbf{x}_c - \mathbf{x}_0)/\lVert\mathbf{x}_c - \mathbf{x}_0\rVert$, the amount of light energy reaching the camera is given by:

$$ L(\mathbf{x}_c,\omega) = \underbrace{T(\mathbf{x}_0,\mathbf{x}_c)}_{\text{attenuation}}\,\underbrace{L_0(\mathbf{x}_0,\omega)}_{\text{background}} \;+\; \underbrace{\int_0^{D} T(\mathbf{x}_c,\mathbf{x})\,\sigma_a(\mathbf{x})\,L_e(\mathbf{x},\omega)\,\mathrm{d}s}_{\text{emission}} \tag{1} $$

where $D = \lVert\mathbf{x}_c - \mathbf{x}_0\rVert$ is the length of the ray and $\mathbf{x} = \mathbf{x}_0 + s\omega$, $s \in [0, D]$, is the position along the ray. Going from left to right in the equation above, the light coming from the starting point is attenuated by the transparency between the initial and background positions and then, in the second part, the emitted light $L_e(\mathbf{x},\omega)$ is attenuated along the ray, using $\sigma_a(\mathbf{x})$ to denote the absorption coefficient. The transparency determining the attenuation between two points $\mathbf{x}_1$ and $\mathbf{x}_2$ is given by the exponential of the integrated extinction coefficient $\sigma_t$:

$$ T(\mathbf{x}_1,\mathbf{x}_2) = e^{-\int_0^{\lVert\mathbf{x}_2-\mathbf{x}_1\rVert} \sigma_t(\mathbf{x}_1+s\omega)\,\mathrm{d}s}. \tag{2} $$

The equation above simulates how light is attenuated in a medium, for example sunlight traveling through a cloud. A thorough overview of different techniques efficiently implementing Eq. (1) is provided by Jönsson et al. (2014). When it comes to interactive analysis and visualization of multi-modal volume data, e.g., combining MRI and fMRI data, we refer the reader to Eklund et al. (2010) for more details on concurrent analysis and to the work by Nguyen et al. (2010), Sundén et al. (2015), and Jönsson and Ynnerman (2017) for more details on visualization aspects.
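To make Eq. (1) concrete, here is a minimal numerical sketch of front-to-back compositing along one ray, with a toy transfer function and extinction set equal to absorption (i.e., no scattering); it is an illustration, not the renderer used in the application:

```python
import numpy as np

def integrate_ray(samples, step=1.0):
    """Front-to-back compositing of Eq. (1) for scalar samples in [0, 1]."""
    radiance, transparency = 0.0, 1.0         # transparency plays the role of T
    for v in samples:                          # marching away from the camera
        sigma = 0.1 * v                        # toy transfer function: absorption
        emission = v                           # toy transfer function: emission
        radiance += transparency * sigma * emission * step
        transparency *= np.exp(-sigma * step)  # one segment of Eq. (2)
    return radiance

print(integrate_ray(np.linspace(0.0, 1.0, 100)))
```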

3.3 Functional Connectivity of Patient Groups

As a pre-processing step the functional connectivity is computed for each patient in CONN, cf. Whitfield-Gabrieli and Nieto-Castanon (2012), resulting in one connectivity volume per patient. Once this data has been fed into the application, all steps and methods provided are interactive. A rough idea of the functional connectivity is given by computing and visualizing the average of these connectivity volumes for the selected group. An example of such a visualization for healthy controls is given in Fig. 4a. The averaging aggregation method was chosen due to its fast computation time, which thereby guarantees a more fluid exploration process.

3.4 Functional Connectivity Difference Between Patient Groups

First experiments showed that comparing the group averages side-by-side and switching between them is not sufficient since the group averages have too many similar areas, making it difficult to spot the difference between them, see Fig. 4a on the facing page. Highlighting the absolute difference between the group averages successfully removed similar areas but raised concerns about its statistical relevance, i.e., is a particular region actually different or does it just seem to be?


Instead, the unequal variances t-test between two groups by Welch (1947) is computed for each voxel in order to determine the statistically significantly different ones. Succinctly, the resulting p-value is used to filter away areas above a user-specified value, which effectively removes areas that are similar between the two groups and keeps the different ones. As seen in Fig. 4b, this approach effectively singles out a few areas, making it easier to visually compare two groups. The Welch t-test can take a few seconds to compute for about 50 volumes with 100³ voxels. Therefore, to ensure interactivity, the computations are performed in parallel on a background thread. The volume visualizations of group neurometric data are linked to the group selection and multidimensional data visualization described next.
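A voxel-wise version of this test is short enough to sketch in full. This is an illustration rather than the tool's implementation; the array shapes and the threshold value are assumptions:

```python
import numpy as np
from scipy import stats

def welch_difference_map(group_a, group_b, p_threshold=0.05):
    """Voxel-wise Welch (unequal variances) t-test between two groups.

    group_a, group_b: arrays of shape (n_patients, X, Y, Z) holding one
    connectivity volume per patient. Returns the t-statistic map with
    non-significant voxels masked out, mirroring the p-value filter above.
    """
    t, p = stats.ttest_ind(group_a, group_b, axis=0, equal_var=False)
    return np.where(p < p_threshold, t, np.nan)
```

In an interactive tool, a call like this would typically be dispatched to a worker thread (e.g. via concurrent.futures.ThreadPoolExecutor) so that, as described above, the latency is hidden while the user configures the second group.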

3.5  Visualization of Chemometrics, Microbiomemetrics, and Psychometrics

Measurements stemming from the gut, as well as peripheral measurements such as blood samples, do not have a spatial extent in the same way as the brain activity measurements. This type of multidimensional data is often visualized using a scatter plot matrix, as originally shown by Carr et al. (1987). A scatter plot matrix shows one scatter plot for each combination of parameters in a matrix layout and therefore requires a relatively large screen space. Other popular concepts for high numbers of parameters are the parallel coordinates by Inselberg (1985) and the star plots by Siegel et al. (1972). Parallel coordinates display each parameter from the measurements as one parallel vertical axis, in our case, e.g., microbiota, anxiety, depression, physical fatigue, pain interference, barrier function and gut immune cells. Each patient is then represented by a poly-line connecting its values on each coordinate axis, resulting in one plot for all patient data. Parallel coordinates were originally developed to highlight correlations between the different parameters. Here, parallel coordinates have been chosen since they are effective at representing higher dimensions (Siirtola and Räihä 2006) and have also been successfully used for filtering multidimensional data connected to volume data, see for example Jankowai and Hotz (2018).
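For readers who want to experiment with the idiom, a plot of this kind can be produced with pandas in a few lines. The data frame, column names and values below are invented for illustration and do not correspond to the study's data:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Toy per-patient attribute table (columns and values are made up).
df = pd.DataFrame({
    "group":      ["IBS", "IBS", "Healthy", "Healthy"],
    "anxiety":    [14, 9, 4, 6],
    "depression": [11, 5, 3, 4],
    "pain":       [7, 8, 1, 2],
})

# One vertical axis per attribute, one poly-line per patient, colored by group.
parallel_coordinates(df, class_column="group", colormap="coolwarm")
plt.ylabel("score")
plt.show()
```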

3.6  Patient Group Selection and Exploration

Parallel coordinates support the definition of high-dimensional filters with immediate visual feedback, which is one of the main objectives when it comes to selecting and exploring patient groups. Patient groups can be configured by adjusting the range of the attributes using the triangular slider buttons shown on the axes in Fig. 5. This figure further demonstrates a simple example where healthy controls and IBS patients have been separated into two different groups by changing the range of the left-most IBS/Healthy axis. More advanced questions, such as 'compare IBS patients having high pain interference with low anxiety', can be answered by further adjusting the sliders belonging to the corresponding IBS/Healthy, pain interference, and anxiety attributes. Since not all types of measurements have been collected for every patient, the tool must deal with missing data; as discussed by Eaton et al. (2005), the visualization of missing data is a widely discussed research challenge in the visualization community. In this work, missing measurements result in missing line connections to the corresponding attributes. Consequently, filtering operations applied to these attributes do not affect patients with missing data.
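The missing-data rule is easy to state precisely in code. The sketch below assumes a pandas data frame with one row per patient and NaN for missing measurements; all attribute names and ranges are hypothetical:

```python
import pandas as pd

def apply_axis_filters(df, filters):
    """Select patients by per-attribute ranges, like the axis sliders.

    filters: dict mapping attribute name -> (low, high). A patient with a
    missing (NaN) value for an attribute is deliberately NOT excluded by
    the filter on that attribute, matching the behaviour described above.
    """
    mask = pd.Series(True, index=df.index)
    for attr, (low, high) in filters.items():
        mask &= df[attr].between(low, high) | df[attr].isna()
    return df[mask]

# e.g. 'IBS patients having high pain interference with low anxiety':
# group = apply_axis_filters(df, {"pain": (7, 10), "anxiety": (0, 5)})
```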

4  Case Studies

The tool has been evaluated with respect to three different cases. First, the tool was evaluated with respect to the time taken to select patient sub-groups. Second, we performed an exploratory analysis of the differences between patients with low and high anxiety. Third, within the IBS patient group, we performed a visual analysis of the differences between patients with high anxiety and high pain interference versus high anxiety and low pain interference.


Fig. 5  Parallel coordinate plots visualizing 11 parameters, each represented by a vertical axis. Each patient is represented by a poly-line passing through the corresponding coordinate on each axis. Patients are filtered by modifying the range of an axis, as illustrated in (a) and (b), where healthy controls and IBS patients have been separated into different groups


4.1  Study Setup

Ninety-seven patients were selected: 34 healthy controls and 63 with IBS. Measurements related to the gut included microbiota levels, barrier function, and immune cell count. Psychiatric measurements included anxiety, depression, physical fatigue, pain interference, catastrophizing, physical neglect and positive affect. Brain measurements included the functional connectivity between the medial prefrontal cortex region and all other voxels in the brain. The evaluation was performed on an off-the-shelf desktop computer with a 2.2 GHz Intel CPU, 32 GB RAM, and an Nvidia GeForce 1080 GPU.

4.2  Group Selection Efficiency

In the first evaluation, we compared the time it took to form groups in the CONN toolbox, see Whitfield-Gabrieli and Nieto-Castanon (2012), and in the presented tool. The results of this comparison showed that generating multiple subgroups in the presented approach takes on the order of a few seconds, while it requires several minutes in CONN due to the need to generate and load multiple text files. In addition, we also saw that the latency introduced by the statistics computation in our tool was often hidden by the fact that it is performed while the second group is being filtered. The faster group selection compared to CONN is also due to the combination of an overview of all the patients and their attributes and the interaction enabled by the parallel coordinates view. Furthermore, this view makes it easy to separate the participants into multiple subgroups and view results immediately, providing an immediate visual impression of trends and contributing factors from multiple data sets, which is not possible in other software applications such as CONN.

4.3  Analysis of Patients with High Versus Low Anxiety

For the second evaluation, all participants were divided into either a high or a low anxiety group, as shown in Fig. 6. Here, the visual display made it easy to identify relationships in the data that were not previously clear, such as: high anxiety is not a good predictor of depression, as patients with high anxiety (third axis) show a spread from low to high depression scores (fourth axis). The same pattern is seen for other psychological and gut measures. At the same time, we can easily see substantial differences in brain connectivity between the high and low anxiety groups, spread out across the different regions of the brain.

Fig. 6  Comparison between patients with (a) high anxiety and (b) low anxiety


4.4  IBS Patient Sub-group Exploration

In Fig. 7a it can, for example, be seen that high anxiety (third axis) and high pain (sixth axis) in IBS do not predict depression (fourth axis); however, these measures seem to be related to higher physical neglect (eighth axis). Comparing the functional connectivity in Fig. 7a, b, a clear difference between the two groups can be seen, as the high pain group shows large areas with negatively correlated connectivity (blue in Fig. 7a) compared to the low pain group.

Fig. 7  Comparison between patients with (a) high anxiety and high pain and (b) high anxiety and low pain

5  Conclusions

This work showed that providing an interactive environment for analyzing patients and dividing them into subgroups, while being able to visualize their functional connectivity, improves the speed at which data-centered hypotheses about patient groups can be formed and answered. Furthermore, by integrating statistical group comparison into the functional connectivity visualization, it became easier to see differences between groups. The presented tool therefore fills an important gap when it comes to analyzing patient groups with spatial and abstract data simultaneously. Given the increasingly common data-driven analysis within clinical research and practice, we hope that the approach presented here can become an important part of the discovery process for complex diseases. The end goal of the presented approach is to provide a simple and accessible application that appeals to all levels of users, from researchers to clinicians, where the rapid visual feedback from the combination of visualization techniques makes it possible to easily and fluidly explore the contributions of variables and determine their impact on neurometric data. Reaching these goals will require further work in three different areas. First, we would like to integrate more analysis tools for spatial data, such as the topological analysis presented by Reininghaus et al. (2010), and region-based statistics. Second, we would like to investigate the usefulness of integrating other visualization techniques for abstract data, such as chord diagrams or connectivity matrices. Third, we would like to investigate techniques to better visualize uncertainty and missing data. Finally, we would also like to conduct a formal user study of the interface to determine the efficacy of the tool when expanding its use beyond IBS data.

Acknowledgements  This work was supported through the grant 'Seeing Organ Function' from the Knut and Alice Wallenberg Foundation (KAW), grant 2013-0076, the Swedish Research Council grant 2015-05462, the SeRC (Swedish e-Science Research Center) and the ELLIIT environment for strategic research in Sweden. The presented concepts have been realized using the Inviwo open source visualization framework (www.inviwo.org) presented by Jönsson et al. (2018).

References

Carr DB, Littlefield RJ, Nicholson W, Littlefield J (1987) Scatterplot matrix techniques for large N. J Am Stat Assoc 82(398):424–436
Chey WD, Kurlander J, Eswaran S (2015) Irritable bowel syndrome: a clinical review. JAMA 313(9):949–958
Eaton C, Plaisant C, Drizd T (2005) Visualizing missing data: graph interpretation user study. In: IFIP conference on human-computer interaction. Springer, Berlin, pp 861–872
Eklund A, Andersson M, Ohlsson H, Ynnerman A, Knutsson H (2010) A brain computer interface for communication using real-time fMRI. In: Pattern Recognition (ICPR), 2010 20th international conference on. IEEE, pp 3665–3669
Inselberg A (1985) The plane with parallel coordinates. Vis Comput 1(2):69–91
Jankowai J, Hotz I (2018) Feature level-sets: generalizing iso-surfaces to multivariate data. IEEE Trans Vis Comput Graph. https://doi.org/10.1109/TVCG.2018.2867488
Jönsson D, Ynnerman A (2017) Correlated photon mapping for interactive global illumination of time-varying volumetric data. IEEE Trans Vis Comput Graph 23(1):901–910
Jönsson D, Sundén E, Ynnerman A, Ropinski T (2014) A survey of volumetric illumination techniques for interactive volume rendering. Comput Graph Forum 33(1):27–51
Jönsson D, Steneteg P, Sundén E, Englund R, Kottravel S, Falk M, Ynnerman A, Hotz I, Ropinski T (2018) Inviwo – a visualization system with usage abstraction levels. ArXiv e-prints arXiv:1811.12517
Nguyen TK, Ohlsson H, Eklund A, Hernell F, Ljung P, Forsell C, Andersson M, Knutsson H, Ynnerman A (2010) Concurrent volume visualization of real-time fMRI. In: 8th IEEE/EG international symposium on volume graphics, Norrköping, Sweden, 2–3 May 2010. Eurographics – European Association for Computer Graphics, pp 53–60
Reininghaus J, Günther D, Hotz I, Prohaska S, Hege HC (2010) TADD: a computational framework for data analysis using discrete Morse theory. In: Proceedings of the international congress on mathematical software (ICMS). Springer, Berlin
Siegel JH, Farrell EJ, Goldwyn RM, Friedman HP (1972) The surgical implications of physiologic patterns in myocardial infarction shock. Surgery 72(1):126–141
Siirtola H, Räihä KJ (2006) Interacting with parallel coordinates. Interact Comput 18(6):1278–1309. https://doi.org/10.1016/j.intcom.2006.03.006
Sundén E, Kottravel S, Ropinski T (2015) Multimodal volume illumination. Comput Graph 50:47–60
Welch BL (1947) The generalization of Student's problem when several different population variances are involved. Biometrika 34(1/2):28–35
Whitfield-Gabrieli S, Nieto-Castanon A (2012) Conn: a functional connectivity toolbox for correlated and anticorrelated brain networks. Brain Connect 2(3):125–141

Immersive Technology and Medical Visualisation: A Users Guide

Neil McDonnell

Abstract

The immersive technologies of Virtual and Augmented Reality offer a new medium for visualisation. Where previous technologies allowed us only two-dimensional representations, constrained by a surface or a screen, these new immersive technologies will soon allow us to experience three-dimensional environments that can occupy our entire field of view. This is a technological breakthrough for any field that requires visualisation, and in this chapter I explore the implications for medical visualisation in the near-to-medium future. First, I introduce Virtual Reality and Augmented Reality respectively, and identify the essential characteristics, and current state-of-the-art, for each. I will then survey some prominent applications already in use within the medical field, and suggest potential use cases that remain under-explored. Finally, I will offer practical advice for those seeking to exploit these new tools.

Keywords  Medical visualisation · Virtual · Augmented · Reality · Immersive

N. McDonnell (*) School of Humanities, College of Arts, University of Glasgow, Glasgow, UK e-mail: [email protected]

1  Immersive Technology

Anatomical structures and physiological processes occur in three dimensions, and much of what takes place within the human body remains beyond our natural perceptual faculties – either too small to see, or obscured under the skin. Whilst the techniques used to capture these structures or processes for visualisation advanced dramatically throughout the twentieth century – from radiography to (functional) magnetic resonance imaging – the means by which we actually viewed the captured data remained confined to two dimensions, and to the surface of a sheet or screen. We have been exploring three-dimensional (3D) structures via two-dimensional (2D) media.

The advent of immersive technology offers a breakthrough for this historic mismatch between the information we want and the representational mode that we have available. The term 'immersive technology' actually covers two different technologies that allow engagement with 3D information in a 3D medium: Virtual Reality (VR) and Augmented Reality (AR). Whilst these technologies differ in their features, they share a common technological core in the ability to render 3D computer environments in such a way as to allow the user to perceive virtual objects in much the same way as they do objects in the natural world.

In this chapter I will introduce these technologies, comment on their current, and hypothesised, application within medical visualisation, and offer practical advice on how practitioners should go about integrating immersive technology into their work. In Sects. 2 and 3, I will lay the groundwork for understanding what is possible, and discuss the essential features of VR and AR respectively. I will give a snapshot of the current state-of-the-art hardware, as well as some predictions about progress in the medium term. In Sect. 4, I will discuss existing applications for training, diagnosis, and treatment, before outlining some future applications that will be viable once the hardware has further matured. In Sect. 5, I will offer practical advice about what makes for a good application of immersive technology, and what does not.

2  Virtual Reality

The term 'Virtual Reality' appears to have been coined by Jaron Lanier, a pioneer in the field, in the 1980s (Lanier 2017). Lanier launched a VR software and hardware company (VPL) in 1984, and went on to popularise the technology over the following decade. It is clear that his work paved the way for the systems we have now; however, Lanier points to the work of Morton Heilig1 and Ivan Sutherland2 in the 1950s and 1960s as having made the crucial early technological breakthroughs. After its initial rise to prominence in the late 1980s and early 1990s, VR soon faded in popularity, in part due to the cost of the hardware and in part due to the poor user experience on offer – VR nausea was commonplace. Whilst VR continued to be used in industry and research niches, it took the announcement of the Oculus VR system on Kickstarter in 2012 for the current VR renaissance to take hold. Oculus was purchased by Facebook for $2 billion in 2014, and launched the Rift in 2016. HTC teamed up with Valve (the makers of gaming platform Steam) to launch the Vive VR system less than a month later, in April 2016. As of 2018, there are dozens of headsets available to consumers, and within the next decade it is widely expected that immersive headsets will become ubiquitous.

1. http://www.mortonheilig.com/InventorVR.html
2. https://en.wikipedia.org/wiki/Ivan_Sutherland

2.1  Virtual Sight, Sound, and Touch

Virtual Reality technology intervenes on the senses to represent a virtual environment in place of the real one. Current hardware achieves this through a headset, worn by the user, which presents slightly different images to each eye via a combination of high-resolution screens and carefully constructed lenses. The software element of the system takes information about the head position and movement, and dynamically renders an accurate perspective of the virtual world to the user. So, when a user looks up, or down, or even behind them, the scene that is delivered to each eye is as it would be if that environment were being naturally perceived. The result is that, to varying degrees, the user feels immersed or present in the virtual world (Champel et al. 2017).

This characterisation privileges the visual dimension of VR, but most systems also use the same software calculations about head position to mimic directional audio. Thus, not only are the visual cues consistent with the presence of the virtual objects, the audio cues are too. There is evidence to suggest that the combination of visual and auditory input for a VR experience significantly magnifies the sense of presence for users (Brinkman et al. 2015).

What remains most clearly missing, at least for the time being, is a highly realistic haptic dimension to virtual experiences. What do exist are prototype gloves, suits, and robotic arms that mimic genuine haptic engagement to some degree (Pacchierotti et al. 2017), but the resistance and weight that we feel when, say, lifting a bowling ball or baseball bat remains beyond the reach of all but the most specialist, custom-built solutions (i.e. by placing a VR tracker on a real-world bat). Among the openly available VR systems, haptic effects today are largely limited to mild rumble effects in handheld controllers. This is particularly relevant for the medical field, and I will return to this issue in Sect. 4.4.
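While the haptic channel lags behind, the visual pipeline described at the start of this section is well understood, and a rough sketch may help make it concrete. The snippet below is illustrative only and not tied to any particular headset SDK; all names and the interpupillary distance value are assumptions:

```python
import numpy as np

def eye_positions(head_pos, head_rot, ipd=0.063):
    """Per-eye camera positions derived from a tracked head pose.

    head_pos: (3,) head position in world space.
    head_rot: (3, 3) rotation matrix whose first column is the head's
    local 'right' axis. ipd: interpupillary distance in metres (~63 mm).
    """
    right = head_rot[:, 0]
    left_eye = head_pos - 0.5 * ipd * right
    right_eye = head_pos + 0.5 * ipd * right
    return left_eye, right_eye

# Each frame: re-query the tracked pose, recompute both eye cameras, and
# render the scene once per eye; the slight offset produces stereo depth.
```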

2.2  Virtual Movement

Another significant dimension of immersion is the issue of movement within VR. Movement mechanisms vary between hardware systems, and even between software applications, but they split into four broad categories:

– Static: the user is rooted to a single position in three-dimensional space, but can still look around by tilting their head along each of the X, Y, and Z axes. This gives the user only three degrees of freedom (3DoF) in the virtual environment.
– Continuous motion: using a gamepad or controller, users can instruct the camera or avatar in VR to continuously advance through virtual space along the X, Y and Z axes. This allows virtual locomotion, and so up to six degrees of virtual movement (6DoF), even if the hardware can only track 3DoF user movement in the real world.
– Teleportation: like continuous motion, this uses a gamepad or controller to move, and allows 6DoF virtual movement, even on 3DoF hardware. However, unlike continuous motion, this method of movement takes the user abruptly from one point in virtual space to another.
– Tracking: more advanced "room-scale" VR systems3 can track the user's real-world movement in all six dimensions (tilting and movement through X, Y, and Z). Using this method, virtual movement can match real-world movement.

The simplest of these to execute is the static approach, but it is also the least rewarding in terms of presence or immersion. If you take a step to the left, or crouch, in the real world, but the virtual perspective remains unchanged, that breaks the sense that you are really in that virtual environment (Champel et al. 2017). By contrast, the most complex, and most satisfying, approach is that of tracking. When every move you make is matched in the virtual world, then immersion and presence are at their peak. This approach requires real-world space, however, and your movement in the virtual world is limited to where and how you can move in the real world. For this reason, it is standard to blend the tracking approach with one of the controller-based strategies.

Of the controller-based approaches to movement, teleportation may seem like the most unnatural – we cannot teleport in the real world, but we can walk or ride as we do with the continuous movement approach. It turns out, however, that continuous movement is a significant contributor to virtual reality nausea, as it creates a mismatch between the visual information provided by the VR system and the real-world inputs detected by the vestibular system (Akiduki et al. 2003). Thus, teleportation has become a standard way for users to move in VR.

3. Current examples include the HTC Vive, Oculus Rift, and Samsung Odyssey.

2.3  VR Hardware

VR hardware comes in three broad categories:

– Phone-in-a-box
– Standalone
– Tethered

The phone-in-a-box variety was the first to become widely available in 2014 (Lunden 2016). It combined a box-frame and lenses with the existing high-resolution screens and gyroscopes in modern smartphones to create a simple VR experience. The earliest versions, such as the Google Cardboard and the Samsung Gear, allowed anyone with a sophisticated enough phone to experience VR very cheaply (from around $10). These experiences were limited, however, by the resolution, graphical power, and battery of the phone, and – since separate controllers were not available initially – by the strict limitation of having only 3DoF movement within the virtual experiences.

In March 2016 Oculus launched their much-anticipated Rift: a VR system that had to be tethered to a PC for power and graphical processing (Lomas 2016). The Oculus allowed 6DoF movement in the real world to be tracked by two sensors (also tethered). Just a few weeks later, in April 2016, HTC launched their tethered system, the Vive (Vive Team 2016). This system did not require tethered sensors, but rather infra-red beacons by which the headset could triangulate its own location. This allowed for even greater freedom than the standard two-sensor Oculus system, and was arguably the first truly room-scale VR system available to the public.

The standalone systems – such as the Oculus Go and the Vive Focus – are dedicated VR systems, so do not use a further device such as a phone or a PC. This means that they are independent of sensors, beacons, or a PC, unlike the tethered sort. The Vive Focus offers 6DoF by using an inside-out tracking system that senses the environment and uses that to triangulate head motion (Vive 2018).

The additional processing resources of the tethered systems allowed for considerably more ambitious graphics, but more importantly enabled a very high image refresh rate (90 Hz) that appears to have significantly reduced VR nausea issues (Hunt 2016). Without the power of a PC, the phone-in-a-box and standalone headsets available as of 2018 offer a compromise in terms of refresh rate (60–75 Hz) and graphical quality. However, without the bulk and expense of a PC, they offer a more portable and more affordable option.

The ideal VR hardware system remains elusive, but significant progress can be expected in the short-to-medium term (1–5 years) towards a more ideal system that is standalone, affordable, and which has the kind of processing power currently reserved for tethered systems.4

4. In October 2018, Oculus announced the Quest standalone system, which will be launched in Spring 2019 (Oculus VR 2018). The explicit claim is that this will have "Rift-level" visual quality, but at the time of writing, this quality claim remains unverified.

2.4  VR Feature Set

As the foregoing should make clear, there is much diversity within the current VR hardware offering, so there is no definitive "VR feature set" that would fit all cases. That said, if we set aside the phone-in-a-box and more limited standalone headsets, we do get significant convergence on the following.

VR experiences involve computer-generated environments. Even when the content is a 360-degree movie, or an environment built from photographs, what is being experienced by the user in the moment is computer generated. Just as with CGI techniques in films, this removes the constraints of the actual world. You can experience distant or unreachable places, journey to the past or the future, occupy molecular or galactic scales. You can forgo gravity, manipulate light and sound, you can destroy mountains, or produce objects ex nihilo, or adopt super-human perceptual abilities. In short: anything goes.

VR experiences offer a realistic sense of virtual depth and 3D. When in VR, you experience the environment as being genuinely three-dimensional, and you can assess depth (and so height, and scale) in a natural way. In application, this means that users can be shown an object or environment in VR and genuinely grasp its 3D structure without the interpretive work required when only presented with 2D media.5

VR is immersive. Perhaps the primary driver behind VR technology is its ability to make the user feel as though they are genuinely present in the virtual space. This can be used for games or leisure in obvious ways, but it can also help with training and education in much the same way as real-world field trips, or practical observations, do.

5. Though depth perception appears to err systematically in VR (Thompson et al. 2004).


VR is isolating. Since VR intervenes on the senses to represent a virtual environment, it screens off the actual world, and the people who are in it. This isolates the user – a bonus for immersive gaming or meditation, but perhaps a demerit for certain teaching or social applications. This is the flip-side of immersion – it is the price we pay for feeling like we are in the virtual world.

VR can be disorienting. Coming out of VR, many users take a moment to re-adjust to their real-world surroundings. The lighting is different, the colours less vivid, and their orientation within the room may be surprising. This can all be mildly disorienting, but after having given hundreds of VR demonstrations, I have never experienced that disorientation become distress. That said, the potential for more severe reactions is there, and practitioners need to bear this in mind, especially when dealing with vulnerable populations.

VR is nausea-inducing. VR certainly was nausea-inducing in the past, but the high refresh rates of current systems have eliminated this as a general VR feature. I cross it out, but do not delete it, because nausea does remain an issue if developers are not careful with their approach to virtual movement. It has become a matter of choice, not an essential feature of the technology.

I will return to this list in Sect. 5 when I offer my practical advice. I now turn to the sister technology of Augmented Reality.

3  Augmented Reality

Augmented Reality technology intervenes on the senses to create a realistic impression of virtual objects within the real environment. Where VR aims to replace the real-world environment with a virtual one, AR aims to integrate virtual elements with our real-world surroundings. It is tempting to think of AR as a kind of partial VR – if VR aims to take over 100% of the experience, AR aims for something less – but this characterisation risks giving the misleading impression that AR is easier to achieve. Achieving fully-functional AR is considerably more complex, and technologically challenging, than VR (Ashley 2015).

3.1  AR, MR, and HUDs

What exactly deserves the name 'augmented reality' has been somewhat controversial. At launch, Google Glass was heralded as a breakthrough 'augmented reality' device available to consumers, and yet many would argue that Google Glass merely puts a small screen between you and the world. Such screens have been around in the military and elsewhere for a very long time, and are typically referred to as Heads-Up Displays (HUDs) (Wikipedia Contributors 2018). The characterisation I give above rules out Google Glass (and others, such as the Vuzix Blade (Statt 2018)) as AR, since there is no integration of the contents of the screen into the real environment; there is merely a display between you and the world. So, in my parlance at least, HUDs are not AR.

Another rival term that is sometimes used is Mixed Reality, or MR. This nomenclature seems to have arisen in an attempt to distinguish technology that integrates (mixes) with the real world from that which merely overlays upon it (as with HUDs). Again, on the characterisation of AR given above, no additional terminology is required in order to make this distinction, so I consider it unnecessary; it has mainly gained currency through a concerted effort by Microsoft to create a unified brand around their VR and AR efforts. I will stick with the more neutral terminology of AR.

3.2  AR Hardware

The primary challenge of AR is to make the virtual fit with the real, in a convincing or helpful way. This is what makes perfecting AR more difficult than VR, since VR can largely ignore the real-world environment entirely, but AR systems must in some sense detect the world.

There are three main technological approaches to this:

– Trackers: use distinctive, high-contrast images or patterns in the real world (e.g. QR codes) to give the AR device a point of reference by which to orientate and locate the virtual object in the scene.
– Basic SLAM: Simultaneous Localization and Mapping (SLAM) is the process of having a device map an unknown environment, and locate itself within that environment, in real time. Basic SLAM, as I refer to it, predominantly uses the optical inputs from a camera, together with a gyroscope and/or accelerometer, to achieve this understanding of the environment, and to approximate its location within it.
– Advanced SLAM: This uses an array of sensors, including stereoscopic cameras, infra-red sensors, gyroscopes, and accelerometers, to accurately map the environment, and position the device.

The first AR systems to be widely available were those which used smart-phone or tablet cameras to detect the size and orientation of trackers in the environment, and then render virtual assets (3D models, virtual video screens, etc.) relative to those trackers. The result played on the phone/tablet's screen, integrated with the standard camera's feed. This can yield impressive results, especially when users control the size and orientation of the virtual objects by varying the distance and orientation of the tracker (Azuma 1997).

The tracker approach is quite seriously limited, however. Firstly, the rendering of the virtual object is insensitive to what else is in the environment, so rendering large objects in small spaces, or objects which should be partially obscured from your perspective, ends up looking like a poorly-executed photoshop edit rather than virtual objects integrated into the scene. Secondly, this approach requires that we alter the real world in some way (i.e. printing QR codes) to trigger the AR experience. This makes scaling the experiences difficult, and severely limits the range of use cases to which AR can be applied.

SLAM technologies require no pre-set aspect of the real world to latch onto. Their major strength is that they can operate in a wide variety of unknown environments. This places greater strain on the device, however, both in terms of processing load and in terms of sensing ability, so whilst trackers could work on older camera phones (Samsung Galaxy S6 era), even Basic SLAM requires newer, more sophisticated devices (iPhone 7 era onwards).

Much can be done with even Basic SLAM, however, as the success of Pokémon Go showed in 2017 (Chamary 2018). By detecting real-world surfaces such as floors and tables, devices are able to render virtual objects – like Pikachu – into the scene quite realistically. Both Android and iOS platforms now include a native AR capacity using this sort of SLAM technology, and developing and publishing for AR has become vastly easier as a result.

Advanced SLAM remains the obvious next step. With an array of sensors, devices can detect more than just basic surfaces. They could (in theory) detect the size and shape of objects, and their depth from the device. This would allow for realistic placement, even in cluttered or busy scenes, and (eventually) realistic occlusion by intermediary objects. As of 2018, there are only two broadly-available devices that are capable of this kind of Advanced SLAM AR: Microsoft's HoloLens, and the Magic Leap One.

The HoloLens was launched as a "Mixed Reality" prototype spectacularly early, in 2016, and remains largely unchanged (and still restricted to developers only) in 2018 (Microsoft 2018). The HoloLens is not a phone but a wearable headset, with transparent lenses that sit in front of the eyes, like a visor. Both the HoloLens and Magic Leap's One use novel projection techniques instead of a screen. The benefit of this is that virtual objects can be represented as occupying a part of the visual field, whilst the rest remains naturally perceived by the user. This is a major advancement over the pass-through approach on smartphones and tablets, where the entire scene, including the real-world environment, had to be viewed via a screen.
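As a rough illustration of the tracker approach described above (not drawn from the chapter), marker detection and pose estimation can be prototyped with OpenCV's aruco module, assuming opencv-contrib-python is installed; the file name and camera intrinsics below are placeholders:

```python
import cv2
import numpy as np

# Detect a known fiducial marker in a camera frame (file name illustrative).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("scene.jpg")
corners, ids, _rejected = cv2.aruco.detectMarkers(frame, dictionary)

if ids is not None:
    # Placeholder intrinsics; in practice these come from camera calibration.
    camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist_coeffs = np.zeros(5)
    # One rotation/translation per marker: the reference frame in which a
    # virtual asset would be rendered into the camera feed.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.05, camera_matrix, dist_coeffs)  # marker side: 5 cm
```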


As extraordinary as the head-mounted Advanced SLAM devices are, they remain impractical in two main ways. First, they are prohibitively expensive, at $2500–$3000 each, and neither will be available to the general public until mid-2019 at the earliest – perhaps never, in the case of the original HoloLens. Second, they can only render virtual objects in a narrow portion of the visual field: around 30–40 degrees, compared to the 90 degrees we get on VR headsets like the Vive (Ashley 2018). The effect of this is that when a user looks off to one side, the virtual object either clips, or disappears entirely, from the scene. It can seem like you are looking down a tube at the world. In the medium term, however, we can expect the projection technology to improve, the cost to come down, and the bulky, awkward form factor to be refined. There is good reason to think that AR headsets in 2030 will have replaced the smartphone, as we will no longer need a screen to see our data – it will be overlaid on the world around us in a whole new data interface.

3.3  AR Feature Set

It is useful to consider the feature set associated with AR, by contrast with the feature list for VR.

AR uses computer-generated elements in a real environment. As with VR, every virtual element is computer generated and so not bound by the rules of the real world. You can have virtual screens at your desk, distant people virtually in the room with you, or virtual arrows directing your actions. Almost anything goes.

AR is integrated with the real world. Whilst VR ignores the real world, AR enhances it. You can have any available sort of information – temperature, blood pressure, scans, etc. – appear on the organ, or patient, in front of you.

AR is partly immersive. VR takes you away from the real world, and immerses you in the virtual. AR leaves you in the real world but can, to a lesser extent, immerse you in a different version of the real world.

AR can be a shared experience (non-isolating). Since AR leaves you in the real world, it leaves you in touch with the objects, and people, in your actual environment. This makes group tasks, or collaborations, around some virtual object much more natural and effortless than in VR.

AR can be orienting. AR does not disorientate users in the manner that VR can, as it leaves them in an enhanced version of the world – potentially one with in-built directions.

AR does not cause nausea. As there is no mismatch between perceived movement and actual movement in AR, nausea is simply not an issue.

4  Immersive Technology in Medical Contexts

With the groundwork laid in terms of VR and AR hardware and feature sets, I will now outline three use cases where immersive technology has already been applied in the medical sphere. One is in the context of medical training, one in diagnosis and pre-surgical planning, and the last is therapeutic.

4.1  Immersive Technology and Medical Training

Perhaps the most common application of immersive technology in the medical sphere is in training and simulation. Using realistic (though generic) 3D computer models, applications using VR and AR can help impart genuine 3D understanding of the anatomical structures, and physiological processes, within the human body. One prominent example of this is the Medical Realities platform, which takes users through several stages of surgical training, appropriate for undergraduate or postgraduate level (Medical Realities 2018). Specific lessons on the app include various laparoscopy procedures, and a series of 360-degree videos within live surgeries, which allow the student to experience the realistic context. This application is published across all VR platforms for broad reach, but in order to remain accessible to those on the likes of Google Cardboard, the app functionality is limited to 3DoF experiences, with basic, or no, controller interaction.

At Case Western University, the School of Medicine partnered with Microsoft to trial the use of HoloLens in teaching anatomy (Case Western University 2015). Using AR, rather than VR, allowed a natural interaction between students and teachers – gesturing at particular elements when explaining, or asking about, aspects of the animation. Students could view the brain, heart, or digestive system from any angle, and could strip away layers of the model to see the underlying structure, or function. The practical limitations of the HoloLens prevent widespread adoption of this approach today, but it is clear that the students and staff felt the move to 3D teaching was transformative: students who had used the HoloLens devices reported that 15 minutes with the three-dimensional images "could have saved them dozens of hours" in their traditional anatomy labs – Dean Pamela Davis, School of Medicine (Case Western University 2015).

4.2  VR, Diagnosis, and Surgical Planning

If volumetric information is available for a patient, then volumetric rendering, and viewing, of that data could allow greater insight in diagnosing a condition, or planning a surgery. That is the motivating thought behind the "Anatomy Viewer" application from The Body VR (The Body VR n.d.). This application takes patient-specific medical data from MRI, CT and PET imaging, and allows users to view that data in 3D through VR. Instead of having an array of 2D slices of the brain represented in individual scans, practitioners can see the combined structure in 3D, and interrogate the information without the cognitive effort of translating 2D information into 3D understanding. There is some evidence to suggest that this can speed the process of surgical planning, and increase accuracy (Stanford University 2017), but it should also be able to help patients understand their condition. In the future, this sort of application could be extended to take advantage of better AR hardware, so that the 3D models can be seen by doctors and patients simultaneously, or can be available as a reference during surgery.

4.3  Therapeutic Applications of Immersive Technology

Where training and diagnostic applications predominantly have the medical professional as the user, the therapeutic applications have the patient engage in the immersive experience. I will highlight three existing applications of this technology: in stroke rehabilitation, in the treatment of phobias, and as a non-pharmaceutical analgesic.

Stroke: Motor recovery is a major element in post-stroke rehabilitation, and there have been dozens of trials of using VR to aid with this. A meta-analysis of those trials showed that the approach has promise (Saposnik et al. 2011). In early 2018, the Magic Moovr app was launched with the explicit aim of aiding motor recovery in stroke patients, by having them play an immersive game in VR. It is worth noting that this sort of approach will always require VR systems that can track the movement of the body, or controllers, in 6DoF, and so wide adoption may be stymied by the availability of the hardware.

Therapy: Exposure therapy is a widely used treatment for phobias, and for PTSD, but it requires repeated, incremental exposures to the target of the phobia – spiders, heights, triggering environments, etc. VR offers the opportunity to iterate those incremental exposures safely, cheaply, and with greater frequency. A recent literature review of clinical applications concluded that VR-based exposure therapy "has demonstrated equivalent outcomes to in vivo exposure, the gold-standard treatment" (Maples-Keller et al. 2017). Whilst promising, the review also notes that existing studies have had low numbers of participants (10–20), and typically no control group.

Analgesic: VR has been used as a non-pharmacological analgesic in the amelioration of both acute and chronic pain (Hoffman et al. 2011). It has been hypothesised that this works by diverting the patient's attention to the virtual world, and thereby leaving less cognitive capacity for the processing of pain signals (ibid.).

4.4  Future Directions

In the future, I expect that we will see considerably more use of VR and AR in the training, diagnostic, and therapeutic applications outlined above, but the major revolutions in the application of immersive technology to medical visualisation await two key hardware advances: realistic haptics, and remote presence.

Realistic haptic feedback is essential to realistic simulation of tactile tasks, and it remains the most serious technical barrier to the meaningful displacement of cadavers in surgical training. Without realistic feel and super-precise (sub-0.1 mm) tracking, VR will not be able to sufficiently simulate the target circumstances to allow surgeons to develop the necessary motor skills. Several solutions have come to market to try and address the haptic issue, including Xitact medical simulators and 3D Systems' Touch device.6 A systematic review of the available technologies in 2016 concluded:

While haptic simulations are an interesting and low cost alternative to training by using real tissues, they are still hindered by the low realism of the visual environment or the high price for high quality devices. (Escobar-Castillejos et al. 2016)

This is likely to remain the case for some time yet. The present devices focus on providing realistic resistance, so that there is some physical sensation of presence to the user even when there is no real-world object in front of them; this is achieved by having the user handle a proxy object (the haptic device) instead. However successful such devices are in that dimension, they lack the additional qualities of touch, such as texture and temperature, and they limit the mode by which the user feels them – you cannot detect pressure on the back of the hand or finger, for example. These combined challenges remain substantial, but it is clear that the medical industry is leading the way.

Remote presence is another avenue of significant potential, but it represents an equally significant technical challenge. The idea of remote presence is that the user of an AR or VR system could experience the virtual presence of a remotely-located person, as though they were in fact in the room. This means that they can see, and be seen by, those in the room, together with the physical surroundings, and communicate naturally with the expressive power of gesture, and the nuance of facial expressions and body language. One major benefit of such a technology would be that a world expert in some procedure or topic could be virtually present in seconds, if required, and present in a second surgery moments after leaving the first, regardless of geography. Once we see the potential in that case, it becomes obvious that the benefits apply far more broadly, perhaps to include the virtual presence of a doctor with a paramedic crew, or with a patient.

This technology will require not only highly accurate mapping of the target environment (i.e. the operating room) from all angles simultaneously, but also highly accurate capture of the remote person, such that their expressions, movements and gestures can be conveyed to those physically present, whilst they in turn see the target environment via some face-covering device. These are substantial enough technical challenges on their own, but for remote presence to function well, it will require that both are solved, and that the two-way communication of the captured information is fast and reliable enough to make the presence work in high-accuracy, high-stakes applications (compare with the voice delay on long-distance calls).

6. https://uk.3dsystems.com/haptics-devices/touch

Practical Advice

A future with realistic haptics, and remote presence, in medicine could be bright, but for the time being applications need to be designed within the practical constraints we face today. In this final section, I outline some practical advice for those seeking to develop applications using immersive technology.

5.1

Do

Do consider your target audience, and their practical limitations. If you are aiming to reach the masses, then niche devices like the HoloLens, or even the HTC Vive, are simply not established in a wide enough population to be the appropriate platform. If you are targeting a highly specialised audience, consider whether the environment they will use it in can incorporate the sensors or beacons that may be required for room-scale VR. Do consider the limitations of the platform you are targeting. For applications with significant movement for the user, avoid the 3DoF systems, and the nausea-inducing continuous movement approach to virtual locomotion. Do try and match the application to the feature set of each technology. VR should be used for immersive, isolating, experiences where the user gets the sense that they really went somewhere, or genuinely did some task. AR should be used when it is an object that needs to be scrutinised, or manipulated, rather than a whole environment, or when it is important that you remain oriented in the real-world environment. Do use the superpowers that immersive technology allows.7 Unaided, we cannot see light of It is interesting to note that Iron Man has no inherent superpowers, but his use of technology – including AR – 7 

certain frequencies, we cannot see temperature, or colourless gasses, or inside opaque objects, or what they looked like in the past, or should look like after a procedure. Given the computer-generated nature of the virtual, these limitations simply need not apply in AR and VR applications. Thinking of what we would want to be able to see, or hear, or feel in a context is the first step to really exploiting this new media. Do start planning today. The hardware may not be ready for your desired application today, but designing and planning the application in advance, and prototyping it on non-ideal hardware, will give an extremely valuable head-­ start when it is ready. For example, apps that will require sophisticated AR hardware, can be prototyped very effectively in top VR systems today. The connection with the real world will need to be faked at first, but the learning and development that take place in VR should port easily to AR once the hardware matures sufficiently.

5.2

Don’t

Don’t use immersive tech for the sake of it. If the application you want could be done via videos, interactive web-apps, or using some cheap physical props, then immersive technology represents an expensive, restrictive, and over-engineered solution to the problem. Don’t model what you don’t have to. The smooth running, and visual quality, of an app significantly depends upon how well optimised, the virtual environment is. Given the extraordinary processing requirements of AR and VR, and the bottleneck that processing represents for many devices, processing unnecessary detail in the virtual scene can seriously impact on the quality of the experience. Simplified backdrops in VR, or textures in AR, will typically improve the performance without sacrificing the experience. Note that one bonus of puts him on a par with those, like Captain America, or Thor, who do.

Immersive Technology and Medical Visualisation: A Users Guide

AR apps is that you get the real-world environment for free (both in terms of cost to develop, and processing load). Don’t overstimulate the user. VR in particular can become overwhelming for users if too much is going on, and too little time is allowed for them to look around, and find things in the virtual environment at their own pace. A common problem in the design of AR and VR applications, is that users do not naturally know where they are supposed to direct their attention. This issue is compounded by overly complex experiences. Don’t incorporate unnecessary movement. Given the technical challenges around movement, and the potential for nausea and disorientation, it is wise to limit the movement within the experience as much as is practically possible. One counterpoint to this is the additional immersion VR users can experience if they move, even just a little, to experience the 6DoF tracking. Don’t overlook the haptics. It should be plain from the above discussion that only fairly basic haptic feedback technology exists today. That does not mean that one should completely overlook the topic, however. Picking up virtual objects that should have some bulk about them, but don’t, can quickly break the sense of immersion that certain applications are aiming for. If you cannot avoid the need to physically interact with an object in your application, then consider whether a proxy object in the real world could be used instead. For example, if you need the user to lean on a virtual surface, then bring a real-world table or similar and position it to match the virtual surface’s location. The Void experience has used this approach to build appropriate tactile elements into sophisticated VR experiences.8 Don’t be daunted. There is a lot to consider and balance about when forming the design of an immersive application, and many of those who find themselves inspired by the potential of this technology, soon give up because the daunting complexity of getting from idea, to https://www.thevoid.com/

8 

133

implementation. Whilst understandable, this stifles progress, and given the rapid spread of expertise in this area – particularly in the US, and UK – means that the skills and experience exist to guide and advise you through. If the application is important enough, help is available.9

References Akiduki H et  al (2003) Visual-vestibular conflict induced by virtual reality in humans. Neurosci Lett 340(3):197–200 Ashley J  (2015) Imaginative Universal. [Online] Available at: http://www.imaginativeuniversal.com/ blog/2015/09/09/why-augmented-reality-is-harderthan-virtual-reality/. Accessed 21 Dec 2018 Ashley J  (2018). Imaginative Universal. [Online] Available at: http://www.imaginativeuniversal. com/blog/2018/10/08/magic-leap-one-vs-hololensv1-comparison/. Accessed 21 Dec 2018 Azuma RT (1997) A survey of augmented reality. Presence Teleop Virt Environ 6:355–385 Brinkman W-P, Hoekstra ARD, Egmond R (2015) The effect of 3D audio and other audio techniques on virtual reality experience. Stud Health Technol Inform 219:44 Case Western University (2015) Case Western University. [Online] Available at: http://case.edu/hololens/. Accessed 21 Dec 2018 Chamary J (2018) Forbes. [Online] Available at: https:// www.forbes.com/sites/jvchamary/2018/02/10/pokemon-go-science-health-benefits/#5964d1b03ab0. Accessed 21 Dec 2018 Champel M-L, Doré R, Mollet N (2017). Key factors for a high-quality VR experience. SPIE, pp 103960Z–103960Z-12 Escobar-Castillejos D, Noguez J, Neri L, Magana A, Benes B (2016) A review of simulators with haptic devices for Medical Training. J Med Syst 40(4):104. https://doi.org/10.1007/s10916-016-0459-8 Hoffman HG et al (2011) Virtual reality as an adjunctive non-pharmacologic analgesic for acute burn pain during medical procedures. Ann Behav Med 41:183–191 Hunt C (2016) VR Heads. [Online] Available at: https:// www.vrheads.com/tips-avoid-motion-sicknesscaused-vr-gaming. Accessed 21 Dec 2018 Lanier J (2017) Dawn of the new everything. Henry Holt and Co, New York

One organisation which aims to connect academics, industry, and practitioners who work with immersive technology, is ImmerseUK: https://www.immerseuk.org/

9 

134 Lomas N (2016) Tech Crunch. [Online] Available at: https://techcrunch.com/2016/01/06/oculus-rift-headset-priced-at-599-for-consumers-ships-in-march/. Accessed 21 Dec 2018 Lunden I (2016). Tech Crunch. [Online] Available at: https://techcrunch.com/2017/02/28/google-hasshipped-10m-cardboard-vr-viewers-160m-cardboardapp-downloads/?ncid=rss. Accessed 21 Dec 2018 Maples-Keller JL, Yasinski C, Manjin N, Rothbaum BO (2017) Virtual reality-enhanced extinction of phobias and post-traumatic stress. Neurotherapeutics 14:554–563 Medical Realities (2018) Medical Realities. [Online] Available at: https://www.medicalrealities.com/. Accessed 21 Dec 2018 Microsoft (2018) Microsoft. [Online] Available at: https:// www.microsoft.com/en-gb/hololens. Accessed 21 Dec 2018 Oculus VR (2018) Oculus. [Online] Available at: https:// www.oculus.com/blog/introducing-oculus-quest-ourfirst-6dof-all-in-one-vr-system-launching-spring2019/. Accessed 21 Dec 2018 Pacchierotti C et  al (2017) Wearable haptic systems for the fingertip and the hand: taxonomy, review, and perspectives. IEEE Trans Haptics 10:580–600 Saposnik G, Levin M; Outcome Research Canada (SORCan) Working Group (2011) Virtual reality in

N. McDonnell stroke rehabilitation: a meta-analysis and implications for clinicians. Stroke 42:1380–1386 Stanford University (2017) Stanford Medicine. [Online] Available at: https://med.stanford.edu/news/allnews/2017/02/virtual-reality-imaging-gives-surgeonsa-better-view-of-anatomy.html. Accessed 21 Dec 2018 Statt N (2018) The Verge. [Online] Available at: https:// www.theverge.com/2018/1/9/16869174/vuzix-bladear-glasses-augmented-reality-amazon-alexa-aices-2018. Accessed 21 Dec 2018 The Body VR (n.d.) The Body VR. [Online] Available at: https://thebodyvr.com/anatomy-viewer/. Accessed 21 Dec 2018 Thompson WB et al (2004) Does the quality of the computer graphics matter when judging distances in visually immersive environments? Presence Teleop Virt Environ 13:560–571 Vive (2018) Vive Blog. [Online] Available at: https://blog. vive.com/us/2018/11/08/htc-vive-launches-full-suitepremium-vr-offerings-businesses-sizes/. Accessed 21 Dec 2018 Vive Team (2016) Vive. [Online] Available at: https:// blog.vive.com/us/2016/02/21/unveiling-the-vive-consumer-edition-and-pre-order-information/. Accessed 21 Dec 2018 Wikipedia Contributors (2018) Wikipedia. [Online] Available at: https://en.wikipedia.org/wiki/Head-up_ display. Accessed 21 Dec 2018

A Showcase of Medical, Therapeutic and Pastime Uses of Virtual Reality (VR) and How (VR) Is Impacting the Dementia Sector

Suzanne Lee
Digital Innovation and Strategy/Pivotal Reality, Glasgow, UK

Abstract

Dementia is on the rise as our population ages, and it is still untreatable. Our global society is changing, but so is our technological landscape. Virtual Reality has been around for a while, but it has not been widely adopted within the general healthcare sector, never mind for niche use cases. That is, until now…

Keywords

Dementia · Economy · Entertainment · Healthcare · Therapy · Virtual reality

Allow me to take you on a journey of discovery as I share my own research into both the dementia and virtual reality industries. I started a company (Pivotal Reality) to use virtual reality to help people who are living with dementia. I became aware of this dreadful condition after both my Grans were diagnosed with it; both are no longer here. My love for technology and my passion for helping people thrive in this virtual reality for dementia space. I am a conference speaker, blogger and active on social media about everything VR (virtual reality) and dementia VR related. Recently I graduated from the first global Transformative Technology Academy and was also listed as one of 100 voices in VR/AR (Augmented Reality) in Education. Let me share some incredibly inspiring insights with you about how virtual reality is disrupting the dementia sector from all angles! You never know, you may feel empowered to join the cause and motivated to carve out your own piece of the puzzle! I do hope so, because we need people like you, yes you!

1 Taking a Look at Dementia and Why We Should Give It Our Attention

The World Health Organisation (WHO) states: 'Dementia is a syndrome, usually of a chronic or progressive nature, caused by a variety of brain illnesses that affect memory, thinking, behaviour and ability to perform everyday activities' (WHO; 1). 'It has been indicated that patients suffering from AD are vulnerable to abnormalities in behavioural and psychological symptoms, such as depression, irritability, agitation, aggression, and delusions during the overall course of the disease' (Li et al. 2017). According to the WHO, the number of people living with dementia worldwide is currently estimated at 47 million and is projected to increase to 75 million by 2030. The number of cases of dementia is estimated to almost triple by 2050.


The total number of new cases of dementia each year worldwide is nearly 9.9 million, implying one new case roughly every 3 seconds (about 31.5 million seconds in a year divided by 9.9 million cases gives approximately 3.2 seconds per case). The number of people with dementia is expected to increase to 82 million by 2030 and 150 million by 2050 (WHO; 2). Symptoms can start as much as 20 years before diagnosis, so a large proportion of our global population is yet to be diagnosed. With citizens born after 2015 expected to reach the age of 100, this is a major cause of concern for us as a society. Dementia is now also the seventh leading cause of death, and there is no known cure (WHO; 3). More research is needed to develop new and more effective treatments and to better understand the causes of dementia. Research identifying the modifiable risk factors of dementia is still scarce.

2 The Cost on Healthcare/Economy

The high cost of the disease will challenge health systems as they deal with the predicted future increase in cases. The total cost of dementia to the UK is estimated at £26 billion a year (Alzheimer's Society). The WHO expects the global cost to rise to US$2 trillion by 2030 (WHO; 4).

2.1 The Impact

Caring for those living with dementia costs not only society and the economy but families too. Many carers are family members, especially when the person is living at home. Some people sell their homes (and forfeit an inheritance for their family) in order to move into a professional facility such as a nursing home with dedicated dementia care. These changes can alter relationship dynamics, which has emotional consequences in addition to the financial costs.

2.2 Lack of Funding and Research

Only £90 is spent on dementia research per patient per year in the UK, according to the Alzheimer's Society. This is nowhere near enough to help us discover causes and cures for dementia! As one of the leading causes of death across the globe, dementia deserves far more investment in this space, in my personal opinion.

2.3 Current ‘Solutions’/Treatments

Today's offerings vary in their choice of therapies and experiences, which I will cover in two categories:

1. Reminiscence Therapy
Things like memory boxes, or on a larger scale reminiscence pods, can be used to help stimulate memory recollection. Senses can be triggered through smell, touch, sound, sight and taste. These are extremely effective for the majority of people, especially as one symptom of dementia is a loss of confidence or of 'one's self'. Outcomes of this style of therapy can include starting conversations, boosting confidence, relieving boredom and remembering one's personal self, resulting in improved cognitive functioning and mood (The Abbeyfield Society). However, Wendy Mitchell, author of the book 'Somebody I Used to Know', who is herself living with early-onset dementia, said in a recent article that wall murals can actually be confusing for some. The scenes are meant to be a means of escape, but because you cannot 'switch them off' or 'unplug' yourself from them, they can be confusing and a cause of stress for someone who is living with dementia (Learner 2018).

2. Entertainment Activities
Music, pets, children, sensory activities and outings are all activities that I would class in this category, all of which prove valuable for improving mood and cognitive function. The Channel 4 programme Old People's Home for 4 Year Olds demonstrated that activities involving emotional connection with others (in this case, children) have health and mental benefits too. Its experiment, which monitored elderly volunteers when 4-year-old children were introduced to their environment, is an important case study, with massively successful results for health, mood and cognitive improvement among all the elders (Channel 4).

I believe all these activities are fantastic and should be brought into care environments as often as possible, ideally on a regular basis! The challenges are consistency, costs, availability of volunteers and sometimes locations. These real-world solutions that do work well (to a point) are the reason why I personally believe virtual reality to be a valuable tool for those living with dementia. Let's take a look at how VR is playing its part in dementia reminiscence, entertainment and so much more!

2.4 How VR Is Disrupting the Sector

Virtual Reality is when users are visually transported into a new, software-generated environment by placing an HMD (Head Mounted Display) over their eyes. Technological advancements now also allow users to experience touch in VR through what is called haptics: the user feels a sensation of touch in the real world at the same moment that contact is made in the virtual world, either through a device they are holding or one they are wearing. This technology is still at an early stage of development but will become readily accepted as a VR enhancer. I have heard it described like this: if VR is a ball, then haptics is the bat. The ball is fun to use by itself, but the bat enhances the experience and opens up further opportunities for the user to enjoy.
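To make the HMD-plus-haptics idea concrete, the browser-based WebXR API exposes both an immersive headset session and controller haptics. The sketch below is a minimal illustration only, assuming a WebXR-capable browser and the WebXR TypeScript type definitions; the pulse intensity and duration are arbitrary values chosen for the example, and `hapticActuators` is a non-standard extension on some devices.

```typescript
// Minimal WebXR sketch: enter an immersive VR session and fire a short
// haptic pulse whenever the user presses a controller's primary button.
// Values are illustrative; hapticActuators support varies by device.
async function startVrWithHaptics(): Promise<void> {
  if (!navigator.xr) {
    console.warn('WebXR is not available in this browser.');
    return;
  }
  const session = await navigator.xr.requestSession('immersive-vr');

  session.addEventListener('select', (event: XRInputSourceEvent) => {
    // Each controller may expose a gamepad with haptic actuators.
    const gamepad = event.inputSource.gamepad;
    // Cast because hapticActuators is not in all TypeScript DOM libs.
    const actuator = (gamepad as any)?.hapticActuators?.[0];
    // Pulse at 60% intensity for 100 ms to mirror the virtual contact.
    actuator?.pulse(0.6, 100);
  });
}
```

Note that browsers require a user gesture (such as a button click) before `requestSession` will succeed, which is why real applications start VR from an "Enter VR" button.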

3 The Areas That We See VR Innovation in the Dementia Sector Today

3.1 Virtual Reality Reminiscence for Dementia (Therapy)

Virtual Reality is a fantastic tool for aiding reminiscence, or as it is known in VR, 'immersive reminiscence'. It works just like the real-world examples, except you can see 3D models of objects instead. You have the ability to personalise these mementos for users or offer a selection of generalised items. In addition, it is common to create VR scenes or 360° videos/photos which can be viewed within VR. Users can experience the same benefits from VR reminiscence experiences as they would in the real world, at a fraction of the cost. It is not only cost saving but more practical too, as you don't need to consider the space or storage of these items, given that the only equipment needed to view them all is a small HMD (Head Mounted Display). Standalone VR HMDs are now available for a few hundred pounds, meaning that mobile phones and laptops are no longer a necessity, which also keeps costs low and ultimately improves accessibility to the technology. The standalone HMDs are great quality but not a necessary expense, as lower-cost options are available too. Google Cardboard is a prime example of a lower-cost option, whereby a mobile phone is inserted inside the Cardboard headset. One of my personal favourite examples of VR reminiscence, or immersive reminiscence, is The Wayback Team's Coronation Day 1953 360° video (The Wayback Team). I recommend that you check this out, and please do share it with anybody you happen to know who is living with dementia. This multi-award-winning production re-enacts a moment in history with exceptional attention to historical details such as fashion, décor and props. Many users are thrilled to watch it as it helps to trigger forgotten memories. Producing further films set throughout the decades would help reach more people of all ages, assisting with compiling a variety of content that allows users a meaningful choice of immersive reminiscence experiences. Pivotal Reality makes local 360° videos to capture locations of interest or personalised experiences for clients, to help start creating a catalogue of choice for dementia VR. Overall, the industry is still in the early stages of building this library and there is plenty of room for additional footage, be it homemade or of professional standard.
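As an illustration of how simple a 360° reminiscence viewer can be, the sketch below uses the open-source three.js library to map an equirectangular video onto the inside of a sphere, which is the standard technique behind most 360° players. The file name is a placeholder assumption, not a reference to any of the productions mentioned above.

```typescript
import * as THREE from 'three';

// Minimal 360° video viewer: the camera sits at the centre of a sphere
// whose interior is textured with an equirectangular video frame.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
renderer.xr.enabled = true; // allows presentation on a headset via WebXR
document.body.appendChild(renderer.domElement);

const video = document.createElement('video');
video.src = 'reminiscence-360.mp4'; // placeholder 360° clip
video.loop = true;
video.muted = true; // browser autoplay policies usually require muted playback
void video.play();

// Inverting the sphere on the x axis makes the texture face inward,
// so the viewer looks at the scene from the inside.
const geometry = new THREE.SphereGeometry(50, 60, 40);
geometry.scale(-1, 1, 1);
const sphere = new THREE.Mesh(
  geometry,
  new THREE.MeshBasicMaterial({ map: new THREE.VideoTexture(video) })
);
scene.add(sphere);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```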


4 Impact/Output/Research

There are many benefits of using virtual reality as a reminiscence therapeutic tool, and a few publications have been written specifically about the benefits and impact of VR on dementia (Garcia-Betances et al. 2015). Studies have shown that it takes approximately 20 seconds for virtual worlds to be accepted and for full immersion of users to occur (Tussyadiah et al. 2018). Having the ability to 'travel' and experience a new scene from your real-world position is quite liberating, especially for people who have little or no mobility.

5 Virtual Reality Entertainment for Dementia

Just like in the real world, there are many variants of activities that can be classed as VR entertainment for dementia too. The most popular is the ability to 'travel' to other places without leaving the comfort of the real-world space in which the user is situated. I have taken people living with dementia to the top of Mount Everest, underneath our oceans, on safari in Africa and even into outer space! Those types of VR apps are common and fairly general, but people who are living with dementia enjoy them equally. A lovely lady told me that she had even been tobogganing with her grandson after he received a VR headset from Santa Claus! Don't let preconceived ideas of our elderly generation, or stereotypes associated with this cognitive disease, discourage you from promoting and offering these types of experiences to this user group. You must of course adhere to the VR device's health and safety protocol and always introduce the VR experience in advance of your user starting it, e.g. no adrenaline-fuelled experiences without prior warning and consent!

5.1 Use Cases

There are many examples of VR companies across the globe who visit places within communities such as local care/nursing homes. Rendever, based in Boston, is one that uses multi-user experiences for entertainment to help improve social skills. Pivotal Reality is another, and we were inspired by TribeMix, the first company that we saw doing this activity in the UK. They inspired us into this sector! Even care facilities such as Balhousie Care Group in Perth, Scotland and Patyna in the Netherlands are independently using the technology with their residents.

5.2 Benefits

The most significant benefit of using Virtual Reality for entertainment purposes is that it helps to combat loneliness. The immersive experience absolutely includes users and makes them feel part of the scene/environment. Even a simple change of scenery is beneficial for people living with dementia, and this medium really excels at delivering that in a practical, safe manner. When was the last time this social group attended a live concert at a stadium, for example? With VR, this type of recreational visit is possible! VR has also been shown to deliver a 70% stress reduction for users (Samsung Healthcare Insights 2018).

6 Virtual Reality Education and Training for Dementia

The audience that I refer to in this section includes academia, professionals, family/carers, people living with dementia and the general public. Virtual Reality is a fantastic tool not only for displaying empathy but, thanks to the fully immersive experience, for actually allowing users to 'feel' empathy too. That is precisely why VR is such a powerful resource for education and training. (Other industries also use VR for educational and training purposes because it is so effective.) Users have been observed to score 10% higher when training with a VR headset (HMD) (Briggs 2017). The immersive experience can easily be designed for each specific user group too. There is a wide variety of 360° videos on various content platforms, but let's take a look at a few examples that stand out for me:

Alzheimer's Research UK's 'A Walk Through Dementia' 360° video is perfect for helping others to understand some of the symptoms of Alzheimer's and dementia. Narrated from the point of view of somebody with the condition, we hear how the female protagonist becomes confused, see how her vision is impacted and feel how vulnerable she is. Not only does this video capture the very real symptoms, it also helps us as the audience understand the implications those symptoms have for that individual's life, including challenges to their independence. (Alzheimer's Research UK have more videos like this in their range that you should check out too!) For care providers there are various kinds of training on the marketplace. A recent study from the University of Maryland found that people recall information better when it is presented in a virtual reality environment rather than on a desktop computer (Kelly 2018). Offerings range from bespoke dementia visors that contrast impaired and 'normal' sight, such as Wireframe Immersive's solution, to experiences demonstrating the effects of the words and tone used when caring for someone who is living with dementia. There are tailored simulations available in virtual reality, simulations of care homes that are suitable for training inductions, and even accessories such as a dementia suit that can be worn simultaneously with the VR headset to convey the mobility difficulties of some patients too. Osso VR is well known as the surgical simulation VR platform in healthcare, and it will not be long before there is a front runner for a dementia VR platform, something we at Pivotal Reality are working towards achieving. Most training solutions are bespoke offerings, so it really wouldn't surprise me if there are other niche examples out there that are yet to be discovered!

7 Research

Every aspect of using virtual reality within the dementia sector, especially when it is used directly with someone who has the condition, is analysed in depth through research. Generalised research includes looking into the effects of VR from various angles, such as specifically at memory (memory palaces; Vindenes et al. 2018) and at therapeutics tailored for impact, such as One Caring Team, and VR has even been clinically proven to help with depression, anxiety, post-traumatic stress disorder and chronic pain (Jerdan et al. 2018; Maples-Keller et al. 2017; Virtual Reality Therapy News 1, 2). The research that I refer to in this section is how VR is contributing towards gaining a deeper understanding of dementia, its symptoms and its effects on those affected. In particular I want to share Sea Hero Quest, a large research experiment (in fact, the biggest of its kind, with over three million participants to date) that focuses on users' spatial navigation awareness and ability. The game consists of being on a boat and being given specific tasks to achieve. The user's actions and response rates are then collected and measured as part of the research experiment. Remarkably, 2 minutes of game play provides 5 hours of research data, with the equivalent of 12,000 years of research having been collected already (Sea Hero Quest)! Users of all types are welcome to take part in this large global experiment, even if you have not been diagnosed with any condition. Designers of VR-for-dementia apps and experiences are mostly mindful to include data measurements that will help test their hypotheses, so this section is as broad as it is narrow. Sea Hero Quest's (and others') mission is ultimately to help scientists diagnose the disease early.
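To give a flavour of what this kind of data collection might look like in code, here is a hedged sketch of a gameplay telemetry logger. The event names, fields and upload endpoint are all hypothetical illustrations of the general technique, not Sea Hero Quest's actual implementation.

```typescript
// Hypothetical telemetry for a spatial-navigation research game.
// Every steering action is timestamped so researchers can later
// reconstruct the player's route and response times.
interface NavigationEvent {
  sessionId: string;
  timestampMs: number;                 // milliseconds since session start
  action: 'turn_left' | 'turn_right' | 'throttle' | 'checkpoint_reached';
  position: { x: number; y: number };  // player's location on the map
  headingDeg: number;                  // current travel direction
}

class TelemetryLogger {
  private buffer: NavigationEvent[] = [];
  private readonly sessionStart = performance.now();

  constructor(private readonly sessionId: string) {}

  record(
    action: NavigationEvent['action'],
    position: { x: number; y: number },
    headingDeg: number
  ): void {
    this.buffer.push({
      sessionId: this.sessionId,
      timestampMs: performance.now() - this.sessionStart,
      action,
      position,
      headingDeg,
    });
  }

  // Send buffered events to a placeholder research endpoint.
  async flush(): Promise<void> {
    const batch = this.buffer.splice(0);
    await fetch('https://example.org/research/upload', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(batch),
    });
  }
}
```

Batching events and uploading them periodically, as `flush` does here, is the usual compromise between data completeness and not interrupting gameplay.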

8 Diagnostics

In addition to the Sea Hero Quest project, Cambridge University and the Alzheimer's Society are working on utilising VR for diagnosing Alzheimer's (a form of dementia) too. They are at an early stage, with just over 300 participants involved in their studies so far (McKie 2018). I believe that all of this research, in parallel with the various companies who are continually working across this sector, can somehow come together to deliver monumental change.


9 Future Considerations/Challenges

Dementia Digital Design Code and Frameworks – there is very little public information defining what considerations to take into account when designing digital products for people living with dementia. Australia is leading the way in sharing this information, but even then the context relates to real-world design. We need Digital Dementia Friendly Standards that should be adhered to. The standards should come from a coalition of experts from design and medical backgrounds and should be reviewed and updated on a regular basis. Controls ranging from design in general (e.g. colours, brightness, spatial awareness/depth design) through to specific functionality should be covered; the sketch after this list shows what such settings might look like in practice.

Avatars – virtual reality can induce illusions of embodiment, and if the avatar cannot be personalised, either manually or systematically, there is a risk of the user feeling agitated or distressed. Further research and testing of avatars is therefore required, bearing in mind that VR avatars are still in their infancy for general use.

Ethics – we need an ethics code of conduct to ensure that VR solutions meet requirements and are not detrimental in any way to any user, especially as we are working with vulnerable people. I would also suggest including after-support/care for users, on the off chance of triggering any memories that might have been purposely forgotten, for example.

Hardware – hardware will evolve, and already has, at a fast pace. Things to consider are the mobility of users, both neck movement and hand grip, and how hardware and accessories adapt as a result. We haven't even touched on haptic technology.

AI, Data and Personalisation – technology innovation is exploding, and the opportunity to mix technologies together is already happening. It is hugely exciting to be able to fill this blank canvas; however, it emphasises the urgency of an ethical code of conduct. Personalisation, and the ability to tailor VR experiences based on the user's needs, will in future be immensely beneficial to users' wellbeing.
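As a purely illustrative sketch of the kind of controls such a standard might cover, here is a hypothetical settings schema for a dementia-friendly VR experience. Every field name and default value is an assumption invented for this example; no published standard of this shape exists yet, which is exactly the gap described above.

```typescript
// Hypothetical dementia-friendly design settings for a VR experience.
// A real standard would define these fields, their ranges and defaults.
interface DementiaFriendlySettings {
  maxBrightness: number;           // 0..1; avoid harsh, startling light levels
  highContrastUi: boolean;         // clearer boundaries aid impaired vision
  locomotion: 'teleport' | 'none'; // smooth motion can cause disorientation
  sceneChangeWarningSec: number;   // announce transitions before they happen
  ambientAudioVolume: number;      // 0..1; sudden loud audio can distress users
  sessionLimitMin: number;         // cap session length to prevent fatigue
}

// Deliberately cautious defaults for a vulnerable user group.
const cautiousDefaults: DementiaFriendlySettings = {
  maxBrightness: 0.7,
  highContrastUi: true,
  locomotion: 'teleport',
  sceneChangeWarningSec: 5,
  ambientAudioVolume: 0.5,
  sessionLimitMin: 15,
};

// An experience would read these settings at start-up and refuse to
// exceed them, much as accessibility settings work on other platforms.
export { cautiousDefaults };
export type { DementiaFriendlySettings };
```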

10 Conclusion

Virtual Reality is being used everywhere within the dementia sector. We haven't even spoken about conferences or events, which I have also attended in VR this year. The reactions of people living with dementia after experiencing VR are extremely uplifting; their smiles, their bright wide eyes and their disbelief are contagious. It is the reason why I got into this sector in the first place! I could have detailed more in this chapter, but I wanted to give you a short flavour of each side of the dementia industry as an introductory whistle-stop tour. You may be in awe at discovering some of these facts and stories, but I would also like to mention that we are still very much at an 'early stage' with this technology, and there is plenty of room for more people to join the cause. Remember, with one person being diagnosed every 3 seconds, we have a big job on our hands! We need more VR content, policies, resources, problem solvers, developers and so much more. We need more people, we need you! At the very least, help us to spread the word about this important use of virtual reality. Thank you.

References

[1] World Health Organisation. What is dementia. Available via http://www.who.int/features/factfiles/dementia/en/. Accessed 14 Jan 2019

[2] World Health Organisation. Dementia fact sheet. Available via https://www.who.int/news-room/fact-sheets/detail/dementia. Accessed 14 Jan 2019

[3] World Health Organisation. The top 10 causes of death. Available via https://www.who.int/en/news-room/fact-sheets/detail/the-top-10-causes-of-death. Accessed 14 Jan 2019

[4] World Health Organisation. 10 facts on dementia. Available via https://www.who.int/features/factfiles/dementia/en/. Accessed 14 Jan 2019

Alzheimer's Research UK. A Walk Through Dementia – walking home. Available via https://www.youtube.com/watch?v=R-Rcbj_qR4g. Accessed 14 Jan 2019

Alzheimer's Society. Dementia research. Available via https://www.alzheimers.org.uk/about-us/policy-and-influencing/what-we-think/dementia-research. Accessed 14 Jan 2019

Briggs J (2017) TechCrunch: VR helps us remember. Available via https://techcrunch.com/2018/06/14/vr-helps-us-remember/. Accessed 14 Jan 2019

Channel 4. Old People's Home for 4 Year Olds. Available via https://www.channel4.com/programmes/old-peoples-home-for-4-year-olds. Accessed 14 Jan 2019

Garcia-Betances RI, Waldmeyer MTA, Fico G, Cabrera-Umpierrez MF (2015) A succinct overview of virtual reality technology use in Alzheimer's disease. Front Aging Neurosci. Available via https://www.frontiersin.org/articles/10.3389/fnagi.2015.00080/full. Accessed 14 Jan 2019

Jerdan SW, Grindle M, van Woerden HC, Boulos MNK (2018) Head-mounted virtual reality and mental health: critical review of current research. JMIR Serious Games 6(3):e14. Available via https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6054705/. Accessed 14 Jan 2019

Kelly R (2018) Study: people remember information better through VR. Available via https://campustechnology.com/articles/2018/06/14/study-people-remember-information-better-through-vr. Accessed 14 Jan 2019

Learner S (2018) Dementia writer Wendy Mitchell warns care homes nostalgic murals are 'confusing and disturbing'. Available via https://www.carehome.co.uk/news/article.cfm/id/1601526/dementia-writer-wendy-mitchell-care-homes-murals/dementia-writer-wendy-mitchell-care-homes-murals. Accessed 14 Jan 2019

Li M, Lyu J-H, Zhang Y, Gao M-L, Li W-J, Ma X (2017) The clinical efficacy of reminiscence therapy in patients with mild-to-moderate Alzheimer disease. Medicine 96(51):e9381. Available via https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5758240/. Accessed 14 Jan 2019

Maples-Keller JL, Bunnell BE, Kim S-J, Rothbaum BO (2017) The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders. Harv Rev Psychiatry 25(3):103–113. Available via https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5421394/. Accessed 14 Jan 2019

McKie R (2018) Virtual reality to help detect early risk of Alzheimer's. Available via https://amp.theguardian.com/society/2018/dec/16/alzheimers-dementia-cure-virtual-reality-navigation-skills. Accessed 14 Jan 2019

One Caring Team. Available via https://onecaringteam.com/. Accessed 14 Jan 2019

Samsung Healthcare Insights (2018) Swaney R: Virtual reality delivers real-world benefits to dementia patients. Available via https://insights.samsung.com/2018/03/13/virtual-reality-delivers-real-world-benefits-to-dementia-patients/. Accessed 14 Jan 2019

Sea Hero Quest. An online and virtual reality experiment that collects data relating to spatial navigation, used to help with dementia research studies. Available via http://www.seaheroquest.com/site/en/why-play-sea-hero. Accessed 14 Jan 2019

The Abbeyfield Society. Four benefits of memory boxes to those with dementia. Available via https://www.abbeyfield.com/2018/01/four-benefits-of-memory-boxes-to-those-with-dementia/. Accessed 14 Jan 2019

The Wayback Team. Coronation Day (1953). Available via https://www.youtube.com/watch?v=TNSx_b8B37I. Accessed 14 Jan 2019

Tussyadiah IP, Wang D, Jung TH, tom Dieck MC (2018) Virtual reality, presence, and attitude change: empirical evidence from tourism. Tour Manag 66:140–154. Available via https://www.sciencedirect.com/science/article/pii/S0261517717302662. Accessed 14 Jan 2019

Vindenes J, de Gortari AO, Wasson B (2018) Mnemosyne: adapting the method of loci to immersive virtual reality. International Conference on Augmented Reality, Virtual Reality and Computer Graphics (AVR 2018), pp 205–213. Available via https://link.springer.com/chapter/10.1007%2F978-3-319-95270-3_16. Accessed 14 Jan 2019

Virtual Reality Therapy News [1]. Virtual reality therapy for depression. Available via http://www.vrtherapynews.com/helping-you-with/virtual-reality-therapy-for-depression/. Accessed 14 Jan 2019

Virtual Reality Therapy News [2]. Virtual reality therapy for post traumatic stress disorder (PTSD). Available via http://www.vrtherapynews.com/helping-you-with/virtual-reality-therapy-for-post-traumatic-stress-disorder-ptsd/. Accessed 14 Jan 2019