Biomedical Visualisation: Volume 7 (Advances in Experimental Medicine and Biology (1262)) [1st ed. 2020] 3030439607, 9783030439606

This edited book explores the use of technology to enable us to visualise the life sciences in a more meaningful and engaging way.


English Pages 254 [247] Year 2020


Table of contents:
Preface
Acknowledgements
About the Book
Contents
About the Editor
Contributors
1: Virtual Anatomy Museum: Facilitating Public Engagement Through an Interactive Application
1.1 Introduction
1.1.1 Medical Museums
1.1.2 Digitisation
1.1.3 Virtual Museums
1.1.4 Aims and Objectives
1.2 Methods
1.2.1 Modelling the Museum
1.2.2 Models of the Specimens
1.2.2.1 Studio Set-Up and Photography Acquisition
1.2.2.2 Generation of Photogrammetric Models
1.2.3 Design of the Application
1.3 Results
1.3.1 Virtual Museums Can Facilitate Public Engagement
1.4 Discussion
1.4.1 Challenges of Creating Specimen Models
1.4.2 Implementation into Unity
1.5 Future Development and Conclusion
References
2: eLearning and Embryology: Designing an Application to Improve 3D Comprehension of Embryological Structures
2.1 Introduction
2.1.1 Current Use of eLearning in Embryology and Histology
2.1.2 Problems and Advances in Digital Histology
2.2 Methods
2.2.1 Design
2.2.2 Development Phase
2.3 Evaluation
2.3.1 Participants
2.3.2 Apparatus
2.3.3 Experimental Procedure
2.3.4 Results
2.4 Discussion and Conclusion
2.5 The Modelling Method and Its Fitness for Purpose of Visualisation
2.6 Future Developments of 3D Reconstruction in Histology and Embryology
References
3: Animated Guide to Represent a Novel Means of Gut-Brain Axis Communication
3.1 Introduction
3.1.1 Rationale
3.1.2 Research Aim
3.2 Literature Review
3.2.1 The Microbiome-Gut-Brain (MGB) Axis
3.2.1.1 Microbiome-Derived Carnitine Mimics (Hulme et al. 2020)
3.2.2 Learning Science with Animations
3.2.2.1 Multimedia Learning
3.2.2.2 Pros and Cons of Animation as a Learning Tool
3.2.2.3 Using Animation Efficiently
3.3 Materials and Methodology
3.3.1 Materials
3.3.1.1 Online Platforms
3.3.2 Design and Methods
3.3.3 Development
3.3.3.1 Segments
3.3.4 Production
3.3.4.1 Models
3.3.4.2 Animation
3.3.5 Post-production
3.3.5.1 After-Effects
3.3.5.2 Survey Design
3.3.6 Product Testing
3.3.6.1 Participants
3.3.6.2 Animation Distribution
3.4 Evaluation
3.4.1 Introduction
3.4.2 Methods
3.4.2.1 Manual Use
3.4.2.2 Participants
3.4.2.3 Apparatus
3.4.2.4 Design and Procedure
3.4.2.5 Results
Knowledge
Perceived Understanding
Helpfulness of Media
3.4.3 Discussion
3.4.3.1 Knowledge
3.4.3.2 Perceived Understanding
3.4.3.3 Helpfulness of Media
3.4.4 Conclusion
3.5 Discussion and Conclusion
3.5.1 Key Findings
3.5.2 Contributions
3.5.3 Limitations
3.5.3.1 Population Validity
3.5.3.2 Construct Validity
3.5.3.3 Content Validity
3.5.3.4 Concurrent Validity
3.5.3.5 Pretesting
3.6 Conclusion
References
4: Engaging with Children Using Augmented Reality on Clothing to Prevent Them from Smoking
4.1 Introduction
4.2 Background Context
4.2.1 The Current Situation Regarding Smoking in the UK and Beyond
4.2.2 Smoking Interventions
4.2.3 Emerging Technologies in Public Awareness and Education
4.2.3.1 Virtual Reality
4.2.3.2 Augmented Reality
4.3 Methods
4.3.1 Workflow
4.3.2 Storyboard
4.3.3 Digital 3D Anatomical Content
4.3.3.1 Segmentation
4.3.3.2 3D Modelling
4.3.3.3 Texturing
4.3.3.4 Animations
4.3.4 2D Content
4.3.4.1 Illustrations and Interface Design
4.3.4.2 Informational Animations
4.3.5 Application Development
4.3.5.1 Pattern Generation and Impression
4.3.5.2 Application Development Outcomes
4.4 Evaluation of the App
4.4.1 Research Questions
4.4.2 Participants
4.4.3 Materials
4.4.4 Procedure
4.4.5 Data Analysis
4.5 Results
4.5.1 Observational Analysis
4.5.2 Results of the Questionnaires
4.5.2.1 Usability
4.5.2.2 Additional Usability Input
4.5.2.3 Educational Aspect
4.5.2.4 Comments
4.6 Discussion
4.6.1 Review of the Research Findings
4.6.2 Review of Design and Development Process
4.6.3 Limitations and Future Developments
4.7 Conclusion
References
5: Enabling More Accessible MS Rehabilitation Training Using Virtual Reality
5.1 Introduction
5.2 Background
5.2.1 Multiple Sclerosis
5.2.1.1 MS and Rehabilitation
5.2.1.2 Motivation and Rehabilitation
5.2.2 Hand Tracking Technology
5.2.2.1 Virtual Reality
5.2.3 Usability
5.3 Methods
5.3.1 Storyboard
5.3.2 Main Menu and Game Menu
5.3.3 Game 1: Piano
5.3.4 Game 2: Recycle
5.3.5 Game 3: Tidy Up
5.3.5.1 Experimental Procedure
5.4 Results
5.4.1 Observational Results
5.4.2 Questionnaire Results
5.4.2.1 Qualitative Data
5.4.3 Quantitative Data
5.4.3.1 SUS Data
5.4.3.2 ASQ Data
5.4.3.3 PQ Data
5.5 Discussion
5.5.1 Limitations
5.5.2 Future Improvements
5.6 Conclusion
References
6: The Use of Augmented Reality to Raise Awareness of the Differences Between Osteoarthritis and Rheumatoid Arthritis
6.1 Introduction
6.2 Theoretical Background
6.2.1 Arthritis
6.2.2 Inflammation
6.2.3 Rheumatoid Arthritis and Osteoarthritis
6.2.4 Current Public Awareness Methods
6.2.5 Conventional Print-Based Approaches
6.2.6 Technologies in Public Awareness
6.2.7 Public Awareness in RA
6.3 Materials and Methods
6.3.1 Materials
6.3.1.1 Data
6.3.1.2 Software (Table 6.2)
6.3.1.3 Hardware
6.3.2 Methods
6.3.2.1 Initial Model Generation
6.3.2.2 Remodelling
6.3.2.3 Composing the Complete Models
6.3.2.4 Texturing
6.3.2.5 Patient and Public Involvement in Design
Feedback on Content
6.3.2.6 Interactive Application
Augmented Reality Setup
Event System Function
Activation and Deactivation of UI Panels
Rotation of 3D Models
Clickable Anatomical Structures
Camera and Lights
Skinning and Rigging
Animating
Importing the Animations into Unity
Creation of Sprites
6.3.2.7 AR Poster Creation
6.4 Results
6.4.1 Final Interactive Application
6.4.1.1 Scene 0: Opening Scene
6.4.1.2 Scene 1: AR Scene
6.4.1.3 Scene 2: Labelled 3D Model Scene
6.4.1.4 Scene 3: Animation Hub Scene
6.4.1.5 Scenes 4–6: Animations
6.5 Evaluation
6.5.1 Research Questions
6.5.2 Methods
6.5.2.1 Materials
6.5.2.2 Experimental Procedure
6.5.2.3 Participants
Technology Literacy
Prior Arthritis Awareness Knowledge
6.5.3 Data Analysis
6.5.3.1 Questionnaire Analysis: Healthy Joint, OA and RA Knowledge
6.5.3.2 Questionnaire Analysis: Usability
6.5.3.3 Observational Analysis
6.6 Discussion and Conclusion
6.6.1 Discussion
6.6.1.1 Design and Development
6.6.1.2 Evaluation
Augmented Reality
3D Labelled Models
Animations
6.6.2 Evaluation
6.6.2.1 Limitation and Future Development
6.7 Conclusion
References
7: Understanding the Brain and Exploring the Effects of Clinical Fatigue: From a Patient’s Perspective
7.1 Introduction
7.2 Background Context
7.2.1 What Is Fatigue?
7.2.1.1 Physical Limitations of Fatigue
7.2.1.2 Social Limitations of Fatigue
7.2.1.3 Psychological Consequences of Fatigue
7.2.1.4 Scales of Measurement
7.2.1.5 Educational Applications
Mobile Technology in Academia
Mobile Technology in a Clinical Setting
7.2.1.6 The Use of Augmented Reality to Enhance Learning and Understanding
7.3 Methods
7.3.1 Workflow
7.4 Brain Model
7.4.1 Segmentation
7.4.2 3D Modelling
7.4.2.1 ZBrush
7.4.2.2 UV Mapping and Texturing
3ds Max
ZBrush
7.4.3 The Animation
7.4.3.1 Animation Audio
7.4.4 The Application
7.4.4.1 The Animation Scene
Patient and Public Involvement in the Design of the Application
7.4.4.2 The AR Scene
3D Object Scan
Material Change
7.5 Evaluation
7.5.1 Methods
7.5.1.1 Participants
7.5.1.2 Materials
7.5.2 Experimental Protocol
7.5.2.1 Pretest
7.5.2.2 Application
7.5.2.3 Posttest
7.5.3 Data Analysis
7.5.4 Results of the Questionnaire
7.5.4.1 Brain Anatomy
7.5.4.2 Fatigue
7.5.4.3 Usability
7.5.4.4 Augmented Reality
7.6 Discussion
7.6.1 Summary of Findings
7.6.2 Review of Design and Development Process
7.6.3 Future Development
7.7 Conclusion
References
8: A Methodology for Visualising Growth and Development of the Human Temporal Bone
8.1 Introduction
8.2 Theoretical Background
8.2.1 Anatomy and Development of the Human Temporal Bone
8.2.2 Digital Technologies in Anatomy Education
8.3 Materials and Methods
8.3.1 Software
8.3.2 Concept
8.3.3 Development Pipeline
8.3.4 Modelling References
8.3.5 Practical Tests
8.3.5.1 Test Models
8.3.5.2 Deformation Setups
Deformation Setup A: Approximated Joint Placement
Deformation Setup B: Per-vertex Joint Placement
Deformation Setup C: Blend Shapes
8.3.5.3 Results of the Practical Tests
8.3.6 3D Modelling
8.3.6.1 Textures
8.3.7 Application Development
8.3.7.1 Interaction
8.3.7.2 Touchscreen Scene
8.3.7.3 Managing Touch Input
8.3.7.4 Augmented Reality Scene
8.3.7.5 AR Marker
8.4 Results
8.4.1 3D Models
8.4.2 The Interactive Application
8.4.2.1 Scene Layout
8.4.2.2 Touchscreen Scene
8.4.2.3 Augmented Reality Scene
8.5 Conclusion
References
9: Collect the Bones, Avoid the Cones: A Game-Based App for Public Engagement
9.1 Background Information
9.1.1 Public Engagement in Anatomical Sciences
9.1.2 Public Awareness of Helmet Safety
9.1.3 Serious Games and Game-Based Learning
9.1.4 Research Gap
9.2 Chapter Scope
9.3 Methods
9.3.1 Application Design
9.3.2 Application Workflow
9.4 Evaluation
9.4.1 Questionnaire Development
9.4.2 Participant Recruitment
9.4.3 Statistical Analysis
9.5 Results
9.5.1 Demographic Data
9.5.2 Knowledge Test and Confidence Scores
9.5.3 User Feedback
9.5.3.1 Usability
9.5.3.2 Presentation
9.5.3.3 Educational Value
9.5.3.4 Enjoyability
9.5.3.5 Overall Use
9.5.3.6 Open Text Feedback
9.6 Discussion
9.6.1 User Feedback
9.6.2 Strengths and Limitations of Research
9.6.3 Future Application Development
9.6.4 Future Research
9.7 Concluding Remarks
References
10: A Serious Game on Skull Anatomy for Dental Undergraduates
10.1 Introduction
10.1.1 Chapter Scope
10.1.2 Background and Project Rationale
10.2 Methodology
10.2.1 Design Concept
10.2.2 Materials and Methods
10.2.2.1 Optimisation of the 3D Skull Model
10.2.2.2 Game Structure
10.2.2.3 3D Modelling
10.2.2.4 Game Development
10.3 Development Outcome
10.3.1 Game Content
10.3.2 Normal Anatomy Scene
10.3.3 Level Layout
10.3.3.1 Office
10.3.3.2 Artist’s Studio
10.3.3.3 Normal Anatomy Laboratory
10.3.3.4 Artist Responses
10.3.3.5 Client Responses
10.3.3.6 Feedback
10.3.3.7 Rewards
10.4 Evaluation
10.4.1 Methods
10.4.2 Discussion of Qualitative Feedback
10.5 Discussion and Future Developments
10.5.1 Discussion
10.5.1.1 Future Development
References

Citation preview

Advances in Experimental Medicine and Biology 1262

Paul M. Rea   Editor

Biomedical Visualisation Volume 7

Advances in Experimental Medicine and Biology Volume 1262

Series Editors
Wim E. Crusio, Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS and University of Bordeaux UMR 5287, Pessac Cedex, France
John D. Lambris, University of Pennsylvania, Philadelphia, PA, USA
Heinfried H. Radeke, Institute of Pharmacology & Toxicology, Clinic of the Goethe University Frankfurt Main, Frankfurt am Main, Hessen, Germany
Nima Rezaei, Research Center for Immunodeficiencies, Children's Medical Center, Tehran University of Medical Sciences, Tehran, Iran

More information about this series at http://www.springer.com/series/5584

Paul M. Rea Editor

Biomedical Visualisation Volume 7

Editor
Paul M. Rea
School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK

ISSN 0065-2598    ISSN 2214-8019 (electronic)
Advances in Experimental Medicine and Biology
ISBN 978-3-030-43960-6    ISBN 978-3-030-43961-3 (eBook)
https://doi.org/10.1007/978-3-030-43961-3

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Technologies in the biomedical and life sciences, medicine, dentistry, surgery and the allied health professions have been adopted at an exponential rate over recent years. The way we view and examine data now is significantly different to what was done perhaps 10 or 20 years ago. With the growth, development and improvement of imaging and data visualisation techniques, the way we are able to interact with data is much more engaging than it has ever been. These technologies have been used to enable improved visualisation in the biomedical fields, but also to change how we engage our future generations of practitioners when they are students within our educational environment. Never before have we had such a wide range of tools and technologies available to engage our end-stage user. Therefore, it is a perfect time to bring this together to showcase and highlight the great investigative work that is going on globally. This book will truly showcase the amazing work that our graduates have produced from the MSc Medical Visualisation and Human Anatomy degree programme. This is run jointly by the School of Life Sciences within the College of Medical, Veterinary and Life Sciences at the University of Glasgow and the School of Simulation and Visualisation, The Glasgow School of Art. By sharing best practice and innovation, we can truly aid our global development in understanding how best to use technology for the benefit of society as a whole.

Glasgow, UK

Paul M. Rea


Acknowledgements

I would like to truly thank every author who has contributed to the seventh volume of Biomedical Visualisation. The lead authors are all now graduates of the MSc Medical Visualisation and Human Anatomy, the postgraduate taught degree run jointly by the School of Life Sciences within the College of Medical, Veterinary and Life Sciences at the University of Glasgow and the School of Simulation and Visualisation, The Glasgow School of Art. Thank you also to our wonderful colleagues locally and nationally who supervised these projects and made this volume possible. By sharing our innovative approaches, we can truly benefit students, faculty, researchers, industry and beyond in our quest for the best uses of technologies and computers in the field of life sciences, medicine, the allied health professions and beyond. In doing so, we can truly improve our global engagement and understanding about best practice in the use of these technologies for everyone. Thank you!

I would also like to extend a personal note of thanks to the team at Springer Nature who have helped make this possible. The team I have been working with have been so incredibly kind and supportive. Without them, this would not have been possible. Thank you kindly!


About the Book

Following on from the success of the first six volumes, Biomedical Visualisation, Volume 7, will demonstrate the numerous options we have in using technology to enhance, support and challenge education. The chapters presented here highlight the wide range of tools, techniques and methodologies we have at our disposal in the digital age. These can be used to image the human body; to educate patients, the public, faculty and students in the many ways cutting-edge technologies can be used to visualise the human body and its processes; to create and integrate platforms for teaching and education; and to visualise biological structures and pathological processes. All chapters in this volume feature collaborative and innovative postgraduate research projects from students, now graduates, of the MSc Medical Visualisation and Human Anatomy: https://www.gla.ac.uk/postgraduate/taught/medicalvisualisation/. This pioneering, world-leading postgraduate taught degree programme has now been running for 9 years. It is a joint partnership degree between the School of Life Sciences within the College of Medical, Veterinary and Life Sciences at the University of Glasgow and the School of Simulation and Visualisation, The Glasgow School of Art. These chapters truly showcase the amazing and diverse technological applications that have been carried out as part of their research projects.


Contents

1  Virtual Anatomy Museum: Facilitating Public Engagement Through an Interactive Application  1
   Zbigniew Jędrzejewski, Brian Loranger, and Jennifer A. Clancy

2  eLearning and Embryology: Designing an Application to Improve 3D Comprehension of Embryological Structures  19
   Keiran Tait, Matthieu Poyade, and Jennifer A. Clancy

3  Animated Guide to Represent a Novel Means of Gut-Brain Axis Communication  39
   Hana Pokojna, Daniel Livingstone, Dónal Wall, and Richard Burchmore

4  Engaging with Children Using Augmented Reality on Clothing to Prevent Them from Smoking  59
   Zuzana Borovanska, Matthieu Poyade, Paul M. Rea, and Ibrahim Daniel Buksh

5  Enabling More Accessible MS Rehabilitation Training Using Virtual Reality  95
   Hannah K. Soomal, Matthieu Poyade, Paul M. Rea, and Lorna Paul

6  The Use of Augmented Reality to Raise Awareness of the Differences Between Osteoarthritis and Rheumatoid Arthritis  115
   Florina Fiador, Matthieu Poyade, and Louise Bennett

7  Understanding the Brain and Exploring the Effects of Clinical Fatigue: From a Patient's Perspective  149
   Jacqueline Zurowski, Matthieu Poyade, and Louise Bennett

8  A Methodology for Visualising Growth and Development of the Human Temporal Bone  183
   Norbert Šulek, Matthieu Poyade, and Eilidh Ferguson

9  Collect the Bones, Avoid the Cones: A Game-Based App for Public Engagement  203
   Yasmin Wong, Paul M. Rea, Brian Loranger, and Ourania Varsou

10  A Serious Game on Skull Anatomy for Dental Undergraduates  217
   Ruaridh Dall, Daisy Abbott, Paul M. Rea, and Ourania Varsou

About the Editor

Paul  M.  Rea is a Professor of Digital and Anatomical Education at the University of Glasgow. He is qualified with a medical degree (MBChB), an MSc (by research) in Craniofacial Anatomy/Surgery, a PhD in Neuroscience, a Diploma in Forensic Medical Science (DipFMS) and an MEd with Merit (Learning and Teaching in Higher Education). He is an Elected Fellow of the Royal Society for the Encouragement of Arts, Manufactures and Commerce (FRSA), Elected Fellow of the Royal Society of Biology (FRSB), Senior Fellow of the Higher Education Academy, Professional Member of the Institute of Medical Illustrators (MIMI) and a Registered Medical Illustrator with the Academy for Healthcare Science. He has published widely and presented at many national and international meetings, including invited talks. He sits on the Executive Editorial Committee for the Journal of Visual Communication in Medicine, is Associate Editor for the European Journal of Anatomy and reviews for 25 different journals/ publishers. He is the Public Engagement and Outreach Lead for anatomy coordinating collaborative projects with the Glasgow Science Centre, NHS and Royal College of Physicians and Surgeons of Glasgow. He is also a STEM Ambassador and has visited numerous schools to undertake outreach work. His research involves a long-standing strategic partnership with the School of Simulation and Visualisation, The Glasgow School of Art. This has led to multimillion pound investments in creating world-leading 3D digital datasets to be used in undergraduate and postgraduate teaching to enhance learning and assessment. This successful collaboration resulted in the creation of the world’s first taught MSc Medical Visualisation and Human Anatomy, combining anatomy and digital technologies. The Institute of Medical Illustrators also accredits it. This degree, now into its 9th year, has graduated over 100 people and created college-wide, industry, multi-institutional and NHS research-linked projects for students. He is the Programme Director for this degree.  


Contributors

Daisy Abbott  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, Scotland, UK
Louise Bennett  Institute of Infection, Immunity and Inflammation, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
Zuzana Borovanska  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK; Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
Ibrahim Daniel Buksh  School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
Richard Burchmore  College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK
Jenny A. Clancy  Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK
Ruaridh Dall  Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK; School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Florina Fiador  School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
Zbigniew Jędrzejewski  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK; Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
Daniel Livingstone  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Brian Loranger  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, Scotland, UK


Lorna Paul  Department of Physiotherapy and Paramedicine, School of Health and Life Sciences, Glasgow Caledonian University, Glasgow, UK
Hana Pokojna  The Glasgow School of Art, The University of Glasgow, Glasgow, UK
Matthieu Poyade  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Paul M. Rea  Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK
Hannah K. Soomal  Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK; School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Norbert Šulek  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Keiran Tait  School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK; Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK
Ourania Varsou  Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK
Dónal Wall  Institute of Infection, Immunity and Inflammation, University of Glasgow, Glasgow, UK
Yasmin Wong  Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK; School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Jacqueline Zurowski  Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK; School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK


1 Virtual Anatomy Museum: Facilitating Public Engagement Through an Interactive Application

Zbigniew Jędrzejewski, Brian Loranger, and Jennifer A. Clancy

Abstract

Digitisation has become a common practice in the preservation of museum collections. Recent development of photogrammetry techniques allows for more accessible acquisition of three-dimensional (3D) models that serve as accurate representations of their originals. One of the potential applications of this is presenting digital collections as virtual museums to engage the public. Medical museums, particularly, would benefit from digitisation of their collections as many of them are closed to the public. The aim of this project was to design and create an interactive virtual museum which would represent the Anatomy Museum at the University of Glasgow with key specimens digitised using photogrammetry techniques.

Members of the general public (25 participants) were asked to evaluate the usability and effectiveness of the interactive application by completing questionnaires. A process to digitise anatomical specimens using photogrammetry and convert them into game-ready 3D models was developed. The results demonstrated successful generation of 3D models of specimens preserved using different techniques, including specimens preserved in fluid and glass jars. User tests and evaluation of the application by members of the general public were positive, with participants agreeing that they would now consider visiting the real museum after using the virtual version.

Keywords

Medical museum · Virtual museum · Photogrammetry · Digitisation · 3D reconstruction

Z. Jędrzejewski
School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK

B. Loranger
School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, Scotland, UK

J. A. Clancy (*)
Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK
e-mail: [email protected]

1.1 Introduction

1.1.1 Medical Museums

Medical and anatomy museums have played an important role in the teaching curricula of universities in past centuries. Collections of preserved anatomical specimens were helpful in understanding gross anatomy, as well as pathologies and the progression of disease (Venkatesh et al. 2013; Marreez and Willems 2016). Presently, with the development of teaching methods and resources such as plastinated specimens, anatomy museums lack their previously well-defined role in medical teaching programmes. In addition, they are expensive facilities, and as a consequence, some medical museums are being abandoned, e.g. the College of Medicine of the University of Toronto (Wakefield 2007; Marreez and Willems 2016). This could lead to the loss of unique and irreplaceable specimens that contribute to our knowledge of anatomy and pathology.

The Anatomy Museum at the University of Glasgow contains historically important and unique specimens. It is part of the oldest museum in Scotland, the Hunterian, dating to 1807. Its vast collection is based on the specimens accumulated by William Hunter and gifted by his will to the University of Glasgow in 1783. Hunter was a remarkable physician and anatomist who made significant contributions to the field of medicine in the eighteenth century (Teacher 1900). His main research and discoveries concerned obstetrics, osteology and lymphatics, and many of the specimens personally dissected by William Hunter are on display in the Anatomy Museum at the University of Glasgow (Sanchez-Jauregui 2018). Overall, the collection consists of more than 3000 anatomical and pathological preservations, approximately 87% of them human (Reilly and McDonald 2018). These unique specimens are irreplaceable; therefore, it is vital that they are preserved. In addition, the Anatomy Museum is open to the public; however, the complicated nature of anatomy and the inability to lift and rotate the specimens can make them difficult to understand. A method of preserving these important specimens and making them accessible to the public is therefore required.

1.1.2 Digitisation

Medical schools have traditionally preserved dissected specimens for long-term use in education. Presently, digitisation of museum collections in general is becoming a common method of preservation (Zhao and Loy 2015; Miyamae 2016; Skamantzari and Georgopoulos 2016). Holding digital records of collections is valuable in case of a disaster or damage caused to the object (Zhao and Loy 2015; Earley and McGregor 2019). It is especially relevant to fragile and unique medical specimens. Specimens in the Anatomy Museum at the University of Glasgow are up to 250 years old, and, although many have retained their structure, they are vulnerable to decay, colour loss and degradation (Venkatesh et al. 2013; Turchini et al. 2018).

The recent development of 3D scanning technology allows acquisition of highly detailed 3D digital representations of real objects. Photogrammetry is the most accessible technique of digitisation. In comparison to other methods such as triangulation laser scanning, photometric stereo or structured light scanning, photogrammetry does not require special and expensive equipment (Graham et al. 2017; Turchini et al. 2018). Photogrammetry requires only a digital camera and possibly studio lighting, making it an accessible and relatively inexpensive technique (Santagati et al. 2017). Moreover, several comparisons have shown that, although 3D models acquired with a laser scanner can be of higher quality, the results of photogrammetry are comparable, and sometimes the difference in quality is negligible (Fau et al. 2016; Graham et al. 2017; Santagati et al. 2017). The small difference in quality does not outweigh the substantial difference in cost between laser scanning and photogrammetry. In cases where the details of an object can be represented with a colour texture map rather than the actual model, photogrammetry is a good solution.

The results of photogrammetry are highly dependent on the correct acquisition of photographs and the lighting conditions. Ideally, the object should be evenly illuminated with soft diffuse light, and shadows should be reduced to a minimum. Occluded areas of the object will result in inferior detail of the model or holes in the 3D mesh. Highlights will affect the colour texture and might also mislead the software in the process of building the model (Jocks 2014; Benoit 2016; Fau et al. 2016; Santagati et al. 2017; Petriceks et al. 2018). The main challenges in the digitisation of medical specimens concern those preserved in fluid. Distortion of the object caused by the curvature of the container and the liquid inside of it may cause significant problems for photogrammetric software in aligning the photographs correctly (Jocks 2014; Rea et al. 2017; Turchini et al. 2018). Additionally, the fluid may be discoloured, which could occlude the specimen, change its colour or create unwanted artefacts on the model. Furthermore, correct illumination of such specimens is challenging without producing highlights on the jar (Turchini et al. 2018). As a result, a more tailored process to mitigate these issues is required for generating 3D models of specimens preserved in fluid.

1.1.3 Virtual Museums

Creating 3D digital models of museum objects provides a wide range of possibilities for their implementation and the dissemination of their educational value (Miyamae 2016; Earley and McGregor 2019). For example, a virtual museum could be an interactive form of presenting digital collections online. The goal of virtual museums is the same as any museum: hosting exhibitions that educate and engage (Lepouras et al. 2004; Kontogianni and Georgopoulos 2015). Additionally, virtual museums are inclusive to anyone at any time and allow interactions without the risk of damaging the objects (Zhao and Loy 2015; Skamantzari and Georgopoulos 2016; Earley and McGregor 2019). Users can explore the museum freely and investigate 3D digitised models of the collection in ways that are not possible in a physical museum. Interactive virtual elements may also enrich physical collections, releasing the specimens from their static position and limited possibilities (Skamantzari and Georgopoulos 2016; Seebach 2018). This is particularly relevant to anatomy museums, where many specimens are preserved in jars and stored in glass cabinets that limit the angle of possible inspection, thus not providing immediate value to the general public (Venkatesh et al. 2013).

1.1.4 Aims and Objectives

This project aimed to create an interactive mobile application that would provide a virtual tour through the University of Glasgow's Anatomy Museum and include digitised 3D models of some of the most interesting and important specimens. This included investigating the feasibility of creating 3D models of different types of anatomy specimens, particularly specimens preserved in fluid. The usability, educational value and effect on public engagement were evaluated by asking members of the general public to test the application and complete a questionnaire.

1.2 Methods

1.2.1 Modelling the Museum

In order to recreate the experience of exploring the Anatomy Museum, the interior of the room was reconstructed using digital 3D tools. Photogrammetry was not used at this stage, as it would produce noisy models that would require excessive remodelling time. The primary consideration in creating the model was the performance of the scene in the final application. The Virtual Anatomy Museum mobile application required the models to be efficient in order to reduce any dropped frames while using the application. To achieve this level of efficiency, the models needed to consist of as few polygons as possible.

The model of the museum was created based on reference photographs. In total, 98 photographs were taken showing the overall structure of the interior as well as details such as railings, cabinets or capitals of columns. Some of the photographs focused on surface details (e.g. wood) and would later serve as a direct source for texture creation. In addition to photographs, gross measurements of the interior were taken and drawn in the form of a simple plan. Based on the measurements, an initial model was built using 3ds Max (Table 1.1). The model consisted of cubes and planes that indicated the positions of the main elements of the interior, setting up the proportions and alignment.


Table 1.1  A summary of the software used and their purpose in creating the Virtual Anatomy Museum application

Adobe Photoshop 19.1.5 64 bit (Adobe Systems Inc.): General 2D image editor. Masking photographs of specimens; creating textures for the Anatomy Museum model; creating UI elements for use in Unity
3ds Max 2018 (Autodesk Inc.): 3D tool used in modelling and UV unwrapping the game environment; correction of specimen models
Metashape Standard 1.5.0 64 bit (Agisoft): Photogrammetry software used for generating 3D models of the photographed specimens; generating diffuse maps for the specimen models
ZBrush 4R8 (Pixologic): Remeshing and retopologising photogrammetry models; UV unwrapping specimen models
Substance Painter 2019.1.2 (Allegorithmic): Baking normal maps and creating textures of specimen models
Unity3D 2018.3.12.f1 Personal (Unity Technologies): Game engine used for implementation of 3D models into a real-time rendered interactive application
Visual Studio 2017 15.8.6 (Microsoft): External editor of C# code, used together with Unity3D

The initial model was imported to Unity in the early stages. This created a simple scene that allowed the set-up of cameras to simulate the final outcome, indicating points of focus and which parts of the museum should be treated with more detail. The cubes and planes of the initial museum model were replaced by models of corresponding elements of the interior. The objects were simplified with details reduced to a minimum, yet modelled with attention to their proportions (Fig. 1.1). To reduce the polygon count, all invisible faces

were deleted. The interior of the museum includes multiple repeatable elements, e.g. columns; thus, only parts of it were modelled, UV unwrapped and later duplicated. The final model consisted of 12,000 polygons (Fig. 1.2).

Most of the textures applied to the model were created directly from the reference photographs using Adobe Photoshop (Table 1.1). Parts of photographs depicting the desired surface textures were cut, multiplied and transformed to cover the 2048 × 2048 pixel canvas.


Fig. 1.1  The elements of the museum were built in 3ds Max according to reference photos. The models were kept simple, but with attention to their proportions

The seams were covered in order to achieve continuity and to allow for tiling (Fig. 1.3). With efficiency remaining a priority, complex structures (e.g. carved wooden details or the balustrade that consists of multiple posts) were created using opacity maps applied to single-face objects. Some of the models were enhanced with normal maps, creating the illusion of convexity or concavity of their surfaces without increasing the polygon count.

The finished models of the repeated elements were imported into Unity and converted into prefabs in order to maximise performance. Duplicated prefabs act as exact reflections of the originals, with all the applied changes passing to all the copies. Two lighting set-ups were then created in Unity: a night-time warm artificial lighting used in the intro scene and a daytime lighting for the main scene.
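To make the prefab-and-lighting workflow concrete, the following C# sketch shows how a repeated interior element could be instantiated from a prefab and how the two lighting set-ups might be switched at runtime. This is an illustrative sketch only, not code from the published application; the prefab reference, counts, spacing and light colours are hypothetical placeholders.

```csharp
using UnityEngine;

// Illustrative sketch: instantiates a repeated museum element (e.g. a column) from a prefab
// and toggles between the two lighting set-ups described in the text.
// Field names and values are hypothetical, not taken from the project files.
public class MuseumSceneSetup : MonoBehaviour
{
    [SerializeField] private GameObject columnPrefab;  // hypothetical prefab of a modelled column
    [SerializeField] private Light mainLight;          // single directional light driving the scene
    [SerializeField] private int columnCount = 8;      // assumed number of repeated columns
    [SerializeField] private float spacing = 3.0f;     // assumed spacing in metres

    private void Start()
    {
        // Duplicate the prefab along one wall; edits to the prefab asset propagate to every copy.
        for (int i = 0; i < columnCount; i++)
        {
            Vector3 position = transform.position + new Vector3(i * spacing, 0f, 0f);
            Instantiate(columnPrefab, position, Quaternion.identity, transform);
        }
        SetDaytimeLighting();
    }

    // Daytime lighting used in the main museum scene (cool, bright).
    public void SetDaytimeLighting()
    {
        mainLight.color = new Color(1.0f, 0.96f, 0.9f);
        mainLight.intensity = 1.0f;
    }

    // Warm artificial lighting used in the night-time intro scene.
    public void SetNightLighting()
    {
        mainLight.color = new Color(1.0f, 0.8f, 0.6f);
        mainLight.intensity = 0.6f;
    }
}
```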


Fig. 1.2  The final model of the museum created in 3ds Max, with the roof hidden. The model was divided into repeatable parts that were exported to Unity

1.2.2 Models of the Specimens

1.2.2.1 Studio Set-Up and Photography Acquisition

Nine anatomical specimens from the Anatomy Museum at the University of Glasgow were selected for digitisation using photogrammetry (Table 1.2). Specimens were selected to represent the range of the collection held in the museum with a focus on historically important specimens such as those that were part of William Hunter's research in the field of lymphatics and the human gravid uterus. Moreover, specimens were chosen that represented the range of different preservation techniques used in the collection in order to develop a process of digitisation for dry specimens, specimens preserved in fluid and dry specimens held in glass containers. Objects held in glass containers or in fluid have previously been described as challenging to digitise (Rea et al. 2017; Turchini et al. 2018).

Photographs were taken of the chosen specimens in the Anatomy Museum. The photographic studio set-up consisted of two LED diffuse lights placed on both sides and slightly in front of the specimen. The lights were set to full power, and their distance from the objects set to reduce any highlights or shadows. Surfaces were covered with white paper to create a uniform background. The specimen itself was placed on a turntable covered in white paper in order to bounce the light upwards. The background together with the turntable was placed on a desk with a height of 80 cm to allow photographs to be acquired from high as well as from low angles. Additionally, the turntable was marked with high-contrast symbols to help the photogrammetric software (Metashape, Table  1.1) align the photographs. The total time of setting up the studio did not exceed 15 min. A Canon 5D mark III camera equipped with a 25–105 mm 1.4 zoom lens was used. The camera


Fig. 1.3  Process of texture creation out of photographs of the museum. Parts of photo with consistent texture are selected (1). The parts were copied, transformed and combined (2). Visible seams between parts were hidden with Clone Stamp tool (3). Border seams were hidden using Offset filter and Clone Stamp tool (4)

was placed on a tripod in front of the studio. The aperture was set to f22, producing a sharp image all over the specimen, with deep depth of field. This resulted in a long exposure time of around 2 s; thus, to reduce any unwanted movement, the shutter was operated by a remote control. For the purpose of reducing lens distortion, the focal length was set between 60  mm and 84  mm depending on the specimen. The ISO was set to 1600 and the focus was set manually. The pixel size of the photographs was 5760  ×  3840 and they were saved as RAW and JPEG files.

Each specimen was placed on the turntable and, after acquiring the desired focus and exposure of the image, a set of photographs was taken by rotating the turntable by approximately 10 degrees between each shot. After reaching a full rotation of the specimen, the camera was set at a different height and angle to capture as much of the surface of the object as possible. Some of the specimens (e.g. the woolly mammoth tooth) were turned upside down to acquire photographs of their whole structure. Approximately two to three sets of photographs of each specimen were taken


Table 1.2  Summary of the 9 specimens chosen for digitisation (name, type of preservation and reason for selection)

Juvenile skull (dry bones): Skull representing collection of human specimens at different stages of development
Distortion of the thorax (dry bones): From Hunter's collection of bone pathologies. Complex structure with many gaps and occlusions. Chosen to test possibilities of model generation
Woolly mammoth tooth (dry specimen): Potentially interesting object for general public. One of specimens showing Hunter's fascination with teeth and fossils
Cast of the skull of Robert the Bruce (dry specimen): A specimen presenting a slightly different aspect of the Anatomy Museum and of particular interest to members of the public interested in history
Plastinated hand 01 (dry, plastinated specimen): An example of modern anatomical preservation techniques
Plastinated hand 02 (dry, plastinated specimen): An example of modern anatomical preservation techniques
Septic osteomyelitis (dry bones in a glass jar): One of Hunter's bone specimens. Chosen as a dry, jarred specimen to test photogrammetry
Vascularity of the gravid uterus (specimen preserved in fluid inside a glass jar): One of the specimens from Hunter's research on the human gravid uterus. Preserved in fluid
Lymphatics of the intestine and mesentery. Turtle (specimen preserved in fluid inside a glass jar): Specimen presenting William Hunter's research on lymphatics and his use of mercury injections technique. Preserved in fluid

Specimens were selected based on their preservation type and potential interest to the general public

this way. The total time of photography for each specimen varied between 10 min and 15 min.
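As a rough check on these figures, the capture plan reduces to simple arithmetic: a 10-degree step gives 36 shots per full rotation, and two to three camera heights give roughly 72 to 108 photographs per specimen, which is consistent with the photograph counts reported later in Table 1.3. The C# sketch below merely restates that arithmetic and is not part of the application; the numbers are the ones quoted in the text.

```csharp
using System;

// Back-of-the-envelope capture plan for turntable photogrammetry,
// using the figures quoted in the text (10-degree steps, 2-3 rings, ~2 s exposures).
public static class CapturePlan
{
    public static void Main()
    {
        const int stepDegrees = 10;                  // turntable rotation between shots
        const int shotsPerRing = 360 / stepDegrees;  // 36 shots per full rotation
        const double secondsPerShot = 2.0;           // approximate exposure time per photograph

        for (int rings = 2; rings <= 3; rings++)
        {
            int totalShots = shotsPerRing * rings;
            double minutes = totalShots * secondsPerShot / 60.0;
            Console.WriteLine($"{rings} rings: {totalShots} photographs, at least {minutes:F1} min of exposures");
        }
        // 2 rings -> 72 photographs, 3 rings -> 108 photographs,
        // before adding time for repositioning the camera and the specimen.
    }
}
```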

1.2.2.2 Generation of Photogrammetric Models

Metashape (Table 1.1) was chosen to generate 3D models from the photographs. The software provides relative control over the process of generating the model and at the same time is reliable and produces good results. Generation of 3D models in Metashape follows a rigid workflow consisting of the following steps: photograph alignment, building the dense cloud, building the mesh and building the texture.


Fig. 1.4  Dense cloud generated from unmasked photographs often included parts of the turntable or background. All unwanted points were cleaned before generating the mesh

Fig. 1.5  Process of editing 3D models and their textures, starting from generation of the initial model in Metashape and finishing with implementation of the low-polygon model in Unity

The correct alignment of photographs is the crucial step, after which the user's contribution to the process is limited almost entirely to choosing quality settings. The quality settings divide the photographs' dimensions in half with each downgrade step. After alignment of the photographs, Metashape compares common points on the photographs and builds a dense cloud of points set in 3D space that creates the shape of the object. Very often, the software calculates parts of the background or the preservation fluid as a connected element of the object. All unwanted points were therefore deleted before generating the mesh to avoid artefacts (Fig. 1.4).

The generated mesh depicting the specimen is a high-quality model consisting of millions of polygons, yet at this stage, it usually contains holes, imperfections or noisy surfaces. In order to implement the model in Unity, the mesh required thorough editing and decimation.


Fig. 1.6  Some parts of the models were missing or were not fully generated. The holes and missing elements were remodelled using simple polygonal structures in 3ds Max

The process of preparing the specimen models to real-time rendering standards was based on the alternating use of 3ds Max, ZBrush, Substance Painter and Metashape (Table 1.1 and Fig. 1.5). The outcome of the process was a low-polygon mesh with baked normal maps (containing information on the details of the surface) and edited diffuse (colour) maps. After generation in Metashape, the mesh was exported to 3ds Max where it was initially corrected. First, all the floating parts of the mesh not adjacent to the main model were selected and deleted. Then, any holes in the mesh were replaced by creating simple cubic forms in their place and connecting them to the model. Missing structures, such as the zygomatic arches on the model of the juvenile skull, were reconstructed in a similar way (Fig. 1.6). With these changes applied, the model was exported to ZBrush, where it underwent a further correction and decimation process.

In ZBrush, the mesh was converted to a DynaMesh. This option of automatic retopology fuses all the objects contributing to the model and clears many imperfections of the mesh such as rats' nests or small holes. Any faults left were corrected by hand and the DynaMesh refreshed so that the structure of the model was recalculated. Some of the models generated by Metashape (in particular bones, because of their smooth, uniform and shiny surface) contained noisy structures and needed to be corrected using the Smooth brush in ZBrush. Metashape does contain built-in tools that smooth the mesh automatically, although the smoothing is applied uniformly over the whole model and may result in loss of detail. Furthermore, specimens, especially those held in liquid, were generated with missing parts that required re-sculpting using the photographs for reference. Corrections performed by hand may take more time, but this technique provides control and results in higher-quality models.

Corrections applied in ZBrush resulted in high-quality models, consisting of millions of polygons each. In order to implement the models in the real-time rendering engine of Unity3D, the models required general decimation, to a level of around 10,000 polygons. ZBrush is recommended for decreasing the polygon count. The ZRemesher tool allows the user to specify the desired polygon count, and the decimation is performed automatically. The process was applied to a copy of the high-polygon mesh. The decimated mesh was then subdivided into four layers of density, and then the high-polygon model was
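One simple way to sanity-check the decimated meshes after import is to read their triangle counts inside Unity. The script below is a hypothetical helper written for this chapter, not part of the published application; the 10,000-polygon budget is the figure quoted above.

```csharp
using UnityEngine;

// Hypothetical helper: reports the triangle count of every imported specimen mesh
// under this object and warns when a mesh exceeds the real-time budget of ~10,000 polygons.
public class PolygonBudgetCheck : MonoBehaviour
{
    [SerializeField] private int budget = 10000;

    private void Start()
    {
        foreach (MeshFilter filter in GetComponentsInChildren<MeshFilter>())
        {
            int triangles = filter.sharedMesh.triangles.Length / 3;
            if (triangles > budget)
            {
                Debug.LogWarning($"{filter.name}: {triangles} triangles exceeds the budget of {budget}");
            }
            else
            {
                Debug.Log($"{filter.name}: {triangles} triangles");
            }
        }
    }
}
```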


projected onto it. Next, the low-polygon model was divided into polygroups to indicate the islands of the UV layout. Ideally, the islands would unfold flat in a way that would not stretch the polygons. After this stage, the UV unwrapped low-polygon model and the high-polygon model were exported in order to create textures. In order to apply the diffuse textures to the newly unwrapped clean UV layout, the low-­ polygon model was reimported to Metashape in the place of the initially generated mesh. The software creates the diffuse map by projecting the acquired photographs onto the surface of the model. After that, the two models together with the diffuse map were imported into Substance Painter, a software designed to create textures directly on the model (Table 1.1). Generated diffuse maps needed slight correction at this point as Metashape tended to projected small areas of background onto the model, e.g. between the fingers of the plastinated hand. All the spots of background were covered using the Clone tool in Substance Painter. The colours of the maps were also changed in order to resemble the original look, taking into consideration how fluid and glass occlude the specimens and change their appearance. Substance Painter (Table 1.1) was used to create any additional texture maps. Normal maps were generated for each specimen by projecting the high-polygon mesh on a low-polygon s­ urface. Each model was enhanced by a black and white roughness map that was created based on the already existing diffuse texture. Any metalness or opacity maps included in the models were created by painting masks directly on the models.

1.2.3 Design of the Application The model of the museum, together with the models of the specimens, was implemented in Unity3D in the form of an interactive, mobile, virtual museum application. The design of the user interface was directly inspired by William Hunter’s original publication of The Anatomy of the Human Gravid Uterus. Blank pages from the scanned copy of this publication served as a

11

background for each panel inside the application. The simple colour palette was chosen based on the details of the book: cold and warm beige of the pages, graphite mimicking the ink tone and pale gold inspired by the enrichments of the cover. The font used in the titles and menus was found using a font searcher engine (https://www. fontsquirrel.com/matcherator) that compares input images with text and suggests matching fonts. The font used in the application was based on that used on the title page of the original publication of The Anatomy of the Human Gravid Uterus.

1.3

Results

Of the nine specimens selected for digitisation and photographed, seven were successfully modelled (Table  1.3). This included two specimens that were preserved in fluid (lymphatics of the intestine and vascularity of the gravid uterus). The two specimens that did not result in 3D models were the specimen of bone held in glass container and a plastinated hand specimen (Table  1.3). Metashape was unable to correctly align the photographs of these specimens. The plastinated hand specimen was placed on its palmar surface in order to photograph the dorsal aspect and vice versa. During this process, the hand was laying flat, and Metashape was unable to combine these two sets of photographs, with insufficient contact points between them. In the case of the bone specimen, one of the factors that might have led to incorrect alignment of photographs was white highlights on the glass container, produced by lights, which occluded the object. The seven successful 3D models were included in the final application. The application consists of three types of scene: introduction, museum and individual specimen scenes. The introduction scene contains a short description of the Anatomy Museum together with brief instructions of controls. The descriptions are followed by an animation of a camera going through the museum, presenting the interior. The museum scene allows users to freely explore the museum, by rotating the cam-

Z. Jędrzejewski et al.

12

Table 1.3  Specimens with the number of photographs used to generate the models and the polycount of initial and final models Number of acquired photos/aligned photos 48/48

Number of polygons after Number of polygons generation in Unity3D 2,708,091 polygons 3434 polygons

Distortion of the thorax

51/52

1,760,299 polygons

9963 polygons

Woolly mammoth tooth

77/79

384,794 polygons

8148 polygons

Cast of skull of Robert the Bruce

98/98

1.217,173 polygons

6833 polygons

Plastinated hand 01

Failed to create the 3D model (the dorsal and ventral surfaces could not be merged)

Plastinated hand 02

35/66 (palmar surface) 31/66 (dorsal surface) 71/71

Septic osteomyelitis

13/33

Failed to create the 3D model (only one side of the object could be generated)

Vascularity of the gravid uterus

34/73

451,144 polygons

8247 polygons

Lymphatics of intestine and mesentery. Turtle

39/39

920,672 polygons

7193 polygons

Name Juvenile skull

Image

570,577 polygons

8326 polygons

Two specimens did not result in 3D models due to technical difficulties

era and jumping between points of interest placed next to the modelled specimens which are displayed on the shelves (Fig.  1.7). Alternatively, users can select specimens from the minimap of the museum located in the upper left panel of the museum scene. Each specimen is complemented

by a floating panel with its name and button taking the user to the corresponding specimen scene. Each specimen scene is based on the same template that presents the 3D model of the anatomical preparation and a panel with images and text describing relevant anatomical or historical



Fig. 1.7  Main museum scene. The panels indicate the position of specimens that were modelled. The user can move the camera using the minimap (upper left corner) or by pressing the floor by the specimen

concepts (Fig. 1.8). The models of the specimens can be rotated and scaled with intuitive touch controls. The models are complemented by floating labels indicating some of their features. The labels serve as buttons changing the panel beside the model, providing more information about the concept or structure. The text about each specimen was prepared in accessible, short paragraphs and split over several different panels each representing a different concept. This was to prevent users from being overwhelmed by the amount of text. The text is supplemented by images that can be enlarged by clicking them. Some of the images depict original engravings from The Anatomy of the Human Gravid Uterus by Hunter, and the others were selected to continue this visual style.

1.3.1 V  irtual Museums Can Facilitate Public Engagement The application was tested by the target group of the general public. Testing took place in the Hunterian Museum at the University of Glasgow over 2  days and recruited 25 participants. After using the application, participants were asked to complete a survey consisting of 11 statements with which they expressed their agreement. The responses were based on a Likert scale from 1 to 5 (1 for strongly disagree and 5 for strongly agree). The statements were divided into three groups based on their focus: usability, educational value and engagement. Participants were also given the opportunity to write feedback comments.

14

Z. Jędrzejewski et al.

Fig. 1.8  One of the specimen scenes presenting the plastinated hand. Tendons are highlighted after pressing the button. On the right is the description of the structure complemented by an image

The application received an overall positive response. All the participants agreed or strongly agreed with the statement that they "found the application engaging". All the participants also agreed or strongly agreed with the statement that they "would use the application again". In terms of educational value, 24 of the 25 participants (96%) agreed or strongly agreed that the 3D models allowed them to "investigate the specimens more thoroughly than a physical museum", with only 1 participant giving a neutral response. When asked about the usability of the application, most of the participants (16/25; 64%) agreed or strongly agreed that "the controls were easy to use", with only 3 participants (12%) disagreeing. This was supported by most of the participants (19/25; 76%) disagreeing with the statement that they "felt lost or frustrated while using the application". However, 2 participants (8%) did agree that they felt lost or frustrated while using the application.

In addition to these results, 11 participants provided free text comments that supported the findings that the application was engaging and user-friendly, although the controls could be improved. For example, participants stated:

"The app is great! I would visit the museum after using it! One thing that needs some improvement is the 3D model of the inside of the museum, but apart from that it's fantastic!"

"Clear and easy to use. Exhibits are shown in great detail, more so than I could get just at looking at the physical exhibit."

"The controls for moving around the museum were a little awkward or confusing but I did get used to them after a little while."

Finally, 24 of 25 participants (96%) agreed or strongly agreed with the statement “I would consider visiting the real Anatomy Museum after using the app”. This indicates that a virtual museum has the potential to engage the public


not only as a mobile application, but also by encouraging visits to the physical museum.
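For readers who want to reproduce this style of summary, the short sketch below shows how agreement percentages of the kind quoted above can be tallied from raw Likert responses. It is a generic illustration in Python with made-up response lists, not the authors' analysis script.

# Illustrative only: tally 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) into agree/neutral/disagree percentages like those above.
# The response list here is hypothetical, not the study data.

def summarise(statement, responses):
    n = len(responses)
    agree = sum(1 for r in responses if r >= 4)     # agree or strongly agree
    neutral = sum(1 for r in responses if r == 3)
    disagree = sum(1 for r in responses if r <= 2)  # disagree or strongly disagree
    print(f"{statement}: {agree}/{n} agreed ({100 * agree / n:.0f}%), "
          f"{neutral} neutral, {disagree} disagreed")

# Hypothetical data for 25 participants
summarise("The controls were easy to use",
          [5, 4, 4, 5, 3, 4, 2, 4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 3, 4, 2, 3, 4, 5, 3])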

1.4  Discussion

Digitised museum specimens present a unique opportunity for members of the public to interact with them. Despite the optimisation made for tablet devices, the models present a high-quality visual representation of their originals, which users can investigate more freely and from more angles than would be possible in a physical museum. The virtual museum also provides opportunities for extended interactions and education by incorporating additional relevant information, images, animations or links. For example, specimens in the Virtual Anatomy Museum application are enhanced with exchangeable textures that highlight some structures or visualise anatomical processes such as blood circulation between the foetus and the placenta. The accessibility and inclusivity of virtual versions compared to physical museums are advantages that may increase their importance in the museum world.

However, in terms of the experience of the visit, virtual museums are simplified versions of the originals. There are many technological limits that do not allow for a fully realistic visualisation of museum sites. The Virtual Anatomy Museum was designed as a mobile tablet application that could be used globally as well as inside the museum. The performance of tablet devices required significant optimisation of the application, including simplification of the models. For example, the interior of the museum in the Virtual Anatomy Museum application is a simplified preview of the original. Users may familiarise themselves with the environment through a virtual museum, and this may serve as an invitation to visit the physical museum, as indicated by the evaluation feedback.

A range of specimens, including those preserved in fluid, were successfully digitised to produce high-quality 3D models utilised in the application. However, creating 3D models of the specimens was time-consuming and may not be


possible for all objects in museum collections. Indeed, models of two specimens (plastinated hand 01 and septic osteomyelitis) were not generated due to difficulties in aligning the photographs (Table 1.3). This means that the application contains models of only seven specimens compared to the thousands held in the Anatomy Museum collection. A complete digitisation of the collection would take years; therefore, the utilisation of virtual museum applications may be limited to previewing a subsection of the main collection.

1.4.1 Challenges of Creating Specimen Models

The different techniques used to preserve the specimens presented several challenges to overcome during the digitisation process. A major challenge that has been suggested in the literature is the digitisation of specimens preserved in fluid (Jocks 2014; Rea et al. 2017). The distortion produced by the liquid and the glass container is a significant difficulty that could potentially mislead Metashape in aligning the photographs. The best alignment results were produced by photographs taken at a 0-degree angle to the wall of the specimen jar. A 0-degree angle reduced the distortion of the object to a minimum, allowing Metashape to align all the photographs, whereas only some of the photographs taken from high or low angles could be aligned. The fragile nature of the specimens held in liquid and their containers can make it impossible to acquire images covering the top and the bottom of the object, and the generated models often lacked some of the surfaces and textures in these areas. The missing structures can be reconstructed and adjusted in ZBrush and Substance Painter. The successful creation of models of two specimens preserved in liquid (vascularity of the gravid uterus and lymphatics of the intestine) demonstrated that such objects can be digitised with similar quality to dry specimens, provided that there is good acquisition of photographs and the use of post-processing.
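The alignment step described above was performed interactively in Metashape, but the same workflow can also be scripted through the Metashape Professional Python API. The minimal sketch below illustrates that kind of batch script; it is not the authors' pipeline, the file paths are hypothetical, and keyword arguments for the matching step differ between Metashape versions.

# Illustrative sketch of batch photo alignment with the Metashape Pro Python API.
# Not the authors' script; check the API reference for your Metashape release.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Load all photographs of one specimen (path is hypothetical).
chunk.addPhotos(sorted(glob.glob("specimen_photos/*.jpg")))

# Feature matching followed by camera alignment -- the step that failed for
# some fluid-preserved and glass-cased specimens described in the text.
chunk.matchPhotos()
chunk.alignCameras()

# Unaligned cameras have no transform, which is a quick way to flag problem
# specimens before investing time in mesh and texture generation.
aligned = sum(1 for cam in chunk.cameras if cam.transform is not None)
print(f"Aligned {aligned} of {len(chunk.cameras)} photographs")

doc.save("specimen_alignment.psx")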


Textures created in Substance Painter were crucial in producing high-quality models. Normal maps were generated by projecting the detail and curvature information of the high-polygon model onto a low-polygon mesh. A normal map produces the illusion of detail and allows high-quality models to be achieved without increasing the polygon count. Roughness maps also increased the realism of the models. These black and white maps contain information about how much light reflects off the model, producing the appearance of highly reflective or matte surfaces (Fig. 1.9). The roughness maps were mainly created based on the diffuse maps, to differentiate reflections on subtle surface details. Roughness maps are not a common feature of models made using the photogrammetry technique. Photogrammetric models typically use only diffuse, or diffuse and normal, maps, yet the increased realism gained by using roughness maps is substantial. Moreover, some of the models were supplemented by metalness maps indicating metal parts on the object and rendering them in Unity as highly reflective, e.g. the mercury injections in the lymphatics of the intestine specimen.

Contrary to expectations, some of the dry specimens presented a greater challenge to model in 3D than the specimens preserved in fluid. The failure to align photographs of a dry bone specimen (septic osteomyelitis) held in a glass container indicates that it is not the fluid that is most challenging but the glass. Reflections on surfaces such as glass can be reduced by using a polarising filter. Furthermore, dry bones with a uniform texture generated noisy surfaces that required post-processing. One of the most challenging specimens to model was the distortion of the thorax specimen, due to the gaps in the structure between the ribs. Creating a mesh with holes between the faces of each rib would produce insufficient quality after decimation of the model or would require a significant increase in the polycount. To avoid these problems, opacity maps were applied to a low-polygon cylinder-shaped model in Substance Painter. A black and white mask was then painted with a brush directly onto the model, making the intercostal spaces completely invisible (Fig. 1.10). Opacity maps are a good solution for cases where the model requires complex structures of holes or borders that are difficult to represent with a low-polygon mesh.
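The roughness maps were authored by hand in Substance Painter, but the idea of deriving a crude first-pass roughness map from the diffuse texture can be illustrated programmatically. The sketch below (Python with Pillow and NumPy, hypothetical file names) converts the diffuse map to greyscale, stretches its contrast and inverts it, following the convention used in the chapter (black = reflective, white = matte); the result is only a starting point that would then be refined manually.

# Illustrative only: derive a rough starting roughness map from a diffuse
# texture. The real maps were painted and refined in Substance Painter;
# the file names here are hypothetical.
import numpy as np
from PIL import Image

diffuse = Image.open("gravid_uterus_diffuse.png").convert("L")  # greyscale
values = np.asarray(diffuse, dtype=np.float32)

# Stretch contrast so subtle surface detail in the diffuse map produces
# visible variation in the roughness channel.
lo, hi = np.percentile(values, (2, 98))
stretched = np.clip((values - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

# Black = strongly reflective, white = matte (the convention in Fig. 1.9).
# Inverting makes brighter diffuse areas start out more reflective.
roughness = 1.0 - stretched

Image.fromarray((roughness * 255).astype(np.uint8)).save("gravid_uterus_roughness.png")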

Fig. 1.9  Model in Substance Painter showing the roughness channel. Black areas reflect the light strongly, while white areas are matte and less reflective

1.4.2 Implementation into Unity
The implementation of the models into Unity was successful, with the application running without any issues or dropped frames, although testing the application on older Samsung tablets (Galaxy Tab S3) required optimisation of the lighting inside the museum scene. Nevertheless, the maximum possible polygon count of the scene has not been reached, and the museum model may serve as a base for implementing more detailed models. This also applies to the specimen models, each of which has a separate scene in which the user can manipulate and investigate the specimen. A single model in the scene allows the implementation of highly detailed models and high-resolution textures.

Fig. 1.10  Opacity map painted in Substance Painter and the effect of the intercostal spaces being invisible

1.5  Future Development and Conclusion

The process developed in this project for the digitisation of anatomical specimens resulted in high-quality models that could be manipulated by users. This included the successful generation of models of specimens preserved in fluid, which has previously been perceived as a challenge and potential limitation of using photogrammetry to create digital models of museum specimens. The evaluation of the application suggests that interactive virtual museum applications could be a valuable tool in public engagement. Moreover, the models that were created could be used in animations or be 3D printed to further enhance public engagement with museum collections. An additional benefit of this approach is that 3D models could be made of objects contained in museum stores that are unable to be displayed in the physical museum due to lack of space or the fragility of the object.

The current application can be used by visitors inside the physical Anatomy Museum, introducing them to aspects of the collection and its history in an interactive and engaging way. The application proved to be a good way of visualising complex anatomical structures, and using it inside the museum might enhance its educational value. Furthermore, there are other visualisation techniques that could be incorporated in the future. For example, implementing the models using augmented reality features could be trialled and the effect on public engagement evaluated. There is also the possibility of converting the Virtual Anatomy Museum application into a virtual reality (VR) game. VR can create an immersive experience and increase engagement for users who are not able to visit the physical museum. Indeed, in this respect, virtual museums may surpass physical museums, as it would be possible to create an application hosting a collaborative exhibition, combining specimens from institutions around the world.


References

Benoit B (2016) The poor man's guide to photogrammetry. Available at: https://bertrand-benoit.com/blog/the-poor-mans-guide-to-photogrammetry/. Accessed 10 Sept 2019
Earley K, McGregor R (2019) Visualising medical heritage: new approaches to digitisation and interpretation of medical heritage collections. In: Rea PM (ed) Biomedical visualisation: Volume 1. Springer, Cham, pp 25–38. https://doi.org/10.1007/978-3-030-06070-1_3
Fau M, Cornette R, Houssaye A (2016) Apport de la photogrammétrie à la numérisation 3D d'os de spécimens montés: potentiel et limites. In: Comptes Rendus – Palevol. Elsevier Masson SAS, Paris, pp 968–977. https://doi.org/10.1016/j.crpv.2016.08.003
Graham CA et al (2017) Epic dimensions: a comparative analysis of 3D acquisition methods. ISPRS Arch 42(2W5):287–293. https://doi.org/10.5194/isprs-archives-XLII-2-W5-287-2017
Jocks IT (2014) Digitising the Hunterian and Cleland collections of human and comparative anatomy – potential for education, research, conservation, and public engagement. MSc thesis, Medical Visualisation and Human Anatomy
Kontogianni G, Georgopoulos A (2015) Exploiting textured 3D models for developing serious games. ISPRS Arch 40(5W7):249–255. https://doi.org/10.5194/isprsarchives-XL-5-W7-249-2015
Lepouras G et al (2004) Real exhibitions in a virtual museum. Virtual Reality 7(2):120–128. https://doi.org/10.1007/s10055-004-0121-5
Marreez YMA-H, Willems LNA (2016) The use of medical school museums in teaching "anatomy" within an integrated medical curriculum. Acad Med 91(10):267–274. https://doi.org/10.1097/acm.0000000000001339
Miyamae C (2016) Multi-class production framework based on 3D scanning data for archaeological artifacts—the digitalization of Dogū. Archaeol Anthropol Sci 8(4):663–671. https://doi.org/10.1007/s12520-014-0224-1
Petriceks AH et al (2018) Photogrammetry of human specimens: an innovation in anatomy education. J Med Educ Curric Dev 5:2382120518799356. https://doi.org/10.1177/2382120518799356
Rea P et al (2017) Digitisation of anatomical specimens and historical pathology specimens for educational benefit. In: Ma M, Oikonomou A (eds) Serious games and edutainment applications: Volume II. Springer, Cham, pp 101–119. https://doi.org/10.1007/978-3-319-51645-5_5
Reilly M, McDonald S (2018) Anatomical preparations. In: William Hunter and the anatomy of the modern museum. Yale University Press, New Haven, pp 228–247
Sanchez-Jauregui MD (2018) Anatomical jars and butterflies: curating knowledge in William Hunter's museum. In: William Hunter and the anatomy of the modern museum. Yale University Press, New Haven, pp 159–177
Santagati C et al (2017) 3D models for all: low-cost acquisition through mobile devices in comparison with image based techniques. Potentialities and weaknesses in cultural heritage domain. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences – ISPRS Archives. International Society for Photogrammetry and Remote Sensing, pp 221–228. https://doi.org/10.5194/isprs-archives-XLII-2-W8-221-2017
Seebach S (2018) Creativity, interactivity and the hidden structures of power: a reflection on the history and current reality of the museum with the eyes of Foucault. Digithum 21:11–20. https://doi.org/10.7238/d.v0i21.3124
Skamantzari M, Georgopoulos A (2016) 3D visualization for virtual museum development. ISPRS Arch 41(July):961–968. https://doi.org/10.5194/isprsarchives-XLI-B5-961-2016
Teacher J (1900) Introduction. In: Catalogue of the anatomical preparations of Dr. William Hunter. James MacLehose & Sons, Glasgow, pp vii–lxxviii
Turchini J et al (2018) Three-dimensional pathology specimen modeling using "structure-from-motion" photogrammetry: a powerful new tool for surgical pathology. Arch Pathol Lab Med 142(11):1415–1420. https://doi.org/10.5858/arpa.2017-0145-OA
Venkatesh SK et al (2013) MRI for transformation of preserved organs and their pathologies into digital formats for medical education and creation of a virtual pathology museum. A pilot study. Clin Radiol 68(3):e114–e122. https://doi.org/10.1016/j.crad.2012.10.009
Wakefield D (2007) The future of medical museums: threatened but not extinct. Med J Aust 187(7):380–381
Zhao F, Loy SC (2015) Application of 3D digitization in cultural heritage preservation, pp 227–241

2

eLearning and Embryology: Designing an Application to Improve 3D Comprehension of Embryological Structures Keiran Tait, Matthieu Poyade, and Jennifer A. Clancy

Abstract

Embryology and histology are subjects that are viewed as particularly challenging by students in higher education. This negative perception is the result of many factors, such as restricted access to lab facilities, lack of time allocated to these labs, and the complexity of the subject itself. One main factor that influences this viewpoint is the difficulty of grasping the 3D orientation of sectioned tissues, especially in embryology. Attempts have been made previously to create alternative teaching methods to help alleviate these issues, but few have explored 3D visualisation. We aimed to address these issues by creating 3D embryological reconstructions from serial histology sections of a sheep embryo. These were deployed in a mobile application that allowed the user to explore the original sections in sequence, alongside the counterpart 3D model. The application was tested against a currently available eHistology programme on a cohort of life sciences graduates (n = 14) through qualitative surveys and quantitative testing through labelling and orientation-based tests. The results suggest that using a 3D modality such as the one described here significantly improves student comprehension of the orientation of slides compared to current methods (p = 0.042). Furthermore, the developed application was deemed more interesting, useful, and usable than current eHistology tools.

K. Tait
School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK

… (> 4,500,000) to make them less demanding on any software that would use them …

… (p > 0.05) between the groups. This was also true for questions regarding the interest in histology and embryology that the participants had, with the average scores for the responses being mainly neutral or slightly negative.

The knowledge-based tests found no significant differences between groups (p > 0.05) in labelling structures; however, the test group did achieve a higher median score than the control group (17/25 compared to 13/25, respectively) (Fig. 2.10a).
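The excerpt does not state which statistical test produced these group comparisons; for two small independent groups summarised by medians, a Mann-Whitney U test is a common choice, and the sketch below shows how such a comparison would look in Python with SciPy. The score lists are invented for illustration and are not the study data.

# Illustrative only: comparing two small groups of scores (out of 25) with a
# Mann-Whitney U test. The chapter reports medians of 17/25 (test group) and
# 13/25 (control group); these lists are made up, and the chapter does not
# state that this particular test was used.
from scipy.stats import mannwhitneyu
from statistics import median

test_group    = [17, 19, 15, 18, 16, 20, 17]   # hypothetical scores
control_group = [13, 12, 15, 11, 14, 13, 16]   # hypothetical scores

stat, p = mannwhitneyu(test_group, control_group, alternative="two-sided")
print(f"medians: {median(test_group)} vs {median(control_group)}, U = {stat}, p = {p:.3f}")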


Fig. 2.10  Interquartile range diagram for the knowledge-based test (a) and labelling test (b) showing a higher median score in the test group in both, although this was not statistically significant (p > 0.05) for the knowledge-based section. Highest and lowest scores illustrated by upper and lower lines

The test group scored significantly higher (p = 0.042) than the control group in the orientation-based questions (CG = 1.43 ± 1.40, TG = 4.71 ± 3.55) (Fig. 2.10b). One notable difference between the two groups was that the test group correctly drew all answers in the transverse plane, whereas 74.2% of answers (26/35 total answers) in the control group were drawn in coronal or oblique planes. This indicates a fundamental misunderstanding of the plane of section and orientation in relation to the embryo (Fig. 2.11). The post-test questionnaire indicated that the test group enjoyed the session more and rated the usefulness of the application significantly higher than the control group. When asked "How would you rate your enjoyment of the lab?" (Q1), "How interested were you in today's lab?" (Q2), "How would you have rated the software used in today's lab in terms of usability?" (Q3), and "How would you have rated the software used in today's lab in terms of how helpful it was to aid identification of the embryological structures?" (Q4), the test group's average response was higher (6.29/7) compared to the control group's more neutral response (4/7) (Fig. 2.12). There was a significant difference …

3  Animated Guide to Represent a Novel Means of Gut-Brain Axis Communication

…

Helpfulness of Media

Ten questions testing the helpfulness of the media in presenting the content were rated on a five-point Likert scale. The test of significance used a matched-pairs design between questions to compare mean scores. The difference was calculated by a two-tailed standard z-test assuming unequal variance, which was checked beforehand. The results are shown in Fig. 3.6. The results showed a highly significant difference in Questions 26 and 29. For Question 26, the Group A score was M = 4.0 and the Group B score M = 2.67, making the results significantly different (p = 0.002). For Question 29, Group A had a mean score of M = 4.21 and Group B a mean score of M = 2.83, making the results significantly different (p = 0.005).
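A two-tailed z-test with unequal variances, as described above, can be written in a few lines. The sketch below (Python with NumPy and SciPy, invented example responses) reproduces the kind of comparison reported for Questions 26 and 29, but it is an illustration rather than the authors' analysis code.

# Illustrative only: a two-tailed z-test assuming unequal variances, matching
# the comparison described above. The Likert responses below are invented.
import numpy as np
from scipy.stats import norm

def ztest_unequal_var(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    p = 2 * norm.sf(abs(z))  # two-tailed p-value
    return z, p

group_a = [4, 5, 4, 3, 4, 5, 4, 4, 3]   # hypothetical animation-group ratings
group_b = [3, 2, 3, 2, 4, 3, 2, 3]       # hypothetical written-narrative ratings

z, p = ztest_unequal_var(group_a, group_b)
print(f"mean A = {np.mean(group_a):.2f}, mean B = {np.mean(group_b):.2f}, z = {z:.2f}, p = {p:.3f}")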


Fig. 3.5  Significant difference in Q16 and Q17 perceived understanding through two types of media

Fig. 3.6  Significant difference in Q27 and Q29


The remaining eight questions showed no significant difference (p > 0.05).

3.4.3 Discussion

All participants were considered in the final results. No participants were excluded as outliers, either before the experiment or in post hoc testing.

3.4.3.1 Knowledge
The short-answer questionnaire was an effective way of measuring the information learned from each medium. The questionnaires were identical in both groups, and the significantly higher score achieved by Group A (p = 0.04) confirmed H1: participants in the animation group will score higher on the correctness of information learned.

3.4.3.2 Perceived Understanding
These results examine perceived learning, measured on a five-point Likert scale. The questions were identical between Group A and Group B and referred to specific information that was mentioned. The mean score for each question in Group A was matched and compared with the mean score for the same question in Group B. The questions were phrased in the "I understand how…" manner, meaning that a higher score on the Likert scale corresponded to a higher perceived understanding of the concept. Questions with a significant difference therefore show that one of the groups had a lower perceived understanding of the concept. A significant difference was found in Q16 and Q17. The discrepancy in the scores on these two questions confirmed the second hypothesis, H2: participants in the animation group will perceive a higher learning experience. These two questions dealt with a structural concept (Q17) and the abstract concept of reactions (Q16), suggesting that animation is particularly effective as a learning tool for abstract concepts, specifically those involving structural and spatial aspects.


16) "I understand how the molecules interact with fatty acids in the mitochondria". Group A scored a mean of M = 4.33 and Group B a mean of M = 2.33, a significant difference (p = 0.023).
17) "I understand the difference between 3-TMAB, 4-TMAB and carnitine molecules". Group A scored a mean of M = 4.22 and Group B M = 2.67, a highly significant difference (p = 0.01).

3.4.3.3 Helpfulness of Media
These results examine perceived helpfulness, measured on a five-point Likert scale. The questions were not identical between Group A and Group B: Group A was asked about the likability of the animation and the models, whereas Group B was asked the same types of questions but with "written narrative" instead of animation, and with "would be helpful" statements in regard to the models. The mean score for each question in Group A was matched and compared with the mean score for the corresponding question in Group B. The questions were structured so that a higher score on the Likert scale corresponded to higher enjoyment and helpfulness of the animation/narration or of the hypothetical use of animation. A significant difference was found in Q26 and Q29, and the discrepancy in the scores on these two questions confirmed H3: participants in the animation group will rate their media as highly helpful. The average means show a higher score in the animation group's answers, indicating greater helpfulness in understanding the content as a result of the presentation. The remaining eight questions showed no significant difference but were rated highly in both groups; together with the significant differences above, this supported the third hypothesis, H3.
26) "The models were not distracting from the information" (Group A) versus "The written text was too boring to not distract from the information it tried to convey" (Group B). For Question 26, the Group A score was M = 4.0 and the Group B score M = 2.67, making the results significantly different (p = 0.002).


29) "I have enjoyed watching this animation" (Group A) versus "I have enjoyed reading this informational sheet" (Group B). For Question 29, Group A had a mean score of M = 4.21 and Group B a mean score of M = 2.83, making the results significantly different (p = 0.005).

The rest of the questions showed no significant difference (p > 0.05), which in this case means that the participants from both groups agreed on the statements. This was especially important for Question 30, as it confirmed that the animation was (Group A), or would be (Group B), very helpful in presenting the abstract concepts regarding the gut-brain axis.

3.4.4 Conclusion

Future research would aim to investigate animation as a learning tool. While previous research supports animation as a suitable learning tool, finding an appropriate balance between under-stimulation and overloading the cognitive load with animation is difficult to define. Therefore, as previously discussed, animation can be both a help and an obstacle in learning. This specific study focused on teaching abstract, novel ideas to people with a relevant background. To improve this study, it would be helpful to test this animation on the general public, without such knowledge, and compare their results. As mentioned before, some literature supports the idea that people with previous knowledge benefit more from interactive animations, while people who are learning things from a new field benefit more from an animated movie, such as this one.

3.5 Discussion and Conclusion

The data collected support the hypothesis that animation providing visual and auditory stimulation is more helpful in teaching abstract scientific concepts than reading a text providing the same information. These hypotheses were rooted in background research on how animation can be used as a learning tool and how specifically it should be used to be effective in doing so. This chapter will focus on summarizing the key findings of the study, its contribution to research, limitations to be considered in applying the results, and directions for future work.

3.5.1 Key Findings

The results showed that people in the animation group scored higher on knowledge tests regarding the new information presented in either of the conditions. In terms of perceived understanding between the two groups, people in the animation group scored higher on the five-point scale, showing that they felt they understood more, which correlates with the actual knowledge results. These effects were found especially in the more complex concepts regarding structure and the visual representation of chemical reactions in cellular mitochondria. The testing for enjoyment and helpfulness of each medium has shown that visual representation would be helpful in learning abstract concepts.

3.5.2 Contributions

The vast majority of small-scale experiments testing the effectiveness of animation as a learning tool, like this one, use the general population as their target audience. While that is very helpful, research into a specific demographic contributes to a more niche aspect of this research field. Using a target audience – people with a specific background – has shown which parts of the animation were done correctly and which could be improved. A specific contribution of this project is carrying out a project not only on a target audience but also on an abstract concept. As mentioned before, the scientific concepts and reactions are not visible to the human eye; therefore, their visualization is very important, as it helps with building the bigger picture that depends on it. The MGB axis is an area of microbiology that has intrigued scientists in recent years, but its vague principles remained a frontier. The research paper cited here has managed to find evidence for a function of the microbiome that plays a major role in mammalian cell function. The visualization of the key concepts within this whole field is very important, as it will help direct future research into the microbiome and how to improve human health more effectively. It is also an area of intense public interest, highlighting the need for tools to aid the general public's understanding of often-detailed scientific concepts.

3.5.3 Limitations

3.5.3.1 Population Validity
Overall, the main shortcoming of this research was the small sample size. While 15 participants is a reasonable number for a pilot study, especially with a target audience, more participants would yield results with higher population validity. Therefore, despite the findings supporting the initial hypothesis, they should be interpreted with caution due to the small sample size.

3.5.3.2 Construct Validity
While the effectiveness of the animation was correctly measured against a control group, the written narrative condition had some shortcomings. First of all, some studies testing the effectiveness of animation use still images alongside the written article they provide. This study used only a written narrative, which made the retention of abstract information more difficult. Secondly, there were shortcomings in the questionnaire measuring the effectiveness of the two media. Some of the answers pointed to questions being unclearly written. Question 13, "What is the name of the reaction in FAO?", referred to the esterification reaction, yet only 2 participants out of the total 15 gave this answer; the rest stated either "oxidation" or "FAO". Questions with a higher point score were those where participants in both groups lost the most points. Question 9 was "What areas of the brain are the molecules most abundant in?"


Only one participant managed to get all five marks. The same participant was the only one to gain marks in Question 11, "What health conditions are affected by FAO inhibition or mitochondrial dysfunction (7)?". This compares with the other 11 participants, who gave between one and three responses, except for 2 answers (one participant from each group) stating "I don't remember", indicating that they were not paying attention to the content. This shows the individual differences between the participants and how much attention they paid to the content.

Some other shortcomings of the animation were pointed out in participants' comments:
1) "It 'd be clear to have some key points written in subtitles because narration was quite long and it needed to carefully listen to get the key messages". This project deliberately did not use subtitles, relying only on narration and the most important labels in order to reduce cognitive overload, as recommended in the guidelines discussed earlier. However, this response showed that preventing cognitive overload by omitting subtitles might not always be efficient, and it may hinder intake of the information.
2) "Some important points, for example, the flow and connection of the mitochondrial oxidation to brain, were shown in very short time". This clearly suggests giving more time to ideas that are more complex.
3) "This part took too long". While the use of 3D models was rated with four points in terms of helping understanding, the creator of an animation should be careful that the models do not become too distracting and draw attention away from the important information to be learned.

Moreover, a comment section was not included in either of the questionnaires, yet some of the participants took the time to make suggestions for future improvement. An improvement to the study itself would be to include a comment section, so that all participants have the option to write pointers for future improvement, not just those who deemed it absolutely necessary.

3.5.3.3 Content Validity
This study used different measures to determine the relationship between knowledge and perceived understanding. However, a new dimension could be added to this study through a follow-up discussion with peers, to assess deeper understanding in practice outside of the preset short-answer knowledge questions.

3.5.3.4 Concurrent Validity
The concurrent validity of this study is considered to be high, based on the fact that the hypothesis, which was grounded in previous research, was confirmed by the data collected: namely, that animation is more effective as a learning tool for abstract concepts. However, it can be argued that it is low due to the small sample size and the phrasing of the questions used to measure effectiveness.

3.5.3.5 Pretesting
Pretesting is useful as it determines the baseline knowledge of participants before teaching them about a given concept, regardless of test group. Determining baseline knowledge on the topic and comparing it with tests taken after participants were exposed to the information increases the reliability and validity of the research experiment. Because pretesting was omitted in this project, the internal validity of the test was decreased.

3.6  Conclusion

This experiment consisted of taking abstract concepts and transforming them into a visual representation. It tested the effectiveness of using visual stimuli, specifically an animated movie, as a learning tool. Based on previous research and the cognitive theory of multimedia learning, the animation was composed in a way that aimed to limit cognitive overload and foster meaningful learning as much as possible. Taking recent research into the MGB axis as the abstract concept was a good choice, as the importance of this discovery is relevant for future research in human health, and quicker understanding of these concepts is therefore beneficial in real life.

The experimental results confirmed the hypotheses: participants in the animation condition scored higher on the knowledge tests and reported higher perceived understanding, which correlated with their knowledge scores. Specifically, significant differences were found in questions having to do with spatial and structural understanding. These are strengths that animation has over narration alone, as it provides learners with visual stimuli which help them to remember the information more easily. On the helpfulness and enjoyment questions, the participants in the animation condition scored significantly higher than participants in the article condition. The questions that were not significantly different indicated agreement that visual representations, specifically 2D and 3D models, were helpful, or would hypothetically be helpful, in conveying abstract ideas. These results have therefore confirmed the hypothesis that animation is significantly more helpful in teaching abstract concepts than a narrative article alone.

These findings are useful as a pilot study, given the limitations discussed above; however, future improvements should be made, and research into learning abstract ideas through animation should be continued. Animation is a powerful tool, although it is difficult to identify the right balance regarding how much stimulation should be used so that it does not become too distracting. Conveying novel scientific discoveries can be especially challenging as they are full of complex processes invisible to the human eye. Visual explanations of scientific concepts must show the crucial parts of the process and thereby serve as a building block on which additional information can then be established.


References


Ayres P, Paas F (2007) Making instructional animations more effective: a cognitive load approach. Appl Cogn Psychol 21:695–700 (this issue). https://doi.org/10.1002/acp.1343
Bedrina O (2016) Teaching with animation: from theory to practice – ICT in practice. [online] ICT in practice. Available at: http://www.ictinpractice.com/teaching-with-animation-from-theory-to-practice/. Accessed 1 Aug 2019
Berney S, Bétrancourt M (2016) Does animation enhance learning? A meta-analysis. Comput Educ 101:150–167
Cryan P (2016) The Physiological Society Annual Public Lecture 2016, video recording, YouTube. Viewed 18 April 2019. https://www.youtube.com/watch?v=Me3BAGaR1io&t=215s
D'Mello S, Lehman B, Pekrun R, Graesser A (2013) Confusion can be beneficial to learning. Learn Instr 29:153–170
Hulme H, Meikle LM, van der Hooft JJJ, Strittmatter N, Swales J, Bragg RA, Villar HVH, Ormsby M, Barnes S, Brown SL, Dexter A, Kamat MT, Komen J, Walker D, Milling S, Osterweil E, MacDonald AS, Tardito S, Bunch J, Dounce G, Edgar J, Edrada-Ebel R, Goodwin RJA, Burchmore R, Wall DM (2020) Microbiome-derived carnitine mimics as previously unknown mediators of gut-brain axis communication. Science Advances 6(11). Available from: https://advances.sciencemag.org/content/6/11/eaax6328. Accessed 12 March 2020
Lowe RK (2003) Animation and learning: selective processing of information in dynamic graphics. Learn Instr 13:157–176
Magner UIE, Schwonke R, Aleven V, Popescu O, Renkl A (2013) Triggering situational interest by decorative illustrations both fosters and hinders learning in computer-based learning environment. Learn Instr 29:141–152
Mathias M (2019) Auto-intoxication and historical precursors of the microbiome-gut-brain-axis, talk at the Royal College of Physicians and Surgeons in Glasgow, January 17
Mayer RE (ed) (2005) The Cambridge handbook of multimedia learning. Cambridge University Press, New York
Mayer RE (2018) Thirty years of research on online learning. Appl Cogn Psychol [online] 33(2):152–159. Available at: https://onlinelibrary.wiley.com/doi/full/10.1002/acp.3482. Accessed 5 May 2019
Mayer RE, Moreno R (2003) Nine ways to reduce cognitive load in multimedia learning. Educ Psychol 38(1):43–52
Mayer RE, Wittrock MC (1996) Problem-solving transfer. In: Berliner D, Calfee R (eds) Handbook of educational psychology. Macmillan, New York, pp 45–61
Schnotz W, Rasch T (2005) Enabling, facilitating and inhibiting effects of animations in multimedia learning: why reduction in cognitive load can have negative results on learning. Educ Technol Res Dev 53(3):47–58
Solomon G (1994) Interaction of media, cognition, and learning. Erlbaum, Hillsdale
Sweller J (1994) Cognitive load theory, learning difficulty, and instructional design. Learn Instr 4(4):295–312
Unknown (2019) Mouse animal Png-transparent mouse [Online]. Available at: https://www.trzcacak.rs/imgm/JmihJx_mouse-animal-png-transparent-background-transparent-mouse/. Accessed 27 June 2019
Vadimmmus (2019) Vector – male body outline. Vector illustration [Online]. Available at: https://www.123rf.com/photo_11660601_male-body-outline-vector-illustration.html. Accessed 2 June 2019

4

Engaging with Children Using Augmented Reality on Clothing to Prevent Them from Smoking Zuzana Borovanska, Matthieu Poyade, Paul M. Rea, and Ibrahim Daniel Buksh

Abstract

Smoking is a harmful habit, causing a range of severe consequences which could lead to premature death. This habit is still prevalent amongst young people. In order to protect children, effective early interventions supported by public institutions need to be set in place. Raising awareness and educating the youth is crucial to change their mindset about the severity of smoking. Emerging technologies, such as augmented reality (AR) on mobile devices, have been shown to be useful in providing engaging experiences and educating children about a range of issues, including health and anatomy. This chapter presents research which explores the use of AR as an exciting and engaging medium to effectively help educate children from 5 to 13 years about the effects of smoking. A mobile application, called SmokAR, was developed. This app includes AR visualization amongst other functionalities, whereby children are presented with a realistic model of the human lungs of a healthy person and of a smoker. The aim of this research is to propose a transformative experience in order to put children off this dangerous habit whilst they gain knowledge about the effect of smoking on their organs. The anatomical accuracy of the 3D models and animations proposed by the app has been verified by an expert anatomist. A group of children (n = 17) also took part in usability and knowledge acquisition testing at the Glasgow Science Centre. Findings showed significantly high usability, suggesting a user-friendly app design. Moreover, results also suggested that participants gained knowledge to a certain extent and felt discouraged from smoking after seeing the model of the smoker's lungs. Although there were several limitations to the study, the potential of the app to support learning and raise awareness is encouragingly positive. In addition, user testing in a more controlled environment, such as a classroom, can help gain further insights into the effectiveness and usability of the app. In the future, this simple but engaging approach to raising public awareness and supporting education could be used to further communicate with children about the negative health effects of other harmful habits such as alcohol or drug consumption.

Z. Borovanska
School of Simulation and Visualisation, The Glasgow School of Art, Glasgow, UK
Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK

M. Poyade (*) · I. D. Buksh
School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
e-mail: [email protected]

P. M. Rea
Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, Scotland, UK

© Springer Nature Switzerland AG 2020
P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1262, https://doi.org/10.1007/978-3-030-43961-3_4

Keywords

Smoking · Augmented reality · 3D visualization · Education · Community engagement

4.1  Introduction

Smoking is a worldwide problem, leading to a range of negative consequences for people's health, which on many occasions lead to fatal diseases. Smoking is a habit that is amongst the top causes of death in the UK and yet is preventable (Office for National Statistics 2017). Moreover, smoking now commences at an earlier age than before, making the prevalence of young children (ages 7–11) who have tried smoking as high as 19% (NHS Statistics on Smoking 2018).

Smoking has negative effects on various parts of the body. Beginning with the respiratory system, smoking can damage the alveoli (air sacs) that aid the exchange of oxygen and carbon dioxide during breathing, causing some of them to burst, which results in less oxygen getting into the body (Centers for Disease Control and Prevention 2010). The bronchial tubes can also be damaged by the smoke and thus produce excessive mucus, which can then present as a smokers' cough as smokers try to cough the mucus out (Centers for Disease Control and Prevention 2010). Moreover, smoking also causes damage to the cardiovascular system. It causes blood vessels to narrow due to fatty plaque building up. This results in lowered blood flow, which can cause not only muscle tiredness but also a heart attack (Taylor et al. 1998). In addition, smoking also affects the nervous system through the substance called nicotine, which causes addiction to cigarettes (NHS 2018).

Currently, global strategies trying to achieve smoking cessation, such as the restriction of smoking in public places or off-putting and dissuasive images displayed on cigarette packages, need to be supported with further strategies to combat the

seeming apathy of those who smoke or try smoking regarding the negative impacts of smoking (Sandford 2003). Furthermore, there is a need for greater focus on educating younger generations about the harmful effects of smoking on the human body in order to discourage them from starting to smoke. This can create a meaningful impact on future generations of adults, where preventing the habit is the focus rather than its cessation.

New innovative technologies such as augmented reality (AR) have been shown to be useful for educating and raising public awareness in healthcare-specific contexts (Moro et al. 2017). Accordingly, user-friendly interactions experienced when utilizing AR have been shown to enhance learning and engagement (Moro et al. 2017). Moreover, when interacting with AR, memory encoding, which is crucial for information retention, and also the visual attention of users increase significantly and may even triple (Zappar 2019). Thus, it can be hypothesized that using AR to convey the effects of smoking can be more engaging and have a greater impact on users than conventional 2D material such as images, posters, banners and animations (Mohan et al. 2015). In addition, Riva et al. (2016) demonstrated that AR can be beneficial in urging users to change clinical or personal views and in empowering more profound self-reflection. Thus, it is believed that AR can help create a transformative experience able to prevent users from smoking. With this in mind, creating an application which uses AR in order to engage with younger generations and educate them about the effects of smoking would be valuable and could potentially be impactful on National Health Services in the future.

This chapter presents a study which aims to design and implement an AR application that can identify a pattern on clothing, such as T-shirts, and explores this approach as a means of engagement with young children when receiving information about the effects of smoking. Moreover, a test measures the usability of the app and assesses the impact on learning about smoking by measuring the acquisition of knowledge through a pre- and post-test questionnaire. This chapter also


discusses the potential impact of mobile AR, now available to most, on raising awareness about the effects of smoking.

4.2  Background Context

This chapter reviews, first, the global problem of smoking amongst the youth and explores intervention strategies already implemented. Second, it explores the role of emerging technologies such as virtual reality (VR) and AR.

4.2.1 The Current Situation Regarding Smoking in the UK and Beyond

Smoking poses a global threat to people's health, with a range of consequences, most of them very severe, with long-term effects and some leading to death. The consequences include emphysema (Max 2001) and lung cancer (70% of all lung cancers are a result of smoking), along with other types of cancers such as mouth or throat cancer, damage to the cardiovascular system, an increased risk of heart attack or stroke, and many more negative effects on health (NHS 2018). Smoking is the most common cause of death or sickness in the UK (NHS 2018); 4% of all hospital admissions are due to smoking (NHS Statistics on Smoking 2018). In 2006–2007, smoking as a contributing factor for various chronic diseases cost the NHS £3.3 billion (Scarborough et al. 2011). Furthermore, it is not only active smokers that are at risk of these consequences. Passive smoking, from inhaling the smoke, increases one's chances of the same health consequences the smokers face (NHS 2018); for example, having a partner who smokes makes the passive smoker four times more likely to develop lung cancer. Passive smoking is especially dangerous for children, making them more vulnerable to chest infection, cough or even meningitis (NHS 2018).

Currently, more than a seventh of adults classify themselves as smokers, and even more alarming is that almost one in every five children


aged 7–11 have already experimented with smoking (NHS Statistics on Smoking 2018). The prevention of smoking amongst the youth is very important because 88% of daily smokers started before the age of 18, and 99% of regular smokers had their first cigarette before they were 26 (National Center for Chronic Disease Prevention and Health Promotion 2012). Around the world, over 12% of youth who have never smoked are now prone to start; however, education about smoking has been shown to decrease this susceptibility, along with effective policies in place against smoking (Veeranki et al. 2014). Smoking in youth can significantly increase the chances of having chronic diseases in adulthood (National Center for Chronic Disease Prevention and Health Promotion 2012).

4.2.2 Smoking Interventions

Smokers often view the negative effects of smoking as not guaranteed and as occurring too far in the future to cause enough alarm to stop the habit (West 2017). Thus, effectively communicating information about the effects of smoking and raising public awareness may be beneficial for decreasing the likelihood of taking up the habit. Furthermore, educating the youth can turn them into anti-smoking advocates and possibly have an impact on the smoking habits of their relatives. Children who are properly educated about the effects of active and passive smoking also gain a voice to complain about, or avoid, situations where they are exposed to passive smoking (Johansson et al. 2003; Thaqi et al. 2005).

Currently, there are a range of different smoking interventions already in place. Some with higher and some with lower effectiveness, they aim to help people stop smoking or prevent them from starting. It is important to recognize the features that make anti-smoking strategies successful in discouraging people from smoking. Likewise, it is crucial to recognize the interventions that may actually have the opposite effect.


It has been revealed that tobacco company commercials targeting parents increase the likelihood that young people viewing them will start smoking, making them perceive smoking as better and less harmful than it actually is, whilst youth-targeted commercials have no effect (Wakefield et al. 2006). Similarly, youth access interventions are often not effective because they urge youth to perceive smoking as adult-like (Fichtenberg and Glantz 2002). Some other interventions that aim to provide alternatives to tobacco smoking, such as e-cigarettes, have been shown to heighten the risk of youth smoking (Dai and Hao 2016).

Effective anti-smoking interventions have been shown to combine several different components such as media campaigns, price increases and youth education about the effects of smoking (Fichtenberg and Glantz 2002; National Center for Chronic Disease Prevention and Health Promotion 2012). Mass media is an effective way to deliver anti-smoking messages to the youth, and these messages, if interesting and presented frequently, can help promote a non-smoking lifestyle (Flynn et al. 1992). Media can be very useful for communicating the negative consequences and the actual harm that smoking can cause, and also for referring people to health services for further support (West 2017).1 In addition, effective government interventions, which include a ban on smoking in public places and tobacco advertising, an increase in tax on cigarettes, educational campaigns and health warnings on cigarette packages, are important (Sandford 2003). However, gaining the support of the public through educational campaigns and public engagement cannot be omitted if the interventions are to succeed (Sandford 2003). When a youth receives guidance regarding smoking and is educated about its dangers, it can prevent them from starting to smoke (Urrutia-Pereira et al. 2017), which is crucial and can be life-changing, as adolescence is the period when most smokers start smoking (White et al. 2008).

It is not only the type of intervention that is important but also the way it is presented. The use of images as warnings on cigarette packages2 was significantly better than using text warnings, increasing engagement and attention, emotional reaction and also the attitude against smoking (Noar et al. 2016). Large and clear image-based warning labels about smoking were also more memorable for young people compared to text warnings (White et al. 2008). The study by White et al. (2008) showed that a higher percentage of young people attended to, considered and discussed smoking when the graphic cigarette packages were introduced. Moreover, these picture-based warnings have been shown to have an impact on current smokers along with adolescents considering starting the habit, causing a higher proportion of them to recognize the risks associated with smoking (White et al. 2008). Additionally, mobile-based educational interventions have shown great benefits for youth, as mobiles are a common means of accessing information and are available to most (Whittaker et al. 2008). There is also a need to facilitate behavioural change in youth and not only to communicate information (Bruvold 1993), which emerging technologies, such as augmented reality, have been useful for.

1  https://www.itv.com/news/2013-11-28/cigarette-packaging-through-the-ages/

2  https://www.itv.com/news/2013-11-28/cigarette-packaging-through-the-ages/

4.2.3 Emerging Technologies in Public Awareness and Education

The most commonly used technologies for public education are virtual reality (VR) and augmented reality (AR). The difference between them is their proportion of real and virtual components – in VR, the user is immersed in a virtual environment that has been built to replace the real world, whilst in AR, the virtual environment or information is overlaid on the user's point of view of the real world (Bower et al. 2014) (Fig. 4.1). Research further discussed in


Fig. 4.1  Mixed reality continuum inspired by Bower et al. (2014)

this chapter has highlighted the advantages and disadvantages of both technologies.

4.2.3.1 Virtual Reality
Virtual reality has been used to simulate complex tasks in immersive ways that support the provision of depth cues, e.g. surgical tasks or maintenance tasks. Virtual reality, similarly to AR, can be used to turn an experience that is otherwise inaccessible to the user into a possible one, for example, training for dangerous tasks from a safe environment (Freina and Ott 2015).

Through immersion and a properly designed narrative, VR can physically and psychologically involve a user, making it possible to motivate behavioural and mental changes. Thus, VR can not only simulate exposure to fearful situations in a safe, controlled setting but also simulate a perception of body shape that a person is not able to embody until they experience it. This can be very useful in motivating personal change, changing people's attitudes and enhancing their well-being (Riva et al. 2016), and is commonly used, for example, in treating obesity (Ferrer-García et al. 2013).

VR has been shown to have great potential to support education, enabling it to bridge the gaps between theoretical approaches and practices that can often be logistically and ethically challenging, while being at least as effective as more traditional methods (Codd and Choudhury 2011). VR has been successfully used to simulate 3D interactive stereoscopic views of different neurosurgical approaches, which has been shown to be an effective educational approach. Henn et al. (2002) suggested that using VR can be a bridge between textbooks and intraoperative training by increasing the speed and efficiency of learning.

There are various advantages of using VR for educational purposes. For instance, a study involving 10–17-year-old children found that VR can facilitate better understanding and retention of information, making it easier to remember the learning materials (Freina and Ott 2015). The use of VR also increases users' motivation and engagement (Freina and Ott 2015). Furthermore, the perception of presence in the virtual environment is very important to consider and desirable to achieve (Sanchez-Vives and Slater 2005). Riva et al. (2007) showed that the higher the sense of presence, the higher its capacity to induce emotion. Likewise, highly emotional virtual environments can enhance the sense of presence in VR (Riva et al. 2007). Other features of VR such as immersion and imagination are also connected with users' learning benefits (Huang et al. 2016).

However, using VR technology also has some drawbacks and limitations. For instance, cyber sickness, which presents itself as nausea, disorientation, headaches, fatigue and more, is commonly experienced with the use of VR and can significantly impair the efficiency of learning (Moro et al. 2017). Experimenters should also be aware of possible blurred vision and difficulty concentrating for participants when using VR (Moro et al. 2017). Moreover, the VR set-up is more expensive with regard to the technological components such as the headsets. It also requires more space and time to set up the educational experience, which makes it less portable.


4.2.3.2 Augmented Reality

There are many examples of research using AR to educate people and to facilitate experiences that would otherwise be hard to have in reality, by providing a smooth connection between the virtual and real environments (Kerawalla et al. 2006). AR can be used to show how parts should be fitted correctly during a maintenance task (Bower et al. 2014). This can potentially decrease cognitive load, as the user does not have to keep referring to manuals and can be presented with only the right amount of specific information (Bower et al. 2014). Another use of AR enabled students to view sculptures in 3D so they could interpret them and adjust their own perception of their meaning, making history and art more engaging. The students acknowledged its usefulness, as they could adjust their view of the sculpture and perform activities they could not do in real life, such as breaking a rock apart from the sculpture or viewing it from the top (Bower et al. 2014). AR has also been shown to be useful in communicating environmental issues and educating children about marine life, with evidence of making learning more engaging and, in particular, of increasing the performance and learning of students with lower academic abilities (Lu and Liu 2014). Moreover, amongst other things, AR can provide less stressful and more accessible education to students and improve distance learning through AR conferences and shared AR workspaces, thus widening access to information (Yuen et al. 2011). An AR-based approach to education can also enhance the effectiveness of learning; for instance, language learners can benefit from AR explaining the correct position of the tongue for accurate pronunciation (Yuen et al. 2011). More specifically, AR has been very effective for imparting information about human anatomy, with students showing improved information retention and engagement during learning (Chien et al. 2010).


By using AR, students were shown to improve their understanding of the 3D nature of anatomical structures and the spatial relationships between them, compared to just viewing them as 2D images (Chien et al. 2010). In medical schools, AR cadaveric dissections have used various clipping planes and transparency options to improve students' understanding of the spatial location of structures (Thomas et al. 2010). There are several benefits to using AR to display fictional models in a familiar environment (Dunleavy et al. 2009). Firstly, even though our world is 3D, we mostly use 2D tools in education; AR can enhance effective learning by using 3D objects and visualizations (Kesim and Ozarslan 2012). Moreover, AR can create a very lifelike experience, which is especially important when teaching health- or anatomy-related information, since training and education using AR can be highly realistic and applicable (Kamphuis et al. 2014). Secondly, AR can be particularly engaging, especially for children. A study conducted with 10-year-old children showed that they were more engaged when using an AR virtual mirror interface to learn about the interaction between the sun and earth than when using traditional resources (Kerawalla et al. 2006). AR can also be beneficial in focusing users' attention on the important learning aims. By allowing the visualization of the invisible, children demonstrated better teamworking and longer interactions when using AR than when they were not (Yoon and Wang 2014). AR has even been suggested to be a very useful engagement tool for students with learning difficulties (Dunleavy et al. 2009). Thirdly, not only does AR hold great potential for education; it is also accessible to most people on their mobile devices (Nincarean et al. 2013). Mobile devices have unique features, such as 'social interactivity' and 'convenience/portability', which further enhance the value of learning experiences delivered through them (Nincarean et al. 2013).


There is a need to improve health literacy, and given that people carry their phones with them all the time, AR-based health awareness can easily and readily prompt and facilitate important knowledge about health. AR can even enhance public displays, such as at bus stops or shopping centres, in order to communicate various messages to a wider audience, effectively serving to raise public awareness (Parker et al. 2016). Lastly, AR technology has the potential to induce personal or clinical change by creating transformative experiences that inspire the user towards reflection and self-efficacy. AR can spark this change through the emotions and sense of presence it evokes (Riva et al. 2016).

Although AR technology is relatively easy to use, various challenges have been discussed in the literature. Mainly, these consist of technical problems that can occur whilst using an AR-based application, such as insufficient lighting, limited computing power of the device, inaccuracy of GPS for location or low sensitivity in recognizing the trigger image/pattern (Akçayır and Akçayır 2017). However, these can often be solved by a strong Internet connection or by using technology with appropriate hardware (Akçayır and Akçayır 2017). Another paper offers solutions to technological challenges, suggesting that schools often do not need to buy expensive hardware; instead, they can ask their students to use their own phones for an engaging AR learning experience (Dunleavy et al. 2009). However, this is a major limitation when it comes to children with fewer financial means, possibly putting them at a disadvantage. Moreover, there has been an unresolved debate in the literature over whether AR decreases cognitive load or in fact causes it (Akçayır and Akçayır 2017). Recently, however, Lai et al. (2018) suggested that AR applications designed with the contiguity principle of multimedia learning in mind significantly decreased the cognitive load of the children learning with them. Lastly, it has been suggested that the efficiency of AR is directly related to how each AR experience has been designed; the requirements suggested for classroom-based AR educational tools include, for instance, the flexibility for the educator to add or remove elements and enabling the children to interact with the AR sufficiently (Kerawalla et al. 2006).


Moreover, AR education needs to be capable of teaching the same amount of information as a traditional class would and must take into account the space restrictions of the given institution (Kerawalla et al. 2006). The effectiveness of an AR educational approach also depends on the type of learning content to be taught through AR; for example, spatial content is more suitable for AR than text-based content (Radu 2014).

In summary, emerging technologies have the potential to offer transformative, self-paced experiences geared towards behavioural and mental changes by empowering stronger critical thinking (Riva et al. 2016). They can facilitate the kinds of experiences that could typically only be had in the real world, which has been shown to enable different kinds of training practice in clinical settings (Riva et al. 2016). Moreover, when targeting youth, these experiences can build on the concept of playfulness, which has been shown to be useful for effective learning and behavioural change (Yilmaz 2016).

In this project, AR technology will be used to provide a more accessible educational experience about the effects of smoking. As previously discussed, AR-based applications can be viewed on mobile devices such as smartphones and are therefore accessible to most people. Thus, this project will utilize a mobile device delivery method, which is expected to increase its impact in raising awareness about the negative consequences of smoking.

4.3 Methods

4.3.1 Workflow

Figure 4.2 details the workflow that was followed for the design and development of the final application, named 'SmokAR'.

4.3.2 Storyboard

Storyboarding was an important stage in planning the design and development of the application (Fig. 4.3).



Fig. 4.2  The workflow followed during design and development of the application

At this point, the selection of organs in the thorax to visualize was made; due to time constraints, only three structures – the ribs, heart and lungs – were chosen. The vasculature and muscles of the thorax were left out. Visualising the lungs was necessary to show the effects of smoking on the respiratory system, whilst the heart was chosen because smoking also negatively affects the cardiovascular system. The ribs were modelled purely to provide additional spatial cues, so extensive focus was not placed on them. In addition, the sequence of the scenes and their aims were decided. The first scene shows a model of healthy lungs and the heart in AR, to spark excitement and immersion with these realistic models.

The second scene shows the progression from those healthy lungs to the black lungs of someone who smokes. The third scene shows the smoker's lungs with options to click on several points and learn about the underlying negative health consequences of smoking. Importance was placed on simple orientation and usability, with many visuals and minimal text, in order to make the application more appealing and engaging for the target audience of children aged 5–13 years. Additionally, storyboards were created for the informational animations in the scene where users can learn about the effects of smoking. The animations were designed to help children retain the information, instead of relying purely on text to educate them about the effects of smoking.



Fig. 4.3  Storyboard of the app

4.3.3 Digital 3D Anatomical Content

4.3.3.1 Segmentation

3D Slicer was used to segment the desired organs from sample data provided with the software. The contrast and brightness were adjusted in the Slicer interface to improve the distinction between structures and enable more precise segmentation. Segmentation of the ribs was tricky, as Slicer's threshold segmentation method is based on the density of the structure. The segmentation of the ribs was therefore repeated manually, which produced better results with a model without holes (Fig. 4.4).

Segmentation of the lungs was relatively straightforward; the threshold, paint and fast marching tools were used to segment the lungs and trachea (Fig. 4.5). The segmentation of the heart was difficult, and the result was only a model of the basic heart shape and the aortic arch (Fig. 4.6). Based on this initial heart model, the final model of the heart was produced by expanding it further through modelling from reference images.
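To give a flavour of the density-based thresholding described above, the following is a minimal sketch, not the actual 3D Slicer workflow: it isolates dense, bone-like voxels from a CT-style volume by thresholding intensity values. The Hounsfield-unit cut-off, the synthetic volume and the function name are illustrative assumptions.

```python
# Minimal sketch of density thresholding, analogous to the threshold
# segmentation used in 3D Slicer. Not the authors' actual pipeline; the HU
# cut-off and the synthetic NumPy volume are illustrative assumptions.
import numpy as np
from scipy import ndimage

def segment_bone(ct_volume: np.ndarray, hu_min: float = 300.0) -> np.ndarray:
    """Return a binary mask of voxels whose intensity suggests bone."""
    mask = ct_volume >= hu_min                          # simple density threshold
    mask = ndimage.binary_closing(mask, iterations=2)   # close small holes
    labels, n = ndimage.label(mask)                     # connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)             # keep largest component

# example with a synthetic volume standing in for CT data
volume = np.random.normal(0, 50, size=(64, 64, 64))
volume[20:40, 20:40, 20:40] += 700                      # a dense "bone-like" block
bone_mask = segment_bone(volume)
print(bone_mask.sum(), "voxels classified as bone")
```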

4.3.3.2 3D Modelling

In the 3D modelling stage, the models segmented from 3D Slicer were used as a base, providing anatomical accuracy, and were further developed in 3ds Max and ZBrush.



Fig. 4.4  Segmentation of the ribs done in 3D Slicer

Fig. 4.5  Segmentations of the lungs in 3D Slicer

Switching between these software packages allowed their respective advantages to be utilized. The models were retopologized in ZBrush, using ZRemesher, DynaMesh and Decimation Master, to produce final models of around 20,000 polygons, which are suitable for mobile and tablet applications.
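As an aside, reducing a mesh to a mobile-friendly polygon budget can also be sketched with an open-source tool such as Open3D. This is only an illustrative stand-in for the ZBrush tools named above; the file names and the 20,000-triangle target are assumptions.

```python
# Illustrative quadric decimation to a mobile-friendly polygon budget using
# Open3D, as an open-source stand-in for the ZBrush retopology tools above.
# "heart_highpoly.obj" and the 20,000-triangle target are assumptions.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("heart_highpoly.obj")
mesh.compute_vertex_normals()

low_poly = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
low_poly.compute_vertex_normals()

print(f"before: {len(mesh.triangles)} triangles, after: {len(low_poly.triangles)}")
o3d.io.write_triangle_mesh("heart_lowpoly.obj", low_poly)
```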

After retopologizing, one side of the ribs maintained its shape significantly better; therefore, symmetry was applied to the other side. Moreover, the bottom of the model was cut away, as it was outside the main area of focus (Fig. 4.7).



Fig. 4.6  Segmentation of the heart in 3D Slicer

Fig. 4.7  Retopologized and adjusted ribs model on the right (24,000 polygons); original model from the segmentation on the left (39,000 polygons)

Subsequently, a rapidly segmented model of the ribs from the same dataset as the lungs was created to guide the generation of a more anatomically accurate structure, ensuring a good fit between the ribs and the lungs (Fig. 4.8). The dataset from which the ribs were initially segmented had been chosen because of a substantial rotation of the ribs in the other dataset, from which the lungs were segmented.

ZBrush tools were then used to make the ribs less thin, and alphas were used to sculpt the surface details of the bone (Fig. 4.9). The retopology of the lungs was relatively simple (Fig. 4.10); afterwards, the fissures and surface details were sculpted in ZBrush using alphas and the pinch brush (Fig. 4.11).



Fig. 4.8  The model of the ribs after it has been adjusted according to the rapid segmentation, ensuring a better fit with the lungs’ model, with sculpted detail and texture colour

Fig. 4.9  The rapid segmentation of ribs from the lung’s dataset, ensuring a good fit between the ribs’ and lungs’ models

The heart was modelled using the segmented data, which gave the model its shape and size, but also relied heavily on reference materials to model the surface details of the muscle and fat around the heart (Fig. 4.12). Moreover, the inferior vena cava (IVC), superior vena cava (SVC), pulmonary arteries and veins were modelled using reference images, and their accuracy was supervised by an anatomy expert from the supervision team. Other vasculature was not included in this project due to time constraints. After consultation with one of the supervisors, it was concluded that the heart model was slightly too diagrammatic, so a more realistic reference was used to sculpt it into a more natural and realistic-looking model (Fig. 4.13). Lastly, the cardiac vessels were modelled as splines in 3ds Max according to the sculpt, allowing higher flexibility during the workflow and making the vessels stand out better from the model (Fig. 4.14).
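The idea behind spline-based vessels can be illustrated with a short sketch: a smooth 3D curve is fitted through a few control points and then sampled along its length, much as a 3ds Max spline would later be swept with a tube radius. The control points and sampling density below are made-up values, not taken from the project.

```python
# Minimal sketch of representing a vessel centreline as a smooth spline through
# a few control points, analogous in spirit to the 3ds Max spline workflow
# described above. The control points are made-up illustrative values.
import numpy as np
from scipy.interpolate import splprep, splev

# hypothetical control points along a coronary-vessel-like path
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.0, 0.8, 1.2, 1.0, 0.4])
zs = np.array([0.0, 0.3, 0.9, 1.6, 2.0])

# fit an interpolating cubic B-spline through the control points
tck, _ = splprep([xs, ys, zs], s=0, k=3)

# sample 100 evenly spaced parameter values along the smooth centreline
u = np.linspace(0.0, 1.0, 100)
x, y, z = splev(u, tck)
centreline = np.stack([x, y, z], axis=1)
print(centreline.shape)  # (100, 3) points that could be swept with a tube radius
```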



Fig. 4.10 The retopologized model of the lungs, low poly version (18,000 polygons)

Fig. 4.11  Models of lungs with surface detail and fissures sculpted in. On the left, there is a model of the smoker’s lungs, and the model of healthy lungs is on the right

4.3.3.3 Texturing

All three models were manually UV unwrapped in 3ds Max (Fig. 4.15), and, according to these UV maps, normal maps were generated in ZBrush from the high-polygon versions of the models. This strategy visualizes the highly detailed surfaces of the models through their normal maps whilst keeping the polygon count low; the result was low-poly models with normal maps baked from their corresponding high-poly detailed versions.

The aim was to create models efficient enough for use in mobile applications whilst keeping the visualizations realistic and highly detailed (Fig. 4.16). Adobe Photoshop was used to create most of the alphas and textures for ZBrush from reference images, but some textures were provided by Lecturer Mike Marriott from the School of Simulation and Visualisation at the Glasgow School of Art in order to achieve as much photorealism as possible. Finally, the specular and glossiness settings were adjusted in 3ds Max, providing the final results (Fig. 4.17).
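The underlying idea of a normal map, encoding fine surface detail in a texture while the geometry stays low-poly, can be shown with a simplified sketch that derives a tangent-space normal map from a height map. This is not the high-to-low-poly bake performed in ZBrush; the file name and strength parameter are assumptions.

```python
# Simplified illustration of encoding surface detail in a normal map while the
# geometry stays low-poly: derive a tangent-space normal map from a height map
# via image gradients. Not the ZBrush high-to-low-poly bake; "detail_height.png"
# and the strength value are assumptions.
import numpy as np
from PIL import Image

def height_to_normal_map(height_path: str, strength: float = 2.0) -> Image.Image:
    h = np.asarray(Image.open(height_path).convert("L"), dtype=np.float32) / 255.0
    dy, dx = np.gradient(h)                               # surface slope per pixel
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(h)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    rgb = ((n * 0.5 + 0.5) * 255).astype(np.uint8)        # map [-1, 1] -> [0, 255]
    return Image.fromarray(rgb, mode="RGB")

# hypothetical usage:
# height_to_normal_map("detail_height.png").save("detail_normal.png")
```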



Fig. 4.12  Retopologizing and adjusting the model of the heart. On the left, there is the retopologized model after segmentation from 3D Slicer. On the right, there is the further adjusted and sculptured model (25,000 polygons)

Fig. 4.13  The more diagrammatic model of the heart is on the left. On the right, there is the more realistically sculpted final high poly version of the heart model

Overall, creating the 3D models was an effortful process with a steep learning curve. Constant improvements to the shape, detail and textures of the models had to be made to make them as realistic as possible.

4.3.3.4 Animations

The 3D models of the lungs and the heart were animated to simulate realistic breathing and a beating heart. Firstly, the lungs were expanded to their 'inhale' form in ZBrush. These were then transferred into 3ds Max to be animated using the Morpher modifier, which blends between two shapes – the original 'exhale' form and the 'inhale' form of the lungs model (Fig. 4.18).

The heart animation consisted of inflating each section separately and storing these as different layers in ZBrush. The different stages of the heartbeat were then synced in 3ds Max using the Morpher modifier with two layers, adjusting the percentage of each layer according to the stage of the heartbeat. Reference videos (the Children's Hospital of Philadelphia 2011) were used to help generate an accurate heartbeat animation. The final animations were supervised and approved by an anatomy expert from the supervision team. The created animations were exported into Unity in FBX format and played using the Unity Animator.
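The blend performed by the Morpher modifier is, at heart, a per-vertex linear interpolation between two stored shapes. The sketch below shows that idea with tiny made-up vertex arrays and a smooth breathing cycle driving the blend weight; it is an illustration of the principle, not the 3ds Max or Unity implementation.

```python
# Minimal sketch of morph-target (blend shape) animation, the same idea as the
# Morpher modifier described above: each frame's mesh is a weighted blend of an
# 'exhale' and an 'inhale' vertex set. The vertex arrays are illustrative.
import numpy as np

def blend_shapes(exhale: np.ndarray, inhale: np.ndarray, weight: float) -> np.ndarray:
    """Linearly interpolate vertex positions; weight=0 -> exhale, 1 -> inhale."""
    return (1.0 - weight) * exhale + weight * inhale

# tiny stand-in meshes: N vertices with (x, y, z) coordinates
exhale_verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
inhale_verts = exhale_verts * 1.15            # "inhale" form is slightly expanded

# drive the weight with a smooth breathing cycle over 60 frames
frames = 60
for f in range(frames):
    w = 0.5 - 0.5 * np.cos(2 * np.pi * f / frames)   # eases 0 -> 1 -> 0
    verts = blend_shapes(exhale_verts, inhale_verts, w)
    # 'verts' would be pushed to the renderer or stored as a keyframe here

print("last frame weight:", round(float(0.5 - 0.5 * np.cos(2 * np.pi * (frames - 1) / frames)), 3))
```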


Fig. 4.14  The blood vessels of the heart modelled as splines

4.3.4 2D Content

4.3.4.1 Illustrations and Interface Design

Firstly, the colour palette for the application was decided on (Fig. 4.19). The choice was based on research into colour theory and design for children (Trythall 2015). Blue is often associated with medicine; however, the basic blue was adjusted to a friendlier tone (#13616F, the main colour of the application) to make the application more appealing, approachable and user-friendly. Moreover, since the most common type of colour blindness is red–green colour blindness, these colours were avoided (Birch 2012). The rest of the palette consists of the base colours of the healthy lungs, unhealthy lungs and heart, plus a lighter shade of the main colour (#5F8E97). Regarding the icons for the application, the author decided to create her own instead of downloading assets; they were hand drawn in Photoshop to make them friendlier, 'easy-going' and original.


The design of the application interface focused on appealing to a young audience, clarity of navigation and minimalism. Thus, the scenes where the user can learn about the effects of smoking were designed to be simple and uncrowded, allowing easy navigation when using the app. The information button and the back button were kept in the same place across the app so that they remain easy to find, and the same notion was applied to the buttons navigating between scenes, which were uniformly kept at the bottom of the screen. The splash screen shown when entering the application was created in Photoshop, with a simple introduction of the author and the logo, which is also used as the application icon. The colour throughout the app was mostly the main colour with its lighter shade; however, for the splash and logo screens, another colour from the palette was used to provide contrast. The choice of fonts was kept to a minimum, with readability in mind: only two fonts – one text font and one heading font – were used, to keep the application simple to use.

4.3.4.2 Informational Animations

For the scene educating users about the effects of smoking, short informational animations were created to present the information in a more digestible manner, instead of presenting only text. Firstly, these animations were storyboarded in Photoshop. Subsequently, the final models of the lungs and heart were rendered as still images and adjusted in Photoshop into outlines filled with a solid colour, easing the transition between the realistic 3D models and the simpler 2D style of the informational videos/animations (Fig. 4.20). The content of the videos was then produced in 3ds Max by modelling a blood vessel, alveoli and bronchial tubes with cilia. The blood cells in the blood flow animation were created using a particle effect with a custom particle modelled to look like a red blood cell (Fig. 4.21), instantiated several times. The fat was animated using the Morpher modifier, as previously described for the heartbeat and breathing animations.
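The particle idea behind the blood-flow animation can be sketched conceptually: spawn red-blood-cell "particles" near the vessel entrance and advance them along the vessel axis each frame. This is only an illustration of the principle, not the 3ds Max particle setup, and all numbers are made up.

```python
# Conceptual sketch of the particle idea behind the blood-flow animation:
# advance red-blood-cell "particles" along a vessel axis each frame. Purely
# illustrative; the real effect was built with 3ds Max's particle tools.
import numpy as np

rng = np.random.default_rng(42)
num_cells, frames, flow_speed = 50, 120, 0.05

# start positions: jittered around the vessel entrance at x = 0
positions = np.zeros((num_cells, 3))
positions[:, 1:] = rng.normal(scale=0.1, size=(num_cells, 2))   # small radial spread

for frame in range(frames):
    positions[:, 0] += flow_speed                                # drift along the axis
    positions[:, 1:] += rng.normal(scale=0.002, size=(num_cells, 2))  # gentle wobble
    positions[:, 0] %= flow_speed * frames                       # recycle particles
    # each frame, 'positions' would drive instanced red-blood-cell meshes

print("mean x position at the end:", positions[:, 0].mean().round(3))
```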



Fig. 4.15  The models of ribs, lungs and heart UVW unwrapped in 3ds Max

For the alveoli animation, the alveoli were scaled up and down to simulate breathing, and a blend material was animated to change from the pink of a healthy person to the black-looking alveoli of a smoker (Fig. 4.22). The cilia in the bronchial tubes were animated by adjusting their positions, and mucus was added and then adjusted using the MeshSmooth and Displace modifiers; tuning the parameters of these modifiers produced a bubbly, mucus-like appearance during the animation (Fig. 4.23). After the animations were rendered in 3ds Max, After Effects was used to assemble the final animations, blending images and videos via opacity changes and adding informational text and labels.

Additionally, a voice-over of the informational text was recorded and added in order to make the information easier for the children using the application to digest.

4.3.5 Application Development

The application was developed using Unity 3D and imported assets – the 3D models, animations, icons and other content created from the anatomical datasets provided free of charge with 3D Slicer. Development started as a proof of feasibility, which included three scenes with key functionalities (Fig. 4.24) and blocky, in-progress models.



Fig. 4.16  Texturing of the models in ZBrush. The high poly versions of the models were used to generate detailed texture maps

These functionalities included an AR scene where the user can observe the models overlaid on a pattern printed on a T-shirt; a slider to blend between two textures, used to transition between the healthy and smoker's lung textures; and panels that pop up when the user clicks on the 3D model of the smoker's lungs. There is no set order in which users should interact with the scenes, giving them space to explore the functionalities and visualizations the app offers. Each scene has an information box where users can read short instructions if needed.
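The slider-driven texture blend amounts to a per-pixel linear interpolation between the healthy and smoker's lung textures. The sketch below shows that operation in isolation; the file names and slider value are assumptions, and the actual app performs the blend in Unity rather than in Python.

```python
# Minimal sketch of the slider-driven texture blend between healthy and
# smoker's lung textures: a per-pixel linear interpolation. File names and the
# slider value are assumptions, not the app's Unity implementation. Both
# textures are assumed to share the same dimensions.
import numpy as np
from PIL import Image

def blend_textures(healthy_path: str, smoker_path: str, slider: float) -> Image.Image:
    """slider = 0.0 shows the healthy texture, 1.0 the smoker's texture."""
    healthy = np.asarray(Image.open(healthy_path).convert("RGB"), dtype=np.float32)
    smoker = np.asarray(Image.open(smoker_path).convert("RGB"), dtype=np.float32)
    blended = (1.0 - slider) * healthy + slider * smoker
    return Image.fromarray(blended.astype(np.uint8))

# hypothetical usage:
# blend_textures("lungs_healthy.png", "lungs_smoker.png", slider=0.6).save("blend.png")
```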

4.3.5.1 Pattern Generation and Impression

A custom target image for the AR to detect was created from the model of the lungs rendered as a wireframe in 3ds Max and adjusted in Photoshop to increase its resolution and contrast. A pattern for the front and back of the T-shirt was created, allowing users to view the models from different angles. These images were then tested in the Vuforia target manager, where they received a rating of 5 stars (Fig. 4.25). The AR target was then printed on a T-shirt and tested for functionality.
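Vuforia's star rating reflects how many well-distributed, high-contrast features a target image contains. As a rough, unofficial proxy for that idea, and emphatically not Vuforia's actual rating algorithm, one can count ORB keypoints in the candidate pattern with OpenCV; the file name below is a hypothetical example.

```python
# Rough, unofficial proxy for AR-target "trackability": count ORB keypoints in
# the candidate pattern image with OpenCV. This is NOT Vuforia's rating
# algorithm; it only illustrates why detailed, high-contrast patterns rate
# well. "tshirt_pattern.png" is a hypothetical file name.
import cv2

def feature_count(image_path: str) -> int:
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    orb = cv2.ORB_create(nfeatures=2000)       # detect up to 2000 corner-like features
    keypoints = orb.detect(img, None)
    return len(keypoints)

# hypothetical usage: more (well-spread) keypoints generally track better
# print(feature_count("tshirt_pattern.png"))
```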

4.3.5.2 Application Development Outcomes

The final application is presented through screenshots of all the scenes (Figs. 4.26, 4.27, 4.28, 4.29 and 4.30). The application was designed for landscape viewing on a Samsung Galaxy S2 tablet with a 2048 × 1536 resolution.


Fig. 4.17  The final look of the lung and heart models after adjusting glossiness

4.4 Evaluation of the App

The SmokAR application was developed utilizing AR to raise awareness about the effects of smoking amongst young people. In order to assess the usability, effective learning and enjoyment when children use the SmokAR app, user testing with the target age group was conducted at the Glasgow Science Centre.

4.4.1 Research Questions

When creating and designing the questionnaires, the following aspects were taken into consideration: usability, user-friendly design and the educational effectiveness of the app. Based on these factors, the following research questions were formed:

• Will the tablet-based AR application be suitable for raising awareness amongst children about the effects of smoking?
• Will the application trigger enjoyment and playfulness amongst the children interacting with it?
• Will the application be effective in helping children to gain more knowledge about smoking?

4.4.2 Participants

Seventeen participants (8 females and 9 males) aged between 5 and 13 years (M = 9.35 ± 2.69) were recruited at the Glasgow Science Centre.



Fig. 4.18  The animation of the lungs’ breathing. The top left image shows the ‘exhaled’ version of lungs; the bottom left image shows the ‘inhaled’ version of lungs

Fig. 4.19  The colour palette for the application

Following the demographic screening, participants were asked whether they had access to a tablet or mobile device at home, in order to determine their familiarity with this technology; only one participant did not have access to such technology. Furthermore, the participants were asked questions to explore their existing awareness of active and passive smoking. Five of the 17 participants had not previously learnt about smoking; in contrast, only one participant stated that they had learnt about smoking previously, commenting that smoking was a cause of air pollution. When asked if they knew someone who smokes, 12 said they did.

Most mentioned people in their family as those they knew who smoked, except for two children who mentioned children at their high school.

4.4.3 Materials



Fig. 4.20  Outline of the heart and lungs for animation

Fig. 4.21  The animation of the blood vessel in 3ds Max

The equipment used during testing was the SmokAR app installed on a Samsung Galaxy S2 tablet, a mannequin named 'Ambrosia' wearing a T-shirt with the original design that served as the AR target (Fig. 4.31), and a set of headphones.

4.4.4 Procedure

Prior to testing, the parents/guardians of all the children read the information sheet, which introduced the project and answered commonly asked questions, such as those about withdrawal and data protection.

Fig. 4.22  The animation of alveoli in 3ds Max

If the parent(s)/guardian(s) and the child were then interested in participating, the parents/guardians signed consent forms on behalf of the child.


Participants were asked to fill in a questionnaire before interacting with the app. The questions were divided into two categories: the demographic questions described above and pre-test questions asking about the effects of smoking and a disease someone who smokes can get. Afterwards, the children were given the tablet with the SmokAR app in order to test it. It was explained to them that, by pointing the tablet's camera towards the mannequin's T-shirt, they could see the lungs and heart through AR. The other parts of the app were also introduced, showing them where they could listen to or read about the effects of smoking, watch an illustrative video and observe what smoking does to the lungs (Figs. 4.29 and 4.30).


After the children had used the app and interacted with its features, they were asked to fill in an evaluation questionnaire. It consisted of 10 usability questions, utilizing the standardized System Usability Scale (Brooke 1996; https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html), and 11 additional questions about the users' opinions of the app, its technical set-up, design, possible future uses and their enjoyment of it. The questions used a 5-point Likert scale from '1, Strongly Disagree' to '5, Strongly Agree', represented with a set of smiley faces (Fig. 4.32) sourced from SurveyLegend (2014). All but one of the 11 additional questions were phrased as positive statements, whilst the standardized usability questions were an equal mix of positively and negatively phrased statements. Lastly, after the questionnaire, there were four more questions: two asking what their favourite part of the app was and what they had learnt, and two that were the same as the pre-test questions. There was also a box for further comments. Furthermore, the participants and their parents/guardians were given an information leaflet to take home, which included the information they had learnt in the app and useful NHS websites for further information (Fig. 4.33).

4.4.5 Data Analysis

Fig. 4.23  The animation of the bronchial tubes in 3ds Max

The usability question scores were analysed according to the System Usability Scale guidelines (Brooke 1996) in order to convert them to a percentile ranking and aid comparison with a standardized benchmark.

Fig. 4.24  The key functionalities of the application

Z. Borovanska et al.

80

Fig. 4.25  The pattern created for use as AR targets on T-shirts and its 5-star rating in Vuforia target manager

Fig. 4.26  The intro scene of SmokAR, with its informational panel

Firstly, scores from the questions phrased as negative statements were reversed to make the interpretation of the results from the graphs consistent, and all scores were then converted to a 0–4-point scale. Each participant's converted ratings were then summed and further converted to a 0–100-point scale by multiplying by 2.5. Participants' final usability scores were compared to the standardized recommended usability average using a one-sample t-test in PSPP software (gnu.org n.d.; https://www.gnu.org/software/pspp/). Means and standard deviations were calculated for the remaining Likert scale questions; the negative statement amongst these 11 questions was also reversed to keep the interpretation of the graphs consistent.
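The scoring steps described above can be sketched as follows. This is an illustrative reimplementation with made-up responses, not the authors' PSPP analysis; the benchmark of 68 is the commonly cited SUS average and is assumed here rather than taken from the chapter.

```python
# Illustrative reimplementation of the SUS scoring and one-sample t-test
# described above (the actual analysis was run in PSPP). The response matrix
# is made up, and the benchmark of 68 is an assumption.
import numpy as np
from scipy import stats

def sus_score(responses: np.ndarray) -> np.ndarray:
    """responses: (participants, 10) Likert answers, 1-5, in SUS item order."""
    r = responses.astype(float)
    odd = r[:, 0::2] - 1            # items 1,3,5,7,9 (positive): score = response - 1
    even = 5 - r[:, 1::2]           # items 2,4,6,8,10 (negative): score = 5 - response
    return (odd.sum(axis=1) + even.sum(axis=1)) * 2.5   # scale each total to 0-100

# made-up responses for 17 participants
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(17, 10))
scores = sus_score(responses)

t_stat, p_value = stats.ttest_1samp(scores, popmean=68.0)
print(f"mean SUS = {scores.mean():.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```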


The data from the pre- and post-test questions were analysed together to determine the level of learning with the app. There was also an open question asking the children to state at least one thing they had learnt when using the app; this served as an additional means of assessing learning efficiency and the educational potential of the app. Lastly, children were asked what their favourite part of the app was, along with any extra comments, which were analysed to find patterns and determine the favourite and possibly most engaging aspects of the application.



Fig. 4.27  The AR scene, showing the heart and lungs after the ribs have been clicked on. On the right, it shows the panel that appears on click with facts about the heart

Fig. 4.28  The effects of smoking scene, showing the different appearances of the lungs, which can be changed by the slider



Fig. 4.29  The ‘Learn more’ scene and its informational panel

An additional assessment by an independent expert user, Jenny Clancy from the University of Glasgow's anatomy department, was also conducted. This result is not included in the statistical analysis, as it is not part of the target testing group's results; however, useful insights and comments were gained from the expert user regarding the anatomical accuracy of the models and animations used in the app, commonly referred to as face validity.

4.5 Results

4.5.1 Observational Analysis

From the observations made during user testing, the attitude of the children testing the app was crucial.

Those who were genuinely interested in learning took time to explore the app, whereas the children whose parents had suggested they try it because they wanted them to learn about smoking showed less enthusiasm. Regardless of the initial level of excitement, it was observed, and further validated by the questionnaires, that both groups of children learnt something new from the application. Another crucial factor was age: children at the older end of the age group understood the questions in the questionnaires better than the younger children, who required the researcher or a parent/guardian to clarify certain words in the phrasing of the sentences. In general, all the children seemed entertained and excited to see the AR version of the lungs and heart. Furthermore, all children, regardless of age, seemed to navigate the app easily, interacting with the UI intuitively and having no issues with it.



Fig. 4.30  Images from the four different informational animations regarding the effects of smoking

4.5.2 Results of the Questionnaires

4.5.2.1 Usability

A one-sample t-test was performed in order to determine the significance of the divergence between the participants' data and the standardized average reference for system usability. The analysis suggested that there was a very significant difference (p