Biomedical Visualisation: Volume 9 (ISBN 3030611248, 9783030611248)

This edited book explores the use of technology to enable us to visualise the life sciences in a more meaningful and engaging way.


English | Pages: 217 [211] | Year: 2021



Table of contents:
Preface
Acknowledgements
About the Book
Contents
List of Contributors
About the Editor
1: Pair-Matching Digital 3D Models of Temporomandibular Fragments Using Mesh-To-Mesh Value Comparison and Implications for Commingled Human Remain Assemblages
1.1 Introduction
1.1.1 Commingled Human Remain Assemblages
1.1.2 Sorting Commingled Assemblages
1.1.3 Biomedical Visualization and the Improvement of Segregation Techniques
1.1.4 The Mesh-To-Mesh Value Comparison (MVC) Method
1.2 Materials and Methods
1.2.1 Sample
1.2.2 Workflow of the Method
1.2.3 Segmentation and 3D Model Building
1.2.4 Cropping
1.2.5 Mirror Imaging
1.2.6 Alignment/Pre-registration
1.2.7 Viewbox Software
1.2.8 Statistical Analyses
1.3 Results
1.3.1 Results of Comparisons for Pair-Matching
1.3.1.1 Lowest Common Value (LCV) Selection
1.3.1.2 Receiver Operating Characteristic (ROC) Analysis
1.3.2 Results for Comparisons of Articular Correlates
1.3.2.1 Lowest Common Value (LCV) Selection
1.3.2.2 Receiver Operating Characteristic (ROC) Analysis
1.3.3 Summary of Results
1.4 Discussion
1.5 Conclusions
Bibliography
2: Forensic Recreation and Visual Representation of Greek Orthodox Church Saint Eftychios of Crete
2.1 Introduction
2.1.1 Brief Description of the Facial Reconstruction Methods
2.1.2 Historical Information
2.1.3 Aims of the Study
2.2 Material and Methods
2.2.1 Skull’s Geometry Documentation
2.2.2 Fused Deposition Modeling (FDM) Reproduction of the Skull
2.2.3 Facial Reconstruction
2.2.3.1 Manual Method
2.2.3.2 Virtual Method
2.3 Results
2.4 Discussion
References
3: Virtual Trauma Analysis of the Nineteenth-Century Severed Head of the Greek Outlaw Stavrou
3.1 Introduction
3.1.1 The Criminology Museum of Athens
3.1.2 A Short History of Banditry in Twentieth-Century Greece
3.1.3 The Case of Stavrou
3.1.4 PMCT in the Investigation of Violent Deaths
3.2 Material and Methods
3.2.1 Macroscopic Examination of the Head
3.2.2 CT Scanning and Data Acquisition
3.2.3 Trauma Reconstruction
3.3 Discussion
3.3.1 Types of Firearms Used by the Greek Army in the Early Twentieth Century
3.3.2 Wound Ballistics
3.3.3 Ballistic Trauma Interpretation
3.3.4 Benefits of Virtual Forensic Reconstruction of Stavrou’s Death
3.4 Conclusions
References
4: Using Computed Tomography (CT) Data to Build 3D Resources for Forensic Craniofacial Identification
4.1 Background
4.2 Potential Contributions of CT Data to Forensic Craniofacial Identification
4.2.1 Research
4.2.2 Visualization and Interaction with 3D CT Data
4.2.2.1 3D Slicer
4.2.2.2 Meshlab
4.2.2.3 3D Printing 3D CT Models
4.2.3 Application to Workshops and Training
4.2.4 Application to Forensic Facial Approximation Casework
4.3 Summary
References
5: Instructional Design of Virtual Learning Resources for Anatomy Education
5.1 Introduction
5.2 Methods
5.2.1 Virtual Learning Resource Development
5.2.2 Virtual Learning Resource Delivery
5.2.3 Participants
5.2.4 Virtual Learning Resource Implementation
5.2.5 Objective and Subjective Measures of Cognitive Load
5.2.6 Data Analysis
5.2.6.1 Cognitive Load Experienced for Stereoscopic and Desktop Virtual Learning Resource Deliveries
5.2.6.2 Impact of (a) Prior Anatomy Knowledge and (b) Prior University Experience on the Cognitive Load Experienced for the Desktop Virtual Learning Resource Delivery
5.3 Results
5.3.1 Participants
5.3.2 Objective and Subjective Measures of Cognitive Load
5.3.2.1 Cognitive Load Experienced for Stereoscopic and Desktop Virtual Learning Resource Deliveries
5.3.2.2 Impact of (a) Prior Anatomy Knowledge on the Cognitive Load Experienced for the Desktop Virtual Learning Resource Delivery
5.3.2.3 Impact of (b) Prior University Experience on the Cognitive Load Experienced for the Desktop Virtual Learning Resource Delivery
5.4 Discussion
5.4.1 Considerations in the Instructional Design of Virtual Learning Resources for Anatomy Education
5.4.1.1 Virtual Learning Resource Delivery Modality
Immersion
Stereopsis
Interactivity
Motion
5.4.1.2 Collaborative Learning
5.4.1.3 Learner Characteristics
Prior Knowledge
Prior University Experience
5.4.1.4 Fidelity
5.4.2 Guidelines for the Instructional Design of Anatomy Virtual Learning Resources
5.4.3 Limitations of the Study
5.4.4 Future Directions
5.5 Conclusion
Supplementary Material
Appendix: National Aeronautics and Space Administration Task Load Index (NASA-TLX)
Sources of Workload
Rating Scales
References
6: Implementation of Ultrasound in Anatomy Education
6.1 Introduction
6.2 History of Ultrasound
6.3 Implementation of Ultrasound into Medical Education
6.3.1 Costs
6.4 Students’ Experiences
6.4.1 Impact on Anatomical Knowledge
6.4.2 Time on Probe
6.4.3 Ratios of Students/Faculty/Ultrasound Machine
6.4.4 Interest and Motivation
6.4.5 Skills and Confidence
6.5 Two Case Studies
6.5.1 The University of Auckland
6.5.2 Brighton and Sussex Medical School
6.6 Areas of Concern
6.6.1 Incidental Findings
6.6.2 Models
6.6.3 Exposure
6.6.4 Health and Safety
6.7 Recommendations
6.8 Conclusion
Appendices
Appendix 1
The Standard Operating Procedure for Incidental Finding Used at Brighton and Sussex Medical School
Appendix 2
References
7: What the Tech? The Management of Neurological Dysfunction Through the Use of Digital Technology
7.1 Introduction
7.1.1 What Is Neurological Dysfunction?
7.1.2 Current Treatment of Disability as a Result of Neurological Dysfunction
7.1.3 Some Problems with Current Therapies
7.1.3.1 Loss of Patient Motivation
7.1.3.2 Poor Access to Physiotherapy for Patients Living in Rural Areas
7.1.4 What Digital Technology Is Out There?
7.1.4.1 Wearable Sensors
7.1.4.2 Virtual Reality
7.1.4.3 Robotics
7.1.5 What Is Telehealth?
7.1.6 Aim of the Study
7.2 Methodology
7.3 Results
7.3.1 Stroke
7.3.2 Parkinson’s Disease
7.3.3 Multiple Sclerosis
7.4 Discussion
7.4.1 Improvement of Function
7.4.1.1 Using Robotics
7.4.1.2 Using Virtual Reality (VR)
7.4.2 High Patient Acceptability, Increased Motivation, Reduced Anxiety and Social Aspects
7.4.2.1 Post-stroke Patients
7.4.2.2 Parkinson’s Disease Patients
7.4.2.3 Multiple Sclerosis Patients
7.4.2.4 Use of Exergaming
7.4.3 Accessibility Within a Home Setting
7.4.3.1 Virtual Reality
7.4.3.2 Robotics
7.4.3.3 Wearable Sensors
7.4.3.4 Mobile Phone Reporting and Databases
7.5 Methodological Issues
7.6 Conclusions
References
8: Teaching with Disruptive Technology: The Use of Augmented, Virtual, and Mixed Reality (HoloLens) for Disease Education
8.1 Modern-Day Teaching Environment
8.1.1 Choice of Technology for Teaching Anatomy and Physiology
8.1.2 Defining Modern Disruptive Technologies
8.1.3 Virtual Reality
8.1.4 Augmented Reality
8.1.5 Mixed Reality and Holograms
8.2 Using Modern Technology to Teach Disease
8.2.1 The Complexities Around Stroke Education
8.2.2 Stroke Management Through Education
8.2.3 The Complexities Around Asthma Education
8.2.4 Need for Improved Asthma Education
8.2.5 Asthma Education Programmes
8.2.6 Concluding Remarks on Novel Technologies in Education
References
9: “Inform the Head, Give Dexterity to the Hand, Familiarise the Heart”: Seeing and Using Digitised Eighteenth-Century Specimens in a Modern Medical Curriculum
9.1 Introduction
9.2 Anatomical Preparations in the Eighteenth-Century Anatomy “Curriculum”
9.3 Motivations for Digitising Historic Collections
9.4 Digital Anatomy in the Modern Curriculum
9.5 Teaching History with Digitised Collections
9.6 Conclusion
References
10: Contact-Free Pulse Signal Extraction from Human Face Videos: A Review and New Optimized Filtering Approach
10.1 Introduction
10.2 Literature Review
10.2.1 Classical Signal Processing Approaches
10.3 Deep Learning Approaches
10.3.1 A New Optimal Filtering Approach
10.3.2 Introduction
10.3.3 Proposed Method
10.3.4 Filter-Based Heart Signal Extraction
10.4 Results
10.5 Discussion
10.6 Conclusions and Future Work
References


Advances in Experimental Medicine and Biology 1317

Paul M. Rea  Editor

Biomedical Visualisation Volume 9

Advances in Experimental Medicine and Biology
Volume 1317

Series Editors
Wim E. Crusio, Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS and University of Bordeaux, Pessac Cedex, France
Haidong Dong, Departments of Urology and Immunology, Mayo Clinic, Rochester, MN, USA
Heinfried H. Radeke, Institute of Pharmacology & Toxicology, Clinic of the Goethe University Frankfurt Main, Frankfurt am Main, Hessen, Germany
Nima Rezaei, Research Center for Immunodeficiencies, Children's Medical Center, Tehran University of Medical Sciences, Tehran, Iran
Junjie Xiao, Cardiac Regeneration and Ageing Lab, Institute of Cardiovascular Sciences, School of Life Science, Shanghai University, Shanghai, China

Advances in Experimental Medicine and Biology provides a platform for scientific contributions in the main disciplines of biomedicine and the life sciences. This series publishes thematic volumes on contemporary research in the areas of microbiology, immunology, neurosciences, biochemistry, biomedical engineering, genetics, physiology, and cancer research. Covering emerging topics and techniques in basic and clinical science, it brings together clinicians and researchers from various fields. Advances in Experimental Medicine and Biology has been publishing exceptional works in the field for over 40 years and is indexed in SCOPUS, Medline (PubMed), Journal Citation Reports/Science Edition, Science Citation Index Expanded (SciSearch, Web of Science), EMBASE, BIOSIS, Reaxys, EMBiology, the Chemical Abstracts Service (CAS), and Pathway Studio. 2019 Impact Factor: 2.450. Five-Year Impact Factor: 2.324. More information about this series at http://www.springer.com/series/5584

Paul M. Rea Editor

Biomedical Visualisation Volume 9

Editor
Paul M. Rea
School of Life Sciences
University of Glasgow
Glasgow, UK

ISSN 0065-2598  ISSN 2214-8019 (electronic)
Advances in Experimental Medicine and Biology
ISBN 978-3-030-61124-8  ISBN 978-3-030-61125-5 (eBook)
https://doi.org/10.1007/978-3-030-61125-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The utilisation of technologies in the biomedical and life sciences, medicine, dentistry, surgery, veterinary medicine and surgery, and the allied health professions has grown at an exponential rate over recent years. The way we view and examine data now is significantly different to what was done in the recent past. With the growth, development and improvement of imaging and data visualisation techniques, the way we are able to interact with data is much more engaging than it has ever been. These technologies are being used to enable improved visualisation in the biomedical field, but also to engage our future generation of practitioners while they are students within our educational environment. Never before have we had such a wide range of tools and technologies available to engage our end users. Therefore, it is a perfect time to bring them together to showcase and highlight the great investigative work that is going on globally. This book will truly showcase the amazing work that our global colleagues have done through their investigations and research, ultimately to improve student and patient education, understanding and engagement. By sharing best practices and innovation, we can truly aid our global development in understanding how best to use technology for the benefit of society as a whole.

School of Life Sciences
University of Glasgow
Glasgow, UK

Paul M. Rea


Acknowledgements

I would like to truly thank every author who has contributed to the ninth edition of Biomedical Visualisation. By sharing our innovative approaches, we can truly benefit students, faculty, researchers, industry and beyond, in our quest for the best uses of technologies and computers in the fields of life sciences, medicine, the allied health professions and beyond. In doing so, we can truly improve our global engagement and understanding about best practices in the use of these technologies for everyone. Thank you! I would also like to extend a personal note of thanks to the team at Springer Nature who have helped make this possible. The team I have been working with have been so incredibly kind and supportive, and without you, this would not have been possible. Thank you kindly!


About the Book

Following on from the success of the first eight volumes, Biomedical Visualisation, Volume 9, will demonstrate the numerous options we have in using technology to enhance, support and challenge education. The chapters presented here highlight the wide use of tools, techniques and methodologies we have at our disposal in the digital age. These can be used to image the human body; educate patients, the public, faculty and students on how to use a range of cutting-edge technologies in visualising the human body and its processes; create and integrate platforms for teaching and education; visualise biological structures and pathological processes; and aid visualisation of the forensic and historical arenas.


Contents

1 Pair-Matching Digital 3D Models of Temporomandibular Fragments Using Mesh-To-Mesh Value Comparison and Implications for Commingled Human Remain Assemblages  1
Alana S. Acuff, Mara A. Karell, Konstantinos E. Spanakis, and Elena F. Kranioti

2 Forensic Recreation and Visual Representation of Greek Orthodox Church Saint Eftychios of Crete  17
Nectarios Vidakis, Markos Petousis, Despoina Nathena, Elena F. Kranioti, and Andreas Manios

3 Virtual Trauma Analysis of the Nineteenth-Century Severed Head of the Greek Outlaw Stavrou  35
Elena F. Kranioti, Nikos Tsiatis, Kristina Frandson, Maria Stefanidou, and Konstantinos Moraitis

4 Using Computed Tomography (CT) Data to Build 3D Resources for Forensic Craniofacial Identification  53
Terrie Simmons-Ehrhardt, Catyana R. S. Falsetti, and Anthony B. Falsetti

5 Instructional Design of Virtual Learning Resources for Anatomy Education  75
Nicolette S. Birbara and Nalini Pather

6 Implementation of Ultrasound in Anatomy Education  111
C. F. Smith and S. Barfoot

7 What the Tech? The Management of Neurological Dysfunction Through the Use of Digital Technology  131
Caitlin Carswell and Paul M. Rea

8 Teaching with Disruptive Technology: The Use of Augmented, Virtual, and Mixed Reality (HoloLens) for Disease Education  147
Zane Stromberga, Charlotte Phelps, Jessica Smith, and Christian Moro


9 “Inform the Head, Give Dexterity to the Hand, Familiarise the Heart”: Seeing and Using Digitised Eighteenth-Century Specimens in a Modern Medical Curriculum  163
Francis Osis

10 Contact-Free Pulse Signal Extraction from Human Face Videos: A Review and New Optimized Filtering Approach  181
Muhammad Waqar, Reyer Zwiggelaar, and Bernard Tiddeman


List of Contributors

Alana S. Acuff  University of Edinburgh, Edinburgh, UK
S. Barfoot  Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
Nicolette S. Birbara  Department of Anatomy, School of Medical Sciences, Faculty of Medicine, UNSW Sydney, Sydney, NSW, Australia
Caitlin Carswell  Anatomy Facility, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
Anthony B. Falsetti  College of Science, Forensic Science Program, George Mason University, Fairfax, VA, USA
Catyana R. S. Falsetti  School of Integrative Studies, George Mason University, Fairfax, VA, USA
Kristina Frandson  School of History, Classics and Archaeology, University of Edinburgh, Edinburgh, UK
Mara A. Karell  University of Edinburgh, Edinburgh, UK
Elena F. Kranioti  Forensic Medicine Unit, Department of Forensic Sciences, University of Crete, Heraklion, Greece
Andreas Manios  Plastic Surgery Unit, Surgical Oncology Clinic, General University Hospital of Heraklion, Heraklion, Greece
Christian Moro  Faculty of Health Sciences and Medicine, Bond University, Robina, Australia
Konstantinos Moraitis  Department of Forensic Medicine and Toxicology, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
Despoina Nathena  Forensic Medicine Unit, Department of Forensic Sciences, University of Crete, Heraklion, Greece
Francis Osis  School of History, College of Arts, University of Glasgow, Glasgow, UK
Nalini Pather  Department of Anatomy, School of Medical Sciences, Faculty of Medicine, UNSW Sydney, Sydney, NSW, Australia


Markos Petousis  Mechanical Engineering Department, Hellenic Mediterranean University, Heraklion, Greece
Charlotte Phelps  Faculty of Health Sciences and Medicine, Bond University, Robina, Australia
Paul M. Rea  School of Life Sciences, University of Glasgow, Glasgow, UK
Terrie Simmons-Ehrhardt  Department of Forensic Science, Virginia Commonwealth University, Richmond, VA, USA
C. F. Smith  Department of Anatomy, Brighton and Sussex Medical School, University of Sussex, Brighton, UK
Jessica Smith  Faculty of Health Sciences and Medicine, Bond University, Robina, Australia
Konstantinos E. Spanakis  University of Crete, Heraklion, Greece
Maria Stefanidou  Department of Forensic Medicine and Toxicology, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
Zane Stromberga  Faculty of Health Sciences and Medicine, Bond University, Robina, Australia
Bernard Tiddeman  Aberystwyth University, Aberystwyth, UK
Nikos Tsiatis  Department of Forensic Medicine and Toxicology, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
Nectarios Vidakis  Mechanical Engineering Department, Hellenic Mediterranean University, Heraklion, Greece
Muhammad Waqar  Aberystwyth University, Aberystwyth, UK
Reyer Zwiggelaar  Aberystwyth University, Aberystwyth, UK


About the Editor

Paul M. Rea is a Professor of Digital and Anatomical Education at the University of Glasgow. He is qualified with a medical degree (MBChB), an MSc (by research) in craniofacial anatomy/surgery, a PhD in neuroscience, a diploma in forensic medical science (DipFMS), and an MEd with merit (Learning and Teaching in Higher Education). He is an elected Fellow of the Royal Society for the Encouragement of Arts, Manufactures and Commerce (FRSA), an elected Fellow of the Royal Society of Biology (FRSB), a Senior Fellow of the Higher Education Academy, a professional Member of the Institute of Medical Illustrators (MIMI) and a registered medical illustrator with the Academy for Healthcare Science. Paul has published widely and presented at many national and international meetings, including invited talks. He sits on the Executive Editorial Committee for the Journal of Visual Communication in Medicine, is Associate Editor for the European Journal of Anatomy and reviews for 25 different journals or publishers. He is the Public Engagement and Outreach lead for anatomy, coordinating collaborative projects with the Glasgow Science Centre, the NHS and the Royal College of Physicians and Surgeons of Glasgow. Paul is also a STEM ambassador and has visited numerous schools to undertake outreach work. His research involves a long-standing strategic partnership with the School of Simulation and Visualisation at the Glasgow School of Art. This has led to a multimillion-pound investment in creating world-leading 3D digital datasets to be used in undergraduate and postgraduate teaching to enhance learning and assessment. This successful collaboration resulted in the creation of the world's first taught MSc in Medical Visualisation and Human Anatomy, combining anatomy and digital technologies; the course is accredited by the Institute of Medical Illustrators and has created college-wide, industry, multi-institutional and NHS research-linked projects for students. Paul is the Programme Director for this degree.


1 Pair-Matching Digital 3D Models of Temporomandibular Fragments Using Mesh-To-Mesh Value Comparison and Implications for Commingled Human Remain Assemblages

Alana S. Acuff, Mara A. Karell, Konstantinos E. Spanakis, and Elena F. Kranioti

Abstract

The mesh-to-mesh value comparison (MVC) method developed by Karell et al. (Int J Legal Med 130(5):1315–1322, 2016) facilitates the digital comparison of three-dimensional mesh geometries obtained from laser-scanned or computed tomography data of osteological materials. This method has been employed with great success in pair-matching geometries of intact skeletal antimeres, that is, left and right sides. However, as is frequently the case for archaeological materials, there are few circumstances which proffer complete skeletal remains, and fewer still when considering contexts of commingling. Prior to the present research, there existed a paucity of sorting techniques for the diverse taphonomic conditions of skeletal materials found within commingled assemblages, especially regarding fragmentary remains. The present chapter details a study in which the MVC method was adapted to encompass comparisons of isolated components of bone in lieu of entire bone geometries in order to address this dearth. Using post-mortem computed tomography data from 35 individuals, three-dimensional models of 70 mandibular fossae and 69 mandibular condyles were created and then compared using Viewbox 4 to produce numerical mesh-to-mesh values, which indicate the geometrical and spatial relationship between any two given models. An all-to-all comparison was used to determine if the MVC method, using an automated Trimmed Iterative Closest Point (TrICP) algorithm, could be utilized to (1) match corresponding bilateral pairs of condyles and fossae and (2) match same-sided articular correlates. The pair-matching of both the condyles and the fossae generally produced high sensitivity and specificity rates. However, the articulation results were much poorer and are not currently recommended.

A. S. Acuff · M. A. Karell
University of Edinburgh, Edinburgh, UK

K. E. Spanakis
University of Crete, Heraklion, Greece

E. F. Kranioti (*)
Forensic Medicine Unit, Department of Forensic Sciences, University of Crete, Heraklion, Greece

Keywords

Pair-matching · Mesh-to-mesh value comparison · MVC · 3D · CT · PMCT · TMJ · Articulation · Digital · Osteology · Forensic · Commingled remains · ICP · TrICP

1.1 Introduction

In osteological analyses of human remain assemblages, commingling often hampers the development of mortuary and individual biological profiles. The first aspect of such analyses is segregating and quantifying the incorporated elements, which often proves challenging with large assemblages. The second task, of re-associating disarticulated remains for individuation, also creates unique challenges based on the composition of the assemblage. There is a distinct need for methods which accelerate the resolution of commingled remains, whatever their origin, to the level of the individual. However, there are relatively few methods which are specific to sorting commingled assemblages, and fewer still that are usable across diverse taphonomic conditions. The present chapter details an extension of the mesh-to-mesh value comparison (MVC) method developed by Karell et al. (2016), condensed in anatomical scope, which seeks to address this issue. The temporomandibular joint was chosen as the focus of this study because, while this joint space has received some attention regarding the re-association of remains (Preissler et al. 2018), there are no reported statistical means of re-associating its osseous components to their articular matches beyond approximations of rough articular congruity. Moreover, the original study suggested that "comparison of any two objects with identical or symmetrical components" could be possible with MVC (Karell et al. 2016). The present study sought to address this claim and to test the adaptability of the MVC method to matching articular correlates to each other as well as to their bilateral pairs. The current research has, in effect, extended the purview of the MVC method to include skeletal elements which exhibit overt taphonomic alterations, provided that the aspect under consideration is uniformly delimited and consistently captured. The present research also effectively demonstrates how biomedical visualization technologies can advance techniques used for sorting commingled remains, regardless of their archaeological or forensic context.

1.1.1 Commingled Human Remain Assemblages

The phenomenon of commingling in human remain assemblages is often discussed in the fields of both archaeology and forensic anthropology. In either context, commingling is described as the mixing of skeletal elements from more than one individual within a single depositional context (Osterholtz 2018; White and Folkens 2005). However, commingling in archaeological contexts can also occur incidentally, through processes of transportation, handling, and storage, and thus irrespective of the original depositional environment of the assemblages in question (Osterholtz et al. 2014). In forensic contexts, commingling can also occur due to willful acts, such as the creation of mass graves. Every commingled assemblage is unique, and a single methodology is often inadequate for its resolution. Therefore, it is essential that methodologies are developed not only to tackle the specific requirements of each assemblage but also to facilitate their broader comparison.

1.1.2 Sorting Commingled Assemblages

Traditional osteological methods of re-associating commingled remains to individuals, a process known variably as "individualization" or "individuation" (Buikstra and Gordon 1980), are mostly based on visual assessments of bone. Frequently utilized means of sorting commingled remains include visual pair-matching of bilaterally expressed elements, matching on the basis of shared surface or textural changes (Grumbkow et al. 2012), and matching elements on the basis of rough articular congruence. In addition, sorting of commingled remains is frequently conducted via osteometric and geometric morphometric analyses (e.g., Adams and Byrd 2006; Garrido-Varas et al. 2015). These metric techniques use principles of allometry, where matches are determined out of a known pool of subjects on the basis of their relative dimensions (Graham et al. 2010). The advantages and disadvantages of all of these techniques are explored below.

Visual pair-matching has been reported to be successfully implemented in resolving cases of commingling when applied to femora, tibiae, and humeri (Adams and Königsberg 2004). The visual matching of skeletal elements on the basis of shared morphological and textural features has also been conducted with reported success regarding similarities of entheseal development on various elements (Grumbkow et al. 2012). Visual sorting of commingled elements is often predicated upon shared superficial characteristics, such as staining or other textural changes associated with taphonomic change (Adams and Byrd 2006; Wieberg and Wescott 2008; Grumbkow et al. 2012). However, most osteologists and forensic anthropologists caution against reliance upon these criteria, as they are both highly subjective and speculatively less accurate than statistically verifiable sorting techniques (Adams and Byrd 2006; Puerto et al. 2014).

Associating disarticulated remains on the basis of rough congruency between articular elements has been a staple facet of numerous commingled remain analyses and has even been described as the most reliable indicator of a match between elements (Kerley 1972; Dudar and Castillo 2016). This technique is particularly applicable to commingled assemblages with a large constituent of juvenile remains, and methodologies have been developed pertaining specifically to the re-association of separated epiphyses and their associated diaphyses (Schaefer 2014; Schaefer and Black 2007). The assessment of joint congruency via traditional osteological techniques has been lauded for its successful reunification of disarticulated remains in historic forensic cases (e.g., the Ruxton case), in which the identification of multiple deceased persons from a single gravesite was determined via the matching of elements based on rough articulation or "harmony" between joint spaces (Glaister and Brash 1937; Buikstra and Gordon 1980). However, the concept of rough articulation often fails to meet modern standards of admissibility under international evidentiary requirements, as there are often no parameters set for describing a match beyond "more or less" congruent or an articulation being of "high" or "low" accuracy (Adams and Byrd 2006). In order to achieve a positive match, or the successful individuation of remains in a forensic setting, it is required that the statistical probability of the compatibility between the skeletal elements is provided (Christensen and Crowder 2009; Buikstra and Gordon 1980).

The desire for verifiable and replicable methods for re-associating disparate articular correlates has also led to the development of criteria for assessing the morphological compatibility between two such elements. Notably, the articular compatibility of bones incorporated in the temporomandibular (Preissler et al. 2018), atlanto-occipital (Dudar and Castillo 2016; Buikstra and Gordon 1980; Briggs et al. 2008), tibio-femoral (Parkinson and Craig-Atkins 2017), and acetabulo-femoral (London and Curran 1986; London and Hunt 1998; Parkinson and Craig-Atkins 2017) articulations has been explored. However, this has been conducted largely on a case-by-case basis (e.g., Preissler et al. 2018), and a framework for comparing the results yielded by these various methods does not exist. The sole exception is, to the authors' knowledge, the methodological approach for re-associating calcanei and tali on the basis of the morphological compatibility of their corresponding articular facets developed by Anastopoulou et al. (2018). Novel methods for re-associating disparate articular correlates have also been developed which do not seek to assess articular congruency between elements. Significantly, a method proposed by Cheverko (2012) sought to reunify separated articular elements on the basis of shared patterns of osteoarthritic changes. Still, there is a need for methods which re-associate separated articular correlates that are both statistically verifiable and flexible enough to be of extended use across numerous joint surfaces.

Another approach to pair-matching commingled remains is osteometric sorting. Osteometric sorting operates by using metric thresholds, against which matches are either confirmed or negated for paired skeletal elements (Byrd and Adams 2003; Thomas et al. 2013). It is primarily used for postcranial elements, in particular long bones, though it has also been used to re-associate articulating elements, such as vertebrae (Thomas et al. 2013; Byrd and LeGarde 2013: 174; Buikstra and Gordon 1980). This approach has recently been extended to include matching components of bone in lieu of entire elements, allowing the sorting of fragmentary remains (Lynch 2018). One drawback to osteometric sorting is that an entire population must be assessed before inferences can be made regarding matches, as these are established on an inductive basis (Byrd and Adams 2003; Byrd and LeGarde 2013). Therefore, osteometric sorting is generally most useful for segregating rather than re-associating bones (Byrd and LeGarde 2013). Furthermore, the confidence of the method is proportionate to the size of the sample being assessed, and it has previously been critiqued for allowing too many false rejections (Anastopoulou et al. 2018; Vickers et al. 2015).

Geometric morphometrics is a technique which segregates elements on the basis of shared landmarks between comparable elements, subtracting the element of size from the comparison (Garrido-Varas et al. 2015). In this manner, geometric morphometrics assesses only the shape relationship between two or more bones, not their size. As size is a potentially useful discriminating factor in the sorting of commingled remains, this technique has only been explored in one article, on pair-matching metacarpals (Garrido-Varas et al. 2015). While the method was relatively successful at matching positive pairs, its main drawback is that it has no way of determining negatives, or bones that do not have a pair-match present, which is a significant issue in commingled assemblages (Garrido-Varas et al. 2015; Karell et al. 2016).
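The osteometric sorting logic described above can be made concrete with a short sketch. The code below is our own schematic of the general threshold principle, not the published Byrd and Adams (2003) model; all names and the cutoff value are illustrative assumptions.

```python
import numpy as np

def osteometric_pair_test(left, right, ref_diffs, cutoff=2.0):
    """Schematic osteometric-sorting-style exclusion test (illustrative only).

    left, right: measurement vectors (mm) for two candidate antimeres.
    ref_diffs:   (n_individuals, n_measurements) left-right differences from a
                 reference population of known pairs.
    Returns True if the pair cannot be excluded (i.e., a possible match)."""
    d = np.asarray(left, float) - np.asarray(right, float)
    mu = ref_diffs.mean(axis=0)          # expected asymmetry per measurement
    sd = ref_diffs.std(axis=0, ddof=1)   # population variability of that asymmetry
    z = (d - mu) / sd                    # standardized departure from expectation
    return bool(np.all(np.abs(z) <= cutoff))
```

The inductive character noted above is visible here: the reference statistics must be built from an assessed population before any individual pair can be tested.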

1.1.3 Biomedical Visualization and the Improvement of Segregation Techniques

The integration of biomedical visualization techniques within archaeology and forensic anthropology has forever changed the scope of osteological analysis, allowing once time-consuming techniques to be conducted rapidly, with increasingly accurate and replicable results. With the use of digital imaging technologies, such as computed tomography and laser scanning, many traditional methodologies for sorting commingled remain assemblages have been met with digital counterparts. For a recent example, osteometry has segued into the digital realm through De Simone and Hackman (2019), who tested the applicability of osteometric sorting techniques developed on dry bone to metric comparisons taken from computed tomography data. Although traditional osteological means of assessment are by no means obsolete, digital techniques are frequently favored as they are nondestructive, and the virtual data produced are of high fidelity and can ostensibly exist in perpetuity. The utility of post-mortem computed tomography (PMCT) in the identification of individuals has been explored in detail by numerous studies (e.g., Dedouit et al. 2007). Scanning of remains from individuals involved in mass fatality events is commonplace and has been incorporated into Disaster Victim Identification (DVI) protocols internationally (O'Donnell et al. 2011; Ramsthaler et al. 2010; Blau et al. 2008). While PMCT has been instrumental in the generation of numerous novel methodologies for developing the biological profile, the utility of this technology in commingled remain resolution has largely been in the automation and digitization of previously developed protocols.


1.1.4 The Mesh-To-Mesh Value Comparison (MVC) Method Mesh-to-mesh value comparison (MVC) is one such method which melds the principles which govern visual pair-matching and biomedical visualization. The mesh-to-mesh value comparison method is a pair-matching technique, an approach common to commingled remain analyses, which uses mesh-to-mesh values (MMV) as signifiers of the similarity between two digital three-dimensional (3D) surface models of bone. In this context, similarity is described as the closeness-of-fit in terms of geometric value and spatial relationship between two models determined via the Trimmed Iterative Closest Point algorithm (TrICP) (Chetverikov et  al. 2002). Thus, a mesh-to-mesh value is a numerical value which indicates the distance, or similarity, between the geometries of two comparable 3D models. The MVC method has been described using both manual and automatic comparison methods with the software Flexscan3D and Viewbox 4, respectively, and has included two methods of analysis of the resulting comparison data of MMV, via Lowest Common Value (LCV) selection and Receiver Operating Characteristic (ROC) curves (Karell et  al. 2016, 2017; Tsiminikaki et  al. 2019). The accuracy of the method regardless of comparison software or analysis type is independently measured in regard to sensitivity and specificity of the generated matches. These protocols have yielded highly accurate results across bone type and software type, with results as high as 100% sensitivity and 100% specificity (Karell et  al. 2016, 2017; Tsiminikaki et al. 2019).
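Although the studies cited above perform this comparison inside commercial packages (Flexscan3D, Viewbox), the core computation is straightforward to prototype. The sketch below is a minimal illustration rather than the authors' implementation: at each iteration only the closest fraction of point correspondences is kept (the "trimming" that gives TrICP its robustness to partial overlap, per Chetverikov et al. 2002), a rigid transform is estimated from the trimmed pairs by the standard Kabsch/SVD method, and the final mean trimmed distance is returned as a stand-in for the mesh-to-mesh value. All function and variable names are our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Kabsch/SVD: rigid (R, t) minimising ||R @ src_i + t - dst_i||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def trimmed_icp(moving, fixed, overlap=1.0, iters=50):
    """Align two (n, 3) point sets; return (aligned points, mean trimmed
    distance). The second value acts as a hypothetical 'mesh-to-mesh value':
    small for likely pair-matches, larger for mismatches."""
    tree = cKDTree(fixed)
    pts = moving.copy()
    keep = max(3, int(overlap * len(pts)))   # correspondences kept per iteration
    for _ in range(iters):
        dist, idx = tree.query(pts)          # nearest-neighbour correspondences
        order = np.argsort(dist)[:keep]      # trim to the closest fraction
        R, t = best_rigid_transform(pts[order], fixed[idx[order]])
        pts = pts @ R.T + t
    dist, _ = tree.query(pts)
    return pts, float(np.sort(dist)[:keep].mean())
```

For a pair-matching run in the spirit of Sect. 1.2, one would mirror each right-sided model, run trimmed_icp against every left-sided model, and take the resulting matrix of values forward to LCV or ROC analysis.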

1.2 Materials and Methods

1.2.1 Sample

Computed tomography (CT) scans of 35 adult, articulated skulls were used for this study. The individuals were part of a greater collection of modern human remains housed at the University Hospital of Heraklion, Crete, otherwise known as the Cretan Collection (Kranioti et al. 2008). The collection comprises 214 skeletons and was created in 2005 at the University of Crete; it constitutes one of the two reference osteological collections that currently exist in Crete. The remains are in good condition and mostly intact, and they have been used in developing population-specific standards for modern Cretans applicable in forensic cases. Specimens exhibiting overt taphonomic alteration or pathologies within the temporomandibular joint space were excluded from the sample. The CT data were captured via a Siemens Somatom Sensation 16 computed tomography scanner at the Department of Medical Imaging at the University Hospital of Heraklion, Crete, in 2013, using the following settings: the slice thickness was set to 0.75 mm during scanning, with a data window of 512 × 512. Data were saved as Digital Imaging and Communications in Medicine (DICOM) files. A detailed scanning protocol can be found in Osipov et al. (2013).
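Segmentation and model building in this study were performed in Amira (Sects. 1.2.2 and 1.2.3), but the path from DICOM slices to a surface mesh can be sketched with open-source tools. The fragment below is an illustrative stand-in, not the authors' protocol: it stacks a DICOM series into a volume with pydicom, applies a global threshold at half the maximum intensity (echoing the "half maximum height" criterion from Spoor et al. 1993 cited in Sect. 1.2.3), and extracts a triangulated surface with marching cubes. The directory path and level choice are assumptions.

```python
import glob
import numpy as np
import pydicom
from skimage import measure

def dicom_to_mesh(dicom_dir):
    """Stack a CT series, threshold at half-maximum, return (vertices, faces)."""
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order along z
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # (Hounsfield rescaling via RescaleSlope/Intercept is omitted for brevity.)

    level = volume.max() / 2.0          # 'half maximum height' style threshold
    # Marching cubes yields a surface mesh at the chosen iso-level; spacing
    # converts voxel indices to millimetres using the scan geometry.
    dz = (float(slices[1].ImagePositionPatient[2])
          - float(slices[0].ImagePositionPatient[2]))
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    verts, faces, _, _ = measure.marching_cubes(volume, level=level,
                                                spacing=(dz, dy, dx))
    return verts, faces
```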

1.2.2 Workflow of the Method

The specific methods used for the study are discussed in depth in the sections below. However, for ease of understanding the whole process, including which software was used at each step and how the steps connect, see Fig. 1.1. The diagram starts after CT scanning, with segmentation and 3D model building in Amira, and progresses to the final statistical analysis with ROC curves in MedCalc.

Fig. 1.1 Analytical workflow pursued by the present study

1.2.3 Segmentation and 3D Model Building

Manual segmentation of the articular elements of the temporomandibular joint was conducted in Amira (5.2.2) for each of the 35 individuals. A total of 139 three-dimensional models were created; four models (i.e., one left mandibular fossa, one right mandibular fossa, one left mandibular condyle, and one right mandibular condyle) were created for each of the individuals within the sample, with the exception of one individual, for whom only three models could be created due to the absence of this individual's left mandibular condyle. While the coronal (XZ) and transverse (XY) planes were instrumental in delimiting the areas of consideration and visualizing the rendered label data, all segmentation was conducted on the sagittal (YZ) plane, as this offered the best (i.e., most complete) view in which to conceptualize the anatomical elements under consideration. Segmentation was restricted to a single plane in order to establish the most consistent results and was conducted at 6:1 zoom.

For the purposes of the current research, the articular surfaces of interest pertaining to the mandibles and temporals were defined and delimited on the basis of the relative anatomical expression of the described features for each individual. Regarding the mandibular fossae: the posterio-lateral maximum was delimited at the postglenoid tubercle or the squamotympanic and petrotympanic fissures; the former feature is known to be prevalent in modern human populations and was observed in the majority of the sample population (Katsavrias and Dibbets 2002). The anterio-lateral maximum was set at the articular tubercle of the zygomatic process. The medial border was designated to include the entire curvature of the mandibular fossa. The capture of the relative anatomical features of this region was conducted via the segmentation editor in Amira using the "brush" tool at a size of one voxel; the researcher traced the area defined above with the issuant width of the highlight space, not in excess of five voxels. Regarding the mandibular condyle: the entire geometry of the condylar process was included, covering not only the area that directly constitutes the articular surface of the mandibular condyle but also extending onto the anatomical neck of this feature. The extent of this feature was defined anterio-inferiorly at the mandibular notch, and the inferio-medial delimitation was constructed to include the pterygoid fovea. The inclusion of the described osteological features was in light of the anatomical properties and architecture of the soft tissues involved in the temporomandibular joint. The relevant geometry was captured using a combination of both the "magic wand" tool and the previously mentioned "brush" tool (Fig. 1.2). In cases where crania exhibited direct contact between the articular surfaces of the mandibles and temporals, the surfaces were delineated based on the author's anatomical expertise using the brush tool. Otherwise, all thresholds for visualization and segmentation were set at half maximum height for each dataset, as described by Spoor et al. (1993). All three-dimensional surface meshes, or models, were built using constrained smoothing for the appropriate model shape (Fig. 1.3) and saved as Wavefront OBJ [.obj] files. Examples of the generated models can be seen in Figs. 1.4 and 1.5.

Fig. 1.2 An example of the segmentation process in Amira (5.2.2)

1.2.4 Cropping

All left and right mandibular condyle and fossa portions were cropped into individual models in Flexscan3D before being mirror imaged. This resulted in a total of 139 individual models: 35 left mandibular fossae, 35 right mandibular fossae, 34 left mandibular condyles, and 35 right mandibular condyles.

1.2.5 Mirror Imaging

All right-sided elements were then mirrored in the software Netfabb (Autodesk) and subsequently imported back into FlexScan3D for alignment/pre-registration.

1.2.6 Alignment/Pre-registration

All models were aligned, or pre-registered, in Flexscan3D using the "Fine Alignment" feature, in order to reduce computational time and to ensure that the correct features were compared directly by the subsequent TrICP algorithm. To do this, the models were divided so that each of the ensuing Flexscan3D files contained data relevant solely to either a left or a right mandibular condyle; all left condyles and all right condyles were then grouped within the same project, and the "Alignment" tool, calibrated to the "Fine Alignment" setting, was used to rectify all similar meshes to the same coordinates. The point of alignment for all meshes was arbitrarily designated as the original geospatial context of the left mandibular condyle of Kranio 5 (see Fig. 1.3). This process was replicated with the meshes pertaining to paired mandibular fossae. The convergence of all meshes to a uniform point of coordinates assured diminished variation in the issuant mesh-to-mesh values generated at a later stage in the analysis by decreasing the distance between them. For the mandibular condyles, the estimated overlap for the generated surface meshes was 100%, as the automated "Fine Alignment" setting was able to superimpose all condyles accurately with a single press of the button. However, for the majority of the fossa models, it was necessary to manually superimpose the meshes using the mouse and then apply the "Selected Geometry Alignment" before the "Fine Alignment" configuration could be used. This is likely due to the complex geometry of the fossa models and the particularly rough and irregular texture of the superior surface (i.e., relating to the internal table of the temporal cortex) of the main concavity of this element.
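The mirroring step of Sect. 1.2.5 is worth making concrete, since pair-matching antimeres depends on it: a right-sided mesh reflected across a plane becomes directly comparable to its left-sided counterpart. The sketch below is a minimal, assumed equivalent of the Netfabb operation, not its actual implementation; note that reflecting the vertices inverts the surface orientation, so the face winding must also be reversed.

```python
import numpy as np

def mirror_mesh(vertices, faces, axis=0):
    """Reflect a triangle mesh across a coordinate plane (default: x = 0).

    vertices: (n, 3) float array; faces: (m, 3) int array of vertex indices.
    """
    mirrored = vertices.copy()
    mirrored[:, axis] *= -1.0          # reflect coordinates across the plane
    flipped = faces[:, ::-1].copy()    # reverse winding so normals stay outward
    return mirrored, flipped
```

A mirrored right condyle can then be fed, together with a candidate left condyle, into a registration routine such as the trimmed_icp sketch given earlier.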

Fig. 1.3 Superimposition of 3D surface data pertaining to the left mandibular condyle (in green) and mandibular fossa (in purple) of Kranio 38 in Amira (5.2.2)

Fig. 1.4  Example of a model of a left condyle generated with Amira 6.5


Fig. 1.5  Example of a model of a left fossa generated with Amira 6.5




Fig. 1.6  An example of the final alignment position for all right-sided mandibular condyle models in FlexScan3D. Each color represents a different mandibular condyle

An example of this process, in which all of the mandibular condyles have been aligned/pre-registered, can be seen in Fig. 1.6; each color represents an individual mandibular condyle. These aligned models were then used for the subsequent comparison process in Viewbox.

1.2.7 Viewbox Software

Viewbox is an advanced software package for cephalometric analysis that can be customized to perform data acquisition and analysis of measurements from 2D and 3D radiographs and images. It was customized to perform the analysis in the original publication by Karell et al. (2016), and it was further used for mesh-to-mesh comparisons in Karell et al. (2017) and Tsiminikaki et al. (2019). In the current study, all models were compared against all models in Viewbox 4 (beta) using the following parameters:

1. For rough alignment, the nearest neighbor search was set to "Approximate (fast)" with point sampling at 1%, and matching set to "Point to Point."
2. For final alignment, the nearest neighbor search was set to "Exact with normal compatibility" with point sampling at 100%, and matching set to "Point to Plane."
3. The estimated overlap of the models was set at 100%.
4. The initial number of starting positions for comparison was set at 20, following the guidance in the literature regarding ICP algorithms (e.g., Besl and McKay 1992).

The estimated time of researcher interaction was no more than 5 min. The total run time for this comparison was approximately 26 h for a total of 43,472 individual comparisons, or mesh-to-mesh values (MMV), generated. This is approximately one comparison every 2 seconds, which is extremely quick and demonstrates the benefits of comparing smaller portions of bone. An Excel spreadsheet of the resultant mesh-to-mesh values for each comparison was issued at the termination of the comparison.

Figure 1.7a illustrates two superimposed fossa models after alignment in Viewbox, and Fig. 1.7b illustrates a color map of the actual distances between the two meshes applied as texture to one of the meshes. It is evident that there are large distances in the center of the fossa, gradually increasing in the areas of red color. This is a comparison between two meshes that do not belong to the same individual, as one of them exhibits increased porosity in the center of the fossa, probably due to old age or pathology.

Fig. 1.7 (a) Two superimposed fossa models after alignment in Viewbox. (b) Color map of the actual distances between the two meshes applied as texture to the reference model
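The all-against-all design is easy to reproduce around any comparison routine that returns a single dissimilarity value. The sketch below is an assumed reconstruction of that loop, not the Viewbox workflow itself: it builds the same kind of rectangular MMV table from two sets of models (for example, mirrored right condyles against left condyles) and writes it out as a CSV standing in for the Excel spreadsheet. The function and variable names are our own.

```python
import csv
import numpy as np

def all_to_all_mmv(left_models, right_models, compare):
    """left_models / right_models: dicts mapping specimen ID -> (n, 3) points.
    compare: function returning one dissimilarity value per model pair,
    e.g. compare = lambda a, b: trimmed_icp(a, b)[1] from the earlier sketch."""
    left_ids, right_ids = sorted(left_models), sorted(right_models)
    mmv = np.array([[compare(right_models[r], left_models[l]) for l in left_ids]
                    for r in right_ids])
    return mmv, right_ids, left_ids

def write_mmv_csv(path, mmv, row_ids, col_ids):
    """Persist the comparison matrix, one row per right-sided model."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow([""] + list(col_ids))
        for rid, row in zip(row_ids, mmv):
            writer.writerow([rid] + list(row))
```

With 70 fossa and 69 condyle models compared in every eligible combination, a matrix of this kind reaches the tens of thousands of values reported above.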

1.2.8 Statistical Analyses

Following the comparison in Viewbox, the mesh-to-mesh values (MMV) were evaluated via two processes: Lowest Common Value (LCV) selection and ROC curve analysis. The LCV selection was conducted via the following steps in Excel: (1) separate the aspects under consideration along the two axes of the spreadsheet; (2) mark the true matches of each comparison; (3) determine the three lowest values across and the three lowest values down for each row and column; (4) discern for each comparison the agreed-upon lowest value; (5) describe each corresponding match as either a true positive, true negative, false positive, or false negative; and (6) use counts of MMV that fall within each of these categories to determine the sensitivity and specificity of each comparison. Sensitivity is determined from the number of true positives divided by the combined number of true positives and false negatives. Likewise, specificity is calculated from the number of true negatives divided by the combined number of true negatives and false positives.

For the ROC curve analysis, instead of selecting a lowest common value, matches are determined using a cut-off or threshold value, much like osteometric sorting. If two bones' MMV is below the threshold, it is considered a match; if it is above, it is not a match. This threshold value is calculated dynamically, where the statistical software plots the sensitivity of the results against the specificity of the results over a continuous relational curve. To understand the statistical significance of the ROC curves generated, the Area Under the Curve (AUC) for each ROC curve was assessed. The closer an AUC is to 1, the more effective the test. An AUC of 0.5 or below indicates that random chance is equally as effective as, or better than, the comparison method being tested. The ROC curves were calculated in MedCalc (19.0.7) based on the mesh-to-mesh values using the methodology outlined by DeLong et al. (1988).
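As a compact illustration of the decision rules just described, the following sketch scores an MMV matrix in both ways: a simplified mutual-lowest-value selection standing in for the Excel LCV procedure, and a threshold sweep summarized by the AUC. It is our own schematic (the study used Excel and MedCalc), and mutual_lowest_matches deliberately simplifies the published three-lowest-values step while keeping its spirit, namely agreement between row-wise and column-wise minima.

```python
import numpy as np

def mutual_lowest_matches(mmv):
    """Simplified LCV-style selection: propose (row, col) as a pair-match when
    each model is the other's lowest mesh-to-mesh value."""
    row_best = mmv.argmin(axis=1)
    col_best = mmv.argmin(axis=0)
    return [(r, c) for r, c in enumerate(row_best) if col_best[c] == r]

def sensitivity_specificity(tp, tn, fp, fn):
    """The definitions used in Sect. 1.2.8."""
    return tp / (tp + fn), tn / (tn + fp)

def auc_from_mmv(mmv, true_pairs):
    """Threshold sweep over all MMV (lower value = predicted match), with the
    AUC computed as the rank-based Mann-Whitney statistic (ties ignored)."""
    labels = np.zeros_like(mmv, dtype=bool)
    for r, c in true_pairs:
        labels[r, c] = True
    scores = -mmv.ravel()              # negate so larger score = more match-like
    y = labels.ravel()
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = y.sum(), (~y).sum()
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```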



1.3 Results

1.3.1 Results of Comparisons for Pair-Matching

1.3.1.1 Lowest Common Value (LCV) Selection
The LCV selection for pair-matching mandibular condyles was 91.17% sensitive and 100% specific, yielding 62 true positive, 1 true negative, 0 false positive, and 6 false negative selections (Table 1.1).

Table 1.1 Summary of the results of LCV and ROC analysis for pmMC, pmMF, LACm, and RACm

      | LCV Sensitivity | LCV Specificity | ROC Sensitivity | ROC Specificity | Threshold (mm) | AUC
pmMC  | 91.17%          | 100%            | 100%            | 94.4%           | 0.574          | 0.993
pmMF  | 88%             | 0%              | 85.7%           | 97.5%           | 0.529          | 0.958
LACm  | 0%              | 0%              | 38%             | 49%             | 1.52           | 0.504*
RACm  | 0%              | 0%              | 83%             | 31%             | 1.6            | 0.503*

* Not statistically significant

The LCV analysis for pair-matching mandibular fossae was 88.58% sensitive but 0% specific, yielding 62 true positive, 0 true negative, 0 false positive, and 8 false negative selections (Table 1.1). The lack of specificity is due in part to the lack of true negatives in the sample. These results illustrate, however, that this selection method is suitable for matching bilateral pairs of isolated mandibular condyles and mandibular fossae.

1.3.1.2 Receiver Operating Characteristic (ROC) Analysis
For the pair-matching analysis of mandibular fossae using ROC analysis, the optimal sensitivity and specificity were 85.7% and 97.5%, respectively, with an AUC of 0.958 (Table 1.1, Fig. 1.6). The corresponding threshold value was 0.529 mm. Pair-matching for mandibular condyles resulted in 100% sensitivity and 94.4% specificity, with an AUC of 0.993 (Table 1.1, Fig. 1.6). The corresponding threshold value was 0.574 mm. The p-values for pair-matching mandibular fossae (pmMF) and pair-matching mandibular condyles (pmMC) were both found to be significant.

At more than 1 m distance, there is nothing to indicate increasing range until the bullet reaches its maximum range (Saukko and Knight 2016). Radiating and concentric fractures normally appear as a result of high-velocity ballistic impacts (Berryman and Symes 1998; Ross 1996; Spitz 2006). The lack of radiating fractures has been observed experimentally in a 0.22 LR handgun shot from 30 cm (Taylor and Kranioti 2018) and in a contact shot from a 0.22 long rifle (Kranioti et al., in prep), although in the second case the bullet did not exit the head proxy.


3.3.3 Ballistic Trauma Interpretation Virtual analysis revealed two clear entry wounds to the left parietal bone consistent with ballistic injuries. Diameters of wounds 1 and 2 are approximately 6.46 mm and 6.73 mm, respectively, suggesting that both were caused by a small round projectile or a single-shot rifle. Both these rounds are quite small, and it is very unlikely that they caused the larger diameter entry wounds in the case examined here. Table  3.2 combines data from forensic cases where the firearm was confirmed with experimental data using synthetic spheres filled with ballistic gelatin as head proxies. It is evident that bone entry diameter can be much smaller than the calibre size. Gunshot wound diameters are very close to the Mannlicher-Schonauer M1903 rounds but larger diameters 7.62–7.65 mm cannot be safely excluded. Full-metal jacketed pistols will probably have caused an entry wound with diameter closer to the diameter of the bullet due to the jacket resisting deformation. The directions of the two shots are from the left side and the back towards the front and right sides where the exits are crossed causing a larger defect. The direction of the shots from the back and left sides does not point to a fair fight because in that case the injuries would have inflicted in the frontal area of the body and head. Thus, either Stavrou was ambushed by members of the squad, while he was engaged by the rest, or he was captured and executed.

Table 3.2  Relationship between calibre and diameter on gunshot wounds to the head and head proxies in relation to calibre dimensions Calibre 0.22 0.22 0.25 0.32 0.38 9 × 19 mm 0.4 0.45 Ross (1996) Taylor and Kranioti (2018)

a

b

Calibre in mm 5.588 5.588 6.35 8.128 9.652 9 10.16 11.43

Minimum diameter (mm)a 5.6

Diameter (mm)b 6

6 6.6 8.7

9.5–10 10 12 13

E. F. Kranioti et al.

48

There was no evidence of soot or gunpowder in either of the two entry wounds (see Figs. 3.7 and 3.8) which suggests a distance larger than 45  cm. The severed head of Stavrou, however, may have been preserved first with alcohol or rock salt and later with formalin solution to avoid decomposition. This may have affected any evidence of gun powder or soot on the skin. The exact circumstances regarding the distance, the body position, and the rifled weapon and round that led to his death cannot be assessed definitely with the virtual examination of the remains. A series of experimental simulations similar to previous work by our research group (Taylor and Kranioti 2018) and others (Carr et al. 2015; Thali et al. 2002) would allow for ballistic testing of firearms of that period and may shed more light into Stavrou’s violent death. Skin-­ brain-­skull models can be shot at different distances with a variety of small riffled firearms, and the resulting injuries can be compared with the actual trauma on Stavrou’s head.

3.3.4 B  enefits of Virtual Forensic Reconstruction of Stavrou’s Death Three-dimensional virtual reconstruction of the severed head allows for a complete and detailed visualisation of the cranial injuries sustained, their size, and direction, which is extremely more informative than the macroscopic appearance of the head under display. The endocranial surface was observed in detail using the various virtual tools available in Amira software and different thresholds allowed the inspection of different anatomical details. All models produced from this work are available to the Criminology Museum curators for further use. This is the first of five severed heads that were scanned similarly, so that can be part of a larger study. Taking into account the emerging issues regarding exhibition and handling of human remains, this project can offer an alternative way of exhibition based on virtual material for historical and teaching purposes to different locations and audiences con-

tributing to the dissemination of Greece’s Post-Ottoman ethnographic history. Three-dimensional models of the severed head can be used to create holograms, animations, and virtual reality stories as in the outreach project “Polyphonic Murders” (Kranioti et  al. 2020). The aforementioned project mixes medical imaging and forensic pathology with animation, holography, and virtual reality to present to a wide audience the stories of historical victims of homicides emerging from an international archaeological archive. Animations of five exemplary cases can be found online in http://www. holoxica.com/forensics/ and it is the authors’ ambition that the case of Stavrou becomes one of the virtual stories represented in this project. Moreover, the emerging details on Stavrou’s death can inspire new documentaries on Greek Banditry or movies such as the 1959 film “Lygkos, the chief Bandit” or the 1960 film “Tsakitzis, the protector of the poor” that became very popular amongst rural communities in the 1960s. Similar studies of the remaining outlaws of the Criminology Museum in combination with a detailed study of ethnographic archives will offer an insight into the real circumstances of the bandits’ deaths as opposed to the widespread rural tales. Last, this new information can be an inspiration for other forms of art such as the Greek comic production based on the stories of Greek bandits called “Brigands”. In the later, many confirmed stories of famous bandits such as Kardaras and Giagoulas are presented in a colourful comic book written in English.

3.4 Conclusions

The use of virtual methods is becoming invaluable for the accurate reconstruction of fatal traumatic events. Examination of the CT scans and virtual 3D reconstructions revealed two typical entry wounds from a small round fired by a rifled weapon, entering from the back and left of the head and exiting from an extended area of the right orbit and right parietal region, with extension of the fractures to the frontal bone.


The entry wounds on the skin lacked soot and gunpowder deposition, which implies a firing distance greater than 45 cm, assuming that no such evidence was destroyed by the head's preservation media. The most probable weapon among those available to the Greek Gendarmerie at the time was the Mannlicher-Schonauer M1903, but larger rounds cannot be safely excluded. In addition, since the ex-bandit Efthymios Regzas was part of the squad that chased after Stavrou, one cannot exclude the possibility that he was carrying an illegal firearm not registered in the Greek Gendarmerie's official archives. The direction of the shots, coming from the back and left, suggests that Stavrou was most likely ambushed by members of the squad while he was engaged by the rest, and not shot in a fair fight. This is in line with the historical evidence regarding this debate. On the other hand, the possibility that he was captured alive and executed cannot be discarded. This work would be excellent supplementary material to the actual human exhibit for the accurate presentation of Stavrou's history at the Criminology Museum. In addition, it would allow the virtual exhibition of the material for historical and teaching purposes in museums and universities in Greece and around the globe, thus overcoming the obstacles of moving the actual remains.

Acknowledgements  The authors would like to express their sincere appreciation and thanks to the personnel of the Radiology Department of Tzaneio General Hospital of Piraeus for their cooperation. We also thank Ms Natalia Tsourma for her assistance in researching the historical newspapers of that period.

Authors' Contribution  Kranioti Elena: concept/design, data analysis/interpretation, discussion of the results, drafting of the manuscript. Tsiatis Nikolaos: ballistic information on historical firearms, discussion of the results, and review of the manuscript. Kristina Fredson: literature review, 3D modelling, discussion of the results, and review of the final manuscript. Stefanidou: historical information, data interpretation of the results, drafting of the manuscript.


Moraitis Konstantinos: concept/design, acquisition of CT scan data, photographic documentation of the exhibit, interpretation of the results, and critical review.

Compliance with Ethical Standards

Disclosure of Potential Conflicts of Interest  The authors declare that they have no conflict of interest.

Informed Consent  Informed consent is not applicable to this study.

Funding  No funding was received for this work.

References

Albrecht A (1910) Cesare Lombroso. J Crim Law Criminol 1(2):74–83
Ampanozi G, Ruder T, Ebert L, Thali M, Flach P (2013) Gunshots to the head: characteristics on postmortem CT. J Forensic Radiol Imaging 1(2):77. https://doi.org/10.1016/j.jofri.2013.03.027
Ampanozi G, Halbheer D, Ebert LC, Thali MJ, Held U (2020) Postmortem imaging findings and cause of death determination compared with autopsy: a systematic review of diagnostic test accuracy and meta-analysis. Int J Legal Med 134(1):321–337
Andenmatten MA, Thali MJ, Kneubuehl BP, Oesterhelweg L, Ross S, Spendlove D et al (2008) Gunshot injuries detected by post-mortem multislice computed tomography (MSCT): a feasibility study. Legal Med 10(6):287–292
Berens S, Ketterer T, Kneubuehl BP, Thali MJ, Ross S, Bolliger SA (2011) A case of homicidal intraoral gunshot and review of the literature. Forensic Sci Med Pathol 7(2):209–212
Berryman HE, Symes SA (1998) Recognizing gunshot and blunt cranial trauma through fracture interpretation. In: Reichs K (ed) Forensic osteology: advances in the identification of human remains, 1st edn. Charles C Thomas, Springfield, pp 333–352
Boer L, Radziun AB, Oostra RJ (2017) Frederik Ruysch (1638–1731): historical perspective and contemporary analysis of his teratological legacy. Am J Med Genet Part A 173(1):16–41
Bolliger SA, Ampanozi G, Kneubuehl BP, Thali MJ (2014) Gunshot to the pelvis – experimental ballistics and forensic radiology. J Forensic Radiol Imaging 2(1):17–19. https://doi.org/10.1016/j.jofri.2013.12.001
Brogdon BG (1998) Forensic radiology. CRC Press, Boca Raton
Carr D, Lindstrom AC, Jareborg A, Champion S, Waddell N, Miller D et al (2015) Development of a skull/brain model for military wound ballistics studies. Int J Legal Med 129(3):505–510
Christensen AM, Hatch GM, Brogdon BG (2016) A current perspective on forensic radiology. J Forensic Radiol Imaging 2(3):111–113. https://doi.org/10.1016/j.jofri.2014.05.001
DiMaio VJ (1999) Gunshot wounds: practical aspects of firearms, ballistics, and forensic techniques. CRC Press, Boca Raton
DiMaio VJ, DiMaio D (2001) Forensic pathology, 2nd edn. CRC Press, Boca Raton
Dirnhofer R, Jackowski C, Vock P, Potter K, Thali MJ (2006) VIRTOPSY: minimally invasive, imaging-guided virtual autopsy. Radiographics 26(5):1305–1333
Donchin Y, Rivkind AI, Bar-Ziv J, Hiss J, Almog J, Drescher M (1994) Utility of postmortem computed tomography in trauma victims. J Trauma 37(4):552–555
Farkash U, Scope A, Lynn M, Kugel C, Maor R, Abargel A, Eldad A (2000) Preliminary experience with postmortem computed tomography in military penetrating trauma. J Trauma 48(2):303–308
Flach PM, Ampanozi G, Germerott T, Ross SG, Krauskopf A, Thali MJ et al (2013) Shot sequence detection aided by postmortem computed tomography in a case of homicide. J Forensic Radiol Imaging 1(2):68–72. https://doi.org/10.1016/j.jofri.2013.03.045
Harris LS (1991) Postmortem magnetic resonance images of the injured brain: effective evidence in the courtroom. Forensic Sci Int 50(2):179–185
Hart BL, Dudley MH, Zumwalt RE (1996) Postmortem cranial MRI and autopsy correlation in suspected child abuse. Am J Forensic Med Pathol 17(3):217–224
Heard BJ (2008) Handbook of firearms and ballistics: examining and interpreting forensic evidence, 2nd edn
İşcan MY, Steyn M (2013) The human skeleton in forensic medicine, 3rd edn. Charles C Thomas, Springfield
Karamanou A, Stefanidou M (2015) The Greek bandit Fotios Giagoulas: an introduction to his mummified head and future conservation aims. Papers Anthropol 23(1):37–56
Koliopoulos JS (1987) Brigands with a cause: brigandage and irredentism in modern Greece, 1821–1912. Clarendon Press, Oxford
Kranioti EF, Nathena D, Spanakis K, Bouhaidar R, McLaughlin S, Papadomanolakis A et al (2017) Postmortem CT in the investigation of decomposed human remains: advantages and limitations. La Rev Médecine Légale 8(4):184–185. http://www.sciencedirect.com/science/article/pii/S1878652917300901
Kranioti EF, Girdwood L-K, Garcia-Donas JG, Wallace J, Boyle A, Papadopoulos A, Reeve I, Bonicelli A, Coskun G, Karell MA (2020) Polyphonic murders: a holographic biography of trauma. Transl Res Anat 21:100085. https://doi.org/10.1016/j.tria.2020.100085
Michalodimitrakis M (2001) Medicolegal investigation of death, 2nd edn. Paschalidis Medical Publications, Athens
Moraitis K, Athanaselis S, Spiliopoulou C, Stefanidou M (2015) The Criminology Museum at the University of Athens. In: Mouliou M, Soubiran S, Talas S, Wittje R (eds) Turning inside out European university heritage: collections, audiences, stakeholders. National and Kapodistrian University of Athens Press, Athens, pp 1–11
Moritz A (1954) Pathology of trauma, 2nd edn. Lea & Febiger, Philadelphia
Nalli NR (2018) Gunshot-wound dynamics model for John F. Kennedy assassination. Heliyon 4(4):1–41. https://doi.org/10.1016/j.heliyon.2018.e00603
Oehmichen M, Gehl HB, Meissner C, Petersen D, Höche W, Gerling I, König HG (2003) Forensic pathological aspects of postmortem imaging of gunshot injury to the head: documentation and biometric data. Acta Neuropathol (Berl) 105(6):570–580
Oliver WR, Chancellor AS, Soltys M, Symon J, Cullip T, Rosenman J, Hellman R, Boxwala A, Gormley W (1995) Three-dimensional reconstruction of a bullet path: validation by computed radiography. J Forensic Sci 40(2):321–324
Pappas NCJ (2018) Brigands and brigadiers: the problem of banditry and the military in nineteenth-century Greece. Athens J Hist 4(3):175–196
Precht BLC, Fontes EB, Babinski MA, de Paula RC (2014) Frederik Ruysch (1638–1731): life and lessons from a memorable anatomist. Eur J Anat 18(3):205–208
Ross AH (1996) Caliber estimation from cranial entrance defect measurements. J Forensic Sci 41(4):629–633
Saukko P, Knight B (2016) Knight's forensic pathology, 4th edn. CRC Press, London
Sazanidis C (1995) The arms of the Hellenes: a historical survey of the small arms of the Hellenic armed forces, the security forces and guerilla bands (1821–1992). Menandros, Thessaloniki
Spitz W (2006) Injury by gunfire. In: Spitz WS (ed) Spitz and Fisher's medicolegal investigation of death: guidelines for the application of pathology to crime investigation, 4th edn. Charles C Thomas, Springfield, pp 607–747
Taylor CR (2010) From anatomy to surgery to pathology: eighteenth century London and the Hunterian schools. Virchows Arch 457(4):405–414
Taylor SC, Kranioti EF (2018) Cranial trauma in handgun executions: experimental data using polyurethane proxies. Forensic Sci Int 282:157–167. https://doi.org/10.1016/j.forsciint.2017.11.032
Thali MJ, Kneubuehl BP, Zollinger U, Dirnhofer R (2002) A study of the morphology of gunshot entrance wounds, in connection with their dynamic creation, utilizing the "skin-skull-brain model". Forensic Sci Int 125(2–3):190–194
Thali MJ, Viner MD, Brogdon BJ (eds) (2010) Brogdon's forensic radiology, 2nd edn. CRC Press, Boca Raton
Tzanakaris V (2002) The best lads are killed by the hand of their comrades. Kastaniotis, Athens (in Greek)
Tzanakaris V (2013) Fotis Giagulas: the undead and other brigand stories. Metaixmio, Athens (in Greek)
Tzanakaris V (2016) The brigands: the best lads are killed by the hand of their comrades. Metaixmio, Athens (in Greek)
von Hagens G (2010) Body worlds

4 Using Computed Tomography (CT) Data to Build 3D Resources for Forensic Craniofacial Identification

Terrie Simmons-Ehrhardt, Catyana R. S. Falsetti, and Anthony B. Falsetti

T. Simmons-Ehrhardt (*)
Department of Forensic Science, Virginia Commonwealth University, Richmond, VA, USA
e-mail: [email protected]

C. R. S. Falsetti
School of Integrative Studies, George Mason University, Fairfax, VA, USA
e-mail: [email protected]

A. B. Falsetti
College of Science, Forensic Science Program, George Mason University, Fairfax, VA, USA
e-mail: [email protected]

Abstract

Forensic craniofacial identification encompasses the practices of forensic facial approximation (aka facial reconstruction) and craniofacial superimposition within the field of forensic art in the United States. Training in forensic facial approximation methods has historically used plaster copies, high-cost commercially molded skulls, and photographs. Despite the increased accessibility of computed tomography (CT) and the numerous studies utilizing CT data to better inform facial approximation methods, 3D CT data have not yet been widely used to produce interactive resources or reference catalogs aimed at forensic art practitioner use or method standardization. There are many free, open-source 3D software packages that allow engagement in immersive studies of the relationships between the craniofacial skeleton and facial features and facilitate collaboration between researchers and practitioners. 3D CT software, in particular, allows the bone and soft tissue to be visualized simultaneously with tools such as transparency, clipping, and volume rendering of underlying tissues, allowing for more accurate analyses of bone to soft tissue relationships. Analyses and visualization of 3D CT data can not only facilitate basic research into facial variation and anatomical relationships relevant for reconstructions but can also lead to improved facial reconstruction guidelines. Further, skull and face surface models exported in digital 3D formats allow for 3D printing of custom reference models and novel training materials and modalities for practitioners. This chapter outlines the 3D resources that can be built from CT data for forensic craniofacial identification methods, including how to view 3D craniofacial CT data and modify surface models for 3D printing.

Keywords

Craniofacial identification · 3D modeling · 3D printing · Facial approximation · Craniofacial superimposition · Computed tomography


4.1 Background

Forensic craniofacial identification encompasses the practices of forensic facial approximation (aka forensic facial reconstruction) and craniofacial superimposition. Forensic facial approximation has been used in the United States as a means of attracting attention to unidentified person cases since the 1960s, when artist Betty Pat Gatliff and forensic anthropologist Dr. Clyde Snow started collaborating (Gatliff and Snow 1979). The process involves the creation of a three-dimensional (3D) sculpture, based on 21 soft tissue depth points, from unidentified human skeletal remains for the purpose of publicly releasing a facial image for potential recognition by someone who knew the decedent. Craniofacial superimposition involves a comparison of an unidentified skull or cranium to a facial photo of a known individual (Aulsebrook et al. 1995; Ubelaker et al. 2019). This process can be applied when there is a suspected identity and therefore one or more facial photos that can be superimposed onto images of the skull to see whether the facial features line up with anatomical markers. Although these methods are not considered forms of "positive identification," they can be useful tools for gathering the data needed to achieve one (i.e., DNA or fingerprints).

In the United States, forensic facial approximations and craniofacial superimpositions may be produced by either a forensic artist or a forensic anthropologist. If the forensic artist generates the image, they generally consult with a forensic anthropologist to acquire biodemographic information about the unknown individual. Forensic facial approximation methods include using vellum or Photoshop® to sketch over skull photographs, sculpting with clay over the actual skull or a physical mold of the skull, or digitally sculpting onto a 3D scan of the skull. Working directly on the skull or a cast often obscures the relationship between the skull and soft tissue and makes it difficult to perform accurate changes. Reference data for facial approximations consist primarily of facial tissue depth tables listing descriptive statistics for usually fewer than 30 sparse points on the face. The standard reference in the United States by Taylor (2001) compiles American-based tissue depth tables from Rhine (Rhine and Campbell 1980; Rhine and Moore 1984) and lists and illustrates guidelines and methods outlined by Krogman (1962) and Krogman and Iscan (1986), artist observations, and rules of thumb. Much of the research relevant to forensic facial approximation methods has involved the collection of tissue depths for different global populations and assessments of the validity of the measurements and relationships between soft and hard tissue described by Krogman (1962). There has been an attempt to make sparse-point tissue depth data available to practitioners through a public website (www.craniofacialidentification.com) with consolidated depth tables for adults and subadults (Stephan and Simpson 2008a, b; Stephan 2017). This is necessary because other published findings in facial approximation research remain essentially inaccessible to practitioners without an academic/institutional affiliation. Outside the United States, many craniofacial identification researchers are also forensic facial approximation practitioners. This is not the case in the United States, where facial approximation practitioners are frequently employed independently or affiliated with law enforcement, an arrangement that results in a lack of translation of current research into the practice of facial approximation. Many forensic facial approximation practitioners in the United States also may not possess formal training in forensic anthropology or facial anatomy. Case-specific methods for forensic cases are rarely published (Hayes 2014), resulting in an absence of transparency and standardization. Recently, the prevalence of commercial software packages and services has increased, often relying on untested methods and data and lacking validation documentation. Additionally, observations of public images of facial approximations reveal that updated research findings that could significantly improve traditional guidelines are not being translated into practice. For example, multiple studies have shown that the eyeball is not centered in the orbit, being more lateral and superior (Dorfling et al. 2018; Guyomarc'h et al. 2012; Stephan and Davidson 2008; Stephan et al. 2009), but facial approximations are still being produced with centered eyeballs that are too medially positioned.


Assessments of accuracy are subjective exercises performed by visual comparison of the artist's estimation to facial photos of the subject in life. Larger-scale assessments of facial approximation accuracy include the ability of observers (usually unfamiliar with the study subjects) to "match" the facial approximation to an array of photos (Fernandes et al. 2012; Decker et al. 2013; Lee and Wilkinson 2016). As with case-specific methods, there is also a scarcity of published accuracy assessments, with accuracy studies primarily limited to computerized methods of facial approximation. There has been one published study of the accuracy of a facial approximation for a forensic case after a positive identification was made (Hayes 2016). Formal training opportunities in facial approximation methods are limited in the United States in any given year. Instruction in the United States consists largely of workshops given by individual forensic artists. These teaching modalities typically focus on either 2D estimation of the face over skull photos or 3D clay sculpting on physical copies of skulls (most recently, 3D printed copies). Craniofacial superimposition relies on the assumption that there are direct correspondences between bone and soft tissue landmarks/features, but there is no training situation in which practitioners can visualize these relationships. Standards for forensic facial approximation training and methods are limited. A Scientific Working Group for Forensic Anthropology published a best-practices document for facial approximation (SWGANTH 2011), taking the traditional position that forensic art is a subfield of forensic anthropology. This has been countered by forensic art practitioners, as many of them perform duties other than (sometimes excluding) facial approximation. Recently, an updated document has been proposed by the Anthropology Subcommittee of the Organization of Scientific Area Committees for Forensic Science under the National Institute of Standards and Technology (currently not publicly available), despite the fact that few forensic anthropologists in the United States perform facial approximation and that forensic art is itself a certifiable discipline under the International Association for Identification (Forensic Art Certification 2020).


Craniofacial identification methods have long histories of use in the United States, Russia, the United Kingdom, and continental Europe (Krogman 1962; Stephan and Claes 2016). In the United States, craniofacial identification has not achieved method standardization, owing to a disconnect between practitioners and researchers as well as limited access to appropriate reference materials. While facial approximation and craniofacial superimposition are different methods, with craniofacial superimposition employed much less frequently, both rely on assumed relationships between facial features and identifiable/quantifiable anatomical markers on the craniofacial skeleton. Unfortunately, these relationships have not been well established, despite the continued application of assumed correspondences. One reference source that can provide appropriate analysis and specific guidance for bone to skin relationships in the craniofacial skeleton is biomedical imaging data, because bone and skin can be simultaneously visualized and analyzed. While recent research in craniofacial identification is more frequently based on biomedical imaging data, primarily computed tomography (CT), these efforts result in new tissue depths (according to varying methods and landmark sets) and refined regression equations but do not address the most critical issue: translation of the research findings to facial approximation practitioners. The field of forensic craniofacial identification needs standardization of data collection, analytical methods, training, and assessments of accuracy. Many of these deficiencies, from establishing common/standardized data collection techniques to accessible and modern reference resources for practitioners, can be addressed through the incorporation of medical imaging data and the open-source publication of the resulting findings. These could be used not only for data collection but also for building interactive resources for practitioners.


The increasing availability of de-identified CT data, even publicly available data such as the Cancer Imaging Archive (TCIA) (http://www.cancerimagingarchive.net/) (Clark et al. 2013), provides an opportunity to modernize craniofacial identification research and training by facilitating the development of quantitative methods, interaction with and immersion in craniofacial anatomy and variation, and collaboration between researchers and practitioners (Falsetti et al. 2018; Simmons-Ehrhardt et al. 2016a, b, 2018b, 2019). The application of free, open-source software to analyze and interact with CT data allows for the standardization of research methods and common training methods that can be applied internationally via synchronous or asynchronous modalities. A CT reference dataset consisting of skulls and corresponding faces also provides a unique opportunity to assess more objectively the accuracy of published/recommended guidelines as well as of individual practitioners. This chapter outlines several ways that CT data can contribute to craniofacial identification, including research methods, visualization methods, and workflows for the translation of data to practitioners and forensic casework, with current implementations by the authors using free software.

4.2 Potential Contributions of CT Data to Forensic Craniofacial Identification

4.2.1 Research

CT and other medical images such as magnetic resonance imaging (MRI) and cone-beam CT (CBCT) provide a continually updated source of modern craniofacial data. Arguably, these imaging modalities are the only way, other than cadaver dissection, to simultaneously visualize the soft tissue and the craniofacial skeleton from any angle, and they are therefore an ideal means for craniofacial identification practitioners to study the relationships and establish direct correspondences between the craniofacial skeleton and soft tissue facial features. One of the primary benefits of incorporating 3D CT data into craniofacial identification research and training resources is the ability to include medical imaging data from different sources around the world to establish common methods and a common reference dataset that encompasses a wide range of craniofacial variation.

A database that includes CT data from as many sources as possible would allow the incorporation of biogeographical ancestry and the facial variations that present in different regions. The same data collected and resources built for facial approximation methods are also applicable to craniofacial superimposition, which is sometimes used to provide supportive analysis for the identification of individuals when no positive identification methods are possible. CT data are in increasingly frequent use in craniofacial identification research to collect traditional sparse-point tissue depths (Bulut et al. 2014; Cavanagh and Steyn 2011; Chung et al. 2015; Deng et al. 2020; Dong et al. 2012; Drgacova et al. 2016; Fourie et al. 2010; Guyomarc'h et al. 2013; Hwang et al. 2012a, b, 2015; Kim et al. 2005; Lodha et al. 2016; Meundi and David 2019; Panenkoa et al. 2012; Parks et al. 2014; Phillips and Smuts 1996; Ruiz 2013; Ueno et al. 2011; Tilotta et al. 2009), to generate dense facial tissue depth maps (Shrimpton et al. 2014; Shui et al. 2016; Simmons-Ehrhardt et al. 2018a), and for statistical estimation of a face from a 3D skull (Berar et al. 2005; Claes et al. 2006; de Buhan and Nardoni 2018; Deng et al. 2011; Gietzen et al. 2019; Guyomarc'h et al. 2014; Hu et al. 2013; Jones 2001; Kustar et al. 2013; Nelson and Michael 1998; Quatrehomme et al. 1997; Tilotta et al. 2010; Tu et al. 2005; Turner et al. 2005, 2006; Vandermeulen et al. 2006, 2012; Vanezis et al. 2000). The sparse-point tissue depth studies have used different sets of landmarks; placed landmarks on the skin first and measured to the bone, and vice versa; collected depths parallel or perpendicular to the Frankfurt Horizontal (FH) plane; and collected depths via linear measurements on 2D CT slices or with custom-developed software. The dense tissue depth mapping and statistical estimation studies have generally used custom-developed software and methods, making them somewhat inaccessible or difficult to replicate.


Despite the ability to automatically extract depth information from 3D surface models of the face and skull, most CT tissue depth studies continue to rely on manual collection of tissue depth points, many extracting 2D distances between bone and skin landmarks, which requires precise alignment of the head to a specific orientation (Caple et al. 2016). This inconsistency exists because direct correlations between bone and skin landmarks have yet to be established, despite the continued use of what are termed "corresponding" landmarks for sparse-point tissue depth collection; the actual soft tissue points used vary. Another point of contention is whether landmarks should be identified on the skin and measured to the bone, or identified on the bone and measured to the skin. The studies by Hwang et al. (2012b, 2015) used a custom application called "Skull Measure" (CyberMed) to automate depth collection after landmark placement; the 2015 study used it to evaluate the reproducibility of tissue depths based on 32 presumably corresponding bone and skin landmarks on CBCT scans, comparing three conventions: a landmark placed on the bone first, with the depth collected at the point on the skin perpendicular to the bone landmark ("perpendicular to bone"); a landmark placed on the skin first, with the depth collected at the point on the bone perpendicular to the skin landmark ("perpendicular to skin"); and a direct measurement between a bone landmark and a skin landmark (not perpendicular to each other). They found high inter- and intra-observer reproducibility for depths collected using the "perpendicular to bone" method, indicating that landmarks were easier to identify on bone than on skin.
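These three conventions can be made concrete with a short script. The sketch below is a minimal illustration using ray casting with the trimesh library; it is not a reconstruction of the "Skull Measure" application, and the file names, landmark coordinates, and surface normals are hypothetical placeholders.

```python
# Minimal sketch of the three depth conventions using trimesh ray casting.
# Assumes face.ply and skull.ply each load as a single surface mesh.
import numpy as np
import trimesh

skin = trimesh.load("face.ply")    # soft tissue surface from one CT scan
bone = trimesh.load("skull.ply")   # bone surface from the same scan

def first_hit_distance(mesh, origin, direction):
    """Distance from origin to the first ray intersection with mesh (or None)."""
    locations, _, _ = mesh.ray.intersects_location(
        ray_origins=[origin], ray_directions=[direction])
    if len(locations) == 0:
        return None
    return np.linalg.norm(locations - origin, axis=1).min()

bone_lm = np.array([0.0, 45.0, -20.0])   # hypothetical bone landmark (mm)
bone_n = np.array([0.0, 1.0, 0.0])       # outward bone normal at the landmark
skin_lm = np.array([0.0, 58.0, -20.0])   # hypothetical skin landmark (mm)
skin_n = np.array([0.0, 1.0, 0.0])       # outward skin normal at the landmark

d_perp_bone = first_hit_distance(skin, bone_lm, bone_n)   # "perpendicular to bone"
d_perp_skin = first_hit_distance(bone, skin_lm, -skin_n)  # "perpendicular to skin"
d_direct = np.linalg.norm(skin_lm - bone_lm)              # direct landmark-to-landmark
```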


Much of the craniofacial identification research, even that utilizing CT scans, has continued primarily to collect the same measurements described by early works such as Krogman (1962), confirming or refuting their validity rather than searching for new bone to skin relationships or finding and validating consistent ones. Additionally, the methods for working with CT craniofacial data have not been standardized, and the published results are often difficult to transition to practical applications. Another issue in the craniofacial identification and even the forensic anthropology literature is the diversity of methods for converting CT volumes to 3D surface data and the frequent absence of detailed reporting of methods that would allow for replication and standardization of specific 2D to 3D conversions. Some of these issues exist because of the number of CT viewing software packages, both free and commercial, institutional access to specific packages, funding, and user preference. However, these issues are not unique to forensic anthropology, and the precedents being set by biomedical applications and the clinical literature for working with 3D anatomy would provide appropriate guidance for establishing common methods/protocols across software platforms. The large user and developer community and the abundance of interactive and analytical functions in the free and open-source software 3D Slicer (Fedorov et al. 2012) would make it an ideal candidate for facilitating the standardization of 3D model generation in craniofacial identification. Regardless of the software/methods used for generating 3D CT surface models, the models themselves can be subjected to analyses that can be standardized in order to increase compatibility among different datasets. Because craniofacial identification practitioners utilize specific facial orientations (frontal, lateral), and because craniofacial data may be extracted from CT volumes that include varying lengths of the body (to the torso or mid-thigh), the coordinate systems of different heads need to be adjusted to a consistent orientation and position in space. By utilizing the inherent CT coordinate system (x-axis = medial-lateral, y-axis = anterior-posterior, z-axis = superior-inferior), the head itself can be subjected to slight rotations and translations to center and align it with reference to the Frankfurt Horizontal. The coordinate system we have selected sets the Frankfurt Horizontal at z = 0 by aligning the left orbitale and the left and right porion. The y-axis is set perpendicular to the Frankfurt Horizontal by aligning the left and right porion at y = 0, and the x-axis is perpendicular to the y- and z-axes by setting nasion at x = 0 (Fig. 4.1).
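As a rough illustration of the geometry behind this alignment (a numpy sketch with hypothetical landmark coordinates, not the published transformation script referenced below), the three constraints can be written directly:

```python
# Sketch: build the FH-aligned coordinate system from four landmarks.
# Landmark values are hypothetical scanner coordinates in mm.
import numpy as np

por_l = np.array([-58.1, -2.3, 4.9])    # left porion
por_r = np.array([60.4, -1.7, 5.2])     # right porion
orb_l = np.array([-32.6, 62.0, 3.8])    # left orbitale
nas = np.array([1.2, 88.5, 27.4])       # nasion

x_hat = (por_r - por_l) / np.linalg.norm(por_r - por_l)  # medial-lateral axis
z_hat = np.cross(x_hat, orb_l - por_l)                   # normal of the FH plane
z_hat /= np.linalg.norm(z_hat)
if z_hat[2] < 0:                                         # keep z pointing superiorly
    z_hat = -z_hat
y_hat = np.cross(z_hat, x_hat)                           # anterior-posterior axis

def to_fh(points):
    """Map scanner coordinates into the FH-aligned coordinate system."""
    p = np.atleast_2d(points)
    mid = 0.5 * (por_l + por_r)
    return np.stack([(p - nas) @ x_hat,      # nasion -> x = 0
                     (p - mid) @ y_hat,      # porion axis -> y = 0
                     (p - por_l) @ z_hat],   # FH plane -> z = 0
                    axis=1)
```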


Fig. 4.1  Reference planes used to align 3D CT models to a standardized orientation and common coordinate system to facilitate viewing in 2D and 3D

The method for calculating and applying the transformation to this orientation and coordinate system has been made available (Simmons-Ehrhardt et al. 2017a) and can be applied to any CT- or CBCT-derived head that is in the standard CT x-y-z coordinate system. Regardless of transformation, because of the coordinate system built into CT slices, 3D face and skull surface models extracted from the same CT scan will remain in the correct anatomical orientation to each other when exported from the CT rendering software to other software. Transformed 3D coordinates described with the above method represent the distances of landmarks from the constructed reference planes and can therefore be used to calculate 3D distances as well as distances in one or two axes (frontal view = xz, profile view = yz, superior view = xy). Alignment of the 3D surfaces along these reference planes facilitates the extraction of quantifiable geometric relationships between the face and skull in any axis or plane once landmarks are positioned. Angles and other geometric relationships can be calculated between landmark coordinates using trigonometric formulae and line equations (slope, intercept, line intersection, midpoint) rather than by manually drawing angles or lines (Fig. 4.2).
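For example, the frontal-view construction in Fig. 4.2 reduces to simple line equations in the x-z plane. The following sketch uses hypothetical FH-aligned landmark coordinates, and the placement of the vertical line is illustrative rather than a published guideline:

```python
# Sketch: intersect the maxillofrontale-Whitnall's tubercle line with a
# vertical line in the frontal (x-z) view. Coordinates are hypothetical.
import numpy as np

mf = np.array([-12.4, 18.6])   # maxillofrontale (x, z), mm
wt = np.array([-45.0, 21.9])   # Whitnall's tubercle (x, z), mm

slope = (wt[1] - mf[1]) / (wt[0] - mf[0])   # line through mf and wt
intercept = mf[1] - slope * mf[0]           # z = slope * x + intercept

x_vertical = 0.5 * (mf[0] + wt[0])          # illustrative vertical line position
z_cross = slope * x_vertical + intercept    # intersection with the vertical line

estimated_eye = np.array([x_vertical, z_cross])  # frontal-view eyeball estimate
frontal_dist = np.linalg.norm(wt - mf)           # 2D (xz) distance between landmarks
```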

Note that the calculation of 3D distances between landmarks does not require the head to be in a specific orientation, as the 3D distances represent geometric relationships between the face and skull as long as the two are in the correct anatomical position relative to each other; however, deriving single-axis or 2D distances from these 3D coordinates does require a standardized orientation. The standardized orientation and coordinate system facilitate quick visualization of landmark/distance data, with or without the 3D surfaces themselves, in any plane view, which can be helpful for viewing multiple individuals at once or for visualizing specific features that might be obscured by other features (Fig. 4.3). Actual craniofacial distances for multiple individuals can be superimposed in 2D or 3D, making it easier to identify consistent bone to skin correspondences, such as the frontal, profile, or superior positioning of cheilion relative to the distal canine. The visualization and analysis of these direct bone to skin correspondences will also provide the foundational data needed to build objective methods for craniofacial superimposition and allow for objective guidelines for positioning facial features on forensic facial approximations of unidentified skulls. The generation of 3D surface models also allows for dense computational analyses, including the estimation of the entire facial surface from the surface of the skull. The methods that statistically estimate the face essentially calculate the 3D distances between the facial surface and the skull surface for a CT reference database and use these distances to predict the facial surface for a 3D-rendered unidentified skull. A few studies have focused on collecting and analyzing the distances themselves, generating dense facial tissue depth maps (FTDMs) (Shrimpton et al. 2014; Shui et al. 2016; Simmons-Ehrhardt et al. 2018a). When colorized according to depth, these FTDMs provide an interactive dataset with which practitioners can visualize tissue depths over the entire face, making it easier to see depth changes across a single face as well as among multiple faces (Fig. 4.4).


Fig. 4.2  Example of frontal view estimation of eyeball position for facial approximation: the actual position of oculus anterius (Guyomarc'h et al. 2012) is compared to the intersection of a horizontal line from maxillofrontale to Whitnall's tubercle with an approximately perpendicular vertical line, utilizing the x- and z-coordinates of the relevant landmarks to calculate the intersection point

This output is a much more intuitive way of visualizing the distribution of depths over the face than the sparse-point tissue depth tables traditionally provided to practitioners, and it could lead to new training opportunities as well as new methods for generating facial approximations. We have published a workflow (Simmons-Ehrhardt et al. 2018a) as well as a step-by-step technical guide (Simmons-Ehrhardt et al. 2017b) for generating FTDMs utilizing Meshlab v.1.3.3 (Cignoni et al. 2008), a free, open-source program for 3D analyses, making it possible for other researchers to generate FTDMs with the same method on their own 3D CT models. In summary, the workflow, after generating and exporting 3D face and skull surface models, includes hollowing the soft tissue model to generate a facial shell, cropping the face model to a distance from pronasale that is anterior to the ears, and mapping all vertices on the face to the closest points on the skull (Fig. 4.5). The depth values are saved in the "vertex quality" field of a polygon file format (PLY) model for both bone and skin and can be used to apply colorization (we chose a red-green-blue scale from 0.0 mm to 40.0 mm) and to extract specific points or ranges of points.

In ASCII format, the coordinates and depth values stored in a mapped PLY model are readable in a text editor, and corresponding bone and skin points are in the same sequence and share the same "normal." The process of generating FTDMs via our method results in multiple 3D models for one individual: the hollowed face shell, the cropped face shell (anterior to the ears), the mapped and colorized facial surface, the mapped and colorized face points, and the mapped and colorized skull points, in addition to the original face and skull models generated from the CT scan.
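Because the mapped models are plain ASCII, the coordinates and depth values can also be pulled out with a few lines of code rather than inspected in a text editor. The reader below is a minimal sketch: it assumes a point-cloud PLY whose vertex properties include a "quality" field, as written by Meshlab, and the file name is a placeholder.

```python
# Minimal ASCII PLY reader sketch for a mapped FTDM point cloud: returns
# vertex coordinates and the per-vertex "quality" (tissue depth) values.
def read_ply_depths(path):
    with open(path) as f:
        n_vertices, props = 0, []
        for line in f:
            line = line.strip()
            if line.startswith("element vertex"):
                n_vertices = int(line.split()[-1])
            elif line.startswith("property"):
                props.append(line.split()[-1])   # property names, in order
            elif line == "end_header":
                break
        quality_col = props.index("quality")     # column of the depth field
        points, depths = [], []
        for _ in range(n_vertices):
            values = f.readline().split()
            points.append([float(v) for v in values[:3]])   # x, y, z
            depths.append(float(values[quality_col]))
    return points, depths

points, depths = read_ply_depths("face_mapped_points.ply")
print(min(depths), max(depths))   # e.g., the minimum depth used for extrusion
```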

60

T. Simmons-Ehrhardt et al.

Fig. 4.3  Frontal view plot of bone (filled circles) and skin (open circles) landmarks around the nasal aperture

The automatic 3D distance calculation between surfaces utilizing vertices, faces, and/or edges eliminates the need for manual landmark placement, but it also provides an opportunity to evaluate the conventionally applied corresponding bone and skin landmarks. By placing landmarks on the skin and directing the software to find the closest point on the bone underneath (perpendicular to skin) utilizing the Hausdorff distance filter (Aspert et al. 2002; Cignoni et al. 1998), we can determine whether placing landmarks on the skin first results in the same corresponding bone landmarks among different individuals (Fig. 4.6). The reverse operation can also be applied, by calculating the closest skin landmarks to manually positioned bone landmarks (perpendicular to bone). Because of how the curvature of the bone surface affects the skin surface, some bone-skin landmark pairs will likely be more consistently positioned than others: those occurring along the midline show greater curvature and therefore a smaller likelihood of similar relationships between conventionally corresponding bone and skin landmarks across individuals. All of these considerations can be evaluated utilizing the automatic distance calculations that exist in free and open-source software, accessible to any researcher or practitioner.

While increasing computational capabilities are accessible in free, open-source software, the application of such analyses to craniofacial identification does not usually result in practical resources for the individuals generating facial approximations or performing craniofacial superimposition. The 3D surfaces themselves, however, have enormous potential to give practitioners new insight into craniofacial relationships and variation, especially given the visual nature of the data and the ability to visually project quantitative relationships onto the surfaces.


4.2.2 Visualization and Interaction with 3D CT Data

Visualization and interaction with 3D CT data can include working with the raw CT volumes, with 3D surface models generated from the CT volumes (including FTDMs), or with 3D printed versions of the surface models. The visualization and interaction tools described here can be implemented by practitioners as well as by researchers looking to familiarize themselves with existing free software. The use of common tools by researchers would facilitate data sharing and the establishment of common reference datasets. We have set up a blog (www.forensiccraniofacialid.wordpress.com) to outline and demonstrate some of the visualization methods discussed below.

Fig. 4.4  Example FTDM colorized from thinnest (red) to thickest (blue)

4.2.2.1 3D Slicer

3D Slicer (slicer.org) (Fedorov et al. 2012) is a free, open-source medical imaging software package with a large user and developer community, and numerous online tutorials facilitate learning of its various features. It contains a Volume Rendering module for quick 3D visualization of CT volumes, a Segment Editor for generating 3D surface models, a Markups module for collecting or viewing landmarks and measurements, Screen Capture tools for generating images/animations, and clipping tools that can be used to modify models for 3D printing. 3D Slicer also has a direct link to publicly available medical imaging data via the TCIA Browser, making it even easier for researchers and practitioners to access de-identified craniofacial images.

Fig. 4.5  Mapped and colorized face and skull points generated with FTDM method in Meshlab



Fig. 4.6  Example application of Meshlab distance tools to automatically find the closest points on the skull after manually placing facial landmarks

The Volume Rendering module provides a quick 3D render of a craniofacial CT volume, imported as either a directory of DICOM images or a Nearly Raw Raster Data (.nrrd) volume. Preset transfer functions facilitate quick viewing of tissues of interest, but the visualization can also be adjusted via a shift slider to optimize the viewing of bone, skin, or even underlying soft tissue such as muscle and fatty tissue. 3D surface models can also be imported into 3D Slicer (if generated by a different program), and the volume rendering can be viewed in conjunction with the 3D surface models (Fig. 4.7). For example, the soft tissue surface model can be made transparent in the Models module, and even clipped midsagittally, while the volume rendering is set to display underlying tissues. Using this combination, direct relationships between soft tissue (either skin or underlying) and the craniofacial skeleton can be easily visualized and documented. Landmarks can be collected in the Markups module or imported from external collection sources, converted to .fcsv files, and visualized with surface models and/or volume renderings. 3D Slicer would be optimal for the intensive study of one head, and its Screen Capture tools facilitate the collection of 2D screenshots or rotations/animations for reference building or comparison with other reference heads.
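For repeatable set-ups, the same loading and display steps can be driven from 3D Slicer's built-in Python console. The snippet below is a sketch using standard Slicer scripting calls; the file names are placeholders, and module interfaces can shift between Slicer releases.

```python
# Sketch for the 3D Slicer Python console: load a CT volume and a skull
# surface model, enable default volume rendering, and set model transparency.
head = slicer.util.loadVolume("head_ct.nrrd")   # CT volume (.nrrd)
skull = slicer.util.loadModel("skull.stl")      # exported surface model

vrLogic = slicer.modules.volumerendering.logic()
vrDisplay = vrLogic.CreateDefaultVolumeRenderingNodes(head)
vrDisplay.SetVisibility(True)                   # show the rendered volume

skull.GetDisplayNode().SetOpacity(0.5)          # semi-transparent bone model
```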

4.2.2.2 Meshlab

Meshlab would be the software of choice for visualizing multiple heads or FTDMs. Meshlab treats multiple 3D models as layers, with the ability to toggle their visibility, and also has a multiple-window capability, making it possible to view the data for more than one head at the same time (Fig. 4.8). Additionally, the FTDMs (Simmons-Ehrhardt et al. 2018a) can be generated in Meshlab, facilitating a common data collection method that can be applied to any CT-derived heads regardless of orientation. When the FTDMs are generated in Meshlab, the distance data are stored in a "vertex quality" field within the 3D PLY point cloud. This feature allows for the selection of points based on distances, including adjusting the color mapping to specific depth ranges. To enhance visualization for facial approximation reference and the comparison of depths among individuals, we divided FTDMs into 1.0 mm increments using a series of script functions in Meshlab that have been made available with the technical guide mentioned previously (Simmons-Ehrhardt et al. 2017a).


Fig. 4.7  Volume Rendering module of 3D Slicer used in conjunction with 3D surface models

Fig. 4.8  Meshlab interface showing multiple window capability and visualization of incremental FTDMs as layers

This provides a mechanism to visualize the "connection" of facial areas at or within a specific range of depths, as opposed to the traditional sparse-point markers.
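The same incremental subdivision can be reproduced outside Meshlab once the depth values have been read from the point cloud (for example, with the PLY reader sketched earlier); the numpy sketch below bins points into 1.0 mm layers and is an illustrative re-implementation, not the published script functions.

```python
# Sketch: bin FTDM points into 1.0 mm depth increments (0-40 mm).
import numpy as np

points = np.asarray(points)   # arrays from the PLY reader sketched earlier
depths = np.asarray(depths)

bins = np.arange(0.0, 41.0, 1.0)   # 1.0 mm increments from 0 to 40 mm
labels = np.clip(np.digitize(depths, bins), 1, len(bins) - 1)

for i in np.unique(labels):
    layer = points[labels == i]    # one depth "layer" of the FTDM
    print(f"{bins[i - 1]:.0f}-{bins[i]:.0f} mm: {len(layer)} points")
```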

Quick statistics for FTDMs can be generated by applying "Render → Show Quality Histogram" or "Filters → Quality Measures and Computations → Per Vertex Quality Stat" or "Per Vertex Quality Histogram." Meshlab can also read point clouds of landmark files (.xyz) (see Fig. 4.6) and can be used to visualize geodesic distances from specific landmarks by applying color mapping according to established distance parameters (Fig. 4.9).


4.2.2.3 3D Printing 3D CT Models

3D printed models can be used to generate a physical reference collection for teaching craniofacial identification methods in either a workshop or an academic setting. Printed models can include full skulls for sculpting practice/training as well as models of specific facial features and profile "slices." 3D printing would be especially useful for demonstrating specific or unusual morphological variations. The preparation of models for 3D printing can be implemented in 3D Slicer and Meshmixer (www.meshmixer.com) (Autodesk, Inc.). Because of the standardized orientation and coordinate system of our head CT data, specific landmark coordinates can be entered into the EasyClip module of 3D Slicer to generate clipping planes in reference to the FH plane. For example, the FH plane itself has been set to z = 0, so to position an axial clipping plane at FH, one can enter 0.0 mm in the red slice. Note that either PLY or stereolithography (STL) models can be used for the extrusion and clipping steps described below. PLY models have a smaller file size and so are less burdensome on the computer system; however, the final models will need to be converted to STL for 3D printing (in 3D Slicer, Meshlab, or Meshmixer).

Fig. 4.9  Mapping the face by geodesic distance from pronasale in Meshlab


Clipping Facial Feature Models  Specific facial feature models can be generated by applying 3D Slicer's EasyClip tools to the hollowed face model and to the underlying skull model. It is recommended that the hollowed face model first be extruded to the minimum tissue depth for that individual in Meshmixer, to provide a thickness for 3D printing that is anatomically and individually accurate: this value can be read from an FTDM point cloud in Meshlab by selecting "Render → Quality Histogram" (see Fig. 4.6 for a quality histogram example). Rather than selecting an arbitrary thickness for the face shell, the minimum tissue depth is more accurate for a specific individual and allows visualization of where the minimum tissue depth contacts the bone, most frequently on the superior or lateral surfaces of the nasal bones but also at the lateral orbital margin. To extrude a face shell (Fig. 4.10; a scripted equivalent is sketched after the list):

• Import the face shell model into Meshmixer.
• Click "Analysis → Inspector."
  –– Click the two blue pins to fill the nostrils and click "Done."
• Go to "Select" and key "CTRL + A" to select the full surface.
• Select "Edit → Extrude."
  –– Enter the minimum tissue depth as a negative number in the "Offset" field (a negative depth value causes the extrusion to expand inward rather than outward).
  –– Slide "Harden" up to square the edges.
  –– Select "Direction → Normal" and "Accept."
  –– Select "File → Export" to export the extruded face shell.
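A rough scripted analogue of this inward extrusion is to offset the shell along its vertex normals, as sketched below with trimesh; this is an illustration under stated assumptions rather than a substitute for Meshmixer's more robust tool.

```python
# Sketch: offset a face shell inward along its vertex normals by the minimum
# tissue depth, approximating Meshmixer's "Extrude" with a negative offset.
# Caveats: a plain normal offset can self-intersect in high-curvature areas,
# and the open rim between the surfaces is not stitched here. The file name
# and depth value are placeholders.
import trimesh

shell = trimesh.load("face_shell.ply")   # hollowed face shell
min_depth = 2.8                          # minimum tissue depth from the FTDM (mm)

inner = shell.copy()
inner.vertices = inner.vertices - min_depth * inner.vertex_normals
inner.invert()                           # flip normals to face the cavity

thick = trimesh.util.concatenate([shell, inner])   # outer + inner surfaces
thick.export("face_shell_thick.stl")
```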


Fig. 4.10  Extrusion of the face shell to the minimum tissue depth in Meshmixer: (a) filling the nostrils, (b) selecting the facial surface, (c) entering a negative tissue depth for inward extrusion in the normal direction, (d) exporting the new face shell with thickness

The thickened face shell can be imported into 3D Slicer for precise clipping at anatomical landmarks or features in conjunction with the skull model. The following steps outline clipping for a nose study model but can be followed for other facial features as well, with the appropriate reference landmarks (Fig. 4.11):

• Import the extruded facial shell and skull model into 3D Slicer (drag and drop).
• Open the "EasyClip" module from the module drop-down.
  –– Adjust the layout view to "Conventional" if the three slice views disappear.
  –– To clip only the face or skull model, toggle off the visibility of the model that is not being clipped and click on the name of the model to be clipped.
• Check the box for "yellow slice" to make a midsagittal clip at nasion.
  –– A yellow clipping plane will appear in the 3D window with a directional arrow: "Keep Top Arrow" refers to the part of the model at the top of the arrow, and "Keep Down Arrow" refers to the opposite end.
  –– Use the yellow slice slider to adjust the clipping plane to hard tissue nasion (x = 0, the yellow coordinate), or highlight the coordinate for the yellow slice and enter 0.00 mm.

Fig. 4.11  Clipping in 3D Slicer: (a) "EasyClip" interface and midsagittal clipping of the face shell, (b) frontal view of the nose model after subsequent clipping steps of both bone and skin, (c) profile view of the nose model showing the thickened face shell over the bone model


  –– Decide which side to keep based on the direction of the arrow and select "Keep Top Arrow" or "Keep Down Arrow."
  –– Press "Clipping."
  –– To undo, or if the wrong side was kept, click "Undo" and adjust.
  –– Uncheck "Yellow Slice Clipping" when complete.
  –– Saving the initial clip by clicking "Save" is recommended; the clipped face model will have the same name as the original model, but it can be renamed by double-clicking the name before saving (select the appropriate directory for saving).
• Follow the steps above for midsagittal clipping to apply axial (red), coronal (green), and additional sagittal (yellow) clipping to produce the desired model for 3D printing. The face shell and skull model can be clipped simultaneously for some areas.
• Export the final clipped models as separate, appropriately named STL models.


If specific landmark coordinates are not known, the sliders within each slice window can be used to visually adjust the clipping planes to the desired locations; this is especially useful when making a final clip to demarcate the "depth" of the model for 3D printing. The CT volume can also be imported so that the 2D CT slices can provide anatomical references for clipping. Saving models frequently during sequential clipping is recommended.

Cleaning and Combining Clipped Models  As the clipping process for the skull model will produce small disconnected fragments, these need to be removed before 3D printing. Import the cropped skull model into Meshlab and select "Filters → Cleaning and Repairing → Remove Isolated Pieces (wrt Diameter)." Enter 50 in the percent field of the max diameter and apply. Small pieces will disappear, and the text dialog at the bottom right of the screen will indicate how many components were removed out of the total (e.g., 10 connected components out of 11 = 1 component remaining). To combine the cleaned and cropped skull model with the cropped face model, import both into Meshlab, right-click on one model name in the layer dialog, select "Flatten Visible Layers," and export the result as a new combined STL.
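Where a scripted route is preferred, these two steps can be approximated with trimesh, as in the sketch below; note that it keeps the single largest connected component rather than applying Meshlab's 50% diameter rule, and the file names are placeholders.

```python
# Sketch: drop small disconnected fragments from the clipped skull and merge
# it with the clipped face shell into a single printable STL.
import trimesh

skull = trimesh.load("skull_clipped.stl")
pieces = skull.split(only_watertight=False)             # connected components
skull_clean = max(pieces, key=lambda m: len(m.faces))   # keep the largest piece

face = trimesh.load("face_clipped.stl")
combined = trimesh.util.concatenate([skull_clean, face])
combined.export("nose_model_combined.stl")
```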

3D Printing Combined Models  The combined model is most optimally 3D printed with the posterior surface on the print bed, although printing with the inferior surface on the print bed would also work. Printing with a raft will add a flat surface, resulting in a cleaner model; alternatively, a small flat plane or box can be added with Meshmixer or Meshlab. Print with supports to ensure that the face model adheres to the skull model; some supports can be removed after printing. Pieces may be printed separately if different colors or materials are desired, but be aware that the minimum tissue depth does not always occur at rhinion, so the actual contact point may not be present in a specific cropped model.

Preparing Midsagittal Profiles  Midsagittal profiles of the full soft tissue model (not hollowed) provide a view similar to a lateral radiograph of the midsagittal contour of the face, leaving the bone as negative empty space. The midsagittal view can be especially helpful for practitioners who generate 2D facial approximation sketches in both frontal and profile views. To generate a profile model (Fig. 4.12):


• Import the full face (not hollowed or extruded) model into 3D Slicer.
• Open the "EasyClip" module.
• Select "Yellow Slice Clipping."
• Enter +1.0 mm into the yellow slice, select "Keep Down Arrow," and apply.
• Enter −1.0 mm into the yellow slice, select "Keep Top Arrow," and apply.
• For a smaller model, apply additional coronal clipping (Green Slice) to remove posterior portions, or axial clipping (Red Slice) to remove inferior portions, by sliding the planes or adjusting the slice sliders (the clipping plane can be rotated off-axis by selecting and rotating the arrow that runs through the clipping plane).


Fig. 4.12  Midsagittal profile model (left) generated by clipping on either side of nasion through the full face model, with the 3D printed model (with additional coronal clipping) on the right

The resulting model may require the removal of disconnected fragments; follow the steps for cleaning in Meshlab described above. 3D print with either of the lateral surfaces on the print bed, no raft, with or without supports (for small overhangs). Note that this type of profile model can be generated for other planes as well, such as sagittal planes along the infraorbital foramen, distal canine, or cheilion, or even axial planes.
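The same ±1.0 mm slab can also be produced programmatically; the sketch below mirrors the two EasyClip steps with trimesh (file names are placeholders, and cap=True closes the cut surfaces in recent trimesh releases).

```python
# Sketch: cut a 2 mm midsagittal slab from the full face model with two
# plane clips, keeping the region between x = -1.0 mm and x = +1.0 mm.
import trimesh

face = trimesh.load("face_full.stl")

slab = face.slice_plane(plane_origin=[1.0, 0.0, 0.0],
                        plane_normal=[-1.0, 0.0, 0.0], cap=True)   # keep x <= +1
slab = slab.slice_plane(plane_origin=[-1.0, 0.0, 0.0],
                        plane_normal=[1.0, 0.0, 0.0], cap=True)    # keep x >= -1
slab.export("midsagittal_profile.stl")
```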

4.2.3 Application to Workshops and Training

While craniofacial identification research increasingly includes CT data, there has been no global effort to produce the necessary interactive translational tools for practitioners. The publication of new research is assumed to become incorporated into practice, although the mechanisms for translation are not always obvious. Additionally, at least in the United States, many practitioners work independently or for small agencies and would not have ready access to peer-reviewed literature without an academic affiliation. A number of free 3D software packages would facilitate the inclusion of practitioners in research via model sharing, if training opportunities for working with 3D models were made available.

The increasing availability of de-identified CT datasets, as well as of free software for interacting with CT data, provides an opportunity to encourage craniofacial identification practitioners to incorporate more anatomical data into training, personal study, and interactive reference materials. Training workshops can focus on specific anatomical features and correspondences and can incorporate 3D printed or digital 3D models that allow direct surface comparison with 3D sculpted approximations. Interaction with 3D CT data provides an immersive experience of craniofacial anatomy that traditional training environments cannot replicate. There is also a need to expose practitioners to common reference datasets that include data from multiple population profiles to facilitate common methods and evidence-based practice. In addition, the building of foundational reference data from CT scans can facilitate the development of training and education in academic settings, as many of the required resources would be readily available and accessible.

4  Using Computed Tomography (CT) Data to Build 3D Resources for Forensic Craniofacial Identification

Such workshops would allow practitioners to engage in an immersive experience with actual 3D craniofacial data in order to visualize craniofacial variation and changes due to age, sex, tooth loss, or weight changes. Direct anatomical correspondences between landmarks or muscle attachments on bone and features on the face can be visualized through interaction with 3D CT volumes, such as with 3D Slicer as described above (Sect. 4.2.2.1). This type of workshop would provide interaction with a wide range of anatomical variation, as models/CT volumes could be loaded onto a computer or accessed from the TCIA browser in 3D Slicer, allowing exposure to far more variation than the few examples available in a traditional workshop setting or textbook. In the dataset we have generated from TCIA, many individuals have more than one CT scan, taken at different times and exhibiting changes in morphology due to tooth loss and/or weight changes, providing a unique opportunity to observe a variety of individual changes due to these processes. Training with CT volumes can give practitioners the opportunity to study 3D anatomy in a way that simulates practical anatomy with cadavers.

Digital-content workshops would also allow interaction with dense FTDMs. Tissue depths are traditionally applied as pegs and presented in sparse-point data tables that give no sense of the depths, or the changes in depth, across the entire face. The FTDMs we have generated can easily be interacted with in Meshlab in a workshop or academic setting. Because the 3D CT models can be customized and 3D printed, technique workshops could also include 3D printed physical specimens, not just for sculpting but also for exhibiting a wide range of anatomical examples. Such workshops could use full skull models for sculpting or potentially focus on one feature, such as the nose. The generation of specific facial feature reference models was described in Sect. 4.2.2.3. A 3D printed collection of nose references would allow for practice sculpting on a wide variety of noses in one workshop, as well as testing of existing nose prediction guidelines.


Training workshops traditionally include skulls with corresponding facial photos for visual comparison to artist-generated clay sculptures or sketches. The incorporation of 3D CT reference data would allow direct 3D assessment of sculpted 3D facial approximations: the approximation is 3D scanned and overlaid with the actual CT face, using the skull as the alignment reference. Practitioner accuracy and method accuracy can then be assessed for the overall face as well as for specific facial features. 2D approximations can be compared similarly by generating appropriate 2D screenshots of CT faces and skulls and overlaying the sketch for facial feature comparison. Incorporating more objective accuracy assessments can improve individual practitioner skills as well as potentially identify specific estimation methods that consistently produce inaccurate results.
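As one concrete way to script such a skull-anchored overlay, the sketch below uses the Open3D Python library: the scan is registered to the CT skull, and the resulting transform is applied to the sculpted face before computing surface deviations. This is a hedged sketch rather than the authors' implementation: file names are placeholders, and ICP registration assumes the two skull models are already roughly pre-aligned (e.g. from landmarks).

```python
# Skull-anchored accuracy assessment sketched with Open3D
# (pip install open3d). All file names are hypothetical placeholders.
import numpy as np
import open3d as o3d

ct_skull = o3d.io.read_point_cloud("ct_skull.ply")
ct_face = o3d.io.read_point_cloud("ct_face.ply")
scan_skull = o3d.io.read_point_cloud("scan_skull.ply")  # skull surface in the scan
scan_face = o3d.io.read_point_cloud("scan_face.ply")    # sculpted approximation

# Register the scanned skull to the CT skull; the skull is the alignment
# reference, so the same transform is then applied to the sculpted face.
icp = o3d.pipelines.registration.registration_icp(
    scan_skull, ct_skull, max_correspondence_distance=2.0,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
scan_face.transform(icp.transformation)

# Point-to-nearest-point distances between the sculpted and actual CT face.
dists = np.asarray(scan_face.compute_point_cloud_distance(ct_face))
print(f"mean deviation {dists.mean():.2f} mm, max {dists.max():.2f} mm")
```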

4.2.4 Application to Forensic Facial Approximation Casework

A 3D dataset generated from CT volumes, including 3D surface models, landmarks, and 2D screen captures, can provide a new form of reference for craniofacial identification practitioners. Our previous work has generated such a dataset and, where allowable, other craniofacial identification researchers are encouraged to do the same, so that data sharing can make craniofacial identification a more objective practice with evidence-based methods. While computational analyses of the data can provide higher-level predictions and guides to replace traditional facial approximation guidelines, the visual data themselves can provide a direct reference for casework practitioners, much like a 2D facial catalog; a 3D CT-derived catalog, however, would also contain skull data. Access to a common reference dataset would encourage evidence-based practice in craniofacial identification. Because practitioners receive an unidentified skull, or images thereof, upon which to generate a face or to make a comparison to facial photos, they need access to a catalog of skulls in order to find those with similar features, such as nasal aperture shape, orbit shape, or midsagittal profile.



Eventually, a reference catalog would also make it possible to morphometrically identify skulls or crania with similarly shaped features, further removing subjectivity from the facial approximation process. Upon identifying skulls with similar features, the practitioner would then be able to view the corresponding soft tissue in 3D to directly visualize how a specific facial feature should correspond to the observed skeletal feature.

Although we have outlined methods in this chapter, in multiple conference presentations (Falsetti et al. 2018; Simmons-Ehrhardt et al. 2016a, b, 2018b, 2019), a publication (Simmons-Ehrhardt et al. 2018a) and a blog (www.forensiccraniofacialid.wordpress.com), the learning curve of 3D software for inexperienced users is steep, and these resources are not a substitute for more immersive formats. Without hands-on workshops to encourage practitioners to utilize the software programs outlined above, the likelihood of adoption of such methods is low. To address these issues, as well as to combine features from multiple programs into one, we are developing an interface utilizing the 3DHOP framework (Potenziani et al. 2015; Simmons-Ehrhardt et al. 2019). Although browser-based, the interface does not need to be installed on a web server: models are viewed in a local HTML web page, which eliminates the need to install multiple software packages. To lessen the learning curve associated with 3D software, we have included only the tools most relevant to craniofacial identification practitioners. These include buttons for standard views (anterior, left, right, inferior), model transparency, model color, visibility toggle, lighting, and face clipping. Measurement tools include a landmarking tool and a ruler with copy and export functions. A “Notes” button provides qualitative observations recorded by Simmons-Ehrhardt and C. Falsetti, some in reference to traditional facial approximation guidelines. To provide some interaction with FTDMs, the incremental depth maps have been added as point cloud layers in increments of 2.0 mm.

A skull index divided by sex makes up the first page, facilitating application in casework by allowing visual searches for skulls with features similar to an unidentified case. A “3D” button takes the user to the 3D page for that individual, which includes face and skull models, reference planes, tissue depth maps, and the tools listed above (Fig. 4.13). While this interface would be useful to forensic artists during the facial approximation process, it may also be useful to forensic anthropologists for developing facial feature guidelines for a facial approximation case prior to, or during, discussions with a forensic artist. Expansion is also possible by generating new 3D HTML pages for new reference heads. The 3D HTML pages can also be opened through an R Shiny (RStudio, PBC) interface, allowing for the future development of quantitative or morphometric search functions for the skull index.
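Because each reference head gets its own 3D HTML page, this kind of expansion lends itself to simple templating. The sketch below is an assumed illustration only (the template file, field names and model paths are hypothetical placeholders, not the authors' actual files; .nxs is the multiresolution format used by 3DHOP) of batch-generating one page per individual with Python's standard library.

```python
# Batch-generate per-individual 3D HTML pages from a template file.
# head_page_template.html is assumed to contain $case_id, $face and $skull
# placeholders wherever the page references that individual's models.
from pathlib import Path
from string import Template

template = Template(Path("head_page_template.html").read_text())

individuals = [
    {"case_id": "head_001", "face": "models/head_001_face.nxs",
     "skull": "models/head_001_skull.nxs"},
    {"case_id": "head_002", "face": "models/head_002_face.nxs",
     "skull": "models/head_002_skull.nxs"},
]

for person in individuals:
    html = template.substitute(**person)  # fill in the $-placeholders
    Path(f"{person['case_id']}.html").write_text(html)
```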

4.3 Summary

The methods and tools described in this chapter reflect a trend towards open-source resources that can alter the trajectory of craniofacial identification research and applications, making them more open, standardized, and evidence based. However, the availability of such resources is ineffectual without engagement between researchers and practitioners and the willingness to share data, collaborate, and invest in accessible training modalities. We believe that the most reliable means of improving the quality, accuracy, and reproducibility of facial approximations in forensic casework begins with a standardized set of data that encompasses as much as possible of the variation in facial architecture, including tissue depths across the human face. Variation in the face is driven by a variety of mechanisms, including intrinsic factors such as age, sex, and underlying disease processes. Because not all factors affecting a person's visage can be accounted for in a routine skeletal analysis, the forensic art practitioner is faced with translating the biological profile generated by a forensic anthropologist onto an unidentified skull using traditional sparse-point tissue depth measures that provide no transitional information between points.



Fig. 4.13  3DHOP interface for 3D craniofacial models illustrating skull index (top) and 3D page for an individual (bottom)


3D volume data derived from CT technology and provided to the end user will enhance their proficiency and accuracy, and their ability to produce a face that reflects the underlying morphology and, hopefully, more closely represents the unidentified person.

Acknowledgments  Some of the information in this work was supported by award 2014-DN-BX-K005 from the National Institute of Justice, Office of Justice Programs, US Department of Justice. The opinions, findings, and conclusions or recommendations expressed in this publication/program/exhibition are those of the author(s) and do not necessarily reflect those of the Department of Justice.

References

Aspert N, Santa-Cruz D, Ebrahimi T (2002) MESH: measuring errors between surfaces using the Hausdorff distance. In: Proceedings of the IEEE International Conference in Multimedia and Expo (ICME), Lausanne, August 2002, vol 1, pp 705–708
Aulsebrook W, Iscan M, Slabbert J, Becker P (1995) Superimposition and reconstruction in forensic facial identification: a survey. Forensic Sci Int 75:101–120
Berar M, Desvignes M, Bailly G, Payan Y (2005) 3D statistical facial reconstruction. In: Proceedings of the 4th international symposium on image and signal processing and analysis, Zagreb, September 2005, pp 365–370
Bulut O, Sipahioglu S, Hekimoglu B (2014) Facial soft tissue thickness database for craniofacial reconstruction in the Turkish adult population. Forensic Sci Int 242:44–61
Caple J, Stephan C, Gregory L, MacGregor D (2016) Effect of head position on facial soft tissue depth measurements obtained using computed tomography. J Forensic Sci 61(1):147–152
Cavanagh D, Steyn M (2011) Facial reconstruction: soft tissue thickness values for South African black females. Forensic Sci Int 206:215.e1–215.e7
Chung J-H, Chen HT, Hsu WY, Huang GS, Shaw KP (2015) A CT-scan database for the facial soft tissue thickness of Taiwan adults. Forensic Sci Int 253:132.e1–132.e11
Cignoni P, Rocchini C, Scopigno R (1998) Metro: measuring error on simplified surfaces. Comput Graph Forum 17:167–174
Cignoni P et al (2008) MeshLab: an open-source mesh processing tool. In: Scarano V, De Chiara R, Erra U (eds) Sixth Eurographics Italian chapter conference, pp 129–136
Claes P, Vandermeulen D, De Greef S, Willems G, Suetens P (2006) Craniofacial reconstruction using a combined statistical model of face shape and soft tissue depths: methodology and validation. Forensic Sci Int 159:S147–S158
Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F (2013) The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging 26:1045–1057
de Buhan M, Nardoni C (2018) A facial reconstruction method based on new mesh deformation techniques. Forensic Sci Res 3:256–273
Decker S, Ford J, Davy-Jow S, Faraut P, Neville W, Hilbelink D (2013) Who is this person? A comparison study of current three-dimensional facial approximation methods. Forensic Sci Int 229:161.e1–161.e8
Deng Q, Zhou M, Shui W, Wu Z, Ji Y, Bai R (2011) A novel skull registration based on global and local deformations for craniofacial reconstruction. Forensic Sci Int 208:95–102
Deng C, Wang D, Chen J, Li K, Yang M, Chen Z, Zhu Z, Yin C, Chen P, Cao D, Yan B, Chen F (2020) Facial soft tissue thickness in Yangtze River delta Han population: accurate assessment and comparative analysis utilizing cone-beam CT. Legal Med 44:101693
Dong Y, Huang L, Feng Z, Bai S, Wu G, Zhao Y (2012) Influence of sex and body mass index on facial soft tissue thickness measurements of the northern Chinese adult population. Forensic Sci Int 222:396.e1–396.e7
Dorfling H, Lockhat Z, Pretorius S, Steyn M, Oettle AC (2018) Facial approximations: characteristics of the eye in a South African sample. Forensic Sci Int 286:46–53
Drgacova A, Dupej J, Veleminska J (2016) Facial soft tissue thicknesses in the present Czech population. Forensic Sci Int 260:106.e1–106.e7
Falsetti A, Simmons-Ehrhardt T, Falsetti C, Ehrhardt C (2018) Facilitating practitioner interaction with 3D craniofacial identification resources. In: Poster presented at the 87th annual meeting of the American Association of Physical Anthropologists, Austin, 11–14 April 2018. Figshare. Available at: https://doi.org/10.6084/m9.figshare.6804224.v1
Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin JC, Pujol S, Bauer C, Jennings D, Fennessy F, Sonka M, Buatti J, Aylward S, Miller JV, Pieper S, Kikinis R (2012) 3D Slicer as an image computing platform for the quantitative imaging network. Magn Res Imaging 30(9):1323–1341
Fernandes CM, Serra Mda C, da Silva JV, Noritomi PY, Pereira FD, Melani RF (2012) Tests of one Brazilian facial reconstruction method using three soft tissue depth sets and familiar assessors. Forensic Sci Int 214:211.e1–211.e7
Forensic Art Certification (2020) International Association of Identification. Retrieved from: https://www.theiai.org/forensic_art.php
Fourie Z, Damstra J, Gerrits P, Ren Y (2010) Accuracy and reliability of facial soft tissue depth measurements using cone beam computer tomography. Forensic Sci Int 199:9–14
Gatliff B, Snow C (1979) From skull to visage. J Biocommun 6:27–30
Gietzen T, Brylka R, Achenbach J, Zum Hebel K, Schomer E, Botsch M, Schwanecke U, Schulze R (2019) A method for automatic forensic facial reconstruction based on dense statistics of soft tissue thickness. PLoS One 14:e0210257
Guyomarc’h P, Dutailly B, Couture C, Coqueugniot H (2012) Anatomical placement of the human eyeball in the orbit – validation using CT scans of living adults and prediction for facial approximation. J Forensic Sci 57(5):1271–1275
Guyomarc’h P, Santos F, Dutailly B, Coqueugniot H (2013) Facial soft tissue depths in French adults: variability, specificity and estimation. Forensic Sci Int 231:411.e1–411.e10
Guyomarc’h P, Ditailly B, Charton J, Santos F, Desbarats P, Coqueugniot H (2014) Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics. J Forensic Sci 59:1502–1516
Hayes S (2014) Facial approximation of ‘Angel’: case specific methodological review. Forensic Sci Int 237:e30–e41
Hayes S (2016) A geometric morphometric evaluation of the Belanglo ‘Angel’ facial approximation. Forensic Sci Int 268:e1–e12
Hu Y, Duan F, Yin B, Zhou M, Sun Y, Wu Z, Geng G (2013) A hierarchical dense deformable model for 3D face reconstruction from skull. Multimed Tools Appl 64:345–364
Hwang HS, Kim K, Moon DN, Kim JH, Wilkinson C (2012a) Reproducibility of facial soft tissue thicknesses for craniofacial reconstruction using cone-beam CT images. J Forensic Sci 57:443–448
Hwang HS, Park MK, Lee WJ, Cho JH, Kim BK, Wilkinson CM (2012b) Facial soft tissue thickness database for craniofacial reconstruction in Korean adults. J Forensic Sci 57:1442–1447
Hwang HS, Choe SY, Hwang JS, Moon DN, Hou Y, Lee WJ, Wilkinson C (2015) Reproducibility of facial soft tissue thickness measurements using cone-beam CT images according to the measurement methods. J Forensic Sci 60:957–965
Jones M (2001) Facial reconstruction using volumetric data. VMV 2001, Stuttgart, pp 21–23
Kim KD, Ruprecht A, Wang G, Lee JB, Dawson DV, Vannier MW (2005) Accuracy of facial soft tissue thickness measurements in personal computer-based multiplanar reconstructed computed tomographic images. Forensic Sci Int 155:28–34
Krogman W (1962) The human skeleton in forensic medicine. Charles C. Thomas, Springfield
Krogman W, Iscan M (1986) The human skeleton in forensic medicine. Charles C. Thomas, Springfield
Kustar A, Forro L, Kalina I, Fazekas F, Honti S, Makra S, Friess M (2013) FACE-R – a 3D database of 400 living individuals’ full head CT- and face scans and preliminary GMM analysis for craniofacial reconstruction. J Forensic Sci 58:1420–1428
Lee WJ, Wilkinson CM (2016) The unfamiliar face effect on forensic craniofacial reconstruction and recognition. Forensic Sci Int 269:21–30
Lodha A, Mehta M, Patel MN, Menon SK (2016) Facial soft tissue thickness database of Gujarati population for forensic craniofacial reconstruction. Egypt J Forensic Sci 6:126–134
Meundi M, David C (2019) Application of cone beam computed tomography in facial soft tissue thickness measurements for craniofacial reconstruction. J Oral Maxillofac Path 23:114–121
Nelson LA, Michael SD (1998) The application of volume deformation to three-dimensional facial reconstruction: a comparison with previous techniques. Forensic Sci Int 94:167–181
Panenkoa P, Benus R, Masnicova S, Obertova Z, Grunt J (2012) Facial soft tissue thicknesses of the mid-face for Slovak population. Forensic Sci Int 220:293.e1–293.e6
Parks C, Richard A, Monson K (2014) Preliminary assessment of facial soft tissue thickness utilizing three-dimensional computed tomography models of living individuals. Forensic Sci Int 237:146.e1–146.e10
Phillips VM, Smuts NA (1996) Facial reconstruction: utilization of computerized tomography to measure facial tissue thickness in a mixed racial population. Forensic Sci Int 83:51–59
Potenziani M, Callieri M, Dellepiane M, Corsini M, Ponchio F, Scopigno R (2015) 3DHOP: 3D heritage online presenter. Comput Graph 52:129–141
Quatrehomme G, Cotin S, Subsol G, Delingette H, Garidel Y, Grevin G, Fidrich M, Bailet P, Ollier A (1997) A fully three-dimensional method for facial reconstruction based on deformable models. J Forensic Sci 42:649–652
Rhine JS, Campbell HR (1980) Thickness of facial tissues in American Blacks. J Forensic Sci 25:847–858
Rhine JS, Moore CE (1984) Tables of facial tissue thickness of American Caucasoids in forensic anthropology. Maxwell Museum Technical Series
Ruiz NAP (2013) Facial soft tissue thickness of Colombian adults. Forensic Sci Int 229:160.e1–160.e9
Shrimpton S, Daniels K, de Greef S, Tilotta F, Willems G, Vandermeulen D, Suetens P, Claes P (2014) A spatially-dense regression study of facial form and tissue depth: towards an interactive tool for craniofacial reconstruction. Forensic Sci Int 234:103–110
Shui W, Zhou M, Deng Q, Wu Z, Ji Y, Li K, He T, Jiang H (2016) Densely calculated facial soft tissue thickness for craniofacial reconstruction in Chinese adults. Forensic Sci Int 266:573.e1–573.e12
Simmons-Ehrhardt T, Falsetti CS, Ehrhardt C (2016a) Craniofacial analysis of 3D computed tomography (CT) models and a new method for dense facial tissue depth mapping: a collaboration between forensic science researchers and forensic art practitioners. In: Poster presented at the 68th annual meeting of the American Academy of Forensic Sciences, Las Vegas, 22–27 February 2016. Figshare. Available at: https://doi.org/10.6084/m9.figshare.3582222.v1
Simmons-Ehrhardt T, Falsetti CS, Ehrhardt C (2016b) Innovative uses of CT scans for the enhancement of forensic facial approximation methods: a collaboration between forensic science researchers and facial approximation practitioners. In: Poster presented at the 101st International Association for Identification Forensic Educational Conference, Cincinnati, 7–13 August 2016. Figshare. Available at: https://doi.org/10.6084/m9.figshare.3627162.v1
Simmons-Ehrhardt T, Falsetti C, Falsetti A, Ehrhardt C (2017a) Fileset: procedure for transforming 3D computed tomography (CT) skull and face models to a common orientation. Figshare. Available at: https://doi.org/10.6084/m9.figshare.4924694.v1
Simmons-Ehrhardt T, Falsetti C, Falsetti A, Ehrhardt C (2017b) User guide: dense facial tissue depth mapping of 3D CT models using Meshlab. Figshare. Available at: https://doi.org/10.6084/m9.figshare.5082445.v1
Simmons-Ehrhardt T, Falsetti C, Falsetti A, Ehrhardt C (2018a) Open-source tools for dense facial tissue depth mapping (FTDM) of computed tomography models. Hum Biol 90(1):63–76
Simmons-Ehrhardt T, Falsetti CS, Falsetti A, Ehrhardt C (2018b) Enhancing craniofacial identification methods with CT data. In: Poster presented at the 87th annual meeting of the American Association of Physical Anthropologists, Austin, 11–14 April 2018. Figshare. Available at: https://doi.org/10.6084/m9.figshare.6804092.v2
Simmons-Ehrhardt T, Falsetti A, Falsetti C, Ehrhardt C (2019) Interactive resources for craniofacial identification. In: Poster presented at the 71st annual meeting of the American Academy of Forensic Sciences, Baltimore, 18–23 February 2019. Figshare. Available at: https://doi.org/10.6084/m9.figshare.7749680.v1
Stephan CN (2017) 2018 tallied facial soft tissue thicknesses for adults and sub-adults. Forensic Sci Int 280:113–123
Stephan CN, Claes P (2016) Craniofacial identification: techniques of facial approximation and craniofacial superimposition. In: Blau S, Ubelaker D (eds) Handbook of forensic anthropology and archaeology. Routledge, New York, pp 402–415
Stephan CN, Davidson PL (2008) The placement of the human eyeball and canthi in craniofacial identification. J Forensic Sci 53:612–619
Stephan CN, Simpson EK (2008a) Facial soft tissue depths in craniofacial identification (Part I): an analytical review of the published adult data. J Forensic Sci 53:1257–1272
Stephan CN, Simpson EK (2008b) Facial soft tissue depths in craniofacial identification (Part II): an analytical review of the published sub-adult data. J Forensic Sci 53:1273–1279
Stephan CN, Huang AJR, Davidson PL (2009) Further evidence on the anatomical placement of the human eyeball for facial approximation and craniofacial superimposition. J Forensic Sci 54:267–269
SWGANTH (2011). Available at: https://www.nost.gov/system/files/documents/2018/03/13/swganth_facial_approximation.pdf
Taylor K (2001) Forensic art and illustration. CRC Press, Boca Raton
Tilotta F, Richard F, Glaunes J, Berar M, Gey S, Verdeille S, Rozenholc Y, Gaudy JF (2009) Construction and analysis of a head CT-scan database for craniofacial reconstruction. Forensic Sci Int 191:112.e1–112.e12
Tilotta F, Glaunes J, Richard F, Rozenholc Y (2010) A local technique based on vectorized surfaces for craniofacial reconstruction. Forensic Sci Int 200:50–59
Tu P, Hartley RI, Lorensen WE, Alyassin A, Gupta R (2005) Face reconstructions using flesh deformation modes. In: Clement J, Marks M (eds) Computer-graphic facial reconstruction. Elsevier Academic Press, Burlington, pp 145–162
Turner W, Brown R, Kelliher T, Tu P, Taister M, Miller K (2005) A novel method of automated skull registration for forensic facial approximation. Forensic Sci Int 154:149–158
Turner WD, Tu P, Kelliher T, Brown R (2006) Computer-aided forensics: facial reconstruction. In: Westwood JD, Haluck RS, Hoffman HM, Mogel GT, Phillips R, Robb RA, Vosburgh KG (eds) Medicine meets virtual reality 14. IOS Press, Amsterdam, pp 550–555
Ubelaker DH, Wu Y, Cordero QR (2019) Craniofacial photographic superimposition: new developments. Forensic Sci Int Synergy 1:271–274
Ueno D, Sato J, Igarashi C, Ikeda S, Morita M, Shimoda S, Udagawa T, Shiozaki K, Kobayashi M, Kobayashi K (2011) Accuracy of oral mucosal thickness measurements using spiral computed tomography. J Periodontol 82:829–836
Vandermeulen D, Claes P, Loeckx D, De Greef S, Willems G, Suetens P (2006) Computerized craniofacial reconstruction using CT-derived implicit surface representations. Forensic Sci Int 159S:S164–S174
Vandermeulen D, Claes P, De Greef S, Willems G, Clement J, Suetens P (2012) Automated facial reconstruction. In: Wilkinson C, Rynn C (eds) Craniofacial identification. Cambridge University Press, Cambridge, pp 203–221
Vanezis P, Vanezis M, McCombe G, Niblett T (2000) Facial reconstruction using 3-D computer graphics. Forensic Sci Int 108:81–95

5 Instructional Design of Virtual Learning Resources for Anatomy Education

Nicolette S. Birbara and Nalini Pather

N. S. Birbara · N. Pather (*)
Department of Anatomy, School of Medical Sciences, Faculty of Medicine, UNSW Sydney, Sydney, NSW, Australia
e-mail: [email protected]

Abstract

Virtual learning resources (VLRs) developed using immersive technologies like virtual reality are becoming popular in medical education, particularly in anatomy. However, if VLRs are going to be more widely adopted, it is important that they are designed appropriately. The overarching aim of this study was to propose guidelines for the instructional design of VLRs for anatomy education. More specifically, the study grounded these guidelines within cognitive learning theories through an investigation of the cognitive load imposed by VLRs. This included a comparison of stereoscopic and desktop VLR deliveries and an evaluation of the impact of prior knowledge and university experience. Participants were voluntarily recruited to experience stereoscopic and desktop deliveries of a skull anatomy VLR (UNSW Sydney Ethics #HC16592). A MyndBand® electroencephalography (EEG) headset was used to collect brainwave data, and theta power was used as an objective cognitive load measure. The National Aeronautics and Space Administration task load index (NASA-TLX) was used to collect perceptions as a subjective measure. Both objective and subjective cognitive load measures were higher overall for the stereoscopic delivery and for participants with prior knowledge, and significantly higher for junior students (P = 0.038). Based on this study's results, those of several of our previous studies and the literature, various factors are important to consider in VLR design. These include delivery modality, their application to collaborative learning, physical fidelity, prior knowledge and prior university experience. Overall, the guidelines proposed based on these factors suggest that VLR design should be learner-centred and aim to reduce extraneous cognitive load.

Keywords

Virtual reality · Instructional design · Anatomy education · Cognitive load · Electroencephalography

5.1 Introduction

Instructional design can be described as “a technology for the development of learning experiences and environments which promote the acquisition of specific knowledge and skill by students”, with the aim of making this “more efficient, effective and appealing” (Merrill et al. 1996).


In this description, Merrill et al. (1996) consider instructional design a “technology” founded on the science of instruction: the development of instructional strategies by identifying the variables to consider, and any potential relationships between these variables, and then investigating them. Our previous work has investigated several variables to consider in virtual learning resource (VLR) design for anatomy education, as well as potential relationships between these and the potential impact on VLR effectiveness. The main findings of this work are presented in Table 5.1.

The science of instruction is grounded in learning theories, and educational psychology research has given rise to four major learning theories: association theory, information processing theory, metacognitive theory and social constructivist theory (summarized by Terrell 2006).

These have shifted over time from a behavioural model (association theory) to a cognitive model (information processing, metacognitive and social constructivist theories), which encompasses learners themselves rather than simply the learning process (Terrell 2006). Learning theories under this cognitive model are appropriate for guiding instruction in disciplines involving large amounts of complex information, as they provide an insight into human cognitive architecture and processes and examine how these can be utilised most efficiently for learning. Anatomy is one such discipline, and an example of applying cognitive learning theories to anatomy instruction has been demonstrated by Terrell (2006).

Table 5.1 Findings of previous work investigating variables to consider in virtual learning resource (VLR) design for anatomy education

Research question: Do perceptions differ between highly immersive stereoscopic and less immersive desktop VLR deliveries among anatomy students and tutors? (a)
Variable investigated: VLR delivery modality
Main findings:
• Students considered stereoscopic delivery more useful, while tutors considered desktop delivery more useful and enjoyable
• Higher perceived mental effort for stereoscopic delivery by students and tutors, although greater difference between deliveries and higher ratings overall for students
• Physical discomfort disliked for stereoscopic delivery and difficulty with navigation disliked for desktop delivery
• Exploration considered useful aspect for desktop delivery by students and tutors

Research question: Does prior university experience impact anatomy students’ perceptions of VLRs? (a)
Variable investigated: Prior university experience
Main findings:
• No significant differences in perceptions between junior and senior students

Research question: Are VLRs more effective than traditional resources for face-to-face collaborative learning in anatomy? (b)
Variable investigated: VLRs for face-to-face collaborative learning
Main findings:
• Greater knowledge change for traditional activity, although higher ratings for virtual activity as a learning experience, significantly regarding interest and engagement
• Difficulty with navigation disliked for virtual activity
• Pointing and interacting with models were the main gestures observed for both virtual and traditional activities
• Rotation of turns between participants for virtual activity and allocation of roles for traditional activity

Research question: Do prior anatomy knowledge and university experience impact the effectiveness of VLRs for face-to-face collaborative learning? (b)
Variable investigated: Prior anatomy knowledge and university experience
Main findings:
• Greater knowledge change for virtual activity in cohort with prior knowledge and for traditional activity in cohort without prior knowledge
• Cohort with prior knowledge significantly more likely to recommend virtual activity to others
• Greater percentage of new knowledge retained for traditional activity in junior cohort and for virtual activity in senior cohort
• Higher ratings of virtual activity as a learning experience by junior cohort, significantly regarding motivation

Research question: Does physical fidelity impact the effectiveness of VLRs for anatomy? (c)
Variable investigated: Physical fidelity
Main findings:
• Better knowledge outcomes for high-fidelity activity in cohort without prior knowledge and for low-fidelity activity in cohort with prior knowledge
• Realism considered a liked and useful aspect of high-fidelity VLR by cohort without prior knowledge, although low-fidelity VLR considered significantly more useful for understanding
• Difficulty with navigation disliked for both VLRs

(a) Perceptions study (Birbara et al. 2019)
(b) Collaboration study (submitted for publication)
(c) Fidelity study (submitted for publication)

Terrell implemented five cognitive-based instructional innovations in lectures for an undergraduate anatomy course over a three-year period. The results of this preliminary study demonstrated significant increases in student learning and motivation, with the course's student retention rate doubling over the three-year period and student satisfaction with the course improving.

Khalil and Elkhider (2016) state that the predominant cognitive learning theory in educational psychology is the information processing theory, which forms the basis of John Sweller's cognitive load theory (CLT) (Sweller 1988, 1994; Sweller et al. 1998). Cognitive load refers to the amount of mental effort being used by working memory, the part of memory involved in processing information. According to CLT there are three components of cognitive load: intrinsic load, which is related to the inherent complexity of the information being learnt; extraneous load, which is related to the delivery of this information; and germane load, which is associated with the mental resources required to process the information and encode it to long-term memory (Sweller et al. 1998; van Merrienboer and Sweller 2010).

CLT is centred around the idea that working memory has a limited capacity; if the combination of the three components of cognitive load exceeds this capacity, cognitive overload results (van Merrienboer and Sweller 2010). Therefore, CLT and design principles based on it are concerned with aligning the delivery of learning content with the limitations of working memory by managing intrinsic load, decreasing extraneous load and optimising germane load (van Merrienboer and Sweller 2010). Decreasing extraneous load increases germane load by dedicating more working memory resources to processing the learning content itself rather than its delivery.

Cognitive load is traditionally measured using subjective tools such as the Paas mental effort rating scale (Paas 1992) and the National Aeronautics and Space Administration task load index (NASA-TLX) (Hart and Staveland 1988).


In anatomy education, Kucuk et al. (2016) used the Paas mental effort rating scale to measure cognitive load when learning neuroanatomy using augmented reality (AR) compared to two-dimensional (2D) pictures and text, and Foo et al. (2013) used the NASA-TLX to compare the mental workload experienced when locating anatomical structures in 2D compared to three-dimensional (3D) views. While the Paas mental effort rating scale is a single nine-point scale, the NASA-TLX accounts for different components of workload and allows an overall workload score to be calculated from these components. However, it still only provides a “snapshot” of the workload experienced at a particular point in time, such as immediately following an intervention. It does not allow for the continuous assessment of workload over time, and it is here that objective measures can be more useful.

Electroencephalography (EEG) is one such objective measure, and it is suitable for use in an educational setting as it can measure brain activity in a sensitive but non-invasive manner. The continuous EEG signal is made up of oscillations in various frequencies (Antonenko et al. 2010), with the five main frequency bands, from lowest to highest, being delta (0.1–3.9 Hz), theta (4.0–7.9 Hz), alpha (8.0–12.9 Hz), beta (13–29.9 Hz) and gamma (30–100 Hz; summarised by Kumar and Kumar (2016)). Of these, early studies reported the theta and alpha bands to be indicative of memory and cognitive performance (Gerě and Jaušovec 1999; Klimesch 1999), with later studies showing that these bands are important in working memory processes (Jaušovec and Jaušovec 2012; Maurer et al. 2015). Theta power is proportional to cognitive load and is most prominent in frontal brain regions, while alpha power is inversely proportional and is most prominent in posterior brain regions (Gevins et al. 1998; Klimesch 1999; Gevins and Smith 2003; Klimesch et al. 2005; Sauseng et al. 2005; Holm et al. 2009), although some studies have also reported increases in the alpha band with increasing working memory load (Jensen et al. 2002) as well as differences among individuals (Michels et al. 2008).


In an educational context, Dan and Reiner (2017) used EEG to compare the cognitive load experienced while learning origami in 2D and 3D learning environments, and Makransky et al. (2019) compared the cognitive load experienced for highly immersive head-mounted display (HMD) and less immersive desktop deliveries of a virtual reality (VR) biology simulation. However, the Kucuk et al. (2016), Foo et al. (2013), Dan and Reiner (2017) and Makransky et al. (2019) studies each used either subjective or objective measures of cognitive load in their comparisons of different educational methods; none used both. EEG and the NASA-TLX have been used together in studies outside of education to measure cognitive load in pilots performing scenarios of increasing difficulty (Gentili et al. 2014) and in surgeons during different surgical procedures and exercises (Guru et al. 2015; Morales et al. 2019), but this combined approach has yet to be applied in anatomy education. Additionally, the Guru et al. (2015) study evaluated the performance of only one surgeon, and the other studies mentioned included participants from only a single cohort, notably students with minimal prior knowledge (Foo et al. 2013; Kucuk et al. 2016; Makransky et al. 2019).

Given the shift that has occurred over time in anatomy education away from a predominantly dissection-based approach due to practical, financial and ethical concerns (Chien et al. 2010; Thomas et al. 2010; Kamphuis et al. 2014), as well as the limitations associated with resources such as anatomical models, textbooks and atlases that have been noted in the literature (Nieder et al. 2000; Luursema et al. 2006; Nicholson et al. 2006; Yeom 2011; Ma et al. 2016), technology has become a large part of delivering anatomy content. As such, an increasingly important aspect of instructional design in anatomy is developing technology-based resources, and cognitive learning theories can serve as a useful foundation for this. One that is particularly relevant is Richard Mayer's cognitive theory of multimedia learning (CTML; Mayer 2009, 2014), which was developed from several other theories, including CLT.


Sorden (2012) explains that the CTML is based on three assumptions: first, working memory has two channels through which to receive information, auditory and visual; second, similar to CLT, working memory has a limited capacity; and third, people engage in meaningful learning when they focus on the material to be learnt, mentally organise it and then integrate it with prior knowledge. The theory provides 12 principles for the design of multimedia resources, with multimedia defined as the combination of words, which can be either written or spoken, and pictures, which can be in various graphical forms including drawings, photos, animations and videos (Sorden 2012). At the centre of the theory is the “multimedia principle”, which states that people learn more deeply from this combination than from words alone (Mayer 2009). The 12 principles in the CTML are organised around three types of cognitive processing, and these are synonymous with the three components of cognitive load in CLT: essential processing (synonymous with intrinsic load), which results from the complexity of the material; extraneous processing (synonymous with extraneous load), which results from distractions or poor delivery of information; and generative processing (synonymous with germane load), which results from the learner's motivation to make sense of the information (Sorden 2012). Therefore, similar to CLT, through these 12 principles the CTML aims to manage essential processing, reduce extraneous processing and foster generative processing (Mayer 2009).

Multimedia can encompass not only traditional resources combining words and pictures, such as textbooks, PowerPoint presentations and videos, but also VLRs based on immersive technologies such as VR and AR. However, compared to these more traditional resources, VLRs are unique in that technologies like VR and AR can create highly immersive and realistic learning experiences. The principles in the CTML (Mayer 2009), as well as principles based on CLT (van Merrienboer and Sweller 2010), may therefore not be entirely catered towards the design of VLRs.


The most appropriate instructional design principles for VLRs need to consider other factors that are more specific to the characteristics and capabilities of immersive technologies. Makransky et al. (2019) investigated whether the “redundancy principle” of the CTML, which states that people learn better from graphics and narration than from graphics, narration and printed text (Mayer 2009), was applicable to VR. They found no major overall effect of the redundancy principle, although this could be because they compared the combination of text and narration to text alone rather than to narration alone, which would have aligned more closely with the redundancy principle. Nonetheless, they express the need to develop guidelines for the design of learning resources with immersive technologies. Moreno and Mayer (2007) have explored this by extending the CTML to technologies such as VR and proposing five principles that can be applied when designing interactive multimodal resources using such technologies. However, these principles are very similar to those in the CTML, meaning that evidence-based guidelines specific to VLRs are still limited in this field. A review conducted by Zhu et al. (2014) on the use of AR in healthcare education highlights that there is little evidence to guide the instructional design of AR resources; the authors therefore proposed a potential design framework for this purpose (Zhu et al. 2015). However, this was developed specifically to guide the design of mobile AR apps and was applied only in the context of educating doctors on informed antibiotic prescription. The ADDIE (analysis, design, development, implementation, evaluation) instructional design framework (Peterson 2003) was applied by Codd and Choudhury (2011) in the development of a VR anatomy resource with positive results, although this is a general instructional design framework and is not specifically catered to the design of VLRs.

If VLRs are going to be increasingly adopted in anatomy education, it is important that their design promotes effective learning. Therefore, the overarching aim of this study was to propose guidelines for the instructional design of VLRs for anatomy education.



In order to ground these guidelines within CLT and the CTML, however, the immediate aim of the study was to compare the cognitive load experienced for highly immersive and less immersive VLR deliveries using both objective and subjective measures, as well as to evaluate the impact of (a) prior anatomy knowledge and (b) prior university experience on the cognitive load experienced.

5.2 Methods

5.2.1 Virtual Learning Resource Development

A VLR based on the clinical applied anatomy of the human skull was designed and created using the Unity® gaming platform (Unity Technologies, San Francisco, CA) and has been described in our previous work (Birbara et al. 2019). To briefly summarise, the VLR was based on a clinical scenario and simulated a guided 3D “fly-through” exploration of a virtual 3D skull model. Based on the scenario, as well as learning outcomes for the relevant anatomy to be examined, ten “stations” or “pit stops” were embedded on the virtual 3D skull model, each requiring the completion of a series of tasks (e.g. reading a brief explanation of the skull feature at the pit stop, navigating through a foramen or canal, viewing an explanatory video clip or locating a structure at the pit stop). Guides and hints to locate structures and to navigate to the next pit stop were provided using green circles and arrows.

5.2.2 Virtual Learning Resource Delivery

The VLR was delivered using two different modalities: a highly immersive stereoscopic projection-based system and a less immersive desktop system, both of which have been described in our previous work (Birbara et al. 2019). To briefly summarise, the stereoscopic delivery was achieved using the AVIE 360-degree stereoscopic immersive interactive visualization system located in the iCinema Centre for Interactive Cinema Research at UNSW Sydney (McGinity et al. 2007), and an Xbox® controller (Microsoft, Redmond, WA) was used to navigate through the VLR. The desktop delivery took place in a computer laboratory, and a standard mouse and keyboard were used to navigate through the VLR.

5.2.3 Participants

Ethical clearance for this study was obtained from the UNSW Sydney Human Research Ethics Committee (HC16592). Participants were voluntarily recruited for this study and included junior and senior university students without prior anatomy knowledge, as well as anatomy tutors. All student participants were enrolled in an undergraduate anatomy course which included content on skull anatomy.

5.2.4 Virtual Learning Resource Implementation

The VLR was delivered to participants in small groups. The junior students experienced either the stereoscopic or the desktop VLR delivery only, while the senior students and anatomy tutors experienced the desktop VLR delivery. The stereoscopic delivery involved an externally directed exploration of the VLR, and the desktop delivery involved each participant working through the VLR independently on their own individual computer.

5.2.5 Objective and Subjective Measures of Cognitive Load

During the VLR experiences, participants' EEG recordings were collected as an objective measure of cognitive load using the MyndBand® Bluetooth low energy (BLE) EEG brainwave headset (MyndPlay Ltd, London, United Kingdom).


This is a wireless single-channel EEG headset that uses ThinkGear™ technology (NeuroSky Inc., San Jose, California) consisting of three dry sensors on the forehead to measure brainwave activity at a 512 Hz sampling rate. The three sensors represent the ground, reference and active EEG electrodes, with the active electrode placed in the Fp1 position according to the international 10–20 system (Jasper 1958). Single-channel EEG headsets using ThinkGear™ dry sensor technology have been validated against medical-grade EEG (Johnstone et al. 2012; Rieiro et al. 2019) and used to measure mental workload in previous studies (So et al. 2017; Morales et al. 2019). The raw brainwave data are automatically filtered and processed by the headset to remove artefacts, and both the raw and processed data are transmitted via Bluetooth to a computer running MyndPlay Pro® software (MyndPlay Ltd, London, United Kingdom). The computer used was a Predator Helios 300 laptop (Acer Inc., New Taipei City, Taiwan). Recordings were collected from three volunteer participants in each small group for both VLR deliveries, as three headsets were available.

Participants' perceptions of the workload experienced during their VLR experience were gathered using the NASA-TLX as a subjective measure of cognitive load (Supplementary Material Appendix). The NASA-TLX allows an overall workload score to be calculated based on six 21-point subscales: mental demand, physical demand, temporal demand, performance, effort and frustration (subscale descriptions in Supplementary Material Table 5.S1). The first part of the index involves weighting each subscale based on its perceived contribution to workload using pairwise comparisons. The second part involves rating each subscale from 0 to 100 in increments of five (for the “performance” subscale, 0 = good and 100 = poor; for all other subscales, 0 = low and 100 = high). The weights and raw ratings are multiplied for each subscale to calculate weight-adjusted ratings, which are then summed and divided by 15 to calculate an overall weighted workload score.


Because the tool is multidimensional, accounting for different sources of workload, it provides a more comprehensive assessment of workload than unidimensional tools such as the Paas mental effort rating scale. Additionally, because it involves not only rating each subscale but also weighting them, it allows the sources of workload most important and relevant to the task to be identified. Participant demographic information, including age, sex and the anatomy course being studied, was gathered separately. The NASA-TLX was distributed to all participants electronically immediately following their VLR experience. Completion of the tool was voluntary and not mandatory for participation.
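As a worked illustration of the calculation just described, the following Python sketch computes an overall weighted workload score from one respondent's weights and ratings; the numbers shown are made-up illustrative values, not study data.

```python
# NASA-TLX overall weighted workload: sum of weight-adjusted subscale
# ratings divided by 15 (the number of pairwise comparisons).
SUBSCALES = ["mental demand", "physical demand", "temporal demand",
             "performance", "effort", "frustration"]

# Weights: how many of the 15 pairwise comparisons each subscale won.
weights = {"mental demand": 5, "physical demand": 1, "temporal demand": 2,
           "performance": 3, "effort": 3, "frustration": 1}

# Raw ratings: 0-100 in increments of 5 (performance: 0 = good, 100 = poor).
ratings = {"mental demand": 70, "physical demand": 20, "temporal demand": 40,
           "performance": 35, "effort": 65, "frustration": 30}

def nasa_tlx_workload(weights, ratings):
    """Return the overall weighted workload score (0 = low, 100 = high)."""
    assert sum(weights.values()) == 15, "weights come from 15 pairwise comparisons"
    weight_adjusted = {s: weights[s] * ratings[s] for s in SUBSCALES}
    return sum(weight_adjusted.values()) / 15

print(f"overall weighted workload: {nasa_tlx_workload(weights, ratings):.1f}")
# -> 52.0 for the illustrative values above
```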

5.2.6 Data Analysis

The EEG and NASA-TLX data were analysed to compare the cognitive load experienced for the stereoscopic and desktop VLR deliveries and to evaluate the impact of (a) prior anatomy knowledge and (b) prior university experience on the cognitive load experienced for the desktop VLR delivery. For the EEG data, the medians of the processed theta power values for each participant were analysed. The theta frequency band was chosen due to its prominence in frontal brain regions (Gevins et al. 1998; Klimesch 1999; Gevins and Smith 2003; Klimesch et al. 2005; Holm et al. 2009), the corresponding frontal placement of the MyndBand® sensors and the reported association between theta and working memory performance (Gerě and Jaušovec 1999; Klimesch 1999; Jaušovec and Jaušovec 2012; Maurer et al. 2015). For the NASA-TLX data, the weight-adjusted ratings for each subscale as well as the overall weighted workload scores were analysed. All statistical analysis was conducted using SPSS Statistics, version 24 (IBM Corp., Armonk, NY) and a P-value of
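Although the analysis above uses the theta power values already processed by the headset, a comparable per-participant summary can be computed from the raw 512 Hz signal. The sketch below is illustrative only, not the study's pipeline: it applies Welch's method from SciPy to a placeholder array standing in for one participant's raw Fp1 recording.

```python
# Illustrative theta-band (4.0-7.9 Hz) power from a raw single-channel
# recording; `raw_eeg` is a placeholder 1-D array, not study data.
import numpy as np
from scipy.signal import welch

FS = 512            # MyndBand sampling rate (Hz)
THETA = (4.0, 7.9)  # theta band as defined in the chapter (Hz)

def theta_power(signal, fs=FS, band=THETA):
    """Band power via Welch's power spectral density estimate (2 s windows)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band

# Per-participant summary: median theta power across 2 s epochs.
rng = np.random.default_rng(0)
raw_eeg = rng.normal(size=FS * 60)          # stand-in for a 60 s recording
epochs = raw_eeg.reshape(-1, FS * 2)        # non-overlapping 2 s epochs
print(np.median([theta_power(e) for e in epochs]))
```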