
Simulation-based Medical Training

Simulation-based Medical Training: A User-Centred Design Perspective

By

Erik Lövquist

Simulation-based Medical Training: A User-Centred Design Perspective, by Erik Lövquist

This book first published 2011

Cambridge Scholars Publishing
12 Back Chapman Street, Newcastle upon Tyne, NE6 2XX, UK

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Copyright © 2011 by Erik Lövquist

All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN (10): 1-4438-2946-3, ISBN (13): 978-1-4438-2946-5

TABLE OF CONTENTS

List of Illustrations .......... vii
List of Tables .......... ix
Preface .......... xi
Acknowledgements .......... xiii
Chapter One .......... 1
Background to Study
Chapter Two .......... 7
Virtual Reality and Web-Based Training
Chapter Three .......... 13
Data Sources, Data Collection and Data Analysis Techniques
Chapter Four .......... 23
The User and the Development Process
Chapter Five .......... 35
Case Study 1: The DBMT-Project
Chapter Six .......... 67
Case Study 2: The MedCAP-Project
Chapter Seven .......... 97
User Roles and User Guidance
Chapter Eight .......... 127
Conclusion and Summary
Appendix A .......... 133
Data from Usability Study


Appendix B .......... 137
Data from Trial Implementation
Appendix C .......... 141
Shared Language and Concepts
Appendix D .......... 145
Selection of LMS and Simulator Images
Bibliography .......... 149
Index .......... 159

LIST OF ILLUSTRATIONS

2-1 Example of a VR-based medical training system .......... 9
2-2 A screen shot of OpenLabyrinth .......... 11
5-1 Illustration of a patient being prepared for SA .......... 36
5-2 Immersive workbench and haptic device .......... 41
5-3 The figure shows a screen-shot of the prototype .......... 43
5-4 Illustration of the landscape testing environment .......... 47
5-5 Illustration of the haptic tissue model .......... 50
5-6 Screen-shot of a prototype .......... 51
5-7 Screen-shots of the visualisations incorporated into the simulator .......... 52
5-8 The usability-tested version of the prototype .......... 58
5-9 Trainee and trainer at a training course in Emergency Medicine .......... 63
6-1 Photograph of paper prototype .......... 75
6-2 Example of photographs taken by the clinician .......... 77
6-3 A screen-shot of the prototype interface implemented in Moodle .......... 78
6-4 The picture sent to a clinician .......... 80
6-5 The mock-up prototype sent to a clinician .......... 81
6-6 Two of the final textures with marked landmarks .......... 82
6-7 The main components of the system architecture .......... 86
6-8 Photographs taken at the usability evaluation stage .......... 89
6-9 Metrics score on paramedian patient .......... 92
6-10 The number of competencies for each group .......... 93
6-11 Example of an expert anaesthetist’s competence level .......... 94
7-1 The managing clinician’s role in the projects .......... 101
7-2 The relation between user roles and the development process .......... 124

LIST OF TABLES

4-1 Relation user participation and development process .......... 31
5-1 User groups, user roles and methods used during the problem statement phase .......... 38
5-2 User groups, user roles and methods used during the domain analysis phase .......... 40
5-3 User groups, user roles and methods used during the design phase .......... 46
5-4 User groups, user roles and methods used during the formal evaluation .......... 48
5-5 User groups, user roles and methods used during the second design and evaluation phase .......... 53
5-6 User groups, user roles and methods used during the usability evaluation .......... 56
5-7 User groups, user roles and methods used during the third design phase .......... 58
5-8 The table shows examples of three components that were used as part of the global rating scale during the simulator and clinical assessment .......... 60
5-9 User groups, user roles and methods used during the clinical trials .......... 61
5-10 User groups, user roles and methods used during the trial implementation .......... 65
6-1 User groups, user roles and methods used during the problem statement phase .......... 70
6-2 The table illustrates competence 380 to 383 from the competence space .......... 71
6-3 The table illustrates four of the metrics that were identified during the focus groups .......... 72
6-4 User groups, user roles and methods used during the analysis .......... 73
6-5 User groups, user roles and methods used during the development of the LMS .......... 79
6-6 The metrics that were implemented in the simulator .......... 84
6-7 User groups, user roles and methods used during the development of the assessment system .......... 87
6-8 User groups, user roles and methods used during the formal evaluation .......... 90
6-9 User groups, user roles and methods used during the clinical trial .......... 95
7-1 The table summarises how the different roles of the users contributed to the development process .......... 113

PREFACE

This volume explores the development process of a VR and web-based medical training system from a user-centred perspective. It highlights the importance of user participation in this context by analysing two case studies concerned with the development of a VR and web-based medical training system for Spinal Anaesthesia. This research investigates who should be considered as users, when and why users should be involved in the development process and how to utilise their guidance efficiently. In order to analyse these aspects, empirical data was collected from the case studies by applying participant observation, document analysis and interviews. The analysis of the data is based on literature discussing users and the development of computer systems in Information Systems (IS) and Human-Computer Interaction (HCI). The findings illustrate the relation between user participation and the development process of a VR and web-based medical training system. User groups, along with their input and degrees of participation and influence, are classified. The findings show how a democratic arrangement between users and developers is beneficial and maybe even mandatory in order to utilise the users’ guidance efficiently. In this arrangement, the use of prototypes is instrumental in bridging the expertise and knowledge gap between users and developers. The results of this research may aid other research teams developing VR and web-based medical training systems in deciding if, why and how to involve relevant user groups in the overall development process.

ACKNOWLEDGEMENTS

This volume is based on a PhD dissertation that was presented to the University of Limerick, Ireland, during 2010. The research was performed in conjunction with two research projects: Design Based Medical Training (DBMT) and Competence Assessment Procedure in Medicine (MedCAP). DBMT was partly funded by the HSE, Ireland. MedCAP was funded by the European Commission through grant no. LLP/LdV/TOI/2007/IRL-513 within the Lifelong Learning Programme, Leonardo da Vinci sub-programme. The research in this volume is based on observations, interviews and project documents emerging from these two projects. The research presented in this volume was also partly supported by the Computer Science and Information Systems department (CSIS), University of Limerick, Ireland.

I would like to thank Dr. Annette Aboulafia for all her guidance and support throughout this work. I would also like to acknowledge Prof. Nigel W. John, Bangor University, UK and Dr. Chris Exton, University of Limerick, Ireland for reading and improving on the details of the presentation of this research. In addition, Dr. Micheal O’Haodha, University of Limerick, Ireland kindly proofread parts of this work. Finally, I would like to deeply and sincerely thank and acknowledge Grace McKinley. Grace spent endless hours proofreading this volume. She not only helped me with the language but also ensured that the writing actually said what I was trying to say.

CHAPTER ONE
BACKGROUND TO STUDY

Summary of research

This volume explores the relation between user participation and the development process of a Virtual Reality (VR) and web-based medical training and assessment system1. It presents a perspective on VR and web-based medical training and assessment where user participation is an integral and necessary part of the development process of such systems. The subject of how to involve users in the development of computer systems has received much attention within Information Systems (IS) and Human-Computer Interaction (HCI). Recently, this literature has focussed on how to involve users at the right time and in the right form, depending on the context of a system’s use. To date, there is little written on user participation and the specific context of VR and web-based medical training systems. Developments of such systems are dependent on a number of aspects, such as understanding complex medical elements, identification of training objectives, validation of training, etc. This volume discusses how users (trainers, trainees, medical educators and others) can aid in achieving these aspects.

The primary research question that this research sets out to answer is: “What is the relation between user participation and the development process of a VR and web-based medical training and assessment system?”. This question has two parts: “What were the roles of the users in the development process during the two case studies?” and “How was user guidance accomplished during the two case studies?”. In qualitative research, a hypothesis is generally not stated during the initial stages of a research process (Kaplan and Maxwell, 1994). Research questions evolve with the researcher’s understanding of the activities and behaviours under study. However, if the research in this volume were approached as a quantitative study, the research questions would translate

1 Training in this field refers to the acquisition of skills, cognitive or psychomotor (Gallagher et al., 2005).


into the hypotheses: a) The development process of a VR and web-based system consists of separate phases, b) users have certain roles in this development process and c) users are instrumental in guiding developers in the process of developing such systems. This research’s perspective on user participation is illustrated through a qualitative, empirical data analysis of two case studies. The case studies present two research projects concerned with the creation of a VR and web-based medical training and assessment system for Spinal Anaesthesia. In order to understand the development process’ dependency on user participation, an analytical framework based on literature from IS and HCI as well as empirical findings is applied. This framework is used to determine user-related factors such as user groups, user input, methods used to involve the users, degree of participation and influence over the development process. It also provides a frame for dividing the development process into discrete phases. The analytical framework is applied to the two case studies to investigate the relation between user participation and the phases of the development process of the VR and web-based medical training and assessment system.

Research context and issue

VR and web-based medical training systems are developed from within a highly specialised area. Hospitals are under pressure from governments, legislators and the general public to deliver safe and efficient high-quality medical care. Medical trainers, teaching hospitals and medical training bodies need to review their current ways of training junior doctors (Bradley, 2006; Aggarwal and Darzi, 2006). The practice of junior doctors performing potentially hazardous procedures on patients has, until recently, been regarded as a “necessary evil” (Reznick et al., 2006). However, Reznick et al. (2006) argue that this is in conflict with the demand for providing safe medical care to patients. As a response, medical trainers and educators are looking for alternative ways of teaching medical procedures. VR and web-based training has emerged as a potential alternative (Vozenilek et al., 2004).

There is a need to involve users (trainers, trainees, medical educators, medical society, etc.) to create “appropriately” designed VR and web-based training systems. John (2008) argues that the guidance of domain experts, along with other factors such as appropriate use of technology, training theory and validation of training, is critical for the success of simulation-based medical training systems. Also, Berg (2001) argues that it is necessary to include users in the development and implementation


process of health care technologies. In addition, Lettl (2007) describes how users can play a significant role in the process of developing new and innovative technologies for supporting health care. Shah and Robinson (2006) argue that users provide valuable input to the development of new medical devices during different stages of the development process. However, there is little literature available discussing the potential roles of users when developing training systems in the context of VR and web-based medical training. In addition, little is discussed in the literature about how to effectively utilise guidance from domain experts and other potential users during the development of such training systems.

In contrast, there is extensive research on user participation in the field of Information Systems. In this field of research, several studies provide empirical evidence on the usefulness of user participation in a computer system’s development (see, e.g., Lin and Shao, 2000). Successful user participation depends on a range of factors. For example, research has indicated that for the participation to be effective, the users have to believe that the development is meaningful and not feel forced to contribute (Hunton and Beeler, 1997). Cavaye (1995) argues that user participation depends on aspects such as top management commitment, willingness to participate and ability to participate.

User participation is also a key perspective in the discipline of HCI. The discipline of HCI is concerned with understanding the relation between users, tasks and the machine (Dix et al., 2004). In HCI, users are considered to have a central role in the development of useful and usable computer systems. Dix et al. (2004) argue that computer systems have to be designed so that people (users) have the freedom to use the system as they please, while also allowing them to make mistakes. To achieve this, the user has to be the number one priority when designing a computer system. Usability evaluation is a specific method that can be used in the development process of a computer system to involve users. It is used to investigate how well a user is able to perform certain tasks on a system and how they perceive it (user satisfaction) in order to improve on the system’s usefulness (Nielsen, 1994). Participatory Design (PD) is another approach which promotes user participation (Asaro, 2000). PD is based on a democratic development arrangement where users are encouraged to participate in the development process to ensure that their opinions and ideas influence the final design of a system.


Aim of research

Two specific arguments which relate to user participation are of particular interest to what is discussed in this volume. The first argument is that user participation is context-specific (Marti and Bannon, 2009). The authors state that user participation has to be considered in relation to a system’s intended use, who the users are and whether they can contribute to the development process. The authors also state that the role and value of a user varies depending on what stage a development process is at. The second argument is that the real value of user participation is gained when a development process is dependent on specific domain expertise (He and King, 2008). The authors state that the challenge within user participation is to identify appropriate domain experts that are willing to participate in the development process and to know how to utilise their expertise efficiently. Hence, in order to benefit from user participation, it has to be established how to “involve the ‘right’ users at the ‘right’ time in the ‘right form’” (Lettl, 2007, p. 53).

The aim of this volume is to highlight the importance of user participation in the context of VR and web-based medical training. More specifically, the aim is to identify who should be considered as users, when and why they should be involved in the process and how to utilise their guidance efficiently. This volume provides a novel perspective on the development process of VR and web-based medical training systems. The perspective is based on applying existing approaches and views of user participation from IS and HCI to empirical data collected from two case studies. This perspective may support researchers in this area in involving users in the development process of VR and web-based medical training systems and as a result help to develop appropriately designed systems. In particular, this work illustrates a) how the development process of a VR and web-based medical training system can be described as discrete, but interdependent phases, b) the different roles of users during the phases of the development process and c) how the expertise from users can be utilised in order to efficiently guide developers in the process of creating such systems.

Structure of this volume

Following this introduction, Chapter 2 provides a background to the context of VR and web-based medical training. It contextualises the research within the current discussions of medical training and presents a


few examples of where VR and web-based technology has already been used for medical training and assessment. This chapter also argues why it is important to involve users in this context. Chapter 3 presents the approach applied for investigating the research questions. The approach consists of collecting data from two case studies by using participant observation, document analysis and interviews. Coding and interpretive writing have been applied to formatively analyse the collected data. Chapter 4 describes the framework that is used to analyse the relation between user participation and the different phases of the development process. The chapter discusses relevant user literature and adopts an existing approach of a system’s development process to separate the case studies into discrete development phases. Chapter 5 analyses the empirical data that was collected during the first case study. It provides examples of each development phase and how each phase was dependent on user participation. User groups, forms of user input and how users were involved in the development process are identified. Chapter 6 analyses the relation between user participation and the development process during the second case study. This follows the same format as that of Chapter 5. Chapter 7 provides a summarising analysis of the results from the data analysis of the two case studies. It analyses the different user groups in detail, how their participation influenced the development process and how user guidance was accomplished. Finally, Chapter 8 draws conclusions from the results of the summarising analysis and discusses the contribution of this volume.

Chapter 2 outlines why there is a need to improve medical training and how VR and web-based technology has been used in medical training. In this chapter it is argued why user participation is important in this context.

CHAPTER TWO
VIRTUAL REALITY AND WEB-BASED TRAINING

Factors influencing the use of VR and web-based training

It is argued that there is an urgent need to reform worldwide medical training and revise existing medical curricula (Aggarwal and Darzi, 2006; Bradley, 2006; Harden et al., 2000). Bradley (2006) describes insufficiently prepared junior doctors with little practical experience as a major driver of this reform. A survey of junior doctors showed that they found their practical skills training inadequate and were frequently asked to perform procedures of which they had no experience (Mason and Strike, 2003). The traditional model of learning procedures on live patients under supervision of a trainer, commonly referred to as the apprenticeship model, has been criticised. A number of different arguments have been given. According to Reznick et al. (2006), the apprenticeship model has been regarded as a necessary evil due to a lack of satisfactory alternatives. They argue that it is unethical to train on live patients and that exposure to rare patient conditions depends on chance. They also argue that it is difficult to standardise training as each individual patient is different and it is expensive to involve patients in the training. Maran and Galvin (2003) state that it is difficult to formally train doctors in how to deal with ambiguous medical situations because of the restricted availability of patients for training. Ziv et al. (2006) describe the contradiction between using live patients for medical training and simultaneously trying to ensure the delivery of safe health care. At the same time, Bradley and Postlethwaite (2003) argue that the medical sector has to ensure lifelong learning, training and professional development of doctors. It is these and similar concerns which need to direct the development of VR and web-based medical training systems. If training issues are addressed appropriately, it is likely to result in a system that is able to counteract at least some of the aforementioned drawbacks of the


apprenticeship model. In order to address training issues appropriately, it is mandatory to involve medical trainers and trainees in the development process.

The current issues in medical training have led to an on-going, worldwide change in how medical procedures are trained (Issenberg et al., 2003). New approaches are being sought, which would allow junior doctors to efficiently learn, practice and be assessed on both routine and uncommon medical procedures in safe and controlled environments (Vozenilek et al., 2004). VR and web-based training has been used to train and assess doctors’ skills in safe and controlled environments and has been widely discussed as a potential way of improving medical training. If applied correctly, such technologies offer great potential for addressing the current needs of medical training, see, e.g., Scalese et al. (2008), Dawson (2006), Friedrich (2002), Ziv et al. (2006). Among the benefits of such systems are diagnostic screening (Satava et al., 1998; McCloy et al., 2001), customisable patient reproduction (Issenberg et al., 2005), objective assessment (Champion and Gallagher, 2003; Chou and Handa, 2006), immediate feedback (Chou and Handa, 2006), patient rehearsal (Kneebone and Aggarwal, 2009) and intelligent computer-based tutoring (Champion and Gallagher, 2003). At the same time, it is crucial to validate that the training works and can be transferred to improved performance in the operating theatre (Salas et al., 2005; Sutherland et al., 2006). Integrating aspects that can provide efficient training and assessment in a VR and web-based system is challenging. Identifying relevant training objectives1 and translating them into system requirements requires input from medical trainers and medical experts. Such experts also need to be involved in the validation of a system.

VR and web-based medical training systems

VR can be defined as immersive, computer-generated environments, which use technology to visually and physically interface the user with the virtual environment (Brooks, 1999). In medical training, these environments can be populated with virtual, interactive patients to simulate medical procedures and scenarios. Haptic technology can be used to create physical interfaces with virtual environments. Haptic technology refers to computer-controlled robotic

1 The development of simulation-based training has to be guided by relevant training objectives (Henriksen and Patterson, 2007).


equipment (so-called haptic devices) that can track the user’s movements and convey the sensation of virtual objects through force feedback or vibration stimuli (Srinivasan and Basdogan, 1997). See Figure 2-1 for an example of a haptic-based medical training system.

Fig. 2-1. Example of a VR-based medical training system for arthroscopic knee and shoulder surgery (Image courtesy of GVM).
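The force-feedback principle outlined above is commonly implemented with a penalty-based (spring) model, in which the rendered force grows with the depth of penetration into a virtual surface. The sketch below is illustrative only: it is not the haptic tissue model developed in the case studies, and the one-dimensional geometry and stiffness value are assumptions.

    def penalty_force(tool_depth_mm: float, stiffness_n_per_mm: float = 0.5) -> float:
        """Return the reaction force (in newtons) a haptic device could render
        when the virtual tool has penetrated a tissue surface by tool_depth_mm.

        Penalty-based rendering: force is proportional to penetration depth
        (Hooke's law) and zero when the tool is outside the tissue.
        Stiffness value is an arbitrary illustrative assumption.
        """
        if tool_depth_mm <= 0.0:  # tool is above the surface: no contact, no force
            return 0.0
        return stiffness_n_per_mm * tool_depth_mm

    # In practice a calculation like this runs inside the device's
    # high-frequency (roughly kilohertz) control loop; here we simply print
    # the force for a few example penetration depths in millimetres.
    for depth in (-1.0, 0.0, 1.5, 3.0):
        print(f"depth={depth:5.1f} mm -> force={penalty_force(depth):4.2f} N")

Real simulators layer more elaborate tissue models on top of this idea (varying stiffness per tissue layer, friction, a "pop" on puncture), but the core loop of mapping tracked position to an output force remains the same.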

VR-based systems can be used to recreate complex medical procedures. Medical training systems that utilise VR-based technology have been developed for procedures such as ultrasound guided needle puncture (Vidal et al., 2008), minimally invasive surgery (Basdogan et al., 2007),


transferring knowledge of bio-elasticity (Nakao et al., 2006), chest tube insertion (Cline et al., 2008) and palpation (Howell et al., 2008). The case studies reported in this volume utilised VR-technology as part of the medical training system that was developed for Spinal Anaesthesia (see Chapter 5 and 6).

VR-based systems offer the opportunity to train cognitive and psychomotor skills. However, the development of such systems relies on the guidance from medical trainers and medical experts to help developers understand the skills necessary for performing a procedure.

Web-based training provides distributed, interactive learning environments that can be accessed from anywhere with a computer connected to the Internet (Kahn, 2001). Web-based systems have been applied to undergraduate and postgraduate training in the US as an attempt to enhance the traditional teaching model (Henriksen and Dayton, 2006). The web has also been used to teach surgical skills online (John et al., 2001). Recent advances in web technology have meant that interactive medical procedures and anatomy can be displayed directly in a web-browser (John, 2007). John (2007) describes how the web can be used in general medical education, diagnosis of patients, training of medical procedures and collaborative training.

The web has also been used to create “Virtual Patients” (Ellaway et al., 2009). A virtual patient is a web-based simulation of a real-life patient. It can be used to train novices in how to diagnose patients. An example of a system simulating a patient for diagnosis is OpenLabyrinth2, see Figure 2-2. OpenLabyrinth is a web-based training tool that presents text, pictures and different options in order to diagnose a patient. The system is based on real patient data and reacts dynamically depending on the user’s clinical decisions.

Another web-based tool that has been used to enhance the learning of doctors is the Learning Management System (LMS). An LMS provides a web-based platform for managing learners and course material. Such systems can manage several hundred users simultaneously and are used to present learning material and manage exams electronically. The open-source system Moodle3 and the commercially available Blackboard4 are two examples of LMS that have been used in medical education. An LMS was used as part of the medical training system that was developed during the second case study (see Chapter 6).

2 http://labyrinth.mvm.ed.ac.uk/ (accessed 4th March, 2010)
3 www.moodle.org (accessed 19th Nov, 2009)
4 www.blackboard.com (accessed 19th Nov, 2009)


Fig. 2-2. A screen shot of a Virtual Patient being diagnosed using OpenLabyrinth.5
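Conceptually, a virtual patient of this kind is a branching graph of scenario nodes, where each clinical decision leads to a new node. The sketch below is a hypothetical simplification for illustration; it does not reflect OpenLabyrinth's actual data model, and the node texts and option labels are invented.

    from dataclasses import dataclass, field

    @dataclass
    class ScenarioNode:
        """One step of a branching virtual-patient case: narrative text plus
        the clinical decisions the trainee can choose from."""
        text: str
        options: dict = field(default_factory=dict)  # option label -> next node id

    # Hypothetical three-node case (invented content, for illustration only).
    case = {
        "start": ScenarioNode(
            "A patient presents with chest pain.",
            {"Order an ECG": "ecg", "Discharge the patient": "discharged"},
        ),
        "ecg": ScenarioNode("The ECG shows ST elevation.", {}),
        "discharged": ScenarioNode("The patient deteriorates at home.", {}),
    }

    def play(node_id: str, choices: list) -> str:
        """Walk the graph following the trainee's choices; return the end node id."""
        for choice in choices:
            node_id = case[node_id].options[choice]
        return node_id

    print(play("start", ["Order an ECG"]))  # -> "ecg"

The point of such a structure is that the consequences of each decision are encoded by clinicians in the graph itself, which is one concrete reason why their participation is needed during development.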

Web-based systems offer new opportunities for training medical procedures. They offer the opportunity to train procedural skills and to teach how to react in complex medical situations. However, to create such systems, developers are dependent on the specialised expertise and skills of medical trainers and medical experts.

Summary

Chapter 2 describes the context of VR and web-based medical training and argues that the development process of such systems is dependent on users’ participation and input. This dependency will be addressed in the following chapters by analysing the roles of the users and their guidance of the development process during two case studies. The investigation of the users’ roles will focus on how users supported the identification of training objectives and helped ensure a system’s validity and usefulness. The investigation of how user guidance was accomplished will focus on how the medical expertise and specialised skills of users were utilised during the development process of a VR and web-based medical training system. Hence, this research will address the following two questions (as

5 The figure is a screen-shot taken from: http://labyrinth.sgul.ac.uk/openlabyrinth/mnode.asp?id=qgxlrdbarsx9qarsx9qgxlrdb1rx7jz (accessed 19th Nov, 2009)


mentioned before): 1) “What were the roles of the users in the development process during the two case studies?” and 2) “How was user guidance accomplished during the two case studies?”.

CHAPTER THREE
DATA SOURCES, DATA COLLECTION AND DATA ANALYSIS TECHNIQUES

This chapter describes the methods used for collecting and analysing empirical data from user participation during the development process of a VR and web-based medical training system.

Data sources

Two research projects are used to analyse the relation between user participation and the development process of a medical training system. The first project was Design Based Medical Training (DBMT). This project is referred to as Project 1 hereafter. It was an open-ended research project with the aim of improving the training of Spinal Anaesthesia by using VR technology (see Chapter 5 for more information). The second project was Competence Assessment Procedure for Medical Procedures (MedCAP). This project is referred to as Project 2 hereafter. It involved the design and implementation of a VR and web-based competence assessment procedure for Spinal Anaesthesia (see Chapter 6 for more information).

Data collection: Qualitative research

The research described in this volume uses a qualitative approach for collecting and analysing data. Qualitative research is concerned with generating theoretical interpretations of empirical data for describing behaviours, activities and social interactions (Myers, 2000). It is used for understanding how and why behaviours occurred, compared to quantitative research which is usually concerned with what happened and whether that observation is statistically valid. A main difference from quantitative research is that qualitative research is open-ended and tries to generate new ideas and theoretical propositions, rather than trying to answer predefined hypotheses for generalising to the entire population (Myers, 2000).


According to Nandhakumar and Avison (1999), the collection and analysis of qualitative data is considered as a suitable approach for generating theoretical interpretations of context-specific development processes of computer systems. Based on the findings of Nandhakumar and Avison (1999), it was deemed appropriate to apply qualitative investigations to this volume, given that the desired outcome was to generate theoretical interpretations of user participation within the development process. The qualitative approach will aid in understanding and describing the behaviours and activities of users and developers in relation to the development process.

Participant observation in the field

Qualitative research generally requires the researcher to do observations in the field (Patton, 2002). According to Patton (2002), being in the field helps to capture the context of the study. It also helps to avoid relying on previously held conceptualisations of the field and aids in identifying patterns and routines which the inhabitants might not be aware of themselves. One particular method for doing observations in the field is participant observation. The method is used for engaging the researcher in a community and actively participating in its activities in order to gain an in-depth understanding of underlying behaviours (Marshall and Rossman, 2006). Participant observation is commonly used in qualitative research for collecting empirical data. This requires extensive time in the field to understand observed behaviours and activities.

The investigations conducted in this volume were influenced by participant observation. To investigate user participation, the development of a medical training system and the relation between both, the author of this work participated as a system developer during Project 1 and 2. The two projects gave the author the opportunity to participate unobtrusively as part of a development team. The author had full access to project material (documents, emails, pictures, minutes from meetings etc.) and interacted regularly with the rest of the members of the development team. The author participated actively in the majority of the development process. Participant observation generated new perspectives of medical training systems development. It helped identify and refine relevant research themes and categories from analysing empirical data (see Chapter 3 – Data collection and analysis process).


Field notes

Field notes are considered an important aspect when doing fieldwork (Patton, 2002). According to Patton (2002), no specific guidelines on how to take notes exist; it depends on the researcher’s style and the setting. In the setting of this volume, field notes were taken for collecting observational data of the development process during the two projects. However, as the projects required the researcher’s active participation, it proved difficult to take extensive field notes during meetings with the development team. It has been argued that observations should be as unobtrusive as possible (Seaman, 1999) and also that the technique of field notes has to be adapted to each unique observational situation and context (Patton, 2002). To suit the specific study circumstances, general notes were taken on a wide range of aspects of the project, such as design decisions and action points. Observations, possible reflections and high-level analysis of the process were added to the field notes afterwards. The notes were transcribed and were used in the final analysis of user participation and the development process.

Interviews

Interviewing is another common method for gathering data in qualitative enquiry (Seaman, 1999). Interviews provide clarification on aspects that arise during participant observation. Hence, interviews and participant observation are often used in combination during field studies. Short, informal interviews were conducted in conjunction with participant observation during Project 1 and 2. The interviews consisted of only one or a few questions. The participants’ responses allowed the users’ perspective on the development process to be gained, which might have otherwise been missed. The questions were generally asked during work with the users, e.g., a design session or a project meeting. However, the questions were asked in a manner that did not distract from the ongoing activity. At the end of the two projects, the key participating clinicians and developers were interviewed using semi-structured interviews. Seaman (1999) says that semi-structured interviews can help both to confirm aspects that were expected prior to the interview and to generate new, unanticipated data. They facilitate open-ended discussions in addition to pre-defined questions. The overall goal of using semi-structured interviews for this volume was to get the clinicians’ view on the development process, as a complement to the observations. The questions


posed to the clinicians during the final semi-structured interviews were based on findings from the formative data analysis process (see Chapter 3 – Data collection and analysis process). However, unexpected issues emerged from the interviews as well. The interviews were recorded and transcribed and were used in the final analysis.

Documents and artefacts collection

In qualitative studies, documents and artefacts are often considered as valuable sources of information when studying a specific context or situation (Kaplan and Maxwell, 1994). The two projects resulted in a large quantity of documents and artefacts. For example, the physical distance between clinicians, developers and other team members forced the development team to rely on instant messaging and emails. According to Prior (2007), documents are representations of human interaction and give structure to a community, not only in regard to their content, but also in how they are used and created. Prior (2007) argues that documents can hold an extensive amount of information about a community, which consequently requires careful consideration and analysis. Documents and artefacts are included in the final analysis of this volume for creating a rich description of the activities involved in the development process of the two projects. The documents are analysed based on their content and how user participation influenced how they were used and created during the development process. Examples of documents include project proposals, minutes from meetings, emails, chats and assessment and learning material. Examples of artefacts include 3D models of anatomy, pictures of patients and anatomy, CT1-data and the resulting system.

1 Computed tomography (CT): a medical imaging method for acquiring three-dimensional images of bodily structures (internal and external).

Quality of data collection

The collection process of qualitative data is considered to have a major impact on the overall quality of research (Klein and Meyers, 1999). Klein and Meyers (1999) argue that the relationship between researcher and participants affects the quality and “truthfulness” of the data. The presence of the researcher might alter participants’ responses during interviews or how they act in front of the observer (Kaplan and Maxwell, 1994). The researcher in this work participated as a member of the development team


on a day-to-day basis. Hence, the observed events and behaviours were not likely to have been affected by the researcher’s presence. Kaplan and Maxwell (1994) argue that the collection of a large amount of data increases the quality of qualitative research. The two projects generated a large amount of data over time. The author of this work participated as a developer during Project 1 and 2 for three and a half years. This prolonged exposure to the development process of a VR and web-based medical training system allowed the author to collect and analyse a large amount of data.

Triangulation is the utilisation of two or more data collection methods. It is argued to be a valuable approach for increasing the validity of data in qualitative research (Mays and Pope, 1995). If the same results are given by different data collection methods, it allows the researcher to internally validate the data. Three different data collection methods were applied during Project 1 and 2: participant observation, document analysis and interviews. By applying these methods, it was possible to disregard data that was not supported by two or more of these. At the same time, it strengthened the confidence in observations that were supported by two or more of the applied methods.

Data collection and analysis process

Qualitative research is an on-going, iterative process where research questions and ideas are refined over time. According to Kaplan and Maxwell (1994) a qualitative research process aims to: “Identify themes; develop categories; and explore similarities and differences in the data, and relationships among them” (p. 41-42). This section describes the formative data collection and analysis process and how themes and categories were identified and refined. The applied research methods (participant observation, document collection and interviews) were overlapping and dependent on each other. Data was collected and analysed during the research process, which allowed iterative use of research methods as a response to previously observed or identified behaviours and activities.

During the initial stages of this research, the main method for collecting data was participant observation. Empirical data was collected during Project 1, which allowed initial themes and research questions to be identified and formulated. During this part of the research process, the development of the VR and web-based medical training system in Project 1 appeared to be highly complex. Simultaneously, a literature review of the research domain was performed. The literature discussing medical


training systems was divided between technical papers, validation papers and training theory papers. Based on the literature review and empirical findings from the participant observation, the main conclusion at this stage of the process was that these issues (technical development, validation and training theory) needed to be closely connected, but were nonetheless discussed separately in the literature. The different issues were difficult to understand as a combined process. It was not clear how they related to “traditional” design and development approaches. The development process became the first theme of this research. At the same time, it was observed how the users in Project 1 significantly aided the developers’ understanding of training elements (training objectives, validation of training, etc.). It was also observed how the development process was heavily dependent on the users’ medical expertise and procedural skills. Users became the second and main theme of this research. Hence, empirical data from participant observation in combination with literature reviews generated the two themes.

To investigate the themes in further detail, other research methods were applied in addition to participant observation and literature reviews. As Project 1 progressed, short, informal interviews with people involved in the development process were conducted during the project work. Whenever a behaviour or activity relevant to the two themes occurred, short questions were asked in situ to clarify other team members’ view of the development process. For instance, the clinical trial in Project 1 appeared to be a difficult activity to organise. This activity was led by representatives of the users. In order to clarify the activity, it was necessary to ask the clinicians if, why and how they believed the clinical trial was difficult to organise. The answers from the clinicians led to a better understanding of the complexity of performing clinical trials (see Chapter 5 – Clinical trial). Similar questions were asked during the different phases of the development process and were used to further understand the relation between the development process and user participation.

During the later stages of Project 1 and throughout Project 2, document collection was applied to gather project material emerging from the two development processes. This material consisted of instant messaging and emails between project members. It also consisted of project documents, such as training analysis, usability and clinical trial reports. In order to manage this text-based data, the method of coding was used to apply the overall themes (development process and users) to the collected documents in order to break the themes down into more specific


categories. Coding is a method used for structuring qualitative data into categories (Patton, 2002). The technique consists of the researcher reading through and arranging text, such as interview transcriptions, documents, etc., into categories relevant to the research perspective/questions. Coding builds a foundation for detailed interpretations (comparisons, conclusions, etc.) of data (Patton, 2002). The coding process in this research consisted of the researcher reading through the collected project documents with a general aim of identifying behaviours and activities in relation to the two themes. Each document, section or passage of text relevant to the development process or user participation was highlighted and given a key-word, i.e. labelled with a code. These codes consisted of short phrases that suggested how certain sections of text were relevant to the two themes. The coding at this stage was open with the aim of generating initial concepts and categories of general interest. For instance, from this coding process it appeared that the development process was dependent on many different user groups. How each of these groups affected the development process was then categorised further using a framework developed by Scaife et al. (1997) (see Chapter 4). Throughout the coding process, the generated codes and their relationships were regularly discussed with another researcher (a developer) who also participated as part of the development team. This was performed to help guard against bias and to help generate new insights and perspectives of the studied behaviours and activities.

The coded documents that described the training analysis, usability studies and clinical trials (see Chapter 5 and 6) contained data from activities that were part of the development process of the Spinal Anaesthesia (SA) system. These activities required certain research methods which generated both qualitative and quantitative data. This data was collected to direct and improve the training system. The documents contained results from a variety of research activities, such as focus groups, questionnaires and semi-structured interviews. These activities were dependent on the motives and beliefs of the development team and were not specifically designed to direct the research of this volume. However, the collected data held important information about the development process and how users influenced it. In order to follow the research presented in this volume, some examples of the actual data are referenced here. A report from the first usability study (see Chapter 5 – Usability evaluation) is included in Appendix A. The questionnaires that were given to the trainer and the trainees during the trial implementation in Project 1 (see Chapter 5 – Trial implementation) are included in Appendix B. The applied research methods


and results from the training analysis in Project 1 (see Chapter 5 – Training analysis) are described in Kulcsár et al. (2008).

The method of participant observation was applied continuously throughout the research process until the end of Project 2. Performing observations during both Project 1 and 2 helped to deepen the understanding of the development process and its relation to user participation. Short interviews were also used continuously, which helped to further clarify the users’ view on the development process. The participant observation, literature reviews, short interviews and initial coding had up to this point generated a deeper understanding of the development process and users in the context of VR and web-based medical training systems. At this stage it was deemed necessary to apply focused literature to understand the analysed behaviours and activities in further detail. For instance, a description of the development process of information systems that correlated with the empirical findings was identified. In addition, specific user-related literature helped to better describe and understand the users’ behaviours and activities. The focused literature is described in Chapter 4.

Coding was applied again to analyse project documents and observations in further detail. However, the previous coding process was based on the initial themes. Now more detailed coding was required. The coding was performed based on the focused literature (the analytical framework described in Chapter 4), in combination with the categories identified during previous coding iterations. The documents were coded based on discrete development phases, who the users were, their input, degree of participation, etc. The new codes and categories were also analysed in relation to each other. For instance, the participating user groups and the groups’ input were analysed in relation to each discrete phase of the development process. The overall coding process was iterative and directed by the participant observation and short interviews. It gave rise to categories that were grounded in empirical data and refined using the existing literature that discussed development processes and user participation in IS and HCI.

During the final stages of Project 2, semi-structured interviews were performed with users involved in the development processes. By conducting the interviews at the end of the research process, it was possible to focus on topics specific to the research questions. These topics were directed by categories emerging from the participant observations and coding process. Examples of topics covered by the interviews are:

• Users’ view on the development process: iterative design / development phases, etc.
• Users’ perceived degree of participation and degree of influence
• Whether the users believed expertise sharing between users and developers was necessary, and if so, how
• Usefulness of participatory design / prototypes / usability testing
• Users’ view on the end-result
• Other views/opinions not anticipated by the researcher

The interviews were transcribed and analysed using the categories that had emerged from the participant observation and coding process. The interviews helped both to confirm the generated categories and to identify new ones. For example, the users believed that in order for a constructive development process to occur, users needed influence and strong control over the development process (for further detail see Chapter 7).

Based on the results from the formative analysis, interpretive writing is used in Chapter 5 and 6 to describe how the development processes in Project 1 and 2 were dependent on user participation. The aim of interpretive writing is to present qualitative data as richly written descriptions for communicating tacit knowledge (Myers, 2000). It is used in this volume to describe different user roles, their influence on the development process and how the users guided the development process. Thus, the interpretive writing is a sequence of events as observed by the author, enriched with examples from the interviews and document analysis.
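To make the coding and categorisation steps described in this chapter more concrete, the following minimal sketch shows one way coded passages could be grouped by theme and category. The code labels and passage fragments are invented placeholders, not data from the two projects.

    from collections import defaultdict

    # Each entry: (theme, code/category, passage fragment). All values are
    # invented placeholders standing in for coded project documents.
    coded_segments = [
        ("users", "domain-expertise input", "clinician explained needle resistance"),
        ("users", "degree of participation", "trainer attended weekly design meetings"),
        ("development process", "usability evaluation", "task completion times recorded"),
    ]

    # Group passages by theme, then by code, mirroring the two analysis themes
    # (users and the development process) described above.
    by_theme = defaultdict(lambda: defaultdict(list))
    for theme, code, passage in coded_segments:
        by_theme[theme][code].append(passage)

    for theme, codes in by_theme.items():
        print(theme)
        for code, passages in codes.items():
            print(f"  {code}: {len(passages)} passage(s)")

In the projects themselves this organisation was done by hand on documents and transcripts; the sketch simply illustrates the passage-to-code-to-theme hierarchy that the coding process produces.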

Summary

Chapter 3 describes the approach that is applied to answer the research questions. During a formative analysis process, participant observation, field notes, document collection and interviews were utilised in order to identify themes and categories in relation to user participation and the development of a medical training system. Certain considerations, such as triangulation, were made to ensure the quality of the collected data. The collected data was analysed using coding. These results are analysed in detail using interpretive writing and a theoretical framework (see Chapter 5, 6 and 7).

CHAPTER FOUR
THE USER AND THE DEVELOPMENT PROCESS

This chapter reviews the concepts of users and user participation and suggests an approach for describing the development process of a VR and web-based medical training system. The chapter presents a framework for analysing the relation between user participation and the development process.

Theme one: The user

This section discusses how the concepts of users and user participation will be applied in the analysis of this volume.

Role of the user

Many different definitions of users exist, with varying importance depending on context and belief (Mackay et al., 2000). Mackay et al. (2000) argue that users are ”complex and fragmented in nature, and are attributed with varying significance” (p. 738). In Human-Computer Interaction the user is considered as a representative of the group of people who will use a system for accomplishing a certain task or a goal (Abras et al., 2004). The user of a system generally belongs to certain user groups. For instance, in the area of medical device development, user groups have been classified as clinicians, patients, carers and others (Shah et al., 2006). Thus, such user groups are in many cases context-specific and have specific requirements (Marti and Bannon, 2009). In order to identify how each user group can contribute to the development process of a VR and web-based medical training system, potential user groups are categorised. According to Abras et al. (2004), three types of users exist: primary, secondary and tertiary users. The primary type uses the system. The secondary type uses it through a primary user. The tertiary type is affected by its use or makes decisions about its purchase. In the case of medical training, primary users could be trainees training on a system, secondary

24

Chapter Four

users could be trainers monitoring the trainees’ progress and tertiary users could be patients receiving better care. Abras et al. (2004) argue that the three types of users have to be considered for the design of a system to be successful, because systems are likely to be used differently by its different user groups. This volume will use this categorisation to identify how each user group might use the medical training system. Mackay et al. (2000) argue that it is important to separate system users from the developers of that system. The users do not generally have the skills or the technical knowledge necessary for developing computer systems. Engaging users in the development process is a way for the developer to gain an understanding of the complexity of the users work situation. Nonetheless, Mackay et al. (2000) say that the border between the developer and user is fluid and that the user might take the role of a developer. In the context of this volume, the analysis will distinguish between developer and user and consider their difference in background, expertise and motives. However, Mackay et al. (2000) also argue that the guidance from users might turn into design suggestions and development decisions, which will be considered in the analysis of this volume.

User participation

Barki and Hartwick (1994) argue that users are engaged in a development process through user participation. User participation relates to the behaviours and/or activities of the user during the development process. User participation in this volume is measured by observing and analysing what activities the users participated in and how the activities contributed to the development process (see Chapter 3 for a detailed description of the methods used for data collection). To be consistent with other literature discussing users, the term “user” will be applied to people who provide input to the development process as primary, secondary or tertiary users (see Chapter 4 – Role of the user). However, input from developers and other domain specialists is also included in the analysis. The term informant is used to classify anyone who does not fall under the categories of users, but provides input to the development process.

Scaife et al. (1997) have developed a framework for investigating user participation during the development process of a computer system. The framework was originally developed in the context of designing interactive learning tools for children. The aim of the framework is to understand how to make efficient use of user input and maximise the contribution from different user groups during the stages of a development process. In practice, the framework considers the development process to contain discrete development phases, where the answers to three user-related questions are analysed for each phase:

• Who is the user / informant?
• What kind of input does the user / informant provide?
• What methods are used for getting input from the user / informant?

Scaife et al.'s (1997) framework is used in this volume to identify user groups, what each user group offers in terms of input and the methods used for getting input from the user groups in the development process of a medical training system. However, the framework also requires that the discrete phases of the development process are identified. Chapter 4 – Theme two: The development process discusses a categorisation of the development process of a medical training system into discrete phases. In Chapter 5 and 6 the framework is applied to the categorisation described in Chapter 4 – Theme two: The development process.

In the context of medical device design, Shah and Robinson (2006) state that many projects include users at one or more stages of the development process. From this, the authors argue that the level of user input is highest during the design phase of a medical device, followed in descending order by the testing and trials, deployment and concept stages. Shah and Robinson (2006) also detail a set of methods commonly used for user participation: usability testing, interviews, surveys, discussions, simulations and design sessions. Based on Shah and Robinson's (2006) and Scaife et al.'s (1997) findings, this volume considers that the roles of the users might not be uniform throughout the development process of a medical training system. Also, the techniques used for involving users might differ between the discrete development phases.

Users can participate in varying degrees of participation (Cavaye, 1995). Cavaye (1995) argues that if the domain is unknown to the developer, a high degree of participation from the user is necessary. Ives and Olson (1984) separate user participation into different degrees of participation. These degrees are:

1. No participation: Users are not willing or asked to participate.
2. Symbolic participation: Users provide input but are ignored.
3. Participation by advice: Users do not take part in the development team but provide input through different methods, e.g., focus groups and interviews.
4. Participation by doing: Users participate as part of the development team.
5. Participation by strong control: Users are responsible for the development process.

This volume utilises Ives and Olson's (1984) categorisation to determine to what degree a certain user group participated and how this participation affected the development process of a medical training system. Users also participate with different degrees of influence (Cavaye, 1995). This is a factor which enables one to identify to what extent suggestions from the users affect the development process. The degree of user influence ranges from none, where users' input to the development process is ignored, to high, where users' input determines the direction and outcome of the development process. This volume considers that different user groups might participate with varying degrees of influence in the development process of a medical training system.

It has also been argued that input from people participating in a development process does not only affect how the product is developed, but also influences the individual participants. Woolgar (1991) argues that developers often configure the user during user participation by providing them with a better understanding of the technology used in the system under development. Mackay et al. (2000) extend this argument by adding that user participation and user input during the development process also lead to the configuration of the developers. Thus, this work argues that user participation creates a two-way process of knowledge sharing. The two-way process provides an opportunity for users and developers to enhance their input to the development process. In the context of developing medical training systems, developers are dependent on guidance from domain experts. It has been argued that the real value of user participation is gained when a development process requires specialised domain expertise (He and King, 2008). Hence, user participation as a two-way process of knowledge sharing is considered in the analysis of this volume.

Theme two: The development process

This section categorises the development of a VR and web-based medical training system into discrete development phases, so that user participation can be analysed for each phase.

The development phases

Information Systems is a research domain with a long history in analysing development processes of computer-based systems (He and King, 2008). The development of a system has been described as going “from initial problem statement through system analysis to system design, building and implementation” (Cavaye 1995, p. 312). These development stages are used to categorise the development process of a VR and web-based medical training system into discrete phases.

The problem statement is the rationale behind developing or improving a system. Henriksen and Patterson (2007) state that the design of a medical training system has to be based on relevant research questions. In this context, a problem statement can originate from different sources, such as medical training bodies, governments or legislators. For example, recommendations that have led to problem statements for developing medical training systems are competence-based training (Tetzlaff, 2007) and improvement of the apprenticeship model (Aggarwal and Darzi, 2006). This volume analyses the problem statement phase in relation to user participation.

In the development process, an analysis phase generally follows the problem statement (Cavaye, 1995). During this phase it is necessary to establish what the system should consist of, what it should do and who is going to use it. When developing health care systems, Berg (2001) argues that a well-designed system needs to provide “a match between the functionalities of the system and the needs and working patterns of the organization” (p. 144). A thorough systems analysis is often required to establish this match between the system, user and organisation. In the context of medical training systems, it appears that a systems analysis stage is a critical part of the development process. For instance, Schaffer et al. (2001) argue that elements of a training system which promote learning have to be considered from the very beginning of its development. According to Salas and Cannon-Bowers (2003), training objectives are generally generated from a training analysis. A training analysis consists of two parts: organisational analysis and job/task analysis. The organisational analysis determines how the training of a job is influenced by the surrounding organisation (available resources, constraints, etc.). The job/task analysis ascertains what operations are necessary for performing the job and it covers all the necessary operations and sub-operations of a procedure. Task analysis has previously been used to direct the development of medical training systems. Johnson et al.
(2006) performed a task analysis to determine the thought processes and physical actions involved in interventional radiology (IR)1. Their task analysis resulted in written task descriptions and decision protocols. Gould et al. (2006) used this general task analysis to derive and validate metrics for IR. The intention of creating metrics from the task analysis was to integrate relevant performance measurements in a simulation-based training system for IR. How a task analysis should influence the design of medical training systems has also been discussed by England et al. (2008), Johnson et al. (2007) and England et al. (2007). Hence, a training analysis can inform the design, development and evaluation of a medical training system (Henriksen and Dayton, 2006). This volume analyses the relation between the analysis phase and user participation.

1 Interventional radiology is a group of minimally invasive procedures performed using image guidance. Liver biopsy is one such procedure, where needle insertion is guided by ultrasound for taking samples of tissue for analysis.

Lawson (2006) argues that design depends on your personal background and your aims. A product or system is formed by the design process and is driven by unique design problems. In the development of medical training systems, the design phase is critical. This is when training objectives are translated into system requirements. For instance, Salas et al. (2005) argue that a simulation-based medical training system has to include well-designed guidance of learning, exercises/scenarios and feedback. Applying existing design approaches to the development process can be beneficial during the development of a medical training system. For instance, usability evaluation is an existing design approach that can be applied to measure if a system is easy to learn, efficient to use, easy to remember, causing few errors and subjectively pleasing (Nielsen, 1994). Usability has been argued to result in enhancing “the ability to use a product for its intended purpose” (Bevan 1995, p. 115). In the context of medical training systems, Moody et al. (2004) utilise usability evaluation in the development of a knee arthroscopy simulator. The simulator's usefulness and usability were evaluated several times with representative end-users during the development process. The authors report an increased user acceptance of the training system as a result of the applied design approach. Design has been expressed as a complex creative endeavour consisting of four different parts (Dorst, 2008):

i. The object: the design problem and its solution.
ii. The actor: the designer/design team.
iii. The context: the surrounding elements affecting the design.
iv. The design process: the complex activities undertaken during the process.

Dorst's (2008) view on design is applied in order to analyse i) how the solution to a design problem was supported by the users, ii) how the design process was dependent on users as participating actors and iii) how the context of design (VR and web-based medical training) was supported by the users. This view on design is used to analyse how training objectives are implemented in a medical training system and to identify existing design approaches that could be suitable in this context.

The building phase generally consists of incorporating the results from a design phase into programming code or an artefact. To date, there is a large body of literature describing aspects in relation to the building phase of medical training systems. These are technical papers concerned with aspects such as the generation of tissue sensations (Färber et al., 2007), deformable tissue simulation (Choi et al., 2003) and ultrasound synthesis (Kuttera et al., 2009). Users are generally not involved in the process of building (Cavaye, 1995). This was also true for the two cases studied in this volume, as this phase did not require user participation. The users did not participate in the actual process of writing code or creating prototypes. Hence, the building phase of the systems is not included in the analysis of this volume.

The implementation of new systems into health care is a challenging task. For example, Littlejohns et al. (2003) have identified reasons why information systems are prone to fail in the health care sector. The system they studied failed because it did not take into account the general culture of healthcare. The intended users were never informed why the system should have been used and it clashed with the everyday practice of the users. Berg (2001) argues that the intended users have to take part in both the development and the implementation of a system to ensure that it fits in the everyday practice of the users. In the context of medical training systems, both Gaba (2006) and Issenberg et al. (2005) argue that if a system is to have a major impact on medical education in general, it has to be fully implemented in the current health care practice and not just act as an “add-on”. However, it is hard to predict how a training system will be used in practice and what effects it will have on training and the organisation (teaching hospital, medical
school etc.). Ammenwerth et al. (2003) say that longitudinal evaluations are necessary to identify the long-term effects of system use. However, it is not the aim of this volume to perform longitudinal studies on the effects that a training system might have on an organisation and its training of doctors. Instead, it limits itself to considering how user participation affects considerations for future implementation during the development process of a medical training system.

Evaluation is not explicitly stated in Cavaye's (1995) definition of the development process. However, the author also refers to the evaluation of a system as an important element of the development process. In the context of medical training, it is critical to evaluate whether VR and web-based training improves performance in the operating theatre (Sutherland et al., 2006). In the general training literature, this is referred to as training transfer2. These evaluation studies should be performed before a system is implemented as part of the training of a medical procedure. For example, Seymour et al. (2002) showed how training on a VR-based simulator for laparoscopic training improved performance in the operating theatre. Grantcharov et al. (2004) and Larsen et al. (2009) have shown similar results. However, the findings from evaluation papers are generally based on commercially available systems. The papers do not discuss how the findings relate to the design decisions made during the development process. This volume considers how the analysis and design phases of a training system may affect the outcomes of an evaluation study and how these outcomes can potentially direct future improvements of a system.

Evaluations can also be performed during the design and development of a system (formative evaluation). Usability evaluation is an example of a method that can be used for formative evaluation of a system (Nielsen, 1993). During the development of a knee arthroscopy simulator, Moody et al. (2004) involved users in evaluations which provided feedback on the system's usability and utility. The users' input directed improvements of the system, which increased the user acceptance of the system. This volume analyses the evaluation phase in relation to user participation.

2 See Salas and Cannon-Bowers (2003) for an in-depth meta-analysis of training.


The development process of a VR and web-based medical training system

From reviewing papers in the area of VR and web-based medical training, it is noted that the development process is generally only discussed in terms of single, discrete development phases. For example, technical papers describe solutions behind the building of systems (e.g., Grottke et al., 2009), clinical papers evaluate existing systems (e.g., Seymour et al., 2002) and educational papers discuss training principles (e.g., Issenberg et al., 2005; Kneebone, 2003). However, there are some papers that discuss the relation between different development phases. For instance, John (2008) argues for the need to combine training theory with state-of-the-art technical solutions that are validated throughout the development process of a medical training system. The author describes a case study from interventional radiology, which illustrates a rigorous development approach. The development process was based on a task analysis of the procedure, which directed the technical developments of the system. At the same time, validations of the system were performed during the development process. Kneebone et al. (2004) argue that a closer connection between system developers and evaluators of medical training systems is required. Hence, to identify the effects of user participation in this context, the complete development process of a medical training system has to be considered. This volume adopts Cavaye's (1995) view on a system's development process to describe the development process of a VR and web-based medical training system. This volume analyses user participation in relation to each individual development phase: problem statement, analysis, design, build, evaluation and implementation.

Analytical framework

Table 4-1 illustrates Scaife et al.'s (1997) framework when arranged in relation to the development process (as presented in Chapter 4). Each phase of the development process is analysed in relation to who the user is, what input the user provides and the method used for getting input from the user.

Phase of Development | User/Informant | Input | Method
Problem statement | | |
Analysis | | |
Design | | |
Build | | |
Evaluation | | |
Implementation | | |

Table 4-1. The table illustrates how the relation between user participation and the development process is analysed in Chapter 5 and 6.

In order to make the analysis of this volume manageable, Scaife et al.'s (1997) framework is applied to the case studies to extract user groups, user input and the methods used for involving the users (Chapter 5 and 6). These results are then used in a summarising analysis (Chapter 7), which is based on the remaining elements of user participation that were discussed in Chapter 4.


Summary

Chapter 4 discusses how the relation between user participation and the development process of a VR and web-based medical training system is analysed. This chapter describes how Cavaye's (1995) definition of a development process is utilised to separate the development process of a VR and web-based medical training system into discrete phases. In this volume, each of these phases is analysed in relation to user participation. First, Scaife et al.'s (1997) framework is applied to the phases of the case studies to determine user groups, user input and methods used for involving the user (Chapter 5 and 6). Then, additional user concepts (user type, degree of participation, influence of participation, knowledge sharing, etc.) are used in a summarising analysis (Chapter 7).

CHAPTER FIVE
CASE STUDY 1: THE DBMT-PROJECT

This chapter analyses the empirical data that was collected during Project 1. The analysis is divided into separate development phases, where each phase provides examples of different user groups, input from users and how methods were used to involve the users in the development process.

Introduction to Project 1

Project 1 was a research project with the overall aim of identifying and evaluating possible improvements in the current training in Spinal Anaesthesia. It was partly funded by the Health Service Executive (HSE), Ireland. The project started at the end of 2005 and finished in the middle of 2007. The author of this volume became involved in the project in August 2006 as a system developer.

Spinal Anaesthesia (SA) is performed by injecting a small amount of anaesthetic solution into the spine in the lumbar area, just below the spinal cord. The anaesthetist uses his/her hands to palpate and locate the ideal skin puncture point. At this point, a thicker and shorter introducer needle is inserted to puncture the skin. After that, a longer and thinner spinal needle is advanced through and beyond the introducer. The spinal needle is used to access the spine and inject the anaesthetic solution. If the needle is inserted too high in the back, there is a risk of hitting the spinal cord and causing permanent damage. If inserted too low, the procedure will not give the desired effect. A successful procedure results in loss of sensation below the injection point, allowing surgery on the lower part of the abdomen and legs. See Figure 5-1 for an illustration of the procedure.


Fig. 5-1. Illustration of a patient being prepared for Spinal Anaesthesia. The spinal needle will be inserted between the vertebrae, below the spinal cord. The anaesthetic solution is injected into the spinal fluid.

User participation during the development process in Project 1

This section analyses the relation between user participation and the development process in Project 1. The analysis is based on Scaife et al.'s (1997) framework presented in Chapter 4.

Problem statement

The initial problem statement emerged from a professor in medicine at the Cork University Hospital (CUH), Ireland. He will be referred to as the managing clinician hereafter. The rationale behind the problem statement originated from medical training bodies and health care organisations. They were at the time pressurising trainers to review their current practice of “learning by doing” on patients. For instance, the organisation World Federation for Medical Education1 (WFME) argued for an urgent need to improve the training for procedural skills of junior doctors. The managing clinician believed it necessary to address this problem and explore new ways of training medical procedures. He purposely selected SA as a pilot study. SA is a well-established and commonly-performed procedure.

1 http://www.wfme.org/ (accessed on the 19th Feb. 2010)

In finding new approaches to the training in SA, the managing clinician connected with members of the Interaction Design Centre (IDC). IDC has a strong tradition in the disciplines of Interaction Design (IxD) and HCI. The IDC-group consists of experts in design, engineering and human factors. This group will be referred to as the developers hereafter. Together, the two groups decided to design novel technology to enhance doctors' embodied learning of SA. The objectives of the collaboration were to:

1. Perform a structured analysis of the teaching and learning components of SA.
2. Design and evaluate prototypes based on the elements identified in the analysis.

This was articulated as a project proposal and led to the start of Project 1. A multidisciplinary research collaboration was initiated. The managing clinician was the primary investigator. He managed the project and was responsible for its deliverables. He also recruited other clinicians to participate as part of the project team. A second clinician was assigned to Project 1 as part of her degree in Doctor of Medicine (M.D.)2. Her main contribution was to investigate the current issues in the training in SA and to identify potential approaches for improving the training in the procedure. The initiator and the second clinician formed the main core of the research group. However, a group of 3 additional clinicians also participated occasionally in the project team and provided additional expertise. At the end of Project 1, a sixth clinician joined the development team as a part of another M.D. Her main task was to perform clinical validation trials with the system that was developed during Project 1. This group will be referred to as the clinicians in the development team hereafter. The managing clinician and the clinicians in the development team will be referred to as the clinicians. The clinicians and the developers together formed the development team. See Table 5-1 for a summary of input from users and informants during this phase.

2 In UK/Ireland, M.D. is a higher doctoral research degree.

Phase of development | User / Informant | User / Informant Input | Method
Problem statement phase | Managing clinician | Explored new ways of training SA, research objectives, connected with other researchers | N.A.
 | Developers | Suggested to utilise novel technologies for learning | N.A.
 | General stakeholder: Health care organisation (WFME) | Recommendations for best practice (learning by doing not accepted) | N.A.

Table 5-1. User groups, user roles and methods used during the problem statement phase.

Training analysis of Spinal Anaesthesia

The first official activity of Project 1 was to perform a training analysis of SA. The analysis focused on both organisational and individual elements that affected the training in the procedure. The analysis was undertaken in order to identify issues relating to the training and practice of the procedure. The results were intended to direct the development of novel training technology. However, rather than performing a detailed task analysis outlining all necessary sub-operations of the procedure, participants were asked how they believed the procedure should be taught and which skills were necessary for performing the procedure proficiently. The training analysis was designed and initiated by the managing clinician and the clinicians in the development team. They wrote and distributed questionnaires to suitable participants and organised focus groups by locating participants at the teaching hospital. The analysis allowed independent trainers, trainees, theatre nurses, surgeons and patients to provide input on problems with the current training and the necessary determinants for learning the procedure.


In summary, the results showed that the following aspects need to be considered for the training of SA:

1. A formal, structured training programme: It was suggested that an ideal training programme should consist of a knowledge programme (anatomy, physiology and pharmacology), simulation-based training, standardised assessment and trainee debriefing.
2. Time constraints/theatre efficiency: Finding the balance between training opportunities and maintaining efficiency in the operating theatre was difficult.
3. Trainer-trainee interaction: The participants agreed that the trainer-trainee interaction was not optimal. For instance, the trainers did not know the trainee's level of theoretical knowledge. Also, the trainees did not get feedback from the trainers after a training session.
4. Patient safety/trainee/trainer stress factors: Trainers, trainees and patients found the situation of training in the operating theatre stressful due to patient safety issues.
5. "Visualisation" of anatomy and the procedure: The trainees needed aids so as to learn how to recognise and respond accurately to the different haptic sensations of tissue during needle insertion. They also needed aids to acquire a conceptual understanding of the anatomy and the procedure.

Aspects 1 and 5 initiated the system's initial training objectives, which were to create a simulation-based component as part of a training programme for teaching the "feels" and the anatomies of the procedure. For a full description of the resulting learning requirements for SA, see Kulcsár et al. (2008).

During the training analysis phase, the procedure was video-recorded in the operating theatre in order to provide the developers with data on how SA is performed. The developers also got the opportunity to see the procedure in real life. Observing and video recording surgery with patients required the participation of the clinicians to grant access to the theatre and to ensure the consent of the patients. Through detailed observation of how the procedure was performed, the developers were able to identify suitable technology that would support the training in SA. See Table 5-2 for a summary of input from users and informants during this phase.

Phase of development | User / Informant | User / Informant Input | Method
Training analysis | Managing clinician and clinicians in the development team | Design and organisation of training analysis | Training analysis
 | Independent clinicians (trainers, trainees) | Input on current training of SA, identified issues with training in SA, suggested determinants for learning SA | Questionnaires, focus groups, interviews
 | Other medical staff | Input on current training of SA, identified issues with training SA | Questionnaires
 | Patients | Input on current training of SA | Questionnaires
 | Developers | N.A. | Observations (of focus groups, procedure etc.)

Table 5-2. User groups, user roles and methods used during the domain analysis phase.

Design, build and informal evaluation of the human-tissue model

The training analysis showed that haptic recognition of the different sensations and "visualising" the anatomy during needle insertion were very important to the success of the procedure. The decision of what technology to use for training SA was made after the training analysis in Project 1 was completed. The developer participating during the training analysis had only limited technical expertise. Instead, he involved two engineers in the process of selecting appropriate technology (these engineers were later involved full-time in the project). The developer showed the engineers a video of the procedure being performed on a patient in the operating theatre. After seeing the video, the engineers suggested using haptic technology and 3D visualisations as potential tools for training needle insertion. Based on this suggestion, the developer recognised the potential value of using such technologies for addressing some of the issues arising from the training analysis. For
instance, these technologies could potentially support the recognition of the sensations and visualisations (procedural skills) during the training of novices. Haptics and 3D visualisations had also been successfully used by other researchers for needle insertion simulations (see Chapter 2). Hence, this seemed to be an appropriate research approach since it could potentially simulate the insertion of a needle into human-tissue and, in combination with an immersive workbench, provide visualisations of anatomy and procedure (see Figure 5-2). The development team decided to use a 3 DOF haptic device called Phantom Desktop3 from Sensable Technologies. This particular device was chosen based on its fidelity of haptics in relation to its cost. This device also appeared to be the popular choice among other researchers doing similar research of needle insertion, which helped to validate the choice of hardware. Other devices that provided higher fidelity, such as the Phantom Premium4, were considered too expensive in relation to the required fidelity. Devices of lower fidelity, such as the Phantom Omni5, did not provide sufficient haptics to accurately model the required tissue sensations.

Fig. 5-2. The picture shows the immersive workbench and haptic device (Phantom Desktop) that was used during Project 1.

3 http://www.sensable.com/haptic-phantom-desktop.htm (accessed on the 17th of May, 2010)
4 http://www.sensable.com/haptic-phantom-premium.htm (accessed on the 17th of May, 2010)
5 http://www.sensable.com/haptic-phantom-omni.htm (accessed on the 17th of May, 2010)


It was necessary to evaluate if the suggested technology was a potential option for the managing clinician and the clinicians in the development team. They had no experience of haptic devices and virtual environments. The clinicians got the opportunity to try a haptic application (originally developed for stroke rehabilitation by Lövquist and Dreifaldt (2006)) before any developments began. This application consisted of an onscreen maze which the user had to navigate without touching any walls with the haptic device. During the concluding interviews, one of the clinicians in the development team said that she could already see the potential value of the haptic technology at this stage: “And that time I said 'This wall is like the bone, so if you can build in one element of the whole procedure, bone is an important element'.”

The haptic sensation which was generated when touching a wall within the maze resembled a simulation of touching bone. By trying out the application, the clinicians were able to get an initial understanding of the technology. After the first design session, a simple prototype was implemented as a computer program simulating skin resistance. The prototype was developed using the haptics library H3D API6. It is based on the scenegraph standard X3D and utilises Python script for the dynamic events. The H3D API is a high-level library and facilitated the development of prototypes within a relatively short period of time. The development of a prototype was based on observing videos of SA being performed in the operating theatre. The videos helped the developers to understand how needle insertion was performed during the procedure. The result was a simple, interactive prototype, consisting of a virtual rectangle with a skin texture assigned to it (see Figure 5-3). The prototype was programmed to allow physical interaction with the surface using the haptic device. In order to make the interaction more realistic, the haptic device was represented as a needle displayed on screen in the virtual environment. The virtual skin was visually deformed when touched and the device provided force-feedback of the surface tension and the friction to the user. However, the surface did not break as in the "real-life" situation when the skin is penetrated by a needle and proceeds into the underlying tissue.

6 www.h3dapi.org (accessed on the 10th Feb. 2010)


Fig. 5-3. The figure shows a screen-shot of the prototype that illustrated to the clinicians how needle interactions with surfaces could be simulated using the chosen technology.
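To give a flavour of what such a prototype scene involves, the sketch below shows an H3D-style X3D fragment for a textured "skin" patch with a haptic surface, written out from Python. The node and field names (e.g., FrictionalSurface with stiffness and friction fields), the texture file name and all parameter values are assumptions recalled from the H3D documentation rather than the project's actual scene files, and should be checked against the API version in use.

# Minimal sketch (not the project's code): an X3D fragment describing a
# textured patch with a haptic surface, of the kind H3D-based prototypes use.
# Node/field names and values are assumptions to verify against the H3D docs.
SKIN_PATCH_X3D = """
<Shape>
  <Appearance>
    <ImageTexture url='"skin_texture.png"' />
    <!-- Haptic rendering of surface tension and friction at the needle tip -->
    <FrictionalSurface stiffness="0.4" damping="0.05"
                       staticFriction="0.3" dynamicFriction="0.2" />
  </Appearance>
  <Rectangle2D size="0.3 0.2" />
</Shape>
"""

def write_scene(path: str = "skin_prototype.x3d") -> None:
    """Write the fragment to a file that an X3D/H3D viewer could load."""
    with open(path, "w") as f:
        f.write(SKIN_PATCH_X3D)

if __name__ == "__main__":
    write_scene()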

At an informal evaluation session, the prototype allowed the clinicians to envisage the possibilities of using the haptic device for needle insertion. From this hands-on experience with the prototype, the clinicians recognised the potential of using the technology for training in this procedure. However, the clinicians critiqued the program by describing the skin sensation as "too soft", as lacking skin puncture functionality and as needing to simulate the needle in tissue. Trying the prototype also allowed the clinicians to become familiar with haptics at an early stage of the development process and to provide suggestions for future development.

After the technology had been verified with the clinicians, the development team explored how to recreate the human-tissue sensations associated with SA. A human-tissue model had to be developed. The haptics and graphics development continued to utilise the H3D API. In addition, a second haptics library called Volume Haptics Toolkit7 (VHTK) was used to implement tissue viscosities.

7 http://webstaff.itn.liu.se/~karlu/work/VHTK_H3D (accessed on the 23rd of Feb. 2010)

The human-tissue model was a major challenge for the developers, as they did not have practical experience of needle insertion. The generation of the model relied on verbal descriptions from the clinicians of their "real-life" experiences of the sensations in the SA procedure. The clinicians described the specific sensations of interest as follows:

1. Surface sensation: the resistance (tension and friction) of a surface acting between needle-tip and surface.
2. Pop sensation: a drastic decrease of resistance as a surface breaks due to pressure from the needle.
3. Viscosity of tissue sensation: felt as the needle is passing through a volume of tissue.
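To make these three descriptions concrete, the sketch below shows one simple way such sensations can be turned into a one-dimensional force model: spring-like resistance up to a puncture threshold (the "pop"), followed by viscous drag inside the tissue. It is an illustrative simplification under assumed layer names and parameter values, not the model that was actually tuned with the clinicians.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    thickness: float      # m, extent along the insertion path
    stiffness: float      # N/m, spring-like resistance before the layer is punctured
    pop_threshold: float  # N, force at which the layer "pops"
    viscosity: float      # N*s/m, drag while moving through the punctured layer
    punctured: bool = False

# Illustrative layer stack and values only; the real model was tuned iteratively.
LAYERS = [
    Layer("skin",                  0.004, 900.0, 1.8, 4.0),
    Layer("subcutaneous tissue",   0.020, 150.0, 0.4, 2.0),
    Layer("interspinous ligament", 0.015, 500.0, 1.0, 6.0),
    Layer("ligamentum flavum",     0.005, 800.0, 2.2, 8.0),
    Layer("cerebrospinal fluid",   0.010,  20.0, 0.1, 0.5),
]

def needle_force(depth: float, velocity: float) -> float:
    """Resistive force (N) opposing insertion at a given tip depth and speed."""
    top = 0.0
    for layer in LAYERS:
        bottom = top + layer.thickness
        if depth < bottom:                               # tip is inside this layer
            if not layer.punctured:
                force = layer.stiffness * (depth - top)  # surface sensation
                if force >= layer.pop_threshold:         # pop sensation
                    layer.punctured = True
                    force = 0.2 * layer.pop_threshold
                return force
            return layer.viscosity * velocity            # viscosity sensation
        top = bottom
    return 0.0

# Example: force profile while advancing the needle at 5 mm/s
for d in (0.001, 0.003, 0.010, 0.030, 0.042):
    print(f"depth {d * 1000:4.1f} mm -> {needle_force(d, 0.005):.2f} N")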

However, it was difficult for the developers to “imagine” what the sensation should feel like from these verbal descriptions. An explorative development approach involving developers, clinicians and the haptic device was initiated. As a start, the developers attempted to model rough representations of the clinicians’ descriptions. These initial prototype sensations created a common ground for the developers and the clinicians to discuss further improvement. The clinicians experienced the sensations of the evolving human-tissue model at several stages during the development process, adjusting the model accordingly. After each design session, the feedback from the clinicians was used to improve the tissue model. Each time the clinicians tried a new sensation, they used verbal descriptions of how they thought the sensations should feel. Through this process, they directed the developers in how to improve the model. For example, during a design session one of the clinicians described the modelled tissue sensation of ligamentum flavum as “bouncy”. Instead, they described how it should feel like a gritty sensation, similar to how it would feel putting a needle through a pear. Furthermore, the sensation of bone was continuously stressed as a significant sensation to be modelled for the success of the system. Needle hitting bone was described as an indication of incorrect needle insertion. If this occurred, redirecting the needle’s insertion path would be necessary. Consequently, the modelling of bone required significant attention from the developers. At the same time, the developers could try the sensations and use their expertise in haptics to make the changes suggested by the clinicians. This enabled the developers to better understand the clinicians’ verbal descriptions of the sensations: “By experience the sensations generated by the haptic device, it helped me to relate to the clinicians descriptions of what the procedure should feel like.”8

8 Field note

When the human-tissue model was close enough to the real experience, the clinicians further helped to modify the sensations by giving detailed verbal feedback, e.g., "less resistance in the pop", "a little softer" and "slightly tougher". Based on this information, the developers adjusted the
individual parameters of the model, thereby fine-tuning the sensations generated by the haptic device. After several iterations, the clinicians described the human-tissue model as representative of the real-life experience.

An alternative method exists for creating human-tissue models. The actual forces acting on a needle going through tissue have been measured using specialised equipment (Brett et al., 1997). However, the existing data for this is limited. Live patients have to be recruited9 in order to measure needle-in-tissue forces, which in Project 1 would have required additional time and resources. In reality, the sensations in SA can vary greatly from patient to patient depending on age, obesity and other factors. To cover these variations, it is necessary to measure forces of needle insertion on a range of different patients. This alternative process would have required the purchase of specialised equipment, getting ethics approval, recruiting suitable patients and processing the acquired data. The approach in Project 1 allowed the human-tissue model to cover a wider range of patients more efficiently and avoided exposing patients to any risk. Force measurement is superior in that it generates exact values of tissue sensation without any subjectivity, but it was not feasible during Project 1.

9 Measuring forces on cadavers is not sufficient as tissue structures are not the same as on a living person.


Phase of development | User / Informant | User Input | Method
Design phase 1 | Managing clinician and clinicians in the development team | Evaluated prototypes of the system, provided medical expertise, directed the human-tissue model, evaluated face validity of the human-tissue model | Collaborative design sessions
 | Developers | Designed and built prototypes of the system, provided the clinicians with knowledge of the technology | Collaborative design sessions

Table 5-3. User groups, user roles and methods used during the design phase.

Formal evaluation of the human-tissue model

The managing clinician decided to validate the resulting human-tissue model in detail. The managing clinician, the clinicians in the development team and the developers designed a perceptual study to validate the sensations of the human-tissue model with independent clinicians. The study investigated perceptual differences between experts (consultants) and novices (junior doctors), involving 36 participants in total. A clinician of the development team, supervised by the managing clinician, recruited suitable participants and located a suitable room for testing. The haptic perception study was designed so that participants were asked to recall and score sensations based on a specific patient type (age, weight and needle used). The developers suggested that the different sensations should be aligned in a grid, which made exposure to, and comparison of, a large number of individual sensations possible. This particular arrangement was named a haptic landscape by the development team (see Figure 5-4). Each individual square within the landscape had a certain sensation assigned to it. Also, four of the sensations occurred twice in the landscape so as to test the consistency of the individual participants. Finally, two clinicians from the development team adjusted the sensations so that the range only consisted of possible patients.


Fig. 5-4. Illustration of the landscape testing environment.
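The sketch below illustrates the structure of such a landscape: a shuffled grid of sensation models in which a few sensations appear twice so that each participant's internal consistency can be checked. The sensation identifiers, grid size and numeric scoring scale are hypothetical placeholders, not the study's actual materials.

import random

# Hypothetical identifiers standing in for the tuned haptic sensation models.
BASE_SENSATIONS = ["s01", "s02", "s03", "s04", "s05", "s06",
                   "s07", "s08", "s09", "s10", "s11", "s12"]
REPEATED = ["s01", "s04", "s07", "s10"]   # appear twice, for consistency checks

def build_landscape(columns: int = 4, seed: int | None = None):
    """Shuffle the base plus repeated sensations into a grid of squares."""
    cells = BASE_SENSATIONS + REPEATED
    rng = random.Random(seed)
    rng.shuffle(cells)
    return [cells[i:i + columns] for i in range(0, len(cells), columns)]

def consistency(scores: dict[str, list[float]]) -> dict[str, float]:
    """Difference between the two slider ratings given to each repeated sensation.

    Ratings are assumed to lie on a scale from -1 ("too soft") through 0
    ("best fit") to +1 ("too hard")."""
    return {s: abs(scores[s][0] - scores[s][1]) for s in REPEATED}

if __name__ == "__main__":
    for row in build_landscape(seed=1):
        print(row)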

Each of the participants was given a set of questions, each of which was assigned to a corresponding haptic landscape. Nine questions were concerned with the properties of SA: three questions regarding surface properties (skin, dura mater and bone), two regarding pop sensations (skin and dura mater) and four regarding viscosity (subcutaneous tissue, interspinous ligament, ligamentum flavum and cerebrospinal fluid). All questions were phrased as "Does this feel like...". The questions were answered by setting a slider going from "too soft" through "best fit" to "too hard" for each individual square in the haptic landscape.

This study generated an understanding of differences in haptic perception between different levels of expertise. The results indicated that the primary ability of the expert was the recognition of the transition between the two layers, rather than recalling a certain sensation, as was initially anticipated. The study also validated the human-tissue model: all participants believed the sensations represented those experienced during a procedure on a real patient.

Studying the perceptual differences between experts and novices was an additional research objective originating from the managing clinician. Originally, the main concern of the developers was to validate the realism of the sensations with anaesthetists outside the development team. The managing clinician, on the other hand, was particularly interested in investigating perceptual differences between experts and novices. Consequently, this broadened the developers' view of how to use the technology as a training tool. See Table 5-4 for a summary of input from users and informants during this phase.


Phase of development | User / Informant | User / Informant Input | Method
Formal evaluation of human-tissue model | Managing clinician | Specified additional research objectives, designed and organised the study | N.A.
 | Clinicians in the development team | Designed and organised the study | N.A.
 | Independent clinicians (trainers, trainees) | Procedural skills, evaluated face validity | Haptic testing environment / Landscape
 | Developers | Designed the test, observed the study | Prototyping, Observation

Table 5-4. User groups, user roles and methods used during the formal evaluation.

Design, build and informal evaluation of next prototype

The next step was to develop a training tool for SA that utilised the human-tissue model. The model had to be mapped to an interactive 3D representation of the relevant anatomy, and learning considerations had to be implemented in the system. One of the clinicians in the development team provided a CT data set of a patient to the developers. The data set was segmented by the developers using specialised software and resulted in a 3D surface model of the relevant anatomy. The surface model required some manual post-processing, which was also performed by the developers. The resulting 3D model was sent as a 3D rendering (picture) by email to the clinician for feedback. The resulting model was considered of adequate validity for the training system.

Simultaneously, the developers investigated how to build the needle insertion logic. The developers' observations of the procedure in theatre (live and video recorded) gave them the background knowledge to discuss the characteristics of needle insertion with the clinicians in the development team. As the developers had no practical experience of performing the procedure, clinicians in the development team were required
to clarify the elements of the procedure. For example, it was impossible to judge from the videos how deep (i.e. what tissues) the introducer needle should penetrate. A clinician from the development team described how the introducer needle goes through skin and subcutaneous tissue, but never passes the interspinous ligament. However, it was impossible to give exact specifications of how deep the introducer could go, due to anatomic variations between patients. The clinician explained how they re-directed the insertion path without fully withdrawing the needle. However, the haptic device that was available to the development team did not allow this to be simulated. This device was only able to generate force-feedback at the needle tip. Rotation around the needle tip could not be mechanically controlled. Hence, the real-life situation, where the shaft of the needle is under pressure from surrounding tissue, could not be modelled with the current device. Instead, once the virtual skin was punctured, the needle on screen was visually fixed along the insertion direction, maintaining the initial direction of insertion. At the same time, the needle tip (haptic device) was fixed along the vector of the initial insertion direction using a haptic spring force. The device could only move short distances sideways and could not diverge from the insertion direction. If the needle was withdrawn, the forces were turned off.

The clinicians in the development team also described how SA requires two different needles. Further clarification on the use of these needles in relation to the system was necessary. They described that the thicker introducer needle is used to puncture the skin. The thinner spinal needle is used for accessing the spine, without causing damage to it. This was modelled by fixing the spinal needle in the introducer's direction, as the spinal needle enters the introducer needle. The needle insertion logic was evaluated and validated by the clinicians. Despite the limitations of the logic, they considered it a viable substitute for needle re-direction. The directions from the clinicians in the development team helped the developers to re-create the needle insertion characteristics and needle-handling elements of SA. Figure 5-5 shows a conceptual illustration of the developers' understanding of the needle insertion characteristics and its relation to the human-tissue model.
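A minimal sketch of how such an axis constraint can be computed each haptic frame is given below, assuming the entry point and initial insertion direction are recorded at the moment of skin puncture. The NumPy formulation and the spring constant are illustrative assumptions rather than the project's implementation.

import numpy as np

def axis_constraint_force(device_pos, entry_point, insertion_dir, k=300.0):
    """Spring force (N) pulling the haptic device back towards the initial
    insertion axis once the virtual skin has been punctured.

    device_pos and entry_point are 3D positions in metres; insertion_dir is
    the recorded insertion direction; k is an illustrative spring constant."""
    d = np.asarray(insertion_dir, dtype=float)
    d = d / np.linalg.norm(d)                      # normalise the axis direction
    offset = np.asarray(device_pos, dtype=float) - np.asarray(entry_point, dtype=float)
    axial = float(np.dot(offset, d))               # signed depth along the axis
    if axial <= 0.0:                               # needle withdrawn: forces turned off
        return np.zeros(3)
    lateral = offset - axial * d                   # sideways deviation from the axis
    return -k * lateral                            # pull the tip back onto the axis

# Example: device 2 mm off-axis and 15 mm deep along a straight-in insertion
print(axis_constraint_force([0.002, 0.0, 0.015], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
# -> approximately [-0.6  0.   0. ] N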


Fig. 5-5. Illustration of the haptic tissue model.

After some development, a first interactive prototype of the training tool was available. The prototype at this stage consisted of the needle insertion logic, the human-tissue model and the segmented surface model. This prototype was evaluated during a design session with the clinicians. The session resulted in suggestions for improvements and new design ideas. For instance, it was not possible to palpate the model. The clinicians described that in real life the ideal skin-puncture point is located through palpation (i.e. physically applying pressure with both hands to a patient's back to identify the bone structure). However, the developers did not have a satisfactory solution for allowing haptic interactions with the hands at this stage. This was discussed with the clinicians and they suggested using a texture with a clearly visible bone structure instead. A clinician took a photograph with the appropriate skin texture and it was incorporated into the prototype by the developers, see Figure 5-6.

It was also necessary to incorporate meaningful training objectives into the system. At this stage, two objectives originating from the training analysis (see Chapter 5 – Training analysis) were incorporated into the system. The first objective was to learn how to distinguish between the tissue sensations of SA. This objective was supported by the haptic device and the human-tissue model. A clinician from the development team specified this objective as: “In the technique the most important part is to know where your needle tip [is]. The crucial part is the lig. flavum, dura, intrathecal space”10

10 Extract from instant messaging 06/10/2006


Fig. 5-6. Screen-shot of a prototype incorporating needle logic, human-tissue model and a segmented surface model.

In testing the first version of the prototype, the clinician requested some form of feedback to help the trainee know the correct needle position in the body. This feedback was displayed as text on screen. The second objective was to learn how to conceptually visualise the relevant anatomy during needle insertion. This objective was integrated into the system by allowing the trainee to place the needle in the 3D anatomy and then rotate it. By doing so, the trainee could get feedback on the actual needle placement and alternative trajectories. To further enhance this training objective, an additional 2D view of the anatomy was incorporated into the system. The 2D view dynamically demonstrated the needle insertion path, superimposed over an image from the CT data-set of the anatomy. See Figure 5-7 for illustrations of the two visualisations.

As the simulator was developed further, some additional teaching considerations were incorporated. Two modes were developed, i.e. a training mode and an assessment mode. In the training mode, the texture of the back became transparent when the needle hit bone, allowing the trainee to adjust the needle accordingly. The text feedback and the 3D and 2D visualisations gave extra information to the trainee. In the assessment mode, all of these aspects were disabled so as to resemble the real clinical procedure as closely as possible.
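One simple way to think about this split is as a configuration object that switches the didactic aids on or off; the sketch below is a hypothetical encoding of the two modes just described, not the simulator's actual code.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModeConfig:
    transparent_on_bone: bool  # back texture becomes transparent when bone is hit
    text_feedback: bool        # on-screen text about needle position
    anatomy_views: bool        # 3D rotation and 2D CT-overlay views

TRAINING_MODE = ModeConfig(transparent_on_bone=True, text_feedback=True, anatomy_views=True)
ASSESSMENT_MODE = ModeConfig(transparent_on_bone=False, text_feedback=False, anatomy_views=False)

def active_aids(mode: ModeConfig) -> list[str]:
    """List which didactic aids are enabled in a given mode."""
    return [name for name, enabled in vars(mode).items() if enabled]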



Fig. 5-7. Screen-shots of the visualisations incorporated into the simulator: a) needle is placed in the anatomy and rotated 90 degrees b) introducer (green) and spinal needle (blue) are superimposed over the CT image.

The developers were exposed to the working traditions of the clinicians during the development process. The clinicians were under severe time pressure and had varying degrees of computer experience. According to one of the clinicians from the development team, trainers and trainees had different levels of computer experience. “The consultants (trainers) are less used to computers than the medical students (trainees)”11

The hectic lifestyle of the clinicians was also evident throughout the collaborative development work. “I was up early today […] and the day will be so long, as I’m on call […] I'll be on call all weekend...”12

The close collaboration up to this point helped the developers to understand the clinicians’ characteristics as end-users: “A major characteristic of the clinicians as users, is their busy lifestyle, hence the system has to be carefully designed with this in mind”13

11 Extract from instant messaging 11/10/2006
12 Extract from instant messaging 09/11/2006
13 Researcher field note


This affected the developers' choices when refining the simulator interface. It was important to make the simulator usable and accessible for all levels of computer experience. These aspects led to a range of interface specifications, for instance:

• Simple design
• A limited number of choices to avoid distraction
• Interactions should be done with the haptic device
• Neutral colours associated with a familiar (clinical) environment

Hence, the simulator interface was designed with the intention of being easy to learn and to use. It was designed so that trainers would not require extensive time to learn to use the system, which would be advantageous if it was implemented as a training tool. The development of the 3D model, the needle simulation logic, the integration of the human-tissue model and the training considerations required an iterative development process, with the managing clinician, the clinicians in the development team and the developers collaborating closely. See Table 5-5 for a summary of input from users and informants during this phase.

Phase of development | User / Informant | User Input | Method
Design and informal evaluation phase 2 | Managing clinician and clinicians in the development team | Designed and evaluated prototypes of the system: provided medical expertise, provided CT data, suggested training considerations, evaluated face validity, training and working context | Collaborative design sessions
 | Developers | Designed and built prototypes of the system, provided the clinicians with knowledge of the technology | Collaborative design sessions

Table 5-5. User groups, user roles and methods used during the second design and evaluation phase.


Usability evaluation (formal) with independent anaesthetists

By the second half of Project 1, the prototype had evolved into a fully workable training system. The system was ready to be evaluated by independent clinicians outside of the development team. The aim of this study was to get feedback on the system, identify system errors and evaluate its face validity as a training tool. The study was organised by a clinician in the development team, supervised by the managing clinician. They recruited six consultant anaesthetists at the local hospital. The participants were asked to “think aloud”14 as they performed tasks on the system. This was followed by a semi-structured interview, and the session was video recorded for analysis.

All participants stated that the system had significant potential for teaching how to recognise the sensations associated with different tissues, how to insert the needle correctly and how to achieve a greater knowledge of the relevant anatomy. The perceived realism (face validity) of the tissues, the needle insertion and the simulation of the procedure was rated as high by the participants. Some quotes include:

(Clinician 1): “The different layers of feel I think is very accurate”
(Clinician 2): “I think it's a great tool”
(Clinician 3): “That you can get a 3 dimensional view of the back is very good”15

Furthermore, errors within the training system were identified during the evaluation session. For instance, the participants accidentally performed unintended actions in the menu, and the rotation of the anatomy did not work as intended. These observations led to overall improvements of the system’s usability. The usability study also generated suggestions for improvement, emerging directly from representatives of potential end-users. For instance:

• The need to incorporate the non-dominant hand, which is used to determine the skin puncture point and to stabilise the needle during insertion. This led to further investigations on how to develop such an interface. However, during the length of the project, no satisfactory solution was reached.
• One of the anaesthetists described how he had problems appreciating the interactions in 3D space. This led to an investigation of how to simplify the navigation in 3D. For example, a box was added to encapsulate the anatomy with the intention of simplifying the visuospatial manipulation of the haptic device in the virtual environment.

14 Think aloud is a method for capturing the cognitive state of a person using a system (Lewis, 1982).
15 Extract from a usability report 21/03/2007

The suggestions for improvement were implemented into the system by the developers, depending on their feasibility. During the usability evaluation it also emerged that there are many ways to perform the procedure of SA. For instance, the technique of using only the spinal needle was not covered in the prototype tested during the usability evaluation. One of the participants did not appreciate having to use the introducer, as he always performed the procedure using only the spinal needle. The system was later adjusted to accommodate the concerns associated with individual approaches to SA. A number of the participants also attempted to perform an alternative approach, called paramedian16, on the system. The development team had not previously considered the paramedian approach. However, the human-tissue model and the composition of the anatomy allowed this procedure to be performed on the system without any alterations.

A certain ambiguity in the medical world became evident to the developers during the usability evaluation. It made the developers aware that it is necessary to consider the differences in working traditions in medicine if a system is to suit every-day use by the intended users. The usability evaluation allowed consultant anaesthetists outside of the development team to influence the system with their medical expertise and their input. The independent clinicians’ participation in the usability evaluation helped the developers to improve the system’s usability and utility and to ensure it was perceived as accurate and useful. See Table 5-6 for a summary of input from users and informants during this phase.

16 The paramedian is a technique where the needle is inserted approximately 2 cm lateral to the midline of the spine. The needle is inserted at an approximate 15 degree angle to the spine. This procedure is used for patients with calcifications or deformed spines.


Phase of development | User / Informant | User / Informant Input | Method
Formal evaluation | Managing clinician and clinicians in the development team | Organised the study (released consultants from theatre) | Observations of anaesthetists using the system
 | Developers | Identification of usability issues | Observations of the anaesthetists using the system
 | Independent clinicians (trainers: consultant anaesthetists) | Evaluated the system (medical expertise, design errors, face validity, feedback on improvements, alternative approaches) | Usability study (think aloud, short interviews)

Table 5-6. User groups, user roles and methods used during the usability evaluation.

Re-design and build of final prototype

The results from the usability evaluation informed the developers of a range of possible improvements, many of which were implemented into the system. However, the development of the training tool still required the clinicians to contribute regularly to the process. As the development process progressed, the clinicians became more involved in many development aspects, not only those related to clinical elements. For example:

“should trial / practice mode tabs look different to the others (bigger/red/top of screen?)”17

17 Extract from email 23/01/2007

This quote shows how a clinician gave feedback on the layout of the system’s interface and the interactive aspects of the training system. On several occasions during the re-design phase, the clinicians recognised the necessity of making the system usable and accessible for fellow anaesthetists. Seeing the prototype evolve and progress enabled them to provide more detailed feedback and suggestions to improve the current version of the system. The clinicians’ participation in organising the usability study appeared to have made them more aware of usability issues. The clinicians also suggested additional improvements that would have been difficult for the developers themselves to identify. For example, the only outcomes of the procedure that the developers had originally considered were a successful procedure (puncture of the dura) or an unsuccessful one. However, as the prototype was refined, one of the clinicians recognised that the simulator should incorporate more specific outcomes in relation to an unsuccessful procedure:

“Two other possibilities exist to end 'advance through tissues' – one is 'blood' (inadvertent puncture of an epidural vein) and the second is 'nothing' – the needle goes right in to the hub and does not reach the dura (either misdirected or the dura is unusually deep)“18

From this, the developers made sure the system facilitated the outcome of ending up in “nothing”, where the entire length of the needle is inserted without hitting bone or ligaments. However, the puncturing of a vein was left for future implementation due to time restrictions. The clinicians also explained how the procedure of SA is performed in either the sitting or the lying-down position. They described how the sitting position is generally preferred, but that for some patients this is not possible, for example during a hip replacement. The procedure in the lying-down position had not been considered previously. The ability to tilt the back 90 degrees, so that the procedure can be performed in the lying position, was also implemented in the prototype. Figure 5-8 illustrates how the system changed from Figure 5-6, based on the feedback from the usability evaluation and on additional suggestions from the clinicians. The clinicians’ new design ideas and additional suggestions for improvement resulted in a version that the clinicians and developers agreed was ready for a clinical trial. See Table 5-7 for a summary of input from users and informants during this phase.

18 Extract from email 23/01/2007
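The outcome categories discussed above lend themselves to a small illustrative sketch. This is not the project's code: the enumeration and the depth check below are hypothetical, and merely restate the logic described by the clinician (dural puncture as the successful end point, "nothing" when the needle is advanced to the hub without reaching the dura, and venous blood left for future implementation).

```python
from enum import Enum, auto
from typing import Optional

class Outcome(Enum):
    DURAL_PUNCTURE = auto()  # successful end point of the insertion
    NOTHING = auto()         # needle advanced to the hub without reaching the dura
    BLOOD = auto()           # epidural vein punctured (left for future implementation)

def classify_attempt(depth_inserted_mm: float,
                     needle_length_mm: float,
                     dura_reached: bool) -> Optional[Outcome]:
    """Illustrative classification of how an 'advance through tissues' attempt ends.
    Returns None while the attempt is still in progress."""
    if dura_reached:
        return Outcome.DURAL_PUNCTURE
    if depth_inserted_mm >= needle_length_mm:
        return Outcome.NOTHING
    return None
```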


Fig. 5-8. The usability-tested version of the prototype improved based on the feedback from the usability testing and clinicians’ additional suggestions.

Phase of development | User / Informant | User Input | Method
Design and informal evaluation phase 3 | Managing clinician and clinicians in the development team | Designed and evaluated prototypes of the system, medical expertise, new improvements, new design ideas | Collaborative design sessions
 | Developers | Designed and built prototypes of the system | Collaborative design sessions

Table 5-7. User groups, user roles and methods used during the third design phase.

Clinical trial: Preliminary validation of training

The prototype had at this point evolved into a system that was potentially useful for training. The managing clinician and the clinicians in the development team decided that the next step was to perform a preliminary validation of the system as part of an experimental training programme. The intention was to investigate whether training on the system translated into improved performance in the operating theatre.

The clinicians made the main design decisions for the clinical trial. For instance, they decided the appropriate amount of previous experience which the participants should have and the format of the teaching programme. The clinicians also identified suitable methods for assessing the knowledge and performance of the participants. Multiple choice questions (MCQs) were used to determine previous knowledge levels. A global rating scale (see Table 5-8) and a task-specific checklist were used to measure the procedural elements of SA in theatre and on the system. The overall assessment approach was based on an existing method called OSATS (Martin et al., 1997), which has been applied for assessing other medical procedures. As the study involved live patients for the measurement of performance, the clinicians had to apply for the necessary ethics approval, which was granted.

The clinicians recruited, in total, 27 medical interns with no previous experience of SA. The participants were split into two groups. The first was a control group which only received conventional training, consisting of reading materials and tutorials. The second group received both conventional training and training on the system. Supervised by the managing clinician, a clinician of the development team organised and was responsible for the training of the two groups. After the training programme, both groups first went through an assessment on the system and then an assessment in the operating theatre. Each participant performed both assessments once. One “blinded” observer19 rated the interns’ performance by observing the procedure on the simulator and “live” in the theatre. In addition, the performance in the theatre was video recorded, and two blinded observers rated the participants’ performance by observing the videos. The observers were independent consultant anaesthetists.

Organising all the required elements for the clinical trial was difficult. This was expressed by one of the clinicians from the development team during the final interviews:

“It was really difficult to organise everything together in the clinical testing to have the right patient, the right trainer, the right trainee, the evaluator, someone taking the video [..] at the same time”20

19 The observer did not know who belonged to the control group and who to the system-trained group.
20 Extract from interview


Spinal Needle Handling
1: Repeatedly makes tentative or awkward moves with needle by inappropriate use of it
3: Competent use of needle but occasionally appeared stiff or awkward
5: Fluid moves with needle and no awkwardness

Flow of Procedure / Rate of Advance
1: Frequently stopped procedure and seemed unsure of next move
3: Demonstrated some forward planning with reasonable progression of procedure
5: Obviously planned course of procedure with effortless flow from one move to the next

Respect for Tissue
1: Frequently used unnecessary force on tissue or caused damage
3: Careful handling of tissue but occasionally caused inadvertent damage
5: Consistently handled tissues appropriately with minimal damage

Table 5-8. The table shows examples of three components that were used as part of the global rating scale during the simulator and clinical assessment21. Each component was scored between 1 and 5 by the observers.

Performing a blinded randomised clinical trial is a complex undertaking. It would not have been possible without the participation of the managing clinician and the clinicians in the development team. All 27 participants proceeded to assessment on the system after completing the training programme, and 11 of them went on to complete both the system and the clinical assessment22. The system-trained group performed better on the system, on average, but the difference did not reach statistical significance. There was some disagreement between the observers for the clinical assessment. The observer assessing the procedure “live” scored the system group higher on average on both the global rating scale and the task-specific checklist, whereas the observers assessing the procedure through video recordings recorded few differences between the two groups.

Afterwards, qualitative feedback was also acquired from the participants. For example, the group trained on the system said that they felt less stress when performing the procedure in the theatre. Both groups believed that simulator training would be beneficial for their future training in SA.

The study did not show a conclusive benefit in using the system as a training tool. Further studies with a larger number of participants and repeated trials would be necessary to establish predictive validity and reliability with statistical significance. In addition, the interns only had a limited amount of training on the system; a longer period of training might be necessary before the training demonstrates improved performance in the operating theatre. See Table 5-9 for a summary of input from users and informants during this phase.

21 Extract from project document describing the clinical trial.
22 The remaining 16 participants did not go through clinical assessment due to limitations in time and resources.

Phase of development | User / Informant | User / Informant Input | Method
Validity evaluation | Managing clinician and clinicians in the development team | Designed and organised the study (MCQs, global rating scale, task-specific checklist), made sure suitable participants and test subjects were available | OSATS, randomised controlled trial
 | Independent clinicians (interns) | Evaluated on the system and in theatre | Randomised controlled trial
 | Independent clinicians (consultants) | Observed and scored the procedure, medical expertise | Randomised controlled trial
 | Patients | Test subjects | Randomised controlled trial
 | Developers | N.A. | Observation

Table 5-9. User groups, user roles and methods used during the clinical trials.


Trial implementation: Emergency medicine course

At the end of Project 1, a trial implementation of the system was conducted. A 3-day emergency medicine course at a nearby medical school created an opportunity for testing the system in the field. The teaching course allowed the developers and clinicians in the development team to observe a “real-life” training situation with trainer and trainees using the system. The course was intended for doctors of different backgrounds to refresh their knowledge of selected medical procedures. Lumbar puncture23 was one of the procedures taught on the course. The needle-insertion element of the procedure is shared by lumbar puncture and SA, making the system suitable for training lumbar puncture also. The trainer of the course had never seen this or other VR-based training systems before. He received a 20-minute introduction to the system prior to the training course. The trainees were observed by the developers and the clinicians in the development team while they were trained on the system. From these observations, it was possible to identify some issues in relation to how the system was used:

• Some participants held the haptic device instead of the needle. This was not observed by the trainer. How the needle should be held has to be explicitly specified if the system is intended as part of a formal training programme.
• Some participants had difficulties in understanding the orientation of the needle in the virtual environment. From this, alternative ways of displaying the needle in relation to the anatomy and within the virtual environment were considered.

The trainees also answered questionnaires. The questionnaires were designed to capture the trainees’ subjective opinions of the system. In summary, the results from the questionnaires showed that:

• The trainees had a positive attitude to the training system and believed that it was a useful and beneficial tool in a training session.
• Most trainees believed that future training on the simulator would be beneficial.

23 The procedural aspect of SA shares close similarities with the medical procedure lumbar puncture. In this procedure the lumbar sac is punctured to extract cerebrospinal fluid, which is analysed to diagnose different conditions. In SA, the sac is also punctured but, instead of extracting fluid, anaesthetic solution is injected.

Overall, the trainees were keen to use the system and it appeared to work well in a training situation. See Figure 5-9 for a picture of trainer and trainee using the system together.

Fig. 5-9. Trainee and trainer at a training course in Emergency Medicine.

The trainer was also observed while using the system for training lumbar puncture during the training course. The observations indicated that:

• The system was a useful aid for him to teach lumbar punctures.
• The realism of the human-tissue model was perceived as good.

During the training session he also mentioned that:

• He wanted to have a pair of 3D glasses of his own, which were acquired for future training sessions.
• The incorporation of palpation would be a major improvement of the system. He would also have preferred the ability to re-direct the needle once inserted. Both of these aspects had previously been addressed and were then under consideration.

After the training course, a questionnaire was given to the trainer. Regarding the system as a training tool and the realism of the haptic sensations, he said that the system was:

“Very useful. The sensation of 'give' for each of the tissues penetrated was just like the sensation when doing a lumbar puncture on a patient. This is the part of performing the technique that could, in my experience, heretofore only be taught by allowing the trainee to practice on a real patient.”24

The comment regarding the tissues confirmed the results of the previous evaluations performed during Project 1. By using the system during a real-life teaching course, the instructor envisioned how the system could be incorporated in a training course: “I would imagine it being used by the trainer showing the trainee the procedure being taught, then repeating it whilst explaining each stage, then getting the trainee to 'talk it through' with the trainer performing the technique, finally allowing the trainee to practice it themselves.”

The instructor believed that the interaction between trainer and trainee would be as important as the interaction with the system. He regarded the system as an aid, but not as an alternative to a trainer. This indicates that the objective of improved trainer and trainee communication (see Chapter 5 – Training analysis) was partly addressed by the system. The instructor also recognised the potential of using the same technology for other medical procedures: “I could see the same advantage being applied to the teaching of many medical procedures involving the insertion of needles into body cavities, including joint aspiration, pericardiocentesis, thoracentesis and laparoscopy.”

Knowing that the system could potentially be applied to other procedures was helpful for strategic planning within future projects. The contact with the instructor was maintained and led to further discussions about other potential uses for the technology. By allowing representatives of end-users to use the technology in a real-life setting, the trial implementation helped to confirm data from previous investigations and to generate new ideas for future enhancements of the system. By having both trainer and trainees participating, the trial indicated that the system was deemed useful by both groups. The trial implementation also showed that the system was usable, as the trainer was able to use it for training after only a short introduction to the system. See Table 5-10 for a summary of input from users and informants during this phase.

24 Extract from trainer questionnaire 24/04/2008

Phase of development | User / Informant | User / Informant Input | Method
Trial implementation | Independent clinician (trainer) | Evaluated the system, perceived usefulness, suggested improvements, medical expertise | Short interview, questionnaire
 | Independent clinician (intern) | Evaluated the system, perceived usefulness, medical expertise | Short interview, questionnaire
 | Developers and clinicians in the development team | Identified subjective beliefs and potential improvements of the system | Observation

Table 5-10. User groups, user roles and methods used during the trial implementation.

Summary

Chapter 5 analyses how the development process in Project 1 was dependent on user participation. In this chapter Scaife et al.’s (1997) framework is utilised to differentiate the users into specific user groups, to determine what form of input the groups provided to the process and to identify how they were involved in each of the separate development phases.

The problem statement originated from the managing clinician, who outlined research objectives based on recommendations of best practice from medical training bodies. He connected with other clinicians and developers to create novel learning technologies for training SA. A training analysis was designed and organised by the managing clinician and the clinicians in the development team. Independent anaesthetists, other medical staff and patients provided input on the training in SA. The analysis influenced the future development of the technology by grounding the development decisions of the system within current issues associated with the training in SA.

The managing clinician and the clinicians in the development team participated during an initial design phase which resulted in a human-tissue model of the procedure. The clinicians directed the developers in how to implement the human-tissue model in the system, so that it reflected a real-life patient.


An evaluation of the human-tissue model was organised by the managing clinician and the clinicians in the development team. It was based on additional research objectives from the managing clinician. Independent anaesthetists were recruited in order to validate the model’s face validity and to determine differences in haptic perception between different levels of expertise.

The managing clinician and the clinicians in the development team participated during a second design phase, where the human-tissue model was incorporated into a training system. The clinicians continuously provided medical expertise and procedural skills to the developers. They provided a CT data-set, suggested how to incorporate training in the system and evaluated its face validity. The resulting system was tested during a usability evaluation. Independent anaesthetists (trainers) provided input on the system’s usability and utility. The evaluation was organised by the clinicians.

A clinical trial was designed and organised by the managing clinician and the clinicians in the development team. The clinicians in the development team were also responsible for the training of the interns during the training programme. Interns, consultant anaesthetists and patients were required to perform the clinical trial. The project was concluded with a trial implementation. A trainer used the system in a training course and provided feedback on the system’s usefulness as a training tool and suggestions for improvement. Trainees participated in the course and provided feedback on the usefulness of the system. The user groups and informant groups are discussed in detail, in combination with the findings from the second case study, in Chapter 7.

The system’s development in Project 1 was explorative. The developers did not have adequate medical knowledge and the clinicians had no experience of developing advanced technology for medical training. It was a learning process for both sides. The close relationship between developers and clinicians helped to create system requirements from training objectives and helped to implement these into the medical training system. How user guidance was accomplished is discussed in detail, in combination with the results from the second case study, in Chapter 7.

CHAPTER SIX
CASE STUDY 2: THE MEDCAP-PROJECT

This chapter analyses the empirical data that was collected during Project 2. As in Chapter 5, the writing is divided into separate development phases. Each phase provides examples of different user groups, input from users and how methods were used to involve the users in the development process.

Introduction to Project 2

Project 2 was led by the same managing clinician as in Project 1 (see Chapter 5). The aim of the project was to develop a valid and reliable assessment procedure for assessing competencies associated with SA. The project was based on the outcomes from Project 1 and was funded by the Leonardo da Vinci Lifelong Learning Programme. The project started in autumn 2007 and was finished by the beginning of 2010. The author of this work participated as a system developer for the duration of the project.

Project 2 consisted of transferring an existing assessment approach called Competence-based Knowledge Space Theory (CbKST) to the medical domain. CbKST is based on the assumption that the skills involved in a job or procedure can be described as a competence space (Albert et al., 1999). The competence space contains a description of all the competencies that are related to performing the job, where such competencies are required for performing certain tasks. It is also assumed that some competencies are dependent on others, so the individual competencies within the competence space are arranged based on various pre-requisites. For example, if applying CbKST to algebra, the skill of performing addition is a pre-requisite for subtraction, and subtraction is a pre-requisite for multiplication. CbKST can be used to identify the learner’s competence level in relation to the overall competence space. If the competence space is known, this approach can also provide adaptive learning paths: the learning path determines what the learner is ready to learn next. CbKST can also optimise assessment by asking only questions which suit the learner’s competence level. This approach requires computer-based assessment; the competence space of a knowledge domain is stored as a computer model, and algorithms are used to calculate the competence level of a learner.

The assessment procedure in Project 2 utilised a web-based system for assessing knowledge and a VR-based system for assessing procedural skills. Project 2 used the Accreditation Council of Graduate Medical Education’s (ACGME)1 competence description for creating the CbKST-based competence space. The ACGME has identified six core competencies that are required by a proficient medical practitioner: Patient Care, Professionalism, Medical Knowledge, Interpersonal and Communication Skills, System-based Practice, and Practice-based Learning and Improvement. For instance, patient care involves sub-competencies such as competently performing technical procedures and performing an accurate investigation of a patient’s history, while interpersonal and communication skills involve creating and sustaining patient relationships as well as being a leader or a member of a group of health care professionals. The ACGME’s description of competencies is a recommendation of the necessary skills that a medical practitioner should have. It was used in Project 2 to ensure that the developed competence space was relevant to the medical domain.
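To make the idea of prerequisites and adaptive learning paths concrete, the sketch below uses the algebra example from the text (addition before subtraction, subtraction before multiplication) and shows one way a competence space could be stored and queried for what a learner is ready to learn next. The representation is an assumption for illustration only and is not the implementation used in Project 2.

```python
# Each competence maps to the set of competencies it directly depends on.
prerequisites = {
    "addition": set(),
    "subtraction": {"addition"},
    "multiplication": {"subtraction"},
}

def ready_to_learn(mastered: set) -> set:
    """Competencies whose prerequisites are all mastered but which are not yet
    mastered themselves; in CbKST terms this approximates the learner's outer fringe."""
    return {c for c, pre in prerequisites.items()
            if c not in mastered and pre <= mastered}

print(ready_to_learn(set()))           # {'addition'}
print(ready_to_learn({"addition"}))    # {'subtraction'}
```

The same query can be run after every assessment result, which is what allows the assessment to adapt to the learner's current competence level.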

User participation in the development process in Project 2

This section analyses the relation between user participation and the development process in Project 2. The analysis is based on Scaife et al.’s (1997) framework presented in Chapter 4.

Problem statement

The managing clinician argued that the lack of standardised assessment procedures was a major issue in the area of medical training. The current certification of anaesthetic registrars was based on completing a seven-year apprenticeship programme, with no formal assessment during the process. The managing clinician said that there was a huge demand for standardised assessment in medical training. For instance, the medical training body, the Irish College of Anaesthetists2 (COA), had adapted to a competence-based assessment format. However, the COA had, at the time, no standardised means for assessing individual competencies; they relied on the common sense of experienced trainers to judge acceptable performance. Hence, the managing clinician decided to focus on developing a competence-based assessment procedure. From this, he identified a potential approach for systematically assessing competencies in a meaningful way: CbKST (see Chapter 6 – Introduction to Project 2).

The managing clinician established connections with educational specialists with expertise in CbKST. They were invited to participate in the project proposal leading to Project 2, and will be referred to as the CbKST-specialists hereafter. The CbKST-specialists provided advice on how to create a competence space and how to utilise it in an assessment procedure for SA. The managing clinician also involved other partners in the project proposal. A second teaching hospital, based in Hungary, participated in the training analysis and evaluations of the system. The Interaction Design Centre (the developers) participated with responsibility for developing the web-based system intended to assess knowledge. They were also responsible for re-designing the system from Project 1 so that it could be used for assessing procedural skills. Besides the focus on competence-based assessment, the managing clinician included essential elements such as self-directed learning and problem-based learning in the project proposal. He also decided to perform a clinical trial with the system.

The managing clinician became the primary investigator of Project 2. He assigned a second clinician to work as a full-time researcher on the project. Her duties were to perform the training analysis and to support the development of the assessment procedure. Three additional clinicians also participated as part of the development team and provided additional expertise. During the final stages of the project, two other clinicians joined the development team to perform a clinical trial with the system. See Table 6-1 for a summary of user participation during this phase.

1 http://www.umm.edu/gme/core_comp.htm (accessed 29th Jan 2010)
2 http://www.anaesthesia.ie (accessed 29th Jan 2010)

Chapter Six User / Informant

User / Informant Input Method

Managing clinician

Research objectives, connected with other groups

N.A.

Developers

Suggested how to apply technology

N.A.

CbKSTspecialists

Advice on how to apply CbKST for training in SA

N.A.

General stakeholders (COA)

Recommendations for best practice (standardised assessment, competence-based, problem-based learning, etc.)

N.A.

Table 6-1. User groups, user roles and methods used during the problem statement phase.

Training analysis of Spinal Anaesthesia The training analysis in Project 2 included a survey of Irish and Hungarian anaesthetists and focus groups. A survey was conducted to identify similarities and differences in medical practice between Ireland and Hungary. It was conducted both as a research exercise and to provide information on how to make the assessment procedure adaptable within both countries. Supervised by the managing clinician, the clinicians in the development team wrote the questionnaires used in the survey. The questionnaires required a deep understanding of the procedure and its surrounding elements. The survey enabled certain configurations and considerations of the assessment procedure and the final clinical trial. For example, the assessment procedure required having both English and Hungarian translations, which resulted in two versions of the system. The analysis also showed that problem-based learning was considered more important in Hungary than in Ireland. There was also a divergence in the amount of patients the Irish and the Hungarians were exposed to during their initial years of training.

Case Study 2: The MedCAP-Project

71

The clinical trial of the system was originally intended to be based upon the number of years in training as a distinguishing factor. However, based on the results of this survey, the amount of spinals performed was used as the distinguishing factor instead. Competence group

Competenceid

Prerequisites

Competence sub-group

Competence

Patient care: technical performance

380

231, 169, 171

Needle selection

Understands the importance of needle size

Patient care: technical performance Patient care: technical performance

381

228, 169, 170, 171

Needle selection

382

227

Needle selection

Can distinguish different needle types Uses an extra long needle in obese patients

Patient care: technical performance

383

230

Needle selection

Selects an introducer ( if desired) to steady the spinal needle

Table 6-2. The table illustrates competence 380 to 383 from the competence space. The competencies belong to the group “Patient care: technical performance”. Each of them depends on pre-requisite competencies3. The managing clinician and a clinician from the development team organised focus groups to identify the competence space of SA. The focus groups involved anaesthetists with different levels of expertise. The competence space was based on the ACGME’s competence categorisation so as to ensure that the competencies were organised in a meaningful way (see Chapter 6 – Introduction to Project 2). Pre-requisite competencies were also identified during the focus groups, as is required by the CbKST approach (see Chapter 6 – Introduction to Project 2). The CbKSTspecialists assisted the clinicians in the development team in the process of creating a competence space for SA. They clarified the theoretical assumptions that CbKST are based on. They also instructed the clinicians in how the competence map should be described in practice. After the focus groups, a clinician of the development team compiled the results on excel-sheets, as based on the CbKST-specialists advice. See Table 6-2 for 3

Extract from project document describing the competence space

72

Chapter Six

an example of the resulting competence space. The final competence space consisted of over 500 competencies. The competence space configured the content, context and use of the VR and web-based assessment system. Additional focus groups were organised by the clinicians to identify metrics describing the procedural components of SA. Independent anaesthetists participated and divided the procedure into separate tasks. These metrics were necessary for the simulator-based part of the system and to provide meaningful assessment. Each metric tested one or more competencies in the competence space. Table 6-3 shows four examples of the identified metrics. See Chapter 6 – Design, build and informal evaluation of the combined assessment system for the metrics implemented in the simulator. Fails to re-position after exaggeration of lumbar lordosis following skin infiltration Inserts introducer or a needle without giving superficial local anaesthetic for 2 each interspace used Fails to identify anatomical landmarks (midline) before attempting lumbar 3 puncture Fails to identify anatomical landmarks (iliac crests) before attempting lumbar 4 puncture 1

Table 6-3. The table illustrates four of the metrics that were identified during the focus groups4. During the training analysis, the managing clinician liaised with a behavioural scientist. He was a specialist in evaluating training transfer from simulation-based systems. The behavioural scientist provided additional expertise by delivering a two-day workshop to the project team. He gave advice on how to perform a randomised controlled trial (RCT) and how to utilise metrics (which was performed at the end of Project 2, see Chapter 5 – Clinical trial). See Table 6-4 for a summary of user participation during this phase.

4

Extract from project document describing the metrics

Case Study 2: The MedCAP-Project Phase of development Training analysis

User / Informant

73

User / Informant Input Method

Managing clinician and clinicians in the development team Independent anaesthetists (trainers, trainees)

Design and organisation of analysis

Training analysis

Input on training in SA, identified issues with training SA, identified competence space and metrics

Questionnaires, focus groups

Developers

N.A.

Observation

CbKSTspecialists

Advice on how to utilise CbKST

Project meetings

Behavioural scientist

Advice on randomised controlled trials and metrics

Workshop

Table 6-4. User groups, user roles and methods used during the training analysis.

Design, build and informal evaluation of the web-based system The web-based part of the assessment procedure utilised a Learning Management System (LMS). The managing clinician decided to use case scenarios and question assessments as part of the assessment procedure and LMS. The case scenarios were created by a clinician in the development team, supervised by the managing clinician. The case scenarios were written to reflect the content of the previously-identified competence space. The clinician wrote six case scenarios consisting of patient information, events and problems. For instance, one of the case scenarios began as: “A 22 year old lady (gravida 2, para 1) is scheduled for an elective caesarean section the following day. She is being reviewed pre-operatively on the ward. She has no concept about spinal anaesthesia..” etc.5

5

Extract from scenario document

74

Chapter Six

This patient description was followed by questions or true/falsestatements. For example, one of the true/false-statements in the scenario described above was: “Q3 It would be wise at this stage to leave sitting the cannula until after intrathecal injection of local anaesthetic given that spinal anaesthesia results in vasdilation.”

As the case proceeded, further information was provided. The case scenarios also described the procedural elements of SA for each patient type. These elements were covered by the simulation-based part of the assessment procedure (see Chapter 6 - Design, build and informal evaluation of the combined assessment system). In addition to the case scenarios, the clinician also wrote multiple choice questions (MCQ’s). The MCQ’s complemented the case scenarios so that a greater range of the competence space could be tested. At this stage, a LMS was required to present the case scenarios and MCQ’s and electronically capture the answers to the questions. Various different solutions were reviewed. The managing clinician, the clinicians in the development team and the developers had little previous experience of LMS. Based on initial discussions with the clinicians, a simple paperbased prototype was created by the developers (see Figure 6-1). The paper-based prototype showed how the interface of a possible LMS could work, illustrating the following functionalities: • •



• •

Log-in: The system needed to keep track of different users and the users’ assessment results. Assessment: This showed how the case scenarios could be implemented in the LMS. Events and questions relating to the scenarios were displayed on separate pages and in chronological order. Assessment results: This option provided the user with results from previous assessments, summarised as bullet points and as graphs. The system would also provide suggestions for further practice and reading. Background information (Library): The system would provide information of how to perform SA and additional reading material. Practice on the simulator: The system would allow the users to familiarise themselves with the simulation-based part of the assessment procedure.

Case Study 2: The MedCAP-Project

75

Fig. 6-1. Photograph of paper prototype. Each piece of paper represents a screen with certain functionalities.

The prototype was informally evaluated with a clinician from the development team during a design session. The functionalities of the LMS were confirmed by the clinician. This evaluation enabled a shared idea of the potentials for such a system. However, it was agreed that more research was necessary before making a decision on a LMS. No new development ideas were generated at this stage. Further investigations by the developers indicated that a custom-made LMS might not be possible to create within the timeframe of the project. Instead, it was decided to review the possibility of using an existing, open source LMS. The evaluation of the paper prototype had covered some necessary functionality and had created the basis for the next prototype development.

76

Chapter Six

A prototype assessment procedure of limited functionality was implemented in an open-source LMS called Moodle6. At this stage, the Moodle prototype displayed some information from one of the case scenarios along with a few questions. It was distributed over email to the clinicians for feedback. They needed to determine if Moodle was suitable for the assessment system: “I think that Moodle will work well [..] - can we use images and video clips - in picturing the 'tests' that we will set based on [other clinician]’s [competence space] mapping exercise, these will be invaluable.” 7

This feedback verified that Moodle was a potentially useful part of the assessment procedure. By experiencing a working prototype of the LMS, it enabled the clinicians to recognise the importance of images for strengthening the case scenarios. After the initial evaluations of Moodle, the rest of the case scenarios were incorporated into the LMS. In addition, relevant media (pictures and videos of patients and equipment) had to be identified. The media was intended to augment the information in the case scenarios. Films and photos needed to be taken in the operating theatre and this became a task for a clinician from the development team. However, taking the necessary photographs was not a trivial task: “Hanging around for photos [in the operating theatre] was very frustrating. Had to make some changes to clinical scenarios because of photos [..] as it was impossible to source some of the items in the hospital”8

The clinician had also to plan the photographs in-detail so as to ensure that they provided accurate representations of the scenario problems: “I got some photos for the LMS yesterday and the simulator but in reality I am realising how hard it is to convey poor positioning for example in a 2d photo - was even considering using a sketch instead last night”9

Taking the necessary photographs and films required medical expertise and suitable contacts to identify equipment (needles, solutions, etc.) and to arrange illustrative situations with anaesthetists and patients in theatre. 6

www.moodle.org (accessed on the 24th of Feb. 2010) Extract from email 03/04/2008 8 Extract from email 24/11/2008 9 Extract from email 20/11/2008 7

Case Study 2: The MedCAP-Project

77

Figure 6-2 shows an example of how photographs taken by the clinician were used in the LMS. The photographs illustrate the possible positioning of a patient.

Fig. 6-2. Example of photographs taken by the clinician (as used in the LMS).

The LMS went through several additional iterations until all six case scenarios and MCQ’s were implemented in the system. At the end of this process, the managing clinician and the clinicians in the development team verified that the LMS corresponded to their ideas of how the assessment procedure should behave. See Figure 6-3 for an illustration of the LMS interface. See Table 6-5 for a summary of user participation during this phase.

78

Chapter Six

Fig. 6-33. A screen-shot of the prototype interface implemented in Moodle.

Case Study 2: The MedCAP-Project Phase of development Design phase 1

User / Informant

User Input

79 Method

Managing clinician

Directed the design of LMS, evaluated prototypes of the system, provided medical expertise

Collaborative design sessions

Clinicians in the development team

Input on the design of LMS, evaluated prototypes of the system, provided medical expertise, wrote case scenarios, identified media

Collaborative design sessions

Developers

Designed and built prototypes of the system, provided the clinicians with knowledge of the technology

Collaborative design sessions

Table 6-5. User groups, user roles and methods used during the development of the LMS.

Design, build and informal evaluation of the combined assessment system The system from Project 1 was intended as a platform for assessing the procedural elements of SA. However, that system required extensive development for implementing each of the case scenarios and each of the metrics. The managing clinician had decided at the start of Project 2 that the case scenarios should cover six different patient types. Each of these six case scenarios required that a corresponding anatomical model was developed. Suitable CT-data for each patient type needed to be identified so as to create the new models. Supervised by the managing clinician, a clinician from the development team used her contacts at the teaching hospital to collate appropriate CT-data. However, this also proved to be a challenging task at the hospital.

80

Chapter Six “I have had so far 2 unsuccessful attempts to meet with [name] re the CT scans where I was waiting around for 1-2 hours. I am normally a patient person but I am getting a bit annoyed. Anyway I have another appointment with him today and will chain myself to the door of radiology if I don’t get to meet him!!!!”10

After a few attempts, the clinician acquired access to CT-data corresponding to the patients described in the case scenarios. The developers used the datasets to generate additional anatomical models for use within the simulation system. The managing clinician argued that the visual aspect of the simulation system would be of great importance when assessing needle insertion, especially as palpation was not covered by the system. The new anatomical models required that the system display textures that corresponded to each of the six patient types. However, the simulator in Project 1 had only incorporated a model and a texture of the lumbar region. The developers suggested using full-body photographs of patients taken in operating theatre as an alternative solution. The developers sent a sample picture of the new view to a clinician in the development team (see Figure 6-4).

Fig. 6-4. The picture sent to a clinician for illustrating the idea of a full body interface.

From viewing the picture, the clinician agreed that a full-body texture would be a potential improvement of the simulator. However, she said that

10

Extract from email 04/07/2008

Case Study 2: The MedCAP-Project

81

in most patients the iliac crest and spinous process11 are not generally visible (the landmarks of the back) as they were in the sample picture. To solve this problem, the clinician suggested placing a depiction of a hand in the picture so as to help show the iliac crest. The clinician organised with another clinician to take test photographs of patients in theatre: “he can take them on his list as he finds the patients, perhaps you could send me the photo we looked at [..] I will forward it on with list of the types and positions of patients we need for the case scenarios.“12

The developers combined one of the resulting photographs with a 3D model segmented from the datasets. This created a mock-up prototype of what the interface could potentially look like. It was then sent as a picture to the clinician for feedback (see Figure 6-5).

Fig. 6-5. The mock-up prototype sent to a clinician illustrating the new simulator interface.

This prototype gave the clinician a further understanding of how the developers intended to use the photographs in the simulation system. However, the clinician spotted a minor clinical error in the model at this point:

11

The iliac crest is the superior border of the pelvis, which is used as a landmark to determine the needle insertion point. Spinous process is the end of a vertebra. 12 Extract from email 18/06/2008

82

Chapter Six “these pictures do give a good idea of your task- interestingly I would have put the superimposed iliac crest as further up in the hand picture, also I think it highlights the need to identify the midline properly as to me your superimposed image looks midline but tilted (I think this is because the patients back has a natural off-centre curvature).”13

The developers had made an error when aligning the photograph of the patient with the segmented 3D model. From this, the clinician recognised the necessity to clearly mark the landmarks of the back (spinous processes and iliac crest) when taking photographs of the patients in the operating theatre. These markings would help the developers align the new anatomy with the patient textures. The clinicians in the development team took photographs for all of the six patient types based on these specifications. The resulting pictures were applied as textures to the segmented 3D models. See Figure 6-6 for two examples of the final texture.

Fig. 6-6. Two of the final textures with marked landmarks, which were used in the simulation system.

After the six patient models were generated, haptic feedback was applied to them by utilising the human-tissue model developed in Project 1. The clinicians in the development team adjusted the sensations so that they corresponded to a realistic representation of each patient type. The

13

Extract from email 20/10/2008

Case Study 2: The MedCAP-Project

83

sensations of each patient were later verified with anaesthetists from outside the development team (see Chapter 6 – Usability evaluation). At the same time, metrics were necessary to implement within the simulation system. These metrics were used to assess a selection of competencies from the competence space as associated with the technical performance of SA. After the metrics focus groups were completed (see Chapter 6 – Training analysis), the developers had to clarify the system’s capabilities for the clinicians in the development team: “I am writing my first draft of the metrics. Just wanted to check that you can track things like- the medial and cephalad direction of the needle? If a person continues to advance the needle despite puncturing the dura and the speed at which the needle can be advanced? The number of times bone is encountered”14

From reading the first draft, the developers got a better understanding of the metrics and suggested how the system could be used for the assessment procedure: ”I have looked through the metrics and I seen that there will be parts possible to assess through the simulator and parts that has to go through the learning management system. [..] So at some stage we can discuss what to capture in the simulator and what to capture through the LMS.”15

Due to limitations within the simulation system, only a selection of the identified metrics was possible to implement. The implementation of the metrics within the system required several iterations. For instance, the clinicians in the development team had not anticipated that each patient scenario would require variations on the metrics. This was discovered when the metrics were implemented in the system. The clinicians had to adapt the range of each metric depending on the patient type. For example, a difficult case required that more skin punctures were allowed. The collaborative implementation of the metrics development helped the clinicians get a better understanding of how the metrics were dependent on the system. It also aided the developers in understanding the concept of metrics and how they should be used in the system. Before the metrics document was written, the developing researcher had a somewhat naïve understanding of what was considered a successful procedure: 14 15

Extract from email 19/02/2008 Extract from email 26/02/2008

84

Chapter Six “I anticipated that finding the right spot in the back was an important metric, however, it is not part of the resulting metrics document. Technical success with the needle does not seem to necessarily mean a successful procedure“16

Hence, the developer’s knowledge of the procedure would not have been sufficient for generating accurate metrics. The expertise provided by the independent anaesthetists and the clinicians in the development team ensured the accuracy of the metrics. Table 6-6 shows the metrics that were implemented in the system. 1

While using the midline approach, initial needle direction is lateral

2

While using the midline approach, initial needle direction not cephalad

3

While using the midline approach, > x skin punctures/space are attempted (depending on patient type) While using the midline approach, > x spaces are attempted (depending on patient type) Inserts needle or introducer in an incorrect target area (depending on patient type) Re-insertion of needle without visibly changing angle following failed attempt

4 5 6

Table 6-6. The metrics that were implemented in the simulator17. Metric 3, 4 and 5 had to be adjusted based on the patient type. After the metrics integration, the simulator and the LMS were combined into one system. The managing clinician and the clinicians in the development team evaluated the system during a final design session. The system’s usefulness was evaluated and a range of improvements was suggested. For example: • • 16 17

The face validity of the system, i.e. questions, haptic sensations and virtual anatomy were all confirmed to be of appropriate fidelity by the clinicians. Up to this point, it had been discussed whether the markings of the landmarks in the back should be displayed within the

Researcher field note Extract from project document describing the metrics.

Case Study 2: The MedCAP-Project





85

simulation system. From trying the system without the markings and the absence of palpation, the clinicians realised that it would be too difficult to perform the procedure without them. Hence, the system was set to display the landmarks on all back textures. A clinician suggested having a name, a picture or a short summary of the patient repeated throughout each scenario in the LMS. This would help him to remember the case and it would not be necessary to go back and re-read previous information as often as it had been before. One of the clinicians decided that the impossible patient condition should have a time limit. The clinician described how it could not be assumed that an assessed anaesthetist would stop the procedure unless adequate warnings were given.

From this feedback, the developers improved the system further. This resulted in a system that was ready for user testing with participants from outside the development team. After the LMS and the simulation system were completed, the CbKSTspecialists implemented an additional system. This system consisted of algorithms that used assessment data from the LMS and the simulator and the pre-defined competence space within which to calculate the competence of an anaesthetist. It was also able to produce a graphical visualisation of an anaesthetist’s competence level. A web-service was utilised for communicating between the different systems. The CbKSTsystem also stored the competence space of SA and the previous assessment results of each anaesthetist. See Figure 6-7 for a simplified version of the system’s architecture. The CbKST-specialists’ participation in this phase resulted in a system that provided the necessary assessment logic. See Table 6-7 for a summary of user participation during this phase.


Fig. 6-7. The main components of the system architecture. The simulator and the LMS send assessment data to the CbKST-system, which determines the competence level of a user.
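As a rough illustration of the data flow in Figure 6-7, the sketch below shows how the LMS or the simulator might post one set of assessment results to the CbKST web-service and receive an updated competence state in return. The endpoint, payload fields and response format are assumptions made for illustration only; the actual interface of the MedCAP system is not documented here.

```python
import json
from urllib import request

# Hypothetical endpoint of the CbKST web-service (assumed for illustration).
CBKST_URL = "http://localhost:8080/cbkst/assessment"

def send_assessment(anaesthetist_id, source, results):
    """Post one set of assessment results (from the LMS or the simulator)
    to the CbKST-system, which maps them onto the competence space."""
    payload = {
        "anaesthetist": anaesthetist_id,
        "source": source,          # "LMS" or "simulator"
        "results": results,        # e.g. answered questions or metric errors
    }
    req = request.Request(
        CBKST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # The CbKST-system replies with the updated competence state,
        # which can then be rendered by the visualisation tool.
        return json.loads(resp.read())

# Example usage (shown for shape only; it would require a running service):
# state = send_assessment("anaes-07", "simulator",
#                         {"patient": "paramedian", "metric_errors": [5, 6]})
```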

Phase of development: Design phase 2

User / Informant: Managing clinician
Input: Directed the design of the system, evaluated prototypes of the system, medical expertise
Method: Collaborative design sessions

User / Informant: Clinicians in the development team
Input: Input on the design of the system, evaluated prototypes of the system, medical expertise, identified CT-data, photographs for new interface, advice on metrics
Method: Collaborative design sessions

User / Informant: Developers
Input: Designed and built prototypes of the system, provided the clinicians with knowledge of the technology
Method: Collaborative design sessions

User / Informant: CbKST-specialists
Input: Built system for calculating and visualising competencies
Method: N.A.

Table 6-7. User groups, user roles and methods used during the development of the assessment system.

Usability evaluation (formal) with independent anaesthetists

The resulting system needed to be evaluated with independent anaesthetists. However, it only consisted of the LMS and the simulator at this stage; the CbKST-system was still in development. Five anaesthetists with different levels of expertise were recruited to evaluate the usability and utility of the assessment procedure. A think-aloud protocol and short interviews were used to get feedback from the participants. Supervised by the managing clinician, the clinicians in the development team ensured that the assessment system was set up in a suitable room of the teaching hospital and that anaesthetists were released from theatre in order to evaluate the system. The evaluation generated similar forms of feedback as the usability evaluation in Project 1. For example, it helped to identify errors and difficulties caused by the system:

• It was not clear from the interface when the assessed person should move between the LMS and the simulator. Afterwards, an information screen was added to give instructions to start the simulation, put on the 3D glasses and move to the simulator. Also, after a successful procedure, there was no information telling the assessed person to move back to the LMS. Such instructions were added later.
• The case scenarios in the LMS were divided up and presented over several pages. The participants found it frustrating having to go step-by-step backwards over several pages when reviewing previous information in the LMS. Shortcuts to all of the previous pages in a case scenario were implemented in order to solve this issue.

The evaluation process also helped to determine the participants’ opinions of the assessment system (extracts from a usability report, 15/05/2009):

(Clinician 1): “System very good” [..] “Continuation of the scenarios very good”
(Clinician 2): “Interface move disrupts” [..] “Time is extremely important”
(Clinician 4): “Sensations feels ok”
(Clinician 5): “good questions” [..] “relates to the real world” [..] “interface works well”

In addition, certain clinical elements were identified during the evaluation which led to changes in the system. For example:

• Although the development team had reviewed the content of the case scenarios several times, the participants still had a few general comments regarding the clinical material in the LMS. These comments were considered and integrated into the final version.
• The landmark markings on the backs were inconsistent within the simulation system and were described as confusing by the participants. From this, it was decided that all markings of the spinous processes should be drawn as circles.
• Participants said that markings of puncture points in the simulator would help them to remember their total number of attempts. Such puncture marks were implemented within the simulation system.
• The LMS did not include a time element, and this was deemed to be of significant importance when performing the procedure in real life. One of the participants described how it was important to know how much time had elapsed between each event in the case scenarios. He said that depending on how long it took for a certain symptom to appear, it might be necessary to take a different course of action. It was not feasible to implement this suggestion during Project 2, but it was considered for future development.

Figure 6-8 illustrates the system in use by an anaesthetist at the usability evaluation stage.

Fig. 6-8. Photograph taken at the usability evaluation stage. The anaesthetist is solving problems in the LMS (to the left) and performing the procedure on the simulator (to the right).

The independent anaesthetists’ input helped to improve the system’s ease of use, its clinical relevance and its usefulness. The usability evaluation prepared the LMS and the simulator for the clinical trial. See Table 6-8 for a summary of user participation during this phase.

Phase of development: Formal evaluation

User / Informant: Managing clinician and clinicians in the development team
Input: Organised the study (released consultants from theatre)
Method: Observations of anaesthetists using the system

User / Informant: Developers
Input: Identification of usability issues
Method: Observations of the anaesthetists using the system

User / Informant: Independent anaesthetists (trainers, intermediate trainees)
Input: Evaluated the system (medical expertise, design errors, face validity, feedback on improvements)
Method: Usability evaluation (think-aloud, short interviews)

Table 6-8. User groups, user roles and methods used during the formal evaluation.

Clinical trial

Project 2 concluded with a clinical trial of the complete system (LMS, simulator, CbKST logic) at the training hospital. (An additional clinical trial was performed at the Hungarian teaching hospital; however, the results of this trial are excluded as no data from the trial was available at the time of writing.) The clinical trial evaluated the system’s ability to predict the competence level of the anaesthetists. This was achieved by comparing the results from the assessment procedure across different levels of expertise (construct validity). The system was also evaluated by comparing the results from the procedure with performance in the operating theatre (predictive validity). Supervised by the managing clinician, the clinicians in the development team recruited 24 independent anaesthetists with different levels of expertise to participate in the trial. The anaesthetists were divided into three groups:

1. Novice trainee (registrar in SA): less than 10 spinals
2. Intermediate trainee (registrar in SA): more than 20 spinals
3. Expert (consultant / advanced registrar): more than 100 spinals


Each participant was assessed on the system and in the operating theatre. The participants’ answers to the case scenarios were recorded in the LMS. The simulator and the metrics captured their performance on the virtual patients. The resulting information was sent to the CbKST-system. The participants were also video recorded as they performed SA on a patient in the operating theatre. These video recordings were later analysed by two blinded observers, who used a refined version of the global-rating scale developed in Project 1. The observers also scored the performance using a task-specific checklist, which consisted of the metrics from the focus groups.

The clinical trial was a difficult task. It required seeking ethical approval and acquiring patient consent. It also necessitated that some anaesthetists were relieved from their theatre duties in order to take part in the trial. At the time of writing, not all participants had yet gone through assessment on the system, and data from the assessment undertaken in the operating theatre was not yet available. However, preliminary results from the system assessment indicate that the system was able to distinguish between the three different levels of expertise (construct validity). For example, Figure 6-9 shows a box plot of the three groups’ scores on the paramedian patient in the simulator. The novice group consisted at this point of just two participants, making it impossible to determine any exact statistical significance. The other groups consisted of 9 intermediates and 6 experts. In Figure 6-9, the intermediate group has a wide spread, which may be caused by variations in experience within that group. However, it appears that the expert group performed better than the other two on this particular scenario. The results here indicate that the system has the potential ability to assess competencies related to Patient Care: Technical Performance. At the same time, the LMS assessed additional competencies in relation to the competence-groups Patient Care, Interpersonal and Communication Skills and Medical Knowledge. Figure 6-10 indicates that the LMS is potentially able to distinguish the core competency Medical Knowledge between the three groups of expertise. It appears that the expert group scored higher than the other two.

The preliminary data indicates that the system is potentially able to distinguish between different groups of expertise. If statistically validated, the system could be used as part of a formal training programme to determine the competency of a trainee. However, more data is required to come to a definitive conclusion on the construct validity of the system. In addition, the competence procedure will be compared to performance in theatre. If the clinical trial demonstrates predictive validity, the system could possibly be considered for high-stakes assessment (i.e. credentialing or certification of anaesthetists). The various results will direct future development decisions. In the case of construct or (preferably) predictive validity, commercialisation of the system would be worth considering. If commercially available, it would allow other training hospitals to implement and evaluate the system as part of their overall training. This would potentially help to establish the system’s validity and reliability even further. If the results are again inconclusive (see Chapter 5), the worth of the system has to be reviewed again.

Fig. 6-9. Metrics score on the paramedian patient (extract from a project document describing the preliminary results of the trial). From a maximum score of 6, the anaesthetist got a one-point reduction for each error recorded in the metrics. Each of the metrics corresponded to one or more of the competencies in the competence space.


Fig. 6-10. The graphs show the number of competencies each group had in relation to the core competency Medical Knowledge (extract from a project document describing the preliminary results of the trial).

The assessment procedure also supported self-directed learning. The managing clinician’s initial decision to utilise CbKST ensured that the system could graphically display the ability of an assessed anaesthetist. See Figure 6-11 for an example of a resulting competence level for one consultant anaesthetist. The system provided the anaesthetist with information on her strong and weak areas of competency. If implemented within a training programme, this could be useful for trainers and trainees who wish to choose individual training paths depending on current competencies. At the time of writing, the tool developed to visualise the competence level is at an early stage of development. For instance, the overall set of competencies needs to be arranged according to the six core competencies so as to better illustrate an individual’s competence level.



Fig. 6-11. Example of an expert anaesthetist’s competence level (screen-shot of the visualisation tool). The green dots show the competencies which the person has and the yellow dots the ones the person does not have. The blue dots represent prerequisite and untested competencies. For example, the anaesthetist in this case has, among others, the specific competence 292. This indicates that the individual has “Clinical reasoning: identifies complications and manages underlying causes”.
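A minimal sketch of the classification behind such a display is given below. The competence identifiers and the prerequisite relation are hypothetical, and the real CbKST-system naturally works from the full competence space and assessment data rather than from two simple sets.

```python
# Hypothetical prerequisite relation: competence -> set of prerequisite competences.
PREREQUISITES = {
    "292": {"120", "145"},   # e.g. clinical reasoning builds on more basic competencies
    "120": set(),
    "145": set(),
    "301": {"292"},
}

def classify_competences(demonstrated, tested):
    """Split competencies into the three categories drawn in Fig. 6-11:
    'has' (green), 'lacks' (yellow) and 'prerequisite/untested' (blue)."""
    state = {}
    for comp in PREREQUISITES:
        if comp not in tested:
            state[comp] = "prerequisite/untested"   # blue
        elif comp in demonstrated:
            state[comp] = "has"                     # green
        else:
            state[comp] = "lacks"                   # yellow
    return state

# Example: competence 292 demonstrated, 301 tested but not demonstrated, the rest untested.
print(classify_competences(demonstrated={"292"}, tested={"292", "301"}))
```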

Phase of development: Clinical trial

User / Informant: Managing clinician and clinicians in the development team
Input: Designed and organised the study (global rating list, task-specific list), made sure suitable participants and test subjects were available
Method: Randomised controlled trial

User / Informant: Independent anaesthetists (trainers)
Input: Observed and scored the procedure, medical expertise
Method: Randomised controlled trial

User / Informant: Independent anaesthetists (trainers, intermediate trainees, novice trainees)
Input: Assessed by the system and in theatre, medical expertise
Method: Randomised controlled trial

User / Informant: Patients
Input: Test subjects
Method: Randomised controlled trial

User / Informant: Developers
Input: N.A.
Method: Observation

Table 6-9. User groups, user roles and methods used during the clinical trial.

Summary

Chapter 6 analyses how the development process in Project 2 was dependent on user participation. As in Chapter 5, Scaife et al.’s framework is applied to illustrate the different user groups, the form of input each group provided to the process and how they were involved in each of the separate development phases. The development process began with a problem statement in which the managing clinician outlined research objectives based on recommendations of best practice from medical training bodies. He connected with clinicians, developers and CbKST-specialists to take part in the creation of a competence-based assessment procedure for SA.


The training analysis was designed and organised by the managing clinician and the clinicians in the development team. Independent anaesthetists provided input on the training in SA, the competence space and the metrics. CbKST-specialists provided advice on how to utilise the CbKST. The developers participated as observers during the training analysis. A behavioural scientist provided advice on how to perform RCTs.

During the design phase, the clinicians wrote case scenarios representing real-life situations. The developers built prototypes of an LMS that were evaluated and improved on by the clinicians. The incorporation of the case scenarios within the LMS required continuous collaboration between the clinicians and the developers. The clinicians in the development team also ensured that relevant media were incorporated in the LMS. The design phase of the development process required continuous collaboration between the managing clinician, the clinicians in the development team and the developers. The clinicians provided new design ideas and training content, suggested improvements and evaluated the system during its development. During this phase, the CbKST-specialists developed an additional system which provided the competence assessment logic.

A usability evaluation was organised by the managing clinician and the clinicians in the development team. Independent anaesthetists from outside the development team provided their expertise to this development process. Their input helped to improve the system’s ease of use and its clinical relevance, and it confirmed the system’s perceived usefulness. The usability evaluation prepared the LMS and the simulator for the clinical trial.

A clinical trial was also designed and organised by the managing clinician and the clinicians in the development team. Independent anaesthetists of three different levels of expertise and various patients were recruited to determine the validity of the system. Expert anaesthetists from outside the development team participated by scoring the procedure in theatre. The user groups and informant groups are discussed in detail, in combination with the findings from the first case study, in Chapter 7.

In contrast to Project 1, Project 2 utilised a specific educational approach, CbKST. It had more resources, which allowed a more extensive training analysis and a clinical trial with more participants. As in Project 1, the developers were dependent on the clinicians’ medical expertise, design suggestions and feedback to translate training objectives into system requirements. How user guidance was accomplished is discussed in combination with the results from the first case study in Chapter 7.

CHAPTER SEVEN
USER ROLES AND USER GUIDANCE

This chapter analyses the relation between user participation and the development process of the VR and web-based medical training system. Scaife et al.’s (1997) framework was used in Chapter 5 and 6 to identify user groups, user input and methods used for engaging users in each phase of the development process. By combining the results from the two projects, this chapter illustrates 1) the relation between user roles and the development process and 2) how user guidance was accomplished during the development process.

User roles

By combining the results from Project 1 and 2, nine user groups have been identified: managing clinician, clinician in the development team, trainer, novice trainee, intermediate trainee, intern, other medical staff, patients and general stakeholders. In addition, an overall informant group has been identified, consisting of other domain specialists. Each group provided different input to the development phases of the VR and web-based medical training system for Spinal Anaesthesia. In Project 1 and 2, the ten groups were involved with varying degrees of participation. They participated either by providing advice to the development team (by advice), by participating as part of the development team (by doing) or by being responsible for the development process (strong control). Also, each group participated with a certain degree of influence. The influence of each user group on the process is weighted from no influence to high influence. Furthermore, the user groups are classified depending on their user type (primary, secondary and tertiary). In order to determine the roles of the users (and informants), the following sections analyse the contribution from each group in relation to the development phases (problem statement, training analysis, design, build, evaluation and implementation) in which they participated.


Managing clinician

A consultant with responsibility for the training of anaesthesia at the teaching hospital acted as the managing clinician user group. The user had an in-depth knowledge of the current training in the area of SA. He was also aware of alternative ways of training the procedure, which were recommended by training bodies and health care organisations. If the system was implemented for training the procedure, the managing clinician would supervise the trainees using it. He would also participate in the decision-making on whether or not to use the system for training SA at the teaching hospital. Hence, the managing clinician is a secondary and tertiary user. The managing clinician participated to some extent in all development phases, except the building phase. His participation was deemed as strong control, which is the highest degree of user participation.

The managing clinician identified gaps in the current training of SA at the beginning of Project 1 and 2. He wrote the initial problem statements based on current recommendations from organisations and training bodies concerned with the improvement of health care (see Chapter 5 – Problem statement and Chapter 6 – Problem statement). As a result, the managing clinician participated with strong control over the initial phases of the development process. It has been argued that research surrounding a medical training system needs to be led by relevant research questions (Henriksen and Patterson, 2007) and well-defined training objectives (Salas and Bowers, 2003). The managing clinician ensured that relevant training approaches were considered from the initial phases of both projects. Karsh (2004) argues that end-user input is necessary from these initial stages of a development process for a system to be successful. Hence, the managing clinician’s strong control over the development processes directed the projects towards an outcome that could potentially be deemed as useful by training bodies and health care organisations. Cavaye (1995) has stated that users have “effectively determined the direction and outcome of the development process” (p. 313) when participating with strong control, which was the case in Project 1 and 2.

Another significant development activity during Project 1 and 2 was the training analysis. The managing clinician had an important role in this activity as he allocated a significant amount of time to the execution of the analyses in both projects. The identification of relevant training objectives is a crucial part of medical training design (Kneebone, 2003). The managing clinician supervised the clinicians in the development team as they identified training objectives and other organisational factors

influencing the training in SA. It would have been difficult to perform training analyses as extensive as those in the two projects without the managing clinician participating with strong control over the development process. Venkatesh and Davies (2000) say that Job relevance is a necessary element to incorporate in a system for it to be regarded as useful by its potential users and customers. The participation of the managing clinician ensured that the system incorporated the necessary attributes of the “job”, which in this case is the training of SA.

The managing clinician participated during the design phase. He provided input to the design of the system and performed informal evaluations with the system, deciding what directions the development of the system should take. The guidance from the managing clinician had a significant effect on how the system was developed. This is discussed in detail in Chapter 7 – User guidance.

The clinical trials were a particularly important part of the development process. The managing clinician decided that both projects should conclude with a clinical trial, to allow for validity testing. A medical training system has to be tested for validity for it to be accepted by the overall medical community. The purpose of validity testing of training and assessment is to ensure that a test measures what it is claimed to measure (Kline, 1993). The managing clinician’s strong control over the development process ensured that sufficient time and resources were set aside for performing clinical trials during Project 1 and 2. The clinical trials were challenging tasks, as no standardised assessment approach existed for measuring performance of SA in the operating theatre. The managing clinician initiated the development of such an approach. Significant parts of Project 1 and 2 consisted of creating and refining an assessment approach for use in the operating theatre. The developed approach utilised task-specific checklists and global rating scales, which were used during the two clinical trials. Without the managing clinician’s participation it would have been difficult to get the expertise necessary for developing the assessment approach needed to validate the system. Sutherland et al. (2006) argue that many studies of medical training systems lack quality. The authors argue for the need to measure performance in theatre, double-blinded assessment and large sample sizes. Large sample sizes were not possible with the resources available during the two projects. However, the managing clinician ensured that performance in theatre was measured and that double-blinded assessment was used.


The managing clinician’s decisions to perform clinical trials were also important in order to direct future decisions on how to improve the system. The trial in Project 1 did not answer whether the system improved performance in the operating theatre or not. Hence, Project 2 was a natural progression where the scope of the system was changed slightly and its objectives refined. The results to date indicate that the system from Project 2 could potentially be beneficial as part of a structured training programme for assessing competencies of trainees.

The managing clinician’s participation by strong control over the development process helped ensure that other user groups were able and willing to participate during all phases. A user group’s degree of participation in a development process generally depends on the management’s commitment to include users in the development process (Cavaye, 1995). The managing clinician ensured the other user groups’ ability to participate by sourcing funding through research proposals that were relevant to the current recommendations for training. He recruited two clinicians to participate in the projects as part of M.D. degrees. The other clinicians from the development team participated with the prospect of broadening their own research interests and of potentially publishing findings in journals. At the same time, the managing clinician encouraged other domain specialists (developers, educational experts, etc.) to pursue research in the medical domain. The managing clinician, together with the clinicians in the development team, utilised their professional network in order to distribute questionnaires to trainers, trainees, other medical staff and patients at the local teaching hospital and to other teaching hospitals in Ireland and Hungary. During the training analysis, usability evaluation and clinical trials, the managing clinician’s role at the training hospital allowed him to release trainers and trainees from their duty in the operating theatre. He also encouraged them to participate during these phases by explaining the importance of the two projects. The managing clinician was also responsible for getting ethics approval for the clinical trials and for ensuring the consent of patients. See Figure 7-1 for an illustration of how the managing clinician created a link to the other participants of the development process in Project 1 and 2. The figure is my interpretation of how the managing clinician’s role affected the other user groups’ participation during Project 1 and 2.


Fig. 7-1. The managing clinician’s role in relation to other user groups’ participation during Project 1 and 2.

The problem statement, training analysis, design, build, evaluation and clinical trial phases were all important parts of the development process. The managing clinician’s degree of influence was high throughout the projects as he ensured that each phase was allocated sufficient time and resources. McKeen and Guimaraes (1997) describe a situation where user participation turned counterproductive because the user input was ignored in the final system. The managing clinician was responsible for meeting the two projects’ deliverables and his high degree of influence effectively ensured that development decisions and activities originating from the user side were incorporated in the development processes.


Clinicians in the development team

The clinicians in the development team were represented by trainers (consultants) in SA with extensive expertise in the field. This group consists of secondary users. The group represents trainers who will use the system to train registrars (trainees) in SA. However, they are likely to have more knowledge of the system than an independent trainer (see Chapter 7 – Trainer). They participated closely in the development process as part of the development team. Their participation was deemed as by doing.

The clinicians in the development team participated in the design and organisation of the training analysis. Supervised by the managing clinician, they participated by designing questionnaires and identifying topics for focus groups. They also had an important role in recruiting (together with the managing clinician) independent anaesthetists and others to participate in the training analysis. For instance, the competency criteria of SA had to be established during the training analysis in Project 2. Competence-based training and assessment is regarded among some researchers as a potential route for achieving efficient simulation-based training (Issenberg et al., 2005; Reznick et al., 2006). The clinicians in the development team recruited independent trainers and intermediate trainees to participate in the focus groups. This user group also participated by leading the focus groups where trainers and trainees helped develop and peer-review the competence map. From the competence map, the clinicians in the development team were able to derive relevant metrics. Metrics can be used as discrete measurements for determining changes in behaviour (Kazdin, 1998). Metrics are used as performance measurements for determining whether the training objectives of a trainee have been met (Salas and Bowers, 2003). The clinicians in the development team’s participation ensured that relevant metrics were identified and later used as part of the system and the task-specific checklist. Hence, the clinicians in the development team provided the necessary in-depth understanding of SA for designing and organising a meaningful training analysis of the procedure.

The clinicians in the development team had an important role during the design phase in creating and implementing training content for the system. For example, the clinicians created case scenarios of patients. Salas and Burke (2002) argue that simulation-based training is effective when “carefully crafted scenarios are embedded within the simulation” (p. 119). Currently, many medical schools have adopted a teaching format that uses scenarios, called problem-based learning (PBL) (Dolmans et al.,


2005; Jones et al., 2001). A major benefit of PBL is that learning, training and assessment are based on real clinical problems (Davis and Harden, 1999). The participation of the clinicians in the development team was necessary in order to provide sufficient expertise in SA for creating scenarios that presented relevant clinical problems. When applying PBL, case scenarios have to be developed so that they cover important elements of a procedure. This creates a challenge for medical educators, as the quality of the training depends on the design of the scenarios (Dolmans et al., 1997). The clinicians in the development team’s participation ensured that the system in Project 2 consisted of well-written case scenarios. Clinical variation is also an important element for creating comprehensive learning and training situations (Issenberg et al., 2005). The clinicians in the development team gathered a selection of photographs and other media, which were used to create the six patient scenarios during Project 2. The clinicians also provided CT-scans to the developers so that anatomic models of different patient characteristics could be created and implemented in the system.

However, it was not always clear to the developers how the training content should be used in the system. For instance, the scenarios were initially represented as text documents and were not in the same format as information in the LMS. Several unexpected problems occurred while the developers were incorporating the scenarios into the LMS. A new format for presenting questions had to be designed by the clinicians in the development team to ensure that the scenarios were appropriately presented in the LMS. These clinicians also guided the developers in how to incorporate illustrations in the LMS. Additional training material, such as competence descriptions, metrics and photographs, had to be revised in order to be incorporated in the system. The clinicians in the development team and the developers had to collaborate to make this material fit in the system. For example, it was not obvious to the developers how the competence map should be used in the system in Project 2. The clinicians created documents which detailed which competencies each scenario-question tested. At the same time, the developers had to explain to the clinicians how data acquisition in the LMS could potentially work.
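As an illustration of how such a document might be turned into data the LMS can act on, the following sketch maps hypothetical scenario questions to the competencies they test and collects the competencies supported by a trainee’s correct answers. The question identifiers, competence codes and structure are assumptions for illustration, not the project’s actual files.

```python
# Hypothetical mapping from LMS scenario questions to the competencies they test.
QUESTION_COMPETENCIES = {
    "scenario1_q1": ["MK-04"],            # assumed Medical Knowledge competence
    "scenario1_q2": ["PC-11", "ICS-02"],  # assumed Patient Care / Communication competencies
    "scenario2_q1": ["PC-03"],
}

def competencies_tested(answered_correctly):
    """Collect the competencies supported by the questions a trainee answered correctly."""
    supported = set()
    for question, correct in answered_correctly.items():
        if correct:
            supported.update(QUESTION_COMPETENCIES.get(question, []))
    return supported

# Example usage with hypothetical answers recorded by the LMS:
print(competencies_tested({"scenario1_q1": True, "scenario1_q2": False, "scenario2_q1": True}))
```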

The expertise of the clinicians in the development team and their participation ensured that relevant medical training content was developed and appropriately incorporated into the training system. How the clinicians in the development team’s guidance was accomplished is discussed in detail in Chapter 7 – User guidance.

The clinical trials performed during Project 1 and 2 were initiated and supervised by the managing clinician, but led by the clinicians in the development team. They facilitated the recruitment of independent clinicians, identified suitable patients, set up the video recording equipment in the operating theatre and organised the blinded observers. The clinicians in the development team had a vital role in the clinical trials by ensuring that performance was assessed in a way that would potentially be perceived as meaningful by other anaesthetists, medical training bodies, etc.

The clinicians in the development team provided medical expertise to the development process on a day-to-day basis. The clinicians in the development team’s generic medical expertise of SA (the ability to memorise and recall medical knowledge of the procedure; Patel and Groen, 1991) directed the developers during the design phases. Chapter 5 and 6 contain several examples where the developers were dependent on the clinicians’ generic medical expertise for developing the system. The clinicians in the development team also provided specialised expertise (the ability to utilise knowledge for making accurate clinical decisions; Patel and Groen, 1991) to the development process. Expert clinicians are able to determine which problems are relevant for a specific case (patient) and can utilise their medical knowledge to solve problems efficiently (Ericsson and Smith, 1991). Patel and Groen (1991) argue that medical intermediates and experts are likely to have similar factual knowledge of an area, but the medical experts are able to disregard irrelevant information and know what not to do. The clinicians in the development team’s specialised expertise was critical for creating a system capable of training and assessing specialised expertise. For example, the system emerging from Project 2 was not only intended to assess medical knowledge, but also to have the capability of assessing anaesthetists’ specialised expertise. The clinicians used their expertise to create a meaningful competence space and metrics of the procedure. The clinicians also utilised their specialised expertise to create assessment scenarios that consisted of standard “textbook” problems as well as conflicting, life-like situations. They wrote the scenarios so that an anaesthetist had to choose between answers that were in conflict with what is traditionally considered as best practice. However, one of the responses was better suited than the others for that specific case in that specific situation. The inclusion of ambiguous clinical situations allowed the assessment procedure to potentially distinguish between novice, intermediate and expert.


The development process was also dependent on the clinicians in the development team’s expert performance. Expert performance, also referred to as procedural skills, is the ability to utilise previous experiences, gained by deliberate practice, in order to perform technical procedures proficiently (Ericsson, 2004). To reach the level of expert performance, deliberate practice over a long period of time is required (Ericsson, 2004). Project 1 and 2 were dependent on the clinicians’ procedural skills in order to develop a system capable of simulating and training the procedural elements of SA. The developers had gained some basic knowledge of SA by reading about the procedure and by observing its procedural elements. However, there were many elements of the procedure that were not clear from readings and observations alone. For instance, Chapter 5 describes how the clinicians in the development team clarified aspects in relation to the human-tissue model and needle insertion during Project 1. Chapter 7 – User guidance provides a detailed discussion of how medical expertise and expert performance were utilised in the development of the system.

The clinicians in the development team had a high degree of influence during the development process. They actively participated in the training analysis and clinical trials. They also created the training content and directed the developers in developing the system during the design phases.

Trainer

The trainers were represented by consultant anaesthetists with many years’ experience in performing SA. The trainers were independent of the development team. The trainers represent secondary users of the training system. If the system is implemented for training, they will supervise trainees using the system. However, if the system is shown to provide predictive validity, it could potentially be used for high-stakes assessment and lifelong learning, as suggested by Gallagher et al. (2003). If this were the case, trainers would also be primary users of the system.

The members of this user group participated by advice during the training analysis. They answered questionnaires and participated in focus groups so that learning objectives and issues in the current training of SA could be identified. It was possible to derive a set of learning objectives from the trainers’ participation in the training analysis. The trainers also provided input on the competence map and the metrics during the training analysis in Project 2. Hence, the input from the trainer group ensured that the system stemmed from issues with the current training of SA.


The trainers also participated by advice by evaluating the system’s usability and utility during the development process. For instance, allowing independent trainers to use the system helped the developers to capture additional interface errors that were not identified during the design phases with the managing clinician and the clinicians in the development team. The trainers suggested alternative ways of performing the procedure. In general, there are several ways of performing a medical procedure. Hence, if a system is intended to appeal to a wider audience, alternative techniques need to be taken into account. From the usability evaluation, the trainers advised on the worth of the system as a training tool. One such trainer said that the system in Project 1 could be useful for training, as long as a trainer was present to guide the trainees as they performed the procedure on the system. The trainers also provided their expertise and procedural skills to verify the face validity of the system. All trainers said the human-tissue model represented realistic sensations. In the usability evaluation, the trainers also expressed their issues with the systems. For example, the majority of trainers mentioned the lack of palpation and the inability to re-direct the needle. The trainers’ participation during the usability evaluation was important for improving and preparing the system for the next phase of the development process: the clinical trial. Evaluating the usability and utility of the system before the clinical trials was vital. It ensured that the system generated a useful representation of the procedure and that any design errors affecting the system’s data acquisition (such as metrics measurements and storage of questions) were captured.

The trainers participated by advice in the clinical trials. They participated as blinded observers by scoring the performance of interns and anaesthetists. To score the performance, the trainers observed videos and used the assessment procedure developed by the managing clinician and the clinicians in the development team. The participation of independent trainers was necessary, as it was required that the observers did not know the level of experience of the participants. Trainers also participated in the second clinical trial as representatives of the expert group.

A trainer had the opportunity to trial the system from Project 1 in a training course (see Chapter 5 – Trial implementation). He represented a potential end-user and customer. The trainer used the system to show and explain to trainees how the procedure should be performed. He then verbally guided them while they performed the procedure and finally assessed the trainees while they performed the procedure without any


guidance. The trainer advised the development team by providing feedback on what he experienced while using the system as part of the training course. His feedback confirmed that the system could be useful for training lumbar punctures during a structured training course. The trainer also provided suggestions to improve the system, which directed the development team towards possible enhancements in the system.

The trainers’ degree of influence was medium to high. This group provided input during the training analysis, which resulted in training objectives and peer-reviewed training content (competence map and metrics). Their input during the usability evaluation resulted in improvements in the system and verified its potential usefulness. The trainers’ part in the clinical trial also helped determine the validity of the system. However, their ability to provide input to the system was limited in comparison to the managing clinician and the clinicians in the development team. The trainers had not gained the same understanding of the projects’ objectives and applied technology. The trainer group itself did not determine the directions and outcomes of the two projects.

Intermediate trainee

The intermediate trainees were registrars in SA. They were independent of the development team. The intermediate trainees had experience of the procedure, but had not reached the same level as that of a consultant. Intermediate trainees generally train other trainees before they themselves reach consultant level. If the system is implemented, they will use it as part of their own training and supervise its use by novice trainees. Hence, this group represents both primary and secondary users.

The intermediate trainees participated by advice by answering questionnaires and participating in focus groups relating to the training of SA during the training analysis. For instance, the questionnaires showed that some individual trainers had different techniques for performing the procedure, and as a result the trainees were taught a variety of ways to perform SA. They insisted on either a universal technique or training from a single trainer. The intermediate trainees also provided input on the competence map and metrics. Their input on the training analysis complemented the input from the trainers and further directed the identification of learning objectives in the training of SA. Representatives of this group also participated during the second usability study. By using the system they helped identify some usability issues with the LMS and simulator. However, in comparison to the trainer


group, the intermediate trainees did not provide as many suggestions for improvements of the system. They also participated as a control group during the clinical trial in Project 2. This participation was important for determining whether the assessment procedure was able to separate out the three levels of expertise (novice, intermediate, expert).

The intermediate trainees had a medium influence on the development process. Their input to the training of SA complemented the opinions of the trainers during the training analysis. Their participation in the second usability study did not result in the same quality of feedback as that of the trainers. Their participation in the clinical trial was valuable for determining the validity of the system. The intermediate trainees’ overall participation influenced the development process. However, their participation did not result in explicit system requirements.

Novice trainee

The novice trainees were also registrars in SA, independent of the development team. They had most of the theoretical knowledge necessary for SA, but had limited practical experience of the procedure. If the system is implemented for training, novice trainees represent the main group of people who will use it. They will use the system to practise, and it will be able to assess their knowledge and procedural skills during their training. Hence, novice trainees are primary users.

This group participated by advice during the training analysis. They gave their own perspective on the training of SA. For instance, they mentioned the stress of performing the procedure on a patient for the first time. The novice trainees also said they found it difficult to know what was expected of them due to the lack of formal training programmes. They were also an important part of the clinical trial in Project 2, as they represented the group with the least experience of the procedure. The novice trainees participated in the development process in a similar way to the intermediate trainees, in that they had a medium influence.

Intern

The intern group consisted of medical students with some clinical experience. They had no specific theoretical knowledge of SA and no experience of the practical part of the procedure. This group is nearly on the same level as a novice registrar in SA, but without the theoretical knowledge of the


procedure. If they were to specialise in SA as registrars and the system is used for training, then interns would be primary users.

Interns participated by advice during the clinical trial in Project 1. The intent was to perform the clinical trial as part of a training programme involving clinicians with no experience of SA. After the clinical trial, they participated by answering a questionnaire to give feedback on the training programme. Their influence on the development process, based on the questionnaires, was low. Their feedback from the training programme was considered, but did not result in system improvements. However, the interns’ participation in the clinical trial led to significant reconsiderations of the system’s design, as the results were not conclusive. For instance, the interns were trained on a system with only one “ideal” patient case. In Project 2 it was decided that clinical variation should be included in the system, so as to better represent the real-life situation of SA. (SA is generally performed on elderly or obese patients; these patient types have frequently occurring anatomical anomalies, which can make the procedure difficult to perform.) Hence, the interns’ input had a low degree of influence, but their participation had a major impact on the development process.

Other medical staff

The group “other medical staff” consisted of theatre nurses and surgeons working with the trainers and trainees during and after a procedure. If the system is implemented as a training tool for SA, the performance of the trained anaesthetists in the operating theatre could potentially be improved. As the members of this group work closely with these anaesthetists, they could be affected in some way. From this point of view, this user group is classified as tertiary users.

Other medical staff provided input by advice by answering questionnaires during the training analysis in Project 1. The group provided their views on the procedure from an “outsider’s” perspective. The surgeons said that the training of SA has to become more time-efficient. Training in theatre was regarded as time-consuming and resulted in delays in surgery. Nurses were concerned that the communication between patient and anaesthetists was impeded due to the training situation. The trainer was generally focused on the performance of the trainee, rather than on ensuring the comfort of the patient.


Their degree of influence was low. Their feedback resulted in potential system requirements (improved time efficiency and communication), but these were not implemented during Project 1 and 2.

Patient

This group consisted of people who were due to undergo the procedure or who had had the SA procedure conducted by trainees. If the system is implemented for training registrars in SA, patients would be affected. If the system results in better-trained anaesthetists, it will lead to better quality of care for this user group. Hence, patients are tertiary users.

Patients who have experienced the procedure participated by advice by answering questionnaires during the training analysis. In general, they agreed to have the procedure performed by a trainee. However, they stated that the procedure itself was very uncomfortable despite the presence of a trained anaesthetist. Patients played a significant role during the clinical trials. However, involving patients in the clinical trials was difficult. It required that patients, whose condition was suitable for trainee exposure, were willing to give their consent to be involved in the trials. It was necessary to film these clinical trials for the blinded observers. This, combined with the trainee’s lack of experience, led to the clinical trials being particularly stressful for the patients.

The patients’ influence on the development process was minimal, if not non-existent. Their input was considered during the training analysis, but did not lead to specific adjustments of the system. However, they played a vital role in the clinical trials, as they were required for measuring performance in the operating theatre. Hence, their participation had a significant impact on the development process. As in the case of the interns, their participation in the clinical trial in Project 2 made the development team reconsider the system’s training objectives.

General stakeholder

This group represents general stakeholders, such as medical training bodies, health care organisations and governments. The College of Anaesthetists of Ireland (the Irish training body; http://www.anaesthesia.ie, accessed 11th Feb. 2010) promotes high-quality care in anaesthesia and provides training regulations and proficiency recommendations. The European Union (EU) is another example of a stakeholder. Stakeholders such as these recommend, govern or legislate how a procedure should be performed or trained. For instance, the EU has introduced the European Working Time Directive (EWTD; http://www.dohc.ie/issues/european_working_time_directive/, accessed 18th Feb. 2010), which limits the amount of time a clinician is allowed to work as well as train per week. If the training of SA improves as a result of using the system as part of a training programme, the quality and efficiency of patient care improves. This outcome would affect the general stakeholders’ recommendations for training. General stakeholders are therefore tertiary users.

Representatives of this user group did not participate in the development process. However, the two problem statements, written by the managing clinician, were based on recommendations from such stakeholders. These recommendations were important to address, as the general stakeholders are also potential customers of the system. Their interest in the system could also influence other training hospitals to use the system. The stakeholders’ influence on the process was high, even though they did not participate in the development process. They indirectly directed the managing clinician and the rest of the development team in developing a system based on relevant research objectives.

Other domain specialist

Specialists from other domains, such as Interaction Design, education and validation, participated in the development process. They provided input to the development process, but did not represent a user. Hence, this group consists of informants. The developers can be placed in this group. The developers participated by doing and had a high degree of influence on the development process. They were responsible for generating system requirements together with the clinicians and for building the system. Chapter 5 and 6 contain many examples of how the developers contributed to the development process.

Another group of domain specialists was represented by the CbKST-specialists in Project 2. They participated by advice during the training analysis and design phase. They directed the development process by demonstrating how to utilise CbKST for SA and how to implement competence assessment in the system. This group of informants also participated by doing. They implemented algorithms that allowed the system in Project 2 to calculate and display the competence state of an assessed anaesthetist. Their influence on the development process was high, because they directly affected the behaviour of the final system.

Another informant provided advice on how to perform validation studies. The validation specialist participated with a medium influence on the development process. His advice directed the development team in how to identify and develop metrics and how to apply them in the final clinical trial.

If a development process is intended to result in a commercially available training system, it is likely to require the involvement of an additional informant group: an industrial entity. Such a group was not part of Project 1 and 2, but would likely have had a significant impact on the development process if included in the development team. An industrial entity could bring many benefits to the development process. This group would be valuable in the process of transforming research into a commercially viable product. An industrial entity would provide domain expertise in the form of business and marketing plans. They would provide industrial contacts within the market and have knowledge of potential distribution channels. This group would also provide the expertise necessary to package the product in order to make it appealing to customers. They would be able to identify alternative hardware and software solutions that could enhance the product further.

However, the inclusion of an industrial entity could also have a negative effect on the development process. For instance, their involvement could result in cost being prioritised over functionality. In order to cut the costs of a training system, they could influence the development team to choose a cheaper technical solution (e.g., a low-fidelity haptic device) and consequently compromise some of the training potential that a system could otherwise provide. The industrial entity could also put pressure on the development process to produce a product as quickly as possible. In order to gain widespread recognition of a product, it is often important to be early to market (first-mover advantage; see, for example, http://www.referenceforbusiness.com/management/Ex-Gov/First-Mover-Advantage.html, accessed 18th May 2010). This could result in compromising the usability and validity of a product to get it on the market as quickly as possible. The degree of participation of the industrial entity would either be by doing or strong control. Their degree of participation and influence would likely depend on how much they had invested financially in the development process.

Table 7-1 outlines a summary of the different user groups and their roles in relation to the development phases in Project 1 and 2.

User group: Managing clinician
Type of user: Secondary / Tertiary
Examples of input: Medical expertise, research objectives, training objectives, design ideas, system validity
Phase of development: Problem statement, analysis, design, evaluation, clinical trial

User group: Clinician in the development team
Type of user: Secondary
Examples of input: Medical expertise, training objectives, training content, design ideas, validity
Phase of development: Analysis, design, evaluation, clinical trial

User group: Trainer
Type of user: Primary / secondary
Examples of input: Medical expertise, training issues, feedback, validity
Phase of development: Analysis, evaluation, implementation

User group: Intermediate trainee
Type of user: Primary / secondary
Examples of input: Medical expertise, training issues, validity
Phase of development: Analysis, evaluation

User group: Novice trainee
Type of user: Primary
Examples of input: Medical knowledge, training issues, validity
Phase of development: Analysis, evaluation, implementation

User group: Intern
Type of user: Primary
Examples of input: Medical knowledge, validity
Phase of development: Analysis, evaluation

User group: Other medical staff
Type of user: Tertiary
Examples of input: Training issues
Phase of development: Analysis

User group: Patient
Type of user: Tertiary
Examples of input: System validity, training issues
Phase of development: Analysis, evaluation

User group: General stakeholders
Type of user: Tertiary
Examples of input: Recommendations of best practice
Phase of development: Problem statement

User group: Other domain specialist
Type of user: Informant
Examples of input: Technical expertise, education theory, validation theory
Phase of development: Problem statement, analysis, design, build, evaluation, clinical trial

Table 7-1. The table summarises how the different roles of the users contributed to the development process.

User guidance

The previous section illustrated how heavily the development process depended on the users' roles. This section analyses how user guidance was accomplished in practice. A range of methods was used for involving the different user groups in the development process. For instance, questionnaires and focus groups were used for getting input from users during the training analysis. Usability evaluation was used to get input from trainers and intermediate trainees during the evaluation phases. Randomised controlled trials were used to validate the system during the clinical trials. Project meetings were held to get input from the CbKST-specialists. However, as this volume is written from a developer's point of view, it focuses on how the managing clinician and the clinicians in the development team (the clinicians) guided the developers to design and build the system.

According to Dorst (2008), design is dependent on the design solution, the actors, the context and the design process. Reaching solutions to design problems depends on the actors' ability to constructively reach solutions together. The actors in this case are clinicians and developers. It is necessary for the actors to thoroughly understand the design context. The context in this case is VR and web-based medical training of SA. In Project 1 and 2, an approach for supporting developers and clinicians in the process of reaching design solutions was necessary.

Participatory Design

The process of finding solutions to design problems was based on an approach called Participatory Design (PD). PD is a design philosophy that places the user at the centre of a development process, promoting a democratic decision process between developers and users. Asaro (2000) describes how two innovative design approaches significantly influenced the evolution of PD. The first originates from the 1960s in Scandinavia, where sociological aspects of workers were taken into account when designing and developing technological aids for their workplace. A major concern was the need for a democratic process which would allow the workers to participate in the shaping of their own working environment. The designers performed empirical studies of the workplace to understand the daily practice of the workers. Equipped with observations from these empirical studies, the designers met with the workers on the floor and were able to collaboratively affect the future implementation of technology. The second approach which contributed to the evolution of PD was developed by IBM in the 1970s. It was called the Joint Design Approach (JDA) and emerged as a response to problems encountered when new software systems were implemented at IBM. Systems were developed by computer scientists who had limited understanding of the work practice of the intended users. Consequently, the systems did not fit with the everyday practice of the workers. The JDA systematically engaged the workers in design meetings where they could voice their needs and actively participate in and influence the design of a system. In essence, PD promotes a democratic relationship between developers and users, which enhances the users' ability to guide the development of new technology. It allows developers and users to collaboratively reach solutions to design problems that are heavily dependent on the user domain.

Project 1 and 2 were based on a democratic relationship between developers and clinicians. The managing clinician expressed, during the final interviews, the importance of a democratic development process. He believed that hospitals' traditionally strict hierarchy had to be replaced with a democratic relationship for multidisciplinary collaborations to work. He said that the real value in a multidisciplinary development team is the shaping of mutual concepts and the sharing of knowledge between the domains. The democratic relationship allowed the developers to participate in the development activities that were led by the clinicians (problem statement, training analysis and clinical trials). Mackay et al. (2000) describe how users configure developers. By participating with advice and by observation during the clinician-led development activities, the developers were configured with a deeper understanding of the medical elements of SA and the issues in the training of the procedure. As a consequence, the developers were better equipped to discuss training objectives with the clinicians and how the requirements could potentially be incorporated into the system. At the same time, this relationship gave the clinicians a high degree of influence over the design of the system during the design and informal evaluation phases. They participated closely by continuously providing suggestions for system requirements and by evaluating the system during the design phase. The relationship helped the clinicians to ensure that the objectives of the problem statement and the results from the training analysis were incorporated into the system.

The democratic relationship resulted in shared concepts and language between the clinicians and developers. For example, at the start of the metrics development, the metrics were unknown to both the developers and the clinicians. A clinician described during the interviews how he did not know what an optimal skin puncture point for SA was. He said he "could have started to describe it, but not known what it was". The collaborative process of implementing the metrics into the system allowed the development team to quantify the optimal skin puncture point. In addition, a shared language emerged when the clinicians had to explain to the developers how the sensations of the human-tissue model should feel (see Chapter 5). Working with the prototype system created a shared language for describing the sensations involved in the procedure of SA. Expressions like skin pop, tapping on bone, gritty and bouncy were used frequently when discussing the human-tissue model. Further examples of the shared language and concepts are included in Appendix C.

The expertise gap

The process of translating the outcomes of the training analysis into system requirements was difficult in both projects. The clinicians were not familiar with the technology that was available and how it could be utilised for training. Meanwhile, the developers initially had no knowledge of the procedure in Project 1, and both projects involved training theory that was not known to the developers. There was an expertise gap between clinicians and developers. The PD literature has described difficulties in communicating a technical system's design due to the users' lack of relevant technical knowledge (Asaro, 2000). Asaro (2000) says that systems are described using high-level technical language. A similar issue has been discussed in the medical informatics literature as a barrier of not knowing (Lettl, 2007). Lettl (2007) argues that a user has to be able to understand the technology in order to contribute to the development process. Without an understanding of the technology, they will not be able to evaluate its usefulness or provide input on new concepts and designs. In the specific context of minimally invasive surgery, it has been discussed how the design of new surgical tools depends on involving medical experts (Thomann et al., 2007). However, Thomann et al. (2007) argue that a designer often struggles to understand the needs of surgeons due to their difference in background and expertise. Cavaye (1995) argues that the degree of user participation is dependent on how well the needs and requirements of a system are initially known. Hence, close user participation is necessary if system requirements are hard to define. In Project 1 and 2, system requirements had to be established continuously in situ, as part of the design phase. Close user participation was required throughout the process in order to bridge the gap between the developers' and the clinicians' expertise. During the final interviews, a clinician described how he had to go through a certain process when trying to understand the technology and how it could be used in the projects. He described that he had to see technical elements implemented (haptics, human-tissue model, LMS, etc.) in order to understand what they meant. He said that seeing prototype versions of the system supported his process of understanding the new technology.

Bødker and Iversen (2002) and Muller (2002) describe the use of prototypes as an important element of PD during a system's development process. Prototypes have been used to help overcome communication difficulties between developers and users. They have also been used to help participants express new ideas and to understand underlying constraints. Prototypes can also help to create a shared sense of ownership and improve contextual grounding (Muller, 2002). In Project 1 and 2 the clinicians' influence on the design of the system was based on the use of prototypes. The prototypes took various forms depending on the current design problem. For example, mock-ups of the LMS were used to aid discussions of possible improvements between developers and clinicians. 3D visualisations of anatomy and screenshots of the systems were distributed via email to the clinicians for feedback. The prototypes aided the developers in presenting design ideas. From this, the clinicians provided feedback on the systems' utility and suggestions for improvements. The prototypes were designed and refined during face-to-face design sessions, where clinicians and developers worked with prototypes of the system.

Benefits of prototyping in this context

The prototypes provided specific benefits in the context of developing a VR and web-based medical training system. Chapter 5 and 6 illustrate the need for the developers to be constantly gathering information about SA. The development process required regular clarification on medical elements from the clinicians. The prototypes played an important role in this process. As the prototypes evolved, so did the developers' understanding of the procedure. This led to new design problems and questions, which meant the developers needed to seek information about elements they previously did not know were necessary for the procedure. For instance, the developers struggled to understand the differences between the types of patients which had to be developed for the simulation system in Project 2. The clinicians had initially specified a difficulty level for each case scenario (patient type). They said that the system should consist of an easy, a difficult but possible, a paramedian, an impossible and a morbidly obese case. Still, it was not clear to the developers what these difficulties meant in relation to anatomic characteristics. At this stage, the clinicians had provided the developers with CT-data. However, the clinicians had not classified the datasets into the difficulties mentioned above. Instead, the developers segmented the anatomy in the CT-data and sent examples of different anatomy in an email to a clinician for feedback. In one of the pictures the clinician described how all vertebrae had wide spaces between them, giving access to the spinal cord at any given space. This was considered an "easy back" by the clinician. In another picture, some of the spaces between the vertebrae were narrow, making needle insertion difficult, but possible. The pictures and the feedback from the clinician helped the developers to understand how patient characteristics should relate to the level of difficulty for each patient type. It was hard for the developers to understand clinical elements from explanations alone. The prototypes and the clinicians' guidance helped bridge the gap in the developers' knowledge of SA, so that a final system representative of the clinicians' objectives could be developed.

The prototypes were helpful in implementing the procedural skills of the clinicians into the system. Procedural skills often become automated after long periods of deliberate practice and the performer loses conscious control over these skills (Ericsson, 2004). Procedural skills can therefore be difficult for a clinician to explain in words. For instance, Chapter 5 showed the difficulty experienced by the developers in understanding the different haptic sensations of the procedure from verbally communicated explanations alone. Instead, sensations from the prototype system were used by the clinicians to direct the developers in how the different sensations should feel. The prototype system was also necessary to translate the original text descriptions of the metrics into a format which could be interpreted by the simulation system. The clinicians had not given the metrics any numerical descriptions, such as needle angles or number of attempts. The metrics were initially developed for human observation (see Chapter 6 – Design, build and informal evaluation of the combined assessment system for a detailed description of the metrics). However, each metric had to be quantified for accurate interpretation by the system. One of the clinicians was asked to quantify each metric, but this proved difficult to describe in writing. Instead, the values for the metrics were acquired using the prototype system. In this way, the clinician could show optimal skin puncture points, acceptable areas of needle insertion, insertion angles etc. directly on the system. Using the prototype to quantify the metrics allowed the developers both to observe and to set the system to automatically measure the desired properties of each metric. The data collected during this session was then translated into programming code. The text descriptions of the metrics were not sufficient for the developers to understand the procedural elements of SA. It required the clinicians to physically show the metrics on the system. Hence, the prototypes played an important role in translating the clinicians' automated skills into the system.
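By way of illustration, the sketch below (in Python) shows the kind of quantified check that such a session can produce once expert values have been captured on the prototype. The names, thresholds and coordinate conventions are illustrative assumptions made for this example only; they are not the values or the code used in Project 1 and 2.

    import math

    # Illustrative, expert-derived values captured on the prototype (assumed, not the project's real numbers).
    IDEAL_PUNCTURE_POINT = (0.0, 42.5)   # position on the skin, in millimetres
    MAX_PUNCTURE_DISTANCE = 10.0         # acceptable radius around the ideal point, in millimetres
    ANGLE_RANGE_DEGREES = (80.0, 100.0)  # acceptable needle insertion angle relative to the skin
    MAX_ATTEMPTS = 3                     # acceptable number of skin punctures

    def assess_insertion(puncture_point, insertion_angle, attempts):
        """Score one recorded insertion against the quantified metrics."""
        dx = puncture_point[0] - IDEAL_PUNCTURE_POINT[0]
        dy = puncture_point[1] - IDEAL_PUNCTURE_POINT[1]
        distance = math.hypot(dx, dy)
        results = {
            "puncture_point_ok": distance <= MAX_PUNCTURE_DISTANCE,
            "angle_ok": ANGLE_RANGE_DEGREES[0] <= insertion_angle <= ANGLE_RANGE_DEGREES[1],
            "attempts_ok": attempts <= MAX_ATTEMPTS,
        }
        results["passed"] = all(results.values())
        return results

    # Example: a puncture about 6 mm from the ideal point, at 85 degrees, on the second attempt.
    print(assess_insertion((4.0, 47.0), 85.0, 2))

The value of the design session described above is precisely that thresholds like these stop being guesses by the developers and become numbers demonstrated by a clinician on the system itself.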


The prototypes were also used to aid the clinicians in providing new design suggestions for the system. During the final interviews, one of the clinicians described how seeing a new version of the system always helped her to come up with new design ideas. For example, from seeing a first working version of the simulation system with anatomy and needle handling, she provided several suggestions of how the visualisation of the anatomy should be represented on screen. The clinician wanted to be able to place the needle in the 3D anatomy and then rotate it to see the needle's orientation in the anatomy. She also said that superimposing the needle over a CT-scan could potentially help a novice to know the exact location of the needle in the anatomy.

Another clinician provided new design ideas from seeing a prototype version of the LMS during Project 2. At this stage, the scenarios had been implemented in the system. The clinician believed it was important that the system gave the user a warning if a critical error was made. However, this meant that the clinicians had to define amongst themselves what a critical error was and which problems in the LMS represented a critical error. As a result, critical errors were implemented. If, during an assessment, an anaesthetist made a critical error, pop-up windows alerted them to this issue.

During the interviews, a third clinician also said that it was beneficial to see prototype versions in order to identify new improvements to the system. For instance, she said that seeing the questions laid out in the LMS made her understand how the system could work in practice. In the first prototype version of the LMS, one could answer the scenario problems in any order. The applied LMS (Moodle) did not allow control over the order of questions. By seeing the prototype, she realised that the scenario problems should be presented one by one, in order. She also suggested that the system had to allow the user to go back and review previous information, but the user could not change a previously answered question. The user should not be allowed to skip ahead either, as information at later stages of a scenario revealed answers to earlier questions. Seeing prototype versions of the system allowed the clinicians to come up with both new and refined design requirements. The continuous generation of requirements guided the developers to create a system that was perceived as useful by the clinicians and could provide the relevant training.
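The following sketch illustrates the behaviour described above: scenario problems handled strictly in order, with an immediate alert on a critical error. It is a generic illustration rather than Moodle code, and the questions, answers and critical-error classification are placeholders of the kind the clinicians would have to define.

    # A generic sketch of sequential scenario questions with critical-error alerts.
    # This is not Moodle code; the question texts, answers and the "critical" flags
    # are hypothetical placeholders.

    SCENARIO = [
        {"question": "Which interspace do you select first?", "answer": "L3-L4", "critical": False},
        {"question": "The patient is on anticoagulants. Do you proceed?", "answer": "no", "critical": True},
    ]

    def run_scenario(scenario, responses):
        """Process the scenario problems strictly in order, stopping on a critical error."""
        for step, response in zip(scenario, responses):
            correct = response.strip().lower() == step["answer"].lower()
            if not correct and step["critical"]:
                # In the design described above, a pop-up alerts the anaesthetist immediately.
                print("CRITICAL ERROR at:", step["question"])
                return False
            print("Answer recorded for:", step["question"])
        return True

    run_scenario(SCENARIO, ["L3-L4", "yes"])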


The constant exposure of functioning prototypes to the clinicians meant that the face validity was continuously evaluated during the development process. For instance, the clinicians evaluated the face validity of the prototype after the human-tissue model was implemented in the system during Project 1. At this stage, the system consisted of anatomy, the human-tissue model and needle handling. From trying the system, the clinicians confirmed that each individual sensation of the new prototype version was correct. However, when these individual sensations were combined, they did not represent a realistic patient. For example, the transition between interspinous ligament and ligamentum flavum was not distinct enough and the dura pop was described as “too hard” in relation to the ligamentum flavum. The different sensations had to be adjusted in order to represent one specific patient type. The adjustments were performed during a design session by modifying the parameters of the human-tissue model. Between each modification, the clinician provided verbal directions on how each individual sensation should be changed in relation to the others. The developers later added a function to the system so that the clinicians could change the different sensations in real-time. This allowed them to create patients with different anatomic characteristics themselves. In general simulation development, it has been argued that experts can be consulted to verify that a system appears to be correct (Sargent, 2005). The use of prototypes enabled the clinicians to systematically guide the system’s face validity and correct it when needed.
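A minimal sketch of a parameterised, layered tissue model of the kind discussed above is given below. The layer names follow the procedure, but the resistance and pop-threshold values are illustrative assumptions rather than the simulator's actual parameters; the point is that exposing the sensations as plain parameters is what makes real-time tuning by a clinician possible.

    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str
        resistance: float      # force per unit of needle displacement inside the layer (arbitrary units)
        pop_threshold: float   # force at which the layer surface gives way; 0.0 means no distinct pop

    # Illustrative parameters for one patient type; a clinician could adjust these at run time.
    DEFAULT_BACK = [
        Layer("skin", 0.8, 2.0),
        Layer("interspinous ligament", 0.5, 0.0),
        Layer("ligamentum flavum", 1.2, 0.0),
        Layer("dura mater", 0.3, 1.1),
        Layer("cerebrospinal fluid", 0.05, 0.0),
    ]

    def resistance_at(layers, depth, thickness=5.0):
        """Return the resistance felt at a given insertion depth, assuming equal layer thickness."""
        index = min(int(depth // thickness), len(layers) - 1)
        return layers[index].resistance

    # Tuning a sensation in real time amounts to editing a parameter,
    # e.g. a clinician asks for a softer dura pop:
    DEFAULT_BACK[3].pop_threshold = 0.9
    print(resistance_at(DEFAULT_BACK, 12.0))

Keeping a separate parameter set per patient type would, under the same assumptions, allow the clinicians to create the easy, difficult, calcified or obese backs described in the case scenarios simply by saving different value sets.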

Issues with Participatory Design and prototyping

The use of Participatory Design and prototyping during the development process was dependent on the democratic arrangement between the developers and the clinicians. However, for the democratic arrangement to work, it required a strong commitment from the managing clinician. Cavaye (1995) discusses Top management commitment as a determinant for enabling user participation. The degree of user participation in a development process generally depends on the management's willingness to include users in the development process. Lettl (2005) argues that the users' ability and willingness to participate are highest when users are inventors and play an entrepreneurial role in realising the initial concepts of the technology into a product. The author suggests that new technologies emerge when a user experiences a problem within a domain that has a high problem pressure and is familiar to the user. Also, the "inventing" user has to be open to new technologies and have access to resources and interdisciplinary "know-how" outside of his or her own knowledge sphere. In a similar way, McKeen and Guimaraes (1997) state that users should have a leading role in the development process if the user domain is complex. Also, if the users are involved from the start of a development process and participate in pilot studies, it will help increase the acceptance of a final system (Karsh, 2004).

During an interview, the managing clinician stated that he believed that technology developments in this context should be led by the users. In contrast to the democratic project relationship (previously mentioned in Chapter 7 – Participatory Design), he believed that users' needs should not just be considered, but should lead the development process. He argued this further by saying that users' needs should take priority over feasibility. However, this creates a difficult challenge for developers, as they are often limited by time, finance and available technology in what is feasible to develop. At the same time, PD and prototyping were time-consuming. Regular design sessions were necessary to allow the clinicians to informally evaluate the prototypes and provide new design ideas. Time has been argued to be a major constraint when involving users in the development process (Cavaye, 1995). Both developers and clinicians had to agree on the usefulness of regular design sessions, allocating the time necessary for Participatory Design and prototyping to work.

Mutual knowledge sharing

PD and prototyping helped to support the guidance from users and bridge the expertise gap between the developers and clinicians. Mutual knowledge sharing between developers and users has previously been discussed (Mackay et al., 2000). Mackay et al. (2000) argue that the developer configures the user to gain certain information from them, but the developers themselves are at the same time configured (both deliberately and unintentionally) by the user. The developers, as well as the use of prototypes, configured the clinicians' understanding of the technology. At the same time, the clinicians, with the help of prototypes, configured the developers' understanding of the medical expertise and procedural skills required for the procedure. The clinicians configured the developers by influencing their decisions on suitable technical approaches and solutions. During the final interviews, one of the clinicians said that the greatest challenge was to ensure that every participant of the development team shared the same understanding of a system requirement or training objective. She said that iterative development, refinement and evaluation of the system was required for shaping it into something that would be agreed as useful by the development team and others.

The mutual configuration meant that training objectives could be implemented in the evolving prototype system. In Project 1, the clinicians' participation led to the development of a human-tissue model, which could be used to train the sensations of the procedure. They also directed the developers in how to incorporate visualisations, which could be used to help the trainees to conceptualise the anatomy and the procedure. The system could also be used to enhance trainer and trainee communication, as it was designed so that a trainer had to actively guide a trainee on the system. The resulting system was shown to be potentially useful as part of a formal training programme during the clinical trial and trial implementation. In Project 2, the clinicians' objective of creating a competence-based assessment procedure using self-directed and problem-based learning was implemented in the system. The clinicians ensured that the training content (case scenarios, MCQs) and training theory (CbKST, metrics) were implemented in a meaningful way. They also ensured that the prototype system's validity was assessed during a clinical trial. The collaborative development of prototypes resulted in the design and build of a system that was considered meaningful by the participating clinicians (see Chapter 7 – The relation between user participation and the development process).
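As an aside on the training theory, the sketch below illustrates the prerequisite structure that competence-based approaches such as CbKST build on: a set of mastered competencies is a plausible state only if it is closed under its prerequisites, which also suggests what to assess next. The competencies and their ordering here are hypothetical examples, not the structure defined by the CbKST-specialists and clinicians.

    # Hypothetical prerequisite relations between competencies (CbKST-style structure).
    PREREQUISITES = {
        "identify landmarks": set(),
        "select interspace": {"identify landmarks"},
        "insert needle": {"select interspace"},
        "manage complications": {"insert needle"},
    }

    def is_valid_state(state):
        """A competence state is valid only if it is closed under the prerequisite relation."""
        return all(PREREQUISITES[c] <= state for c in state)

    def next_candidates(state):
        """Competencies whose prerequisites are already mastered: sensible targets for the next assessment."""
        return {c for c in PREREQUISITES if c not in state and PREREQUISITES[c] <= state}

    mastered = {"identify landmarks", "select interspace"}
    print(is_valid_state(mastered))   # True
    print(next_candidates(mastered))  # {'insert needle'}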

The relation between user participation and the development process

Chapter 7 – User roles discusses the user roles during the development process in separate phases. However, in practice these phases depended on each other. The user participation at each phase of the development process resulted in an outcome that influenced the following development phase (see Figure 7-2). For instance, the research objectives from the problem statement determined the focus of the training analysis. The outcomes of the first clinical trial resulted in suggestions for improving the original research and training objectives. The development process was also iterative.7

7 Iterative design helps provide formative evaluations of a system, generate feedback to the process and develop new, improved versions for further testing (Kushniruk et al., 2004). Patel et al. (1998) argue that an iterative design approach, in combination with other design methods, can aid the development of effective medical information systems.


The iterative process during the design, build and informal evaluation phases (also illustrated in Figure 7-2) required several iterations, with the managing clinician and the clinicians in the development team providing feedback and new system requirements. It was not until the development team decided it had reached a certain fidelity that the system was formally validated with independent anaesthetists. The formal evaluations were part of the iterative design process and resulted in new requirements which helped to improve the system further. However, the clinicians in the development team had to participate after the formal evaluations in order to decide whether the system was ready for the clinical trials. The clinical trial in Project 1 was also part of the iterative development, as it was not conclusive and led to reconsiderations of the system. Figure 7-2 illustrates how the development process was iterative and dependent on user roles for each phase of the process during Project 1 and 2. The figure is the author's interpretation of the relation between user roles and the development process and is based on the collected empirical data.

In addition, how the users perceive the system indicates the value of user participation during the development process. Chapter 5 and 6 provide examples of how a number of representative end-users perceived the system. The feedback from the usability evaluations and trial implementation shows that trainers and trainees were in favour of using the system in their own training. Specifically, the trainer running the emergency medicine course not only provided positive feedback but also requested to use the system again. The system was used by the trainer in another emergency medicine course and he showed great interest in incorporating the system permanently into the course. The clinicians participating in the development team were asked what they thought of the resulting system at the end of the two projects. One of the clinicians believed that the system could be a very valuable part of a structured training programme. In particular, she mentioned that a major advantage was that training on the system would increase the trainees' confidence before proceeding to clinical practice. Despite inconclusive results from the first clinical trial, the managing clinician referred to the system from Project 1 as the best tool for training SA he had seen to date. He also said that he believed that the system's human-tissue model and visualisations would play an important part in the trainees' learning of the procedure. At the time of the interview, the final results from the clinical trial in Project 2 were not available. Still, the managing clinician believed that the system would be perceived as valuable for medical training bodies and other training hospitals.


Fig. 7-2. The relation between user roles and the development process during Project 1 and 2.


The managing clinician said it would be "absolutely necessary" that the system was part of a curriculum, where the system's role is clearly defined. A third clinician was asked what she thought of the final system from Project 2. She said she could picture it as an integral part of the current training of SA. She believed it could be used as a distinguisher after certain stages of training to determine if a trainee has the necessary skills and competencies to progress to the next level of training. The clinician viewed the system as a potential part of the current apprenticeship model within a training programme. The system would provide multiple assessments throughout the course of the programme. However, she did not believe it would be possible to use it for high-stakes assessment (i.e. credentialing or certification of anaesthetists). The system also received great interest from The Irish College of Anaesthetists, who were invited to a launch of the preliminary results of the clinical trial at the end of Project 2. The overall positive views of the system from trainers, trainees, the managing clinician, the clinicians in the development team and The Irish College of Anaesthetists indicate that the systematic user participation aided the development of a potentially useful and usable system for the training of SA.

Summary

Chapter 7 analyses the roles of the users in Project 1 and 2. More specifically, it analyses how the development process depended on different user groups, types of users, user input, degree of participation and degree of influence. The chapter shows how the nine user groups and the one informant group influenced the phases that they participated in. It also analyses how user guidance was accomplished during the development process, showing how the input of the managing clinician and the clinicians in the development team depended on a democratic relationship between developers and clinicians. The development process took the shape of a Participatory Design approach and was augmented by the use of prototypes. Prototyping facilitated mutual knowledge sharing between the developers and the clinicians, which helped to bridge the expertise gap that existed between the two groups. At the end of Chapter 7, it is illustrated how user participation was dependent on the development process of the VR and web-based medical training system, and the users' views of the resulting system are summarised.

CHAPTER EIGHT

CONCLUSION AND SUMMARY

Summary of volume

This research investigated the relation between user participation and the development process of a VR and web-based medical training system. This relation has been addressed by focusing on user roles and user guidance during the development process. It was also suggested who should be considered as users, when and why users should be involved in the development process, and how to utilise their guidance efficiently. By doing this, this research has attempted to show that user participation needs to fit the context of development, that the right user has to be involved at the right time in the right form, and that the challenge of user participation lies in utilising domain expertise efficiently. To investigate user participation in this context, the primary research question was asked: "What was the relation between user participation and the development process during the two case studies?". This question consisted of two parts: "What were the roles of the users in the development process during the two case studies?" and "How was user guidance accomplished during the two case studies?".

A qualitative research approach was applied and data was collected from two case studies concerned with the development of a VR and web-based medical training system for Spinal Anaesthesia. The empirical methods applied to collect data were participant observation, document analysis and interviews. The collected data was analysed using coding and interpretive writing. An analytical framework based partly on empirical findings from the case studies and partly on literature from IS and HCI was applied. This framework allowed me to extract user groups, user input and methods used to involve users for each of the discrete development phases from the two case studies. The combined results from the case studies were then analysed in order to classify user roles and determine how user guidance can be accomplished in this context.


The result of this research is a classification of users which shows the relation between user groups' roles and the development process of a VR and web-based medical training system. It establishes user groups and informant groups, and shows their different forms of input and necessary degrees of participation and influence for each development phase. The user groups are the managing clinician, clinicians in the development team, trainers, intermediate trainees, novice trainees, interns, other medical staff, patients and general stakeholders. These groups provided different forms of input. Especially important was the input of the clinicians and the independent anaesthetists. Their input and involvement in the process generated relevant research and training objectives. These groups were also crucial for providing the specialised expertise and procedural skills which helped to translate training objectives into system requirements. Finally, the clinicians, independent anaesthetists and patients were crucial in the evaluations of the system's usability, usefulness and validity. In addition, other domain specialists provided input to the development process. For instance, the developers were necessary to build the system and the CbKST-specialists assisted significantly in the process of creating a competence-based assessment procedure.

This research has also shown that prototyping can be extremely helpful for guiding developers and clinicians during the development process. Prototyping was instrumental for bridging the expertise and knowledge gap between the developers and the clinicians. This enabled them to discuss training objectives and system requirements on more equal terms. The prototypes also supported the transformation of the clinicians' research objectives into a medical training system. Furthermore, presenting prototype versions of the system to the clinicians and independent anaesthetists supported the evaluation of the system's face validity during the development process. As such, prototyping had a significant role in ensuring that the system was shaped in a way that was deemed useful and appropriate by the clinicians and the independent anaesthetists. Without the users' high influence over the design of the prototypes, the final system would not have been designed in the same way. As a result, the final system is likely to be regarded by stakeholders, such as teaching hospitals and medical training bodies, as a valuable aid for training SA.

Some factors were particularly important in relation to how user guidance was accomplished during Project 1 and 2. A crucial factor was the democratic arrangement between developers and clinicians. The democratic relationship between developers and clinicians was similar to how user-developer relationships are described within the field of Participatory Design. The relationship allowed the developers to participate closely in the development activities that were led by the clinicians (problem statement, training analysis and clinical trial). At the same time, the democratic relationship allowed the clinicians to participate in the design of the system with strong control over the process. However, this required that a close relationship between the developers and the clinicians was established and also maintained. The democratic relationship required dedication from both sides. The clinicians had to be willing to participate in the development of new technology to improve the training of SA. They also had to be willing to share their medical expertise with the developers. At the same time, the developers had to be willing to allow the clinicians to participate with strong control over the development process and to share their technical expertise.

Another crucial factor was the managing clinician's high degree of participation and his position within the hospital hierarchy, which facilitated the other user groups' willingness and ability to participate. The greatest challenge for productive user participation is the process of finding the right users who are willing and able to contribute to the development process. The managing clinician utilised his influence in the training hospital and his understanding of medical training practice in general to connect with other user groups and informant groups. This was of great significance as each of the participating groups played an important role in the development of the medical training system for SA.

Another novel perspective this volume presents is the separation of the development process into discrete phases. Identifying the phases was a necessary step in the research process in order to identify when, how and why users should be involved in the development process. This research shows how this development process can be separated into the following: problem statement, training analysis, design, build, informal evaluation, formal evaluation, clinical trial and implementation. The findings of this research may raise the awareness of other research teams to the complexity of user participation and the development process in this particular medical context. This research may support others in deciding if, why and how to involve relevant user groups in the overall development process of VR and web-based medical training systems. In essence, it shows that establishing and maintaining a close, democratic relationship between developers and users can greatly aid the process of developing a VR and web-based medical training system.


Developers have to strive to ensure that medical trainers, medical experts and other training specialists are participating constructively in their developments of VR and web-based medical training systems. At the same time, medical trainers and medical experts have to strive to get more involved and actively guide developments in this area.

Further research on this topic

The research in this volume is based on two specific case studies. A specific benefit of case studies is that they cover contextual and complex multivariate conditions of the phenomenon that is studied (Yin, 2003). Barki and Hartwick (1994) have suggested an alternative, quantitative approach for investigating user participation. They have created a range of metrics that can be used to determine the user impact on a development process. These metrics are categorised into three core activities: overall responsibility, user-designer relationship and hands-on activity. Future studies of VR and web-based medical training developments can utilise these or similar metrics to quantitatively investigate user participation. For instance, one could investigate how other researchers in academia and industry consider and interact with users during the development of their own medical training systems. In addition, applying development approaches and methods other than the ones described in this volume might result in a different relation between user participation and the development process. The findings of this research, in combination with quantitative studies and other case studies, could aid the future creation of development frameworks for VR and web-based medical training systems. This research presents a step in that direction. Such frameworks could help to structure, plan and control the development process.

It would also be necessary to formally implement the system from Project 1 and 2 and perform longitudinal evaluations of its use as part of a training programme. Simulation-based training has to be closely integrated into clinical practice (Kneebone et al., 2004). However, it is difficult to introduce new approaches into existing clinical practice (Bero et al., 1998; Berg, 2001). It is difficult to predict how a system will be used and how trainers and trainees will receive it in practice. Further studies are required to understand how user participation relates to long-term use, potential change in training practice and overall user satisfaction with the system. Based on the findings of such studies, additional improvements could be identified and implemented into the system. Hence, a continuous collaboration between developers and users would be beneficial even after the system's implementation.


Commercialisation

The research presented in this volume has continued and is currently involved in a commercialisation process. The intention is to ensure that the competence-based assessment system will reach the market as a commercial product and be available for anaesthetists to integrate into their training of SA.

APPENDIX A

DATA FROM USABILITY STUDY

This appendix provides the usability report that was written after the usability study during Project 1:

Virtual Spinal Anaesthesia: Data from the usability test

Method: The think-aloud method was used. Occasionally we had to ask questions to help the participants think out loud.

Subjects: 5 experts in spinal anaesthesia.

Time: Between 30 minutes and 70 minutes (the longer sessions were due to very interested participants giving us more feedback than we initially anticipated).

General observations: All five participants seemed to be pleased with the application features and none showed any difficulties in using the setup. We received only constructive criticism for possible improvements; no parts or features were pointed out as "wrong" or "far from being realistic". Parts of the simulation use some aspects that do not occur in real life, for example the magnetic force keeping the spinal needle inside the introducer. These were sometimes pointed out initially but seemed to be accepted after a short while of use.

Important features identified for teaching purposes:
- How and where to insert the needle.
- The visual feedback of where the needle was placed, both correctly and incorrectly.
- The haptic feedback of the layers. When you perform the spinal in real life, after popping a layer the sensation is lost, but with this tool the expert and trainee can now know that they are talking about the same sensation(s).
- To be able to go back after a clinic session and use the application to show: "This is what went wrong (incorrect needle placement)" and "This is how you should have done it instead".
- Having just a 2D view didn't seem to be that important; it's enough having a small 2D picture enhancing the 3D visualisation.
- That it was possible to place the needle sideways (para-median), enabling an alternative way of practising needle insertion.
- The trial mode without any feedback of where you are inside the back was received as a good idea.
- When the procedure is performed in a correct way, cerebral fluid coming through the spinal needle should be visualised.

Issues and possible improvements:
- The tissues felt a bit too bouncy (one participant mentioned the bone as bouncy as well).
- The skin felt a bit grainy.
- No visual feedback when deforming the skin from the pressure of the needle (the little bump in the skin just before the skin is popped).
- The pressure drop when popping the dura was too large. Increase the fluid resistance or decrease the dura pop.
- Sometimes it was hard to know the orientation and position of the needle in relation to the back; a suggestion was to use a shadow of the needle projected onto the back.
- To have marks on the back where the needle has previously pierced.
- The other hand was not in use! They agreed that a glove could be useful. One participant suggested creating a vertical extension attached to the device so the needle will be placed 10 cm under the tip of the device. This would create a space where a manikin could be placed.
- Both the spinal needle and the introducer should be rotated with the back. This was pointed out as a very important teaching feature.
- More obvious pictures (and additional text) on the rotation buttons.
- When touching bone the jump to the transparent back is too fast; the suggestion was to have a couple of seconds' delay.
- Would like to see the needle go into the back with the back transparent. But maybe just when the trainee is taught by the teacher and not when they are practicing themselves.
- There is no on-screen visual feedback of the hand holding the needle.
- If the spinal needle is placed in the intrathecal space, cerebral fluid should come out. This was considered a very important teaching aspect.
- Have several different back models to practice on: young, old, thin, over-weight, abnormal etc.
- The idea of using sound increases the realism, but the sound should be changed to recorded ambient sound from a real procedure.
- Should be able to tap the bone without "falling" through the haptic model.
- The needle wiggled on a few occasions due to the magnetic force activating when inserting the spinal needle into the introducer (fine-tune the damping, Hooke's law). Though, the participants said that it wouldn't likely be a distraction in the learning process.
- The sensation after ligamentum flavum is punctured dura mater, not dura mater itself (as the text says).
- The sensations are lost up until about 90% of the length of the introducer (when the spinal needle is going into the introducer).
- Some unintentional jumps from the introducer when placed inside the back after rotation.

Initial conclusions: The level of satisfaction was overall high. The overall realism of the tissues and simulation of the procedure was said to be high by the participants. The user interface seemed fairly easy to use and understand; some improvements of the icons have to be done. Very valuable suggestions for possible improvements were obtained during the sessions.

Some quotes:
- "Would be nice to be able to leave the spinal needle and actually see (where the needle is placed inside of the back)", "That would give you a very good idea of why you're not in the right place, why you're touching bone"
- "It's very good, I think it's fairly realistic" (regarding the feel of the layers) "It's the relative resistances of the various layers that are more important than the actual resistance" "It's the change of the resistances"
- "That you can get a 3 dimensional view of the back is very good"
- "The different layers of feel I think is very accurate"
- "I think it's a great tool"
- "That feels quite realistic, except right now I'm in the ligamentum flavum, (having) greater bounciness than it would be"
- "That excellent, that means that it is possible to use this to demonstrate essentially two different (the normal and the para median approach), as the model is realistic enough."
- "Thats a superb learning feature" (Regarding the trial mode with no text or CT scan feedback and if the right place is found, you get information that you have found the correct space).
- "I really think it's wonderful and as it stands, adding nothing else, there is learning worth in this".

APPENDIX B

DATA FROM TRIAL IMPLEMENTATION

The trainer at the Emergency Medicine Course was given a questionnaire. Below are the questions asked and the trainer's answers:

1. Did you find the functionality of the simulator easy to learn? Why/Why not?

Very Easy. The set-up was very similar to the positioning of doctor and patient during the "real thing" and the handling of the virtual needle was very similar to an ordinary needle.

2. Did you find the simulator useful for teaching lumbar puncture? Why/Why not?

Very useful. The sensation of "give" for each of the tissues penetrated was just like the sensation when doing a lumbar puncture on a patient. This is the part of performing the technique that could, in my experience, heretofore only be taught by allowing the trainee to practice on a real patient.

3. If you had the opportunity to use the simulator as a part of your teaching/training of lumbar puncture, how would you imagine it being used and what would be its main role?

I would imagine it being used by the trainer showing the trainee the procedure being taught, then repeating it whilst explaining each stage, then getting the trainee to "talk it through" with the trainer performing the technique, finally allowing the trainee to practice it themselves.


4. Do you have any suggestions for improvements of the simulator?

If it were possible to manipulate the needle after striking bone, without the need to withdraw it outside the skin, it would more closely reflect actual practice. I don't know if giving the trainer 3-D glasses would allow them to see the same as a trainee (i.e. a single needle). I found my seeing two needles when the trainee could only see one limited my ability to positively critique their technique.

5. Any other general comments, suggestions or ideas regarding the simulator?

We all thought it was fabulous. The main drawback, in my opinion, was inability to feel the spinous processes prior to needle insertion. I find that doing this gives me extra information about optimum needle direction (based on the angle of that particular patient's spinous processes). I think the particular "winner" for the simulator is the feeling of give in the tissues mentioned above. I could see the same advantage being applied to the teaching of many medical procedures involving the insertion of needles into body cavities, including joint aspiration, pericardiocentesis, thoracentesis and laparoscopy.


The participants at the Emergency Medicine Course were also handed a questionnaire. Below are the questionnaire and a participant’s answers:

The questionnaires from the trial implementation were analysed and the overall response was positive. The answers are summarised in the graph below:


APPENDIX C

SHARED LANGUAGE AND CONCEPTS

This appendix provides detailed examples of the shared language and concepts that emerged between developers and users. It is divided into three categories: “Haptic sensation”, “Patient characteristics” and “Needle insertion characteristics”. Each category contains several examples of different concepts and their detailed descriptions.

Category | Concept | Detailed description
Haptic sensation | Layer | A layer describes a part of the anatomic composition that generates a certain sensation. For example, skin is a surface layer and ligamentum flavum is a layer of tissue.
Haptic sensation | Pop | The drastic decrease of sensation when a surface is penetrated during needle insertion. In SA this occurs when passing through either skin or dura mater.
Haptic sensation | Pressure drop | The sensation experienced directly after the pop, where the pressure on the needle dramatically decreases when entering tissue or liquid.
Haptic sensation | Needle on surface | The surface tension that is experienced just before a surface is penetrated (a pop).
Haptic sensation | Needle in tissue | The feeling of passing through tissue. For detailed descriptions see below.
Haptic sensation | Gritty | The sensation experienced as the needle goes through the ligamentum flavum.
Haptic sensation | Needle in pear | An alternative description to explain what a needle going through ligamentum flavum feels like.
Haptic sensation | Bouncy | A sensation due to an artefact from the haptic device. The stiffness generated from some haptic devices is limited, which can result in this inaccurate, bouncy feeling when going through tissue.
Haptic sensation | Grainy | A sensation of going through a leather-like material. This was used by the clinicians in the context of the skin.
Haptic sensation | Needle on bone | The sensation of bone is a crucial part of the spinal procedure. This is a cue for re-directing the needle.
Haptic sensation | Tapping on bone | Re-occurring failure to re-direct the needle after feeling that the needle is on bone. A common novice mistake.
Haptic sensation | Needle in liquid | After the dura pop, the needle enters the spine and goes through the CSF.
Haptic sensation | Calcified ligament | A calcified ligament provides higher resistance and is in some cases impossible to penetrate.
Haptic sensation | Nerve | Nerves are generally part of the bone structure, hence the sensation is the same as for bone. However, if hitting a nerve the patient will experience a sharp pain and is likely to react distinctively.
Haptic sensation | Softness | General term used to explain how soft the resistance of a tissue / surface is, as experienced by the clinicians.
Haptic sensation | Toughness | General term used to explain how tough the resistance of a tissue / surface is, as experienced by the clinicians.
Patient characteristics | Elderly | An elderly patient generally has calcified ligaments and other conditions that will affect the sensations (higher resistance) of the tissue model and also access to the spine.
Patient characteristics | Obese | An obese patient generally has excessive tissues that will affect the sensations of the tissue model and also how the procedure is performed.
Patient characteristics | Impossible back | A patient with severe calcifications or abnormal spine, which makes the procedure impossible to perform.
Patient characteristics | Healthy back | The ideal case. This back has clearly identifiable landmarks and easy access to the spine between each interspace.
Patient characteristics | Critical error | An error made that is so severe that the trainee needs to be alerted / stopped immediately. Critical errors have to be defined and agreed with a number of experts.
Needle insertion characteristics | Insertion (skin puncture) point | The point on skin where the needle is punctured and inserted. The number of skin puncture points should be as low as possible in order to perform a safe procedure.
Needle insertion characteristics | Insertion space | The acceptable areas of needle insertion. On a healthy back there are 3 insertion spaces located between the vertebrae L2 - L5. In an abnormal back, there are 4 additional spaces when using the paramedian approach.
Needle insertion characteristics | Angle of insertion | The acceptable angle range of needle insertion. This is critical as the angle of insertion will predict whether a procedure will be successful or not.
Needle insertion characteristics | Needle type | The type of needle that is used for the procedure affects the magnitude of the different sensations. For example, there is a difference in skin pop between a thicker and a thinner needle.
Needle insertion characteristics | Redirection of needle | The procedure of locating a new and appropriate needle insertion path while inside of tissue. However, re-direction might damage tissue.
Needle insertion characteristics | "Ideal" metric | Ideal refers to the best way possible to perform distinct parts of the procedure. For example, the ideal skin-puncture point has to be defined for each patient case. The ideal way of doing something has to be identified and agreed with a number of experts.

APPENDIX D

SELECTION OF LMS AND SIMULATOR IMAGES


BIBLIOGRAPHY

C. Abras, D. Maloney-Krichmar and J. Preece, "User-Centered Design" in Bainbridge, W., Encyclopedia of Human-Computer Interaction (Thousand Oaks: Sage Publications, 2004).
R. Aggarwal and A. Darzi, "Technical-skills training in the 21st Century," New England Journal of Medicine 355 (2006): 2695-6.
D. Albert and J. Lukas, eds, Knowledge Spaces: Theories, Empirical Research, and Applications (Mahwah, NJ: Lawrence Erlbaum, 1999).
E. Ammenwerth, S. Gräber, G. Herrmann, T. Bürkle and J. König, "Evaluation of health information systems—problems and challenges," International Journal of Medical Informatics 71:3 (2003): 125-135.
P. M. Asaro, "Transforming society by transforming technology: the science and politics of participatory design," Accounting, Management and Information Technologies 10:4 (2000): 257-290.
H. Barki and J. Hartwick, "Measuring User Participation, User Involvement, and User Attitude," MIS Quarterly 18:1 (1994): 59-82.
C. Basdogan, M. Sedef, M. Harders and S. Wesarg, "VR-based Simulators for Training in Minimally Invasive Surgery," IEEE Computer Graphics and Applications 27:2 (2007): 54-67.
M. Berg, "Implementing information systems in health care organizations: myths and challenges," International Journal of Medical Informatics 64:2 (2001): 143-156.
L. Bero, R. Grilli, J. Grimshaw, E. Harvey, A. Oxman and M. A. Thomson, "Getting research findings into practice: Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings," British Medical Journal 317 (1998): 465-468.
N. Bevan, "Measuring usability as quality of use," Software Quality Journal 4 (1995): 115-150.
S. Bødker and O. S. Iversen, "Staging a professional participatory design practice: moving PD beyond the initial fascination of user involvement," Proceedings of the second Nordic conference on Human-computer interaction, Aarhus, October (2002).
P. Bradley and K. Postlethwaite, "Simulation in clinical learning," Medical Education 37:SUPP/1 (2003): 1-5.

P. Bradley, “The history of simulation in medical education and possible future directions,” Medical Education 40 (2006): 254-262.
P. N. Brett, T. J. Parker, A. J. Harrison, T. A. Thomas and A. Carr, “Simulation of resistance forces acting on surgical needles,” Journal of Engineering in Medicine 211:4 (1997): 335-347.
F. P. Brooks, “What's Real About Virtual Reality?,” IEEE Computer Graphics and Applications 19:6 (1999): 16-27.
J. D. Carroll and J. C. Messenger, “Medical Simulation: The New Tool for Training and Skill Assessment,” Perspectives in Biology and Medicine 51:1 (2008): 47-60.
A. L. M. Cavaye, “User participation in system development revisited,” Information & Management 28:5 (1995): 311-323.
H. Champion and A. G. Gallagher, “Surgical simulation – a ‘good idea whose time has come’,” British Journal of Surgery 90 (2003): 767-768.
K. Choi, H. Sun and P. Heng, “Interactive Deformation of Soft Tissues with Haptic Feedback for Medical Learning,” IEEE Transactions on Information Technology in Biomedicine 7:4 (2003): 358-363.
B. Chou and V. L. Handa, “Simulators and Virtual Reality in Surgical Education,” Obstetrics and Gynecology Clinics 33:2 (2006): 283-296.
B. C. Cline, A. O. Badejo, I. I. Rivest, J. R. Scanlon, W. C. Taylor and G. J. Gerling, “Human performance metrics for a virtual reality simulator to train chest tube insertion,” in Proceedings of the Systems and Information Engineering Design Symposium, April (2008): 168-173.
M. H. Davis and R. M. Harden, “AMEE Medical Education Guide No. 15: Problem-based learning: a practical guide,” Medical Teacher 21:2 (1999): 130-140.
S. Dawson, “Procedural Simulation: A Primer,” Radiology 241:1 (2006): 17-25.
A. Dix, J. Finlay and G. D. Abowd, Human-Computer Interaction (Pearson, 2004).
D. H. J. M. Dolmans, H. Snellen-Balendong, I. H. A. P. Wolfhagen and C. P. M. Van der Vleuten, “Seven principles of effective case design for a problem-based curriculum,” Medical Teacher 19:3 (1997): 185-189.
D. H. J. M. Dolmans, W. De Grave, I. H. A. P. Wolfhagen and C. P. M. Van der Vleuten, “Problem-based learning: future challenges for educational practice and research,” Medical Education 39:7 (2005): 732-741.
K. Dorst, “Design research: a revolution-waiting-to-happen,” Design Studies 29:1 (2008): 4-11.
R. Ellaway, T. Poulton, V. Smothers and P. Greene, “Virtual Patients Come of Age,” Medical Teacher 31:8 (2009): 683-684.

A. England, C. Hunt, H. Woolnough, S. Johnson, A. Healey, W. Lewandowski and D. Gould, “Development of Cognitive Task Analysis: The Challenge of Metrics,” British Society of Interventional Radiology Annual Scientific Meeting, Manchester, UK (2008).
A. England, C. Hunt, H. Woolnough, S. Johnson, A. Healey, W. Lewandowski and D. Gould, “Performing Cognitive Task Analysis of Interventional Procedures: the thin end of the wedge?,” British Society of Interventional Radiology Annual Scientific Meeting, Bournemouth, UK (2007).
K. A. Ericsson and J. Smith, eds, Toward a General Theory of Expertise: Prospects and Limits (Cambridge: Cambridge Univ. Press, 1991).
K. A. Ericsson, “Deliberate Practice and the Acquisition and Maintenance of Expert Performance in Medicine and Related Domains,” Academic Medicine 79:10 (2004): S70-S81.
M. J. Friedrich, “Practice Makes Perfect: Risk-Free Medical Training With Patient Simulators,” Journal of the American Medical Association 288 (2002): 2808-2812.
M. Färber, J. Heller and H. Handels, “Simulation and training of lumbar puncture using haptic volume rendering and a 6 DOF haptic device,” in Proceedings of SPIE 6509 (2007).
D. M. Gaba, “The future vision of simulation in healthcare,” Quality and Safety in Health Care 13:Suppl 1 (2004): 2-10.
A. G. Gallagher, E. M. Ritter and R. M. Satava, “Fundamental principles of validation, and reliability: rigorous science for the assessment of surgical education and training,” Surgical Endoscopy 17:10 (2003): 1525-1529.
A. G. Gallagher, E. M. Ritter, H. Champion, G. Higgins, M. P. Fried, G. Moses, C. D. Smith and R. M. Satava, “Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training,” Annals of Surgery 241 (2005): 364-372.
D. Gould, A. Healey, S. Johnson, W. E. Lewandowski and D. Kessel, “Metrics for an interventional radiology curriculum: a case for standardisation?,” Studies in Health Technology and Informatics 119 (2006): 159-164.
T. P. Grantcharov, V. B. Kristiansen, J. Bendix, L. Bardram, J. Rosenberg and P. Funch-Jensen, “Randomized clinical trial of virtual reality simulation for laparoscopic skills training,” British Journal of Surgery 91 (2004): 146-150.
O. Grottke, A. Ntouba, S. Ullrich, W. Liao, E. Fried, A. Prescher, T. Deserno, T. Kuhlen and R. Rossaint, “Virtual reality-based simulator for training in regional anaesthesia,” British Journal of Anaesthesia 103:4 (2009): 594-600.

J. He and R. W. King, “The Role of User Participation in Information Systems Development: Implications from a Meta-Analysis,” Journal of Management Information Systems 25:1 (2008): 301-331.
K. Henriksen and E. Dayton, “Issues in the design of training for quality and safety,” Quality and Safety in Health Care 15 (2006): 17-24.
K. Henriksen and M. D. Patterson, “Simulation in Health Care: Setting Realistic Expectations,” Journal of Patient Safety 3:3 (September 2007): 127-134.
J. N. Howell, R. R. Conatser, R. L. Williams, J. M. Burns and D. C. Eland, “Palpatory Diagnosis Training on the Virtual Haptic Back: Performance Improvement and User Evaluations,” Journal of the American Osteopathic Association 108:1 (2008): 29-34.
J. E. Hunton and J. D. Beeler, “Effects of User Participation in Systems Development: A Longitudinal Field Experiment,” MIS Quarterly 21:4 (1997): 359-388.
S. B. Issenberg, M. S. Gordon, D. L. Gordon, R. E. Safford and I. R. Hart, “Simulation and new learning technologies,” Medical Teacher 23:1 (2001): 16-23.
S. B. Issenberg, W. C. McGaghie, E. R. Petrusa, D. L. Gordon and R. J. Scalese, “Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review,” Medical Teacher 27 (2005): 10-28.
N. W. John, M. Riding, N. I. Phillips, S. Mackay, L. Steineke, B. Fontaine, G. Reitmayr, V. Valencic, N. Zimie and A. Emmen, “Web-Based Surgical Educational Tools,” Studies in Health Technology and Informatics 81 (2001): 212-217.
N. W. John, “The impact of Web3D technologies on medical education and training,” Computers & Education 49:1 (2007): 19-31.
N. W. John, “Design and implementation of medical training simulators,” Virtual Reality 12:4 (2008): 269-279.
S. Johnson, H. Woolnough, C. Hunt, D. Gould, A. England, M. Crawshaw and W. Lewandowski, “Simulator Training in Interventional Radiology: The Role of Task Analysis,” APA Annual Conference, Boston, US (2008).
S. Johnson, A. Healey, J. Evans, M. Murphy, M. Crawshaw and D. Gould, “Physical and cognitive task analysis in interventional radiology,” Journal of Clinical Radiology 61:1 (2006): 97-103.
B. H. Khan, ed, Web-Based Training (Englewood Cliffs: Educational Technology Publications, 2001).

B. Kaplan and J. A. Maxwell, “Qualitative Research Methods for Evaluating Computer Information Systems,” in J. G. Anderson, C. E. Aydin and S. J. Jay, eds, Evaluating Health Care Information Systems: Methods and Applications (Thousand Oaks: Sage, 1994), 45-68.
B. T. Karsh, “Beyond usability: designing effective technology implementation systems to promote patient safety,” Quality and Safety in Health Care 13 (2004): 388-394.
A. E. Kazdin, Behavior Modification in Applied Settings (Pacific Grove: Brooks/Cole Publishing Co, 1998).
R. Kneebone, “Simulation in surgical training: educational issues and practical training,” Medical Education 37 (2003): 267-277.
R. Kneebone and R. Aggarwal, “Surgical training using simulation,” British Medical Journal 338 (2009): 1001.
R. Kneebone, W. Scott, A. Darzi and M. Horrocks, “Simulation and clinical practice: strengthening the relationship,” Medical Education 38:10 (2004): 1095-1102.
H. K. Klein and M. D. Myers, “A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems,” MIS Quarterly 23:1 (1999): 67-94.
P. Klein, The Handbook of Psychological Testing (London: Routledge, 1993).
Z. Kulcsár, A. Aboulafia, T. Hall and G. D. Shorten, “Determinants of learning to perform spinal anaesthesia: a pilot study,” European Journal of Anaesthesiology 25:12 (2008): 1026-1031.
A. W. Kushniruk and V. L. Patel, “Cognitive and usability engineering methods for the evaluation of clinical information systems,” Journal of Biomedical Informatics 37:1 (2004): 56-76.
O. Kutter, R. Shams and N. Navab, “Visualization and GPU-accelerated simulation of medical ultrasound from CT images,” Computer Methods and Programs in Biomedicine 94:3 (2009): 250-266.
C. R. Larsen, J. L. Soerensen, T. P. Grantcharov, T. Dalsgaard, L. Schouenborg and C. Ottosen, “Effect of virtual reality training on laparoscopic surgery: randomised controlled trial,” British Medical Journal 338:b1802 (2009).
B. Lawson, How Designers Think – The Design Process Demystified (Oxford: Architectural Press, 4th edition, 2006).
C. Lettl, “User involvement competence for radical innovation,” Journal of Engineering and Technology Management 24 (2007): 54-75.
C. Lewis, Using the “Thinking-Aloud” Method in Cognitive Interface Design (Yorktown Heights: IBM, 1982).

W. Lin and B. Shao, “The relationship between user participation and system success: a simultaneous contingency approach,” Information & Management 37:6 (2000): 283-295.
P. Littlejohns, J. C. Wyatt and L. Garvican, “Evaluating computerised health information systems: hard lessons still to be learnt,” British Medical Journal 326:7394 (2003): 860-862.
E. Lövquist and U. Dreifaldt, “The design of a haptic exercise for post-stroke arm rehabilitation,” in Proceedings of the 6th International Conference on Disability, Virtual Reality and Associated Technologies, Esbjerg, Denmark (2006).
H. Mackay, C. Carne, P. Beynon-Davies and D. Tudhope, “Reconfiguring the User: Using Rapid Application Development,” Social Studies of Science 30:5 (2000): 737-758.
N. J. Maran and R. J. Glavin, “Low to high fidelity simulation – a continuum of medical education?,” Medical Education 37 (2003): 22-28.
C. Marshall and G. B. Rossman, Designing Qualitative Research (Thousand Oaks: Sage Publications, 1999).
P. Marti and L. Bannon, “Exploring user-centred design in practice: Some caveats,” Knowledge, Technology & Policy 22:1 (2009): 7-15.
J. A. Martin, G. Regehr, R. Reznick, H. Macrae, J. Murnaghan, C. Hutchison and M. Brown, “Objective structured assessment of technical skill (OSATS) for surgical residents,” British Journal of Surgery 84:2 (1997): 273-278.
W. T. M. Mason and P. W. Strike, “See one, do one, teach one – is this still how it works?,” Medical Teacher 25:6 (2003): 664-665.
N. Mays and C. Pope, “Qualitative research: rigour and qualitative research,” British Medical Journal 311 (1995): 109-112.
R. McCloy and R. Stone, “Science, medicine, and the future. Virtual reality in surgery,” British Medical Journal 323:7318 (2001): 912-915.
J. D. McKeen and T. Guimaraes, “Successful Strategies for User Participation in Systems Development,” Journal of Management Information Systems 14:2 (1997): 133-150.
L. Moody, J. Arthur, A. Zivanovic and E. Dribble, “Ensuring the Usability of a Knee Arthroscopy Simulator,” Studies in Health Technology and Informatics 98 (2004): 241-243.
M. J. Muller, “Participatory Design: The Third Space in HCI,” in Handbook of HCI (Mahwah, NJ: Erlbaum, 2003).
M. Myers, “Qualitative research and the generalizability question: Standing firm with Proteus,” The Qualitative Report 4 (2000).

J. Nandhakumar and D. E. Avison, “The fiction of methodological development: a field study of information systems development,” Information Technology and People 12:2 (1999): 176-191.
M. Nakao, K. Minato, T. Kuroda, M. Komori, H. Oyama and T. Takahashi, “Transferring Bioelasticity Knowledge through Haptic Interaction,” IEEE Multimedia 13:3 (2006): 50-61.
J. Nielsen, Usability Engineering (San Francisco: Morgan Kaufmann, 1993).
V. L. Patel and G. J. Groen, “The general and specific nature of medical expertise: a critical look,” in K. A. Ericsson and J. Smith, eds, Toward a General Theory of Expertise: Prospects and Limits (Cambridge: Cambridge Univ. Press, 1991), 93-125.
V. L. Patel and D. R. Kaufman, “Medical Informatics and the Science of Cognition,” Journal of the American Medical Informatics Association 5:6 (1998): 493-502.
M. Q. Patton, Qualitative Research and Evaluation Methods (Thousand Oaks: Sage Publications, 2002).
L. Prior, “Documents,” in C. Seale, D. Silverman, J. Gubrium and G. Gobo, eds, Qualitative Research Practice (Thousand Oaks: Sage Publications, 2007).
R. Reznick and H. MacRae, “Teaching surgical skills – Changes in the wind,” New England Journal of Medicine 355:25 (2006): 2664-2669.
E. Salas and C. S. Burke, “Simulation for training is effective when...,” Quality and Safety in Health Care 11:2 (2002): 119-120.
E. Salas and J. A. Cannon-Bowers, “The Science of Training: A Decade of Progress,” Annual Review of Psychology 52 (2003): 471-499.
E. Salas, K. A. Wilson, C. S. Burke and H. A. Priest, “Using Simulation-Based Training to Improve Patient Safety: What Does It Take?,” Joint Commission Journal on Quality and Patient Safety 31:7 (2005): 363-371.
R. G. Sargent, “Verification and Validation of Simulation Models,” in Proceedings of the 37th Conference on Winter Simulation (2005): 130-141.
R. M. Satava and S. B. Jones, “Current and future applications of virtual reality for medicine,” Proceedings of the IEEE 86:3 (1998): 484-489.
R. J. Scalese, V. T. Obeso and S. B. Issenberg, “Simulation Technology for Skills Training and Competency Assessment in Medical Education,” Journal of General Internal Medicine 23:SUPP/1 (2008): 46-49.
D. W. Shaffer, S. L. Dawson, D. Meglan, S. Cotin, M. Ferrell, A. Norbash and J. Muller, “Design principles for the use of simulation as an aid in interventional cardiology training,” Minimally Invasive Therapy & Allied Technologies 10:2 (2001): 75-82.

C. B. Seaman, “Qualitative Methods in Empirical Studies of Software Engineering,” IEEE Transactions on Software Engineering 25:4 (1999): 557-572.
N. E. Seymour, A. G. Gallagher, S. A. Roman, M. K. O'Brien, V. K. Bansal, D. K. Andersen and R. M. Satava, “Virtual reality training improves operating room performance: results of a randomized, double-blinded study,” Annals of Surgery 236 (2002): 458-463.
S. G. S. Shah and I. Robinson, “Benefits of and barriers to involving users in medical device technology development and evaluation,” International Journal of Technology Assessment in Health Care 23 (2007): 131-137.
M. Srinivasan and C. Basdogan, “Haptics in virtual environments: Taxonomy, research status, and challenges,” Computers & Graphics 21:4 (1997): 393-404.
L. M. Sutherland, P. F. Middleton, A. Anthony, J. Hamdorf, P. Cregan, D. Scott and G. J. Maddern, “Surgical Simulation: A Systematic Review,” Annals of Surgery 243:3 (2006): 291-300.
J. Tetzlaff, “Assessment of Competency in Anesthesiology,” Anesthesiology 106:4 (2007): 812-825.
G. Thomann and J. Caelen, “Proposal of a New Design Methodology including PD and SBD in Minimally Invasive Surgery,” Proceedings of the 12th IFToMM World Congress (2007).
V. Venkatesh and F. D. Davis, “A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies,” Management Science 46:2 (2000): 186-204.
F. P. Vidal, N. W. John, A. E. Healey and D. A. Gould, “Simulation of ultrasound guided needle puncture using patient specific data with 3D textures and volume haptics,” Computer Animation and Virtual Worlds 19:2 (2008): 111-127.
J. Vozenilek, J. S. Huff, M. Reznek and J. A. Gordon, “See one, do one, teach one: advanced technology in medical education,” Academic Emergency Medicine 11 (2004): 1149-1154.
S. Woolgar, “Configuring the user: the case of usability trials,” in J. Law, ed, A Sociology of Monsters: Essays on Power, Technology, and Domination (Routledge, 1991).
R. K. Yin, Applications of Case Study Research (Thousand Oaks: Sage Publications Inc, 2003).

A. Ziv, P. R. Wolpe, S. D. Small and S. Glick, “Simulation-Based Medical Education: An Ethical Imperative,” Simulation in Healthcare 1:4 (2006): 252-256.

INDEX

3D 44, 52, 53, 55, 56, 57, 59, 68, 89, 90, 97, 129, 131, 145, 171
ability and willingness to participate 133
analytical framework 8, 25, 139
apprenticeship-model 12
Asaro 9, 126, 128, 161
building phase 34, 109
Cannon-Bowers 33, 35, 169
Cavaye 9, 31, 32, 34, 35, 36, 38, 109, 111, 128, 133, 162
clinical trial iv, 23, 24, 62, 63, 64, 65, 71, 75, 76, 99, 100, 102, 105, 106, 110, 112, 117, 118, 119, 120, 121, 123, 125, 134, 135, 137, 141, 164
clinicians in the development team 41, 42, 44, 46, 50, 53, 54, 57, 58, 60, 63, 65, 66, 67, 70, 71, 76, 77, 80, 81, 85, 90, 91, 92, 93, 96, 99, 105, 106, 109, 111, 113, 114, 115, 116, 117, 118, 126, 135, 137, 138, 140
coding 24, 25, 26, 27, 139
CT-scan 131
degrees of participation v, 31, 108, 140
democratic arrangement v, 133, 140
design iv, 9, 18, 20, 23, 26, 29, 30, 32, 33, 34, 35, 36, 41, 46, 48, 50, 54, 57, 58, 60, 61, 62, 63, 71, 83, 87, 93, 96, 99, 106, 107, 108, 109, 110, 112, 113, 115, 116, 117, 120, 123, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 140, 141, 161, 162, 163, 165, 167, 168
developers v, 8, 10, 15, 16, 19, 20, 21, 23, 26, 29, 31, 36, 41, 43, 47, 48, 49, 50, 51, 52, 53, 54, 56, 57, 59, 60, 61, 62, 67, 70, 71, 75, 81, 83, 88, 89, 90, 91, 92, 94, 106, 107, 111, 114, 115, 116, 117, 122, 126, 127, 128, 129, 130, 131, 132, 133, 134, 138, 140, 141, 142, 152
development process iii, iv, v, 7, 8, 9, 10, 11, 13, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 35, 36, 37, 38, 39, 40, 48, 56, 57, 61, 70, 73, 74, 105, 106, 108, 109, 110, 111, 112, 113, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 132, 133, 134, 135, 136, 138, 139, 140, 141, 142
document analysis v, 11, 22, 139
document collection 22, 24
documents analysis 26
empirical data v, 8, 10, 11, 18, 19, 23, 25, 39, 73, 135
Ericsson 115, 116, 130, 163, 164, 169
Evaluation 35, 37, 161
expertise v, 10, 16, 17, 23, 26, 29, 32, 41, 44, 49, 50, 51, 58, 60, 63, 66, 70, 71, 75, 77, 79, 84, 87, 92, 96, 99, 100, 102, 105, 106, 107, 110, 113, 114, 115, 116, 117, 119, 123, 125, 128, 129, 134, 138, 139, 140, 141, 163, 169
face validity 50, 52, 58, 60, 71, 93, 99, 117, 132, 140

Field notes 20
framework 8, 11, 24, 27, 28, 29, 30, 37, 38, 40, 70, 74, 105, 108, 139
Gallagher 7, 13, 116, 162, 164, 170
Grantcharov 35, 164, 167
Haptic 13, 52, 152, 153, 162, 165, 169
Human-Computer Interaction (HCI) v, 7
hypothesis 7, 18
implementation iv, 8, 18, 25, 32, 34, 35, 36, 61, 67, 69, 70, 71, 92, 108, 117, 125, 126, 134, 137, 141, 142, 151, 161, 165, 166
Information Systems (IS) v, 7
interventional radiology 33, 36, 164, 166
interviews v, vi, 11, 20, 22, 23, 24, 25, 26, 30, 31, 44, 46, 60, 65, 96, 99, 127, 128, 129, 131, 134, 139
Issenberg 13, 35, 36, 113, 114, 165, 170
John vi, 8, 15, 36, 165, 171
Kneebone 13, 36, 109, 142, 166
Lettl 8, 10, 128, 133, 167
life long learning 12
literature review 23
LMS iv, 16, 80, 81, 82, 83, 84, 85, 87, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 106, 114, 119, 129, 131
longitudinal evaluations 35, 142
Mackay 28, 29, 31, 127, 134, 165, 168
managing clinician iii, 40, 41, 42, 46, 50, 52, 57, 58, 63, 65, 70, 71, 73, 74, 75, 76, 77, 79, 80, 81, 85, 87, 88, 93, 99, 103, 105, 106, 108, 109, 110, 111, 112, 113, 115, 117, 118, 122, 126, 127, 133, 135, 137, 138, 140, 141
MCQ 63, 66, 81, 85, 134
medical device 28, 30, 170
medical educators 7, 8, 114
medical training iii, v, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 22, 23, 25, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 38, 40, 70, 71, 74, 105, 108, 109, 110, 114, 115, 122, 126, 129, 137, 138, 139, 140, 141, 142, 165
Moodle iii, 16, 83, 86, 132
observations in the field 19
Omni 45
OpenLabyrinth iii, 15, 16
paramedian iii, 60, 100, 101, 130, 154
participant observation v, 11, 19, 20, 22, 23, 25, 26, 139
Participatory Design 9, 126, 133, 138, 141, 168
patient iii, 12, 13, 15, 40, 43, 44, 49, 51, 52, 53, 54, 65, 69, 71, 74, 80, 81, 84, 87, 88, 90, 91, 92, 93, 94, 100, 101, 114, 115, 119, 120, 121, 122, 130, 132, 148, 149, 153, 154, 166, 171
Phantom 45, 46
primary users 29, 116
problem statement iv, 32, 36, 40, 42, 70, 76, 105, 108, 127, 135, 141
procedural skills 16, 23, 41, 45, 71, 74, 75, 116, 117, 119, 130, 134, 140

psychomotor skills 15
Qualitative research 18, 19, 22, 168, 169
research questions 7, 11, 22, 23, 26, 32, 109
Reznick 8, 12, 113, 168, 169
Salas 13, 33, 35, 109, 113, 169
Satava 13, 164, 170
secondary users 29
Seymour 170
Simulation 8, 13, 15, 33, 34, 43, 46, 57, 59, 79, 81, 83, 88, 90, 91, 92, 93, 94, 97, 98, 113, 130, 131, 132, 144, 146, 162, 164, 166, 167, 168, 170
Spinal Anaesthesia v, 8, 15, 18, 39, 40, 42, 76, 108, 139, 144
task analysis 33, 36, 42, 166
tertiary users 28, 29
trainees 7, 8, 13, 25, 29, 42, 43, 44, 52, 56, 67, 68, 69, 80, 99, 103, 105, 109, 111, 113, 116, 117, 118, 119, 120, 121, 126, 134, 137, 140, 142
trainers 7, 8, 13, 15, 16, 29, 40, 42, 43, 44, 52, 56, 57, 71, 75, 80, 99, 103, 105, 111, 113, 116, 117, 118, 119, 120, 126, 137, 140, 141, 142
training analysis iv, 24, 33, 42, 43, 44, 55, 70, 75, 76, 79, 80, 106, 108, 109, 111, 112, 113, 116, 118, 119, 120, 121, 123, 126, 127, 128, 135, 141
training objectives 7, 13, 17, 23, 33, 34, 43, 55, 71, 107, 109, 113, 118, 121, 125, 127, 134, 135, 140
training programme 43, 63, 65, 67, 71, 102, 103, 111, 120, 122, 134, 137, 142
Triangulation 22
Usability 9, 25, 34, 36, 58, 60, 91, 96, 99, 126, 168, 169
user guidance 7, 11, 17, 71, 107, 108, 126, 138, 139, 140
user participation iv, v, 7, 8, 9, 10, 11, 18, 19, 20, 21, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 40, 70, 74, 75, 79, 87, 96, 99, 104, 105, 108, 109, 112, 128, 129, 133, 134, 135, 136, 137, 138, 139, 141, 142, 167
user roles iii, iv, 26, 42, 44, 50, 52, 58, 60, 63, 66, 70, 76, 80, 87, 96, 99, 105, 108, 135, 136, 139
user-developer relationships 140
users iv, v, 7, 8, 9, 10, 11, 16, 17, 19, 20, 23, 24, 25, 26, 28, 29, 30, 31, 34, 35, 36, 38, 39, 41, 43, 50, 52, 57, 59, 60, 62, 66, 69, 70, 73, 82, 83, 108, 109, 110, 111, 113, 116, 118, 119, 120, 121, 122, 125, 126, 127, 128, 129, 133, 134, 136, 138, 139, 140, 141, 142, 152, 170
validation 7, 8, 13, 23, 41, 63, 122, 123, 164
virtual patient 15
Virtual Reality 7, 162, 165, 167
Vozenilek 8, 13, 171
VR iii, v, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 22, 23, 25, 28, 32, 34, 35, 36, 38, 67, 74, 77, 108, 126, 129, 138, 139, 141, 142, 161
web v, 7, 8, 9, 10, 11, 12, 13, 15, 16, 17, 18, 22, 23, 25, 28, 32, 34, 35, 36, 38, 74, 75, 77, 80, 94, 108, 126, 129, 138, 139, 140, 141, 142
World Federation for Medical Education 41